Overview
The Objects module provides a high-level interface for object detection and classification using YOLOv8. It supports custom models, configurable confidence thresholds, multiple compute devices, real-time processing, object tracking, and detailed analysis of detected objects.
Detection
Object detection with configurable confidence thresholds and custom model support
Tracking
Integrated object tracking capabilities for video streams and real-time analysis
Processing
Support for multiple compute devices (CPU, CUDA, MPS) for optimal performance
Visualization
Comprehensive visualization tools for detected objects and tracking results
Analysis
Object counting and center point analysis for advanced applications
Installation
pip install yolozone
Initialization
from yolozone import ObjectDetector
# Initialize with default model
detector = ObjectDetector()
# Initialize with custom model
detector = ObjectDetector(model="path/to/custom/model.pt")
Methods
__init__(model="yolov8s.pt")
Initialize the object detector with a YOLO model
Parameters
- model(str): Path to YOLO model weights (default: "yolov8s.pt")
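Example
A different pretrained checkpoint can be supplied in place of the default ("yolov8n.pt" below is an illustrative Ultralytics weight name, not a file shipped with this module):
# Load a specific YOLOv8 checkpoint instead of the default yolov8s.pt
detector = ObjectDetector(model="yolov8n.pt")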
detect_objects(img, device="cpu", conf=0.25, track=False)
Detect objects in an image with optional tracking
Parameters
- img(numpy.ndarray): Input image
- device(str): Device to run inference on ('cpu', 'cuda', 'mps')
- conf(float): Confidence threshold (0-1)
- track(bool): Enable object tracking
Returns
- Results: Detection results containing boxes, classes, and confidence scores
Example
results = detector.detect_objects(
    image,
    device="cuda",
    conf=0.35,
    track=True
)
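The device argument follows PyTorch device naming, so a caller can probe for the best available backend before running inference. A minimal sketch (assuming PyTorch is importable, which YOLOv8 already requires):
import torch

# Prefer CUDA, then Apple MPS, and fall back to CPU
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

results = detector.detect_objects(image, device=device, conf=0.25)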
get_boxes(results)
Extract bounding boxes from detection results
Parameters
- results(Results): Detection results from detect_objects()
Returns
- numpy.ndarray: Array of [x1, y1, x2, y2, confidence, class_id]
Example
boxes = detector.get_boxes(results)
for box in boxes:
    x1, y1, x2, y2, conf, class_id = box
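Because the raw box array carries coordinates, confidence, and class id, it can drive custom rendering or filtering without draw_detections. A small sketch using plain OpenCV (the color and threshold are illustrative):
import cv2

# Draw only high-confidence boxes directly from the raw array
for x1, y1, x2, y2, conf, class_id in detector.get_boxes(results):
    if conf < 0.5:
        continue
    cv2.rectangle(image, (int(x1), int(y1)), (int(x2), int(y2)), (255, 0, 0), 2)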
draw_detections(img, results, classes=None, color=(0, 255, 0), thickness=2)
Visualize detection results on the image
Parameters
- img(numpy.ndarray): Image to draw on
- results(Results): Detection results
- classes(List[str], optional): List of class names to filter
- color(tuple): Box and text color in BGR format
- thickness(int): Line thickness
Returns
- tuple: (Annotated image, List of (class_name, confidence, box) tuples)
Example
img, detections = detector.draw_detections(
    img,
    results,
    classes=['person', 'car'],
    color=(0, 255, 0),
    thickness=2
)
count_objects(results, classes=None)
Count detected objects by class
Parameters
- results(Results): Detection results
- classes(List[str], optional): List of class names to filter
Returns
- dict: Dictionary of {class_name: count}
Example
counts = detector.count_objects(results)
print(f"Found {counts.get('person', 0)} people")get_object_centers(results)
get_object_centers(results)
Calculate center points of all detected objects
Parameters
- results(Results): Detection results
Returns
- dict: Dictionary of {class_name: [(x,y), ...]}
Example
centers = detector.get_object_centers(results)
for class_name, points in centers.items():
    print(f"{class_name} centers: {points}")Complete Examples
Basic Object Detection
from yolozone import ObjectDetector
import cv2
# Initialize detector
detector = ObjectDetector()
# Read image
image = cv2.imread('image.jpg')
# Detect objects
results = detector.detect_objects(image, conf=0.25)
# Draw detections
image, detections = detector.draw_detections(image, results)
# Display results
for class_name, conf, box in detections:
    print(f"Found {class_name} with confidence {conf:.2f}")Object Tracking
Object Tracking
from yolozone import ObjectDetector
import cv2
# Initialize with tracking
detector = ObjectDetector()
# Open video capture
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
        
    # Detect and track objects
    results = detector.detect_objects(
        frame,
        track=True,
        conf=0.35
    )
    
    # Draw detections
    frame, detections = detector.draw_detections(frame, results)
    
    # Show frame
    cv2.imshow('Tracking', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Object Counting and Analysis
from yolozone import ObjectDetector
import cv2

# Initialize detector and load an image
detector = ObjectDetector()
image = cv2.imread('image.jpg')

# Detect objects
results = detector.detect_objects(image)
# Count objects by class
counts = detector.count_objects(results)
print("Object counts:", counts)
# Get object centers
centers = detector.get_object_centers(results)
for class_name, points in centers.items():
    print(f"{class_name} locations:", points)