YOLOv8 Tracking and Pose Recognition

Official repository

https://github.com/ultralytics/ultralytics

Tutorial

https://gitcode.net/mirrors/ultralytics/ultralytics?utm_source=csdn_github_accelerator

 

Pip install the ultralytics package, including all requirements, in a Python>=3.8 environment with PyTorch>=1.8.

 

1 Create the environment

conda create -n py39-yolov8 python=3.9

Activate it:

conda activate py39-yolov8

2 Install the libraries

# Clone the repository
git clone https://github.com/ultralytics/ultralytics.git
# Install the dependencies (run inside the cloned directory)
cd ultralytics
pip install -r requirements.txt

In the latest release, Ultralytics YOLOv8 provides both a full command-line interface (CLI) and a Python SDK for running training, validation, and inference tasks.

To use the yolo command-line interface (CLI), we need to install the ultralytics package:

pip install ultralytics
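
To verify the installation, you can optionally run the built-in environment check, which prints the installed version along with the detected Python/torch/CUDA setup:

import ultralytics

# Prints ultralytics version, Python/torch versions, and CUDA availability
ultralytics.checks()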

  

3 Test

yolo predict model=yolov8n.pt source='ultralytics/assets/bus.jpg' show=True save=True

 

  

 

Python test

https://gitcode.net/mirrors/ultralytics/ultralytics?utm_source=csdn_github_accelerator
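
As a quick smoke test from Python, here is a minimal equivalent of the CLI command above (same model, same bundled sample image):

from ultralytics import YOLO

# Predict on the bus.jpg shipped with the repository, display and save the result
model = YOLO('yolov8n.pt')
results = model.predict('ultralytics/assets/bus.jpg', show=True, save=True)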

 

 

1 Tracking

https://docs.ultralytics.com/modes/track/

Object tracking is the task of identifying the location and class of objects in a video stream and then assigning a unique ID to each detection.

The tracker's output is the same as the detection output, with an object ID added to each box.

Available trackers

Ultralytics YOLO supports the following tracking algorithms. They can be enabled by passing the relevant YAML configuration file, e.g. tracker=tracker_type.yaml:

  • BoT-SORT - use botsort.yaml to enable this tracker.
  • ByteTrack - use bytetrack.yaml to enable this tracker.

The default tracker is BoT-SORT.
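
For reference, tracking can also be launched directly from the CLI, in the same style as the predict command above; a minimal sketch (substitute your own video path):

yolo track model=yolov8n.pt source="path/to/video.mp4" tracker="bytetrack.yaml" show=True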

Tracking

To run the tracker on a video stream, use a trained detection, segmentation, or pose model such as YOLOv8n, YOLOv8n-seg, or YOLOv8n-pose.

from ultralytics import YOLO

# Load an official or custom model
model = YOLO('yolov8n.pt')  # Load an official Detect model
#model = YOLO('yolov8n-seg.pt')  # Load an official Segment model
#model = YOLO('yolov8n-pose.pt')  # Load an official Pose model
#model = YOLO('path/to/best.pt')  # Load a custom trained model

# Perform tracking with the model
results = model.track(source="https://youtu.be/Zgi9g1ksQHc", show=True)  # Tracking with default tracker
#results = model.track(source="https://youtu.be/Zgi9g1ksQHc", show=True, tracker="bytetrack.yaml")  # Tracking with ByteTrack tracker
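
For long videos, track() also accepts stream=True (shared with predict mode), which yields results one frame at a time instead of accumulating them all in memory. A minimal sketch, assuming a local video file:

from ultralytics import YOLO

model = YOLO('yolov8n.pt')

# stream=True returns a generator: one result object per frame
for result in model.track(source="path/to/video.mp4", stream=True):
    if result.boxes.id is not None:
        print(result.boxes.id.int().tolist())  # track IDs present in this frame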

 

 

Automatic weight download

https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-seg.pt

Missing libraries are downloaded and installed automatically.

 

Configuration

Tracking arguments

The tracking configuration shares properties with predict mode, such as conf, iou, and show. For further configuration, refer to the Predict mode page.

from ultralytics import YOLO

# Configure the tracking parameters and run the tracker
model = YOLO('yolov8n.pt')
results = model.track(source="https://youtu.be/Zgi9g1ksQHc", conf=0.3, iou=0.5, show=True)

  

Tracker selection

Ultralytics also lets you use a modified tracker configuration file. To do this, simply copy a tracker config file (e.g. custom_tracker.yaml) from ultralytics/cfg/trackers and modify any configuration (except tracker_type) to suit your needs.

from ultralytics import YOLO

# Load the model and run the tracker with a custom configuration file
model = YOLO('yolov8n.pt')
results = model.track(source="https://youtu.be/Zgi9g1ksQHc", tracker='custom_tracker.yaml')

  

Python examples

Persisting tracks loop

Here is a Python script that uses OpenCV (cv2) and YOLOv8 to run object tracking on video frames. The script assumes you have already installed the required packages (opencv-python and ultralytics).

import cv2
from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO('yolov8n.pt')

# Open the video file
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLOv8 tracking on the frame, persisting tracks between frames
        results = model.track(frame, persist=True)

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Display the annotated frame
        cv2.imshow("YOLOv8 Tracking", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()

 

Plotting tracks over time

Visualizing object tracks over consecutive frames can provide valuable insight into the movement patterns and behavior of detected objects in a video. With Ultralytics YOLOv8, plotting these tracks is a seamless and efficient process.

In the following example, we demonstrate how to use YOLOv8's tracking capability to plot the movement of detected objects across multiple video frames. The script opens a video file, reads it frame by frame, and uses the YOLO model to identify and track various objects. By retaining the center points of the detected bounding boxes and connecting them, we can draw lines representing the paths followed by the tracked objects.

from collections import defaultdict

import cv2
import numpy as np

from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO('yolov8n.pt')

# Open the video file (use an integer index, e.g. 0 or 1, for a webcam)
#video_path = "path/to/video.mp4"
video_path = "video1.mp4"
cap = cv2.VideoCapture(video_path)

# Store the track history
track_history = defaultdict(lambda: [])

cv2.namedWindow("Car Tracking", 0)

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLOv8 tracking on the frame, persisting tracks between frames
        results = model.track(frame, persist=True)

        # Get the boxes and track IDs
        boxes = results[0].boxes.xywh.cpu()
        print("Current count:", len(boxes))

        # Show the raw frame and skip plotting when there are no tracked boxes
        if len(boxes) == 0 or results[0].boxes.id is None:
            cv2.imshow("Car Tracking", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
            continue

        track_ids = results[0].boxes.id.int().cpu().tolist()

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Plot the tracks
        for box, track_id in zip(boxes, track_ids):
            x, y, w, h = box
            track = track_history[track_id]
            track.append((float(x), float(y)))  # x, y center point
            if len(track) > 30:  # retain the last 30 center points
                track.pop(0)

            # Draw the tracking lines
            points = np.hstack(track).astype(np.int32).reshape((-1, 1, 2))
            cv2.polylines(annotated_frame, [points], isClosed=False, color=(0, 0, 255), thickness=3)

        # Display the annotated frame
        cv2.imshow("Car Tracking", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()

  

  

Multithreaded tracking

Multithreaded tracking makes it possible to run object tracking on several video streams at once. This is particularly useful when handling multiple video inputs, such as feeds from several surveillance cameras, where concurrent processing can greatly improve efficiency and performance.

In the Python script provided, we use Python's threading module to run several instances of the tracker concurrently. Each thread is responsible for running the tracker on one video file, and all threads run simultaneously in the background.

To make sure each thread receives the correct parameters (the video file and the model to use), we define a function run_tracker_in_thread that accepts these parameters and contains the main tracking loop. It reads the video frame by frame, runs the tracker, and displays the results.

Two different models are used in this example, yolov8n.pt and yolov8n-seg.pt, each tracking objects in a different video file. The video files are specified in video_file1 and video_file2.

The daemon=True argument to threading.Thread means the threads are shut down as soon as the main program finishes. We then start the threads with start() and use join() to make the main thread wait until both tracker threads have finished.

Finally, after all threads have completed their task, the windows displaying the results are closed with cv2.destroyAllWindows().

import threading

import cv2
from ultralytics import YOLO


def run_tracker_in_thread(filename, model):
    # Each thread opens its own capture and runs its own tracker instance
    video = cv2.VideoCapture(filename)
    frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
    for _ in range(frames):
        ret, frame = video.read()
        if ret:
            results = model.track(source=frame, persist=True)
            res_plotted = results[0].plot()
            # Use a per-file window name so the two streams get separate windows
            cv2.imshow(f"Tracking {filename}", res_plotted)
            if cv2.waitKey(1) == ord('q'):
                break
    video.release()


# Load the models
model1 = YOLO('yolov8n.pt')
model2 = YOLO('yolov8n-seg.pt')

# Define the video files for the trackers
video_file1 = 'path/to/video1.mp4'
video_file2 = 'path/to/video2.mp4'

# Create the tracker threads
tracker_thread1 = threading.Thread(target=run_tracker_in_thread, args=(video_file1, model1), daemon=True)
tracker_thread2 = threading.Thread(target=run_tracker_in_thread, args=(video_file2, model2), daemon=True)

# Start the tracker threads
tracker_thread1.start()
tracker_thread2.start()

# Wait for the tracker threads to finish
tracker_thread1.join()
tracker_thread2.join()

# Clean up and close windows
cv2.destroyAllWindows()

This example can easily be extended to handle more video files and models by creating more threads and applying the same methodology, as sketched below.
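
As a minimal sketch of that extension, the two hard-coded threads can be replaced with a loop. It reuses run_tracker_in_thread from the script above; the file paths and model list are placeholders, not files shipped with ultralytics:

import threading

import cv2
from ultralytics import YOLO

# Placeholder inputs: one video file and one model per tracker thread
video_files = ['path/to/video1.mp4', 'path/to/video2.mp4', 'path/to/video3.mp4']
model_names = ['yolov8n.pt', 'yolov8n-seg.pt', 'yolov8n-pose.pt']

threads = []
for video_file, model_name in zip(video_files, model_names):
    # One model instance per thread so no state is shared between trackers
    t = threading.Thread(target=run_tracker_in_thread,
                         args=(video_file, YOLO(model_name)), daemon=True)
    t.start()
    threads.append(t)

# Wait for all tracker threads to finish, then close the windows
for t in threads:
    t.join()

cv2.destroyAllWindows()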

 

 

More test examples

https://docs.ultralytics.com/modes/track/#multithreaded-tracking

 

 

 

Manual weight download

https://github.com/ultralytics/assets/releases/

 

 

Pose recognition

https://docs.ultralytics.com/tasks/pose/

https://docs.ultralytics.com/modes/predict/#inference-arguments

import cv2

from ultralytics import YOLO

# Load the YOLOv8 pose model (swap in yolov8n.pt for plain detection)
model = YOLO('yolov8n-pose.pt')

# Open the video source (0 = default webcam)
#video_path = "path/to/video.mp4"
video_path = 0
cap = cv2.VideoCapture(video_path)

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLOv8 inference on the frame
        results = model(frame)

        # Process the results list
        for result in results:
            boxes = result.boxes  # Boxes object for bbox outputs
            masks = result.masks  # Masks object for segmentation (None for a pose model)
            keypoints = result.keypoints  # Keypoints object for pose outputs
            probs = result.probs  # Probs object for classification outputs (None here)

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Display the annotated frame
        cv2.imshow("YOLOv8 Inference", annotated_frame)

        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
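
If you need the raw keypoint coordinates rather than the rendered frame, they can be read from result.keypoints. A minimal sketch using the sample image bundled with the repository:

from ultralytics import YOLO

model = YOLO('yolov8n-pose.pt')
results = model('ultralytics/assets/bus.jpg')

for result in results:
    kpts = result.keypoints  # Keypoints object for pose outputs
    if kpts is not None:
        print(kpts.xy.shape)  # (num_persons, 17, 2) keypoint pixel coordinates
        print(kpts.conf)      # per-keypoint confidence scores (may be None)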

  

 

 

posted on 2023-09-08 17:38 by MKT-porter