AI Study Notes 29: YOLOv12 Deployment and Testing - Detailed Walkthrough

If this article is original, please credit the original source when reposting.

First, test YOLOv12 itself; deploying it to the RK3568 will be covered later.

This test was done on Windows 10 without a GPU, so the environment is set up in the simplest way and inference runs on the CPU. Dataset preparation and training will be covered in a later post.

I. Environment Setup

1. Create a virtual environment

conda create -n yolov12 python=3.11

2. Activate the environment

conda activate yolov12

3. Download the YOLOv12 source code

https://github.com/sunsmarterjie/yolov12 (YOLOv12: Attention-Centric Real-Time Object Detectors)

4. Download flash_attn

This package is distributed as many prebuilt versions:

https://github.com/kingbri1/flash-attention/releases

Download the Windows (win_amd64) wheel whose filename matches your Python and PyTorch versions.

After downloading, place the wheel file in the yolov12-main folder.
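The wheel filename encodes the Python, PyTorch, and CUDA versions it was built against (e.g. cp311, torch2.6.0, cu124). As a hedged helper, the short sketch below (not part of the original post, and it assumes PyTorch is already installed, e.g. after the requirements step in the next section) prints the values those tags must match:

# sketch: print the versions that the flash_attn wheel tags must match
import sys
import torch

print("python:", sys.version.split()[0])   # e.g. 3.11 -> wheel tag cp311
print("torch:", torch.__version__)         # e.g. 2.6.0 -> wheel tag torch2.6.0
print("torch CUDA:", torch.version.cuda)   # e.g. 12.4 -> wheel tag cu124 (None on CPU-only builds)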

5. Install

# use the Tsinghua PyPI mirror to speed up the download
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install flash_attn-2.7.4.post1+cu124torch2.6.0cxx11abiFALSE-cp311-cp311-win_amd64.whl
pip install -e .

At this point, the environment setup is complete.
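Before moving on, a quick sanity check helps confirm that the packages import cleanly. This is just a minimal sketch (not from the original post); on this CPU-only machine, CUDA availability is expected to print False:

# check_env.py - minimal sketch to verify the yolov12 environment
import torch
import ultralytics

print("torch:", torch.__version__)
print("ultralytics:", ultralytics.__version__)
print("CUDA available:", torch.cuda.is_available())  # expected: False (no GPU here)

try:
    import flash_attn
    print("flash_attn:", flash_attn.__version__)
except ImportError as e:
    print("flash_attn import failed:", e)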

Next, run some tests.

II. Testing

1. Download the model

https://github.com/sunsmarterjie/yolov12/releases/download/turbo/yolov12s.pt

2. Test from the command line

yolo predict model=E:/desktop/yolov12-main/yolov12s.pt source='E:/desktop/yolov12-main/bus.jpg'
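The same single-image test can also be driven from Python through the Ultralytics API. The snippet below is a small sketch using the same assumed paths as above (the output filename bus_result.jpg is my own choice); it prints the detected class names and saves an annotated copy of the image:

# single-image test, mirroring the CLI command above (paths are assumptions)
import cv2
from ultralytics import YOLO

model = YOLO("E:/desktop/yolov12-main/yolov12s.pt")
results = model.predict("E:/desktop/yolov12-main/bus.jpg", conf=0.5)

for r in results:
    print([r.names[int(c)] for c in r.boxes.cls])   # detected class names
    annotated = r.plot()                            # BGR image with boxes drawn
    cv2.imwrite("E:/desktop/yolov12-main/bus_result.jpg", annotated)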

3. Test with a script

import cv2
from ultralytics import YOLO


def predict(chosen_model, img, classes=[], conf=0.5):
    # run inference, optionally restricted to a list of class ids
    if classes:
        results = chosen_model.predict(img, classes=classes, conf=conf)
    else:
        results = chosen_model.predict(img, conf=conf)

    return results


def predict_and_detect(chosen_model, img, classes=[], conf=0.5, rectangle_thickness=2, text_thickness=1):
    # draw the detected boxes and class names onto the frame
    results = predict(chosen_model, img, classes, conf=conf)
    for result in results:
        for box in result.boxes:
            cv2.rectangle(img, (int(box.xyxy[0][0]), int(box.xyxy[0][1])),
                          (int(box.xyxy[0][2]), int(box.xyxy[0][3])), (255, 0, 0), rectangle_thickness)
            cv2.putText(img, f"{result.names[int(box.cls[0])]}",
                        (int(box.xyxy[0][0]), int(box.xyxy[0][1]) - 10),
                        cv2.FONT_HERSHEY_PLAIN, 1, (255, 0, 0), text_thickness)
    return img, results


# defining function for creating a writer (for mp4 videos)
def create_video_writer(video_cap, output_filename):
    # grab the width, height, and fps of the frames in the video stream.
    frame_width = int(video_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    frame_height = int(video_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = int(video_cap.get(cv2.CAP_PROP_FPS))

    # initialize the FourCC and a video writer object
    fourcc = cv2.VideoWriter_fourcc(*'MP4V')
    writer = cv2.VideoWriter(output_filename, fourcc, fps,
                             (frame_width, frame_height))

    return writer


model = YOLO("E:/desktop/yolov12-main/yolov12s.pt")

output_filename = "E:/desktop/yolov12-main/result.mp4"
video_path = r"E:/desktop/yolov12-main/test.mp4"

cap = cv2.VideoCapture(video_path)
writer = create_video_writer(cap, output_filename)

# read the video frame by frame, run detection, and write the annotated result
while True:
    success, img = cap.read()
    if not success:
        break
    result_img, _ = predict_and_detect(model, img, classes=[], conf=0.5)
    writer.write(result_img)
    cv2.imshow("Image", result_img)
    cv2.waitKey(1)

cap.release()
writer.release()
cv2.destroyAllWindows()

From the test, the results don't seem very good.

If anything here infringes your rights, or if you need the complete code, please contact the blogger promptly.

posted on 2025-07-23 20:38 by ljbguanli