Dense Optical Flow Tracking with OpenCV

1. Overview

  Example: video tracking based on dense optical flow

  API introduction:

void calcOpticalFlowFarneback( InputArray prev, InputArray next, InputOutputArray flow,
                               double pyr_scale, int levels, int winsize,
                               int iterations, int poly_n, double poly_sigma,
                               int flags );
prev: previous frame, single-channel 8-bit image (CV_8UC1)
next: current frame, single-channel 8-bit image (CV_8UC1), same size as prev
flow: output flow field (CV_32FC2), one displacement vector per pixel
pyr_scale: scale ratio between adjacent pyramid levels (< 1; 0.5 means each level is half the size of the previous one)
levels: number of pyramid levels
winsize: averaging window size
iterations: number of iterations at each pyramid level
poly_n: size of the pixel neighborhood used for the polynomial expansion, typically 5 or 7
poly_sigma: standard deviation of the Gaussian used for the polynomial expansion, typically 1~1.5
flags: operation flags, mainly OPTFLOW_USE_INITIAL_FLOW and OPTFLOW_FARNEBACK_GAUSSIAN
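
For reference, a typical call might look like the sketch below. The names prevGray and currGray are placeholders for two consecutive grayscale frames (they are not from the original post); the parameter values are the same common defaults used in the full code in section 2.

// sketch only: prevGray and currGray are assumed to be CV_8UC1 frames of the same size
Mat flow; // output: CV_32FC2, one (dx, dy) displacement per pixel
calcOpticalFlowFarneback(prevGray, currGray, flow,
                         0.5,  // pyr_scale: each pyramid level is half the previous resolution
                         3,    // levels
                         15,   // winsize
                         3,    // iterations per pyramid level
                         5,    // poly_n
                         1.2,  // poly_sigma
                         0);   // flags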

  Implementation steps:

    1. Create a VideoCapture instance

    2. Open the video file with its open() method

    3. Read the first frame and convert it to grayscale (the dense optical flow input must be a single-channel 8-bit image)

    4. Read the remaining frames in a while loop

    5. Convert the current frame to grayscale

    6. Run the dense optical flow function to obtain the flow field

    7. Draw the flow vectors onto an image

    8. Display the result

2. Code Example (note: the button in the UI uses Qt)

HF_Object_Tracking::HF_Object_Tracking(QWidget *parent)
    : MyGraphicsView{parent}
{
    this->setWindowTitle("Dense Optical Flow Object Tracking");
    QPushButton *btn = new QPushButton(this);
    btn->setText("Select Video");
    connect(btn,&QPushButton::clicked,[=](){
        choiceVideo();
    });
}


void HF_Object_Tracking::choiceVideo(){
    path = QFileDialog::getOpenFileName(this,"Select a video","/Users/yangwei/Downloads/",tr("Video Files(*.mp4 *.avi)"));
    qDebug()<<"video path:"<<path;
    hfObjectTracking(path.toStdString().c_str());
}

void HF_Object_Tracking::hfObjectTracking(const char* filePath){
    VideoCapture capture;
    capture.open(filePath);
    if(!capture.isOpened()){
        qDebug()<<"视频路径为空";
        return;
    }
    Mat frame, gray;
    Mat prev_gray;
    Mat flowResult, flowData;
    capture.read(frame); // read the first frame
    // convert it to grayscale and use it as the previous frame
    cvtColor(frame, prev_gray, COLOR_BGR2GRAY);

    while(capture.read(frame)){
        cvtColor(frame,gray,COLOR_BGR2GRAY);
        if(!prev_gray.empty()){
            // dense optical flow between the previous and current grayscale frames
            calcOpticalFlowFarneback(prev_gray,gray,flowData, 0.5, 3, 15, 3, 5, 1.2, 0);
            cvtColor(prev_gray, flowResult, COLOR_GRAY2BGR);
            // draw a motion vector for every pixel whose displacement exceeds one pixel
            // (note: this keeps only motion toward positive x or y; std::abs would keep all directions)
            for (int row = 0; row < flowResult.rows; row++) {
                for (int col = 0; col < flowResult.cols; col++) {
                    const Point2f fxy = flowData.at<Point2f>(row, col);
                    if (fxy.x > 1 || fxy.y > 1) {
                        line(flowResult, Point(col, row), Point(cvRound(col + fxy.x), cvRound(row + fxy.y)), Scalar(0, 255, 0), 2, 8, 0);
                        circle(flowResult, Point(col, row), 2, Scalar(0, 0, 255), -1);
                    }
                }
            }
            imshow("flow", flowResult);
            imshow("input", frame);
        }
        // update the previous frame so the flow is computed between consecutive frames
        gray.copyTo(prev_gray);
        int key = waitKey(1);
        if(key==27){
            break;
        }
    }

}
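
Besides drawing per-pixel vectors as above, a common way to inspect a dense flow field is to map direction to hue and magnitude to brightness. The helper below is only a sketch of that idea and is not part of the original program; flowToColor is a hypothetical name, and the snippet assumes the usual OpenCV headers and using namespace cv as in the code above.

// Hypothetical helper (not part of the original post): visualize a CV_32FC2 flow
// field as a color image, where hue encodes direction and brightness encodes magnitude.
static Mat flowToColor(const Mat &flowData)
{
    std::vector<Mat> xy;
    split(flowData, xy); // xy[0] = horizontal displacement, xy[1] = vertical displacement

    Mat magnitude, angle;
    cartToPolar(xy[0], xy[1], magnitude, angle, true); // angle in degrees

    normalize(magnitude, magnitude, 0, 255, NORM_MINMAX); // scale magnitude to 0..255

    std::vector<Mat> hsvChannels(3);
    angle.convertTo(hsvChannels[0], CV_8UC1, 0.5);                 // hue: OpenCV uses 0..180
    hsvChannels[1] = Mat(flowData.size(), CV_8UC1, Scalar(255));   // full saturation
    magnitude.convertTo(hsvChannels[2], CV_8UC1);                  // value: motion strength

    Mat hsv, bgr;
    merge(hsvChannels, hsv);
    cvtColor(hsv, bgr, COLOR_HSV2BGR);
    return bgr;
}

Inside the while loop it could be displayed with something like imshow("flow-color", flowToColor(flowData)) next to the existing imshow calls.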

 

3. Demo Images

 
