OpenCV + CUDA hardware video decoding

Original source: https://note.youdao.com/ynoteshare/index.html?id=700052b0a49301059a34f20a00a830ca&type=note&_time=1638503513531

 

Contents:
 
I. Purpose (why)
Enable GPU-accelerated video encoding/decoding in OpenCV, including capturing H.264/H.265 video from IP cameras over RTSP.
II. Required packages (what)
1. Latest NVIDIA driver (.run installer)
2. CUDA 10.1 with the matching cuDNN + NVIDIA Video Codec SDK 9.1
3. Latest FFmpeg + nv-codec-headers 9.1
4. OpenCV 4.2.0 + opencv_contrib-4.2.0
 
 
III. Setup steps (how)
 
1. Manually install the latest NVIDIA driver
The PPA does not provide the latest driver, so install it manually.
 
Note: install driver version 435 or newer.
 
1) Check the GPU information:
$ lspci | grep VGA
$ ubuntu-drivers devices   # list the available drivers
 
PS: if the driver query prints no result, that case needs separate handling.
 
2) Download the driver:
http://www.nvidia.cn/Download/index.aspx
Pick the version that matches your machine; the download is a file named NVIDIA-Linux-x86_64-xxx.xx.run.
 
3) Disable the nouveau driver
Check whether it is already disabled.
 
If an NVIDIA driver has ever been installed, nouveau is normally already blacklisted. Check with
lsmod | grep nouveau
If there is no output, nouveau is disabled. Otherwise, follow the steps below.
 
Disabling nouveau
1. Create the file /etc/modprobe.d/blacklist-nouveau.conf, for example with:
sudo gedit /etc/modprobe.d/blacklist-nouveau.conf
 
2. Add the following lines:
blacklist nouveau
options nouveau modeset=0
 
3. Regenerate the kernel initramfs:
sudo update-initramfs -u
 
4. Reboot:
sudo reboot
 
4) Remove any existing driver
sudo apt-get remove --purge nvidia*
 
5) Install
sudo service lightdm stop
sudo ./NVIDIA-Linux-x86_64-390.77.run
sudo service lightdm start
 
If you get "unit lightdm.service not loaded",
install LightDM first: sudo apt install lightdm
When the display-manager selection dialog appears, choose lightdm, then run sudo service lightdm stop again.
 
# Installing the driver from the PPA (not recommended)
sudo service lightdm stop
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-get install nvidia-   # press TAB here to see the available driver versions
sudo apt-get install nvidia-396
 
6) Answer the installer prompts as follows:
  • The distribution-provided pre-install script failed! Are you sure you want to continue? Choose yes to continue.
  • Would you like to register the kernel module sources with DKMS? This will allow DKMS to automatically build a new module, if you install a different kernel later. Choose No to continue.
  • Nvidia's 32-bit compatibility libraries? Choose No to continue.
  • Would you like to run the nvidia-xconfig utility to automatically update your X configuration so that the NVIDIA X driver will be used when you restart X? Any pre-existing X config file will be backed up. Choose Yes to continue.
 
7) Verify the driver installation:
nvidia-smi
 
2. Installing CUDA 10.1 + cuDNN + NVIDIA Video Codec SDK 9.1
 
2.1 Installing CUDA 10.1
 
1) Download the installer from http://developer.nvidia.com/cuda-downloads
Create a working directory and run these two commands in it:
sudo wget http://developer.download.nvidia.com/compute/cuda/10.1/Prod/local_installers/cuda_10.1.243_418.87.00_linux.run
sudo sh cuda_10.1.243_418.87.00_linux.run
 
2) In the prompts that follow, choose continue and accept until the package-selection screen appears.
Since the NVIDIA driver is already installed, do not install it again: move to the Driver entry, press Enter to clear the X inside the brackets, then select Install.
 
3) Installation completes successfully.
4) Add the environment variables
sudo vi ~/.bashrc
 
Append at the end of the file:
export PATH="/usr/local/cuda-10.1/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-10.1/lib64:$LD_LIBRARY_PATH"
 
Then apply the changes:
source ~/.bashrc
 
5) Verify the installation
 
In a terminal run:
cd /usr/local/cuda-10.1/samples/1_Utilities/deviceQuery
sudo make
./deviceQuery
 
If the output ends with Result = PASS, the installation is working.
 
6) To watch GPU usage in real time, run:
watch -n 1 nvidia-smi
 
2.2 Installing cuDNN
 
1) Download the cuDNN build matching CUDA 10.1 from https://developer.nvidia.com/rdp/cudnn-archive
 
2) Download and install the matching version
Rename cudnn-10.1-linux-x64-v7.6.3.30.solitairetheme8 so that its extension is .tgz, then extract it:
 
$ tar -xzvf cudnn-10.1-linux-x64-v7.6.3.30.tgz
 
Copy the header and library files into the CUDA installation directory and make them readable:
 
$ sudo cp cuda/include/cudnn.h /usr/local/cuda/include
$ sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
$ sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
 
 
Possible problem
/sbin/ldconfig.real: /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8 is not a symbolic link
/sbin/ldconfig.real: /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8 is not a symbolic link
/sbin/ldconfig.real: /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8 is not a symbolic link
/sbin/ldconfig.real: /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8 is not a symbolic link
/sbin/ldconfig.real: /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8 is not a symbolic link
/sbin/ldconfig.real: /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8 is not a symbolic link
/sbin/ldconfig.real: /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8 is not a symbolic link
Fix: recreate the symlinks so they point at the real library files (CUDA 11.1 / cuDNN 8.0.2 shown here):
sudo ln -sf /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.2 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
sudo ln -sf /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.2 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
sudo ln -sf /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.2 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
sudo ln -sf /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.2 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
sudo ln -sf /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.2 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
sudo ln -sf /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.2 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
sudo ln -sf /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.2 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8
 
The same pattern applies to CUDA 10.2 with cuDNN 8.0.3:
sudo ln -sf /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.3 /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8 && \
sudo ln -sf /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.3 /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8 && \
sudo ln -sf /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.3 /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8 && \
sudo ln -sf /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.3 /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8 && \
sudo ln -sf /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.3 /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8 && \
sudo ln -sf /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.3 /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8 && \
sudo ln -sf /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn.so.8.0.3 /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn.so.8 && \
sudo ln -sf /usr/lib/x86_64-linux-gnu/libcuda.so.450.66 /usr/lib/x86_64-linux-gnu/libcuda.so
(Drop the sudo prefix when running as root, for example inside a container.)
Running several commands on one line in an Ubuntu terminal, and how the separators differ (see the short example after this list):
1. Separated by ";": every command runs, but there is no guarantee that each one succeeds.
2. Separated by "&&": a command runs only if the previous one succeeded, so the chain completes only if every step succeeded.
3. Separated by "||": logical "or"; the next command runs only if the previous one failed, and the chain stops at the first command that succeeds.
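 
A minimal illustration of the three separators (throwaway commands, nothing project-specific):
false ; echo "runs regardless of the failure before it"
mkdir -p build && cd build                              # cd runs only because mkdir succeeded
test -f missing.txt || echo "runs only because the test failed"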
 
Install the remaining three .deb files:
 
# Install the runtime library:
sudo dpkg -i libcudnn7_7.6.3.30-1+cuda10.1_amd64.deb
# Install the developer library:
sudo dpkg -i libcudnn7-dev_7.6.3.30-1+cuda10.1_amd64.deb
# Install the code samples and the cuDNN Library User Guide:
sudo dpkg -i libcudnn7-doc_7.6.3.30-1+cuda10.1_amd64.deb
 
2.3 Testing cuDNN
 
cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2
or
# Copy the cuDNN sample to a writable path.
cp -r /usr/src/cudnn_samples_v7/ $HOME
# Go to the writable path.
cd $HOME/cudnn_samples_v7/mnistCUDNN
# Compile the mnistCUDNN sample.
make clean && make
# Run the mnistCUDNN sample.
./mnistCUDNN
If cuDNN is properly installed and running on your Linux system, you will see a message similar to: Test passed!
 
2.4 Installing NVIDIA Video Codec SDK 9.1
 
Note: this is required for GPU encoding/decoding! Without it the later OpenCV build fails with:
fatal error: nvcuvid.h: No such file or directory
 
1. Download the SDK from NVIDIA (https://developer.nvidia.com/nvidia-video-codec-sdk#Download); the latest version at the time of writing is 9.1.
 
2. After extracting, copy cuviddec.h and nvcuvid.h from Video_Codec_SDK_9.1.23/include/ to /usr/include (a command sketch follows this list).
 
3. Continue with the remaining steps.
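 
A minimal sketch of step 2, assuming the SDK archive was unpacked in the current directory:
sudo cp Video_Codec_SDK_9.1.23/include/cuviddec.h /usr/include/
sudo cp Video_Codec_SDK_9.1.23/include/nvcuvid.h /usr/include/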
 
2.5 References
 
3. Installing FFmpeg + nv-codec-headers 9.1
 
3.1 Installing FFmpeg
 
3.1.1 Install the basic dependencies:
sudo apt-get update
sudo apt-get -y install autoconf automake build-essential libass-dev libfreetype6-dev \
  libsdl2-dev libtheora-dev libtool libva-dev libvdpau-dev libvorbis-dev libxcb1-dev libxcb-shm0-dev \
  libxcb-xfixes0-dev pkg-config texinfo zlib1g-dev
 
apt-get install yasm -y && apt-get install libx264-dev -y && apt-get install libx265-dev -y && \
apt-get install libvpx-dev -y && \
apt-get install libfdk-aac-dev -y && \
apt-get install libmp3lame-dev -y && \
apt-get install libopus-dev -y
 
3.1.2 Install yasm, the assembler needed when building some of the dependencies
sudo apt-get install yasm   # version 1.3
 
3.1.3 Install libx264, the H.264 video encoder; needed whenever you want to output H.264 video, so effectively mandatory
sudo apt-get install libx264-dev   # version 148
 
3.1.4 Install libx265 (not every GPU supports H.265 encoding)
H.265/HEVC video encoder.
If you do not need it, skip this step and remove --enable-libx265 from the FFmpeg configure command.
sudo apt-get install libx265-dev
 
3.1.5 Install libvpx
VP8/VP9 video encoder/decoder.
If you do not need it, skip this step and remove --enable-libvpx from the FFmpeg configure command.
sudo apt-get install libvpx-dev   # version 1.5
 
3.1.6 Install libfdk-aac, the AAC audio encoder (required)
sudo apt-get install libfdk-aac-dev   # no particular version required
 
3.1.7 Install libmp3lame, the MP3 audio encoder (required)
sudo apt-get install libmp3lame-dev
   
3.1.8 Install libopus
Opus audio encoder.
If you do not need it, skip this step and remove --enable-libopus from the FFmpeg configure command.
sudo apt-get install libopus-dev   # 1.1.2
 
3.1.9 Install the NVENC dependencies:
sudo apt-get -y install glew-utils libglew-dbg libglew-dev libglew1.13 \
  libglewmx-dev libglewmx-dbg freeglut3 freeglut3-dev freeglut3-dbg libghc-glut-dev \
  libghc-glut-doc libghc-glut-prof libalut-dev libxmu-dev libxmu-headers libxmu6 \
  libxmu6-dbg libxmuu-dev libxmuu1 libxmuu1-dbg
   
3.1.10 Download FFmpeg
git clone https://github.com/FFmpeg/FFmpeg ffmpeg -b master
 
3.2 Installing nv-codec-headers 9.1
 
To let FFmpeg use the GPU codecs provided by CUDA, FFmpeg must be rebuilt so that it can call into CUDA via dynamic linking.
First build and install the nv-codec-headers library: https://github.com/FFmpeg/nv-codec-headers/tree/sdk/9.1
Install it with the commands below.
Note on versions: CUDA 10.2 uses the 9.1 headers, while CUDA 9.0 needs the 9.0 headers; a plain git clone gives the latest branch, which is not what you want, so check out the matching branch.
git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git
cd nv-codec-headers
git checkout sdk/9.1   # pick the branch matching your CUDA/driver (sdk/9.0 for CUDA 9.0)
make
sudo make install
 
Then cd into the ffmpeg directory.
 
3.3 Building FFmpeg
cd into the FFmpeg source directory.
 
Build with:
./configure --prefix=/usr/local/ffmpeg --disable-asm --disable-x86asm \
  --enable-cuda --enable-cuvid --enable-nvenc \
  --enable-nonfree --enable-libnpp \
  --extra-cflags=-I/usr/local/cuda/include \
  --extra-cflags=-fPIC --extra-ldflags=-L/usr/local/cuda/lib64 \
  --enable-gpl --enable-libx264 --enable-libx265 \
  --enable-shared \
  --enable-libass \
  --enable-libfdk-aac \
  --enable-libfreetype \
  --enable-libmp3lame \
  --enable-libopus \
  --enable-libtheora \
  --enable-libvorbis
make -j8
sudo make -j8 install
make -j8 distclean
hash -r   # clear the shell's cached command locations
 
Configure the dynamic loader:
sudo vi /etc/ld.so.conf
# add:
/usr/local/ffmpeg/lib
sudo ldconfig
 
Then add FFmpeg to the environment:
sudo vi /etc/profile
# append the following
export PATH=/usr/local/ffmpeg/bin:$PATH
export FFMPEG_HOME=/usr/local/ffmpeg
export PATH=$FFMPEG_HOME/bin:$PATH
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib   # shared-library path
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/ffmpeg/lib/pkgconfig:/usr/local/lib
# then run
source /etc/profile
 
ffmpeg -h
ffmpeg -version
 
3.4 Common problems
 
Problem 1: ERROR: freetype2 not found using pkg-config
Fix 1: install freetype, add its path to ~/.bashrc, then source ~/.bashrc.
 
Problem 2: ERROR: vorbis not found using pkg-config
Fix 2: install the dependencies
sudo apt-get install -y autoconf automake build-essential git libass-dev libfreetype6-dev libsdl2-dev libtheora-dev libtool libva-dev libvdpau-dev libvorbis-dev libxcb1-dev libxcb-shm0-dev libxcb-xfixes0-dev pkg-config texinfo wget zlib1g-dev
apt install libavformat-dev
apt install libavcodec-dev
apt install libswresample-dev
apt install libswscale-dev
apt install libavutil-dev
apt install libsdl1.2-dev
 
Problem 3: ERROR: opus not found using pkg-config
Fix 3: sudo apt-get install libopus-dev
 
3.5 Verifying the installation
 
After the rebuilt FFmpeg is installed, list the supported hardware acceleration methods with ffmpeg -hwaccels:
Hardware acceleration methods: cuvid
A new method called cuvid appears; this is the GPU video encode/decode acceleration provided by CUDA.
Then list the GPU codecs that cuvid provides: ffmpeg -codecs | grep cuvid
DEV.LS h264        H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (decoders: h264 h264_cuvid ) (encoders: libx264 libx264rgb h264_nvenc nvenc nvenc_h264 )
DEV.L. hevc        H.265 / HEVC (High Efficiency Video Coding) (decoders: hevc hevc_cuvid ) (encoders: libx265 nvenc_hevc hevc_nvenc )
DEVIL. mjpeg       Motion JPEG (decoders: mjpeg mjpeg_cuvid )
DEV.L. mpeg1video  MPEG-1 video (decoders: mpeg1video mpeg1_cuvid )
DEV.L. mpeg2video  MPEG-2 video (decoders: mpeg2video mpegvideo mpeg2_cuvid )
DEV.L. mpeg4       MPEG-4 part 2 (decoders: mpeg4 mpeg4_cuvid )
D.V.L. vc1         SMPTE VC-1 (decoders: vc1 vc1_cuvid )
DEV.L. vp8         On2 VP8 (decoders: vp8 libvpx vp8_cuvid ) (encoders: libvpx )
DEV.L. vp9         Google VP9 (decoders: vp9 libvpx-vp9 vp9_cuvid ) (encoders: libvpx-vp9 )
Everything marked "cuvid" or "nvenc" is a CUDA-provided GPU codec.
So GPU decoding is now available for h264/hevc/mjpeg/mpeg1/mpeg2/mpeg4/vc1/vp8/vp9, and GPU encoding for h264/hevc.
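 
For a quick decode-only check, the cuvid decoder can be fed into FFmpeg's null muxer (a minimal sketch; input.mp4 is a placeholder file name):
ffmpeg -benchmark -hwaccel cuvid -c:v h264_cuvid -i input.mp4 -f null -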
 
3.6 Transcoding test
 
ffmpeg -i input.flv -c:v h264_nvenc -c:a aac output.mp4   # if this fails inside a Docker container, see the fix below
Speed comparison: on the same hardware the GPU is roughly 7-8x faster.
frame=21022 fps=398 q=21.0 Lsize= 232698kB time=00:14:36.75 bitrate=2174.2kbits/s dup=137 drop=0 speed=16.6x
Playing back the result shows no visible difference from the CPU-transcoded output.
 
3.7 Transcoding video with the GPU
 
GPU transcoding commands differ from software transcoding. With CPU transcoding, ffmpeg can detect the input codec and pick a matching decoder by itself, but it only auto-selects CPU decoders. To make ffmpeg use a GPU decoder, first identify the input codec with ffprobe, then name the corresponding GPU decoder on the command line.
For example, transcode an H.264 source into an H.264 output at a given size and bitrate:
ffmpeg -hwaccel cuvid -c:v h264_cuvid -i <input> -c:v h264_nvenc -b:v 2048k -vf scale_npp=1280:-1 -y <output>
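 
A minimal way to identify the input codec before choosing the cuvid decoder (input.mp4 is a placeholder):
ffprobe -v error -select_streams v:0 -show_entries stream=codec_name -of default=noprint_wrappers=1:nokey=1 input.mp4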
 
RTSP transcoding test with the GPU:
ffmpeg -hwaccel cuvid -c:v h264_cuvid -rtsp_transport tcp -i "rtsp://admin:hk888888@192.168.1.235/h264/ch1/main/av_stream" -c:v h264_nvenc -b:v 2048k -vf scale_npp=1280:-1 -y /root/2.mp4
-hwaccel cuvid: use cuvid hardware acceleration
-c:v h264_cuvid: decode the video with h264_cuvid
-c:v h264_nvenc: encode the video with h264_nvenc
-vf scale_npp=1280:-1: set the output width/height; note this differs from the -vf scale=x:x used with software decoding
Running nvidia-smi during the transcode shows that ffmpeg really is using the GPU:
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     62543      C   ffmpeg                                       193MiB |
+-----------------------------------------------------------------------------+
 
Possible error
If you see:
[nvenc_hevc @ 0x3f928c0] Driver does not support the required nvenc API version. Required: 9.1 Found: 8.1
[nvenc_hevc @ 0x3f928c0] The minimum required Nvidia driver for nvenc is 390.25 or newer
the likely cause is that nv-codec-headers is version 9.1 while the installed NVIDIA driver only supports API 8.1. List the nv-codec-headers tags, check out the 8.1 version, and rebuild FFmpeg. Note: delete the old FFmpeg build tree completely before rebuilding!
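 
A sketch of that downgrade, assuming the 8.1 headers follow the same sdk/<version> branch naming as the sdk/9.1 branch used above:
cd nv-codec-headers
git tag                 # list the available header versions
git checkout sdk/8.1    # or the matching 8.1 tag from the list
make
sudo make install
# then remove the old FFmpeg build tree and configure/build FFmpeg again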
 
Important: nvidia-docker2 problems and fixes
ffmpeg -hwaccel cuvid -c:v h264_cuvid -i 1.mp4 -c:v h264_nvenc -b:v 2048k -vf scale_npp=1280:-1 -y out.mp4
Error messages
The test command above may fail with:
Cannot load libnvcuvid.so.1
Cannot load libnvidia-encode.so.1
[h264_nvenc @ 0x17f6270] Cannot load libnvidia-encode.so.1
[h264_nvenc @ 0x17f6270] The minimum required Nvidia driver for nvenc is 445.87 or newer
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
This happens because the Docker container does not contain those two libraries. They do exist on the host, so copy them into the container and create symlinks.
# look for the files in these host directories
/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu
The two files found there are themselves symlinks.
Run ll on them to locate the real files:
libnvcuvid.so.440.100
libnvidia-encode.so.440.100
The trailing number may differ; it is the driver version.
Copy the real files into the container, then create the symlinks:
# on the host
/usr/lib/x86_64-linux-gnu/libnvcuvid.so.440.100
/usr/lib/x86_64-linux-gnu/libnvidia-encode.so.440.100
# copy into the container, e.g. to
/lib64/libnvcuvid.so.440.100
/lib64/libnvidia-encode.so.440.100
# create the symlinks inside the container
ln -s /lib64/libnvcuvid.so.440.100 /lib64/libnvcuvid.so.1
ln -s /lib64/libnvidia-encode.so.440.100 /lib64/libnvidia-encode.so.1
# ideally put these steps into the Dockerfile as well
Then:
vi /etc/ld.so.conf
# add the symlink path
/lib64
# refresh the cache
ldconfig
Finally, rerun the GPU encode/decode test.
 
3.8 GPU transcoding efficiency test
 
On a server with two Intel E5-2630 v3 CPUs and two Nvidia Tesla M4 GPUs, an h264 transcoding test gives:
  • GPU transcode, average time: 8 s
  • CPU transcode, average time: 25 s
With parallel jobs, CPU software transcoding improves somewhat: with 3 transcodes in parallel all 32 cores are saturated, giving
  • GPU transcode, average time: 8 s
  • CPU transcode, average time: 18 s
The GPU time does not improve with parallel jobs, so a single GPU apparently runs only one transcode at a time. If the server has several GPUs, will ffmpeg spread transcodes across them automatically?
Unfortunately, no.
ffmpeg cannot distribute transcodes across GPUs by itself, but the -hwaccel_device option lets you choose which GPU each job uses.
Submit transcodes to different GPUs:
ffmpeg -hwaccel cuvid -hwaccel_device 0 -c:v h264_cuvid -i <input> -c:v h264_nvenc -b:v 2048k -vf scale_npp=1280:-1 -y <output>
ffmpeg -hwaccel cuvid -hwaccel_device 1 -c:v h264_cuvid -i <input> -c:v h264_nvenc -b:v 2048k -vf scale_npp=1280:-1 -y <output>
-hwaccel_device N: run the transcode on GPU number N
nvidia-smi now shows:
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     96931      C   ffmpeg                                       193MiB |
|    1     96930      C   ffmpeg                                       193MiB |
+-----------------------------------------------------------------------------+
Parallel GPU transcoding now works!
With the server fully loaded, GPU and CPU transcoding compare as follows:
  • GPU transcode, average time: 4 s
  • CPU transcode, average time: 18 s
The GPU is about 4.5x as fast as the CPU.
 
3.9 References
 
4. Installing OpenCV 4.2.0 + opencv_contrib
 
4.1 OpenCV dependencies (Ubuntu 16.04)
 
PS: use the official Ubuntu package sources; other mirrors caused errors during my setup.
 
[1] - Required dependencies (official)
sudo apt-get update
sudo apt-get install cmake git
sudo apt-get install build-essential \
  libgtk2.0-dev \
  pkg-config \
  libavcodec-dev \
  libavformat-dev
[2] - Recommended dependencies (official)
sudo apt-get install python-dev \
  libtbb2 \
  libtbb-dev \
  libjpeg-dev \
  libpng-dev \
  libtiff-dev \
  libjasper-dev \
  libdc1394-22-dev
[3] - OpenGL support
sudo apt-get install freeglut3-dev \
  mesa-common-dev \
  libgtkglext1 \
  libgtkglext1-dev
[4] - Video decoding support
sudo apt-get install checkinstall \
  yasm \
  libgstreamer0.10-dev \
  libgstreamer-plugins-base0.10-dev \
  libv4l-dev \
  libtbb-dev \
  libqt4-dev \
  libgtk2.0-dev \
  libmp3lame-dev \
  libtheora-dev \
  libvorbis-dev \
  libxvidcore-dev \
  x264 \
  v4l-utils
[5] - Other possible dependencies
sudo apt-get install libgphoto2-dev libavresample-dev liblapacke-dev gtk+-3.0
sudo apt-get install libgtk-3-dev libeigen3-dev tesseract-ocr liblept5 leptonica-progs libleptonica-dev
 
 
Ubuntu 18.04
sudo apt-get update -y                            # Update the list of packages
sudo apt-get remove -y x264 libx264-dev           # Remove the older version of libx264-dev and x264
sudo apt-get install -y build-essential checkinstall cmake pkg-config yasm
sudo apt-get install -y git gfortran
sudo add-apt-repository -y "deb http://security.ubuntu.com/ubuntu xenial-security main"
sudo apt-get install -y libjpeg8-dev libjasper-dev libpng12-dev
sudo apt-get install -y libtiff5-dev
sudo apt-get install -y libavcodec-dev libavformat-dev libswscale-dev libdc1394-22-dev
sudo apt-get install -y libxine2-dev libv4l-dev
sudo apt-get install -y libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev
sudo apt-get install -y qt5-default libgtk2.0-dev libtbb-dev
sudo apt-get install -y libatlas-base-dev
sudo apt-get install -y libfaac-dev libmp3lame-dev libtheora-dev
sudo apt-get install -y libvorbis-dev libxvidcore-dev
sudo apt-get install -y libopencore-amrnb-dev libopencore-amrwb-dev
sudo apt-get install -y x264 v4l-utils
# Some Optional Dependencies
sudo apt-get install -y libprotobuf-dev protobuf-compiler
sudo apt-get install -y libgoogle-glog-dev libgflags-dev
sudo apt-get install -y libgphoto2-dev libeigen3-dev libhdf5-dev doxygen
 
While installing these dependencies you may hit a few errors; here are the ones I ran into and their fixes.
Error 1:
E: Unable to locate package libjasper-dev
Run:
sudo add-apt-repository "deb http://security.ubuntu.com/ubuntu xenial-security main"
sudo apt-get update
sudo apt-get install libjasper-dev
then rerun the dependency installation.
Error 2:
E: Unable to locate package libgstreamer0.10-dev
Run:
sudo apt install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev
instead.
 
 
4.2 Installing and configuring OpenCV + opencv_contrib
Workaround 1 for download failures during the build:
Change the download URLs of a few resources.
1. ippicv_2020_win_intel64_20191018_general.zip fails to download
Go to the opencv4.3.0\3rdparty\ippicv directory and, on line 47 of ippicv.cmake, change https://raw.githubusercontent.com
 
to https://raw.staticdn.net
 
2. opencv_videoio_ffmpeg_64.dll / opencv_videoio_ffmpeg.dll fail to download
Go to the opencv4.3.0\3rdparty\ffmpeg directory and, on line 25 of ffmpeg.cmake, change https://raw.githubusercontent.com
 
to https://raw.staticdn.net
 
3. boostdesc_bgm.i and related files fail to download
Go to the opencv_contrib-4.3.0\modules\xfeatures2d\cmake directory
 
and change https://raw.githubusercontent.com in download_boostdesc.cmake to https://raw.staticdn.net
 
4. vgg_generated_120.i and related files fail to download
Go to the opencv_contrib-4.3.0\modules\xfeatures2d\cmake directory
 
and change https://raw.githubusercontent.com in download_vgg.cmake to https://raw.staticdn.net
 
5. opencv_contrib-4.3.0\modules\face has no cmake directory; edit the download URL directly in its CMakeLists.txt.
 
 
Workaround 2 for download failures:
1) Download OpenCV and the matching version of opencv_contrib from https://github.com/opencv
 
2) Put the opencv_contrib directory inside the opencv directory
 
3) Download ippicv manually
 
1. Download ippicv_2019_lnx_intel64_general_20180723.tgz
Save it anywhere; I kept it in the default download directory /home/lc/下载.
 
2. Edit the corresponding OpenCV config file
In a terminal, run
    gedit /home/lc/opencv_source/opencv/3rdparty/ippicv/ippicv.cmake   # replace lc with your own username
On line 47, change
    "https://raw.githubusercontent.com/opencv/opencv_3rdparty/${IPPICV_COMMIT}/ippicv/"
to the local path of the file downloaded in step 1:
     "file:///home/lc/下载/"   # for reference only; use your own path
Save and exit.
 
3. During cmake, the ippicv step will then pull the file from the local path.
 
 
4) Downloading face_landmark_model.dat
 
The OpenCV 4.2.0 + contrib build can get stuck downloading face_landmark_model.dat.
Fix:
a. Download face_landmark_model.dat manually (from the link in the original note); the file can live anywhere.
b. Edit the corresponding config file:
$ gedit /home/usrname/tool/opencv-3.4.0/opencv_contrib-3.4.0/modules/face/CMakeLists.txt   # replace usrname with your username and <tool/opencv-3.4.0> with your own OpenCV source directory
Change line 19 of CMakeLists.txt to the local path, i.e. replace the original URL with the directory holding the downloaded file:
"file:///home/usrname/install/"   # was "https://raw.githubusercontent.com/opencv/opencv_3rdparty/${__commit_hash}/"; replace usrname and the path with your own
I put the downloaded face_landmark_model.dat in /home/usrname/install/, so the download URL is replaced with that local path, as shown above.
c. Rebuild.
 
5) Downloading boostdesc_bgm.i and related files
boostdesc_bgm.i
boostdesc_bgm_bi.i
boostdesc_bgm_hd.i
boostdesc_lbgm.i
boostdesc_binboost_064.i
boostdesc_binboost_128.i
boostdesc_binboost_256.i
vgg_generated_120.i
vgg_generated_64.i
vgg_generated_80.i
vgg_generated_48.i
Copy these files into opencv_contrib/modules/xfeatures2d/src/. Ready-to-use copies are not easy to find online, which is why the original post shared them.
Extraction code: e1wc
Then change the download path in opencv_contrib/modules/xfeatures2d/cmake/ accordingly.
 
6) Start the build
 
Preferred: the variant without an explicit Python configuration
cd opencv
mkdir build
cd build
sudo cmake -D CMAKE_INSTALL_PREFIX=/usr/local/opencv-4.2.0 \
  -D CMAKE_BUILD_TYPE=Debug \
  -D OPENCV_EXTRA_MODULES_PATH=../opencv_contrib-4.2.0/modules \
  -D BUILD_opencv_hdf=OFF \
  -D BUILD_opencv_python3=ON \
  -D WITH_CUDA=ON \
  -D WITH_OPENGL=ON \
  -D WITH_OPENMP=ON \
  -D WITH_GTK=ON \
  -D WITH_OPENCL=ON \
  -D WITH_VTK=ON \
  -D WITH_TBB=ON \
  -D WITH_GSTREAMER=ON \
  -D WITH_CUDNN=ON \
  -D WITH_CUBLAS=ON \
  -D WITH_GTK_2_X=ON \
  -D BUILD_EXAMPLES=ON \
  -D OPENCV_ENABLE_NONFREE=ON \
  -D WITH_FFMPEG=ON \
  -D OPENCV_GENERATE_PKGCONFIG=ON \
  -D WITH_NVCUVID=ON \
  -D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.2 \
  -D CUDA_ARCH_BIN=5.3,6.0,6.1,7.0,7.5 \
  -D CUDA_ARCH_PTX=7.5 \
  ..
# Variant with Anaconda Python; this tends to cause problems
cd opencv
mkdir build
cd build
sudo cmake -D CMAKE_INSTALL_PREFIX=/usr/local/opencv-4.2.0 \
  -D CMAKE_BUILD_TYPE=Debug \
  -D OPENCV_EXTRA_MODULES_PATH=../opencv_contrib-4.2.0/modules \
  -D BUILD_opencv_python3=ON \
  -D PYTHON_DEFAULT_EXECUTABLE=/root/anaconda3/lib/python3.7 \
  -D BUILD_opencv_python2=OFF \
  -D PYTHON3_EXECUTABLE=/root/anaconda3/lib/python3.7 \
  -D PYTHON3_INCLUDE_DIR=/root/anaconda3/include/python3.7m \
  -D PYTHON3_LIBRARY=/root/anaconda3/lib/libpython3.7m.so.1.0 \
  -D PYTHON_NUMPY_PATH=/root/anaconda3/lib/python3.7/site-packages \
  -D BUILD_opencv_hdf=OFF \
  -D WITH_CUDA=ON \
  -D WITH_OPENGL=ON \
  -D WITH_OPENMP=ON \
  -D WITH_GTK=ON \
  -D WITH_VTK=ON \
  -D WITH_TBB=ON \
  -D WITH_GSTREAMER=ON \
  -D WITH_CUDNN=ON \
  -D WITH_CUBLAS=ON \
  -D WITH_GTK_2_X=ON \
  -D BUILD_EXAMPLES=ON \
  -D OPENCV_ENABLE_NONFREE=ON \
  -D WITH_FFMPEG=ON \
  -D OPENCV_GENERATE_PKGCONFIG=ON \
  -D WITH_NVCUVID=ON \
  -D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.2 \
  -D CUDA_ARCH_BIN=5.3,6.0,6.1,7.0,7.5 \
  -D CUDA_ARCH_PTX=7.5 \
  ..
 
PS: important notes!
1. Key cmake options:
-D WITH_OPENGL=ON        # enable OpenGL; required
-D WITH_GTK_2_X=ON       # required, otherwise OpenGL cannot be enabled
-D BUILD_EXAMPLES=ON     # used later to verify GPU encoding/decoding
-D WITH_NVCUVID=ON       # builds the GPU codec-related support; required
 
2. Check the configure output:
[1] Confirm that CUDA is enabled, and in particular that NVCUVID is enabled.
Fix for NVCUVID missing when configuring inside Docker:
 
Copy libnvcuvid.so, libnvcuvid.so.1 and the actual library libnvcuvid.so.440.80.2 from the host into /usr/lib/x86_64-linux-gnu/ inside the container, then create symlinks in the container:
ln -s /usr/lib/x86_64-linux-gnu/libnvcuvid.so.450.80.02 /usr/lib/x86_64-linux-gnu/libnvcuvid.so.1
ln -s /usr/lib/x86_64-linux-gnu/libnvcuvid.so.1 /usr/lib/x86_64-linux-gnu/libnvcuvid.so
sudo ln -s /usr/lib/x86_64-linux-gnu/libnvcuvid.so /usr/lib/libnvcuvid.so
sudo ln -s /usr/lib/x86_64-linux-gnu/libnvcuvid.so.1 /usr/lib/libnvcuvid.so.1
Then rerun cmake.
 
[2] Confirm that FFMPEG is enabled.
[3] Confirm that OpenGL support is enabled.
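 
One way to check those flags without scrolling through the whole configure output (a sketch; configure.log is just a scratch file name, and <the options shown above> stands for the cmake options listed earlier):
cmake <the options shown above> .. 2>&1 | tee configure.log
grep -iE "NVCUVID|FFMPEG|OpenGL support" configure.log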
 
If FFMPEG shows NO, it is easier to configure with cmake-gui; the steps are:
Building with cmake-gui
# install cmake-gui
sudo apt-get install cmake-qt-gui
# enter the OpenCV source directory
cd opencv
# create a build directory for the generated files
mkdir build
cd build
# start cmake-gui
cmake-gui ..
 
step.1 Click "Browse Source" and select the OpenCV source root.
 
step.2 Click "Browse Build" and select the build directory.
 
step.3 Click "Configure"; in the CMakeSetup dialog choose Unix Makefiles and Use default native compilers to generate the options.
 
step.4 Set the build parameters:
 
Name                        Value                                 Notes
CMAKE_BUILD_TYPE            Release
CMAKE_INSTALL_PREFIX        /usr/local/opencv4.2.0                install directory
OPENCV_EXTRA_MODULES_PATH   opencv-4.2.0/opencv_contrib/modules   opencv_contrib directory
BUILD_DOCS                  ON                                    build the documentation
BUILD_EXAMPLES              ON                                    build all samples
INSTALL_PYTHON_EXAMPLES     ON
INSTALL_C_EXAMPLES          ON
OPENCV_GENERATE_PKGCONFIG   ON                                    be sure to enable; saves creating opencv4.pc by hand
WITH_OPENGL                 ON
For the remaining options, refer to the cmake command above.
 
step.5 Click "Configure" again, then click "Generate".
 
step.6 Build
 
cd into the build directory:
sudo make -j12
sudo make install
Possible build errors and fixes
1. Building opencv_contrib fails with:
fatal error: vgg_generated_120.i: No such file or directory
 
Solutions:
Step 1: download the missing files (see the shared files in section 4.2, step 5).
Step 2: copy all of them into:
opencv_contrib/modules/xfeatures2d/src/
Step 3: rerun make
make -j4
 
2. Possible error during make
fatal error: features2d/test/test_detectors_invariance.impl.hpp: No such file (or a similar "XXX.hpp: No such file or directory")
This usually means a header under features2d/test cannot be found. Copy the missing file from opencv-4.1.2/modules/features2d/test into opencv_contrib-4.1.2/modules/xfeatures2d/test, then edit the #include in the failing file so that the path prefix is removed and the header is found locally.
 
For example:
the error says test_rotation_and_scale_invariance.cpp cannot find
#include "xxxx/test_detectors_invariance.impl.hpp",
so look for test_detectors_invariance.impl.hpp under opencv-4.1.2/modules/features2d/test,
copy it into opencv_contrib-4.1.2/modules/xfeatures2d/test,
then open test_rotation_and_scale_invariance.cpp and
change #include "xxxx/test_detectors_invariance.impl.hpp" to #include "test_detectors_invariance.impl.hpp".
If hunting down each file is tedious, simply copy all of them over and then fix the #include paths in whichever files report errors:
cp ../modules/features2d/test/test_detectors_regression.impl.hpp ../opencv_contrib-4.2.0/modules/xfeatures2d/test/
cp ../modules/features2d/test/test_descriptors_regression.impl.hpp ../opencv_contrib-4.2.0/modules/xfeatures2d/test/
cp ../modules/features2d/test/test_detectors_invariance.impl.hpp ../opencv_contrib-4.2.0/modules/xfeatures2d/test/
cp ../modules/features2d/test/test_descriptors_invariance.impl.hpp ../opencv_contrib-4.2.0/modules/xfeatures2d/test/
cp ../modules/features2d/test/test_invariance_utils.hpp ../opencv_contrib-4.2.0/modules/xfeatures2d/test/
 
3. Anaconda build error
Error message:
libtbb.so.2: undefined reference to `__cxa_init_primary_exception@CXXABI_1.3
Fix:
The libtbb.so.2 in anaconda3/lib is broken in some unexplained way; copying the file of the same name from x86_64-linux-gnu over it lets the build succeed. The exact cause is unclear; libtbb is Intel's threading library, so this is probably a system or build-path configuration issue.
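 
A sketch of that replacement, using the /root/anaconda3 prefix from the cmake command above (adjust to your own Anaconda install):
mv /root/anaconda3/lib/libtbb.so.2 /root/anaconda3/lib/libtbb.so.2.bak       # keep a backup
cp /usr/lib/x86_64-linux-gnu/libtbb.so.2 /root/anaconda3/lib/libtbb.so.2     # use the system copy instead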
 
4. Fixing errors like opencv2/xfeatures2d/cuda.hpp: No such file or directory
Problem:
/usr/local/arm/opencv-3.4.0/opencv_contrib-3.4.0/modules/xfeatures2d/include/opencv2/xfeatures2d.hpp:42:10: fatal error: /opencv2/xfeatures2d.hpp: No such file or directory
 #include "/opencv2/xfeatures2d.hpp"
compilation terminated.
Open xfeatures2d.hpp at the path shown in the error; line 42 looks like:
40 #ifndef __OPENCV_XFEATURES2D_HPP__
41 #define __OPENCV_XFEATURES2D_HPP__
42 #include "/opencv2/xfeatures2d.hpp"
Change the include to an absolute path:
40 #ifndef __OPENCV_XFEATURES2D_HPP__
41 #define __OPENCV_XFEATURES2D_HPP__
42 #include "/usr/local/arm/opencv3.4.0/opencv_contrib3.4.0/modules/xfeatures2d/include/opencv2/xfeatures2d.hpp"
 
After that you may hit problem 5 below.
 
5. Problem
undefined reference to `cv::cuda::SURF_CUDA::SURF_CUDA()'
Fix
Edit <build_dir>/samples/gpu/CMakeFiles/example_gpu_surf_keypoint_matcher.dir/link.txt and add
"<build_dir>/modules/xfeatures2d/CMakeFiles/opencv_xfeatures2d.dir/src/surf.cuda.cpp.o" to the link line, as in:
CMakeFiles/example_gpu_surf_keypoint_matcher.dir/surf_keypoint_matcher.cpp.o ../../modules/xfeatures2d/CMakeFiles/opencv_xfeatures2d.dir/src/surf.cuda.cpp.o ../../modules/xfeatures2d/CMakeFiles/cuda_compile_1.dir/src/cuda/cuda_compile_1_generated_surf.cu.o -o .....
 
7) Configure the OpenCV environment variables
Add the OpenCV library path to the loader configuration:
sudo gedit /etc/ld.so.conf.d/opencv.conf   # create the file if it does not exist
# the file may be empty; append at the end:
/usr/local/opencv-4.2.0/lib                # the lib path; adjust to your own install prefix
 
Refresh the system library cache:
sudo ldconfig
 
Configure bash:
sudo gedit /etc/bash.bashrc
# append at the end:
PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/opencv-4.2.0/lib/pkgconfig   # adjust to your OpenCV lib path
export PKG_CONFIG_PATH
 
Save and exit, then make the configuration take effect:
source /etc/bash.bashrc   # apply the configuration
sudo updatedb             # then update the database
 
Configure the per-user bash:
Open ~/.bashrc
$ gedit ~/.bashrc
 
Append the following at the end of the file:
export PKG_CONFIG_PATH=/usr/local/opencv-4.2.0/lib/pkgconfig
export LD_LIBRARY_PATH=/usr/local/opencv-4.2.0/lib
 
Reload ~/.bashrc:
$ source ~/.bashrc
 
Query the OpenCV version:
pkg-config opencv4 --modversion
# or
pkg-config --cflags --libs opencv4
# you may hit the following error:
Package opencv4 was not found in the pkg-config search path.
Perhaps you should add the directory containing `opencv4.pc' to the PKG_CONFIG_PATH environment variable
No package 'opencv4' found
 
Cause: OPENCV_GENERATE_PKGCONFIG was left OFF at configure time, so no opencv4.pc file was generated on install.
Of course, if you did enable that option, the problem addressed in 4.3 below does not arise.
 
4.3 Creating opencv4.pc
 
sudo gedit /usr/local/lib/pkgconfig/opencv4.pc
 
Add the following content:
# Package Information for pkg-config
prefix=/usr/local
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir_old=${prefix}/include/opencv4/opencv
includedir_new=${prefix}/include/opencv4
Name: OpenCV
Description: Open Source Computer Vision Library
Version: 4.2.0
Libs: -L${exec_prefix}/lib -lopencv_gapi -lopencv_stitching -lopencv_aruco -lopencv_bgsegm -lopencv_bioinspired -lopencv_ccalib -lopencv_dnn_objdetect -lopencv_dpm -lopencv_face -lopencv_freetype -lopencv_fuzzy -lopencv_hfs -lopencv_img_hash -lopencv_line_descriptor -lopencv_quality -lopencv_reg -lopencv_rgbd -lopencv_saliency -lopencv_stereo -lopencv_structured_light -lopencv_phase_unwrapping -lopencv_superres -lopencv_optflow -lopencv_surface_matching -lopencv_tracking -lopencv_datasets -lopencv_text -lopencv_dnn -lopencv_plot -lopencv_videostab -lopencv_video -lopencv_xfeatures2d -lopencv_shape -lopencv_ml -lopencv_ximgproc -lopencv_xobjdetect -lopencv_objdetect -lopencv_calib3d -lopencv_features2d -lopencv_highgui -lopencv_videoio -lopencv_imgcodecs -lopencv_flann -lopencv_xphoto -lopencv_photo -lopencv_imgproc -lopencv_core
Libs.private: -ldl -lm -lpthread -lrt
Cflags: -I${includedir_old} -I${includedir_new}
 
Test pkg-config again:
pkg-config --cflags --libs opencv4
-I/usr/local/include/opencv4/opencv -I/usr/local/include/opencv4 -L/usr/local/lib -lopencv_gapi -lopencv_stitching -lopencv_aruco -lopencv_bgsegm -lopencv_bioinspired -lopencv_ccalib -lopencv_dnn_objdetect -lopencv_dpm -lopencv_face -lopencv_freetype -lopencv_fuzzy -lopencv_hfs -lopencv_img_hash -lopencv_line_descriptor -lopencv_quality -lopencv_reg -lopencv_rgbd -lopencv_saliency -lopencv_stereo -lopencv_structured_light -lopencv_phase_unwrapping -lopencv_superres -lopencv_optflow -lopencv_surface_matching -lopencv_tracking -lopencv_datasets -lopencv_text -lopencv_dnn -lopencv_plot -lopencv_videostab -lopencv_video -lopencv_xfeatures2d -lopencv_shape -lopencv_ml -lopencv_ximgproc -lopencv_xobjdetect -lopencv_objdetect -lopencv_calib3d -lopencv_features2d -lopencv_highgui -lopencv_videoio -lopencv_imgcodecs -lopencv_flann -lopencv_xphoto -lopencv_photo -lopencv_imgproc -lopencv_core
 
Test successful!
 
About the pkg-config command: it returns all of the compile-related information for a given library/module.
Any program that uses OpenCV can then be compiled by passing `pkg-config opencv4 --libs --cflags`, without hunting down the header and library locations yourself. It saves a lot of time.
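 
For example, a minimal compile line (demo.cpp is a hypothetical source file):
g++ -std=c++11 demo.cpp -o demo `pkg-config --cflags --libs opencv4`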
 
About the sudo ldconfig command: ldconfig manages dynamic shared libraries so that the system can share them.
It searches the default directories /lib and /usr/lib, plus the directories listed in the configuration file /etc/ld.so.conf; for anything outside those you need export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/<your_dir>/lib.
It finds the shareable dynamic libraries (named like lib*.so*) and creates the links and cache file needed by the dynamic loader (ld.so).
The cache file is /etc/ld.so.cache by default and holds a sorted list of shared-library names.
Linux's shared-library mechanism works like a cache: library information is stored in /etc/ld.so.cache, which programs consult first at load time before searching the paths in ld.so.conf.
To make new shared libraries visible system-wide, run the management command ldconfig; the executable lives in /sbin.
 
 
 
 
 
5. Testing OpenCV GPU video decoding
 
opencv_cuda.cpp
#include <iostream>
#include "opencv2/opencv_modules.hpp"

#if defined(HAVE_OPENCV_CUDACODEC)

#include <string>
#include <vector>
#include <algorithm>
#include <numeric>

#include <opencv2/core.hpp>
#include <opencv2/core/opengl.hpp>
#include <opencv2/cudacodec.hpp>
#include <opencv2/highgui.hpp>

int main(int argc, const char* argv[])
{
    if (argc != 2)
        return -1;

    const std::string fname(argv[1]);

    // display the video
    //cv::namedWindow("CPU", cv::WINDOW_NORMAL);
    cv::namedWindow("GPU", cv::WINDOW_OPENGL);
    cv::cuda::setGlDevice();

    //cv::Mat frame;
    //cv::VideoCapture reader(fname);

    cv::cuda::GpuMat d_frame;
    cv::Ptr<cv::cudacodec::VideoReader> d_reader = cv::cudacodec::createVideoReader(fname);

    cv::TickMeter tm;
    std::vector<double> cpu_times;
    std::vector<double> gpu_times;

    int gpu_frame_count = 0, cpu_frame_count = 0;

    /*
    for (;;)
    {
        tm.reset(); tm.start();
        if (!reader.read(frame))
            break;
        tm.stop();
        cpu_times.push_back(tm.getTimeMilli());
        cpu_frame_count++;

        cv::imshow("CPU", frame);
        if (cv::waitKey(3) > 0)
            break;
    }
    */

    for (;;)
    {
        tm.reset(); tm.start();
        if (!d_reader->nextFrame(d_frame))
            break;
        tm.stop();
        gpu_times.push_back(tm.getTimeMilli());
        gpu_frame_count++;

        cv::imshow("GPU", d_frame);
        if (cv::waitKey(3) > 0)
            break;
    }

    if (!cpu_times.empty() || !gpu_times.empty())
    {
        std::cout << std::endl << "Results:" << std::endl;

        //std::sort(cpu_times.begin(), cpu_times.end());
        std::sort(gpu_times.begin(), gpu_times.end());

        //double cpu_avg = std::accumulate(cpu_times.begin(), cpu_times.end(), 0.0) / cpu_times.size();
        double gpu_avg = std::accumulate(gpu_times.begin(), gpu_times.end(), 0.0) / gpu_times.size();

        //std::cout << "CPU : Avg : " << cpu_avg << " ms FPS : " << 1000.0 / cpu_avg << " Frames " << cpu_frame_count << std::endl;
        std::cout << "GPU : Avg : " << gpu_avg << " ms FPS : " << 1000.0 / gpu_avg << " Frames " << gpu_frame_count << std::endl;
    }

    return 0;
}

#else

int main()
{
    std::cout << "OpenCV was built without CUDA Video decoding support\n" << std::endl;
    return 0;
}

#endif
 
Makefile
opencv_cuda.o: opencv_cuda.cpp
	g++ -std=c++11 -g -o main.out opencv_cuda.cpp `pkg-config opencv4 --cflags --libs` \
	-I/usr/local/opencv-4.2.0/include/opencv4/opencv2 \
	-I/usr/local/cuda/include \
	-L/usr/local/cuda/lib64 \
	-I/usr/include/eigen3 \
	-L/usr/lib/x86_64-linux-gnu -lcuda -ldl -lnvcuvid

clean:
	rm *.o main.out
 
Build and run:
make
./main.out test.h264
# or
./main.out rtsp://admin:hk888888@10.171.1.233/h265/ch1/main/av_stream
 
 

Possible error:
 
The called functionality is disabled for current build or platform in function 'throw_no_cuda'
Fix:
The cmake configuration came out without NVCUVID. Look at lines 49-77 of /home/zty/opencv-3.4.16/cmake/OpenCVDetectCUDA.cmake:
       find_path(_header_result
        ${_filename}
        PATHS "${CUDA_TOOLKIT_TARGET_DIR}" "${CUDA_TOOLKIT_ROOT_DIR}"
        ENV CUDA_PATH
        ENV CUDA_INC_PATH
        PATH_SUFFIXES include
        NO_DEFAULT_PATH
        )
      if("x${_header_result}" STREQUAL "x_header_result-NOTFOUND")
        set(${_result} 0)
      else()
        set(${_result} 1)
      endif()
      unset(_header_result CACHE)
    endmacro()
    ocv_cuda_SEARCH_NVCUVID_HEADER("nvcuvid.h" HAVE_NVCUVID_HEADER)
    ocv_cuda_SEARCH_NVCUVID_HEADER("dynlink_nvcuvid.h" HAVE_DYNLINK_NVCUVID_HEADER)
    find_cuda_helper_libs(nvcuvid)
    if(WIN32)
      find_cuda_helper_libs(nvcuvenc)
    endif()
    if(CUDA_nvcuvid_LIBRARY AND (${HAVE_NVCUVID_HEADER} OR ${HAVE_DYNLINK_NVCUVID_HEADER}))
      # make sure to have both header and library before enabling
      set(HAVE_NVCUVID 1)
    endif()
    if(CUDA_nvcuvenc_LIBRARY)
      set(HAVE_NVCUVENC 1)
    endif()
  endif()
This code searches the ${CUDA_TOOLKIT_TARGET_DIR} and ${CUDA_TOOLKIT_ROOT_DIR} directories for nvcuvid.h or dynlink_nvcuvid.h,
and NVCUVID is only enabled if one of them is found.
Since nvcuvid.h was copied to /usr/include earlier, change
PATHS "${CUDA_TOOLKIT_TARGET_DIR}" "${CUDA_TOOLKIT_ROOT_DIR}"
to
PATHS "${CUDA_TOOLKIT_TARGET_DIR}" "${CUDA_TOOLKIT_ROOT_DIR}" "/usr/include"
then rerun cmake, make, make install.
 
 
Runtime results (4K video, H.265 encoding)
 
CPU utilization: (screenshot omitted)
GPU usage: (screenshot omitted)
 
 