2020 Systems Integration Practice, Assignment 7, Group 29


I. Installing the OpenCV Library on the Raspberry Pi

1. Expand the filesystem

If you are starting from a fresh Raspbian Stretch installation, first expand the filesystem so that it uses all of the available space on the micro-SD card (log in to the Raspberry Pi with PuTTY to do this):

sudo raspi-config

Select the "Advanced Options" menu item:

Then select "Expand Filesystem":

Click OK and then Finish, and reboot the Raspberry Pi:



After the reboot, the filesystem should have been expanded to include all available space on the micro-SD card. Verify that the disk was expanded by checking the output of:

df -h

The output shows that even though the filesystem has been expanded, about 48% of the disk space is already in use. LibreOffice and the Wolfram Engine can be removed to free up some space on the Pi:

sudo apt-get purge wolfram-engine
sudo apt-get purge libreoffice*
sudo apt-get clean
sudo apt-get autoremove



Checking the disk space again shows that roughly 2 GB have been reclaimed.

2. Install dependencies

# Update and upgrade any existing packages
sudo apt-get update && sudo apt-get upgrade
# Install developer tools, including CMake, which helps configure the OpenCV build process
sudo apt-get install build-essential cmake pkg-config
# Image I/O packages that let us load various image file formats from disk, such as JPEG, PNG and TIFF
sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev
# Video I/O packages. These libraries let us read various video file formats from disk and work with video streams directly
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
# OpenCV ships with a submodule named highgui, used to display images on screen and build basic GUIs. Compiling the highgui module requires the GTK development libraries
sudo apt-get install libgtk2.0-dev libgtk-3-dev
# Many operations in OpenCV (namely matrix operations) can be further optimized by installing a few extra dependencies
sudo apt-get install libatlas-base-dev gfortran
# Install the Python 2.7 and Python 3 headers so OpenCV can be compiled with Python bindings
sudo apt-get install python2.7-dev python3-dev

While installing the dependencies you may run into an error along the lines of "Unable to correct problems, you have held broken packages" (some packages are being kept at their current versions and they break the dependencies between packages). This is a real headache, and if it is not genuinely solved, OpenCV may still compile later but some features will not work and you will have to start all over again, so make sure this problem is fixed before moving on! Unfortunately I ran into it myself; the solution I used is given in the problems section at the end of this post.




3. Download the OpenCV source code

With the dependencies installed, fetch the 4.1.2 archives of OpenCV and the opencv_contrib repository from the official OpenCV repositories (note that the opencv and opencv_contrib versions must match):

cd ~
wget -O opencv.zip https://github.com/Itseez/opencv/archive/4.1.2.zip
wget -O opencv_contrib.zip https://github.com/Itseez/opencv_contrib/archive/4.1.2.zip

After downloading, unzip both archives:

unzip opencv.zip
unzip opencv_contrib.zip


4. Install pip

Before compiling OpenCV on the Raspberry Pi, first install the Python package manager pip:

wget https://bootstrap.pypa.io/get-pip.py
sudo python get-pip.py
sudo python3 get-pip.py



5. Set up a Python virtual environment

A virtual environment is a tool that keeps the dependencies required by different projects in separate, isolated Python environments, which avoids dependency conflicts between projects.

  • Install virtualenv and virtualenvwrapper:
sudo pip install virtualenv virtualenvwrapper
sudo rm -rf ~/.cache/pip


  • Edit ~/.profile and add the following:
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
export VIRTUALENVWRAPPER_VIRTUALENV=/usr/local/bin/virtualenv
source /usr/local/bin/virtualenvwrapper.sh
export VIRTUALENVWRAPPER_ENV_BIN_DIR=bin

  • Apply the configuration:
source ~/.profile

  • Create a Python 3 virtual environment named cv:
mkvirtualenv cv -p python3

Once the virtual environment has been created, all subsequent operations are performed inside it. As the tutorial stresses, always check whether the command prompt is prefixed with (cv); that prefix is how you tell whether you are inside the virtual environment.

After leaving the virtual environment, you can re-enter it with the following commands:

source ~/.profile && workon cv
  • Install numpy (this step and everything after it is done inside the virtual environment; a quick sanity check follows below):
pip install numpy
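As an optional sanity check of my own (not part of the original tutorial), the following interactive session, run inside the (cv) environment, confirms that the interpreter is the virtualenv's own Python and that numpy imports correctly:

python
import sys, numpy
sys.executable        # should point into ~/.virtualenvs/cv/
numpy.__version__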

6. Compile and install OpenCV

cd ~/opencv-4.1.2/
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D INSTALL_PYTHON_EXAMPLES=ON \
    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-4.1.2/modules \
    -D BUILD_EXAMPLES=ON ..


Before starting the build, increase the swap space so that OpenCV can be compiled using all four of the Raspberry Pi's cores without the build hanging because it runs out of memory.

sudo nano /etc/dphys-swapfile  # sudo is required to edit this file; enlarge the swap by changing CONF_SWAPSIZE=100 to CONF_SWAPSIZE=1024

Then restart the swap service and start the build:

sudo /etc/init.d/dphys-swapfile stop
sudo /etc/init.d/dphys-swapfile start
make -j4 

The build is very time-consuming (roughly 1.5 to 2 hours), and some errors may occur during compilation; the problems I hit and their solutions are given at the end of this post.

Tip: make -j4 compiles on four cores at once, whereas plain make compiles on a single core, so make -j4 is noticeably faster. I did not realize this during the experiment and used plain make, which took considerably longer.


Once the build finishes successfully, install OpenCV:

sudo make install
sudo ldconfig

Then check where OpenCV was installed and link it into the virtual environment:

ls -l /usr/local/lib/python3.7/site-packages/
cd ~/.virtualenvs/cv/lib/python3.7/site-packages/
ln -s /usr/local/lib/python3.7/site-packages/cv2 cv2

Verify that OpenCV was installed successfully:

source ~/.profile 
workon cv
python
import cv2
cv2.__version__

If the version string is printed, the OpenCV library has been installed successfully. (After the installation succeeds, remember to open /etc/dphys-swapfile again and change CONF_SWAPSIZE back to the original 100.)
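As a further optional check of my own (not required by the tutorial), here is a tiny script that exercises a basic OpenCV operation without needing any image file:

# optional extra check: run a basic OpenCV operation on a synthetic image
import cv2
import numpy as np

img = np.zeros((100, 100, 3), dtype=np.uint8)   # a black 100x100 BGR image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # convert it to grayscale
print(gray.shape)                               # expected output: (100, 100)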

II. Controlling the Raspberry Pi Camera with OpenCV and Python

1. Install picamera (inside the virtual environment):

source ~/.profile
workon cv
pip install "picamera[array]"

2. Take a photo with the sample program:

The sample code test_photo.py is as follows:

# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
rawCapture = PiRGBArray(camera)

# allow the camera to warmup
time.sleep(3)

# grab an image from the camera
camera.capture(rawCapture, format="bgr")
image = rawCapture.array

# display the image on screen and wait for a keypress
cv2.imshow("Image", image)
cv2.waitKey(0)

Run the code to take a photo with the camera:

python test_photo.py
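If you also want to keep the captured photo on disk, here is a minimal sketch of my own extending test_photo.py (the output file name photo.jpg is arbitrary):

# extension of test_photo.py: save the captured frame to disk
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

camera = PiCamera()
rawCapture = PiRGBArray(camera)

# allow the camera to warm up, then grab one frame as a BGR array
time.sleep(3)
camera.capture(rawCapture, format="bgr")
image = rawCapture.array

# write the frame to a JPEG file (file name is arbitrary)
cv2.imwrite("photo.jpg", image)
print("saved photo.jpg with shape", image.shape)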

3. Stream video with the sample program:

# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

# initialize the camera and the raw capture stream
camera = PiCamera()
camera.resolution = (1024, 720)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(1024, 720))

# allow the camera to warm up
time.sleep(5)

# capture frames continuously from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    image = frame.array
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF
    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)
    # press q to quit
    if key == ord("q"):
        break

Run the code to start capturing video from the camera:

python test_vedio.py
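As an optional extension (a sketch of my own, assuming the XVID codec is available through OpenCV on the Pi), the same capture loop can also record the stream to a file with cv2.VideoWriter:

# extension of the streaming example: record the stream to out.avi while displaying it
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

camera = PiCamera()
camera.resolution = (1024, 720)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(1024, 720))

# the VideoWriter frame size is (width, height) and must match the capture resolution
writer = cv2.VideoWriter("out.avi", cv2.VideoWriter_fourcc(*"XVID"), 20.0, (1024, 720))
time.sleep(5)

for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    image = frame.array
    writer.write(image)                  # append the current frame to out.avi
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF
    rawCapture.truncate(0)               # clear the stream for the next frame
    if key == ord("q"):                  # press q to stop recording
        break

writer.release()
cv2.destroyAllWindows()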

III. Face Recognition with the Raspberry Pi Camera

1. Install the dlib and face_recognition modules first:

source ~/.profile 
workon cv 
pip install dlib
pip install face_recognition

During the experiment the dlib module installed without trouble, but installing face_recognition kept failing with timeout errors (most likely a network problem; it refused to download no matter how many times I tried, which was quite frustrating):

I eventually decided to download the required files on my own machine first and then transfer them to the Raspberry Pi with FileZilla for installation, as shown below:

Once the transfer completed, change into the directory containing the files and install them with Python (following a reference blog):

python3 -m pip install face_recognition_models-0.3.0-py2.py3-none-any.whl
python3 -m pip install face_recognition-1.3.0-py2.py3-none-any.whl

Finally, test whether the installation succeeded:

python3
import face_recognition


The face_recognition module has clearly been installed successfully!
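As a quick optional check of my own that the module really works (the two image file names are placeholders for photos already on the Pi), two local photos can be compared directly:

# optional face_recognition check: compare two local photos (file names are placeholders)
import face_recognition

known_image = face_recognition.load_image_file("person_a.jpg")
unknown_image = face_recognition.load_image_file("person_b.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# True if the two photos show the same person (within the default tolerance)
print(face_recognition.compare_faces([known_encoding], unknown_encoding))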

2. Switch to the directory containing the images and the Python code:

1. First transfer the code and images to the Raspberry Pi:

2. Run the sample code:

  • The sample code facerec_on_raspberry_pi.py is as follows
# This is a demo of running face recognition on a Raspberry Pi.
# This program will print out the names of anyone it recognizes to the console.

# To run this, you need a Raspberry Pi 2 (or greater) with face_recognition and
# the picamera[array] module installed.
# You can follow this installation instructions to get your RPi set up:
# https://gist.github.com/ageitgey/1ac8dbe8572f3f533df6269dab35df65

import face_recognition
import picamera
import numpy as np

# Get a reference to the Raspberry Pi camera.
# If this fails, make sure you have a camera connected to the RPi and that you
# enabled your camera in raspi-config and rebooted first.
camera = picamera.PiCamera()
camera.resolution = (320, 240)
output = np.empty((240, 320, 3), dtype=np.uint8)

# Load a sample picture and learn how to recognize it.
print("Loading known face image(s)")
obama_image = face_recognition.load_image_file("obama_test.jpg")
obama_face_encoding = face_recognition.face_encodings(obama_image)[0]

# Initialize some variables
face_locations = []
face_encodings = []

while True:
    print("Capturing image.")
    # Grab a single frame of video from the RPi camera as a numpy array
    camera.capture(output, format="rgb")

    # Find all the faces and face encodings in the current frame of video
    face_locations = face_recognition.face_locations(output)
    print("Found {} faces in image.".format(len(face_locations)))
    face_encodings = face_recognition.face_encodings(output, face_locations)

    # Loop over each face found in the frame to see if it's someone we know.
    for face_encoding in face_encodings:
        # See if the face is a match for the known face(s)
        match = face_recognition.compare_faces([obama_face_encoding], face_encoding)
        name = "<Unknown Person>"

        if match[0]:
            name = "Barack Hussein Obama"

        print("I see someone named {}!".format(name))

The result of running it:

After pointing the camera at the portrait, the program successfully recognized Barack Hussein Obama!

  • The sample code facerec_from_webcam_faster.py is as follows
import face_recognition
import cv2
import numpy as np

# This is a demo of running face recognition on live video from your webcam. It's a little more complicated than the
# other example, but it includes some basic performance tweaks to make things run a lot faster:
#   1. Process each video frame at 1/4 resolution (though still display it at full resolution)
#   2. Only detect faces in every other frame of video.

# PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read from your webcam.
# OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this
# specific demo. If you have trouble installing it, try any of the other demos that don't require it instead.

# Get a reference to webcam #0 (the default one)
video_capture = cv2.VideoCapture(0)

# Load a sample picture and learn how to recognize it.
DonaldTrump_image = face_recognition.load_image_file("DonaldTrump.jpg")
DonaldTrump_face_encoding = face_recognition.face_encodings(DonaldTrump_image)[0]

# Load a second sample picture and learn how to recognize it.
AbeShinzou_image = face_recognition.load_image_file("AbeShinzou.jpg")
AbeShinzou_face_encoding = face_recognition.face_encodings(AbeShinzou_image)[0]

# Create arrays of known face encodings and their names
known_face_encodings = [
    DonaldTrump_face_encoding,
    AbeShinzou_face_encoding
]

known_face_names = [
    "DonaldTrump",
    "AbeShinzou"
]



# Initialize some variables
face_locations = []
face_encodings = []
face_names = []

process_this_frame = True

while True:

    # Grab a single frame of video
    ret, frame = video_capture.read()

    # Resize frame of video to 1/4 size for faster face recognition processing
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)

    # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
    rgb_small_frame = small_frame[:, :, ::-1]

    # Only process every other frame of video to save time
    if process_this_frame:

        # Find all the faces and face encodings in the current frame of video
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)

        face_names = []
        for face_encoding in face_encodings:
            # See if the face is a match for the known face(s)
            matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
            name = "Unknown"

            # # If a match was found in known_face_encodings, just use the first one.
            # if True in matches:
            #     first_match_index = matches.index(True)
            #     name = known_face_names[first_match_index]
            # Or instead, use the known face with the smallest distance to the new face
            face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
            best_match_index = np.argmin(face_distances)
            if matches[best_match_index]:
                name = known_face_names[best_match_index]

            face_names.append(name)

    process_this_frame = not process_this_frame

    # Display the results
    for (top, right, bottom, left), name in zip(face_locations, face_names):
        # Scale back up face locations since the frame we detected in was scaled to 1/4 size
        top *= 4
        right *= 4
        bottom *= 4
        left *= 4

        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)

        # Draw a label with a name below the face
        cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)

    # Display the resulting image
    cv2.imshow('Video', frame)

    # Hit 'q' on the keyboard to quit!
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()

The result of running it:



The face recognition test succeeded: the two people, DonaldTrump and AbeShinzou, are distinguished correctly.

IV. Advanced Task: Combining with Microservices

1. Install Docker:

  • Download the installation script:
curl -fsSL https://get.docker.com -o get-docker.sh

  • Run the installation script (using the Aliyun mirror):
sh get-docker.sh --mirror Aliyun

  • Check the Docker version to verify the installation:
sudo docker version

  • Add the user to the docker group, then log out and back in so the group change takes effect:
sudo usermod -aG docker pi

2. Configure a Docker registry mirror (accelerator):

  • Write the mirror address into the daemon configuration (an example is given below):
sudo nano /etc/docker/daemon.json
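A minimal example of what /etc/docker/daemon.json can contain; the mirror URL below is a placeholder, so substitute the accelerator address for your own account:

{
    "registry-mirrors": ["https://<your-accelerator-id>.mirror.aliyuncs.com"]
}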

  • Restart Docker so the change takes effect (the commands are sketched below):
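The restart commands themselves are not shown in the screenshots; on Raspbian with systemd they would typically be:

sudo systemctl daemon-reload
sudo systemctl restart docker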

3. Build a customized OpenCV image:

  • Pull the base image:
sudo docker pull sixsq/opencv-python

  • Create and run a container:
sudo docker run -it sixsq/opencv-python /bin/bash

  • Inside the container, install "picamera[array]", dlib and face_recognition with pip3:
pip3 install "picamera[array]" 
pip3 install dlib
pip3 install face_recognition


While installing the face_recognition module, the same download-timeout problem appeared. It can be worked around by mounting the files downloaded on the local machine into a container directory and installing from there (see the sketch below):
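A minimal sketch of that workaround, assuming the wheel files were copied to a host directory such as /home/pi/wheels (the path is hypothetical):

# run the container with the host directory mounted at /wheels (path is an assumption)
sudo docker run -it -v /home/pi/wheels:/wheels sixsq/opencv-python /bin/bash
# then, inside the container:
pip3 install /wheels/face_recognition_models-0.3.0-py2.py3-none-any.whl
pip3 install /wheels/face_recognition-1.3.0-py2.py3-none-any.whl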


  • Commit the image:
sudo docker commit a91a278b98d9 myopencv1

4. Customize the image:

Dockerfile

FROM myopencv1
MAINTAINER GROUP29
RUN mkdir /fzu
WORKDIR /fzu
COPY fzu .

With the files in place, build the image:

sudo docker build -t opencv2 .

Check the resulting image (for example with the command shown below):
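One way to confirm that the opencv2 image now exists locally:

sudo docker images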

5. Run a container and execute facerec_on_raspberry_pi.py:

sudo docker run -it --device=/dev/vchiq --device=/dev/video0 --name myopencv29 opencv2
python3 facerec_on_raspberry_pi.py


As before, the Obama image was recognized successfully!

Optional: run the sample code facerec_from_webcam_faster.py from step (3) inside the OpenCV Docker container
  • First install Xming on the Windows machine.

  • Enable X11 forwarding for the Raspberry Pi session in PuTTY:

Then check whether X11 forwarding is enabled in the Raspberry Pi's SSH configuration:

cat /etc/ssh/sshd_config

  • Check the value of the DISPLAY environment variable (it is used by the DISPLAY setting in run.sh below):
printenv

  • Write the script run.sh:
#sudo apt-get install x11-xserver-utils
xhost +
docker run -it \
        --net=host \
        -v $HOME/.Xauthority:/root/.Xauthority \
        -e DISPLAY=:10.0  \
        -e QT_X11_NO_MITSHM=1 \
        --device=/dev/vchiq \
        --device=/dev/video0 \
        --name facerecgui \
        opencv2 \
	python3 facerec_from_webcam_faster.py
  • Open a terminal and run run.sh:
sh run.sh

The result is the same as in experiment III: DonaldTrump and AbeShinzou are recognized correctly.

V. As a group, publish a blog post recording the problems encountered and their solutions, and provide the member list, division of work, individual contributions, and screenshots of online collaboration

1. Group members and division of work (Group 29):

Student ID   Name     Task
031702430    陈友昆    Hands-on implementation, troubleshooting, and writing the blog post
031702427    方瑞雄    Researched related material and provided code
131700114    张辉      Researched related material and provided code

2. Problems encountered and solutions:

Problem 1: While installing the dependencies, apt reported an error along the lines of "unable to correct problems: you have requested that some packages be kept at their current versions, and they break the dependencies between packages":

Solution: 1. Install aptitude first and then install the dependencies with aptitude install. When aptitude install runs, it prints a long series of messages; the first solution it proposes is usually to keep the conflicting libraries unchanged and it asks you to choose y/n/q, so answer "n" here. It then keeps searching for other solutions (sometimes the search times out; answer "y" to let it keep looking) until it proposes downgrading those libraries, at which point answer "y" and wait for it to finish. Afterwards, run "sudo aptitude install libgtk2.0-dev" as a test; if it reports that the corresponding library is already installed, the fix worked. (Based on a reference blog.)

2. If that does not work, re-flash a clean system image and run the installation again.

Problem 2: The following error occurred while compiling OpenCV:

Solution: following a reference blog on the missing boostdesc_bgm.i file,

copy the files from the network drive linked there into the opencv_contrib/modules/xfeatures2d/src/ directory.

Problem 3: Another error occurred while compiling OpenCV:

Solution: following a reference blog about compiling OpenCV 4.1.0 with the contrib modules on Ubuntu 16.04 (the paths below use 4.1.0 as in that blog; adjust the version number to your own source tree, here 4.1.2).

Copy the following files from the opencv-4.1.0/modules/features2d/test/ directory

test_descriptors_regression.impl.hpp
test_detectors_regression.impl.hpp
test_detectors_invariance.impl.hpp
test_descriptors_invariance.impl.hpp
test_invariance_utils.hpp

into the opencv_contrib-4.1.0/modules/xfeatures2d/test/ directory.

At the same time, in opencv_contrib-4.1.0/modules/xfeatures2d/test/test_features2d.cpp, change

#include "features2d/test/test_detectors_regression.impl.hpp"
#include "features2d/test/test_descriptors_regression.impl.hpp"

to:

#include "test_detectors_regression.impl.hpp"
#include "test_descriptors_regression.impl.hpp"

and in opencv_contrib-4.1.0/modules/xfeatures2d/test/test_rotation_and_scale_invariance.cpp, change

#include "features2d/test/test_detectors_invariance.impl.hpp" 
#include "features2d/test/test_descriptors_invariance.impl.hpp"

to:

#include "test_detectors_invariance.impl.hpp"
#include "test_descriptors_invariance.impl.hpp"

Problem 4: In part II (controlling the Raspberry Pi camera with OpenCV and Python), running the sample code produced an error:

Solution: because the earlier dependency problem had not been solved properly, libgtk2.0-dev and pkg-config were missing. There was no way around it: I had to reinstall the dependencies and recompile OpenCV, which was painful.

Problem 5: Downloading the face_recognition module with pip kept timing out:

Solution: download the files on the local machine first, then transfer them to the Raspberry Pi with FileZilla and install them there; the detailed steps are given in the experiment section above.
