Practical Assignment 7 (Group 17)

(1) Installing the OpenCV library on the Raspberry Pi

(Reference material provided with the assignment)

1. Expand the filesystem

$ sudo raspi-config
In the raspi-config menu, select "Advanced Options", then choose the option to expand the filesystem.


(Screenshot in the original post: expanding the filesystem on a Raspberry Pi 3.)
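
After rebooting, a quick sanity check (not part of the original notes) confirms that the root partition now spans the whole SD card:

$ df -h /

The size reported for / should be close to the capacity of the card.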

2. Install dependencies

Change the apt sources first

Update the package lists and upgrade the installed packages (after switching to the new source):
sudo apt-get update && sudo apt-get upgrade

Developer tools such as CMake:
sudo apt-get install build-essential cmake pkg-config

Image I/O packages:
sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev

Video I/O packages:
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
GTK, needed by the OpenCV submodule that displays images:
sudo apt-get install libgtk2.0-dev libgtk-3-dev

Performance optimization packages:
sudo apt-get install libatlas-base-dev gfortran

Install the Python 2.7 and Python 3 development headers:
sudo apt-get install python2.7-dev python3-dev

Problems encountered:
apt reported "Unable to correct problems, you have held broken packages", i.e., packages we had asked to keep at their current versions were breaking the dependency chain.
Solution: after trying all kinds of fixes from the internet without success, we re-flashed the system.
Note: after re-flashing, switch the apt sources first and install aptitude (which resolves dependencies automatically) with sudo apt-get install aptitude, to guard against the same error.
We then started installing the dependencies and, unfortunately, hit exactly the same error again ("Unable to correct problems, you have held broken packages").
Fix: replace apt-get with aptitude in the commands and install the packages one at a time.
At this point a new problem appeared: when running sudo apt-get install libgtk2.0-dev libgtk-3-dev, apt reported unmet dependencies, and aptitude could not resolve them either.
Fix: after a long investigation and countless suggestions from the internet that did not work, we finally discovered that our Tsinghua mirror entry had a typo in it. We corrected the source and switched again.
It finally worked!
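
For reference, a Raspbian entry for the Tsinghua (TUNA) mirror looks roughly like the line below; the suite name (stretch here) is an assumption and must match your Raspbian release, so check it against the mirror's help page before using it:

deb http://mirrors.tuna.tsinghua.edu.cn/raspbian/raspbian/ stretch main contrib non-free rpi

After editing /etc/apt/sources.list, run sudo apt-get update again.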

3. Download the OpenCV source code

$ cd ~
$ wget -O opencv.zip https://github.com/Itseez/opencv/archive/3.3.0.zip
$ unzip opencv.zip


We originally downloaded the latest version, but the archive would not unzip, so we looked for another source and downloaded version 3.3.0, which extracted successfully.
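
The cmake command in step 5 also expects the contrib modules under ~/opencv_contrib-3.3.0. If you follow the same route, they can be fetched in the same way (the URL below assumes GitHub's usual release-archive naming for the opencv_contrib repository):

$ wget -O opencv_contrib.zip https://github.com/Itseez/opencv_contrib/archive/3.3.0.zip
$ unzip opencv_contrib.zip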

4. Python 2.7 or Python 3

  • Install OpenCV 3 + Python on the Raspberry Pi (first get pip):
    $ wget https://bootstrap.pypa.io/get-pip.py
    $ sudo python get-pip.py
    $ sudo python3 get-pip.py

  • Install the virtual environment tools
    sudo pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple virtualenv virtualenvwrapper

  • Configure ~/.profile
    Open the config file:
    sudo nano ~/.profile
    Add the following configuration:
    export WORKON_HOME=$HOME/.virtualenvs
    export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
    export VIRTUALENVWRAPPER_VIRTUALENV=/usr/local/bin/virtualenv
    source /usr/local/bin/virtualenvwrapper.sh
    export VIRTUALENVWRAPPER_ENV_BIN_DIR=bin
  • Create the virtual environment
    mkvirtualenv cv -p python3
  • Enter the virtual environment (reload ~/.profile each time before entering)
    workon cv

  • Install numpy
    pip install -i https://pypi.tuna.tsinghua.edu.cn/simple numpy

5. Compile OpenCV

cd ~/opencv-3.3.0/
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.3.0/modules \
-D BUILD_EXAMPLES=ON ..

  • Open the swap config file and set CONF_SWAPSIZE=1024 to enlarge the swap space (a quick check of the result is shown after this list)
    sudo nano /etc/dphys-swapfile

  • Restart the swapfile service so the change takes effect
    sudo /etc/init.d/dphys-swapfile stop
    sudo /etc/init.d/dphys-swapfile start

  • Compile
    make -j4
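
As a quick sanity check (not in the original notes), the enlarged swap can be confirmed before kicking off the build; the swap total should now read roughly 1024 MB:

    free -m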

1. Problem during compilation: the build hung

Solution: we followed a very helpful blog post and eventually resolved it.
Steps:
Open the cap_ffmpeg_impl.hpp file:
nano ~/opencv-3.3.0/modules/videoio/src/cap_ffmpeg_impl.hpp
Add the following lines at the top:
#define AV_CODEC_FLAG_GLOBAL_HEADER (1 << 22)
#define CODEC_FLAG_GLOBAL_HEADER AV_CODEC_FLAG_GLOBAL_HEADER
#define AVFMT_RAWPICTURE 0x0020
After recompiling, the build succeeded!

If you run into problems during compilation, refer to the blog post linked in the original write-up.

  • Install OpenCV on the Pi

sudo make install
sudo ldconfig
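
To make the freshly built bindings importable inside the cv virtual environment, the usual final step is to symlink the installed cv2 module into the environment's site-packages and verify the import. The exact paths and file name below depend on the Python version on the Pi and are assumptions; check what make install actually placed under /usr/local/lib first:

ls /usr/local/lib/python3.5/site-packages/          # locate the cv2*.so produced by make install
cd ~/.virtualenvs/cv/lib/python3.5/site-packages/
ln -s /usr/local/lib/python3.5/site-packages/cv2.cpython-35m-arm-linux-gnueabihf.so cv2.so
workon cv
python -c "import cv2; print(cv2.__version__)"      # should print 3.3.0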


(2) Controlling the Raspberry Pi camera with OpenCV and Python

  • Install the picamera module
    Activate the virtual environment:
    $ source ~/.profile
    $ workon cv

Install picamera:
$ pip install "picamera[array]"

  • Import OpenCV in Python code to control the camera
    test_image.py (still-image capture only):
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2
camera = PiCamera()
rawCapture = PiRGBArray(camera)
time.sleep(5) # the camera needs a longer warm-up/exposure time
camera.capture(rawCapture, format="bgr")
image = rawCapture.array
cv2.imshow("Image", image)
cv2.waitKey(0)

Run the script:
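The run command itself is not spelled out in the original post; with the cv environment active it is simply the following (cv2.imshow needs a desktop session or X forwarding to open the preview window):

python test_image.py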

video.py (video capture):

from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))
time.sleep(0.1)
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    image = frame.array
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF
    rawCapture.truncate(0)
    if key == ord("q"):
        break
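
Likewise, video.py is run inside the cv environment; the preview window updates continuously and pressing q quits (run command added for completeness):

python video.py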

(3) Face recognition with the Raspberry Pi camera

  • Install the required modules (dlib, face_recognition)
    pip install dlib -vvv
    face_recognition was installed following an online installation guide (linked in the original post).

  • Prepare the images (used for comparison against the faces to be recognized) and the code files
    facerec_on_raspberry_pi.py code:

# This is a demo of running face recognition on a Raspberry Pi.
# This program will print out the names of anyone it recognizes to the console.

# To run this, you need a Raspberry Pi 2 (or greater) with face_recognition and
# the picamera[array] module installed.
# You can follow this installation instructions to get your RPi set up:
# https://gist.github.com/ageitgey/1ac8dbe8572f3f533df6269dab35df65

import face_recognition
import picamera
import numpy as np

# Get a reference to the Raspberry Pi camera.
# If this fails, make sure you have a camera connected to the RPi and that you
# enabled your camera in raspi-config and rebooted first.
camera = picamera.PiCamera()
camera.resolution = (320, 240)
output = np.empty((240, 320, 3), dtype=np.uint8)

# Load a sample picture and learn how to recognize it.
print("Loading known face image(s)")
obama_image = face_recognition.load_image_file("obama_small.jpg")
obama_face_encoding = face_recognition.face_encodings(obama_image)[0]

# Initialize some variables
face_locations = []
face_encodings = []

while True:
    print("Capturing image.")
    # Grab a single frame of video from the RPi camera as a numpy array
    camera.capture(output, format="rgb")

    # Find all the faces and face encodings in the current frame of video
    face_locations = face_recognition.face_locations(output)
    print("Found {} faces in image.".format(len(face_locations)))
    face_encodings = face_recognition.face_encodings(output, face_locations)

    # Loop over each face found in the frame to see if it's someone we know.
    for face_encoding in face_encodings:
        # See if the face is a match for the known face(s)
        match = face_recognition.compare_faces([obama_face_encoding], face_encoding)
        name = "<Unknown Person>"

        if match[0]:
            name = "Barack Obama"

        print("I see someone named {}!".format(name))

facerec_from_webcam_faster.py code:

import face_recognition
import cv2
import numpy as np

# This is a demo of running face recognition on live video from your webcam. It's a little more complicated than the
# other example, but it includes some basic performance tweaks to make things run a lot faster:
#   1. Process each video frame at 1/4 resolution (though still display it at full resolution)
#   2. Only detect faces in every other frame of video.

# PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read from your webcam.
# OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this
# specific demo. If you have trouble installing it, try any of the other demos that don't require it instead.

# Get a reference to webcam #0 (the default one)
video_capture = cv2.VideoCapture(0)

# Load a sample picture and learn how to recognize it.
obama_image = face_recognition.load_image_file("obama.jpg")
obama_face_encoding = face_recognition.face_encodings(obama_image)[0]

# Load a second sample picture and learn how to recognize it.
biden_image = face_recognition.load_image_file("biden.jpg")
biden_face_encoding = face_recognition.face_encodings(biden_image)[0]

# Create arrays of known face encodings and their names
known_face_encodings = [
   obama_face_encoding,
   biden_face_encoding
]
known_face_names = [
   "Barack Obama",
   "Joe Biden"
]

# Initialize some variables
face_locations = []
face_encodings = []
face_names = []
process_this_frame = True

while True:
   # Grab a single frame of video
   ret, frame = video_capture.read()

   # Resize frame of video to 1/4 size for faster face recognition processing
   small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)

   # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
   rgb_small_frame = small_frame[:, :, ::-1]

   # Only process every other frame of video to save time
   if process_this_frame:
       # Find all the faces and face encodings in the current frame of video
       face_locations = face_recognition.face_locations(rgb_small_frame)
       face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)

       face_names = []
       for face_encoding in face_encodings:
           # See if the face is a match for the known face(s)
           matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
           name = "Unknown"

           # # If a match was found in known_face_encodings, just use the first one.
           # if True in matches:
           #     first_match_index = matches.index(True)
           #     name = known_face_names[first_match_index]

           # Or instead, use the known face with the smallest distance to the new face
           face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
           best_match_index = np.argmin(face_distances)
           if matches[best_match_index]:
               name = known_face_names[best_match_index]

           face_names.append(name)

   process_this_frame = not process_this_frame


   # Display the results
   for (top, right, bottom, left), name in zip(face_locations, face_names):
       # Scale back up face locations since the frame we detected in was scaled to 1/4 size
       top *= 4
       right *= 4
       bottom *= 4
       left *= 4

       # Draw a box around the face
       cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)

       # Draw a label with a name below the face
       cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
       font = cv2.FONT_HERSHEY_DUPLEX
       cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)

   # Display the resulting image
   cv2.imshow('Video', frame)

   # Hit 'q' on the keyboard to quit!
   if cv2.waitKey(1) & 0xFF == ord('q'):
       break

# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()

  • Run the scripts
    python3 facerec_on_raspberry_pi.py
    python3 facerec_from_webcam_faster.py


(4) Advanced task: combining with microservices

  • Install Docker
    sudo apt-get install curl    (install curl first)
    sudo curl -sSL https://get.docker.com | sh
  • Run the install script (using the Aliyun mirror)
    sh get-docker.sh --mirror Aliyun
  • Check the Docker version to verify the installation
    docker --version
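
Optionally (not in the original notes), an end-to-end check that the Docker daemon can pull and run containers:

    sudo docker run hello-world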

  • Pull the image
    sudo docker pull sixsq/opencv-python
    The download was painfully slow, so we configured a registry mirror.
    (The original post showed the mirror configuration in a screenshot; with it, the download sped up dramatically.)
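
The screenshot with the mirror configuration is not reproduced here. A typical way to configure a registry mirror is to add a registry-mirrors entry to /etc/docker/daemon.json and restart the daemon; the mirror URL below is only a placeholder, substitute the accelerator address actually used:

sudo tee /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://<your-mirror-host>"]
}
EOF
sudo systemctl restart docker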

  • Create and run a container
    sudo docker run -it sixsq/opencv-python /bin/bash

  • Enter the container and install the required dependencies
    docker run -it [imageid] /bin/bash
    pip install "picamera[array]" dlib face_recognition

  • Commit the image
    docker commit [containerid] my-opencv

  • Write a Dockerfile to build an image. Dockerfile contents:
FROM face_opencv
RUN mkdir /test
WORKDIR /test
COPY test .

  • Build the image
    docker build -t my-opencv-test .

  • Run the container (passing through the camera devices; use the name of the image built above)
    docker run -it --device=/dev/vchiq --device=/dev/video0 my-opencv-test

  • Run the code (inside the container)
    python3 facerec_on_raspberry_pi.py


Optional: run the example code facerec_from_webcam_faster.py from step (3) inside the OpenCV Docker container

  • Install Xming on Windows
  • Start PuTTY (with X11 forwarding enabled)
  • Check the value of the DISPLAY environment variable with printenv

  • Write the launch script run.sh
xhost +    # allow connections from any host
# DISPLAY below must be set to the value found with printenv above
docker run -it \
        --rm \
        -v ${PWD}/workdir:/myapp \
        --net=host \
        -v $HOME/.Xauthority:/root/.Xauthority \
        -e DISPLAY=:10.0 \
        -e QT_X11_NO_MITSHM=1 \
        --device=/dev/vchiq \
        --device=/dev/video0 \
        --name my-running-py \
        my-opencv-test \
        recognition.py
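
A minimal way to launch it, assuming run.sh sits in the current directory on the Pi:

sh run.sh
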
  • Result (shown as a screenshot in the original post)


(5) As a group, publish a blog post recording the problems encountered and their solutions, with the member list, division of work, individual contributions, and pictures of the online collaboration

1. The problems encountered and their solutions are all described above.
2. Group members and division of work (Group 17)

吕瑞峰 (leader), 031702533: hands-on operation and screenshots
古力亚尔, 031702511: writing the blog post and researching solutions
严喜, 031702514: finding installation packages and researching solutions

3. Online collaboration pictures
The collaboration was carried out via screen sharing, shared screenshots, and group chat.
