A Mobile Identity Verification Terminal and System for Exam Proctoring

A portable examinee identity verification device and system based on the Raspberry Pi

I. Installing the Raspbian Operating System

1. Download the system image

Image download link:

https://www.raspberrypi.org/downloads/raspbian/


Note: the official Raspbian comes in three editions; here we choose Raspbian Buster with desktop and recommended software.

2. Flash the system image

To install Raspbian on the Raspberry Pi, the image has to be flashed onto an SD card. We use balenaEtcher for this.

balenaEtcher official website:

https://www.balena.io/etcher/

After installing balenaEtcher, select the system image, insert the SD card, and start flashing.


II. Hardware Connections and Assembly

1. Hardware specifications

| Component | Model | Specifications |
| --- | --- | --- |
| Mainboard | Raspberry Pi 4 Model B | Broadcom BCM2711 quad-core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5 GHz |
| Camera | Razer USB camera | Autofocus |
| Display | Waveshare 3.5inch RPi LCD (B) | 320×480 resolution, resistive touch |
| Battery | Raspberry Pi Club UPSPack V2 | 3800 mAh |

III. Configuring the Raspberry Pi System

To make the system easier to debug, we enable SSH and VNC.

Note: the root partition on the flashed SD card uses EXT4, which Windows cannot read natively, so we use DiskGenius to work with the card.


DiskGenius official website:

http://www.diskgenius.cn/

1. Enable SSH

Enabling SSH is straightforward: place an empty file named ssh in the /boot partition and the system will enable SSH automatically on the next boot. At this point, however, root cannot log in over SSH, so the configuration must be changed. Log in over SSH as the pi user and run:

sudo vim /etc/ssh/sshd_config

Find the line PermitRootLogin without-password and change it to PermitRootLogin yes; root logins over SSH are then allowed.

To simplify the steps that follow, we configure the system as the root user. After logging in successfully, run:

sudo passwd root

Set a root password at the prompt; SSH can then be used freely as root.

2. Enable VNC

In an SSH terminal, run:

sudo raspi-config

to open the Raspberry Pi configuration tool.

Select [5 Interfacing Options] -> [P3 VNC] to enable VNC.

Note: reboot after changing this setting.
On Windows, Linux, and other platforms you can connect with the official client, VNC Viewer.

Download: https://www.realvnc.com/en/connect/download/vnc/

3. Configure the Raspberry Pi camera

In an SSH terminal, run:

sudo raspi-config

to open the Raspberry Pi configuration tool.

Select [5 Interfacing Options] -> [P1 Camera] to enable the camera.

Note: reboot after changing this setting. If you use a CSI camera, append the following line to the end of /etc/modules-load.d/rpi-camera.conf:

bcm2835-v4l2
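
Before wiring the camera into the GUI, it is worth checking that it can be read from Python. The snippet below is a minimal sketch that assumes OpenCV (built in Part IV) is already available; device index 0 and the output file name camera_test.jpg are placeholder choices.

# quick camera check: grab one frame and save it for inspection
import cv2

cap = cv2.VideoCapture(0)        # open the first video device (USB, or CSI via bcm2835-v4l2)
ret, frame = cap.read()          # grab a single frame
if ret:
    cv2.imwrite("camera_test.jpg", frame)          # save it for inspection over VNC or SCP
    print("captured a frame of size", frame.shape)
else:
    print("failed to read from the camera")
cap.release()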

4. Install the display driver

In an SSH terminal, run:

git clone https://github.com/waveshare/LCD-show.git
cd LCD-show/
sudo ./LCD35B-show

After a reboot the display is ready to use (for convenience, the display orientation can be adjusted; see "6. Set the display orientation" below).

Note 1: running apt-get upgrade can leave the LCD unable to work. If that happens, edit the config.txt file on the SD card and delete the line dtoverlay=ads7846.

Note 2: on Raspbian Lite, install the driver with sudo ./LCD35B-show lite instead.

5. Start the program automatically at boot

Desktop-entry keys and their meanings:

| Key | Meaning |
| --- | --- |
| [Desktop Entry] | file header |
| Encoding | character encoding |
| Name | application name |
| Name[xx] | application name in a given language |
| GenericName | generic description |
| Comment | comment |
| Exec | command to execute |
| Icon | icon path |
| Terminal | whether to run in a terminal |
| Type | launcher type |
| Categories | application category |

Create a folder named autostart under /home/pi/.config/:

mkdir .config/autostart

Create boot.desktop in the autostart directory:

nano .config/autostart/boot.desktop

with the following contents:

[Desktop Entry]
Categories=Application;Programme;
Comment=Demo
Encoding=UTF-8
Exec=python /home/pi/Desktop/boot.py
Name=Demo Desktop
Type=Application
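
The file boot.py referenced by Exec= is the project's entry script, which this article does not show. As a hypothetical placeholder, it could simply launch the verification GUI from Part V:

# boot.py -- hypothetical placeholder for the script referenced by Exec= above
import subprocess

# launch the verification GUI; the path assumes main.py sits on the pi user's desktop
subprocess.call(["python", "/home/pi/Desktop/main.py"])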

6. Set the display orientation

cd LCD-show/
./LCD35B-show 180

Note: as above, running apt-get upgrade can leave the LCD unable to work; if that happens, edit the config.txt file on the SD card and delete the line dtoverlay=ads7846.

IV. Building and Installing the Required Components

Before compiling anything, increase the swap size; otherwise the build can stall from running out of memory.

sudo vi /etc/dphys-swapfile

Change the value of CONF_SWAPSIZE to 2048.

Out of the box, a Debian-based system such as Raspbian or Ubuntu may not provide a complete C/C++ build environment, and installing gcc and g++ separately is cumbersome. Fortunately, there is the build-essential package: installing it pulls in everything needed to compile C/C++ programs, so this single package is all that is required.

1. curl

sudo apt-get install curl

cURL is a command-line file transfer tool that uses URL syntax, first released in 1997. It supports both uploads and downloads, making it a general-purpose transfer tool, although by convention it is usually thought of as a download tool. cURL also includes libcurl, a library for use in programs.

2. OpenCV

OpenCV (Open Source Computer Vision Library) is a cross-platform computer vision library. It was started by Intel and is released under the BSD license, so it can be used freely in both commercial and research settings. OpenCV can be used to build real-time image processing, computer vision, and pattern recognition applications, and can optionally be accelerated with Intel's IPP.

sudo apt-get install build-essential
sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
cd /root/
git clone https://github.com/opencv/opencv.git
git clone https://github.com/opencv/opencv_contrib.git
cd /root/opencv/
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local -DOPENCV_EXTRA_MODULES_PATH=/root/opencv_contrib/modules /root/opencv/
make -j4
sudo make install
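
Once make install finishes, a quick check from Python confirms that the cv2 bindings were built and installed correctly (the exact version printed depends on the commit that was cloned):

# verify the OpenCV Python bindings built above
import cv2

print(cv2.__version__)            # version of the freshly built library
print(cv2.getBuildInformation())  # build configuration, useful for checking enabled modules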

3. dlib

pip install dlib

4. tkinter

pip install Pillow
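
tkinter itself ships with the Python that comes with Raspbian; Pillow is installed for its ImageTk module, which turns ordinary images (including frames grabbed with OpenCV) into objects a tkinter Label can display. The sketch below shows this pattern, which main.py in Part V relies on; the image file name is a placeholder.

# minimal sketch: show an image in a tkinter Label via Pillow's ImageTk
from tkinter import Tk, Label
from PIL import Image, ImageTk

window = Tk()
img = Image.open("camera_test.jpg")      # placeholder file name, any image will do
imgtk = ImageTk.PhotoImage(image=img)    # wrap it in a tkinter-compatible photo image
label = Label(window, image=imgtk)
label.imgtk = imgtk                      # keep a reference so it is not garbage-collected
label.pack()
window.mainloop()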

5. face_recognition

pip install face_recognition
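
A quick way to confirm that dlib and face_recognition were installed correctly is to load one photo and check that a face encoding is produced. The file name below is a placeholder; any photo containing a face will do.

# smoke test for the face_recognition installation
import face_recognition

image = face_recognition.load_image_file("test_face.jpg")   # placeholder file name
encodings = face_recognition.face_encodings(image)          # one 128-d vector per detected face
print("faces found:", len(encodings))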

V. Building the Code

1. compare_core.py

import requests
import os
from json import JSONDecoder
import urllib3, base64, json
import numpy
import face_recognition


# POST data to a URL and return the decoded JSON response
def submit(url, submit_data):
    response = requests.post(url, data=submit_data)
    req_con = response.content.decode('utf-8')
    req_dict = JSONDecoder().decode(req_con)
    return req_dict


# Obtain a Baidu AI access token from the API key (client_id) and secret (client_secret)
def get_access_token(client_id, client_secret):
    url = "https://aip.baidubce.com/oauth/2.0/token"
    data = {"grant_type": "client_credentials", "client_id": client_id, "client_secret": client_secret}
    response = submit(url, data)

    access_token = response['access_token']
    return access_token


# Compare an ID-document photo (CERT) with a live photo (LIVE) using the Baidu face match API;
# returns the similarity score (0-100), or 0 if the API reports an error
def face_compare(access_token, locate1, locate2):
    url = "https://aip.baidubce.com/rest/2.0/face/v3/match" + "?access_token=" + access_token

    file1 = open(locate1, 'rb')
    file2 = open(locate2, 'rb')
    image1 = base64.b64encode(file1.read())
    image2 = base64.b64encode(file2.read())

    data = json.dumps(
        [{"image": str(image1, 'utf-8'), "image_type": "BASE64", "face_type": "CERT", "quality_control": "NONE"},
         {"image": str(image2, 'utf-8'), "image_type": "BASE64", "face_type": "LIVE", "quality_control": "NORMAL"}])

    response = submit(url, data)
    print(response)
    if response['error_code'] != 0:
        return 0
    score = response['result']['score']
    return score


# Compare two photos locally with the face_recognition (dlib) library;
# returns 2/3 when no face is found in the first/second image,
# otherwise the face distance between the two encodings
def compare_face_recognition(locate1, locate2):
    first_image = face_recognition.load_image_file(locate1)
    second_image = face_recognition.load_image_file(locate2)
    first_encoding = face_recognition.face_encodings(first_image)
    second_encoding = face_recognition.face_encodings(second_image)
    if len(first_encoding) == 0:
        return 2
    if len(second_encoding) == 0:
        return 3
    results = face_recognition.compare_faces(first_encoding, second_encoding[0])
    if results != "True":
        # compare_faces returns a list of booleans, so this branch always runs
        # and the returned value is the face distance
        results = face_recognition.face_distance(first_encoding, second_encoding[0])
    return results
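
A minimal sketch of how these helpers are called, with placeholder credentials and file names (the real API key and secret come from the Baidu AI console):

# example usage of compare_core.py (placeholder credentials and file names)
from compare_core import get_access_token, face_compare, compare_face_recognition

token = get_access_token("YOUR_API_KEY", "YOUR_SECRET_KEY")
print(face_compare(token, "1.jpg", "2.jpg"))         # Baidu similarity score, 0-100
print(compare_face_recognition("1.jpg", "2.jpg"))    # local dlib result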

2. upload.py

import requests, base64, ssl

session = requests.session()


# Upload both captured photos and the comparison confidence to the management backend
def upload(locate1, locate2, confidence):
    file1 = open(locate1, 'rb')
    file2 = open(locate2, 'rb')
    image1 = base64.b64encode(file1.read())
    image2 = base64.b64encode(file2.read())
    url = "https://your_iot_domain/index.php/admin/add"

    session.headers = {
        "Host": "your_iot_domain",
        "Connection": "keep-alive",
        "Accept": "application/json, text/javascript, */*; q=0.01",
        "Origin": "https://your_iot_domain/",
        "X-Requested-With": "XMLHttpRequest",
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36",
        "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
        "Referer": "https://your_iot_domain/index.php/admin.html",
        "Accept-Encoding": "gzip, deflate",
        "Accept-Language": "zh-CN,zh;q=0.9",
    }
    data = {
        'image1': 'data:image/jpg;base64,' + str(image1, 'utf-8'),
        'image2': 'data:image/jpg;base64,' + str(image2, 'utf-8'),
        'confidence': confidence
    }
    # verify=False skips TLS certificate verification when posting to the backend
    res = session.post(url=url, data=data, verify=False)
    print(res.text)
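
The main program calls this after a comparison, for example (the file names and confidence value here are illustrative):

# example usage of upload.py (illustrative values)
from upload import upload

upload("1.upload.jpg", "2.upload.jpg", 86.5)   # push both photos and the score to the backend

Note that verify=False disables TLS certificate verification; this is convenient during development but should be dropped once the server presents a valid certificate.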

3. main.py

import cv2
import threading
import os
import time

from tkinter import *
from PIL import Image, ImageTk
from compare_core import *
from upload import *


# Resize an image to the given size; if only width or height is given, keep the aspect ratio
def resizeImage(image, width=None, height=None, inter=cv2.INTER_AREA):
    newsize = (width, height)
    (h, w) = image.shape[:2]
    if width is None and height is None:
        return image
    if width is None:
        n = height / float(h)
        newsize = (int(n * w), height)
    else:
        n = width / float(w)
        newsize = (width, int(h * n))
    newimage = cv2.resize(image, newsize, interpolation=inter)
    return newimage


# Rotate an image around its center by the given angle
def rotate(image, angle, center=None, scale=1.0):
    (h, w) = image.shape[:2]
    if center is None:
        center = (w // 2, h // 2)

    M = cv2.getRotationMatrix2D(center, angle, scale)

    rotated = cv2.warpAffine(image, M, (w, h))
    return rotated


# Compare the two captured photos with the Baidu face match API, show the result and upload it
def compare_face_baidu():
    access_token = get_access_token('sXgKYIWhtvo8G5Fe4O', 'DxAhEzSWqAGv1sKxYKixpxOcZq')
    print(access_token)
    score = face_compare(access_token, "1.jpg", "2.jpg")
    if score >= 80:
        Label3.config(text="对比通过 相似度 %.2f" % (score) + "%")
    else:
        Label3.config(text="对比不通过 相似度 %.2f" % (score) + "%")
    upload("1.jpg", "2.jpg", score)
    os.remove("1.jpg")
    os.remove("2.jpg")


# Compare the two captured photos locally with face_recognition (dlib)
def compare_face_dlib():
    score = compare_face_recognition("1.jpg", "2.jpg")
    print(score)
    if score == 2:
        Label3.config(text="未采集到证件人脸")
    elif score == 3:
        Label3.config(text="未采集到人像人脸")
    elif score >= 0.60:
        Label3.config(text="对比通过")
    else:
        Label3.config(text="对比不通过 相似度 %.2f" % (score) + "%")
    upload("1.jpg", "2.jpg", score)
    os.remove("./1.jpg")
    os.remove("./2.jpg")


# Grab a frame from the camera and show the live preview in any label not yet frozen by a capture
def show_frame():
    ret, frame = cap.read()
    print(cap)
    if ret == False:
        print("cap error")
        return 0
    cv2image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)
    cv2image = rotate(cv2image, 180)
    w = cv2image.shape[1]
    h = cv2image.shape[0]
    image_origin = resizeImage(cv2image, int(w / 3.5), int(h / 3.5), cv2.INTER_LINEAR)
    img = Image.fromarray(image_origin)
    imgtk = ImageTk.PhotoImage(image=img)
    if os.path.exists('1.jpg') == False:
        Label1.imgtk = imgtk
        Label1.configure(image=imgtk)
    if os.path.exists('2.jpg') == False:
        Label2.imgtk = imgtk
        Label2.configure(image=imgtk)
    window.after(10, show_frame)


# Start the camera preview thread
def event_button0():
    Thread0.start()


# Capture the ID-document photo
def event_button1():
    ret, frame = cap.read()
    cv2image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)
    saveimage = rotate(frame, 180)
    cv2.imwrite("1.upload.jpg", saveimage)
    cv2.imwrite("1.jpg", cv2image)


# Capture the live portrait photo
def event_button2():
    ret, frame = cap.read()
    cv2image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)
    saveimage = rotate(frame, 180)
    cv2.imwrite("2.upload.jpg", saveimage)
    cv2.imwrite("2.jpg", cv2image)


# Retake the ID-document photo
def event_button3():
    if os.path.exists('1.jpg') == True:
        os.remove("1.jpg")


# Retake the live portrait photo
def event_button4():
    if os.path.exists('2.jpg') == True:
        os.remove("2.jpg")


# Run the comparison in a background thread so the GUI stays responsive
def event_button5():
    Label3.config(text="对比中......")
    Thread1 = threading.Thread(target=compare_face_baidu, name='compare_face_baidu')
    Thread1.start()


if __name__ == "__main__":

    # Set up GUI
    window = Tk()
    window.wm_title("面向监考业务的人证核实移动终端与系统")
    window.geometry('680x350')

    # Graphics window
    main_Frame = Frame(window)
    main_Frame.pack()
    Frame_left = Frame(main_Frame)
    Frame_right = Frame(main_Frame)
    Frame_left.pack(side='left')
    Frame_right.pack(side='right')

    # Capture video frames
    Label1 = Label(Frame_left)
    Label1.pack()
    Label2 = Label(Frame_right)
    Label2.pack()

    Button1 = Button(Frame_left, text="拍照", command=event_button1)
    Button1.pack(ipadx=50)
    Button2 = Button(Frame_right, text="拍照", command=event_button2)
    Button2.pack(ipadx=50)
    Button3 = Button(Frame_left, text="重拍", command=event_button3)
    Button3.pack(ipadx=50)
    Button4 = Button(Frame_right, text="重拍", command=event_button4)
    Button4.pack(ipadx=50)
    Label3 = Label(window, text='等待拍照....', justify=LEFT)
    Label3.pack()

    Button0 = Button(Frame_left, text="开启摄像头", command=event_button0)
    Button0.pack(ipadx=50)
    Button5 = Button(Frame_right, text="对         比", command=event_button5)
    Button5.pack(ipadx=50)

    cap = cv2.VideoCapture(0)
    print(cap)
    if os.path.exists('1.jpg') == True:
        os.remove("1.jpg")
    if os.path.exists('2.jpg') == True:
        os.remove("2.jpg")
    Thread0 = threading.Thread(target=show_frame, name='show_frame')

    # show_frame()
    window.mainloop()  # Starts GUI