This document gives step-by-step guidance to help newcomers install OpenVINO and quickly become familiar with it.

OS/ENV

  • The recommended OS is Ubuntu 16.04 or Ubuntu 18.04; CentOS and Windows may cause trouble during installation.
  • This guide provides the steps for installing on an Ubuntu system, and for creating a Docker* image and installing inside it.
  • Set python3 as the default python, since the Model Optimizer is Python 3 based:

sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10

 

Based on Ubuntu

# Environment proxy settings:

vim /etc/apt/apt.conf

Acquire::http::proxy "http://proxy.example.com:xxx/";

Acquire::https::proxy "http://proxy.example.com:xxx/";

vim /etc/environment

export http_proxy="http://proxy.example.com:xxx/"

export https_proxy="http://proxy.example.com:xxx/"

vim ~/.bashrc

export http_proxy="http://proxy.example.com:xxx/"

export https_proxy="http://proxy.example.com:xxx/"
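Before running apt-get or wget, it helps to confirm the proxy variables are actually set. A quick check; the host proxy.example.com and port 3128 below are placeholders, not real values:

```shell
# Placeholder proxy URL; substitute your real proxy host and port
export http_proxy="http://proxy.example.com:3128/"
export https_proxy="$http_proxy"

# Print both so a typo (missing quote, stray space) is visible immediately
echo "http_proxy=$http_proxy"
echo "https_proxy=$https_proxy"
```

If either line prints empty, re-check the quoting in /etc/environment and ~/.bashrc above.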

 

# Install Python 3.6

mkdir /usr/local/python3

cd /usr/local/python3

wget --no-check-certificate https://www.python.org/ftp/python/3.6.5/Python-3.6.5.tgz

tar -xzvf Python-3.6.5.tgz

cd Python-3.6.5

./configure --prefix=/usr/local/python3

make

make install

 

# Download get-pip.py

apt-get remove python-pip python3-pip

wget https://bootstrap.pypa.io/get-pip.py

apt-get install python3-distutils

python3 get-pip.py

sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10

pip3 freeze > requirements.txt

pip3 install -r requirements.txt

 

# Set environment variables

Open the .bashrc file in <user_directory>:

vi <user_directory>/.bashrc

Add this line to the end of the file:

source /opt/intel/openvino/bin/setupvars.sh
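To verify the script took effect in a new shell, one quick check (setupvars.sh sets INTEL_OPENVINO_DIR in this release; "not set" means the script has not been sourced yet):

```shell
# Should print the install root (e.g. /opt/intel/openvino) once setupvars.sh is sourced
echo "INTEL_OPENVINO_DIR=${INTEL_OPENVINO_DIR:-not set}"
```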

 

Download

  • OpenVINO toolkit package: https://software.intel.com/en-us/openvino-toolkit/choosedownload/free-download-linux

 

Install Guide

  • OpenVINO toolkit install guide:

https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_linux.html

# Install the Intel® Distribution of OpenVINO™ Toolkit

tar -xvzf l_openvino_toolkit_p_<version>.tgz

cd l_openvino_toolkit_p_<version>

sudo ./install.sh

# Install External software dependencies

cd /opt/intel/openvino/install_dependencies

sudo -E ./install_openvino_dependencies.sh

# Configure the Model Optimizer

cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites

sudo ./install_prerequisites.sh

# Run the Verification Scripts to Verify Installation and Compile Samples

cd /opt/intel/openvino/deployment_tools/demo

./demo_squeezenet_download_convert_run.sh

./demo_security_barrier_camera.sh

 

Build Sample App

  • Go to the directory with the build_samples.sh script and run it:

cd /opt/intel/openvino/deployment_tools/inference_engine/samples

./build_samples.sh

 

SSD-Mobilenet-v1

Prepare the model, datasets, and related data files

  • Model Downloader:

cd /opt/intel/openvino/deployment_tools/tools/model_downloader/

Download all models via ./downloader.py --all, or download a specific model via ./downloader.py --name MODEL_NAME --output_dir LOCAL_DIR

  • Download datasets:

ImageNet 2012: http://www.image-net.org/download-images

COCO 2017:  http://images.cocodataset.org/zips/val2017.zip

VOC 2007: http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar

 

FP32 INFERENCE

Convert to IR

Execute this step and you will get the frozen_inference_graph .bin/.xml/.mapping files under irmodels:

python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --framework=tf  --data_type=FP32 --reverse_input_channels --input_shape=[1,300,300,3] --input=image_tensor --tensorflow_use_custom_operations_config=/opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --output=detection_scores,detection_boxes,num_detections --tensorflow_object_detection_api_pipeline_config=/home/dldt/model_downloader/public/ssd_mobilenet_v1_coco/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --input_model=/home/dldt/model_downloader/public/ssd_mobilenet_v1_coco/ssd_mobilenet_v1_coco_2018_01_28/frozen_inference_graph.pb --output_dir /home/dldt/irmodels

Convert Data Annotation Files

python /opt/intel/openvino/deployment_tools/tools/accuracy_checker_tool/convert_annotation.py mscoco_detection --annotation_file /home/dldt/datasets/annotations/instances_val2017.json --has_background True --use_full_label_map True -o /home/dldt/datasets/annotation -a mscoco_detection.pickle -m mscoco_detection.json

VALIDATE ACCURACY

Use accuracy_check.py to verify the accuracy:

python /opt/intel/openvino/deployment_tools/tools/accuracy_checker_tool/accuracy_check.py --config /home/dldt/yml/ssd_mobilenet_v1_coco.yml -d /opt/intel/openvino/deployment_tools/tools/calibration_tool/configs/definitions.yml -M /opt/intel/openvino/deployment_tools/model_optimizer --models /home/dldt/irmodels --source /home/dldt/datasets --annotations /home/dldt/datasets/annotation
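The --config argument above points to an Accuracy Checker yml file. A minimal sketch of what such a file can look like follows; all paths and the adapter choice are assumptions to adapt to your own layout (model/weights resolve relative to --models, data_source relative to --source, annotation files relative to --annotations):

```yaml
models:
  - name: ssd_mobilenet_v1_coco
    launchers:
      - framework: dlsdk
        device: CPU
        model: frozen_inference_graph.xml
        weights: frozen_inference_graph.bin
        adapter: ssd
    datasets:
      - name: COCO2017_90cl_bkgr
        data_source: val2017
        annotation: mscoco_detection.pickle
        dataset_meta: mscoco_detection.json
        metrics:
          - type: map
```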

 

# Below is the result:

Processing info:

model: ssd_mobilenet_v1_coco

launcher: dlsdk

device: CPU

dataset: COCO2017_90cl_bkgr

OpenCV version: 4.1.2-openvino

IE version: 2.1.custom_releases/2019/R3_ac8584cb714a697a12f1f30b7a3b78a5b9ac5e05

Loaded CPU plugin version: 2.1.32974

4952 objects processed in 102.482 seconds

map: 36.09%

 

FP32 Inference

/root/inference_engine_samples_build/intel64/Release/benchmark_app -progress true -i /home/dldt/datasets/val2017 -b 8 -m /home/dldt/irmodels/frozen_inference_graph.xml -d CPU -api async -l /opt/intel/openvino/inference_engine/lib/intel64/libcpu_extension_avx512.so -nireq 1 -nstreams 1

 

# Below is the result:

[Step 11/11] Dumping statistics report

Count:      4154 iterations

Duration:   60017.08 ms

Latency:    14.39 ms

Throughput: 553.71 FPS

[Step 11/11] Dumping statistics report

Count:      4164 iterations

Duration:   60026.74 ms

Latency:    14.35 ms

Throughput: 554.95 FPS
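As a sanity check, the reported throughput can be recomputed from the report's own numbers (batch size 8 from the -b flag; iteration count and duration from the first run above):

```shell
# Throughput = batch * iterations / duration_in_seconds
awk 'BEGIN { batch = 8; count = 4154; duration_ms = 60017.08;
             printf "computed throughput: %.2f FPS\n",
                    batch * count / (duration_ms / 1000) }'
```

This matches the 553.71 FPS that benchmark_app prints, confirming that the reported Throughput already accounts for the batch size.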

 

INT8 INFERENCE

Calibration

Calibration is driven by a yml configuration file for the ssd_mobilenet_v1 model; the command below uses the ssd_mobilenet_v1_coco yml file. Note that the file paths inside the yml file must be adjusted to match where you store those files.

The calibration command:

python /opt/intel/openvino/deployment_tools/tools/calibration_tool/calibrate.py --config /home/dldt/yml/ssd_mobilenet_v1_coco.yml --definition /opt/intel/openvino/deployment_tools/tools/calibration_tool/configs/definitions.yml -M /opt/intel/openvino/deployment_tools/model_optimizer --models /home/dldt/irmodels --source /home/dldt/datasets --annotations /home/dldt/datasets/annotation

When calibration finishes, frozen_inference_graph_i8.bin/.xml are saved in the same folder as the FP32 model.

Next, create a yml file for the INT8 model.
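The INT8 yml can be a copy of the FP32 one with the model and weights entries pointed at the calibrated files. A sketch, where the file names and model name are assumptions:

```yaml
models:
  - name: ssd_mobilenet_v1_coco_i8
    launchers:
      - framework: dlsdk
        device: CPU
        model: frozen_inference_graph_i8.xml
        weights: frozen_inference_graph_i8.bin
        adapter: ssd
    # datasets: section is identical to the FP32 yml
```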

Verify Accuracy

python /opt/intel/openvino/deployment_tools/tools/accuracy_checker_tool/accuracy_check.py --config /home/dldt/yml/ssd_mobilenet_v1_coco_i8.yml -d /opt/intel/openvino/deployment_tools/tools/calibration_tool/configs/definitions.yml -M /opt/intel/openvino/deployment_tools/model_optimizer --models /home/dldt/irmodels --source /home/dldt/datasets --annotations /home/dldt/datasets/annotation

 

# Below is the result:

Processing info:

model: ssd_mobilenet_v1_coco

launcher: dlsdk

device: CPU

dataset: COCO2017_90cl_bkgr

OpenCV version: 4.1.2-openvino

IE version: 2.1.custom_releases/2019/R3_ac8584cb714a697a12f1f30b7a3b78a5b9ac5e05

Loaded CPU plugin version: 2.1.32974

4952 objects processed in 97.580 seconds

map: 35.90%
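Compared with the FP32 run (36.09% mAP), the INT8 model loses very little accuracy. The drop can be quantified from the two results above:

```shell
# Accuracy delta between the FP32 (36.09) and INT8 (35.90) mAP results above
awk 'BEGIN { fp32 = 36.09; int8 = 35.90;
             printf "mAP drop: %.2f points (%.2f%% relative)\n",
                    fp32 - int8, (fp32 - int8) / fp32 * 100 }'
```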

 

INT8 Inference

/root/inference_engine_samples_build/intel64/Release/benchmark_app -progress true -i /home/dldt/datasets/val2017 -b 8 -m /home/dldt/irmodels/frozen_inference_graph_i8.xml -d CPU -api async -l /opt/intel/openvino/inference_engine/lib/intel64/libcpu_extension_avx512.so -nireq 1 -nstreams 1

 

# Below is the result:

[Step 11/11] Dumping statistics report

Count:      9900 iterations

Duration:   60009.23 ms

Latency:    6.04 ms

Throughput: 1319.80 FPS

[Step 11/11] Dumping statistics report

Count:      9940 iterations

Duration:   60006.40 ms

Latency:    6.01 ms

Throughput: 1325.19 FPS
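Relative to the FP32 run above (553.71 FPS at 14.39 ms), INT8 gives roughly a 2.4x gain. Computed from the first run of each:

```shell
# INT8 vs FP32 speedup from the benchmark_app reports above
awk 'BEGIN { fp32_fps = 553.71; int8_fps = 1319.80;
             fp32_lat = 14.39;  int8_lat = 6.04;
             printf "throughput speedup: %.2fx\n", int8_fps / fp32_fps;
             printf "latency reduction:  %.2fx\n", fp32_lat / int8_lat }'
```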

 

Based on Docker

# Install Docker

Follow https://docs.docker.com/install/linux/docker-ce/ubuntu/ to install Docker.

# Download the Docker base image

docker pull ubuntu:16.04

# Start the container

docker run -itd --privileged ubuntu:16.04 bash

docker container ls   # get the container ID; below, $container_id = CONTAINER ID

docker exec -it $container_id bash   # enter the container

# Environment proxy settings:

export http_proxy="http://proxy.example.com:xxx/"

export https_proxy="http://proxy.example.com:xxx/"

# Install the required dependencies

apt-get update

apt-get install -y curl lsb-release apt-utils vim wget sudo build-essential zlib1g-dev cpio pciutils numactl libnuma-dev libgtk-3-0

 

# Then follow the remaining steps in "Based on Ubuntu" above.