TensorFlow Serving Startup Guide

- Project page: https://github.com/tensorflow/serving
- Docker base images: https://hub.docker.com/r/tensorflow/serving/tags/?page=1&ordering=last_updated
Four image variants are published:

- CPU install image, corresponding to Dockerfile
- CPU devel image, corresponding to Dockerfile.devel
- GPU install image, corresponding to Dockerfile.gpu; requires nvidia-docker for GPU support
- GPU devel image, corresponding to Dockerfile.devel-gpu
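Each variant maps to a tag on Docker Hub; a minimal sketch of pulling them (tag names assume the `latest-*` convention shown on the tags page above):

```bash
docker pull tensorflow/serving                   # CPU install image
docker pull tensorflow/serving:latest-devel      # CPU devel image
docker pull tensorflow/serving:latest-gpu        # GPU install image
docker pull tensorflow/serving:latest-devel-gpu  # GPU devel image
```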
A quick deployment demo, using the CPU image as an example:
```bash
# Download the TensorFlow Serving Docker image and repo
docker pull tensorflow/serving
git clone https://github.com/tensorflow/serving

# Location of demo models
TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata"

# Start TensorFlow Serving container and open the REST API port
docker run -t --rm -p 8501:8501 \
  -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
  -e MODEL_NAME=half_plus_two \
  tensorflow/serving &

# Query the model using the predict API
curl -d '{"instances": [1.0, 2.0, 5.0]}' \
  -X POST http://localhost:8501/v1/models/half_plus_two:predict

# Returns => { "predictions": [2.5, 3.0, 4.5] }
```

Port 8500 is the default gRPC port and 8501 the default REST port; both can be open at the same time. The environment variable MODEL_NAME names the model directory (default: model), and MODEL_BASE_PATH names the root directory that contains it (default: /models).
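Building on those defaults, a minimal sketch of serving your own SavedModel by overriding both variables (/path/to/models and my_model are placeholders; the model directory must contain numeric version subdirectories such as my_model/1/):

```bash
docker run -t --rm -p 8500:8500 -p 8501:8501 \
  -v "/path/to/models:/models/custom" \
  -e MODEL_BASE_PATH=/models/custom \
  -e MODEL_NAME=my_model \
  tensorflow/serving
```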
You can also pass extra arguments to the server, for example to specify a model config file:
```bash
docker run -p 8500:8500 -p 8501:8501 \
  --mount type=bind,source=/path/to/my_model/,target=/models/my_model \
  --mount type=bind,source=/path/to/my/models.config,target=/models/models.config \
  -t tensorflow/serving --model_config_file=/models/models.config
```
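For reference, models.config is a ModelServerConfig message in protobuf text format; a minimal sketch that writes a config serving a single model (the model name and base_path are placeholders):

```bash
# Write a minimal models.config (ModelServerConfig in protobuf text format)
cat > /path/to/my/models.config <<'EOF'
model_config_list {
  config {
    name: 'my_model'
    base_path: '/models/my_model'
    model_platform: 'tensorflow'
  }
}
EOF
```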
How the CPU image starts up:

- ENTRYPOINT: /usr/bin/tf_serving_entrypoint.sh
```bash
#!/bin/bash
tensorflow_model_server --port=8500 --rest_api_port=8501 \
  --model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME} "$@"
```
- Server binary path: /usr/bin/tensorflow_model_server
- The model directory is passed in through the MODEL_BASE_PATH and MODEL_NAME environment variables
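Putting the entrypoint and the environment variables together: with the quick start's MODEL_NAME=half_plus_two and the default MODEL_BASE_PATH=/models, the container effectively runs (a sketch of the expanded command):

```bash
/usr/bin/tensorflow_model_server --port=8500 --rest_api_port=8501 \
  --model_name=half_plus_two --model_base_path=/models/half_plus_two
```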
Development and debugging:

- In the devel image, you can start the server directly from the binary and debug it by sending requests (e.g. from Python); see the sketch after this list
- You can build your own image from the provided Dockerfiles: https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/building_with_docker.md
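A minimal sketch of that debugging workflow inside the devel container, reusing the demo model from the quick start (the /serving checkout path is illustrative):

```bash
# Start the server in the background from the binary shipped in the devel image
tensorflow_model_server --rest_api_port=8501 \
  --model_name=half_plus_two \
  --model_base_path=/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu &

# Send a test request; expected response: { "predictions": [2.5, 3.0, 4.5] }
curl -d '{"instances": [1.0, 2.0, 5.0]}' \
  -X POST http://localhost:8501/v1/models/half_plus_two:predict
```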
Contact: emhhbmdfbGlhbmcxOTkxQDEyNi5jb20=
