[TF Lite] Re-train ssd_mobilenet_v1_quantized_coco
Resources
[1] How to quantify ssd_mobilenet_v1_coco model and toco to .tflite ? #18829
Methodology
1. Start Training
TF is full of pitfalls, but training works once you use the right command. Any of the following variants starts training; pick the one that matches the checkpoint/config you downloaded:
python object_detection/legacy/train.py --train_dir=training/ --pipeline_config_path=object_detection/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/pipeline.config
python object_detection/legacy/train.py --train_dir=training/ --pipeline_config_path=object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config
python object_detection/legacy/train.py --train_dir=training/ --pipeline_config_path=object_detection/ssd_mobilenet_v2_coco_2018_03_29/pipeline.config
python object_detection/model_main.py --model_dir=training/ --pipeline_config_path=object_detection/ssd_mobilenet_v1_quantized_300x300_coco14_sync_2018_07_18/pipeline.config
(Note: model_main.py expects --model_dir, not the legacy train.py flag --train_dir.)
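For context, the "quantized" pipeline configs differ from the float SSD configs mainly in a graph_rewriter block that turns on quantization-aware training. A minimal sketch of that block is shown below; the delay value is illustrative, so check the pipeline.config that ships with the checkpoint you fine-tune from:

graph_rewriter {
  quantization {
    delay: 48000          # steps of float training before fake-quant ops start (illustrative value)
    weight_bits: 8
    activation_bits: 8
  }
}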
If you are training on an RTX 2080, you may also need the snippet below to work around Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR #34695:
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Currently, memory growth needs to be the same across GPUs
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Memory growth must be set before GPUs have been initialized
        print(e)
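As the inline comment says, memory growth has to be configured before any GPU is initialized, so in practice this snippet belongs right after the imports at the top of the training entry point (object_detection/legacy/train.py or model_main.py). It also assumes a TensorFlow build recent enough to provide tf.config.experimental (roughly 1.14 or later).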
2. Training Results
/* implement */
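This section is still a stub. Until it is filled in, a quick way to inspect the loss curves and evaluation summaries written to the training directory is TensorBoard (assuming it is installed alongside TensorFlow):

tensorboard --logdir=training/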
3. Model Conversion
/* implement */
/* implement */
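This section is also a stub. For reference, a sketch of the usual conversion path for a quantization-aware-trained SSD in the Object Detection API has two steps: first export a TFLite-compatible frozen graph, then run the TF 1.x tflite_convert tool with QUANTIZED_UINT8 inference. The checkpoint number (model.ckpt-XXXX) and the tflite_export/ output directory below are placeholders; the flags follow the API's running_on_mobile_tensorflowlite.md guide, so verify them against your installed version:

python object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path=object_detection/ssd_mobilenet_v1_quantized_300x300_coco14_sync_2018_07_18/pipeline.config \
    --trained_checkpoint_prefix=training/model.ckpt-XXXX \
    --output_directory=tflite_export/ \
    --add_postprocessing_op=true

tflite_convert \
    --graph_def_file=tflite_export/tflite_graph.pb \
    --output_file=tflite_export/detect.tflite \
    --input_shapes=1,300,300,3 \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3 \
    --inference_type=QUANTIZED_UINT8 \
    --mean_values=128 \
    --std_dev_values=128 \
    --change_concat_input_ranges=false \
    --allow_custom_ops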
