Integrating yolov4_tiny trained in the TAO framework into DeepStream

Hey there,
I've trained a yolov4_tiny model with TAO and used the export stage to create the .etlt and .trt files.
Now I'm trying to migrate the model into a DeepStream workflow. I followed the DeepStream TAO apps integration example (https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps, which is in C++, although I'm using Python, so I'd be glad to see a Python example if one exists) and the configuration description (https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html#gst-nvinfer-file-configuration-specifications)

and I have a few questions about the config file:

TAO export created the following files:

labels.txt
nvinfer_config.txt
trt.engine
yolov4_cspdarknet_tiny_epoch_080.etlt

According to the deepstream_tao_apps instructions, we should change the following properties, as found in the deepstream_tao_apps yolov4_tiny_tao config file:

labelfile-path=../../configs/yolov4-tiny_tao/yolov4_tiny_labels.txt
model-engine-file=../../models/yolov4-tiny/yolov4_cspdarknet_tiny_epoch_080.etlt.onnx.etlt_b1_gpu0_int8.engine
int8-calib-file=../../models/yolov4-tiny/cal.bin
tlt-encoded-model=../../models/yolov4-tiny/yolov4_cspdarknet_tiny_epoch_080.etlt.onnx.etlt
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=../../post_processor/libnvds_infercustomparser_tao.so

so I changed it to (I dropped the absolute paths):

labelfile-path=labels.txt
model-engine-file=trt.engine
tlt-encoded-model=yolov4_cspdarknet_tiny_epoch_080.etlt

int8-calib-file=../../models/yolov4-tiny/cal.bin
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=../../post_processor/libnvds_infercustomparser_tao.so
  1. I don't understand what paths I should use to fill the int8-calib-file, custom-lib-path and parse-bbox-func-name properties, and how do I generate these files?
  2. What is the nvinfer_config.txt generated by TAO export needed for?
  3. In the TAO SDK we use spec files for training and pretraining; should I import them into DeepStream? If yes, where?

Please follow https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps (sample apps that demonstrate how to deploy models trained with TAO on DeepStream).
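
On the Python side, the integration point is just the nvinfer element's config-file-path property. A minimal sketch, assuming the PyGObject/GStreamer setup used by the deepstream_python_apps samples (the config file name here is a placeholder):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# All of the properties discussed in this thread (model-engine-file,
# tlt-encoded-model, int8-calib-file, ...) live in the config file,
# not in Python code.
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "pgie_yolov4_tiny_tao_config.txt")  # placeholder path
# ...then link pgie into the pipeline as in the deepstream_python_apps samples.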

The int8-calib-file is the cal.bin file which is generated when you export the tlt model.

int8-calib-file=../../models/yolov4/yolov4nv.trt8.cal.bin
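
For reference, cal.bin (together with nvinfer_config.txt and labels.txt) is produced by the export step itself. A rough sketch of the invocation, assuming TAO 3.x CLI flags and placeholder paths; check tao yolo_v4_tiny export --help for your version:

# all paths below are placeholders
tao yolo_v4_tiny export \
    -m /workspace/yolov4_cspdarknet_tiny_epoch_080.tlt \
    -k $KEY \
    -e /workspace/specs/yolo_v4_tiny_train.txt \
    -o /workspace/export/yolov4_cspdarknet_tiny_epoch_080.etlt \
    --data_type int8 \
    --cal_image_dir /workspace/data/calibration_images \
    --batches 10 \
    --cal_cache_file /workspace/export/cal.bin \
    --gen_ds_config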

For the custom-lib-path and parse-bbox-func-name, just follow the yolov4 config file located at https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/configs/yolov4_tao/pgie_yolov4_tao_config.txt
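
The .so itself is built from that repo's post_processor directory; roughly (the CUDA_VER value is an assumption and must match your installed CUDA toolkit, see the repo README):

git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
cd deepstream_tao_apps/post_processor
export CUDA_VER=11.4   # match your installed CUDA version
make                   # builds libnvds_infercustomparser_tao.so

Note that NvDsInferParseCustomBatchedNMSTLT is the name of a function compiled into that library, so it is copied verbatim into the config rather than generated.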

The nvinfer_config.txt generated by export is a template of gst-nvinfer properties (input dims, scale factor, offsets, etc.) that you can merge into your DeepStream config file; the label file is labels.txt.
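
For illustration, a --gen_ds_config export typically emits properties along these lines (the values are model-specific; treat this as a representative sketch, not your file's exact contents):

net-scale-factor=1.0
offsets=103.939;116.779;123.68
infer-dims=3;384;1248
tlt-model-key=<your export key>
network-type=0
num-detected-classes=4
model-color-format=1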

No, the training spec files are not needed in DeepStream.
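
Putting the answers together, the relevant properties would end up roughly like this (the relative paths are assumptions based on the file listing above, and tlt-model-key must match the key used at export):

labelfile-path=labels.txt
model-engine-file=trt.engine
tlt-encoded-model=yolov4_cspdarknet_tiny_epoch_080.etlt
tlt-model-key=<your export key>
int8-calib-file=cal.bin
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=../../post_processor/libnvds_infercustomparser_tao.so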
