Integrating a YOLOv4-tiny model trained in the TAO framework into DeepStream

Hey there,
I've trained a YOLOv4-tiny model with TAO and used the export stage to create the .etlt and TensorRT engine files.
Now I'm trying to migrate the model into a DeepStream workflow. I followed the deepstream_tao_apps integration example (which is in C++; since I'm using Python, I'd be glad to see a Python example if one exists) and the configuration description,

and I have a few questions about the config file:

TAO export created the following files:

labels.txt
nvinfer_config.txt
trt.engine
yolov4_cspdarknet_tiny_epoch_080.etlt

According to the deepstream_tao_apps instructions, we should change the following properties, as found in the deepstream_tao_apps yolov4_tiny_tao config file:


so I changed it to (I dropped the absolute path):

model-engine-file=trt.engine
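For context, a fuller [property] fragment for a TAO-exported YOLOv4-tiny might look like the following. This is only a sketch modeled on the pgie_yolov4_tao_config.txt from deepstream_tao_apps; the file names match the export output listed above, while the key and class count are placeholders you must fill in from your own training setup:

```
[property]
# Encoded TAO model and the key used during export (placeholder value)
tlt-encoded-model=yolov4_cspdarknet_tiny_epoch_080.etlt
tlt-model-key=<your-tao-encode-key>
# Serialized TensorRT engine; DeepStream rebuilds it if missing/mismatched
model-engine-file=trt.engine
labelfile-path=labels.txt
# 0 = detector
network-type=0
num-detected-classes=<number of classes in labels.txt>
```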

  1. I don't understand which paths I should use for the int8-calib-file, custom-lib-path and parse-bbox-func-name properties, and how do I generate these files?
  2. What is the nvinfer_config.txt generated by TAO export needed for?
  3. In the TAO SDK we use spec files for training and pretraining; should I import them into DeepStream? If yes, where?

Please follow GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream.

The int8-calib-file is the cal.bin file, which is generated when you export the tlt model.
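In the nvinfer config, the calibration cache is only used together with INT8 mode; a hedged fragment (the cal.bin path is illustrative, adjust it to wherever your export wrote the file):

```
# INT8 inference: calibration cache produced during TAO export
int8-calib-file=cal.bin
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=1
```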


For custom-lib-path and parse-bbox-func-name, just follow the yolov4 config file located at deepstream_tao_apps/pgie_yolov4_tao_config.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub
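In that config, the two properties point at the custom bounding-box parser shipped with the repo, which you build from its post_processor directory with make; the relative library path below assumes you run from the repo root:

```
# Custom TAO output parser for the BatchedNMS detection head
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=post_processor/libnvds_infercustomparser_tao.so
```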

The nvinfer_config.txt is a DeepStream config snippet generated by export: it contains suggested nvinfer property values (such as net-scale-factor and offsets) that you can merge into your pgie config file.
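Since both files use the same key=value format, in a Python pipeline you could merge the exported snippet into your pgie config programmatically. A minimal sketch, assuming plain key=value lines (the sample contents in the usage block are illustrative, not the real output of your export):

```python
# Merge the nvinfer properties suggested by TAO export (nvinfer_config.txt)
# into an existing pgie config. Values from the exported snippet win.

def parse_properties(text):
    """Parse simple key=value lines, skipping blanks, comments and [sections]."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(("#", ";", "[")):
            continue
        if "=" in line:
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

def merge_configs(base_text, override_text):
    """Return a [property] section where override values take precedence."""
    merged = parse_properties(base_text)
    merged.update(parse_properties(override_text))
    lines = ["[property]"]
    lines += [f"{key}={value}" for key, value in merged.items()]
    return "\n".join(lines)

if __name__ == "__main__":
    # Illustrative contents only -- the real files come from TAO export.
    pgie = "[property]\nmodel-engine-file=trt.engine\nnum-detected-classes=3\n"
    exported = "net-scale-factor=1.0\noffsets=103.939;116.779;123.68\n"
    print(merge_configs(pgie, exported))
```

This keeps the hand-written pgie config as the base and lets the exported values override it, so re-running export and re-merging picks up any changed preprocessing parameters.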

No, the training spec files are not needed in DeepStream.

