Hey there,
I’ve trained a YOLOv4-tiny model with TAO and used the export stage to create the .etlt and TensorRT engine files.
Now I’m trying to migrate the model into a DeepStream workflow. I followed the deepstream_tao_apps integration example (https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps, which is in C++ although I’m using Python, so I’d be glad to see a Python example if one exists) and the configuration description (https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html#gst-nvinfer-file-configuration-specifications),
and I have a few questions about the config file:
TAO export created the following files:
labels.txt
trt.engine
nvinfer_config.txt
yolov4_cspdarknet_tiny_epoch_080.etlt
According to the deepstream_tao_apps instructions we should change the following properties, as found in the deepstream_tao_apps yolov4_tiny_tao config file:
labelfile-path=../../configs/yolov4-tiny_tao/yolov4_tiny_labels.txt
model-engine-file=../../models/yolov4-tiny/yolov4_cspdarknet_tiny_epoch_080.etlt.onnx.etlt_b1_gpu0_int8.engine
int8-calib-file=../../models/yolov4-tiny/cal.bin
tlt-encoded-model=../../models/yolov4-tiny/yolov4_cspdarknet_tiny_epoch_080.etlt.onnx.etlt
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=../../post_processor/libnvds_infercustomparser_tao.so
so I changed it to (I dropped the absolute paths):
labelfile-path=labels.txt
model-engine-file=trt.engine
tlt-encoded-model=yolov4_cspdarknet_tiny_epoch_080.etlt
int8-calib-file=../../models/yolov4-tiny/cal.bin
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=../../post_processor/libnvds_infercustomparser_tao.so
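For what it’s worth, since the nvinfer config is INI-style, I sanity-check my edits with Python’s configparser before running the pipeline (the inline text below is just the subset of properties I changed; whether this catches every nvinfer-specific rule is an assumption on my part):

```python
# Sanity-check an nvinfer-style config with Python's configparser.
# Assumes the standard INI-style [property] group used by gst-nvinfer configs.
import configparser

config_text = """
[property]
labelfile-path=labels.txt
model-engine-file=trt.engine
tlt-encoded-model=yolov4_cspdarknet_tiny_epoch_080.etlt
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
"""

parser = configparser.ConfigParser()
parser.read_string(config_text)

# Print each property so typos (stray spaces, wrong keys) stand out.
for key, value in parser["property"].items():
    print(f"{key} = {value}")
```

This only validates the file’s syntax, not whether the paths actually exist or whether nvinfer accepts the values.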
- I don’t understand what paths I should use to fill in the int8-calib-file, custom-lib-path and parse-bbox-func-name properties, and how do I generate these files?
- What is the purpose of the nvinfer_config.txt generated by TAO export?
- In the TAO SDK we use spec files for train and pretrain; should I import them into DeepStream? If yes, where?