.etlt model not detecting with deepstream python apps

Please provide complete information as applicable to your setup.
• Hardware Platform: Jetson Xavier NX
• DeepStream Version: 6.0
• Language: Python

Hi,
I want to deploy .etlt models on deepstream with python bindings. I downloaded the models from this repo (https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps). The yolo models are working fine with “ds-tao-detection”, based on the steps in the documentation.

Now I want to check whether those TAO models also work in DeepStream with the Python bindings. The Python sample code is from deepstream_python_apps; I used “deepstream-imagedata-multistream” as an example. In the configuration file, I replaced

#model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
#proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
#model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
#labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
#int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin

with

tlt-encoded-model=/home/user/Projects/deepstream_tao_apps/models/yolov3/yolov3_resnet18.etlt
int8-calib-file=/home/user/Projects/deepstream_tao_apps/models/yolov3/yolov3nv.trt8.cal.bin
model-engine-file=/home/user/Projects/deepstream_tao_apps/models/yolov3/yolov3_resnet18.etlt_b1_gpu0_int8.engine
labelfile-path=/home/user/Projects/deepstream_tao_apps/configs/yolov3_tao/yolov3_labels.txt
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/home/user/Projects/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so

Here the engine file was generated automatically by running “ds-tao-detection” in deepstream_tao_apps. The parsing settings (parse-bbox-func-name, custom-lib-path) are the same as in deepstream_tao_apps.

The Python app now runs without errors, but it does not detect any objects.

Thanks.

Hi @zzww ,
Please take a look at DeepStream SDK FAQ - #21 by mchi; you may be missing some offset/scale settings.

Hi @mchi ,
Thank you for your reply. I added the scale, offset and a few other parameters, following the configuration file in deepstream_tao_apps. It now works much more reasonably.
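For anyone hitting the same issue, the preprocessing lines I added look roughly like this (values copied from the yolov3 config shipped in deepstream_tao_apps; verify them against your own model before use):

```
# Preprocessing settings for the TAO yolov3 model
# (taken from the pgie config in deepstream_tao_apps; check against your model)
net-scale-factor=1.0
offsets=103.939;116.779;123.68
# 0=RGB, 1=BGR; must match what the model was trained with
model-color-format=1
```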

I am wondering what is a good way to find out other parameters. The configs in deepstream_tao_apps may provide a hint on those parameters when I deploy them in deepstream_python_apps, but if I want to deploy my customized model trained with TAO, how should I set those parameters? Thanks.

Hi @zzww
I think the best way is to go through the documentation and understand the meaning of each of these parameters. If you know your TLT/TAO training settings well, it should be easy to map them.
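As a rough illustration, the training-side settings and the nvinfer config keys map along these lines (an illustrative sketch, not an exhaustive list; the exact spec field names depend on your TAO network type):

```
# TAO training setting                -> nvinfer config key
# input width/height/channels         -> infer-dims=<C>;<H>;<W>
# image mean / normalization          -> offsets=... and net-scale-factor=...
# channel order (RGB or BGR)          -> model-color-format=0 or 1
# number of trained classes           -> num-detected-classes=...
# NMS / confidence thresholds         -> cluster-mode, nms-iou-threshold,
#                                        pre-cluster-threshold
```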

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.