Please provide complete information as applicable to your setup.
• Hardware Platform: Jetson Xavier NX
• DeepStream Version: 6.0
• Language: Python
Hi,
I want to deploy .etlt models on deepstream with python bindings. I downloaded the models from this repo (https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps). The yolo models are working fine with “ds-tao-detection”, based on the steps in the documentation.
Now I want to check if those TAO models work in the deepstream with python bindings. The python sample code is downloaded from deepstream_python_apps. I used “deepstream-imagedata-multistream” as an example. In the configuration file, I replaced
#model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
#proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
#model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
#labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
#int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
with
tlt-encoded-model=/home/user/Projects/deepstream_tao_apps/models/yolov3/yolov3_resnet18.etlt
int8-calib-file=/home/user/Projects/deepstream_tao_apps/models/yolov3/yolov3nv.trt8.cal.bin
model-engine-file=/home/user/Projects/deepstream_tao_apps/models/yolov3/yolov3_resnet18.etlt_b1_gpu0_int8.engine
labelfile-path=/home/user/Projects/deepstream_tao_apps/configs/yolov3_tao/yolov3_labels.txt
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/home/user/Projects/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so
Here the engine file was generated automatically by running “ds-tao-detection” in deepstream_tao_apps. The parsing settings (parse-bbox-func-name, custom-lib-path) are the same as in deepstream_tao_apps.
The Python app now runs without errors, but it does not detect any objects.
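For comparison, the [property] group that deepstream_tao_apps ships for this model carries a few keys beyond the ones replaced above. The sketch below is an assumption based on that repo’s yolov3 TAO sample config, not an exact copy; in particular, the tlt-model-key value and num-detected-classes should be verified against your checkout:

```ini
[property]
gpu-id=0
# TAO-exported .etlt files are decrypted with a key; the reference
# configs in deepstream_tao_apps use nvidia_tlt (assumption: verify
# against your copy of the repo)
tlt-model-key=nvidia_tlt
tlt-encoded-model=/home/user/Projects/deepstream_tao_apps/models/yolov3/yolov3_resnet18.etlt
int8-calib-file=/home/user/Projects/deepstream_tao_apps/models/yolov3/yolov3nv.trt8.cal.bin
model-engine-file=/home/user/Projects/deepstream_tao_apps/models/yolov3/yolov3_resnet18.etlt_b1_gpu0_int8.engine
labelfile-path=/home/user/Projects/deepstream_tao_apps/configs/yolov3_tao/yolov3_labels.txt
# network-mode 1 = INT8, matching the precision of the prebuilt engine
network-mode=1
# must match the number of classes in the label file (assumed value)
num-detected-classes=4
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/home/user/Projects/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so
```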
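When a detector silently returns nothing, one low-tech check is whether the edited config still carries every key the TAO parsing path needs. A minimal sketch using only the standard library; the required-key list is an assumption drawn from the sample configs in deepstream_tao_apps, not an authoritative nvinfer specification:

```python
import configparser

# Keys the TAO .etlt path through nvinfer typically needs (assumption:
# based on the sample configs shipped in deepstream_tao_apps).
REQUIRED_TAO_KEYS = [
    "tlt-encoded-model",
    "tlt-model-key",
    "labelfile-path",
    "parse-bbox-func-name",
    "custom-lib-path",
]

def missing_tao_keys(config_text: str) -> list:
    """Return the required TAO keys absent from the [property] group."""
    parser = configparser.ConfigParser(strict=False)
    parser.read_string(config_text)
    props = parser["property"] if parser.has_section("property") else {}
    return [k for k in REQUIRED_TAO_KEYS if k not in props]

# Example config mirroring the substitutions described above
# (hypothetical paths, as in the post).
sample = """
[property]
tlt-encoded-model=/home/user/Projects/deepstream_tao_apps/models/yolov3/yolov3_resnet18.etlt
labelfile-path=/home/user/Projects/deepstream_tao_apps/configs/yolov3_tao/yolov3_labels.txt
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/home/user/Projects/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so
"""

if __name__ == "__main__":
    # Reports tlt-model-key as missing from the sample above.
    print(missing_tao_keys(sample))
```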
Thanks.