I am currently trying to use a custom engine trained with TLT to run inference on a video stream. I am using the Python test 3 example from GitHub - NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications. The model loads successfully and I receive no errors while running; however, the inference applies a random number of bounding boxes (between 20 and 32), usually clustered in the upper-right corner of the stream. The model is highly accurate, and when tlt-infer is run on the same video the results are very good, so I do not believe it is simply performing poor inference; rather, I suspect I have it set up incorrectly. I am attaching the custom engine config and label file as well as the modified Python script from test 3 that I am using. deepstream_test_3.py (14.8 KB) frcnn_engine_config.txt (958 Bytes) frcnn_labels_config.txt (48 Bytes)
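For context, here is a minimal sketch of the kind of change assumed in the attached script: the stock deepstream_test_3.py points its nvinfer element at dstest3_pgie_config.txt, and the custom setup presumably swaps in the attached frcnn_engine_config.txt. The exact property call below is an assumption about the modification, not a copy of the attached file.

# A minimal sketch, assuming the modification to deepstream_test_3.py simply
# points the nvinfer element at the attached custom config instead of the
# stock dstest3_pgie_config.txt.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
if not pgie:
    raise RuntimeError("Unable to create nvinfer element")

# frcnn_engine_config.txt is the attached engine config; nvinfer reads the
# model, custom parser, and clustering settings from this file at startup.
pgie.set_property("config-file-path", "frcnn_engine_config.txt")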
Please refer to the TLT user guide (Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation); it recommends GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream instead of GitHub - NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications.
Please try that git repo. Thanks.
We are currently using that repo as a reference point for the engine config file and the custom parser for the TLT bounding boxes, and the same error still occurs. We would prefer to use the Python bindings with an .mp4 input, which is why we chose the Python repo for the example code. I have adjusted the engine config to better match the config in the TLT repo, but this has had no effect on the output.
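In case it helps narrow this down, below is a hypothetical debug probe (the function name and placement are illustrative, not taken from the attached script) that prints each detection's class, confidence, and box coordinates, so you can see exactly what the custom parser is emitting for the clustered corner boxes. It assumes the standard pyds metadata API used by the Python bindings samples.

# Hypothetical helper: a buffer probe that dumps every detection so the
# misplaced boxes can be inspected. Attach it the same way
# deepstream_test_3.py attaches its own probe.
import pyds
from gi.repository import Gst

def bbox_debug_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            r = obj_meta.rect_params
            print("frame %d: class %d conf %.2f bbox (%.0f, %.0f, %.0f, %.0f)"
                  % (frame_meta.frame_num, obj_meta.class_id,
                     obj_meta.confidence, r.left, r.top, r.width, r.height))
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Usage (inside main(), after the tiler is created):
#   tiler_sink_pad = tiler.get_static_pad("sink")
#   tiler_sink_pad.add_probe(Gst.PadProbeType.BUFFER, bbox_debug_probe, 0)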
First, please download the demo frcnn .etlt model (see the steps below) and run it with GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream, to make sure it works well.
cd deepstream_tlt_apps/
wget https://nvidia.box.com/shared/static/8k0zpe9gq837wsr0acoy4oh3fdf476gq.zip -O models.zip
unzip models.zip
rm models.zip