Deploying a TAO-developed YOLOv4 model on a Jetson device

I am going to run a TAO Toolkit-generated YOLOv4 model on a Jetson Xavier NX. There are some deepstream_tao_apps examples here: deepstream_tao_apps/pgie_yolov4_tao_config.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub

However, I am after an example where one pgie and three sgie models run together, like this: deepstream_python_apps/apps/deepstream-test2 at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub. Is there any such example for deepstream_tao_apps?

Also, all the examples in deepstream_python_apps use Caffe models. Is there any option to run a TAO-generated model in the same manner, i.e. to fully integrate the model into DeepStream just like the Caffe models in those examples?

Yes, there are some examples in DeepStream.
For example,
/opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt

See its README for more details.
The above example runs an object detection network (DashCamNet) followed by two classification networks.
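For reference, a cascaded pipeline of that kind is wired up in the deepstream-app config through one [primary-gie] group and multiple [secondary-gieN] groups, where each secondary GIE is told which GIE's output to operate on. A minimal sketch (the group and key names are standard deepstream-app config groups; the config-file names below are assumptions modeled on the sample layout, not exact paths):

```ini
# Primary detector (pgie): DashCamNet
[primary-gie]
enable=1
gie-unique-id=1
config-file=config_infer_primary_dashcamnet.txt

# First secondary classifier (sgie), operating on the pgie's detections
[secondary-gie0]
enable=1
gie-unique-id=4
operate-on-gie-id=1
config-file=config_infer_secondary_vehiclemakenet.txt

# Second secondary classifier, also attached to the pgie's output
[secondary-gie1]
enable=1
gie-unique-id=5
operate-on-gie-id=1
config-file=config_infer_secondary_vehicletypenet.txt
```

Adding a third classifier would just be another [secondary-gie2] group with its own gie-unique-id.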


Hi Morganh,

Thank you for the above; it worked perfectly for me. However, are there any Python code examples available for it?

You can have a look at GitHub - NVIDIA-AI-IOT/tao-toolkit-triton-apps: Sample app code for deploying TAO Toolkit trained models to Triton.
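Note also that TAO-exported .etlt models can be consumed directly by the nvinfer element used in the Python apps: instead of the model-file/proto-file keys used for Caffe models, the pgie config file points at the encoded TAO model. A hedged sketch (the keys are standard Gst-nvinfer config properties; the file names, model key, and parser library path are placeholders you would replace with your own):

```ini
[property]
gpu-id=0
net-scale-factor=1.0
# TAO-specific keys: the encoded model plus the key used when exporting it
tlt-encoded-model=yolov4_resnet18.etlt
tlt-model-key=nvidia_tlt
labelfile-path=labels.txt
network-type=0
num-detected-classes=4
batch-size=1
# YOLOv4 output parsing uses the custom parser shipped with deepstream_tao_apps
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/path/to/libnvds_infercustomparser_tao.so
```

Once such a config exists, the Python pipeline code is unchanged: the nvinfer element simply gets this file via its config-file-path property, the same way the Caffe-based samples do.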

Thanks, it was helpful.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.