Composer: loading different object detection models

Please provide complete information as applicable to your setup.

• Hardware Platform: dGPU RTX 3060
• DeepStream Version: 6.1.0
• JetPack Version (valid for Jetson only)
• TensorRT Version:
• NVIDIA GPU Driver Version: 510.47.03
• Composer Version: 2.0.0

Everything configured as per Quickstart Guide — DeepStream 6.1.1 Release documentation


I am just getting started with Composer, so apologies if this is an obvious question.

How do I use object detection models other than the standard ones available with the samples?

Specifically, I would like to use PeopleNet, but it does not appear in the list.
This screenshot is from my PC.

Whereas in the tutorial Creating an AI Application — DeepStream 6.1.1 Release documentation,
other models (including PeopleNet) are shown.

I’ve tried modifying config files such as /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt,
and I have also run /opt/nvidia/deepstream/deepstream/samples/
and followed the steps for the TAO pretrained models.
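For context, the kind of nvinfer config I have been experimenting with for PeopleNet looks roughly like the sketch below. This is based on the PeopleNet model card on NGC; the file paths, label file name, and model filename are assumptions for my setup, not values I know to be correct:

```ini
[property]
gpu-id=0
# PeopleNet ships as a TAO-encoded (.etlt) model; "tlt_encode" is the
# publicly documented key for the NGC pretrained models
tlt-model-key=tlt_encode
# Path/filename below are placeholders for wherever the model was downloaded
tlt-encoded-model=../../models/tao_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt
labelfile-path=labels_peoplenet.txt
# PeopleNet detects 3 classes: person, bag, face
num-detected-classes=3
# Input/output tensor names per the PeopleNet (DetectNet_v2) model card
infer-dims=3;544;960
uff-input-blob-name=input_1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
```

This is what I would expect a primary-GIE config for PeopleNet to contain when used with deepstream-app; whether Composer consumes the same config format is part of my question.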

thank you


Can you refer to the troubleshooting page first? Troubleshooting — DeepStream 6.1.1 Release documentation

Thank you for the reply, but I have already gone through this and other documentation.

Is there a specific item on the troubleshooting page? I can’t see anything that relates to this issue.


Just bumping this.

I’ve spent a fair bit of time trying various config files, with no success.


Final attempt

I would be very grateful if someone from NVIDIA (or anyone else) could assist.

thank you


The models you mentioned are TAO models. Currently, TAO models are not integrated into Graph Composer by default.

Hi Fiona

thanks for the reply

Does this mean that the models* shown in the NVIDIA documentation (second screenshot above) can’t be used?

*such as PeopleNet

If they are not integrated “by default”, how can they be integrated or loaded?


We will consider putting a complete sample in the package in the future.

I am also trying to write my own decoder, integrated with ROS, for a different AI model that is not from the NVIDIA NGC catalog. I would also like some documentation/support on how we can deploy our own custom models. That would be really helpful.