Custom Models for DeepStream


We are having some trouble using custom models in DeepStream. We tried multiple models (ONNX, Caffe, UFF), but they all seem to be incompatible with TensorRT (TRT). How can we make a model TRT compatible, and is there documentation for that? Also, are there readily available models that are TRT compatible, which would help with quick prototyping?
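One quick way to see whether a given model is TRT compatible is to run it through `trtexec`, which ships with TensorRT on JetPack; if the parser hits an unsupported layer, its log names the failing op. A minimal sketch (the model path and output path are placeholders, not files from this thread):

```shell
# trtexec ships with TensorRT; on JetPack it lives under /usr/src/tensorrt/bin.
# If a layer is unsupported, the ONNX parser log reports the failing op.
/usr/src/tensorrt/bin/trtexec \
  --onnx=model.onnx \
  --fp16 \
  --saveEngine=model.engine
```

If this succeeds, the saved `.engine` file can be handed to DeepStream directly; if it fails, the log tells you which layer needs a TRT plugin.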

Some other things that we tried:

  1. We tried the TensorFlow-TensorRT (TF-TRT) converter, but when we supply the resulting engine file to DeepStream (via the config file), the pipeline fails to build.
  2. We tried TLT as well, but it throws an error while creating TFRecords.
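For reference, this is the general shape of an nvinfer config that points DeepStream at a model; the paths and class count below are placeholders. One likely cause of point 1: TF-TRT produces an optimized TensorFlow graph, not a standalone serialized TensorRT engine, so its output cannot be loaded via `model-engine-file`.

```ini
[property]
gpu-id=0
# Either give an ONNX file and let nvinfer build the engine on first run...
onnx-file=model.onnx
# ...or point at an engine built separately (e.g. with trtexec).
# Note: a TF-TRT "engine" will NOT load here; TF-TRT emits a TensorFlow
# graph with embedded TRT nodes, not a plain serialized .engine file.
model-engine-file=model.onnx_b1_gpu0_fp16.engine
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16 (FP16 is supported on TX2)
network-mode=2
num-detected-classes=4
```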


Our questions:

  1. Could you tell us how to use custom models in DeepStream?
  2. How do we use the TensorFlow-TRT converter with DeepStream?
  3. Is there a way to use TRT effectively (a blog post or documentation)? We followed the current documentation and still ran into errors.

• Hardware Platform (Jetson / GPU): Jetson TX2
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5.1

What do you mean by “incompatible”? Are some layers not supported by TRT? If so, you need to implement a TRT plugin for those layers if you want to run the model with TRT.
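When a model does need a custom TRT plugin or a custom output parser, nvinfer loads it from a shared library named in the config. A sketch of the relevant keys, with the library path and function name being hypothetical examples:

```ini
[property]
onnx-file=model.onnx
# Shared library containing the custom TRT plugin and/or output parser
# (path and name are illustrative, not from this thread).
custom-lib-path=/opt/models/libnvds_custom_impl.so
# Name of the bounding-box parsing function exported by that library.
parse-bbox-func-name=NvDsInferParseCustomModel
```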

What’s the error? Can you be more specific?

Again, please elaborate what error you run into.
GitHub - NVIDIA-AI-IOT/deepstream_tlt_apps (sample apps showing how to deploy models trained with TLT on DeepStream) demonstrates running TLT models with DS.
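That repo also answers the question about readily available TRT-compatible models: it bundles pre-trained TLT models for prototyping. A minimal sketch of getting started (check the repo's README for the branch matching DeepStream 5.1):

```shell
# Clone the TLT sample apps; the README covers downloading the
# pre-trained models and building against your DeepStream install.
git clone https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps.git
cd deepstream_tlt_apps
```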

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.