Custom Models for DeepStream

Hi,

We are having some trouble using custom models in DeepStream. We tried multiple model formats (ONNX, Caffe, UFF), but the models seem to be incompatible with TensorRT (TRT). How can we make a model TRT compatible, and is there documentation for that? Also, are there readily available TRT-compatible models that would help with quick prototyping?

Specs:
• Hardware Platform (Jetson / GPU): Jetson TX2
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5.1

Hey, what issue did you hit?
You need to make sure the model runs well with TensorRT before you deploy it in DeepStream. If you hit a TensorRT-related issue, please create a topic in the TensorRT forum.
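For reference, a quick compatibility check on the Jetson is trtexec, which is installed with TensorRT under /usr/src/tensorrt/bin. A minimal sketch, assuming an ONNX model (model.onnx and model.engine below are placeholder names, not from this thread):

# Try to parse the model and build a TensorRT engine; if this fails,
# the error message usually names the unsupported layer or op.
# model.onnx / model.engine are placeholders.
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine --fp16

If trtexec builds and runs the engine, the model is generally safe to wire into DeepStream; if it fails, the failing layer is what needs fixing, e.g. by re-exporting with a different opset or replacing the op.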

We are working on deploying a custom-trained SSD MobileNet V2 model (TensorFlow 2.0) in DeepStream 5.1. Is there any relevant documentation for this?

I think you can refer to the following samples inside the DS package, but I still suggest you first make sure the model runs well via trtexec:

/opt/nvidia/deepstream/deepstream-6.0# ls sources/objectDetector_*
objectDetector_FasterRCNN/  objectDetector_SSD/  objectDetector_Yolo/
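For an SSD MobileNet V2 trained with TensorFlow 2, one common route (an assumption on my side, not what the bundled sample does; objectDetector_SSD ships its own UFF-based flow) is to export the model to ONNX with tf2onnx and then validate it with trtexec. The saved_model/ directory and output names below are placeholders:

# Convert the TF2 SavedModel to ONNX (run on the training machine;
# saved_model/ and the opset are assumptions, adjust to your export)
python3 -m tf2onnx.convert --saved-model saved_model/ --output ssd_mobilenet_v2.onnx --opset 11

# Then verify it parses and builds with TensorRT on the Jetson
/usr/src/tensorrt/bin/trtexec --onnx=ssd_mobilenet_v2.onnx --saveEngine=ssd_mobilenet_v2.engine

Once the engine builds, you can point the gst-nvinfer config at the ONNX file (onnx-file=...) or at the prebuilt engine (model-engine-file=...); SSD-style outputs typically also need a custom bounding-box parser like the one provided in objectDetector_SSD.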
