We are having some trouble using custom models in DeepStream. We tried multiple models (ONNX, Caffe, UFF), but the models seem to be incompatible with TensorRT (TRT). We would like to know how we can make a model TRT compatible; is there documentation for that? Also, are there readily available models that are TRT compatible, which would help with easy prototyping?
Hey, what's the issue you're hitting?
You need to make sure the model can run well with TensorRT before you deploy it in DeepStream; a quick sanity check is sketched below. If you hit an issue related to TensorRT itself, please create a topic in the TensorRT forum.
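For an ONNX model, one quick compatibility check is to try parsing it and building an engine with TensorRT's Python API before touching DeepStream; if this step fails, the parser errors usually name the unsupported layer. Here is a minimal sketch, assuming TensorRT 8.x Python bindings and a placeholder path `model.onnx`:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def check_onnx_model(onnx_path: str) -> bool:
    """Try to parse an ONNX file and build a TensorRT engine from it."""
    builder = trt.Builder(TRT_LOGGER)
    # Explicit-batch network, as required for ONNX models in TensorRT 8.x
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    )
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            # Print every parser error; these identify the unsupported ops
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return False

    config = builder.create_builder_config()
    # 1 GiB workspace; set_memory_pool_limit is the TensorRT >= 8.4 API
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)

    serialized_engine = builder.build_serialized_network(network, config)
    if serialized_engine is None:
        return False

    # Save the engine so DeepStream can load it via model-engine-file
    with open(onnx_path + ".engine", "wb") as f:
        f.write(serialized_engine)
    return True

if __name__ == "__main__":
    ok = check_onnx_model("model.onnx")  # placeholder path
    print("TRT compatible" if ok else "TRT incompatible")
```

The same check can be done from the command line with the `trtexec` tool that ships with TensorRT, e.g. `trtexec --onnx=model.onnx`. If `trtexec` can build an engine, DeepStream's Gst-nvinfer plugin should generally be able to consume the model as well.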