Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Orin
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only) 5.0
• TensorRT Version 8.
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs): questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name — for which plugin or which sample application — and the function description.)
I am trying to run a TensorFlow SavedModel within a DeepStream Python application. I first tested the model with a standalone Triton Inference Server and a gRPC client, and it worked as expected.
When migrating the model to nvinferserver, transposing the same model config keeps giving me a dimension error, even though the inputs are provided in the correct shape. Can you please give me some insight into how to debug this?
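While debugging, I wrote a small standalone check to rule out a layout mismatch (e.g. NHWC vs. NCHW) between the model's declared dims and the actual frame shape. The function name is my own, not part of any Triton or DeepStream API:

```python
def dims_compatible(expected, actual):
    """Return True if a concrete tensor shape matches a model's declared
    dims, treating -1 in the declared dims as a wildcard."""
    if len(expected) != len(actual):
        return False
    return all(e == -1 or e == a for e, a in zip(expected, actual))

# U-Net input declared as [-1, -1, 3]: dynamic H and W, fixed 3 channels.
print(dims_compatible([-1, -1, 3], [512, 512, 3]))  # NHWC frame -> True
print(dims_compatible([-1, -1, 3], [3, 512, 512]))  # NCHW frame -> False
```

An NCHW-ordered buffer fed to an NHWC model fails exactly this kind of check, which is one common source of dimension errors.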
The model I use is a TF2 U-Net with input shape [-1, -1, 3] and output shape [-1, -1, 1]. My preprocess config sets:
channel_offsets: [0, 0, 0]
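For context, the relevant section of my gst-nvinferserver config looks roughly like this (model name and repo path are placeholders; everything besides channel_offsets is illustrative):

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {
      model_name: "unet"          # placeholder
      version: -1
      model_repo {
        root: "./triton_model_repo"   # placeholder
        strict_model_config: false
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_NHWC
    normalize {
      scale_factor: 1.0
      channel_offsets: [0, 0, 0]
    }
  }
}
```

One thing I am unsure about is whether the fully dynamic H/W dims ([-1, -1]) are the problem here, since the DeepStream preprocessor presumably needs a concrete size to scale frames to.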
Why wouldn't this work, given that Triton (TRTIS) is supposed to generate a proper model config automatically when one is not specified?