Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only): 6.0
• TensorRT Version: 8.6.2
• Issue Type (questions, new requirements, bugs): Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used and other details for reproducing):
I am working on deploying a custom model with DeepStream. So far, I have been able to build the pipeline and convert my Detectron2 model to a TensorRT-optimized ONNX file. However, when I run this model, it only annotates one of the two object classes in our model, even though we have verified that the model detects both classes in its non-TensorRT version. From what I have seen, this is a fairly niche combination of technologies on this forum, so I am mostly looking for a good way to approach this debugging task. I am fairly new to DeepStream and am wondering: what is the general approach for problems like this? My first instinct is to extract the raw output from the model, but I am unsure whether this is wise or how best to do it.
Here are the config files. What is the best way to get a log, and what information would be helpful? I am launching with the deepstream-app -c deepstream_app_config.txt command. I saved the terminal output to a text file, but I can attach more if needed.
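To make that more concrete: the approach I am considering is a custom bounding-box parser library for nvinfer (the parse-bbox-func-name / custom-lib-path settings in the infer config) that does nothing except dump the raw output layers, so I can see exactly what the engine hands back. This is only a rough, untested sketch of what I have in mind; the function and library names are placeholders, and it assumes the engine's outputs are FP32.

```cpp
// dump_parser.cpp - debug-only custom parser that prints the raw output
// layers nvinfer receives from the TensorRT engine and produces no objects.
// Build (roughly):
//   g++ -shared -fPIC dump_parser.cpp \
//       -I/opt/nvidia/deepstream/deepstream/sources/includes \
//       -o libdump_parser.so
// Then in the nvinfer config (placeholder names):
//   parse-bbox-func-name=NvDsInferParseCustomDumpLayers
//   custom-lib-path=./libdump_parser.so
#include <algorithm>
#include <cstdio>
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomDumpLayers(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    (void) networkInfo;
    (void) detectionParams;

    // Print each output layer's name, dimensions and first few values.
    for (const auto &layer : outputLayersInfo) {
        printf("layer %s: dims [", layer.layerName);
        for (unsigned int d = 0; d < layer.inferDims.numDims; ++d)
            printf("%s%u", d ? "," : "", layer.inferDims.d[d]);
        printf("], %u elements\n", layer.inferDims.numElements);

        if (layer.dataType != FLOAT) {
            printf("  (not FP32, skipping value dump)\n");
            continue;
        }
        const float *data = static_cast<const float *>(layer.buffer);
        unsigned int n = std::min(layer.inferDims.numElements, 16u);
        for (unsigned int i = 0; i < n; ++i)
            printf("  [%u] = %f\n", i, data[i]);
    }

    // Real box/score/class parsing would go here; returning an empty list
    // keeps the pipeline running while the raw outputs are inspected.
    objectList.clear();
    return true;
}

// Compile-time check that the function matches the expected prototype.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomDumpLayers);
```

If there is a more standard way to get at the raw tensors from deepstream-app, I am happy to use that instead.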
I did some further testing on my own and managed to print the output layer's values (I have this code on another machine, so I cannot post the output at the moment, but I can later if it would be useful). One thing I noticed is that the model returns several values for objects matching the first class index of our classifier, but returns nothing at all for the second class, rather than just returning detections with a low probability. This leads me to believe the problem may be in the Detectron2 -> ONNX -> TensorRT conversion process rather than in the DeepStream pipeline, but that is only a hypothesis supported by a single data point.
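To test that hypothesis, the next thing I plan to try is inspecting the engine that nvinfer serialized, to confirm the output tensors are the ones the Detectron2 export is supposed to produce (e.g. that the score/class outputs are present and have sane shapes). Another rough, untested sketch; the engine filename is just a placeholder for whichever .engine file nvinfer cached:

```cpp
// inspect_engine.cpp - print the I/O tensor names and shapes of a serialized
// TensorRT engine.
// Build (roughly): g++ inspect_engine.cpp -lnvinfer -o inspect_engine
#include <fstream>
#include <iostream>
#include <memory>
#include <vector>
#include <NvInfer.h>

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char *msg) noexcept override {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
};

int main(int argc, char **argv)
{
    // Placeholder default name; pass the actual cached engine as argv[1].
    const char *path = argc > 1 ? argv[1] : "model_b1_gpu0_fp32.engine";
    std::ifstream f(path, std::ios::binary);
    if (!f) {
        std::cerr << "cannot open " << path << std::endl;
        return 1;
    }
    std::vector<char> blob((std::istreambuf_iterator<char>(f)),
                           std::istreambuf_iterator<char>());

    Logger logger;
    std::unique_ptr<nvinfer1::IRuntime> runtime(
        nvinfer1::createInferRuntime(logger));
    std::unique_ptr<nvinfer1::ICudaEngine> engine(
        runtime->deserializeCudaEngine(blob.data(), blob.size()));
    if (!engine) {
        std::cerr << "failed to deserialize " << path << std::endl;
        return 1;
    }

    // List every input/output tensor with its dimensions.
    for (int i = 0; i < engine->getNbIOTensors(); ++i) {
        const char *name = engine->getIOTensorName(i);
        nvinfer1::Dims dims = engine->getTensorShape(name);
        bool isInput = engine->getTensorIOMode(name) ==
                       nvinfer1::TensorIOMode::kINPUT;
        std::cout << (isInput ? "input  " : "output ") << name << ": [";
        for (int d = 0; d < dims.nbDims; ++d)
            std::cout << (d ? "," : "") << dims.d[d];
        std::cout << "]" << std::endl;
    }
    return 0;
}
```

If the outputs are already missing or mis-shaped at the engine level, that would point at the export step rather than the DeepStream side; if they look right, I will go back to the parsing/config side.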
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.