Please provide complete information as applicable to your setup.
I am trying to build a simple pipeline (appsrc → gst-nvinfer (detector) → fakesink) with a custom model (SSH). I generated the TensorRT engine file, and it runs inference correctly through the TensorRT inference API. But when I added this model to the pipeline, the results differed significantly from those produced by the TensorRT API. I checked the preprocessing in gst-nvinfer (by fetching the output of NvBufSurfTransform in gstnvinfer.cpp), and it looks normal. I also checked the preprocessing-related properties and did not find any error.
My question is: Is there a way to debug this, or a good direction for locating the error? What could cause the difference in results between DeepStream and TensorRT?
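One common source of divergence is a mismatch between the normalization gst-nvinfer applies (controlled by the `net-scale-factor`, `offsets`, and `model-color-format` keys in the nvinfer config file) and the preprocessing used in the standalone TensorRT script. The sketch below mimics nvinfer's documented formula, y = net-scale-factor * (x - offsets), so it can be compared numerically against your own TRT-side preprocessing. The example values (SSD-style input size, Caffe-style mean offsets) are placeholders, not taken from your setup; substitute the actual values from your config and script.

```python
import numpy as np

def nvinfer_normalize(frame_hwc, net_scale_factor, offsets):
    """Mimic gst-nvinfer's documented per-channel normalization:
    y = net-scale-factor * (x - offsets)."""
    offsets = np.asarray(offsets, dtype=np.float32)
    return net_scale_factor * (frame_hwc.astype(np.float32) - offsets)

# Hypothetical input and config values -- replace with the real ones
# from your nvinfer config file and your TRT inference script.
frame = np.random.randint(0, 256, size=(300, 300, 3), dtype=np.uint8)

deepstream_side = nvinfer_normalize(frame,
                                    net_scale_factor=1.0,
                                    offsets=[104.0, 117.0, 123.0])

# Your TRT-API preprocessing, written out explicitly for comparison.
trt_side = frame.astype(np.float32) - np.array([104.0, 117.0, 123.0],
                                               dtype=np.float32)

# Any nonzero difference here points at a normalization mismatch.
print("max abs diff:", np.abs(deepstream_side - trt_side).max())
```

Beyond normalization, also verify channel order (`model-color-format` RGB vs. BGR against the order your TRT script feeds the engine) and resizing behavior (`maintain-aspect-ratio` vs. a plain resize), since either can silently shift detection results.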
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing the issue.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)