• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.2
• TensorRT Version: 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only): 545.23.08
• Issue Type (questions, new requirements, bugs): Accuracy degradation
Hi, I am porting my object detection model to DeepStream, but I have noticed a degradation in performance (accuracy). For DeepStream inference with the nvinfer plugin, I added a “BatchedNMSDynamic_TRT” layer to the model. I ran the same model on the same device with both DeepStream and my TensorRT Python inference code, but their outputs do not match exactly. I made sure the input data is identical by feeding the resized image data dumped from DeepStream into the TensorRT Python code. Even when both pipelines detect the same object, there is a slight difference in the bounding-box coordinates. In addition, the total number of objects predicted by DeepStream is lower than with the TensorRT Python inference code.
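To quantify the mismatch rather than eyeballing it, I compare the two detection lists with a small script. This is just a sketch under my own assumptions: boxes are `(x1, y1, x2, y2)` tuples in the same pixel space, and `match_detections` is a helper I wrote, not part of DeepStream or TensorRT.

```python
# Hypothetical comparison of DeepStream vs. TensorRT Python detections.
# Assumed box format: (x1, y1, x2, y2) in the same coordinate space.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_detections(ds_boxes, trt_boxes, iou_thresh=0.5):
    """Greedily match DeepStream boxes to TensorRT boxes by IoU.

    Returns (matched index pairs with their IoU,
             unmatched DeepStream indices,
             unmatched TensorRT indices).
    """
    unmatched_trt = list(range(len(trt_boxes)))
    pairs, unmatched_ds = [], []
    for i, db in enumerate(ds_boxes):
        best_j, best_iou = None, iou_thresh
        for j in unmatched_trt:
            v = iou(db, trt_boxes[j])
            if v >= best_iou:
                best_j, best_iou = j, v
        if best_j is None:
            unmatched_ds.append(i)
        else:
            pairs.append((i, best_j, best_iou))
            unmatched_trt.remove(best_j)
    return pairs, unmatched_ds, unmatched_trt
```

Running this per frame makes both symptoms measurable: matched pairs with IoU below 1.0 show the coordinate drift, and a non-empty unmatched-TensorRT list shows the objects DeepStream drops.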
I would appreciate help resolving this issue so I can port my solution to NVIDIA devices; the ultimate goal is to leverage DeepStream for further optimization and acceleration. If any information is unclear or additional details are required, please let me know and I will provide them promptly.
Your guidance and support are greatly appreciated.
Thank you.