Custom detection ONNX model gives wrong outputs using nvinfer with DeepStream 5.1

Hi @infinitesamsarax ,
Could you try "2. [DS5.0GA_Jetson_GPU_Plugin] Dump the Inference Input" in DeepStream SDK FAQ - #9 by mchi to dump the inference image that is sent to TensorRT enqueue(), and check whether it matches the input used in your standalone sample?
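
In case it helps, here is a minimal sketch (not the exact FAQ patch) of how one could dump the preprocessed input tensor just before enqueue() for comparison; the function name, binding index, dump path, and FP32 assumption are all illustrative and need to be adapted to your pipeline:

```cpp
// Hypothetical helper: copy the device-side input binding to host and write it
// as a raw float dump so it can be diffed against the standalone sample.
#include <cuda_runtime_api.h>
#include <cstdio>
#include <vector>

static void dumpInputBinding(const void* devInput, size_t numElements,
                             const char* path)
{
    std::vector<float> host(numElements);                 // assumes an FP32 input binding
    cudaError_t err = cudaMemcpy(host.data(), devInput,
                                 numElements * sizeof(float),
                                 cudaMemcpyDeviceToHost); // copy preprocessed tensor to host
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMemcpy failed: %s\n", cudaGetErrorString(err));
        return;
    }
    FILE* f = fopen(path, "wb");                          // raw dump, e.g. load with np.fromfile
    if (f) {
        fwrite(host.data(), sizeof(float), numElements, f);
        fclose(f);
    }
}

// Example call site (hypothetical), placed just before the TensorRT enqueue call:
//   dumpInputBinding(bindings[inputIndex], 3 * netH * netW, "/tmp/ds_infer_input.bin");
```

You can then load both dumps (this one and the one from your standalone sample) and compare them element-wise to see whether the preprocessing (scaling, mean subtraction, channel order) differs.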

Thanks!