Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
Jetson TX2/Xavier, RTX2060
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
Jetson : 7.1.3-1+cuda10.2
dGPU : 7.2.1-1+cuda11.1
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
I was working on a unit test for an automated test system that verifies inference results on both Jetson and dGPU. I expected them to produce exactly the same result, but that assumption seems to be wrong.
Given the same input video and model (e.g. YoloV3 or a .uff-based model), can we expect exactly the same result from both Jetson and dGPU in a DeepStream pipeline?
So, I ran `trtexec` using sample binary data and a single .uff model file on the two platforms and got exactly the same values. However, when the same .uff model is integrated into a DeepStream pipeline and run on a video stream, I observe different inference results. Is this expected behaviour?
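For the automated check itself, one workaround is to compare the raw output tensors with a numerical tolerance instead of requiring bit-exact equality, since different GPU architectures can legitimately differ in kernel selection and floating-point accumulation order. A minimal sketch, assuming the outputs from each platform have been dumped as raw float32 binaries (the file names here are placeholders, not actual DeepStream output):

```python
import numpy as np

def outputs_match(path_a, path_b, rtol=1e-3, atol=1e-5):
    """Compare two raw float32 tensor dumps within a tolerance.

    Bit-exact equality across Jetson and dGPU is generally not
    guaranteed (different kernels, fused ops, FP accumulation order),
    so a relative/absolute tolerance is used instead.
    """
    a = np.fromfile(path_a, dtype=np.float32)
    b = np.fromfile(path_b, dtype=np.float32)
    if a.shape != b.shape:
        return False
    return bool(np.allclose(a, b, rtol=rtol, atol=atol))

if __name__ == "__main__":
    # Self-contained demo with synthetic tensors standing in for
    # the Jetson and dGPU dumps (file names are hypothetical).
    ref = np.random.rand(1000).astype(np.float32)
    near = ref + np.float32(1e-6)   # tiny FP drift: should pass
    far = ref + np.float32(0.1)     # real divergence: should fail
    ref.tofile("jetson_out.bin")
    near.tofile("dgpu_out.bin")
    far.tofile("bad_out.bin")
    print(outputs_match("jetson_out.bin", "dgpu_out.bin"))  # True
    print(outputs_match("jetson_out.bin", "bad_out.bin"))   # False
```

The tolerances would need tuning per model; the point is only that the unit test asserts "close enough", not bitwise identity.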
After looking at the DeepStream code, I noticed that Jetson uses the RGBA color format while dGPU uses RGB. There are also places where Jetson has to use `NvBufSurfaceMapEglImage`, which is not required on dGPU. Could these be contributing factors to the difference in inference results on the two platforms?
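As a quick sanity check on the color-format theory: dropping the alpha channel from an RGBA buffer leaves the RGB pixel data untouched, so the format alone should not change the tensor fed to the network; what can change it is the surrounding preprocessing (scaling, interpolation, normalization path). A small numpy sketch illustrating both points (the resize functions are simplified stand-ins, not DeepStream's actual converters):

```python
import numpy as np

def rgba_to_rgb(frame_rgba):
    # Stripping alpha is lossless for the RGB planes.
    return frame_rgba[..., :3]

def resize_nearest(img, out_h, out_w):
    # Nearest-neighbor: each output pixel copies one input pixel.
    h, w = img.shape[:2]
    ys = np.arange(out_h) * h // out_h
    xs = np.arange(out_w) * w // out_w
    return img[ys][:, xs]

def resize_area(img, factor):
    # Average-pooling downsample (integer factor) as a stand-in
    # for a different interpolation path.
    h, w, c = img.shape
    return img.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

rng = np.random.default_rng(0)
rgb = rng.random((8, 8, 3)).astype(np.float32)
rgba = np.concatenate([rgb, np.ones((8, 8, 1), np.float32)], axis=-1)

# 1) RGBA vs RGB carries identical pixel data once alpha is dropped.
print(np.array_equal(rgba_to_rgb(rgba), rgb))  # True

# 2) Two different scaling paths produce different input tensors,
#    which can shift detection scores/boxes downstream.
a = resize_nearest(rgb, 4, 4)
b = resize_area(rgb, 2)
print(np.allclose(a, b))  # False
```

So if the two platforms take different conversion/scaling paths before the network, small input-tensor differences (and hence output differences) are plausible even with an identical engine.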
I tested on a 1080 Ti and an RTX 2060 and the results were identical on both of these dGPUs, so it seems the inference result is the same across GPUs within the same platform. Is this correct?
I hope to gain some insight into, or confirmation of, whether the behaviour I am observing is normal.