All of this is on a Jetson Xavier NX with JetPack 4.5.1.
I ran the hand pose model from jetson-inference (jetson-inference/posenet.md at master · dusty-nv/jetson-inference · GitHub) with the included app (./posenet) on an mp4 file containing a static image, and saved the output heatmaps from the inference. Then, with the same .engine and the same input video file, I ran inference in a DeepStream pipeline and saved the resulting heatmaps from nvinfer.
When I compared the two results (the output tensor from DS and the output tensor from the dusty_nv app), they were very different.
I suspect the discrepancy comes from how DS does input scaling and normalization versus how the dusty_nv program does it.
Input video: 1280x720
NN engine: input 3x224x224, output1 21x56x56, output2 40x56x56
DS pipeline: uridecodebin (1280x720 static-image video file) -> nvstreammux (1280x720, enable-padding set to TRUE) -> nvinfer (NN model, net-scale-factor is 0.003921…) -> display sink
The .engine is FP16.
I would really appreciate it if NVIDIA's team could guide me in the right direction so that I can produce the same output tensor from both DS and the dusty_nv app.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, for which plugin or which sample application, and the function description.)
Could you try comparing the tensor values before the output parser?
Since the parser is model-specific, it's possible that your model's output representation differs from what the default DeepStream parser expects.
It would be good to first check whether the difference comes from the parser.
• Hardware Platform (Jetson / GPU): Jetson Xavier NX dev kit
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5.1
• TensorRT Version: 7.1.3-1+cuda10.2
• NVIDIA GPU Driver Version (valid for GPU only): N/A
• Issue Type (questions, new requirements, bugs): DS nvinfer inference discrepancy with the dusty_nv TensorRT inference app
• How to reproduce the issue? Run with: bash App.sh (the input video is in the input_video folder; add its full path in run_config.json).
For the dusty_nv app (jetson-inference/posenet.md at master · dusty-nv/jetson-inference · GitHub), after building from source just run: ./posenet --network=resnet18-hand <address to video> (must be the same as in run_config.json). I got the BIN files by adding this to /jetson-inference/c/poseNet.cpp after line 534-
DS app download link: Easyupload.io (password: asdf)
• Requirement details (This is for new requirement. Including the module name-for which plugin or for which sample application, the function description): DeepStream 5.1, plus glib-2.0, gstreamer-1.0, gstreamer-base-1.0, gstreamer-video-1.0, gstreamer-rtp-1.0, gstreamer-plugins-base-1.0, gstreamer-plugins-good-1.0, x11, opencv4