dusty-nv jetson-inference app vs. DS nvinfer output tensor discrepancy

All of this is on a Jetson Xavier NX with JetPack 4.5.1.

I ran the hand pose model from jetson-inference (jetson-inference/posenet.md at master · dusty-nv/jetson-inference · GitHub) with the included app (./posenet) on an mp4 file containing a static image, and saved the output heatmaps from the inference. Then, with the same .engine and the same input video file, I ran inference in a DS pipeline and saved the heatmaps produced by nvinfer.

When I compared the two results (the output tensor from DS and the output tensor from the dusty_nv app), they were very different.

I suspect the discrepancy comes from how DS does input scaling and normalization compared to the dusty_nv program.

The input video is 1280x720.
The NN engine looks like this: input 3x224x224, output1 21x56x56 (65,856 floats), output2 40x56x56 (125,440 floats); these counts match the dump sizes in the code further down.

DS pipeline: uridecodebin (1280x720 static-image video file) -> nvstreammux (1280x720, enable-padding=TRUE) -> nvinfer (NN model, net-scale-factor is 0.003921…) -> display sink
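For reference, this is roughly the equivalent gst-launch-1.0 line (the file path and the display-sink tail are illustrative; hand_pgie_config.txt is my nvinfer config):

  gst-launch-1.0 uridecodebin uri=file:///path/to/input_1280x720.mp4 ! m.sink_0 \
      nvstreammux name=m batch-size=1 width=1280 height=720 enable-padding=1 ! \
      nvinfer config-file-path=hand_pgie_config.txt ! \
      nvegltransform ! nveglglessink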

The .engine is FP16.

I would really appreciate it if NVIDIA's team could point me in the right direction so that I can produce the same output tensor from both DS and the dusty_nv app.

Hi,

Could you share the complete information with us?

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (For bugs: include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (For new requirements: include the module name, i.e., for which plugin or sample application, and the function description.)

Could you try comparing the tensor values before the output parser?
Since the parser is model-specific, it's possible that your model's representation differs from what the default DeepStream parser expects.

It would be good to first check whether the difference comes from the parser.
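If it helps, one way to get the raw tensors on the DeepStream side is to set output-tensor-meta=1 in the nvinfer config and read the attached NvDsInferTensorMeta in a probe on the nvinfer src pad, as the deepstream-infer-tensor-meta-test sample does. A minimal sketch, assuming FP32 host output buffers (check layer->dataType if your bindings differ):

  // Sketch: dump raw gst-nvinfer output tensors before any parsing.
  // Requires output-tensor-meta=1 in the nvinfer config file.
  #include <fstream>
  #include <string>
  #include "gstnvdsmeta.h"
  #include "gstnvdsinfer.h"

  static GstPadProbeReturn
  pgie_src_probe(GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
  {
    GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER(info);
    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);
    if (!batch_meta)
      return GST_PAD_PROBE_OK;

    for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
         l_frame = l_frame->next) {
      NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)l_frame->data;
      for (NvDsMetaList *l_user = frame_meta->frame_user_meta_list; l_user;
           l_user = l_user->next) {
        NvDsUserMeta *user_meta = (NvDsUserMeta *)l_user->data;
        if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
          continue;
        NvDsInferTensorMeta *tmeta =
            (NvDsInferTensorMeta *)user_meta->user_meta_data;
        for (unsigned int i = 0; i < tmeta->num_output_layers; i++) {
          NvDsInferLayerInfo *layer = &tmeta->output_layers_info[i];
          const float *data = (const float *)tmeta->out_buf_ptrs_host[i];
          // One raw float32 file per output layer, e.g. "<layerName>_ds.bin".
          std::ofstream f(std::string(layer->layerName) + "_ds.bin",
                          std::ios::binary);
          f.write(reinterpret_cast<const char *>(data),
                  layer->inferDims.numElements * sizeof(float));
        }
      }
    }
    return GST_PAD_PROBE_OK;
  }

Attach it with gst_pad_add_probe() on the nvinfer element's src pad (GST_PAD_PROBE_TYPE_BUFFER); the resulting files can then be diffed directly against the jetson-inference dumps.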

Thanks.

• Hardware Platform (Jetson / GPU): Jetson Xavier NX dev kit
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5.1
• TensorRT Version: 7.1.3-1+cuda10.2
• NVIDIA GPU Driver Version (valid for GPU only): N/A
• Issue Type (questions, new requirements, bugs): DS nvinfer inference discrepancy with the dusty_nv TensorRT inference app
• How to reproduce the issue? Run with this command: bash App.sh (the input video is in the input_video folder; add its full path in run_config.json).
For the dusty_nv app (jetson-inference/posenet.md at master · dusty-nv/jetson-inference · GitHub), after building from source, run: ./posenet --network=resnet18-hand <path to video> (the video must be the same as in run_config.json). I got the BIN files by adding the following to jetson-inference/c/poseNet.cpp after line 534:

  float *cmap = mOutputs[0].CPU;
  float *paf = mOutputs[1].CPU;
  // ---- paste from here ----
  // Copy the confidence-map output (21x56x56 = 65,856 floats) and the
  // part-affinity-field output (40x56x56 = 125,440 floats) off the mapped
  // CPU buffers. Requires <fstream> and <vector> to be included at the
  // top of poseNet.cpp.
  std::vector<float> heatVec(cmap, cmap + 65856);
  std::vector<float> tagsVec(paf, paf + 125440);

  std::cout << "Size of cmap " << mOutputs[0].size << "\n";
  std::cout << "Size of cmap vec " << heatVec.size() << "\n";
  std::cout << "Size of paf " << mOutputs[1].size << "\n";

  // Dump both tensors as raw float32 so they can be diffed against the
  // DeepStream-side dumps. tensorB = confidence maps, tensorA = PAFs.
  std::ofstream data_file1("tensorB_official.bin",
                           std::ios::out | std::ios::binary);
  data_file1.write(reinterpret_cast<const char *>(heatVec.data()),
                   heatVec.size() * sizeof(float));
  data_file1.close();

  std::ofstream data_file2("tensorA_official.bin",
                           std::ios::out | std::ios::binary);
  data_file2.write(reinterpret_cast<const char *>(tagsVec.data()),
                   tagsVec.size() * sizeof(float));
  data_file2.close();

  std::cout << "Size of tensorB " << heatVec.size() << "\n";
  std::cout << "Size of tensorA " << tagsVec.size() << "\n";
  exit(0);  // stop after the first frame so each file holds exactly one dump
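For reference, comparing two such raw float32 dumps element-wise can be done with a small stand-alone program along these lines (a minimal sketch; the load_bin helper and file names are illustrative, not part of either app):

  // compare.cpp: element-wise comparison of two raw float32 dumps.
  #include <cmath>
  #include <fstream>
  #include <iostream>
  #include <vector>

  static std::vector<float> load_bin(const char *path)
  {
    // Open at the end to learn the file size, then read it all back.
    std::ifstream f(path, std::ios::binary | std::ios::ate);
    std::streamsize bytes = f.tellg();
    f.seekg(0, std::ios::beg);
    std::vector<float> v(bytes / sizeof(float));
    f.read(reinterpret_cast<char *>(v.data()), bytes);
    return v;
  }

  int main(int argc, char **argv)
  {
    if (argc != 3) {
      std::cerr << "usage: compare <a.bin> <b.bin>\n";
      return 1;
    }
    std::vector<float> a = load_bin(argv[1]);
    std::vector<float> b = load_bin(argv[2]);
    if (a.size() != b.size()) {
      std::cerr << "size mismatch: " << a.size() << " vs " << b.size() << "\n";
      return 1;
    }
    float max_diff = 0.f;
    size_t max_idx = 0;
    for (size_t i = 0; i < a.size(); i++) {
      float d = std::fabs(a[i] - b[i]);
      if (d > max_diff) { max_diff = d; max_idx = i; }
    }
    std::cout << "max |a-b| = " << max_diff << " at index " << max_idx << "\n";
    return 0;
  }

Run as, e.g., ./compare tensorB_official.bin <DS-side dump>.bin.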

DS app download link: Easyupload.io
Password: asdf
• Requirement details: DeepStream 5.1 with glib-2.0, gstreamer-1.0, gstreamer-base-1.0, gstreamer-video-1.0, gstreamer-rtp-1.0, gstreamer-plugins-base-1.0, gstreamer-plugins-good-1.0, x11, and opencv4

My comparison of the outputs:
comparison_squareIn_handpose_DS_OG_.html (589.1 KB)

Hi,

Thanks for the feedback.

You can find the pre-processing variables in the [property] section of hand_pgie_config.txt.
The function used by DeepStream is documented here:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html

y = net-scale-factor * (x - mean)

Ex.

[property]
net-scale-factor=0.0039215697906911373
offsets=127.5;127.5;127.5
...

Please first check whether the mean (offsets) and scale (net-scale-factor) are identical to those in the jetson-inference source.
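Note that if jetson-inference divides by a per-channel standard deviation (trt_pose-style models typically use the ImageNet mean/std; please verify in jetson-inference/c/poseNet.cpp), a single scalar net-scale-factor cannot reproduce that exactly unless all three std values are equal. A quick sketch of the two formulas side by side, where the ImageNet values are an assumption:

  // Compare DeepStream's y = net-scale-factor * (x - offset) with a
  // per-channel (x/255 - mean) / std normalization on one pixel value.
  #include <cstdio>

  int main()
  {
    const float x = 200.0f;  // raw 8-bit pixel value (R channel)

    // DeepStream nvinfer, values from the config example above:
    const float scale = 0.0039215697906911373f;
    const float offset = 127.5f;
    const float y_ds = scale * (x - offset);

    // ASSUMED jetson-inference style (ImageNet mean/std for R; verify):
    const float m = 0.485f, s = 0.229f;
    const float y_ji = (x / 255.0f - m) / s;

    std::printf("DS: %f   jetson-inference (assumed): %f\n", y_ds, y_ji);
    return 0;
  }

If the pre-processing formulas differ like this, the input tensors, and therefore the output heatmaps, will not match regardless of the parser.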
Thanks.
