DeepStream inference outputs the same label

• Hardware Platform (Jetson / GPU): Jetson Orin NX
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only): 6.0
• TensorRT Version: 8.6.2.3-1+cuda12.2
• Issue Type (questions, new requirements, bugs): bugs
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing): I have trained a vehicle make net (modified to 35 output classes) using TAO classification tf1. I exported my ONNX model and ran it on my Jetson device using the DeepStream nvinfer plugin (as an SGIE), but I get practically the same label on every iteration.
I ran evaluation using TAO Deploy and got good results. I also ran inference on the net with onnxruntime and obtained good results. I do not know why my DeepStream inference returns the same label on every iteration.
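For reference, a minimal sketch of the kind of onnxruntime sanity check described above; the model path is taken from the config below, and the input here is only a placeholder (a real check must preprocess the crop exactly as during training):

  import numpy as np
  import onnxruntime as ort

  # Path assumed from the nvinfer config below
  sess = ort.InferenceSession("../models/vehicle_make_net_2/final_model.onnx",
                              providers=["CPUExecutionProvider"])
  inp = sess.get_inputs()[0]
  print(inp.name, inp.shape)                 # e.g. something like [batch, 3, 224, 224]

  # Placeholder input; replace with a crop preprocessed the same way as in training
  x = np.random.rand(1, 3, 224, 224).astype(np.float32)
  probs = sess.run(None, {inp.name: x})[0]   # softmax output, expected shape (1, 35)
  print(probs.argmax(axis=1), probs.max(axis=1))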
My label file:
labels_ds.txt (229 Bytes)

This is my DeepStream config file:

property:
  gpu-id: 0
  net-scale-factor: 1.0
  offsets: 103.939;116.779;123.68
  onnx-file: ../models/vehicle_make_net_2/final_model.onnx
  labelfile-path: ../models/vehicle_make_net_2/labels_ds.txt
  batch-size: 2
  num-detected-classes: 35
  # network-mode: 0=FP32, 1=INT8, 2=FP16
  network-mode: 0
  input-object-min-width: 64
  input-object-min-height: 64
  model-color-format: 1
  gpu-id: 0
  gie-unique-id: 2
  process-mode: 2
  operate-on-gie-id: 1
  operate-on-class-ids: 0
  is-classifier: 1
  network-type: 1
  output-blob-names: predictions
  classifier-async-mode: 1
  classifier-threshold: 0.7
  infer-dims: 3;224;224
  maintain-aspect-ratio: 0
  output-tensor-meta: 0

What is the model’s output? The layer name, the dimensions and the meaning of the data.

Hi @josemiad,
It always returns the first class in labels_ds.txt, right? I should configure each class on a new row in labels_ds.txt.

The layer name is predictions, the output dimension is 35, and the net has a softmax output.

Please refer to the ClassifyPostprocessor::parseAttributesFromSoftmaxLayers() function in /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/nvdsinfer_context_impl_output_parsing.cpp. Your model’s output does not match the default classifier output parsing function, so you may need to either change your model to match the function or write your own classifier postprocessing callback function to get the correct label.
There is a classifier postprocessing customization sample, NVIDIA-AI-IOT/deepstream_lpr_app (sample app code for LPR deployment on DeepStream), which customizes the LPR model’s output parsing in deepstream_lpr_app/nvinfer_custom_lpr_parser/nvinfer_custom_lpr_parser.cpp.
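Roughly, the default parsing in that function amounts to taking the argmax of each softmax output layer and keeping it only if it clears classifier-threshold. A Python illustration of that logic (the real implementation is the C++ function referenced above; this is only a sketch):

  import numpy as np

  def parse_softmax(layer_output, labels, threshold):
      # A flat vector of per-class probabilities is expected
      probs = np.asarray(layer_output).reshape(-1)
      idx = int(probs.argmax())
      if probs[idx] < threshold:
          return None                        # no attribute attached to the object
      return labels[idx], float(probs[idx])

  # Example: 35 classes, classifier-threshold 0.7
  labels = [f"class_{i}" for i in range(35)]        # stand-in for labels_ds.txt
  fake_output = np.random.dirichlet(np.ones(35))    # sums to 1, like a softmax
  print(parse_softmax(fake_output, labels, 0.7))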

I do not know why my output does not match. My output layer is an array of probabilities of the object belonging to each class, with each probability in the range [0, 1] and all probabilities summing to 1. That is exactly what the function needs.

The output dimension is 35, not 1x35. Please read the code to find out why it does not match.

My real output layer size is (-1, 35).
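For reference, the output shape declared in the ONNX graph can be checked with the onnx Python package (path assumed from the config above); a dynamic batch dimension shows up as a named dim_param or -1:

  import onnx

  model = onnx.load("../models/vehicle_make_net_2/final_model.onnx")
  for out in model.graph.output:
      dims = [d.dim_param or d.dim_value for d in out.type.tensor_type.shape.dim]
      print(out.name, dims)                  # expected here: predictions, [<batch>, 35]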

Please answer the question clearly.

  1. After the TensorRT engine file is generated, you can find the model’s input and output layers in the TensorRT log. E.g. with our sample multi_task classifier model, we get
    INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:612 [Implicit Engine Info]: layers num: 4
    0 INPUT kFLOAT input_1 3x80x60
    1 OUTPUT kFLOAT season/Softmax 4x1x1
    2 OUTPUT kFLOAT category/Softmax 10x1x1
    3 OUTPUT kFLOAT base_color/Softmax 11x1x1
    What did you get from the log? (A small script to dump the same information from the engine is sketched after this list.)
  2. Can you post your “…/models/vehicle_make_net_2/labels_ds.txt” file?
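As mentioned in point 1, the same input/output information can also be dumped from the generated engine with the TensorRT Python API; a sketch, with the engine path being whatever file nvinfer wrote next to the ONNX model:

  import tensorrt as trt

  ENGINE_PATH = "final_model.onnx_b2_gpu0_fp32.engine"   # assumed engine file name

  logger = trt.Logger(trt.Logger.INFO)
  runtime = trt.Runtime(logger)
  with open(ENGINE_PATH, "rb") as f:
      engine = runtime.deserialize_cuda_engine(f.read())

  # TensorRT 8.5+ I/O tensor API (TensorRT 8.6 on this setup)
  for i in range(engine.num_io_tensors):
      name = engine.get_tensor_name(i)
      print(name, engine.get_tensor_mode(name), engine.get_tensor_shape(name))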

After investigating, I found the bug. When I exported the model to ONNX using TAO export, it created the nvinfer_config file with the offsets and the net-scale-factor. I trained my net in TAO with the torch preprocessing, but TAO export wrote the preprocessing config for the caffe type. So after doing some math, I modified:

  net-scale-factor: 0.017507
  offsets: 123.675;116.28;103.53

Now it works well and I get the same results as with TAO.
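For reference, this is roughly how those two values fall out of a torch-style normalization, assuming the usual ImageNet mean and std: nvinfer applies y = net-scale-factor * (x - offset) on 0-255 pixels, while torch preprocessing applies y = (x / 255 - mean) / std, so offsets = 255 * mean and the scale is about 1 / (255 * std):

  mean = [0.485, 0.456, 0.406]    # assumed ImageNet per-channel mean (0-1 range)
  std  = [0.229, 0.224, 0.225]    # assumed ImageNet per-channel std (0-1 range)

  offsets = [255 * m for m in mean]
  scales  = [1 / (255 * s) for s in std]
  print(offsets)   # [123.675, 116.28, 103.53] -> offsets: 123.675;116.28;103.53
  print(scales)    # ~[0.0171, 0.0175, 0.0174]; nvinfer takes one net-scale-factor,
                   # so a single representative value such as 0.017507 is used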

Glad to hear that! Close the topic.
