• Hardware Platform (Jetson / GPU): Jetson Orin NX
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only): 6.0
• TensorRT Version: 8.6.2.3-1+cuda12.2
• Issue Type (questions, new requirements, bugs): bug
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing): I trained a vehicle make net (modified to 35 output classes) using TAO classification_tf1. I exported my ONNX model and ran it on my Jetson device using the DeepStream nvinfer plugin (as an SGIE, see the config sketch below), but I get practically the same label on every iteration.
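For reference, a minimal SGIE nvinfer config for a TAO classification model looks roughly like the sketch below; the file paths, the blob name predictions/Softmax, and the GIE ids are placeholder assumptions, not the poster's actual values:

```
[property]
gpu-id=0
# placeholder paths; point these at your exported model and labels
onnx-file=vehicle_make_net.onnx
labelfile-path=labels_ds.txt
batch-size=1
network-mode=2                 # 0=FP32, 1=INT8, 2=FP16
network-type=1                 # 1 = classifier
process-mode=2                 # 2 = run on detected objects (SGIE)
gie-unique-id=2
operate-on-gie-id=1
# assumed blob name; take the real one from the engine log
output-blob-names=predictions/Softmax
classifier-threshold=0.2
```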
I ran an evaluation using TAO Deploy and got good results. I also ran inference on the network with onnxruntime and obtained good results. I do not know why the DeepStream inference returns the same label on every iteration.
My label file: labels_ds.txt (229 Bytes)
I do not know why my output does not match. My output layer is an array of per-class probabilities, each in the range [0, 1] and summing to 1, which is exactly what the parsing function needs.
After the TensorRT engine file is generated, you can check the TensorRT log to see the model's input and output layers. E.g., with our sample multi_task classifier model, we get:

INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:612 [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input_1 3x80x60
1 OUTPUT kFLOAT season/Softmax 4x1x1
2 OUTPUT kFLOAT category/Softmax 10x1x1
3 OUTPUT kFLOAT base_color/Softmax 11x1x1
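To connect that log back to the config: the OUTPUT layer names it prints are what output-blob-names should list. For the multi_task sample above, that would look something like the following (a sketch, not a verbatim sample config):

```
[property]
# layer names copied from the [Implicit Engine Info] log above
output-blob-names=season/Softmax;category/Softmax;base_color/Softmax
```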
What do you get from the log?
Can you post your “…/models/vehicle_make_net_2/labels_ds.txt” file?
After investigating, I found the bug. When I exported the model to ONNX using TAO export, it created the nvinfer config file with the offsets and the net-scale-factor. I had used torch preprocessing to train my net in TAO, but TAO export wrote the preprocessing config for the caffe type. So, after doing some math, I modified:
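The post ends before the modified values, but the conversion itself is mechanical, so here is a hedged sketch of it. DeepStream's nvinfer preprocessing computes y = net-scale-factor * (x - offsets), while torch-style preprocessing computes y = (x / 255 - mean) / std; equating the two gives net-scale-factor = 1 / (255 * std) and offsets = 255 * mean. The numbers below assume the standard ImageNet statistics often used for torch preprocessing (mean = [0.485, 0.456, 0.406], std ≈ [0.229, 0.224, 0.225], averaged to a single scalar because net-scale-factor is a scalar); substitute your own training values if they differ:

```
[property]
# torch:   y = (x / 255 - mean) / std
# nvinfer: y = net-scale-factor * (x - offsets)
# => net-scale-factor = 1 / (255 * std), offsets = 255 * mean
# assumed ImageNet stats; std averaged to 0.226 since the factor is a scalar
net-scale-factor=0.01735
offsets=123.675;116.28;103.53
# torch preprocessing expects RGB input, unlike caffe's BGR
model-color-format=0
```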