Parse outputLayersInfo for a ResNet classifier

• Hardware Platform (Jetson / GPU) Jetson AGX ORIN
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only) 5.0
• TensorRT Version 8.4.1.5
• NVIDIA GPU Driver Version (valid for GPU only)

I trained a custom ResNet-18 classification model and exported it PyTorch -> ONNX -> TensorRT.

I can confirm that the model correctly classifies my images. I parse its output with the following Python script:
run_trt.py (4.1 KB)

The script is a modification from the tutorial: TensorRT/tutorial-runtime.ipynb at main · NVIDIA/TensorRT · GitHub

The problem is that now I don't know how to parse its output with /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer_customparser/nvdsinfer_customclassifierparser.cpp.

In particular, I don't know how to get the class probabilities from

float *outputCoverageBuffer = (float *)outputLayersInfo[l].buffer;

Any idea on how to proceed? Is there a tutorial I can follow?

I don’t know the details of your model’s output; the model’s output data is in the NvDsInferLayerInfo buffer.

After a long debugging session, I realized that the problem was the input image normalization. Thanks.
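For others hitting the same issue: nvinfer preprocesses input as `y = net-scale-factor * (x - offsets)` per channel, while torchvision-trained models expect `(x/255 - mean) / std`. Since nvinfer uses a single scale factor, a common approximation is to fold the average std into it. A sketch of the relevant config keys, assuming standard ImageNet mean/std (verify against your own training pipeline):

```ini
[property]
# nvinfer applies: y = net-scale-factor * (x - offsets)
# torchvision applies: (x/255 - mean) / std, per channel
# Approximate the per-channel std by its average (~0.226):
# net-scale-factor = 1 / (255 * 0.226)
net-scale-factor=0.01735
# offsets = mean * 255 for each channel (0.485, 0.456, 0.406)
offsets=123.675;116.28;103.53
# torchvision models expect RGB input (0 = RGB in DeepStream)
model-color-format=0
```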
