Problems integrating a PyTorch Transformers model with DeepStream

• Hardware Platform: Orin AGX
• DeepStream Version: 6.3
• JetPack Version: 5.1.2
• TensorRT Version: 8.5
Hey,
I use gender and age classification models from Hugging Face (built with Transformers in PyTorch). I convert these models to ONNX and integrate them into a DeepStream application. When I run the pipeline, the models are converted to TensorRT engines, but no classification labels are shown.
I set output-blob-names to the output layer name of these models (output), which I found with this script:

import onnx
import onnxruntime

# Load the ONNX model
onnx_path = 'gender_classification_model.onnx'
onnx_model = onnx.load(onnx_path)

# Print all output layers in the ONNX model
output_layers = [node.name for node in onnx_model.graph.output]
print("Output layers:", output_layers)

# Cross-check the input/output names that onnxruntime sees
session = onnxruntime.InferenceSession(onnx_path)
print("Inputs:", [i.name for i in session.get_inputs()])
print("Outputs:", [o.name for o in session.get_outputs()])

There is no warning when I use output-blob-names; the engine loads successfully:

0:00:06.761897556 32790 0x2b23a00 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 3]: deserialized trt engine from :/home/anavid/Desktop/shopanalytics_V2/model_classi_gendre/gender_classification_model.onnx_b2_gpu0_fp16.engine
INFO: [FullDims Engine Info]: layers num: 2
0 INPUT kFLOAT input 3x224x224 min: 1x3x224x224 opt: 2x3x224x224 Max: 2x3x224x224
1 OUTPUT kFLOAT output 2 min: 0 opt: 0 Max: 0

0:00:06.937877182 32790 0x2b23a00 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 3]: Use deserialized engine model: /home/anavid/Desktop/shopanalytics_V2/model_classi_gendre/gender_classification_model.onnx_b2_gpu0_fp16.engine
0:00:06.941190358 32790 0x2b23a00 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary2-nvinference-engine> [UID 3]: Load new model: config_huggings_genre.txt successfully
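One detail visible in the engine log: the output layer carries just two float values per object. If the exported Hugging Face model emits raw logits rather than probabilities (an assumption worth checking against your export code), comparing those values against classifier-threshold=0.5 can silently reject every result, since logits are not bounded to [0, 1]. A minimal sketch of the softmax step that maps two logits to probabilities:

```python
import math

def softmax(logits):
    """Convert raw logits to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw logits for the two gender classes (illustrative values,
# not taken from the actual model output)
logits = [2.3, -1.1]
probs = softmax(logits)
print("Probabilities:", probs)

# A probability threshold of 0.5 is meaningful here; the raw logits
# (2.3 and -1.1) would compare meaninglessly against 0.5.
best = max(range(len(probs)), key=lambda i: probs[i])
print("Predicted class index:", best)
```

If the model does output raw logits, either re-export it with a softmax appended to the graph or use a custom classifier parser in DeepStream.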

I use this configuration file:

[property]
gpu-id=0
net-scale-factor=1
onnx-file=model_classi_gendre/gender_classification_model.onnx
model-engine-file=model_classi_gendre/gender_classification_model.onnx_b2_gpu0_fp16.engine
labelfile-path=model_classi_gendre/labels_gendre.txt
batch-size=2
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
network-type=1
num-detected-classes=2
input-object-min-width=14
input-object-min-height=14
process-mode=2
secondary-reinfer-interval=0
model-color-format=0
output-blob-names=output
gie-unique-id=3
operate-on-gie-id=2
is-classifier=1
classifier-async-mode=0
classifier-threshold=0.5
scaling-filter=1
scaling-compute-hw=2

But it doesn’t produce any classifications. I need help integrating these models into my application.
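One thing worth checking in the config above: Hugging Face image classifiers are typically trained with ImageNet mean/std normalization, but the config uses net-scale-factor=1 with no offsets, so the network receives raw 0–255 pixel values. A hedged sketch of preprocessing lines that approximate ImageNet normalization with DeepStream's (x - offsets) * scale formula (the constants below are the standard ImageNet values, not taken from this model; verify against the model's preprocessor config):

```
# Approximates (x/255 - mean) / std using a single scale factor:
# 1 / (255 * 0.226) ≈ 0.01735; offsets are the ImageNet means * 255 (RGB order)
net-scale-factor=0.01735207
offsets=123.675;116.28;103.53
model-color-format=0
```

DeepStream supports only one scale factor for all channels, so 0.226 (roughly the mean of the per-channel stds 0.229, 0.224, 0.225) is a common approximation.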

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Could you refer to our FAQ to learn how to use the classifier as pgie?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.