How to get classification results from the classifier model

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson NX
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.6
• TensorRT Version 8.0

1. The classification model is a custom model, trained with PyTorch and then converted to an engine file with trtexec (see the sketch below).
2. The model input is the whole frame (NOT an ROI).
3. The classification model runs inference as the primary engine.
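For reference, the conversion path in step 1 usually goes through ONNX. A minimal sketch, with a torchvision model standing in for the custom classifier and file names assumed from the config below:

```python
import torch
import torchvision

# Stand-in for the custom classifier (five classes assumed from the
# "ck5cls" engine name); the real model comes from the user's training.
model = torchvision.models.resnet18(num_classes=5).eval()
dummy = torch.randn(1, 3, 512, 512)  # matches infer-dims=3;512;512
torch.onnx.export(
    model, dummy, "ck5cls.onnx",
    input_names=["input"],    # the config refers to a blob named "input"
    output_names=["output"],  # and an output blob named "output"
)
# Then build the engine on the Jetson itself:
#   trtexec --onnx=ck5cls.onnx --saveEngine=ck5cls.engine
```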
Config text:
[property]
gpu-id=0

# Preprocessing parameters: these are the same for all classification models generated by the TAO Toolkit.

net-scale-factor=0.0078431372549
offsets=0.5;0.5;0.5
model-color-format=0
batch-size=1

labelfile-path=cfgs/ck_cls_label.txt
model-engine-file=cfgs/engine/ck5cls.engine
# infer-dims = c;h;w: channels, height, and width of the model input.
infer-dims=3;512;512
uff-input-blob-name=input
#uff-input-order=0
output-blob-names=output

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=0

# process-mode: 1 = inference on the whole frame, 2 = inference on crops from the primary detector.

process-mode=1
interval=0
# network-type=1 defines the model as a classifier.
network-type=1
gie-unique-id=1
classifier-threshold=0.2

Are you using DeepStream for inference? Please describe the software configuration and the steps to reproduce in detail.

1. Use DeepStream 6.0 to read an RTSP stream and run inference (see the pipeline sketch below).
2. The pgie is the classification model, whose input is the full RTSP frame (NOT an ROI).
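Roughly, that setup corresponds to a pipeline like this minimal sketch (the RTSP address, the mux resolution, and the config file name are placeholders):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import GLib, Gst

Gst.init(None)
# nvstreammux batches the decoded RTSP frames; nvinfer runs the
# classifier as the primary engine using a config like the one above.
pipeline = Gst.parse_launch(
    "nvstreammux name=m live-source=1 batch-size=1 width=1920 height=1080 "
    "! nvinfer config-file-path=cls_config.txt ! fakesink "
    "uridecodebin uri=rtsp://<camera-address> ! m.sink_0"
)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()  # keep the pipeline running
```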

I don’t know how to get the raw output from the classification model.

Do you mean you want to enable “output-tensor-meta” on nvinfer?

Gst-nvinfer — DeepStream 6.0.1 Release documentation (nvidia.com)
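With output-tensor-meta=1 added to the [property] section above, the raw tensor is attached to each frame as NvDsInferTensorMeta and can be read in a pad probe. A minimal sketch in Python, following the pattern of the deepstream-ssd-parser sample (the five-class count is an assumption based on the "ck5cls" engine name):

```python
import pyds
from gi.repository import Gst

# Attach with: pgie.get_static_pad("src").add_probe(
#                  Gst.PadProbeType.BUFFER, pgie_src_pad_probe, 0)
def pgie_src_pad_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == \
                    pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                # Layer 0 is the blob named "output" in the config.
                layer = pyds.get_nvds_LayerInfo(tensor_meta, 0)
                # Read the raw float scores one element at a time.
                scores = [pyds.get_detections(layer.buffer, i) for i in range(5)]
                print("raw classifier output:", scores)
            l_user = l_user.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```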

Thank you for your reply; that part is now solved. How can I get the complete frame from the video stream and convert it to OpenCV Mat format?
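For the follow-up question, the usual Python-side pattern (a sketch following the deepstream-imagedata-multistream sample) is to convert the stream to RGBA with nvvideoconvert and a capsfilter, then map the surface with pyds.get_nvds_buf_surface, which yields a NumPy array that OpenCV can use directly:

```python
import cv2
import numpy as np
import pyds
from gi.repository import Gst

# The pipeline must produce RGBA frames before this probe, e.g.:
#   ... ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=RGBA" ! ...
def frame_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Map the NVMM surface as a NumPy RGBA array.
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        # Copy out of the mapped surface, then convert to the BGR
        # layout that OpenCV conventionally uses.
        mat = cv2.cvtColor(np.array(n_frame, copy=True, order="C"), cv2.COLOR_RGBA2BGR)
        cv2.imwrite("frame_%d.jpg" % frame_meta.frame_num, mat)
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```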
