Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) x86/MXM RTX A2000
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only) -
• TensorRT Version v8503
• NVIDIA GPU Driver Version (valid for GPU only) 530.30.02
• Issue Type( questions, new requirements, bugs) Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I am fairly new to DeepStream, so please excuse a beginner question.
I’d like to integrate a custom TensorRT engine with a single sigmoid output, which corresponds to a binary classifier (0: class 1, 1: class 2). The model is an EfficientNetB1, and I converted it successfully to an engine file on the A2000. nvinfer deserializes the model, and it runs without problems on frame inputs from streammux. However, I can’t get hold of the output values: in a source pad probe on the element downstream of nvinfer I always get (NvDsObjectMeta):
- class_id: -1
- confidence: 0.0
NvDsClassifierMeta is also not working; the classifier meta list is always empty.
This is my nvinfer config:
[property]
gpu-id=0
model-engine-file=/opt/models/effnetb1_v0.2_c25e2604_fp16.engine
labelfile-path=/opt/models/effnetb1_v0.2_c25e2604/labels_single.txt
process-mode=1
batch-size=1
network-mode=2
interval=0
gie-unique-id=1
infer-dims=3;240;240
network-input-order=1
output-blob-names=dense
is-classifier=1
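If a custom parser does turn out to be necessary, my understanding is that nvinfer can be pointed at one via the keys below (the function and library names are placeholders for whatever I would build, not existing files):

parse-classifier-func-name=NvDsInferClassiferParseCustomSigmoid
custom-lib-path=/opt/parsers/libnvds_sigmoid_parser.so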
Note that I had to change the input order because my model expects NHWC.
Am I assuming correctly that I can’t use the standard parsing, but need to implement a custom parser in order to interpret the single sigmoid output correctly?
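For clarity, the interpretation I expect such a parser to perform is trivial. A minimal Python sketch of the mapping (the 0.5 threshold is my own assumption, not something from the samples):

```python
def parse_sigmoid(prob: float, threshold: float = 0.5) -> tuple[int, float]:
    """Map a single sigmoid activation in [0, 1] to (class_id, confidence).

    prob <  threshold -> class 0, confidence 1 - prob
    prob >= threshold -> class 1, confidence prob
    """
    class_id = 1 if prob >= threshold else 0
    confidence = prob if class_id == 1 else 1.0 - prob
    return class_id, confidence
```

For example, parse_sigmoid(0.9) returns (1, 0.9), i.e. class 2 from my label file with 90% confidence.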
Since this seems like such a simple and common case, is there maybe an example out there for single sigmoid outputs? I couldn’t find one in the Python or C++ samples.