Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
My graphics card is an RTX 3080.
• DeepStream Version
The version is 6.0.
• TensorRT Version
I use the TensorRT container, tag 21.08; the reported version is v18.104.22.168.
• NVIDIA GPU Driver Version (valid for GPU only)
The driver version is 470.129.06.
• Issue Type( questions, new requirements, bugs)
First, I used the nvidia/digits Docker container to train a Caffe model and downloaded the trained model.
The download contains a deploy.prototxt and a snapshot_iter_3362.caffemodel.
The output layer name in deploy.prototxt is softmax.
Then I built an engine with trtexec:
./trtexec --deploy=model/deploy.prototxt --model=model/snapshot_iter_3362.caffemodel --output=softmax --batch=16 --saveEngine=deploy.engine
Finally, in the deepstream-app [secondary-gie0] settings, I pointed config-file at a new nvinfer configuration for this model.
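For reference, an sgie nvinfer config for a DIGITS-trained Caffe classifier typically looks like the sketch below. The offsets, label-file path, and gie-unique-id values are assumptions on my side; in particular, net-scale-factor and offsets must match the mean subtraction DIGITS applied during training, and a mismatch there is a common cause of the classifier always returning class 0.

```
[property]
gpu-id=0
# preprocessing -- assumed values, must match DIGITS training (mean.binaryproto)
net-scale-factor=1.0
offsets=104.0;117.0;123.0
model-color-format=1              # 1 = BGR, as Caffe expects
proto-file=model/deploy.prototxt
model-file=model/snapshot_iter_3362.caffemodel
model-engine-file=model/snapshot_iter_3362.caffemodel_b16_gpu0_fp32.engine
labelfile-path=model/labels.txt   # hypothetical path
batch-size=16
network-mode=0                    # 0 = FP32
network-type=1                    # 1 = classifier
process-mode=2                    # 2 = secondary, operate on detected objects
gie-unique-id=2
operate-on-gie-id=1
output-blob-names=softmax
classifier-threshold=0.0
```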
The deepstream-app runs fine, but the sgie result always has class id 0.
When I load the same caffemodel and prototxt with OpenCV DNN, the inferred class ids are correct.
In fact, whether I build the engine with trtexec or let deepstream-app generate snapshot_iter_3362.caffemodel_b16_gpu0_fp32.engine itself, the inferred class id is always zero.
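One way this "always class 0" symptom arises is a preprocessing mismatch: OpenCV DNN applies whatever mean/scale you pass it explicitly, while nvinfer applies net-scale-factor and offsets from its config file. As a sanity check, the snippet below (plain NumPy, with placeholder mean values I made up for illustration) shows the Caffe-style preprocessing and how the class id is read from the softmax output:

```python
import numpy as np

def preprocess(img_bgr, mean_bgr=(104.0, 117.0, 123.0), scale=1.0):
    """Mimic DIGITS/Caffe preprocessing: BGR channel order, per-channel
    mean subtraction, optional scaling. The mean values here are
    placeholders -- use the ones from your DIGITS mean.binaryproto."""
    x = img_bgr.astype(np.float32)
    x -= np.array(mean_bgr, dtype=np.float32)  # subtract per-channel means
    x *= scale
    # HWC -> CHW, the layout Caffe/TensorRT expect
    return np.transpose(x, (2, 0, 1))

def class_id(softmax_output):
    """Pick the top class from a softmax output vector."""
    probs = np.asarray(softmax_output)
    return int(np.argmax(probs))

# toy check: the class id is the index of the largest probability
print(class_id([0.1, 0.7, 0.2]))  # -> 1
```

If the means/scale fed to OpenCV differ from the nvinfer config, the two pipelines see different inputs, and a badly shifted input can collapse the softmax onto one class.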
Thank you for your help and suggestions.
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)