Num Classes Mismatch / Custom Output parser / Migrating from DS5.1 to DS6.0

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) : Jetson Xavier
• DeepStream Version : 6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version : 8.x

While running a custom app, I'm getting the logs below. At the end there is an error related to the TRT engine. This worked fine with DS5.0 and DS5.1.

Can you point out what's going on here?

Logs:

sudo ./client -c …/…/config.json
Opening in BLOCKING MODE
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:06.142649930 9036 0x5569bfcd50 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/some_path/detection_model.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input 3x416x416
1 OUTPUT kFLOAT boxes 2535x1x4
2 OUTPUT kFLOAT confs 2535x19
0:00:06.163695818 9036 0x5569bfcd50 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/some_path/detection_model.engine
0:00:06.172509970 9036 0x5569bfcd50 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:…/…//models/det_config.txt sucessfully
Running…
Got state-changed message
2 ERROR from element nvvideo-renderer:
NvMMLiteOpen : Block : BlockType = 277
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 277
WARNING: Num classes mismatch. Configured: 19, detected by network: 0
client: nvdsparsebbox_Yolo.cpp:137: bool NvDsInferParseCustomYolo(const std::vector&, const NvDsInferNetworkInfo&, const NvDsInferParseDetectionParams&, std::vector&, const uint&, const uint&): Assertion `layer.inferDims.numDims == 3’ failed.
Aborted

(I have replaced the actual paths with the placeholder "some_path".)
I converted the model from Darknet to ONNX to a TensorRT engine correctly.
I need to understand what causes this (everything after the WARNING in the log above). Thanks.

Hi,

Could you share the complete environment information with us?
Especially, which JetPack version are you using?

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Thanks.

• Hardware Platform (Jetson / GPU) → Jetson Xavier
• DeepStream Version → 6.0
• JetPack Version (valid for Jetson only) → nvidia-l4t-core 32.7.1-20220219090344
• TensorRT Version → 8.2.1

Hi,

Sorry for the late update.

First, please note that a TensorRT engine is not portable across devices or TensorRT versions.
Have you recreated the engine file on JetPack 4.6.1 directly?

Based on the error, it seems that your model has some customized output (e.g., the number of classes).
Have you applied the same customization from DeepStream 5.1 to DeepStream 6.0 as well?

Below is our YOLO customization guidance for your reference:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_custom_YOLO.html

Thanks.

Hey, I'm sorry, I forgot to update here that it got fixed.
Yes, the number of classes was not configured properly.
We fixed that and it worked. Thanks.
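For anyone hitting the same warning: the mismatch was between the class count in the nvinfer config and what the custom parser reported. A hedged sketch of the relevant config lines (file names are placeholders; `num-detected-classes` must match the model's class count, 19 in this thread):

```
[property]
model-engine-file=detection_model.engine
# must equal the number of classes the network was trained with
num-detected-classes=19
parse-bbox-func-name=NvDsInferParseCustomYolo
custom-lib-path=libnvdsinfer_custom_impl_Yolo.so
```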

Good to know this.
Thanks for the update.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.