Using SSD Mobilenet in Deepstream 6.2

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.5.2-1+cuda11.8
• NVIDIA GPU Driver Version (valid for GPU only) 525.125.06
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi,
I’m trying to follow the SSD compilation README inside the objectDetector_SSD folder. Instead of converting the .pb to a .uff myself, I opted to download the precompiled UFF from Dusty’s references:

I moved the UFF model into the directory, followed the steps, and I get:

gst-launch-1.0 filesrc location=../../samples/streams/sample_1080p_h264.mp4 ! decodebin ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=config_infer_primary_ssd.txt ! nvvideoconvert ! nvdsosd ! nveglglessink
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
Setting pipeline to PAUSED ...
0:00:00.246812293 148054 0x5597f091a920 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.2/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine open error
0:00:02.871302389 148054 0x5597f091a920 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.2/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine failed
0:00:02.950615348 148054 0x5597f091a920 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.2/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine failed, try rebuild
0:00:02.950661904 148054 0x5597f091a920 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
ERROR: [TRT]: Validation failed: PluginFieldCollection missing required fields: {axis, ignoreBatch}
plugin/common/plugin.cpp:41

ERROR: [TRT]: std::exception
ERROR: [TRT]: UffParser: Parser error: concat_box_conf: Could not create plugin object from Plugin Registry. Only IPluginV2 type plugins are supported with Plugin Registry
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:292 Failed to parse UFF file: /opt/nvidia/deepstream/deepstream-6.2/sources/objectDetector_SSD/sample_ssd_relu6.uff, incorrect file or incorrect input/output blob names
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:971 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:804 failed to build network.
0:00:05.560672352 148054 0x5597f091a920 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
0:00:05.638545078 148054 0x5597f091a920 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2029> [UID = 1]: build backend context failed
0:00:05.638564760 148054 0x5597f091a920 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1266> [UID = 1]: generate backend failed, check config file settings
0:00:05.638835816 148054 0x5597f091a920 WARN nvinfer gstnvinfer.cpp:888:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:05.638869371 148054 0x5597f091a920 WARN nvinfer gstnvinfer.cpp:888:gst_nvinfer_start: error: Config file path: config_infer_primary_ssd.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
ERROR: Pipeline doesn't want to pause.
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer0: Failed to create NvDsInferContext instance
Additional debug info:
gstnvinfer.cpp(888): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:nvinfer0:
Config file path: config_infer_primary_ssd.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Setting pipeline to NULL ...
Freeing pipeline ...

The error is:
UffParser: Parser error: concat_box_conf: Could not create plugin object from Plugin Registry. Only IPluginV2 type plugins are supported with Plugin Registry
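
(As an aside, the repeated CUDA lazy loading warning is unrelated to this failure; per the CUDA C++ Programming Guide it can be enabled with an environment variable before launching the pipeline:)

# Enable lazy module loading to reduce device memory usage
export CUDA_MODULE_LOADING=LAZY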

I guess the UFF was generated with an older TensorRT/UFF converter than the one installed, or something similar. Is there a way to migrate incompatible UFF models to the new plugin types?
I know Dusty’s UFF may have been generated on a Jetson, while I’m running on a desktop GPU (RTX 3060 Ti).
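
In case it matters, this is roughly the conversion step I skipped: regenerating the UFF from the frozen .pb on the local machine, so the embedded plugin nodes match the installed TensorRT. A sketch based on the sampleUffSSD README; the converter path and config.py vary by setup:

# Convert the frozen TensorFlow graph to UFF with the locally installed
# converter (path below is an example; adjust to your Python version)
python3 /usr/lib/python3.8/dist-packages/uff/bin/convert_to_uff.py \
    frozen_inference_graph.pb -O NMS -p config.py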

Thanks in advance

Could you share the config_infer_primary_ssd.txt? Are you testing the new model ssd_mobilenet_v2_coco.uff? From the logs, the app is parsing the model sample_ssd_relu6.uff.

Hi,
Yes, I renamed (mv) ssd_mobilenet_v2_coco.uff to sample_ssd_relu6.uff.
Please find the config attached:
config_infer_primary_ssd.txt (3.6 KB)

My question would be: is a UFF model generated on a Jetson compatible with any other NVIDIA GPU?

No. The engine, which is generated by TensorRT, is bound to the specific GPU it was built on.
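
(In practice, that means a serialized engine copied from another machine should be removed so that nvinfer rebuilds it locally, as the "try rebuild" step in the log above does. A hypothetical example, using the engine name from this thread:)

# Remove an engine built elsewhere; nvinfer will rebuild it for this GPU
rm -f sample_ssd_relu6.uff_b1_gpu0_fp32.engine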

OK, so it is related to the UFF parser and to something the plugin factory has to create for concat_box_conf. Any thoughts?

After testing ssd_mobilenet_v2_coco.uff in nvcr.io/nvidia/deepstream:6.2-triton, I can reproduce this issue. It is related to model parsing by TensorRT; we are checking.

Thanks for getting back!
Looking forward to your insights.
Regards

TensorRT is focusing on ONNX models going forward. We suggest using an SSD ONNX model, e.g. ssd-10.onnx, or an NVIDIA TAO SSD model.
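
For reference, a minimal, untested sketch of what the nvinfer [property] section could look like with an ONNX model in place of the UFF entries. The file names and parser entries below are placeholders; ssd-10.onnx has its own output layout and would need a matching custom parser:

[property]
gpu-id=0
batch-size=1
network-mode=0
# onnx-file replaces the uff-file / uff-input-blob-name entries
onnx-file=ssd-10.onnx
model-engine-file=ssd-10.onnx_b1_gpu0_fp32.engine
num-detected-classes=91
# placeholder: this model's outputs need their own bbox parsing function
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so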

Thanks! I’ll take a look during the next couple of days!
Regards

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks.

Hi @fanzh, I wasn’t able to check it out; I’ll do it next week.
If you’re telling me that it works with those models, then it shouldn’t be an issue.
Thanks

Thanks for the update! If you encounter a new problem, please open a new topic. Thanks!

Thanks, let’s close it for now.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.