Getting an error when running the ultra_light Caffe model

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) - Jetson Nano
• DeepStream Version - 6.0.1
• JetPack Version (valid for Jetson only) - 4.6.3
• TensorRT Version - 8.4
• NVIDIA GPU Driver Version (valid for GPU only) - 10.2
• Issue Type( questions, new requirements, bugs) - Getting an error when running the ultra_light Caffe model; how can it be resolved?
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:00.891984554 14423 0x3139c690 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
Caffe Parser: Invalid reshape param. TensorRT does not support reshape in N (batch) dimension
error parsing layer type Reshape index 35
ERROR: Failed while parsing caffe network: /home/jetson/Downloads/face_detection/Slim_320/slim-320.prototxt
ERROR: failed to build network since parsing model errors.
ERROR: failed to build network.
0:00:04.115558555 14423 0x3139c690 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:04.116732533 14423 0x3139c690 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:04.116779825 14423 0x3139c690 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:04.116823159 14423 0x3139c690 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:04.116847066 14423 0x3139c690 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Config file path: onnx_caffeeM_model.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: onnx_caffeeM_model.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app
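
For context on the failure itself: "Invalid reshape param. TensorRT does not support reshape in N (batch) dimension" at Reshape index 35 means a Reshape layer in slim-320.prototxt reshapes across the batch axis, which the TensorRT Caffe parser does not allow. The layer below is only an illustration of that pattern; the names and dims are placeholders, not taken from this prototxt. A commonly reported workaround is to leave the batch axis untouched by using dim: 0 as the first dimension (or to export the model to ONNX and use the ONNX parser instead); any edit should be verified against the model's actual output layout.

# Hypothetical Reshape layer showing the pattern the Caffe parser rejects
layer {
  name: "boxes_reshape"        # placeholder layer name
  type: "Reshape"
  bottom: "boxes_concat"       # placeholder input blob
  top: "boxes"
  reshape_param {
    shape {
      dim: 1     # hard-coded batch size: the reshape touches the N axis
      dim: -1
      dim: 4
    }
  }
}

# Commonly suggested edit: preserve the batch axis instead of reshaping it
#   shape {
#     dim: 0     # 0 = copy the batch dimension from the input blob
#     dim: -1
#     dim: 4
#   }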

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

From the logs, the error is related to model parsing. Here are some questions:

  1. Which DeepStream sample are you testing, and what is the whole media pipeline?
  2. Could you share the model link and the nvinfer configuration file (a minimal sketch of what such a config contains is included below)? Thanks!
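
In case it helps with question 2, this is roughly what a minimal nvinfer configuration for a Caffe model looks like. The preprocessing values, output blob names, and custom bbox parser below are placeholders that would have to match the ultra_light model; they are not values taken from this topic.

[property]
gpu-id=0
# preprocessing placeholders; must match the model's training setup
net-scale-factor=0.0078125
offsets=127.0;127.0;127.0
model-color-format=0
proto-file=slim-320.prototxt
model-file=slim-320.caffemodel
model-engine-file=slim-320.caffemodel_b1_gpu0_fp16.engine
batch-size=1
network-mode=2
num-detected-classes=1
gie-unique-id=1
# output blob names and bbox parser are model-specific placeholders
output-blob-names=boxes;scores
parse-bbox-func-name=NvDsInferParseCustomUltraLight
custom-lib-path=./libnvds_ultralight_parser.so

[class-attrs-all]
pre-cluster-threshold=0.5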
