YOLOv5 + DeepStream Error

After running deepstream-app -c deepstream_app_config.txt, an error was reported (my engine file is FP32):
Using winsys: x11
ERROR: Deserialize engine failed because file path: /my path/deepstream/./models/yolov5s_int8.engine open error
0:00:02.003027495 13286 0x18db92f0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/my path/deepstream/./models/yolov5s_int8.engine failed
0:00:02.003818843 13286 0x18db92f0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/my path /deepstream/./models/yolov5s_int8.engine failed, try rebuild
0:00:02.003884634 13286 0x18db92f0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:02.004699810 13286 0x18db92f0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:02.004769183 13286 0x18db92f0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:02.004806861 13286 0x18db92f0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:02.005446125 13286 0x18db92f0 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:02.005495220 13286 0x18db92f0 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Config file path: /my path/deepstream/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: main:707: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /my path/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
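The "Deserialize engine failed because file path ... open error" message means nvinfer could not open the engine file at the path given in the config; here the config points at yolov5s_int8.engine while the engine was built as FP32, so the filename (or path) likely does not match what is on disk. A minimal sketch of the relevant keys in config_infer_primary.txt (model-engine-file and num-detected-classes are standard nvinfer keys; the paths are placeholders):

```ini
[property]
# Must point at an engine file that actually exists on disk; the log
# shows the config referencing yolov5s_int8.engine while the built
# engine was FP32 (e.g. yolov5s_fp32.engine).
model-engine-file=models/yolov5s_fp32.engine
# Must match the class count the custom parser is compiled with.
num-detected-classes=12
```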

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
• The pipeline being used

Did you ever try GitHub - NVIDIA-AI-IOT/deepstream_tao_apps (sample apps to demonstrate how to deploy models trained with TAO on DeepStream)? We support the yolov5 model.

Here is my setup:
Jetson TX2 NX
YOLOV5 6.0
Jetpack 4.6
deepstream-app version 6.0.1
DeepStreamSDK 6.0.1
CUDA Driver Version: 10.2
CUDA Runtime Version: 10.2
TensorRT Version: 8.0
cuDNN Version: 8.2
libNVWarp360 Version: 2.0.1d3

I fixed this problem, but there is another error. After I run deepstream-app -c deepstream_app_config.txt, another error is reported:
Opening in BLOCKING MODE

Using winsys: x11
0:00:05.053480321 24906 0x2b682cf0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/lcfc/Desktop/deepstream/models/yolov5s_fp32.engine
INFO: [Implicit Engine Info]: layers num: 5
0 INPUT kFLOAT images 3x640x640
1 OUTPUT kFLOAT 752 3x80x80x17
2 OUTPUT kFLOAT 818 3x40x40x17
3 OUTPUT kFLOAT 884 3x20x20x17
4 OUTPUT kFLOAT output 25200x17

0:00:05.053803859 24906 0x2b682cf0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/lcfc/Desktop/deepstream/models/yolov5s_fp32.engine
0:00:05.308403130 24906 0x2b682cf0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/lcfc/Desktop/deepstream/config_infer_primary.txt sucessfully

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:194>: Pipeline ready

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:180>: Pipeline running

NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
WARNING: Num classes mismatch. Configured: 12, detected by network: 80
deepstream-app: nvdsparsebbox_Yolo.cpp:203: bool NvDsInferParseCustomYolo(const std::vector&, const NvDsInferNetworkInfo&, const NvDsInferParseDetectionParams&, std::vector&, const uint&, const uint&): Assertion `layer.inferDims.numDims == 3 || layer.inferDims.numDims == 4' failed.
Aborted (core dumped)

The tiled display is all black.

Sorry, it seems that deepstream_tao_apps requires DeepStream SDK 6.1.1 GA, and I can only choose JetPack 4.6, so DeepStream 6.1.1 can't be used.

You can refer to the yolov5 post-processing in that GitHub repo.
WARNING: Num classes mismatch. Configured: 12, detected by network: 80
deepstream-app: nvdsparsebbox_Yolo.cpp:203: bool NvDsInferParseCustomYolo(const std::vector&, const NvDsInferNetworkInfo&, const NvDsInferParseDetectionParams&, std::vector&, const uint&, const uint&): Assertion `layer.inferDims.numDims == 3 || layer.inferDims.numDims == 4' failed.
Aborted (core dumped)
Please check the classes setting in nvdsparsebbox_Yolo.cpp and num-detected-classes in the nvinfer config file.

I edited the classes setting from 80 to 12 in nvdsparsebbox_Yolo.cpp and ran make again. Then I tried to run deepstream-app -c deepstream_app_config.txt. It reported:
Opening in BLOCKING MODE

Using winsys: x11
0:00:04.861713015 10390 0x3394e2f0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/lcfc/Desktop/deepstream/models/yolov5s_fp32.engine
INFO: [Implicit Engine Info]: layers num: 5
0 INPUT kFLOAT images 3x640x640
1 OUTPUT kFLOAT 752 3x80x80x17
2 OUTPUT kFLOAT 818 3x40x40x17
3 OUTPUT kFLOAT 884 3x20x20x17
4 OUTPUT kFLOAT output 25200x17

0:00:04.862091838 10390 0x3394e2f0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/lcfc/Desktop/deepstream/models/yolov5s_fp32.engine
0:00:05.105739424 10390 0x3394e2f0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/lcfc/Desktop/deepstream/config_infer_primary.txt sucessfully

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:194>: Pipeline ready

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:180>: Pipeline running

NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
deepstream-app: nvdsparsebbox_Yolo.cpp:203: bool NvDsInferParseCustomYolo(const std::vector&, const NvDsInferNetworkInfo&, const NvDsInferParseDetectionParams&, std::vector&, const uint&, const uint&): Assertion `layer.inferDims.numDims == 3 || layer.inferDims.numDims == 4' failed.
Aborted (core dumped)
The tiled display is black as before.
Editing the classes setting works, but there are still errors.

Assertion `layer.inferDims.numDims == 3 || layer.inferDims.numDims == 4' failed.

Remove layer.inferDims.numDims == 4 and try again.

Ignore the above; please refer to this guide on YOLOv5 post-processing:
Object Detection using YOLOv5 and OpenCV DNN in C++ & Python, section 4.3.5 POST-PROCESSING YOLOv5 Prediction Output

Thanks for the help.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.