Error While Running DeepStream YOLO Model

Hello. I found a GitHub repo and I am trying to run its Python code on my Ubuntu machine. The repo is an application of a YOLO model for face detection with DeepStream. However, every time I try to run the code I get the following error:

WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /home/ubuntu/DeepStream-Yolo-Face/yolov8n-face.onnx_b1_gpu0_fp32.engine open error
0:00:01.434567838 4366 0x2db0260 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 1]: deserialize engine from file :/home/ubuntu/DeepStream-Yolo-Face/yolov8n-face.onnx_b1_gpu0_fp32.engine failed
0:00:01.484250357 4366 0x2db0260 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 1]: deserialize backend context from engine from file :/home/ubuntu/DeepStream-Yolo-Face/yolov8n-face.onnx_b1_gpu0_fp32.engine failed, try rebuild
0:00:01.484263856 4366 0x2db0260 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:307 Cannot access ONNX file '/home/ubuntu/DeepStream-Yolo-Face/yolov8n-face.onnx'
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:971 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:804 failed to build network.
0:00:02.667742677 4366 0x2db0260 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2022> [UID = 1]: build engine file failed
0:00:02.719529681 4366 0x2db0260 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2108> [UID = 1]: build backend context failed
0:00:02.719548264 4366 0x2db0260 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1282> [UID = 1]: generate backend failed, check config file settings
0:00:02.719748316 4366 0x2db0260 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:02.719753855 4366 0x2db0260 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start: error: Config file path: config_infer_primary_yoloV8_face.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
[NvMultiObjectTracker] De-initialized

ERROR: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(898): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:pgie:
Config file path: config_infer_primary_yoloV8_face.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

Please check the source ONNX file.
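For reference, nvinfer reads that path from the [property] group of config_infer_primary_yoloV8_face.txt. The excerpt below is a minimal sketch reconstructed from the log above rather than copied from the repository's actual config, so keys and values may differ; the point is that onnx-file has to point to an ONNX file that actually exists (relative paths are resolved against the directory containing the config file).

[property]
# nvinfer builds an engine from this ONNX file when no usable .engine is found;
# the build fails here because the file is missing from the expected location
onnx-file=yolov8n-face.onnx
# optional prebuilt engine; the name in the log is the auto-generated one
# model-engine-file=yolov8n-face.onnx_b1_gpu0_fp32.engine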

I have installed ultralytics so that I can convert the model to ONNX; however, I'm running into the following issue:

(yolo-deepstream) ubuntu@ubuntu-Blade-15-2022-RZ09-0421:~/Documents/DeepStream-Yolo-Face/ultralytics$ python3 export_yoloV8_face.py -w yolov8n-face.pt --dynamic
Traceback (most recent call last):
File "/home/ubuntu/Documents/DeepStream-Yolo-Face/ultralytics/export_yoloV8_face.py", line 10, in <module>
from ultralytics.yolo.utils.torch_utils import select_device
ModuleNotFoundError: No module named 'ultralytics.yolo'
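
(For context: newer ultralytics releases renamed or removed the ultralytics.yolo package, which is why the import in the repo's export_yoloV8_face.py fails. As a rough sketch only, assuming a current ultralytics install, the stock exporter can be driven as below. This is not the repo's script, which may wrap the model with DeepStream-specific output handling, so the resulting ONNX is not guaranteed to match the repo's parser.)

from ultralytics import YOLO  # stock Ultralytics exporter, not export_yoloV8_face.py

# "yolov8n-face.pt" is the weights file named in the command above
model = YOLO("yolov8n-face.pt")

# Writes yolov8n-face.onnx next to the weights; dynamic=True keeps the batch
# dimension dynamic, mirroring the --dynamic flag passed to the repo's script
model.export(format="onnx", dynamic=True)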

This is an issue with the GitHub project you referred to; please ask directly in that project.

I fixed the error by converting the YOLO model to ONNX format and then from ONNX to a TensorRT engine. Thanks!
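
(A minimal sketch of that second step, assuming TensorRT 8.x Python bindings and a static-batch ONNX export; the file names are taken from the log above and may differ on your setup. The trtexec tool with --onnx and --saveEngine does the same job, and nvinfer itself would also build this engine automatically once the ONNX path in the config is valid.)

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# ONNX models require an explicit-batch network definition
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("yolov8n-face.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parsing failed")

# The default config gives an FP32 build, matching the _fp32 engine name in the
# log; a dynamic-axes ONNX would additionally need an optimization profile here
config = builder.create_builder_config()
serialized = builder.build_serialized_network(network, config)
if serialized is None:
    raise SystemExit("engine build failed")

with open("yolov8n-face.onnx_b1_gpu0_fp32.engine", "wb") as f:
    f.write(serialized)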


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.