YOLOv8 on DeepStream 7 with Python bindings

Please provide complete information as applicable to your setup.

• Jetson AGX Orin
• DeepStream 7.0
• JetPack 6.0
• TensorRT 8.6.2
• Question
I have an NVIDIA Jetson AGX Orin and I’m working on a YOLOv8 object detection project. Previously, on DeepStream 6.2, I followed the DeepStream-Yolo repository, the deepstream_python_apps bindings v1.1.6, and the Seeed Studio tutorial, and everything worked fine. However, the Seeed Studio tutorial only covers up to DeepStream 6.2.

The Seeed Studio tutorial describes generating WTS and CFG files for use in the configuration file. Since the latest DeepStream-Yolo repository no longer includes the scripts to generate WTS and CFG files, I assume they are no longer needed and that I can use the ONNX model directly. I have also cloned the latest version of deepstream_python_apps.

Running `deepstream-app -c [config file]` from DeepStream-Yolo works fine. However, when I use that PGIE config file with deepstream_python_apps test 1 (and the other examples too), I get the following error:

```
Opening in BLOCKING MODE
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/deepstream-test1/model_b1_gpu0_fp32.engine open error
0:00:10.874194934 12205 0xaaaad196dca0 WARN  nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2083> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/deepstream-test1/model_b1_gpu0_fp32.engine failed
0:00:11.188088894 12205 0xaaaad196dca0 WARN  nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2188> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/deepstream-test1/model_b1_gpu0_fp32.engine failed, try rebuild
0:00:11.188145948 12205 0xaaaad196dca0 INFO  nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2109> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:372: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Building the TensorRT Engine
Building complete
WARNING: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/deepstream-test1/model_b1_gpu0_fp32.engine opened error
0:05:14.416695505 12205 0xaaaad196dca0 WARN  nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2136> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/deepstream-test1/model_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT images  3x640x640
1   OUTPUT kFLOAT output0 10x8400
0:05:14.757320821 12205 0xaaaad196dca0 INFO  nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 1]: Load new model:config_v8_ppe.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
Segmentation fault (core dumped)
```
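The "Serialize engine failed because of file path … opened error" warning in the log above usually means nvinfer built the engine but could not write it back to the location named in `model-engine-file`. As a hedged sanity check (plain Python; the path below is just the one from the log, adjust as needed), you can verify the engine path is writable before launching the pipeline:

```python
import os

def engine_path_writable(engine_path: str) -> bool:
    """Return True if a TensorRT engine file could be created or overwritten here."""
    directory = os.path.dirname(os.path.abspath(engine_path))
    if os.path.exists(engine_path):
        # File exists: we need permission to overwrite it.
        return os.access(engine_path, os.W_OK)
    # File does not exist yet: the parent directory must exist and be writable.
    return os.path.isdir(directory) and os.access(directory, os.W_OK)

# Illustrative: run from the deepstream-test1 directory, or pass an absolute path.
print(engine_path_writable("model_b1_gpu0_fp32.engine"))
```

If this prints `False`, either run the app from a writable directory or point `model-engine-file` at one (the first run under `/opt/nvidia/...` often fails for a non-root user).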

Can someone show me the right way to use YOLOv8 with DeepStream 7.0 and the Python bindings?

Here’s my config file:

```
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=yolov8s.onnx
model-engine-file=model_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#workspace-size=2000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=/home/agx/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
```
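One detail worth checking when a config is copied around like this: nvinfer expects exactly one `key=value` pair per line, and entries such as `custom-lib-path` and `engine-create-func-name` are easy to fuse onto a single line when pasting. A minimal, hedged sketch of a lint for that failure mode (plain Python; the single-`=` heuristic is illustrative, since DeepStream property values don't normally contain `=`):

```python
def check_property_lines(lines):
    """Flag config lines that look like two 'key=value' pairs fused together."""
    problems = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith(("#", "[")):
            continue  # skip blanks, comments, and [section] headers
        # A well-formed property line contains exactly one '='.
        if line.count("=") != 1:
            problems.append(line)
    return problems

bad = check_property_lines([
    "gpu-id=0",
    # Hypothetical fused line of the kind that can appear after a bad paste:
    "custom-lib-path=libnvdsinfer_custom_impl_Yolo.soengine-create-func-name=NvDsInferYoloCudaEngineGet",
])
print(bad)
```

Any line this flags should be split back into separate `key=value` entries before handing the file to nvinfer.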

And I try to run deepstream_test_1.py with this config.

Here’s what GST_DEBUG returns:
```
Opening in BLOCKING MODE
0:00:00.224490201  8307 0xaaaae95f3010 WARN  v4l2 gstv4l2object.c:4671:gst_v4l2_object_probe_caps:nvv4l2-decoder:src Failed to probe pixel aspect ratio with VIDIOC_CROPCAP: Unknown error -1
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
0:00:04.810548332  8307 0xaaaae95f3010 INFO  nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 1]: deserialized trt engine from :/home/agx/deepstream_python_apps/apps/deepstream-test1/model_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT images  3x640x640
1   OUTPUT kFLOAT output0 10x8400
0:00:05.089099451  8307 0xaaaae95f3010 INFO  nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: /home/agx/deepstream_python_apps/apps/deepstream-test1/model_b1_gpu0_fp32.engine
0:00:05.099158085  8307 0xaaaae95f3010 INFO  nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 1]: Load new model:config_v8_ppe.txt sucessfully
0:00:05.099465246  8307 0xaaaae95f3010 WARN  basesrc gstbasesrc.c:3688:gst_base_src_start_complete: pad not activated yet
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
0:00:05.202799491  8307 0xaaaae95f3860 WARN  v4l2 gstv4l2object.c:4671:gst_v4l2_object_probe_caps:nvv4l2-decoder:src Failed to probe pixel aspect ratio with VIDIOC_CROPCAP: Unknown error -1
0:00:05.207116281  8307 0xaaaae95f3860 WARN  v4l2videodec gstv4l2videodec.c:2311:gst_v4l2_video_dec_decide_allocation: Duration invalid, not setting latency
0:00:05.207582734  8307 0xaaaae95f3860 WARN  v4l2bufferpool gstv4l2bufferpool.c:1116:gst_v4l2_buffer_pool_start:nvv4l2-decoder:pool:src Uncertain or not enough buffers, enabling copy threshold
0:00:05.221735796  8307 0xaaab304a8f60 WARN  v4l2bufferpool gstv4l2bufferpool.c:1567:gst_v4l2_buffer_pool_dqbuf:nvv4l2-decoder:pool:src Driver should never set v4l2_buffer.field to ANY
```

How did you install the Python bindings?

Does deepstream_test_1.py run normally without changing anything?

If deepstream-app runs fine, the problem may be that the Python bindings are not installed correctly.

DS 7.0 requires pyds v1.1.11.
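To quickly rule out a missing or broken binding install, a hedged, stdlib-only check that the interpreter can actually locate the `pyds` module (this only tests importability, not that the wheel matches your DeepStream version):

```python
import importlib.util

def module_installed(name: str) -> bool:
    """True if the interpreter can locate the named top-level module."""
    return importlib.util.find_spec(name) is not None

# If this prints False, (re)install the pyds wheel that matches your DeepStream release.
print(module_installed("pyds"))
```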

Yes, I can run `deepstream-app -c deepstream_app_config.txt` normally, even with my YOLOv8 ONNX model.

I can also run the default deepstream_test_1.py normally.

But when I use the PGIE model config with deepstream_test_1.py, it errors out.

I had already done this binding installation before I created this issue.

I tried it and the above configuration works fine.

Was the *.engine file generated on the AGX Orin? This is hardware-related.

Yes, I converted my model (and also experimented with the stock YOLOv8s model) to ONNX with the DeepStream-Yolo utils export script for YOLOv8, then loaded the model so the .engine file would be newly generated from it. I did the whole process on the AGX Orin.

Hi,

I resolved the issue by redoing the model conversion and using the default YOLOv8 config file provided by the DeepStream YOLO repository. While I’m still unsure what was missing in my previous attempt, everything is working correctly now.

Thank you for your help!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.