DeepStream-Yolo-Seg custom YOLOv8 ONNX model not working as expected

Hardware Platform (Jetson / GPU) = Jetson Nano
DeepStream Version = 6.0.1
JetPack Version (valid for Jetson only) = 4.6.1
TensorRT Version = 8.2.1.8-1+cuda10.2
Python version = 3.6.9

I have followed this link and performed all the steps:

https://github.com/marcoslucianops/DeepStream-Yolo-Seg/blob/master/docs/YOLOv8_Seg.md

I have converted my custom YOLOv8-seg model into ONNX using:

python3 export_yoloV8.py -w yolov8s.pt --dynamic --simplify

While running this command:

deepstream-app -c deepstream_app_config.txt

I am getting this output:

Using winsys: x11
0:00:53.989521935 14418 0x7fb3400 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/DeepStream-Yolo-Seg/kerbhit_yolov8_seg.onnx_b1_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT images 3x640x640
1 OUTPUT kFLOAT output1 32x160x160
2 OUTPUT kFLOAT output0 39x8400
0:00:54.003349141 14418 0x7fb3400 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/DeepStream-Yolo-Seg/kerbhit_yolov8_seg.onnx_b1_gpu0_fp32.engine
0:00:54.288551776 14418 0x7fb3400 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/DeepStream-Yolo-Seg/config_infer_primary_yoloV8_seg.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume
NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.
**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:194>: Pipeline ready
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 260
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 260
** INFO: <bus_callback:180>: Pipeline running
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)

And a new window opens with a blank screen.

Please help me with it.
Thank you

Could you share more logs? Please run “export GST_DEBUG=6” first to raise GStreamer’s log level, then run again; you can redirect the logs to a file:
deepstream-app -c deepstream_app_config.txt > 1.log 2>&1
You can compress 1.log with zip.
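As a side note on the redirection in the command above: writing `2>1.log` opens the same file twice, and the two streams can overwrite each other; `> 1.log 2>&1` sends stderr into the already-open file. A minimal demonstration of the pattern, with a placeholder in place of deepstream-app:

```shell
# Demonstration of the redirection pattern only -- substitute the real
# deepstream-app invocation for the braced group in practice.
{ echo "stdout line"; echo "stderr line" >&2; } > 1.log 2>&1
cat 1.log
```

Both lines land in 1.log; with `2>1.log` instead, one stream can clobber the other.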

Please find attached the log file and the GST debug file:
1.log (2.1 KB)

gst.log (14.5 MB)

  1. Did you modify the code? If you did not modify the code or configuration, can the app run fine?
  2. The gst.log is too short; there is no nvinfer log after “Load new model”. Could you share more logs? Thanks!

I have not modified the code or any configuration. The standard DeepStream apps and their samples run fine. I am trying DeepStream-Yolo-Seg for the first time.

For gst.log, I have done two steps:

  1. export GST_DEBUG=6
  2. export GST_DEBUG_FILE=/tmp/gst.log

and stored all the GStreamer debug logs in a file, which I have shared with you; its size is around 14 MB.
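The two steps above can be sketched as one shell session (a sketch only; the deepstream-app line is left as a comment since it needs the Jetson setup):

```shell
# Raise GStreamer's log verbosity to its maximum and send the logs
# to a file instead of stderr.
export GST_DEBUG=6
export GST_DEBUG_FILE=/tmp/gst.log
# then run the pipeline as before:
# deepstream-app -c deepstream_app_config.txt
```

GST_DEBUG=6 (LOG level) is very verbose, which is why the resulting file grows to several megabytes quickly.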

May I use the link below

https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/apps/deepstream-segmentation

for image segmentation? It requires .engine and .uff files, but I only have the PyTorch and ONNX files.
Is there any document available on using a custom image segmentation model with the DeepStream Python bindings?

  1. Could you share the model? Thanks! When I execute “python3 export_yoloV8_seg.py -w yolov8s-seg.pt --dynamic”, there is an error: “ModuleNotFoundError: No module named ‘ultralytics.yolo’”.
  2. DeepStream nvinfer can support ONNX models. Please refer to the sample.

In export_yoloV8_seg.py, change line 11 from

from ultralytics.yolo.utils.torch_utils import select_device

to this

from ultralytics.utils.torch_utils import select_device

The error “ModuleNotFoundError: No module named ‘ultralytics.yolo’” will then go away.
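A version-tolerant alternative to editing the import by hand is to try the new module path first and fall back to the old one. This is a sketch; `import_first` is a hypothetical helper, not part of the export script:

```python
import importlib


def import_first(attr, *module_paths):
    """Return attribute `attr` from the first module path that imports.

    Useful when a library (here: ultralytics) moves a symbol between
    releases and you want one script to work with both layouts.
    """
    for path in module_paths:
        try:
            return getattr(importlib.import_module(path), attr)
        except (ImportError, AttributeError):
            continue
    raise ImportError(f"{attr!r} not found in any of {module_paths}")


# Hypothetical use in export_yoloV8_seg.py, covering both ultralytics layouts:
# select_device = import_first("select_device",
#                              "ultralytics.utils.torch_utils",
#                              "ultralytics.yolo.utils.torch_utils")
```

The hard-coded one-line change from the post above works just as well if you only ever use one ultralytics version.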

Regarding the link in your second point, we are already trying exactly that, but it is not working. So I am looking for Python bindings, or an existing deepstream_python_apps sample where I can use an ONNX or PyTorch file, or some way to convert it into a .uff file.


Did you use the wrong model? DeepStream-Yolo-Seg is for segmentation models; yolov8s.pt is a detection model.

No, I used the export_yoloV8_seg.py file. Sorry, that was a typo in my earlier post.

I have used:

python3 export_yoloV8_seg.py -w yolov8s-seg.pt --dynamic --simplify

but this yolov8s-seg.pt is my new custom model, trained on 3 classes.
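A 3-class model is consistent with the engine info in the first post. A hedged sanity check (assuming the standard YOLOv8-seg head layout of 4 box values + per-class scores + 32 mask coefficients, which is the ultralytics default) is that the first dimension of output0 matches the class count:

```python
def expected_output0_channels(num_classes, mask_coeffs=32):
    """First dimension of output0 for a YOLOv8-seg ONNX export:
    4 box values + one score per class + mask coefficients."""
    return 4 + num_classes + mask_coeffs


# The engine log above reports "output0 kFLOAT 39x8400";
# 39 = 4 + 3 + 32, matching a 3-class model.
assert expected_output0_channels(3) == 39
```

If this dimension did not match, it would point at a mismatch between the trained model and the labels/config used in DeepStream.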

  1. From the logs, there is no decoding output from timestamp “0:01:21.9”.
    Could you share the source file Cam1_12072301.avi? We need to check whether there is a decoding issue.
  2. Can the app run well using a DeepStream sample video?
    I did not modify the configuration or model, and the app runs fine. Here is the output log: testlog.txt (2.5 KB)

Sorry, I can’t share the video or model file.
But I have done a few things; please find the details below.

  1. I have created two ONNX models from the PyTorch model using the commands below:

python3 export_yoloV8_seg.py -w yolov8s-seg.pt --dynamic --simplify

python3 export_yoloV8_seg.py -w yolov8s-seg.pt --simplify

  2. I have created the setup again using the link I provided earlier, and ran the following command:

deepstream-app -c deepstream_app_config.txt > log1

I am getting a new error; please find it below.
TerminalOUTPUT.txt (4.4 KB)
log1.txt (478 Bytes)

Please refer to my last comment; the app ran well. Please try with my model. The export command used was: python3 export_yoloV8_seg.py -w yolov8s-seg.pt --dynamic --simplify

Same error with your model as well.

In the description, the engine was generated successfully, but in TerminalOUTPUT.txt the app failed to generate the engine. Were there any different steps?

Please refer to this answer.

I have done the setup again with the same steps, but am now getting a different error: the engine file itself is not generated. I also tried with the model you provided, and the engine file is still not generated. I checked my steps twice but don’t know why this is happening; it may be some versioning problem, but I don’t know exactly.

I have flashed JetPack again and reinstalled everything. All the standard DeepStream sample apps run, but when I try to use the link below

GitHub - marcoslucianops/DeepStream-Yolo-Seg: NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 implementation for YOLO-Segmentation models

and test it with the seg model you provided, I get the same error.

Please help me out

Sorry for the late reply. Is the current system JetPack 4.6.1 + DS 6.0? Could you share the output of this command line?
/usr/src/tensorrt/bin/trtexec --fp16 --onnx=yolov8s-seg.onnx --saveEngine=1.engine --minShapes=input:1x3x640x640 --optShapes=input:1x3x640x640 --maxShapes=input:1x3x640x640 --shapes=input:1x3x640x640 --workspace=10000

Yes, it is:
Hardware Platform (Jetson / GPU) = Jetson nano
DeepStream Version = 6.0.1
JetPack Version (valid for Jetson only) = 4.6.1
TensorRT Version = 8.2.1.8-1+cuda10.2
Python version = 3.6.9

terminalOutput.txt (7.1 KB)

From the log, there is an error: “ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 290 [RoiAlign → “/1/RoiAlign_output_0”]:”. It occurs because this TensorRT version does not support the RoiAlign op.
Please refer to this topic.
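One way to catch this before a long engine build is to scan the exported graph’s op types. This is only a sketch: in practice the op list would come from the `onnx` package via `[n.op_type for n in model.graph.node]`, and the unsupported set below contains just the op named in the log above, not the full TensorRT 8.2 operator support matrix:

```python
# Ops that this TensorRT version is known (from the error above)
# not to parse. Extend from the TensorRT 8.2 support matrix as needed.
UNSUPPORTED_IN_TRT_8_2 = {"RoiAlign"}


def find_unsupported(op_types):
    """Return the op types from an ONNX graph that the TensorRT 8.2
    ONNX parser is expected to reject."""
    return sorted(set(op_types) & UNSUPPORTED_IN_TRT_8_2)


# e.g. for a graph containing the failing node from the log:
assert find_unsupported(["Conv", "Sigmoid", "RoiAlign"]) == ["RoiAlign"]
```

An empty result does not guarantee the build succeeds, but a non-empty one tells you immediately that this JetPack/TensorRT combination cannot consume the model as exported.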