Deepstream-YOLO-seg custom yolov8 onnx model not working as expected

  1. Did you modify the code? If you did not modify the code or configuration, can the app run fine?
  2. The gst.log is too short; there is no nvinfer log after “Load new model”. Could you share more logs? Thanks!

I have not modified the code or any configuration. The standard deepstream-apps and their samples run fine. I am trying DeepStream-Yolo-Seg for the first time.

For gst.log I did two things:

  1. export GST_DEBUG=6
  2. export GST_DEBUG_FILE=/tmp/gst.log

This stored all the GStreamer debug output in a file, which I have shared with you; its size is around 14 MB.
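For reference, the capture steps above amount to the following (level 6 is extremely verbose, which is why the file grows to about 14 MB; level 3 or 4 is often enough to diagnose nvinfer issues):

```shell
# GStreamer debug capture (run in the same shell as the app).
export GST_DEBUG=6                  # 6=LOG is very verbose; 3 or 4 is often enough
export GST_DEBUG_FILE=/tmp/gst.log  # redirect debug output to a file instead of stderr
# Then run the app in this shell, e.g.:
#   deepstream-app -c deepstream_app_config.txt
# Large logs compress well before attaching to the forum:
#   gzip -k /tmp/gst.log
echo "capturing to $GST_DEBUG_FILE at level $GST_DEBUG"
```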

May I use the link below

https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/apps/deepstream-segmentation

for image segmentation? It requires .engine and .uff files, but I only have PyTorch and ONNX files.
Is there any document available on using a custom image segmentation model with the DeepStream Python bindings?

  1. Could you share the model? Thanks! When I execute “python3 export_yoloV8_seg.py -w yolov8s-seg.pt --dynamic”, there is an error “ModuleNotFoundError: No module named ‘ultralytics.yolo’”.
  2. DeepStream nvinfer can support ONNX models directly. Please refer to the sample.
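As a sketch of how nvinfer consumes an ONNX model directly, the relevant part of its config file looks like the fragment below. The file names and values here are placeholders, not the repo's actual config; nvinfer parses the ONNX file and builds/caches a TensorRT engine on first run:

```ini
[property]
gpu-id=0
# Point nvinfer at the ONNX model; no .engine or .uff file is required up front.
onnx-file=yolov8s-seg.onnx
# nvinfer serializes the engine it builds here and reuses it on later runs.
model-engine-file=model_b1_gpu0_fp16.engine
network-mode=2            # 0=FP32, 1=INT8, 2=FP16
batch-size=1
```

Segmentation models additionally need the network-type and output-parsing settings documented by the DeepStream-Yolo-Seg repo.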

In export_yoloV8_seg.py, change line 11 from

from ultralytics.yolo.utils.torch_utils import select_device

to this

from ultralytics.utils.torch_utils import select_device

This removes the error “ModuleNotFoundError: No module named ‘ultralytics.yolo’”.
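The one-line fix above assumes a newer ultralytics release, where the contents of the `yolo` submodule moved up one level. If the script must work against both old and new layouts, a fallback import is one option. `import_first` below is a hypothetical helper, not part of the repo; a minimal sketch:

```python
import importlib


def import_first(attr, *module_paths):
    """Return `attr` from the first module path that imports successfully."""
    for path in module_paths:
        try:
            return getattr(importlib.import_module(path), attr)
        except (ImportError, AttributeError):
            continue
    raise ImportError(f"{attr!r} not found in any of {module_paths}")


# In export_yoloV8_seg.py this would replace the hard-coded import:
# select_device = import_first(
#     "select_device",
#     "ultralytics.utils.torch_utils",       # newer ultralytics layout
#     "ultralytics.yolo.utils.torch_utils",  # older layout
# )
```

The helper tries each candidate module in order, so the script keeps working whichever ultralytics version is installed.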

Regarding your second point: that link is exactly what we are trying, but it is not working. So I am looking for Python bindings, or a folder in the existing deepstream_python_apps where I can use an ONNX or PyTorch file, or some way to convert it into a .uff file.

Did you use the wrong model? DeepStream-Yolo-Seg is for segmentation models; yolov8s.pt is a detector model.

No, I used the export_yoloV8_seg.py file. Sorry, that was a typo.

I have used

python3 export_yoloV8_seg.py -w yolov8s-seg.pt --dynamic --simplify

but this yolov8s-seg.pt is my new custom model, trained on 3 classes.

  1. From the logs, there is no decoding output after timestamp “0:01:21.9”.
    Could you share the source file Cam1_12072301.avi? We need to check whether there is a decoding issue.
  2. Can the app run well using a DeepStream sample video?
    I did not modify the configuration or model, and the app ran fine. Here is the output log: testlog.txt (2.5 KB)

Sorry, I can’t share the video or model file.
But I have done a few things; please find the details below.

  1. I created two ONNX models from the PyTorch model using the commands below.

python3 export_yoloV8_seg.py -w yolov8s-seg.pt --dynamic --simplify

python3 export_yoloV8_seg.py -w yolov8s-seg.pt --simplify

  2. I created the setup again using the link I provided earlier and ran the following command.

deepstream-app -c deepstream_app_config.txt > log1

I am getting a new error; please find it below.
TerminalOUTPUT.txt (4.4 KB)
log1.txt (478 Bytes)
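One thing worth noting about the command above: `>` alone redirects only stdout, while nvinfer and TensorRT write their errors to stderr, which can leave log1 nearly empty (it is only 478 bytes here). Capturing both streams is a safer habit; a quick demonstration of the difference:

```shell
# The original command captures only stdout:
#   deepstream-app -c deepstream_app_config.txt > log1
# Capturing stderr as well keeps the error messages in the same file:
#   deepstream-app -c deepstream_app_config.txt > log1 2>&1
# Demonstration with a command that writes to both streams:
sh -c 'echo out; echo err >&2' > /tmp/stdout_only.txt 2>/dev/null
sh -c 'echo out; echo err >&2' > /tmp/both.txt 2>&1
wc -l < /tmp/stdout_only.txt   # one line: stderr was discarded
wc -l < /tmp/both.txt          # two lines: both streams captured
```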

Please refer to my last comment: the app ran well. Please try with my model. The export command used was: python3 export_yoloV8_seg.py -w yolov8s-seg.pt --dynamic --simplify.

Same error with your model as well.

In the description, the engine was generated successfully, but in TerminalOUTPUT.txt the app failed to generate the engine. Were there any different steps?

Please refer to this answer.

I did the setup again with the same steps, but now I am getting a different error: the engine file itself is not generated. I also tried with the model you provided, and the engine file is still not generated. I checked my steps twice but don’t know why this is happening; maybe it is some versioning problem, but I don’t know exactly.

I flashed the JetPack again, reinstalled everything, and all the standard DeepStream sample apps run. But when I try the link below

GitHub - marcoslucianops/DeepStream-Yolo-Seg: NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 implementation for YOLO-Segmentation models

and test it with the seg model that you provided, I get the same error.

Please help me out

Sorry for the late reply. Is the current system JetPack 4.6.1 + DS 6.0? Could you share the output of this command line?
/usr/src/tensorrt/bin/trtexec --fp16 --onnx=yolov8s-seg.onnx --saveEngine=1.engine --minShapes=input:1x3x640x640 --optShapes=input:1x3x640x640 --maxShapes=input:1x3x640x640 --shapes=input:1x3x640x640 --workspace=10000

Yes, it is.
Hardware Platform (Jetson / GPU) = Jetson Nano
DeepStream Version = 6.0.1
JetPack Version (valid for Jetson only) = 4.6.1
TensorRT Version = 8.2.1.8-1+cuda10.2
Python Version = 3.6.9

terminalOutput.txt (7.1 KB)

From the log, there is an error: “ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 290 [RoiAlign → “/1/RoiAlign_output_0”]:”. It occurs because that TensorRT version does not support the RoiAlign op.
Please refer to this topic.
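One way to confirm this kind of failure before running trtexec is to scan the ONNX graph for ops the installed TensorRT cannot parse. The sketch below keeps the check self-contained by working on a plain list of op-type names; with the `onnx` package installed, the real list would come from `{n.op_type for n in onnx.load("yolov8s-seg.onnx").graph.node}`. The unsupported set here is an assumption based on the parser error above, not an exhaustive support matrix:

```python
# Ops known to fail in this setup's TensorRT (8.2 on JetPack 4.6.1) -
# an assumption based on the RoiAlign parser error, not a complete list.
UNSUPPORTED_OPS = {"RoiAlign"}


def find_unsupported_ops(model_ops, unsupported=frozenset(UNSUPPORTED_OPS)):
    """Return the sorted list of ops in the model that the target TensorRT cannot parse."""
    return sorted(set(model_ops) & set(unsupported))


# Example with op types like those in the exported YOLOv8-seg graph:
print(find_unsupported_ops(["Conv", "Sigmoid", "RoiAlign", "Concat"]))
# → ['RoiAlign']
```

If the scan reports a hit, the model has to be re-exported without that op, or TensorRT has to be upgraded, before the engine can be built.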

Hello fanzh,
I referred to the link you provided, but I am not sure how to change things.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

That TensorRT version does not support the RoiAlign op, and DeepStream on Jetson Nano can’t be upgraded to a higher version; please refer to Quickstart Guide — DeepStream 6.3 Release documentation. Please ask the repo author whether they see the same issue on Jetson Nano + DS 6.0.1.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.