could you share more logs? Please run "export GST_DEBUG=6" first to raise GStreamer's log level, then run again; you can redirect the logs to a file: deepstream-app -c deepstream_app_config.txt >1.log 2>&1
You can compress 1.log with zip.
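The redirection idiom above can be sketched as follows; `run_app` is a stand-in for the real `deepstream-app -c deepstream_app_config.txt` invocation, so the snippet runs anywhere:

```shell
export GST_DEBUG=6          # raise GStreamer verbosity to LOG level

run_app() {                  # stand-in that writes to both streams
  echo "stdout line"
  echo "stderr line" >&2
}

# ">1.log 2>&1" sends stdout to 1.log, then duplicates stderr onto the
# same descriptor, so both streams land in one file without clobbering.
run_app > 1.log 2>&1

cat 1.log                    # both lines are captured
```

Writing `2>1.log` as a second separate redirection opens the file twice and the two streams can overwrite each other, which is why `2>&1` is preferred.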
I have not modified the code or any configuration. The standard deepstream-app samples run fine. I am trying DeepStream-Yolo-Seg for the first time.
For gst.log I did two steps:
export GST_DEBUG=6
export GST_DEBUG_FILE=/tmp/gst.log
This stores all the GStreamer debug logs in a file, which I have shared with you; its size is around 14 MB.
for image segmentation, but they require .engine and .uff files, while I only have PyTorch and ONNX files.
Is there any document available on using a custom image segmentation model with the DeepStream Python bindings?
could you share the model? Thanks! When I execute "python3 export_yoloV8_seg.py -w yolov8s-seg.pt --dynamic", there is an error: "ModuleNotFoundError: No module named 'ultralytics.yolo'".
DeepStream nvinfer can support ONNX models. Please refer to the sample.
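For reference, a minimal sketch of the nvinfer [property] keys involved when pointing directly at an ONNX model (file names here are placeholders; nvinfer builds the TensorRT engine from the ONNX file on first run and caches it at model-engine-file):

```
[property]
# Point nvinfer at the ONNX model; no .uff or prebuilt .engine is needed.
onnx-file=yolov8s-seg.onnx
# Where the generated engine is cached after the first run.
model-engine-file=yolov8s-seg.onnx_b1_gpu0_fp16.engine
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
```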
In export_yoloV8_seg.py, change line number 11 from
from ultralytics.yolo.utils.torch_utils import select_device
to
from ultralytics.utils.torch_utils import select_device
This resolves the error "ModuleNotFoundError: No module named 'ultralytics.yolo'".
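If the script needs to work across both ultralytics package layouts (the modules moved from ultralytics.yolo.* to ultralytics.* in newer releases), a hedged fallback-import helper can try the new path first and fall back to the old one; `import_first` is a name made up for illustration:

```python
import importlib


def import_first(*module_paths, attr):
    """Return `attr` from the first module path that imports cleanly."""
    for path in module_paths:
        try:
            return getattr(importlib.import_module(path), attr)
        except (ImportError, AttributeError):
            continue  # try the next candidate layout
    raise ImportError(f"{attr!r} not found in any of {module_paths}")


# Hypothetical usage in export_yoloV8_seg.py:
# select_device = import_first(
#     "ultralytics.utils.torch_utils",       # newer ultralytics layout
#     "ultralytics.yolo.utils.torch_utils",  # older ultralytics layout
#     attr="select_device",
# )
```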
Regarding the link in your second point, we are trying exactly that, but it is not working. So I want some Python bindings, or a way in the existing deepstream-python-apps folder to use the ONNX or PyTorch file, or some way to convert it into a .uff file.
from the logs, there is no decoding output from timestamp "0:01:21.9".
could you share source file Cam1_12072301.avi? we need to check if there is decoding issue.
can the app run well if using deepstream sample video?
I did not modify the configuration or the model. The app runs fine. Here is the output log: testlog.txt (2.5 KB)
please refer to my last comment. The app ran well. Please try with my model. The export command used was: python3 export_yoloV8_seg.py -w yolov8s-seg.pt --dynamic --simplify
In the description, the engine was generated successfully, but in TerminalOUTPUT.txt the app failed to generate the engine. Were there any different steps?
I have redone the setup with the same steps but am now getting a different error: the engine file itself is not being generated. I also tried with the model you provided, and the engine file is still not generated. I checked my steps twice but don't know why this is happening; it may be a versioning problem, but I don't know exactly.
Sorry for the late reply. Is the current system JetPack 4.6.1 + DS 6.0? Could you share the output of this command line? /usr/src/tensorrt/bin/trtexec --fp16 --onnx=yolov8s-seg.onnx --saveEngine=1.engine --minShapes=input:1x3x640x640 --optShapes=input:1x3x640x640 --maxShapes=input:1x3x640x640 --shapes=input:1x3x640x640 --workspace=10000
yes, it is:
Hardware Platform (Jetson / GPU) = Jetson Nano
DeepStream Version = 6.0.1
JetPack Version (valid for Jetson only) = 4.6.1
TensorRT Version = 8.2.1.8-1+cuda10.2
Python version = 3.6.9
from the log, there is an error: 'ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 290 [RoiAlign -> "/1/RoiAlign_output_0"]:'. It is because this TensorRT version does not support the RoiAlign op.
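To spot ops like RoiAlign before handing a model to TensorRT, you can scan the ONNX graph's operator list. A small sketch follows; the pure-Python check is runnable as-is, while the graph walk assumes the onnx package is installed and is therefore shown commented out:

```python
def unsupported_ops(op_types, unsupported=("RoiAlign",)):
    """Return the sorted subset of op_types that appears in `unsupported`
    (the caller supplies the ops their TensorRT build cannot parse)."""
    return sorted(set(op_types) & set(unsupported))


# With the onnx package installed (assumption), collect op types like this:
# import onnx
# model = onnx.load("yolov8s-seg.onnx")
# print(unsupported_ops(n.op_type for n in model.graph.node))
```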
please refer to this topic.