Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
GPU RTX3080TI
• DeepStream Version
6.4
• TensorRT Version
8.6.1.6-1+cuda12.0
• NVIDIA GPU Driver Version (valid for GPU only)
535.104.12
• Issue Type( questions, new requirements, bugs)
questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
I converted a yolov8x-seg.onnx model using GitHub - marcoslucianops/DeepStream-Yolo-Seg: NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 implementation for YOLO-Segmentation models.
The model works fine when I run it with deepstream-app and this config file:
config_infer_primary_yoloV8_seg.txt (696 Bytes)
But when I use the same model in a Python app, it fails with:
ERROR: [TRT]: 1: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) “sp__mye3” is equal to 0.; )
ERROR: nvdsinfer_backend.cpp:506 Failed to enqueue trt inference batch
ERROR: nvdsinfer_context_impl.cpp:1821 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:07.156768321 256 0x5634669fe1e0 WARN nvinfer gstnvinfer.cpp:1404:gst_nvinfer_input_queue_loop: error: Failed to queue input batch for inferencing
Here is my Python code:
deepstream_test1_rtsp_in_rtsp_out.txt (18.2 KB)
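One thing worth ruling out (an assumption on my part, not confirmed from the attached files): shape-graph errors like this can appear when the batch size queued by the Python pipeline differs from what the engine expects, since deepstream-app and a hand-written script may configure nvstreammux differently. Nvinfer config files are INI-style with a `[property]` group, so a small stdlib sketch can read the configured `batch-size` and compare it against the value the script sets on the muxer (the file name below is the one attached above; `streammux_batch_size` is a hypothetical variable standing in for whatever the script uses):

```python
import configparser

def get_nvinfer_batch_size(path: str) -> int:
    """Read batch-size from the [property] group of an nvinfer config file.

    Falls back to 1, the nvinfer default, if the key is absent.
    """
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return cfg.getint("property", "batch-size", fallback=1)

# Usage sketch: compare against the batch size the Python app sets on nvstreammux.
# streammux_batch_size = 1  # whatever streammux.set_property("batch-size", ...) uses
# assert get_nvinfer_batch_size("config_infer_primary_yoloV8_seg.txt") == streammux_batch_size
```

If the two values disagree, aligning them (or rebuilding the engine with a matching batch dimension) would be the first thing I'd try before digging deeper into the TensorRT error.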