Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) “sp__mye3” is equal to 0.; )

Description

Running a YOLOv8s-seg ONNX model through DeepStream's nvinfer (and building/running it directly with trtexec) on a Jetson Orin Nano fails at inference time with TensorRT Error Code 1: Myelin division by zero in the shape graph (divisor tensor "sp__mye3" is equal to 0).

Environment

TensorRT Version: 8.6.2.3-1+cuda12.2
GPU Type: Jetson Orin Nano
Nvidia Driver Version: NVIDIA-SMI 540.2.0
CUDA Version: 12.2
CUDNN Version:
Operating System + Version: Ubuntu 22.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

The model export guide, nvinfer configuration, and build command used to reproduce the issue are included under Steps To Reproduce below.

Steps To Reproduce

Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) “sp__mye3” is equal to 0.; )

GitHub link: marcoslucianops/DeepStream-Yolo-Seg, docs/YOLOv8_Seg.md (master branch)

Error:

nvstreammux: Successfully handled EOS for source_id=0
ERROR: [TRT]: 1: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) “sp__mye3” is equal to 0.; )
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:11.744366453 15239 0xaaab371ad600 WARN nvinfer gstnvinfer.cpp:1418:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR: [TRT]: 1: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) “sp__mye3” is equal to 0.; )
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:11.803543873 15239 0xaaab371ad600 WARN nvinfer gstnvinfer.cpp:1418:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1418): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1418): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
ERROR: [TRT]: 1: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) “sp__mye3” is equal to 0.; )
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:11.852672214 15239 0xaaab371ad600 WARN nvinfer gstnvinfer.cpp:1418:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR: [TRT]: 1: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) “sp__mye3” is equal to 0.; )
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:11.911965380 15239 0xaaab371ad600 WARN nvinfer gstnvinfer.cpp:1418:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
App run failed

config_yoloV8n_seg.txt:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=yolov8s-seg.onnx
model-engine-file=yolov8s-seg.onnx_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=3
cluster-mode=4
maintain-aspect-ratio=1
symmetric-padding=1
#workspace-size=2000
parse-bbox-instance-mask-func-name=NvDsInferParseYoloSeg
custom-lib-path=nvdsinfer_custom_impl_Yolo_seg/libnvdsinfer_custom_impl_Yolo_seg.so
output-instance-mask=1
segmentation-threshold=0.5

[class-attrs-all]
pre-cluster-threshold=0.25
topk=100
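As a side note on the config above: nvinfer preprocesses pixels as y = net-scale-factor * (x - mean), so the long net-scale-factor value is just the usual 1/255 normalization that maps 0..255 pixel values into 0..1. A quick sanity check, assuming nothing beyond the config value itself:

```python
# nvinfer preprocessing: y = net-scale-factor * (x - mean)
# The config's net-scale-factor should be ~1/255, mapping 0..255 pixels to 0..1.
net_scale_factor = 0.0039215697906911373  # value from config_yoloV8n_seg.txt

# A full-intensity pixel (255) should scale to approximately 1.0.
scaled = net_scale_factor * 255
assert abs(scaled - 1.0) < 1e-4, scaled
print(f"255 * {net_scale_factor} = {scaled:.7f}")
```

This rules out the normalization factor as the cause; the divide-by-zero comes from the shape graph, not from preprocessing.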

trtexec --onnx=yolov8s-seg.onnx --saveEngine=model.engine --fp16 --verbose

It returns the same error:

[07/11/2024-09:51:28] [E] Error[1]: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) “sp__mye3” is equal to 0.; )
[07/11/2024-09:51:28] [E] Error occurred during inference
&&&& FAILED TensorRT.trtexec [TensorRT v8602] # trtexec --onnx=yolov8s-seg.onnx --saveEngine=model.engine --fp16 --verbose
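Since trtexec fails the same way outside DeepStream, this class of Myelin shape-graph error usually points at a dynamic dimension in the exported ONNX resolving to 0 at runtime. One thing worth trying (a sketch, not a verified fix): pin the dynamic axes to explicit values when building the engine. The --minShapes/--optShapes/--maxShapes flags are standard trtexec options; the input tensor name "images" is an assumption based on the usual Ultralytics YOLOv8 export and should be checked against the actual model (e.g. in the --verbose output or with Netron).

```shell
# Sketch: pin the dynamic batch axis to a fixed 1x3x640x640 shape at build time.
# "images" is the typical input name for Ultralytics YOLOv8 ONNX exports --
# verify it against your model before running.
trtexec --onnx=yolov8s-seg.onnx \
        --saveEngine=model.engine \
        --fp16 \
        --minShapes=images:1x3x640x640 \
        --optShapes=images:1x3x640x640 \
        --maxShapes=images:1x3x640x640
```

If the engine builds and runs with pinned shapes, the problem is in how the dynamic axes were declared during ONNX export rather than in DeepStream itself.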

ERROR: [TRT]: 1: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) “sp__mye3” is equal to 0.; )
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:10.833192426 12399 0xaaab1403b6a0 WARN nvinfer gstnvinfer.cpp:1418:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
App run failed


Hi @abhijeetthakare241093,
Based on the error, please let me know if you find this thread helpful.

If not, I would recommend reaching out to the TensorRT forum.

Thanks

The issue is solved now with a newly installed setup:
JetPack: 5.1.2
DeepStream: 6.3
GPU Type: Jetson Orin Nano
Nvidia Driver Version: NVIDIA-SMI 540.2.0
CUDA Version: 11.4