Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) “sp__mye3” is equal to 0.; )

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) nano
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only) 6.0
• TensorRT Version 8.6.2.3-1+cuda12.2
• NVIDIA GPU Driver Version (valid for GPU only) NVIDIA-SMI 540.2.0
• Issue Type( questions, new requirements, bugs)

Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) “sp__mye3” is equal to 0.; )

• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

When converting the YOLOv8n model into TensorRT engine format, it returned the errors below:

Github Link : DeepStream-Yolo-Seg/docs/YOLOv8_Seg.md at master · marcoslucianops/DeepStream-Yolo-Seg · GitHub

Error:

nvstreammux: Successfully handled EOS for source_id=0
ERROR: [TRT]: 1: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) “sp__mye3” is equal to 0.; )
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:11.744366453 15239 0xaaab371ad600 WARN nvinfer gstnvinfer.cpp:1418:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR: [TRT]: 1: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) “sp__mye3” is equal to 0.; )
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:11.803543873 15239 0xaaab371ad600 WARN nvinfer gstnvinfer.cpp:1418:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1418): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1418): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
ERROR: [TRT]: 1: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) “sp__mye3” is equal to 0.; )
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:11.852672214 15239 0xaaab371ad600 WARN nvinfer gstnvinfer.cpp:1418:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR: [TRT]: 1: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) “sp__mye3” is equal to 0.; )
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:11.911965380 15239 0xaaab371ad600 WARN nvinfer gstnvinfer.cpp:1418:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
App run failed

config_yoloV8n_seg.txt:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=yolov8s-seg.onnx
model-engine-file=yolov8s-seg.onnx_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=3
cluster-mode=4
maintain-aspect-ratio=1
symmetric-padding=1
#workspace-size=2000
parse-bbox-instance-mask-func-name=NvDsInferParseYoloSeg
custom-lib-path=nvdsinfer_custom_impl_Yolo_seg/libnvdsinfer_custom_impl_Yolo_seg.so
output-instance-mask=1
segmentation-threshold=0.5

[class-attrs-all]
pre-cluster-threshold=0.25
topk=100

This seems to be a duplicate of Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) “sp__mye3” is equal to 0.; ) - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums.

Please use the "trtexec" tool to generate the TensorRT engine first:

trtexec --onnx=yolov8s-seg.onnx --saveEngine=model.engine --fp16 --verbose
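Since the Myelin error is a division by zero in the shape graph, one common workaround (a hedged sketch, not a confirmed fix for this model) is to pin every dynamic axis to a fixed value at build time so no dimension can evaluate to zero. The input tensor name "input" and the 1x3x640x640 shape below are assumptions based on typical YOLOv8-seg exports — substitute the real input name and shape reported in the trtexec --verbose log:

```shell
# Hypothetical variant of the command above: pin the optimization profile so
# min == opt == max and the shape graph never sees a zero-sized dimension.
# "input" and 1x3x640x640 are assumed names/shapes; check your ONNX export.
trtexec --onnx=yolov8s-seg.onnx \
        --saveEngine=model.engine \
        --fp16 \
        --minShapes=input:1x3x640x640 \
        --optShapes=input:1x3x640x640 \
        --maxShapes=input:1x3x640x640
```

If the model was exported with a fully static batch dimension instead, the shape flags are unnecessary and trtexec will ignore dynamic-profile settings.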

It returned an error:

[07/11/2024-09:51:28] [E] Error[1]: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) “sp__mye3” is equal to 0.; )
[07/11/2024-09:51:28] [E] Error occurred during inference
&&&& FAILED TensorRT.trtexec [TensorRT v8602] # trtexec --onnx=yolov8s-seg.onnx --saveEngine=model.engine --fp16 --verbose

Although the above error occurred, a model.engine file was still saved in the directory. We ran model.engine and it returned the same error again:

ERROR: [TRT]: 1: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) “sp__mye3” is equal to 0.; )
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:10.833192426 12399 0xaaab1403b6a0 WARN nvinfer gstnvinfer.cpp:1418:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
App run failed

Please raise a topic in the TensorRT forum.
Latest Deep Learning (Training & Inference)/TensorRT topics - NVIDIA Developer Forums

Issue is now solved with the following setup:
JetPack: 5.1.2
DeepStream: 6.3
GPU Type: Orin Nano
NVIDIA Driver Version: NVIDIA-SMI 540.2.0
CUDA Version: CUDA 11.4
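For anyone checking whether their board matches this working setup, the versions can be read back on the device itself with standard Jetson/DeepStream commands (paths are the JetPack defaults):

```shell
# L4T/JetPack release string baked into the board image.
cat /etc/nv_tegra_release

# Installed TensorRT (libnvinfer) and CUDA toolkit packages.
dpkg -l | grep -E 'nvinfer|cuda-toolkit'

# DeepStream prints its own version plus the dependency versions it was built with.
deepstream-app --version-all
```

Mismatches between the TensorRT version that built the engine and the one running it are a frequent cause of engine-load and enqueue failures, so these are worth confirming before rebuilding.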

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.