Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) "sp__mye3" is equal to 0.; )

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.6.1.6
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I’m running a YOLOv8-seg model inside DeepStream 7.0 and I’ve encountered this error:
ERROR: [TRT]: 1: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) “sp__mye3” is equal to 0.; )
ERROR: nvdsinfer_backend.cpp:507 Failed to enqueue trt inference batch
ERROR: nvdsinfer_context_impl.cpp:1824 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:05:23.176120329 585203 0x55d26850b440 WARN nvinfer gstnvinfer.cpp:1418:gst_nvinfer_input_queue_loop: error: Failed to queue input batch for inferencing

I understand that you expect this bug to be solved in later versions of TensorRT.

  1. Can you confirm that the bug was fixed in an updated TensorRT release?
  2. If I understand correctly, DeepStream SDK 7.0 depends on TensorRT 8.6.1.6, so what are my options to address this issue?
  3. If there is no fix available right now, is there a way to catch this error and handle it in my Python code? Currently it crashes the whole app.
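Regarding question 3: in a Python DeepStream app, pipeline errors such as the enqueue failure surface as `message::error` on the GStreamer bus, so you can intercept them instead of letting the process crash. Below is a minimal, hedged sketch of that pattern; the handler only assumes a message object exposing `parse_error()` and a loop exposing `quit()`, and the PyGObject wiring (which requires the DeepStream/GStreamer environment) is shown as comments:

```python
# Hedged sketch: handle GStreamer pipeline errors instead of crashing the app.
# The wiring at the bottom assumes a PyGObject/DeepStream environment and a
# pipeline and GLib main loop created by your application code.

def make_error_handler(loop):
    """Return a bus 'message::error' callback that logs the failure
    (e.g. the nvinfer enqueue error) and quits the main loop cleanly,
    so the caller can decide whether to restart the pipeline."""
    def on_error(bus, message):
        err, debug = message.parse_error()
        print(f"Pipeline error: {err.message}")
        if debug:
            print(f"Debug info: {debug}")
        loop.quit()  # stop the loop; the surrounding code can recover/restart
        return True
    return on_error

# Wiring inside a real DeepStream app (requires gi / PyGObject):
# bus = pipeline.get_bus()
# bus.add_signal_watch()
# bus.connect("message::error", make_error_handler(loop))
```

Note this only contains the crash so the process stays alive; it does not fix the underlying TensorRT shape-graph bug, and inference on the affected batch is still lost.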

thanks

We already fixed this internally; please watch for a later version.
This is a bug of TRT 8.6. As a workaround, you can use other DeepStream versions that do not use TRT 8.6. Please refer to this link.

I’m currently using DS 7.0 with TRT 8.6.1.6. What exact version of DS should I use? Older than 7.0? Can you please be more specific?

thanks.

This issue only exists on TensorRT 8.6; please refer to my last comment. As a workaround, you can use DS 6.3, which uses TRT 8.5.3.1.
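Since the workaround hinges on which TensorRT release the environment actually ships, it can help to check the version programmatically before deciding. A minimal sketch (assuming only that the `tensorrt` Python package, when present, exposes `__version__`):

```python
# Hedged sketch: report the TensorRT version visible to Python, to confirm
# whether the environment still ships an affected 8.6.x release.

def trt_version():
    """Return the installed TensorRT version string, or None if the
    tensorrt Python package is not importable in this environment."""
    try:
        import tensorrt
        return tensorrt.__version__
    except ImportError:
        return None

version = trt_version()
if version is None:
    print("tensorrt Python package not found")
elif version.startswith("8.6"):
    print(f"TensorRT {version}: an 8.6.x release, affected by this bug")
else:
    print(f"TensorRT {version}")
```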

I ran into the same problem on the same versions. Can I download a newer version of TensorRT? Can a newer TensorRT run directly inside DeepStream? Or can I not do that, and I can only change the DeepStream version?

Did you implement the pipeline yourself? I’m writing it in Python, and I don’t know how to nicely color the mask regions as semi-transparent overlays. Which pipeline elements are you using?

Could you use English? Thanks, the forum is public to the world.
You need to use the corresponding TRT. Please refer to this table.

Could you use English? Thanks, the forum is public to the world.
This question seems unrelated to the original topic issue. Could you open a new forum topic?

I’m sorry, I thought you were Chinese. My problem is that I read in other posts that a new version of TensorRT will fix this error. Can I use apt-get to download the new version of TensorRT? Can a TRT and DeepStream downloaded this way be used together directly, or do I have to update the DeepStream version to fix the bug? I saw in the table you sent that my DS 7.0 needs TRT 8.6.2.3. So I can just download this version of TensorRT myself, right? I don’t need to do any additional work?

Thank you for your reply. In fact, I have already opened a new post about the pipeline elements. This is the link: https://forums.developer.nvidia.cn/t/python-gstreamer-yolov8-seg-deepstream-app-c-config-txt/25122
I’m sorry it is still written in Chinese (a technical support contact I know suggested it); if it causes trouble for your understanding, I will rewrite it in English tomorrow. Looking forward to your reply.

The latest DeepStream 7.0 corresponds to TRT 8.6. You can use the later TRT versions because they are 1-to-1 corresponding. We did test DeepStream 7.0 on the later TRT versions.

But I use TRT 8.6.1.6 now. Should I update the TRT version to 8.6.2.3 to solve this bug?

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks! Please refer to my first comment: this is a bug of TRT 8.6, and updating to 8.6.2.3 can’t fix it.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.