Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only)
• TensorRT Version: 8.6.1.6
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing)
• Requirement details (This is for new requirements. Include the module name, for which plugin or sample application, and the function description)
I’m running a YOLOv8-seg model inside DeepStream 7.0 and I’ve encountered this error:
ERROR: [TRT]: 1: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) “sp__mye3” is equal to 0.; )
ERROR: nvdsinfer_backend.cpp:507 Failed to enqueue trt inference batch
ERROR: nvdsinfer_context_impl.cpp:1824 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:05:23.176120329 585203 0x55d26850b440 WARN nvinfer gstnvinfer.cpp:1418:gst_nvinfer_input_queue_loop: error: Failed to queue input batch for inferencing
from this:
I understand that you expect this bug to be solved in later versions of TensorRT.
Can you confirm that the bug was fixed in an updated TensorRT?
If I understand correctly, DeepStream SDK 7.0 depends on TensorRT 8.6.1.6, so what are my options to address this issue?
If there is no solution right now, is there a way to catch this error and handle it in my Python code? Currently it crashes the whole app.
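For reference, a minimal sketch of catching a pipeline error via a GStreamer bus watch instead of letting it take down the whole application (this assumes PyGObject/GStreamer are installed; the pipeline string below is a placeholder, not the actual DeepStream pipeline):

```python
# Sketch: intercept a pipeline error (e.g. the nvinfer enqueue failure)
# via a GStreamer bus watch instead of letting it crash the app.
# Assumes PyGObject/GStreamer; the pipeline string is a placeholder.

def handle_error(err_text, debug_text, loop):
    """Pure decision logic: log the error and stop the main loop so the
    application can clean up (or rebuild the pipeline) instead of crashing."""
    print(f"Pipeline error: {err_text}")
    if debug_text:
        print(f"Debug info: {debug_text}")
    loop.quit()
    return True  # keep the bus watch installed


def run_pipeline(pipeline_desc="videotestsrc num-buffers=10 ! fakesink"):
    # Imported lazily so handle_error stays importable without GStreamer.
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)
    pipeline = Gst.parse_launch(pipeline_desc)
    loop = GLib.MainLoop()

    def on_message(bus, message):
        if message.type == Gst.MessageType.ERROR:
            err, debug = message.parse_error()
            handle_error(err.message, debug, loop)
        elif message.type == Gst.MessageType.EOS:
            loop.quit()
        return True

    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", on_message)

    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    finally:
        pipeline.set_state(Gst.State.NULL)
```

Note that catching the error this way keeps the process alive so it can shut down or rebuild the pipeline cleanly, but the failed batch is lost, and whether inference can simply resume after this particular TRT shape-graph error is not guaranteed.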
We already fixed this internally. Please watch for the later release.
This is a bug in TRT 8.6. As a workaround, you can use other DeepStream versions that do not use TRT 8.6. Please refer to this link.
Could you write in English? Thanks, the forum is public to the world.
This question seems unrelated to the original topic. Could you open a new forum topic for it?
I’m sorry, I thought you were Chinese. My problem is that I read in other posts that a new version of TensorRT will fix this error. Can I use apt-get to download the new version of TensorRT? Can TRT and DeepStream downloaded this way be used directly, or do I have to update the DeepStream version to fix the bug? I saw in the link you sent that my DeepStream 7.0 needs TRT 8.6.2.3. So I can just download that version of TensorRT myself, right? Don’t I need to do any additional work?
Thank you for your reply. In fact, I have already opened a new post about the pipeline element. This is the link: https://forums.developer.nvidia.cn/t/python-gstreamer-yolov8-seg-deepstream-app-c-config-txt/25122
But I’m sorry, I still wrote it in Chinese (because the technical support partner I know suggested it). If it causes trouble for your understanding, I will change it to English tomorrow. Looking forward to your reply.
The latest DeepStream 7.0 corresponds to TRT 8.6. You can use the later TRT versions because they are 1-1 compatible. We did test DeepStream 7.0 on the later TRT version.
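For reference, a quick way to sanity-check the installed TensorRT version against the one listed for DeepStream in the support matrix. The version strings below are the ones from this thread; `tensorrt.__version__` is the usual source of the installed value:

```python
# Compare dotted version strings numerically, not lexically,
# so "8.6.2.3" correctly ranks above "8.6.1.6".
def version_tuple(v):
    return tuple(int(part) for part in v.split("."))

installed = "8.6.1.6"   # e.g. tensorrt.__version__ on this setup
required = "8.6.2.3"    # version mentioned for DeepStream 7.0 in this thread

print(version_tuple(installed) >= version_tuple(required))  # prints False
```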
Sorry for the late reply. Is this still a DeepStream issue to support? Thanks! Please refer to my first comment: this is a bug in TRT 8.6, and updating to 8.6.2.3 can’t fix it.