In my case I’m testing it on an x86 PC with the nvcr.io/nvidia/deepstream:7.0-triton-multiarch Docker image, which includes TensorRT 8.6.1.6.
Downgrading to a previous version of DeepStream is not an option for me.
Is there any workaround I can apply, until you fix it, so I can use the YOLOv8 segmentation model with DeepStream 7 (and TensorRT 8.6)?
Creating a custom TensorRT layer? (I don’t know if it’s possible to override an existing one)
Modifying the ONNX?
Building nvinfer with a try/catch? (The error seems to occur when enqueuing the buffer, but maybe the TensorRT context doesn’t recover after this.)
ERROR: [TRT]: 1: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) "sp__mye3" is equal to 0.; )
ERROR: nvdsinfer_backend.cpp:507 Failed to enqueue trt inference batch
ERROR: nvdsinfer_context_impl.cpp:1824 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
Any clue, any advice, or should I look for another segmentation model?
The next version will upgrade TRT. Modifying the model may solve the problem, but I don’t know the TRT details. You can go here to ask how to modify the model to avoid this problem.
I cannot use PeopleNet from TAO, as I want to train on other kinds of objects (multiple classes) that are not people.
I’ll ask on the TRT forum or GitHub repo whether there is a workaround to avoid the crash.
Is there any release date for the next DeepStream version, and which version of TRT will it be built with?
An ugly hack is to modify export_yoloV8_seg.py just before selected_indices = NMS.apply(...) to make it believe there is an object with 100% confidence and a size of 1 pixel in a corner, for example, and then modify nvdsparseseg_Yolo.cpp to skip this object…
I’m still experimenting to find the best solution…
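To make the hack concrete, here is a minimal sketch of the idea: always append one dummy detection before NMS so the selection can never be empty (an empty selection appears to be what triggers the division-by-zero in the shape graph). The real export script works on torch tensors inside the ONNX graph, so the names boxes/scores and this NumPy version are only illustrative, not the actual patch:

```python
import numpy as np

def add_dummy_detection(boxes: np.ndarray, scores: np.ndarray):
    """Append a 1-pixel, confidence-1.0 dummy box at the top-left corner.

    boxes:  (N, 4) in x1, y1, x2, y2 order
    scores: (N,)
    With the dummy present, NMS always selects at least one box, so the
    downstream shape tensors are never zero-sized.
    """
    dummy_box = np.array([[0.0, 0.0, 1.0, 1.0]], dtype=boxes.dtype)
    dummy_score = np.array([1.0], dtype=scores.dtype)
    return (np.concatenate([boxes, dummy_box], axis=0),
            np.concatenate([scores, dummy_score], axis=0))
```

On the parser side (nvdsparseseg_Yolo.cpp), any detection whose box exactly matches this 1×1 corner box would then be dropped before metadata is attached.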
Hello, I have encountered exactly the same problem as you. Could you show me what code to add here? Thank you very much. Looking forward to your reply.
Also, I have a question: do you use Python or C to write your pipelines? What method do you use to draw a translucent mask? I can’t find a good way to reproduce the effect of deepstream-app (the C version). What I do is add a probe in front of nvdsosd, extract the mask pixels, and draw within the mask region, but this is too slow.
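One reason the per-pixel approach in the probe is slow is drawing pixel by pixel in Python; the blend itself can be vectorized with NumPy. This is only a sketch of the blending core, assuming the probe has already mapped the frame to an RGBA array (e.g. via pyds.get_nvds_buf_surface) and resized the mask to frame resolution:

```python
import numpy as np

def blend_mask(frame_rgba: np.ndarray, mask: np.ndarray,
               color=(0, 255, 0), alpha=0.4):
    """Alpha-blend a solid color over the masked region, in place.

    frame_rgba: (H, W, 4) uint8 frame
    mask:       (H, W) boolean or 0/1 array at frame resolution
    """
    region = mask.astype(bool)
    overlay = np.array(color, dtype=np.float32)
    # Vectorized blend over the masked pixels only (alpha channel untouched).
    frame_rgba[region, :3] = (
        (1.0 - alpha) * frame_rgba[region, :3].astype(np.float32)
        + alpha * overlay
    ).round().astype(np.uint8)
    return frame_rgba
```

This replaces the Python pixel loop with one array operation per object, which is usually fast enough for a CPU probe; the pyds buffer-mapping call above is an assumption about the surrounding probe code, not shown here.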
Hello, I plan to try your model and I have browsed the repository you published. However, the pipeline.py and pipline_test.py you included do not cover the mask-drawing part of the seg model. How do you handle the semi-transparent color mask in Python? Do you have any example code for this part? @Levi_Pereira