• Hardware Platform (dGPU)
• DeepStream Version 6.0
• TensorRT Version 8.2.3-1+cuda11.4
• NVIDIA GPU Driver Version 495.29.05 and 470.103.01
• Issue Type (bugs)
• How to reproduce the issue? Enable output-tensor-meta on a keypoint model
This is a strange one.
We have built a pipeline in DS6 using the Python bindings. It's a simple filesrc → pgie (detector) → sgie (classifier, etc.) → filesink pipeline. We have tested this with both TAO and in-house-trained models as the pgie, and it all works fine. We have also tried mixing and matching Triton inference (nvinferserver) and plain nvinfer; it doesn't seem to change anything.
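Roughly, the pipeline has the shape below. This is a minimal parse_launch sketch rather than our actual code (which builds the elements individually with pyds); the file names, resolutions and config paths are placeholders.

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Placeholder locations/configs; the real pipeline is built element by element.
pipeline = Gst.parse_launch(
    "filesrc location=test.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! "
    "m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! "
    "nvinfer config-file-path=config/pgie_detector.txt ! "
    "nvinfer config-file-path=config/sgie_classifier.txt ! "
    "nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! "
    "filesink location=out.mp4"
)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)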
However, we would now like to add a keypoint model, specifically mmpose hrnet_lite (we have tried other models as well, so it's not just this one). As you are aware, DeepStream does not natively support keypoint post-processing, so we need to write our own post-processor. No big deal. However, as you are also aware, setting the post-processor to
postprocess { other { } }
produces this warning:
warning: Network(uid: 4) is defined for other postprocessing but output_tensor_meta is disabled to attach. If needed, please update output_control.output_tensor_meta: true in config file: config/keypoints_inferserver.txt.
Without output_tensor_meta enabled, the pipeline runs fine: the keypoint model lowers the FPS from 250 to 7, but it runs to completion, of course with nothing output for the keypoint model.
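For clarity, the relevant part of config/keypoints_inferserver.txt with the flag enabled looks roughly like this (nvinferserver text-protobuf format, everything else omitted):

infer_config {
  unique_id: 4
  postprocess {
    other { }
  }
}
output_control {
  output_tensor_meta: true
}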
When output_tensor_meta is set to true, however, the pipeline never starts; it freezes just before inference begins on the video. There is no error and no segfault, and I can't even kill the pipeline with an interrupt. It just sits there forever. We're not even trying to do anything with the tensor meta yet.
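For context, once the pipeline actually runs, the plan is to pull the raw keypoint tensors out of the attached NvDsInferTensorMeta in a pad probe on the keypoint sgie's src pad, roughly like the sketch below (the probe placement, unique_id check and heatmap decoding are illustrative, not code that is currently running):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

KEYPOINT_UID = 4  # matches Network(uid: 4) in the warning above

def keypoint_sgie_src_probe(pad, info, u_data):
    buf = info.get_buffer()
    if not buf:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(buf))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            l_user = obj_meta.obj_user_meta_list
            while l_user is not None:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                if user_meta.base_meta.meta_type == \
                        pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                    tensor_meta = pyds.NvDsInferTensorMeta.cast(
                        user_meta.user_meta_data)
                    if tensor_meta.unique_id == KEYPOINT_UID:
                        for i in range(tensor_meta.num_output_layers):
                            layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                            # layer.buffer holds the raw output tensor;
                            # heatmap -> keypoint decoding would go here.
                l_user = l_user.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK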
I've attached a screenshot of both docker stats and nvidia-smi; as you can see, the process is not releasing the RAM or VRAM, but it is also not using the GPU at all.
Another thing to note: it also works fine on videos with very few instances of the objects we are detecting (2 bounding boxes work), but our usual test video contains anywhere from 8 to 12 at a time.
Any help would be appreciated; we've been stuck on this for months.