• NVIDIA GPU Driver Version (valid for GPU only)
NVIDIA-SMI 510.73.05 Driver Version: 510.73.05 CUDA Version: 11.6
• Issue Type (questions, new requirements, bugs)
I have found a DeepStream defect where using a videorate filter after a uridecodebin3 element in a multi-GPU environment throws the exception below. This is relevant for this forum because uridecodebin3 uses nvv4l2decoder and stores its output buffers in GPU memory.
GPUassert_VPI: VPI_ERROR_INVALID_OPERATION Container doesn't have any of the necessary backends enabled src/modules/cuDCFv2/featureExtractor.cu 527
GPUassert: invalid device function src/modules/cuDCFv2/cuDCFFrameTransformTexture.cu 694
When running on a single GPU, a videorate element after a uridecodebin3 element behaves correctly with DeepStream.
I will have to write a minimal Python example. From what I can tell, forcing the gpu-id on the nvv4l2decoder created by uridecodebin3 (via the ‘pad-added’ callback) interferes with videorate and the downstream inference engines when running with multiple GPUs.
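Until I have the full minimal example, a sketch of the callback logic I mean is below. The callback name, TARGET_GPU_ID, and the name check are illustrative assumptions; in a real pipeline the function would be connected to the bin's "child-added" (or ‘pad-added’) signal via PyGObject, and `element` would be a Gst.Element.

```python
# Hypothetical sketch: pin the decoder that uridecodebin3 creates to a
# specific GPU. TARGET_GPU_ID and the callback signature are assumptions.

TARGET_GPU_ID = 1  # assumption: the GPU the decoder should use

def on_decoder_created(bin_, element, name, user_data=None):
    """Force gpu-id on any nvv4l2decoder that uridecodebin3 creates."""
    if name.startswith("nvv4l2decoder"):
        # nvv4l2decoder exposes a gpu-id property selecting the decode GPU
        element.set_property("gpu-id", TARGET_GPU_ID)
```

In a real pipeline this would be wired up with something like `decode_bin.connect("child-added", on_decoder_created, None)`, where `set_property` is the normal GObject property setter.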
I have just tested another config, and this error disappeared.
The difference is in the nvtracker plugin’s config: ll-lib-file stays /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so, but ll-config-file changed from a modified config_tracker_NvDCF_xxx.yml to config_tracker_IOU.yml.
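For clarity, the only difference between the failing and working runs is this part of the tracker config (section/key names as in a standard deepstream-app config file; the exact surrounding layout is an assumption):

```
[tracker]
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# failing run (modified NvDCF config):
# ll-config-file=config_tracker_NvDCF_xxx.yml
# working run:
ll-config-file=config_tracker_IOU.yml
```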
I can almost pinpoint the error source: the NvDCF implementation in libnvds_nvmultiobjecttracker.so.
By the way, I don’t use videorate to adjust the frame rate; I use the drop-frame-interval property of the nvv4l2decoder plugin.
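For reference, decoder-side frame dropping looks roughly like this gst-launch sketch (file name, demuxer/parser choice, and sink are placeholders for my actual pipeline; a drop-frame-interval of 5 makes the decoder output only every 5th frame):

```
gst-launch-1.0 filesrc location=sample.mp4 ! qtdemux ! h264parse ! \
    nvv4l2decoder drop-frame-interval=5 ! fakesink
```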
I should have read the message more carefully; it does indicate NvDCF. I found that this issue takes a while to eventuate (it was not easy to produce a deterministic test), so I can run more tests with both trackers to confirm this more thoroughly.