Hello,

I'm trying to use NVIDIA's Vision AI Movenet container (`nvcr.io/nvidia/ace/vision-ai-movenet:0.1.82`) to run inference on local images and videos. However, when I run the following command, the container fails with the error `Could not configure supporting library` and does not function correctly.
```
sudo docker run --gpus all -it --rm \
  -p 8000:8000 \
  nvcr.io/nvidia/ace/vision-ai-movenet:0.1.82
```
Partial error log:

```
Error from ...: Could not configure supporting library.
unable to connect to broker library
Failed to set GStreamer pipeline to PLAYING
...
```
Environment:
- Cloud platform: Azure VM (Standard NC6s v3, Tesla V100)
- OS: Ubuntu 22.04
- NVIDIA Driver: 565.57.01
- CUDA Version: 12.6 (installed) / 12.7 (reported by `nvidia-smi`)
- Docker: 20.10.x
What I've tried:
✅ Verified with `nvcc --version` that CUDA 12.6 is installed
✅ Checked `nvidia-smi` to confirm the GPU and driver are working correctly
✅ Retrieved CUDA version information from /usr/local/cuda-12.6/version.json
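For reference, the host-side checks above can be run together as a quick sanity script. This is just a sketch of what I ran; the `/usr/local/cuda-12.6` path assumes a default toolkit install location:

```shell
#!/usr/bin/env bash
# Host-side sanity checks before starting the container.
# Assumes the CUDA 12.6 toolkit lives at the default /usr/local/cuda-12.6 path.
set -e

nvcc --version                         # toolkit-side CUDA version (12.6)
nvidia-smi                             # driver, GPU, and driver-reported CUDA (12.7)
cat /usr/local/cuda-12.6/version.json  # installed toolkit metadata
```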
Questions:
- What is the correct way to use Movenet for inference on local images and videos?
- What could be causing the `Could not configure supporting library` error, and how can I fix it?
- How can I verify that the container is correctly utilizing the GPU at runtime?
I am new to DeepStream and Movenet, and I’ve been checking the official documentation, but I am struggling to figure this out.
Any advice or guidance would be greatly appreciated! Thank you in advance.