Just started to explore the examples in Hello AI World. The pictures work fine, but the video streams return the following error (I also tried an RTSP feed, which failed the same way):
[gstreamer] gstDecoder -- pipeline string:
[gstreamer] filesrc location=jellyfish.mkv ! matroskademux ! queue ! h264parse ! nvv4l2decoder name=decoder enable-max-performance=1 ! video/x-raw(memory:NVMM) ! nvvidconv name=vidconv ! video/x-raw ! appsink name=mysink
[gstreamer] gstDecoder -- failed to create pipeline
[gstreamer]    (no element "nvv4l2decoder")
[gstreamer] failed to create decoder pipeline
[gstreamer] gstDecoder -- failed to create decoder for file:///jetson-inference/build/aarch64/bin/jellyfish.mkv
Traceback (most recent call last):
  File "./imagenet.py", line 59, in <module>
    input = videoSource(args.input, argv=sys.argv)
Exception: jetson.utils -- failed to create videoSource device
Hi @ai168, on JetPack 5 the GStreamer plugins get mounted into the container from the host by the NVIDIA Container Runtime, along with the GPU drivers. You can see what gets mounted in /etc/nvidia-container-runtime/host-files-for-container.d/l4t.csv
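For example, you can list the GStreamer-related entries that the runtime is supposed to mount (the exact contents vary by JetPack release, so treat this only as a sanity check):

grep gst /etc/nvidia-container-runtime/host-files-for-container.d/l4t.csv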
Can you confirm that you see these files (and that they have non-zero size) both inside and outside of the container? Are you able to run gst-inspect-1.0 | grep nv outside the container and see nvv4l2decoder listed?
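For reference, the checks might look something like this on the host (the plugin library path is the typical L4T location and is my assumption; it may differ on your release):

gst-inspect-1.0 | grep nv
ls -l /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvvideo4linux2.so

The first command should list nvv4l2decoder among the NVIDIA plugins, and the library file should have a non-zero size.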
Hi @ai168, sorry for the delay - in that case, are you able to run gst-inspect-1.0 nvv4l2decoder inside the container? How about any of the gst-launch-1.0 pipelines from the decode examples, like the one below?
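For example, a minimal decode test along those lines might be (this is just a sketch using the same jellyfish.mkv test file; substitute the demuxer/parser to match your own media):

gst-launch-1.0 filesrc location=jellyfish.mkv ! matroskademux ! h264parse ! nvv4l2decoder ! fakesink

If that runs without "no element" errors inside the container, the hardware decoder plugin is being mounted correctly and the problem is elsewhere in the pipeline.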