Exception: jetson.utils -- failed to create videoSource device

I am learning about the Jetson Orin and trying to implement AI image classification following this tutorial with jetson-inference.

The image classification works fine with still pictures, but when it is applied to the jellyfish video, there is a problem with the video streaming.

The error is shown below.

[gstreamer] gstDecoder -- failed to create pipeline
[gstreamer] (no element "nvv4l2decoder")
[gstreamer] gstDecoder -- failed to create decoder for file:///home/sesotec-ai-2/tk_ws/src/jetson-inference/build/aarch64/bin/jellyfish.mkv
Traceback (most recent call last):
File "./imagenet.py", line 58, in <module>
input = videoSource(args.input_URI, argv=sys.argv)
Exception: jetson.utils -- failed to create videoSource device

I also checked the video devices:

sesotec-ai-2@Sesotec-AI-2:~/tk_ws/src/recognition$ v4l2-ctl --list-devices

NVIDIA Tegra Video Input Device (platform:tegra-camrtc-ca):
/dev/media0

HD Pro Webcam C920 (usb-3610000.xhci-4.2):
/dev/video0
/dev/video1
/dev/media1

I kindly request your help.

Hi @karishmathumu, it appears your system doesn't have the nvv4l2decoder GStreamer element installed, which comes as part of JetPack-L4T. Which Jetson device and version of JetPack-L4T are you running? Is it a Jetson AGX Orin with JetPack 5.x?

Can you run this command on your system to check which NVIDIA GStreamer elements are installed?

$ gst-inspect-1.0 | grep nvv4l2
nvv4l2camerasrc:  nvv4l2camerasrc: NvV4l2CameraSrc
nvvideo4linux2:  nvv4l2decoder: NVIDIA v4l2 video decoder
nvvideo4linux2:  nvv4l2h264enc: V4L2 H.264 Encoder
nvvideo4linux2:  nvv4l2h265enc: V4L2 H.265 Encoder
nvvideo4linux2:  nvv4l2vp8enc: V4L2 VP8 Encoder
nvvideo4linux2:  nvv4l2vp9enc: V4L2 VP9 Encoder
nvvideo4linux2:  nvv4l2av1enc: V4L2 AV1 Encoder
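
Also, to confirm which version of L4T you're on, something along these lines should work (just a quick check; the exact output will vary with your release):

$ cat /etc/nv_tegra_release           # prints the L4T release string (e.g. R34.x, R35.x)
$ dpkg-query --show nvidia-l4t-core   # shows the installed nvidia-l4t-core package version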

Thank you for your quick response. As far as I know, it is a Jetson AGX Orin.

When I ran the command you suggested, the following appeared.

sesotec-ai-2@Sesotec-AI-2:~$ gst-inspect-1.0 | grep nvv4l2
nvv4l2camerasrc: nvv4l2camerasrc: NvV4l2CameraSrc

sesotec-ai-2@Sesotec-AI-2:~$ dpkg-query --show nvidia-l4t-core
nvidia-l4t-core 34.1.1-20220516211757

Is there a way to install nvv4l2decoder, please? I have tried:
sudo apt-get install v4l-utils

Regards,
Karishma

Can you try installing the nvidia-l4t-gstreamer package from apt?
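
Something along these lines should pull it in and let you verify the decoder afterwards (a rough sketch; --reinstall is used here in case the package is already partially installed):

$ sudo apt-get update
$ sudo apt-get install --reinstall nvidia-l4t-gstreamer   # provides the NVIDIA GStreamer plugins, including nvv4l2decoder
$ rm -rf ~/.cache/gstreamer-1.0                           # clear the GStreamer plugin cache so it gets re-scanned
$ gst-inspect-1.0 nvv4l2decoder                           # should now print the element details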

If that doesn’t work, I recommend re-flashing the device with SDK Manager, as this stuff is normally already installed on the device.

  1. As you suggested, I ran sudo apt-get install nvidia-l4t-gstreamer
  2. and then went into the build directory and ran "make" and "sudo make install" (the full sequence I used is sketched below this list)
  3. and then went into bin and ran
    wget https://nvidia.box.com/shared/static/tlswont1jnyu3ix2tbf7utaekpzcx4rc.mkv -O jellyfish.mkv
  4. and then ran
    ./imagenet.py --network=resnet-18 jellyfish.mkv images/test/jellyfish-resnet18.mkv
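
In full, the sequence I ran was roughly this (paths are from my workspace):

cd ~/tk_ws/src/jetson-inference/build     # rebuild jetson-inference after installing the package
make
sudo make install
cd aarch64/bin                            # the built binaries and Python scripts live here
wget https://nvidia.box.com/shared/static/tlswont1jnyu3ix2tbf7utaekpzcx4rc.mkv -O jellyfish.mkv
./imagenet.py --network=resnet-18 jellyfish.mkv images/test/jellyfish-resnet18.mkv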

But the error remains the same.

[gstreamer] gstDecoder -- pipeline string:
[gstreamer] filesrc location=jellyfish.mkv ! matroskademux ! queue ! h264parse ! nvv4l2decoder ! video/x-raw(memory:NVMM) ! nvvidconv ! video/x-raw ! appsink name=mysink
[gstreamer] gstDecoder -- failed to create pipeline
[gstreamer] (no element "nvv4l2decoder")
[gstreamer] gstDecoder -- failed to create decoder for file:///home/sesotec-ai-2/tk_ws/src/jetson-inference/build/aarch64/bin/jellyfish.mkv
Traceback (most recent call last):
File "./imagenet.py", line 58, in <module>
input = videoSource(args.input_URI, argv=sys.argv)
Exception: jetson.utils -- failed to create videoSource device

I will try to re-flash the device with SDK manager as you said and see if it works. Will inform you soon.

Thank you very much. It is really kind of you.

Regards,
Karishma

Hi Sir,

I have gone along with the next steps. I ran the image recognition program with the code you gave in the tutorial. It worked well with the black bear, polar bear, and brown bear images.

I then wanted to try with images of my own, so I added two images to my GitHub and downloaded them with wget, the same as with the bear examples.

But when I ran ./my.py sea.jpg, it produced a huge error in red, and now the 'sesotec-ai-2@Sesotec-AI-2:' prompt and every line after it are entirely red.

Could you please tell me where I went wrong and how to rectify it?

Regards,
Karishma

Hi @karishmathumu, it looks like you actually downloaded the HTML GitHub page of sea.jpg, and not the raw sea.jpg itself. If you navigate to the GitHub page of your image, right click on the Download button and copy the URL from there. It should have raw in it.
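
For example, something along these lines (the username/repository path here is just a placeholder; substitute your own):

$ wget https://github.com/<user>/<repo>/raw/main/sea.jpg -O sea.jpg   # -O saves it under a fixed name instead of sea.jpg.1, sea.jpg.2, ...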

Hi Sir,

I tried to do as you said, but it still does not seem to work. Have I made a mistake somewhere?

I ran wget https://github.com/karishmathumu/karishmathumu/raw/main/sea.jpg

The following resulted.

Regards,
Karishma

If you check the output, wget saved your second download to sea.jpg.2. So do the following:

rm sea.jpg
mv sea.jpg.2 sea.jpg
./my.py sea.jpg

Hi Sir,

I think it worked. I had forgotten to add the number suffix to sea.jpg.

This time I ran it as "./my.py sea.jpg.3", because when I ran the wget (URL) it displayed:
"2022-08-18 14:26:27 (2,37 MB/s) - 'sea.jpg.3' saved [98118/98118]"

About the issue with videos and the camera: I found that none of the parts of the tutorial involving the live camera, video image detection, or video object detection are working on my system. All of them hit the same issue, both for the test video:

Download test video

wget https://nvidia.box.com/shared/static/veuuimq6pwvd62p9fresqhrrmfqz0e2f.mp4 -O pedestrians.mp4

C++

./detectnet pedestrians.mp4 images/test/pedestrians_ssd.mp4

Python

./detectnet.py pedestrians.mp4 images/test/pedestrians_ssd.mp4
[gstreamer] gstDecoder -- pipeline string:
[gstreamer] filesrc location=pedestrians.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! video/x-raw(memory:NVMM) ! nvvidconv ! video/x-raw ! appsink name=mysink
[gstreamer] gstDecoder -- failed to create pipeline
[gstreamer] (no element "nvv4l2decoder")
[gstreamer] gstDecoder -- failed to create decoder for file:///home/sesotec-ai-2/tk_ws/src/jetson-inference/build/aarch64/bin/pedestrians.mp4
Traceback (most recent call last):
File "./detectnet.py", line 51, in <module>
input = videoSource(args.input_URI, argv=sys.argv)
Exception: jetson.utils -- failed to create videoSource device

and for Live Camera Recognition: ./imagenet.py /dev/video0 output.mp4

I am sorry for bothering you. If you would like me to, I will create a new request.

Kind regards,
Karishma


Thank you very much, Sir, for your patience and great help.

Regards,
Karishma

It says that you are missing the nvv4l2decoder element again... I'm not sure what is happening to it.

Regarding the camera, do you have a camera plugged in? Is it MIPI CSI or USB? What does ls /dev/video* show?

It is a Logitech USB camera.

ls /dev/video*

shows these two

/dev/video0 /dev/video1

OK, can you try running imagenet.py with /dev/video1 instead?

I ran it like this: ./imagenet.py /dev/video1

It shows this error: "failed to create videoSource device"


For ./imagenet.py /dev/video0

Can you copy and paste the terminal log from running the command ./video-viewer /dev/video1?

It seems there is some problem with using your USB camera - not sure if it’s related to jetson-inference/jetson-utils or not. You might want to try viewing your USB camera on some other app on your Jetson to make sure it’s working okay first.
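
For a quick check outside of jetson-inference, a plain GStreamer pipeline along these lines should preview the camera if it's working (a sketch; run it from a desktop session so a window can open):

$ gst-launch-1.0 v4l2src device=/dev/video1 ! videoconvert ! autovideosink   # or try device=/dev/video0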

sesotec-ai-2@Sesotec-AI-2:~/tk_ws/src/jetson-inference/build/aarch64/bin$ ./video-viewer /dev/video1
[gstreamer] initialized gstreamer, version 1.16.3.0
[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video1
[gstreamer] gstCamera -- found v4l2 device: HD Pro Webcam C920
[gstreamer] v4l2-proplist, device.path=(string)/dev/video0, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"HD\ Pro\ Webcam\ C920", v4l2.device.bus_info=(string)usb-3610000.xhci-4.2, v4l2.device.version=(uint)330305, v4l2.device.capabilities=(uint)2225078273, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera -- could not find v4l2 device /dev/video1
[gstreamer] gstCamera -- device discovery failed, but /dev/video1 exists
[gstreamer] support for compressed formats is disabled
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video1 do-timestamp=true ! nvvidconv flip-method=0 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully created device v4l2:///dev/video1
[video] created gstCamera from v4l2:///dev/video1

gstCamera video options:

-- URI: v4l2:///dev/video1
- protocol: v4l2
- location: /dev/video1
- port: 1
-- deviceType: v4l2
-- ioType: input
-- codec: unknown
-- width: 1280
-- height: 720
-- frameRate: 30.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
-- rtspLatency 2000

[OpenGL] glDisplay -- X screen 0 resolution: 3840x1080
[OpenGL] glDisplay -- X window resolution: 3840x1080
[OpenGL] glDisplay -- display device initialized (3840x1080)
[video] created glDisplay from display://0

glDisplay video options:

-- URI: display://0
- protocol: display
- location: 0
-- deviceType: display
-- ioType: output
-- codec: raw
-- width: 3840
-- height: 1080
-- frameRate: 0.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
-- rtspLatency 2000

[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstCamera failed to set pipeline state to PLAYING (error 0)
video-viewer: failed to capture video frame
video-viewer: shutting down…
video-viewer: shutdown complete

I’m not sure why, but it has trouble discovering the different modes/formats of your camera.

Can you try running this:

sudo apt-get install v4l-utils
v4l2-ctl --device=/dev/video1 --list-formats-ext

Are you able to view your USB camera through other programs?

Good morning Sir,

Here is the result of the command you asked me to try:

I am not sure how to check the USB camera with other programs. Could you give me any hints?

I am sorry, I forgot to mention that I am accessing the Jetson Nano over an SSH connection. Might this be the issue?

Regards,
Karishma