./video-viewer /dev/video0 from github Hello AI World failing

Hi, I’ve already successfully worked through the NVIDIA DLI course “Getting Started with AI on Jetson Nano,” and I know my Logitech C270 webcam works well, since I was able to complete the labs within the JupyterLab environment. But as I work through Hello AI World (GitHub - dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson), I’m having issues with the video-viewer application.

I think it might have something to do with the X server and X11 forwarding over SSH. I’ve tried both PuTTY and MobaXterm, but didn’t have success with either.
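As a quick sanity check before launching anything graphical, you can verify in the SSH session that X11 forwarding is actually active (a minimal sketch; the exact DISPLAY value will differ per session):

```shell
# If ssh -X/-Y succeeded, DISPLAY is normally set to something like localhost:10.0
echo "DISPLAY=${DISPLAY:-<unset>}"

# A trivial X client such as xclock or xeyes is the usual smoke test:
# xclock
```

If DISPLAY is unset, the forwarding never came up and no X application will open, regardless of the terminal program used.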

Below is my output with the associated error.

By the way, the step I have issues with comes at 8:35 in the associated YouTube video by @dusty_nv, where a camera feed pops up nicely in the demo.

branislav@branislav-desktop:~$ cd jetson-inference/
branislav@branislav-desktop:~/jetson-inference$
branislav@branislav-desktop:~/jetson-inference$ docker/run.sh
reading L4T version from /etc/nv_tegra_release
L4T BSP Version: L4T R32.4.4
size of data/networks: 535209937 bytes
CONTAINER: dustynv/jetson-inference:r32.4.4
DATA_VOLUME: --volume /home/branislav/jetson-inference/data:/jetson-inference/data --volume /home/branislav/jetson-inference/python/training/classification/data:/jetson-inference/python/training/classification/data --volume /home/branislav/jetson-inference/python/training/classification/models:/jetson-inference/python/training/classification/models --volume /home/branislav/jetson-inference/python/training/detection/ssd/data:/jetson-inference/python/training/detection/ssd/data --volume /home/branislav/jetson-inference/python/training/detection/ssd/models:/jetson-inference/python/training/detection/ssd/models
USER_VOLUME:
USER_COMMAND:
V4L2_DEVICES: --device /dev/video0
PuTTY X11 proxy: unable to connect to forwarded X server: Network error: Connection refused
xhost: unable to open display "localhost:10.0"

root@branislav-desktop:/jetson-inference# exit
exit

branislav@branislav-desktop:~/jetson-inference$ export DISPLAY=:0

branislav@branislav-desktop:~/jetson-inference$ docker/run.sh
reading L4T version from /etc/nv_tegra_release
L4T BSP Version: L4T R32.4.4
size of data/networks: 535209937 bytes
CONTAINER: dustynv/jetson-inference:r32.4.4
DATA_VOLUME: --volume /home/branislav/jetson-inference/data:/jetson-inference/data --volume /home/branislav/jetson-inference/python/training/classification/data:/jetson-inference/python/training/classification/data --volume /home/branislav/jetson-inference/python/training/classification/models:/jetson-inference/python/training/classification/models --volume /home/branislav/jetson-inference/python/training/detection/ssd/data:/jetson-inference/python/training/detection/ssd/data --volume /home/branislav/jetson-inference/python/training/detection/ssd/models:/jetson-inference/python/training/detection/ssd/models
USER_VOLUME:
USER_COMMAND:
V4L2_DEVICES: --device /dev/video0
localuser:root being added to access control list

root@branislav-desktop:/jetson-inference# ./video-viewer /dev/video0
bash: ./video-viewer: No such file or directory
root@branislav-desktop:/jetson-inference# ls
CMakeLists.txt CMakePreBuild.sh build c calibration data docs examples plugins python tools utils
root@branislav-desktop:/jetson-inference# ./build/aarch64/bin/video-viewer /dev/video0
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera – attempting to create device v4l2:///dev/video0
[gstreamer] gstCamera – found v4l2 device: UVC Camera (046d:0825)
[gstreamer] v4l2-proplist, device.path=(string)/dev/video0, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)“UVC\ Camera\ (046d:0825)”, v4l2.device.bus_info=(string)usb-70090000.xusb-3.1, v4l2.device.version=(uint)264588, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera – found 38 caps for v4l2 device /dev/video0
[gstreamer] [0] video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)960, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 15/2, 5/1 };
[gstreamer] [1] video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 15/2, 5/1 };
[gstreamer] [2] video/x-raw, format=(string)YUY2, width=(int)1184, height=(int)656, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 10/1, 5/1 };
[gstreamer] [3] video/x-raw, format=(string)YUY2, width=(int)960, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 10/1, 5/1 };
[gstreamer] [4] video/x-raw, format=(string)YUY2, width=(int)1024, height=(int)576, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 10/1, 5/1 };
[gstreamer] [5] video/x-raw, format=(string)YUY2, width=(int)960, height=(int)544, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 15/1, 10/1, 5/1 };
[gstreamer] [6] video/x-raw, format=(string)YUY2, width=(int)800, height=(int)600, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [7] video/x-raw, format=(string)YUY2, width=(int)864, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [8] video/x-raw, format=(string)YUY2, width=(int)800, height=(int)448, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [9] video/x-raw, format=(string)YUY2, width=(int)752, height=(int)416, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [10] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [11] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)360, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [12] video/x-raw, format=(string)YUY2, width=(int)544, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [13] video/x-raw, format=(string)YUY2, width=(int)432, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [14] video/x-raw, format=(string)YUY2, width=(int)352, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [15] video/x-raw, format=(string)YUY2, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [16] video/x-raw, format=(string)YUY2, width=(int)320, height=(int)176, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [17] video/x-raw, format=(string)YUY2, width=(int)176, height=(int)144, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [18] video/x-raw, format=(string)YUY2, width=(int)160, height=(int)120, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [19] image/jpeg, width=(int)1280, height=(int)960, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [20] image/jpeg, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [21] image/jpeg, width=(int)1184, height=(int)656, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [22] image/jpeg, width=(int)960, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [23] image/jpeg, width=(int)1024, height=(int)576, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [24] image/jpeg, width=(int)960, height=(int)544, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [25] image/jpeg, width=(int)800, height=(int)600, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [26] image/jpeg, width=(int)864, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [27] image/jpeg, width=(int)800, height=(int)448, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [28] image/jpeg, width=(int)752, height=(int)416, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [29] image/jpeg, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [30] image/jpeg, width=(int)640, height=(int)360, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [31] image/jpeg, width=(int)544, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [32] image/jpeg, width=(int)432, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [33] image/jpeg, width=(int)352, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [34] image/jpeg, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [35] image/jpeg, width=(int)320, height=(int)176, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [36] image/jpeg, width=(int)176, height=(int)144, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [37] image/jpeg, width=(int)160, height=(int)120, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] gstCamera – selected device profile: codec=mjpeg format=unknown width=1280 height=720
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video0 ! image/jpeg, width=(int)1280, height=(int)720 ! jpegdec ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully created device v4l2:///dev/video0
[video] created gstCamera from v4l2:///dev/video0

gstCamera video options:

-- URI: v4l2:///dev/video0
  - protocol: v4l2
  - location: /dev/video0
-- deviceType: v4l2
-- ioType: input
-- codec: mjpeg
-- width: 1280
-- height: 720
-- frameRate: 30.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0

[OpenGL] glDisplay – X screen 0 resolution: 640x480
[OpenGL] glDisplay – X window resolution: 640x480
[OpenGL] glDisplay – display device initialized (640x480)
[video] created glDisplay from display://0

glDisplay video options:

-- URI: display://0
  - protocol: display
  - location: 0
-- deviceType: display
-- ioType: output
-- codec: raw
-- width: 640
-- height: 480
-- frameRate: 0.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0

[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> jpegdec0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> jpegdec0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> jpegdec0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstreamer message stream-start ==> pipeline0
video-viewer: failed to capture video frame
video-viewer: failed to capture video frame
[gstreamer] gstCamera – onPreroll
[gstreamer] gstCamera – map buffer size was less than max size (1382400 vs 1382407)
[gstreamer] gstCamera recieve caps: video/x-raw, format=(string)I420, width=(int)1280, height=(int)720, interlace-mode=(string)progressive, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)mpeg2, colorimetry=(string)1:4:0:0, framerate=(fraction)30/1
[gstreamer] gstCamera – recieved first frame, codec=mjpeg format=i420 width=1280 height=720 size=1382407
RingBuffer – allocated 4 buffers (1382407 bytes each, 5529628 bytes total)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
RingBuffer – allocated 4 buffers (2764800 bytes each, 11059200 bytes total)
video-viewer: captured 1 frames (1280 x 720)
[OpenGL] glDisplay – set the window size to 640x480
[OpenGL] creating 1280x720 texture (GL_RGB8 format, 2764800 bytes)
[cuda] registered openGL texture for interop access (1280x720, GL_RGB8, 2764800 bytes)
video-viewer: captured 2 frames (1280 x 720)
video-viewer: captured 3 frames (1280 x 720)
video-viewer: captured 4 frames (1280 x 720)
video-viewer: captured 5 frames (1280 x 720)
video-viewer: captured 6 frames (1280 x 720)
video-viewer: captured 7 frames (1280 x 720)
^Creceived SIGINT
video-viewer: shutting down…
[gstreamer] gstCamera – stopping pipeline, transitioning to GST_STATE_NULL
[gstreamer] gstCamera – pipeline stopped
video-viewer: shutdown complete
root@branislav-desktop:/jetson-inference#

Hi,
Are you able to try with hdmi output? It may not work with X11 forwarding through ssh.


Thanks for the suggestion. I connected a monitor directly to the Jetson Nano with an HDMI cable, but after the board boots I’m stuck in an endless loop at the login screen, where it keeps asking me for my password. If I type an incorrect password, it tells me “Invalid password, please try again”; if I enter my correct password, it accepts it, the screen goes black for a moment, and then I’m right back at the password prompt. The only thing I can do is click beside my username to select a session: LXDE (Default), Openbox, Ubuntu, Ubuntu on Wayland, or Unity. But they all bring me back to the login page. Any idea what might be going wrong? Have you encountered this issue? Thanks!


I guess my underlying question is why the Logitech C270 webcam works well in the “Hello Camera” lab within the JupyterLab environment, but I can’t get a live camera feed going with ./video-viewer /dev/video0. In the JupyterLab environment, the live video feed comes through a widget (image_widget), i.e. a video window embedded in the web browser, reached by typing http://192.168.55.1:8888 into a browser window. With PuTTY, we SSH into 192.168.55.1, so both go through the Micro-USB connection from the PC to the Jetson Nano. Thank you again for any hints/suggestions here!

Good afternoon everyone,

I am having the exact same issue as @branko. I re-flashed the OS and pulled the Docker container again, just in case something was corrupted in the initial data transfer. Any help/suggestions would be greatly appreciated.


@branko I tried the tutorial again by SSHing to the Jetson Nano through a virtual Ubuntu machine and ran into the same X11 error.


Thanks very much, @rebrak for trying it out and confirming you also see the error! Much appreciated and hopefully, we can figure this out!

@branko I figured out a couple of tentative solutions to the problem.

For whatever reason, our Jetson Nanos are not detecting any displays through the secure shell.

Fix #1 - Run the Hello AI World code on the Jetson Nano as a stand-alone system. I got video-viewer to work this way just like in the tutorial, but I doubt the Nano will be able to handle much computation without throwing an overcurrent fault. Not really a fix, but it’s a step forward nonetheless.

Fix #2 - This one is a bit tricky, since neither the GitHub documentation nor the tutorial explains the small details very well. There is a way to transmit the video from the webcam to the Jetson Nano, and then over Wi-Fi to the host machine.

  1. Download VLC media player from videolan.org.

  2. Find your host computer’s IP address.

  3. Open Notepad (or something similar) and copy/paste the text below. Change the IP address and delete the brackets [ ].

c=IN IP4 [YOUR HOST COMPUTER IP ADDRESS]
m=video 1234 RTP/AVP 96
a=rtpmap:96 H264/90000

  4. Save it as RTP.sdp to your desktop. (The .sdp extension is what associates the file with VLC.)

  5. Through SSH, follow the tutorial to run the Docker container and get to the root prompt.

  6. Run the command below:

video-viewer /dev/video0 rtp://[YOUR HOST COMPUTER IP ADDRESS]:1234

  7. Your webcam should activate and start transmitting a video stream to the provided IP address. Now you must “catch” that video stream with VLC.

  8. Double-click the RTP.sdp file you saved to your desktop (its icon should now be an orange traffic cone). This opens a VLC window where the video stream should hopefully appear after a brief moment.

My video stream was very pixelated, but it at least somewhat works now without overloading the Jetson Nano.
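For reference, the steps above can be condensed into a pair of shell snippets. This is a sketch, not a tested recipe: HOST_IP below is a placeholder you must replace with your host computer’s actual address, and the camera-side command assumes the jetson-inference container is already running.

```shell
# Placeholder -- substitute your host computer's IP address.
HOST_IP=192.168.1.50

# On the host: write the SDP file that VLC will use to "catch" the stream.
cat > RTP.sdp <<EOF
c=IN IP4 ${HOST_IP}
m=video 1234 RTP/AVP 96
a=rtpmap:96 H264/90000
EOF

# On the Jetson, inside the jetson-inference container (commented out here,
# since it needs the camera hardware):
# video-viewer /dev/video0 rtp://${HOST_IP}:1234

# On the host: open the stream in VLC (double-clicking RTP.sdp does the same).
# vlc RTP.sdp
```

Note that RTP is one-directional: the Jetson pushes packets at the host’s address and port 1234, so the host’s firewall must allow inbound UDP on that port.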


Thanks very much, @rebrak, much appreciated! I will work through these steps to see if I can get it to work with my setup, and will report back.

@rebrak unfortunately, it looks like my Windows firewall is blocking the video! VLC gives the error message: “Your input can’t be opened:
VLC is unable to open the MRL ‘rtp://98.217.221.96:1234’. Check the log for details.” Googling suggests this is most likely a firewall issue, and my IT admin has it locked down so I can’t disable it. So I have to negotiate a bit with my IT folks now. Thanks again, and I’ll keep at it!

@branko The best workaround I have been able to find is just using the Jetson Nano as a stand-alone system (i.e., no SSH from a Windows desktop).

The X11 issues seem to stem from each computer not being able to authenticate the other, even though some terminals, such as MobaXterm, make that process fairly straightforward. Attempting to rectify that is well above my novice knowledge of computer architecture and commands.

Even if you succeed in passing the video feed via VLC, that remedy does not last, as the follow-on tasks require both a video feed and a GUI to be displayed on screen, which VLC cannot support. Essentially, you’ll be able to use pre-trained neural networks but not create your own through transfer learning.

I ordered a new power supply for the Jetson Nano, and that has fixed the aforementioned overcurrent/undervoltage issue. I assume my laptop (Surface Book 2) was cutting power to its USB ports as a fail-safe measure when the Jetson Nano pulled too much current. So far, I have been able to download and run the PyTorch models for the tutorial without the Jetson Nano crashing.

This is the power supply I am using now: a LABISTS 5.1 V / 3 A USB-C supply with an on/off switch (sold on Amazon as a Raspberry Pi 4 power supply).


Hi @rebrak, the issue is that the video-viewer tool from Hello AI World (and the other Hello AI World applications) uses OpenGL for rendering the video (with CUDA->OpenGL interoperability), and that will not work over SSH with X11 tunneling. So for remote viewing, you would want to use RTP streaming like you have set up. If your RTP video is pixelated, try running it outside the container (built from source).
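A hedged way to check this yourself: OpenGL applications need working GLX on the current DISPLAY, which a plain “ssh -X” session usually cannot provide. The glxinfo utility (from the mesa-utils package on Ubuntu) may or may not be installed on your system, so the sketch below handles both cases:

```shell
# Show which display this session would render to.
echo "DISPLAY=${DISPLAY:-<unset>}"

# If glxinfo is available, check whether GLX/direct rendering works here;
# over X11 forwarding this typically fails or reports indirect rendering.
if command -v glxinfo >/dev/null 2>&1; then
    glxinfo 2>/dev/null | grep "direct rendering" \
        || echo "GLX not usable on this display"
else
    echo "glxinfo not installed (sudo apt-get install mesa-utils)"
fi
```

If GLX is not usable, any CUDA/OpenGL application like video-viewer will fail to open its window, which matches the behavior seen in this thread.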

That’s correct, you would need a display attached for the data capture tool from Hello AI World. Otherwise I recommend to use the CVAT tool, you can run it from any browser on images/video that you have already recorded.


Thank you again, @rebrak and @dusty_nv, for all your help getting things working! I finally bought a monitor with HDMI input, connected it directly to the Jetson Nano 2GB board, and was able to run the video-viewer application! Much appreciated!

Hi, this is the first time I’m posting an issue on this forum. Apologies if I should not be replying like this.

Anyway, as I was following Dusty’s tutorial on setting up jetson-inference on my Jetson Nano (4GB) with JetPack 4.5.1, running in a Docker container, I found that I am facing a similar issue with the video-viewer application. I can confirm that my CSI camera is working fine, as I was able to fire it up from another container running Hello Camera on JupyterLab. I am not using SSH to access the host, and I have an HDMI monitor.

I have tried restarting the nvargus-daemon after shutting down the container, but video-viewer still could not display the video stream from the camera after I restarted the container.

The following is the console output from my attempt to run video-viewer to check that my CSI camera is working:

root@nano:/jetson-inference# video-viewer /dev/video0
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera – attempting to create device v4l2:///dev/video0
[gstreamer] gstCamera – didn’t discover any v4l2 devices
[gstreamer] gstCamera – device discovery failed, but /dev/video0 exists
[gstreamer] support for compressed formats is disabled
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video0 ! appsink name=mysink
[gstreamer] gstCamera successfully created device v4l2:///dev/video0
[video] created gstCamera from v4l2:///dev/video0

gstCamera video options:

-- URI: v4l2:///dev/video0
  - protocol: v4l2
  - location: /dev/video0
-- deviceType: v4l2
-- ioType: input
-- codec: unknown
-- width: 1280
-- height: 720
-- frameRate: 30.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0

[OpenGL] glDisplay – X screen 0 resolution: 1920x1080
[OpenGL] glDisplay – X window resolution: 1920x1080
[OpenGL] glDisplay – display device initialized (1920x1080)
[video] created glDisplay from display://0

glDisplay video options:

-- URI: display://0
  - protocol: display
  - location: 0
-- deviceType: display
-- ioType: output
-- codec: raw
-- width: 1920
-- height: 1080
-- frameRate: 0.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0

[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstCamera – end of stream (EOS)
[gstreamer] gstreamer v4l2src0 ERROR Internal data stream error.
[gstreamer] gstreamer Debugging info: gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
video-viewer: failed to capture video frame
video-viewer: failed to capture video frame
video-viewer: failed to capture video frame
video-viewer: failed to capture video frame
video-viewer: failed to capture video frame
video-viewer: failed to capture video frame
video-viewer: failed to capture video frame
video-viewer: failed to capture video frame
video-viewer: failed to capture video frame
video-viewer: failed to capture video frame
video-viewer: failed to capture video frame
^Creceived SIGINT
video-viewer: failed to capture video frame
video-viewer: shutting down…
[gstreamer] gstCamera – stopping pipeline, transitioning to GST_STATE_NULL
[gstreamer] gstCamera – pipeline stopped
video-viewer: shutdown complete
root@nano:/jetson-inference#

Hopefully someone can help me with this.

Many thanks

Hi @Benjamin_Lim, since you are using a MIPI CSI camera, please run video-viewer like this instead:

video-viewer csi://0

Specifying /dev/video0 tries to use the V4L2 interface (which is more common for USB cameras) rather than the CSI interface.
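The distinction can be sketched as a quick check. This is an illustrative snippet, not official tooling: a USB webcam appears as a regular V4L2 device under /dev/video*, while a MIPI CSI sensor is driven through the Argus stack and is addressed as csi://<index>, even though it may also create a /dev/video* node that V4L2 cannot negotiate with (hence the “didn’t discover any v4l2 devices” message above):

```shell
# Check whether a /dev/video0 node exists and suggest which source URI to try.
if [ -e /dev/video0 ]; then
    echo "/dev/video0 exists -- for a USB webcam try: video-viewer /dev/video0"
    echo "if that logs \"didn't discover any v4l2 devices\", it is likely a CSI sensor: video-viewer csi://0"
else
    echo "no /dev/video0 -- check the camera connection"
fi
```

Running `v4l2-ctl --list-devices` (from the v4l-utils package) is another way to see whether the node is backed by a real V4L2 driver.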

Yes, it worked. I had also discovered this after reading through the documentation more thoroughly.

Thanks !