I can't use my Realsense D435 with video-viewer on the jetson-inference docker container

Hello,

I was following the “Hello AI World” course and, after going through every step with a Logitech camera without trouble, I wanted to do the same using an Intel RealSense D435.

I managed to install librealsense and I can now run “realsense-viewer” (version 2.41.0) to view the streams from the camera’s different sensors.
However, when I am inside the docker container (started with docker/run.sh in jetson-inference), I cannot open the camera with video-viewer. Three feeds are recognized under /dev/video0, /dev/video1 and /dev/video2 (I think video2 is the RGB one), but every time I get the “failed to capture video frame” error. Here is the complete log of the “video-viewer --debug /dev/video2” command:

root@actemium-desktop:/jetson-inference# video-viewer --debug /dev/video2

[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video2
[gstreamer] gstCamera -- found v4l2 device: Intel® RealSense™ Depth Ca
[gstreamer] v4l2-proplist, device.path=(string)/dev/video2, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"Intel®\ RealSense™\ Depth\ Ca", v4l2.device.bus_info=(string)usb-70090000.xusb-1.1, v4l2.device.version=(uint)264649, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera -- found 9 caps for v4l2 device /dev/video2
[gstreamer] [0] video/x-raw, format=(string)YUY2, width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 15/1, 6/1 };
[gstreamer] [1] video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 15/1, 6/1 };
[gstreamer] [2] video/x-raw, format=(string)YUY2, width=(int)960, height=(int)540, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 60/1, 30/1, 15/1, 6/1 };
[gstreamer] [3] video/x-raw, format=(string)YUY2, width=(int)848, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 60/1, 30/1, 15/1, 6/1 };
[gstreamer] [4] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 60/1, 30/1, 15/1, 6/1 };
[gstreamer] [5] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)360, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 60/1, 30/1, 15/1, 6/1 };
[gstreamer] [6] video/x-raw, format=(string)YUY2, width=(int)424, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 60/1, 30/1, 15/1, 6/1 };
[gstreamer] [7] video/x-raw, format=(string)YUY2, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 60/1, 30/1, 6/1 };
[gstreamer] [8] video/x-raw, format=(string)YUY2, width=(int)320, height=(int)180, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 60/1, 30/1, 6/1 };
[gstreamer] gstCamera -- selected device profile: codec=raw format=yuyv width=1280 height=720
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video2 ! video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)720 ! appsink name=mysink
[gstreamer] gstCamera successfully created device v4l2:///dev/video2
[video] created gstCamera from v4l2:///dev/video2

gstCamera video options:

-- URI: v4l2:///dev/video2
- protocol: v4l2
- location: /dev/video2
- port: 2
-- deviceType: v4l2
-- ioType: input
-- codec: raw
-- width: 1280
-- height: 720
-- frameRate: 30.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0

[OpenGL] glDisplay -- X screen 0 resolution: 1920x1080
[OpenGL] glDisplay -- X window resolution: 1920x1080
[OpenGL] glDisplay -- display device initialized (1920x1080)
[video] created glDisplay from display://0

glDisplay video options:

-- URI: display://0
- protocol: display
- location: 0
-- deviceType: display
-- ioType: output
-- codec: raw
-- width: 1920
-- height: 1080
-- frameRate: 0.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0

[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstCamera -- end of stream (EOS)
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstreamer v4l2src0 ERROR Device '/dev/video2' is busy
[gstreamer] gstreamer Debugging info: gstv4l2object.c(3754): gst_v4l2_object_set_format_full (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
Call to S_FMT failed for YUYV @ 1280x720: Device or resource busy
[gstreamer] gstreamer v4l2src0 ERROR Internal data stream error.
[gstreamer] gstreamer Debugging info: gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
video-viewer: failed to capture video frame
video-viewer: failed to capture video frame
video-viewer: failed to capture video frame
^Creceived SIGINT
video-viewer: failed to capture video frame
video-viewer: shutting down...
[gstreamer] gstCamera -- stopping pipeline, transitioning to GST_STATE_NULL
[gstreamer] gstCamera -- pipeline stopped
video-viewer: shutdown complete

I know this camera isn’t supported as plug-and-play by the docker container, but is there a way I can use the RealSense D435 to do inference and transfer learning? Or did I miss something in the installation of the RealSense camera (I only installed the packages with sudo apt-get install librealsense2-dbg)?
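In the meantime, I was thinking of skipping GStreamer entirely and feeding frames to the network through the librealsense Python bindings. A rough sketch of what I have in mind (assuming pyrealsense2 and the jetson-inference Python bindings are both importable inside the container; the helper names are mine and this is untested on the device):

```python
import numpy as np

def realsense_bgr_to_rgb(frame):
    """Convert a BGR frame (the layout pyrealsense2 returns when the color
    stream is configured with rs.format.bgr8) into a contiguous RGB array,
    which is the channel order expected on the jetson-inference side."""
    return np.ascontiguousarray(frame[..., ::-1])

def run_inference_loop():
    """Hardware-dependent sketch: requires a connected D435 plus the
    pyrealsense2 and jetson-inference Python bindings."""
    import pyrealsense2 as rs
    import jetson.inference
    import jetson.utils

    net = jetson.inference.detectNet("ssd-mobilenet-v2")

    pipeline = rs.pipeline()
    config = rs.config()
    # 1280x720 @ 30 FPS matches the caps video-viewer negotiated in the log.
    config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
    pipeline.start(config)
    try:
        while True:
            frames = pipeline.wait_for_frames()
            color = frames.get_color_frame()
            if not color:
                continue
            bgr = np.asanyarray(color.get_data())
            # Upload the numpy frame to CUDA memory and run detection on it.
            cuda_img = jetson.utils.cudaFromNumpy(realsense_bgr_to_rgb(bgr))
            detections = net.Detect(cuda_img)
            print("{} objects detected".format(len(detections)))
    finally:
        pipeline.stop()
```

Would a bridge like this be a reasonable way to use the D435 with the detectNet/imageNet examples, or is there a better-supported path?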

I am running the latest version of JetPack (4.5.1), including [L4T 32.5.1], on a Jetson Nano 4GB.

Thank you for your great work and support.

Hi @Mactemium,

This doesn’t look like a TensorRT issue. Please post your query in the related forum.

Thank you.

Hi,

The backend camera framework of jetson_inference is GStreamer.
So could you first check with the camera vendor whether the D435 supports GStreamer?

Thanks.