jetson-inference Docker container CSI camera problem

Hello, I am having a problem getting the display from the CSI camera inside the Docker container. I started the container using this command: $ docker/run.sh

Then this is what I get:
ARCH: aarch64
reading L4T version from /etc/nv_tegra_release
L4T BSP Version: L4T R32.7.1
[sudo] password for asimov-jetson:
CONTAINER: dustynv/jetson-inference:r32.7.1
DATA_VOLUME: --volume /home/asimov-jetson/jetson-inference/data:/jetson-inference/data --volume /home/asimov-jetson/jetson-inference/python/training/classification/data:/jetson-inference/python/training/classification/data --volume /home/asimov-jetson/jetson-inference/python/training/classification/models:/jetson-inference/python/training/classification/models --volume /home/asimov-jetson/jetson-inference/python/training/detection/ssd/data:/jetson-inference/python/training/detection/ssd/data --volume /home/asimov-jetson/jetson-inference/python/training/detection/ssd/models:/jetson-inference/python/training/detection/ssd/models --volume /home/asimov-jetson/jetson-inference/python/www/recognizer/data:/jetson-inference/python/www/recognizer/data
USER_VOLUME:
USER_COMMAND:
V4L2_DEVICES: --device /dev/video0
localuser:root being added to access control list
DISPLAY_DEVICE: -e DISPLAY=:0 -v /tmp/.X11-unix/:/tmp/.X11-unix
root@ZW-Jetson-1:/jetson-inference#

Inside the container I tried this command to get a live view from the CSI camera: $ video-viewer csi://0

This is what I get:
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera -- attempting to create device csi://0
[gstreamer] gstCamera pipeline string:
[gstreamer] nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw(memory:NVMM) ! appsink name=mysink
[gstreamer] gstCamera successfully created device csi://0
[video] created gstCamera from csi://0

gstCamera video options:

-- URI: csi://0
- protocol: csi
- location: 0
-- deviceType: csi
-- ioType: input
-- width: 1280
-- height: 720
-- frameRate: 30
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: rotate-180

[OpenGL] glDisplay -- X screen 0 resolution: 1920x1080
[OpenGL] glDisplay -- X window resolution: 1920x1080
[OpenGL] glDisplay -- display device initialized (1920x1080)
[video] created glDisplay from display://0

glDisplay video options:

-- URI: display://0
- protocol: display
- location: 0
-- deviceType: display
-- ioType: output
-- width: 1920
-- height: 1080
-- frameRate: 0
-- numBuffers: 4
-- zeroCopy: true

[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvarguscamerasrc0
[gstreamer] gstreamer message stream-start ==> pipeline0
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 5
Output Stream W = 1280 H = 720
seconds to Run = 0
Frame Rate = 120.000005
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstBufferManager recieve caps: video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)30/1
[gstreamer] gstBufferManager -- recieved first frame, codec=raw format=nv12 width=1280 height=720 size=1008
[gstreamer] gstBufferManager -- recieved NVMM memory
[cuda] allocated 4 ring buffers (8 bytes each, 32 bytes total)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
[cuda] allocated 4 ring buffers (2764800 bytes each, 11059200 bytes total)
video-viewer: captured 0 frames (1280x720)
[OpenGL] glDisplay – set the window size to 1280x720
[OpenGL] creating 1280x720 texture (GL_RGB8 format, 2764800 bytes)
[cuda] cudaGraphicsGLRegisterBuffer(&interop, allocDMA(type), cudaGraphicsRegisterFlagsFromGL(flags))
[cuda] invalid OpenGL or DirectX context (error 219) (hex 0xDB)
[cuda] /jetson-inference/utils/display/glTexture.cpp:360
video-viewer: captured 1 frames (1280x720)
[cuda] cudaGraphicsGLRegisterBuffer(&interop, allocDMA(type), cudaGraphicsRegisterFlagsFromGL(flags))
[cuda] invalid OpenGL or DirectX context (error 219) (hex 0xDB)
[cuda] /jetson-inference/utils/display/glTexture.cpp:360
video-viewer: captured 2 frames (1280x720)

A window did open up, but it only shows a black screen. I would really appreciate it if you could help me out here. Thanks!

Hi @anasri89, do you have a display physically attached to your Jetson, or are you running this headlessly over SSH or VNC?

If the former, what does the output of glxinfo show? Also, to test OpenGL acceleration, does glxgears work for you? (you can run apt-get update && apt-get install mesa-utils first to install glxinfo/glxgears)
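To zero in on the relevant part, filtering the output should be enough, e.g.:
$ glxinfo | grep "vendor string"
On a stock JetPack install, the server glx vendor string should report NVIDIA Corporation.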

If the latter, OpenGL/CUDA interoperability won't work with X11 forwarding for remote desktop, so if you are running your Nano headlessly you can use WebRTC, RTP, or RTSP to stream the video to your PC and view it there: https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-streaming.md#output-streams
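For example, a minimal RTP sketch would look something like this (with <your-pc-ip> as a placeholder for your PC's address and an arbitrary port):
$ video-viewer csi://0 rtp://<your-pc-ip>:1234
and then you would view the stream on the PC with GStreamer or VLC, as described on that page.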

Hi dusty_nv,

thanks for your reply. I have an HDMI monitor attached to my Jetson Nano. For your information, the CSI camera works fine outside the Docker container. For example, the camera live view pops up using this command:
$ gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! nvoverlaysink
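(The same capture path can in principle be checked inside the container without any display by swapping the sink for fakesink, e.g. $ gst-launch-1.0 nvarguscamerasrc sensor_id=0 num-buffers=100 ! fakesink, which just discards the frames after capture.)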

I tested OpenGL as you suggested and glxgears did show up; below is some of the output I got:
:~$ glxinfo
name of display: :0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.4
server glx extensions:
GLX_ARB_context_flush_control, GLX_ARB_create_context,
GLX_ARB_create_context_profile, GLX_ARB_fbconfig_float,
GLX_ARB_framebuffer_sRGB, GLX_ARB_multisample,
GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile,
GLX_EXT_fbconfig_packed_float, GLX_EXT_framebuffer_sRGB,
GLX_EXT_import_context, GLX_EXT_libglvnd, GLX_EXT_texture_from_pixmap,
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_copy_sub_buffer,
GLX_OML_swap_method, GLX_SGIS_multisample, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group, GLX_SGI_make_current_read
client glx vendor string: Mesa Project and SGI
client glx version string: 1.4

:~$ glxgears
501 frames in 5.0 seconds = 100.177 FPS
643 frames in 5.0 seconds = 128.568 FPS

I have also tried the headless version over SSH (an SSH connection between a Windows PC and the Jetson Nano); the RTP streaming worked just fine, and the camera live view popped up and displayed correctly.

It looks like your NVIDIA OpenGL graphics driver got uninstalled or replaced somehow (normally the server glx vendor string would report NVIDIA, not SGI).

Sometimes reinstalling this library can help: https://github.com/NVIDIA/libglvnd (the GL Vendor-Neutral Dispatch library)

Other times it may be easier to just reflash your SD card to restore the factory environment. I believe the nvoverlaysink element from your GStreamer pipeline uses EGL (a different driver than the OpenGL one that jetson-inference/jetson-utils uses), which would explain why that pipeline still works for you.
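If you want to try the reinstall route before reflashing, the GL/GLX stack on JetPack 4.x normally comes from the L4T apt packages, so something along these lines may restore it (nvidia-l4t-3d-core as the package name is my assumption for L4T R32.x; verify with apt-cache search nvidia-l4t first):
$ sudo apt-get update
$ sudo apt-get install --reinstall nvidia-l4t-3d-core
$ sudo reboot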
