imagenet-camera can't start correctly on TX2

Hi,

When I follow the doc from https://github.com/dusty-nv/jetson-inference#system-setup, imagenet-camera cannot create an OpenGL window and reports the following error:

ubuntu@tegra-ubuntu:[~/jetson-inference/build/aarch64/bin]$ ./imagenet-camera googlenet
imagenet-camera
args (2): 0 [./imagenet-camera] 1 [googlenet]

[gstreamer] initialized gstreamer, version 1.8.3.0
[gstreamer] gstreamer decoder pipeline string:
nvcamerasrc fpsRange="30.0 30.0" ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink

imagenet-camera: successfully initialized video device
width: 1280
height: 720
depth: 12 (bpp)

imageNet -- loading classification network model from:
-- prototxt networks/googlenet.prototxt
-- model networks/bvlc_googlenet.caffemodel
-- class_labels networks/ilsvrc12_synset_words.txt
-- input_blob 'data'
-- output_blob 'prob'
-- batch_size 2

[GIE] attempting to open cache file networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE] loading network profile from cache… networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE] platform has FP16 support.
[GIE] networks/bvlc_googlenet.caffemodel loaded
[GIE] CUDA engine context initialized with 2 bindings
[GIE] networks/bvlc_googlenet.caffemodel input binding index: 0
[GIE] networks/bvlc_googlenet.caffemodel input dims (b=2 c=3 h=224 w=224) size=1204224
[cuda] cudaAllocMapped 1204224 bytes, CPU 0x102800000 GPU 0x102800000
[GIE] networks/bvlc_googlenet.caffemodel output 0 prob binding index: 1
[GIE] networks/bvlc_googlenet.caffemodel output 0 prob dims (b=2 c=1000 h=1 w=1) size=8000
[cuda] cudaAllocMapped 8000 bytes, CPU 0x102a00000 GPU 0x102a00000
networks/bvlc_googlenet.caffemodel initialized.
[GIE] networks/bvlc_googlenet.caffemodel loaded
imageNet -- loaded 1000 class info entries
networks/bvlc_googlenet.caffemodel initialized.
Invalid MIT-MAGIC-COOKIE-1 key
[OpenGL] failed to open X11 server connection.
[OpenGL] failed to create X11 Window.

imagenet-camera: failed to create openGL display
loaded image fontmapA.png (256 x 512) 2097152 bytes
[cuda] cudaAllocMapped 2097152 bytes, CPU 0x102c00000 GPU 0x102c00000
[cuda] cudaAllocMapped 8192 bytes, CPU 0x102a02000 GPU 0x102a02000
[gstreamer] gstreamer transitioning pipeline to GST_STATE_PLAYING

And I can confirm that the camera works via this command:
gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvoverlaysink -ev

Can anyone help me to resolve this issue?
Thanks.

Hi,

  1. Please log in to the TX2 with the nvidia account.

  2. Could you try this command:

export DISPLAY=:0
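For example, the full sequence from a terminal on the Jetson (assuming the default jetson-inference build path and that the nvidia user owns the local desktop session) would be roughly:

# run as the user who is logged in to the local desktop
export DISPLAY=:0
cd ~/jetson-inference/build/aarch64/bin
./imagenet-camera googlenet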

Thanks.

@AastaLLL,

Thanks, it works now after changing to the nvidia account.

But I still wonder why the ubuntu account cannot open an X11 window?

Some configuration is only available with the nvidia account.

@AastaLLL

I am getting the exact same error:

[OpenGL] failed to create X11 Window.

imagenet-camera: failed to create openGL display

I am already logged into the nvidia account and I also tried export DISPLAY=:0 in the terminal. I am using a Jetson TX2.

Could you please help?

Beware that "DISPLAY=:0" refers to a GUI login running locally on the system, owned by the same user who issues the command. If that user is not logged in locally to the GUI, then it will still fail. If a remote system is involved, then things get trickier and probably require a virtual desktop.
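A quick way to sanity-check this (assuming a standard Ubuntu desktop image on the Jetson, and that x11-utils is installed so xdpyinfo is available) is to confirm an X server is running locally and that your shell can reach it:

# is a local X server running, and who owns the desktop session?
ps -e | grep -i Xorg
who
# if this connects and prints display info, DISPLAY=:0 is usable from this shell
export DISPLAY=:0
xdpyinfo | head -n 5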

@linuxdev

I am logged into the Jetson via ssh with -X forwarding,
so I believe that means I am running the GUI via the same user who issues the command on the local system.

Can you suggest some solutions now?

Forwarding via -X or -Y means all X events go to the PC's X server, not the Jetson's (and this includes any GPU work: both CUDA and OpenGL apply). Thus you told the program to ignore the Jetson's GPU and use the software and GPU of the PC, which will work if your PC has the right software and hardware, but won't be what you wanted.
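A simple way to see which X server a shell is pointed at is to check DISPLAY; with OpenSSH X forwarding it typically looks like localhost:10.0, while the Jetson's own server is :0 (exact values may differ per setup):

echo $DISPLAY   # e.g. localhost:10.0 when forwarded, :0 for the local Jetson display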

One solution: log a user in to the GUI on the Jetson and ssh in as that user, without forwarding, then "export DISPLAY=:0" before running (on an Xavier it is "export DISPLAY=:1").
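Concretely (a sketch, assuming the desktop on the Jetson is logged in as the nvidia user and <jetson-ip> is a placeholder for your board's address):

# from the PC: ssh without -X/-Y so nothing is forwarded
ssh nvidia@<jetson-ip>
# on the Jetson: use the local X server owned by the GUI session
export DISPLAY=:0    # :1 on Xavier, as noted above
cd ~/jetson-inference/build/aarch64/bin
./imagenet-camera googlenet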

Other solution: install a virtual X server. The GPU won't care whether the server is purely software or if it has a real-world monitor attached. I don't have any recommendations for a specific virtual X server, but your program would run as the same user who is logged in to the virtual X server, after exporting DISPLAY to the relevant value (a virtual X server might start at "export DISPLAY=:10"; its docs should mention this or offer a way to set it during server startup).
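As one possible (untested here) example of that approach, Xvfb from the xvfb package provides a headless X server on a display number of your choosing; whether hardware-accelerated OpenGL works under it depends on the setup, so treat this only as a starting point:

# start a purely software X server on display :10 (one option, not a recommendation)
sudo apt-get install xvfb
Xvfb :10 -screen 0 1280x720x24 &
export DISPLAY=:10
./imagenet-camera googlenet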