detectnet.py: [OpenGL] failed to open X11 server connection

Hi all,
I recently installed JetPack 4.4 using the SD card image method on my Nano, and downloaded the latest jetson-inference demo code. It looks like the code has been nicely refactored, with videoSource and videoOutput classes and also error logging - thank you!

Whereas under JetPack 4.2 I spent a lot of time getting my RTSP camera streams connected, I am now having trouble getting a simple display window to show up when running the stock jetson-inference Python example demos.

I am using an HDMI monitor, not a remote connection (although I did enable xrdp and vncserver as part of the setup).

For example, here is what happens with my-detection.py, where I have replaced the CSI source with my RTSP stream. The only change is the source line (user:pwd stands in for my actual credentials):
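
camera = jetson.utils.videoSource("rtsp://user:pwd@192.168.1.27:554/Streaming/Channels/102")  # was "csi://0"

This is the resulting output: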

[video]  created gstDecoder from rtsp://user:pwd@192.168.1.27:554/Streaming/Channels/102
------------------------------------------------
gstDecoder video options:
------------------------------------------------
  -- URI: rtsp://user:pwd@192.168.1.27:554/Streaming/Channels/102
     - protocol:  rtsp
     - location:  user:pwd@192.168.1.27
     - port:      554
  -- deviceType: ip
  -- ioType:     input
  -- codec:      h264
  -- width:      640
  -- height:     480
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
------------------------------------------------
URI::Parse uri -- display://0
URI::Parse protocol: display, location: 0
[OpenGL] failed to open X11 server connection.
[OpenGL] failed to create X11 Window.
jetson.utils -- no output streams, creating fake null output
[gstreamer] opening gstDecoder for streaming, transitioning pipeline to GST_STATE_PLAYING

Same thing happens when running:

detectnet.py rtsp://user:pwd@192.168.1.27:554/Streaming/Channels/102

Interestingly, if I run the video-viewer demo, everything works as it should and the display window pops up with my camera stream:

  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
------------------------------------------------
URI::Parse uri -- display://0
URI::Parse protocol: display, location: 0
[OpenGL] glDisplay -- X screen 0 resolution:  2560x1440
[OpenGL] glDisplay -- X window resolution:    2560x1440
[OpenGL] glDisplay -- display device initialized (2560x1440)
[video]  created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
  -- URI: display://0
     - protocol:  display
     - location:  0

I have tried running export DISPLAY=:0 in the terminal, but it did not make a difference.
Any suggestions as to what might be going on here?
Thanks,

Hmm, interesting that it works with video-viewer but not my-detection.py.

Are you using the C++ version of video-viewer, or video-viewer.py? Likewise, does the C++ detectnet work?

Also, do you get a GUI window from my-detection.py if you just run it on a directory of images or a video file from disk?
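
For example, something along these lines should pop up a window (the file path here is just a placeholder for any video you have on disk):

import jetson.utils

camera = jetson.utils.videoSource("/path/to/some_video.mp4")  # local file instead of csi:// or rtsp://
display = jetson.utils.videoOutput("display://0")

# same capture/render loop as my-detection.py, minus the inference
while display.IsStreaming():
	img = camera.Capture()
	display.Render(img)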

Also, here is something else to try: in my-detection.py, move the network creation below the video stream creation, to rule out any ordering interaction between network initialization and display creation:

import jetson.inference
import jetson.utils

camera = jetson.utils.videoSource("csi://0")      # '/dev/video0' for V4L2
display = jetson.utils.videoOutput("display://0") # 'my_video.mp4' for file
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

I am getting the same error after moving the network creation below the camera and display creation in my-detection.py, regardless of whether I run it from jetson-inference/python/examples or from /usr/local/bin.

Both versions (I was not aware of the Python one) work fine and create the display:

/usr/local/bin/video-viewer rtsp://user:pwd@192.168.1.27:554/Streaming/Channels/102
/usr/local/bin/video-viewer.py rtsp://user:pwd@192.168.1.27:554/Streaming/Channels/102

But when I copied the video-viewer.py file from /usr/local/bin to jetson-inference/python/examples and ran it from there:

./video-viewer.py rtsp://user:pwd@192.168.1.27:554/Streaming/Channels/102

lo and behold, it fails to create the display (with the same error).

Could it be some sort of permission issue? Is there a way to get more debugging information from “failed to open X11 server connection”?

Hmm ok, thanks for trying these things out.

What happens if you run the script as python3 video-viewer.py and python3 my-detection.py?

It shouldn’t really matter if you run the Python scripts from jetson-inference/python/examples, but I typically run them from either jetson-inference/build/aarch64/bin/ or /usr/local/bin/.

This morning, while trying to run the experiments, even video-viewer would not produce a window.
I found multiple python3 processes still running and terminated them using System Monitor. Possibly related?

I was then able to get the following results from the jetson-inference/python/examples directory:

The following commands create a display window:

./video-viewer.py rtsp://user:pwd@192.168.1.27:554/Streaming/Channels/102
/usr/local/bin/video-viewer.py rtsp://user:pwd@192.168.1.27:554/Streaming/Channels/102
python3 /usr/local/bin/video-viewer.py rtsp://user:pwd@192.168.1.27:554/Streaming/Channels/102
python3 ./video-viewer.py rtsp://user:pwd@192.168.1.27:554/Streaming/Channels/102
./my-detection.py
python3 ./my-detection.py
python3 /usr/local/bin/my-detection.py
/usr/local/bin/my-detection.py

The following commands fail to create the display:

/usr/local/bin/detectnet.py rtsp://user:pwd@192.168.1.27:554/Streaming/Channels/102
./detectnet.py rtsp://user:pwd@192.168.1.27:554/Streaming/Channels/102
python3 ./detectnet.py rtsp://user:pwd@192.168.1.27:554/Streaming/Channels/102
python3 /usr/local/bin/detectnet.py rtsp://user:pwd@192.168.1.27:554/Streaming/Channels/102

with the following error:

URI::Parse uri -- display://0
URI::Parse protocol: display, location: 0
[OpenGL] glDisplay -- X screen 0 resolution:  2560x1440
[OpenGL] glDisplay -- X window resolution:    2560x1440
[OpenGL] failed to create X11 Window.
jetson.utils -- no output streams, creating fake null output

(Notice that it no longer says “failed to open X11 server connection”, but it still fails to create the X11 window.)

So, to summarize: my-detection.py now works fine (I have tried again with the net creation both before and after the stream creation), but detectnet.py still does not create the display. I have rebooted the Nano and confirmed these results.
The only change I can think of is the termination of the orphaned python3 processes. These python3 processes appear when I start detectnet without a proper input argument, am unable to Ctrl-C out, and end up killing the process.
I made a copy of the detectnet.py script, called it foo.py, and stripped out everything except the input/output handling (keeping just enough of the argument parsing for it to run):

import sys
import argparse
import jetson.utils

# minimal argument parsing (simplified from detectnet.py)
parser = argparse.ArgumentParser()
parser.add_argument("input_URI", type=str, default="", nargs='?')
parser.add_argument("output_URI", type=str, default="", nargs='?')
opt = parser.parse_known_args()[0]
is_headless = ["--headless"] if sys.argv[0].find('console.py') != -1 else [""]

# create video sources & outputs
input = jetson.utils.videoSource(opt.input_URI, argv=sys.argv)
output = jetson.utils.videoOutput(opt.output_URI, argv=sys.argv+is_headless)

# process frames until the user exits
while True:
	img = input.Capture()
	output.Render(img)

	# exit on input/output EOS
	if not input.IsStreaming() or not output.IsStreaming():
		break

It still won’t create the display. If I compare this to my-detection.py, the only difference is the use of command-line arguments vs. hard-coded strings.
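
To isolate whether the argv handling is the culprit, I could also try hard-coding the URIs in foo.py instead of taking them from the command line, something like this (credentials are placeholders again):

import jetson.utils

# same stream and display, but hard-coded as in my-detection.py
input = jetson.utils.videoSource("rtsp://user:pwd@192.168.1.27:554/Streaming/Channels/102")
output = jetson.utils.videoOutput("display://0")

while True:
	img = input.Capture()
	output.Render(img)
	if not input.IsStreaming() or not output.IsStreaming():
		break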

OK, thanks for letting me know. Unfortunately I’m unable to replicate your issue here, but I will try some more things on my side.

I am curious if you also tried running your script as python3 my-detection.py (as opposed to ./my-detection.py) and if that made any difference (same for detectnet.py).

python3 my-detection.py works fine, but
python3 detectnet.py ... fails to create the display.

Given that I don’t really need the display for my application and that the camera acquisition and inference work fine, I suggest not spending any more time on this issue.

Although I did a clean install from the SD card image, I remember struggling to install a library called “requests”, which in turn required installing pipenv. I suspect something may have gone wrong there. I may try a clean install later.

Thanks again for your help!

OK, thank you. Let me know if you run into the issue again in the future and we can revisit.