Exception: jetson.utils -- failed to create glDisplay device

I am working on a Jetson Nano and using the jetson library provided by Hello AI World. I have tested the basic algorithms that come with the library and they all work fine, but I made a few changes to run it on a video file and got an error. Here is my code:

import jetson.inference
import jetson.utils
import cv2

#import argparse
import sys
import numpy as np

width=720                          
height=480

vs=cv2.VideoCapture('b.m4v')                                #video input file

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)   #loading the model
#camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")       #using V4L2
display = jetson.utils.glDisplay()                            #initiating a display window




while display.IsOpen():
    _,frame = vs.read()                                 #reading a frame
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)       #converting it to a bgra format for jetson util
    img = jetson.utils.cudaFromNumpy(img)               #converting image to cuda format from numpy array
    #img, width, height = camera.CaptureRGBA()
    detections = net.Detect(img, width, height)         #running detections on each image and saving the results in detections
    display.RenderOnce(img, width, height)              #display the output frame with detections
    display.SetTitle("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))         #display title for the output feed

this is the error:

[OpenGL] glDisplay -- X screen 0 resolution:  1280x1024
[OpenGL] failed to create X11 Window.
jetson.utils -- PyDisplay_Dealloc()
Traceback (most recent call last):
  File "pool1.py", line 16, in <module>
    display = jetson.utils.glDisplay()                            #initiating a display window
Exception: jetson.utils -- failed to create glDisplay device
PyTensorNet_Dealloc()

Hi @hamzashah411411, does display = jetson.utils.glDisplay() work in the original python scripts that come with the repo? (for example detectnet-camera.py)

If it works in the original scripts but not in your updated one, perhaps you might want to move the vs=cv2.VideoCapture('b.m4v') line to below the glDisplay() line.

Also, I think cv2.COLOR_BGR2BGRA in your code should be cv2.COLOR_BGR2RGBA, as detectNet expects an RGBA image.
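To see why the conversion code matters: BGRA and RGBA differ only in channel order, which a small NumPy sketch can illustrate (no cv2 or Jetson needed; the pixel values here are made up):

```python
import numpy as np

# A 1x1 "frame" with distinct channel values, stored the way OpenCV
# reads video frames: B, G, R
bgr = np.array([[[10, 20, 30]]], dtype=np.uint8)   # B=10, G=20, R=30
alpha = np.full((1, 1, 1), 255, dtype=np.uint8)

# cv2.COLOR_BGR2BGRA keeps the channel order and appends alpha: B, G, R, A
bgra = np.concatenate([bgr, alpha], axis=2)

# cv2.COLOR_BGR2RGBA swaps R and B first, then appends alpha: R, G, B, A --
# the layout detectNet expects
rgba = np.concatenate([bgr[..., ::-1], alpha], axis=2)

print(bgra[0, 0])   # [ 10  20  30 255]
print(rgba[0, 0])   # [ 30  20  10 255]
```

With BGR2BGRA the red and blue channels stay swapped relative to what the network expects, which typically degrades detections rather than crashing, so it is a separate issue from the glDisplay error.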

Hmmm

I did shift it below the glDisplay() line, but no good.

And yes, it works with the camera script.

I recently flashed the card.
This very code worked fine with my old image from almost 6 months back.
It had OpenCV 3.something.
The new one has OpenCV 4.
But that shouldn't be a problem, right?

Comparing the logs of both scripts, it seems that OpenGL is working fine, but at the instant where it has to shift to GStreamer it just stops and puts the blame on glDisplay.

I might be wrong.

here is the log of my code, which does not work:

[TRT] binding to input 0 Input binding index: 0
[TRT] binding to input 0 Input dims (b=1 c=3 h=300 w=300) size=1080000
[TRT] binding to output 0 NMS binding index: 1
[TRT] binding to output 0 NMS dims (b=1 c=1 h=100 w=7) size=2800
[TRT] binding to output 1 NMS_1 binding index: 2
[TRT] binding to output 1 NMS_1 dims (b=1 c=1 h=1 w=1) size=4
device GPU, /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff initialized.
W = 7 H = 100 C = 1
detectNet -- maximum bounding boxes: 100
detectNet -- loaded 91 class info entries
detectNet -- number of object classes: 91
jetson.utils -- PyDisplay_New()
jetson.utils -- PyDisplay_Init()
[OpenGL] glDisplay -- X screen 0 resolution: 1280x1024
[OpenGL] failed to create X11 Window.
jetson.utils -- PyDisplay_Dealloc()
Traceback (most recent call last):
  File "pool1.py", line 18, in <module>
    display = jetson.utils.glDisplay()  #initiating a display window
Exception: jetson.utils -- failed to create glDisplay device
PyTensorNet_Dealloc()

and here is the log of the built-in algo, which works:

[TRT] binding to output 0 NMS dims (b=1 c=1 h=100 w=7) size=2800
[TRT] binding to output 1 NMS_1 binding index: 2
[TRT] binding to output 1 NMS_1 dims (b=1 c=1 h=1 w=1) size=4
device GPU, /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff initialized.
W = 7 H = 100 C = 1
detectNet -- maximum bounding boxes: 100
detectNet -- loaded 91 class info entries
detectNet -- number of object classes: 91
jetson.utils -- PyCamera_New()
jetson.utils -- PyCamera_Init()
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS, camera /dev/video0
[gstreamer] gstCamera pipeline string:
v4l2src device=/dev/video0 ! video/x-raw, width=(int)1280, height=(int)720, format=YUY2 ! videoconvert ! video/x-raw, format=RGB ! videoconvert !appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_V4L2, camera /dev/video0
jetson.utils -- PyDisplay_New()
jetson.utils -- PyDisplay_Init()
[OpenGL] glDisplay -- X screen 0 resolution: 1280x1024
[OpenGL] glDisplay -- display device initialized
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert1
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert0

you can clearly see the problem

so my question is: is GStreamer supported by the OpenCV 4.1.1 which is built into the SDK image?

[quote="hamzashah411411, post:5, topic:126403, full:true"]
so my question is: is GStreamer supported by the OpenCV 4.1.1 which is built into the SDK image?
[/quote]

Yes, I believe OpenCV was built with GStreamer support enabled.

Does it work if you move the display = jetson.utils.glDisplay() line to above the net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5) line?

It doesn't, and it still says that it failed to create glDisplay.
And one more thing that's funny: if I comment out (delete) the line
import cv2
then it doesn't show the error.
But since I need cv2 later, that doesn't help.

My only concern is to use the jetson library with a video file. I want to run the model on the video file, that's it.
It did work with OpenCV 3 earlier, but I don't know what the problem is with OpenCV 4.

If you know any resources on how to run this algorithm on a video file instead of a camera feed, that will suffice.

I’m working on the video file input, but it’s only ready for C++ - I still need to do the Python bindings for it.

As a temporary workaround, what if you import cv2 after you create the glDisplay object?

i had a bit of progress by doing that, as the display got initialized (as you can see below), but I got a new error.
here is the log after moving import cv2 below the glDisplay() line:

W = 7 H = 100 C = 1
detectNet -- maximum bounding boxes: 100
detectNet -- loaded 91 class info entries
detectNet -- number of object classes: 91
jetson.utils -- PyDisplay_New()
jetson.utils -- PyDisplay_Init()
[OpenGL] glDisplay -- X screen 0 resolution: 1440x900
[OpenGL] glDisplay -- display device initialized
jetson.utils -- PyDisplay_Dealloc()
5
Traceback (most recent call last):
  File "pool1.py", line 20, in <module>
    import cv2
  File "/usr/lib/python3.6/dist-packages/cv2/__init__.py", line 89, in <module>
    bootstrap()
  File "/usr/lib/python3.6/dist-packages/cv2/__init__.py", line 79, in bootstrap
    import cv2
ImportError: /usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block
jetson.utils -- PyDisplay_Dealloc()
PyTensorNet_Dealloc()

ah man
just resolved the issue. I do not know why it occurred, but simply importing OpenCV before any other library made it all work.
I am still confused, as it goes against general programming knowledge that the order of importing libraries should not matter, but here it does. I would still be happy to have an explanation.
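For anyone landing here later, this is a minimal sketch of the ordering that worked in this thread (assuming the same script as above; the video filename and the 720x480 dimensions are placeholders, and it needs a Jetson with jetson-inference installed to actually run):

```python
import cv2                 # import OpenCV first -- importing it after
                           # jetson.utils is what triggered the errors here
import jetson.inference
import jetson.utils

width, height = 720, 480                                # must match the video frames

vs = cv2.VideoCapture('b.m4v')                          # video input file
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
display = jetson.utils.glDisplay()

while display.IsOpen():
    ok, frame = vs.read()
    if not ok:                                          # stop at end of file
        break
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)       # detectNet expects RGBA
    img = jetson.utils.cudaFromNumpy(img)
    detections = net.Detect(img, width, height)
    display.RenderOnce(img, width, height)
    display.SetTitle("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))
```

The `if not ok: break` guard also avoids the unrelated crash you would otherwise get from passing `None` to cvtColor when the video ends.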

enjoy your project, because I just wasted 3 days of my time on this issue and the remedy was so simple


Sorry about that, I don't know why it happens. My only thought is that OpenCV is also doing some OpenGL initialization or extension loading when cv2 is imported.
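For the record, the "cannot allocate memory in static TLS block" import error is a known libgomp quirk on aarch64. Another workaround sometimes used for it (untested here; the library path is taken from the traceback above) is preloading libgomp before Python starts, so its static TLS is allocated early:

```shell
# Load libgomp first so its static TLS block is reserved before other
# libraries claim the space; then import order no longer matters
LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1 python3 pool1.py
```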