Exception: jetson.utils -- failed to create glDisplay device

I’ve been trying to run the ssd-mobilenet-v2 demo using my-detection.py (available in Hello AI World). However, since I’m using Arducam’s stereoscopic camera (IMX219), it gives me this error:

File "/home/evannandi/Desktop/pyPro/openCV/openCV1.py", line 5, in <module>
camera = jetson.utils.gstCamera(2560, 720, 0) # using V4L2
Exception: jetson.utils -- gstCamera.init() failed to parse args tuple
PyTensorNet_Dealloc()

So I’ve been trying to modify the code to work with my camera, as follows:

import jetson.inference
import jetson.utils
import cv2
import numpy as np

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
cam = cv2.VideoCapture("/dev/video0", cv2.CAP_V4L2)
cam.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('Y', '1', '6', ' '))  # request the Y16 format
cam.set(cv2.CAP_PROP_CONVERT_RGB, False)  # keep the raw frame, don't auto-convert to BGR
display = jetson.utils.glDisplay()

while True:
    ret, frame = cam.read()
    frame_rgba = cv2.convertScaleAbs(frame, None, 0.5)  # scale 16-bit values into 8-bit range (makes it brighter)
    frame_rgba = frame_rgba.astype(np.uint8)
    frame_rgba = cv2.cvtColor(frame_rgba, cv2.COLOR_BAYER_RG2RGBA)  # debayer to RGBA

    width = frame.shape[1]
    height = frame.shape[0]
    img = jetson.utils.cudaFromNumpy(frame_rgba)
    detections = net.Detect(img, width, height, "box,labels,conf")

    cv2.imshow("Overtaking Assistance", frame_rgba)
    key = cv2.waitKey(1)
    # press 'q' to exit
    if key == ord('q'):
        break

However, now I would get this error:

Traceback (most recent call last):
File "/home/evannandi/Desktop/pyPro/Object Detection Trial/my-detection.py", line 10, in <module>
display = jetson.utils.glDisplay()
Exception: jetson.utils -- failed to create glDisplay device
PyTensorNet_Dealloc()

Please help :)

There’s a similar post here.

Thank you for the reply! However, I’ve already tried restarting a few times and launching the code right away, but it still gives me the same error. I’ve also tried changing the video source from "/dev/video0" to 0 and it still won’t work. :/

Hi @EvanNandi, the third argument to gstCamera needs to be a string - can you try:

camera = jetson.utils.gstCamera(2560, 720, "0")

Also, are you able to view your camera with nvgstcapture first?
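For reference, a quick test along these lines should try opening the camera through the CSI/Argus path (assuming your camera is on sensor 0):

$ nvgstcapture-1.0 --sensor-id=0

If no preview appears, the sensor probably isn't registered with the Argus/ISP stack.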

Ahhh, changing the argument to a string fixed that error. Although now it gives another error, as follows:

Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:557 No cameras available

And it seems that the Jetson Nano won’t detect my camera with nvgstcapture, although cv2 and V4L2 commands do. Is there anything I can do? I’ve tried plugging the camera into the 2nd MIPI port and it still won’t work.

OK, let’s check what the V4L2 output formats are - what are the results of running this:

$ sudo apt-get install v4l-utils
$ v4l2-ctl --list-formats-ext

Here it is:

evannandi@CarOvertakingAssistance:~$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
Index       : 0
Type        : Video Capture
Pixel Format: 'Y16 '
Name        : 16-bit Greyscale
	Size: Discrete 1600x600
	Size: Discrete 2560x720
	Size: Discrete 3840x1080
	Size: Discrete 5184x1944
	Size: Discrete 6528x1848
	Size: Discrete 6528x2464

Index       : 1
Type        : Video Capture
Pixel Format: 'RG10'
Name        : 10-bit Bayer RGRG/GBGB
	Size: Discrete 1600x600
	Size: Discrete 2560x720
	Size: Discrete 3840x1080
	Size: Discrete 5184x1944
	Size: Discrete 6528x1848
	Size: Discrete 6528x2464

It looks like it outputs grayscale or raw Bayer, whereas the GStreamer V4L2 pipeline is set up for YUY2/YUYV. So you may need to adjust it here: https://github.com/dusty-nv/jetson-utils/blob/798c416c175d509571859c9290257bd5cce1fd63/camera/gstCamera.cpp#L432

gst-inspect-1.0 videoconvert doesn’t show support for Bayer, but there appears to be a bayer2rgb GStreamer element that does. If you wanted the hardware/ISP to do the debayering, you would need to look into the MIPI CSI camera driver for your sensor - it doesn’t appear to currently be installed and/or supported, since you aren’t getting video from nvgstcapture.
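As a rough sketch of that software-debayer route (untested here - the caps and resolution are assumptions you'd need to match to your sensor, and note that bayer2rgb only handles 8-bit Bayer formats, so the 10-bit RG10 mode may not negotiate):

$ gst-launch-1.0 v4l2src device=/dev/video0 ! \
    "video/x-bayer,format=rggb,width=2560,height=720" ! \
    bayer2rgb ! videoconvert ! xvimagesink

bayer2rgb lives in gst-plugins-bad, so that package may need to be installed first.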

Hey, I can’t seem to figure out how to adjust it in gstCamera.cpp, so I decided to convert using cv2. I succeeded in getting an output display, however it seems to be in BGR (the colour is messed up), so it won’t detect any object. Do I need to find a way to convert it to RGB or something for it to detect objects, or should it detect objects anyway? Here’s my current code:


import cv2
import numpy as np
from datetime import datetime
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
#camera = jetson.utils.gstCamera(2560, 720, "0") # using V4L2
camera = cv2.VideoCapture(0, cv2.CAP_V4L2)
camera.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('Y', '1', '6', ' '))  # request the Y16 format
camera.set(cv2.CAP_PROP_CONVERT_RGB, False)  # keep the raw frame

display = jetson.utils.glDisplay()

while display.IsOpen():
    ret, frame = camera.read()
    frame = cv2.convertScaleAbs(frame, None, 1)
    frame = frame.astype(np.uint8)
    frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BAYER_RG2RGBA)  # debayer to RGBA
    width = frame.shape[1]
    height = frame.shape[0]
    img = jetson.utils.cudaFromNumpy(frame_rgb)
    detections = net.Detect(img, width, height, "box,labels,conf")
    display.RenderOnce(img, width, height)
    display.SetTitle("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))

Note: When

camera.set(cv2.CAP_PROP_CONVERT_RGB, False)

is set to True, it spits out this error:

cv2.error: OpenCV(4.1.1) /home/nvidia/host/build_opencv/nv_opencv/modules/imgproc/src/demosaicing.cpp:1700: error: (-215:Assertion failed) scn == 1 && (dcn == 3 || dcn == 4) in function 'demosaicing'

However, when set to False, this is the display output:

The input image to detectNet.Detect() should be in float4 RGBA format - see here for a post on converting it into that format with cv2: detectnet-video
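In outline, that conversion looks something like this (a minimal sketch, assuming frame is a uint8 BGR image as in that post):

frame_rgba = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)  # reorder channels and add an alpha channel
frame_rgba = frame_rgba.astype(np.float32)            # detectNet expects float4 (float32 RGBA)
img = jetson.utils.cudaFromNumpy(frame_rgba)
detections = net.Detect(img, frame_rgba.shape[1], frame_rgba.shape[0])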

I put in another converter line and have succeeded in getting an RGBA display output. However, there doesn’t seem to be any object detection overlay. Is there something missing in my code? I’ve tried different objects.

frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BAYER_RG2RGBA)
frame_rgba = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)
width = frame.shape[1]
height = frame.shape[0]
img = jetson.utils.cudaFromNumpy(frame_rgb)

It looks like you are getting a correct image from the camera at least, so that’s good to hear. Are you getting any detection info printed out on the console, or none at all? Are you getting any errors from the console?

Try reducing the --threshold argument to the program; the default is 0.5, but a lower value will produce more detections.
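Since you’re constructing the network directly in Python rather than parsing command-line arguments, that corresponds to the threshold keyword (0.25 here is just an illustrative value):

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.25)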

Also there may be the issue that the image’s aspect ratio is “squished” due to the stereoscopic view, so you may want to try cropping to only the left-hand or right-hand camera. A quick way to do that may be just to use cv2 again before passing the image to detectNet.Detect().
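For example, a minimal crop of a side-by-side stereo frame (this assumes the two views are packed left/right into one image, as the 2560x720 mode suggests):

left = frame_rgb[:, : frame_rgb.shape[1] // 2].copy()  # keep only the left camera; copy() makes the slice contiguous
img = jetson.utils.cudaFromNumpy(left)
detections = net.Detect(img, left.shape[1], left.shape[0])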

I tried cropping the image so only the left-hand camera is showing and decreasing the threshold to as low as 0.1. However, it still won’t detect any object. There’s no error in the console or any detection info, only these few lines being repeated over and over:

jetson.utils -- cudaFromNumpy() ndarray dim 0 = 600
jetson.utils -- cudaFromNumpy() ndarray dim 1 = 800
jetson.utils -- cudaFromNumpy() ndarray dim 2 = 4
jetson.utils -- freeing CUDA mapped memory
jetson.inference -- PyDetection_Dealloc()
jetson.inference -- PyDetection_Dealloc()
jetson.utils -- cudaFromNumpy() ndarray dim 0 = 600
jetson.utils -- cudaFromNumpy() ndarray dim 1 = 800
jetson.utils -- cudaFromNumpy() ndarray dim 2 = 4
jetson.utils -- freeing CUDA mapped memory
jetson.inference -- PyDetection_Dealloc()
jetson.inference -- PyDetection_Dealloc()

Here is the image output:

Hmm, your video does appear a bit dark - for the time being, I suppose keep trying to reduce the threshold even lower, or add some additional logging here to inspect the results:

(remember to then run make and sudo make install)

I’ve reduced the threshold all the way down to 0.00000000000000005 lol, and I still don’t get any detections. I did add additional logging at line 719 of detectNet.cpp, and it does keep printing my log statement.
I’m really confused as to why it wouldn’t detect anything. I guess it’s my camera input that’s not being read properly by detectNet?

Sorry for the delay - if the video is able to be displayed OK with the glDisplay object, it is likely that detectNet is receiving it OK. Can you post your current code?

Also, it occurred to me that if you are using a stereo camera and passing in both video frames, it could affect the aspect ratio (the network downsamples the input image to 300x300). Have you tried cropping the image to include only one of the camera frames?