Could not get EGL display connection, failed to create outputstream

Hi! We have a Python application that is supposed to count incoming and outgoing cars in traffic. For that we are using a Jetson Nano 2GB, a Raspberry Pi Camera Module V2, and Python (OpenCV, YOLO, etc.).

We are working on making the camera work with the application, to do a real-time counting app with a livestream video.
For that we have to trace lines to define the entry and exit points of a "road" on a canvas (which is working) by capturing a frame of the video, so a still picture is displayed on the canvas and we can draw lines on it.
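For context, the line-drawing part works roughly like this (a simplified sketch, not the exact code; "canvas" is the Tkinter canvas showing the captured frame):

# Simplified sketch: two clicks on the canvas define one entry/exit line
clicks = []

def on_click(event):
    clicks.append((event.x, event.y))
    if len(clicks) == 2:
        (x1, y1), (x2, y2) = clicks
        canvas.create_line(x1, y1, x2, y2, fill="red", width=2)
        clicks.clear()

canvas.bind("<Button-1>", on_click)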

Then we have a 'Play' button that is supposed to make the camera run again so we can start counting, but when we press it the picture is still there and the camera is not running, with the error: OpenCV | cannot query video position: status=0, value=-1, duration=-1

Tell me if you need to see the code.

Thanks for reading.

Hi,
We would like to clarify what the issue is. From the description, you can launch the camera on the first run, but it fails on the second run. Is this correct?

And please share your release version:

$ cat /etc/nv_tegra_release

release version: # R32 (release), REVISION: 6.1, GCID: 27863751, BOARD: t210ref, EABI: aarch64, DATE: Mon Jul 26 19:20:30 UTC 2021

Yes I am opening the camera using cv2 like this:

def show_frame():
    global tkImage
    global notCaptured
    if cap.isOpened() and notCaptured:
        # canvas.pack_forget()
        hasFrame, frame = cap.read()
        if hasFrame is True:
            frame = cv2.resize(frame, (800, 600))
            cv2image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)
            img = Image.fromarray(cv2image)
            tkImage = ImageTk.PhotoImage(img)
            canvas.create_image(0, 0, anchor=NW, image=tkImage)
        # re-scheduled only while notCaptured is True, so capturing a frame
        # stops this display loop (freezing on the last frame shown)
        window.after(10, show_frame)

def gstreamer_pipeline(
    capture_width=1280,
    capture_height=720,
    display_width=640,
    display_height=480,
    framerate=60,
    flip_method=0,
):
    return (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), "
        "width=(int)%d, height=(int)%d, "
        "format=(string)NV12, framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
        "videoconvert ! "
        "video/x-raw, format=(string)BGR ! appsink"
        % (
            capture_width,
            capture_height,
            framerate,
            flip_method,
            display_width,
            display_height,
        )
    )
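The loop is started once at startup, roughly like this (simplified; the exact startup code is not shown here):

# Simplified startup sketch: open the camera, start the display loop, enter the Tk mainloop
cap = cv2.VideoCapture(gstreamer_pipeline(flip_method=0), cv2.CAP_GSTREAMER)
notCaptured = True
show_frame()
window.mainloop()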

And then when I capture a frame I do this:

def threadCapture():
    t2 = Thread(target=capture)  # note: target=capture, not target=capture(); the latter calls capture immediately instead of running it in the thread
    t2.start()

def capture():
    global notCaptured
    notCaptured = False

(So when a frame is captured it stops the loop that shows the video frame by frame, and it kind of freezes on the last frame.)

Then I have a play button that is supposed to make the video stream run again, but it doesn't work.

def Play():
    global video
    global videoList
    global indexVideo
    global cap
    global fps
    global frame_count
    global duration

    canvas.pack_forget()
    cap = cv2.VideoCapture(gstreamer_pipeline(flip_method=0), cv2.CAP_GSTREAMER)
    # cap = cv2.VideoCapture(0)
    fps = cap.get(cv2.CAP_PROP_FPS)  # OpenCV version 2 used "CV_CAP_PROP_FPS"
    # a live camera stream has no frame count/duration, so this query is what
    # triggers the "cannot query video position" warning
    frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    detect_objectsYolo()
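The button itself is just a Tkinter button wired to Play, something like this (simplified sketch; the widget names are illustrative):

playButton = Button(window, text="Play", command=Play)
playButton.pack()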

Hi,
The issue may be specific to OpenCV. Please try this Python sample and check whether you can pass the loop:

#!/usr/bin/env python3

import gi
import time
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GObject, GLib

pipeline = None
bus = None
message = None

# initialize GStreamer
Gst.init(None)

for i in range(1, 10):
    print("loop =",i," ")
    # build the pipeline
    pipeline = Gst.parse_launch(
        "nvarguscamerasrc ! video/x-raw(memory:NVMM),width=1920,height=1080,format=NV12 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! fakesink "
    )

    # start playing
    print("Switch to PLAYING state")
    pipeline.set_state(Gst.State.PLAYING)

    time.sleep(5)
    print("Send EoS")
    Gst.Element.send_event(pipeline, Gst.Event.new_eos())
    # wait until EOS or error
    bus = pipeline.get_bus()
    msg = bus.timed_pop_filtered(
        Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS)

    # free resources
    print("Switch to NULL state")
    pipeline.set_state(Gst.State.NULL)
    time.sleep(1)

We tried this on r32.6.1 / Jetson Nano + Raspberry Pi Camera V2 (IMX219) and the sample runs well. Please give it a try.

I passed the loop successfully. I managed to make things work somehow using "cap.release()" and "cv2.destroyAllWindows()" in the capture method, but I DON'T KNOW why it works.
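For reference, the capture method now looks roughly like this:

def capture():
    global notCaptured
    notCaptured = False
    cap.release()            # frees the nvarguscamerasrc pipeline so Play() can reopen the camera
    cv2.destroyAllWindows()  # closes any leftover OpenCV windows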

Since I followed a tutorial to use my CSI camera with OpenCV, I don't fully understand my code, and I am new to using Jetson Nano AND Python, so I am still trying to learn. Is there a way to reduce the video stream resolution? Because I am under 1 fps here, so is there a way to get a higher fps?
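I guess I could pass smaller values to gstreamer_pipeline, something like this (untested):

cap = cv2.VideoCapture(
    gstreamer_pipeline(
        capture_width=640,   # lower the sensor/stream resolution
        capture_height=480,
        display_width=640,
        display_height=480,
        framerate=30,
        flip_method=0,
    ),
    cv2.CAP_GSTREAMER,
)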

Thanks a lot.

OK, sometimes it works, sometimes it doesn't. Like right now I got a "Cannot create camera provider" and a "cannot query video position; status=0, value=-1, duration=-1".

Hi,
If your use case is to run a Yolo model for inferencing, we would suggest using the DeepStream SDK. It is an optimal solution for running deep learning inference on Jetson platforms.

I am indeed using YOLOv3 for vehicle counting and classification, but it's very, very slow. Do you have any tutorials or links I could refer to?

Hi,
Please check the documentation of the DeepStream SDK:
https://docs.nvidia.com/metropolis/

If you install all packages through SDKManager, you should see the package in

/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/

Due to the capability of Jetson Nano, we would suggest running YOLOv3-tiny.
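After following the README to build the plugin and download the model files, the tiny model should run with something like this (config file name as shipped in the DeepStream 6.0 package; please check the folder for the exact name):

$ cd /opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/
$ deepstream-app -c deepstream_app_config_yoloV3_tiny.txt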


Hi! I followed your instructions and installed the DeepStream SDK following the Quickstart Guide (Quickstart Guide — DeepStream 6.1.1 Release documentation).

But I am stuck at the "Run deepstream-app (the reference application)" part with this error:

** ERROR: <create_multi_source_bin:1423>: Failed to create element 'src_bin_muxer'
** ERROR: <create_multi_source_bin:1516>: create_multi_source_bin failed
** ERROR: <create_pipeline:1323>: create_pipeline failed
** ERROR: <main:639>: Failed to create pipeline
Quitting
App run failed

Hi,
Please go to the folder and try this command:

/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app$ deepstream-app -c source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt

Just tried, the error is the same:

deepstream-app -c source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt

** ERROR: <create_multi_source_bin:1423>: Failed to create element 'src_bin_muxer'
** ERROR: <create_multi_source_bin:1516>: create_multi_source_bin failed
** ERROR: <create_pipeline:1323>: create_pipeline failed
** ERROR: <main:639>: Failed to create pipeline
Quitting
App run failed

I think it's slow mostly because of Python. Would it be better to use another language like C++, C, or Java for this kind of project?

Hi,
Please clean the cache and try again:

$ rm ~/.cache/gstreamer-1.0/registry.aarch64.bin

If the failure is still present, we would suggest re-flashing the system through SDKManager. The command should work well on the default release; it is a bit strange that the failure is hit. Re-flashing the system may help.

I still get this :

(gst-plugin-scanner:7083): GStreamer-WARNING **: 08:08:42.057: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
No EGL Display
nvbufsurftransform: Could not get EGL display connection

** ERROR: <create_multi_source_bin:1423>: Failed to create element 'src_bin_muxer'
** ERROR: <create_multi_source_bin:1516>: create_multi_source_bin failed
** ERROR: <create_pipeline:1323>: create_pipeline failed
** ERROR: <main:639>: Failed to create pipeline
Quitting
App run failed

I am going to re-flash the system.

I downloaded SDK Manager 1.7.1 (the .deb one) and when I followed the installation instructions I got this error:
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
sdkmanager:amd64 : Depends: libgconf-2-4:amd64 but it is not installable
Depends: libcanberra-gtk-module:amd64 but it is not installable
Depends: locales:amd64 but it is not installable
E: Unable to correct problems, you have held broken packages.

I tried this command:
sudo dpkg -i sdkmanager_0.9.11-3405_amd64.deb

and got this: package architecture (amd64) does not match system (arm64)

Hi,
Did you install SDKManager on an x86 PC? You would need a host PC, and to connect the Jetson Nano to that PC, for flashing the system image and packages.

I decided to format the card and re-flash the JetPack directly.

I successfully installed DeepStream. Do you know how I can now use it with my application? Thanks a lot.

Hi,
You may try this command first:

/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app$ deepstream-app -c source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt

If it runs fine, please follow the README to set up and run the Yolo models:

/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/README