nvbuf_utils: Could not get EGL display connection

Hello there,
I am working with a Jetson Nano in headless mode through SSH. I have a ZED camera plugged into the Nano and am trying to get a live feed through SSH on another computer. The problem is that when I run ZED Explorer (the launcher for the ZED camera), it gives me this error:

nvbuf_utils: Could not get EGL display connection
Error in Shader source code
Information written in shaderLog.txt
Wrong location, does this var exist : "texInput"? Is it used in the program? (May be the GLCompiler swapped it)
    Trying : texInput<-0

then the explorer shuts down. I tried other methods, like running it through Cheese, OpenCV, and ROS, and managed to get it to work in all of them, but the FPS was way too low (around 2-5 fps).
I also tried other cameras but hit the same issue.
I found a post about the Jetson TX2 with the same issue, but none of the methods there worked for me.

PS
I’m using ssh -C -X to connect to the Nano.

Any ideas on how to fix this issue?

Hi,
Please share clear steps (system setup and GStreamer pipeline) so that we can reproduce the failure. Did you flash JetPack 4.2.2 (r32.2.1) via SDK Manager?


Thank you for the reply.
To set up the system, I followed the official tutorial on how to set up the Jetson Nano, including JetPack:

https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-devkit

For GStreamer, I set it up using the command sudo apt install -y gstreamer1.0-plugins-base

After that I just tried different cameras through SSH.

I don’t really know how to use GStreamer, to be honest; I just installed it as a dependency for OpenCV.
Can you please specify what you need from the GStreamer pipeline and how I can provide it?

This is the full system setup after installing JetPack, if that helps:

# prepare system before install
cd $HOME
sudo apt update
sudo apt upgrade
sudo apt install -y git
sudo apt install -y python-pip

# ROV networking packages
sudo apt install -y openssh-server
sudo apt install -y arp-scan

# ROS
# add ROS ppa
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654

sudo apt update
sudo apt upgrade

# Install ROS
sudo apt-get install -y ros-melodic-desktop 
sudo rosdep init
rosdep update
echo "source /opt/ros/melodic/setup.bash" >> ~/.bashrc
echo "source ~/ghattas/devel/setup.bash" >> ~/.bashrc
source ~/.bashrc
sudo apt install -y python-rosinstall python-rosinstall-generator python-wstool build-essential

# install additional ROS-packages
sudo apt-get install -y ros-melodic-rosserial-arduino
sudo apt-get install -y ros-melodic-rosserial
sudo apt install -y ros-melodic-ddynamic-reconfigure
sudo apt install -y ros-melodic-ddynamic-reconfigure-python

# create workspace, install needed deps, and build
# clone ghattas repo to workspace
cd $HOME
mkdir -p ~/ghattas/src

# Install camera drivers
# intel Realsense T256
echo 'deb http://realsense-hw-public.s3.amazonaws.com/Debian/apt-repo xenial main' | sudo tee /etc/apt/sources.list.d/realsense-public.list
sudo apt-key adv --keyserver keys.gnupg.net --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE || sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE
sudo add-apt-repository "deb http://realsense-hw-public.s3.amazonaws.com/Debian/apt-repo bionic main" -u
sudo apt update
sudo apt install -y librealsense2-dkms
sudo apt install -y librealsense2-utils
sudo apt install -y librealsense2-dev
sudo apt install -y librealsense2-dbg
cd ~/ghattas/src
git clone https://github.com/intel-ros/realsense

# zed-mini for jetson nano
mkdir ~/temp/
cd ~/temp/
wget "https://download.stereolabs.com/zedsdk/2.8/jetson_jp42"
chmod +x ZED_SDK_*.run
./ZED_SDK_*.run
chmod +x jetson_jp42
./jetson_jp42

# build workspace
source /opt/ros/melodic/setup.bash
cd ~/ghattas/
catkin_make -DCMAKE_BUILD_TYPE=Release
source devel/setup.bash

# OPENCV
# Install opencv required deps
sudo pip2 install --upgrade pip
sudo pip2 install imutils
python -m pip install --upgrade --user mss
sudo apt install -y cmake python-dev python-numpy
sudo apt install -y gcc g++
sudo apt install -y python-gtk2-dev
sudo apt install -y libffms2-4
sudo apt install -y gstreamer1.0-plugins-base

# clone and build latest stable opencv version
cd $HOME/temp/
mkdir opencv-source
cd opencv-source
git clone https://github.com/opencv/opencv.git
git clone https://github.com/opencv/opencv_contrib
mkdir build
cd build
cmake -DOPENCV_EXTRA_MODULES_PATH=../opencv_contrib/modules -D CMAKE_INSTALL_PREFIX=/opt/ros/melodic/ ../opencv
make
sudo make install

# install optional user convenience packages
# note: Google Chrome ships only amd64 .debs, which will not install on the Nano's arm64 OS; use Chromium instead
sudo apt install -y chromium-browser
sudo add-apt-repository ppa:webupd8team/terminix
sudo apt update
sudo apt install -y tilix
sudo apt install -y htop

Hi,
We would like you to share a command such as the one in
https://devtalk.nvidia.com/default/topic/1046218/jetson-tx2/unable-to-overlay-text-when-using-udpsrc-/post/5310313/#5310313

so that we can run it to reproduce the issue.

In certain cases, the failure is triggered because DISPLAY is not set. You may configure it and try again.

$ export DISPLAY=:0   # or :1

Hello,

After setting the display to 0 using

export DISPLAY=:0

and using the following command

gst-launch-1.0 videotestsrc ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! "video/x-raw(memory:NVMM), format=(string)RGBA" ! nvoverlaysink

the output of the terminal is

Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock

and the test display showed up like this
https://images.app.goo.gl/RfJ3L6x3K9oXFAXe6

but when I try

export DISPLAY=:1

then do the same command this is the output

nvbuf_utils: Could not get EGL display connection
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...

it just freezes like this and nothing shows up.

Hi,
You may run ‘xrandr’ to check which DISPLAY is correct.
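
For example, you can query each candidate display directly (a quick check; this assumes an X server is running on the Nano's HDMI output):

$ DISPLAY=:0 xrandr --query
$ DISPLAY=:1 xrandr --query

Whichever one prints a connected output (rather than an error) is the display to export.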

This is the output I get when I use the command

Screen 0: minimum 8 x 8, current 1366 x 768, maximum 16384 x 16384
HDMI-0 connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
   1366x768      59.97*+
   1920x1080     60.00    59.95    50.00    24.00    23.98  
   1280x768      59.99  
   1280x720      60.00    59.94    50.00  
   1024x768      60.01  
   800x600       60.32    56.25  
   720x576       50.00  
   720x480       59.94  
   720x400       70.04  
   640x480       59.94    59.94  
DP-0 disconnected (normal left inverted right x axis y axis)

Can you please direct me to a list of packages that I can install to run cameras over SSH, or any other packages that I should install after flashing the Jetson Nano?

Hi,
Please check
https://elinux.org/Jetson_Zoo

Thanks for the reply.
Yes, I did check that, but I was asking about any packages necessary for the Jetson Nano to set up a network connection, or display packages that need to be installed in order to stream a camera through SSH.
That might be the problem.

You should be able to see your onboard camera from a remote machine with:

ssh -X Nano_IP
#when logged into nano:
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, framerate=30/1' ! nvvidconv ! videoconvert ! xvimagesink

Also be aware that nvivafilter silently falls back to libnvsample_cudaprocess.so in the standard path (/usr/lib/aarch64…) if your custom lib cannot be found either explicitly (such as ./libnvsample_cudaprocess.so when you are in the directory where this lib has been generated) or in a directory listed first in the environment variable LD_LIBRARY_PATH.
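
For example, a minimal sketch of the LD_LIBRARY_PATH route (the directory holding a rebuilt lib here is hypothetical):

$ export LD_LIBRARY_PATH=$HOME/my_cudaprocess:$LD_LIBRARY_PATH
$ gst-launch-1.0 videotestsrc ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvoverlaysink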


Okay, so I checked the path (/usr/lib/aarch64…) and found libnvsample_cudaprocess.so there.
I assume by custom lib you mean nvbuf_utils? I don’t really have much experience regarding all of this, so can you please explain more or tell me what I need to look for?

The camera is plugged in, and when I try the command

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, framerate=30/1' ! nvvidconv ! videoconvert ! xvimagesink

this is the output

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, framerate=30/1' ! nvvidconv ! videoconvert ! xvimagesink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:521 No cameras available
Got EOS from element "pipeline0".
Execution ended after 0:00:00.296856486
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...

Sorry, I realize I missed that you are using a USB ZED camera. So the right command would be:

gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw, format=YUY2 ! videoconvert ! xvimagesink

assuming that /dev/video1 is the video node for your ZED (it should appear a few seconds after you plug the camera in). If you don’t have any CSI camera connected, it may be video0.
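
If you are not sure which node is which, you can list them first (v4l2-ctl comes from the v4l-utils package, which may need to be installed):

$ sudo apt install -y v4l-utils
$ v4l2-ctl --list-devices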

About the custom lib: the nvivafilter plugin uses a library providing a function for processing one frame. The original lib is the one you’ve found, but it doesn’t do much for cuda-process, so you would want to rebuild your own (you would download the sources for this) and provide the location of this lib via the nvivafilter option customer-lib-name, as you did in post #5. If you provide a bad location and nvivafilter cannot find the lib, it silently falls back to the lib in /usr/lib/aarch64…
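
As a rough sketch of that workflow (assuming the sample sources, typically shipped as nvsample_cudaprocess_src.tbz2 inside the L4T public sources for your release, have already been downloaded; the exact archive name may vary by release):

$ tar xjf nvsample_cudaprocess_src.tbz2
$ cd nvsample_cudaprocess
$ make
$ gst-launch-1.0 videotestsrc ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvivafilter cuda-process=true customer-lib-name="$PWD/libnvsample_cudaprocess.so" ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvoverlaysink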

Thanks for the explanation, but I am not using any custom library; all I need is a live feed from the ZED camera over SSH.

I tried this command

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw, format=YUY2 ! videoconvert ! xvimagesink

and this is the output

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock

The camera worked, but I am still getting very low FPS, around 5 or so.

Hi,
ZED camera shows below capability:

$ v4l2-ctl -d /dev/video1 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
        Index       : 0
        Type        : Video Capture
        Pixel Format: 'YUYV'
        Name        : YUYV 4:2:2
                Size: Discrete 2560x720
                        Interval: Discrete 0.017s (60.000 fps)
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 1280x480
                        Interval: Discrete 0.010s (100.000 fps)
                        Interval: Discrete 0.017s (60.000 fps)
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 3840x1080
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 4416x1242
                        Interval: Discrete 0.067s (15.000 fps)

You can run

$ sudo jetson_clocks
$ gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,format=YUY2,width=1280,height=480,framerate=30/1 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvoverlaysink

nvarguscamerasrc is for Bayer sensors, not for USB cameras. Please run v4l2src.
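
If the goal is to view the feed on the remote machine over ssh -X rather than on the Nano's local display, here is a sketch using xvimagesink instead (nvoverlaysink draws on the Nano's own HDMI output, while xvimagesink renders through the forwarded X connection):

$ gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,format=YUY2,width=1280,height=480,framerate=30/1 ! videoconvert ! xvimagesink

Note that X forwarding itself can limit the achievable frame rate.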