Unable to open GStreamer Pipeline on Nano

Hi, I have been following the commands on https://www.jetsonhacks.com/2019/04/02/jetson-nano-raspberry-pi-camera/ to test the raspberry pi camera v2.1 and after I typed

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3820, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e

on terminal, a blank window is opened and I got the error message below:

Setting pipeline to PAUSED ...

Using winsys: x11 
Pipeline is live and does not need PREROLL ...
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
Setting pipeline to PLAYING ...
New clock: GstSystemClock
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21,000000 fps Duration = 47619048 ; Analog Gain range min 1,000000, max 10,625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 3264 x 1848 FR = 28,000001 fps Duration = 35714284 ; Analog Gain range min 1,000000, max 10,625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1920 x 1080 FR = 29,999999 fps Duration = 33333334 ; Analog Gain range min 1,000000, max 10,625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1280 x 720 FR = 59,999999 fps Duration = 16666667 ; Analog Gain range min 1,000000, max 10,625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1280 x 720 FR = 120,000005 fps Duration = 8333333 ; Analog Gain range min 1,000000, max 10,625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 0 
   Output Stream W = 3264 H = 2464 
   seconds to Run    = 0 
   Frame Rate = 21,000000 
GST_ARGUS: PowerService: requested_clock_Hz=43238580
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
Unsupported Surface Count 0 
ERROR: from element /GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0:
streaming stopped, reason error (-5)
EOS on shutdown enabled -- waiting for EOS after Error
Waiting for EOS...

Thank you.

The available sensor modes on your camera appear to be 3264x2464, 3264x1848, 1920x1080 and 1280x720. Did you try changing your command line to match one of those instead of 3820x2464? In a simpler form:

$ gst-launch-1.0 nvarguscamerasrc ! nvoverlaysink

Ctrl-C to exit. This should also print out the available camera modes.
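For example, the full pipeline with the width corrected from 3820 to the 3264x2464 sensor mode reported in your log would look like this (untested sketch; everything else is unchanged from your original command):

```shell
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3264, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e
```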

Yes, I tried matching the output resolution to those sensor modes, but I got the same error. I was able to see the camera feed with

gst-launch-1.0 nvarguscamerasrc ! nvoverlaysink

though.

At least the camera works. This line in the original error message:

gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;

indicates that the system was unable to create a display context. Is this a fresh system install? Many libraries and applications have the bad habit of overwriting the Jetson-specific graphics drivers with generic ones.

Actually, my primary goal was to get OpenCV's VideoCapture working with the correct GStreamer configuration, and I just tried your tutorial on GitHub; it worked just fine at 1280x720 (capture and display) and 60 fps.
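For anyone else landing here, the OpenCV capture string can be built like this (a minimal sketch; the helper name is mine, not from the tutorial, and OpenCV must be built with GStreamer support). The capture width/height/framerate must match one of the sensor modes that GST_ARGUS prints:

```python
def gstreamer_pipeline(capture_width=1280, capture_height=720,
                       display_width=1280, display_height=720,
                       framerate=60, flip_method=0):
    """Build a GStreamer pipeline string for nvarguscamerasrc.

    capture_width/height/framerate must match one of the sensor
    modes listed by GST_ARGUS (e.g. 1280x720 @ 60 fps).
    """
    return (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), "
        f"width={capture_width}, height={capture_height}, "
        f"framerate={framerate}/1, format=NV12 ! "
        f"nvvidconv flip-method={flip_method} ! "
        f"video/x-raw, width={display_width}, height={display_height}, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )

def open_camera(pipeline):
    """Open the camera with OpenCV's GStreamer backend (needs a Jetson + CSI camera)."""
    import cv2  # imported here so the pipeline builder works without OpenCV installed
    cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
    if not cap.isOpened():
        raise RuntimeError("Unable to open camera via GStreamer")
    return cap
```

Usage would be `cap = open_camera(gstreamer_pipeline())`, then `ok, frame = cap.read()` in a loop.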

I am glad you were able to get it to work.

I’m also facing the same issue.

@mrefedoganay were you able to solve this error?

Hi, it’s been a long time so as far as I remember I solved this by checking my camera sensors.

Thanks for your quick reply.
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3820, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e

I tried the above command, and it gives the same error. Do I need to change anything? Could you please explain?

Two pieces of advice I can give:

1- Check whether the dimensions in your pipeline match your device's camera sensor modes.

2- I probably found a working pipeline in one of the issues in the jetson-inference repo.
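For point 1, the supported sensor modes are printed by nvarguscamerasrc itself at startup (the GST_ARGUS lines above), and they can also be listed from the driver with v4l2-ctl (a sketch, assuming the camera is /dev/video0 and v4l-utils is installed):

```shell
# List the raw formats and resolutions the camera driver exposes.
# Install the tool first if needed: sudo apt-get install v4l-utils
v4l2-ctl -d /dev/video0 --list-formats-ext
```

Note that in your command the width 3820 does not appear in any of the listed modes; 3264 does.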