Multiple nvarguscamerasrc sources in Python not linked

I’m building a custom Python app and I need to use the two CSI camera modules as sources for the app. I’m not really sure what I’m missing or doing wrong, but I get this error:

Error: gst-stream-error-quark: Internal data stream error. (1): gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstBin:source-bin-0/GstNvArgusCameraSrc:camera-source:
streaming stopped, reason not-linked (-1)

Here is the source code:
https://github.com/socieboy/deepstream-examples/blob/master/multi-csi-source.py

Jetson NX/Nano
DeepStream 5.0
JetPack Version 4.5

Can you try with gst-launch to get the right pipeline first, and then start writing the Python script? Your pipeline is wrong. Do you know anything about your camera? What is the output format of your camera? What is its frame rate?

After you have enough information about your camera, you can refer to the sample pipeline here: DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

Thank you for the quick reply. I know the pipeline is wrong; that’s why I came to the forum looking for help.

This is the list of the output formats supported by the camera.

ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: 'RG10'
Name : 10-bit Bayer RGRG/GBGB
Size: Discrete 3264x2464
Interval: Discrete 0.048s (21.000 fps)
Size: Discrete 3264x1848
Interval: Discrete 0.036s (28.000 fps)
Size: Discrete 1920x1080
Interval: Discrete 0.033s (30.000 fps)
Size: Discrete 1640x1232
Interval: Discrete 0.033s (30.000 fps)
Size: Discrete 1280x720
Interval: Discrete 0.017s (60.000 fps)

Please refer to DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums; you need to set the correct caps according to your camera.
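
For example, something like this pins the 1920x1080 @ 30 fps mode from your list in Python (just a sketch; nvarguscamerasrc delivers NV12 in NVMM memory after the ISP, not the sensor's raw RG10 Bayer format, which is what the later working pipelines in this thread use):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Capsfilter pinning the 1920x1080 @ 30 fps sensor mode listed above.
# nvarguscamerasrc outputs NV12 in NVMM memory after the ISP, so the caps
# describe that output, not the raw RG10 Bayer format of the sensor.
caps_filter = Gst.ElementFactory.make("capsfilter", "camera-caps")
caps_filter.set_property(
    "caps",
    Gst.Caps.from_string(
        "video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1, format=NV12"
    ),
)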

Let others help if you can’t.

There is a sample pipeline in the link. Can you try it?

OK, I have tried this pipeline:

gst-launch-1.0 nvarguscamerasrc bufapi-version=true sensor-id=0 ! 'video/x-raw(memory:NVMM),width=640,height=480,framerate=30/1,format=NV12' ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt ! nvtracker tracker-width=640 tracker-height=480 ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so enable-batch-process=1 ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvmultistreamtiler ! nvosd ! nvvideoconvert ! nvegltransform ! nveglglessink

It seems like the pipeline example is for an older version of DeepStream; I got this error:

No such element or plugin 'nvosd'

I know my camera works; I have tested it with other pipelines like the following and it works:

root@frank:/edge-appliance# gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! nvvidconv ! nvegltransform ! nveglglessink
Setting pipeline to PAUSED ...

Using winsys: x11 
Pipeline is live and does not need PREROLL ...
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
Setting pipeline to PLAYING ...
New clock: GstSystemClock
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 2 
   Output Stream W = 1920 H = 1080 
   seconds to Run    = 0 
   Frame Rate = 29.999999 
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
ERROR: from element /GstPipeline:pipeline0/GstEglGlesSink:eglglessink0: Output window was closed
Additional debug info:
/dvs/git/dirty/git-master_linux/3rdparty/gst/gst-nveglglessink/ext/eglgles/gsteglglessink.c(913): gst_eglglessink_event_thread (): /GstPipeline:pipeline0/GstEglGlesSink:eglglessink0
Execution ended after 0:00:03.731645252
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
CONSUMER: Done Success
GST_ARGUS: Cleaning up
GST_ARGUS: Done Success
Setting pipeline to NULL ...
Freeing pipeline ...

It seems DeepStream is not installed.

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

Please refer to Quickstart Guide — DeepStream 6.3 Release documentation for installation

Your camera does not support 640x480 resolution; you may try:

gst-launch-1.0 nvarguscamerasrc bufapi-version=true sensor-id=0 ! 'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1,format=NV12' ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt ! nvtracker tracker-width=640 tracker-height=480 ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so enable-batch-process=1 ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvmultistreamtiler ! nvdsosd ! nvvideoconvert ! nvegltransform ! nveglglessink

DeepStream is installed.
This pipeline worked:

gst-launch-1.0 nvarguscamerasrc bufapi-version=true sensor-id=0 ! "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1,format=NV12" ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt ! nvtracker tracker-width=640 tracker-height=480 ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so enable-batch-process=1 ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=RGBA" ! nvmultistreamtiler ! nvdsosd ! nvvideoconvert ! nvegltransform ! nveglglessink

So you can write your Python script with this pipeline.
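
For reference, a minimal sketch of such a script, which simply wraps the same pipeline string with Gst.parse_launch (the element properties and file paths are copied from the working pipeline above; the bus handling is a generic assumption, not the script linked in this thread):

#!/usr/bin/env python3
# Minimal sketch: run the working single-camera pipeline from Python.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

PIPELINE = (
    "nvarguscamerasrc bufapi-version=true sensor-id=0 ! "
    "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1,format=NV12 ! "
    "m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt ! "
    "nvtracker tracker-width=640 tracker-height=480 "
    "ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so enable-batch-process=1 ! "
    "nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA ! "
    "nvmultistreamtiler ! nvdsosd ! nvvideoconvert ! nvegltransform ! nveglglessink"
)

pipeline = Gst.parse_launch(PIPELINE)
loop = GLib.MainLoop()

# Quit the main loop on error or end-of-stream so the script exits cleanly.
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::error", lambda bus, msg: loop.quit())
bus.connect("message::eos", lambda bus, msg: loop.quit())

pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)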

Yes, no problems, it works fine.

https://github.com/socieboy/deepstream-examples/blob/master/test-pipeline-python.py

But my problem still exists… I need to run two CSI cameras using nvarguscamerasrc in one pipeline.

Thanks for your guidance, but I wrote the Python script for your pipeline.
Do you know what the problem is with my original question? How do I run multiple CSI cameras in one single pipeline?

https://github.com/socieboy/deepstream-examples/blob/master/multi-csi-source.py

There is a sample with multiple sources: deepstream-test3

I followed that example as a starting point for my code. The sample uses uridecodebin as the source, not nvarguscamerasrc; with my understanding of GStreamer and DeepStream, I was able to change the code to use nvarguscamerasrc, but I don’t know why I’m getting the following not-linked error:

root@frank:/deepstream-examples# python multi-csi-source.py 
Creating Element: nvstreammux
Creating Source for  Camera 1
Creating Source Bin
Creating Element: nvarguscamerasrc
Creating Source for  Camera 2
Creating Source Bin
Creating Element: nvarguscamerasrc
Creating Element: nvmultistreamtiler
Creating Element: nvvideoconvert
Creating Element: nvegltransform
Creating Element: nveglglessink
Creating Element: queue
Creating Element: queue
Creating Element: queue
Creating Element: queue
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline

Using winsys: x11 
GST_ARGUS: Creating output stream
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 2 
   Output Stream W = 1920 H = 1080 
   seconds to Run    = 0 
   Frame Rate = 29.999999 
GST_ARGUS: Running with following settings:
   Camera index = 1 
   Camera mode  = 2 
   Output Stream W = 1920 H = 1080 
   seconds to Run    = 0 
   Frame Rate = 29.999999 
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
CONSUMER: Producer has connected; continuing.
Error: gst-stream-error-quark: Internal data stream error. (1): gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstBin:source-bin-0/GstNvArgusCameraSrc:camera-source:
streaming stopped, reason not-linked (-1)
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success

I’m not sure what part I’m missing, so I was looking for help on the forum.

I don’t think you understand GStreamer.
Your source bin is wrong. Your camera supports multiple resolutions and frame rates; if you don’t tell the camera which resolution and frame rate you want to use, how can it work?

Please refer to the pipeline that you can run successfully. Your source should be “nvarguscamerasrc + capsfilter”, not a single “nvarguscamerasrc”. I can’t find any capsfilter in your “def create_source_bin(camera)”.
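
For illustration, a minimal sketch of such a source bin (the function name create_source_bin and the 1920x1080 @ 30 fps caps come from this thread; the exact wiring below is an assumption, not the full script):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst


def create_source_bin(camera_id):
    # Bin that wraps "nvarguscamerasrc + capsfilter" and exposes one src pad.
    nbin = Gst.Bin.new("source-bin-%d" % camera_id)

    # Camera source; bufapi-version=true so nvstreammux accepts its buffers.
    src = Gst.ElementFactory.make("nvarguscamerasrc", "camera-source")
    src.set_property("sensor-id", camera_id)
    src.set_property("bufapi-version", True)

    # Capsfilter pinning one of the listed sensor modes (1920x1080 @ 30 fps, NV12/NVMM).
    caps = Gst.ElementFactory.make("capsfilter", "camera-caps")
    caps.set_property(
        "caps",
        Gst.Caps.from_string(
            "video/x-raw(memory:NVMM), width=1920, height=1080, "
            "framerate=30/1, format=NV12"
        ),
    )

    nbin.add(src)
    nbin.add(caps)
    src.link(caps)

    # Expose the capsfilter's src pad as the bin's src pad.
    nbin.add_pad(Gst.GhostPad.new("src", caps.get_static_pad("src")))
    return nbin

The bin’s src ghost pad can then be linked to an nvstreammux sink_%u request pad, the same way deepstream-test3 links its uridecodebin source bins.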

You can also try the two-camera pipeline below before you start to implement the multiple-source script.

gst-launch-1.0 nvarguscamerasrc bufapi-version=true sensor-id=0 ! 'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1,format=NV12' ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt ! nvtracker tracker-width=640 tracker-height=480 ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so enable-batch-process=1 ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvmultistreamtiler ! nvdsosd ! nvvideoconvert ! nvegltransform ! nveglglessink nvarguscamerasrc bufapi-version=true sensor-id=1 ! 'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1,format=NV12' ! m.sink_1

GStreamer automatically uses one of the default capabilities if you don’t explicitly set one, so that’s not the problem; adding one won’t solve it. It’s the same with your pipeline example: if you remove the capabilities after nvarguscamerasrc, it will still run.

Run your pipeline without caps and it will run:

gst-launch-1.0 nvarguscamerasrc bufapi-version=true sensor-id=0 ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt ! nvtracker tracker-width=640 tracker-height=480 ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so enable-batch-process=1 ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=RGBA" ! nvmultistreamtiler ! nvdsosd ! nvvideoconvert ! nvegltransform ! nveglglessink

Sorry, the above pipeline is wrong; please try this one.
gst-launch-1.0 nvarguscamerasrc bufapi-version=true sensor-id=0 ! 'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1,format=NV12' ! m.sink_0 nvstreammux name=m batch-size=2 width=1280 height=720 live-source=1 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt ! nvtracker tracker-width=640 tracker-height=480 ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so enable-batch-process=1 ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvmultistreamtiler ! nvdsosd ! nvvideoconvert ! nvegltransform ! nveglglessink nvarguscamerasrc bufapi-version=true sensor-id=1 ! 'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1,format=NV12' ! m.sink_1

Both pipelines worked; both show the two cameras on the screen.

Can you try to add the capsfilter?