DeepStream: three v4l2src cameras

• DeepStream 6.2
• JetPack Version: 5 (I think)
• Tegra: 35 (release), REVISION: 3.1, GCID: 32827747, BOARD: t186ref, EABI: aarch64, DATE: Sun Mar 19 15:19:21 UTC 2023
• TensorRT Version: 8.5.2-1+cuda11.4

• Issue Type: question
I would like to run three cameras with deepstream-test1-usbcam, the same as test3 does
with multiple sources, but I can't get the GStreamer pipeline set up properly.

• How to reproduce the issue? I just run:
python3 deepstream-test1-usbcam -i /dev/video0 /dev/video1

How can I modify the script to create three sources for deepstream-test1?
I have done this in OpenCV with a single pipeline string, but for DeepStream, with creating factories, I'm lost.
I tried to look at deepstream-test3 and create a source loop to iterate over the input sources, but
I don't know how to add all the remaining video converters and tie everything together at the end.

Any help is appreciated.

The core logic is that you need to port the create_source_bin from test3 to test1. You can try that.

Core Logic
for i in range(number_sources):
    print("Creating source_bin", i, "\n")
    uri_name = args[i]
    if uri_name.find("rtsp://") == 0:
        is_live = True
    # Each source lives in its own bin that exposes a ghost "src" pad.
    source_bin = create_source_bin(i, uri_name)
    if not source_bin:
        sys.stderr.write("Unable to create source bin \n")
    pipeline.add(source_bin)
    # Request a dedicated sink pad on the muxer for this source.
    padname = "sink_%u" % i
    sinkpad = streammux.get_request_pad(padname)
    if not sinkpad:
        sys.stderr.write("Unable to create sink pad bin \n")
    srcpad = source_bin.get_static_pad("src")
    if not srcpad:
        sys.stderr.write("Unable to create src pad bin \n")
    srcpad.link(sinkpad)
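
For reference, create_source_bin in deepstream-test3 looks roughly like this (condensed from deepstream_test_3.py; cb_newpad and decodebin_child_added are the companion callbacks from the same sample, and cb_newpad is what sets the ghost pad's target once uridecodebin exposes its source pad):

def create_source_bin(index, uri):
    # Wrap each source in its own bin so the pipeline sees a uniform "src" pad.
    bin_name = "source-bin-%02d" % index
    nbin = Gst.Bin.new(bin_name)
    if not nbin:
        sys.stderr.write(" Unable to create source bin \n")

    # uridecodebin picks the right demuxer/decoder for the given URI.
    uri_decode_bin = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
    if not uri_decode_bin:
        sys.stderr.write(" Unable to create uri decode bin \n")
    uri_decode_bin.set_property("uri", uri)
    uri_decode_bin.connect("pad-added", cb_newpad, nbin)
    uri_decode_bin.connect("child-added", decodebin_child_added, nbin)

    Gst.Bin.add(nbin, uri_decode_bin)
    # The ghost pad starts targetless; cb_newpad points it at the decoder pad later.
    bin_pad = nbin.add_pad(Gst.GhostPad.new_no_target("src", Gst.PadDirection.SRC))
    if not bin_pad:
        sys.stderr.write(" Failed to add ghost pad in source bin \n")
        return None
    return nbin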

hi,

I tried to do this but I've stumbled on something: I don't really know where to put the elements.

I'm testing with one camera only, hard-coded into the program in main, ignoring the args.
With help from ChatGPT I got the pipeline to stop complaining.

Code:
deepstream_yolov5_camera.zip (5.2 KB)

Error:

Using winsys: x11 
Opening in BLOCKING MODE 
0:00:00.189692514 60831      0x80e80c0 WARN                 nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
WARNING: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:03.012122819 60831      0x80e80c0 WARN                 nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:03.175171620 60831      0x80e80c0 WARN                 nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:03.175272645 60831      0x80e80c0 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
Terminated

@yuweiw
I have tried again. I cleaned the code up a bit, removing all the comments from the original NVIDIA code so it's easier to read,
but essentially I have a problem creating the pipeline with the source bins.
Can you see what I'm doing wrong? Full code is attached… To be honest, I really hope you can support
with the pipeline.

def decodebin_child_added(child_proxy, Object, name, user_data):
    print("Decodebin child added:", name, "\n")
    if name.find("v4l2src") != -1:
        Object.connect("child-added", decodebin_child_added, user_data)

    if "source" in name:
        source_element = child_proxy.get_by_name("source")
        if source_element.find_property('drop-on-latency') is not None:
            Object.set_property("drop-on-latency", True)

The create_source_bin() function is:

def create_source_bin(index, src_name):
    bin_name = "source-bin-%02d" % index
    print(bin_name)
    nbin = Gst.Bin.new(bin_name)

    src_decode_bin = Gst.ElementFactory.make("v4l2src", "usb-cam-source")
    src_decode_bin.set_property("device", src_name)
    src_decode_bin.connect("pad-added", cb_newpad, nbin)
    src_decode_bin.connect("child-added", decodebin_child_added, nbin)

    Gst.Bin.add(nbin, src_decode_bin)
    bin_pad = nbin.add_pad(Gst.GhostPad.new_no_target("src", Gst.PadDirection.SRC))

    return nbin

Then I add some converters at the end.


    # Adding elements to the pipeline
    pipeline.add(caps_v4l2src)
    pipeline.add(vidconvsrc)
    pipeline.add(nvvidconvsrc)
    pipeline.add(caps_vidconvsrc)

    pipeline.add(pgie)
    pipeline.add(tiler)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(sink)

    # Link streammux source to queue1
    streammux.link(queue1)

    # Add all conversions
    queue1.link(caps_v4l2src)
    caps_v4l2src.link(queue2)
    queue2.link(vidconvsrc)
    vidconvsrc.link(queue3)
    queue3.link(nvvidconvsrc)
    nvvidconvsrc.link(queue4)
    queue4.link(caps_vidconvsrc)
    caps_vidconvsrc.link(queue5)

    #link queue5 to inference
    queue5.link(pgie)
    pgie.link(queue6)
    queue6.link(tiler)
    tiler.link(queue7)
    queue7.link(nvvidconv)
    nvvidconv.link(queue8)
    queue8.link(nvosd)
    nvosd.link(queue9)
    queue9.link(sink)

deepstream_pipeline_multisrc.zip (3.4 KB)

The v4l2 source can't send the child-added signal. If you want to use this approach, you need to use the uridecodebin source and set the uri like uri=v4l2:///dev/video2.
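
In practice that means swapping v4l2src for uridecodebin inside create_source_bin, something like this (a sketch based on the suggestion above; the device path is an example):

    src_decode_bin = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
    # Note: a v4l2 URI instead of the "device" property.
    src_decode_bin.set_property("uri", "v4l2:///dev/video2")
    src_decode_bin.connect("pad-added", cb_newpad, nbin)
    src_decode_bin.connect("child-added", decodebin_child_added, nbin)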

Thanks. I really have no preference; I just don't know how to write this part. The code I provided is what I understand. So what options do I have if I don't use uridecodebin? I would prefer to make it as simple as possible, but I don't know if the rest of the pipeline is correct. Do you have any similar example I can have a look at? I am really at your mercy.

br,
Magnus

So I changed create_source_bin to:

src_decode_bin = Gst.ElementFactory.make("v4l2src", src_name)

return src_decode_bin

the error I get:

0:00:03.398301662 5434 0xd079600 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/home/aiadmin/Development/deepstream_yolov5_camera/model_b1_gpu0_fp16.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x640x640
1 OUTPUT kFLOAT boxes 25200x4
2 OUTPUT kFLOAT scores 25200x1
3 OUTPUT kFLOAT classes 25200x1

0:00:03.565252077 5434 0xd079600 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /home/aiadmin/Development/deepstream_yolov5_camera/model_b1_gpu0_fp16.engine
0:00:03.596161575 5434 0xd079600 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:config_infer_primary_yoloV5.txt sucessfully
Error: gst-stream-error-quark: NvStreamMux does not suppport raw buffers. Use nvvideoconvert before NvStreamMux to convert to NVMM buffers (5): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvmultistream/gstnvstreammux.c(1209): gst_nvstreammux_sink_event (): /GstPipeline:pipeline0/GstNvStreamMux:Stream-muxer
Exiting app

@DaneLLL Do you have any idea? I think you helped me before too…

Here is the latest and greatest code.

I get this error when I run python3 deepstream_pipeline_test1.py:

Error: gst-stream-error-quark: Internal data stream error. (1): gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:usb-cam-source-cam1:
streaming stopped, reason not-linked (-1)

It works with one camera, but setting num_sources to 2 at the end causes the error.
deepstream_pipeline_test1.zip (2.8 KB)

If you want to use a v4l2 source instead of uridecodebin to implement multiple sources, you need to use a pipeline like the one below. You can refer to deepstream_test_1_usb to create multiple branches before the nvstreammux; see the sketch after the diagram.

v4l2src1->caps_v4l2src1->vidconvsrc1->nvvidconvsrc1->caps_vidconvsrc1->
                                                                        nvstreammux->........
v4l2src2->caps_v4l2src2->vidconvsrc2->nvvidconvsrc2->caps_vidconvsrc2->
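
For illustration, one such branch could be built with a helper along these lines (a minimal sketch loosely based on deepstream_test_1_usb.py; the helper name and the caps strings are assumptions, and the caps must match what your camera actually outputs):

def make_v4l2_branch(pipeline, streammux, device, index):
    # v4l2src -> capsfilter -> videoconvert -> nvvideoconvert -> capsfilter(NVMM)
    src = Gst.ElementFactory.make("v4l2src", "usb-cam-source-%d" % index)
    src.set_property("device", device)
    caps_src = Gst.ElementFactory.make("capsfilter", "v4l2src_caps_%d" % index)
    caps_src.set_property("caps", Gst.Caps.from_string("video/x-raw, framerate=30/1"))
    vidconv = Gst.ElementFactory.make("videoconvert", "convertor_src1_%d" % index)
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor_src2_%d" % index)
    caps_nvmm = Gst.ElementFactory.make("capsfilter", "nvmm_caps_%d" % index)
    caps_nvmm.set_property("caps", Gst.Caps.from_string("video/x-raw(memory:NVMM)"))
    for elem in (src, caps_src, vidconv, nvvidconv, caps_nvmm):
        pipeline.add(elem)
    src.link(caps_src)
    caps_src.link(vidconv)
    vidconv.link(nvvidconv)
    nvvidconv.link(caps_nvmm)
    # Each branch gets its own request pad on the muxer.
    sinkpad = streammux.get_request_pad("sink_%u" % index)
    caps_nvmm.get_static_pad("src").link(sinkpad)

Calling make_v4l2_branch(pipeline, streammux, "/dev/video0", 0) and make_v4l2_branch(pipeline, streammux, "/dev/video2", 1) would then replace the duplicated per-camera code.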

Hi,

This is what I have done in the last code I sent, apart from the fact that I used video0 and video1 as video sources; I need to update it to use video0 and video2.

Regardless, I get the error:

usb-cam-source-cam0: streaming stopped, reason not-linked.

So I don't understand how to build the pipeline for those two sources and link them together. I can't see what I'm doing wrong, even with debug prints along the way…

@miguel.taylor - I saw you had some input, can you support?

deepstream_test_1.py (10.3 KB)
Please refer to the code I attached. If you don't want to use uridecodebin, just modify the file source to a v4l2 source. We suggest you learn some basic GStreamer knowledge; if you don't understand GStreamer at all, debugging code will be a bit difficult.

I will have a look, but the code I attached is as close as I get.
I understand the basics of GStreamer, but I can't tell where I go wrong in the pipeline.
It does not pick up a second camera, so the linking (of course) must somehow be wrong, so that
the two sources do not connect properly; whether by name or srcpad, I don't know.

To me it looks fine, hence my attempt to get help on the matter.
@kesong, you had a previous situation here:

I have a similar issue

Full error:


WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:02.979995637  5932     0x1fcfb150 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/home/aiadmin/Development/deepstream_yolov5_camera/model_b1_gpu0_fp16.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 4
0   INPUT  kFLOAT input           3x640x640       
1   OUTPUT kFLOAT boxes           25200x4         
2   OUTPUT kFLOAT scores          25200x1         
3   OUTPUT kFLOAT classes         25200x1         

0:00:03.137614027  5932     0x1fcfb150 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /home/aiadmin/Development/deepstream_yolov5_camera/model_b1_gpu0_fp16.engine
0:00:03.165414881  5932     0x1fcfb150 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:config_infer_primary_yoloV5.txt sucessfully
0:00:03.166241353  5932     0x1ee118c0 DEBUG                v4l2src gstv4l2src.c:516:gst_v4l2src_negotiate:<usb-cam-source-cam2> caps of src: video/x-raw, format=(string)GRAY8, width=(int)1600, height=(int)1300, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)60/1; video/x-raw, format=(string)GRAY8, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 60/1, 10/1 }; video/x-raw, format=(string)GRAY8, width=(int)800, height=(int)650, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)60/1; video/x-raw, format=(string)GRAY8, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)180/1; video/x-raw, format=(string)GRAY8, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)280/1; video/x-raw, format=(string)GRAY16_LE, width=(int)1600, height=(int)1300, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)40/1; video/x-raw, format=(string)GRAY16_LE, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)90/1; video/x-raw, format=(string)GRAY16_LE, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)180/1; video/x-raw, format=(string)GRAY16_LE, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)280/1
0:00:03.166283241  5932     0x1ebe1b00 DEBUG                v4l2src gstv4l2src.c:516:gst_v4l2src_negotiate:<usb-cam-source-cam0> caps of src: video/x-raw, format=(string)GRAY8, width=(int)1600, height=(int)1300, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)60/1; video/x-raw, format=(string)GRAY8, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 60/1, 10/1 }; video/x-raw, format=(string)GRAY8, width=(int)800, height=(int)650, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)60/1; video/x-raw, format=(string)GRAY8, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)180/1; video/x-raw, format=(string)GRAY8, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)280/1; video/x-raw, format=(string)GRAY16_LE, width=(int)1600, height=(int)1300, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)40/1; video/x-raw, format=(string)GRAY16_LE, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)90/1; video/x-raw, format=(string)GRAY16_LE, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)180/1; video/x-raw, format=(string)GRAY16_LE, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)280/1
0:00:03.166330761  5932     0x1ebe1b00 DEBUG                v4l2src gstv4l2src.c:524:gst_v4l2src_negotiate:<usb-cam-source-cam0> caps of peer: video/x-raw, framerate=(fraction)60/1, width=(int)1280, height=(int)720
0:00:03.166360042  5932     0x1ebe1b00 DEBUG                v4l2src gstv4l2src.c:530:gst_v4l2src_negotiate:<usb-cam-source-cam0> intersect: video/x-raw, framerate=(fraction)60/1, width=(int)1280, height=(int)720, format=(string)GRAY8, pixel-aspect-ratio=(fraction)1/1
0:00:03.166371786  5932     0x1ebe1b00 DEBUG                v4l2src gstv4l2src.c:407:gst_v4l2src_fixate:<usb-cam-source-cam0> fixating caps video/x-raw, framerate=(fraction)60/1, width=(int)1280, height=(int)720, format=(string)GRAY8, pixel-aspect-ratio=(fraction)1/1
0:00:03.166387850  5932     0x1ebe1b00 DEBUG                v4l2src gstv4l2src.c:420:gst_v4l2src_fixate:<usb-cam-source-cam0> Prefered size 1280x720
0:00:03.166400458  5932     0x1ebe1b00 DEBUG                v4l2src gstv4l2src.c:443:gst_v4l2src_fixate:<usb-cam-source-cam0> sorted and normalized caps video/x-raw, framerate=(fraction)60/1, width=(int)1280, height=(int)720, format=(string)GRAY8, pixel-aspect-ratio=(fraction)1/1
0:00:03.167009071  5932     0x1ee118c0 DEBUG                v4l2src gstv4l2src.c:524:gst_v4l2src_negotiate:<usb-cam-source-cam2> caps of peer: video/x-raw, framerate=(fraction)60/1, width=(int)1280, height=(int)720, format=(string){ I420, P010_10LE, NV12, BGRx, RGBA, GRAY8, YUY2, UYVY, YVYU, Y42B, RGB, BGR, UYVP }; video/x-raw, framerate=(fraction)60/1, width=(int)1280, height=(int)720, format=(string){ I420, YV12, YUY2, UYVY, AYUV, VUYA, RGBx, BGRx, xRGB, xBGR, RGBA, BGRA, ARGB, ABGR, RGB, BGR, Y41B, Y42B, YVYU, Y444, v210, v216, Y210, Y410, NV12, NV21, GRAY8, GRAY16_BE, GRAY16_LE, v308, RGB16, BGR16, RGB15, BGR15, UYVP, A420, RGB8P, YUV9, YVU9, IYU1, ARGB64, AYUV64, r210, I420_10BE, I420_10LE, I422_10BE, I422_10LE, Y444_10BE, Y444_10LE, GBR, GBR_10BE, GBR_10LE, NV16, NV24, NV12_64Z32, A420_10BE, A420_10LE, A422_10BE, A422_10LE, A444_10BE, A444_10LE, NV61, P010_10BE, P010_10LE, IYU2, VYUY, GBRA, GBRA_10BE, GBRA_10LE, BGR10A2_LE, GBR_12BE, GBR_12LE, GBRA_12BE, GBRA_12LE, I420_12BE, I420_12LE, I422_12BE, I422_12LE, Y444_12BE, Y444_12LE, GRAY10_LE32, NV12_10LE32, NV16_10LE32, NV12_10LE40 }
0:00:03.167055056  5932     0x1ee118c0 DEBUG                v4l2src gstv4l2src.c:530:gst_v4l2src_negotiate:<usb-cam-source-cam2> intersect: video/x-raw, framerate=(fraction)60/1, width=(int)1280, height=(int)720, format=(string)GRAY8, pixel-aspect-ratio=(fraction)1/1
0:00:03.167064816  5932     0x1ee118c0 DEBUG                v4l2src gstv4l2src.c:407:gst_v4l2src_fixate:<usb-cam-source-cam2> fixating caps video/x-raw, framerate=(fraction)60/1, width=(int)1280, height=(int)720, format=(string)GRAY8, pixel-aspect-ratio=(fraction)1/1
0:00:03.167079216  5932     0x1ee118c0 DEBUG                v4l2src gstv4l2src.c:420:gst_v4l2src_fixate:<usb-cam-source-cam2> Prefered size 1280x720
0:00:03.167090480  5932     0x1ee118c0 DEBUG                v4l2src gstv4l2src.c:443:gst_v4l2src_fixate:<usb-cam-source-cam2> sorted and normalized caps video/x-raw, framerate=(fraction)60/1, width=(int)1280, height=(int)720, format=(string)GRAY8, pixel-aspect-ratio=(fraction)1/1
0:00:03.168937280  5932     0x1ebe1b00 DEBUG                v4l2src gstv4l2src.c:501:gst_v4l2src_fixate:<usb-cam-source-cam0> fixated caps video/x-raw, framerate=(fraction)60/1, width=(int)1280, height=(int)720, format=(string)GRAY8, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
0:00:03.168956833  5932     0x1ebe1b00 DEBUG                v4l2src gstv4l2src.c:554:gst_v4l2src_negotiate:<usb-cam-source-cam0> fixated to: video/x-raw, framerate=(fraction)60/1, width=(int)1280, height=(int)720, format=(string)GRAY8, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
0:00:03.169042657  5932     0x1ee118c0 DEBUG                v4l2src gstv4l2src.c:501:gst_v4l2src_fixate:<usb-cam-source-cam2> fixated caps video/x-raw, framerate=(fraction)60/1, width=(int)1280, height=(int)720, format=(string)GRAY8, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
0:00:03.169060386  5932     0x1ee118c0 DEBUG                v4l2src gstv4l2src.c:554:gst_v4l2src_negotiate:<usb-cam-source-cam2> fixated to: video/x-raw, framerate=(fraction)60/1, width=(int)1280, height=(int)720, format=(string)GRAY8, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
0:00:03.284478209  5932     0x1ebe1b00 DEBUG                v4l2src gstv4l2src.c:923:gst_v4l2src_create:<usb-cam-source-cam0> ts: 6:08:09.947665000 now 6:08:09.959769124 delay 0:00:00.012104124
0:00:03.284549250  5932     0x1ebe1b00 INFO                 v4l2src gstv4l2src.c:960:gst_v4l2src_create:<usb-cam-source-cam0> sync to 0:00:00.016666666 out ts 0:00:00.106366878
Error: gst-stream-error-quark: Internal data stream error. (1): gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:usb-cam-source-cam0:
streaming stopped, reason not-linked (-1)
0:00:03.291762273  5932     0x1ee118c0 DEBUG                v4l2src gstv4l2src.c:923:gst_v4l2src_create:<usb-cam-source-cam2> ts: 6:08:09.954965000 now 6:08:09.967069125 delay 0:00:00.012104125
0:00:03.291824130  5932     0x1ee118c0 INFO                 v4l2src gstv4l2src.c:960:gst_v4l2src_create:<usb-cam-source-cam2> sync to 0:00:00.016666666 out ts 0:00:00.113666718
0:00:03.308250324  5932     0x1ee118c0 DEBUG                v4l2src gstv4l2src.c:923:gst_v4l2src_create:<usb-cam-source-cam2> ts: 6:08:09.971632000 now 6:08:09.983561175 delay 0:00:00.011929175
0:00:03.308293844  5932     0x1ee118c0 INFO                 v4l2src gstv4l2src.c:960:gst_v4l2src_create:<usb-cam-source-cam2> sync to 0:00:00.033333332 out ts 0:00:00.130333398

I tried this with deepstream_test_1.py and I get it to run for a short time, with
no image (fakesink), but it hangs after 4.366 seconds.
It has a lot of frame drops.
This is what I believe I have done:

# v4l2src -> caps_v4l2src -> vidconvsrc -> nvvidconvsrc -> caps_vidconvsrc
#                                                                                 -> queue1 -> streammux -> pgie -> nvvidconv -> nvosd -> sink
# v4l2src2 -> caps_v4l2src2 -> vidconvsrc3 -> nvvidconvsrc2 -> caps_vidconvsrc2


    # Source elements for reading from the cameras
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("v4l2src", "usb-cam-source")
    source2 = Gst.ElementFactory.make("v4l2src", "usb-cam-source2")
    if not source or not source2:
        sys.stderr.write(" Unable to create Source \n")
   
    caps_v4l2src = Gst.ElementFactory.make("capsfilter", "v4l2src_caps")
    caps_v4l2src2 = Gst.ElementFactory.make("capsfilter", "v4l2src_caps2")
    if not caps_v4l2src or not caps_v4l2src2:
        sys.stderr.write(" Unable to create caps_v4l2src parser \n")

    # videoconvert to make sure a superset of raw formats are supported
    vidconvsrc = Gst.ElementFactory.make("videoconvert", "convertor_src1_1")   
    nvvidconvsrc = Gst.ElementFactory.make("nvvideoconvert", "convertor_src2_1")
    caps_vidconvsrc = Gst.ElementFactory.make("capsfilter", "nvmm_caps1")
    
    vidconvsrc2 = Gst.ElementFactory.make("videoconvert", "convertor_src1_2")   
    nvvidconvsrc2 = Gst.ElementFactory.make("nvvideoconvert", "convertor_src2_2")
    caps_vidconvsrc2 = Gst.ElementFactory.make("capsfilter", "nvmm_caps2")

And

    print("Playing file ")
    source.set_property('device', '/dev/video0')
    source2.set_property('device', '/dev/video2')
    if os.environ.get('USE_NEW_NVSTREAMMUX') != 'yes': 
        streammux.set_property('width', 1280)
        streammux.set_property('height', 720)
        streammux.set_property('batched-push-timeout', 4000000)
    
    streammux.set_property('batch-size', 2)
    queue1=Gst.ElementFactory.make("queue","queue1")
    pipeline.add(queue1)
    pgie.set_property('config-file-path', "config_infer_primary_yoloV5.txt")

    print("Adding elements to Pipeline \n")
    pipeline.add(source)
    pipeline.add(source2)
    pipeline.add(caps_v4l2src)
    pipeline.add(caps_v4l2src2)
    pipeline.add(vidconvsrc)
    pipeline.add(vidconvsrc2)
    pipeline.add(nvvidconvsrc)
    pipeline.add(nvvidconvsrc2)
    pipeline.add(caps_vidconvsrc)
    pipeline.add(caps_vidconvsrc2)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(sink)

    # we link the elements together
    # v4l2src -> caps_v4l2src -> vidconvsrc -> nvvidconvsrc -> caps_vidconvsrc
    #                                                                                 -> queue1 -> streammux -> pgie -> nvvidconv -> nvosd -> sink
    # v4l2src2 -> caps_v4l2src2 -> vidconvsrc3 -> nvvidconvsrc2 -> caps_vidconvsrc2
    print("Linking elements in the Pipeline \n")
    source.link(caps_v4l2src)
    caps_v4l2src.link(vidconvsrc)
    vidconvsrc.link(nvvidconvsrc)    
    nvvidconvsrc.link(caps_vidconvsrc)

    source2.link(caps_v4l2src2)
    caps_v4l2src2.link(vidconvsrc2)
    vidconvsrc2.link(nvvidconvsrc2)
    
    nvvidconvsrc2.link(caps_vidconvsrc2)
    
    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = caps_vidconvsrc.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of decoder \n")
    srcpad.link(sinkpad)
    sinkpad2 = streammux.get_request_pad("sink_1")
    if not sinkpad2:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad2 = caps_vidconvsrc2.get_static_pad("src")
    if not srcpad2:
        sys.stderr.write(" Unable to get source pad of decoder \n")
    srcpad2.link(sinkpad2)

    streammux.link(queue1)
    queue1.link(pgie)
The resulting log output:

0:00:03.483315766  8449     0x2c742860 DEBUG                v4l2src gstv4l2src.c:923:gst_v4l2src_create:<usb-cam-source2> ts: 7:14:55.696910000 now 7:14:55.778013556 delay 0:00:00.081103556
0:00:03.483377430  8449     0x2c742860 INFO                 v4l2src gstv4l2src.c:960:gst_v4l2src_create:<usb-cam-source2> sync to 0:00:00.183333326 out ts 0:00:00.232961312
0:00:03.483410103  8449     0x2c742860 WARN                 v4l2src gstv4l2src.c:978:gst_v4l2src_create:<usb-cam-source2> lost frames detected: count = 1 - ts: 0:00:00.232961312
Frame Number=5 Number of Objects=0 Vehicle_count=0 Person_count=0
0:00:03.490650650  8449     0x2c742460 DEBUG                v4l2src gstv4l2src.c:923:gst_v4l2src_create:<usb-cam-source> ts: 7:14:55.712443000 now 7:14:55.785361657 delay 0:00:00.072918657
0:00:03.490700634  8449     0x2c742460 INFO                 v4l2src gstv4l2src.c:960:gst_v4l2src_create:<usb-cam-source> sync to 0:00:00.183333326 out ts 0:00:00.248494760
0:00:03.490726779  8449     0x2c742460 WARN                 v4l2src gstv4l2src.c:978:gst_v4l2src_create:<usb-cam-source> lost frames detected: count = 2 - ts: 0:00:00.248494760
Frame Number=6 Number of Objects=0 Vehicle_count=0 Person_count=0
0:00:03.502688234  8449     0x2c742460 DEBUG                v4l2src gstv4l2src.c:923:gst_v4l2src_create:<usb-cam-source> ts: 7:14:55.729123000 now 7:14:55.797403080 delay 0:00:00.068280080
0:00:03.502736426  8449     0x2c742460 INFO                 v4l2src gstv4l2src.c:960:gst_v4l2src_create:<usb-cam-source> sync to 0:00:00.199999992 out ts 0:00:00.265174728
Frame Number=6 Number of Objects=0 Vehicle_count=0 Person_count=0
0:00:03.509032933  8449     0x2c742860 DEBUG                v4l2src gstv4l2src.c:923:gst_v4l2src_create:<usb-cam-source2> ts: 7:14:55.730240000 now 7:14:55.803731395 delay 0:00:00.073491395
0:00:03.509091941  8449     0x2c742860 INFO                 v4l2src gstv4l2src.c:960:gst_v4l2src_create:<usb-cam-source2> sync to 0:00:00.199999992 out ts 0:00:00.266291472
0:00:03.509121253  8449     0x2c742860 WARN                 v4l2src gstv4l2src.c:978:gst_v4l2src_create:<usb-cam-source2> lost frames detected: count = 1 - ts: 0:00:00.266291472

After rewriting the whole thing with one source I get no errors and the stream works like a charm. I also made a second pipeline as you described, and the mixing just does not work. Please advise… @anyone

Hi @magnus.gabell

I tested in my environment and encountered several issues when trying to use v4l2src with nvvideoconvert. What fixed my issues was switching to nvvidconv instead. Here is the pipeline I used:

gst-launch-1.0 \
v4l2src device=/dev/video2 ! nvvidconv ! "video/x-raw(memory:NVMM),format=NV12" ! queue ! nvstreammux0.sink_0 \
v4l2src device=/dev/video4 ! nvvidconv ! "video/x-raw(memory:NVMM),format=NV12" ! queue ! nvstreammux0.sink_1 \
nvstreammux name=nvstreammux0 batch-size=2 batched-push-timeout=40000 width=1920 height=1080 live-source=TRUE ! queue ! \
nvvideoconvert ! queue ! \
nvinfer name=nvinfer1 config-file-path="/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt" ! queue ! \
perf ! fakesink name=fakesink0 sync=false
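
If it helps, the same pipeline can also be driven from Python with Gst.parse_launch rather than building it element by element (a sketch under the same assumptions; the device paths and the config path are placeholders to adapt):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)
# One string, same structure as the gst-launch command above.
pipeline = Gst.parse_launch(
    "v4l2src device=/dev/video2 ! nvvidconv ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_0 "
    "v4l2src device=/dev/video4 ! nvvidconv ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_1 "
    "nvstreammux name=mux batch-size=2 batched-push-timeout=40000 width=1920 height=1080 live-source=1 ! "
    "queue ! nvvideoconvert ! queue ! "
    "nvinfer config-file-path=dstest1_pgie_config.txt ! fakesink sync=false")
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()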

I hope this helps

In your code, you didn't set the property for the capsfilter, as deepstream_test_1_usb.py does. Please don't just copy the source code; you need to set the values corresponding to your camera (see the sketch below).
1. Please refer to our FAQ to query the supported formats and capabilities of the camera.
2. You can refer to my attached code to add another branch for the source.
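
Concretely, deepstream_test_1_usb.py sets the caps like this (the framerate here is an example; use whatever v4l2-ctl --list-formats-ext -d /dev/video0 reports for your camera):

    caps_v4l2src.set_property('caps',
        Gst.Caps.from_string("video/x-raw, framerate=30/1"))
    caps_vidconvsrc.set_property('caps',
        Gst.Caps.from_string("video/x-raw(memory:NVMM)"))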

Hi @miguel.taylor, thanks for sharing. I have finally been able to get DeepStream to work.
I don't know exactly what I added to make it work; it was a lot of trial and error. I will share the code here for future reference. I have seen that some cameras do not like nvvideoconvert and that you need to convert the input before you pass it to nvvideoconvert. I appreciate your input. Have you been able to run it in Python code and not with gst-launch? Secondly, I think that creating a decode bin is a better option…

Hi,

I do not copy the code, but sometimes I make mistakes; I apologize…

I will check the code again for the capsfilter because I have some frame drops now and then in the current solution. I do not really need to check the FAQ since I have made one camera work, so a second should work if I get the pipeline correct (which I have now).

What I would like to know, and perhaps you can answer, is whether the decode bin is faster processing-wise than writing all the sources one after another. I need high performance…

For all other people who are looking for an example of two (or more, if modified) v4l2src cameras, I provide the code here.

You need the yolo file and the code from … to run yolo inference on the video, but the pipeline is the same.

How to do this nicely with a decode_bin instead of stacking all the sources, I don't know yet, but it is still a crude example that can take some people further.

Note that the pgie_src_pad_buffer_probe contains a frame_meta.frame_num, and with the camera there is no frame number to fetch.

Secondly, for some reason it does not print the FPS for both sources, only one…
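
For per-source counters, one option is to key the stats on frame_meta.pad_index inside the probe, roughly like this (a sketch assuming the standard pyds batch-meta walk; pad_index identifies which streammux sink pad, i.e. which camera, the frame came from):

import pyds

def pgie_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Report each source separately instead of one global counter.
        print("source %d frame %d" % (frame_meta.pad_index, frame_meta.frame_num))
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK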

tutorial_cam_source2.zip (2.6 KB)


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.