Deepstream multistream with USB webcam python?

I can already run the Python multistream sample with 2 webcams!
Can I run it with Python 3 code?
How can I run multistream (like deepstream-imagedata-multistream) with a USB webcam source (like deepstream-test1-usbcam) at the same time???
Thank you

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Nano/Xavier
• DeepStream Version 5.0.1
• JetPack Version (valid for Jetson only) 4.4
• TensorRT Version 7.1.3
• Issue Type( questions, new requirements, bugs) questions

What do you mean by USB webcam? Is it a USB camera connected to a USB port of the Jetson device, or a web camera connected to the Ethernet port?

Do you want to use a web camera as the source for deepstream-imagedata-multistream? If so, you can change the source to a web camera by referring to deepstream-test1-usbcam.

So, what issue are you facing? Can you be more specific?

  • I tried what you said and completed this code (below)! Is this code good? It feels a little slow and not optimized!
    Can you check it?

  • Also, I cannot use “usb_cam_source.connect(“child-added”, decodebin_child_added, nbin)” in my code either!

ERR: “TypeError: <gi.GstV4l2Src object at 0x7f8a016ab0 (GstV4l2Src at 0x361ee270)>: unknown signal name: child-added”

  • Also, can I get the FPS of the system?

  • Note: I ran this code with a YOLOv4 engine, batch size = 2, and I set the tiler’s width and height to 480 and 320.

Thank you for your reply.

def create_source_bin(index, uri):
    """Create a source bin wrapping a V4L2 USB camera that outputs NVMM buffers."""
    print("Creating source bin")
    bin_name = "source-bin-%02d" % index
    print(bin_name)
    nbin = Gst.Bin.new(bin_name)
    if not nbin:
        sys.stderr.write(" Unable to create source bin \n")
        return None

    usb_cam_source = Gst.ElementFactory.make("v4l2src", "usb-cam-source")
    usb_cam_source.set_property("device", uri)
    # v4l2src exposes a static src pad, so no "pad-added" callback is needed.
    # "child-added" is only emitted by bins such as uridecodebin; v4l2src is a
    # plain element, hence the "unknown signal name: child-added" TypeError.
    caps_v4l2src = Gst.ElementFactory.make("capsfilter", "v4l2src_caps")
    if not caps_v4l2src:
        sys.stderr.write(" Unable to create v4l2src capsfilter \n")
    vidconvsrc = Gst.ElementFactory.make("videoconvert", "convertor_src1")
    if not vidconvsrc:
        sys.stderr.write(" Unable to create videoconvert \n")
    nvvidconvsrc = Gst.ElementFactory.make("nvvideoconvert", "convertor_src2")
    if not nvvidconvsrc:
        sys.stderr.write(" Unable to create nvvideoconvert \n")
    caps_vidconvsrc = Gst.ElementFactory.make("capsfilter", "nvmm_caps")
    if not caps_vidconvsrc:
        sys.stderr.write(" Unable to create capsfilter \n")

    caps_v4l2src.set_property('caps', Gst.Caps.from_string("video/x-raw, framerate=30/1"))
    caps_vidconvsrc.set_property('caps', Gst.Caps.from_string("video/x-raw(memory:NVMM)"))

    print('adding elements to source bin')
    Gst.Bin.add(nbin, usb_cam_source)
    Gst.Bin.add(nbin, caps_v4l2src)
    Gst.Bin.add(nbin, vidconvsrc)
    Gst.Bin.add(nbin, nvvidconvsrc)
    Gst.Bin.add(nbin, caps_vidconvsrc)

    print('linking elements in source bin')
    # v4l2src -> capsfilter -> videoconvert -> nvvideoconvert -> capsfilter(NVMM)
    usb_cam_source.link(caps_v4l2src)
    caps_v4l2src.link(vidconvsrc)
    vidconvsrc.link(nvvidconvsrc)
    nvvidconvsrc.link(caps_vidconvsrc)

    # Expose the final capsfilter's src pad as the bin's ghost pad.
    pad = caps_vidconvsrc.get_static_pad("src")
    ghostpad = Gst.GhostPad.new("src", pad)
    bin_pad = nbin.add_pad(ghostpad)
    if not bin_pad:
        sys.stderr.write(" Failed to add ghost pad in source bin \n")
        return None
    return nbin
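
As a side note, the capture path in this bin can be sanity-checked outside DeepStream with an equivalent gst-launch-1.0 pipeline. The device path /dev/video0 and the fakesink terminator below are assumptions for a quick test, not part of the app:

```shell
# Mirror of the source bin's element chain, ending in fakesink for testing.
gst-launch-1.0 v4l2src device=/dev/video0 \
  ! 'video/x-raw, framerate=30/1' \
  ! videoconvert \
  ! nvvideoconvert \
  ! 'video/x-raw(memory:NVMM)' \
  ! fakesink
```

If this command negotiates and runs, the slowness is more likely in the inference/tiler stages than in the camera source.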

Please refer to [DS5.0GA_Jetson_GPU_Plugin] Measure of the FPS of pipeline in DeepStream SDK FAQ - #9 by mchi to measure the FPS.
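
If you only need a rough number, you can also count buffers yourself from a pad probe. The counter below is a minimal pure-Python sketch (the class name and the probe hookup are my own, not part of DeepStream):

```python
import time


class FPSCounter:
    """Rough frames-per-second counter; call tick() once per buffer."""

    def __init__(self):
        self.start = None   # timestamp of the first frame
        self.frames = 0     # total frames seen so far

    def tick(self, now=None):
        # `now` can be injected for testing; defaults to the wall clock.
        now = time.time() if now is None else now
        if self.start is None:
            self.start = now
        self.frames += 1

    def fps(self, now=None):
        now = time.time() if now is None else now
        if self.start is None or now == self.start:
            return 0.0
        # Intervals between frames = frames - 1, divided by elapsed time.
        return (self.frames - 1) / (now - self.start)
```

You would call `counter.tick()` from a buffer probe on, e.g., the tiler's sink pad (added with `add_probe(Gst.PadProbeType.BUFFER, ...)`) and print `counter.fps()` periodically.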

What is your platform? Nano or Xavier?
Could you measure the trtexec performance of your TensorRT engine by referring to GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream?
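
For reference, a trtexec throughput check on the serialized engine typically looks like the sketch below. The engine filename is an assumption; on JetPack, trtexec usually lives under /usr/src/tensorrt/bin. If your engine was built with explicit batch dimensions, use --shapes instead of --batch:

```shell
# Measure raw inference throughput of the engine, matching the app's batch size of 2.
/usr/src/tensorrt/bin/trtexec --loadEngine=yolov4_b2.engine --batch=2
```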