How to use multiple cameras in the DeepStream Python bindings?

• Hardware Platform: NVIDIA AGX
• DeepStream Version: DeepStream 6.0
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc): Detectnet_v2
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)

I referred to the test1-cam and test3 examples in the DeepStream Python apps on GitHub. I want to run multiple USB cameras in DeepStream.

This is my code:

code.py (13.8 KB)

It doesn’t work. Could the problem be in how I link the elements created with Gst.ElementFactory.make, or could you send me sample code for using multiple USB cameras as a reference?
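For reference, a common way to wire several USB cameras into one nvstreammux is shown below. This is a minimal sketch modeled on the deepstream-test1-usbcam and test3 samples; the device paths, resolution, and element names are assumptions, not the poster's actual code.

```python
# Sketch: N USB cameras feeding a single nvstreammux, modeled on the
# deepstream-test1-usbcam / test3 samples. Device paths, resolution,
# and element names are assumptions -- adapt them to your setup.
import sys

try:
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst
except ImportError:  # GStreamer bindings unavailable; helpers below still work
    Gst = None


def mux_pad_name(index):
    """nvstreammux exposes request pads on a "sink_%u" template: sink_0, sink_1, ..."""
    return f"sink_{index}"


def make(kind, name):
    elem = Gst.ElementFactory.make(kind, name)
    if elem is None:
        sys.exit(f"Unable to create {kind} ({name}) -- is DeepStream installed?")
    return elem


def add_usb_source(pipeline, device, index):
    """v4l2src -> videoconvert -> nvvideoconvert for one camera; returns the tail element."""
    src = make("v4l2src", f"usb-cam-{index}")
    src.set_property("device", device)
    conv = make("videoconvert", f"conv-{index}")
    nvconv = make("nvvideoconvert", f"nvconv-{index}")
    for e in (src, conv, nvconv):
        pipeline.add(e)
    src.link(conv)
    conv.link(nvconv)
    return nvconv


def build_pipeline(devices):
    Gst.init(None)
    pipeline = Gst.Pipeline()
    streammux = make("nvstreammux", "stream-muxer")
    streammux.set_property("batch-size", len(devices))  # one batch slot per camera
    streammux.set_property("width", 1280)
    streammux.set_property("height", 720)
    streammux.set_property("batched-push-timeout", 4000000)
    pipeline.add(streammux)
    for i, dev in enumerate(devices):
        tail = add_usb_source(pipeline, dev, i)
        sinkpad = streammux.get_request_pad(mux_pad_name(i))
        tail.get_static_pad("src").link(sinkpad)
    # ... continue linking streammux -> pgie -> tiler -> sink as in the test3 sample ...
    return pipeline


if __name__ == "__main__" and Gst is not None:
    build_pipeline(["/dev/video0", "/dev/video1"])
```

The key detail is that each camera must be linked to its own request pad on nvstreammux (sink_0, sink_1, …) and batch-size must match the number of sources.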

Does a single-camera setup work?

It works.

I see. Then the problem might be near the streammux. Can you share an error log?

CMD.txt (2.8 KB)

There is no error log; it just gets stuck.

Yeah, it seems hard to say anything from those logs.
How about trying to run the pipeline without your inference plugin?
What I mean is, instead of

    streammux.link(queue1)
    queue1.link(pgie)
    pgie.link(queue2)
    queue2.link(tiler)
    tiler.link(queue3)
    queue3.link(nvvidconv)
    nvvidconv.link(queue4)
    queue4.link(nvosd)

it becomes

    streammux.link(queue1)
    queue1.link(tiler)
    tiler.link(queue3)
    queue3.link(nvvidconv)
    nvvidconv.link(queue4)
    queue4.link(nvosd)

Still stuck.

CMD2.txt (3.9 KB)

Can you export GST_DEBUG=v4l2src:6 to check whether the camera outputs video data?

Can you explain in detail how to use “export GST_DEBUG=v4l2src:6”?
I tried running the command ‘deepstream-app -c multiple-camera-config.txt’ to test multiple cameras, and it works.
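Since deepstream-app handles multiple cameras via its config file, comparing that config against your Python pipeline can help isolate the difference. A multi-camera deepstream-app config usually has one [sourceN] group per camera, shaped roughly like this (a sketch; the device nodes and resolutions here are assumptions, and only the camera-related groups are shown):

```ini
# Sketch of the camera-related groups of a deepstream-app config for two
# USB cameras. Values are placeholders -- match them to your devices.
[source0]
enable=1
type=1                   # 1 = V4L2 camera
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0   # /dev/video0

[source1]
enable=1
type=1
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=1   # /dev/video1

[streammux]
batch-size=2             # must match the number of enabled sources
```

If this config works while the Python pipeline hangs, the caps, batch-size, or pad wiring in the Python code are the most likely suspects.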

Hi, how is it going?

For more GStreamer information, please check: GStreamer

In the terminal, before running python code.py:

$ export GST_DEBUG=v4l2src:6
$ python code.py &> logs.txt

This will dump all output (stdout and stderr) to the logs.txt file.
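If editing the script is easier than remembering the shell export, the same effect can be had from Python. This is a sketch under one assumption: the environment variables must be set before GStreamer is initialized (i.e. before the gi/Gst import at the top of your script), because they are read at Gst.init time.

```python
# Programmatic alternative to `export GST_DEBUG=...`: set the variables
# from Python itself, BEFORE importing gi / initializing Gst.
import os

os.environ["GST_DEBUG"] = "v4l2src:6"      # verbose logging for v4l2src only
os.environ["GST_DEBUG_FILE"] = "logs.txt"  # write debug output to a file

# ... then: import gi; gi.require_version("Gst", "1.0"); from gi.repository import Gst
```

With GST_DEBUG_FILE set, the debug output goes to logs.txt instead of stderr, so the redirection in the shell command becomes optional.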

I’d also suggest commenting out your buffer probe. That buffer probe is blocking, meaning buffers won’t flow downstream until the callback returns:

replace

    if not tiler_src_pad:
        sys.stderr.write(" Unable to get src pad \n")
    else:
        tiler_src_pad.add_probe(Gst.PadProbeType.BUFFER, tiler_src_pad_buffer_probe, 0)

with

#    if not tiler_src_pad:
#        sys.stderr.write(" Unable to get src pad \n")
#    else:
#        tiler_src_pad.add_probe(Gst.PadProbeType.BUFFER, tiler_src_pad_buffer_probe, 0)

If your pipeline now runs, it means the problem is in the buffer probe (in which case I’d add debug messages to pinpoint where it hangs).

If your pipeline still won’t run after commenting out the probe, the logs should have more info. You could even increase verbosity for other components, e.g. export GST_DEBUG="4,v4l2src:6" (meaning everything at level 4, and v4l2src at level 6).
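Once you re-enable the probe, the safe shape for it is to do the minimum per buffer and return Gst.PadProbeReturn.OK immediately. The sketch below shows that shape; the counter and names are illustrative, not the poster's original probe.

```python
# Sketch of a fast, non-blocking buffer probe: it returns
# Gst.PadProbeReturn.OK at once so buffers keep flowing. Heavy work
# (drawing, file I/O) belongs outside the callback.
try:
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst
    PROBE_OK = Gst.PadProbeReturn.OK
except ImportError:  # bindings unavailable; stand-in keeps the logic testable
    PROBE_OK = "ok"

frame_count = {"n": 0}  # mutable holder so the callback can update it


def tiler_src_pad_buffer_probe(pad, info, user_data):
    """Do the minimum per buffer and return immediately -- no blocking work."""
    if info.get_buffer() is not None:
        frame_count["n"] += 1
    return PROBE_OK
```

If the per-buffer work is genuinely heavy, a common pattern is to push data onto a queue in the probe and process it in a separate thread, so the callback still returns promptly.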

What is the status? Did you fix the issue?

Still stuck; we got this result:

Using winsys: x11
0:00:01.148066362 212054 0x1d8f360 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:03.789772294 212054 0x1d8f360 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1888> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/REID_PRO/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:03.954175650 212054 0x1d8f360 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1993> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/REID_PRO/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:03.954252099 212054 0x1d8f360 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
0:00:58.330625406 212054 0x1d8f360 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1941> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b3_gpu0_int8.engine
0:00:58.523453945 212054 0x1d8f360 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest3_pgie_config.txt sucessfully

Did you try to debug as @pwoolvett mentioned?

Load new model:dstest3_pgie_config.txt sucessfully

Yes.

Can you try the method below?

There has been no update from you for a period, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks