How can DeepStream support multiple video inputs from multiple USB cameras and run analysis on them?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) – Jetson TX2
• DeepStream Version – DeepStream 5.1
• JetPack Version (valid for Jetson only) – JetPack 4.5.1
• TensorRT Version – TensorRT 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs) – Questions
• Requirement details – I want to confirm how DeepStream can read data from multiple USB cameras. What is the working logic, and what are the steps?

Hi,
The sample config file for a single USB camera is:

/opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app/source1_usb_dec_infer_resnet_int8.txt

You may try to run it and then modify the config file for the multiple-USB-camera case. Here is a config file for 4 USB cameras:
DeepStream4 Jetson nano multiple webcams issue - #12 by DaneLLL
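In outline, extending the single-camera config means duplicating the `[sourceN]` group once per camera and raising the streammux batch size to match. A minimal sketch for two cameras (the device nodes `0` and `1`, resolutions, and frame rates below are assumptions; adjust them to your actual cameras):

```
[source0]
enable=1
# type=1 selects CameraV4L2 (USB camera)
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
# 0 means /dev/video0
camera-v4l2-dev-node=0

[source1]
enable=1
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
# 1 means /dev/video1
camera-v4l2-dev-node=1

[streammux]
# batch-size should match the number of enabled sources
batch-size=2
```

With more cameras, you would also set `[tiled-display]` rows/columns so all streams are visible in the output window.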

aiu@aiu-desktop:/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test1-usbcam$ python3 deepstream_test_1_usb.py /dev/video1
Creating Pipeline
Creating Source
Creating Video Converter
Creating EGLSink
Playing cam /dev/video1
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline
Using winsys: x11
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test1-usbcam/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:01.893543150 19708 0x12261960 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test1-usbcam/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:01.893600142 19708 0x12261960 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test1-usbcam/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:01.893624910 19708 0x12261960 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine opened error
0:00:17.860437436 19708 0x12261960 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1744> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:17.891565500 19708 0x12261960 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Frame Number=0 Number of Objects=0 Vehicle_count=0 Person_count=0
0:00:18.054275099 19708 0x122588f0 WARN nvinfer gstnvinfer.cpp:1984:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:18.054330715 19708 0x122588f0 WARN nvinfer gstnvinfer.cpp:1984:gst_nvinfer_output_loop: error: streaming stopped, reason error (-5)
Error: gst-stream-error-quark: Internal data stream error. (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1984): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason error (-5)

After trying the way you suggested, I got this result. Can you explain it? Thanks.

Hi,
Please make sure you have executed export DISPLAY=:0 (or :1), and that you can see a video preview when running this command:

$ gst-launch-1.0 videotestsrc ! nvvidconv ! nvegltransform ! nveglglessink

We connected a display directly to the TX2 and operated on it, and the sample test now runs successfully. A new finding: with the same config, YUY2 mode is very smooth, but it is very choppy when running in MJPG mode. Do you know the reason?

Hi,
In deepstream-test1-usbcam, the source is linked as:

v4l2src ! video/x-raw,framerate=30/1 ! videoconvert ! nvvideoconvert ! video/x-raw(memory:NVMM) ! nvstreammux ! ...

For an MJPEG source, you need to use nvv4l2decoder, and the source chain has to be modified to:

v4l2src ! image/jpeg ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvstreammux ! ...
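The difference between the two source chains above can be sketched as a small Python helper that builds the launch string for either case. The function name and default caps values are my own for illustration, not part of the sample app; adjust device and caps to your camera:

```python
# Sketch: build the GStreamer source chain for a USB camera, switching
# between raw (e.g. YUY2) capture and MJPEG capture. Element order
# follows the two pipelines quoted above.

def usb_source_chain(device="/dev/video0", mjpeg=False,
                     width=640, height=480, fps=30):
    if mjpeg:
        # MJPEG: parse the JPEG stream and decode it on the hardware
        # decoder (nvv4l2decoder mjpeg=1) before nvstreammux.
        return (f"v4l2src device={device} ! "
                f"image/jpeg,width={width},height={height},framerate={fps}/1 ! "
                f"jpegparse ! nvv4l2decoder mjpeg=1")
    # Raw capture: convert on CPU, then move frames into NVMM memory
    # so nvstreammux can batch them.
    return (f"v4l2src device={device} ! "
            f"video/x-raw,framerate={fps}/1 ! "
            f"videoconvert ! nvvideoconvert ! video/x-raw(memory:NVMM)")

if __name__ == "__main__":
    print(usb_source_chain())
    print(usb_source_chain(device="/dev/video2", mjpeg=True,
                           width=480, height=320, fps=25))
```

Either string would then be followed by `! nvstreammux ! ...` when assembled into the full pipeline.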

aiu@aiu-desktop:~$ gst-launch-1.0 v4l2src device=/dev/video2 io-mode=2 ! 'image/jpeg,framerate=25/1,width=480,height=320' ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvstreammux ! nvjpegdec ! video/x-raw ! nvvidconv ! 'video/x-raw(memory:NVMM)' ! nvoverlaysink
WARNING: erroneous pipeline: could not link nvv4l2decoder0 to nvstreammux0

I got this error.

Hi,
Please check if you can run this and see video preview:

$ gst-launch-1.0 v4l2src device=/dev/video2 io-mode=2 ! image/jpeg,framerate=25/1,width=480,height=320 ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvoverlaysink

aiu@aiu-desktop:~$ gst-launch-1.0 v4l2src device=/dev/video2 io-mode=2 ! image/jpeg,framerate=25/1,width=480,height=320 ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvoverlaysink
Setting pipeline to PAUSED …
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
NvMMLiteOpen : Block : BlockType = 277
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 277
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
Execution ended after 0:00:00.215651610
Setting pipeline to PAUSED …
Setting pipeline to READY …
Setting pipeline to NULL …
Freeing pipeline …
I'm facing this error.

Hi,
It looks like the source does not support 480x320p25. Please check and set the mode correctly.

Sorry, I don't understand. What do you mean by "the source"? And where do I set the correct mode? I think we followed your suggestion before editing test1.py.

Hi,
Before customizing deepstream-test1-usbcam, we suggest getting a working gst-launch-1.0 command for MJPEG decoding, so that you can do the customization based on that command. For launching USB cameras, please refer to:
Jetson Nano FAQ
[Q: I have a USB camera. How can I launch it on Jetson Nano?]

You would need to configure the exact format/width/height/framerate of the v4l2 source (the USB camera).
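One way to see which modes a camera actually advertises is `v4l2-ctl -d /dev/videoN --list-formats-ext` (from the `v4l-utils` package). As a sketch, the listing can be parsed to check whether a given format/size/rate combination exists before putting it in the pipeline caps. The function and the sample listing below are illustrative, not taken from the thread:

```python
import re

# Illustrative --list-formats-ext output; a real camera may differ.
SAMPLE = """\
    [0]: 'MJPG' (Motion-JPEG, compressed)
        Size: Discrete 640x480
            Interval: Discrete 0.033s (30.000 fps)
    [1]: 'YUYV' (YUYV 4:2:2)
        Size: Discrete 640x480
            Interval: Discrete 0.040s (25.000 fps)
"""

def supported_modes(listing):
    """Yield (fourcc, width, height, fps) tuples parsed from the listing."""
    fourcc = size = None
    for line in listing.splitlines():
        m = re.search(r"'(\w{4})'", line)
        if m:
            fourcc = m.group(1)
            continue
        m = re.search(r"Size: Discrete (\d+)x(\d+)", line)
        if m:
            size = (int(m.group(1)), int(m.group(2)))
            continue
        m = re.search(r"\(([\d.]+) fps\)", line)
        if m and fourcc and size:
            yield (fourcc, size[0], size[1], float(m.group(1)))

modes = list(supported_modes(SAMPLE))
# 480x320 at 25 fps is absent from this sample listing; requesting an
# unsupported mode is what produces the "not-negotiated (-4)" error.
print(("MJPG", 480, 320, 25.0) in modes)
```

In practice you would feed this function the real output of `v4l2-ctl` and only request caps that appear in the result.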

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.