deepstream_test_1_usb.py with a CSI camera on Jetson Orin Nano

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Orin Nano
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only) 6.0
• TensorRT Version 8.6.2.3-1+cuda12.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hello, I’m attempting to run deepstream_test_1_usb.py on my Jetson Orin Nano using a CSI camera (IMX219), but I’m encountering issues. The output I’m getting is as follows:

python3 deepstream_test_1_usb.py /dev/video0
Creating Pipeline 
Creating Source 
Creating Video Converter 
Is it Integrated GPU? : 1
Creating nv3dsink 
Playing cam /dev/video0 
Adding elements to Pipeline 
Linking elements in the Pipeline 
Starting pipeline 

Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:45.410162530  4178 0xaaab503f7f20 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.0/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 4x34x60         

0:00:45.810853325  4178 0xaaab503f7f20 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.0/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine
0:00:45.872422697  4178 0xaaab503f7f20 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Error: gst-stream-error-quark: Internal data stream error. (1): ../libs/gst/base/gstbasesrc.c(3127): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:usb-cam-source:
streaming stopped, reason not-negotiated (-4)
nvstreammux: Successfully handled EOS for source_id=0

As suggested in some previous posts, I verified the camera input using the command v4l2-ctl --list-devices, and the output I got is as follows:

NVIDIA Tegra Video Input Device (platform:tegra-camrtc-ca):
	/dev/media0

vi-output, imx219 9-0010 (platform:tegra-capture-vi:2):
	/dev/video0

Based on other posts, I suspected the issue might be related to the v4l2src plugin, so I attempted to get a camera preview using the following command:

gst-launch-1.0 nvarguscamerasrc device=/dev/video0 ! video/x-h264,width=1280,height=720,framerate=30/1 ! fakesink
Output:

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
../libs/gst/base/gstbasesrc.c(3127): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
New clock: GstSystemClock
Execution ended after 0:00:00.000169635
Setting pipeline to NULL ...
Freeing pipeline ...

Kindly help me check and fix this.

Please refer to this FAQ for v4l2src.

In addition, if you want to use deepstream_test_1_usb.py, do not use nvarguscamerasrc; the two are not compatible. Please refer to the FAQ.

Thank you for your reply.

I checked the camera's supported formats and capabilities with the v4l2-ctl tool, using v4l2-ctl -d /dev/video0 --list-formats-ext, and got the following output:

ioctl: VIDIOC_ENUM_FMT
	Type: Video Capture

	[0]: 'RG10' (10-bit Bayer RGRG/GBGB)
		Size: Discrete 3280x2464
			Interval: Discrete 0.048s (21.000 fps)
		Size: Discrete 3280x1848
			Interval: Discrete 0.036s (28.000 fps)
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1640x1232
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.017s (60.000 fps)

Then I used gst-launch to try to construct a working pipeline, based on the example pipeline you gave and the output of the previous query:

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=RG10, width=1280, height=720, framerate=30/1'  ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! mux.sink_0  nvstreammux name=mux width=1280 height=720 batch-size=1  ! fakesink

But it didn't work; I got the following output:

WARNING: erroneous pipeline: could not link v4l2src0 to nvvideoconvert0, neither element can handle caps video/x-raw, format=(string)RG10, width=(int)1280, height=(int)720, framerate=(fraction)30/1

I also tried adding videoconvert before nvvideoconvert:

gst-launch-1.0  v4l2src device=/dev/video0 ! 'video/x-raw, format=RG10, width=1280, height=720, framerate=30/1' ! videoconvert ! 'video/x-raw, format=NV12' ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12'  ! fakesink

But I got essentially the same error as with the pipeline without videoconvert:

WARNING: erroneous pipeline: could not link v4l2src0 to videoconvert0, neither element can handle caps video/x-raw, format=(string)RG10, width=(int)1280, height=(int)720, framerate=(fraction)30/1

Your camera's output is in Bayer format, not RGB, so the above FAQ may not apply to you.
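
You can confirm this by listing the caps that nvvideoconvert's sink pad actually advertises. Below is a minimal sketch using the GStreamer Python bindings (assuming PyGObject is installed); RG10 Bayer will not appear among the supported video/x-raw formats, which is why the link fails:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
factory = Gst.ElementFactory.find('nvvideoconvert')
if factory is None:
    raise SystemExit('nvvideoconvert not found; is DeepStream installed?')
for tmpl in factory.get_static_pad_templates():
    if tmpl.direction == Gst.PadDirection.SINK:
        # Print every caps structure the sink pad template will accept
        print(tmpl.get_caps().to_string())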

Since I don't have a similar camera model, some experimentation is needed. Please try the following two pipelines:

gst-launch-1.0 nvarguscamerasrc num-buffers=1 ! 'video/x-raw(memory:NVMM),width=1280, height=720, framerate=60/1, format=NV12' ! nv3dsink
gst-launch-1.0 -vvv v4l2src   ! 'video/x-bayer,width=1280,height=720,format=rggb,framerate=60/1' ! bayer2rgb ! nvvideoconvert ! nv3dsink

If nvarguscamerasrc works, you will need to modify the code of deepstream_test_1_usb.py for it to run properly.
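
Before editing the sample, you can prototype the Argus pipeline from Python with Gst.parse_launch. A minimal sketch (assuming PyGObject is installed; the caps string is copied from the first command above):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "nvarguscamerasrc num-buffers=120 ! "
    "video/x-raw(memory:NVMM),width=1280,height=720,framerate=60/1,format=NV12 ! "
    "nv3dsink")
pipeline.set_state(Gst.State.PLAYING)
# Block until the capture finishes or an error is posted on the bus
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)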

There is a more detailed discussion here; please refer to it.

I tried the first pipeline and it works:

gst-launch-1.0 nvarguscamerasrc num-buffers=1 ! 'video/x-raw(memory:NVMM),width=1280, height=720, framerate=60/1, format=NV12' ! nv3dsink

But the second one does not:

gst-launch-1.0 -vvv v4l2src ! 'video/x-bayer,width=1280,height=720,format=rggb,framerate=60/1' ! bayer2rgb ! nvvideoconvert ! nv3dsink

I'm getting the following output:

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
../libs/gst/base/gstbasesrc.c(3127): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
New clock: GstSystemClock
Execution ended after 0:00:00.000386507
Setting pipeline to NULL ...
Freeing pipeline ...

So, if I’ve understood correctly, I need to update the deepstream_test_1_usb.py file for it to work properly.

I think you are right; you need to use nvarguscamerasrc. (The v4l2src attempt likely fails to negotiate because the sensor only offers 10-bit RG10 Bayer, not the 8-bit rggb format requested in the caps.) Thanks.

I did a quick test by changing the line source = Gst.ElementFactory.make("v4l2src", "usb-cam-source") to source = Gst.ElementFactory.make("nvarguscamerasrc", "usb-cam-source"), and removing source.set_property('device', args[1]) since nvarguscamerasrc has no device property.
But I got this error:

python3 deepstream_test_1_usb.py /dev/video0
Creating Pipeline 
Creating Source 
Creating Video Converter 
Is it Integrated GPU? : 1
Creating nv3dsink 
Playing cam /dev/video0 
Adding elements to Pipeline 
Linking elements in the Pipeline 
Starting pipeline 

Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:26.314133663 12659 0xaaab34ad5760 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.0/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 4x34x60         

0:00:26.747214378 12659 0xaaab34ad5760 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.0/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine
0:00:26.845027078 12659 0xaaab34ad5760 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3280 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3280 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 2 
   Output Stream W = 1920 H = 1080 
   seconds to Run    = 0 
   Frame Rate = 29.999999 
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
Error: gst-stream-error-quark: Internal data stream error. (1): ../libs/gst/base/gstbasesrc.c(3127): gst_base_src_loop (): /GstPipeline:pipeline0/GstNvArgusCameraSrc:usb-cam-source:
streaming stopped, reason not-linked (-1)
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success

Is there something else to change, or are more substantial changes needed?

Please try the following patch. Since I don't have a CSI camera I could not test it, but it builds a pipeline like the figure below. The not-linked error in your last run occurs because nvarguscamerasrc outputs video/x-raw(memory:NVMM) buffers, which the original v4l2src capsfilter and videoconvert chain cannot accept; the patch therefore removes those elements and links the source directly to the NVMM capsfilter in front of nvstreammux.

nvarguscamerasrc --> capsfilter --> nvstreammux --> "same as USB camera"
diff --git a/apps/deepstream-test1-usbcam/deepstream_test_1_usb.py b/apps/deepstream-test1-usbcam/deepstream_test_1_usb.py
index 9237e6c..2d335ef 100755
--- a/apps/deepstream-test1-usbcam/deepstream_test_1_usb.py
+++ b/apps/deepstream-test1-usbcam/deepstream_test_1_usb.py
@@ -137,37 +137,41 @@ def main(args):
         sys.stderr.write(" Unable to create Pipeline \n")
 
     # Source element for reading from the file
-    print("Creating Source \n ")
-    source = Gst.ElementFactory.make("v4l2src", "usb-cam-source")
-    if not source:
-        sys.stderr.write(" Unable to create Source \n")
+    # print("Creating Source \n ")
+    # source = Gst.ElementFactory.make("v4l2src", "usb-cam-source")
+    # if not source:
+    #     sys.stderr.write(" Unable to create Source \n")
+
+    # caps_v4l2src = Gst.ElementFactory.make("capsfilter", "v4l2src_caps")
+    # if not caps_v4l2src:
+    #     sys.stderr.write(" Unable to create v4l2src capsfilter \n")
 
-    caps_v4l2src = Gst.ElementFactory.make("capsfilter", "v4l2src_caps")
-    if not caps_v4l2src:
-        sys.stderr.write(" Unable to create v4l2src capsfilter \n")
 
+    # print("Creating Video Converter \n")
 
-    print("Creating Video Converter \n")
+    # # Adding videoconvert -> nvvideoconvert as not all
+    # # raw formats are supported by nvvideoconvert;
+    # # Say YUYV is unsupported - which is the common
+    # # raw format for many logi usb cams
+    # # In case we have a camera with raw format supported in
+    # # nvvideoconvert, GStreamer plugins' capability negotiation
+    # # shall be intelligent enough to reduce compute by
+    # # videoconvert doing passthrough (TODO we need to confirm this)
 
-    # Adding videoconvert -> nvvideoconvert as not all
-    # raw formats are supported by nvvideoconvert;
-    # Say YUYV is unsupported - which is the common
-    # raw format for many logi usb cams
-    # In case we have a camera with raw format supported in
-    # nvvideoconvert, GStreamer plugins' capability negotiation
-    # shall be intelligent enough to reduce compute by
-    # videoconvert doing passthrough (TODO we need to confirm this)
 
+    # # videoconvert to make sure a superset of raw formats are supported
+    # vidconvsrc = Gst.ElementFactory.make("videoconvert", "convertor_src1")
+    # if not vidconvsrc:
+    #     sys.stderr.write(" Unable to create videoconvert \n")
 
-    # videoconvert to make sure a superset of raw formats are supported
-    vidconvsrc = Gst.ElementFactory.make("videoconvert", "convertor_src1")
-    if not vidconvsrc:
-        sys.stderr.write(" Unable to create videoconvert \n")
+    # # nvvideoconvert to convert incoming raw buffers to NVMM Mem (NvBufSurface API)
+    # nvvidconvsrc = Gst.ElementFactory.make("nvvideoconvert", "convertor_src2")
+    # if not nvvidconvsrc:
+    #     sys.stderr.write(" Unable to create Nvvideoconvert \n")
 
-    # nvvideoconvert to convert incoming raw buffers to NVMM Mem (NvBufSurface API)
-    nvvidconvsrc = Gst.ElementFactory.make("nvvideoconvert", "convertor_src2")
-    if not nvvidconvsrc:
-        sys.stderr.write(" Unable to create Nvvideoconvert \n")
+    source = Gst.ElementFactory.make("nvarguscamerasrc", "usb-cam-source")
+    if not source:
+        sys.stderr.write(" Unable to create Source \n")
 
     caps_vidconvsrc = Gst.ElementFactory.make("capsfilter", "nvmm_caps")
     if not caps_vidconvsrc:
@@ -212,9 +216,9 @@ def main(args):
             sys.stderr.write(" Unable to create egl sink \n")
 
     print("Playing cam %s " %args[1])
-    caps_v4l2src.set_property('caps', Gst.Caps.from_string("video/x-raw, framerate=30/1"))
-    caps_vidconvsrc.set_property('caps', Gst.Caps.from_string("video/x-raw(memory:NVMM)"))
-    source.set_property('device', args[1])
+    # caps_v4l2src.set_property('caps', Gst.Caps.from_string("video/x-raw, framerate=30/1"))
+    caps_vidconvsrc.set_property('caps', Gst.Caps.from_string("video/x-raw(memory:NVMM), width=1280, height=720, framerate=60/1, format=NV12"))
+    # source.set_property('device', args[1])
     streammux.set_property('width', 1920)
     streammux.set_property('height', 1080)
     streammux.set_property('batch-size', 1)
@@ -225,9 +229,9 @@ def main(args):
 
     print("Adding elements to Pipeline \n")
     pipeline.add(source)
-    pipeline.add(caps_v4l2src)
-    pipeline.add(vidconvsrc)
-    pipeline.add(nvvidconvsrc)
+    # pipeline.add(caps_v4l2src)
+    # pipeline.add(vidconvsrc)
+    # pipeline.add(nvvidconvsrc)
     pipeline.add(caps_vidconvsrc)
     pipeline.add(streammux)
     pipeline.add(pgie)
@@ -239,10 +243,11 @@ def main(args):
     # v4l2src -> nvvideoconvert -> mux -> 
     # nvinfer -> nvvideoconvert -> nvosd -> video-renderer
     print("Linking elements in the Pipeline \n")
-    source.link(caps_v4l2src)
-    caps_v4l2src.link(vidconvsrc)
-    vidconvsrc.link(nvvidconvsrc)
-    nvvidconvsrc.link(caps_vidconvsrc)
+    # source.link(caps_v4l2src)
+    # caps_v4l2src.link(vidconvsrc)
+    # vidconvsrc.link(nvvidconvsrc)
+    # nvvidconvsrc.link(caps_vidconvsrc)
+    source.link(caps_vidconvsrc)
 
     sinkpad = streammux.request_pad_simple("sink_0")
     if not sinkpad:
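
With the patch applied, nvarguscamerasrc links directly to the NVMM capsfilter and the rest of the pipeline is unchanged; the device argument is no longer used, so the script can still be started the same way (python3 deepstream_test_1_usb.py /dev/video0). If you attach more than one CSI camera, the sensor can be selected through the element's sensor-id property, an optional addition that is not part of the patch above:

source.set_property('sensor-id', 0)  # select the first CSI sensor (default is 0)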
