CSI camera input - DeepStream Python application

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) : Jetson Nano
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version: 8.2
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type( questions, new requirements, bugs): question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi,
I’m running a DeepStream application with a video file as the input source. I want to reconstruct the pipeline so that it takes a CSI camera (I’m using a Raspberry Pi Camera Module v2) as input. I couldn’t find any reference for this in the deepstream_python_apps GitHub repo.

Current pipeline:
file-source → h264-parser → nvh264-decoder → streammux → nvinfer → …

Expected pipeline:
CSI input → streammux → nvinfer → …

Current pipeline function:
deepstream_pipeline.py (5.5 KB)
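For reference, the source section currently follows the standard deepstream-test1 pattern, roughly like this (abridged; the variable names are illustrative and `stream_path` stands in for my input file argument):

# Abridged file-based source section (illustrative; follows the
# deepstream-test1 pattern, `stream_path` is the input file argument).
source = Gst.ElementFactory.make("filesrc", "file-source")
source.set_property("location", stream_path)
h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvh264-decoder")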

Kindly help me modify/update the current pipeline into the expected pipeline.

Hi,

Any ideas about this query?
I need some reference to try…

Maybe you can refer to the link below and create a v4l2src:
https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/apps/deepstream-test1-usbcam
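The source section of that sample looks roughly like this (a sketch from memory; check the sample itself for the exact chain and caps):

# Sketch of the v4l2src front end used in deepstream-test1-usbcam
# (from memory; see the sample for the exact code). The chain converts
# raw camera frames to NVMM memory before they reach nvstreammux.
source = Gst.ElementFactory.make("v4l2src", "usb-cam-source")
source.set_property("device", "/dev/video0")
caps_v4l2src = Gst.ElementFactory.make("capsfilter", "v4l2src_caps")
caps_v4l2src.set_property(
    "caps", Gst.Caps.from_string("video/x-raw, framerate=30/1"))
vidconvsrc = Gst.ElementFactory.make("videoconvert", "convertor_src1")
nvvidconvsrc = Gst.ElementFactory.make("nvvideoconvert", "convertor_src2")
caps_vidconvsrc = Gst.ElementFactory.make("capsfilter", "nvmm_caps")
caps_vidconvsrc.set_property(
    "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM)"))
# Add all of these to the pipeline, link them in order, then link
# caps_vidconvsrc to a requested nvstreammux sink pad (sink_0).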

Hi @yuweiw.

I have already tried USB camera input, and I don’t want to take input from a USB device.

I want to take live video input from the Raspberry Pi Camera Module v2, which is connected to the MIPI CSI camera port.

This demo doesn’t only support USB cameras; it supports any camera that uses the v4l2 protocol. So if your camera uses the v4l2 protocol (could you check that?), you can refer to it.
Or, if your camera uses the Jetson Argus API, you can refer to the link below:
https://docs.nvidia.com/jetson/archives/r34.1/DeveloperGuide/text/SD/Multimedia/AcceleratedGstreamer.html#camera-capture-with-gstreamer-1-0
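If it is an Argus camera, the source section in Python would look roughly like this instead (a sketch: nvarguscamerasrc replaces v4l2src, and the width/height/framerate values are placeholders for a mode your sensor actually supports):

# Sketch of an Argus-based CSI source (width/height/framerate are
# placeholders; `pipeline` and `streammux` are assumed to exist as in
# the DeepStream Python sample apps).
source = Gst.ElementFactory.make("nvarguscamerasrc", "csi-cam-source")
caps_csi = Gst.ElementFactory.make("capsfilter", "csi_caps")
caps_csi.set_property("caps", Gst.Caps.from_string(
    "video/x-raw(memory:NVMM), width=1280, height=720, "
    "framerate=30/1, format=NV12"))
pipeline.add(source)
pipeline.add(caps_csi)
source.link(caps_csi)
# nvarguscamerasrc already outputs NVMM buffers, so no videoconvert /
# nvvideoconvert stage is needed before nvstreammux:
caps_csi.get_static_pad("src").link(streammux.get_request_pad("sink_0"))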

Hi @yuweiw,

I have modified my code according to the USB camera sample, and I’m getting the error below.

Error:

Traceback (most recent call last):
  File "demo_app_csi_live.py", line 285, in <module>
    sys.exit(main(sys.argv))
  File "demo_app_csi_live.py", line 207, in main
    caps_nvvidsrc.set_property('caps', Gst.Caps.from_string("video/x-raw(memory:NVMM)"))
TypeError: object of type `GstVideoConvert' does not have property `caps'

Code:
demo_app_csi_live.py (10.3 KB)
I’m using a TAO-trained encrypted model, exported with FP16 precision.

Camera input check: COMMAND: v4l2-ctl --list-devices
Output:

vi-output, imx219 7-0010 (platform:54080000.vi:0):
	/dev/video0

Kindly help me check and fix this.

Hi @yuweiw,

The above error was resolved; I updated line number 156 to create a ‘capsfilter’ GStreamer element instead. For reference, the fix looked roughly like the snippet below.
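The offending element had been created as a videoconvert rather than a capsfilter, so it had no ‘caps’ property. The ‘before’ line here is reconstructed from memory, and the element name "nvmm_caps" is illustrative:

# Before (reconstructed): a videoconvert has no 'caps' property,
# hence the TypeError:
#   caps_nvvidsrc = Gst.ElementFactory.make("videoconvert", "nvmm_caps")
# After: a capsfilter does expose 'caps':
caps_nvvidsrc = Gst.ElementFactory.make("capsfilter", "nvmm_caps")
caps_nvvidsrc.set_property(
    "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM)"))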
Now I’m getting the error below.

OUTPUT: Trying with the RPi Camera Module v2 (/dev/video0)

Starting pipeline 

Using winsys: x11 
0:00:05.524182718  8851      0xfc276f0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/fp16/yolov3_resnet18_epoch_010.etlt_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 5
0   INPUT  kFLOAT Input           3x384x1248      
1   OUTPUT kINT32 BatchedNMS      1               
2   OUTPUT kFLOAT BatchedNMS_1    200x4           
3   OUTPUT kFLOAT BatchedNMS_2    200             
4   OUTPUT kFLOAT BatchedNMS_3    200             

0:00:05.525435017  8851      0xfc276f0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /fp16/yolov3_resnet18_epoch_010.etlt_b1_gpu0_fp16.engine
0:00:05.875603469  8851      0xfc276f0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:fp16/nvinfer_config.txt sucessfully
Warning: converting ROIs to RGBA for VIC mode
Error: gst-stream-error-quark: Internal data stream error. (1): gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:csi-cam-source:
streaming stopped, reason not-negotiated (-4)

OUTPUT: Trying with USB camera input (/dev/video1)

 Starting pipeline 

Using winsys: x11 
0:00:05.692033024  9016     0x26a444f0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/fp16/yolov3_resnet18_epoch_010.etlt_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 5
0   INPUT  kFLOAT Input           3x384x1248      
1   OUTPUT kINT32 BatchedNMS      1               
2   OUTPUT kFLOAT BatchedNMS_1    200x4           
3   OUTPUT kFLOAT BatchedNMS_2    200             
4   OUTPUT kFLOAT BatchedNMS_3    200             

0:00:05.693342042  9016     0x26a444f0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /fp16/yolov3_resnet18_epoch_010.etlt_b1_gpu0_fp16.engine
0:00:05.942543424  9016     0x26a444f0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:fp16/nvinfer_config.txt sucessfully
Warning: converting ROIs to RGBA for VIC mode
0:00:07.409475569  9016     0x265e1050 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:07.409705518  9016     0x265e1050 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:735> [UID = 1]: Failed to parse bboxes
Segmentation fault (core dumped)

NvInfer config:
nvinfer_config.txt (542 Bytes)

Tao trained model:
yolov3_resnet18_epoch_010.etlt (73.8 MB)
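From searching similar topics, I suspect the parseBoundingBox error means nvinfer fell back to its default DetectNet-style bbox parser, while this TAO YOLOv3 model outputs BatchedNMS layers that need a custom parser. My guess is the [property] section of the config needs entries like these (untested, and the library path is a placeholder for wherever the deepstream_tao_apps custom parser is built):

# Guessed additions to the [property] section of nvinfer_config.txt
# (untested; the custom-lib-path below is a placeholder):
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=<path-to>/libnvds_infercustomparser_tao.so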

I’m stuck on this; kindly help to check and resolve the issue as soon as possible…

Hi, @soundarrajan. We suggest you do it step by step.
1. You should learn the basics of GStreamer and how to set up a pipeline with the CLI:
https://gstreamer.freedesktop.org/documentation/tutorials/index.html?gi-language=c
2. You can read the DeepStream code and debug the demo by following the README file, such as the link below:
https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/master/apps/deepstream-test1-usbcam/deepstream_test_1_usb.py
3. If you want to write the code yourself, you’d better run it with the CLI (gst-launch-1.0 …) first. When your pipeline works well in CLI mode, you can write code more quickly and with fewer errors. Thanks
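A convenient intermediate step between the CLI and the full application is Gst.parse_launch, which runs the same pipeline string from Python before you port it to explicit ElementFactory calls. A minimal sketch (the pipeline string and caps here are examples, not your exact setup):

# Minimal sketch: run a gst-launch-style pipeline string from Python.
# The string here is an example; substitute the one you verified in the CLI.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.parse_launch(
    "v4l2src device=/dev/video0 ! video/x-raw, framerate=30/1 ! "
    "videoconvert ! fakesink")
pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()  # press Ctrl+C to stop
except KeyboardInterrupt:
    pass
pipeline.set_state(Gst.State.NULL)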

Hi @yuweiw ,

I am able to do detection with the USB camera, with reference to the USB camera input code.

But you said the pipeline is the same for both the USB and CSI camera modules (I understand both are v4l2 sources).
I tried the CSI camera input but am still getting the same error mentioned above.

SAMPLE:

0:00:05.525435017  8851      0xfc276f0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /fp16/yolov3_resnet18_epoch_010.etlt_b1_gpu0_fp16.engine
0:00:05.875603469  8851      0xfc276f0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:fp16/nvinfer_config.txt sucessfully
Warning: converting ROIs to RGBA for VIC mode
Error: gst-stream-error-quark: Internal data stream error. (1): gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:csi-cam-source:
streaming stopped, reason not-negotiated (-4)
1. You can run the CLI command below to verify your camera is OK:
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-h264,width=XXX,height=XXX,framerate=30/1 ! fakesink
2. You can add plugins one by one before the fakesink plugin and make sure the complete pipeline runs well with the CLI; see the example after this list.
3. Then you can debug your own code.
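For example, the pipeline can be grown stage by stage (a sketch; the raw caps here are an assumption, so use whatever formats v4l2-ctl reports for your sensor, and replace XXX with a real resolution):

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,width=XXX,height=XXX ! fakesink
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,width=XXX,height=XXX ! videoconvert ! fakesink
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,width=XXX,height=XXX ! videoconvert ! nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! fakesink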

Hi @yuweiw,

Is there any way we can print the FPS and the inference timing on the output rendered video?
I also want to print the confidence of each detected object.

While doing inference in the TAO Toolkit, I can see the label + confidence percentage. I want the same output here too…
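For context, I have been experimenting with reading the confidence in the OSD sink pad probe, roughly like this (a sketch that follows the deepstream-test1 probe pattern; not verified in my app yet):

# Sketch of a pad probe that appends the detection confidence to each
# object's on-screen label (follows the deepstream-test1 probe pattern;
# untested in my application).
import pyds
from gi.repository import Gst

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # Render "label confidence" instead of just the label:
            obj_meta.text_params.display_text = "{} {:.2f}".format(
                obj_meta.obj_label, obj_meta.confidence)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK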

Hi, @soundarrajan.
Does the camera work well now?
About the log print format, you can open a new topic about it, and attach a picture of the log format you want and the environment you are using now. Thanks

Hi @yuweiw,

Yes, the camera is working now; I reconstructed the pipeline with the correct GStreamer elements.

I opened a new topic for the display text: Display confidence in the bounding box detected object

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.