Issue with Real-Time Camera Input in DeepStream for License Plate Recognition (LPR)

Description:

I am currently working on a project to implement real-time license plate recognition (LPR) using DeepStream on my NVIDIA Jetson platform. I have encountered an issue when trying to use the camera input instead of a pre-recorded video file. The application fails to access the camera stream and displays the following error:

ERROR from element file_src_0: Resource not found.
Error details: gstfilesrc.c(532): gst_file_src_start (): /GstPipeline:pipeline/GstFileSrc:file_src_0:
No such file "nvarguscamerasrc sensor-id=0"

I’ve followed the necessary steps to integrate the camera feed and am using nvarguscamerasrc to access the camera. However, judging by the error, DeepStream is handing the whole string "nvarguscamerasrc sensor-id=0" to a filesrc element and treating it as a file path, rather than parsing it as a source pipeline.

I have tried simplifying the GStreamer pipeline and ensuring that the camera is connected and working with a standalone GStreamer command. The camera feed works fine when tested directly with GStreamer, but when I use the same configuration in DeepStream, it doesn’t recognize the source properly.

Complete Information Regarding My Setup:

  • Hardware Platform: Jetson Orin Nano 4GB
  • DeepStream Version: DeepStream 6.3
  • JetPack Version: 5.1
  • TensorRT Version: 8.2.2.1
  • NVIDIA GPU Driver Version: 510.39.01
  • Issue Type: Bug / Question
  • How to Reproduce the Issue:
    • I am trying to use the camera input as the source for an LPR application in DeepStream.
    • The configuration file lpr_app_infer_us_config.yml was modified to use nvarguscamerasrc sensor-id=0 for the camera source.
    • The error occurs when executing the following command:
      sudo ./deepstream-lpr-app lpr_app_infer_us_config.yml
      
    • The camera feed works correctly with standalone GStreamer:
      gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1' ! nvegltransform ! nveglglessink
      
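For comparison, this is how a CSI camera is normally selected in the deepstream-app reference application's config format. Note this is a sketch using the reference-app convention; the LPR sample's own yml schema may differ and may not accept these keys as-is:

```ini
# deepstream-app style source group: type=5 selects a CSI camera
# (nvarguscamerasrc). Keys follow the reference-app convention and
# may need adapting for the LPR sample's yml parser.
[source0]
enable=1
type=5
camera-width=1920
camera-height=1080
camera-fps-n=30
camera-fps-d=1
camera-csi-sensor-id=0
```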

Steps Taken So Far:

  1. Verified that the camera is functioning properly with GStreamer.
  2. Simplified the GStreamer pipeline in the DeepStream config file.
  3. Tried different camera input configurations but encountered the same error.

Requirement Details:
I am looking for assistance with configuring DeepStream to properly access and use the camera feed (CSI) for real-time processing in the LPR application. Specifically, I need help with:

  • Correctly configuring the video source for real-time camera input.
  • Debugging the pipeline to ensure proper recognition of the camera.

I would appreciate any advice on how to resolve this issue or any potential configuration adjustments I might have missed.

I use this for my project: “Creating a Real-Time License Plate Detection and Recognition App” | NVIDIA Technical Blog

I tried with .mp4 files and they worked fine.

Thank you in advance for your help!

I’ve realized that the project only supports .mp4 files. Are there ways to modify it and run it in real time?

Yes. You can modify our source code, deepstream_lpr_app.c, to build the pipeline with nvarguscamerasrc in place of the file source.
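For reference, a minimal sketch of what such a modification could look like using the GStreamer C API (untested here; the wiring into the app's nvstreammux is simplified, and the caps string mirrors the gst-launch test above):

```c
#include <gst/gst.h>

/* Sketch: build a source bin around nvarguscamerasrc that can replace
   the filesrc/decoder chain in deepstream_lpr_app.c. Element and
   property names are standard GStreamer/DeepStream ones. */
static GstElement *
create_csi_camera_source (guint sensor_id)
{
  GstElement *bin, *src, *capsfilter;
  GstCaps *caps;
  GstPad *pad;

  bin = gst_bin_new ("camera-source-bin");
  src = gst_element_factory_make ("nvarguscamerasrc", "csi-camera");
  capsfilter = gst_element_factory_make ("capsfilter", "camera-caps");
  if (!bin || !src || !capsfilter)
    return NULL;

  g_object_set (G_OBJECT (src), "sensor-id", sensor_id, NULL);

  caps = gst_caps_from_string (
      "video/x-raw(memory:NVMM), width=1920, height=1080, "
      "framerate=30/1, format=NV12");
  g_object_set (G_OBJECT (capsfilter), "caps", caps, NULL);
  gst_caps_unref (caps);

  gst_bin_add_many (GST_BIN (bin), src, capsfilter, NULL);
  gst_element_link (src, capsfilter);

  /* Expose the capsfilter's src pad as a ghost pad so the bin can be
     linked to nvstreammux's sink pad like any other source. */
  pad = gst_element_get_static_pad (capsfilter, "src");
  gst_element_add_pad (bin, gst_ghost_pad_new ("src", pad));
  gst_object_unref (pad);

  return bin;
}
```

The returned bin would then be linked to the stream muxer's sink_0 pad where the app currently links its file source chain.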

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks