Trigger single capture in nvarguscamerasrc source pipeline that is always on

• Hardware Platform (Jetson / GPU) Jetson Orin(also TX2)
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only) 6.0(also 4.4)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I have set up a single pipeline that runs inference using nvinfer; it is tested to work with JPEG, H.264, and nvarguscamerasrc as sources.

Instead of having nvarguscamerasrc stream frames all the time, I want to trigger the capture of a single frame on demand. I have been looking around the forum but could not find a typical solution. Can you suggest an approach?

So far I have seen

sudo enableCamInfiniteTimeout=1 nvargus-daemon

which allows a capture to be triggered at any time, since the camera driver stays hot at all times.
https://docs.nvidia.com/jetson/archives/r35.2.1/DeveloperGuide/text/SD/CameraDevelopment/CameraSoftwareDevelopmentSolution.html#infinite-timeout-support

Without DeepStream, this is achievable via an Argus capture session, where we would call capture() on the session. Jetson Linux API Reference: Argus::ICaptureSession Class Reference | NVIDIA Docs

But I am confused about how to map this capture() call onto the DeepStream framework. Is there some basic functionality that I am missing?

For additional context: once we get a single-camera trigger working, we want to trigger multiple cameras at the same time.

Hi,
This mode is not supported by default and may not work properly. Please run the camera source at a steady frame rate; it can be a low frame rate such as 5 fps or 10 fps.

Is it possible to provide guidance on how to dynamically drop some frames and continue processing the others?

For the camera case, there is no way to drop designated frames in DeepStream.

I haven’t read through this fully yet.

https://gstreamer.freedesktop.org/documentation/tutorials/basic/dynamic-pipelines.html?gi-language=c

What if I have a pipeline that constantly dumps to fakesink, and I listen for a signal (call it "connect")?
When the connect signal is raised (I am unsure how I would receive this signal), connect the camera to the rest of my working pipeline (resize, infer, draw rectangles).

After 1 frame is received (I am also unsure how to raise a disconnect signal), take the camera stream and connect it back to fakesink?

Would this be a workable approach, or not a good approach?

Hello, can I get some guidance?

I was also reading about appsink. Maybe that is the better way: we avoid a dynamic pipeline, and my app just pulls frames from GStreamer when needed; otherwise, GStreamer can simply ignore/drop the frames.

I am leaning toward keeping the camera running, but hoping for some way to avoid doing unnecessary inference.
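In case it helps, the gating I have in mind could be sketched as the decision logic inside a GStreamer pad probe placed between the camera source and the inference branch. This is a minimal, hardware-free sketch: the class name CaptureGate, the arm() trigger entry point, and the string return values are my own assumptions, not an existing DeepStream or GStreamer API; a real probe callback would return GST_PAD_PROBE_OK or GST_PAD_PROBE_DROP instead.

```python
class CaptureGate:
    """Per-buffer decision: should this frame continue downstream?

    Intended to back a GStreamer pad probe: the camera runs at a
    steady frame rate, but every buffer is dropped unless the gate
    has been armed by an external trigger, so nvinfer only ever
    sees the one requested frame.
    """

    def __init__(self):
        self._armed = False

    def arm(self):
        """Called by the external trigger: let exactly one frame through."""
        self._armed = True

    def on_frame(self):
        """Called once per buffer; returns the probe decision."""
        if self._armed:
            self._armed = False  # one frame only, then resume dropping
            return "PASS"        # probe would return GST_PAD_PROBE_OK
        return "DROP"            # probe would return GST_PAD_PROBE_DROP


gate = CaptureGate()
print(gate.on_frame())  # DROP: no trigger yet
gate.arm()
print(gate.on_frame())  # PASS: the one triggered frame
print(gate.on_frame())  # DROP: back to dropping
```

This keeps the camera and nvargus-daemon running steadily (as suggested above) while skipping inference on every untriggered frame.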

Hi,
It looks possible to use appsink and appsrc like:

nvarguscamerasrc ! nvvideoconvert ! video/x-raw ! appsink
appsrc ! video/x-raw ! nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! nvstreammux ! nvinfer ! ...

You may check the sample:

sources/apps/sample_apps/deepstream-appsrc-test/deepstream_appsrc_test_app.c

And develop this use-case based on it.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.