• Hardware Platform (Jetson / GPU): Jetson Orin (also TX2)
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only): 6.0 (also 4.4)
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for a new requirement. Include the module name, i.e. which plugin or which sample application, and the function description.)
I have set up a single pipeline to run inference using nvinfer, tested to work with JPEG, H.264, and nvarguscamerasrc as sources.
Instead of having nvarguscamerasrc stream frames continuously, I want to trigger the capture of a single frame. I have been looking around the forum but could not find a typical solution. Can you suggest an approach?
Hi,
This mode is not supported by default and may not work properly. Please run the camera source at a steady frame rate; it can be a low frame rate such as 5 fps or 10 fps.
What if I have a pipeline that constantly dumps to fakesink and listens for a signal (call it "connect")?
When the connect signal is raised (I am unsure how I can receive this signal), connect the camera to the rest of my working pipeline (resize, infer, draw rectangles).
After one frame is received (I am unsure how to raise a disconnect signal), take the camera stream and connect it back to fakesink.
Would this be a workable approach, or is it not a good one?
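For what it's worth, the connect/disconnect idea can be sketched without relinking any elements: keep the pipeline intact and decide per-buffer whether a frame is forwarded to the inference branch. The gating logic below is a minimal, GStreamer-free sketch; in a real pipeline it would live in a pad probe on the pad feeding nvinfer (returning `Gst.PadProbeReturn.OK` to pass a buffer and `Gst.PadProbeReturn.DROP` to discard it). The class and method names are illustrative, not part of any DeepStream or GStreamer API.

```python
class OneShotGate:
    """Pass exactly one buffer per trigger; drop everything else.

    Sketch of the "connect/disconnect" idea: instead of dynamically
    relinking the camera between fakesink and the inference branch,
    keep the pipeline static and gate buffers one at a time.
    """

    def __init__(self):
        self._armed = False  # set True by the external "connect" signal

    def trigger(self):
        """Called by the application when one frame should be captured."""
        self._armed = True

    def should_pass(self):
        """Called once per incoming buffer (e.g. from a pad probe).

        Returns True for exactly one buffer after each trigger(),
        which acts as the automatic "disconnect" after one frame.
        """
        if self._armed:
            self._armed = False
            return True
        return False
```

In a pad-probe callback you would return `Gst.PadProbeReturn.OK` when `should_pass()` is true and `Gst.PadProbeReturn.DROP` otherwise, so frames never reach nvinfer unless a capture was requested.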
I was also reading about appsink. Maybe that is the better way, so we avoid a dynamic pipeline and my app just pulls frames from GStreamer. Otherwise, GStreamer can just ignore/drop the frames?
I am leaning toward keeping the camera running, but hoping for some way to avoid doing unnecessary inference.
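The appsink variant follows the same pull-on-demand pattern: configure appsink with `drop=true` and `max-buffers=1` so the camera keeps running while stale frames are quietly discarded, and the application pulls the newest frame only when it wants to run inference. The small holder below models that drop-oldest behaviour without requiring GStreamer to be installed; the class name is illustrative.

```python
import threading

class LatestFrameSink:
    """Model of an appsink configured with drop=true, max-buffers=1.

    The pipeline pushes frames continuously; only the newest one is
    kept, so the application can pull a frame on demand while older
    frames are dropped instead of being processed.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def push(self, frame):
        """Called for every frame the camera produces."""
        with self._lock:
            self._frame = frame  # overwrite: older frames are dropped

    def pull(self):
        """Called by the app when it wants one frame to run inference on.

        Returns the newest frame, or None if nothing new has arrived.
        """
        with self._lock:
            frame, self._frame = self._frame, None
            return frame
```

With a real appsink you would call `sink.emit("pull-sample")` (or `try_pull_sample()`) when your trigger fires; with `drop=true` and `max-buffers=1` set, GStreamer performs the overwrite-oldest behaviour for you.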
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.