Can DeepStream receive frames (not streaming video) as input?

** Jetson TX2
** DeepStream 6.1
Is there a way to configure the pipeline so that, instead of a direct camera connection, an RTSP stream, or an MP4 file, it gets a single frame (or a list of frames) as input for detection/tracking purposes? If yes, how should I modify the pipeline so it works and also won't shut down after processing a single frame?


Have you considered using appsrc as your source element? With that you should be able to send in individual frames whenever you want.

DeepStream is just an SDK. The samples are meant to show how to use the DeepStream APIs. You can write your own app to accept frame data.

Thank you @marmikshah and @Fiona.Chen for your response.
OK good to know that this option exists.
To simplify my case: the program receives a list of frames from time to time and needs to send them to the DeepStream pipeline for inference.

So the basic pipeline structure, which (as I understand it) works only with an .h264 video file, is:
file-source → h264-parser → nvh264-decoder → nvinfer → nvvidconv → nvosd → video-renderer

So basically, if we work with frames only, we don't need the "file-source → h264-parser → nvh264-decoder" plugins,
since we already hold the frame (correct me if I am wrong).

My question is: which plugins should come before “nvinfer” so the pipeline can handle frame-by-frame input, and how should I pass the frames to them? (An example would be very helpful.)

Many thanks!

Yes, you're right. You do not need those plugins.
Instead you can have something like:

appsrc -> nvvideoconvert -> nvstreammux -> nvinfer -> nvstreamdemux -> nvvideoconvert -> nvdsosd -> .. 
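As a rough illustration (not from the thread itself), that element chain could be written as a gst-launch-style description string. All properties below (caps, resolution, batch size, the nvinfer config path, and the `nveglglessink` at the end) are placeholder assumptions, not a tested DeepStream 6.1 configuration:

```python
# Hypothetical sketch: the suggested element chain as a gst-launch
# style description string.  Every property value here is a
# placeholder, not a verified DeepStream 6.1 setup.
def build_pipeline_desc(width=1280, height=720,
                        infer_config="config_infer.txt"):
    return (
        # appsrc feeds raw RGBA frames pushed from application code
        f"appsrc name=src is-live=true format=time "
        f"caps=video/x-raw,format=RGBA,width={width},height={height},framerate=30/1 "
        # convert into NVMM device memory so nvstreammux accepts the buffers
        f"! nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 "
        f"! mux.sink_0 nvstreammux name=mux batch-size=1 "
        f"width={width} height={height} "
        f"! nvinfer config-file-path={infer_config} "
        f"! nvvideoconvert ! nvdsosd ! nveglglessink"
    )

print(build_pipeline_desc())
```

A string like this could then be handed to `Gst.parse_launch()`, after which you can fetch the source with `pipeline.get_by_name("src")` to push buffers into it.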

You can use appsrc's signals to send in buffers whenever you want. Check the 'push-buffer' signal, which takes a GstBuffer.
This example shows how to push a numpy array into the pipeline: Appsrc with numpy input in Python - #8 by gautampt6ul.
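To make the push-buffer idea concrete, here is a minimal sketch of preparing a numpy frame as the raw byte payload you would wrap into a GstBuffer. The shape check assumes RGBA caps on appsrc (an assumption, not something the thread specifies); the actual GStreamer calls are mentioned afterwards rather than exercised here:

```python
import numpy as np

def frame_to_payload(frame: np.ndarray) -> bytes:
    """Flatten an RGBA uint8 frame into the contiguous byte string
    that would be wrapped into a GstBuffer for appsrc."""
    if frame.dtype != np.uint8:
        raise ValueError("expected uint8 pixel data")
    if frame.ndim != 3 or frame.shape[2] != 4:
        raise ValueError("expected an RGBA frame to match the appsrc caps")
    # ensure C-contiguous layout before serialising to bytes
    return np.ascontiguousarray(frame).tobytes()

# a dummy black 720p RGBA frame
frame = np.zeros((720, 1280, 4), dtype=np.uint8)
payload = frame_to_payload(frame)
print(len(payload))  # 720 * 1280 * 4 = 3686400 bytes
```

On the GStreamer side you would typically do `buf = Gst.Buffer.new_wrapped(payload)` and then `appsrc.emit("push-buffer", buf)`; setting `do-timestamp=true` on appsrc (or stamping `buf.pts` yourself) keeps downstream timing sane. Those calls come from the standard GStreamer Python bindings and have not been verified on a TX2 here.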


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.