** Jetson TX2
** DeepStream 6.1
Hey,
Is there a way to configure the pipeline so that, instead of a direct camera input, an RTSP stream, or an MP4 file, it receives a single frame (or a list of frames) for detection/tracking purposes? If so, how should I modify the pipeline so it works this way and also won't shut down after processing a single frame?
Thank you @marmikshah and @Fiona.Chen for your response.
OK good to know that this option exists.
To simplify my case: the program receives a list of frames from time to time and needs to send them to the DeepStream pipeline for inference.
So the basic pipeline structure, which (as I understand it) works only with an .h264 video file, is:
file-source → h264-parser → nvh264-decoder → nvinfer → nvvidconv → nvosd → video-renderer
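For reference, a rough gst-launch-1.0 sketch of that file-based pipeline (element names as used in the DeepStream samples; note that nvstreammux is required in front of nvinfer even for a single stream, and the nvinfer config path below is only a placeholder):

```shell
# Hypothetical file-based pipeline, loosely matching deepstream-test1.
# The config-file-path is a placeholder; point it at a real nvinfer config.
gst-launch-1.0 filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder \
  ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 \
  ! nvinfer config-file-path=/path/to/config_infer_primary.txt \
  ! nvvideoconvert ! nvdsosd ! nveglglessink
```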
So basically, if we work with frames only, we don't need the "file-source → h264-parser → nvh264-decoder" plugins, since we are already holding the decoded frame (correct me if I am wrong).
My question is: which plugins should come before "nvinfer" so it can handle frame-by-frame input, and how should I pass the frames to them? (An example would be very helpful.)
You can use appsrc's signals to send in buffers whenever you want. Check the 'push-buffer' action signal, which takes a GstBuffer.
This example shows how to push a numpy array into the pipeline: Appsrc with numpy input in Python - #8 by gautampt6ul.