How to encode raw camera video and process it at the same time?

I am able to send the raw camera video to the encoder and display the video at the same time using tee:

gst-launch-1.0 -e nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1024, height=768, format=NV12, framerate=30/1' ! nvvidconv flip-method=2 ! tee name=streams streams. ! queue ! nvv4l2h265enc bitrate=8000000 ! h265parse ! qtmux ! filesink location=test.mp4 streams. ! queue ! nvoverlaysink

However, how do I encode the raw video while also capturing frames for processing, for example with detectnet inference or custom Python/C++ code? Just replacing nvoverlaysink with an appsink (after converting to BGR format) does not seem to work.
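To make the question concrete, here is a minimal C++ sketch of the appsink variant I have been trying (the pipeline string and the callback names are my own, not from any shipped sample; the extra nvvidconv/videoconvert steps are there because appsink cannot map NVMM buffers directly):

// Sketch only: tee one camera branch into the H.265 encoder and the other
// into an appsink that hands packed BGR frames to application code.
#include <gst/gst.h>
#include <gst/app/gstappsink.h>

static GstFlowReturn on_new_sample(GstAppSink *sink, gpointer) {
  GstSample *sample = gst_app_sink_pull_sample(sink);
  if (!sample)
    return GST_FLOW_ERROR;
  GstBuffer *buf = gst_sample_get_buffer(sample);
  GstMapInfo map;
  if (gst_buffer_map(buf, &map, GST_MAP_READ)) {
    // map.data is a packed BGR frame (1024 x 768 x 3 bytes here);
    // run detectnet / OpenCV / custom processing on it at this point.
    g_print("frame: %" G_GSIZE_FORMAT " bytes\n", map.size);
    gst_buffer_unmap(buf, &map);
  }
  gst_sample_unref(sample);
  return GST_FLOW_OK;
}

static gboolean on_bus(GstBus *, GstMessage *msg, gpointer loop) {
  if (GST_MESSAGE_TYPE(msg) == GST_MESSAGE_EOS ||
      GST_MESSAGE_TYPE(msg) == GST_MESSAGE_ERROR)
    g_main_loop_quit((GMainLoop *)loop);  // EOS lets qtmux finalize the .mp4
  return TRUE;
}

int main(int argc, char **argv) {
  gst_init(&argc, &argv);
  GError *err = NULL;
  // appsink cannot read NVMM buffers, so this branch copies to system memory
  // with nvvidconv (NVMM NV12 -> BGRx) and videoconvert (BGRx -> BGR).
  GstElement *pipeline = gst_parse_launch(
      "nvarguscamerasrc num-buffers=300 ! "
      "video/x-raw(memory:NVMM),width=1024,height=768,format=NV12,framerate=30/1 ! "
      "nvvidconv flip-method=2 ! tee name=streams "
      "streams. ! queue ! nvv4l2h265enc bitrate=8000000 ! h265parse ! qtmux ! "
      "filesink location=test.mp4 "
      "streams. ! queue ! nvvidconv ! video/x-raw,format=BGRx ! "
      "videoconvert ! video/x-raw,format=BGR ! "
      "appsink name=sink emit-signals=true max-buffers=2 drop=true",
      &err);
  if (!pipeline) {
    g_printerr("parse error: %s\n", err->message);
    return 1;
  }
  GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
  g_signal_connect(sink, "new-sample", G_CALLBACK(on_new_sample), NULL);

  GMainLoop *loop = g_main_loop_new(NULL, FALSE);
  GstBus *bus = gst_element_get_bus(pipeline);
  gst_bus_add_watch(bus, on_bus, loop);
  gst_element_set_state(pipeline, GST_STATE_PLAYING);
  g_main_loop_run(loop);
  gst_element_set_state(pipeline, GST_STATE_NULL);
  return 0;
}

I build this roughly with: g++ appsink_tee.cpp $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-app-1.0)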

The motivation here is that the online inference/processing results can later be validated offline by decoding the saved video.

Naren

hello sivashakthi,

please try adding the post-processing with the nvivafilter plugin.
you might refer to the L4T Accelerated GStreamer User Guide for details.

here's a sample pipeline that encodes the camera content and also performs post-processing, for example:

$ gst-launch-1.0 -e nvarguscamerasrc num-buffers=300 ! 'video/x-raw(memory:NVMM), width=2952, height=1944, format=NV12, framerate=30/1' ! tee name=streams streams. ! queue ! nvv4l2h265enc bitrate=8000000 ! h265parse ! qtmux ! filesink location=video0.mp4 streams. ! queue ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! 'video/x-raw(memory:NVMM), format=NV12' ! nvoverlaysink
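the customer library only needs to export the init() hook that nvivafilter resolves from customer-lib-name. below is a minimal sketch modeled on the nvsample_cudaprocess source that ships with L4T (nvsample_cudaprocess_src.tbz2); the CustomerFunction struct and the hook names come from that sample's customer_functions.h, so please check the copy in your release for the exact definitions. the kernel here is only a placeholder where real processing would go.

// Sketch of a customer library for nvivafilter, modeled on NVIDIA's
// nvsample_cudaprocess sample; struct/field names are from that sample's
// customer_functions.h and may vary between L4T releases.
#include <cuda.h>
#include <cuda_runtime.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <cudaEGL.h>
#include "customer_functions.h"  // from nvsample_cudaprocess_src.tbz2

// Placeholder kernel: paint a 64x64 block of the luma plane white.
// Real post-processing (e.g. feeding the frame to inference) replaces this.
__global__ void markCorner(unsigned char *luma, int pitch) {
  int x = blockIdx.x * blockDim.x + threadIdx.x;
  int y = blockIdx.y * blockDim.y + threadIdx.y;
  luma[y * pitch + x] = 255;
}

// Called by nvivafilter for every frame when cuda-process=true;
// the EGLImage wraps the NVMM surface in place (zero copy).
static void gpu_process(EGLImageKHR image, void **usrptr) {
  (void)usrptr;
  cudaFree(0);  // as in the sample: make the CUDA context current
  CUgraphicsResource resource = NULL;
  CUeglFrame eglFrame;
  if (cuGraphicsEGLRegisterImage(&resource, image,
          CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE) != CUDA_SUCCESS)
    return;
  if (cuGraphicsResourceGetMappedEglFrame(&eglFrame, resource, 0, 0) ==
          CUDA_SUCCESS &&
      eglFrame.frameType == CU_EGL_FRAME_TYPE_PITCH) {
    // for NV12, plane 0 is the pitched luma plane
    markCorner<<<dim3(4, 4), dim3(16, 16)>>>(
        (unsigned char *)eglFrame.frame.pPitch[0], eglFrame.pitch);
    cuCtxSynchronize();
  }
  cuGraphicsUnregisterResource(resource);
}

// Entry point resolved by nvivafilter; the pre/post hooks are left NULL
// here for brevity (the full sample implements all three).
extern "C" void init(CustomerFunction *pFuncs) {
  pFuncs->fPreProcess = NULL;
  pFuncs->fPostProcess = NULL;
  pFuncs->fGPUProcess = gpu_process;
}

build it as a shared library with nvcc, roughly: nvcc -shared -Xcompiler -fPIC nvsample_cudaprocess.cu -lcuda -o libnvsample_cudaprocess.so, then point customer-lib-name at the result.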

Thank you! Is there a reference example where libnvsample_cudaprocess does AI inference?