I am able to send the raw camera video to the encoder and display the video at the same time using tee:
gst-launch-1.0 -e nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1024, height=768, format=NV12, framerate=30/1' ! nvvidconv flip-method=2 ! tee name=streams streams. ! queue ! nvv4l2h265enc bitrate=8000000 ! h265parse ! qtmux ! filesink location=test.mp4 streams. ! queue ! nvoverlaysink
However, how do I encode the raw video while also capturing frames for processing, for example for detectnet inference or custom Python/C++ code? Simply replacing nvoverlaysink with an appsink (after converting to BGR format) does not seem to work.
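For reference, the appsink variant I tried looks roughly like this. This is only a sketch: the caps and the double nvvidconv step (NVMM NV12 to system-memory BGRx, then videoconvert to BGR, since nvvidconv cannot emit BGR directly) are my assumptions, and in a real application the appsink would be driven from Python/C++ rather than gst-launch-1.0, which simply discards appsink buffers.

```shell
# Sketch of the attempted pipeline (hardware-dependent; assumes a Jetson with
# nvarguscamerasrc and the NVIDIA GStreamer plugins installed).
gst-launch-1.0 -e nvarguscamerasrc \
  ! 'video/x-raw(memory:NVMM), width=1024, height=768, format=NV12, framerate=30/1' \
  ! nvvidconv flip-method=2 ! tee name=streams \
  streams. ! queue ! nvv4l2h265enc bitrate=8000000 ! h265parse ! qtmux \
    ! filesink location=test.mp4 \
  streams. ! queue \
    ! nvvidconv ! 'video/x-raw, format=BGRx' \
    ! videoconvert ! 'video/x-raw, format=BGR' \
    ! appsink
```

In application code the same description (up to and including the BGR caps, with `appsink` replaced by `appsink name=sink` or driven through cv2.VideoCapture's GStreamer backend) is what I would expect to hand frames to the processing code.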
The motivation is that the performance of the online inference/processing can later be validated offline by decoding the saved video.