I’m attempting to create a pipeline in GStreamer that captures with “nvcamerasrc”, splits the video stream into several cropped regions, and then encodes each region individually with the h265 encoder. An example gst-launch command that works is below.
gst-launch-1.0 nvcamerasrc fpsRange="30.0 30.0" ! \
'video/x-raw(memory:NVMM), \
width=(int)1920, \
height=(int)1080, \
format=(string)I420, \
framerate=(fraction)30/1' ! \
nvvidconv flip-method=2 ! \
tee name=t1 \
t1. ! tee name=t2 \
t1. ! tee name=t3 t2. ! queue ! \
videocrop left=860 top=440 ! \
fakesink \
t2. ! queue ! \
videocrop left=860 bottom=440 ! \
fakesink \
t3. ! queue ! \
videocrop right=860 bottom=440 ! \
fakesink \
t3. ! queue ! \
videocrop right=860 top=440 ! \
fakesink
While the “nvvidconv” element claims to support cropping, and there is a hardware-accelerated h265 encoder plugin, there seem to be a few issues when trying to implement this on the TX1:
- There doesn't seem to be a "tee" element that provides multiple copies of the input while keeping the video in the NVMM memory. (I'm aware of the nvtee plugin, but it appears to only have a single SRC pad)
- There doesn't seem to be a queue element that can operate on NVMM memory
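For reference, this is how I checked the two points above (gst-inspect output trimmed; exact pad names and caps depend on the L4T release installed):

```shell
# Show nvtee's pad templates. On my install it exposes only static
# src pads rather than request ("src_%u") pads like the standard tee,
# so it cannot fan out to an arbitrary number of branches.
gst-inspect-1.0 nvtee

# The standard tee and queue both advertise ANY caps, so they will
# negotiate video/x-raw(memory:NVMM) on paper -- the question is
# whether the buffers actually stay in NVMM through them.
gst-inspect-1.0 tee
gst-inspect-1.0 queue
```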
Without keeping the video stream in NVMM memory, the pipeline underperforms relative to what the hardware should be capable of.
Is there a way to achieve something like the example above while keeping everything in NVMM memory until after it has been encoded with “omxh265enc”, or is there another way to achieve the same result?
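For a single region, something like the following should in principle stay in NVMM all the way to the encoder (a sketch only, and an assumption on my part: I'm using nvvidconv's left/right/top/bottom properties, which as I read the docs take the pixel coordinates of the crop rectangle, to replicate the videocrop left=860 top=440 branch):

```shell
# Crop the 1060x640 bottom-right region entirely in NVMM, then
# hand it straight to the hardware h265 encoder and dump the raw
# elementary stream to disk. -e sends EOS on Ctrl-C so the file
# is finalized cleanly.
gst-launch-1.0 -e nvcamerasrc fpsRange="30.0 30.0" ! \
'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, \
 format=(string)I420, framerate=(fraction)30/1' ! \
nvvidconv left=860 top=440 right=1920 bottom=1080 ! \
'video/x-raw(memory:NVMM), width=(int)1060, height=(int)640' ! \
omxh265enc ! filesink location=crop0.h265
```

The open problem is fanning this out into four such branches without the tee/queue stages forcing the buffers out of NVMM.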