Hi,
For an application I need to compose a single 1080p 30 fps video from three camera sources, display it over HDMI, and record it to flash storage.
This is a sample GStreamer 1.0 pipeline:
gst-launch-1.0 -ev \
  v4l2src device="/dev/video2" ! 'video/x-raw, width=640, height=480, framerate=30/1' ! videoconvert ! videoscale ! m.sink_0 \
  v4l2src device="/dev/video1" ! 'video/x-raw, width=1280, height=720, framerate=30/1' ! videoconvert ! videoscale ! m.sink_1 \
  videotestsrc ! 'video/x-raw, width=720, height=480, framerate=30/1' ! m.sink_2 \
  videomixer name=m sink_0::xpos=1280 sink_2::xpos=1280 sink_2::ypos=480 ! 'video/x-raw, width=1920, height=1080, framerate=30/1' ! tee name=t ! queue ! nvhdmioverlaysink sync=false \
  t. ! queue ! omxh264enc ! matroskamux ! filesink location=file.mkv sync=false
In this pipeline, the GPU apparently isn't being used for encoding (omxh264enc), compositing (videomixer), or converting/scaling (videoconvert, videoscale). Although it isn't saturating all CPU cores, performance is subpar.
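In case it helps with diagnosis, this is how I checked which NVIDIA/OMX elements are actually registered in this GStreamer install (exact plugin names may differ between L4T releases):

```shell
# List any NVIDIA- or OpenMAX-related elements registered with GStreamer 1.0
gst-inspect-1.0 | grep -i -E 'omx|nv'

# Show details of the OpenMAX H.264 encoder to confirm it is available
gst-inspect-1.0 omxh264enc
```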
I noticed that GStreamer 0.10 has nv_omx_h264enc and nv_omx_videomixer, but I haven't been able to reproduce the same pipeline on that version. Trying to use nv_omx_videomixer runs into this already reported issue: https://devtalk.nvidia.com/default/topic/824057/jetson-tk1/does-nv_omx_videomixer-work-in-jetson-tk1-/.
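For reference, this is roughly the kind of 0.10 pipeline I was attempting (a simplified sketch with test sources; the nv_omx_videomixer pad handling and caps here are my assumptions and may well be wrong, and in practice it fails as described in the linked thread):

```shell
# Sketch of a GStreamer 0.10 attempt using the NVIDIA OMX elements.
# Pad names/caps for nv_omx_videomixer are assumptions; this hits the
# nv_omx_videomixer issue linked above rather than running correctly.
gst-launch-0.10 -e \
  videotestsrc ! 'video/x-raw-yuv, width=1280, height=720, framerate=30/1' ! mix. \
  videotestsrc ! 'video/x-raw-yuv, width=640, height=480, framerate=30/1' ! mix. \
  nv_omx_videomixer name=mix ! nv_omx_h264enc ! matroskamux ! filesink location=test.mkv
```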
Any tips on getting this to run with acceptable performance?
Thanks.