GStreamer pipeline replacement for hardware-accelerated encoding with the same pipeline output

Here is our current pipeline string.

appsrc name=input stream-type=0 is-live=true ! videoconvert ! video/x-raw ! queue ! x264enc bitrate=1024 byte-stream=true speed-preset=veryfast tune=zerolatency sliced-threads=true key-int-max=30 ! video/x-h264, profile=constrained-baseline, stream-format=byte-stream ! appsink name=output emit-signals=true

How would I change this string so that the output into appsink stays the same, but uses Jetson hardware-accelerated encoding instead, i.e. "omxh264enc" or "nvv4l2h264enc"?

I'm unable to try it right now, but you could try something like this (assuming your app sends BGR frames; note that omxh264enc is deprecated on recent L4T releases, so I'd go with nvv4l2h264enc):

appsrc name=input stream-type=0 is-live=true ! queue ! videoconvert ! video/x-raw, format=BGRx ! nvvidconv ! nvv4l2h264enc ! video/x-h264, profile=constrained-baseline, stream-format=byte-stream ! queue ! appsink name=output emit-signals=true
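
If you want to sanity-check the hardware path before wiring it into your app, you could first run it from a terminal with a test source and a dummy sink standing in for your appsrc and appsink (the 640x480@30 BGR caps here are an assumption, match them to what your app actually pushes):

gst-launch-1.0 videotestsrc is-live=true ! 'video/x-raw, format=BGR, width=640, height=480, framerate=30/1' ! videoconvert ! 'video/x-raw, format=BGRx' ! nvvidconv ! nvv4l2h264enc ! 'video/x-h264, profile=constrained-baseline, stream-format=byte-stream' ! fakesink

Adding -v to gst-launch-1.0 prints the negotiated caps at each link, so you can check that the H264 byte-stream caps match what the x264enc pipeline produced.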

Can you explain the reasoning behind your decision-making?

Why use two queues now instead of one, and why are they placed where they are? Why is nvvidconv placed where it is? Why do we have to use both videoconvert and nvvidconv?

nvvidconv can copy from CPU-allocated memory into NVMM memory. It can also convert formats, but it doesn't support 3-byte formats such as BGR.
So I first use videoconvert to convert into BGRx; this shouldn't add much CPU load, as it just pads each pixel with a fourth byte.
Then I use nvvidconv to convert into a YUV format (probably NV12 on recent L4T releases) and output into NVMM memory, as expected by the encoder.
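
If you want to see (or force) that, you can make the caps explicit at each stage. Here is a sketch of the same pipeline fully capped (NV12 is my assumption here, your release may negotiate something else):

appsrc name=input stream-type=0 is-live=true ! queue ! videoconvert ! video/x-raw, format=BGRx ! nvvidconv ! video/x-raw(memory:NVMM), format=NV12 ! nvv4l2h264enc ! video/x-h264, profile=constrained-baseline, stream-format=byte-stream ! queue ! appsink name=output emit-signals=true

The video/x-raw(memory:NVMM) caps are what tell nvvidconv to output into NVMM buffers rather than copying back to CPU memory.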
I've used two queues to decouple this conversion/encoding path from your appsrc and appsink, so that they can run on different CPUs, but you should test whether this scheme works well and adjust it to your application.
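
If latency is a concern, you could also bound each queue so it never accumulates more than a few frames; the buffer count below is a hypothetical starting point, not a tuned value:

appsrc name=input stream-type=0 is-live=true ! queue max-size-buffers=4 ! videoconvert ! video/x-raw, format=BGRx ! nvvidconv ! nvv4l2h264enc ! video/x-h264, profile=constrained-baseline, stream-format=byte-stream ! queue max-size-buffers=4 ! appsink name=output emit-signals=true

Each queue creates a thread boundary in GStreamer, which is what allows the conversion/encoding stage to run on a different core than the code feeding appsrc and draining appsink.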