display video on HDMI without using GPU

I capture the camera in 1920×1080p60, process the video, and show it on a monitor, using GStreamer.
The problem is that displaying video via HDMI takes resources from the GPU, which I would like to use for my own image processing. Is there any way to avoid using the GPU while video is displayed over HDMI? Maybe some other library instead of GStreamer?

Please share your GStreamer pipeline.

And do you see the GPU frequency rising to high values in tegrastats?

The pipeline is very simple:
gst-launch-1.0 v4l2src device="/dev/video1" ! xvimagesink
Yes, I see that GPU usage grows to about 30% on average. I'm not sure it is a GStreamer problem. GPU usage also grows when I quickly move any window on the desktop.

The CPU/GPU usage should be lower with nvoverlaysink. Please try
gst-launch-1.0 v4l2src device="/dev/video1" ! nvvidconv ! 'video/x-raw(memory:NVMM),format=I420' ! nvoverlaysink

It works perfectly: 0% GPU and an increase of about 10% CPU. I tested it from the command line; now I will implement it in C source code.
Is there any way to reduce the latency from input to output? As I understand it, GStreamer does some buffering in the sink element to reduce the impact of jitter on the input to the sink. How can I reduce this buffering latency?
Today I measured about 1-2 frames of latency from video input to memory, and about 3-4 frames of latency on the output.
Maybe use some other library?
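One knob worth trying before switching libraries is the sink's clock synchronization: by default a GStreamer sink holds each buffer until its timestamp, which adds display latency. The following variant of the pipeline above is a sketch, assuming nvoverlaysink honors the standard basesink `sync` property (actual latency savings depend on the platform and driver):

```shell
# Render frames as soon as they arrive instead of waiting on buffer timestamps
gst-launch-1.0 v4l2src device="/dev/video1" ! nvvidconv ! \
  'video/x-raw(memory:NVMM),format=I420' ! nvoverlaysink sync=false
```

Disabling sync trades smoothness for latency: frames display immediately, but jitter on the capture side becomes visible on the output.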

Hi Alex66,
If you use a YUV sensor connected to the TX1 CSI port, you may try

Or you may refer to the sample tegra_multimedia_api/samples/12_camera_v4l2_cuda. Using the low-level API eliminates possible latency in the GStreamer framework.

I did not succeed in implementing the GStreamer pipeline
gst-launch-1.0 v4l2src device="/dev/video1" ! nvvidconv ! 'video/x-raw(memory:NVMM),format=I420' ! nvoverlaysink
in my C code. Is there any source code example?

Here are some examples:
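For reference, a minimal sketch of running the command-line pipeline from C is to hand the same pipeline string to gst_parse_launch. This is not the only approach (you can also build each element with gst_element_factory_make), and it assumes the NVIDIA plugins are installed and /dev/video1 is the correct device, as in the command line above:

```c
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    GstElement *pipeline;
    GstBus *bus;
    GstMessage *msg;
    GError *error = NULL;

    gst_init(&argc, &argv);

    /* Same pipeline string as the gst-launch-1.0 command;
     * no shell quoting is needed around the caps here. */
    pipeline = gst_parse_launch(
        "v4l2src device=/dev/video1 ! nvvidconv ! "
        "video/x-raw(memory:NVMM),format=I420 ! nvoverlaysink",
        &error);
    if (pipeline == NULL) {
        g_printerr("Failed to build pipeline: %s\n", error->message);
        g_error_free(error);
        return -1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Block until an error occurs or the stream ends */
    bus = gst_element_get_bus(pipeline);
    msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                                     GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
    if (msg != NULL)
        gst_message_unref(msg);
    gst_object_unref(bus);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```

Build it with pkg-config, e.g. `gcc main.c -o camtest $(pkg-config --cflags --libs gstreamer-1.0)`. Note that the shell quotes around the caps string in gst-launch are only for the shell; inside the C string they must be omitted.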