The platform is Jetson Xavier NX (JetPack 4.6 rev3).
The following pipeline is used to play a video:
gst-launch-1.0 filesrc location='/home/user/FHD.mp4' ! qtdemux ! queue ! h265parse ! nvv4l2decoder ! nvvidconv ! videorate rate=30 ! 'video/x-raw(memory:NVMM), format=NV12, framerate=(fraction)30/1' ! nvoverlaysink display-id=1
I want to play a video with an offset. So, the video, when played, must be displayed with an offset from (0,0) on display ID 1. The offset area will be shown in black. The video must maintain its original size when shifted — it must not be resized or squeezed. For example, if the video contains a circle, its diameter must remain unchanged (the video may be cropped to stay within the monitor boundaries, if needed).
I thought this would be simple, but I am struggling to find a solution. I tried the following pipeline and kept it simple by applying only a y-offset (assuming my monitor and video resolutions match and are both 1920x1080):
gst-launch-1.0 -e nvcompositor name=comp sink_0::xpos=0 sink_0::ypos=YOFFSET sink_0::width=1920 sink_0::height=1080-YOFFSET sink_1::xpos=0 sink_1::ypos=0 sink_1::width=1920 sink_1::height=YOFFSET ! 'video/x-raw(memory:NVMM)' ! nvoverlaysink display-id=1 filesrc location='/home/user/FHD.mp4' ! qtdemux ! queue ! h265parse ! nvv4l2decoder ! nvvidconv ! 'video/x-raw(memory:NVMM)' ! queue ! comp.sink_0 videotestsrc pattern=black ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12, framerate=30/1' ! queue ! comp.sink_1
What I see on the monitor is that the video is correctly shifted by the offset, but it is squeezed (the circle is no longer a circle but an ellipse).
The other problem with this approach is that I cannot shift the video in the opposite direction (in this case, shifting the video up so that the black offset area is at the bottom).
I also tried the following pipeline but got the same result (the video is squeezed and the offset can be applied in only one direction):
gst-launch-1.0 -e nvcompositor name=comp sink_0::xpos=0 sink_0::ypos=200 ! 'video/x-raw(memory:NVMM), width=1920, height=1080' ! nvoverlaysink display-id=1 filesrc location='/home/user/FHD.mp4' ! qtdemux ! queue ! h265parse ! nvv4l2decoder ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! queue ! comp.sink_0
The difference here is that in the terminal I got: DstComp rect's bottom out of boundary, set to maximum height
You need to add an nvvidconv that crops the video before the compositor, because the compositor will always try to fit the whole image into the sink pad's width and height. The documentation for nvvidconv is misleading: it states that left, right, top and bottom are the numbers of pixels to crop, but they are actually the coordinates of the ROI you want to keep from the original image (for your 200-pixel offset, bottom = 1080 - 200 = 880).
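As a quick way to check that semantics (a sketch only; the numbers assume a 1920x1080 input cropped to its top 880 rows, i.e. bottom=880 rather than bottom=200):
gst-launch-1.0 videotestsrc ! 'video/x-raw, width=1920, height=1080' ! nvvidconv left=0 right=1920 top=0 bottom=880 ! 'video/x-raw(memory:NVMM), width=1920, height=880' ! nvoverlaysink display-id=1
The cropped 1920x880 frame can then be placed at sink_0::ypos=200 with sink_0::width=1920 and sink_0::height=880, so the compositor has nothing to rescale.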
You can let nvcompositor handle the background itself using the background property. You can also use uridecodebin3 to handle file decoding and leave the caps negotiation to the elements:
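For example (a rough, untested sketch assuming a 1920x1080 video shifted down by 200 pixels; check gst-inspect-1.0 nvcompositor for the accepted values of the background property if the uncovered area is not already black, and note that a file with an audio track may need that track handled or discarded separately):
gst-launch-1.0 -e nvcompositor name=comp sink_0::xpos=0 sink_0::ypos=200 sink_0::width=1920 sink_0::height=880 ! 'video/x-raw(memory:NVMM), width=1920, height=1080' ! nvoverlaysink display-id=1 uridecodebin3 uri=file:///home/user/FHD.mp4 ! nvvidconv left=0 right=1920 top=0 bottom=880 ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=880' ! queue ! comp.sink_0
The key point is that sink_0::height matches the cropped height (880), so the video is composited 1:1 and only the top 200 rows of the output are left to the background.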
Hi Miguel, thank you very much for the reply. I am still struggling as your pipeline does not solve the issue.
The pipeline you provided still scales the video along the width axis based on the y-offset value: the bigger the offset, the narrower the video. In addition, I need to have control over the framerate and playback speed.
I also tried the following two pipelines, which did not solve the issue either.
In the first pipeline, the offset is correctly applied and the black is true black (the range is full rather than tv, I believe), but the video is still scaled in width when a y-offset is applied: