Video tearing: how to set XV_SYNC_TO_VBLANK in nvidia_drv.so

The tearing is visible during video playback, though how noticeable it is depends on the amount of motion
in the video. Note that for playing back video from a local file I set sync=true, not sync=false,
which we use for live playback to minimize latency.

filesrc location=/opt/data/sintel-1280-surround.mp4 ! decodebin ! videoconvert ! textoverlay font-desc="Sans, 12" ! xvimagesink sync=true
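For comparison, a live pipeline would set sync=false. The source below (RTP/H.264 over UDP on port 5000) is a hypothetical example to illustrate the property, not a pipeline from our system:

```shell
# Hypothetical live counterpart (assumed RTP/H.264 over UDP on port 5000);
# sync=false tells the sink not to wait on the pipeline clock, minimizing latency.
LIVE_PIPELINE='udpsrc port=5000 caps="application/x-rtp,media=video,encoding-name=H264,payload=96" ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! xvimagesink sync=false'

# Only attempt to run when gst-launch-1.0 is actually installed.
if command -v gst-launch-1.0 >/dev/null 2>&1; then
    eval "gst-launch-1.0 $LIVE_PIPELINE"
fi
```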

Our carrier board doesn’t have a camera, so I can’t test live video playback.

Cary

I have uploaded a 33 MB transport stream file (about 10 seconds) that can be used
to test this problem.

https://send.firefox.com/download/44935637b4/#RhOQNNnOmC4I2FH8Z5lMnw

Note this will be deleted after 24 hours or two downloads.

Playing this file with

gst-launch-1.0 filesrc location=file3.ts ! decodebin ! videoconvert ! xvimagesink

on the TX2 shows tearing of vertical edges. Running the same pipeline on a PC does not.

Hope this helps,

Cary

Hi cobrien,

gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! avdec_h264 ! xvimagesink
gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvvidconv ! xvimagesink
gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvoverlaysink
gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvegltransform ! nveglglessink

We have run the above pipelines without seeing tearing. Are you on a clean r28.2.1?

I went through these configurations very carefully with fresh installs of r28.2.1 on both
the NVIDIA eval board (1x HDMI) and our carrier board (2x HDMI, 1x DSI→HDMI).

Cornet carrier, window manager stopped (just Xorg)

systemctl stop lightdm
Xorg &
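(One detail when running bare Xorg like this: the X-based sinks need DISPLAY set by hand, since there is no session manager to do it. Assuming Xorg comes up on :0:)

```shell
# Bare Xorg starts no session, so nothing exports DISPLAY for us.
# :0 is an assumption -- check the Xorg log if the sink can't connect.
export DISPLAY=:0
```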
gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! avdec_h264 ! xvimagesink
Visible tearing
Total CPU 33%

gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvvidconv ! xvimagesink
Tearing, edge distortion
CPU utilization 7%

gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvoverlaysink
Good quality video
CPU 2.5%

gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvegltransform ! nveglglessink
Tearing
CPU 3%

NVIDIA eval board: same results.

Added a second HDMI to the Cornet board.

With the window manager running, tried the overlay video sink.
First screen:
gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvoverlaysink
Good quality video

Second screen:
vxBaseWorkerFunction[2575] comp OMX.Nvidia.std.iv_renderer.overlay.yuv420 Error -2147479552

This has to do with overlay allocations per screen: win_mask is a bitmask of the display controller’s hardware windows assigned to each framebuffer head. Ran the following script:

fbs="fb0 fb1"
for fb in $fbs
do
        echo 4 > /sys/class/graphics/$fb/blank
done

# disconnect...
echo 0x00 > /sys/class/graphics/fb0/device/win_mask
echo 0x00 > /sys/class/graphics/fb1/device/win_mask
# and reconnect
echo 0x03 > /sys/class/graphics/fb0/device/win_mask
echo 0x0c > /sys/class/graphics/fb1/device/win_mask


for fb in $fbs
do
        echo 0 > /sys/class/graphics/$fb/blank
done

Now I could run, individually, both

gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvoverlaysink

and

gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvoverlaysink display-id=1

However, they couldn’t be run concurrently; doing so resulted in bright red banding covering the second display, as described in this topic (with attached picture).

https://devtalk.nvidia.com/default/topic/1042576/problem-with-nvoverlaysink-on-3rd-display-and-with-textoverlay/?offset=6#5288994

This happens with or without the window manager running.

So that rules out using nvoverlaysink. Even if this were solved, we would still have
the problem of running a screen with 4 videos, which is one of the requirements.

The next option that provides good quality video is to run the application while lightdm/unity/compiz
are running.

It is possible to use a pipeline terminating in xvimagesink; however, the CPU utilization of the compiz
process approaches 90%, i.e. all of one core. The video doesn’t show tearing, but it is
jerky, with occasional lags when displaying live video.

So right now we don’t have a good way forward.

Any ideas would be helpful.

Thanks in advance,

Cary

Hi cobrien,
xvimagesink is a third-party element, and its internal memcpy operations can occupy CPU cycles.

Have you tried nveglglessink? Below is an example of launching multiple windows:
https://devtalk.nvidia.com/default/topic/976743/jetson-tx1/get-rgb-frame-data-from-nvvidconv-gstreamer-1-0/post/5022878/#5022878

Running just Xorg, and the following pipeline:

gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvegltransform ! nveglglessink

I would still see horizontal dislocations (tearing) of the video.

Just to add one more piece of information to this puzzle, I tried rendering the test file using
the MMAPI example program in tegra_multimedia_api/samples/00_video_decode (after JetPack is
installed). Looking at the code, the underlying rendering pipeline is based
on EGL (i.e. it uses NvEglRenderer).
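Since 00_video_decode expects an elementary H.264 stream rather than a transport stream, the test file first has to be demuxed. A sketch (the sample’s exact invocation may differ by release; check its usage text):

```shell
# Demux the TS into a byte-stream elementary H.264 file for the MMAPI sample.
EXTRACT='filesrc location=file3.ts ! tsdemux ! h264parse ! video/x-h264,stream-format=byte-stream ! filesink location=file3.h264'

# Only attempt to run when gst-launch-1.0 is actually installed.
if command -v gst-launch-1.0 >/dev/null 2>&1; then
    eval "gst-launch-1.0 -e $EXTRACT"
fi
# Then (arguments from memory -- verify against the sample's usage text):
#   ./video_decode H264 file3.h264
```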

Running it on the eval board, with the window manager running, there was no tearing.

Running it on our custom board, with 1 HDMI connected, there was no tearing.

As soon as the second HDMI was connected and initialized, the tearing started
on the first display, where the video was being shown. Unplugging the second
monitor made the tearing go away.

We have a requirement for a multi-monitor display, so this is yet another problem for us.

Cary

https://devtalk.nvidia.com/default/topic/1025021/jetson-tx1/screen-tearing-when-dual-monitor/1

Looks like this is a “known bug” with the TX1. I guess this is the same with the newer TX2 as well. Wow!

Dual display output is not a general case. If you need further support, please contact an NVIDIA salesperson to let us understand your use case and prioritize the issue. Thanks.

Hello DaneLLL,

This seems to be a classic NVIDIA answer to any question which reveals an issue with the product. Can you please provide me a contact name and number to reach out to prioritize the issue?

Thanks a lot.

Please go to https://www.nvidia.com/en-us/contact/sales/

Thanks

The topic in your previous link is an old issue with GLUT. However, it looks like we are not talking about GLUT here.

I wonder if you could share the app you are using. Is it just a sample from MMAPI?
Please try nvoverlaysink or the DRM renderer in the MMAPI samples first. These lightweight sinks should work without tearing.

We can’t use nvoverlaysink on two HDMI monitors. One of the monitors turns all red if we do.

Please add Option “TegraReserveDisplayBandwidth” “false” to your xorg.conf for the red screen issue. It is a known issue discussed in many threads.

Section "Device"
    Identifier  "Tegra0"
    Driver      "nvidia"
# Allow X server to be started even if no display devices are connected.
    Option      "AllowEmptyInitialConfiguration" "true"
    Option "TegraReserveDisplayBandwidth" "false"

EndSection
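Whether the option was actually picked up can be confirmed by searching the Xorg log (path assumed to be the default /var/log/Xorg.0.log):

```shell
# Look for the option in the Xorg log; no match usually means a typo or
# that the wrong xorg.conf was edited.
LOG=/var/log/Xorg.0.log
if [ -r "$LOG" ]; then
    grep -i "TegraReserveDisplayBandwidth" "$LOG"
fi
```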

Or you can just disable lightdm for good since the red screen issue comes from lightdm.

Hi unninair & cobrien,

Confirmed we can run the video on two HDMI outputs without the tearing issue.
Our test steps are listed below for your reference.

  1. Add Option “TegraReserveDisplayBandwidth” “false” to your xorg.conf (#22 Wayne’s command)
  2. sudo service lightdm restart
  3. Set win_mask
# cd /sys/class/graphics/fb0
# echo 4 > blank
# echo 0x0 > device/win_mask
# echo 0x3 > device/win_mask
# echo 0 > blank
# cd /sys/class/graphics/fb1
# echo 4 > blank
# echo 0x0 > device/win_mask
# echo 0xc > device/win_mask
# echo 0 > blank
  4. Run the below commands
gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvoverlaysink display-id=0
gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvoverlaysink display-id=1

Yes, this is correct. We are now able to run nvoverlaysink on all 3 monitors (we
have 2x HDMI and one DSI→HDMI output). We are running without the window manager,
and this approach gives us good video quality on all monitors. Running 3 HD, 1 SD,
and 2x videotestsrc streams (6 total), the CPU load is very low: 12% total.

Running without lightdm+compiz we would get tearing, due to sync-to-vblank
problems, with all video/image sink elements except nvoverlaysink.
Running with lightdm+compiz we would not get tearing, but the CPU load
was high and the video jerky, due (we believe) to data transfer in the
compiz compositing window manager.

Thank you for all the help.

More here:

https://devtalk.nvidia.com/default/topic/1042576/problem-with-nvoverlaysink-on-3rd-display-and-with-textoverlay/#5293703

Note that this required us to upgrade from

Tegra186_Linux_R28.1.0_aarch64.tbz2
Tegra_Linux_Sample-Root-Filesystem_R28.1.0_aarch64.tbz2

To

Tegra186_Linux_R28.2.1_aarch64.tbz2
Tegra_Linux_Sample-Root-Filesystem_R28.2.1_aarch64.tbz2

And adding

Option "TegraReserveDisplayBandwidth" "false" to xorg.conf

to solve the red screen problem when using 2x nvoverlaysink.

The version can be checked in the Xorg log (/var/log/Xorg.0.log):

[    55.307] (II) NVIDIA dlloader X Driver  28.2.1  Release Build  (integ_stage_rel)  (buildbrain@mobile-u64-773)  Thu May 17 00:16:09 PDT 2018

Note that nvoverlaysink seems to use direct frame buffer access, so it will
run without Xorg or lightdm. We are investigating implementing the system this way,
which maximizes the number of nvoverlaysink instances that can run (6). This will
provide 1 quad and 2 single-stream displays, which is the customer requirement.
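That layout (one quad display plus two single-stream displays) can be sketched using nvoverlaysink’s overlay and overlay-x/y/w/h properties from NVIDIA’s accelerated GStreamer guide. The window indices and geometries below are illustrative assumptions, not a tested configuration, and the overlay indices must match the win_mask assignment done earlier:

```shell
# Quad on display 0: four overlay windows tiled over an assumed 1920x1080 head.
# Single full-screen streams on displays 1 and 2.
QUAD_TILE='filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec'

run() {  # launch a pipeline in the background, only if gst-launch-1.0 exists
    command -v gst-launch-1.0 >/dev/null 2>&1 && eval "gst-launch-1.0 $*" &
}

run "$QUAD_TILE ! nvoverlaysink display-id=0 overlay=1 overlay-x=0   overlay-y=0   overlay-w=960 overlay-h=540"
run "$QUAD_TILE ! nvoverlaysink display-id=0 overlay=2 overlay-x=960 overlay-y=0   overlay-w=960 overlay-h=540"
run "$QUAD_TILE ! nvoverlaysink display-id=0 overlay=3 overlay-x=0   overlay-y=540 overlay-w=960 overlay-h=540"
run "$QUAD_TILE ! nvoverlaysink display-id=0 overlay=4 overlay-x=960 overlay-y=540 overlay-w=960 overlay-h=540"
run "$QUAD_TILE ! nvoverlaysink display-id=1"
run "$QUAD_TILE ! nvoverlaysink display-id=2"
wait
```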

Again, thanks for the help.

Cary

Now that the tearing is completely eliminated, another phenomenon has surfaced. When the camera/video pans from side to side, we see jerkiness of the whole image, as if a frame were missed. This goes away if we enable the sync flag; however, that adds a few seconds of latency. Is there a way to address this?

Could you share a simple program or another way to reproduce your issue? If this is no longer a tearing issue, could you file a new topic for it?