I’ve been trying to run an OpenGL application at maximum speed, without any vertical sync constraints.
However, I haven’t been able to turn off vsync, at least in fullscreen (in windowed mode it runs without vsync by default).
Since there is no nvidia-settings for the Jetson platform so far, I tried putting
Option "RegistryDwords" "SyncToVBlank=0"
Option "RegistryDwords" "XVideoTextureSyncToVBlank=0"
in the nvidia “Device” section of /etc/X11/xorg.conf, but I’m still getting vsynced frames.
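For context, this is roughly what my Device section looks like (the Identifier is just what I happen to use; yours may differ). Note that, if I understand the driver README correctly, only one RegistryDwords option is honored per device, so the two entries should be combined into a single semicolon-separated list:

```
Section "Device"
    Identifier "Tegra0"
    Driver     "nvidia"
    Option     "RegistryDwords" "SyncToVBlank=0;XVideoTextureSyncToVBlank=0"
EndSection
```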
I also tried calling my application with
export __GL_SYNC_TO_VBLANK=0; myapp
but that has no effect either.
Any clues on how to do it?
Moreover, I guess that triple buffering would also allow my application to run at a higher frame rate, but adding
Option "TripleBuffer" "true"
doesn’t seem to produce any effect…
I appreciate any help on either the vsync or triple buffering issues. Thanks.
Just something to check… updates can overwrite the nVidia version of libglx.so, which might change results. Verify that your libglx.so (located at /usr/lib/xorg/modules/extensions/libglx.so) is the nVidia version originally unpacked from the L4T drivers package (apply_binaries.sh should have installed the nVidia version).
More specifically, once you unpack your nVidia L4T release (I’m using R19.2, so my checksums are from that version) there will be a nv_tegra/nvidia_drivers.tbz2. Unpack it manually if you want to see its checksum. For R19.2, the sha1sum from both the L4T package and the installed system should equal:
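The comparison can be scripted along these lines (a sketch; `same_sha1` is my own helper name, and the second path assumes you unpacked nvidia_drivers.tbz2 into a local `nvidia_drivers/` directory):

```shell
# Report whether two files have the same sha1 checksum.
same_sha1() {
    a=$(sha1sum "$1" | awk '{print $1}')
    b=$(sha1sum "$2" | awk '{print $1}')
    [ "$a" = "$b" ] && echo "match" || echo "MISMATCH"
}

# Usage on the Jetson (second path is where you unpacked the package):
#   same_sha1 /usr/lib/xorg/modules/extensions/libglx.so \
#             nvidia_drivers/usr/lib/xorg/modules/extensions/libglx.so
```

A MISMATCH would point to the apt-update overwrite scenario described above.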
If the installed system fails to match, either the package was overwritten by an apt update, or else it was never applied. If the checksums match, does the TripleBuffer option show up in /var/log/Xorg.0.log?
The sha1 sums match.
The Xorg log includes
[ 16.347] (**) NVIDIA(0): Option "TripleBuffer" "true"
so I guess it should work as expected…
At least you know the configuration is not being rejected. I’m not sure how to verify whether this software is supposed to honor TripleBuffer.
I made a minimal working example in GLUT to test whether triple buffering is working: https://github.com/cdsousa/glbuffertest
Basically, it’s a moving bar with a frame-rate display. Within the display() function there is a sleep delay so that the render rate is no greater than 55 FPS. The function ends with a call to glFinish() to force a wait for the vsync (only when triple buffering is not in use).
- In systems with active vertical syncing AND triple buffering, it is expected that the FPS is just a little less than 55, with no tearing effects.
- In systems with active vertical syncing BUT NO triple buffering, it is expected that the FPS equals the screen refresh rate divided by a positive integer. With the example delay, a screen refreshing at 60Hz will show 30 FPS; a screen at 75Hz will show 37.5 FPS.
I’ve tested this example on both the Jetson and a desktop with an Nvidia card. On the desktop, the “TripleBuffer” flag in xorg.conf does indeed activate triple buffering, and both the enabled and disabled cases give the expected results. On the Jetson, the FPS is always 30 (for a 60Hz screen) no matter the xorg configuration…
Does anyone know whether this is related to the Tegra K1 chip or to the specific Jetson/L4T/driver setup?
I’m using Linux4Tegra 19.2 and its drivers. I’d be glad if someone could try this on the most recent Jetson setup.
Still no triple buffering in L4T 21.3…
Does anyone have a clue why?
Could you please try using nvidia-settings from another machine via ssh X display forwarding, as below?
(from the shell on device): DISPLAY=:0 ssh -Y dev-machine
(within the ssh session, now on dev-machine): nvidia-settings -a SyncToVBlank=0