I installed VirtualGL and TurboVNC on my Jetson TK1.
I can run the CUDA samples on the Jetson TK1 without an HDMI cable and watch the rendered image from Windows 8 using TurboVNC Viewer, or from an Android tablet using bVNC.
VirtualGL takes the rendered image from the OpenGL application, and TurboVNC sends a compressed image of the whole desktop to the client.
TurboVNC also receives user input from the VNC clients and sends it to the applications on the server.
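In practice, the workflow is roughly this (a sketch using the default install prefixes that appear later in this thread):

#On the Jetson: start a TurboVNC session (it picks a free display, e.g. :1)
/opt/TurboVNC/bin/vncserver
#Inside that VNC session, launch OpenGL apps through VirtualGL
/opt/VirtualGL/bin/vglrun ./oceanFFT
#Then connect from the client with TurboVNC Viewer or bVNC to <jetson-ip>:1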
What sort of performance are you getting through it? E.g., is there about 1 second of delay between the Jetson and what you see remotely, with only about 1 FPS of changes shown remotely, or is it better or worse than that?
I’m using the VNC client on the same local network as the Jetson TK1.
I haven’t tested it over a WAN yet.
When I use TurboVNC Viewer on Windows 8 to connect to the TurboVNC server on the Jetson TK1 and run oceanFFT (512x512), I see smooth animation.
I can rotate the camera with almost no delay.
When I maximize the oceanFFT window (1240x900), it runs less smoothly (about 10 FPS) and there is a small delay (100~200 ms).
I have followed all the steps and I can control the Jetson TK1 from my laptop (Ubuntu 14.04 or Windows 8.1). I can only see images from the game samples browser and from “no graphics” CUDA samples like “matrixMul”, but with CUDA samples like “particles” or “oceanFFT” I can’t see anything on my remote desktop. I am sure I am connected to my Jetson TK1 (I checked).
My Jetson TK1 is connected to the router via LAN, and my laptop connects via WiFi.
Does VirtualGL still use the GPU for graphics and compositing but redirect the final framebuffer to memory instead of HDMI? Does TurboVNC simply compress that framebuffer? Or is this all ARM-based graphics? Would the GPU then be completely free for compute?
TurboVNC compresses the screen image using libjpeg-turbo.
[url]http://www.libjpeg-turbo.org/[/url]
libjpeg-turbo uses NEON SIMD instructions on ARM systems.
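For a rough idea of how fast that JPEG path is on the board, you can check that the CPU advertises NEON and run the tjbench tool shipped with libjpeg-turbo (assuming the default /opt/libjpeg-turbo install prefix; adjust if yours differs):

#Confirm the CPU reports NEON support
grep -i -m1 neon /proc/cpuinfo
#Benchmark JPEG compression/decompression on any BMP or PPM test image
/opt/libjpeg-turbo/bin/tjbench test.ppm 95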
I tried some OpenGL programs from the TurboVNC client.
oceanFFT from the CUDA samples runs smoothly from the TurboVNC client.
smokeParticles runs at about 5 FPS.
It runs at almost the same frame rate when the Jetson TK1 is connected to an HDMI display.
glxgears runs at about 1000 FPS when the Jetson TK1 is connected to an HDMI display.
But it runs at only about 130 FPS from the TurboVNC client.
When I maximize the glxgears window, the frame rate drops to about 15 FPS.
I can run the samples in NVIDIAGameWorks/OpenGLSamples on the Jetson TK1.
But when I try to run them from the TurboVNC client, they print “Failed to initialize GLFW” and quit.
glxinfo from the TurboVNC client says:
OpenGL version string: 4.4.0 NVIDIA 21.3
OpenGL shading language version string: 4.40 NVIDIA via Cg compiler
I think the OpenGL programs are executed on the GPU.
But when the frame rate is above roughly 120 FPS or the screen size is large, copying and compressing the framebuffer becomes the bottleneck, so it cannot run as fast as when the Jetson is connected to an HDMI display.
If TurboVNC or VirtualGL could use the GPU's hardware encoder to compress the framebuffer, they might run faster at large screen sizes.
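One way to see where the time goes is VirtualGL's profiling output (the +pr option of vglrun, if your build supports it), which prints per-frame readback, compression, and total rates:

/opt/VirtualGL/bin/vglrun +pr ./oceanFFT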
Has anyone (besides demotomohiro) been able to follow the instructions above and verify that it works? I haven’t had time to test it properly myself, but if someone verifies it works then I’ll post it on the Wiki.
Ok, so this looks like it uses the TK1's GPU for compositing, and the final framebuffer is compressed via TurboJPEG with NEON SIMD optimizations and transmitted over the network. Does this mean that if you use the GPU gdb you'd freeze the graphics (or crash the device)? It would be convenient to have a software GL implementation optimized with NEON and then transmitted with TurboVNC as well, so that the GPU was completely idle (from the graphics standpoint). This way it could be remotely debugged with no issues. Is my assessment correct?
I’ve tested it using demotomohiro’s binaries, and it works very well:) There can be a slight delay sometimes, but the performance is very good overall.
A couple of things though - I didn’t have a screen section in my xorg.conf, so I edited it to look like this:
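(The exact section from that post isn't quoted here; purely as a hypothetical illustration, a minimal Screen section for headless L4T, assuming the default "Tegra0" device identifier from the stock xorg.conf, would look roughly like this:)

Section "Screen"
    Identifier "Default Screen"
    Device     "Tegra0"
EndSection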
I have built updated versions of VirtualGL, TurboVNC, and libjpeg-turbo.
The OpenGLSamples in NVIDIAGameWorks didn't run on the old TurboVNC with VirtualGL, but they work on the new version.
The Tab key works without editing any config files on the new one.
Tip: to run an OpenGL program from the VNC client using VirtualGL:
/opt/VirtualGL/bin/vglrun ./OpenGLProgram
Securing a TurboVNC Connection
Add the “-localhost” option to vncserver.
The “-localhost” option prevents remote VNC clients from connecting except through a secure tunnel.
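For example (assuming TurboVNC's default install prefix):

#Start the VNC server so it only listens on the loopback interface
/opt/TurboVNC/bin/vncserver -localhost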
Make sure that an SSH server is running on your Jetson TK1.
On a Linux client, run:
ssh -L 5901:localhost:5901 ubuntu@tegra-ubuntu
(Add the “-f -N” options if you want ssh to run in the background.)
Then connect your VNC viewer to localhost:5901.
I have built VirtualGL, TurboVNC, and libjpeg-turbo for 64-bit Linux For Tegra R24.1.
But they have not been tested on a Jetson TX1.
They were built and tested on an NVIDIA Shield Android TV running 64-bit Linux For Tegra R24.1.
They might work on a Jetson TX1 with 64-bit L4T R24.1, because the same 64-bit Sample Root Filesystem and 64-bit driver package for the Jetson TX1 are used on my Shield TV.
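The clone and configure steps for VirtualGL aren't shown above; assuming they mirror the TurboVNC steps below, they would look roughly like this:

#Build VirtualGL (sketch; exact CMake options may differ)
git clone https://github.com/VirtualGL/virtualgl.git
mkdir virtualgl-build
cd virtualgl-build
cmake -G "Unix Makefiles" -DTJPEG_LIBRARY="-L/opt/libjpeg-turbo/lib64/ -lturbojpeg" ../virtualgl
make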
vi pkgscripts/makedpkg
#Change "DEBARCH=aarch64" to "DEBARCH=arm64"
vi pkgscripts/deb-control
#Change "Architecture: aarch64" to "Architecture: arm64"
make deb
sudo dpkg -i virtualgl_2.5.1_arm64.deb
cd ..
#Build and install TurboVNC
git clone https://github.com/TurboVNC/turbovnc.git
mkdir turbovnc-build
cd turbovnc-build
cmake -G "Unix Makefiles" -DTVNC_BUILDJAVA=0 -DTJPEG_LIBRARY="-L/opt/libjpeg-turbo/lib64/ -lturbojpeg" ../turbovnc
make
If you get an error like #error "GLYPHPADBYTES must be 4",
edit ../turbovnc/unix/Xvnc/programs/Xserver/include/servermd.h and add the following code before the line "#ifdef __avr32__":
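(The exact snippet from the original post isn't reproduced here; a typical block for little-endian aarch64, which is what the error message is asking for, would be something like the following — treat it as an illustration, not a verified patch:)

#if defined(__aarch64__)
#define IMAGE_BYTE_ORDER   LSBFirst
#define BITMAP_BIT_ORDER   LSBFirst
#define GLYPHPADBYTES      4
#endif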
vi pkgscripts/makedpkg
#Change "DEBARCH=aarch64" to "DEBARCH=arm64"
vi pkgscripts/deb-control
#Change "Architecture: aarch64" to "Architecture: arm64"
make deb
sudo dpkg -i turbovnc_2.0.91_arm64.deb
On my Jetson TX1 with L4T R21.4, TurboVNC 2.0.1 and VirtualGL 2.4.1 work without editing xorg.conf or ~/.vnc/xstartup.turbovnc.
The Tab key works without editing ~/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-keyboard-shortcuts.xml.
Xorg must be running even without an HDMI display connected; this works because the default /etc/X11/xorg.conf in L4T R21.4 enables the "AllowEmptyInitialConfiguration" option.
When you run an OpenGL program from the TurboVNC client, that program must be executed with vglrun.
For example:
/opt/VirtualGL/bin/vglrun ./oceanFFT
Are you sure that your user has been added to the “vglusers” group? (See the check below.)
Do your CUDA samples work when you are using an HDMI display?
Do the CUDA samples that don’t use OpenGL (deviceQuery, matrixMul, etc.) work without TurboVNC?
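If you're not sure about the group membership, something like this will check and fix it (assuming VirtualGL was set up with vglserver_config, which creates the vglusers group):

#Check whether the current user is in vglusers
id -nG | grep -w vglusers
#If not, add the user, then log out and back in (or restart the VNC session)
sudo usermod -a -G vglusers $USER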