very poor video performance

hi
I eventually succeeded in installing some modules I need (autofs and bluetooth), but now I have another major problem. I noticed very poor video performance in VLC and frequent skips in XBMC. It seems neither one is using hardware acceleration to render the video overlay, as processor use rises to nearly 100% when playing video. What am I missing here? glxgears shows hardware OpenGL acceleration…
There are also video artefacts when I resize windows like gnome-terminal, and I've had some total freezes followed by restarts of X11 after a while (5-30 s).

greetings
hugo.-

I tried this also.

Installing either or both of VLC and XBMC requires an apt-get install, which also pulls in the prerequisite packages.

The problem here seems to be that none (or few) of those packages are linked against the special NVIDIA drivers that come with the Jetson. Thus, as you suspect, all display from those applications has to go through the CPU rather than being able to take advantage of the hardware acceleration.

Also, if there is a way to compile against the vendor-supplied GLUT/Mesa system, I haven’t found it. Linux4Tegra 19.2 (Ubuntu 14.04 version) doesn’t even seem to have a gl.h header anywhere.

I hope that the NVIDIA staff come back from a very restful long holiday weekend and are ready to post some documentation for us. A really useful start might be “how to compile software on L4T which needs to link against libGL and/or libMesa”. Once we know that, patching “configure” scripts becomes more possible. Perhaps they can post a patched aclocal/autoconf package that will handle this sort of thing. We can hope.

LATER: I stand corrected. Please see thread “Read the CUDA Getting Started Guide. It’s Brief. It’s essential”. Read and follow the directions and everything starts working.

That being said, if you want to run VLC or XBMC you probably want to build from source and link it against the new CUDA elements.

Video playback basically consists of two parts: decoding the video and displaying the decoded frames. For an optimal solution, both must be HW accelerated, with an optimized path between them.

Unfortunately, there is no single API for either that is supported by every application, so different platforms support different APIs and different video players benefit from different ones.

In the case of Linux for Tegra, the supported multimedia API is GStreamer, so using GStreamer-based players should give you the best performance. The Linux for Tegra 19.2 release seems to include nvgstplayer-1.0, which should use the optimal path for video rendering; check the readme for details.
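For reference, here is a minimal sketch of what a GStreamer-based player boils down to in C (GStreamer 1.0; the file URI is a placeholder), letting playbin pick the decoder and sink plugins automatically — on L4T it should select the hardware-accelerated elements if they're installed:

/* Minimal GStreamer 1.0 playback sketch.
 * Build with: gcc play.c $(pkg-config --cflags --libs gstreamer-1.0) -o play */
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* playbin auto-plugs the source, decoder, and sink. */
    GstElement *pipeline = gst_element_factory_make("playbin", "player");
    g_object_set(pipeline, "uri", "file:///path/to/video.mp4", NULL);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Block until an error or end-of-stream. */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

    if (msg != NULL)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}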

Unfortunately again, neither XBMC nor VLC supports GStreamer as a backend. They could use APIs like VDPAU, but that's not supported by Linux for Tegra.

Decoding on the CPU should be possible up to 1080p30, even though it's inefficient and consumes a lot of power.

For rendering, it should be possible to use OpenGL/GLX with the CPU-decoded video frames. That should be quite efficient, even though there may be some overhead in converting each video frame to an OpenGL texture.
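As a rough sketch of that rendering path (assuming a current GL context and frames already converted to RGB on the CPU; the function names and parameters are illustrative):

/* Upload a CPU-decoded frame into an OpenGL texture. */
#include <GL/gl.h>

GLuint upload_frame(const unsigned char *rgb, int width, int height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    /* This copy is the conversion overhead mentioned above: every
     * frame travels CPU memory -> driver -> GPU texture. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, rgb);
    return tex;
}

/* For subsequent frames, reuse the texture and only update the pixels. */
void update_frame(GLuint tex, const unsigned char *rgb, int w, int h)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGB, GL_UNSIGNED_BYTE, rgb);
}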

There is no need to link applications against NVIDIA's libGL at compile time, as that is resolved at runtime. Applications should be compiled against e.g. Mesa headers so that they run on any platform; the platform then provides the needed libGL.so, and the runtime linker finds it before any generic software-implemented Mesa libraries.
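One way to see which libGL the runtime linker actually picks, with no compile-time GL dependency at all, is something like this glibc-specific sketch (build with gcc whichgl.c -ldl -o whichgl):

#define _GNU_SOURCE
#include <dlfcn.h>
#include <link.h>
#include <stdio.h>

int main(void)
{
    /* Let the dynamic linker resolve libGL.so.1 exactly as it would
     * for a GL application. */
    void *handle = dlopen("libGL.so.1", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "no libGL.so.1: %s\n", dlerror());
        return 1;
    }
    /* Print the path of the library that was actually loaded. */
    struct link_map *map = NULL;
    if (dlinfo(handle, RTLD_DI_LINKMAP, &map) == 0 && map)
        printf("libGL resolved to: %s\n", map->l_name);
    dlclose(handle);
    return 0;
}

On a correctly set up L4T system this should print the path of the NVIDIA library under /usr/lib/arm-linux-gnueabihf/tegra/ rather than a generic Mesa one.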

Generally speaking, CUDA doesn't have anything to do with video playback. In theory there could be e.g. post-processing plugins using CUDA, but I don't know if such things exist.

They sure do. Check out https://developer.nvidia.com/NPP

I myself am primarily interested in fully HW-accelerated video playback in C++/OpenGL, similar to Android. Android uses the EGLImage encapsulation primitive and binds the decode surface to the OpenGL context for optimal performance. I've never used GStreamer before, so I hope to figure this out soon.

PS: Is there any source code available for nvgstplayer?

[EDIT] A quick look at gstgl_videosink seems to indicate that EGLImages are used. I guess it's time to learn some more about the GStreamer API. [/EDIT]

N.

OK, that's true, and interesting. I wonder if there are open-source video decoding frameworks on the PC side that use NPP?

On Android the most efficient way is to provide 12/16-bit YUV frames directly to the display instead of bloating them into 32-bit RGBA textures. On the Linux side, the XVideo (Xv) extension is traditionally used for showing video frames, but I guess nowadays it's common to encapsulate video frames in OpenGL textures for good integration with the application UI while maintaining good efficiency.

GStreamer has plugins to read video from a file, plugins to decode video, and plugins to display the video frames. Those plugins don't need to depend on each other in any way, so the same decoder plugin should work with different rendering plugins, like xvimagesink or something GL-based. There are some incompatibilities, though.
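Something like the following sketch illustrates that separation in C (gst_parse_launch builds a pipeline from a textual description; the file path is a placeholder):

#include <gst/gst.h>

/* Build the same decode chain, feeding whichever sink is requested. */
GstElement *make_pipeline(const char *sink_name)
{
    char desc[256];
    g_snprintf(desc, sizeof(desc),
               "filesrc location=/path/to/video.mp4 ! decodebin ! %s",
               sink_name);
    return gst_parse_launch(desc, NULL);
}

/* make_pipeline("xvimagesink") or make_pipeline("glimagesink"):
 * the decoder plugin is the same either way. */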

I've always used Xv-based renderers, so I don't know how to use GStreamer with GL-based applications.

If GStreamer is really using EGLImages, the OpenGL interface is fairly simple.

http://www.khronos.org/registry/gles/extensions/OES/OES_EGL_image_external.txt

It's as easy as binding the texture ID to the GL_TEXTURE_EXTERNAL_OES target, which acts as a black box.
The format of the underlying EGLImage is vendor-dependent and is most likely some compressed YUV format; the YUV-to-RGBA conversion is handled automatically in the GLSL shader when performing a texture lookup.
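A sketch of what that looks like in OpenGL ES 2.0, assuming an EGLImage has already been created from the decoder output (how you obtain it is platform-specific):

#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

/* Fragment shader: samplerExternalOES hides the vendor's YUV layout;
 * the YUV->RGBA conversion happens inside texture2D(). */
static const char *frag_src =
    "#extension GL_OES_EGL_image_external : require\n"
    "precision mediump float;\n"
    "uniform samplerExternalOES tex;\n"
    "varying vec2 uv;\n"
    "void main() { gl_FragColor = texture2D(tex, uv); }\n";

void bind_eglimage(GLuint tex, EGLImageKHR image)
{
    /* Extension entry point, looked up at runtime. */
    PFNGLEGLIMAGETARGETTEXTURE2DOESPROC glEGLImageTargetTexture2DOES =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
            eglGetProcAddress("glEGLImageTargetTexture2DOES");

    glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
    /* Attach the decoder's EGLImage as the texture's storage; no copy. */
    glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES,
                                 (GLeglImageOES)image);
}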

I will need to check out the GStreamer API to handle synchronization, as I don't want it to update the EGLImage to the next frame until I'm done with it in my OpenGL app.
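If an appsink-style interface is available, the hold-until-done pattern might look like this sketch (GStreamer 1.0 appsink; assuming the pipeline ends in an appsink element):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

void render_loop(GstAppSink *sink)
{
    for (;;) {
        /* Blocks until the next frame (or NULL at EOS). */
        GstSample *sample = gst_app_sink_pull_sample(sink);
        if (!sample)
            break;

        GstBuffer *buf = gst_sample_get_buffer(sample);
        /* ... bind/draw the frame in the OpenGL app here ... */
        (void)buf;

        /* Only after unreffing may GStreamer reuse the underlying
         * buffer for the next frame. */
        gst_sample_unref(sample);
    }
}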

OK, I managed to get VLC built and it works pretty well, as does XBMC (installed from the repo).
I still get graphical artefacts as I resize windows, and Unity is a bit sluggish. I noticed compiz is using quite a bit of CPU:
  PID USER    PR  NI   VIRT    RES   SHR S %CPU %MEM    TIME+ COMMAND
 1853 ubuntu  20   0 341884 102112 63368 S 17.9  5.7 16:44.36 compiz
  952 root    20   0 232368  54516 38384 S  9.7  3.0  4:58.43 Xorg
I read somewhere that the board has a second video output signal. Is there a possibility to connect a second monitor to it with some adapter?

It has an eDP/LVDS connector. According to the information I gathered from NVIDIA, LVDS has not been tested, but eDP has.

It should be possible to create a passive (mechanical) adapter from eDP to DisplayPort, but I haven't found one yet. There are LVDS-to-DVI converters, which should work fine, but I haven't figured out exactly how to do it yet (which pins you need to connect).

Here are some converters:

Note that it’s likely that you can’t get a resolution higher than 1366x768 (single channel LVDS).

There's also a converter chip from eDP to HDMI, but I haven't found a dev board for it: the Parade Technologies PS171 (DP to HDMI/DVI).

With this converter, 1080p is not a problem.

It would be possible to make a new board with that IC that plugs right into the Jetson and provides an HDMI connector, but I don't consider myself good enough with high-speed connections yet to make one myself. I also haven't found a small-quantity source for this IC.

Another IC is http://www.st.com/web/en/resource/technical/document/data_brief/DM00056674.pdf which should be possible to get in smaller quantities.

Once you have the CUDA toolkit and related apps installed, try invoking graphics-heavy apps like XBMC with the following prepended to the command line:

LD_PRELOAD="/usr/lib/arm-linux-gnueabihf/tegra/libtegrav4l2.so:/usr/lib/arm-linux-gnueabihf/tegra/libcuda.so:/usr/lib/arm-linux-gnueabihf/tegra/libGL.so.1" /path/to/application

This certainly speeds up my flight simulator, and XBMC runs quite well.