Core dump when running Kivy demo app over X11 forwarding on Jetson Nano (OpenGL related)

Hi, I’ve been trying to run a Kivy demo app via SSH X11 forwarding; it shows for a moment, but then it core dumps.
The app warns that the OpenGL version is not recent enough (it detects 1.4 but requires at least 2.0) and hints that upgrading should help if you have problems. However, I can’t figure out how to upgrade those libraries. Would anyone give me a hand with this? Basically, I want to upgrade to a more recent version of OpenGL/Mesa.
Following is the console log from my last run, and also the glxinfo that shows the OpenGL/Mesa versions.

I’m trying this on a Nano with JetPack 4.2.1.

Thank you.
Regards,
Eduardo

glxinfo output:

(nspi) drakorg@drakorg-desktop:~/workspace/nspi/kivy_$ glxinfo | grep -i version
MESA-LOADER: failed to open swrast (search paths /usr/lib/aarch64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri)
libGL error: failed to load driver: swrast
server glx version string: 1.4
client glx version string: 1.4
GLX version: 1.4
OpenGL version string: 1.4 (2.1 Mesa 10.5.4)

Console output:

Hello from the pygame community. https://www.pygame.org/contribute.html
[INFO   ] [Image       ] Providers: img_tex, img_dds, img_pygame, img_pil, img_gif (img_ffpyplayer ignored)
[INFO   ] [Text        ] Provider: pygame(['text_pango'] ignored)
[INFO   ] [Window      ] Provider: x11(['window_egl_rpi', 'window_pygame'] ignored)
MESA-LOADER: failed to open swrast (search paths /usr/lib/aarch64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri)
libGL error: failed to load driver: swrast
FBConfig selected:
Doublebuffer: Yes
Red Bits: 8, Green Bits: 8, Blue Bits: 8, Alpha Bits: 8, Depth Bits: 24
[INFO   ] [GL          ] Using the "OpenGL" graphics system
[INFO   ] [GL          ] Backend used <gl>
[INFO   ] [GL          ] OpenGL version <b'1.4 (2.1 Mesa 10.5.4)'>
[INFO   ] [GL          ] OpenGL vendor <b'Mesa Project'>
[INFO   ] [GL          ] OpenGL renderer <b'Software Rasterizer'>
[INFO   ] [GL          ] OpenGL parsed version: 1, 4
[CRITICAL] [GL          ] Minimum required OpenGL version (2.0) NOT found!

OpenGL version detected: 1.4

Version: b'1.4 (2.1 Mesa 10.5.4)'
Vendor: b'Mesa Project'
Renderer: b'Software Rasterizer'

Try upgrading your graphics drivers and/or your graphics hardware in case of problems.

The application will leave now.
Fatal Python error: (pygame parachute) Segmentation Fault

Current thread 0x0000007f8d76f010 (most recent call first):
  File "/home/drakorg/.local/share/virtualenvs/nspi-ClYdI6po/lib/python3.6/site-packages/kivy/core/gl/__init__.py", line 75 in print_gl_version
  File "/home/drakorg/.local/share/virtualenvs/nspi-ClYdI6po/lib/python3.6/site-packages/kivy/core/gl/__init__.py", line 39 in init_gl
  File "/home/drakorg/.local/share/virtualenvs/nspi-ClYdI6po/lib/python3.6/site-packages/kivy/core/window/__init__.py", line 1225 in initialize_gl
  File "/home/drakorg/.local/share/virtualenvs/nspi-ClYdI6po/lib/python3.6/site-packages/kivy/core/window/__init__.py", line 1254 in create_window
  File "/home/drakorg/.local/share/virtualenvs/nspi-ClYdI6po/lib/python3.6/site-packages/kivy/core/window/__init__.py", line 981 in __init__
  File "/home/drakorg/.local/share/virtualenvs/nspi-ClYdI6po/lib/python3.6/site-packages/kivy/core/__init__.py", line 71 in core_select_lib
  File "/home/drakorg/.local/share/virtualenvs/nspi-ClYdI6po/lib/python3.6/site-packages/kivy/core/window/__init__.py", line 2068 in <module>
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap_external>", line 678 in exec_module
  File "<frozen importlib._bootstrap>", line 665 in _load_unlocked
  File "<frozen importlib._bootstrap>", line 955 in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 971 in _find_and_load
  File "/home/drakorg/.local/share/virtualenvs/nspi-ClYdI6po/lib/python3.6/site-packages/kivy/base.py", line 123 in ensure_window
  File "/home/drakorg/.local/share/virtualenvs/nspi-ClYdI6po/lib/python3.6/site-packages/kivy/metrics.py", line 174 in dpi
  File "/home/drakorg/.local/share/virtualenvs/nspi-ClYdI6po/lib/python3.6/site-packages/kivy/utils.py", line 505 in __get__
  File "helloworld.py", line 14 in build
  File "/home/drakorg/.local/share/virtualenvs/nspi-ClYdI6po/lib/python3.6/site-packages/kivy/app.py", line 829 in run
  File "helloworld.py", line 18 in <module>
Aborted (core dumped)
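A side note on the “OpenGL parsed version: 1, 4” line in the log above: Kivy appears to take only the leading major.minor of the version string and ignore the parenthesized “(2.1 Mesa 10.5.4)” part, which is why the check fails even though a 2.1 number appears in the string. Purely as an illustration (this is not Kivy’s actual code):

```shell
# Illustration only: split "1.4 (2.1 Mesa 10.5.4)" into major/minor the
# way the log's "parsed version: 1, 4" suggests Kivy does.
version_string='1.4 (2.1 Mesa 10.5.4)'
major=${version_string%%.*}     # everything before the first dot -> "1"
rest=${version_string#*.}       # everything after the first dot  -> "4 (2.1 ..."
minor=${rest%% *}               # up to the first space           -> "4"
echo "parsed: $major, $minor"   # -> parsed: 1, 4
```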

Hi,
On JetPack 4.3/Nano, GLX is 1.4 and OpenGL is 4.6.0:

nvidia@nvidia-desktop:~$ glxinfo | grep version
server glx version string: 1.4
client glx version string: 1.4
GLX version: 1.4
OpenGL core profile version string: 4.6.0 NVIDIA 32.3.1
OpenGL core profile shading language version string: 4.60 NVIDIA
OpenGL version string: 4.6.0 NVIDIA 32.3.1
OpenGL shading language version string: 4.60 NVIDIA
OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 32.3.1
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
    GL_EXT_shader_group_vote, GL_EXT_shader_implicit_conversions,

Please share the test app so that we can reproduce it and investigate.


Thanks for the info.
I’ll just upgrade to 4.3 and try again from there.
Best regards,
Eduardo

Oh, regarding the test app, it’s the Kivy hello world application, run from an SSH session forwarded to a Windows machine running MobaXterm.
Kivy version 1.11.0 (latest, fetched via pip3 install kivy), with USE_SDL=1 and USE_X11=1 set at the time of running pip3.
Python 3.6
JetPack 4.2.1
OpenGL is the one posted in the previous message.
MobaXterm latest version, v20.2 personal edition.

Regarding the code, I’ll paste it here anyway:

import kivy
kivy.require('1.0.6') # replace with your current kivy version !

from kivy.app import App
from kivy.uix.label import Label


class MyApp(App):

    def build(self):
        return Label(text='Hello world')


if __name__ == '__main__':
    MyApp().run()

Run from command line with:
$ python3 helloworld.py

The xclock and glxgears demos both work fine in this setup; it’s only with Kivy that I get the problem and the core dump.

FYI, when forwarding X11 you are no longer using the OpenGL of the Jetson. See:
https://devtalk.nvidia.com/default/topic/1072640/jetson-agx-xavier/running-graphical-cuda-sample-over-ssh-from-within-l4t-docker-container/post/5434137/#5434137

The gist is that the commands and events to use X11, and indirectly OpenGL, are generated on the Jetson, but it is the GPU and libraries of the system receiving those events that actually do the rendering:

Does your host PC have the correct release of these? You are receiving events from the Jetson, and the rendering device does not have a recent enough release.

Using a virtual X server on the Jetson, and then a virtual desktop program, would solve the issue, since both event generation and rendering would take place on the Jetson. The PC with the virtual client would simply receive the rendered result of the X events, not the events themselves.

Hi, thanks for the insight.
The thing is that I have the exact same setup on an x64 Ubuntu installation, and on that computer everything works fine (xclock, glxgears, and the Kivy demo app all run fine, rendered on my client via MobaXterm).
The glxinfo on the x64 shows a different OpenGL version than the glxinfo running on the Jetson, so it’s not showing the client’s capabilities; it’s something on the host.
I’ll look into that link and take into consideration what you’ve just said, but first of all I’m going to try upgrading to JetPack 4.3; I’m quite confident that will nail it for me. On the other hand, 4.3 also comes with a decent OpenCV version, which lets me avoid setting that up too, so it’d be a nice step up to 4.3 for all of that. If I still face a similar problem I’ll let you know and look deeper into it.
Thank you.
Eduardo

Yes, the host glxinfo (for one specific instance of an application’s requirements, a subset of the full glxinfo) must now match between what the Jetson generates and what the host PC is capable of interpreting. Events in general can get complicated, since there are extensions and options. This is not an error per se; it is just a requirement that, in the middle of running the application, one part cannot use a version which differs significantly from the other part (and now part runs on the Jetson, while part runs on the host PC). I suspect that when release versions match between the event generator (Jetson) and the event interpreter (the X server on the host PC), the error will go away.

Hi, I just installed JetPack 4.3, and glxinfo showed newer versions compared to my original post, which is great. [screenshot]
However, once I started to install my dependencies for the project (all via apt install), at some point some package installed an older version of something, because I’m now getting the exact same output as on JetPack 4.2.1. That makes me wonder whether the original 4.2.1 didn’t also come with a newer version of the GL libraries already, and it was just me who broke it by installing some old package.

What puzzles me is that none of my apt install commands are pinned to a specific version, so apt should always have installed the newest version; I don’t see how I could end up with an older one.

I’ll go straight to reflashing 4.3 for now, and try to pinpoint the exact library that takes me backwards with respect to the information glxinfo reports.

I’ll keep you posted.
Thank you.

Eduardo

Jetsons have many dependencies, and thus for compatibility a given JetPack release sticks to content compatible with that release. As soon as content from 4.2.1 and 4.3 is mixed, something will break due to incompatible releases. I suspect that in some cases a mix between the host PC and Jetson won’t matter, but when it comes to OpenGL forwarding (and CUDA: usually when OpenGL is forwarded, so are the CUDA content and the GPU requirements for that CUDA code), you’ll end up with errors upon mixed releases.

Hi, when I spoke about my project’s dependencies I meant standard packages installed via apt; by no means did I do any kind of mix-up between 4.2.1 and 4.3. Once I started fresh on 4.3, I did nothing to pull anything in from 4.2.1.

Having said that, and after several reinstallations, I think I’ve finally located the step that wrecked my OpenGL installation on 4.3. So far I’ve got everything working with the stock OpenGL versions (Mesa 19.x), detected from an SSH session (which is what I wanted). The only thing I did not do, compared to other attempts, was upgrade the distro. As soon as you finish installing JetPack 4.3, the auto-updater pops up telling you there are upgrades you could install, totaling (at the time of this writing) around 450 MB, which includes upgrades to many system libraries. I’m pretty sure those upgrades are what ruin my OpenGL/Mesa setup. I still have to confirm this, but without upgrading I haven’t had a single problem so far: I was able to install everything I needed and run the Kivy demo locally, both with the x11 and sdl2 window providers, and also via SSH, all while keeping the original/stock Mesa/GL drivers.

I’ll run the test and let you know as soon as I can, but so far I’ve solved my problem simply by declining the system upgrade offered right after installation finished.

Q. for NVIDIA: which hardware-accelerated library file from “/usr/lib/aarch64-linux-gnu/tegra/” needs to be copied (or symlinked) into “/usr/lib/xorg/” under R32.3.1?

In the past (earlier releases; I don’t know about this one) there were times when Xorg Mesa package upgrades overwrote the NVIDIA hardware-accelerated release of either libGLX.so or libGL.so (I can’t remember which for sure). I suspect this is the issue: an Xorg update overwrote the NVIDIA version of one of those files. Changes with R32.3.1 mean I don’t know for certain which file that would be (someone from NVIDIA can probably comment on that). Stock Mesa files are used in most places, but the lack of the NVIDIA version of the hardware-accelerated drivers and libraries will cause the GUI to continuously crash and attempt to respawn.

The gist is that the files in “/usr/lib/aarch64-linux-gnu/tegra/” (or subdirectories of “/usr/lib/aarch64-linux-gnu/”) will always be the preferred files in any case where that file name is encountered in an xorg directory, and hardware accelerated versions of these libraries will be installed by NVIDIA on the Jetson for a small subset of the aarch64 libraries (typically OpenGL/GLES/GLU related files would be from NVIDIA). Content in “/usr/lib/xorg/modules/” or subdirectories are generally from the Xorg install, but not all. The NVIDIA version will thus exist twice at times (or else be a symbolic link in an xorg directory pointing to an aarch64-linux-gnu directory).

These NVIDIA libraries supersede the Xorg versions in those cases, and in some Xorg updates the NVIDIA version can be incorrectly overwritten with the Xorg/non-NVIDIA version. This results in the GUI failing and repeatedly attempting to respawn, as it is doing for you right now. Copying the NVIDIA library file from “/usr/lib/aarch” to the location of the same file name in “/usr/lib/xorg/” (for example via ssh or serial console) will fix this, since the Xorg package never updates the aarch64 version.

Prior to R32.3.1 all of those files were added by unpacking of a tar archive, and the “/etc/nv_tegra_release” file contained checksums. If a checksum failed, then you knew exactly which file was overwritten and where the correct file could be copied from. In R32.3.1 the checksums no longer exist because those libraries were migrated to “.deb” packages. Thus I am not sure which file might be overwritten on the most recent release.
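For those earlier releases that still shipped the checksum list, the verification step can be sketched as below. This is demonstrated on a throwaway file rather than the real “/etc/nv_tegra_release” (whose exact format is not reproduced here, and which no longer carries checksums in R32.3.1); the point is only how `sha1sum -c` flags an overwritten file.

```shell
# Sketch of the idea behind the old checksum verification: keep a list of
# "<sha1>  <file>" lines and let "sha1sum -c" flag anything that changed.
tmpdir=$(mktemp -d)
echo 'pretend this is an NVIDIA library' > "$tmpdir/libexample.so"
( cd "$tmpdir" && sha1sum libexample.so > checklist )   # record the checksum
( cd "$tmpdir" && sha1sum -c checklist )                # reports "libexample.so: OK"
echo 'overwritten by a package update' > "$tmpdir/libexample.so"
( cd "$tmpdir" && sha1sum -c checklist ) || echo 'mismatch detected'
```

On an affected system, a FAILED line would name exactly the file to copy back from the NVIDIA directory.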

If I am correct, then after you flash and do package updates the problem will just show up again, if it really is an updated package overwriting the NVIDIA version. The real solution would be to fix the NVIDIA package to mark itself as a substitute for the other, or at least to blacklist the update of any package which overwrites NVIDIA’s version. The workaround would be to simply copy the correct file back into place under the xorg directory.

Hi,
GLVND is enabled on Ubuntu 18.04, so we install libGLX_nvidia.so.0 and libglxserver_nvidia.so. These will not get overwritten by a Mesa installation, because Mesa installs libGLX_mesa.so.0.0.0 and libglx.so.

/var/log/Xorg.0.log should contain lines similar to the ones below, indicating that the server is using the NVIDIA GLX driver:

[1566266.903] (II) Loading sub module "glxserver_nvidia"
[1566266.903] (II) LoadModule: "glxserver_nvidia"
[1566266.903] (II) Loading /usr/lib/xorg/modules/extensions/libglxserver_nvidia.so
[1566266.911] (II) Module glxserver_nvidia: vendor="NVIDIA Corporation"
[1566266.911]   compiled for 4.0.2, module version = 1.0.0
[1566266.911]   Module class: X.Org Server Extension
[1566266.911] (II) NVIDIA GLX Module  418.00  Debug Build  (integ_stage_rel) 
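To check for those lines without reading the whole log, a plain grep works; on the Jetson itself one would run `grep glxserver_nvidia /var/log/Xorg.0.log`. The snippet below demonstrates the same filter against an inline sample (copied from the log above) so it runs anywhere:

```shell
# Filter the NVIDIA GLX module lines out of an Xorg log.
# xorg_log_sample stands in for the real /var/log/Xorg.0.log.
xorg_log_sample='[1566266.903] (II) Loading sub module "glxserver_nvidia"
[1566266.911] (II) NVIDIA GLX Module  418.00  Debug Build  (integ_stage_rel)'
printf '%s\n' "$xorg_log_sample" | grep 'glxserver_nvidia'
```

If the grep on the real log comes back empty, the server loaded a non-NVIDIA GLX module, which points back at the overwritten-library scenario discussed above.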

Thanks, I have not looked closely at this in more recent releases. Good to know packages have worked around the very-long-ago issue.