Armhf Docker image on L4T arm64 with GPU-accelerated OpenGL? = Netflix playback :-)

Hi guys,…and girls?

I have a question for someone with Docker experience.

Would it be possible, in theory, to run a Docker container with an armhf-based application (for example, chromium-browser:armhf) with support for GPU acceleration via OpenGL?

I know the first part is simple and works great …the performance is actually quite snappy. However, there is no GPU acceleration …only software rendering via Chrome's built-in SwiftShader.

With the advent of nvidia-docker coming to the Jetson lineup, I was wondering what would be required to enable GPU passthrough.

Would armhf versions of the NVIDIA OpenGL drivers need to be installed in the Docker image?

If so, would there be a way to use the armhf drivers from the TX1 driver package in the Docker image?

Please take a look at this GitHub repo: https://github.com/teacupx/docker-chromium-armhf

Like I said, the Docker image works fine, but I’m not exactly sure what would be required for OpenGL acceleration.

If we could get this working on the Nano, we would have a solution for watching DRM content on the Nano, i.e. Netflix, Amazon Video, and Hulu.

The Docker image works, and I have tested Netflix with the libwidevinecdm.so binary; playback works as well …we just need OpenGL.

P.S. It needs to be an armhf version of Chromium because Google has never released an arm64 version of the libwidevinecdm.so binary.

As a dog, I feel left out!

I don’t think the NVIDIA 64-bit ARM GPU driver does 32-bit translation.
I could be wrong, though. Feel free to experiment and let us know how it goes :-)

The ARMv8-a/arm64/aarch64 CPU has a compatibility mode, but the operating system considers that a foreign architecture. There is no ability to use that mode unless you basically have the 32-bit compat infrastructure in place as well: 32-bit linkers, 32-bit drivers for cases where there is no 64-bit version (and that is a terrible performance penalty), and so on. Unless Docker itself translates all of that, you won’t succeed.

As an added note the GPU drivers are for an integrated GPU (iGPU) which is directly connected to the memory controller. The drivers you see for download require PCIe, and are also the wrong architecture. I don’t think it would be possible to use hardware acceleration (only a software framebuffer) in 32-bit mode (that or a translation back and forth between 32-bit and 64-bit).

I could be wrong, but I doubt any of the Jetson series supports HDCP content protection. If there is content protection involved, I doubt it would work even if you had all of the above. If Google did release a 64-bit version it might work so long as it didn’t need content protection, but 32-bit is a problem for this. The last 32-bit platform which might do this is the TK1. Toradex (https://www.toradex.com/computer-on-modules/apalis-arm-family/nvidia-tegra-k1) still sells their DIMM format TK1 (you’d get one of their carrier boards to go with it), so if you absolutely must have this as armhf, then this would be your only practical solution. Just beware that 32-bit speeds are not even in the same class as 64-bit solutions.

Thanks for the info. Like I mentioned before, the 32-bit version of Chromium runs fine in a 32-bit Docker container on my 64-bit Jetson Nano.

Performance is fine and comparable to using 64-bit Chromium with OpenGL composition disabled.

Also, I have tested DRM content decryption in this Docker container (i.e. Netflix) and it works …you just need to download the libwidevinecdm.so binary from a third-party source, drop it into /usr/lib/chromium-browser, and you're good to go.

The only piece of the puzzle that’s missing is OpenGL support. If snarky is correct that the 64-bit nvhost drivers are unable to process 32-bit instructions, then we might be out of luck, but if they can, then this is my idea.

Create a lightweight 32-bit Docker image of L4T R24.1 with the 32-bit Jetson TX1 driver package (the most recent 32-bit version available), then build the chromium-browser armhf Docker container I linked above on top of the base image we just created.

Run it all on the Nano using nvidia-docker, and make sure the container has hardware access to the Nano's /dev/nvhost devices and whatever else is required for GPU passthrough.
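A rough sketch of what that invocation might look like. The image tag is a hypothetical placeholder for whatever you build from the repo above, and the device-node list is an assumption — check `ls /dev/nvhost*` on your own Nano first:

```shell
#!/bin/sh
# Sketch of the "run it on the Nano" step. IMAGE is a hypothetical tag;
# the /dev/nvhost* and /dev/nvmap node names are assumptions to verify.
IMAGE=chromium-armhf

# Collect every GPU-related device node that actually exists on this host.
DEVICES=""
for dev in /dev/nvhost* /dev/nvmap; do
  [ -e "$dev" ] && DEVICES="$DEVICES --device=$dev"
done

# Build the command and print it for inspection before running anything.
CMD="docker run -it --rm$DEVICES -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix $IMAGE"
echo "$CMD"
# eval "$CMD"   # uncomment to actually launch the container
```

On a machine without the Tegra device nodes the loop simply adds nothing, so you can inspect the generated command anywhere before trying it on the Nano.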

Buy some popcorn then Netflix and chill.

Thoughts?

P.S. I believe the xserver needs to be run from within the container …it needs to use the 32-bit xserver.

I don’t see any reason why this couldn’t be accomplished within a chroot environment as well …it might even be easier than Docker …less contained, but whatever.
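The chroot variant might look something like this minimal sketch — the rootfs path is a hypothetical placeholder for an unpacked 32-bit R24.1 root filesystem, and it dry-runs by default (commands are printed, not executed; flip DRY_RUN=0 and run as root to do it for real):

```shell
#!/bin/sh
# Sketch of the chroot setup. $ROOTFS is a hypothetical path, not a
# tested recipe. DRY_RUN=1 (the default) only prints the commands.
ROOTFS=${ROOTFS:-/srv/r24.1-armhf}
DRY_RUN=${DRY_RUN:-1}

run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

# The chroot needs the host's kernel interfaces and device nodes:
for d in dev proc sys; do
  run mount --bind "/$d" "$ROOTFS/$d"
done
# ...and the host X socket, if X clients inside will talk to an outside server:
run mount --bind /tmp/.X11-unix "$ROOTFS/tmp/.X11-unix"

run chroot "$ROOTFS" /bin/bash
```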

Yes, the container would translate between 32-bit and 64-bit calls. R24.1 did not use a container, but it is similar in the sense that the whole user space was still 32-bit (and armhf would work there), while the kernel was 64-bit. The translation between 32-bit and 64-bit while talking between kernel and user space slows down R24.1 compared to a purely 64-bit implementation which doesn’t use that translation. For the R24.1 case the kernel itself does not suffer the compatibility mode performance penalty since it is actually running 64-bit.

That’s right.

So what I’m attempting to do is use the 32-bit R24.1 userspace drivers in a container or a chroot on the Jetson Nano to provide 32-bit OpenGL and xserver driver support.

I’m starting an xserver with the correct ABI for R24.1 from within the container and binding the Nano's /dev/nvhost devices to the container.

The goal is OpenGL support for 32-bit applications on the Jetson Nano.

I haven't succeeded yet.

It might work, but I couldn’t say for sure. At the time 24.1 came out the 64-bit ARMv8-a/arm64/aarch64 was new and there was almost no software ported to 64-bit ARM. The kernel was of course the starting point, and then applications were added in (but were still 32-bit in R24.1). The kernel itself was 64-bit, but I think Xorg was itself 32-bit (I’m not positive about that…if someone knows if R24.1 actually had a 64-bit Xorg, or can verify 32-bit Xorg, please speak up). The NVIDIA driver loads to a particular ABI in the Xorg software, and this is what gives hardware acceleration. The question is if you can identify and load that file on whatever Xorg server you use.

I don’t have the R24.1 “/usr/lib/aarch64-linux-gnu/tegra/libglx.so” handy (sorry, a victim of running out of space on my system), but if you run various utilities to identify how this file links (e.g., 32-bit, 64-bit via ldd and/or nm), and then make sure it is compatible with both the R24.1 Xorg ABI version and the 32-bit/64-bit of the Xorg binary, then your odds of success go up. Actually booting an R24.1 TX2 would help find the ABI from the “/var/log/Xorg.0.log”.
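As a concrete way to do that linkage check: ldd and nm may refuse to run against a foreign-architecture binary, but the ELF header itself says which class a file is — byte 5 (EI_CLASS) is 01 for 32-bit and 02 for 64-bit. A small sketch:

```shell
#!/bin/sh
# Report whether a binary/library is 32-bit or 64-bit by reading EI_CLASS
# (the fifth byte of the ELF header): 01 = 32-bit, 02 = 64-bit.
elf_class() {
  case "$(od -An -tx1 -j4 -N1 "$1" | tr -d ' ')" in
    01) echo "32-bit" ;;
    02) echo "64-bit" ;;
    *)  echo "not an ELF file" ;;
  esac
}

# e.g. elf_class /usr/lib/aarch64-linux-gnu/tegra/libglx.so
# For the running server's ABI, grep its log:
#   grep "X.Org Video Driver" /var/log/Xorg.0.log
```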

Hey, so 32-bit R24.1 for the TX1 uses xserver ABI version 19, and I believe the entire userspace is 32-bit, including the xserver.

I don’t believe arm64 xserver can handle 32bit binaries even if I was using the correct xserver version on the nano.

So this leaves me with the only option: launch the xserver from within the container/chroot, which is a 32-bit userspace with the correct ABI version and the 32-bit xserver drivers (libglx.so, nvidia_drv.so) as well as the OpenGL components.

I need to provide the container/chroot with access to the Nano's GPU device nodes, for example /dev/nvhost, /dev/nvhost_gpu, etc. …I can do this by bind-mounting those devices and providing the correct permissions.

The devices are named the same on the Nano as they are on the TX1 and basically appear to be identical in that regard …I know the Nano's GPU is a slimmed-down version of the T210.

So the main question is …will the Jetson Nano's /dev/nvhost components be able to communicate successfully with the 32-bit R24.1 drivers from the TX1? And if they can, will I be able to run two xservers simultaneously, one from within the container/chroot and one in the Nano's normal userspace?

I’m still testing …pulling my hair out. I’m progressing slowly and I seem close …but I may just be close to realizing it isn’t possible.

I appreciate any help.

Chris.

So an update…

I decided to create an armhf-based system image to test this configuration, which included the 32-bit R24.1 TX1 userspace drivers and the 64-bit R32.1 kernel, modules, and configuration files.

I figured this would be an easier starting point to confirm whether the R24.1 OpenGL and xserver drivers would work with the Nano's GPU and kernel, rather than attempting this from within a chroot or container.
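Assembling that hybrid image might be sketched as below. Both archive names are hypothetical placeholders — substitute whatever the real R24.1 driver package and R32.1 module tarball are called on your system:

```shell
#!/bin/sh
# Sketch of the hybrid image: 32-bit R24.1 user space with the 64-bit
# R32.1 kernel modules layered in. Archive names are placeholders.
ROOTFS=${ROOTFS:-./rootfs}

overlay() {   # extract archive $1 into directory $2, if the archive exists
  if [ -f "$1" ]; then
    mkdir -p "$2" && tar -C "$2" -xpf "$1"
  else
    echo "skipping: $1 not found"
  fi
}

overlay r24.1-armhf-rootfs.tbz2   "$ROOTFS"              # hypothetical name
overlay r32.1-kernel-modules.tbz2 "$ROOTFS/lib/modules"  # hypothetical name
```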

So far I am unable to get an xserver to start, and it is failing at the point where it loads the nvidia_drv.so Xorg driver… the Xorg log says it is "unable to initialize the NVIDIA GPU-0 device".

However, if I remove the Xorg configuration, it will boot to the desktop and attempt to use the modesetting driver, which obviously doesn’t work…

I’ll post the xorg log when I get home.

Here's my most recent Xorg log… do any NVIDIA devs want to chime in and let me know what they think might be holding me back?

X.Org X Server 1.17.1
Release Date: 2015-02-10
[    47.914] X Protocol Version 11, Revision 0
[    47.914] Build Operating System: Linux 3.2.0-84-highbank armv7l Ubuntu
[    47.915] Current Operating System: Linux tegra-ubuntu 4.9.140-tegra #1 SMP PREEMPT Wed Mar 13 00:32:22 PDT 2019 aarch64
[    47.915] Kernel command line: tegraid=21.1.2.0.0 ddr_die=4096M@2048M section=512M memtype=0 vpr_resize usb_port_owner_info=0 lane_owner_info=0 emc_max_dvfs=0 touch_id=0@63 video=tegrafb no_console_suspend=1 console=ttyS0,115200n8 debug_uartport=lsport,2 earlyprintk=uart8250-32bit,0x70006000 maxcpus=4 usbcore.old_scheme_first=1 lp0_vec=0x1000@0xff780000 core_edp_mv=1125 core_edp_ma=4000 tegra_fbmem=0x800000@0x92cb6000 is_hdmi_initialised=1  root=/dev/mmcblk0p1 rw rootwait console=ttyS0,115200n8 console=tty0 fbcon=map:0 net.ifnames=0    root=/dev/mmcblk0p1 rw rootwait console=ttyS0,115200n8 console=tty0 fbcon=map:0 net.ifnames=0 rootfstype=ext4 root=/dev/mmcblk0p1 rw rootwait
[    47.915] Build Date: 11 September 2015  10:36:20AM
[    47.915] xorg-server 2:1.17.1-0ubuntu3.1 (For technical support please see http://www.ubuntu.com/support) 
[    47.915] Current version of pixman: 0.33.6
[    47.915] 	Before reporting problems, check http://wiki.x.org
	to make sure that you have the latest version.
[    47.915] Markers: (--) probed, (**) from config file, (==) default setting,
	(++) from command line, (!!) notice, (II) informational,
	(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
[    47.915] (==) Log file: "/var/log/Xorg.0.log", Time: Wed Jul 24 06:45:37 2019
[    47.915] (==) Using config file: "/etc/X11/xorg.conf"
[    47.915] (==) Using system config directory "/usr/share/X11/xorg.conf.d"
[    47.916] (==) No Layout section.  Using the first Screen section.
[    47.916] (==) No screen section available. Using defaults.
[    47.916] (**) |-->Screen "Default Screen Section" (0)
[    47.916] (**) |   |-->Monitor "<default monitor>"
[    47.916] (==) No device specified for screen "Default Screen Section".
	Using the first device section listed.
[    47.916] (**) |   |-->Device "Device0"
[    47.916] (==) No monitor specified for screen "Default Screen Section".
	Using a default monitor configuration.
[    47.916] (==) Automatically adding devices
[    47.916] (==) Automatically enabling devices
[    47.916] (==) Automatically adding GPU devices
[    47.916] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist.
[    47.916] 	Entry deleted from font path.
[    47.917] (WW) The directory "/usr/share/fonts/X11/100dpi/" does not exist.
[    47.917] 	Entry deleted from font path.
[    47.917] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist.
[    47.917] 	Entry deleted from font path.
[    47.917] (WW) The directory "/usr/share/fonts/X11/Type1" does not exist.
[    47.917] 	Entry deleted from font path.
[    47.917] (WW) The directory "/usr/share/fonts/X11/100dpi" does not exist.
[    47.917] 	Entry deleted from font path.
[    47.917] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist.
[    47.917] 	Entry deleted from font path.
[    47.917] (==) FontPath set to:
	/usr/share/fonts/X11/misc,
	built-ins
[    47.917] (==) ModulePath set to "/usr/lib/arm-linux-gnueabihf/xorg/extra-modules,/usr/lib/xorg/extra-modules,/usr/lib/xorg/modules"
[    47.917] (II) The server relies on udev to provide the list of input devices.
	If no devices become available, reconfigure udev or disable AutoAddDevices.
[    47.917] (II) Loader magic: 0xf6a7f10
[    47.917] (II) Module ABI versions:
[    47.917] 	X.Org ANSI C Emulation: 0.4
[    47.917] 	X.Org Video Driver: 19.0
[    47.917] 	X.Org XInput driver : 21.0
[    47.917] 	X.Org Server Extension : 9.0
[    47.918] (II) no primary bus or device found
[    47.918] (WW) "dri" will not be loaded unless you've specified it to be loaded elsewhere.
[    47.918] (II) "glx" will be loaded by default.
[    47.918] (WW) "xmir" is not to be loaded by default. Skipping.
[    47.918] (II) LoadModule: "extmod"
[    47.918] (II) Module "extmod" already built-in
[    47.918] (II) LoadModule: "glx"
[    47.918] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so
[    47.924] (II) Module glx: vendor="NVIDIA Corporation"
[    47.924] 	compiled for 4.0.2, module version = 1.0.0
[    47.924] 	Module class: X.Org Server Extension
[    47.924] (II) NVIDIA GLX Module  361.01.24.1  Release Build  (integ_stage_rel)  (buildbrain@mobile-u64-1086)  Tue May 17 16:33:41 PDT 2016
[    47.924] (II) LoadModule: "nvidia"
[    47.924] (II) Loading /usr/lib/xorg/modules/drivers/nvidia_drv.so
[    47.925] (II) Module nvidia: vendor="NVIDIA Corporation"
[    47.925] 	compiled for 4.0.2, module version = 1.0.0
[    47.925] 	Module class: X.Org Video Driver
[    47.925] (II) NVIDIA dlloader X Driver  361.01.24.1  Release Build  (integ_stage_rel)  (buildbrain@mobile-u64-1086)  Tue May 17 16:36:38 PDT 2016
[    47.925] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs
[    47.925] (++) using VT number 7

[    47.934] (WW) Falling back to old probe method for NVIDIA
[    47.934] (II) Loading sub module "fb"
[    47.934] (II) LoadModule: "fb"
[    47.934] (II) Loading /usr/lib/xorg/modules/libfb.so
[    47.934] (II) Module fb: vendor="X.Org Foundation"
[    47.934] 	compiled for 1.17.1, module version = 1.0.0
[    47.935] 	ABI class: X.Org ANSI C Emulation, version 0.4
[    47.935] (II) Loading sub module "wfb"
[    47.935] (II) LoadModule: "wfb"
[    47.935] (II) Loading /usr/lib/xorg/modules/libwfb.so
[    47.935] (II) Module wfb: vendor="X.Org Foundation"
[    47.935] 	compiled for 1.17.1, module version = 1.0.0
[    47.935] 	ABI class: X.Org ANSI C Emulation, version 0.4
[    47.935] (II) Loading sub module "ramdac"
[    47.935] (II) LoadModule: "ramdac"
[    47.935] (II) Module "ramdac" already built-in
[    47.936] (WW) VGA arbiter: cannot open kernel arbiter, no multi-card support
[    47.936] (II) NVIDIA(0): Creating default Display subsection in Screen section
	"Default Screen Section" for depth/fbbpp 24/32
[    47.936] (==) NVIDIA(0): Depth 24, (==) framebuffer bpp 32
[    47.936] (==) NVIDIA(0): RGB weight 888
[    47.936] (==) NVIDIA(0): Default visual is TrueColor
[    47.936] (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
[    47.936] (**) NVIDIA(0): Option "ModeDebug"
[    47.936] (**) NVIDIA(0): Enabling 2D acceleration
[    47.938] (EE) NVIDIA(GPU-0): Failed to initialize the NVIDIA graphics device!
[    47.938] (EE) NVIDIA(0): Failing initialization of X screen 0
[    47.938] (II) UnloadModule: "nvidia"
[    47.938] (II) UnloadSubModule: "wfb"
[    47.938] (II) UnloadSubModule: "fb"
[    47.938] (EE) Screen(s) found, but none have a usable configuration.
[    47.938] (EE) 
Fatal server error:
[    47.938] (EE) no screens found(EE) 
[    47.938] (EE) 
Please consult the The X.Org Foundation support 
	 at http://wiki.x.org
 for help. 
[    47.938] (EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.
[    47.938] (EE) 
[    48.075] (EE) Server terminated with error (1). Closing log file.
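For reference, the configuration behind that log boils down to a minimal Device section; a fragment consistent with the "Device0" and "ModeDebug" entries visible above would be (the real file may contain more — this is only a sketch, and ModeDebug is optional verbosity):

```
Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    Option     "ModeDebug"
EndSection
```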

I’m really grasping, but in part you would also need to pass through the EDID data (i2c from the right controller…not sure which one, but it is a query to address 0x50 within that i2c controller). EDID is how the monitor passes its configuration to the driver, and EDID modes are the only modes the drivers will accept.
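Not the raw i2c query itself, but as a quick sanity check you can see whether the kernel's DRM layer already has EDID for the connected monitor via sysfs (the exact connector name varies, hence the glob; which i2c bus carries the 0x50 query is left open, as above):

```shell
#!/bin/sh
# Diagnostic sketch: list any EDID blobs exposed under sysfs. A non-empty
# blob means the 64-bit side has read the monitor's EDID -- the data that
# would somehow need to reach the 32-bit driver.
found=0
for e in /sys/class/drm/card*/card*-*/edid; do
  [ -s "$e" ] || continue
  found=1
  echo "EDID present: $e ($(wc -c < "$e") bytes)"
done
[ "$found" = 1 ] || echo "no EDID exposed via sysfs"
```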

From what you said earlier:

I don't believe arm64 xserver can handle 32bit binaries even if I was using the correct xserver version on the nano.

You are correct that the server itself must be 32-bit. The driver should work with the Nano's GPU, but the driver itself must be a match for the Xorg application. This might imply taking both the driver and Xorg from the R24.1 release.

Most of what you want to do is an experiment with things I have not attempted before.