Install nvidia-l4t-multimedia on generic aarch64 node

Hi, we are trying to replace our Jetsons with more scalable AWS G5g nodes, mainly for CI. Unfortunately, the precompiled TensorRT library you can get for aarch64 links against a library that comes pre-installed as part of nvidia-l4t-multimedia on a Jetson.

After adding the proper repo one can install nvidia-l4t-multimedia, but the installation then fails due to a missing device tree:

echo "deb r32.4 main" | sudo tee -a /etc/apt/sources.list.d/cuda.list
sudo apt-get update && sudo apt-get install -y nvidia-l4t-multimedia

[2021-12-06T15:57:27.786Z] /var/lib/dpkg/ line 40: /proc/device-tree/compatible: No such file or directory
[2021-12-06T15:57:27.786Z] dpkg: error processing archive /var/cache/apt/archives/nvidia-l4t-core_32.4.4-20201016123640_arm64.deb (--unpack):
[2021-12-06T15:57:27.786Z]  new nvidia-l4t-core package pre-installation script subprocess returned error exit status 1
[2021-12-06T15:57:27.786Z] Errors were encountered while processing:
[2021-12-06T15:57:27.786Z]  /var/cache/apt/archives/nvidia-l4t-core_32.4.4-20201016123640_arm64.deb
[2021-12-06T15:57:27.786Z] E: Sub-process /usr/bin/dpkg returned an error code (1)
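For reference, the failure above comes from the nvidia-l4t-core preinst script reading /proc/device-tree/compatible, which only exists on real Tegra hardware. A blunt (and unsupported) workaround we experimented with is to unpack the .deb, short-circuit its preinst, and repack it. The sketch below demonstrates the mechanics on a tiny dummy package standing in for nvidia-l4t-core_32.4.4-20201016123640_arm64.deb, so it can be tried anywhere dpkg-deb is available; note this skips ALL pre-install checks, not just the device-tree one:

```shell
set -e
work=$(mktemp -d)

# Build a dummy package whose preinst fails the same way as nvidia-l4t-core
# does on non-Jetson hardware (stand-in, not the real package).
mkdir -p "$work/pkg/DEBIAN"
cat > "$work/pkg/DEBIAN/control" <<'EOF'
Package: dummy-l4t-core
Version: 1.0
Architecture: all
Maintainer: nobody <nobody@example.invalid>
Description: stand-in for nvidia-l4t-core
EOF
cat > "$work/pkg/DEBIAN/preinst" <<'EOF'
#!/bin/sh
# mimics the real platform check: fails when the device tree is absent
grep -q tegra /proc/device-tree/compatible
EOF
chmod 755 "$work/pkg/DEBIAN/preinst"
dpkg-deb -b "$work/pkg" "$work/dummy.deb" >/dev/null

# The actual workaround: raw-extract, neutralize the check, repack.
dpkg-deb -R "$work/dummy.deb" "$work/unpacked"
sed -i '1a exit 0  # skip Jetson-only platform checks' "$work/unpacked/DEBIAN/preinst"
dpkg-deb -b "$work/unpacked" "$work/patched.deb" >/dev/null
cp "$work/patched.deb" /tmp/patched-l4t-core.deb
echo "patched package written to /tmp/patched-l4t-core.deb"
```

Applied to the real nvidia-l4t-core .deb from /var/cache/apt/archives, the patched package can then be installed with `sudo dpkg -i`. Whether the resulting libraries actually work without Jetson hardware is a separate question.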

Any idea how we should set up the environment on a generic aarch64 Ubuntu machine to be able to use TensorRT? And please don’t tell me to use the SDK installer; I have not seen a worse tool for a professional use case yet.
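One avenue we are considering (not yet verified on G5g) is NVIDIA’s server-class “sbsa” CUDA apt repository, which ships aarch64 TensorRT builds that do not depend on L4T packages. A sketch, assuming Ubuntu 20.04; the distro path and package name are assumptions to check against the repo index for your release:

```shell
# Assumed: the ubuntu2004/sbsa CUDA repo; adjust for your Ubuntu release.
distro=ubuntu2004
echo "deb https://developer.download.nvidia.com/compute/cuda/repos/${distro}/sbsa/ /" | \
  sudo tee /etc/apt/sources.list.d/cuda-sbsa.list
# The repo's signing key must also be trusted (e.g. via NVIDIA's keyring
# package) before apt-get update will accept it.
sudo apt-get update
sudo apt-get install -y libnvinfer-dev  # package name may differ by TensorRT version
```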



Please note that some hardware (e.g. DLA, VIC, …) is Jetson-specific.
You will need a Jetson platform to use that hardware.

Since this is a general TensorRT question rather than a Jetson-specific one,
it’s recommended to file a topic on the TensorRT board for better support.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.