Installing nvidia-l4t-core package in a docker layer

I have a dockerfile:

ARG version=0.0.2
FROM nvcr.io/nvidia/l4t-base:r32.4.2
FROM balenalib/jetson-xavier-ubuntu:bionic
RUN echo "deb https://repo.download.nvidia.com/jetson/common r32.4 main" > /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
RUN echo "deb https://repo.download.nvidia.com/jetson/t194 r32.4 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
RUN apt-key adv --fetch-key http://repo.download.nvidia.com/jetson/jetson-ota-public.asc
RUN apt-get update
RUN apt-get install nvidia-l4t-core nvidia-l4t-firmware -y

So I am using the NVIDIA l4t-base image along with a balena Ubuntu bionic base image.
I add the NVIDIA package repos, run apt-get update, and try to install the nvidia-l4t-core and nvidia-l4t-firmware packages.

But I hit this error:

Preparing to unpack .../21-nvidia-l4t-core_32.4.3-20200625213407_arm64.deb ...
/var/lib/dpkg/tmp.ci/preinst: line 40: /proc/device-tree/compatible: No such file or directory
dpkg: error processing archive /tmp/apt-dpkg-install-KsEMHe/21-nvidia-l4t-core_32.4.3-20200625213407_arm64.deb (--unpack):
new nvidia-l4t-core package pre-installation script subprocess returned error exit status 1
Errors were encountered while processing:
/tmp/apt-dpkg-install-KsEMHe/21-nvidia-l4t-core_32.4.3-20200625213407_arm64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

Is it valid to install these packages in a docker layer as opposed to on the physical device?
Does JetPack install these in the fakeroot before flashing the device, or does it install them on the physical device after flashing?
How can I proceed?

[ This is the follow-up to Error running apply_binaries.sh ]

Hi,

We see the r32.4.3 keyword in the log. Would you mind building the image on r32.4.3 to see if that helps?

FROM nvcr.io/nvidia/l4t-base:r32.4.3

Thanks.

Thanks for your reply.
I found the answer here: https://forums.balena.io/t/getting-linux-for-tegra-into-a-container-on-balena-os/179421/20

Basically, nvidia-l4t-core is meant to be installed on a physical device: its pre-installation script reads /proc/device-tree/compatible, which only exists on real Tegra hardware, hence the complaint when installing it inside a docker build. But there is a workaround.
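
For anyone who finds this later, here is a rough sketch of the kind of Dockerfile that can work. This is my own summary rather than the exact recipe from the linked thread; the key idea is the .nv-l4t-disable-boot-fw-update-in-preinstall flag file, which is the commonly used way to make the nvidia-l4t-core pre-installation script skip the hardware-specific step that was failing on /proc/device-tree/compatible:

FROM balenalib/jetson-xavier-ubuntu:bionic

# Add the NVIDIA L4T apt repos and signing key, as in the original Dockerfile
RUN echo "deb https://repo.download.nvidia.com/jetson/common r32.4 main" > /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
RUN echo "deb https://repo.download.nvidia.com/jetson/t194 r32.4 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
RUN apt-key adv --fetch-key http://repo.download.nvidia.com/jetson/jetson-ota-public.asc

# Flag file checked by the nvidia-l4t-core preinst: with it in place, the
# boot-firmware-update step is disabled during pre-installation (the part
# that trips over the missing /proc/device-tree/compatible in a docker build)
RUN mkdir -p /opt/nvidia/l4t-packages/ && \
    touch /opt/nvidia/l4t-packages/.nv-l4t-disable-boot-fw-update-in-preinstall

RUN apt-get update && \
    apt-get install -y --no-install-recommends nvidia-l4t-core nvidia-l4t-firmware

On a real Jetson, /proc/device-tree is populated by the kernel from the board's device tree, which is why the same install works fine outside of a docker build.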