In my current workflow, I use a Docker image based on nvcr.io/nvidia/tensorrt:22.10-py3 together with a locally created Jetson environment, as explained in this documentation. I use this to build images that I then move onto a Jetson Orin to run.
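For reference, the build-and-transfer part of the flow looks roughly like this (image and host names are placeholders, not my actual setup):

```shell
# Build the image on the x86 host
docker build -t my-jetson-app:latest .

# Export it and copy it over to the Orin
docker save my-jetson-app:latest | gzip > my-jetson-app.tar.gz
scp my-jetson-app.tar.gz orin:/tmp/

# Then, on the Orin itself:
#   docker load < /tmp/my-jetson-app.tar.gz
```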
However, I am currently exploring whether there is a better, more robust, and more reliable way to do this.
My hope is to use either an NVIDIA tool or base container, avoiding the need to maintain my own Jetson environment and reducing the overall size of the image. The image I currently use is 11 GB.
I am aware that both the L4T Base and L4T JetPack tagged images are intended for use on Jetson devices and therefore do not contain all the dependencies needed for QEMU cross-compilation, so a custom image would need to be made.
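For context, the QEMU side of the build is the standard binfmt registration plus a `buildx` cross-build, along these lines (the tag name is a placeholder):

```shell
# One-time: register QEMU binfmt handlers so arm64 binaries
# run under emulation on the x86 host
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# Cross-build the image for Jetson (arm64) from the x86 host
docker buildx build --platform linux/arm64 -t my-jetson-app:latest .
```

The missing piece is a small base image that brings the CUDA/TensorRT headers and stubs into that emulated build.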
The old l4t-base repository mentions a hybrid technique of l4t-base image generation in which "We build a cuda-devel container and copy the stubs + headers", resulting in "~600MB if you use the current hybrid technique". This sounds ideal for what I am looking for: a lightweight image that has all the dependencies for compiling under QEMU emulation. However, I cannot find mention of this anywhere else, and the current jetson-containers repository states: "Building on/for x86 platforms isn't supported at this time (one can typically install/run packages the upstream way there)".
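My reading of that description is a multi-stage Dockerfile along these lines; the exact tags and paths below are my assumptions, not a verified recipe:

```dockerfile
# Hypothetical sketch of the "hybrid" technique: start from a small
# runtime base and copy only the CUDA headers and stub libraries out
# of a devel image, instead of shipping the full devel toolchain.
FROM nvcr.io/nvidia/l4t-cuda:11.4.19-devel AS devel

FROM nvcr.io/nvidia/l4t-base:r35.4.1
# Headers needed at compile time
COPY --from=devel /usr/local/cuda/include /usr/local/cuda/include
# Stub libraries to link against; the real libraries come from the
# JetPack installation on the device at run time
COPY --from=devel /usr/local/cuda/lib64/stubs /usr/local/cuda/lib64/stubs
ENV LIBRARY_PATH=/usr/local/cuda/lib64/stubs
```

If something like this is still the intended approach, I would appreciate confirmation of which images and paths to pull from.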
I am also aware of the Jetson cross-compile images, but at 25 GB for the 5.1.2 version (which is the one I would use), they seem like more than what I am looking for.
I have attempted to hack together some lighter-weight containers myself, but to no avail. Any help with this, and any recommendations moving forward, would be much appreciated.