Run Jetson Tegra Ubuntu 18.04 in Virtual Environment for Build Environment

I’m working on setting up a production environment in GitLab for compiling and producing build artifacts for deployment on our NX. At my work, we have various GitLab runners for CI/CD pipelines, all of which are x86. We have been able to cross compile C/C++ and build Docker images using the arm64 cross-compile toolchains and docker buildx, but we also rely on some Python libraries which need to be deployed on our NX. We can easily acquire these Python libraries using pip on an arm64 host, but there is no equivalent way to fetch or compile the arm64 builds on the x86 GitLab runners. We really need a GitLab runner to compile these for us into a deployable artifact.

I’ve been looking into running the NX Ubuntu arm64 image within QEMU/KVM on an x86 server to produce these artifacts, which I think should work for us. There will be some time invested in getting this up and running, so I would like to know whether I’m unaware of any tools we could use to build or download these Python libraries without emulating the NX within QEMU/KVM. If there is a quick and easy way of doing this that saves some time, I would love to know.

The main method I’m considering for accomplishing this is as follows (a rough sketch of steps 3–6 is shown after the list):

  1. Within the x86 GitLab runner, install QEMU/KVM, etc.
  2. Start up an instance of Ubuntu 18.04 Tegra.
  3. Create a Python virtual environment at the path matching the intended deployment location on the production NX.
  4. Pip install all dependencies (building any from source as needed) into that virtual environment.
  5. Save the virtual environment as an artifact of the pipeline.
  6. Copy and deploy this virtual environment artifact onto the production system.
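
For concreteness, here is roughly what I imagine steps 3–6 looking like inside the emulated arm64 environment. The paths, package list, and transfer commands below are placeholders, not a tested pipeline:

```bash
# Steps 3-6, sketched. /opt/myapp/venv and requirements.txt are assumed/hypothetical names.

# 3. Create the venv at the exact path it will occupy on the production NX
#    (venvs are not relocatable, so the path must match the deployment location).
python3 -m venv /opt/myapp/venv

# 4. Install/build all Python dependencies into that venv.
/opt/myapp/venv/bin/pip install --upgrade pip
/opt/myapp/venv/bin/pip install -r requirements.txt

# 5. Package the venv so GitLab can collect it as a pipeline artifact.
tar -czf myapp-venv-arm64.tar.gz -C /opt/myapp venv

# 6. Deploy: copy the tarball to the NX and unpack it at the same path, e.g.
#    scp myapp-venv-arm64.tar.gz nx:/tmp/
#    ssh nx 'sudo tar -xzf /tmp/myapp-venv-arm64.tar.gz -C /opt/myapp'
```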

Is this way off base and an overly contrived method? Is there something better? Thanks to anyone who can help me find a way to generate these Python libraries for arm64 and get them working on the production server. I do not have access to a build server that natively runs the arm64 architecture.

Cheers!

It’s not so easy, because many of the files are part of JetPack on the host. Even if you run Docker containers via qemu-user-static, you will be missing those parts.

Python libraries are the easy part! It’s all the rest of the nvidia stuff that will cause you errors.

That could potentially be a problem in the future, but at this time I am not concerned with acquiring any libraries for Tegra specifically. If/when we need Tegra libraries on the NX, I would use what is included when flashing the base image, but that is not the concern here. I need to construct a package of arm64 Python libraries that target the arm64 architecture generally, not Tegra. For example, the Python bindings for libraries like MAVLink and GStreamer. When these libraries are deployed on a non-Tegra arm64 system, they work the same because they are not utilizing Tegra drivers or hardware (excluding GStreamer plugins that touch the NVIDIA Argus library).

Well sure, just run ubuntu arm64 via qemu-user-static and do your virtual environment thing then.

It’s only the stuff added via --runtime=nvidia that causes a problem really.

Note I am referring to this: GitHub - multiarch/qemu-user-static: `/usr/bin/qemu-*-static`

Just do a normal container build, pretty much. You don’t actually need the virtual env or other stuff; the container is your artifact.
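
Roughly something along these lines; the base image, venv path, and package names below are just placeholders:

```bash
# Register the qemu-user-static binfmt handlers so arm64 binaries run on the x86 host.
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# A minimal arm64 Dockerfile; everything in it is a placeholder example.
cat > Dockerfile.arm64 <<'EOF'
FROM arm64v8/ubuntu:18.04
RUN apt-get update && apt-get install -y python3 python3-venv python3-pip
RUN python3 -m venv /opt/myapp/venv && \
    /opt/myapp/venv/bin/pip install --upgrade pip pymavlink
EOF

# Build the arm64 image on the x86 machine; the resulting image is the artifact.
docker buildx build --platform linux/arm64 --load -f Dockerfile.arm64 -t myapp-arm64:latest .
```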

Thank you for pointing out qemu-user-static! This looks like it will suit my needs for now. If we come to a point where we need to compile targets that require Tegra/JetPack, we may just need to set up a local node of our own on an NX to run our builds and generate artifacts. But since I work at a place with really tight network and proxy restrictions, setting that up in a timely manner is not feasible. So thank you for providing me with this working solution. I’ll let you know how this goes and mark your reply as a solution if something comes of it.

You are welcome!

I work in a similar environment and face the same issues. Qemu-user-static was a game changer for us. I run it on my (very restricted) windows corp laptop via WSL2 and it works great.

I have tried QEMU on an x86 server for JetPack 4.6 (arm64). It is very slow. Compiling TensorFlow took a week, while compiling directly on a Jetson AGX took 8 hours. Doing the same using QEMU on an arm64 server for JetPack 4.6 (arm64) took less than an hour.

This solution is in fact working for me. And thank you for your additional perspective on running JetPack via QEMU. I’m glad you have similar experience working for security-trigger-happy companies that restrict resources for build environments. I feel less alone in my experiences now.

I’ll probably need to lobby for adding a native arm64 server to our company cloud, or I will need to set up a dedicated local node and just deal with the longer (8 hours is better than a week) compile times.

Yes, it’s very slow to run. But the point is not to run it as such; the point is to build a container image, which is very fast.

It’s also useless for compilation that requires the nvidia host stuff - like compiling opencv or whatever. For that stuff I run natively.

Where things get really hairy is: how are you going to build/compile with no internet? So what I end up doing is using qemu-user-static to prepare a “build environment container”, then I put that on the arm64 device and run the compile there.
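
In other words, something like this; the image name, mount path, and build command are placeholders:

```bash
# On the x86 machine with internet access: build the arm64 "build environment" image
# under qemu-user-static emulation, then export it to a tarball.
docker buildx build --platform linux/arm64 --load -t jetson-build-env:latest .
docker save jetson-build-env:latest -o jetson-build-env.tar

# Carry the tarball across the air gap, then on the arm64 device:
#   docker load -i jetson-build-env.tar
#   docker run --rm -v "$PWD":/src -w /src jetson-build-env:latest make
```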

We created a full-blown JetPack 4.6 based L4T Docker image and ran it on the arm64 server, so that we can compile things natively, much faster than doing it on the actual Jetson hardware.
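
The starting point was roughly the following; the L4T base tag below is what I believe matches JetPack 4.6, so treat it as an assumption:

```bash
# On the arm64 server (running natively, no emulation): pull an L4T base image
# corresponding to JetPack 4.6 and compile inside it.
docker pull nvcr.io/nvidia/l4t-base:r32.6.1      # assumed tag for JetPack 4.6 / L4T R32.6.1
docker run --rm -it -v "$PWD":/src -w /src nvcr.io/nvidia/l4t-base:r32.6.1 bash
```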

We aren’t allowed to even have internet, let alone an arm64 server :(
