Docker run jetson inference

Hello,

I am wondering if I can use Docker on a PC by changing the architecture from linux/arm64 to linux/x86_64?
What are the steps to follow, please?

I have already succeeded in building the project from source on the PC, by specifying the compute capability of my GPU card, but I don’t know much about Docker …
Do I have to create a new image, or use the one from this link? Docker Hub

thanks

Hello @sylia

In my opinion, the answer should be yes. You can refer to this Dockerfile: jetson-inference/Dockerfile at master · dusty-nv/jetson-inference · GitHub.
I’m not sure it will work easily, though; it may take some time.

Regards


The jetson-inference Dockerfile also relies on the l4t-pytorch base container, which is for Jetson/aarch64. So you will want to use the nvidia/pytorch base container from NGC instead for x86.
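As a rough sketch of that swap (the image tags below are examples, not tested values — pick an NGC tag that matches your driver and CUDA version):

```dockerfile
# Hypothetical sketch of the base-image change for an x86 build.
# Original Jetson base (aarch64), e.g.:
#   ARG BASE_IMAGE=nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.10-py3
# Replace it with an x86_64 PyTorch image from NGC:
ARG BASE_IMAGE=nvcr.io/nvidia/pytorch:22.04-py3
FROM ${BASE_IMAGE}
```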


Thank you for your answers,

The goal of all this is that I intend to run the training in the cloud (OVHcloud). I wonder if I can directly use the Docker image that was generated for the ARM64 architecture, because rebuilding for x86 takes time and I am getting some errors.

No, you can’t run an aarch64 container on x86 - you would need to rebuild it for x86.
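To make the mismatch concrete, here is a small shell sketch mapping the host architecture reported by `uname -m` to the corresponding Docker platform string (the function name is just for illustration):

```shell
# Map a machine architecture (as printed by `uname -m`) to the Docker
# platform string an image must be built for to run natively on that host.
arch_to_platform() {
  case "$1" in
    x86_64)  echo "linux/amd64" ;;
    aarch64) echo "linux/arm64" ;;
    *)       echo "unknown" ;;
  esac
}

arch_to_platform x86_64   # prints linux/amd64
arch_to_platform aarch64  # prints linux/arm64
```

An image built for linux/arm64 (Jetson) simply has no native binaries for a linux/amd64 (PC) host, which is why the rebuild is unavoidable.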

I rebuilt the container for x86 by modifying the base image to the one from NVIDIA NGC,
but I get these errors and I don’t understand where they are coming from.
Any suggestions, please? error.txt (15.4 KB)

It seems like it might be because they have python3.8 symlinked to python in that base container.

Try changing these lines to set(PYTHON_BINDING_VERSIONS 3.8)
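Assuming the suggestion above, the edited CMake line would look something like this (the exact file and surrounding layout in jetson-inference may differ across versions):

```cmake
# Restrict the Python bindings to the single interpreter shipped in the
# x86 base container (python3.8), instead of building bindings for
# several Python versions at once:
set(PYTHON_BINDING_VERSIONS 3.8)
```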

Also I think you will need TensorRT in the base container. I officially support jetson-inference on Jetson, not x86, so you may find errors you will need to debug.
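One way to fail fast if TensorRT is missing from the base container is a build-time import check; a hypothetical Dockerfile line:

```dockerfile
# Sketch: abort the image build early if the TensorRT Python bindings
# are absent from the base container.
RUN python3 -c "import tensorrt; print(tensorrt.__version__)"
```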

Thanks, it works

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.