TensorRT + Jetson NX + Docker: exec user process caused "exec format error"


Hi, I’m trying to build a Docker image with TensorRT to use on the Jetson NX. I’m trying to use the image
nvcr.io/nvidia/tensorrt:20.07.1-py3, but I’m getting this error message:
Error: standard_init_linux.go:211: exec user process caused "exec format error"

I’ve taken a look at some forums and it seems there is an incompatibility between the Jetson NX OS and the Docker image. If that’s true, what is the best way to solve this problem? Is there an Nvidia TensorRT image for the Jetson NX, or do I need to build everything myself? I’ve tried that second option, but I got a lot of errors as well.


The image needs to provide: TensorRT, GPU access, PyTorch, OpenCV, Python 3, and PyCUDA.

Moving to Jetson NX forum so that Jetson team can take a look.

Hi @adriano.santos, that Docker image is built for x86_64, not the ARM aarch64 architecture that Jetson uses.
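One quick way to confirm the mismatch yourself (a sketch; the docker inspect line assumes the x86 image has already been pulled):

```shell
# On the Jetson host, print the CPU architecture (a Jetson NX reports aarch64)
uname -m

# If Docker is available, report which architecture an image was built for;
# the x86 tensorrt image reports amd64, which cannot execute on aarch64
if command -v docker >/dev/null 2>&1; then
    docker image inspect --format '{{.Architecture}}' \
        nvcr.io/nvidia/tensorrt:20.07.1-py3 2>/dev/null || true
fi
```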

Instead, please try one of these containers for Jetson:

You should be able to use TensorRT from each of those containers.
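As a sketch of how a Jetson container is typically launched (the r32.4.4-pth1.6-py3 tag is only an example here; pick the l4t-pytorch tag that matches your JetPack/L4T release, and note that --runtime nvidia is what exposes the GPU inside the container on JetPack 4.x):

```shell
# Guarded so this is a no-op on machines without Docker installed
if command -v docker >/dev/null 2>&1; then
    docker run -it --rm --runtime nvidia --network host \
        nvcr.io/nvidia/l4t-pytorch:r32.4.4-pth1.6-py3 \
        python3 -c "import torch; print(torch.cuda.is_available())"
fi
```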

Yes. I’ve tested these Docker images as well, but I got this message:

root@6d1b568bfc54:/home/detector# python3
Python 3.6.9 (default, Jul 17 2020, 12:50:27)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/torch/__init__.py", line 81, in <module>
    from torch._C import *
ImportError: libnvToolsExt.so.1: cannot open shared object file: No such file or directory

I’ve tried to reinstall PyTorch, but the error is the same. All the other libraries are working well (TensorRT, NumPy, etc.), but PyTorch isn’t.

Apologies for the delay - when running the nvcr.io/nvidia/l4t-pytorch container, can you check whether the library /usr/local/cuda/lib64/libnvToolsExt.so.1 exists?
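Something like this, run inside the container, would show whether the library PyTorch is failing to load is visible (a sketch; it prints a note instead of failing when the file is missing):

```shell
# Look for the NVTX library that torch._C links against
if [ -e /usr/local/cuda/lib64/libnvToolsExt.so.1 ]; then
    ls -l /usr/local/cuda/lib64/libnvToolsExt.so.1
else
    echo "libnvToolsExt.so.1 not found under /usr/local/cuda/lib64"
fi

# Also check whether the dynamic linker can see it anywhere on the path
ldconfig -p 2>/dev/null | grep -i nvtoolsext || echo "not in ldconfig cache"
```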

Does the tag of the l4t-pytorch container you are running match the version of JetPack-L4T that you have installed on your Jetson?
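On JetPack 4.x these containers mount CUDA from the host, so a mismatch between the container tag and the installed L4T release commonly produces exactly this kind of missing-library error. A quick way to check the host version (a sketch; it prints a note on non-Jetson machines):

```shell
# Print the installed L4T release; the l4t-pytorch tag (e.g. r32.4.4-...)
# should match the R<major>.<revision> reported here
if [ -f /etc/nv_tegra_release ]; then
    head -n 1 /etc/nv_tegra_release
else
    echo "no /etc/nv_tegra_release (not a Jetson?)"
fi
```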