Can TensorRT Inference Server be built on Jetson Xavier?

I am trying to build the TensorRT Inference Server from source on a Jetson Xavier, but the build fails with an "exec format error":

Sending build context to Docker daemon 6.307MB
Step 1/78 : ARG
Step 2/78 : ARG
Step 3/78 : ARG
Step 4/78 : FROM ${PYTORCH_IMAGE} AS trtserver_caffe2
19.05-py3: Pulling from nvidia/pytorch
Digest: sha256:6614fa29720fc253bcb0e99c29af2f93caff16976661f241ec5ed5cf08e7c010
Status: Image is up to date for
 ---> 7e98758d4777
Step 5/78 : COPY src/servables/caffe2/netdef_bundle_c2.* /opt/pytorch/pytorch/caffe2/core/
 ---> Using cache
 ---> b3fd7653b7a4
Step 6/78 : WORKDIR /opt/pytorch
 ---> Using cache
 ---> 091fb04f5fd9
Step 7/78 : RUN pip uninstall -y torch
 ---> Running in a2ec119d0477
standard_init_linux.go:190: exec user process caused "exec format error"

What causes this error? Can the TensorRT Inference Server be built on the Jetson Xavier?

The Xavier is set up with JetPack 4.2.
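For context, an "exec format error" from `standard_init_linux.go` usually means the container's binaries were built for a different CPU architecture than the host: the Xavier is aarch64, while the `nvidia/pytorch` image pulled in Step 4 is most likely an x86_64 (amd64) image. A quick way to check (a sketch; the image tag is taken from the build log above, and the `docker inspect` line is commented out since it needs the image pulled locally):

```shell
# On the Xavier host: print the CPU architecture (a Xavier reports aarch64)
uname -m

# Compare with the architecture the pulled base image was built for.
# An "amd64" result on an aarch64 host would explain the exec format error.
# docker inspect --format '{{.Architecture}}' nvcr.io/nvidia/pytorch:19.05-py3
```

If the two architectures differ, the base image cannot run on the Xavier, and the build will fail at the first RUN instruction exactly as shown in the log.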

Hello abcZhaoling.

I'm not sure what is causing the error, but if you plan to perform inference on the Jetson Xavier using TensorRT, that is possible. We have a wiki page that explains the steps to perform inference on the board; I hope it is useful for you:


Thanks JC, but I mean the TensorRT Inference Server, not TensorRT. The TRT Server GitHub is here:

No, the TensorRT Inference Server is not supported on the Jetson platform.