Can the TensorRT Inference Server be built on a Jetson Xavier?

I am trying to build the TensorRT Inference Server from source on a Jetson Xavier, but the build fails with an "exec format error":

Sending build context to Docker daemon 6.307MB
Step 1/78 : ARG BASE_IMAGE=nvcr.io/nvidia/tensorrtserver:19.05-py3
Step 2/78 : ARG PYTORCH_IMAGE=nvcr.io/nvidia/pytorch:19.05-py3
Step 3/78 : ARG TENSORFLOW_IMAGE=nvcr.io/nvidia/tensorflow:19.05-py3
Step 4/78 : FROM ${PYTORCH_IMAGE} AS trtserver_caffe2
19.05-py3: Pulling from nvidia/pytorch
Digest: sha256:6614fa29720fc253bcb0e99c29af2f93caff16976661f241ec5ed5cf08e7c010
Status: Image is up to date for nvcr.io/nvidia/pytorch:19.05-py3
---> 7e98758d4777
Step 5/78 : COPY src/servables/caffe2/netdef_bundle_c2.* /opt/pytorch/pytorch/caffe2/core/
---> Using cache
---> b3fd7653b7a4
Step 6/78 : WORKDIR /opt/pytorch
---> Using cache
---> 091fb04f5fd9
Step 7/78 : RUN pip uninstall -y torch
---> Running in a2ec119d0477
standard_init_linux.go:190: exec user process caused "exec format error"

What causes this error? Can the TensorRT Inference Server be built on a Jetson Xavier at all?

The Xavier is set up with JetPack 4.2.

Hello abcZhaoling.

I'm not sure what is causing the error, but performing inference on the Jetson Xavier with TensorRT itself is possible; a minimal trtexec sketch is shown after this reply. We have a wiki page that explains the steps to run inference on the board. I hope it is useful for you:

-JC
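For reference, here is a minimal sketch of running a model through TensorRT on a JetPack install, using the trtexec tool that ships with TensorRT; model.onnx is a hypothetical placeholder for your own network file:

# Build a TensorRT engine from an ONNX model and run timed inference
# (model.onnx is a placeholder; trtexec ships with TensorRT on JetPack)
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx

trtexec reports per-iteration latency and throughput, which makes it a quick way to confirm that TensorRT inference works on the board before writing any application code.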

Thanks JC, but I mean the TensorRT Inference Server, not TensorRT itself. The TRT Server GitHub is here: https://github.com/triton-inference-server/server

No, the TensorRT Inference Server is not supported on the Jetson platform. The "exec format error" is the giveaway: the base images referenced in the Dockerfile, such as nvcr.io/nvidia/pytorch:19.05-py3, are built for x86_64, so Docker can pull them on the Xavier, but their binaries cannot execute on its aarch64 CPU, and the first RUN step fails.
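As a quick sanity check (a minimal sketch, not from the original thread), you can compare the host architecture with the architecture the pulled image was built for, using only standard docker CLI commands:

# On the Xavier: prints the host CPU architecture (aarch64)
uname -m

# Prints the architecture the image was built for (amd64 for this image)
docker image inspect nvcr.io/nvidia/pytorch:19.05-py3 --format '{{.Architecture}}'

If the two do not match, any RUN step against that image will fail with exactly this "exec format error".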