TensorRT Server on Xavier

Hi,

I’m trying to run the Docker TRTserver image on the Xavier with the current JetPack release 4.2.1.
The error I get is:

“standard_init_linux.go:207: exec user process caused "exec format error"”

Since I also had problems building the ONNX Runtime from source (for arm64), I have not tried to build the TRTserver from source yet.

Similarly, the TensorRT-Laboratory make process fails due to CPU architecture issues.

I’m grateful for any advice and information on how to get the TRTserver running on the Xavier.

Best regards,
Malte

Hi,

We don’t support running an x86-based container on Jetson yet.

May I know which TensorRT feature you want to use?
Currently, almost all TensorRT features are supported in the ARM environment:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html#platform-matrix

You can run the TensorRT engine directly on the Xavier without using a container.
Thanks.

Hi,

Thank you for the answer.

I would like to run the YAIS tool from TensorRT-Laboratory or the C++ client of TensorRT-Server in order to test different execution contexts (async) for some models.
With the trtexec tool, only synchronous execution can be tested (apart from running it multiple times in parallel).

Is there another way to run async tests on the Xavier?

Thanks,
Malte

Hi,

TensorRT supports both synchronous and asynchronous execution.
You can use the native TensorRT API to run async tests directly.

Here is the documentation on asynchronous execution for your reference:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#perform_inference_python
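For reference, a minimal sketch of the asynchronous pattern with the TensorRT Python API (the JetPack 4.2.1-era 5.x API) might look like the following. The engine file name ("model.plan") and the assumption of one input and one output binding are illustrative only; it requires a device with TensorRT and PyCUDA installed, so treat it as a starting point rather than a drop-in script:

```python
# Hedged sketch: asynchronous TensorRT inference on a CUDA stream.
# Assumes a prebuilt engine file "model.plan" with one input binding
# (index 0) and one output binding (index 1).
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # creates a CUDA context on import

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("model.plan", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()
stream = cuda.Stream()

# Allocate pinned host buffers and matching device buffers per binding.
host_bufs, dev_bufs = [], []
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host_bufs.append(cuda.pagelocked_empty(size, dtype))
    dev_bufs.append(cuda.mem_alloc(host_bufs[-1].nbytes))

# Enqueue H2D copy, inference, and D2H copy on the same stream;
# all three calls return immediately without waiting for the GPU.
cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
context.execute_async(batch_size=1,
                      bindings=[int(d) for d in dev_bufs],
                      stream_handle=stream.handle)
cuda.memcpy_dtoh_async(host_bufs[1], dev_bufs[1], stream)

stream.synchronize()  # block only when the result is actually needed
```

Because everything is enqueued on one stream and `synchronize()` is deferred, the host thread stays free, and enqueuing work on several streams (or from several threads with separate execution contexts) lets you overlap transfers and inference.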

Thanks.

Are there any prepared images with TensorRT inside available for the Jetson Nano?

Hi stiv.yakovenko,

Please open a new topic for your issue. Thanks.