Inference Server on Jetson TX2

Hello,

I’m wondering whether it is possible to run the Inference Server on a Jetson TX2, since the TX2 ships with all the frameworks out of the box. That would eliminate building TensorFlow C++ from source, which is pretty cumbersome on the TX2.

I’m interested in running custom object detection models that I couldn’t convert to the .uff/.plan format needed to run in pure TensorRT, and TensorFlow’s C++ API doesn’t seem to be supported on the TX2. Then I ran across this sentence on https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#tensorrt-plan: “It is possible to execute your TF-TRT accelerated model using TensorRT’s C++ API or through the TensorRT Inference Server, without needing TensorFlow at all.” That sounds like a possible solution.
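For context, the TF-TRT acceleration the guide refers to is a graph rewrite done ahead of time. A minimal sketch of that conversion step, assuming TensorFlow 1.x built with TensorRT support; the output node names and precision mode below are illustrative placeholders, not details from this thread:

```python
# Hedged sketch of the TF-TRT conversion described in the linked user guide.
# Assumes TensorFlow 1.x with TensorRT support available on the machine doing
# the conversion; output node names and precision mode are placeholders.

def convert_frozen_graph(frozen_graph_def, output_nodes):
    """Rewrite a frozen TF GraphDef so TensorRT-compatible subgraphs
    are replaced with TRT engine ops."""
    import tensorflow.contrib.tensorrt as trt  # requires TF 1.x + TensorRT

    return trt.create_inference_graph(
        input_graph_def=frozen_graph_def,
        outputs=output_nodes,       # e.g. the detection output tensor names
        max_batch_size=1,
        precision_mode="FP16",      # or "FP32" / "INT8"
    )
```

The resulting GraphDef can then be serialized and served without rebuilding TensorFlow C++ yourself, which is the point of the sentence quoted above.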

Thank you very much.

Currently the TensorRT Inference Server only builds for x86. We will likely build for ARM (and specifically for Jetson) in the future, but we don’t have any specific plans yet. There is a PR on the repo to enable ARM builds, but we haven’t tested or merged it yet: https://github.com/NVIDIA/tensorrt-inference-server/pull/414
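In the meantime, an x86 deployment of the server can be smoke-tested from any client machine over HTTP. A minimal sketch, assuming the server’s default HTTP port (8000) and the v1 `/api/status` endpoint; both are assumptions about the default configuration, not details from this thread:

```python
import urllib.request

def status_url(host, port=8000):
    # TensorRT Inference Server exposes a status endpoint over HTTP
    # (port 8000 and the /api/status path are the assumed defaults).
    return "http://{}:{}/api/status".format(host, port)

# With a server actually running (not done here), you would fetch it like:
# with urllib.request.urlopen(status_url("localhost")) as resp:
#     print(resp.read().decode())
```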

Thank you very much for your answer. I hope that the official support will come soon. Until then, I’ll take a look at the PR you linked.