Inference Server on Jetson TX2

Hello,

I’m wondering whether it is possible to run the Inference Server on a Jetson TX2. Since the server ships with all the frameworks included out of the box, it should eliminate the need to build TensorFlow C++ from source, which is pretty cumbersome on the TX2.

I’m interested in running custom object detection models that I couldn’t convert to .uff/.plan format to run in pure TensorRT, and it seems that TensorFlow C++ isn’t supported on the TX2. Then I came across this sentence, “It is possible to execute your TF-TRT accelerated model using TensorRT’s C++ API or through the TensorRT Inference Server, without needing TensorFlow at all.”, in Accelerating Inference In TF-TRT User Guide :: NVIDIA Deep Learning Frameworks Documentation, and it seems like a possible solution.
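For reference, this is roughly the TF-TRT conversion path I have in mind (a minimal sketch assuming the TensorFlow 1.14 TrtGraphConverter API as shipped with JetPack; the paths, precision mode, and batch size are just placeholders from my setup):

```python
# Minimal TF-TRT conversion sketch (TensorFlow 1.14-style API).
# "detector_saved_model" and the settings below are placeholders.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverter(
    input_saved_model_dir="detector_saved_model",  # original TF SavedModel
    max_batch_size=1,
    precision_mode="FP16",   # FP16 usually suits the TX2 well
    is_dynamic_op=True)      # build TensorRT engines at runtime for dynamic shapes

converter.convert()                          # replaces supported subgraphs with TRT engine ops
converter.save("detector_trt_saved_model")   # SavedModel with the TRT engines embedded
```

If I understand the guide correctly, a model converted this way is what the quoted sentence says could then be served through the Inference Server.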

Thank you very much.

Currently the TensorRT Inference Server only builds for x86. We will likely build for ARM (and specifically Jetson) in the future, but we don’t have any specific plans yet. There is a PR on the repo to enable ARM builds, but we haven’t tested or merged it yet: Code change for compilation on JetPack 4.2 by alfreds-nv · Pull Request #414 · triton-inference-server/server · GitHub

Thank you very much for your answer. I hope official support comes soon. Until then, I’ll take a look at the PR you linked.