Compiling Triton Inference Server

Hello NVIDIA,

Please read this whole thread before answering.

I would like to compile Triton Inference Server on my PC (Ubuntu 22.04, x86 machine) and then obtain the shared library (.so) so that I can run it on other platforms such as Raspberry Pi, Jetson, and other arm64/aarch64 devices (cross-compiling).

If I am not mistaken, there are two main methods: with and without Docker.
I tried both, and here is my summary.

Without Docker (here), the build FAILED due to missing dependencies, even though I followed the official guide.

With Docker (here), the build SUCCEEDED, but only after running it on a large machine with 64 GB of RAM; otherwise the compilation fails because it demands a lot of resources. Is that normal?

All I did was clone the repository and build it with the build.py script (as the official documentation recommends). After that, it is not clear to me how to run the result on other devices such as Raspberry Pi or Jetson. For reference, my build invocation looked roughly like the sketch below.
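This is approximately what I ran; the flags are the ones I understood from the build guide, and the backend choice was mine, so treat it as a sketch of my command rather than an exact transcript:

    git clone https://github.com/triton-inference-server/server.git
    cd server
    # Default containerized build; this is the step that needed 64 GB of RAM.
    # -v prints verbose output.
    ./build.py -v \
        --enable-logging --enable-stats --enable-tracing \
        --enable-gpu \
        --endpoint=http --endpoint=grpc \
        --backend=onnxruntime

As far as I can tell, the resulting tritonserver binary and shared libraries (e.g. libtritonserver.so) end up inside the generated Docker image, or under the directory given by --build-dir when --no-container-build is used; the exact layout is part of what I am asking about.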

Another thing: the section here on building it for Jetson is not complete yet!
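My guess, from the rest of the build guide, is that the Jetson build is meant to run natively on the device rather than as a cross-compile. Something like the following, though the flag set here is entirely my guess, since the section is unfinished:

    # Run on the Jetson itself, skipping the Docker build (guessed flags):
    ./build.py -v --no-container-build --build-dir=$(pwd)/build \
        --enable-logging --enable-stats --enable-gpu \
        --endpoint=http --endpoint=grpc \
        --backend=tensorrt

Is that the intended approach, or is there a real cross-compile path from x86?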

And here you tell developers/users to follow the installation instructions in the GitHub repository releases. So where are those instructions? Am I missing something?
You also said:

In our example, we placed the contents of downloaded release directory under /opt/tritonserver.

What contents? Is it the extracted tar file, or the contents of the compiled build directory? And on which platform?
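To make my question concrete, here is the "installation" I guessed at for the target device. The tarball name is a placeholder, because I do not know which release asset applies to which platform:

    # My guess at installing a release on the target (tarball name is hypothetical):
    sudo mkdir -p /opt/tritonserver
    sudo tar -xzf tritonserver<version>-<platform>.tgz -C /opt/tritonserver

Is that right, or is /opt/tritonserver supposed to be filled from the compiled build directory?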

Please, could you provide step-by-step instructions for:

  1. How to compile (cross-compile) Triton Inference Server on my PC (Ubuntu 22.04, x86 machine) and obtain the shared library.
  2. How to run Triton Inference Server on Raspberry Pi, Jetson, etc. using the shared library compiled on my PC in question 1 (see my sketch below for what I imagine this looks like).
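To make question 2 concrete, this is what I imagine running on the target device, although I do not know whether binaries built on x86 can work there at all. The paths assume the /opt/tritonserver layout guessed above, and the model repository path is only an example:

    # Hypothetical launch on the target (Raspberry Pi / Jetson):
    export LD_LIBRARY_PATH=/opt/tritonserver/lib:$LD_LIBRARY_PATH
    /opt/tritonserver/bin/tritonserver --model-repository=/path/to/model_repository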

Thank you.

Thanks for posting to the NVIDIA AI Workbench forum.

We don’t currently support Triton Inference Server.

Are you perhaps looking for a different forum?


Hello @twhitehouse,

Thank you for your answer.
Is there another NVIDIA forum for this?

Could you please point me to the right forum, where Triton Inference Server is supported?

I also posted this on the NVIDIA Triton Server GitHub, here.

The Triton forum was archived: Latest Deep Learning (Training & Inference)/Triton Inference Server - archived topics - NVIDIA Developer Forums

Your best bet is GitHub, I believe.
