Please read this whole thread before answering.
I would like to compile Triton Inference Server on my PC (Ubuntu 22.04, x86 machine) and obtain the shared library (.so) so that I can run it on other platforms such as Raspberry Pi, Jetson, and other arm64/aarch64 devices (cross-compiling).
If I am not mistaken, there are two main methods: with and without Docker.
I tried both; here is my summary.
Without Docker (here): FAILED due to missing dependencies, even though I followed the official guide.
With Docker (here): SUCCEEDED, but only after running it on a large machine with 64 GB of RAM; otherwise the build would not finish, it demands a lot of resources. Is that normal?
All I did was clone the repository and build it with the build.py script (as the official documentation recommends). After that, it is not clear to me how to run the result on other devices such as Raspberry Pi or Jetson.
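For reference, here is (approximately) what I ran on the x86 machine, written out as a script so it is easy to review. The flags and the onnxruntime backend are just the choices I made, so please treat them as an example rather than the one correct invocation:

```shell
# Write the build commands I used to a script for reference.
cat > build_triton.sh <<'EOF'
#!/bin/bash
set -e
# Clone the server repo (shallow clone to save time).
git clone --depth=1 https://github.com/triton-inference-server/server.git
cd server
# Default Docker-based build; on my machine this needed ~64 GB of RAM.
# Backend and flag choices are examples, not the only valid set.
python3 build.py \
    --enable-logging --enable-stats --enable-metrics \
    --endpoint=http --endpoint=grpc \
    --backend=onnxruntime
EOF
chmod +x build_triton.sh
```

Is there a flag or a variant of this invocation that produces an aarch64 library, or does the build have to happen on the target itself?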
Another thing: the section here on building for Jetson is not finished yet!
And here you tell developers/users to follow the installation instructions in the GitHub repo releases. So where are those instructions? Am I missing something?
You also said:

> In our example, we placed the contents of downloaded release directory under
What contents? The extracted tar file, or the contents of the compiled build directory? And on which platform?
Please could you provide step-by-step instructions for:
- How to compile Triton Inference Server on my PC (Ubuntu 22.04, x86 machine) and get the shared library (cross-compiling for the targets above).
- How to run Triton Inference Server on Raspberry Pi, Jetson, etc. using the shared library compiled on my PC in question 1.
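In case it helps to diagnose my situation: this is the quick check I use to see which CPU architecture a built ELF binary actually targets, since the .so produced on my x86 PC has to match the board. (`/bin/ls` stands in here for the built `libtritonserver.so`; the real path depends on the build tree.)

```shell
# e_machine lives at byte offset 18 of an ELF header (2 bytes, host byte
# order on a little-endian build host). 62 = x86_64, 183 = aarch64, 40 = arm32.
so=/bin/ls   # stand-in; point this at the built libtritonserver.so
machine=$(od -An -j18 -N2 -tu2 "$so" | tr -d ' ')
echo "e_machine=$machine"
```

On my PC this reports 62 (x86_64) for everything the build produces, which is why I suspect I am missing a cross-compilation step.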