Running TensorRT Inference Server without Docker

Hi all,

I am currently trying to run the TensorRT Inference Server, and I followed the instructions listed here: Documentation – Pre-release :: NVIDIA Deep Learning Triton Inference Server Documentation

I have successfully built the server from source after correcting a few pieces of C++ code. However, there are literally no instructions on how to run the server without the Docker command (Documentation – Pre-release :: NVIDIA Deep Learning Triton Inference Server Documentation).
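For reference, the Docker-based quickstart appears to launch the server inside the container with a command along these lines (the binary name and flag here are my assumption from reading the Docker example, and the model repository path is just a placeholder):

trtserver --model-store=/path/to/model_repository

I assume the locally built binary can be started the same way, but I cannot find anything in the documentation confirming this or describing what environment setup (library paths, CUDA visibility, etc.) is needed outside the container.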

It doesn’t make sense to provide instructions for building the software while giving no information on how to run it.