Running TensorRT Inference Server without Docker Command

Hi all,

I am currently trying to run the TensorRT Inference Server, and I followed the build instructions listed here: https://docs.nvidia.com/deeplearning/sdk/tensorrt-inference-server-guide/docs/build.html#section-building

I have successfully built the server from source after correcting a few C++ source files. However, there are literally no instructions for running the server without the Docker command: https://docs.nvidia.com/deeplearning/sdk/tensorrt-inference-server-guide/docs/run.html
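For what it's worth, here is my guess at the direct invocation, based on the arguments the Docker run documentation passes to the container (the binary name `trtserver`, the `--model-store` flag, and both paths are my assumptions, not anything I found documented for a non-Docker setup):

```shell
# Run the locally built server binary directly, pointing it at a model
# repository on the host filesystem instead of a path mounted into a
# container. Both paths are placeholders for my local build tree and
# model store.
/path/to/build/trtserver --model-store=/path/to/model_repository
```

If someone can confirm whether this is the supported way to launch the server outside Docker (and whether any environment variables or library paths need to be set first), that would be great.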

It doesn’t make sense to provide instructions for building the software without giving any information on how to run it.