Getting error "/usr/local/bin/nvidia_entrypoint.sh: line 33: exec: trtserver: not found"

Hi,

I’m trying to run the TensorRT Inference Server via its Docker container. Based on my current NVIDIA driver’s compatibility, I’m running the nvcr.io/nvidia/tensorrt:18.08-py2 image.

However, when I try to start the server with the following command:

nvidia-docker run --rm --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p 8000:8000 -p 8001:8001 -p 8002:8002 -v /home/model_repository:/models nvcr.io/nvidia/tensorrt:18.08-py2 trtserver --model-store=/models

I get this error:

/usr/local/bin/nvidia_entrypoint.sh: line 33: exec: trtserver: not found

Am I missing some configuration or setup option? Any help would be really appreciated.

Thanks