Getting error "/usr/local/bin/nvidia_entrypoint.sh: line 33: exec: trtserver: not found"

Hi,

I’m trying to run the TensorRT Inference Server via its Docker container. Based on my current NVIDIA driver compatibility, I’m running the nvcr.io/nvidia/tensorrt:18.08-py2 image.

However, when I try to start the server with the following command:

nvidia-docker run --rm --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p 8000:8000 -p 8001:8001 -p 8002:8002 -v /home/model_repository:/models nvcr.io/nvidia/tensorrt:18.08-py2 trtserver --model-store=/models

I get this error:

/usr/local/bin/nvidia_entrypoint.sh: line 33: exec: trtserver: not found

Am I missing some configuration or setup step? Any help would be really appreciated.

Thanks

Hi, I encountered the same issue. Have you found a solution?

Any solution on this?

I have a similar problem:

(base) mona@ada:/data/FoundationPose/docker$ bash run_container.sh
foundationpose
access control disabled, clients can connect from any host

==========
== CUDA ==
==========

CUDA Version 12.1.0

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

*************************
** DEPRECATION NOTICE! **
*************************
THIS IMAGE IS DEPRECATED and is scheduled for DELETION.
    https://gitlab.com/nvidia/container-images/cuda/blob/master/doc/support-policy.md

/opt/nvidia/nvidia_entrypoint.sh: line 67: exec: --: invalid option
exec: usage: exec [-cl] [-a name] [command [arguments ...]] [redirection ...]
(base) mona@ada:/data/FoundationPose/docker$ cat run_container.sh
docker rm -f foundationpose
CATGRASP_DIR=$(pwd)/../
xhost + && docker run --gpus all --env NVIDIA_DISABLE_REQUIRE=1 -it --network=host \
  --name foundationpose docker.io/shingarey/foundationpose_custom_cuda121:latest \
  --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
  -v /data:/data -v /mnt:/mnt -v /tmp/.X11-unix:/tmp/.X11-unix -v /tmp:/tmp \
  --ipc=host -e DISPLAY=${DISPLAY} -e GIT_INDEX_FILE foundationpose:latest bash
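
For reference: docker run treats the first non-option argument as the image name and passes everything after it as the command to run inside the container. In the script above, the image name sits in the middle of the options, so --cap-add=SYS_PTRACE and everything after it are handed to the image’s entrypoint, which is why it fails with “exec: --: invalid option”. A reordered sketch (assuming the shingarey image is the one intended; the trailing foundationpose:latest looks like a leftover) would be:

docker rm -f foundationpose
xhost + && docker run --gpus all --env NVIDIA_DISABLE_REQUIRE=1 -it --network=host \
  --name foundationpose \
  --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
  -v /data:/data -v /mnt:/mnt -v /tmp/.X11-unix:/tmp/.X11-unix -v /tmp:/tmp \
  --ipc=host -e DISPLAY=${DISPLAY} -e GIT_INDEX_FILE \
  docker.io/shingarey/foundationpose_custom_cuda121:latest bash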

This may be caused by running the wrong command (“trtserver”) after the docker image name. Try a more general command than trtserver (e.g. “tensorrt”). I encountered a similar error and solved it this way.
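
To check whether a given binary actually exists in an image before guessing at commands, you can override the entrypoint and look it up (a sketch; swap in the image tag you are using):

docker run --rm --entrypoint /bin/bash nvcr.io/nvidia/tensorrt:18.08-py2 \
  -c 'command -v trtserver || echo "trtserver is not in this image"'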

Hi @shubhabrata3,

The trtserver binary is not included in the standard TensorRT development image; it only ships in the inference server images.

To run the inference server, you need to pull the tensorrtserver image (or its modern successor, the tritonserver image) and run something like this:

nvidia-docker run --rm --shm-size=1g \
--ulimit memlock=-1 --ulimit stack=67108864 \
-p 8000:8000 -p 8001:8001 -p 8002:8002 \
-v /home/model_repository:/models \
nvcr.io/nvidia/tensorrtserver:18.08-py3 \
trtserver --model-store=/models
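
Once the container is up, you can sanity-check that the server is listening (assuming the v1 inference server HTTP API on port 8000; Triton images expose /v2/health/ready instead):

curl -v localhost:8000/api/health/ready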

Please let me know if I have understood your issue correctly, or whether it is something else.

Thank You,
Atharva