Docker fails to register cuda shared memory

Triton Server - failed to register CUDA shared memory region


Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

$ docker run -it --name tritonserver --gpus=1 -v /dev:/dev --ipc=host --network=host --shm-size=1g -d -v $(pwd)/models:/models tritonserver --model-repository=/models --log-verbose=1
$ docker run --gpus=1 -it --privileged --network host -v /dev:/dev --ipc=host --shm-size=1g -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY --rm --name client_shm_sdk python3 /workspace/install/python/
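For context on where the failure surfaces: the "failed to register CUDA shared memory region" error comes from the client-side registration call. A minimal sketch using the tritonclient Python API, assuming a server reachable at localhost:8000 and GPU 0 (the region name and byte size are illustrative, not from the original post):

```python
# Sketch only: requires a running Triton server and a CUDA-capable GPU.
import tritonclient.http as httpclient
import tritonclient.utils.cuda_shared_memory as cudashm

client = httpclient.InferenceServerClient(url="localhost:8000")

byte_size = 64
# Allocate a CUDA shared memory region on GPU 0 in the client process.
shm_handle = cudashm.create_shared_memory_region("input_data", byte_size, 0)

# Ask the server to open the region's CUDA IPC handle. This is the call
# that fails when the server and client containers cannot exchange CUDA
# IPC handles (e.g. when they do not share the host IPC namespace).
client.register_cuda_shared_memory(
    "input_data", cudashm.get_raw_handle(shm_handle), 0, byte_size
)
```

Running both containers with `--ipc=host` (as in the commands above) is what allows the CUDA IPC handle to be opened across the two processes.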

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

We recommend raising this query in the issues section of the Triton Inference Server GitHub repository.


There is already an active issue there: Docker fails to register cuda shared memory · Issue #3429 · triton-inference-server/server · GitHub