Problem running Triton docker examples

• Hardware Platform (GPU): RTX 2080
• Setup: running the Triton server Docker image (20.09)
• DeepStream Version: 5.0
• TensorRT Version: 7.0.0.11
• NVIDIA GPU Driver Version (valid for GPU only): 455

I’m having problems running the DeepStream apps for Triton server on my laptop with an RTX 2080 GPU. When trying to run the DeepStream examples, I get either “No protocol specified” or “unable to parse config file”. Where do I find more information to get a working example running, or how do I fix these problems?

root@46a91cca948a:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app-trtis# deepstream-app -c source30_1080p_dec_infer-resnet_tiled_display_int8.txt
2021-01-04 09:35:50.857670: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.2
No protocol specified
No protocol specified
No protocol specified
No protocol specified
No protocol specified
No protocol specified
** ERROR: main:655: Failed to set pipeline to PAUSED
Quitting
App run failed

Or

root@46a91cca948a:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app-trtis# deepstream-app -c config_infer_plan_engine_primary.txt
** ERROR: <parse_config_file:513>: parse_config_file failed
** ERROR: main:627: Failed to parse config file ‘config_infer_plan_engine_primary.txt’

Drivers and CUDA are installed properly; the host runs Ubuntu 20.04:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.38       Driver Version: 455.38       CUDA Version: 11.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce RTX 208...  Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   48C    P5    11W /  N/A |    157MiB /  7982MiB |      8%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1389      G   /usr/lib/xorg/Xorg                101MiB |
|    0   N/A  N/A      1928      G   /usr/bin/gnome-shell               40MiB |
|    0   N/A  N/A      6184      G   /usr/lib/firefox/firefox            2MiB |
|    0   N/A  N/A      6326      G   /usr/lib/firefox/firefox            2MiB |
|    0   N/A  N/A      6757      G   /usr/lib/firefox/firefox            2MiB |
|    0   N/A  N/A      6836      G   /usr/lib/firefox/firefox            2MiB |
+-----------------------------------------------------------------------------+

How to reproduce:

docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.0 nvcr.io/nvidia/deepstream:5.0.1-20.09-triton

Then install ffmpeg:

apt-get install ffmpeg

From the samples directory, run:

./prepare_ds_trtis_model_repo.sh
./prepare_classification_test_video.sh

I also tried exporting the display:

export DISPLAY=:0

What have I missed?

Hi @dangraf,
Could you provide the setup info, as in other topics?

Thanks!

Hello.
Setup info? Which info is missing? I’m using the Docker container, where the versions of Ubuntu, TensorRT, DeepStream, etc. are all fixed, and I’ve specified my machine’s OS and GPU. I think everything is there.

Regarding “No protocol specified”: did you run “xhost +” on the host?

Thanks!
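For reference, a minimal host-side sketch of that X11 fix (an assumption on my part that the host runs an X11 session; the guard makes this a harmless no-op where xhost is unavailable):

```shell
# "No protocol specified" means the host X server rejected the container's X
# client. Granting local clients access before "docker run" fixes it.
# Guarded so this is a no-op on hosts without an X server.
if command -v xhost >/dev/null 2>&1; then
    xhost +local: 2>/dev/null || true   # allow local clients, incl. containers
fi
# Then launch the container as before:
#   docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix \
#     -e DISPLAY=$DISPLAY nvcr.io/nvidia/deepstream:5.0.1-20.09-triton
status="x11-access-step-done"
echo "$status"
```

Note that “xhost +local:” is narrower than a bare “xhost +”, which disables X access control entirely.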
That helped, but where did you find that information?
I’m looking at this documentation: https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html#deepstream-triton-inference-server-usage-guidelines

and it says only:
GPU

  1. Pull the DeepStream Triton Inference Server docker
docker pull nvcr.io/nvidia/deepstream:5.0.1-20.09-triton
  2. Start the docker
docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY nvcr.io/nvidia/deepstream:5.0.1-20.09-triton

Maybe that documentation needs to be updated?

I’m still getting the problem parsing the config file. Any suggestions as to why that happens?
root@1540a029e037:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app-trtis# deepstream-app -c config_infer_plan_engine_primary.txt
2021-01-05 13:44:56.782235: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.2
** ERROR: <parse_config_file:513>: parse_config_file failed
** ERROR: main:627: Failed to parse config file ‘config_infer_plan_engine_primary.txt’
Quitting
App run failed

It’s from the container’s NGC page - https://ngc.nvidia.com/catalog/containers/nvidia:deepstream

config_infer_plan_engine_primary.txt is just the config file for the “nvinferserver” element. You need to use a pipeline config file (source*.txt), e.g. source30_1080p_dec_infer-resnet_tiled_display_int8.txt, i.e.

# deepstream-app -c source30_1080p_dec_infer-resnet_tiled_display_int8.txt
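As a sketch of how the two files relate (section and key names as in the DeepStream 5.0 sample configs; treat the exact fragment below as an assumption, not a verbatim copy): the pipeline config references the nvinferserver config through its [primary-gie] group, roughly like:

```
[primary-gie]
enable=1
# plugin-type=1 selects Gst-nvinferserver (Triton) instead of Gst-nvinfer
plugin-type=1
config-file=config_infer_plan_engine_primary.txt
```

So deepstream-app expects the source*.txt file on the command line and pulls in the inference config itself; passing the inference config directly is what triggers the parse error.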
