How do I run the samples in the NVIDIA DeepStream 5.0 Triton Docker image?

Hardware Platform: GPU
DeepStream Version: 5.0
NVIDIA GPU Driver Version: a1


On my Linux PC, I’m trying to run the samples from the nvcr.io/nvidia/deepstream:5.0-20.07-triton Docker container, but I’m unsure how to run them.

Step 2 of the NVIDIA DeepStream page (here) says to run this command to enter the container:

docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.0  nvcr.io/nvidia/deepstream:5.0-20.07-triton
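(One note on my setup: before running docker run, I also allow local X connections on the host so GUI output from the container can display. I assume this is why the command mounts /tmp/.X11-unix, though the page may not spell it out.)

$ xhost +local: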

Toward the end of that section of the DeepStream page, this text appears:

See /opt/nvidia/deepstream/deepstream-5.0/README inside the container for deepstream-app usage.

When I open that README inside the container, the section “Running the samples” says to:

Go to samples directory and run
deepstream-app -c [path to config.txt]
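My guess is that the intended invocation is something like the command below, with one of the files under configs/ substituted in; the exact path here is my assumption, not something the README states:

$ deepstream-app -c configs/deepstream-app-trtis/source1_primary_detector.txt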

In the samples directory, these are the only files and directories present:

configs
models
prepare_classification_test_video.sh
prepare_ds_trtis_model_repo.sh
streams
trtis_model_repo

I have a few questions here.

  1. Which file am I supposed to run?
  2. Is the code in this container coming from an open-source location such as GitHub where I could inspect it?
  3. What is supposed to happen when I run the code?

Please refer to the following, which is part of the README (a combined command sequence follows after the excerpt):


Running the TensorRT Inference Server samples

These samples are meant to be executed inside DeepStream’s TensorRT Inference
Server container. Refer to the DeepStream Quick Start Guide for instructions
on pulling the container image and starting the container. Once inside the
container, run the following commands:

  1. Go to the samples directory and run the following command to prepare the
    model repository.
    $ ./prepare_ds_trtis_model_repo.sh
  2. Install ffmpeg. It is a prerequisite for the next step.
    $ sudo apt-get update && sudo apt-get install ffmpeg
  3. Run the following script to create the sample classification video.
    $ ./prepare_classification_test_video.sh
  4. Run the following command to start the app.
    $ deepstream-app -c <path to config.txt>
  5. Application config files included in configs/deepstream-app-trtis/
    a. source30_1080p_dec_infer-resnet_tiled_display_int8.txt (30 Decode + Infer)
    b. source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
    (4 Decode + Infer + SGIE + Tracker)
    c. source1_primary_classifier.txt (Single source + full frame classification)
    NOTE: Other classification models can be used by changing the nvinferserver
    config file in the [*-gie] group of application config file.
    d. source1_primary_detector.txt (Single source + object detection using ssd)
  6. Configuration files for “nvinferserver” element in configs/deepstream-app-trtis/
    a. config_infer_plan_engine_primary.txt (Primary Object Detector)
    b. config_infer_secondary_plan_engine_carcolor.txt (Secondary Car Color Classifier)
    c. config_infer_secondary_plan_engine_carmake.txt (Secondary Car Make Classifier)
    d. config_infer_secondary_plan_engine_vehicletypes.txt (Secondary Vehicle Type Classifier)
    e. config_infer_primary_classifier_densenet_onnx.txt (DenseNet-121 v1.2 classifier)
    f. config_infer_primary_classifier_inception_graphdef_postprocessInTrtis.txt
    (Tensorflow Inception v3 classifier - Post processing in TRT-IS)
    g. config_infer_primary_classifier_inception_graphdef_postprocessInDS.txt
    (Tensorflow Inception v3 classifier - Post processing in DeepStream)
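Putting steps 1 through 4 together, a typical session inside the container looks like the following (using source1_primary_detector.txt from item 5 as an example; any of the listed application configs works the same way):

$ cd /opt/nvidia/deepstream/deepstream-5.0/samples
$ ./prepare_ds_trtis_model_repo.sh        # prepare the TRT-IS (Triton) model repository
$ apt-get update && apt-get install -y ffmpeg   # the container runs as root, so sudo is not needed
$ ./prepare_classification_test_video.sh  # generate the sample classification video
$ deepstream-app -c configs/deepstream-app-trtis/source1_primary_detector.txt

With the X11 forwarding from the docker run command working, deepstream-app should open a window rendering the stream with the inference output overlaid (bounding boxes for the detector configs, labels for the classifier configs), and per-stream FPS numbers are printed to the terminal. That also answers question 3 above.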

This is the GitHub repository for the Triton Inference Server: https://github.com/triton-inference-server/server
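Regarding the NOTE in item 5c: in the application config file, the [primary-gie] (or [secondary-gie*]) group points at one of the nvinferserver configs from item 6. A minimal sketch of what that group looks like, with key names taken from the shipped samples; the exact values vary per config, and this snippet is illustrative rather than copied from a specific file:

[primary-gie]
enable=1
# plugin-type=1 selects the nvinferserver (Triton) plugin instead of nvinfer
plugin-type=1
# point this at any of the nvinferserver config files listed in item 6
config-file=config_infer_primary_classifier_densenet_onnx.txt
batch-size=1
gie-unique-id=1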