Problems running the TensorFlow Model Zoo example using Triton

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)

GPU

• DeepStream Version

5.0

• JetPack Version (valid for Jetson only)
• TensorRT Version

The version that ships in nvcr.io/nvidia/deepstream:5.0-20.07-triton

• NVIDIA GPU Driver Version (valid for GPU only)

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.73.05    Driver Version: 510.73.05    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTX A5000    Off  | 00000000:0A:00.0  On |                  Off |
| 30%   48C    P8    24W / 230W |   2027MiB / 24564MiB |     19%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

• Issue Type (questions, new requirements, bugs)

Bug?

• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

I am trying to recreate the results from the forum post and blog post "Deploying Models from TensorFlow Model Zoo Using NVIDIA DeepStream and NVIDIA Triton Inference Server". Links below:

I have set up a code repository here that attempts to recreate it: https://github.com/thebruce87m/Tensorflow-On-Deepstream-With-Triton-Server

I have created four scripts:

001-download.sh

#!/bin/bash
set -e

# Ensure the download directory exists, then enter it
mkdir -p ./downloads
cd ./downloads/

# Download and extract the pre-trained model
wget http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz
tar xvf faster_rcnn_inception_v2_coco_2018_01_28.tar.gz

# Download the labels
wget https://raw.githubusercontent.com/NVIDIA-AI-IOT/deepstream_triton_model_deploy/master/faster_rcnn_inception_v2/config/labels.txt
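A quick sanity check from the repository root after running it (the frozen-graph path is the same one used by 003-prepare.sh below):

# Both files should exist and be non-empty
ls -lh downloads/faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb downloads/labels.txt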

002-run-docker.sh

#!/bin/bash

# Run the DeepStream 5.0 Triton container with GPU access, X11 forwarding,
# and this repository mounted at /code
docker run \
--gpus all \
-it \
--rm \
--shm-size=1g \
--ulimit memlock=-1 \
--ulimit stack=67108864 \
--net=host \
--privileged \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v "$(pwd)":/code/ \
-e DISPLAY="$DISPLAY" \
-e CUDA_VER=11.6 \
-w /code/ \
nvcr.io/nvidia/deepstream:5.0-20.07-triton
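Note: since the container talks to the host X server through the mounted socket, the host may need to grant it access first. One way to do this on the host before running step 002 (adjust to your own security preferences):

# Allow local (non-network) clients, including containers, to use the X server
xhost +local: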

003-prepare.sh

#!/bin/bash
set -e

export DEEPSTREAM_DIR=/opt/nvidia/deepstream/deepstream-5.0/

# Create the Triton model repository layout and copy in the frozen graph
cd ${DEEPSTREAM_DIR}samples/trtis_model_repo
mkdir -p faster_rcnn_inception_v2/1 && cd faster_rcnn_inception_v2
cp /code/downloads/faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb 1/model.graphdef

# Copy the Triton model config file
cp /code/files/config.pbtxt .

# Copy the labels
cp /code/downloads/labels.txt .

# Copy the DeepStream configs
mkdir -p ${DEEPSTREAM_DIR}samples/configs/deepstream-app-trtis/
cp /code/files/config_infer_primary_faster_rcnn_inception_v2.txt ${DEEPSTREAM_DIR}samples/configs/deepstream-app-trtis/
cp /code/files/source1_primary_faster_rcnn_inception_v2.txt ${DEEPSTREAM_DIR}samples/configs/deepstream-app-trtis/

# Build the custom bounding-box parser and install it
# (CUDA_VER is set by 002-run-docker.sh)
cd ${DEEPSTREAM_DIR}sources/libs/nvdsinfer_customparser
make all
cp libnvds_infercustomparser.so ${DEEPSTREAM_DIR}lib/
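For completeness, since config.pbtxt is only copied above and not shown in this post: a Triton config for this model generally looks like the sketch below, written here as a heredoc that could stand in for the cp step. The tensor names follow the standard TensorFlow Object Detection API frozen graphs and the NVIDIA-AI-IOT reference repo; the input dims are illustrative, and the actual file in my repo may differ.

# Sketch of a Triton model config for the frozen TF graph
# (tensor names assume a standard TF Object Detection API export;
# input dims are illustrative, not authoritative)
cat > config.pbtxt <<'EOF'
name: "faster_rcnn_inception_v2"
platform: "tensorflow_graphdef"
max_batch_size: 1
input [
  {
    name: "image_tensor"
    data_type: TYPE_UINT8
    format: FORMAT_NHWC
    dims: [ 600, 1024, 3 ]
  }
]
output [
  {
    name: "detection_boxes"
    data_type: TYPE_FP32
    dims: [ 100, 4 ]
  },
  {
    name: "detection_scores"
    data_type: TYPE_FP32
    dims: [ 100 ]
  },
  {
    name: "detection_classes"
    data_type: TYPE_FP32
    dims: [ 100 ]
  },
  {
    name: "num_detections"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }
]
EOF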

004-run.sh

#!/bin/bash

export DEEPSTREAM_DIR=/opt/nvidia/deepstream/deepstream-5.0/

# Run the demo with moderate GStreamer debug output
cd ${DEEPSTREAM_DIR}samples/configs/deepstream-app-trtis
deepstream-app --gst-debug=3 -c source1_primary_faster_rcnn_inception_v2.txt
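While the app loads, a quick way to confirm from the host that the process is actually busy rather than hung:

# Refresh nvidia-smi once per second to watch GPU memory and utilisation
watch -n 1 nvidia-smi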

The demo does run, but the bounding boxes are not correct:

[Image: output-example, a frame with misplaced bounding boxes]

My questions:

  1. What am I doing wrong in the example? Why do the boxes not match?

  2. Every time I run the app it takes 25+ minutes to load before processing the video. Why? How can I avoid this?

Thanks!

Can you try with the latest DeepStream 6.1 version?

This actually fixed both the bounding-box issue and the load-time issue. I have updated my repo; the important changes are in commit thebruce87m/Tensorflow-On-Deepstream-With-Triton-Server@3eb2f11 ("Update to deepstream 6.1").
