These samples are meant to be executed inside DeepStream’s TensorRT Inference
Server container. Refer to the DeepStream Quick Start Guide for instructions
on pulling the container image and starting the container. Once inside the
container, run the following commands:
Go to the samples directory and run the following command to prepare the
model repository.
$ ./prepare_ds_trtis_model_repo.sh
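The script populates a local model repository in the standard TensorRT Inference Server layout: one directory per model, containing a config.pbtxt and one numbered version directory holding the model file. The names below are illustrative; the actual model names and files depend on what the script generates.

```text
trtis_model_repo/
└── <model_name>/          # one directory per model
    ├── config.pbtxt       # TRT-IS model configuration
    └── 1/                 # numeric version directory
        └── <model file>   # e.g. a TensorRT engine, TensorFlow graphdef, or ONNX model
```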
Install ffmpeg, which is a prerequisite for the next step.
$ sudo apt-get update && sudo apt-get install ffmpeg
Run the following script to create the sample classification video.
$ ./prepare_classification_test_video.sh
Run the following command to start the app.
$ deepstream-app -c <path to config file>
Application config files included in configs/deepstream-app-trtis/
a. source30_1080p_dec_infer-resnet_tiled_display_int8.txt (30 Decode + Infer)
b. source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
(4 Decode + Infer + SGIE + Tracker)
c. source1_primary_classifier.txt (Single source + full frame classification)
NOTE: Other classification models can be used by changing the nvinferserver
config file referenced in the [*-gie] group of the application config file.
d. source1_primary_detector.txt (Single source + object detection using SSD)
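The NOTE above refers to the [*-gie] groups inside these application config files. A minimal sketch of such a group is shown below; the key names follow the deepstream-app config format, and the values are illustrative:

```ini
[primary-gie]
enable=1
# plugin-type=1 selects the nvinferserver element (0 selects nvinfer)
plugin-type=1
# Point this at one of the nvinferserver config files listed below
config-file=config_infer_primary_classifier_densenet_onnx.txt
```

Swapping the classification model amounts to changing the config-file value to a different nvinferserver config.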
Configuration files for the “nvinferserver” element in configs/deepstream-app-trtis/
a. config_infer_plan_engine_primary.txt (Primary Object Detector)
b. config_infer_secondary_plan_engine_carcolor.txt (Secondary Car Color Classifier)
c. config_infer_secondary_plan_engine_carmake.txt (Secondary Car Make Classifier)
d. config_infer_secondary_plan_engine_vehicletypes.txt (Secondary Vehicle Type Classifier)
e. config_infer_primary_classifier_densenet_onnx.txt (DenseNet-121 v1.2 classifier)
f. config_infer_primary_classifier_inception_graphdef_postprocessInTrtis.txt
(TensorFlow Inception v3 classifier - Post processing in TRT-IS)
g. config_infer_primary_classifier_inception_graphdef_postprocessInDS.txt
(TensorFlow Inception v3 classifier - Post processing in DeepStream)
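These nvinferserver config files use a protobuf text format. The sketch below shows the overall shape, assuming the DeepStream 5.x nvinferserver schema; the model name, repository path, and numeric values are illustrative, not taken from the shipped files:

```text
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    trt_is {
      model_name: "densenet_onnx"      # must match a directory in the model repo
      version: -1                       # -1 = latest available version
      model_repo { root: "../../trtis_model_repo" }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
  }
  postprocess {
    classification { threshold: 0.5 }
  }
}
```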