How to deploy multiple models with DeepStream 4.0.1

Hi,
For example, I have four .uff files trained with TensorFlow. How can I deploy them with DeepStream?
Please help me!


Hi,

You can start with this sample, which also uses a UFF model:
/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD

In general, you will need to define the following parameters in the property group:

uff-file=sample_ssd_relu6.uff
uff-input-dims=3;300;300;0
uff-input-blob-name=Input
output-blob-names=MarkOutput_0
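Putting those parameters together, a minimal [property] group for a UFF detector could look like the sketch below. The file name, input dimensions, and blob names follow the objectDetector_SSD sample; the remaining values (batch size, class count, precision mode) are illustrative and must be adapted to your own network:

```
[property]
gpu-id=0
# UFF model exported from TensorFlow
uff-file=sample_ssd_relu6.uff
# channels;height;width;input-order
uff-input-dims=3;300;300;0
uff-input-blob-name=Input
output-blob-names=MarkOutput_0
# adjust to your model and deployment
batch-size=1
num-detected-classes=91
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
```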

Here is our document for your reference:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream_Development_Guide%2Fdeepstream_custom_model.html%23wwpID0ERHA

We are not sure what kind of model you have. Is it a classifier or a detector?
For multiple classifiers, you can simply enable multiple secondary-gie groups.
Check this configuration file as an example:
/opt/nvidia/deepstream/deepstream-4.0/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
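To illustrate the idea, a deepstream-app configuration chaining one primary detector with two secondary classifiers might contain groups like the following. The config-file names and gie-unique-id values here are placeholders, not the exact contents of the sample above:

```
[primary-gie]
enable=1
gie-unique-id=1
config-file=config_infer_primary.txt

# first classifier, runs on objects from the primary detector
[secondary-gie0]
enable=1
gie-unique-id=2
operate-on-gie-id=1
config-file=config_infer_secondary_model1.txt

# second classifier, also runs on the primary detector's objects
[secondary-gie1]
enable=1
gie-unique-id=3
operate-on-gie-id=1
config-file=config_infer_secondary_model2.txt
```

Each secondary-gie group points at its own nvinfer config file, so four trained models would mean one primary-gie group plus three secondary-gie groups (or several primary models across separate pipelines, depending on your use case).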

Thanks.

Hi,
I am using the TX2 platform. Can you provide sample source code for multiple models?
Thanks.


Hi,

DeepStream is available for TX2.
You can install it directly with SDK Manager.

Both UFF models and multiple models are supported by deepstream-app.
You can run it with this command for the multi-model use case:

deepstream-app -c [your configure file]

Sample configuration files for UFF and multiple models can be found here:
/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD
/opt/nvidia/deepstream/deepstream-4.0/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt

Please update the model information (e.g. path, output names, …) based on your use case.
If you are looking for the source code of deepstream-app, you can find it in this folder:
/opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-app/

Thanks.

How can I deploy multiple models with DeepStream 5.1? And how can we do the same thing in a Python application?

How can I use multiple custom models with DeepStream 5.1 in a Python binding application?

Hi shankarjadhav232,

Please open a new topic for this question. Thanks.