Naming model engines and distributing multiple app instances across multiple GPUs

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): T4/V100
• DeepStream Version: 4.0 to 5.1
• TensorRT Version: 7.2.2
• NVIDIA GPU Driver Version (valid for GPU only): 460
• Issue Type (questions, new requirements, bugs): question

Hi everyone,

I’m migrating a DS4.0 application based on the deepstream-test5 app and the Yolo configs to DS5.1.
Previously, when the engine was built from yolo.weights it was named something like model_b1_int8.engine, but now the names look like model_b1_gpu0_int8.engine. I’ll be running multiple app instances with different models, and this looks like yet another error-prone change that has to be made in the config files each time I add or change the GPU to run on. How can I ensure that my model engines are created with the simplified name?
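For illustration, here is roughly the relevant part of my nvinfer config (paths and values are just placeholders from an objectDetector_Yolo-style setup); my understanding is that if model-engine-file points to an existing engine it is loaded as-is, and only when it is missing does nvinfer rebuild the engine and save it under the new gpu-suffixed name:

    [property]
    gpu-id=0
    # custom Yolo model files (placeholder paths)
    custom-network-config=yolov3.cfg
    model-file=yolov3.weights
    # engine name I would like to keep stable across GPUs;
    # in DS5.1 the auto-generated name becomes model_b1_gpu0_int8.engine
    model-engine-file=model_b1_int8.engine
    network-mode=1
    batch-size=1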

This question is also related to the topic below:

Currently, to run the app on a different GPU we need to change gpu-id in several places in the config files: one for each stream source, one for each GIE, plus streammux and (if used) tiled display and OSD. Then there is another change in the inference config, and now the model engine name requires that change as well (a sketch of the entries involved is below).
Are there any attempts to make this more elegant and less error-prone? Could you share any good practices that you would recommend to handle this?
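To make the scope of the change concrete, this is roughly the set of gpu-id entries that has to be edited in a test5-style config when moving the app from GPU 0 to GPU 1 (group names as in the sample configs; the number of sources and GIEs is just an example):

    [tiled-display]
    gpu-id=1
    [source0]
    gpu-id=1
    [source1]
    gpu-id=1
    [streammux]
    gpu-id=1
    [osd]
    gpu-id=1
    [primary-gie]
    gpu-id=1
    # plus the nvinfer config referenced via config-file, which has its own gpu-id,
    # and now also the auto-generated engine name that encodes the GPU index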

Will all the components in the pipeline run on the same GPU, e.g. GPU#1?

@mchi Yes, most probably all the components in a pipeline would share the same GPU.
(Unless you recommend managing them differently, in which case I’m looking forward to hearing your suggestions.)

Can you use CUDA_VISIBLE_DEVICES? e.g.

CUDA_VISIBLE_DEVICES=0,1 ./cuda_executable
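A possible way to apply this here (app and config file names are hypothetical) is to pin each app instance to one physical GPU; the process then sees only that GPU, remapped to device index 0, so gpu-id=0 can stay unchanged in every config file:

    # instance 1 on physical GPU 0
    CUDA_VISIBLE_DEVICES=0 ./deepstream-test5-app -c config_instance0.txt &
    # instance 2 on physical GPU 1; inside this process the GPU is visible as device 0
    CUDA_VISIBLE_DEVICES=1 ./deepstream-test5-app -c config_instance1.txt &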