Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU (Tesla T4)
• DeepStream Version: 6.1.1
• JetPack Version (valid for Jetson only): N/A
• TensorRT Version: 8.4.1.5
• NVIDIA GPU Driver Version (valid for GPU only): 515.65.01
• Issue Type (questions, new requirements, bugs): questions
Question: I am trying to run deepstream_parallel_inference_app in a Docker container based on the deepstream:6.1.1-triton image.
I am confused about how to assign different source IDs to different models. I found similar settings in both deepstream_app_config.yml and config_metamux.txt. Here is the snapshot:
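(The snapshot image is not preserved in this post. For illustration only, a branch section of the parallel-inference YAML config might look like the sketch below; the key names `branch0`, `pgie-id`, and `src-ids` are assumed from the deepstream_parallel_inference_app sample and should be checked against your own config.)

```yaml
branch0:
  ## unique-id of the PGIE model that runs in this branch (assumed key name)
  pgie-id: 1
  ## only frames from these source IDs are fed into this branch
  src-ids: 0;1

branch1:
  pgie-id: 2
  ## a different branch can consume a different subset of sources
  src-ids: 1;2;3
```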
The source IDs in a branch specify which sources are fed to the model in that branch of the pipeline.
The source IDs in the metamux specify which sources' metadata is muxed into the output batch meta. You can filter out the metadata of some sources if you don't want to mux all of it into the output batch meta.
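To make the metamux setting concrete, a hypothetical config_metamux.txt fragment is sketched below; the section and key names (`[property]`, `[group-0]`, `src-ids-model-<N>`) are assumed from the deepstream_parallel_inference_app sample, where `<N>` is taken to be the model's unique-id:

```ini
# Hypothetical config_metamux.txt sketch -- verify names against the sample app.
[property]
enable=1
# sink pad whose stream drives the muxed output (assumed key)
active-pad=sink_0
pts-tolerance=60000

[group-0]
# src-ids-model-<unique-id>=<source IDs to keep>
# For the model with unique-id 1, mux only metadata from sources 0 and 1:
src-ids-model-1=0;1
# For the model with unique-id 2, mux only metadata from sources 1 and 2:
src-ids-model-2=1;2
```

With a setting like this, metadata from any source not listed for a given model is dropped before the output batch meta is assembled.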
Many thanks!
For the setting in config_metamux.txt, how do I match the model ID with the src-ids? For example, the snapshot shows: src-ids-model-1=0;1.
Does this mean the output of the first model, configured under primary-gie0, is used for source IDs 0 and 1? That is, does the 1 in src-ids-model-1 refer to the model ID?