Multiple model instances

• Hardware Platform (Jetson / GPU) NVIDIA GeForce RTX 3090
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.4.0
• NVIDIA GPU Driver Version (valid for GPU only) 535.113.01

Hello,
I am running a DeepStream pipeline in Python and use nvinferserver for my GIEs.

To speed up inference I am trying to create multiple model instances, so in the Triton model config I set:
instance_group [ { kind: KIND_GPU count: 2 gpus: [0,1] } ]
When I try to do something similar in the DeepStream config, I get this error: `update gpu_ids to keep single gpu`.
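For reference, nvinferserver only accepts a single GPU per instance, so a workaround that usually still works is running multiple instances of the model on that one GPU. A minimal sketch of the Triton `config.pbtxt` fragment, assuming GPU 0 is the device selected in the DeepStream config:

```
# Triton model config fragment (config.pbtxt) - hypothetical example
# Two execution instances of the model, both pinned to GPU 0,
# which stays within nvinferserver's single-GPU constraint.
instance_group [
  {
    kind: KIND_GPU
    count: 2
    gpus: [ 0 ]
  }
]
```

This gives two concurrent model instances on one device; distributing instances across GPUs (`gpus: [0,1]`) is what triggers the `update gpu_ids to keep single gpu` error.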

I've attached my config files here:
config_triton_infer_primary_reid.txt (714 Bytes)
config.txt (381 Bytes)

Am I doing it correctly?

Thanks

We cannot support this scenario currently (see the topic "Can gst-nvinferserver support inference on multiple GPUs"). We'll check whether we can support it later.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.