How to use instance_group in the TensorRT Inference Server with C++

I am new to the TensorRT Inference Server. I see that this document explains how to do concurrent model execution:
https://docs.nvidia.com/deeplearning/triton-inference-server/archives/tensorrt_inference_server_0110_beta/tensorrt-inference-server-guide/docs/architecture.html

And this one introduces instance groups:
https://docs.nvidia.com/deeplearning/triton-inference-server/archives/tensorrt_inference_server_0110_beta/tensorrt-inference-server-guide/docs/model_configuration.html#section-instance-groups

There are not many examples of this online.
I want to know how to use the “instance_group” parameter in the TensorRT Inference Server with C++.
Can anyone provide a simple C++ example that uses “instance_group” to launch more than one instance of a model?

We suggest you reach out to Issues · triton-inference-server/server · GitHub to get clarification on this.
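
For reference, `instance_group` is not set in C++ code at all — it is specified in the model's `config.pbtxt` file in the model repository, and the server handles the concurrent instances for you. A minimal sketch of such a configuration (the model name, batch size, count, and GPU index here are illustrative, not taken from your setup):

```
name: "my_model"            # hypothetical model name
platform: "tensorrt_plan"
max_batch_size: 8
instance_group [
  {
    count: 2                # run two execution instances of this model
    kind: KIND_GPU
    gpus: [ 0 ]             # both instances on GPU 0
  }
]
```

With a configuration like this, the server launches two instances of the model and schedules incoming inference requests across them concurrently; your C++ client code does not need to change to take advantage of it.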

Thanks