How to use instance_group in TensorRT with C++

I am new to TensorRT. I see this document explaining how to do Concurrent Model Execution:

It also introduces instance groups:

There are not many examples related to it online.
I want to know how to use the “instance_group” parameter in TensorRT with C++.
Can anyone provide a simple C++ example that uses “instance_group” to launch more than one instance of a model?
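For what it's worth, per the Triton Inference Server documentation, `instance_group` is not set from C++ code at all; it is a field in the model's `config.pbtxt`, and the server launches the instances for you. A minimal sketch, assuming a hypothetical TensorRT model named "my_model":

```protobuf
# config.pbtxt for a hypothetical model "my_model"
name: "my_model"
platform: "tensorrt_plan"
max_batch_size: 8

# Ask Triton to run two instances of this model on GPU 0,
# so multiple requests can execute concurrently.
instance_group [
  {
    count: 2
    kind: KIND_GPU
    gpus: [ 0 ]
  }
]
```

A C++ (or any other) client then just sends inference requests as usual; Triton schedules them across the two instances.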

We suggest you reach out to Issues · triton-inference-server/server · GitHub to get clarification on this.