Triton Inference Server

Hi,

I am able to run the sample application “image_client” from the Triton Inference Server GitHub repository and get inference results successfully.

Can anyone please tell me how to send inference requests from multiple clients to a single server?

or

In other words, how can multiple applications (clients) pass inference requests to a single Triton Server?
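For context, here is roughly what I have in mind: several independent clients, each sending its own request to the one server. This is only a minimal sketch of the pattern I am asking about; `infer()` below is a hypothetical stand-in for the real Triton client call (e.g. `tritonclient.http.InferenceServerClient.infer`), so the snippet runs without a server:

```python
# Sketch of several clients sending requests concurrently to one server.
# infer() is a hypothetical placeholder for the real tritonclient call,
# so this example is self-contained and does not need a running server.
from concurrent.futures import ThreadPoolExecutor


def infer(client_id, image):
    # Stand-in for client.infer(model_name, inputs) against Triton.
    return f"client-{client_id}: result for {image}"


def run_clients(n_clients):
    # Each worker acts as an independent client with its own request,
    # the way I imagine multiple image_client processes would behave.
    with ThreadPoolExecutor(max_workers=n_clients) as pool:
        futures = [pool.submit(infer, i, "mug.jpg") for i in range(n_clients)]
        return [f.result() for f in futures]


print(run_clients(4))
```

Is this the right model, i.e. does each client simply open its own connection to the server's HTTP/gRPC endpoint, or is something more required on the server side?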

Thanks in advance,
Shivaleeladevi