Hello,
I have 4 models deployed on Triton Inference Server and 10 images.
I need to use the Concurrent Model Execution feature exactly as described here: https://github.com/triton-inference-server/server/blob/main/docs/architecture.md#concurrent-model-execution
All images will arrive on the same computer at 1-second intervals, and I need to send each image to the Triton server from the same client script as it arrives.
How should I do this?
Do I have to use parallel programming (threads/multiprocessing), or is there another way to do this from a single client script?
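For context, here is a rough sketch of what I was imagining, using the Python tritonclient HTTP API with async_infer so requests to all 4 models can be in flight at once. The model names, tensor names, shapes, and dtypes are just placeholders, not my actual model configs:

```python
# Rough sketch only -- model names, input tensor name, shape, and dtype
# below are placeholders and would be replaced with the real model configs.
import time
import numpy as np
import tritonclient.http as httpclient

MODEL_NAMES = ["model_a", "model_b", "model_c", "model_d"]  # placeholder names

client = httpclient.InferenceServerClient(url="localhost:8000")

pending = []
for i in range(10):
    # Stand-in for an image arriving every second.
    image = np.random.rand(1, 3, 224, 224).astype(np.float32)
    time.sleep(1)

    for model in MODEL_NAMES:
        inp = httpclient.InferInput("input", list(image.shape), "FP32")  # placeholder tensor name
        inp.set_data_from_numpy(image)
        # async_infer returns immediately, so requests for all 4 models
        # (and for successive images) stay in flight at the same time,
        # which should let Triton execute the models concurrently.
        pending.append((model, client.async_infer(model_name=model, inputs=[inp])))

# Collect the results after all requests have been issued.
for model, request in pending:
    result = request.get_result()
    print(model, result.get_response())
```

Is something like this the right approach, or is a different client pattern recommended?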
Thanks for your help.