I have 4 models deployed on a Triton Inference Server and 10 images.
I need to use the Concurrent Model Execution feature exactly as described in server/architecture.md (triton-inference-server/server on GitHub).
All images arrive on the same machine at 1-second intervals, and I need to send each image to the Triton server from a single client script as it is received.
How should I do this?
Do I have to use parallel programming, or is there a simpler way to do this?
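For context, here is the kind of approach I am considering: a minimal sketch that fans each incoming image out to all four models from one script using a thread pool. The `infer` function below is only a placeholder standing in for a blocking Triton client call (e.g. `tritonclient.http`'s `InferenceServerClient.infer`), and the model names are made up; the server would then be free to run the four models concurrently.

```python
import concurrent.futures
import time

# Hypothetical model names; replace with the names of the 4 deployed models.
MODEL_NAMES = ["model_a", "model_b", "model_c", "model_d"]

def infer(model_name, image):
    # Placeholder for a blocking Triton client call, e.g.
    # tritonclient.http InferenceServerClient.infer(model_name, inputs).
    time.sleep(0.01)  # simulate network/inference latency
    return f"{model_name}:{image}"

with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    futures = []
    for i in range(10):              # 10 images arriving one at a time
        image = f"img_{i}"           # stand-in for the received image data
        # Submit one request per model without waiting for the results,
        # so the client keeps up with the 1-second arrival rate.
        for name in MODEL_NAMES:
            futures.append(pool.submit(infer, name, image))
        # time.sleep(1)  # in the real script, wait here for the next image
    results = [f.result() for f in futures]

print(len(results))  # 40 responses: 10 images x 4 models
```

Would something like this be the right direction, or does the Triton Python client's `async_infer` make the thread pool unnecessary?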
Thanks for your help