Inference on video/audio streams in Triton

Where can I find info on how video/audio streaming is done in Triton, and how do I deploy models in Triton that receive video/audio streams? What should be done differently compared to models that get a one-time input tensor per inference request/invocation?
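For context on the question: the usual difference is that streaming inference in Triton is handled with the sequence batcher, where the client tags each request with a correlation ID and start/end flags so the server can route all requests of one stream to the same model instance. A minimal sketch of what such a `config.pbtxt` might look like (the model name, tensor names, and dimensions here are made-up placeholders, not from any real deployment):

```
name: "my_stream_model"        # hypothetical model name
platform: "onnxruntime_onnx"   # example backend; pick your own
max_batch_size: 8

sequence_batching {
  max_sequence_idle_microseconds: 5000000
  control_input [
    {
      name: "START"   # model input receiving the sequence-start flag
      control [
        { kind: CONTROL_SEQUENCE_START, fp32_false_true: [ 0, 1 ] }
      ]
    },
    {
      name: "END"     # model input receiving the sequence-end flag
      control [
        { kind: CONTROL_SEQUENCE_END, fp32_false_true: [ 0, 1 ] }
      ]
    }
  ]
}

input [
  { name: "FRAME", data_type: TYPE_FP32, dims: [ 3, 224, 224 ] }  # e.g. one video frame
]
output [
  { name: "SCORES", data_type: TYPE_FP32, dims: [ 1000 ] }
]
```

On the client side, the gRPC streaming API is then used to send each frame with the same `sequence_id`, marking the first request with `sequence_start=True` and the last with `sequence_end=True`; a stateless model that takes one-shot tensors needs none of this. Please treat this as a sketch to orient the question, not an official answer.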

Thanks, Yariv

Please re-post your question on Triton Inference Server · GitHub, where the NVIDIA team and others will be able to help you.
Sorry for the inconvenience and thanks for your patience.