Inference on video/audio streams in Triton

Where can I find information on how video/audio streaming is handled in Triton, and how do I deploy models in Triton that receive video/audio streams? What needs to be done differently compared to models that receive a one-time input tensor per inference request/invocation?
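For context, the closest thing I have found so far is the sequence batcher, which Triton configures in the model's `config.pbtxt`. A minimal sketch of what that looks like (the model name, tensor names, and dimensions below are just placeholders, not from a real deployment):

```protobuf
name: "my_streaming_model"
platform: "onnxruntime_onnx"
max_batch_size: 8

# Sequence batching routes all requests that share a correlation ID
# to the same model instance, so per-stream state can be preserved.
sequence_batching {
  max_sequence_idle_microseconds: 5000000
  control_input [
    {
      name: "START"
      control [
        {
          kind: CONTROL_SEQUENCE_START
          int32_false_true: [ 0, 1 ]
        }
      ]
    },
    {
      name: "END"
      control [
        {
          kind: CONTROL_SEQUENCE_END
          int32_false_true: [ 0, 1 ]
        }
      ]
    }
  ]
}

input [
  {
    name: "INPUT__0"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "OUTPUT__0"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

Is this the intended mechanism for video/audio streams, or is there a different recommended approach?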

Thanks, Yariv