I am working on a video action recognition model (which will run in Triton Inference Server) that requires batching up to 32 frames for one camera (or channel) to make a prediction. I would like to do this for multiple cameras as well. I’ve looked around the forum and there are posts from 2018 where the moderators said that temporal batching is not supported. Is temporal batching now supported in DeepStream version 5.0? If not, I would appreciate suggestions for an alternative approach.
• Hardware Platform (GPU): RTX 2080 Ti + RTX 2070 Super
• DeepStream Version: 5.0
• TensorRT Version: 7.0
• NVIDIA GPU Driver Version: 440
Are you saying that the number of cameras is not constant and may change dynamically? If so, you can refer to the reference sample that dynamically adds or removes sources in the pipeline.
I am working on a video action recognition model (which will run in Triton Inference Server) that requires batching up to 32 frames for one camera (or channel) to make a prediction.
→ Regarding this: batch processing is not supported for a single source, only across multiple sources. A batch is formed by taking one frame from each source at a given time step, not by accumulating frames from one source.
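For reference, this batching is done by nvstreammux, whose batch-size is normally set to the number of sources. A minimal two-camera sketch follows; the RTSP URLs and the config path are placeholders, not values from this thread:

  gst-launch-1.0 \
    nvstreammux name=mux batch-size=2 width=1920 height=1080 live-source=1 ! \
    nvinferserver config-file-path=config_infer_triton.txt ! \
    nvvideoconvert ! nvdsosd ! nveglglessink \
    rtspsrc location=rtsp://camera1/stream ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_0 \
    rtspsrc location=rtsp://camera2/stream ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_1

Note that this batches across sources at a single time instant; it does not accumulate 32 frames from one source, which is what temporal batching would require.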
I suspect that the problem arises from the “tensor_order: TENSOR_ORDER_NHWC” option, which is not the layout the model expects. I also tried the TENSOR_ORDER_NONE option, which resulted in the following output:
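For reference, the option in question sits in the preprocess block of the nvinferserver config. A sketch modeled on the DeepStream 5.0 sample configs; the model name, repository path, and normalization values are placeholders:

  infer_config {
    unique_id: 1
    gpu_ids: [0]
    max_batch_size: 1
    backend {
      trt_is {
        model_name: "action_recognition"   # placeholder
        version: -1
        model_repo {
          root: "./trtis_model_repo"       # placeholder
          log_level: 2
        }
      }
    }
    preprocess {
      network_format: IMAGE_FORMAT_RGB
      # TENSOR_ORDER_LINEAR corresponds to NCHW, TENSOR_ORDER_NHWC to NHWC;
      # with TENSOR_ORDER_NONE the order is deduced from the model.
      tensor_order: TENSOR_ORDER_LINEAR
      normalize {
        scale_factor: 1.0                  # placeholder
      }
    }
  }

If the model expects NCHW input, switching to TENSOR_ORDER_LINEAR may be what is needed here.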