Is Temporal Batching supported in DeepStream v5.0?

I am working on a video action recognition model (which will run in Triton Inference Server) that requires batching up to 32 frames for one camera (or channel) to make a prediction. I would like to do this for multiple cameras as well. I’ve looked around the forum and there are posts from 2018 where the moderators said that temporal batching is not supported. Is temporal batching now supported in DeepStream version 5.0? If not, I would appreciate suggestions for an alternative approach.

• Hardware Platform (GPU) RTX 2080 Ti + RTX 2070 Super
• DeepStream Version 5.0
• TensorRT Version 7.0
• NVIDIA GPU Driver Version 440

Are you saying your number of cameras is not constant and may change dynamically? If yes, you may refer to the sample that dynamically adds or removes sources in the pipeline.

I am working on a video action recognition model (which will run in Triton Inference Server) that requires batching up to 32 frames for one camera (or channel) to make a prediction.

→ Regarding this: we do not support batch processing for a single source, only for multiple sources (nvstreammux forms a batch from frames of different sources, not from consecutive frames of one source).

We haven’t tried multiple buffers from a single camera source, but maybe you could try the configuration below for batching, e.g. batch-size=32:

batch-size=32
live-source=0
batched-push-timeout=<suitable value>
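If this runs under deepstream-app, these keys belong in the [streammux] group of the application config file; a minimal sketch, where the timeout and resolution values are placeholders to tune for your streams:

[streammux]
batch-size=32
live-source=0
# microseconds to wait before pushing an incomplete batch (placeholder)
batched-push-timeout=4000000
# muxer output resolution (placeholders)
width=1280
height=720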

Thanks for the response.

I tried the suggested configuration, but I got the following error:

ERROR: infer_preprocess.cpp:570 NvBufSurfTransform failed with error -3 while converting buffer
ERROR: infer_preprocess.cpp:570 NvBufSurfTransform failed with error -3 while converting buffer
ERROR: infer_preprocess.cpp:570 NvBufSurfTransform failed with error -3 while converting buffer
ERROR: infer_preprocess.cpp:570 NvBufSurfTransform failed with error -3 while converting buffer
0:00:10.864702632  2166 0x7f994c052860 WARN           nvinferserver gstnvinferserver.cpp:519:gst_nvinfer_server_push_buffer:<primary-inference> error: inference failed with unique-id:1
Error: gst-library-error-quark: inference failed with unique-id:1 (1): gstnvinferserver.cpp(519): gst_nvinfer_server_push_buffer (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference
Exiting app

Here’s the config file for the model:

infer_config {
  unique_id: 1
  gpu_ids: [1]
  max_batch_size: 1
  backend {
    inputs: [ {
      name: "INPUT__0"
      dims: [3, 112, 112, 32]
    }]
    outputs: [
      {name: "OUTPUT__0"}
    ]
    trt_is {
      model_name: "r2plus1d_32"
      version: 1
      model_repo {
        root: "../../../trtis_model_repo"
        strict_model_config: false
        log_level: 1
      }
    }
  }

  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_NHWC
    maintain_aspect_ratio: 0
    frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
    frame_scaling_filter: 1
  }

  postprocess {
    labelfile_path: "../../../trtis_model_repo/r2plus1d_32/labels.txt"
    trtis_classification {
      topk: 5
      threshold: 0.1
    }
  }
}

input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
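For context, since strict_model_config: false asks Triton to derive the model configuration automatically, an explicit config.pbtxt consistent with the dims above might look like the sketch below. The pytorch_libtorch platform is inferred from the INPUT__0/OUTPUT__0 naming convention of Triton's PyTorch backend; the data types and output dims are assumptions:

name: "r2plus1d_32"
platform: "pytorch_libtorch"
max_batch_size: 1
input [
  {
    name: "INPUT__0"
    data_type: TYPE_FP32
    dims: [ 3, 112, 112, 32 ]
  }
]
output [
  {
    name: "OUTPUT__0"
    data_type: TYPE_FP32
    # hypothetical class count; replace with the model's real output size
    dims: [ 400 ]
  }
]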

I suspect that the problem arises from the “tensor_order: TENSOR_ORDER_NHWC” option, which is not what the model is expecting. I also tried the TENSOR_ORDER_NONE option, with the preprocess block changed as sketched below; that run resulted in the output that follows it:
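For reference, the preprocess block for that attempt, with only tensor_order changed from the config above:

preprocess {
  network_format: IMAGE_FORMAT_RGB
  # let nvinferserver try to deduce the tensor order from the model
  tensor_order: TENSOR_ORDER_NONE
  maintain_aspect_ratio: 0
  frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
  frame_scaling_filter: 1
}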

0:00:04.271828270  4705      0x54326d0 ERROR          nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary-inference> nvinferserver[UID 1]: Error in fixateInferenceInfo() <infer_cuda_context.cpp:162> [UID = 1]: InferContext(uid:1) cannot figure out input tensor order, please specify in config file(preprocess.)
0:00:04.271853019  4705      0x54326d0 ERROR          nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary-inference> nvinferserver[UID 1]: Error in initialize() <infer_base_context.cpp:83> [UID = 1]: Infer context faied to initialize inference information, nvinfer error:NVDSINFER_CONFIG_FAILED
0:00:04.271859648  4705      0x54326d0 WARN           nvinferserver gstnvinferserver_impl.cpp:439:start:<primary-inference> error: Failed to initialize InferTrtIsContext
0:00:04.271863061  4705      0x54326d0 WARN           nvinferserver gstnvinferserver_impl.cpp:439:start:<primary-inference> error: Config file path: configs/config_infer_r2plus1d.txt

I guess this all supports amycao’s answer that there is no support for batch processing of a single source.