Running a TensorRT-converted RT-DETR model with multi-stream (multi-batch) input

Are there any steps or documentation for running a TensorRT-converted RT-DETR model on multi-stream input videos? The steps I followed are below:

  1. Export rtdetr-l.pt to ONNX with the argument `dynamic=True`

  2. Export to .engine with:
    trtexec \
      --onnx=rtdetr-l.onnx \
      --saveEngine=rtdetr-l.engine \
      --minShapes=images:1x3x640x640 \
      --optShapes=images:6x3x640x640 \
      --maxShapes=images:8x3x640x640 \
      --memPoolSize=workspace:4096

  3. When running the .engine with sample TensorRT inference code on multi-stream input videos, I found that the FPS is lower than with the .pt model.
    Is there any sample Python code for TensorRT-converted RT-DETR with multi-batch support?
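For the multi-batch part, a common pattern is to collect one frame per stream and stack them into a single NCHW batch before each inference call. The sketch below shows only that host-side batching step (no TensorRT calls, so it runs anywhere); the function name `batch_frames` and the assumption that frames are already resized to 640x640 are mine, not from any official RT-DETR sample.

```python
import numpy as np

def batch_frames(frames, size=(640, 640)):
    """Stack one BGR frame per stream into an NCHW float32 batch.

    `frames` is a list of HxWx3 uint8 arrays (one per video stream).
    Resizing/letterboxing is assumed done upstream (e.g. cv2.resize);
    here we only check shapes, scale to [0, 1], and reorder to NCHW.
    """
    h, w = size
    batch = np.empty((len(frames), 3, h, w), dtype=np.float32)
    for i, f in enumerate(frames):
        assert f.shape == (h, w, 3), f"frame {i} must be pre-resized to {size}"
        # uint8 HWC -> float32 CHW, scaled to [0, 1]
        batch[i] = f.transpose(2, 0, 1).astype(np.float32) / 255.0
    # Contiguous memory so the host-to-device copy is a single transfer
    return np.ascontiguousarray(batch)

# With a dynamic-shape engine, the actual batch size must be told to the
# execution context before each inference, e.g. (TensorRT 8.5+ API):
#   context.set_input_shape("images", batch.shape)
# followed by the usual cudaMemcpy of `batch` and execute_async_v3/v2.
```

Note that with the engine built above, the batch dimension must stay within minShapes/maxShapes (1 to 8), and throughput is best near the optShapes batch of 6; submitting streams one frame at a time in batch 1 forgoes most of the batching benefit and can easily look slower than the .pt model.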

Have you checked this: GitHub - DataXujing/TensorRT-DETR (second-prize code submission for the NVIDIA-Alibaba 2021 TensorRT competition, team: Medicon AI Lab)?