Are there any steps or documentation for running a TensorRT-converted RT-DETR model with multi-stream input videos? The steps I followed are below:
- Export rtdetr-l.pt to ONNX with the argument dynamic=True (roughly the first sketch after this list).
- Export to .engine with trtexec:
  trtexec --onnx=rtdetr-l.onnx --saveEngine=rtdetr-l.engine
    --minShapes=images:1x3x640x640 --optShapes=images:6x3x640x640
    --maxShapes=images:8x3x640x640 --memPoolSize=workspace:4096
- On running the .engine with sample TRT inference code on multi-stream input videos, I found that the FPS is lower than with the .pt model (my inference loop is roughly the second sketch below).
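
For reference, a minimal sketch of the export step using the Ultralytics Python API (assuming rtdetr-l.pt is the pretrained checkpoint and that dynamic=True is what enables the dynamic batch dimension used by trtexec below):

```python
# Sketch of step 1: export RT-DETR to ONNX with a dynamic batch axis.
from ultralytics import RTDETR

model = RTDETR("rtdetr-l.pt")
model.export(format="onnx", dynamic=True)  # writes rtdetr-l.onnx
```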
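And a minimal sketch of the kind of batched inference loop I mean (not an official sample). It assumes the TensorRT >= 8.6 Python API (execute_async_v3 / tensor-name bindings), pycuda, a single output tensor, and the input name "images" from the trtexec shapes above; buffers are allocated per call only to keep the sketch short:

```python
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a default CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("rtdetr-l.engine", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()
stream = cuda.Stream()

# Find the (assumed single) output tensor name; the input name "images"
# matches the shapes passed to trtexec.
out_name = next(
    engine.get_tensor_name(i)
    for i in range(engine.num_io_tensors)
    if engine.get_tensor_mode(engine.get_tensor_name(i)) == trt.TensorIOMode.OUTPUT
)

def infer(batch: np.ndarray) -> np.ndarray:
    """Run one batch of shape (N, 3, 640, 640), N within min/max shapes."""
    batch = np.ascontiguousarray(batch, dtype=np.float32)
    context.set_input_shape("images", batch.shape)
    out_shape = tuple(context.get_tensor_shape(out_name))
    h_out = np.empty(out_shape, dtype=trt.nptype(engine.get_tensor_dtype(out_name)))

    # Per-call allocations keep the sketch short; a real pipeline should
    # allocate device buffers once for the max batch size and reuse them.
    d_in = cuda.mem_alloc(batch.nbytes)
    d_out = cuda.mem_alloc(h_out.nbytes)
    context.set_tensor_address("images", int(d_in))
    context.set_tensor_address(out_name, int(d_out))

    cuda.memcpy_htod_async(d_in, batch, stream)
    context.execute_async_v3(stream.handle)
    cuda.memcpy_dtoh_async(h_out, d_out, stream)
    stream.synchronize()
    return h_out

# Usage: stack one preprocessed frame per video stream into a single batch.
# preds = infer(np.stack(frames))   # frames: list of (3, 640, 640) arrays
```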
Is there any sample code for the TensorRT-converted RT-DETR with multi-batching support in Python?