For model inference, I have 5 engine model files and want to run them in series. I want to know the maximum number of serial models supported by nvinfer.
deepstream-app version 6.1.0
CUDA Driver Version: 11.4
CUDA Runtime Version: 11.0
TensorRT Version: 8.2
cuDNN Version: 8.4
libNVWarp360 Version: 2.0.1d3
Device: Jetson Orin
November 16, 2022, 1:17pm
Which do you mean: running multiple models with one nvinfer, or running multiple nvinfers at the same time?
Running multiple nvinfers at the same time.
November 17, 2022, 10:36am
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
It depends on the memory size, codec capability, etc. of your device.
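For reference, chaining several nvinfer instances in deepstream-app is usually done with one [primary-gie] section plus additional [secondary-gieN] sections, each tied to the primary via operate-on-gie-id. A minimal sketch (the config-file paths and model names here are placeholders, not from this thread):

```ini
[primary-gie]
enable=1
gie-unique-id=1
config-file=pgie_config.txt

[secondary-gie0]
enable=1
gie-unique-id=2
operate-on-gie-id=1
config-file=sgie0_config.txt

[secondary-gie1]
enable=1
gie-unique-id=3
operate-on-gie-id=1
config-file=sgie1_config.txt
```

There is no fixed per-pipeline count enforced by this config format; as noted above, the practical limit comes from the device's memory and other resources.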
December 13, 2022, 5:39am
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.