How to limit the GPU memory usage of TensorRT engine inference?

I have a TensorRT engine converted from a YOLOv3 .etlt model, and I am running inference with Python on an RTX 3070 machine. How can I control the fraction of GPU memory taken by the TensorRT engine? Is there an option like the one in TensorFlow:

config.gpu_options.per_process_gpu_memory_fraction = 0.1

Is there a similar parameter in TensorRT to control this?
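For context, TensorRT has no direct per-process memory-fraction setting like TensorFlow's. The closest knob I am aware of is the workspace memory-pool limit, which caps the scratch memory TensorRT may use when the engine is built. A minimal sketch, assuming the TensorRT 8.4+ Python bindings are installed (the 256 MiB value is an arbitrary example):

```python
# Sketch only: assumes TensorRT >= 8.4 Python bindings and a CUDA-capable GPU.
# The workspace limit bounds tactic scratch memory at engine *build* time;
# it is not a hard cap on total runtime GPU memory.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()

# Limit the workspace pool to 256 MiB (example value).
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 256 * 1024 * 1024)
```

Note that this must be applied when converting the model to an engine; it cannot shrink the memory footprint of an already-built engine file.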

Please try setting:
export TF_FORCE_GPU_ALLOW_GROWTH=true
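The same environment variable can also be set from inside a Python script, as long as it happens before the framework initializes the GPU. A small sketch:

```python
import os

# Must be set before TensorFlow (or any framework reading this variable)
# initializes the GPU; it makes memory be allocated on demand instead of
# nearly all of it being reserved up front.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

print(os.environ["TF_FORCE_GPU_ALLOW_GROWTH"])  # → true
```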


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.