Description
I’m running TensorRT models for inference on images captured from a camera. However, I’m seeing a large amount of terminal output, which I suspect is slowing down the inference framerate. A minimal sketch of how I load the engine follows, and sample terminal output is shown further below. Can I disable all TensorRT terminal logging, and if so, how?
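For reference, this is roughly how I create the runtime and deserialize the engine (a minimal sketch only; the engine path and variable names are illustrative, not my exact pipeline):

import tensorrt as trt

# Default logger at INFO severity -- this is what emits the [TRT] [I] lines shown below
logger = trt.Logger(trt.Logger.INFO)
runtime = trt.Runtime(logger)

# Deserialize a prebuilt engine plan file (path is illustrative)
with open("model.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()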
Environment
TensorRT Version: 8.4.1
GPU Type: Jetson Xavier NX 16GB production
CUDA Version: 11.4.14
CUDNN Version: 8.4.1
Operating System + Version: Jetson Linux 35.1
Python Version (if applicable): 3.8
Sample Output Log to Suppress:
[03/07/2023-15:30:23] [TRT] [I] The logger passed into createInferRuntime differs from one already provided for an existing builder, runtime, or refitter. Uses of the global logger, returned by nvinfer1::getLogger(), will return the existing value.
[03/07/2023-15:30:23] [TRT] [I] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 1198, GPU 6153 (MiB)
[03/07/2023-15:30:23] [TRT] [I] Loaded engine size: 21 MiB
[03/07/2023-15:30:23] [TRT] [W] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
[03/07/2023-15:30:23] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +1, GPU +1, now: CPU 1221, GPU 6154 (MiB)
[03/07/2023-15:30:23] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +22, now: CPU 0, GPU 227 (MiB)
[03/07/2023-15:30:23] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 1199, GPU 6154 (MiB)
[03/07/2023-15:30:23] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +3, now: CPU 0, GPU 230 (MiB)
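My guess is that the severity passed to trt.Logger is what filters these messages (this is an assumption on my part; ERROR is just the level I would try):

import tensorrt as trt

# Assumption: raising the minimum severity should drop the [I] and [W] lines above,
# leaving only errors and internal errors
logger = trt.Logger(trt.Logger.ERROR)
runtime = trt.Runtime(logger)

Is that the right approach, or is there a global switch that silences all [TRT] output?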