Description
I am working with the TensorRT C++ API.
I have noticed that TensorRT 8.0 is more verbose than previous versions when calling the refitting API. For example, each weight-refitting call prints something like the following to the console:
```
Using cublasLt as a tactic source
[MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 12589, GPU 2718 (MiB)
Using cuDNN as a tactic source
[MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 12589, GPU 2726 (MiB)
[MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 12597, GPU 2710 (MiB)
[MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 12553, GPU 2508 (MiB)
[MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 12264, GPU 2154 (MiB)
[MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 12081, GPU 1870 (MiB)
```
Is there any way to suppress these log messages?
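Assuming these messages are emitted at `ILogger::Severity::kINFO` through the logger supplied by the application, a severity filter along the lines of the sketch below should hide them, but I would like to confirm whether that is the intended mechanism or whether there is a dedicated setting for the refit-time output (this is a minimal sketch, not my exact logger):

```cpp
#include <NvInfer.h>
#include <iostream>

// Sketch of a severity-filtering logger: forwards warnings and errors,
// drops kINFO/kVERBOSE output such as the "Init cuBLAS/cuBLASLt" lines above.
class FilteredLogger : public nvinfer1::ILogger
{
public:
    void log(Severity severity, const char* msg) noexcept override
    {
        // Severity ordering: kINTERNAL_ERROR < kERROR < kWARNING < kINFO < kVERBOSE
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
};

// The same instance is passed wherever a logger is required, e.g.
// createInferRuntime(logger) and createInferRefitter(engine, logger),
// so it should see every message produced on the refit path.
```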
Environment
TensorRT Version: TensorRT-8.0.3.4
GPU Type: GeForce 2080
Nvidia Driver Version: 465.89
CUDA Version: 11.0
CUDNN Version: 8.2.1
Operating System + Version: Windows 10
Python Version (if applicable): N/A
TensorFlow Version (if applicable): N/A
PyTorch Version (if applicable): N/A
Baremetal or Container (if container which image + tag): N/A
Relevant Files
N/A
Steps To Reproduce
1. Build a refittable TensorRT engine with the C++ API (the engine must be built with the refit flag so that IRefitter can be used).
2. Create an IRefitter for the engine, update one or more weights, and call refitCudaEngine() (sketched below).
3. Observe that each refit call prints the cuBLAS/cuBLASLt and cuDNN Init messages quoted above; there is no error or traceback, only the verbose output.
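For reference, a minimal sketch of the refit path that produces the output (identifiers such as `refitOnce`, `conv1`, and `newKernel` are illustrative, not my exact code; it assumes the engine was built as refittable):

```cpp
#include <NvInfer.h>
#include <vector>

// Assumes `engine` was built with BuilderFlag::kREFIT, `logger` is the
// application's nvinfer1::ILogger, and "conv1" is an illustrative layer
// whose kernel weights are being replaced.
void refitOnce(nvinfer1::ICudaEngine& engine,
               nvinfer1::ILogger& logger,
               const std::vector<float>& newKernel)
{
    nvinfer1::IRefitter* refitter = nvinfer1::createInferRefitter(engine, logger);

    nvinfer1::Weights w{nvinfer1::DataType::kFLOAT,
                        newKernel.data(),
                        static_cast<int64_t>(newKernel.size())};
    refitter->setWeights("conv1", nvinfer1::WeightsRole::kKERNEL, w);

    // This call is where the cuBLAS/cuBLASLt and cuDNN "Init" lines appear.
    refitter->refitCudaEngine();

    refitter->destroy();  // destroy() is deprecated in 8.x; `delete refitter;` also works
}
```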