The enqueue() method has been deprecated when used with engines built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag

Description

After converting a model with trtexec and running inference from a C++ project, every call to enqueue() prints a deprecation warning telling me to use enqueueV2() instead. I am looking for a way to disable these warning messages.

Environment

TensorRT Version: 8.4.1.5
GPU Type: 1660 Ti
Nvidia Driver Version: 515.65.01
CUDA Version: 11.7
CUDNN Version: 8.5.0
Operating System + Version: Linux 5.15.0-46-generic #49~20.04.1-Ubuntu
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Steps To Reproduce

I converted a model successfully via the /usr/src/tensorrt/bin/trtexec command. But when I run my project, it generates these messages on every inference:

[W] [TRT] The enqueue() method has been deprecated when used with engines built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. Please use enqueueV2() instead.

How can I disable these messages? My project is written in C++.

Hi,

You can suppress these warning messages by filtering on severity in the ILogger implementation you pass to TensorRT, so that only messages at Severity::kERROR or above are printed. Please refer to the following doc:
https://docs.nvidia.com/deeplearning/tensorrt/api/c_api/classnvinfer1_1_1_i_logger.html
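As a minimal sketch (the class and variable names here are placeholders, not part of the TensorRT API), a custom ILogger can drop anything below a chosen severity before it is printed:

    #include <NvInfer.h>
    #include <iostream>

    // Logger that only forwards messages at or above a severity threshold.
    class FilteredLogger : public nvinfer1::ILogger
    {
    public:
        explicit FilteredLogger(Severity threshold = Severity::kERROR)
            : mThreshold(threshold) {}

        void log(Severity severity, const char* msg) noexcept override
        {
            // Lower enum values are more severe (kINTERNAL_ERROR < kERROR < kWARNING < kINFO < kVERBOSE),
            // so only print messages at or above the threshold; warnings are dropped.
            if (severity <= mThreshold)
                std::cerr << msg << std::endl;
        }

    private:
        Severity mThreshold;
    };

    // Usage: pass this logger when creating the runtime (or builder), e.g.
    //   FilteredLogger gLogger{nvinfer1::ILogger::Severity::kERROR};
    //   auto runtime = nvinfer1::createInferRuntime(gLogger);

Also note that the warning itself points at the underlying fix: since your engine was built from an explicit-batch network, calling enqueueV2() instead of the deprecated enqueue() (for example, context->enqueueV2(bindings, stream, nullptr), where context, bindings, and stream are your existing execution context, binding pointers, and CUDA stream) removes the message at its source.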

Thank you.