Description
When I use TensorRT enqueueV2, it doesn't work, and no error is reported.
The output is as follows, with the log level set to verbose:
I print here0 before enqueueV2 and here1 after enqueueV2; it seems the problem is in enqueueV2.
I also used trtexec to do the same thing, with the command ./trtexec --loadEngine=./my.engine --batch=1 --iterations=200 --verbose, and the output is as below:
It just doesn't work…
Also, I created a TensorRT custom plugin for this engine, but I didn't find any warning or error related to it, so I'm quite confused.
Environment
TensorRT Version: 8.2.3.0
GPU Type: 1080Ti
Nvidia Driver Version: 470.25
CUDA Version: 11.4
CUDNN Version: 8.2
Operating System + Version: Ubuntu 18.04
Relevant Files
Steps To Reproduce
Here are some insights and common troubleshooting steps for issues related to TensorRT's enqueueV2 function, the use of trtexec, and potential problems with custom plugins:
Common Issues with enqueueV2:
- Version Compatibility: Check that you are using a compatible version of TensorRT. Review the release notes and documentation for any known issues associated with enqueueV2.
- Input Data Format: Ensure that your input data matches the expected format and shape for the model; mismatched input data can lead to silent failures in enqueueV2 (a minimal validation sketch follows this list).
- Model Optimization: Confirm that the model has been optimized correctly for TensorRT. This includes checking layer precision, optimization profiles, and layer support.
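As a starting point for the input checks above, here is a minimal sketch using the TensorRT 8.x C++ API that prints each binding's resolved dimensions and flags any unresolved dynamic dimension before enqueueV2 is called. The function name `checkBindings` is illustrative, and an already-created engine and execution context are assumed:

```cpp
#include <NvInfer.h>
#include <iostream>

// Print every binding's dimensions as the execution context sees them,
// and fail early if a dynamic dimension was never resolved.
bool checkBindings(const nvinfer1::ICudaEngine& engine,
                   const nvinfer1::IExecutionContext& context)
{
    for (int i = 0; i < engine.getNbBindings(); ++i)
    {
        nvinfer1::Dims dims = context.getBindingDimensions(i);
        std::cout << (engine.bindingIsInput(i) ? "input  " : "output ")
                  << engine.getBindingName(i) << ":";
        for (int d = 0; d < dims.nbDims; ++d)
        {
            std::cout << " " << dims.d[d];
            if (dims.d[d] < 0)
            {
                // -1 means a dynamic dimension was never set on the context.
                std::cerr << "\nunresolved dynamic dimension on binding " << i
                          << " -- call setBindingDimensions() first\n";
                return false;
            }
        }
        std::cout << "\n";
    }
    return true;
}
```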
Debugging Steps for enqueueV2:
- Check for Error Messages: Watch for any warnings or error messages generated during the call to enqueueV2, and check its boolean return value; this information can be crucial for diagnosing the issue.
- Enable Logging: Set TensorRT logging to verbose to obtain detailed insight into the internal workings and potential issues while executing enqueueV2 (a minimal logger sketch follows this list).
- Input Validation: Validate the inputs and parameters being passed to enqueueV2 to ensure they are compatible with the model.
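For the logging and return-value checks above, a sketch might look like the following. The setup of the engine, context, and device buffers is elided, and `runInference` is an illustrative name:

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <iostream>

// Logger that forwards everything, including kVERBOSE, to stdout.
class VerboseLogger : public nvinfer1::ILogger
{
public:
    void log(Severity severity, const char* msg) noexcept override
    {
        std::cout << "[TRT " << static_cast<int>(severity) << "] "
                  << msg << "\n";
    }
};

bool runInference(nvinfer1::IExecutionContext& context, void** bindings,
                  cudaStream_t stream)
{
    // enqueueV2 returns false if the work could not be enqueued at all.
    if (!context.enqueueV2(bindings, stream, nullptr))
    {
        std::cerr << "enqueueV2 returned false\n";
        return false;
    }
    // Failures inside the kernels only surface when the stream is
    // synchronized, so check the CUDA error state here as well.
    cudaError_t err = cudaStreamSynchronize(stream);
    if (err != cudaSuccess)
    {
        std::cerr << "CUDA error after enqueueV2: "
                  << cudaGetErrorString(err) << "\n";
        return false;
    }
    return true;
}
```

Pass an instance of VerboseLogger to nvinfer1::createInferRuntime() when deserializing the engine so that kVERBOSE messages are actually emitted.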
Using trtexec and Custom Plugins:
- trtexec Tool: Double-check your command-line arguments for trtexec to ensure proper benchmarking and profiling of your engine.
- Custom Plugins: If you are using custom plugins, verify that they are correctly implemented and registered; an engine that references an unregistered plugin cannot be deserialized. When running trtexec, load the plugin library with --plugins (a registry-check sketch follows this list).
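To verify registration as suggested above, one option is to query the global plugin registry before deserializing the engine. A sketch, where "MyPlugin" and "1" are hypothetical plugin name and version; substitute the values your IPluginCreator reports:

```cpp
#include <NvInfer.h>
#include <iostream>

// Registration is typically done once, at namespace scope, in the
// plugin's .cpp file:
// REGISTER_TENSORRT_PLUGIN(MyPluginCreator);

// Query the global registry for a creator with the given name/version.
bool pluginIsRegistered(const char* name, const char* version)
{
    nvinfer1::IPluginCreator* creator =
        getPluginRegistry()->getPluginCreator(name, version, "");
    if (creator == nullptr)
    {
        std::cerr << "plugin " << name << " v" << version
                  << " not found in the registry; an engine that uses it "
                     "cannot be deserialized\n";
        return false;
    }
    return true;
}

// Example: pluginIsRegistered("MyPlugin", "1");
```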
Identifying and Fixing the Issue:
- Step-by-Step Debugging: Simplify the inference process by breaking it into smaller chunks to identify where the failure occurs; this approach helps isolate specific issues (a step-through sketch follows this list).
- Consult Documentation and Forums: Refer to the official TensorRT documentation and community forums for troubleshooting advice and to identify common problems related to enqueueV2 and custom plugins.
- Collaboration: If the issue persists, consider reaching out to the TensorRT community or NVIDIA support for further assistance.
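One way to break the pipeline into chunks, as suggested above, is to synchronize and check for CUDA errors after every stage. In the sketch below, `stepThrough` is an illustrative name, and treating bindings[0] as the input and bindings[1] as the output is a placeholder assumption; in real code, look the indices up with engine->getBindingIndex():

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <iostream>

// Synchronize the stream and report which stage failed, if any.
static bool syncStage(const char* stage, cudaStream_t stream)
{
    cudaError_t e = cudaStreamSynchronize(stream);
    if (e != cudaSuccess)
    {
        std::cerr << stage << " failed: " << cudaGetErrorString(e) << "\n";
        return false;
    }
    std::cout << stage << " ok\n";
    return true;
}

bool stepThrough(nvinfer1::IExecutionContext& context, void** bindings,
                 const void* hostIn, void* hostOut,
                 size_t inBytes, size_t outBytes, cudaStream_t stream)
{
    // Stage 1: host-to-device input copy.
    cudaMemcpyAsync(bindings[0], hostIn, inBytes,
                    cudaMemcpyHostToDevice, stream);
    if (!syncStage("H2D copy", stream)) return false;

    // Stage 2: inference.
    if (!context.enqueueV2(bindings, stream, nullptr))
    {
        std::cerr << "enqueueV2 returned false\n";
        return false;
    }
    if (!syncStage("inference", stream)) return false;

    // Stage 3: device-to-host output copy.
    cudaMemcpyAsync(hostOut, bindings[1], outBytes,
                    cudaMemcpyDeviceToHost, stream);
    return syncStage("D2H copy", stream);
}
```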
By following these steps and suggestions, you should be able to diagnose and potentially resolve the issues you are facing with enqueueV2 and related components in your environment.