TensorRT 5.0.2.6 with cuDNN 7.3.1: C++ execution raises "Engine.cpp (555) - Cuda Error in execute: 77"

Hello,
My platform:
Linux distro and version - Linux-x86_64, Ubuntu 16.04
GPU type - GeForce GTX 1080
NVIDIA driver version - 410.72
CUDA version - Release 9.0, V9.0.252
cuDNN version - 7.3.1
Python version - 3.5.2
TensorFlow version - 1.8
TensorRT version - 5.0.2.6

For the conversion step I used the Python version of TensorRT together with the UFF converter.
The frozen-graph model (.pb file) was successfully converted to a UFF file without any errors or warnings.

From here on I am using the C++ version of TensorRT.
The UFF file was successfully parsed with nvuffparser::IUffParser::parse, without any errors or warnings.
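
For reference, the parsing step looks roughly like this (a minimal sketch; the logger is standard boilerplate, and the tensor names, dimensions, and file name are placeholders for my actual network):

#include <iostream>
#include <NvInfer.h>
#include <NvUffParser.h>

// Minimal logger required by the TensorRT builder.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();
    nvuffparser::IUffParser* parser = nvuffparser::createUffParser();

    // Placeholder tensor names and dimensions -- they must match the UFF graph.
    parser->registerInput("input_tensor", nvinfer1::Dims3(3, 224, 224),
                          nvuffparser::UffInputOrder::kNCHW);
    parser->registerOutput("output_tensor");

    if (!parser->parse("model.uff", *network, nvinfer1::DataType::kFLOAT))
    {
        std::cerr << "UFF parsing failed" << std::endl;
        return 1;
    }
    // ... continued in the build step below ...
}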

The CUDA engine was then successfully built with nvinfer1::IBuilder::buildCudaEngine, again without any errors or warnings.
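
The build step, continuing from the sketch above (the batch size and workspace size here are values I chose, not anything mandated by the API):

    builder->setMaxBatchSize(1);            // largest batch the engine must support
    builder->setMaxWorkspaceSize(1 << 28);  // 256 MiB of scratch memory for tactic selection

    nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);
    if (!engine)
    {
        std::cerr << "Engine build failed" << std::endl;
        return 1;
    }

    // The parser, network, and builder can be released once the engine exists.
    parser->destroy();
    network->destroy();
    builder->destroy();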

But when I call nvinfer1::IExecutionContext::execute, I get the following error:
Engine.cpp (555) - Cuda Error in execute: 77

According to the CUDA Toolkit reference manual:
cudaErrorIllegalAddress = 77
The device encountered a load or store instruction on an invalid memory address. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.
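
For what it's worth, since cudaErrorIllegalAddress can also be triggered by undersized or wrongly ordered device buffers, here is roughly my call site (again a sketch continuing from the code above; I size each binding from the engine itself, assuming kFLOAT tensors):

    // Requires <vector> and <cuda_runtime_api.h> in addition to the headers above.
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();

    const int batchSize = 1;
    const int nbBindings = engine->getNbBindings();
    std::vector<void*> bindings(nbBindings, nullptr);

    for (int i = 0; i < nbBindings; ++i)
    {
        // Binding dimensions exclude the batch dimension in this API.
        nvinfer1::Dims dims = engine->getBindingDimensions(i);
        size_t count = batchSize;
        for (int d = 0; d < dims.nbDims; ++d)
            count *= dims.d[d];
        cudaMalloc(&bindings[i], count * sizeof(float));  // assuming kFLOAT I/O
    }

    // ... cudaMemcpy the input data into the input binding here ...

    // This is the call that fails with "Cuda Error in execute: 77".
    if (!context->execute(batchSize, bindings.data()))
        std::cerr << "execute failed" << std::endl;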

When I go back to the following versions (with the same UFF file):
TensorRT version - 4.0.1.6
cuDNN version - 7.1.4

everything works properly without any other configuration change.
Consequently, I believe the error originates in internal logic in TensorRT 5.0.2.6, in cuDNN 7.3.1, or in both.

Please advise.

Hello,

Please make sure that the UFF file was created with the same version of TensorRT as the one being used during inference.
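
As a quick sanity check, you can print the TensorRT version your inference binary is compiled and linked against and compare it with the version of the tensorrt Python package that produced the UFF file. A minimal sketch (getInferLibVersion() is part of the public C++ API and, as I understand it, encodes the version as major * 1000 + minor * 100 + patch):

#include <cstdio>
#include <NvInfer.h>

int main()
{
    // Version of the headers the binary was compiled against.
    std::printf("Compiled against TensorRT %d.%d.%d\n",
                NV_TENSORRT_MAJOR, NV_TENSORRT_MINOR, NV_TENSORRT_PATCH);

    // Version of the library actually loaded at runtime.
    int v = getInferLibVersion();
    std::printf("Linked against TensorRT %d.%d.%d\n",
                v / 1000, (v / 100) % 10, v % 100);
    return 0;
}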