TensorRT enqueue error

Hello,
I have an issue using TensorRT in our C++ code for scientific computations.

Ubuntu 16.04
GeForce 970
nvidia driver version: 410.66
CUDA version: 10.0
CUDNN version: 7
Tensorflow version: r1.11
TensorRT version: 5.0.2.6

When inference is executed, I get an error from the enqueue function of nvinfer. However, the error only appears on the second inference step, so the approach generally works. I just do not understand why it crashes on the second enqueue call.
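For context, here is a minimal sketch of what our per-call inference wrapper does (the names, buffer layout, and batch size are simplifications of our actual uffModel::infer(), not the exact source). A pattern like this is what fails on the second call:

```cpp
// Hypothetical sketch of a per-call inference wrapper (assumed layout:
// deviceBuffers[0] = input binding, deviceBuffers[1] = output binding,
// both allocated once with cudaMalloc and kept alive across calls).
#include <vector>
#include <cuda_runtime_api.h>
#include <NvInfer.h>

void infer(nvinfer1::IExecutionContext& context,
           void* deviceBuffers[2],          // persistent device memory
           std::vector<float>& input,
           std::vector<float>& output,
           cudaStream_t stream)
{
    // Copy host input into the persistent device input buffer.
    cudaMemcpyAsync(deviceBuffers[0], input.data(),
                    input.size() * sizeof(float),
                    cudaMemcpyHostToDevice, stream);

    // batchSize = 1; the bindings array must stay valid across calls.
    context.enqueue(1, deviceBuffers, stream, nullptr);

    // Copy the result back to the host.
    cudaMemcpyAsync(output.data(), deviceBuffers[1],
                    output.size() * sizeof(float),
                    cudaMemcpyDeviceToHost, stream);

    // Wait for this step to finish before the next enqueue reuses
    // the same stream and buffers.
    cudaStreamSynchronize(stream);
}
```

As far as I can tell, the device buffers and stream are kept alive and synchronized between calls, so I would expect repeated enqueue calls on the same execution context to work.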

This is the full error output:

#0  Foam::error::printStack(Foam::Ostream&) at ??:?
#1  Foam::sigFpe::sigHandler(int) at ??:?
#2  ? in "/lib/x86_64-linux-gnu/libc.so.6"
#3  ? in "/usr/local/cuda/lib64/libcudnn.so.7"
#4  cudnnActivationForward in "/usr/local/cuda/lib64/libcudnn.so.7"
#5  nvinfer1::rt::cuda::cudnnMLPMMLayer::execute(nvinfer1::rt::CommonContext const&, nvinfer1::rt::ExecutionParameters const&) const in "/usr/local/TensorRT-5.0.2.6/lib/libnvinfer.so.5"
#6  nvinfer1::rt::ExecutionContext::enqueue(int, void**, CUstream_st*, CUevent_st**) in "/usr/local/TensorRT-5.0.2.6/lib/libnvinfer.so.5"
#7  uffModel::infer(std::vector<float, std::allocator<float> >&, std::vector<float, std::allocator<float> >&) at ??:?
#8  Foam::combustionModels::FPVANNModel::correctCalculatedANNTransport() at ??:?
#9  Foam::combustionModels::FPVANNModel::correct() at ??:?
#10  ? at ??:?
#11  __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
#12  ? at ??:?

Thanks for your help!

Hello,

To help us debug, can you share a small repro containing the model, sample inference data, and source that demonstrates the multiple-enqueue error you are seeing?

Regards,
NVIDIA Enterprise Support