CUDA error when using TensorRT 5 for model inference and TensorFlow for data preprocessing

Environment:
Ubuntu 16.04
P100, 4 GPUs
Driver Version: 384.81
CUDA 9.0
cuDNN 7.3.1
Python 3.6
tensorflow-gpu 1.12.0
TensorRT-5.0.2.6

Error:
Cuda error in file src/implicit_gemm.cu at line 1214: invalid resource handle
[TensorRT] ERROR: cuda/customWinogradConvActLayer.cpp (310) - Cuda Error in execute: 33
[TensorRT] ERROR: cuda/customWinogradConvActLayer.cpp (310) - Cuda Error in execute: 33

Details:
If I use tensorflow 1.12.0 (CPU-only) instead of tensorflow-gpu 1.12.0, the program runs normally.
If I use DALI for data preprocessing, the same error occurs.

Please tell me how to fix this problem.

Error 33 is an invalid resource handle, which can be caused by a variety of issues. Very likely the CUDA toolkit isn't configured correctly on your system. I highly recommend trying the NGC Docker containers, which have minimal host-side dependencies:

https://www.nvidia.com/en-us/gpu-cloud/
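
One common cause of this error is a CUDA context mismatch: tensorflow-gpu creates its own CUDA context when it initializes, and if TensorRT's engine, stream, or buffers were created under a different context, `execute_async` can fail with "invalid resource handle" (error 33). A minimal sketch of one workaround, assuming a serialized engine file named `model.engine` (hypothetical) and pycuda, is to give TensorRT a dedicated context and push/pop it around every inference call:

```python
# Sketch (untested, assumes a GPU, TensorRT 5 and pycuda installed):
# isolate TensorRT in its own CUDA context so that tensorflow-gpu's
# context does not invalidate TensorRT's stream and device handles.
# "model.engine" and the bindings list are hypothetical placeholders.
import pycuda.driver as cuda
import tensorrt as trt

cuda.init()
trt_ctx = cuda.Device(0).make_context()  # dedicated context for TensorRT

logger = trt.Logger(trt.Logger.WARNING)
with open("model.engine", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()
stream = cuda.Stream()  # created while trt_ctx is current
# ... allocate device buffers (the bindings) here, also under trt_ctx ...
trt_ctx.pop()  # let TensorFlow's context become current again

def infer(bindings, batch_size=1):
    # Make the TensorRT context current for the duration of the call;
    # otherwise execute_async may run against TensorFlow's context and
    # fail with "invalid resource handle" (error 33).
    trt_ctx.push()
    try:
        context.execute_async(batch_size, bindings, stream.handle)
        stream.synchronize()
    finally:
        trt_ctx.pop()
```

This keeps the TensorFlow preprocessing and the TensorRT inference in separate, well-defined contexts instead of letting them share whichever context happens to be current.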

My environment is OK: I can train my model with tensorflow-gpu, and I can run inference with TensorRT.
Could you give me a working sample that uses tensorflow-gpu and TensorRT at the same time?