caffemodel to TensorRT problem

WIN10 1903 (OS 18362.476)
CUDA 9.0
CUDNN 7.4.1
Python 3.5.6
Caffe
TensorRT 6.0.1.5

I want to convert a Faster R-CNN caffemodel to TensorRT, so I first tried testing with the official VGG16_faster_rcnn_final.caffemodel.
If I use the official test.prototxt that comes with Caffe, trtexec prints the following error and crashes:
[11/06/2019-15:35:44] [E] [TRT] Parameter check failed at: Network.cpp::nvinfer1::Network::addInput::671, condition: isValidDims(dims, hasImplicitBatchDimension())

If I use TensorRT-6.0.1.5\data\faster-rcnn\faster_rcnn_test_iplugin.prototxt instead, I get:
[11/06/2019-14:19:42] [E] Could not find output blob prob
[11/06/2019-14:19:42] [E] Parsing model failed
[11/06/2019-14:19:42] [E] Engine could not be created
&&&& FAILED TensorRT.trtexec
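
The "Could not find output blob prob" message usually means trtexec is looking for its default Caffe output blob name (prob), which does not exist in the Faster R-CNN network. A possible workaround is to name the network's real output blobs explicitly with `--output`. The blob names below (bbox_pred, cls_prob, rois) are taken from the outputs used in TensorRT's faster-rcnn sample; verify them against your prototxt before running:

```shell
trtexec --deploy=faster_rcnn_test_iplugin.prototxt \
        --model=VGG16_faster_rcnn_final.caffemodel \
        --output=bbox_pred --output=cls_prob --output=rois
```

This is only a sketch; the iPlugin layers in that prototxt also need the matching plugin factory registered, which trtexec alone may not provide.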

I compared the two prototxt files: the network itself appears to be the same, but some details differ slightly.

What should I do?
Thank you.
test.txt (6.6 KB)

New problem:
[11/06/2019-17:08:21] [E] [TRT] C:\source\rtSafe\cuda\cudaConvolutionRunner.cpp (303) - Cudnn Error in nvinfer1::rt::cuda::CudnnConvolutionRunner::execute: 8 (CUDNN_STATUS_EXECUTION_FAILED)

Hi,

Could you please try running the model with the latest supported cuDNN and CUDA versions?
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-601/tensorrt-support-matrix/index.html#platform-matrix
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-601/tensorrt-support-matrix/index.html#software-version-platform
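
To compare your installed versions against the support matrix, you can query them from the command line. This is a sketch; the cudnn.h path is an assumption and differs per install (on Windows it is typically under %CUDA_PATH%\include):

```shell
# Print the installed CUDA toolkit version
nvcc --version

# Print the cuDNN version from its header (path is an assumption; adjust to your install)
grep -E "define CUDNN_(MAJOR|MINOR|PATCHLEVEL)" /usr/local/cuda/include/cudnn.h
```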

Thanks