Error when running SampleMNIST using RTX 2080 Ti & TensorRT 4.0.1.6

Hello, can anyone help me out?

I ran into a problem when running any of the samples in TensorRT 4.0.1.6 (e.g. sampleMNIST).
The reason I installed TensorRT 4.0.1.6 is that we want a development environment similar to the
DRIVE AGX Xavier (DriveWorks 1.2 & TensorRT 4).

The errors are shown below:
ERROR: cudnnFullyConnectedLayer.cpp (108) - Cuda Error in rowMajorMultiply: 13
ERROR: cudnnFullyConnectedLayer.cpp (108) - Cuda Error in rowMajorMultiply: 13
ERROR: sample_mnist: Unable to create engine

The platform details:
Host : Ubuntu 16.04.5 LTS
GPU : NVIDIA RTX 2080 Ti
Driver version: 415.27
CUDA version : cuda-toolkit-9-0
CUDNN version : 7.1.3.16-1+cuda9.0

We have five machines with the same configuration, and all of them produce the same errors.
Does the RTX 2080 Ti only work with TensorRT 5?

Hello,

TensorRT supports all NVIDIA hardware with compute capability SM 3.0 or higher, and the RTX 2080 Ti is CUDA SM 7.5. I doubt this is GPU-specific; it is more likely an installation/configuration issue. To rule out any dependency issues, can you try an NVIDIA GPU Cloud (NGC) container? It's free to pull. For TRT 4.x, you'd want to pull

docker pull nvcr.io/nvidia/tensorrt:18.08-py3
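
To actually exercise the sample inside that container, something along these lines should work (a sketch only; it assumes the host has nvidia-docker 2 / the NVIDIA container runtime set up, and the exact sample paths inside the 18.08 image may differ slightly):

nvidia-docker run -it --rm nvcr.io/nvidia/tensorrt:18.08-py3
# inside the container: build the shipped samples, then run the MNIST one
cd /workspace/tensorrt/samples && make
cd /workspace/tensorrt/bin && ./sample_mnist

If the same "Cuda Error in rowMajorMultiply: 13" appears inside the container as well, that points away from a broken host-side cuDNN/TensorRT installation.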

Hello,
Thanks for your reply, NVES. I tried the suggestion, and it showed the same error messages.
So instead, our team will probably use TensorRT 5 on the host PC for development and skip the functionality that the DRIVE AGX Xavier does not support.

Hello, here is an update on the current state.

We have now tested the sample code contained in DriveWorks 1.5 and still get errors. The environment runs on Ubuntu 16.04 LTS with nothing installed except the NVIDIA driver and DriveWorks 1.5 (CUDA 10.0, TensorRT 4.0 GA).

So I have the same question again: does the RTX 2080 Ti only work with TensorRT 5 and not with TensorRT 4?
We tested the same environment with a Titan Xp GPU card and it ran smoothly.
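
For reference, here is a minimal standalone CUDA check (not taken from the DriveWorks or TensorRT samples, just a sketch) that prints the driver/runtime versions and the compute capability of each visible GPU; on the 2080 Ti machines it should report 7.5, versus 6.1 on the Titan Xp:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int runtimeVersion = 0, driverVersion = 0;
    cudaRuntimeGetVersion(&runtimeVersion);   // CUDA runtime the binary was built against
    cudaDriverGetVersion(&driverVersion);     // highest CUDA version the installed driver supports
    printf("CUDA runtime: %d, driver supports up to: %d\n", runtimeVersion, driverVersion);

    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess) {
        printf("cudaGetDeviceCount failed\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, compute capability %d.%d\n", i, prop.name, prop.major, prop.minor);
    }
    return 0;
}

Build and run with: nvcc device_info.cu -o device_info && ./device_info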