TensorRT 3 sample run error: Cuda Error in smVersion

Hi all,

I built and ran the TensorRT 3.0.4 samples in a Docker container, but got the errors below when running the sample.

$./bin/sample_mnist
ERROR: cudnnLayerUtils.cpp (288) - Cuda Error in smVersion: 35
terminate called after throwing an instance of 'nvinfer1::CudaError'
what(): std::exception
Aborted (core dumped)

Environment information:
OS: Linux 4.2.0-37-generic
GCC: 4.8.4
CUDA: 9.0
cuDNN: 7.0
GPU: NVIDIA Tesla P4, driver 390.25
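
To check which device and compute capability the CUDA runtime actually sees inside the container, a minimal diagnostic like the sketch below can help (the file name check_sm.cu is only illustrative and not part of the TensorRT samples). The Tesla P4 is compute capability 6.1 (sm_61), so the 35 in the error message may hint at a device or driver visibility problem inside the container.

// check_sm.cu -- print the compute capability the CUDA runtime sees
// (illustrative diagnostic only, not part of the TensorRT samples)
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // A Tesla P4 should report compute capability 6.1 (sm_61).
        std::printf("Device %d: %s, compute capability %d.%d\n",
                    i, prop.name, prop.major, prop.minor);
    }
    return 0;
}

Compile and run it inside the same container, for example:
$ nvcc check_sm.cu -o check_sm && ./check_sm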

Thanks in advance for your help.

wuyan

We created a new "Deep Learning Training and Inference" section on Devtalk to improve the experience for deep learning, accelerated computing, and HPC users:
https://devtalk.nvidia.com/default/board/301/deep-learning-training-and-inference-/

We are moving active deep learning threads to the new section.

URLs for topics will not change with the re-categorization, so your bookmarks and links will continue to work as before.

-Siddharth