Cannot execute TensorRT samples

Hello,

When I execute the TensorRT samples (any of them), they fail with an error, e.g. sample_googlenet:

Building and running a GPU inference engine for GoogleNet, N=4...
ERROR: cudnnEngine.cpp (56) - Cuda Error in initializeCommonContext: 2
ERROR: cudnnEngine.cpp (56) - Cuda Error in initializeCommonContext: 2
sample_googlenet: sampleGoogleNet.cpp:98: void caffeToGIEModel(const string&, const string&, const std::vector<std::__cxx11::basic_string<char> >&, unsigned int, nvinfer1::IHostMemory*&): Assertion `engine' failed.
Aborted (core dumped)

TensorRT 3.0.4
CUDNN 7.0.5
CUDA 9.0
Ubuntu 16.04
GTX 1080

Any suggestions on how to resolve this? Thanks in advance.

We created a new “Deep Learning Training and Inference” section in DevTalk to improve the experience for deep learning, accelerated computing, and HPC users:
https://devtalk.nvidia.com/default/board/301/deep-learning-training-and-inference-/

We are moving active deep learning threads to the new section.

URLs for topics will not change with the re-categorization, so your bookmarks and links will continue to work as before.

-Siddharth

Does ./infer_device work? It is our sanity-check application that verifies your dependencies are installed correctly. Can you run it and post the output?
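
If ./infer_device is not available on your system, a minimal CUDA runtime check along the following lines exercises the same device-initialization and allocation path that the samples hit. This is only a sketch, not the infer_device source, and the file name cuda_check.cu is just an example. The error code 2 in the log usually corresponds to cudaErrorMemoryAllocation, so it is worth testing a plain cudaMalloc outside of TensorRT.

// cuda_check.cu: minimal CUDA sanity check (hypothetical stand-in for infer_device)
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("Found %d CUDA device(s)\n", count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, compute %d.%d, %.1f GiB global memory\n",
               i, prop.name, prop.major, prop.minor,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }

    // Try a small allocation: CUDA error 2 is typically cudaErrorMemoryAllocation,
    // so this exercises the same path the samples appear to fail on.
    void* p = nullptr;
    err = cudaMalloc(&p, 16 << 20);  // 16 MiB
    if (err != cudaSuccess) {
        printf("cudaMalloc failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    cudaFree(p);
    printf("cudaMalloc/cudaFree OK\n");
    return 0;
}

Compile with "nvcc cuda_check.cu -o cuda_check" and run ./cuda_check. If this fails, the problem is likely in the CUDA driver/runtime setup rather than in TensorRT itself.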