TensorRT - Error: could not build engine

The sample code shipped with TensorRT fails to create the engine.

For the giexec sample:
cudnnEngine.cpp (45) - Cuda Error in initializeCommonContext: 1
could not build engine
Engine could not be created

For sample_mnist and sample_googlenet:
Assertion `engine' failed.
Aborted (core dumped)

Using Ubuntu 14.04.
GPUs:
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX TIT...  Off  | 0000:05:00.0      On |                  N/A |
| 22%   58C    P8    19W / 250W |     69MiB / 12284MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX TIT...  Off  | 0000:06:00.0     Off |                  N/A |
| 22%   58C    P8    16W / 250W |     23MiB / 12287MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX TIT...  Off  | 0000:09:00.0     Off |                  N/A |
| 22%   55C    P8    18W / 250W |     23MiB / 12287MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GeForce GTX TIT...  Off  | 0000:0A:00.0     Off |                  N/A |
| 22%   48C    P8    16W / 250W |    135MiB / 12287MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

Thanks in advance

Hi, I have the same problem. Did you solve it?
I also can't find cudnnEngine.cpp anywhere on my computer.
Thanks a lot.

Same problem here.

root@226d48678564:/TensorRT-3.0.0/bin# ./sample_googlenet
Building and running a GPU inference engine for GoogleNet, N=4...
ERROR: cudnnEngine.cpp (55) - Cuda Error in initializeCommonContext: 1
sample_googlenet: sampleGoogleNet.cpp:98: void caffeToGIEModel(const string&, const string&, const std::vector<std::__cxx11::basic_string >&, unsigned int, nvinfer1::IHostMemory*&): Assertion `engine' failed.
Aborted (core dumped)

root@226d48678564:/TensorRT-3.0.0/bin# ./sample_mnist
ERROR: cudnnEngine.cpp (55) - Cuda Error in initializeCommonContext: 1
sample_mnist: sampleMNIST.cpp:63: void caffeToGIEModel(const string&, const string&, const std::vector<std::__cxx11::basic_string >&, unsigned int, nvinfer1::IHostMemory*&): Assertion `engine' failed.
Aborted (core dumped)

root@226d48678564:/TensorRT-3.0.0/bin# ./sample_plugin
ERROR: cudnnEngine.cpp (55) - Cuda Error in initializeCommonContext: 1
sample_plugin: samplePlugin.cpp:74: void caffeToGIEModel(const string&, const string&, const std::vector<std::__cxx11::basic_string >&, unsigned int, nvcaffeparser1::IPluginFactory*, nvinfer1::IHostMemory*&): Assertion `engine' failed.
Aborted (core dumped)


I am using Docker (nvidia/cuda:8.0-cudnn7-devel-ubuntu16.04).

root@226d48678564:/TensorRT-3.0.0/bin# nvidia-smi
Wed Oct 11 19:30:09 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1080    Off  | 0000:01:00.0      On |                  N/A |
| N/A   41C    P8    11W /  N/A |    444MiB /  8105MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

Same problem on a 1050.

I had the same problem with inference running in Docker, and I found the cause: you need to set the right CUDA_ARCH for your GPU before building.

For example:
export CUDA_ARCH="50 52"
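To expand on that fix, here is a minimal sketch. It assumes the samples' build reads CUDA_ARCH as described above; the capability values and the install path are examples from this thread, not universal settings — look up your card in NVIDIA's CUDA GPUs table.

```shell
# GTX 1080 / 1050 are Pascal parts (compute capability 6.1);
# a Maxwell Titan X would be 5.2. Set the value matching YOUR card:
export CUDA_ARCH="61"

# Then rebuild the shipped samples so the kernels match the hardware,
# e.g. with the install path used earlier in this thread:
#   cd /TensorRT-3.0.0/samples && make clean && make
```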