Hi, I’m trying to debug what I believe must be a build issue but I could use some help tracking it down.
I have a model which I’ve successfully built into a TensorRT engine on both a 2080 Ti and a 1080 Ti.
I have one project in which I can load both engines in C++ and happily use them.
I have another project (running in the same Docker container, all pertinent dependencies identical etc.) in which I get the error:
[tensorrt] The engine plan file is generated on an incompatible device, expecting compute 1.0got compute 6.1, please rebuild.
when loading the 1080 Ti engine, and:
[tensorrt] The engine plan file is generated on an incompatible device, expecting compute 1.0got compute 7.5, please rebuild.
when loading the 2080 Ti engine.
I get the same error either way: the expected value stays at compute 1.0 whether I load the engine on a 1080 Ti or a 2080 Ti.
Now, this might be a total red herring, but the only difference I can spot between the two projects (and I haven’t got to the root cause of it either) is that, out of the box, the project where the models fail to load gives me the following compile error:
In file included from /usr/local/cuda/include/cuda_runtime.h:120:0,
                 from <command-line>:0:
/usr/local/cuda/include/crt/common_functions.h:74:24: error: token ""__CUDACC_VER__ is no longer supported. Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."" is not valid in preprocessor expressions
 #define __CUDACC_VER__ "__CUDACC_VER__ is no longer supported. Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."
                        ^
/usr/include/eigen3/Eigen/src/Core/util/Macros.h:364:33: note: in expansion of macro '__CUDACC_VER__'
 #if defined(__CUDACC_VER__) && __CUDACC_VER__ >= 70500 && __cplusplus > 199711L
I’ve found discussions about this and have temporarily worked around it with the following in my header files:
#ifdef __CUDACC_VER__
#undef __CUDACC_VER__
#endif
#define __CUDACC_VER__ (__CUDACC_VER_MAJOR__ * 10000 + __CUDACC_VER_MINOR__ * 100 + __CUDACC_VER_BUILD__)
This seems to be caused by the version of Eigen I’m using. The strange part is that the project where the models load fine uses the same version of Eigen, and there I don’t have to add this hack to my headers to make it compile.
Now, the project in which both models load is a little more focused: basically just this model and nothing else. The project that produces the TensorRT error is more involved. It builds a single “.so” from a bunch of sub-projects, several of which contain their own CUDA kernels. I only mention this because I’m starting to suspect these other sub-projects are what’s causing the error, but I’m failing to see how.
Does any of this look familiar to anyone?