I am trying to use CUDA 9 on x86_64 Linux which is on Fedora 27. Currently I have a Titan Xp and driver 390.42 (I could upgrade to 390.48 but I don’t think it will matter).
The problem I am running into is that the system’s gcc is version 7.3.1. An attempt to use some CUDA tools results in this:
In file included from /usr/local/cuda/bin/..//include/host_config.h:50:0,
from /usr/local/cuda/bin/..//include/cuda_runtime.h:78,
from <command-line>:0:
/usr/local/cuda/bin/..//include/crt/host_config.h:119:2: error: #error -- unsupported GNU version! gcc versions later than 6 are not supported!
#error -- unsupported GNU version! gcc versions later than 6 are not supported!
I’d like to find out if any information is available on gcc 7 compatibility, since it isn’t practical to downgrade the whole system, nor to hand-build all of the older linker and libc components connected to it (trying to build gcc 6 would likely require compatibility builds of all kinds, and be difficult at every step due to the adjustments needed for the two toolchains to coexist).
Are there any suggestions on future gcc 7 compatibility? Or are there any workarounds? I can currently run pre-built CUDA apps, but I need to be able to compile and the gcc 6 or older requirement is a problem.
I’m finding this helps, but I have other issues I’m working to resolve so I don’t know if it is “the” answer. I can say that this edit does allow attempts to build with gcc 7. Right now I think I need to dual install CUDA 8 and CUDA 9 and compare some of the debug output side-by-side. If anyone is running into complaints about gcc 7 being too new though, this is probably the workaround.
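For anyone else attempting the same edit, here is a sketch of the change, demonstrated on a stand-in file rather than the real header. The header path (“/usr/local/cuda/include/crt/host_config.h”) and the exact guard line are assumptions — inspect your own copy first, since the check differs between CUDA releases:

```shell
# Stand-in for the real host_config.h; the actual guard line in your
# CUDA install may differ, so check it before editing.
cat > host_config_demo.h <<'EOF'
#if __GNUC__ > 6
#error -- unsupported GNU version! gcc versions later than 6 are not supported!
#endif
EOF

# Relax the version guard so gcc 7 passes the check:
sed -i 's/__GNUC__ > 6/__GNUC__ > 7/' host_config_demo.h

grep '__GNUC__' host_config_demo.h
```

Remember this only silences the version check; as noted below, macro mismatches against the system headers can still surface later in a build.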
I haven’t done this yet with CUDA9/GCC7 but CUDA8/GCC6 had a similar incompatibility, and CUDA 8 was expecting GCC5.4 (or prior).
I was using Fedora 25 and I found it quite simple to install gcc 5.4 and use that for my CUDA development on that platform. Applications that I built that way worked fine. I don’t recall any compatibility issues of the type you suggest:
Applications built using the 5.4 toolchain worked just fine on the platform, and I don’t recall having to build anything else at all, just gcc. I didn’t even install it; I just modified my PATH variable to point to the locally built bin directory.
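The PATH approach above can be sketched like this, using a stand-in wrapper script in place of a real locally built gcc (the directory name and version are examples only — substitute the bin/ directory of your actual local build):

```shell
# Stand-in for a locally built gcc 5.4 (directory/version are examples);
# in practice this would be the bin/ directory of your local gcc build.
mkdir -p ./gcc-5.4/bin
printf '#!/bin/sh\necho gcc-5.4-local\n' > ./gcc-5.4/bin/gcc
chmod +x ./gcc-5.4/bin/gcc

# Prepend it to PATH so tools (including nvcc) resolve this gcc first,
# without touching the system compiler:
export PATH="$PWD/gcc-5.4/bin:$PATH"
gcc
```

Because nothing is installed system-wide, removing the workaround is just a matter of dropping the directory from PATH.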
Maybe it’s a separate topic, but another issue is that when I try to run deviceQuery from the CUDA samples, built with nvcc 9.1 / gcc 7.2:
./deviceQuery
./deviceQuery Starting…
CUDA Device Query (Runtime API) version (CUDART static linking)
cudaGetDeviceCount returned 3
→ initialization error
Result = FAIL
Meanwhile the system contains two devices, as nvidia-smi shows:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.12                 Driver Version: 390.12                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  Off  | 00000004:04:00.0 Off |                    0 |
| N/A   41C    P0    51W / 300W |        Unknown Error |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-SXM2...  Off  | 00000035:03:00.0 Off |                    0 |
| N/A   38C    P0    50W / 300W |        Unknown Error |      2%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
The system is Linux on POWER9 (IBM chip).
Any idea how to resolve it?
Thanks in advance.
Thanks, so I’ve tried to do that and am now facing this problem:
$sudo systemctl enable nvidia-persistenced
The unit files have no installation config (WantedBy, RequiredBy, Also, Alias
settings in the [Install] section, and DefaultInstance for template units).
This means they are not meant to be enabled using systemctl.
Possible reasons for having this kind of units are:
A unit may be statically enabled by being symlinked from another unit’s
.wants/ or .requires/ directory.
A unit’s purpose may be to act as a helper for some other unit which has
a requirement dependency on it.
A unit may be started when needed via activation (socket, path, timer,
D-Bus, udev, scripted systemctl call, …).
In case of template units, the unit is meant to be enabled with some
instance name specified.
I’d like to bump this thread. gcc 7 introduced an ABI change, and the entire gcc 7 linker environment is incompatible with gcc 6 and earlier. As a result, more modern distributions (and I’m not talking bleeding edge…just stable distributions) cannot simply have a second, backwards-compatible gcc 6 installed. Basically it would require building an entirely new linker system and having the correct linker detected as some sort of compatibility or foreign-architecture version, even though both are x86_64.
One can adjust the CUDA directory’s “include/crt/host_config.h” to try to get gcc 7 accepted, but then other issues appear, because a large number of macros spread throughout the toolkit headers cause incorrect use of defines in the actual system “/usr/include/” files.
Has anyone here found a good way to at least try using gcc 7 with CUDA 8 or 9?
More notes for other people…I have not fully tested this, but I had partial success based on the above URL. The synopsis:
sh ./cuda_8.0.61_375.26_linux.run --tar mxvf
sudo cp InstallUtils.pm /usr/lib64/perl5/
PERL5LIB=/usr/lib64/perl5/ sh ./cuda_8.0.61_375.26_linux.run --override
sudo rm /usr/lib64/perl5/InstallUtils.pm
NOTE: After creating “/etc/ld.so.conf.d/cuda-8.conf” (ld.so.conf drop-ins live in the ld.so.conf.d directory) pointing at the lib directory below, Blender 2.79 succeeds in using the GPU despite the incompatible gcc 7 (don’t forget to run “sudo ldconfig” first so the currently running linker cache picks up the new path):
/usr/local/cuda-8.0/lib64/
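For reference, the drop-in is just a plain list of library directories, one per line (the filename is this poster’s choice; the directory is the CUDA 8 lib path above):

```
# /etc/ld.so.conf.d/cuda-8.conf
/usr/local/cuda-8.0/lib64/
```

Run “sudo ldconfig” after creating it so the runtime linker cache is rebuilt.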
EDIT: There are still kernel compilation issues, perhaps from having CUDA 9 installed as well.