Hi,
I am trying to get CUDA 10 to install, but it seems to need gcc 8.0.1. I have just installed Fedora 30, which comes with 9.1.1. How do I roll back the version of gcc to 8.0.1?
Thanks for your help
chaslie
further to the above:
installing using https://linuxconfig.org/how-to-install-nvidia-cuda-toolkit-on-fedora-29-linux, but with GCC at 9.1.1, gives the following output:
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
$ nvidia-smi
Failed to initialize NVML: Driver/library version mismatch
chaslie
Fedora 30 is not a supported version of Linux for any current version of CUDA.
If you wish to proceed anyway, there are instructions for downgrading your gcc version in many places on the web. It can be done by downloading the GNU toolchain (c++) source code and building from source. After that, modify your PATH variable to select the version of g++ that you just built.
I’ve done it several times, and I don’t believe it is any more complicated than that. I’ve not done it on Fedora 30 going from g++ 9 to 8, so I won’t be able to give you precise instructions.
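If it helps, the general shape of it is something like this (a rough sketch only, assuming GCC 8.3.0 and a build subdirectory; adjust versions, paths and the install prefix to taste):
$ wget https://ftp.gnu.org/gnu/gcc/gcc-8.3.0/gcc-8.3.0.tar.xz
$ tar xf gcc-8.3.0.tar.xz
$ cd gcc-8.3.0
$ ./contrib/download_prerequisites
$ mkdir build && cd build
$ ../configure --prefix=$HOME/gcc-8.3 --enable-languages=c,c++ --disable-multilib
$ make -j$(nproc)
$ make install
$ export PATH=$HOME/gcc-8.3/bin:$PATH   # put the newly built g++ first on PATH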
Again, this is not a supported configuration for CUDA development at this time.
Hi Robert,
Thanks for the reply. I know this is a silly question, but do you know when Fedora 30 will be supported? It's the only version of Linux I have managed to install successfully :-(.
It looks like it's a waiting game to see who supports what first: will Ubuntu sort out the problems with the X299 motherboard, or will CUDA support Fedora 30…
Many thanks again for the response.
Regards,
Chaslie
I'm not able to comment about future software releases.
I’m running CUDA 10 on Fedora 30 thanks to the negativo17.org repository. Basically, the only compatibility issue is the version of GCC. The negativo17 repository installs an older version alongside the version from the Fedora repos as cuda-gcc (or cuda-g++, or cuda-gfortran).
You will need to add -ccbin=cuda-gcc or -ccbin=cuda-g++ when compiling so that nvcc finds a compatible version, but everything just works. All the libraries and headers install in default system paths, so you don't usually need to tweak your bash profile. And if you go to build one of the samples, edit the Makefile's CUDA_PATH line to:
CUDA_PATH ?= /usr
and then they should build.
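For example, a standalone file (hypothetical name) compiles with:
$ nvcc -ccbin=cuda-g++ -o mykernel mykernel.cu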
Hope this helps.
I am running into the same issue. Can you please send me the steps you followed to get this to work? I am very new to this, so thanks for your patience.
There are instructions for installation at https://negativo17.org/nvidia-driver/, but I understand that they are a bit long-winded to go through, so I will summarize here.
First, enable the repository:
dnf config-manager --add-repo=https://negativo17.org/repos/fedora-nvidia.repo
Then you can install the NVIDIA driver with CUDA:
sudo dnf install nvidia-driver nvidia-driver-cuda cuda cuda-devel akmod-nvidia nvidia-settings cuda-gcc cuda-gcc-c++ cuda-samples
After the install completes you'll need to reboot your system. Then, to verify that it is working, you can build some of the CUDA samples. These are installed in /usr/share/cuda/samples; I recommend creating a 'CUDA-samples' directory in your home directory and copying the contents of /usr/share/cuda/samples there. Then go into 1_Utilities/deviceQuery, open the Makefile and change line 37 to read
CUDA_PATH ?= /usr
save the file and close it. Point a terminal at the deviceQuery directory and run make, then ./deviceQuery to execute. You should see a lot of information about your GPU. To test something more interesting, go into 5_Simulations/nbody, open the Makefile, change line 37 as above, run make, and then ./nbody. Your GPU will then be running and rendering many different gravitational N-body simulations.
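If you'd rather not edit the Makefile by hand, a sed one-liner should make the same change (assuming line 37 reads CUDA_PATH ?= /usr/local/cuda, as it does in my copy):
$ sed -i 's|^CUDA_PATH ?= /usr/local/cuda|CUDA_PATH ?= /usr|' Makefile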
Hope this helps!
Thank you @Entropy813. I tried the above steps. Here is what I see:
./deviceQuery
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
cudaGetDeviceCount returned 100
-> no CUDA-capable device is detected
Result = FAIL
Any ideas?
Credit goes to @Entropy813. These are the steps outlined by @Entropy813 for compiling CUDA files, using simplePrintf.cu from the samples directory as an example.
Here are the steps:
$ dnf config-manager --add-repo=https://negativo17.org/repos/fedora-nvidia.repo
Install Driver
$ sudo dnf install nvidia-driver nvidia-driver-cuda akmod-nvidia nvidia-settings
Verify Driver Installation
$ glxgears -info
Install CUDA and cuDNN
$ sudo dnf install cuda cuda-devel cuda-gcc cuda-gcc-c++ cuda-cudnn cuda-cudnn-devel cuda-samples
Search for cuda packages
$ dnf search cuda
Verify the cuda-gcc version is 8.3.0
$ cuda-gcc --version
cuda-gcc (GCC) 8.3.0
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Get GPU info using deviceQuery
$ cp -r /usr/share/cuda/samples ~/
$ cd samples/1_Utilities/deviceQuery
Replace
CUDA_PATH ?= /usr/local/cuda
with
CUDA_PATH ?= /usr
in the Makefile
$ make run
Device 0: "GeForce RTX 2080 Ti"
CUDA Driver Version / Runtime Version 10.1 / 10.1
CUDA Capability Major/Minor version number: 7.5
Total amount of global memory: 11019 MBytes (11554324480 bytes)
(68) Multiprocessors, ( 64) CUDA Cores/MP: 4352 CUDA Cores
GPU Max Clock rate: 1545 MHz (1.54 GHz)
Memory Clock rate: 7000 Mhz
Memory Bus Width: 352-bit
L2 Cache Size: 5767168 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 1024
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 3 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 65 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.1, CUDA Runtime Version = 10.1, NumDevs = 1
Result = PASS
Compile and run simplePrintf
$ cd ../../0_Simple/simplePrintf/
$ make run
/usr/bin/nvcc --include-path /usr/include/cuda -ccbin /usr/bin/cuda-g++ -I../../common/inc -m64 -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75 -o simplePrintf.o -c simplePrintf.cu
/usr/bin/nvcc --include-path /usr/include/cuda -ccbin /usr/bin/cuda-g++ -m64 -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75 -o simplePrintf simplePrintf.o
mkdir -p ../../bin/x86_64/linux/release
cp simplePrintf ../../bin/x86_64/linux/release
./simplePrintf
GPU Device 0: "GeForce RTX 2080 Ti" with compute capability 7.5
Device 0: "GeForce RTX 2080 Ti" with Compute 7.5 capability
printf() is called. Output:
[0, 0]: Value is:10
[0, 1]: Value is:10
[0, 2]: Value is:10
[0, 3]: Value is:10
[0, 4]: Value is:10
[0, 5]: Value is:10
[0, 6]: Value is:10
[0, 7]: Value is:10
[1, 0]: Value is:10
[1, 1]: Value is:10
[1, 2]: Value is:10
[1, 3]: Value is:10
[1, 4]: Value is:10
[1, 5]: Value is:10
[1, 6]: Value is:10
[1, 7]: Value is:10
[3, 0]: Value is:10
[3, 1]: Value is:10
[3, 2]: Value is:10
[3, 3]: Value is:10
[3, 4]: Value is:10
[3, 5]: Value is:10
[3, 6]: Value is:10
[3, 7]: Value is:10
[2, 0]: Value is:10
[2, 1]: Value is:10
[2, 2]: Value is:10
[2, 3]: Value is:10
[2, 4]: Value is:10
[2, 5]: Value is:10
[2, 6]: Value is:10
[2, 7]: Value is:10
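Note that the [block, thread] lines are not sorted by block (0, 1, 3, 2 above): CUDA makes no ordering guarantee between blocks. A minimal sketch of what the sample is doing (my own reduced version, not the actual sample source):

#include <cstdio>

__global__ void testKernel(int val)
{
    // Each thread prints its block and thread index. Blocks may run in any
    // order, which is why the output above is not sorted by block number.
    printf("[%d, %d]:\t\tValue is:%d\n", blockIdx.x, threadIdx.x, val);
}

int main()
{
    testKernel<<<4, 8>>>(10);  // 4 blocks of 8 threads, matching the output above
    cudaDeviceSynchronize();   // flush device-side printf before the program exits
    return 0;
}

It compiles the same way on this setup: $ nvcc -ccbin=cuda-g++ -o testprintf testprintf.cu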
@dxapp, sorry for the slow response. Last week I was offered and accepted a new job which will require a move and that has occupied most of my time since then.
If you can't get the negativo17.org repository to work, there is an alternative method, which I had to use this morning. My main workstation was getting a bit old: it was still running an AMD FX 9590 and was starting to feel slow, in addition to giving some memory errors in the CPU cache under "bursty" CPU loads (e.g. hitting 100% for a few seconds, then dropping back to 13% for a few seconds, repeating). So this weekend I upgraded the CPU, motherboard and RAM to something much more modern. After finally getting the machine to recognize the M.2 NVMe SSD, installing Fedora, and then getting it to recognize the Fedora installation, I went to install the NVIDIA driver and CUDA from negativo17, only to be greeted with a blank screen on reboot. No matter what I did, I couldn't get it working, which is unfortunate. Here's what I did that finally got everything working.
For negativo17:
$ sudo dnf install nvidia-driver nvidia-driver-cuda akmod-nvidia
For rpmfusion:
$ sudo dnf install xorg-x11-drv-nvidia xorg-x11-drv-nvidia-cuda akmod-nvidia
For the if-not-true-then-false method, the CUDA components are included in the binary from NVIDIA. Whichever driver you chose, you still need a compatible host compiler, so install the cuda-gcc packages:
$ sudo dnf install cuda-gcc cuda-gcc-c++ cuda-gcc-gfortran
After that, add
HOST_COMPILER=/usr/bin/cuda-g++
export HOST_COMPILER
to your .bash_profile
$ cd Downloads
$ chmod +x cuda_*.run
$ sudo ./cuda_*.run --override
It will take a little while before the terminal window changes and asks you to accept the user agreement by typing in 'accept'. After that, you'll be shown the components of the install. The first item in the list is the driver; press enter while the driver line is highlighted to remove the X from next to it (and from the indented line just below it).
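As an aside, recent runfile installers can also run non-interactively; flags along these lines should work, but check $ ./cuda_*.run --help on your version first, as I haven't verified them on every release:
$ sudo ./cuda_*.run --silent --toolkit --samples --override
When the installer finishes, update your .bash_profile. For reference, mine ended up looking like this: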
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/.local/bin:$HOME/bin:/usr/local/cuda-10.1/:/usr/local/cuda-10.1/bin
LIBRARY_PATH=$LIBRARY_PATH:$HOME/lib
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/lib:/usr/local/cuda-10.1/lib64
CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:$HOME/include:/usr/local/cuda-10.1/include
HOST_COMPILER=/usr/bin/cuda-g++
export PATH
export LIBRARY_PATH
export LD_LIBRARY_PATH
export CPLUS_INCLUDE_PATH
export HOST_COMPILER
The OpenGL samples also need the GL/GLUT development packages:
$ sudo dnf install mesa-libGLU mesa-libGLU-devel freeglut freeglut-devel
Then you can build the nbody sample in the 5_Simulations/nbody directory, again by simply running make.
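For example (adjust the path to wherever your samples landed; the runfile installer usually puts them in ~/NVIDIA_CUDA-10.1_Samples):
$ cd ~/NVIDIA_CUDA-10.1_Samples/5_Simulations/nbody
$ make
$ ./nbody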
So, that is how I was able to get things working this morning. When you compile your own code, or third-party code that uses nvcc to compile its CUDA components, you'll have to make sure the
-ccbin=cuda-g++
flag is passed to nvcc, so that nvcc is pointed at the compatible version of GCC on your system.
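The sample Makefiles read the host compiler from the HOST_COMPILER variable, which is why I export it in the .bash_profile above; you can also pass it per invocation without touching your profile:
$ make HOST_COMPILER=/usr/bin/cuda-g++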
I hope either this or the post above from @vishal.kvn will help you!
Hi, I ran into this same problem on OpenMandriva Linux x86_64, kernel 5.2.2, with gcc 9.1.1. I had to follow similar installation steps to move away from the nouveau driver to the NVIDIA driver. I am using a GT 730 card.
I made modifications to the stl_function.h file (I saved the original first). First, I changed __builtin_is_constant_evaluated() to std::is_constant_evaluated(). Next, I modified either a Makefile or another header file to remove the logic check on the gcc version. The result: a successful compile, completed without errors! I tried all the sources that failed before, and I am now able to compile them. I also had to install additional libraries for the GL headers and -devel packages.
BEFORE:
[em 5_Simulations]$ cd particles/
[em particles]$ make
/usr/local/cuda/bin/nvcc -ccbin g++ -I../../common/inc -m64 -DCUDA_ENABLE_DEPRECATED -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75 -o particleSystem_cuda.o -c particleSystem_cuda.cu
/usr/include/c++/9.1.1/bits/stl_function.h(443): error: identifier "__builtin_is_constant_evaluated" is undefined
1 error detected in the compilation of "/tmp/tmpxft_00001c79_00000000-14_particleSystem_cuda.compute_75.cpp1.ii".
make: *** [Makefile:313: particleSystem_cuda.o] Error 1
[em particles]$
AFTER: with the file change made (the changed lines are listed below)…
$ make
/usr/local/cuda/bin/nvcc -ccbin g++ -I../../common/inc -m64 -DCUDA_ENABLE_DEPRECATED -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75 -o particleSystem_cuda.o -c particleSystem_cuda.cu
/usr/local/cuda/bin/nvcc -ccbin g++ -I../../common/inc -m64 -DCUDA_ENABLE_DEPRECATED -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75 -o particles.o -c particles.cpp
/usr/local/cuda/bin/nvcc -ccbin g++ -I../../common/inc -m64 -DCUDA_ENABLE_DEPRECATED -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75 -o render_particles.o -c render_particles.cpp
/usr/local/cuda/bin/nvcc -ccbin g++ -I../../common/inc -m64 -DCUDA_ENABLE_DEPRECATED -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75 -o shaders.o -c shaders.cpp
/usr/local/cuda/bin/nvcc -ccbin g++ -m64 -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75 -o particles particleSystem.o particleSystem_cuda.o particles.o render_particles.o shaders.o -lGL -lGLU -lglut
mkdir -p ../../bin/x86_64/linux/release
cp particles ../../bin/x86_64/linux/release
[em particles]$
These are the four occurrences in stl_function.h that I changed:
if (__builtin_is_constant_evaluated())
if (__builtin_is_constant_evaluated())
if (__builtin_is_constant_evaluated())
if (__builtin_is_constant_evaluated())
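For context, each of those lines sits in code roughly like the following (an approximate excerpt from GCC 9's stl_function.h; the comparison operator differs between the four occurrences, so verify against your own copy and keep a backup as noted above):

// Before: nvcc does not recognize this GCC 9 builtin
if (__builtin_is_constant_evaluated())
  return __x < __y;

// After: the substitution described above
if (std::is_constant_evaluated())
  return __x < __y;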
$ gcc --version
gcc (GCC) 9.1.1 20190713 (OpenMandriva)
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ uname -a
Linux emxtest 5.2.2-desktop-1omv4000 #1 SMP Sun Jul 21 14:13:51 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
$ cd fluidsGL
$ make
/usr/local/cuda/bin/nvcc -ccbin g++ -I../../common/inc -m64 -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75 -o fluidsGL_kernels.o -c fluidsGL_kernels.cu
/usr/local/cuda/bin/nvcc -ccbin g++ -m64 -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75 -o fluidsGL fluidsGL.o fluidsGL_kernels.o -lGL -lGLU -lglut -lcufft
mkdir -p ../../bin/x86_64/linux/release
cp fluidsGL ../../bin/x86_64/linux/release
$
=======================================
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GT 730"
CUDA Driver Version / Runtime Version 10.1 / 10.1
CUDA Capability Major/Minor version number: 3.5
Total amount of global memory: 977 MBytes (1024720896 bytes)
( 2) Multiprocessors, (192) CUDA Cores/MP: 384 CUDA Cores
GPU Max Clock rate: 902 MHz (0.90 GHz)
Memory Clock rate: 2505 Mhz
Memory Bus Width: 64-bit
L2 Cache Size: 524288 bytes
…
…
Device supports Unified Addressing (UVA): Yes
Device supports Compute Preemption: No
Supports Cooperative Kernel Launch: No
Supports MultiDevice Co-op Kernel Launch: No
Device PCI Domain ID / Bus ID / location ID: 0 / 2 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.1, CUDA Runtime Version = 10.1, NumDevs = 1
Result = PASS
$ nvidia-smi
Thu Aug 1 00:43:33 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.34       Driver Version: 430.34       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 00000000:02:00.0 N/A |                  N/A |
| 30%   33C    P8    N/A /  N/A |    300MiB /   977MiB |      N/A     Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0                    Not Supported                                       |
+-----------------------------------------------------------------------------+
NVIDIA does provide fixes for the Linux driver, and the level of integration is getting better. I installed the 430.34 driver from the NVIDIA GeForce driver download page; for my card that was NVIDIA-Linux-x86_64-430.34.run.
I hope this helps.