My Makefile rules typically expand to something like
nvcc -DHAS_CUDA -I/sfw/cuda/4.0/include -O0 -g -G -DENABLE_PARAMETER_CHECK --ptxas-options=-v -arch sm_13 -DHAS_CUDADOUBLEPREC -DNUM_MEM_BANK=16 -c -o /home/user/goeddeke/nobackup/feastobj/temp/1/object/pc-coreduo-linux64-intel-goto-optNO/coproc_axpy_cuda.o coproc_axpy_cuda.cu
which is not too funky, except that I'm using the Intel 11.1 suite for all non-CUDA stuff in my build process. I switched to CUDA 4.0 today, added --keep to compare PTX output, and guess what happened: the $%&/$$% new CUDA compiler deleted my .cu source files!!! Luckily, it didn't remove the backup files my editor tends to generate.
Honestly, guys. As much as I like debugging by redoing things properly from scratch, I don't want a compiler to optimise all my efforts away at the file-system level :)
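Until this is fixed, one defensive pattern is to never let nvcc run --keep inside the source tree in the first place. A minimal sketch (the file name is taken from my command line above; the flag set is trimmed, and the nvcc fallback plus the placeholder source are only there so the snippet runs standalone; adapt it to your own Makefile rule):

```shell
# Defensive wrapper sketch, not an official fix: stage the .cu into a
# scratch directory before invoking nvcc --keep, so that neither the
# intermediates nor the 4.0 cleanup bug can touch the real source tree.
NVCC="${NVCC:-nvcc}"
command -v "$NVCC" >/dev/null 2>&1 || NVCC=:   # no-op stand-in when nvcc is absent (demo only)
SRC="coproc_axpy_cuda.cu"
[ -f "$SRC" ] || printf '// placeholder\n' > "$SRC"   # demo only; normally the real source
OBJ="$PWD/${SRC%.cu}.o"                        # resolve output path before leaving this dir
SCRATCH="$(mktemp -d)"
cp "$SRC" "$SCRATCH/"
( cd "$SCRATCH" && "$NVCC" --keep -arch sm_13 -c "$SRC" -o "$OBJ" )
# the original $SRC is untouched; the PTX and friends land in $SCRATCH for diffing
echo "kept intermediates in $SCRATCH"
```

The same staging can of course be folded into the Makefile rule itself; the point is simply that whatever --keep (or its cleanup pass) does, it happens in a throwaway directory.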
This is on an Ubuntu 10.04 LTS box (which is, for some funky reason, unsupported, although I tend to expect LTS support from CUDA by now; glibc, gcc and everything else are very close to the supported 10.10, check http://distrowatch.com/table.php?distribution=ubuntu ), and I am 100% sure that nvcc never sees the Intel (or PSC, or PGI, or Sun, or whatever) compilers I use under the hood.
Mildly annoyed, and confirmed once again in my policy of never using .0 software :)