Problem with double-precision configuration on a GTX 690

Hi,

I am having a problem with the double-precision configuration.

1 - My operating system is Windows.
2 - I use Visual Studio.
3 - My GPU is a GTX 690.

What do I do to configure my Visual Studio to work with double precision?

What do I do to configure my program to work with dual-GPU cards like the GTX 690?

Best Regards,

Ana.

In order to use double precision you have to specify the compiler flag -arch=sm_xx, where xx can be 13, 20, 21, or 30. In order to use both GPUs in a CUDA program you have to specify explicitly which calls are made on which card using cudaSetDevice, and use streams to overlap copying with calculations.
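To illustrate the cudaSetDevice-plus-streams approach described above, here is a minimal sketch (cleanup omitted for brevity; the kernel and sizes are illustrative, not from the original thread):

```cuda
// Run one kernel on each GPU of a dual-GPU card such as the GTX 690,
// using one stream per device so copies and kernels can overlap.
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void scale_kernel(double *x, double a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main(void)
{
    const int n = 1 << 20;
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);   // a GTX 690 reports two devices

    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaSetDevice(dev);             // all following calls target this GPU

        double *d_x;
        cudaStream_t stream;
        cudaStreamCreate(&stream);
        cudaMalloc(&d_x, n * sizeof(double));
        cudaMemsetAsync(d_x, 0, n * sizeof(double), stream);

        // Launch asynchronously in this device's stream; the host loop
        // moves on to the next GPU without waiting.
        scale_kernel<<<(n + 255) / 256, 256, 0, stream>>>(d_x, 2.0, n);
    }

    // Wait for every device to finish.
    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaSetDevice(dev);
        cudaDeviceSynchronize();
    }
    printf("done\n");
    return 0;
}
```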

Where do I specify the flag -arch=sm_xx in Visual Studio?
Which ‘xx’ do I have to use?

Hello,

For the 690 you should use the flag -arch=sm_30. Look in your project properties for a field related to the GPU architecture.

I configured my project properties, in sequence, as follows:

1 - Project Properties
2 - Configuration Properties
3 - CUDA C/C++
4 - Device
5 - Code Generation => compute_30,sm_30;%(CodeGeneration)
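In case it helps to verify the setting outside Visual Studio, the equivalent nvcc command line would look something like this (the file names are placeholders, not from the original thread):

```
nvcc -arch=sm_30 -m64 mysolver.cu -o mysolver.exe -lcublas -lcusparse
```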

However, my program still does not compile when I use doubles and complex numbers.

Do I have to configure another screen?

Please, I need your help!

Ana Flávia.

You do not need to tell it that you will be using doubles; it will just run slower than if it were using 32-bit floats (as long as you have a GPU which can handle doubles, which you do).

The GTX 690 was designed mainly for games and does not offer the same double-precision speed as the Teslas, which were designed for scientific calculations.

Also, the 690 is two 680s linked in one slot, I believe.

Show the output from deviceQuery and the bandwidth test; they are in the SDK samples.

What is the actual problem when you run the code? You did not make that clear. If there is an error message, please show it.

Have you included all the necessary libraries, and set the dependencies? Are you using C++ or some other language? These things obviously matter.

Which Visual Studio version and which operating system? Are you compiling as 64-bit?

As long as the compute capability is 2.0 or above, it should be able to handle doubles, whether you use them in a kernel or through one of the libraries from the SDK.
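To check that doubles work at all in a kernel, a minimal sanity test like the following may help (my own sketch, not from the thread; note that on targets below sm_13 nvcc silently demotes doubles to floats, which is why the -arch flag matters):

```cuda
// Double-precision AXPY: y = a*x + y, with a basic launch-error check.
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void axpy(int n, double a, const double *x, double *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 4;
    double h_x[] = {1.0, 2.0, 3.0, 4.0};
    double h_y[] = {0.5, 0.5, 0.5, 0.5};

    double *d_x, *d_y;
    cudaMalloc(&d_x, n * sizeof(double));
    cudaMalloc(&d_y, n * sizeof(double));
    cudaMemcpy(d_x, h_x, n * sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, h_y, n * sizeof(double), cudaMemcpyHostToDevice);

    axpy<<<1, 64>>>(n, 2.0, d_x, d_y);
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaMemcpy(h_y, d_y, n * sizeof(double), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        printf("%g ", h_y[i]);   // expect 2.5 4.5 6.5 8.5
    printf("\n");

    cudaFree(d_x);
    cudaFree(d_y);
    return 0;
}
```

If the printed values come back wrong (or as truncated floats), the architecture flag is the first thing to check.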

I have two problems:

1 - Configuration:
I use Visual Studio (VS) 2010 and the C language.
I configured VS for 64 bits.
I am using the cuSPARSE and cuBLAS libraries, and I am also using the CUSP library.
I don't get the correct results when I solve Ax=b, and I think the problem is in the double-precision configuration. My linear systems aren't converging. This is happening for several matrices with complex numbers.

2 - My GTX 690 is very, very slow when I compile my program with the cuBLAS and cuSPARSE libraries. I don't know what I need to configure in my program, or what else to do, to make it run faster.
I have an old GPU (GT 240), and that GPU is much faster than the GTX 690. Why is this happening? I don't have an answer.

Please, help me!!!

Thanks,

Ana.

Can you please post the results of deviceQuery and the bandwidth test?

I use cuBLAS all the time, both with a 680 and a K20c under Visual Studio 2010 x64, and have never had a problem with doubles or floats.
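For reference, a minimal double-precision cuBLAS test looks something like this (my own sketch against the cublas_v2 API; link with -lcublas; error checking trimmed for brevity):

```cuda
// Multiply two small column-major matrices with cublasDgemm.
// With A set to the identity, C should come back equal to B.
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    const int n = 2;
    double h_a[] = {1.0, 0.0, 0.0, 1.0};   // 2x2 identity, column-major
    double h_b[] = {1.0, 2.0, 3.0, 4.0};
    double h_c[4] = {0.0};

    double *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, sizeof(h_a));
    cudaMalloc(&d_b, sizeof(h_b));
    cudaMalloc(&d_c, sizeof(h_c));
    cudaMemcpy(d_a, h_a, sizeof(h_a), cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, sizeof(h_b), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const double alpha = 1.0, beta = 0.0;
    // C = alpha * A * B + beta * C
    cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, d_a, n, d_b, n, &beta, d_c, n);

    cudaMemcpy(h_c, d_c, sizeof(h_c), cudaMemcpyDeviceToHost);
    printf("C = [%g %g; %g %g]\n", h_c[0], h_c[2], h_c[1], h_c[3]);

    cublasDestroy(handle);
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    return 0;
}
```

If a trivial test like this gives wrong double-precision results, the problem is the setup, not your solver.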

Also try running matrixMulCUBLAS_vs2010, which is in:

C:\ProgramData\NVIDIA Corporation\CUDA Samples\v5.0\0_Simple\matrixMulCUBLAS

and post that output as well.

If you are not able to get the deviceQuery, bandwidth test, and cuBLAS matrixMul results, there is a simpler way of determining the basic state of the GPUs.

Go to [url]http://cuda-z.sourceforge.net/[/url]

Download and install CUDA-Z, then run it and take a screenshot of the different output pages. You can post the results, and that may indicate whether there is an issue with your hardware or configuration.

This will not be as detailed as the NVIDIA tests, but it will give you a general idea of whether your setup is functioning.