How to predict time taken for processing?

Hi Experts,

      I have planned to purchase a graphics card. Before that, I want to do some timing analysis. I don't have a graphics card in my PC at the moment, so I am running my programs in emulation mode. I want to know how much time a GPU (any model) would take to process my algorithms. Is there any method to predict this time?

Thanks in Advance,

Karguvel

I’m no expert, but I would say that you need to analyse your algorithms to determine whether they are more compute-intensive or more memory-intensive. You can then choose your card based on this information (i.e. weigh your budget vs computational capability of a card vs memory bandwidth capability of a card).
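
To make that concrete, here is a rough back-of-the-envelope sketch (plain host code; the 3x3 convolution is just a made-up example and the GTX 260 peak figures are ballpark published numbers, so treat everything as illustrative): compare your kernel's arithmetic intensity (FLOPs per byte moved) with the card's ratio of peak FLOPS to peak memory bandwidth.

/* Sketch: classify a kernel as compute- or memory-bound by comparing
 * its arithmetic intensity (FLOPs per byte) against the card's
 * balance point (peak FLOPS / peak bandwidth). Figures are rough. */
#include <stdio.h>

int main(void)
{
    /* Hypothetical algorithm: 3x3 convolution per pixel,
     * 9 multiply-adds counted as 18 FLOPs */
    double flops_per_element = 18.0;
    double bytes_per_element = 8.0;    /* one float read + one write  */
    double intensity = flops_per_element / bytes_per_element;

    /* Approximate single-precision peak figures for a GTX 260 */
    double peak_gflops    = 715.0;     /* GFLOP/s */
    double peak_bandwidth = 112.0;     /* GB/s    */
    double balance = peak_gflops / peak_bandwidth;   /* FLOPs per byte */

    printf("arithmetic intensity: %.2f FLOPs/byte\n", intensity);
    printf("card balance point  : %.2f FLOPs/byte\n", balance);
    printf("kernel is likely %s-bound\n",
           intensity < balance ? "memory" : "compute");
    return 0;
}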

I think it will be difficult to estimate performance if you have no experience with running CUDA on a real device (device emulation is very different in that regard). I'd recommend just getting started, and choosing the card on the basis of your available budget.

Choose a GTX 260 or better if you need double precision, and a GTX 465 or better if the cache will help you, you need fast atomics, or you need more serious double precision speed. Check compute capability, number of cores, shader frequency and memory bandwidth vs. price to make sure you pick a good deal.

Rajanna,

Greetings…!

Calculate FLOPs for your algorithm.

Use the theoretical FLOPs to estimate “Ideal time”…

Use a fraction of the theoretical FLOPS to estimate "Practical time" – this depends on how well you code. 40-50% of theoretical FLOPS is a reasonable target… but one can reach much more than that… if properly designed.
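
Something along these lines (just a sketch of the estimate above; the FLOP count, peak figure and efficiency are illustrative placeholders):

/* Sketch: ideal time is the algorithm's FLOP count divided by the
 * card's theoretical peak; a "practical" time assumes you only reach
 * some fraction of peak (40-50% as suggested above). */
#include <stdio.h>

int main(void)
{
    double total_flops = 2.0e9;     /* FLOPs your algorithm performs */
    double peak_gflops = 715.0;     /* theoretical peak of the card  */
    double efficiency  = 0.45;      /* fraction of peak you expect   */

    double ideal_ms     = total_flops / (peak_gflops * 1e9) * 1e3;
    double practical_ms = ideal_ms / efficiency;

    printf("ideal time     : %.3f ms\n", ideal_ms);
    printf("practical time : %.3f ms\n", practical_ms);
    return 0;
}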

Also, make sure that you are getting good FLOPS in your existing CPU implementation… You can use Intel tools (commercial) and some good assembly kernels to see if you can reach good speedups… Up to 10x improvement in the CPU code itself is possible. Consider cache-friendly data structures (since you look to be an imaging guy) and vectorization (the Intel compiler can auto-vectorize loops… very useful in image processing)…

Good Luck!

K. Thank you.

K. Thank you, tera.

I want to know whether all series of cards are capable of double precision, or only some of them?

For development purposes I plan to buy a GT 220 card. Is it capable of double precision? :unsure:

K. Thank you.

A good CUDA development card for sure, but no double precision.

K. Thank you.

How can I identify which cards have double precision? Are there any specifications related to that? (!)

There are (possibly incomplete) specifications in one of the appendices of the programming guide - compute capability 1.3 and 2.0 cards are the only ones which support double precision. Off the top of my head this means the GTX 260/275/280/285/295 and GTX 465/470/480 amongst the consumer cards. There are also Tesla T10 and T20 series compute cards and some Quadro professional OpenGL cards which support double precision, but all are a lot more expensive than the consumer GPUs I mentioned.
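
Once you do have a card, you can also check this at runtime with the standard CUDA runtime API - a minimal sketch (compile with nvcc):

/* Sketch: list each device's compute capability; double precision
 * needs compute capability 1.3 or higher. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d: %s, compute capability %d.%d -> %s\n",
               dev, prop.name, prop.major, prop.minor,
               (prop.major > 1 || (prop.major == 1 && prop.minor >= 3))
                   ? "double precision supported"
                   : "no double precision");
    }
    return 0;
}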

K. Thank you.