GeForce GTX 1070 for Deep Learning

Good evening,
I have a hardware question and I am not sure where to post it correctly, so I put it here.

I am looking for information on the capabilities of the GeForce GTX 1070 card.
I have heard that “consumer” cards like this one cannot use double-precision floating-point numbers. Is that correct with the standard driver?
For my thesis it has become necessary to train deep learning models on this card, and if the above statement is true, I would like to ask whether and how it is possible to train a model with double-precision floats on this card.

Hello @schmitz and welcome to the NVIDIA developer forums!

I am curious: do you have a reference for where you read about an FP64 limitation on the GTX 1070? To ease your mind, that is not correct.

Any current NVIDIA GPU (meaning newer than the Kepler architecture) is supported by the latest unified driver and as such supports the latest CUDA. While the 1070, as a consumer card, might not be the fastest at deep learning, it is still capable of double-precision calculations.
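To make the FP32/FP64 distinction concrete, here is a minimal, framework-free Python sketch. It assumes nothing about your GPU or training setup: Python floats are IEEE-754 doubles, and the `struct` round-trip simply simulates storing a value in 32 bits, so you can see what single precision throws away.

```python
import struct

# A value that float64 can represent distinctly from 1.0,
# but float32 cannot (float32 has only ~7 decimal digits of precision).
x = 1.0 + 1e-9

# Round-trip through 32-bit storage to simulate single precision.
x32 = struct.unpack("f", struct.pack("f", x))[0]

print(x == 1.0)    # float64 keeps the tiny increment -> False
print(x32 == 1.0)  # float32 rounds it away           -> True
```

Whether that last digit of precision actually matters for your models is a separate question; most deep learning training is done in FP32 (or lower) precisely because the extra bits rarely change the result but do cost speed, especially on consumer cards where the FP64 units are scaled down.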

Now that you are registered as a developer you should definitely check out our Deep Learning resources!

And of course, if you plan on training very large networks that would take your 1070 days or weeks to train, consider using cloud-based GPU servers. AWS, for example, has free and trial offers that include GPU instances.

I hope this helps!


Hello Markus,

Thanks a lot for your reply.
That does sound good. (^.^)

A colleague mentioned a press release claiming that a limitation had been introduced on consumer-grade products to increase sales of scientific hardware.
Apart from some odd forum comments scattered across the internet, I could not find anything to verify or discard this claim.
That’s why I am asking here :)

I know that there might be some issues with larger data sets.
I currently think that my datasets (self-created simulations) are not too large: a Tesla K40c I found currently needs 6 hours for a comparable dataset.
Taking into account that the GTX should be faster (compute capability 3.5 for the K40c vs. 6.1 for the GTX), I should be fine until thesis submission.

I will take a look at the deep learning resources you mentioned.
The cloud-based services are already on my list of possible solutions.

So again, thanks a lot, that does help!


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.