GPU Card Recommendation for Time Series Deep Learning

I am looking to conduct exploratory research using deep learning (DL) and CUDA on financial time series data, ideally from a Windows environment where my trading platform and other analytic tools reside. I would like to use Python and supporting CUDA and DL libraries. Given the new NVIDIA technologies announced (e.g., the Maxwell GPU, CUDA 7, embedded cuDNN), I could use a recommendation on what type of GPU to acquire, given that I am looking to do time series prediction rather than image learning per se.
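For context on the kind of workload I mean: the usual first step for DL on time series is to slice the series into fixed-length windows that become training examples. A minimal sketch (the function name `make_windows` and the toy data are just illustrative, not from any particular library):

```python
import numpy as np

def make_windows(series, window, horizon=1):
    """Turn a 1-D series into (X, y) pairs for supervised learning.

    Each row of X holds `window` consecutive observations; y holds the
    value `horizon` steps after the end of that window.
    """
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X), np.array(y)

prices = np.arange(10, dtype=np.float32)  # stand-in for real price data
X, y = make_windows(prices, window=3)
print(X.shape, y.shape)  # (7, 3) (7,)
```

The resulting (X, y) arrays are what would be fed to whichever GPU-backed DL library ends up running on the card.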

The Maxwell chip in the new GTX 960 card is compelling, but I am unclear whether the CUDA/cuDNN tools will work with it, or whether its 32-bit floating point operations will be limiting for DL work.
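To make the 32-bit concern concrete, the practical difference is precision: float32 carries roughly 7 decimal digits versus roughly 16 for float64, which can be checked quickly from Python (a standalone NumPy sketch, not tied to any GPU):

```python
import numpy as np

# An increment below float32's machine epsilon (~1.19e-7) is lost,
# while float64 (eps ~2.2e-16) still registers it.
print(np.float32(1.0) + np.float32(1e-8) == np.float32(1.0))  # True: lost
print(np.float64(1.0) + np.float64(1e-8) == np.float64(1.0))  # False: kept
```

For DL training, the literature generally treats float32 as sufficient, so consumer cards' weak float64 throughput may not matter here.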

On the other hand, the Kepler-based Jetson TK1 DevKit seems promising, but it would introduce integration issues with my Windows environment, since I would be using it as a remote device on my network. Perhaps there is an API solution to get around this.

Tesla cards are too expensive for this exploratory endeavor.

Any thoughts or recommendations would be most appreciated.

Whitmark

http://www.videocardbenchmark.net/high_end_gpus.html

While that list uses a gaming test to determine performance, my experience indicates that the ordering is fairly accurate for 32-bit float/int compute performance.

IMO the GTX 780 Ti and GTX 980 are very fairly priced given their performance and reliability. For a laptop, the GTX 980M is nice as well.

Thanks, CudaaduC. Your recommendation is supported by the rationale outlined in the following article I just found:

“Which GPU(s) to Get for Deep Learning: My Experience and Advice for Using GPUs in Deep Learning”

https://timdettmers.wordpress.com/2014/08/14/which-gpu-for-deep-learning/

with exhibits from the CUDA Toolkit v6.5 documentation:

http://docs.nvidia.com/cuda/cuda-c-programming-guide/#axzz3AI18t18Z

Also, using a GPU card rather than the TK1 dev kit will enable unified memory and potential expansion down the road.

Watch the demo of deep learning using the eSOMTK1 NVIDIA Tegra K1 SOM: https://youtu.be/69ojs_Zgj3g. Deep learning is a set of algorithms for training a model on data; based on the learned properties, the model can recognize objects. It has a lot of potential in autonomous driving systems, i.e., driverless cars.

The eSOMTK1 is based on the NVIDIA Tegra® K1 SoC in a compact form factor. This Tegra K1 module has the NVIDIA Tegra® K1 4-Plus-1™ ARM® Cortex™-A15 quad core running at 2.2 GHz, with DDR3L SDRAM configurable up to 4 GB and eMMC configurable up to 64 GB.

Buy now: https://www.e-consystems.com/tk1-som-tegra-k1-systemonmodule.asp