Hi, I work with large spreadsheets in Microsoft Excel. I’m not a programmer, just an intensive spreadsheet user, and I’m losing hours because Excel 2007 is simply too slow to calculate everything I’m trying to do. I recently purchased a dual quad-core workstation and it’s still not fast enough.

Is there any way I can use an NVIDIA GPU and CUDA to speed up Excel calculations? Excel 2007 already does multi-core processing automatically on CPUs. Is there any way I can utilize the ~120+ cores on the GPU, today, to speed up Excel?

CUDA is a very different computing architecture, composed of units that are not like traditional CPU cores. As a result, custom code has to be written to offload calculations onto the graphics card. I’m sure a programmer could write an Excel plugin that did a specific task with CUDA, but I’m not aware of any such thing today.

That sounds like a standalone program that happens to use Excel as the input and output format. Totally sensible, but it negates a lot of the convenience of a spreadsheet.

Cells can be functions, right? Say you have a spreadsheet with 10,000 entries… I would guess there would be a lot of batch processing involved. Front-office banking uses a lot of spreadsheets, for instance…

These are all just my guesses, though…

I’d appreciate it if anyone knowledgeable could share their experience.

Except that this is exactly NOT the kind of job for CUDA or any other GPGPU approach.

In this hypothetical monster-mega-sheet, every cell has its very own function.

Whereas GPUs have an SPMD (single program / multiple data) approach at the GPU level (and SIMD units), which means they work best when you apply one single kernel to a big dataset (or the same shader to all pixels of a polygon, if you’re doing graphics).

A theoretical CUDA-accelerated Excel (or any GPGPU-accelerated spreadsheet) would need to take a sheet and slice it into series that run the exact same formula (like a whole column where only the “A$1”, “A$2”, etc. varies in the formula).

It would then compile that into a kernel and upload the dependencies either as streams (those with row-relative references like all the “A$n”) or as constants (those with fixed coordinates like “B13”). Run the kernel, download the results, and restart the process with the following series, until all dependencies are resolved.
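To make the “one compiled formula over a whole series” idea concrete, here is a minimal sketch in plain C. In a real implementation the loop body would be emitted as a CUDA kernel with one thread per row; all the names here are made up for illustration:

```c
#include <stddef.h>

/* Hypothetical compiled form of a column formula such as
 * "=A1 * $B$13": one function applied to every row of the slice.
 * `stream` is the per-row input (the varying A1, A2, ... reference)
 * and `constant` is the fixed-coordinate dependency (B13). */
static double compiled_formula(double stream, double constant)
{
    return stream * constant;
}

/* Stand-in for launching the kernel over the whole series:
 * on a GPU, each iteration would run as one CUDA thread. */
void run_series(const double *stream, double constant,
                double *out, size_t rows)
{
    for (size_t i = 0; i < rows; ++i)
        out[i] = compiled_formula(stream[i], constant);
}
```

The point of the restructuring is that the spreadsheet engine pays the compilation cost once per series, not once per cell.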

If formulas are compiled to an easy-to-analyse bytecode, maybe the dependency tracker itself could be implemented as a kernel.

Of course, that requires completely rewriting the spreadsheet engine from scratch, which is completely out of the question with closed-source software like Excel. But maybe some insane CUDA programmer could fork a CUDA variant of Gnumeric or something similar.

The whole thing would mainly be useful for accelerating situations where you have simple formulas (say, a currency exchange rate computed for every product and then a mean over the whole sheet) on outrageously big sheets (hundreds of thousands of rows).

Then you also start to see accuracy and round-off problems, given the dynamic range of floats vs. the number of elements you sum together. At least, due to the way reductions work on a GPU (pairwise instead of sequential), you’re going to see fewer round-off errors (see the Kahan summation algorithm for an example of a workaround for long sequential sums of doubles).
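For reference, here is a minimal C sketch of Kahan (compensated) summation next to a naive sequential sum; the function names are just for illustration. Single-precision `float` is used to make the round-off visible:

```c
#include <stddef.h>

/* Naive sequential sum: round-off error can grow with every addition. */
float naive_sum(const float *x, size_t n)
{
    float s = 0.0f;
    for (size_t i = 0; i < n; ++i)
        s += x[i];
    return s;
}

/* Kahan summation: carries a compensation term `c` that recovers
 * the low-order bits lost when a small value is added to a big sum. */
float kahan_sum(const float *x, size_t n)
{
    float s = 0.0f, c = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        float y = x[i] - c;  /* apply the running correction */
        float t = s + y;     /* low-order bits of y may be lost here */
        c = (t - s) - y;     /* recover what was lost */
        s = t;
    }
    return s;
}
```

With a sum like 1e8 followed by a thousand 1.0s, the naive float sum never moves off 1e8 (each 1.0 is below half an ulp), while the Kahan sum lands on the correct total.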

People talk about plugins for Excel. I really don’t know what those are…

But is it possible to write some kind of plugin in CUDA and attach it to Excel? That plugin would implement some functions for MS Excel, for a particular type of application… Do I make sense?

I have some experience using Mathematica, MATLAB, C and Fortran with Excel.

From that experience, the critical issue is the overhead of function calls in Excel. So you should make a small number of calls to subroutines that operate on whole ranges, instead of inserting the same function into many individual cells.

I’ll illustrate some ways of using CUDA from Excel.

Concept 1: workstation model

in the VBA module,

we define a function that wraps a DLL containing the CUDA routine

(the DLL is compiled in advance with VS2005 against the CUDA toolkit)

start of subroutine or function

get parameters and data from the Excel sheet

call the CUDA function from the DLL

get the results

write the results into the cells

end of routine
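As a sketch of what the DLL side of Concept 1 might look like (all names here are hypothetical, and the CUDA launch is stubbed out with a plain loop so the shape of the interface is visible):

```c
/* Hypothetical DLL entry point for Concept 1. From VBA you would
 * Declare it, e.g.:
 *   Declare Function GpuScale Lib "cudacalc.dll" _
 *       (ByRef data As Double, ByVal n As Long, ByVal k As Double) As Long
 * and pass the first cell of a contiguous range as `data`.
 * The real body would cudaMemcpy `data` to the device, launch a
 * kernel, and copy the results back. */

#ifdef _WIN32
#define EXPORT __declspec(dllexport)
#define CALLCONV __stdcall
#else
#define EXPORT
#define CALLCONV
#endif

EXPORT int CALLCONV GpuScale(double *data, long n, double k)
{
    for (long i = 0; i < n; ++i)
        data[i] *= k;   /* stand-in for the CUDA kernel */
    return 0;           /* 0 = success, for VBA to check */
}
```

Note how the interface takes a whole array in one call, which is exactly the “few subroutine calls instead of many per-cell functions” point about call overhead.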

Concept 2: server-client model

in the VBA module

we define socket functions using winsock.dll

and a server listens and computes specific routines with CUDA (Linux and multi-GPU are OK)

start of subroutine or function

get parameters and data from the Excel sheet

connect to the server with a socket

send parameters & data

execute the CUDA routine on the server

receive the result

close the socket

write the results into the cells

end of routine
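The “send parameters & data” / “receive the result” steps imply some small wire format between the VBA client and the CUDA server. Here is a minimal sketch of one (entirely made up: a 32-bit count followed by raw doubles, assuming both ends share the same byte order); a real client would `send()` the packed buffer and the server would unpack it before launching the kernel:

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Pack a request as: uint32 count, then `count` doubles.
 * Returns bytes written, or 0 if the buffer is too small. */
size_t pack_request(const double *vals, uint32_t count,
                    unsigned char *buf, size_t buflen)
{
    size_t need = sizeof count + (size_t)count * sizeof *vals;
    if (buflen < need)
        return 0;
    memcpy(buf, &count, sizeof count);
    memcpy(buf + sizeof count, vals, (size_t)count * sizeof *vals);
    return need;
}

/* Unpack on the server side: returns the count, filling `out`
 * (which must be large enough for the advertised count). */
uint32_t unpack_request(const unsigned char *buf, double *out)
{
    uint32_t count;
    memcpy(&count, buf, sizeof count);
    memcpy(out, buf + sizeof count, (size_t)count * sizeof *out);
    return count;
}
```

The result message going back to Excel could use the same layout in reverse.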

I did not test whether an Excel DLL can spawn multiple threads… if it can, I guess using multiple GPUs from an Excel DLL would also be possible.

Concept 3: server-client with MPI

I think it is possible, but I have not yet figured out how to remotely execute mpirun from Excel VBA on the client. Does anyone know how?

These are really interesting responses, thanks a lot. I’m looking forward to the day when we will have real-time interactive spreadsheets on top of databases. I think this will really drive innovation in finance, or in any industry that uses large databases and spreadsheets: anything that allows people to focus on the analysis, interpretation and creative use of data as opposed to the building/modeling of data. That’s why, even though I’m not a programmer, the concept of CUDA is really exciting to me. Hopefully one of you guys will be able to figure this out…

By the way, here is a real-life monster model that got some attention in the press a few months ago. This is not the model I was thinking of in my original post (the spreadsheet I use is a massive ranking system), but it is a big spreadsheet that takes a long time to calculate. Maybe it would be a good baseline for any CUDA developers?

I have been experimenting to see if I can call a CUDA routine from inside a C++ routine called from Excel.

As soon as I reference a CUDA routine inside my XLL, Excel refuses to load the XLL, saying it’s not an XLL. If I comment out the call, the XLL registers and works.

Is there any intrinsic reason why you can’t call CUDA from inside a DLL?

Microsoft supports XLLs that can talk to an MS HPC Server backend… and GPUs can live inside MS HPC… So a roundabout answer is “YES, we can do it!”

Although I have not explicitly done it, I am 100% sure this can be done. We have written XLLs that compute elsewhere on the network… CUDA should work without a hiccup.

MSJ, could that be a case of missing DLLs in the PATH? You may just need to make sure your PATH variable is OK and that the dynamic loader can find the CUDART and CUDA DLLs.