Urgent help needed!

I’ve been looking at the Monte Carlo-based simulation example, and I was wondering how much faster it runs in CUDA than on a CPU.

I haven’t been able to get a GeForce 8 yet, and trying to compare the 80-million-sample example (the one in the documentation) against a CPU implementation takes an age!

Does anyone have any results for this or know of a rough speedup?




Generating random options…
Data init done.
Loading GPU twisters configurations…
Generated samples : 80003072
RandomGPU() time : 32.442001
Samples per second: 2.466034E+09
Transformed samples : 80003072
BoxMullerGPU() time : 25.790001
Samples per second : 3.102097E+09
Starting Monte-Carlo simulation…
Options count : 128
Simulation paths : 80000000
Total GPU time : 1344.793945
Options per second : 9.518187E+01
L1 norm: 1.610704e-05
Average reserve: 4.096417

Thank you,

Is there any chance that someone could run these results for 1,000,000 simulation paths? I’m trying to decide between an FPGA and a GPU.
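In the meantime, here is a single-threaded CPU baseline I’d use for a rough comparison — not the SDK’s code, just a hypothetical sketch that prices one European call under geometric Brownian motion and reports paths per second (parameter values are made up; swap in whatever your FPGA benchmark uses):

```python
import math
import random
import time

def mc_european_call(S0, K, r, sigma, T, n_paths, seed=42):
    # Plain Monte Carlo estimate of a European call price under GBM:
    # S_T = S0 * exp((r - sigma^2/2) T + sigma * sqrt(T) * Z), Z ~ N(0, 1).
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma * sigma) * T
    vol = sigma * math.sqrt(T)
    disc = math.exp(-r * T)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        st = S0 * math.exp(drift + vol * z)
        total += max(st - K, 0.0)
    return disc * total / n_paths

if __name__ == "__main__":
    n_paths = 1_000_000
    start = time.perf_counter()
    price = mc_european_call(S0=100.0, K=100.0, r=0.05,
                             sigma=0.2, T=1.0, n_paths=n_paths)
    elapsed = time.perf_counter() - start
    print(f"price estimate: {price:.4f}")
    print(f"paths per second: {n_paths / elapsed:.3e}")
```

Multiply the per-option time by 128 to compare against the "Options count / Total GPU time" lines in the log above.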