MersenneTwister configuration: trying to figure out how it works

Hello everyone,

I’m trying to reuse the Mersenne Twister sample from the SDK in my application.
My application requires at least 1,536,000,000 random numbers.
I’m not sure if I got it right for now.
In the MT sample we have mt_struct_stripped ds_MT, which is a set of configurations (bit-vector Mersenne Twister parameters) loaded from the file MersenneTwister.dat.
The MT_RNG_COUNT parameter specifies how many configurations there are, that is, the number of independent random-number streams (?). For each configuration we generate N_PER_RNG (NPerRng) random numbers, roughly as sketched below.
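
Just to check my understanding, this is roughly how I picture the host side. The struct layout, the constant, and the file format below are my guesses from reading the sample sources, so treat them as assumptions rather than the exact SDK code:

[code]
// Rough sketch of how I read the sample's host side (field names and the
// .dat layout are assumptions, not verified against the SDK sources).
#include <stdio.h>
#include <stdlib.h>

#define MT_RNG_COUNT 4096          // number of parameter sets in MersenneTwister.dat

typedef struct {
    unsigned int matrix_a;         // per-stream twist-matrix parameter
    unsigned int mask_b, mask_c;   // per-stream tempering masks
    unsigned int seed;             // per-stream seed
} mt_struct_stripped;

static mt_struct_stripped h_MT[MT_RNG_COUNT];

static void loadMTGPU(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) { fprintf(stderr, "cannot open %s\n", path); exit(1); }
    // Assumption: the .dat file is just MT_RNG_COUNT packed mt_struct_stripped records.
    if (fread(h_MT, sizeof(h_MT), 1, f) != 1) { fprintf(stderr, "short read\n"); exit(1); }
    fclose(f);
}

int main(void)
{
    loadMTGPU("MersenneTwister.dat");
    // Total output = MT_RNG_COUNT * N_PER_RNG, so to reach my 1,536,000,000 numbers
    // each of the 4096 streams would have to produce 1536000000 / 4096 = 375000 values.
    printf("values needed per stream: %d\n", 1536000000 / MT_RNG_COUNT);
    return 0;
}
[/code]

If that reading is correct, each of the 4096 streams would need to produce 375,000 values to reach my total, which is what prompts the questions below.
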
Questions:

  1. How independent are the numbers generated from two different configurations?
  2. How big can N_PER_RNG be?
  3. If it can’t be very big, we want to increase MT_RNG_COUNT in order to get more random numbers. Since MersenneTwister.dat contains data for only 4096 configurations, we would have to regenerate the file. How can that be done? Is the data just entropy, or are the configurations specially generated by some tool (and if so, how can one generate more of them)?

Any comments would be appreciated.

Thanks in advance.

Please!!! Anybody? Any ideas?

http://www.jcornwall.me.uk/2009/04/mersenn…isters-in-cuda/

Thanks, I’ll give it a try tomorrow.

Edit:

Well, at least I found the answer to question 3 above: these configurations are generated offline, and generating them is computationally intensive. From the site you posted one can get a file with data for 32768 configurations.

Question 2: it seems N_PER_RNG can be very large; at least, testing with 10 million numbers and more did not hit the period.

Question 1: are the streams correlated in any way? What are their distribution properties?

One more question: what is the difference (in terms of the properties and quality of the number series) between generating m * n random numbers from m configurations (n numbers per configuration) and generating m * n numbers from a single configuration (all in a row)?

My program computes a well-known graph, so I know what I am aiming for. When using the regular rand() in the sequential version, I get a pretty smooth graph with 10000 random numbers. When using the Mersenne Twister, my graph is not as smooth; it looks almost the same as the sequential version with 1000 random numbers. I have tested the numbers generated by the Mersenne Twister for their distribution properties and found them satisfactory. Now I wonder what other properties of the MT numbers could affect the solution in such a way…

You can’t use words like “smooth” and “random” when testing PRNG behavior… they’re too vague.

Is there some test which the MT values fail but the rand() test passes?

I suspect that “smooth” here does not mean random; it means a short period or a sample correlation such that the histogram of values is TOO evenly sampled. This happens a lot if you use an inadequate LCG with a power-of-two modulus as your PRNG and use only the low bits of its output.
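
To make that concrete, here is a little toy sketch (my own example, not anyone’s actual code) of why the low bits of a power-of-two-modulus LCG look “too even”:

[code]
// Toy illustration: a power-of-two-modulus LCG (like many rand()
// implementations) has very short periods in its low-order bits.
#include <stdio.h>

static unsigned int lcg_state = 12345;

static unsigned int lcg_next(void)
{
    // Classic example constants; the modulus is effectively 2^32 via overflow.
    lcg_state = lcg_state * 1103515245u + 12345u;
    return lcg_state & 0x7fffffffu;
}

int main(void)
{
    // The least significant bit strictly alternates 0,1,0,1,... (period 2),
    // and bit k has period at most 2^(k+1), so anything built from the low
    // bits is sampled far too evenly to look random.
    for (int i = 0; i < 16; ++i)
        printf("%u", lcg_next() & 1u);
    printf("\n");
    return 0;
}
[/code]
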

I suspect that you are absolutely right in that last one!

Sorry for confusing you with the word “smooth” =) What I meant is that I am building a simulator for a physical process whose result is well known (that is, the graph of the process has been studied well). My sequential version produces results that are very close to the model with only 10000 random numbers used in the Monte Carlo algorithm, but the parallel version under CUDA with 10000 random numbers gives me a rough polyline (like the sequential version with 1000 random numbers). I hope it’s clear now.

So, assuming you are right, is the problem first of all in using rand()?

P.S. No, the only test I performed was drawing a unit square (side of 1) and filling it with points generated by the PRNG.
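
If I wanted something a bit more quantitative than eyeballing points in a square, I suppose a simple chi-square bin count would be a first step. A rough, untested sketch (rand() here is only a stand-in for whichever generator is being tested):

[code]
// Bin uniform samples and compute a chi-square statistic against the
// expected flat histogram; a crude but quantitative uniformity check.
#include <stdio.h>
#include <stdlib.h>

#define BINS    100
#define SAMPLES 1000000

int main(void)
{
    long count[BINS] = {0};
    for (long i = 0; i < SAMPLES; ++i) {
        double u = (double)rand() / ((double)RAND_MAX + 1.0);  /* in [0,1) */
        count[(int)(u * BINS)]++;
    }
    double expected = (double)SAMPLES / BINS, chi2 = 0.0;
    for (int b = 0; b < BINS; ++b) {
        double d = count[b] - expected;
        chi2 += d * d / expected;
    }
    // For 99 degrees of freedom, values far above ~123 (or suspiciously far
    // below ~77) would be a red flag.
    printf("chi-square (99 dof): %f\n", chi2);
    return 0;
}
[/code]
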

I have an implementation of Park and Miller’s “Minimal” random number generator in CUDA.
It was presented at CIGPU this summer:
http://www.cs.ucl.ac.uk/staff/W.Langdon/cigpu
A Fast High Quality Pseudo Random Number Generator for nVidia CUDA, W. B. Langdon. Presented at CIGPU 2009
PDF: http://www.cs.ucl.ac.uk/staff/W.Langdon/ft…_2009_CIGPU.pdf
CUDA source code: http://www.cs.ucl.ac.uk/staff/W.Langdon/ftp/gp-code/random-numbers/cuda_park-miller.tar.gz
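
For anyone who just wants the idea without downloading the tarball, the core recurrence is tiny. The sketch below is a quick from-scratch rewrite of the textbook recurrence s = 16807 * s mod (2^31 - 1), one stream per thread; it is not the code from the paper, and the per-thread seeding is deliberately naive:

[code]
// Textbook Park-Miller "minimal standard" generator as a CUDA kernel,
// one independent stream per thread. Sketch only; seeding adjacent threads
// with consecutive values is statistically naive.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void park_miller(unsigned int *state, float *out, int nPerThread)
{
    const unsigned long long a = 16807ULL;
    const unsigned long long m = 2147483647ULL;           // 2^31 - 1 (prime)
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned long long s = state[tid];                    // must start in [1, m-1]

    for (int i = 0; i < nPerThread; ++i) {
        s = (a * s) % m;                                  // 64-bit product avoids overflow
        out[tid * nPerThread + i] = (float)s / (float)m;  // uniform in (0, 1)
    }
    state[tid] = (unsigned int)s;                         // keep state for the next launch
}

int main(void)
{
    const int threads = 256, nPerThread = 16;
    unsigned int h_state[threads];
    for (int i = 0; i < threads; ++i) h_state[i] = i + 1;  // naive distinct seeds

    unsigned int *d_state; float *d_out;
    cudaMalloc((void **)&d_state, sizeof(h_state));
    cudaMalloc((void **)&d_out, threads * nPerThread * sizeof(float));
    cudaMemcpy(d_state, h_state, sizeof(h_state), cudaMemcpyHostToDevice);

    park_miller<<<1, threads>>>(d_state, d_out, nPerThread);

    float h_out[threads * nPerThread];
    cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
    printf("first few values: %f %f %f\n", h_out[0], h_out[1], h_out[2]);

    cudaFree(d_state); cudaFree(d_out);
    return 0;
}
[/code]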

Bill

Dr. W. B. Langdon,
Department of Computer Science,
King’s College London,
Strand, London, WC2R 2LS, UK
http://www.dcs.kcl.ac.uk/staff/W.Langdon/

FOGA 2011 http://www.sigevo.org/foga-2011/
CIGPU 2010 http://www.cs.ucl.ac.uk/external/W.Langdon/cigpu
A Field Guide to Genetic Programming
http://www.gp-field-guide.org.uk/
RNAnet http://bioinformatics.essex.ac.uk/users/wlangdon/rnanet
GP EM http://www.springer.com/10710
GP Bibliography http://www.cs.bham.ac.uk/~wbl/biblio/

There was a bug in the code in the MT usage section. Now it works fine for me, using Jay’s MT (thanks cbuchner1).

Hi, the link is dead…

Could you point me to a working link for a good MT implementation? I have the same problem.

Thanks

The authors seem to have fixed it.
You even get a seed generator there.