I checked the random number generation algorithm, and it seems to be compute-intensive. My neural-net GA makes a lot of random number calls. Memory cost is not a problem, but speed is critical. So I thought: maybe I could reuse the random numbers?
Consider the following:
- I generate a lot of random numbers on the GPU.
- I transfer these numbers from the GPU to the host (at a cost of 5 GB/sec) and store them in an array in main memory (around 2 GB of random numbers), with a further retrieval cost of around 25 GB/sec with DDR3.
- I fetch these numbers as I need them, in sequential order; they will be used only by code running on the CPU.
- When I reach the end of the 2 GB array, I will loop over it another 10 times to reuse the data.
- To make the reuse more random, I will not restart from the beginning each time; instead I will pick a random number (generated by the CPU, or from the current time) as the new starting index, then loop over the whole 2 GB again. When the end of the array is reached, I will wrap around to index 0 and finish at the position where I started. After reusing the numbers for some time (I don't know, 10 times, or maybe 20? experimental results will show), I will load another 2 GB of random numbers from the GPU.
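The indexing logic of the steps above can be sketched as follows. This is a minimal illustration, not your actual implementation: the class name `ReusablePool` and all parameters are mine, and `random.getrandbits` stands in for the batch of numbers copied back from the GPU.

```python
import random

class ReusablePool:
    """Read a prefilled pool sequentially; on each wrap, restart at a
    random offset; after max_passes full passes, refill the pool."""

    def __init__(self, size, max_passes, seed=None):
        self._rng = random.Random(seed)   # cheap CPU RNG for offsets
        self._size = size
        self._max_passes = max_passes
        self.refill()

    def refill(self):
        # Stand-in for "transfer a fresh batch from the GPU".
        self._buf = [self._rng.getrandbits(32) for _ in range(self._size)]
        self._pos = 0
        self._offset = 0
        self._passes = 0

    def next(self):
        if self._pos == self._size:            # finished one full pass
            self._passes += 1
            if self._passes >= self._max_passes:
                self.refill()                  # load new numbers
            else:
                # Restart at a random index; reads wrap modulo the size
                self._pos = 0
                self._offset = self._rng.randrange(self._size)
        v = self._buf[(self._pos + self._offset) % self._size]
        self._pos += 1
        return v
```

Note that each reuse pass is just a rotation of the same 2 GB of data, so consumers see identical values in a shifted order. One back-of-the-envelope check from the numbers above: refilling 2 GB at 5 GB/sec costs about 0.4 seconds per batch, which bounds how often a refill is affordable.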
I would like your comments on this idea. Will reusing random numbers affect my problem-solving algorithm by introducing some degree of non-randomness? How big should the reuse array be, and how many times would you suggest reusing it?
Thanks in advance for any comment.