I’m not sure what you mean by kernel size. I do give OptiX a larger stack size, but 2048 bytes seems sufficient for cuRAND. My initialization for the RNG is called from the ray generation program and looks like this:
You can do more if you want the results to be different on every kernel launch. For instance, create a buffer of random numbers on the CPU, then use them to initialize the RNG in your ray generation program.
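A host-side sketch of that idea (the names here are illustrative, not from the thread; in OptiX you would copy the resulting vector into a buffer that the ray generation program reads):

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Fill one 32-bit seed per launch index, drawn from a host RNG that is
// itself seeded non-deterministically (std::random_device), so each
// program run starts from different per-thread seeds.
std::vector<std::uint32_t> make_seed_buffer(std::size_t num_pixels)
{
    std::random_device rd;           // non-deterministic seed source
    std::mt19937 host_rng(rd());     // host-side generator
    std::vector<std::uint32_t> seeds(num_pixels);
    for (auto &s : seeds)
        s = host_rng();              // one independent seed per thread
    return seeds;
}
```

Each thread in the ray generation program would then read its own entry (e.g. `seeds[launch_index]`) and use it to initialize its RNG state.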
In random.h, the rnd() function modifies the seed parameter, so the next time you call rnd() with the same seed VARIABLE it gives a different value. In practice you give it an initial random seed and it produces a pseudorandom sequence. I suppose that’s what they mean by “random number streams”…
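The rnd() in the SDK sample's random.h is a small linear congruential generator; a host-compilable sketch of that behavior (same constants as the SDK samples, but written here from memory rather than copied from the thread):

```cpp
#include <cstdint>

// LCG in the style of the OptiX SDK's random.h: rnd() advances the seed
// variable in place, so successive calls with the same variable yield a
// stream of different values in [0, 1).
static float rnd(std::uint32_t &prev)
{
    prev = 1664525u * prev + 1013904223u;   // classic LCG step
    return static_cast<float>(prev & 0x00FFFFFFu)
         / static_cast<float>(0x01000000u);
}
```

Two variables initialized with the same seed reproduce the same sequence, which is what makes "stream" a fair description: the seed variable *is* the stream state.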
Yes, but on a GPU you have many threads executing the same commands, and you must be sure to initialize each one with a different random seed. Otherwise, you will see patterns in the image or output you generate.
You’re right, I fill an RTBuffer using the host random function to initialize the random values. I did not understand how OptiX distributes the load over blocks, so I just avoid CUDA calls that need to know the blockID / threadID. Still, “random number stream” is what I’d call a system that, given a random seed, can generate a random stream of values…
Remember that launch_index and threadID/blockID are two different concepts. OptiX guarantees that each thread running the ray generation program has a unique launch_index, but not a unique threadID. So you may use launch_index to initialize your pseudorandom number generator, but not threadID.
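One common way to turn launch_index into a per-thread seed (the OptiX SDK samples do something along these lines) is a small hash that mixes the linearized index with a per-launch value such as a frame number. The mixing function below is a host-compilable sketch; `launch_index`, `width`, and `frame_number` in the comment are assumed names:

```cpp
#include <cstdint>

// TEA-based hash in the style of the OptiX SDK samples: mixes a linear
// launch index with a per-launch value so that neighboring threads get
// well-decorrelated starting seeds.
template <unsigned int N>
std::uint32_t tea(std::uint32_t v0, std::uint32_t v1)
{
    std::uint32_t s0 = 0;
    for (unsigned int n = 0; n < N; ++n) {
        s0 += 0x9e3779b9u;
        v0 += ((v1 << 4) + 0xa341316cu) ^ (v1 + s0) ^ ((v1 >> 5) + 0xc8013ea4u);
        v1 += ((v0 << 4) + 0xad90777du) ^ (v0 + s0) ^ ((v0 >> 5) + 0x7e95761eu);
    }
    return v0;
}

// Hypothetical use inside a ray generation program:
//   unsigned int seed = tea<16>(launch_index.y * width + launch_index.x,
//                               frame_number);
```

Because the hash is deterministic, the same launch_index and frame number always reproduce the same seed, which is convenient for debugging; vary the second argument per launch if you want different images each run.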
Are you trying to suggest that there is a problem with using cuRAND or any other random number generating library? cuRAND can indeed generate a “random number stream” as you describe, if you choose to use it that way.
Of course, as I said before, if you want unique results each time you run your program, you’re still better off generating your random seeds on the CPU rather than using launch_index.