 # Tuning a Monte Carlo Algorithm on GPUs: basic question

I am a complete novice with CUDA, so I am probably overlooking something basic here, perhaps with scoping. Reading over the first listing in the tutorial, I see:

```fortran
do i = 1, N
   tempVal = X(i)*X(i) + Y(i)*Y(i)
   if (tempVal < 1) then
      temp(i) = 1
   else
      temp(i) = 0
   endif
end do
```

So now temp(i) equals either 0 or 1 for every i in 1..N. Right?

Then later we see:

```fortran
do i = 1, N
   sumA  = sumA + temp(i)
   sumSq = sumSq + (temp(i)*temp(i))
end do
```

Since each value of temp(i) is either 0 or 1, `temp(i)*temp(i)` is just `0*0 = 0` or `1*1 = 1` for each value of i. So sumA is equal to sumSq. Right? And what is the point of independently calculating the same value? And later:

```fortran
meanA  = sumA / real(N)
meanSq = sumSq / real(N)
```

Then doesn’t meanA equal meanSq? And how does that help compute the variance, when meanSq - meanA*meanA just collapses to meanA - meanA*meanA? So what is wrong with my understanding?
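To sanity-check my reasoning I wrote a quick sketch of the same hit-counting logic (in Python rather than the tutorial's CUDA Fortran; the point count and RNG seed are my own choices). For 0/1 samples the mean of the squares equals the mean, so the variance estimate `meanSq - meanA**2` reduces to `meanA*(1 - meanA)`:

```python
import random

def mc_pi(n, seed=0):
    """Estimate pi by counting random points inside the unit quarter-circle."""
    rng = random.Random(seed)
    sum_a = sum_sq = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        hit = 1 if x*x + y*y < 1.0 else 0  # same 0/1 indicator as temp(i)
        sum_a += hit
        sum_sq += hit * hit  # identical to hit, since 0*0 = 0 and 1*1 = 1
    mean_a = sum_a / n
    mean_sq = sum_sq / n               # always equals mean_a for 0/1 samples
    variance = mean_sq - mean_a**2     # reduces to p*(1 - p) here
    return 4.0 * mean_a, mean_a, mean_sq, variance

pi_est, mean_a, mean_sq, var = mc_pi(100_000)
```

As expected, `mean_a == mean_sq` holds exactly, since both are the same integer count divided by n.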

No, you’re correct. I considered removing the variance portion of the code since it doesn’t matter in this case. But it’s part of a Monte Carlo algorithm I was using, so I decided to leave it in.

• Mat