GPU is a GTX 1050 Ti (compute capability 6.1), running CUDA 8.0 on 64-bit Windows 10.
The error is on the "min" side; I had a similar problem with thrust::reduce().
It happens with a particular dataset of floats coming from image luma values (an HDR app).
Just to verify, I wrote a very simple single-threaded kernel to perform the min-reduction; its result agrees with the CPU.
The floats are not huge (or tiny) values: the min comes out as something like -6.61f vs -6.049f, and the max is about 2.5f.
Has anyone else had a similar experience with Thrust?
Thanks in advance