Yeah, sure, but why add extra computation if cudaReadModeNormalizedFloat on a signed type should already return values in [-1, 1]?
I must be forgetting something somewhere; I'd like to know what!
Here’s my guess: the texture interpolation circuit may have a limited range in hardware. For example, by default most OpenGL applications clamp texture inputs and outputs to [0…1], unless they use float textures and a shader-based rendering path. So there may have been a good reason (cost?) to limit the capability of the texture interpolator.
A puzzling detail, however, is that the CUDA documentation does state that signed textures return values in [-1…1].
How are you initializing this texture? Can you post some code?
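For reference, a minimal sketch of what such a setup might look like, using the legacy texture reference API with a signed char texture and cudaReadModeNormalizedFloat (the variable names and the 4-element test data are my own invention, not from your code). Per the programming guide, each signed 8-bit element x should come back as roughly x / 127.0f, clamped to [-1, 1]:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Legacy texture reference: signed 8-bit integers, read back as
// normalized floats. The docs say the result lies in [-1, 1].
texture<signed char, cudaTextureType1D, cudaReadModeNormalizedFloat> texRef;

__global__ void fetchNormalized(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = tex1Dfetch(texRef, i);  // expect ~ x / 127.0f
}

int main()
{
    const int n = 4;
    signed char h_in[n] = { -127, -64, 0, 127 };
    float h_out[n];

    signed char *d_in;
    float *d_out;
    cudaMalloc(&d_in, n * sizeof(signed char));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(signed char), cudaMemcpyHostToDevice);

    // Bind the linear buffer to the texture reference.
    cudaBindTexture(0, texRef, d_in, n * sizeof(signed char));

    fetchNormalized<<<1, n>>>(d_out, n);
    cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);

    for (int i = 0; i < n; ++i)
        printf("in=%4d  out=% f\n", h_in[i], h_out[i]);

    cudaUnbindTexture(texRef);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

If your code differs from this pattern (e.g. a cudaArray plus cudaBindTextureToArray, or a channel descriptor declared as unsigned), that difference may be exactly where the unexpected range comes from.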