Bilinear interpolation

I have seen several papers and threads on this and other forums saying that it is not possible to access many of the hardwired graphics components from CUDA. For example, for bilinear interpolation we can use texture fetching in CUDA, but it won't be faster than using Cg, which does not hide the hardwired components required for this operation. I would like to know whether there is an efficient way to implement bilinear interpolation in CUDA, or whether the only option is to pass data from CUDA (assuming you are doing something else with your data before the interpolation) to OpenGL (or DirectX) and perform the interpolation with those APIs.

Thanks a lot,

You can get bilinear texture filtering in both Cg and CUDA, and the same hardware is used in both cases. I'm not sure what you mean by Cg not hiding the hardwired components required for bilinear interpolation. Do you mean the lerp function in Cg? I don't think that using lerp will be faster than explicit interpolation in CUDA. Have you observed a performance difference?