Hey guys, apologies for the cross-post from the OpenGL forums, but I think this question is more appropriate to ask here.
I’ve noticed that if I take a bilinear sample of a texture, the result is noticeably lower quality than if I fetch the four involved texels and do the bilinear interpolation myself in the fragment shader.
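For reference, the manual path I’m describing is just the standard bilinear formula evaluated at full float precision. Here’s a quick JavaScript sketch of the math (function names and values are made up for illustration), plus a version that simulates my best guess at the cause: hardware samplers quantizing the interpolation weights to a small fixed-point fraction before blending.

```javascript
// Standard bilinear interpolation at full float precision —
// i.e. what the manual fragment-shader path computes.
// t00..t11 are the four neighbouring texel values;
// fx/fy are the fractional position between them (0..1).
function bilinear(t00, t10, t01, t11, fx, fy) {
  const top = t00 + (t10 - t00) * fx;    // lerp along x, top row
  const bottom = t01 + (t11 - t01) * fx; // lerp along x, bottom row
  return top + (bottom - top) * fy;      // lerp along y
}

// Hypothetical model of a hardware sampler: snap fx/fy to a
// fixed-point fraction (e.g. 8 bits of subtexel precision)
// before blending. With few bits this produces visible steps
// instead of a smooth gradient.
function bilinearQuantized(t00, t10, t01, t11, fx, fy, bits) {
  const steps = 1 << bits;
  const qx = Math.round(fx * steps) / steps;
  const qy = Math.round(fy * steps) / steps;
  return bilinear(t00, t10, t01, t11, qx, qy);
}
```

This is only a sketch of what I suspect is happening, not how any particular GPU actually implements its filtering.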
This has occurred for me in both WebGL (Chrome, Firefox, and Internet Explorer on Windows 8) and OpenGL. I have a pair of 980Ms, but I’ve also tried other machines, including a desktop.
I haven’t yet tried DirectX but I’m going to be trying that soon.
Is there any way to get higher-quality hardware bilinear sampling, or is this a limitation of current hardware?
WebGL example at the link below. View source of the webpage to see the code:
Interestingly, yesterday it became a lot smoother for a while (not the full quality of the fragment-shader float interpolation, but very close), and then I rebooted and it went back to being jagged. My best guess is that it got smoother after I updated my drivers, but I can’t be 100% sure that’s what made the difference.
Also, somewhat interestingly, it’s a lot smoother on my phone (a 32GB Motorola DROID TURBO), and also on my wife’s laptop, which is a 2011 or 2012 MacBook Air.
Anyone have any thoughts on this?