Hi, I have written an algorithm that processes image data on the GPU. About 2 GB of data stays resident in GPU global memory until the entire processing pipeline finishes. When I run this program on older GPUs, e.g. a GTX 1060 (6 GB) or a Tesla M4 (4 GB), the output is correct. But when I run the same code on newer cards, e.g. a GTX 1660 (6 GB) or an RTX 2060 (8 GB), the output images come out blurred.
What could be going wrong here? Does the newer GPU architecture somehow object to a program occupying this much global memory?
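For reference, this is roughly how I could instrument the host side to rule out a silent failure (the original code is not shown here, so this is only a diagnostic sketch using the standard CUDA runtime API; `CUDA_CHECK` and the 2 GB size are my own placeholders). Checking every runtime call and kernel launch would show whether the allocation or a kernel fails quietly on the newer cards:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical helper: print and bail out on any CUDA runtime error.
#define CUDA_CHECK(call)                                                 \
    do {                                                                 \
        cudaError_t err = (call);                                        \
        if (err != cudaSuccess) {                                        \
            fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__,           \
                    cudaGetErrorString(err));                            \
            return 1;                                                    \
        }                                                                \
    } while (0)

int main() {
    // Report how much global memory is actually free before allocating.
    size_t freeB = 0, totalB = 0;
    CUDA_CHECK(cudaMemGetInfo(&freeB, &totalB));
    printf("free: %zu MiB / total: %zu MiB\n", freeB >> 20, totalB >> 20);

    // Attempt the ~2 GB allocation the pipeline needs, and verify it succeeded.
    void *buf = nullptr;
    CUDA_CHECK(cudaMalloc(&buf, 2ull << 30));

    // After every kernel launch, e.g.:
    //   my_kernel<<<grid, block>>>(/* ... */);
    CUDA_CHECK(cudaGetLastError());      // launch-time errors (e.g. no binary for this GPU arch)
    CUDA_CHECK(cudaDeviceSynchronize()); // errors raised while the kernel runs

    CUDA_CHECK(cudaFree(buf));
    return 0;
}
```

Would checks like these on the GTX 1660 / RTX 2060 be expected to reveal something that the older cards tolerate?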