GL_OUT_OF_MEMORY on Windows, not on Linux

Hi, I’m developing a piece of software that (for the moment) visualizes huge medical images by loading them directly onto the GPU at a reasonable resolution and then doing some custom ray tracing (via a DDA) on the image itself. The program uses Qt’s built-in OpenGL functions and classes, with some custom shaders I wrote myself.

When running my program under Linux, loading a ~4 GB dataset succeeds, but the dataset shows artifacts on the first few slices of loaded data (see [1] at the bottom of the post). I know this is a problem on the graphics side and not in my loading code: when I dump the contents of the loaded data back to disk, the artifacts are not present.

However, on Windows the same allocation completely fails: the call to glTexImage3D() results in GL_OUT_OF_MEMORY, even though I’m asking it to allocate the exact same amount of data. nvidia-smi.exe reports that more than 7 GB of VRAM are still available on my card at loading time, and I cannot see any memory being allocated on the GPU in Task Manager’s VRAM usage graph.
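
For what it’s worth, here is a minimal diagnostic sketch (placeholder names, not my actual loader) that asks the driver, via a GL_PROXY_TEXTURE_3D probe, whether it would even accept a 3D texture of this size and format. As far as I understand, the proxy only validates the size/format combination; it does not guarantee that the real allocation will fit in memory:

#include <QOpenGLFunctions_3_3_Core>

// Diagnostic sketch: does the driver accept a 3D texture of this size/format?
// Note: a proxy texture never reserves memory, it only validates the request.
bool driverAcceptsVolume(QOpenGLFunctions_3_3_Core* f,
                         GLsizei width, GLsizei height, GLsizei depth)
{
  f->glTexImage3D(GL_PROXY_TEXTURE_3D, 0, GL_RG16UI,
                  width, height, depth, 0,
                  GL_RG_INTEGER, GL_UNSIGNED_SHORT, nullptr);

  // A rejected proxy allocation reports a width of 0 for the queried level.
  GLint probedWidth = 0;
  f->glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0,
                              GL_TEXTURE_WIDTH, &probedWidth);
  return probedWidth != 0;
}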

My question is this: is there any difference in how GPU VRAM is allocated/managed by OpenGL between Windows and Linux, or is this possibly a driver bug?

P.S.: nvidia-smi.exe can report the total amount of memory used, but cannot report per-process memory use under Windows, even when PowerShell is run as administrator. Is this the expected behaviour of nvidia-smi?


System info: I’m running Windows 10 on an MSI GS65 Stealth with a GTX 1070 Max-Q. I made sure the program runs on the discrete GPU by exporting the NvOptimusEnablement variable:

extern "C"
{
  __declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001;
}

at the top of my program (before main() is called). It seems to work: glGetString(GL_RENDERER) returns the string GeForce GTX 1070 with Max-Q Design/PCIe/SSE2.
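
For reference, the sanity check is just something along these lines (a minimal sketch using Qt’s generic function wrapper, called while a context is current):

#include <QOpenGLContext>
#include <QOpenGLFunctions>
#include <QDebug>

// Print which vendor/renderer the current context actually ended up on.
void printContextInfo()
{
  QOpenGLFunctions* f = QOpenGLContext::currentContext()->functions();
  qDebug() << "GL_VENDOR  :" << reinterpret_cast<const char*>(f->glGetString(GL_VENDOR));
  qDebug() << "GL_RENDERER:" << reinterpret_cast<const char*>(f->glGetString(GL_RENDERER));
  qDebug() << "GL_VERSION :" << reinterpret_cast<const char*>(f->glGetString(GL_VERSION));
}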


[1]: Example of ‘garbled’ data when loading a dataset encoded with 16-bit values, uploaded using GL_RG16UI as the internalFormat, GL_RG_INTEGER as the format, GL_UNSIGNED_SHORT as the type, and pack and unpack alignments set to 1:

I triple-checked the dimensions passed to glTexImage3D(), since that has already been a source of problems for me in the past, and they are correct.
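
For completeness, the dimension check and the upload boil down to roughly the following (a trimmed sketch with placeholder names, using one of Qt’s versioned function wrappers rather than my exact code):

#include <QOpenGLFunctions_3_3_Core>

// Trimmed sketch of the upload path; returns 0 on failure.
GLuint uploadVolume(QOpenGLFunctions_3_3_Core* f,
                    GLsizei width, GLsizei height, GLsizei depth,
                    const GLushort* voxelData)
{
  // Each dimension has to fit within the driver's 3D texture limit.
  GLint maxSize = 0;
  f->glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &maxSize);
  if (width > maxSize || height > maxSize || depth > maxSize)
    return 0;

  // Pack/unpack alignments set to 1, as described above.
  f->glPixelStorei(GL_PACK_ALIGNMENT, 1);
  f->glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

  GLuint tex = 0;
  f->glGenTextures(1, &tex);
  f->glBindTexture(GL_TEXTURE_3D, tex);

  // Integer formats cannot be linearly filtered, so sampling stays at NEAREST.
  f->glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
  f->glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

  f->glTexImage3D(GL_TEXTURE_3D, 0, GL_RG16UI,
                  width, height, depth, 0,
                  GL_RG_INTEGER, GL_UNSIGNED_SHORT, voxelData);

  // On the Windows machine, GL_OUT_OF_MEMORY is reported at this point.
  if (f->glGetError() != GL_NO_ERROR)
    return 0;

  return tex;
}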

EDIT: I should mention that the image’s coloring is done in real time, via shaders. I’m uploading an RG texture and coloring it using an equation in the fragment shader.
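
To give an idea of what that pass looks like, here is a minimal sketch (the actual equation is replaced by a simple normalization; the one detail that matters is that a GL_RG16UI texture has to be sampled through a usampler3D):

#include <QOpenGLShaderProgram>

// Minimal sketch of the coloring pass. The color "equation" below is a
// placeholder; only the usampler3D declaration reflects the real setup.
static const char* kVertexSrc = R"(
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 uvw;
out vec3 texCoord;
void main()
{
  texCoord = uvw;
  gl_Position = vec4(position, 1.0);
}
)";

static const char* kFragmentSrc = R"(
#version 330 core
uniform usampler3D volume;   // the GL_RG16UI 3D texture
in vec3 texCoord;
out vec4 fragColor;
void main()
{
  // Unsigned-integer textures are fetched unfiltered, as raw values.
  uvec2 raw = texture(volume, texCoord).rg;
  // Placeholder equation: normalize the 16-bit channels to [0, 1].
  vec2 v = vec2(raw) / 65535.0;
  fragColor = vec4(v, 0.0, 1.0);
}
)";

bool buildColoringProgram(QOpenGLShaderProgram& program)
{
  return program.addShaderFromSourceCode(QOpenGLShader::Vertex, kVertexSrc)
      && program.addShaderFromSourceCode(QOpenGLShader::Fragment, kFragmentSrc)
      && program.link();
}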