Crash with OptiX pre-compiled sample

Hi there.

I am trying to make OptiX (v3.0.1) render data that cannot fit in my GPU memory, and I have run into a crash.

I am using the “glass” pre-compiled example on Windows 7 64-bit, with 12 GB of RAM and an NVIDIA GTX 690 (driver v332.21).
If I replace the input .obj of the “glass” example (named wineglass.obj) with a bigger one (say, an .obj of 60–70 MB), everything is fine.

If I replace it with a much bigger one (832 MB), it crashes with the message:

OptiX Error: ObjLoader::loadImpl - glmReadOBJ( ‘media/glass/waterglass.obj’ ) failed.

You can find the file I test with here https://www.dropbox.com/s/rumvrr2rv2zgx0v/wineglass.obj

Could it be that it can’t load one huge file, and the file must be split?

P.S. In case the problem is that it is one huge file: I also have a version that loads many .hdr files into OptiX, and it likewise crashes when I run over the memory limit (with a C++ bad_alloc exception, but I really don’t think that my 6 free gigabytes of RAM could be fragmented enough for that to happen). Again, it starts crashing with the .hdrs when I exceed the GPU memory capacity. This test is also made with a slightly modified SDK sample. Should I explicitly call some OptiX SDK function to enable that?

Thanks.

Since the error is inside the ObjLoader, i.e. inside the glm library, and you have the sources for that in the SDK, have you debugged into that crash?
Please step through glmReadOBJ() and see when it fails.
If it succeeded in doing the _glmFirstPass(), you should also know the size of the model.

Once you’ve analyzed that, keep in mind that your GTX 690 has only 2 GB per GPU, which means anything bigger than roughly 20 MTriangles will not fit and will start paging. If you add lots of HDR images on top, your memory will run out earlier.

(File-sharing sites like Dropbox are blocked from inside our offices. If you really want to send the file, I can set up a temporary FTP. But this really sounds like a host-memory issue on your side that you’d need to analyze yourself first.)

Thanks for the reply.

Is paging enabled only for geometry? I can’t load more texture data than fits in GPU memory, is that right?

Other than that, I will compile and debug the NVIDIA sample and send you a fix when I am ready. Just tell me where I can mail it so you can patch your SDK samples.

Paging should work for acceleration structures and textures, but that doesn’t enable scenes of unlimited size. A rule of thumb is that a scene shouldn’t be bigger than three times the GPU memory.

There are of course hardware limits on the maximum size of textures. There is also a limit on the number of hardware-accelerated textures on GPUs before Kepler; the latter supports bindless textures.
Also note that you should definitely disable SLI on your dual-GPU board. You can read about that in the OptiX Release Notes and Programming Guide.

You can attach small files to this forum (use the paper-clip icon in the top right corner when hovering over your submitted post), or find the support e-mail address in the OptiX release notes.

Well it turns out that the crash is caused by memory fragmentation.

You may want to avoid filling a container from an std::ifstream without reserving the proper amount of memory first (otherwise the container keeps reallocating as it grows, and that fragments the heap) … even in samples.

One solution: in HDRLoader.cpp, after
std::ifstream inf(filename.c_str(), std::ios::binary);

you may reserve as much as you need:

inf.seekg(0, std::ios::end);
size_t const size = size_t(inf.tellg());
inf.seekg(0, std::ios::beg);
std::vector<char> fileContents; // or std::string, etc.
fileContents.reserve(size);
fileContents.assign(std::istreambuf_iterator<char>(inf), std::istreambuf_iterator<char>());

This will also speed up file reading considerably. You then have to manage the data in the vector yourself, though.

Thanks for the clarification above, most likely I will have many more questions coming.