We are using CryEngine to develop a game, and we currently have a level in Crytek's Sandbox editor that is so large that any concurrently running RC.exe (the Resource Compiler, a separate application used for TIFF-to-DDS texture conversion) always fails to initialize its CUDA texture compressor. No other application is needed to reproduce the problem, and it happens whether the Sandbox renderer is set to DX9 or DX11.

The cause is "out of memory" for every cudaMalloc() called inside rc.exe. In fact, although cudaSetDevice() returns cudaSuccess, a call to cudaDeviceSynchronize() immediately afterwards returns cudaErrorMemoryAllocation. The original rc.exe DLL responsible for CUDA DDS compression is built against CUDA 3.0, so I tried recompiling it with the CUDA 4 and CUDA 5 SDKs, but that didn't help. Even the CUDA SDK samples behave the same way whenever rc.exe fails: they usually crash or report errors on malloc, so I don't think this is a case of incorrect CUDA initialization.

Once I load a small level, everything is fine and CUDA is used successfully for texture conversion when needed. Is there any way to force a CUDA application to reclaim VRAM from other applications (e.g., by triggering a device-lost event or something similar)? My GPU is a GeForce GTX 560 with 1 GB of VRAM, and the driver version is 306.97. There is no way to attach a .txt file here (only images of various types can be attached), so I can't attach a full system report.
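For reference, here is a minimal standalone sketch (my assumption of what the failing init path in rc.exe boils down to, not the actual Crytek code) that shows where the error surfaces. cudaSetDevice() itself does not create a context, so it reports success even with exhausted VRAM; the lazy context creation triggered by cudaDeviceSynchronize() is what actually fails:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Reports cudaSuccess even when VRAM is already exhausted,
    // because context creation is deferred.
    cudaError_t err = cudaSetDevice(0);
    printf("cudaSetDevice: %s\n", cudaGetErrorString(err));

    // First runtime call that forces context creation; in the failing
    // case this returns cudaErrorMemoryAllocation.
    err = cudaDeviceSynchronize();
    if (err != cudaSuccess) {
        printf("cudaDeviceSynchronize: %s\n", cudaGetErrorString(err));
        return 1;
    }

    // If the context did come up, report how much VRAM is actually
    // left before attempting the compressor's allocations.
    size_t freeB = 0, totalB = 0;
    cudaMemGetInfo(&freeB, &totalB);
    printf("free: %zu MB / total: %zu MB\n",
           freeB >> 20, totalB >> 20);
    return 0;
}
```

With the big level loaded in Sandbox, this sketch fails at the cudaDeviceSynchronize() step on my machine, exactly like rc.exe and the SDK samples.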