GeForce GTX 690 cards have two GPUs (0 and 1). The following sequence fails with NV_ENC_ERR_OUT_OF_MEMORY:
- Create and run an encoder on device 0
- Create and run an encoder on device 1
- Create and run an encoder on device 0 ← This fails in nvEncInitializeEncoder with NV_ENC_ERR_OUT_OF_MEMORY
The video can be very small (64x64), so this is not a genuine out-of-memory condition. Also note that I don’t even have to encode anything; I can just create and delete the encoder. If I don’t switch between GPUs (i.e., I always stay on device 0 or always on device 1), the out-of-memory error does not occur.
I have built an encoder with an AutoSelect feature, which picks the least-used GPU. After a few conversions the encoder starts to fail, and it won’t work again until I terminate the current process.
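For context, the AutoSelect policy works roughly like the sketch below. The class and function names are mine, and the real implementation queries actual GPU load; here that is replaced by a simple per-device counter of active jobs, which is enough to show why the encoder keeps alternating between device 0 and device 1:

```cpp
#include <vector>

// Illustrative sketch of the AutoSelect policy: track how many encode
// jobs each GPU is currently running and hand out the least-loaded one.
class GpuSelector {
public:
    explicit GpuSelector(int deviceCount) : jobs(deviceCount, 0) {}

    // Returns the device with the fewest active jobs and marks it busy.
    int Acquire() {
        int best = 0;
        for (int i = 1; i < (int)jobs.size(); ++i)
            if (jobs[i] < jobs[best])
                best = i;
        ++jobs[best];
        return best;
    }

    // Marks a job on the given device as finished.
    void Release(int device) { --jobs[device]; }

private:
    std::vector<int> jobs; // active jobs per device
};
```

With two GPUs and short-lived jobs this naturally alternates 0, 1, 0, 1, …, which is exactly the device-switching pattern that triggers the error.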
I am running this on Windows 10, using the latest driver (355.60).
Here is a shortened version of a function showing a sequence that reproduces the error. The error checking has been stripped out to keep the function easy to read:
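For reference, the stripped-out error checking was just a bail-out macro of the following shape (the macro name and the stand-in functions are mine; it relies only on the fact that both CUDA_SUCCESS and NV_ENC_SUCCESS are 0 in their respective enums):

```cpp
#include <cstdio>

// Sketch of the error checking stripped from CompressVideo(): abort the
// calling function as soon as any status is non-zero. Works for both
// CUresult and NVENCSTATUS, whose success values are both 0.
#define CHECK_STATUS(call)                                          \
    do {                                                            \
        int status_ = (int)(call);                                  \
        if (status_ != 0) {                                         \
            fprintf(stderr, "%s failed with %d\n", #call, status_); \
            return status_;                                         \
        }                                                           \
    } while (0)

// Example use with stand-in functions; the real code wraps the CUDA
// driver API and CNvHWEncoder calls instead.
static int FakeOk()   { return 0; }
static int FakeFail() { return 10; } // e.g. NV_ENC_ERR_OUT_OF_MEMORY

static int RunSequence() {
    CHECK_STATUS(FakeOk());
    CHECK_STATUS(FakeFail()); // stops here and propagates the code 10
    CHECK_STATUS(FakeOk());   // never reached
    return 0;
}
```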
// Compress video using the given GPU device.
CUresult CompressVideo(int deviceID)
{
    CUcontext pDevice;
    CUdevice device;
    CUresult cuResult = cuDeviceGet(&device, deviceID);
    cuResult = cuCtxCreate(&pDevice, 0, device);
    CUcontext cuContextCurr;
    cuResult = cuCtxPopCurrent(&cuContextCurr);

    CNvHWEncoder *pNvHWEncoder = new CNvHWEncoder;
    /* On the third call, this fails with NV_ENC_ERR_OUT_OF_MEMORY = 10 */
    NVENCSTATUS nvStatus = pNvHWEncoder->Initialize(pDevice, NV_ENC_DEVICE_TYPE_CUDA);
    if (nvStatus != NV_ENC_SUCCESS)
    {
        printf("\npNvHWEncoder->Initialize failed %d", nvStatus);
        return (CUresult)nvStatus;
    }

    EncodeConfig encodeConfig = {0};
    InitConfig(&encodeConfig); /* nothing special here, just the defaults: Width, Height = 64, encodeConfig.codec = NV_ENC_H264 */
    nvStatus = pNvHWEncoder->CreateEncoder(&encodeConfig);

    /* Test code to flush the encoder - the error occurs with or without it:
    EncodeOutputBuffer stEOSOutputBfr = {0};
    stEOSOutputBfr.bEOSFlag = TRUE;
    nvStatus = pNvHWEncoder->NvEncRegisterAsyncEvent(&stEOSOutputBfr.hOutputEvent);
    nvStatus = pNvHWEncoder->NvEncFlushEncoderQueue(stEOSOutputBfr.hOutputEvent);
    WaitForSingleObject(stEOSOutputBfr.hOutputEvent, INFINITE);
    pNvHWEncoder->NvEncUnregisterAsyncEvent(stEOSOutputBfr.hOutputEvent);
    nvCloseFile(stEOSOutputBfr.hOutputEvent);
    */

    pNvHWEncoder->NvEncDestroyEncoder();
    delete pNvHWEncoder;
    pNvHWEncoder = NULL;
    return cuCtxDestroy(pDevice);
}
I attached to this post a ZIP file containing a Visual Studio 2008 project that demonstrates the error. Just build and run the program and you should see the error. You will need a GeForce GTX 690 or a similar card with two GPUs. The project simply does something like this:
cuInit(0);
CompressVideo(0);
CompressVideo(1);
CompressVideo(0); // this fails with NV_ENC_ERR_OUT_OF_MEMORY
Note that the NVENC 5.0 documentation says “The client should call NvEncDestroyEncodeSession to close the encoding session”, but there is no NvEncDestroyEncodeSession; the function that actually closes the session is called nvEncDestroyEncoder.
Forgot to mention that I run the card with “maximize 3D performance”, which uses both GPUs. I also have PhysX set to auto-select.
OutOfMem.zip (124 KB)