I am building an application where I would like to create and manage my own CUDA contexts using the driver API. My understanding is that I should be able to create a context and have the CUDA runtime libraries use that context too. However, some cuFFT calls appear to change the current context, when I would like them to keep using my existing context. I have written a minimal example to demonstrate this:
#include <cuda.h>
#include <cufft.h>
#include <iostream>
int main()
{
cuInit(0);
CUcontext main_render_context;
// create main context
cuCtxCreate(&main_render_context, 0, 0);
std::cout << "main ctx: " << main_render_context << std::endl;
// push
cuCtxPushCurrent(main_render_context);

// cufft
cufftHandle handle;
cufftCreate(&handle);
int fftDimensions[2] = { 2048, 2048 };
size_t fftSize;
cufftGetSizeMany(
handle,
2, fftDimensions,
NULL, 0, 0, // triple of embed, stride, distance (in)
NULL, 0, 0, // triple of embed, stride, distance (out)
CUFFT_C2C,
1,
&fftSize
);

CUcontext current_render_context;
cuCtxGetCurrent(&current_render_context);
// prints different to above
std::cout << "current ctx: " << current_render_context << std::endl;}
Building and running the above with CUDA 11.1 results in different values being printed for the contexts. Is this the expected behaviour, and if so, how should I ensure that I can run commands in my chosen context after a call to cuFFT?
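The only workaround I have found so far is to re-bind my context explicitly after the cuFFT calls. Here is a minimal sketch of that, assuming cuCtxSetCurrent is an acceptable way to restore the thread's context (error checking omitted):

// Workaround sketch: after the cuFFT calls, force my context back onto the
// calling thread. Assumes cuCtxSetCurrent is the appropriate call for this.
cuCtxSetCurrent(main_render_context);

CUcontext restored_context;
cuCtxGetCurrent(&restored_context);
// restored_context should now match main_render_context again
std::cout << "restored ctx: " << restored_context << std::endl;

This does appear to restore my context, but I am not sure whether it is the intended pattern or whether I am missing something about how cuFFT manages contexts.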