For our project, we built a shared library for Node.js that uses CUDA internally.
Everything works fine at runtime; the trouble starts when the app shuts down. We want to properly destroy objects that own memory allocated on the GPUs, but the process crashes because the CUDA contexts have already been destroyed by the time our destructors run.
We tried calling cuDevicePrimaryCtxRetain, which should increment the primary context's reference count, and then cuDevicePrimaryCtxRelease at the very end. But even that doesn't really work.
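For reference, a minimal sketch of the retain/release pattern we tried (function names are ours; assumes a single device, cuInit(0) already called, and error handling trimmed):

```cpp
#include <cuda.h>

static CUcontext g_primaryCtx = nullptr;
static CUdevice  g_device     = 0;

// Called once when the library loads (e.g. from the addon's init hook).
// Bumps the primary context's refcount so it should stay alive until we
// release it ourselves.
bool retainPrimaryContext() {
    if (cuDeviceGet(&g_device, 0) != CUDA_SUCCESS)
        return false;
    return cuDevicePrimaryCtxRetain(&g_primaryCtx, g_device) == CUDA_SUCCESS;
}

// Called at shutdown, AFTER all GPU-owning objects have been destroyed.
void releasePrimaryContext() {
    if (g_primaryCtx) {
        cuDevicePrimaryCtxRelease(g_device);
        g_primaryCtx = nullptr;
    }
}
```

The crash still happens even with this in place, which suggests the release (or the driver's own teardown) runs before our object destructors do.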
We know our deallocation logic itself is correct, because the same tests run in a standalone executable, and there we can see the destruction happening in the right order when that app closes.
Is there anything we can do to control this properly?