I’m observing that CUDA seems to allocate ~30MB of device memory upon initialisation of either the runtime or driver API (i.e. a call to cuInit or similar). Does anybody know:
- What this is for; I assume some kind of internal storage for something-or-other, but some details would be interesting for the inquiring mind
- If there is anything I can do to reduce it
- If the answer to the second point is no, whether there is any scope for reducing it in future versions of CUDA
The project I’m working on will probably rely on initialising multiple instances of CUDA (themselves using about 50MB, so it’s a noticeable overhead). I’d be happy if I could reduce this, but it’s not a huge problem if not, as more recent cards are coming loaded with much more memory than my antique 256MB 8600GT.
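In case it helps anyone reproduce the measurement, here’s a rough sketch of how the per-context overhead can be quantified with the driver API, by comparing `cuMemGetInfo` readings before and after creating a second context. This is an illustrative sketch only (it assumes device 0, needs a CUDA-capable GPU, and omits error checking); build with something like `nvcc ctx_overhead.c -lcuda`:

```c
#include <cuda.h>
#include <stdio.h>

int main(void) {
    CUdevice dev;
    CUcontext ctx, ctx2;
    size_t free_before, free_after, total;

    cuInit(0);                  /* initialise the driver API */
    cuDeviceGet(&dev, 0);       /* assumes device 0 */

    cuCtxCreate(&ctx, 0, dev);  /* first context; context creation is
                                   where the per-context cost is paid */
    cuMemGetInfo(&free_before, &total);

    cuCtxCreate(&ctx2, 0, dev); /* second context on the same device */
    cuMemGetInfo(&free_after, &total);

    printf("Second context cost: %zu bytes\n",
           (size_t)(free_before - free_after));

    cuCtxDestroy(ctx2);
    cuCtxDestroy(ctx);
    return 0;
}
```

Note that `cuMemGetInfo` requires a current context, which is why the first reading is taken after the first `cuCtxCreate` rather than right after `cuInit`; the figure reported is therefore the incremental cost of an additional context, not the absolute cost of the first one.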