Persisting CUDA cache between boots

I would like to point CUDA_CACHE_PATH at a directory on persistent storage so that JIT compilation results are cached across boots. Is this recommended? What mechanisms, if any, does CUDA have to guard against corrupted cache files?
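
For concreteness, the setup I have in mind is roughly the sketch below. The cache directory (/var/lib/myapp/cuda-cache here) is a hypothetical mount point on persistent storage, and I am assuming the driver picks up CUDA_CACHE_PATH at initialization, so I set it before the first CUDA call:

```
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

int main(void)
{
    /* Hypothetical directory on persistent storage; must exist and be
       writable by the application. Set before any CUDA API call so the
       driver sees it when it initializes. */
    setenv("CUDA_CACHE_PATH", "/var/lib/myapp/cuda-cache", 1);

    /* Force CUDA initialization; any JIT compilation of embedded PTX
       (and hence any cache access) happens at or after this point. */
    cudaError_t err = cudaFree(0);
    if (err != cudaSuccess) {
        fprintf(stderr, "CUDA init failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    return 0;
}
```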

Not sure I understand the question. By default, the JIT cache already lives in a persistent section of the host’s file system. For example, on my Windows system the location is C:\Users\Norbert\AppData\Roaming\NVIDIA\ComputeCache, which is on rotational mass storage (a.k.a. a hard disk) and currently occupies 268 MB accumulated over time.

Why is it important to know how CUDA validates JIT cache content?

Suppose an application writes to the cache, but a power loss or some other issue causes the file to be corrupted or truncated. On the next boot the application runs again and finds the corrupted/truncated entry in the cache. What happens? Does it detect the bad entry and reject it, e.g. due to a mismatched checksum? Or does it load the corrupted cache entry and error out or crash at runtime? This is for an application that a human (ideally) won’t be administering directly, so I need to know what sort of issues I have to implement protections against.
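
Right now my working plan is to treat any failure at module-load/JIT time as a possibly corrupted cache, wipe the cache directory, and retry once, regardless of how the driver validates entries internally. A minimal sketch using the driver API; clear_cache_dir is a hypothetical helper that empties the directory CUDA_CACHE_PATH points to:

```
#include <cuda.h>

/* Hypothetical helper that removes everything under the directory that
   CUDA_CACHE_PATH points to; implementation omitted. */
extern void clear_cache_dir(void);

/* Load a module from a PTX image (which is where JIT compilation and any
   cache lookup occur); if that fails, clear the cache and retry once. */
CUresult load_module_with_retry(CUmodule *mod, const void *ptx_image)
{
    CUresult rc = cuModuleLoadData(mod, ptx_image);
    if (rc != CUDA_SUCCESS) {
        clear_cache_dir();
        rc = cuModuleLoadData(mod, ptx_image);
    }
    return rc;
}
```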

I don’t have details of how the cache is accessed or validated, but one option is to sidestep the issue by disabling the cache with the CUDA_CACHE_DISABLE environment variable.
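
A minimal sketch of that approach, assuming the variable is honored when set from within the process before the driver initializes (setting it in the launch environment is the safer bet):

```
#include <stdlib.h>
#include <cuda_runtime.h>

int main(void)
{
    /* Disable the JIT cache entirely, trading repeated JIT compilation time
       for not having to worry about on-disk cache files at all. Must be in
       effect before the first CUDA call in this process. */
    setenv("CUDA_CACHE_DISABLE", "1", 1);

    /* Trigger CUDA initialization. */
    return (cudaFree(0) == cudaSuccess) ? 0 : 1;
}
```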

Disabling the cache has some undesirable side effects, of course, chiefly that any JIT compilation is repeated on every run. These can largely be avoided by building binaries that don’t depend on JIT in the first place. The Makefiles that accompany the CUDA sample codes show a best practice for this approach; a minimal sketch is below.
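
The essential idea is to pass explicit -gencode options so the fat binary embeds native SASS for every GPU architecture you deploy on; the sm_86/sm_90 values below are just examples, and adding a code=compute_XX (PTX) entry is what reintroduces the possibility of JIT on architectures you did not list:

```
// app.cu -- built with native SASS for the deployment architectures so the
// driver does not need to JIT-compile PTX at startup. Hypothetical build
// line (adjust the sm_XX values to your GPUs):
//
//   nvcc -gencode arch=compute_86,code=sm_86 \
//        -gencode arch=compute_90,code=sm_90 \
//        -o app app.cu

#include <cstdio>

__global__ void noop() { }

int main()
{
    noop<<<1, 1>>>();
    cudaError_t err = cudaDeviceSynchronize();
    if (err != cudaSuccess) {
        // cudaErrorNoKernelImageForDevice here typically means the binary
        // contains no SASS (and no PTX) usable on the installed GPU.
        std::printf("launch failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    return 0;
}
```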

If you find that information is missing from the CUDA documentation, one possible avenue is to request a documentation update. You can do this using the bug filing mechanism linked in a sticky post at the top of this forum.