Multiple applications accessing the GPU: one app does rendering, the other runs CUDA


Do two applications interfere with each other if one does rendering while the other executes CUDA code? Can we be sure that CUDA memory allocations and cudaMemcpy calls don't overwrite GPU memory used by the first application? (The first app doesn't use CUDA.)

Both of them go through the driver, which maintains the global context and keeps each process's GPU allocations separate (that is why drivers exist in the first place).

So they won't interfere with each other's memory. However, they do share GPU bandwidth and resources, so things can be slower…
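To illustrate, here is a minimal sketch (assuming a CUDA-capable device and the runtime API; the buffer size is arbitrary) of what the CUDA side does. Every allocation and copy goes through the driver, which serves it out of this process's own GPU address space, so it cannot land on memory owned by another process such as a renderer:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // cudaMalloc is routed through the driver, which carves the
    // allocation out of this process's GPU address space.
    float *d_buf = nullptr;
    size_t bytes = 1024 * sizeof(float);
    cudaError_t err = cudaMalloc(&d_buf, bytes);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    // cudaMemcpy likewise only touches memory this process owns;
    // it cannot overwrite another application's GPU buffers.
    float h_buf[1024] = {0};
    err = cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMemcpy failed: %s\n", cudaGetErrorString(err));
        cudaFree(d_buf);
        return 1;
    }

    cudaFree(d_buf);
    return 0;
}
```

The only thing the two processes contend for is throughput (memory bandwidth, SM time, PCIe), not each other's data.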