How to manage CUDA memory?

I am trying to run a Python script on my Xavier NX16 where I have multiple modules trying to utilize the GPU cores.
Sometimes, however, I run into "illegal memory access" errors. I assume that one module is using the GPU and, when a second module tries passing data to the GPU at the same time, it throws these errors.
How do I go about managing my CUDA memory so that all my modules can access the GPU memory without clogging the pipeline?

Hi,

Please note that Jetson's memory doesn't support concurrent access.
Only one process can access a buffer at a time.

Please add a synchronization call to make sure the jobs from the other process have finished.
Here is the memory documentation for Jetson for your reference:

https://docs.nvidia.com/cuda/cuda-for-tegra-appnote/index.html
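
For example, since CUDA kernel launches are asynchronous, an explicit synchronization ensures all queued GPU work is done before the buffer is touched again. A minimal sketch, assuming Numba as the Python CUDA binding (the buffer and kernel here are hypothetical; the same idea applies with PyCUDA or CuPy):

```python
import numpy as np
from numba import cuda

@cuda.jit
def scale(buf, factor):
    # Each thread scales one element of the buffer.
    i = cuda.grid(1)
    if i < buf.size:
        buf[i] *= factor

d_buf = cuda.to_device(np.ones(1024, dtype=np.float32))

threads = 128
blocks = (d_buf.size + threads - 1) // threads
scale[blocks, threads](d_buf, 2.0)  # the launch returns immediately

# Explicitly block until all queued GPU work has finished,
# before another module submits work or the CPU reads the result.
cuda.synchronize()

result = d_buf.copy_to_host()
```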

Thanks.

Thanks AastaLL.

When you say "please add the synchronize", are you referring to a flag I should be setting somewhere, or do you just mean that I should make sure that no CUDA process is running asynchronously?

Hi,

To avoid concurrent access, you will need to make sure all the GPU tasks on a buffer are done before accessing that buffer with the CPU.
An example can be found below:

https://docs.nvidia.com/cuda/archive/11.4.0/cuda-for-tegra-appnote/index.html#pinned-memory
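
To show how that applies to the pinned-memory approach from the appnote, here is a sketch assuming Numba (the appnote's own examples are in CUDA C): a page-locked, device-mapped buffer is visible to both CPU and GPU on Tegra, so the only thing preventing the accesses from colliding is the synchronize call.

```python
import numpy as np
from numba import cuda

@cuda.jit
def increment(buf):
    i = cuda.grid(1)
    if i < buf.size:
        buf[i] += 1.0

# Page-locked host memory mapped into the GPU address space;
# on Tegra the CPU and GPU share the same physical memory.
buf = cuda.mapped_array(1024, dtype=np.float32)
buf[:] = 0.0

threads = 128
blocks = (buf.size + threads - 1) // threads
increment[blocks, threads](buf)

# Reading `buf` on the CPU before the kernel finishes is a
# concurrent access and can corrupt data or crash; wait first.
cuda.synchronize()

print(buf[:4])  # safe to read now
```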

Thanks.