Hello, let’s assume I have multiple AI/ML Python applications that use the GPU via CUDA, and I’d prefer to run them on a Jetson device. Is there an efficient way to isolate the applications from one another? Is there a lightweight virtualization approach I can follow?
Thank you!
Hi,
Jetson devices don’t support hardware GPU partitioning, so even if you isolate the applications (for example in separate containers), their CUDA kernels will still time-share the same GPU resources.
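For process-level isolation, containers are the usual lightweight option on Jetson: NVIDIA publishes L4T base images and a container runtime for JetPack. A minimal sketch, assuming Docker with the nvidia runtime is installed; the image tag, container name, and application path are placeholders you would replace with your own:

```shell
# Run one application per container; each gets its own filesystem and
# process namespace, but all of them time-share the single Jetson GPU.
# The tag (r36.2.0) is an assumption -- choose one matching your JetPack release.
docker run -d --rm \
  --runtime nvidia \
  --name app1 \
  -v /home/user/app1:/app \
  nvcr.io/nvidia/l4t-base:r36.2.0 \
  python3 /app/main.py
```

Repeating this per application gives you CPU/memory/filesystem isolation via cgroups and namespaces, which is far lighter than full VMs; just keep in mind it does not partition the GPU itself.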
Thanks.