Accelerating Machine Learning on a Linux Laptop with an External GPU

Originally published at: https://developer.nvidia.com/blog/accelerating-machine-learning-on-a-linux-laptop-with-an-external-gpu/

With the introduction of Intel Thunderbolt 3 in laptops, you can now attach an external GPU (eGPU) enclosure and use a dedicated GPU for gaming, production, and data science. A Thunderbolt 3 eGPU setup consists of a discrete GPU, an enclosure to house it in, a power supply, and a Thunderbolt 3 connection to the laptop. Most enclosures provide…

Hi, this is Dhruv. Hope you enjoyed reading my blog. Setting up my TB3 eGPU came on the heels of using eGPUs throughout my time at university and wanting a compact system when I joined NVIDIA during WFH. At university I used an ExpressCard-based eGPU setup with my T430 for my AI and Parallel Programming courses. When I got a TB3 machine, the theoretical performance increase from the improved bandwidth led me to get an eGPU setup instead of a workstation. Since then, I’ve been very pleased with the performance and portability of the solution. It’s hard to beat being able to work on the couch and then plug into the eGPU on the desk for some compute.
I hope you have a great experience if you decide to use an eGPU for your work. If you have any questions or comments, let me know, and we can try to resolve them :)

Hi Dhruv:

Our company (https://kfocus.org) is working with organizations like JPL and other big data users that have an interest in eGPUs. We really appreciate your post and are very interested in providing solutions for them. However, it is not clear how to use the iGPU, dGPU, and eGPU concurrently - for example, use the dGPU for display and the eGPU for Blender rendering. Might you have recommendations on resources for that?

Any help would be greatly appreciated!

Sincerely, Mike

Hi Dhruv: Is this something you can help with? Cheers, Mike

Hi @deppman
While there isn’t a turnkey solution to the iGPU + dGPU + eGPU problem, there are a couple of ways to go about creating one.
You could set the iGPU to be the X screen renderer on battery and the dGPU to be the X screen renderer on AC. There are tools within Ubuntu for this, like gpu-manager/prime-select, or you can use Offloading Graphics Display with RandR 1.4.
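For reference, on a stock Ubuntu install the prime-select switch looks roughly like this (a minimal sketch; a logout or reboot is usually needed for the change to take effect):

    # Check which GPU currently drives the X screen
    prime-select query

    # Render the X screen on the integrated GPU (e.g. on battery)
    sudo prime-select intel

    # Render the X screen on the NVIDIA GPU (e.g. on AC)
    sudo prime-select nvidia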

When it comes to using the eGPU, you can use it either as a PRIME Render Offload device or as a compute device, depending on the task. PRIME Render Offload is meant for applications that need their rendering done on a different GPU from the one driving the X screen, for example Blender (an older alternative was Bumblebee). Compute is meant for CUDA/accelerated data science tasks.
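As a rough sketch, offloading a single application such as Blender to the NVIDIA GPU uses the standard PRIME Render Offload environment variables (assuming a driver and X server recent enough to support render offload):

    # Run Blender on the NVIDIA GPU while the X screen stays on the iGPU
    __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia blender

    # Sanity check: the offloaded GL context should report NVIDIA as the vendor
    __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL vendor"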

For my machine, I’m using the eGPU as my primary X renderer, as I don’t have a dGPU. For a laptop with an iGPU + dGPU + eGPU, I’d imagine that wouldn’t be the case, but if it is, you can use “AllowExternalGpus” to make the eGPU the primary X renderer. Otherwise, you could PRIME-Render-Offload Blender to the eGPU/dGPU (depending on what is connected and what the power source is) and otherwise use the eGPU as a compute device when it is connected.
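If you do want the eGPU to drive the X screen, “AllowExternalGpus” goes in the Device section of the X configuration; a minimal sketch (the Identifier and file name here are arbitrary, and the exact path can differ between distributions):

    # e.g. /etc/X11/xorg.conf.d/10-egpu.conf
    Section "Device"
        Identifier "nvidia-egpu"
        Driver     "nvidia"
        Option     "AllowExternalGpus" "True"
    EndSection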

In case you have both the dGPU and eGPU connected and can’t use PRIME Render Offload, you’d have to rely on some other way of hiding the other GPU. One way I’ve been using is Docker with the --gpus flag or the NVIDIA_VISIBLE_DEVICES environment variable. Some other applications, like OBS, let you select the GPU if you happen to have multiple GPUs in your system.
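As an illustration of hiding GPUs from a workload (the CUDA image tag and the train.py script below are just placeholders):

    # List the GPUs in the system with their indices/UUIDs
    nvidia-smi -L

    # Expose only GPU 0 to a container
    docker run --rm --gpus device=0 nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

    # Hide everything except GPU 1 from a bare-metal CUDA application
    # (train.py is a placeholder for your own script)
    CUDA_VISIBLE_DEVICES=1 python train.py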


Thanks for the great tutorial. Is it possible to use two eGPUs as well?

Hi Dsinga:

Sorry for the delay, but I must have missed the notification. Thank you for all your help. We will follow up on your articles. For our customers’ purposes, using the eGPU as an add-on compute unit is the most common case, and here the CoreX is working great; we have also been able to run the dGPU + eGPU concurrently on two separate GPGPU workloads. I will report back when we resume testing. Thanks again!

Sincerely, Mike