How to package a desktop app with CUDA and cuDNN dependencies?

I am developing a desktop application that utilises TensorFlow. The aim of the application is to let users easily train a given model and use it for inference within the app. I want to support training and inference on the GPU, if one is available on the end-user’s machine.
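For context, the runtime check I have in mind is nothing more elaborate than the following sketch (the function name is mine, purely for illustration). If the NVIDIA stack is missing or broken, TensorFlow just reports no GPUs and I would fall back to the CPU:

```python
import tensorflow as tf


def gpu_available() -> bool:
    """Return True if TensorFlow can see at least one usable GPU."""
    return len(tf.config.list_physical_devices("GPU")) > 0


if __name__ == "__main__":
    if gpu_available():
        print("Training will run on the GPU.")
    else:
        print("No GPU found (or CUDA/cuDNN not set up); falling back to CPU.")
```
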

The primary issue appears to be setting up the NVIDIA dependencies: the driver, the CUDA Toolkit and cuDNN. These are required on the end-user’s machine for GPU support.

Ultimately, I don’t want the end-user to have to faff about installing dependencies just to get my application working.


For brevity, let’s assume the application I am working on is a simple CLI written in Python that exposes a few commands to train models using TensorFlow.
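Something along these lines, as a minimal sketch; the command, flag and file names are illustrative, not my actual code:

```python
import argparse

import tensorflow as tf


def train(epochs: int) -> None:
    # Toy stand-in for the real training routine: a tiny model on random data.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    x = tf.random.normal((32, 4))
    y = tf.random.normal((32, 1))
    model.fit(x, y, epochs=epochs)
    model.save("model.keras")


def main() -> None:
    parser = argparse.ArgumentParser(prog="myapp")
    sub = parser.add_subparsers(dest="command", required=True)
    train_cmd = sub.add_parser("train", help="train the model")
    train_cmd.add_argument("--epochs", type=int, default=5)
    args = parser.parse_args()

    if args.command == "train":
        train(args.epochs)


if __name__ == "__main__":
    main()
```

The end-user would run `myapp train --epochs 10` and, ideally, never need to know whether the work happened on the GPU or the CPU.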

I am developing on Linux. The main end-user demographic is on Windows, but ideally I would be able to support the application cross-platform.