Combining Tesla K80s and RTX 4090s to train models and run RAG

Description

Is it possible to combine the VRAM of 6 x Tesla K80s and 2 x RTX 4090s and use this setup for training a model?
I see that Chat RTX requires an RTX-class GPU (with ray-tracing hardware) to work.
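As background on what "combined VRAM" means in practice: with standard data-parallel training, the full model is replicated on every GPU, so the memory of the smallest device sets the limit for a model replica; VRAM across GPUs does not pool into one address space. A minimal sketch of the arithmetic, assuming the published specs (each K80 board exposes two 12 GB GPUs to CUDA; each RTX 4090 has 24 GB):

```python
# Hypothetical sketch: why heterogeneous VRAM does not simply add up
# for data-parallel training.
k80_boards, gpus_per_k80, k80_vram_gb = 6, 2, 12  # one K80 = two 12 GB GPUs
rtx4090_count, rtx4090_vram_gb = 2, 24

# Per-device memory as CUDA would see it.
devices = [k80_vram_gb] * (k80_boards * gpus_per_k80) \
        + [rtx4090_vram_gb] * rtx4090_count

total_vram = sum(devices)          # what "combined VRAM" naively suggests
usable_per_replica = min(devices)  # data parallelism copies the model to
                                   # every GPU, so the smallest device
                                   # bounds the model size

print(f"{len(devices)} GPUs, {total_vram} GB total, "
      f"but each model replica must fit in {usable_per_replica} GB")
```

Model- or pipeline-parallel frameworks can shard a model across devices, but mixing Kepler-era K80s (no longer supported by recent CUDA toolkits) with Ada-generation 4090s in one job is generally impractical.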

Environment

Ubuntu or Windows, using Chat RTX or the CUDA Toolkit

I would appreciate some insight.

Thanks!

TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered