The GPU-accelerated deep learning containers are tuned, tested, and certified by NVIDIA to run on NVIDIA TITAN V, TITAN Xp, TITAN X (Pascal), NVIDIA Quadro GV100, GP100, and P6000, and on NVIDIA DGX systems.
Release 23.01 supports CUDA compute capability 6.0 and later. This corresponds to GPUs in the NVIDIA Pascal, NVIDIA Volta™, NVIDIA Turing™, NVIDIA Ampere architecture, and NVIDIA Hopper™ architecture families. For a list of GPUs to which this compute capability corresponds, see CUDA GPUs. For additional support details, see Deep Learning Frameworks Support Matrix.
The RTX 4090 and RTX 6000 Ada are based on NVIDIA's Ada Lovelace architecture, whereas the RTX 3090 and RTX A6000 are based on the previous-generation Ampere architecture. In addition, Ada Lovelace GPUs are fabricated on TSMC's 4N (4 nm-class) process, compared to Ampere's Samsung 8 nm process.
If the 4090 is Ada architecture, it doesn't appear to be covered by the release note above, since Ada isn't named among the supported architecture families. Yet I have heard in more than one place that NGC does support the 4090.
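One thing worth noting: the release note's actual criterion is "compute capability 6.0 and later", and the named architecture families are just examples. Ada Lovelace GPUs like the RTX 4090 report compute capability 8.9 (per NVIDIA's public CUDA GPUs list), which is well above the 6.0 floor. A minimal sketch of that check, with capability values hard-coded from NVIDIA's published list (the helper name and the architecture labels are my own, for illustration):

```python
# Compute capabilities taken from NVIDIA's public CUDA GPUs list.
# The dict keys and the helper below are illustrative, not an NVIDIA API.
COMPUTE_CAPABILITY = {
    "Pascal (TITAN Xp / Quadro P6000)": 6.1,
    "Volta (TITAN V / GV100)": 7.0,
    "Turing (RTX 2080)": 7.5,
    "Ampere (RTX 3090 / RTX A6000)": 8.6,
    "Ada Lovelace (RTX 4090 / RTX 6000 Ada)": 8.9,
    "Hopper (H100)": 9.0,
}

NGC_MINIMUM = 6.0  # release note: "CUDA compute capability 6.0 and later"

def meets_ngc_minimum(arch: str) -> bool:
    """Check an architecture's compute capability against the 6.0 floor."""
    return COMPUTE_CAPABILITY[arch] >= NGC_MINIMUM

for arch, cap in COMPUTE_CAPABILITY.items():
    print(f"{arch}: sm_{int(round(cap * 10))} -> meets minimum: {meets_ngc_minimum(arch)}")
```

By this reading, the 4090 clears the stated requirement even though "Ada" isn't listed by name; whether a given container image actually ships kernels for sm_89 is a separate question the support matrix should answer.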
I can already run model training with NGC containers on Paperspace using an Ampere A6000.
I don't want to buy a physical 4090 machine only to find out I can't get it to run my TF1 code.