Backward Compatibility Issues with CUDA on Older Nvidia Architectures vs. OpenCL

Hello,

I looked through the forum and could not find the answer to my question, so I created this topic.

My question is about backward compatibility in CUDA compared to OpenCL. I am forced to use a very old Nvidia graphics card based on the Fermi architecture, which was supported up to CUDA 8. To use CUDA, I had to install older versions of Visual Studio, and I still received warnings in the code, even though it compiled and ran without any issues!
I tried the solutions suggested online, but the problem was not resolved.
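For reference, a Fermi card is targeted under CUDA 8 via the sm_20/sm_21 compute capabilities, roughly like this (the file name is just an example):

nvcc -arch=sm_20 -o my_app my_app.cu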
On the other hand, for using OpenCL, I downloaded the latest version of the OpenCL SDK from the official repository on GitHub and used it with Visual Studio 2022 without any issues. I just needed to use

#define CL_TARGET_OPENCL_VERSION 110

to use OpenCL 1.1!
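For completeness, the whole trick looks roughly like this (a minimal sketch; the define just has to appear before CL/cl.h is included, so that the headers expose only the 1.1 API):

#define CL_TARGET_OPENCL_VERSION 110  /* request only the OpenCL 1.1 API */
#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_uint num_platforms = 0;
    /* ask the ICD loader how many OpenCL platforms are installed */
    cl_int err = clGetPlatformIDs(0, NULL, &num_platforms);
    if (err != CL_SUCCESS) {
        fprintf(stderr, "clGetPlatformIDs failed: %d\n", (int)err);
        return 1;
    }
    printf("found %u OpenCL platform(s)\n", (unsigned)num_platforms);
    return 0;
}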

Why doesn’t CUDA offer this kind of capability the way OpenCL does?
Why does CUDA drop support for its older architectures so readily?
Keeping support for older architectures alongside new ones doesn’t seem like a bad idea!
There would be no need to integrate the old hardware architectures with the new ones!
There would be no need to add any special features to the old hardware architectures!
All it would take is for CUDA Toolkit developers to fix reported bugs and release patches for the old CUDA versions.

This answer represents my personal opinion based on knowledge of general industry practices.

The entire CUDA software stack and its associated massive ecosystem are still evolving at a substantial pace, while NVIDIA’s OpenCL support (which never had an ecosystem to speak of) has been largely frozen for more than ten years. My guess as to why NVIDIA has not discarded it outright is that it lets them check someone’s requirements box for a non-proprietary software stack. But even Apple, OpenCL’s inventor, has deprecated it, steering developers toward Metal instead.

Support for older architectures is dropped because maintaining support for too many CUDA versions and hardware generations creates a substantial maintenance burden, which is quantifiable as financial cost (one example: cost of a hardware farm for around-the-clock regression testing). In addition, as the software architecture changes to support new features, it often becomes (much) harder to accommodate support for old or even discontinued features of older architectures.

As the pace of GPU and CUDA development has slowed a bit, NVIDIA has lengthened the life cycle of GPU architectures. It used to be the case that a GPU older than five years was hopelessly outdated, and there was little point in supporting it because most people discarded the old hardware. Now people hold on to GPUs for longer, and the oldest GPU architecture currently supported by CUDA is Maxwell, which was introduced ten years ago. Can you still get software updates for an iPhone or Android device from ten years ago?

If you want to use old hardware, simply use it with old software that was contemporary with that hardware. That old software did not just become defective overnight when support ended. I have a machine here that I use almost daily (I am typing on it right now) that I bought in 2012. It runs Windows 7 Pro, MSVS 2010, Intel compilers from 2013, CUDA 9.2, and has a Kepler GPU. Works just fine. Every time I compile with nvcc I get a warning:

nvcc warning : nvcc support for Microsoft Visual Studio 2010 and earlier has been deprecated and is no longer being maintained

The message just tells me that if I run into problems, NVIDIA won’t help me.

Unlike OpenCL, the CUDA toolchain tightly integrates with the host toolchain. That was an early design decision for CUDA. It allows one to write __host__ __device__ code, but it also precludes a mix-and-match approach to pairing CUDA toolchain versions with host toolchain versions. Instead, each CUDA toolchain version offers interoperability only with specifically designated host toolchains. This is a tradeoff. You could claim it’s the wrong tradeoff, but it seemed like the right decision in 2006, when CUDA was new and in need of rapid adoption.
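Here is a minimal sketch of what that integration buys you (the names are made up for illustration): a single function that nvcc compiles for both the CPU and the GPU, which only works because nvcc drives the host compiler itself.

#include <cstdio>

// One definition, compiled for both the host and the device by nvcc.
__host__ __device__ float clamp01(float x)
{
    return x < 0.0f ? 0.0f : (x > 1.0f ? 1.0f : x);
}

__global__ void clamp_kernel(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = clamp01(data[i]);   // device-side call
}

int main(void)
{
    float h = 1.5f, *d = NULL;
    cudaMalloc(&d, sizeof(float));
    cudaMemcpy(d, &h, sizeof(float), cudaMemcpyHostToDevice);
    clamp_kernel<<<1, 1>>>(d, 1);
    cudaMemcpy(&h, d, sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d);
    printf("device result: %f, host-side call: %f\n", h, clamp01(0.25f));
    return 0;
}

The same source file contains host code for MSVC and device code for the GPU, which is exactly why each nvcc release has to know which host compiler versions it can drive.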
