OpenCL 2.x support plans?

OpenCL 2.0 was published 3 years ago, but drivers for nVIDIA GPUs still only support OpenCL 1.2, despite nVIDIA being a member of the Khronos Group and being formally committed (somewhat?) to OpenCL support in general.

Has nVIDIA disclosed plans for supporting OpenCL 2.x in the drivers some time in the near future? If not, is that official policy or just a matter of prioritization?

(I don’t want to argue about this - I’m just asking the informative question.)


I agree, it’s ridiculous.

I would not hold my breath for an answer, especially not an official one posted here - and especially not about something that is obviously low priority and easily gets deprioritized whenever CUDA needs to colonize another #deeplearning field.

I’m going to guess that OpenCL 2.x will be implicitly supported by NVIDIA as Vulkan matures.

Vulkan and OpenCL both rely on SPIR-V but it’s still unclear if and when the Vulkan Compute and OpenCL runtimes will converge enough that an SPIR-V bytecode OpenCL kernel will be executable on a Vulkan stack.

I’m really not sure whether true convergence is actually the Khronos folks’ goal, but it sure seems like it is when you read their presentations.

Otherwise, I agree with @pszilard that you shouldn’t hold your breath. CUDA is the de facto GPU compute API that gives you access to the world’s fastest GPUs and most optimized libraries.

If there are any OpenCL/Vulkan/SPIR-V/Khronos gurus out there I’d definitely like to understand the vision behind these specifications.


They have not.

AFAIK there is no “official policy”. (Not sure what that means, precisely, but if in doubt I refer you to my statement above – nothing is published about future software plans.) Current drivers generally claim/advertise support for OpenCL 1.2 (on cc3.0 and higher).
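
For reference, a minimal sketch (no error checking; link with -lOpenCL) that prints the version strings a driver actually reports might look like this:

```c
/* Minimal sketch: print the OpenCL version strings each platform/device
 * in the system reports. Error checking omitted for brevity. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);
    if (num_platforms > 8) num_platforms = 8;

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char buf[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_VERSION, sizeof(buf), buf, NULL);
        printf("Platform %u: %s\n", p, buf);

        cl_device_id devices[8];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &num_devices);
        if (num_devices > 8) num_devices = 8;

        for (cl_uint d = 0; d < num_devices; ++d) {
            /* The device version is what the driver "advertises". */
            clGetDeviceInfo(devices[d], CL_DEVICE_VERSION, sizeof(buf), buf, NULL);
            printf("  Device %u: %s", d, buf);
            clGetDeviceInfo(devices[d], CL_DEVICE_OPENCL_C_VERSION, sizeof(buf), buf, NULL);
            printf(" (%s)\n", buf);
        }
    }
    return 0;
}
```

On current NVIDIA drivers the device version string typically reads along the lines of “OpenCL 1.2 CUDA”.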

There was a considerable period of time where OpenCL 1.2 (and later 2.x) were published, and NVIDIA drivers only supported OpenCL 1.1. There were business priorities that resulted in the move to OpenCL 1.2 support. NVIDIA makes no statements about “not supporting” OpenCL, nor any forward-looking statements about what will be supported in the future. We support OpenCL, and currently our support is at the OpenCL 1.2 level.

NVIDIA was the first to have a workable system based on OpenCL 1.0, and I believe we may have been the first to offer OpenCL 1.1 support. Eventually OpenCL 1.2 was supported. These milestones have all been driven by business priorities.

If you have a developer relationship with NVIDIA, I suggest you open a private channel to discuss any concerns. This forum is not the place where NVIDIA will discuss unpublished future software development plans, roadmaps, or intent.

txbob:

  • What constitutes a "developer relationship" with nVIDIA? I'm in a research group which has bought nVIDIA GPUs occasionally over the past 5 years.
  • I remember the period of time when OpenCL 1.1 was supported, but not OpenCL 1.2. That was kind of weird too, I must say. Or rather, more like suspicious - it looked like support was "artificially stuck" at 1.1.
  • Is it not the case that nVIDIA could very easily support OpenCL 2.x? I mean, is there any technical challenge preventing such support, or is it just, as you describe, a lack of priority for getting this done?
  • Doesn't nVIDIA's membership in Khronos constitute a business priority? Or what about the potential of better integration with systems using other hardware (not necessarily GPUs even)? Or drawing new customers who have OpenCL experience? etc.
  • A simplified model of “business priorities”: Over a relevant time frame, can profit be increased by providing feature X, after accounting for the actual costs of providing X and the opportunity costs of providing X? Opportunity cost refers to the fact that with finite resources, doing X prevents you from doing Y. From what I can tell based on 25 years of experience, people outside of industry consistently tend to underestimate costs, and to overestimate market potential based on personal needs and desires. As someone once put it to me: “Norbert, you are not a market!”

    I am not a business analyst (nor do I play one on TV) but if you take a look at the compute market overall, currently there seems to be a multi-billion dollar market opportunity in deep learning, while the amount of additional business that could be driven by OpenCL is tiny to non-existent. Given that NVIDIA is a for-profit business, which technology are they likely to pursue? A contributing factor to NVIDIA particularly vigorously pursuing the highest-value market at this time may be that Intel’s royalty payments to NVIDIA under an existing licensing agreement (http://investor.nvidia.com/secfiling.cfm?filingid=1193125-11-5134&cik=1045810) are going away soon.

    A quick look at some of NVIDIA’s competition serves as a stark reminder of what happens when a company doesn’t keep profit generation firmly in its focus.

    I’m referring here to a connection to someone at NVIDIA that is not purely based on these forums. This sort of relationship could come about in a variety of ways: a strategic partnership, participation in a technical committee, association with a CUDA Center of Excellence, being a contributor to a “strategic code”, participation in a GPU Technology Conference (as a session participant, presenter, etc.), or a great many other possibilities. I mention it because what NVIDIA will say under NDA in a private developer discussion exceeds what it will say within the confines of a public forum such as this. This is not exclusive to OpenCL, or even to NVIDIA: there are significant legal ramifications for any US corporation when it makes statements about future support of software products/features.

    I think I’ve answered this already. It’s a matter of business priorities. We are a resource constrained company, as are most companies. We continually strive to invest in areas where the benefits are maximized. This is not specific to OpenCL. There are a great many requested features for CUDA (you will find many such requests here on these forums) which we don’t have the resources to address at this time.

    The process of setting priorities does not usually hinge on a single consideration. It involves multiple considerations. I won’t be able to discuss the individual merits of each of the factors that may be important here.

    As a matter of respect, I frequently try and answer well posed questions on this forum. However I’m not able to address all questions. Specifically, I’ll be unable to engage in extended dialog on this topic, or address detailed questions, many/most of which have forward-looking ramifications. I will be unable to address forward-looking questions, asking for information about future plans, or for speculation about future plans or intent. To be clear, I’m not saying that “nobody at NVIDIA has permission to discuss any future plans in any public setting.” I am saying I do not have such permission. That’s not my role here. And from what I know of NVIDIA operations, these specific public forums are not a vehicle we generally use to communicate or discuss future plans.

    txbob: Fair enough, thanks for your answer. By the way, there’s no way of telling, just by looking at your avatar and username, that you are anything other than just another forum user. Even your profile page only indicates that you’re very active - it doesn’t even hint at any official capacity.

    I work for NVIDIA. Apart from this thread, I don’t usually comment on such matters. I’m not really here in any official capacity except to answer questions.

    I understand NVIDIA’s wish to prioritize CUDA over OpenCL. However, please, at least make OpenCL 1.1 work! My recent experience with the CUDA 8 toolkit and an old GTX 480 was very frustrating. As far as I can tell, clEnqueueNDRangeKernel only accepts a local work size of 1x1x1. I know that sm_2x is “deprecated”, but only accepting a local work size of 1x1x1 is ridiculous.
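
    A minimal, self-contained sketch (dummy placeholder kernel, no error handling) for checking whether that 1x1x1 limit comes from the device or from kernel compilation might look like this:

```c
/* Sketch: query the work-group limits the OpenCL runtime reports for a
 * device and for a trivial kernel. Error checking omitted; link -lOpenCL.
 * The "dummy" kernel below is just a placeholder for illustration. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void dummy(__global float *out) { out[get_global_id(0)] = 0.0f; }";

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kern = clCreateKernel(prog, "dummy", NULL);

    size_t dev_max = 0, kern_max = 0, item_max[3] = {0, 0, 0};
    clGetDeviceInfo(device, CL_DEVICE_MAX_WORK_GROUP_SIZE,
                    sizeof(dev_max), &dev_max, NULL);
    clGetDeviceInfo(device, CL_DEVICE_MAX_WORK_ITEM_SIZES,
                    sizeof(item_max), item_max, NULL);
    clGetKernelWorkGroupInfo(kern, device, CL_KERNEL_WORK_GROUP_SIZE,
                             sizeof(kern_max), &kern_max, NULL);

    printf("device max work-group size : %zu\n", dev_max);
    printf("device max work-item sizes : %zu x %zu x %zu\n",
           item_max[0], item_max[1], item_max[2]);
    printf("kernel max work-group size : %zu\n", kern_max);
    return 0;
}
```

    If the kernel value comes back as 1 while the device values look sane, the limit is being imposed by the compiler/runtime rather than by the launch parameters.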

    For the curious, I work in academia and my goal was to illustrate the similarities of the CUDA driver API with the OpenCL API.

    P.S. This was posted here because it appears that there is no forum topic addressing OpenCL issues.

    TOSilva: With due respect - you should open your own thread about your issue with OpenCL 1.1 support on Fermi cards; this thread is about something else. Still, you make a valid point about how important - imperative, even - it is for nVIDIA to “play nice” with respect to OpenCL.

    I wouldn’t be surprised if Nvidia is holding back on support for OpenCL 2.0, since 2.0 only supports SPIR and not the newer SPIR-V, which is what Vulkan requires. However, with Nvidia already supporting a SPIR-V-to-PTX compiler for Vulkan purposes, it would be a little ridiculous if Nvidia project managers didn’t see tremendous value in supporting 2.1. The majority of the work seems done, so how hard could it be?

    It’s difficult, if not impossible, to get decent GPGPU performance with OpenCL 1.2 due to the lack of sub-group/warp shuffle operations, and that performance hit can be detrimental to many OpenCL projects, including our university’s research. Fundamental operations like scan and sort rely on those OpenCL 2.x features for optimal performance.
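
    To make the gap concrete, here is a hedged sketch (OpenCL C with the cl_khr_subgroups extension, available only on 2.x-class runtimes that expose it) of what a per-sub-group scan looks like when the built-ins exist; on OpenCL 1.2 the same operation has to be built out of local memory and barriers:

```c
// OpenCL C sketch: per-sub-group inclusive scan using cl_khr_subgroups.
// On OpenCL 1.2 (no sub-group built-ins, no shuffles) this becomes a
// multi-step local-memory scan with work-group barriers.
#pragma OPENCL EXTENSION cl_khr_subgroups : enable

__kernel void subgroup_scan(__global const int *in, __global int *out)
{
    const size_t gid = get_global_id(0);

    // One built-in replaces the whole shared-memory scan loop, and it maps
    // naturally onto warp shuffle instructions on the hardware.
    out[gid] = sub_group_scan_inclusive_add(in[gid]);
}
```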

    Either I’m overlooking something, or Nvidia sees more advantage in keeping certain features “CUDA only”. And I’d rather exclude Nvidia GPUs as a target platform than wait another year for support… The silence from Nvidia is very frustrating, and I’m sure I’m not alone in feeling this way.

    Correct me if I am wrong, but OpenCL and CUDA have more in common than not. First, I like being able to keep my favorite compiler for processing the OpenCL host code. Second, NVidia has to spend money maintaining the CUDA compiler, so the cheaper route would evidently be to rely on compilers from other vendors. Why, then, underinvest (in man-hours) in the open standard?

    By what amount would NVIDIA’s revenue from GPU sales grow if OpenCL 2.x support were added? What would be the costs, one-time and ongoing, for that support? What would be the net contribution to NVIDIA’s annual profits? Serious and well-reasoned estimates only, please. Assume that the fully loaded cost of a software engineer in Silicon Valley is currently approaching $250K p.a. on average.

    This thread has degenerated into a discussion thread. Oh well…

    Sadly, I have to agree with what njuffa is insinuating. nVIDIA seems to see a commercial benefit in promoting CUDA for use with its GPUs while not supporting OpenCL well enough, let alone bringing CUDA and OpenCL closer together so that eventually there would be no need for two separate ecosystems. It’s true that maintaining the separate ecosystem has some overhead, but CUDA is already there, and a lot of software, internal and external, is in place for it - so a transition, or a bolstering of OpenCL support, would cost a lot of time and money with no clear short- or mid-term benefit. Moreover, the separate ecosystem prevents AMD GPU users from benefiting from a lot of the work done in the CUDA world; for nVIDIA to fully support OpenCL would level the playing field quite a lot, making it much easier and more tempting for people to say “Hey, why don’t we just swap one card for another and see how performance is affected? There’s not that much coding effort necessary.”

    On the other hand, if this ruthless, cut-throat approach were the only consideration, nVIDIA could have avoided supporting OpenCL completely, which is not the case; or it could have kept support at 1.1 rather than 1.2. So apparently there is a more complex balance of motivations and interests at play here. Unfortunately, it is not made public, even to the extent of letting us know when we can expect OpenCL 2.0 support.

    A successful business adjusts its product offerings to where the customer demand (and thus, profit) is; that has nothing to do with being “ruthless”. Businesses that do not nimbly adjust to market conditions atrophy and die; at best, their remnants are acquired by the competition on the cheap.

    The fact that NVIDIA offers support for OpenCL 1.2 but not OpenCL 2.x is therefore an indication that market forces compel NVIDIA to support the former but not the latter, for the time being. From where I sit, reading the trade press, I do not see a compelling market in OpenCL 2.x support, but I could easily be wrong (I was surprised by the recent AI boom, for example), thus my questions.

    How quickly NVIDIA reacts to changes in the market can be seen most easily from how rapidly it entered the deep learning market, with support for all the major platforms in that field. This is a multi-billion-dollar business opportunity, of which NVIDIA has so far captured only a fraction; nonetheless, it has already enabled record profits.

    In an industrial setting, technology does not exist for technology’s sake; it is a means to create products that people want to buy. CUDA is a technology that enhances GPU products, and it provides the quickest possible turnaround on customers’ requests for a better product. From a developer perspective, I still recall what a pain it was to add a tiny extension (to expose existing hardware functionality) to an OpenGL-ES driver, versus the ease of adding new functionality to CUDA.

    From the technical side, nothing has changed in the past 6 months. AllanMac’s post earlier in this thread may describe the only workaround, as awkward as it is: compile OpenCL 2.x kernels to SPIR-V and run them on NVidia GPUs via Vulkan compute.

    There is a good blog post from one of the SPIR-V architects with a trivial but working example of this workflow.

    The latest Windows driver, 378.66, mentions OpenCL 2.0 support (for evaluation purposes) in its release notes.
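
    If you have that driver installed, a quick probe (this sketch assumes OpenCL 2.0 headers are available; without them it only prints the version string) shows which 2.0 features, e.g. shared virtual memory, the device actually advertises:

```c
/* Sketch: probe a device's reported OpenCL version and, if the headers
 * define the 2.0 symbols, its SVM capabilities. No error checking. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    char version[256] = "";

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    clGetDeviceInfo(device, CL_DEVICE_VERSION, sizeof(version), version, NULL);
    printf("CL_DEVICE_VERSION: %s\n", version);

#ifdef CL_VERSION_2_0
    /* On a 1.2-only driver this query simply fails and svm stays 0. */
    cl_device_svm_capabilities svm = 0;
    clGetDeviceInfo(device, CL_DEVICE_SVM_CAPABILITIES, sizeof(svm), &svm, NULL);
    printf("coarse-grain SVM buffers: %s\n",
           (svm & CL_DEVICE_SVM_COARSE_GRAIN_BUFFER) ? "yes" : "no");
    printf("fine-grain SVM buffers  : %s\n",
           (svm & CL_DEVICE_SVM_FINE_GRAIN_BUFFER) ? "yes" : "no");
#endif
    return 0;
}
```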

    So, what is the current status of OpenCL 2.0 support?

    I wonder why no one takes into account that OpenCL 2 support may have negative value for NVIDIA. Right now I choose between the feature-rich, NVIDIA-only CUDA 8, the average-feature, AMD-only OpenCL 2, and the feature-poor OpenCL 1 - so I pick CUDA whenever I need anything more or less complex. With NVIDIA supporting OpenCL 2, I would pick OpenCL in more cases, which would result in more applications being compatible with AMD cards.