Machine Learning Acceleration in Vulkan with Cooperative Matrices

Originally published at: https://developer.nvidia.com/blog/machine-learning-acceleration-vulkan-cooperative-matrices/

Machine learning harnesses computing power to solve a variety of ‘hard’ problems that seemed impossible to program using traditional languages and techniques. Machine learning avoids the need for a programmer to explicitly program the steps in solving a complex pattern-matching problem such as understanding speech or recognizing objects within an image. NVIDIA aims to bring machine learning to…

This is great news. I especially applaud this line:

"if that developer desires to access state-of-the-art GPU rendering and compute functionality in a way that doesn’t lock them to a single platform, then that API is Vulkan!"

But then why call it "VK_NV_cooperative_matrix"? That is, why is the "NV" in there? Doesn't this disincentivize other vendors such as Intel or AMD from creating their own vendor extensions that expose the same functionality, but with vendor-neutral nomenclature? What am I missing here? Please dissuade me from my suspicion that this is actually an attempt at quasi vendor lock-in. Why not just call it VK_cooperative_matrix? Then everyone could implement it, because right now they surely won't do so with "NV" in the function names. We really don't want fragmentation.

The NV in the name only indicates that NVIDIA authored the extension without explicit input from other software or hardware vendors. Other hardware vendors can implement the extension, and/or ask for a KHR version.
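For concreteness, here is a rough sketch of what using the extension looks like on the shading-language side (the companion GLSL extension is GL_NV_cooperative_matrix). The tile sizes, buffer layout, and workgroup size below are illustrative assumptions, not taken from the blog post, and the snippet is a sketch rather than a tested shader:

```glsl
#version 450
#extension GL_NV_cooperative_matrix : enable
#extension GL_KHR_memory_scope_semantics : enable
#extension GL_EXT_shader_explicit_arithmetic_types_float16 : enable

// Assumption: one subgroup per workgroup, each owning one 16x16 tile.
layout(local_size_x = 32) in;

layout(set = 0, binding = 0) readonly buffer BufA { float16_t a[]; };
layout(set = 0, binding = 1) readonly buffer BufB { float16_t b[]; };
layout(set = 0, binding = 2) buffer BufC { float16_t c[]; };

void main() {
    // The matrix values are distributed across the invocations of the
    // subgroup; loads, stores, and the multiply-add are cooperative.
    fcoopmatNV<16, gl_ScopeSubgroup, 16, 16> matA, matB, matC;

    // Element offset 0, row stride 16, row-major (illustrative layout).
    coopMatLoadNV(matA, a, 0, 16, false);
    coopMatLoadNV(matB, b, 0, 16, false);
    matC = fcoopmatNV<16, gl_ScopeSubgroup, 16, 16>(0.0);

    // D = A * B + C, mapped to the tensor cores on supporting hardware.
    matC = coopMatMulAddNV(matA, matB, matC);

    coopMatStoreNV(matC, c, 0, 16, false);
}
```

If a KHR version were standardized, the expectation is that shaders like this would need little more than a rename of the types and built-ins; the cooperative-ownership model itself is not NVIDIA-specific.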

Very good news, and a great initiative to expose the tensor cores in Vulkan. How does performance compare against cuDNN or cuBLAS? Thanks.