Sure it does. There really isn’t any competition in this area, so they can do what they want, which includes recommending performant yet proprietary solutions. Nothing really compares to Tegra. Even Google does its own thing with Coral rather than bother with OpenCV.
Not really. Learning CUDA isn’t required. NVIDIA provides C, C++, and even Python wrappers and example code for their libraries, so you can do what you want. No, they don’t bother with OpenCV, but who would want them to?
OpenCV was designed by Intel in an era when GPUs weren’t even used for this kind of work, and 90% of it still runs on the CPU only. That’s fine for some things, and Intel certainly likes it since they can’t manage to make a GPU go faster than an actual potato, but for everybody else it’s better off dead. Not that NVIDIA doesn’t also make software products I’d rather see eliminated (looking at you, SDK Manager).
There is DeepStream, which uses the GStreamer framework, if you want to analyze video. It works with Python, C, and C++. It doesn’t require any knowledge of CUDA, and most existing GStreamer code can be ported easily to use NVIDIA’s accelerated elements instead of the stock ones. The same is not true for CPU-based OpenCV code. Hell, you can even write static pipelines in the shell with gst-launch.
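To make that concrete, here is a sketch of what that kind of port looks like with gst-launch: the same playback pipeline, first with stock software elements, then with NVIDIA’s accelerated ones swapped in. This is illustrative only; `video.mp4` is a placeholder, and the exact element names (`nvv4l2decoder`, `nvvidconv`) vary between JetPack releases.

```shell
# Stock CPU pipeline: demux an MP4, decode H.264 in software, display it
gst-launch-1.0 filesrc location=video.mp4 ! qtdemux ! h264parse ! \
    avdec_h264 ! videoconvert ! autovideosink

# Same pipeline on Jetson, with the hardware decoder and converter
# swapped in for the software elements (element names per JetPack release)
gst-launch-1.0 filesrc location=video.mp4 ! qtdemux ! h264parse ! \
    nvv4l2decoder ! nvvidconv ! autovideosink
```

The structure of the pipeline doesn’t change; you’re just substituting elements, which is why porting is so painless compared with rewriting OpenCV CPU code.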
For robotics, there is Isaac. For photography, there is Argus. There is MMAPI for high-performance multimedia applications. All of this is apt-installable, including the examples, which are well documented and commented… and if you run into trouble, you can ask a question here and get a prompt response from the actual developers.
That there is the real advantage. Precisely none of this requires any knowledge of CUDA itself, though you can certainly learn if you’d like. I don’t know CUDA and haven’t had a need to learn it (but I am planning on it, just because it’s cool). And if you build OpenCV with CUDA support, you can use the CUDA-enabled functions (the ones that aren’t broken) there as well, without learning CUDA itself.
This hobbyist is personally perplexed by the obsession with OpenCV. It’s slow, and only useful on a powerful CPU, which is not what you get on development boards designed for low-power mobile applications. Really. I don’t get it. If you have CPU cores to burn, go for it, but otherwise, graphics were meant to be processed on a GPU or a TPU, not a CPU. I want things to go fast, personally.
So are OpenCV’s GPU parts. Half the CUDA tests fail (if you run them), and the rest is mostly experimental or only works with OpenCL. I would kinda like support for OpenCL on Tegra, but then again, I’ve never used an OpenCL app that didn’t perform poorly compared to the CUDA version (e.g. Blender).
Sure it does. They’re providing performant, portable solutions and nobody else is, so they can do what they want, and I don’t blame them for wanting customers to use solutions that run faster on their hardware, because that’s what literally everybody does.