When (if ever) will you support OpenCL on the Nano?
OpenCL is not supported on Jetson Nano.
Is NVIDIA planning to add OpenCL support for the Nano? If so, any estimate of when we’ll see it?
Hi samqzucg, there aren’t plans to add it; Tegra platforms have never supported OpenCL.
A bunch of folks (plus me) are using the Nano for distributed computing like SETI@home. While the Nano isn’t a heavy number-cruncher, the bang-for-the-watt ratio is really good. Taking advantage of the GPU would be nice. I’m trying to compile some SETI@home CUDA code on the Nano.
Do you have any idea if anyone has successfully ported POCL to the Nano?
I hadn’t heard of POCL before, but I see that it has a CUDA backend. If it builds with CUDA 10, then perhaps it could work - let us know if you have any luck using it!
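For anyone who wants to try, a typical POCL configure for its CUDA backend looks roughly like this. This is just a sketch: it assumes POCL’s documented `ENABLE_CUDA` CMake option, a CUDA toolkit under `/usr/local/cuda`, and a compatible LLVM already installed; exact versions and package names on the Nano are not verified here.

```shell
# Hypothetical build sketch for POCL with its CUDA backend on a Jetson.
# Assumes CUDA under /usr/local/cuda and a matching LLVM already installed.
git clone https://github.com/pocl/pocl.git
cd pocl && mkdir build && cd build
# ENABLE_CUDA turns on POCL's CUDA (NVPTX) backend
cmake .. -DENABLE_CUDA=ON -DCMAKE_BUILD_TYPE=Release
make -j"$(nproc)"
sudo make install
```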
Did this pan out? For my use cases, using CUDA directly is a non-starter because it only works on NVIDIA GPUs, and as much as NVIDIA (and Intel, and AMD, and…) would like to believe they are the only important target, the reality is that most real applications (aside from the small subset of one-off applications targeting a single cluster or a single embedded device) need to be portable and fully vendor-agnostic.
Targeting any modern GPU through Vulkan + SPIR-V, or those plus some FPGA+CPU SoCs via OpenCL, is the only justifiable path for many developers. Those of us in the mandated-portability boat can look at as many shiny vendor-specific features as you want to show us, but we may only touch those exposed through vendor-agnostic mechanisms. (As with hard blocks on FPGAs, a quality vendor-supplied compiler can take standard input such as SPIR-V and decide internally when to leverage vendor-specific features, much as the CUDA compiler already does to let existing programs benefit at least somewhat from new microarchitecture versions.)
If anything prevents the uptake of GPGPU computing outside the walled garden of the HPC niche, it will be the Babel of vendor-specific languages and vertical toolchains. Vendor lock-in may keep users in the short term, but it tends to drive them off in the long run (remember MicroChannel? Glide wrappers? etc.).
I haven’t gotten past an error building POCL. It’s been a while, so I don’t remember the exact message, but it had to do with pthreads. I tried to track down the source of the error but didn’t succeed. One of these days, when I have spare time, I’ll try again.
Bless you persistent few. I wish you the best. Unfortunately I don’t see Nvidia supporting OpenCL anytime soon. CUDA does lock developers in, but it’s also fast and kind of an industry standard. I would love to see Nvidia headed by somebody like Satya, but that probably won’t happen either. What they are doing right now is probably just too profitable to justify such a change to investors. Maybe it does breed resentment, and it might build up, but their support is also fantastic. Every time I feel like Linus and have a frustration, I come here, ask a question, and I get a response quickly from the developers.
I’m making another attempt to get the POCL CUDA backend working on Jetsons this week. They posted a release candidate a few days ago, and I have a rather strong use case for OpenCL in my R work. See
Not yet tried myself, but some users are experimenting with this. See this topic.
Success!! I have POCL 1.6 running with CUDA on both my 4 GB Nano and my AGX Xavier. The Docker context is here; it’s in a branch now but will be merged to main once I update the documentation. You may want to comment out the R bits.
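If others want a quick sanity check after installing POCL, something like the following sketch should show whether the CUDA device is visible through OpenCL. `POCL_DEVICES` is POCL’s environment variable for restricting which device drivers it exposes, and `clinfo` has to be installed separately; output will vary by system.

```shell
# Quick check that the CUDA device is exposed via OpenCL (sketch, not verified
# on every JetPack release). POCL_DEVICES=cuda asks POCL for its CUDA driver only.
export POCL_DEVICES=cuda
clinfo | grep -i -E 'platform name|device name'
```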