Is anybody else reconsidering their choice of CUDA? The recent chip shortage and crypto mining have left my project in a strange dead zone: I'm wondering about code portability and scaling, and about using cheaper CPU hardware rather than expensive GPU hardware with its architecture lock-in.
Khronos recently published an update to the SYCL specification, and it's beginning to look enticing, especially given where the industry is going. I'm seeing supercomputers pop up with CPUs in them rather than GPUs, which suggests it's becoming more practical and cost-effective to scale that way.
There are some attractive features in SYCL that add to its allure!
I'd love the freedom to lift and shift my code to run on a CPU or a GPU (Nvidia or AMD). Even if that's not always practical from a performance perspective, it gives me more options on cost.
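For anyone wondering what that lift-and-shift looks like in practice, here's a minimal SYCL sketch (my own illustration, not from any particular project — it assumes a SYCL 2020 toolchain such as Intel's DPC++ or AdaptiveCpp). The point is that the kernel itself never mentions the hardware; only the device selector changes:

```cpp
// Minimal SYCL 2020 sketch: the same vector-add kernel runs on a CPU or a
// GPU depending only on which selector the queue is built with.
// Requires a SYCL toolchain (e.g. Intel DPC++ or AdaptiveCpp).
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    // Swap in sycl::cpu_selector_v or sycl::gpu_selector_v to retarget
    // without touching the kernel below.
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    constexpr size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);
    {
        sycl::buffer<float> ba{a}, bb{b}, bc{c};
        q.submit([&](sycl::handler& h) {
            sycl::accessor xa{ba, h, sycl::read_only};
            sycl::accessor xb{bb, h, sycl::read_only};
            sycl::accessor xc{bc, h, sycl::write_only};
            h.parallel_for(sycl::range<1>{n},
                           [=](sycl::id<1> i) { xc[i] = xa[i] + xb[i]; });
        });
    } // buffer destructors copy results back into the host vectors

    std::cout << "c[0] = " << c[0] << "\n";
}
```

Whether the CPU path is fast enough is a separate question, but at least the code compiles and runs wherever a SYCL backend exists.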
I've seen the new mining-specific chips being announced, without display outputs, and I think that's terrific. I'd love cut-down hardware like that to reduce cost, if only I could compete with the miners to get it, even though it's not targeted at machine-learning people like me.
I also worry about the new RTX 3060 hash-rate limiter, which sounds great for nerfing mining software, but at the same time I wonder how well it'll work and whether it's prone to mistakes. Will it nerf my custom AI code by accident?
It seems like there are gamers, there are miners, and then there's me in my awkward little niche that could do with some Nvidia TLC.
Are the shortages, the miners, and the inability to buy RTX 30xx cards at reasonable prices stunting anybody else's projects?