Define “better”. My programming background is on the C/C++ path, so CUDA itself is the perfect fit for me. The work I typically do also lends itself readily to languages in the C/C++ family.
Your programming background, and your current environment, may be completely different, and you would want to make a choice based on both. For example, in some domains many people use Java, in others they use Python, and in yet others predominantly Matlab. If you work in those domains, it seems useful to approach CUDA from within the context of those programming environments.
Most people pick up parallel programming to speed up specific processing tasks in their domain, and GPUs are a cost-effective way to do that. That might involve relatively little actual CUDA programming, and mostly use of the many performance libraries that are part of the CUDA ecosystem.
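As one illustration of the library-first approach (a sketch, assuming the Thrust library that ships with the CUDA Toolkit), sorting a large array on the GPU requires no hand-written kernels at all:

```cuda
#include <thrust/device_vector.h>
#include <thrust/sequence.h>
#include <thrust/sort.h>
#include <cstdio>

int main() {
    // Fill a device vector with ~1M values in descending order,
    // then sort it on the GPU; Thrust launches the kernels for you.
    thrust::device_vector<int> d(1 << 20);
    thrust::sequence(d.rbegin(), d.rend());  // descending: N-1 ... 0
    thrust::sort(d.begin(), d.end());
    printf("first=%d last=%d\n", (int)d[0], (int)d[d.size() - 1]);
    return 0;
}
```

The point is that the sort runs entirely on the device, yet the code reads like ordinary C++ with STL-style iterators.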
If you just want to get exposure to the nitty-gritty of parallel programming, straight CUDA is probably going to work as well as any other approach. You get to see the nuts and bolts up close, so to speak. That is useful background knowledge no matter how you branch out later.
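By contrast, the nuts and bolts look like this. A minimal sketch of a straight CUDA vector add, showing the thread/block indexing you have to manage yourself (unified memory is used here just to keep the example short; explicit cudaMemcpy is equally common):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one output element from its grid coordinates.
__global__ void vadd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];  // guard against overshooting n
}

int main() {
    const int n = 1 << 10;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Managed allocations are visible to both host and device.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = i; b[i] = 2.0f * i; }
    // Launch enough 256-thread blocks to cover n elements.
    vadd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();
    printf("c[10] = %f\n", c[10]);  // expect 30.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Everything a performance library hides — grid sizing, bounds checks, host/device synchronization — is explicit here, which is exactly the background knowledge that transfers no matter how you branch out later.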