Mr. Nuke, that’s a pretty silly viewpoint to have. There are right approaches for different problems–just because you haven’t encountered one where a more managed language is better than something like C doesn’t mean that they don’t exist.
I didn’t say managed languages are completely useless. The problem with ‘managed’ is that inexperienced programmers start using it in large numbers, and are unaware of the overhead involved with simplicity, thus creating highly inefficient applications. We then get tons of programs that are eating away a fecal-load of resources (just look at some of the programs bundled with Vista – WMP using 50MB of RAM to play an .mp3).
Guns don’t kill people, they just provide an easier way for people to kill each other.
Check your Inbox, I’ll PM you some evidence.
You seem to believe that the moment CUDA goes /MD, we will see millions of Java-heads with no idea how things work trying out CUDA ;)
Well, first, even if that happens, it’s their problem if their code runs slowly. And who knows, maybe a tiny fraction will get interested, read the Programming Guide, go through the online courses and start writing fast code? That’s what NVIDIA wants and that’s why they even came up with CUDA. CUDA is not meant to be elitist. They could’ve just thrown PTX at us and yet we have the seamless Runtime API - CUDA is about bringing GPGPU to the people.
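To be concrete about “seamless”: with the Runtime API a trivial kernel and its launch look roughly like this (just a sketch, made-up names, no error checking), instead of assembling and loading PTX modules by hand through the Driver API:

[code]
#include <cuda_runtime.h>

__global__ void add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

void add_on_gpu(const float *a, const float *b, float *c, int n)
{
    size_t bytes = n * sizeof(float);
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, b, bytes, cudaMemcpyHostToDevice);
    // No explicit context or module management - the Runtime API handles it.
    add<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(c, dc, bytes, cudaMemcpyDeviceToHost);
    cudaFree(da); cudaFree(db); cudaFree(dc);
}
[/code]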
By the way, we already have .NET wrappers, Python wrappers, even Java wrappers. Does a flood of newbies bother you right now? ;)
Second, don’t associate managed languages with bad programming. Managed languages are great at some things. For example, let’s say I wrote a (fast and optimized) CUDA compute layer and now want a user interface to display results, get input etc. I can either spend a week with GTK in C or an evening with Windows Forms or Java. I don’t want to spend a second more than necessary on the front-end because I’m focused on the compute layer, the algorithms, the problem etc. I don’t need to be arsed with memory management and an efficient implementation of the interface; it’s only there so that a person doesn’t have to enter parameters through the command line.
I am fully aware that a “pure” C/C++ front end will likely consume fewer resources, be more portable etc. But I’m not concerned about that. I can live with my GUI eating 50MB of RAM instead of 5 when the data eats a gigabyte, and with changing tabs in the menu taking 100ms instead of 10ms, when the whole point of this interface revolves around the big “Start computing” button that launches an hour of (optimized) computation.
And I can achieve that much easier with /MD.
Mr_Nuke,
Democracy always rocks, even if the majority are idiots.
You need to deal with mediocrity and the average mindset at some point… There is no escape.
Big Mac,
Have you seen the following topics:
[url=“http://forums.nvidia.com/index.php?showtopic=94899”]http://forums.nvidia.com/index.php?showtopic=94899[/url]
[url=“http://forums.nvidia.com/index.php?showtopic=94969”]http://forums.nvidia.com/index.php?showtopic=94969[/url]
I am seeing more and more like those. If you consider proper usage of CUDA to be elitist, then your point is dead on. Yes, it’s their code, and their problem. That’s what I would have said 10-15 years ago about managed languages, had I not been too young. Well, just look at how bloated Windows is today.
I am quite aware of the wrappers, and I hope they can act as a pseudo-programmer barrier (it’s OK to be a newbie as long as you do your work; my problem is with pseudos). You misquote me when you say I am bothered by a flood of newbies. I am not. It’s the lack of thought that bothers me. Some newbies really want to learn, and do their homework impressively well.
Why shouldn’t I associate managed languages with bad programming if that is the case 99% of the time? Look at Java; it forces you to take an object-oriented approach to inherently sequential problems (how do I get a temperature in Fahrenheit and convert it to Celsius). Second, every time I encountered Java (be it in a book or a class) I was forced to use a specific naming convention (different every time). Hey, most teachers/professors took points off for not using their clumsy naming scheme. In the C/C++ books that I read, the authors clearly stated, “we are going to use this convention, but you are free to use whatever the fcuk you want.” There’s a clear difference in mentality between the two paradigms. That only strengthens the point that 99% of the “managed” programmers are short-sighted. Most of them can only do things one way.
Regarding “pure” C++, you are making a mistake if you are looking at it purely as a language for low-level implementations. C++ will go as low or as high level as you want it to. It may take 5-10 more minutes to write a GUI using the non-managed C++ classes than to use the shiny “I’m a button, so drag me here” designer. If you’ve done it before in “pure” C++, then it’s just as fast to write. Don’t get me wrong, I understand and respect your preference for the more user-friendly approach. Still, I believe that the tiny extra effort to write “pure” code is definitely worth it if it keeps the gates of hell shut.
In trying to counter my hatred for managed languages, you have provided a very good argument for /MD. I will not try to counter it, but I still hold to the idea that we need to keep out those who come to leech, idiotize, and not contribute, yet I am open to the means.
Alex
I agree with the part about the inexperienced programmers…but I don’t think that’s just limited to those using managed languages. In fact, I’d like to think that managed languages give them a bit of an edge, in that it’s easier for an inexperienced programmer to find out where there is a mistake (like reading outside of array bounds), whereas in C/C++, it may just cause the program to segfault, or worse, just silently return an invalid result.
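To give a rough idea of the kind of silent failure I mean (a contrived sketch, nothing more):

[code]
#include <stdio.h>

int main(void)
{
    int data[4] = { 1, 2, 3, 4 };
    int sum = 0;
    /* Off-by-one bug: i <= 4 reads data[4], one element past the end.
       A managed runtime would throw an out-of-bounds exception here;
       in C/C++ this is undefined behavior and will often just read
       whatever happens to sit next to the array. */
    for (int i = 0; i <= 4; ++i)
        sum += data[i];
    printf("sum = %d\n", sum);  /* quietly wrong: no crash, no warning */
    return 0;
}
[/code]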
But I’m not here to argue either side. I generally do 90% C# programming, with a sprinkle of Java/PHP/C/C++ and some others as needed. I think that making CUDA available to other languages has more benefits vs. the downside of the influx of ‘less hardcore’ programmers. For one, more CUDA usage (however it is used), means more driving force for nVidia to develop newer and better GPUs, which benefits everyone. Also, there may be lots of little areas where CUDA could be used to speed up a program (say, a little histogram or something), where the developer may not want to sit down and spend days/weeks/months trying to learn the depths of CUDA…if it’s too obscure, it won’t be adopted in those smaller cases, which really just means that only large companies or people with a lot of time to spend optimizing kernels will get the benefits of CUDA.
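For example, the “little histogram” case could be as small as this (a rough sketch with made-up names, no error checking, assuming 256 bins and a device that supports global atomics):

[code]
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void histogram256(const unsigned char *data, int n, unsigned int *bins)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(&bins[data[i]], 1u);   // one atomic increment per input byte
}

int main(void)
{
    const int n = 1 << 20;
    unsigned char *h_data = (unsigned char *)malloc(n);
    for (int i = 0; i < n; ++i)
        h_data[i] = (unsigned char)(i % 256);

    unsigned char *d_data;
    unsigned int *d_bins;
    cudaMalloc((void **)&d_data, n);
    cudaMalloc((void **)&d_bins, 256 * sizeof(unsigned int));
    cudaMemcpy(d_data, h_data, n, cudaMemcpyHostToDevice);
    cudaMemset(d_bins, 0, 256 * sizeof(unsigned int));

    histogram256<<<(n + 255) / 256, 256>>>(d_data, n, d_bins);

    unsigned int h_bins[256];
    cudaMemcpy(h_bins, d_bins, sizeof(h_bins), cudaMemcpyDeviceToHost);
    printf("bin[0] = %u\n", h_bins[0]);

    cudaFree(d_data); cudaFree(d_bins); free(h_data);
    return 0;
}
[/code]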
They do the opposite thing. There are two ways to write a program: make it work, or program it. Managed languages let you do the former. But one needs to go through the horrors of proper programming in order to understand generically how a specific concept works and, more importantly, why something doesn’t work. Learning to make something work in a managed language is faster and easier, but gets you nothing of value. The concepts of programming are far more generic and valuable. Sure, once a newbie has learned proper programming, then easier-to-use languages may be of use. Don’t believe that learning Java will teach one anything about programming.
And most certainly you learned what mistakes not to make the hard way. You know what is efficient and what is not, and understand it at a deeper level. I personally prefer C++, but will not argue with an experienced programmer’s choice of language. Unfortunately, you represent the 1%.
And of course Nvidia has a vested interest in making CUDA more and more popular. I don’t contest that, but just how much value will a pseudoprogrammer bring?
Mr. Nuke, I hate to pick on you in public, but it seems like your argument revolves around the idea that there are more dumb programmers using managed languages than C/C++. Well, there are already a lot of dumb programmers using C/C++, and the set of people using managed languages is certainly not a subset of programmers that are also dumb. People will write bad code in any language.
Yes, my argument revolves around the idea that certain managed languages (and I am going to pick on Java specifically) present an increased risk of teaching “dumb” programming. If I am just starting to learn programming, and I happen to walk into a Java class, I will most likely not be any more of a programmer than I was before I started (notice the lack of “dumb”). I am just trying to find a + b, but instead I run head-on into advanced concepts such as object-oriented design (which I will probably not understand). So people will write bad code in any language, but Java is an arms-open invitation to "dumb"ness.
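The whole task I’m talking about is one line of plain C (trivial on purpose):

[code]
/* Convert a temperature from Fahrenheit to Celsius: C = (F - 32) * 5 / 9 */
float fahrenheit_to_celsius(float f)
{
    return (f - 32.0f) * 5.0f / 9.0f;
}
[/code]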
But enough bashing of managed. I regard pure languages much more highly than managed languages because they teach something more meaningful (even if the hard way).
I have to agree with Big_Mac’s argument that /MD is useful for some things (as much as I’d like to say otherwise).
I don’t want to turn this into a religious discussion (which I probably already have). I am concerned that the more familiar CUDA is made to pseudoprogrammers, the higher the chance of (eventually) encountering GPU-accelerated applications that cause this: [url=“http://forums.nvidia.com/index.php?showtopic=94524”]http://forums.nvidia.com/index.php?showtopic=94524[/url] or even this: [url=“http://forums.nvidia.com/index.php?showtopic=91635”]http://forums.nvidia.com/index.php?showtopic=91635[/url]. There is always a small chance of “kaboom” (even I have baked some of those kernels myself), but it is minute when the code comes from responsible programmers who test their kernels properly.
I really hope that you guys will write drivers that simply cannot crash on really “dumb” kernels. Hopefully, as CUDA gets more mature, that will no longer be a problem.
BTW, are you planning on supporting CUDA indefinitely, or will it slowly trickle into OpenCL?
Hey, I started programming with a Java class… (and then I went and learned C and x86 asm and CUDA and all is right with the world)
So that’s my point and I think we can leave it at that: generalizations are dumb and you shouldn’t make them.
Ensuring that unstable apps can’t blow up your system is our problem, not a developer problem, and I think we’ve accomplished all (or almost all) of that in 2.2. Sucky code that’s not living in ring 0 should not cause anything outside of the errant process to do anything bad, end of story.
We’re planning on supporting CUDA indefinitely–we leveraged our work on CUDA for OCL and can support both without any problems. A lot of our work directly (and transparently) benefits both. Now, before the obvious question of “why support CUDA when OCL is so similar,” that makes a pretty big assumption that may not be well founded…
Some pearls of wisdom born out of boredom:
Base classes (generalizations) are dumb (abstract). Derived classes (specifics) are realistic.
If your base-class is not general enough, you can’t grow beyond a point in your design.
Just like how a closed mind can’t scale beyond a point.
So, the bottom line is to have generalizations that can accommodate more as time flies by.
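In code terms, roughly (a toy example, made-up names):

[code]
// The base class is the generalization; the derived classes are the specifics.
struct Shape {
    virtual double area() const = 0;   // abstract: says nothing about how
    virtual ~Shape() {}
};

struct Circle : Shape {
    double r;
    explicit Circle(double radius) : r(radius) {}
    double area() const { return 3.14159265358979 * r * r; }
};

struct Square : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double area() const { return side * side; }
};

// If Shape's interface is not general enough (say, it assumes everything has a
// radius), the design cannot grow past circles - just like a closed mind.
[/code]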
Imagine if God had written a CFG (context-free grammar) to describe Man, and a program that generated programs from that CFG (the reverse of YACC). All humans would be like robots.
Everyone has a place on this planet, be it the dumb, the idiot, or the genius. Let us welcome all.
You had to go to a proper language for all to be right with the world. That proves my point.
I started with BASIC in fifth grade. I never got around to doing much with it. In high school I was stuck with a PASCAL class. That really pissed me off because I felt I couldn’t do the stuff I wanted to do, so I started learning C++ on my own. The .NET and Java classes I took later were riddled with bad programming practice, from a teacher with a Master’s in programming. That, and the fact that most Java programs are bloated, is enough to make the generalization that 99% of Java programmers are not programmers. There are exceptions to every rule/generalization (or for every exception a rule, I can’t remember which). Making one is not dumb.
What’s dumb is to not RTFM or RTFPG.
“What’s dumb is to not RTFM or RTFPG.”
That’s true no matter what the language used is. If you try to write Java without reading documentation, you’ll probably end up writing crappy code. Same with CUDA, or C, or C++, or Python, or Fortran, or Logo. (oh yeah, I went there)
I was referring specifically to the CUDA Programming Guide. I’ve never read a single C or C++ manual, just books on the language :P .
Yes, I prefer /MD, mainly because of runtime linking issues.
Potential Errors Passing CRT Objects Across DLL Boundaries
[url=“http://msdn.microsoft.com/en-us/library/ms235460(VS.80).aspx”]http://msdn.microsoft.com/en-us/library/ms235460(VS.80).aspx[/url]
Description of the default C and C++ libraries
[url=“http://support.microsoft.com/kb/154753”]http://support.microsoft.com/kb/154753[/url]
C Run-Time Libraries (now “C runtime (CRT) and C++ Standard Library (STL) .lib files” on Microsoft Docs)
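The gist of those articles, sketched out (module and function names are made up; assumes both modules are built with the same compiler version):

[code]
#include <cstddef>

// buffer_dll.cpp - built as a DLL
extern "C" __declspec(dllexport) char *make_buffer(std::size_t n)
{
    return new char[n];              // allocated on the DLL's CRT heap
}

// app.cpp - the EXE that consumes the DLL
extern "C" __declspec(dllimport) char *make_buffer(std::size_t n);

void use_buffer()
{
    char *p = make_buffer(256);
    // ...
    delete[] p;                      // freed on the EXE's CRT heap
}

// With /MT, the EXE and the DLL each link their own static CRT and thus own
// separate heaps, so the delete[] above corrupts a heap (or asserts in debug
// builds). With /MD, both modules share the same CRT DLL and the same heap,
// so allocating in one module and freeing in another works.
[/code]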