With 2.2 coming in the not-too-distant future, I think it’s time to talk about the Windows-specific feature that wasn’t included in the beta. As you can probably guess by the subject, we’re going to ship a test version of cudart.dll compiled with /MD in a separate package from the rest of the 2.2 toolkit (but it should be out simultaneously or very soon after the main 2.2 release) for people to try out. We’re not committing to supporting this going forward or anything like that at this point–we just want some feedback to see if this is useful. I’m not a C# or CLR guy or anything like that, but I’m guessing this will allow you to mix CUDART, C#, all that stuff that people have asked us to support.
If people really like /MD (and based on the number of complaints about the lack of /MD I’ve seen I’m pretty sure you guys will), we’d certainly like to support it going forward. However, this brings us to a problem; for a lot of reasons, we don’t want to support both /MD and /MT. So, we thought we’d just put the question out there. Does anyone have a really good reason why they’d need to stick with /MT as opposed to moving to /MD? From what I know on the subject, I can’t think of any reason besides the need to change your project settings, but I’m not the most knowledgeable about Windows internals. There’s certainly a possibility that I’ve missed some giant MSDN advisory claiming that /MD should never be used for anything…
I do a lot with C# and the CLR… from this Microsoft page, I don’t really see what the difference is. Or are you guys looking at /MDd (which would enable debugging)?
I’m interested to try it out and see though, when 2.2 is available.
You can’t link anything compiled with /MT from C#/CLR. So with a /MD cudart, you should be able to use cudart in a DLL that you then link with a C#/CLR/managed C++ (I think the latter is true?) app.
Hmm… if what you say is true… then I guess in ‘.NET speak’ compiling it that way does some kind of automatic marshaling between the CLR and the unmanaged DLLs (i.e. CUDART)? As of right now, you can access both the runtime and driver DLLs via C#/CLR, but you have to use P/Invoke, which introduces a small (but noticeable if overused) processing overhead to convert between the managed and unmanaged types.
Whenever it’s out, I’ll give it a try and see what’s up. If it really does get rid of the P/Invoke overhead, that could be a pretty big deal for using CUDA through .NET.
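For anyone curious, the P/Invoke route today looks roughly like the sketch below. This is untested, and the DLL name “cudart.dll” plus the Cdecl calling convention are my assumptions - check the actual runtime DLL name that ships with your toolkit and the CUDARTAPI definition in cuda_runtime_api.h before relying on it.

    using System;
    using System.Runtime.InteropServices;

    static class CudaRuntime
    {
        // cudaError_t is exposed as a plain int here; 0 means cudaSuccess.
        // DLL name and calling convention are assumptions (see above).
        [DllImport("cudart.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern int cudaMalloc(out IntPtr devPtr, UIntPtr size);

        [DllImport("cudart.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern int cudaFree(IntPtr devPtr);
    }

Since every argument there is blittable, the only overhead is the managed-to-native transition itself rather than any data conversion.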
From a technical standpoint there’s little/no difference…
With /MT, the runtime libraries get linked in with CUDART; with /MD, you just have to make sure the system has the runtime libraries installed (e.g. ship vcredist_x86.exe with your installer/app).
Either way CUDART has the same symbols linked into it, and either way it still uses the same runtime calls… you still have the same issues with runtime version compatibility between CUDART and your application, etc…
I’m not sure what’s stopping people from using CUDART in managed CLI apps… you can easily P/Invoke into any native DLL, regardless of linker settings. The only other methods for native<->CLI interop are C++/CLI (which is horrible, and not portable) or COM, and I can’t imagine /MD vs /MT making any difference with either of those…
I must be missing something pretty obvious, but I’ve never run into linker-setting issues when writing native<->CLI wrappers before.
Huh. Numerous people have claimed there’s an issue before; I don’t know anything about it (Linux guy), and it wasn’t a huge deal for the driver team to make it work. We’ll see if it makes a difference, I guess.
I intend to use C++/CLI for interop because P/Invoke means more wrapper code and I don’t like the syntax at all. C++/CLI doesn’t link when I select /MT. I don’t know why, but that’s how it is (at least in VS2008). Sure, I could work around this one way or another (resorting to P/Invoke, or even a different framework to quickly stitch up a GUI), but if it’s not a big deal for the nV guys to release an /MD version, then I’m all for it.
Btw Smokey, why do you consider C++/CLI horrible (aside from lack of portability)?
As a new user who has spent years making C++/ATL/COM components, I’m glad I found this thread. We use the /MD and /MDd options when compiling our .cpp files. When I started to figure out the options for the nvcc compiler, I used /MDd in the list of options for host code that nvcc passes on to the C compiler. My output from the build would always have a warning about a dllexport/dllimport conflict with the “clock” function in common_functions.h. I hadn’t got as far as worrying about which clock function I was using, so I continued on with coding. Real trouble came when I tried to compile my .cu code for the emulator. Then I got errors about some functions in math_functions.h having storage class specifier problems, and dllimport/dllexport problems. When I switched to /MTd for all project files, including the host code passed by nvcc, these problems went away. Since it’s nice to be able to dynamically link to the runtime instead of statically linking it, I’d welcome the proposed change.
That’s the only reason - lack of portability. (You’re talking to someone who develops primarily for Linux; Windows is a second-rate citizen in both my personal work and my professional work.)
Edit: btw, C++/CLI gets you internal calls at best when calling native code, which is the same speed as a P/Invoke assuming no marshaling (at least in Mono; MS.NET might not be at that performance level, I’m not sure). It’s generally quite easy to make a wrapper using P/Invoke that avoids marshaling, remembering that most primitive types require no marshaling because they’re blittable - the same goes for structures made of other POD structs and primitive types. The only time I’ve ever had to marshal data when P/Invoking is when dealing with strings.
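To make the “no marshaling” point concrete, here’s the kind of wrapper I mean - a rough, untested sketch, with the same caveats as earlier in the thread about the DLL name and calling convention being assumptions:

    using System;
    using System.Runtime.InteropServices;

    static class BlittableCopyExample
    {
        [DllImport("cudart.dll", CallingConvention = CallingConvention.Cdecl)]
        static extern int cudaMalloc(out IntPtr devPtr, UIntPtr size);

        [DllImport("cudart.dll", CallingConvention = CallingConvention.Cdecl)]
        static extern int cudaMemcpy(IntPtr dst, float[] src, UIntPtr count, int kind);

        [DllImport("cudart.dll", CallingConvention = CallingConvention.Cdecl)]
        static extern int cudaFree(IntPtr devPtr);

        static void Main()
        {
            // float[] is blittable: the runtime pins it and passes a raw pointer,
            // so no per-element conversion happens on the way into cudaMemcpy.
            float[] host = new float[1024];
            UIntPtr bytes = (UIntPtr)(ulong)(host.Length * sizeof(float));

            IntPtr dev;
            cudaMalloc(out dev, bytes);
            cudaMemcpy(dev, host, bytes, 1);   // 1 = cudaMemcpyHostToDevice
            cudaFree(dev);
        }
    }

Strings and non-blittable structs are where the real marshaling cost shows up; arrays of primitives just get pinned for the duration of the call.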
I’m a little bit confused about this issue. My DLL uses cudart and I am calling it from a managed .NET application using C++/CLI. I have noticed that I have to link the debug build of the DLL using /MD, but I can build the release version with /MT. I haven’t had any problems with the C++/CLI interop side of things, but it’s quite possible that the managed .NET application has only ever been using the release build of the DLL. Am I doing something weird? In fact, having re-read the original post, my observations make even less sense.
Okay, sorry for that nonsense post. I managed to completely confuse myself. I have so far been building the debug build with /MT and the release build with /MD. I’m not sure how I ended up in that situation. On further investigation, it seems to be possible to build both debug and release builds with either /MT or /MD. Note that to link with /MD you need to add “libcmtd.lib;libcpmtd.lib” (for debug builds) or “libcmt.lib” (for release builds) to the Ignore Specific Library option.
I think the reason you are having to put libcmt.lib into the Ignore Specific Library option is that you are compiling one source file to link to the C runtime dynamically while another source file is being compiled to link to the C runtime statically. The culprit is usually the compiler options you have set in the -Xcompiler options nvcc sends over to the C compiler when it compiles host functions.
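In other words, whatever CRT flavour the rest of the project uses is what should go through -Xcompiler as well - something along these lines (illustrative command lines only; adjust the file name and the /MD vs /MDd flavour to match your project settings):

    nvcc -c mykernel.cu -Xcompiler "/MDd"      (debug build against the dynamic CRT)
    nvcc -c mykernel.cu -Xcompiler "/MD"       (release build against the dynamic CRT)

Mixing one object built against the static CRT with another built against the dynamic CRT is exactly what forces those entries in Ignore Specific Library.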
I feel obliged to oppose the push for managed code.
I think that enabling support for any managed language will send the wrong message to developers. You don’t want to make CUDA attractive to the mass of idiotic Java programmers. Open the gates and you’ll get an influx of pseudoprogrammers, ravaging the original spirit of the API (and in the meantime clogging the forums with stupid and pointless questions). Quite frankly, I believe that there are enough smart programmers interested in CUDA that there is no need for Redmond-like coders.
Secondly, I see CUDA as a neutral API as far as cross-platform compatibility is concerned. Enable CLI support and you get a heavy shift towards proprietary languages while at the same time introducing more and more options for OS specialization. Give them an inch and they’ll take a mile.
Right now, CUDA is mostly used by scientists, professors with a vested interest in GPU computing, and students with a desire to learn (myself included). Every reply I received on the CUDA forums has been polite and highly informative, showing a clear intention and desire to help; I cannot say that about any other forums, where the trend is to insult. I want to talk to the same polite and smart people I have talked to so far, not some twit who thinks he knows everything. Keeping CUDA neutral and in the spirit of multi-OS compatibility, if not open-source mentality, will ensure the absence of annoying persons.
So even if the only reason not to use /MD is to keep CUDA pure, that is more than good enough. Real programmers who absolutely need to use CLI will most certainly find a cheap, simple, and efficient solution for CUDA.
Thanks - that was exactly right. I do get a couple of warnings when I use /MD in the -Xcompiler options but it doesn’t seem to stop it compiling. So what is the problem that people are having?
Yes, CUDA will become more popular, even if among the wrong persons. However, there are a lot of wrappers (PhyCuda, CUDA .NET, etc) for those persons. I don’t think nVidia should be wasting time with anything that promotes “pseudoprogramming”, managed languages especially.
I’m the type of student who gets all A’s in computer programming classes and never goes to class. I’m also the type of person who spends dozens of sleepless nights in a row trying to find the bug in, and improve, the GPU implementation of whatever I’m working on. I will spend a minimum of a dozen sleepless nights before contemplating asking a question here, and in the process I answer 90% of my questions myself. I understand I can benefit from the experience of the bright minds in these forums, but I don’t want to abuse it. When I do post a question and then find the answer by myself, or am given a simple solution, I feel idiotic.
The pseudoprogramming crowd is not like me, or the bright minds in these forums. They will arrogantly DEMAND an answer to even the most pointless (not to say idiotic) questions. Yes, those persons will start using CUDA just because it’s something “cool”. Still, why should they bug us? I say don’t draw them here.
Also, I’m certain that, as OpenCL rolls out, CUDA will settle into being the hard-core, most efficient, and fastest way to do GPU computing. Just another reason to keep it pure.
I get a deep satisfaction whenever I hear of some company switching to CUDA, or a university offering CUDA classes. I want CUDA to become popular, and I want to have more persons to discuss it with, but those persons need to understand that CUDA is not a toy. Using CUDA involves a lot of responsibility, something serious companies will definitely accept. My worries lie with the less-than-knowledgeable programmers I mentioned. That, and I want CUDA to stay what it is: the best neutral compute API in the world.