I’m wondering if anyone here uses Amazon EC2 Cluster GPU. If you do, which AMI do you use? I can’t seem to find an AMI with the most updated toolkit and driver, and I’m going to use my instance for short periods of time so I don’t want to spend time on the installation of various tools.
This looks like the newest AMI with GPU support that I can find: ami-e4a7558d, but it was created three months ago, so toolkit 4.0 is certainly not on it. Has anyone used Amazon EC2 Cluster GPU before? Any recommendations for an AMI?
I toyed around with an EC2 GPU instance a few months back (so we could test GPU.NET with it). IIRC, the “getting started” instructions mentioned that you should update to the latest driver / toolkit / sdk when setting up a new instance; I don’t think AWS updates the AMIs very frequently, instead preferring to let customers handle the updates themselves so they can be customized more easily.
Plus, it’d be a lot of work on Amazon’s part to keep the AMIs up-to-date, since they’d need to do a bunch of compatibility testing each time they updated the image.
OTOH, I believe you can create your own AMI from scratch or from one of the base AMIs. So, if you were going to set up a bunch of machines, you’d just create an instance with one of the base AMIs, update it with whatever you want, shut the instance down, then use that to create your own AMI (which needs to be saved to S3, if I’m not mistaken).
Thanks for the info! I guess I’ll just have to do my own AMI then.
Though I do think it’d be good for everybody if NVIDIA maintained some kind of support for Amazon EC2 Cluster GPU. After all, the cloud would be a good place for potential customers to test out their ideas and to see how their applications might work with the Tesla GPUs. NVIDIA might get more sales if they made better use of this platform.
I’ve done a custom Ubuntu AMI before. It’s a little scary given how flexible and “enterprisey” the Amazon tools are, but if you can follow a recipe it is doable. I can’t find the tutorial that I used before, but this looks close:
That also brings up the option of using EBS so you can have a persistent root partition. Not as cheap as a plain AMI for intermittent use, but maybe more convenient.
This great and straightforward tutorial was going around the #CUDA Twitter feed/filter last week: http://hpc.nomad-labs.com/?p=65
CUDA 4.0 MultiGPU on an Amazon EC2 instance
Posted by kashif on June 23, 2011
This post will take you through starting and configuring an Amazon EC2 instance to use the Multi GPU features of CUDA 4.0.
Motivation
CUDA 4.0 comes with some exciting new features, such as:
the ability to share GPUs across multiple threads;
the ability to use all GPUs in the system concurrently from a single host thread;
unified virtual addressing for faster multi-GPU programming;
and many more.
The ability to access all the GPUs in a system is particularly nice on Amazon, since the large GPU-enabled instances come with two Tesla M2050 Fermi boards, each capable of 1030 GFlops of theoretical peak performance with 448 cores and 3 GB of memory.
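The single-host-thread multi-GPU feature mentioned above can be sketched roughly as follows. This is just an illustrative example, not code from the linked tutorial; the kernel, buffer sizes, and launch configuration are made up, but the cudaSetDevice() pattern is the CUDA 4.0 API for driving multiple devices from one thread:

```cuda
// Hypothetical sketch: one host thread drives every GPU in the instance by
// switching the current device with cudaSetDevice(). On a cg1.4xlarge,
// cudaGetDeviceCount() should report 2 (the two Tesla M2050 boards).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main()
{
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    const int n = 1 << 20;          // arbitrary problem size
    float *dev[16];                  // per-device buffers (16 = arbitrary cap)

    // Launch work on every device from this single host thread -- no extra
    // threads needed in CUDA 4.0, since contexts are managed per device.
    for (int d = 0; d < deviceCount && d < 16; ++d) {
        cudaSetDevice(d);
        cudaMalloc(&dev[d], n * sizeof(float));
        scale<<<(n + 255) / 256, 256>>>(dev[d], 2.0f, n);  // async launch
    }

    // Then wait for all devices to finish and clean up.
    for (int d = 0; d < deviceCount && d < 16; ++d) {
        cudaSetDevice(d);
        cudaDeviceSynchronize();
        cudaFree(dev[d]);
    }

    printf("ran kernels on %d device(s)\n", deviceCount);
    return 0;
}
```

Note that the kernel launches are asynchronous, so the first loop queues work on each GPU without blocking, and the two boards run concurrently until the synchronization loop.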
@seibert and vfulco: thanks for providing the links!
Jack was saying that an AMI needs to be saved on S3. That’s perhaps outdated: now EBS can be used to store an AMI, and it is cheaper than S3, not to mention the free 1-year 10 GB EBS storage for new AWS users.
And creating an AMI from a running instance is surprisingly easy. I could almost expect my dad to be able to do it.
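For the record, creating an EBS-backed AMI from a running instance boils down to a single command with the EC2 API command-line tools. This is only a rough sketch; the instance id, region, and AMI name below are made up for illustration:

```shell
# Hypothetical sketch: snapshot a running EBS-backed instance into a new AMI.
# Replace i-12345678 with your actual instance id.
ec2-create-image i-12345678 \
    --region us-east-1 \
    --name "cuda-4.0-base" \
    --description "Base GPU AMI plus latest NVIDIA driver and CUDA 4.0 toolkit/SDK"

# The command prints the new AMI id; once the image state is "available",
# you can launch fresh instances from it. To check on your images:
ec2-describe-images -o self --region us-east-1
```

The instance is briefly stopped while the root volume snapshot is taken, so don’t do this while something important is running.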
Now I only wish NVIDIA would bear the cost of maintaining an updated AMI (or perhaps an additional feature-rich one that comes with loads of libraries) for everybody’s good.
I’ve heard through hearsay that development on Amazon EC2 has a licensing issue… that Amazon may own some rights to software you develop on EC2…
Do you guys have any idea?
I’m sure Amazon wouldn’t be so dumb as to claim such rights… but I don’t want to ignore the hearsay either…
Anybody? Any ideas?
I haven’t heard of Amazon claiming rights to any software you develop on EC2 – and from a business perspective, that’d be a foolish move on their part. When I was at the AWS summit a few weeks back, I got the impression that their current strategy is to capture the enterprise market, since it’s cheaper to run your servers on their infrastructure than to purchase and maintain your own on-site server room. That being the case, I doubt many companies would consider using a platform where their investment (in in-house software, for example) could just be usurped at Amazon’s will.
On the other hand, some of the AWS stuff allows you to “bring your own license”, e.g., if you want to run Oracle on the Amazon cloud, you still need to pay for licenses – either through a higher hourly cost, or by purchasing your own licenses from Oracle and using them on AWS. AFAIK, this would apply to any EC2 instances running Windows Server, SQL Server, and Oracle (just off the top of my head). Perhaps this is what you heard about?