EC2 Instance Availability for Project 3 Extra Credit

To aid those who would like an isolated testing environment, we have made available an EC2 virtual machine image for performance testing. We will still be evaluating Project 3 performance on the lab machines. The images we have provided are for “Cluster Compute Quadruple Extra Large” instances, which correspond to a dedicated machine similar to our lab machines but about 20-30% faster across the board. Limit your usage to one machine at a time per project group.

What the Instances Are

These Cluster Compute instances are virtual machines, each of which runs exclusively on a dedicated system similar to our lab machines. Since we get a dedicated machine, these are expensive: $1.60/hour. Like the lab machines, these are 8-core Nehalem systems with 2 sockets and the same cache sizes. These systems have a slightly higher clock rate and memory bus speed, but we believe that relative performance improvements on these machines should translate to relative performance improvements on the lab machines.


There are some additional factors which may bias relative performance versus the lab machines:

  1. these EC2 instances run Linux and not Mac OS X;
  2. the compiler we have provided on these instances is GCC 4.3.5 instead of an Apple-patched GCC 4.2.1;
  3. these machines have 22 GBytes of RAM available instead of "just" 12 GBytes.

Using Instances

Start a new instance with the command:


        ec2-run -t cc1.4xlarge -a ami-9209fefb


Get the public DNS name from the command:


        ec2-my-instances


(The DNS name starts with ec2- and ends with .amazonaws.com. ec2-my-instances is an alias for ‘ec2-describe-instances --filter='key-name:YOUR-USERNAME*'’.)


Wait for the instance to boot (the instance may still be booting when "ec2-my-instances" says it is "running"), then login to this instance using


        ssh-nocheck -i ~/YOUR-USERNAME-default.pem root@PUBLIC-DNS-NAME


(ssh-nocheck is an alias for 'ssh -o "StrictHostKeyChecking false" -o "UserKnownHostsFile /dev/null"'. Since your instances are transient, we want to avoid polluting your known hosts file with their SSH keys.)


Alternately, copy the YOUR-USERNAME-default.pem file to your own machine and use it as a private key with your own SSH client.
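For example, the steps for using the key from your own machine might look roughly like the following (a sketch; the paths and the lab-machine hostname are illustrative, and PUBLIC-DNS-NAME is the name reported by ec2-my-instances):

```shell
# Copy the key from a lab machine to your own machine (hypothetical paths).
scp lab-machine:~/YOUR-USERNAME-default.pem ~/.ssh/

# SSH refuses private keys that other users can read, so restrict permissions.
chmod 600 ~/.ssh/YOUR-USERNAME-default.pem

# Log in to the instance as root using the key.
ssh -i ~/.ssh/YOUR-USERNAME-default.pem root@PUBLIC-DNS-NAME
```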


If you get a message like "ssh: connect to host port 22: Connection refused", then the instance is probably not finished booting.
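If you would rather not retry by hand, a small polling loop (a sketch; substitute your key path and the instance's DNS name as elsewhere on this page) can wait until the instance's sshd starts accepting connections:

```shell
# Retry every 10 seconds until an SSH connection succeeds; 'true' exits
# immediately once we can actually log in.
until ssh-nocheck -i ~/YOUR-USERNAME-default.pem root@PUBLIC-DNS-NAME true
do
    echo "still booting; retrying in 10 seconds..."
    sleep 10
done
```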


After logging in, you will have a root shell on a CentOS system with an appropriate copy of the BLAS installed. The /root directory contains a copy of the Goto BLAS and a matmulProject.tar.bz2 archive (uncompress it using 'tar -jxvf matmulProject.tar.bz2') that contains a Makefile suitable for compiling Project 3 on these machines (that is, with the 'GOTO = ' line changed to point to /root/GotoBLAS2).
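Concretely, the first few commands after logging in might look like this (the unpacked directory name and make invocation are assumptions; check with 'ls' and your own Project 3 Makefile):

```shell
cd /root

# Unpack the project skeleton; the Makefile inside already has its
# 'GOTO = ' line pointing at /root/GotoBLAS2, per the note above.
tar -jxvf matmulProject.tar.bz2

cd matmulProject   # directory name is an assumption; verify with 'ls'
make               # build Project 3 against the Goto BLAS
```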


You can copy files to this instance using scp as in


        scp-nocheck -i ~/YOUR-USERNAME-default.pem source-file root@PUBLIC-DNS-NAME:


(scp-nocheck is an alias for 'scp -o "StrictHostKeyChecking false" -o "UserKnownHostsFile /dev/null"'.)


Alternately, copy the .pem file as before and use it on your own machine with your own SFTP or SCP client.
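Copying in the other direction works the same way; for example, to retrieve a benchmark output file from the instance (the remote filename here is illustrative):

```shell
# Fetch results.txt from the instance's /root into the current directory.
scp-nocheck -i ~/YOUR-USERNAME-default.pem root@PUBLIC-DNS-NAME:/root/results.txt .
```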


When you are done with your instance, be sure to terminate it with ‘ec2-terminate-instances i-XXXX’, where ‘i-XXXX’ is the instance ID you get from ‘ec2-my-instances’.

Starting and Stopping Instances

We are using Elastic Block Store (EBS) backed instances, which allow for persistent instance storage. Therefore, you can stop and restart these instances without losing your data, and we are not billed (except for data storage) for the time the instances are not running.


You can stop an instance by running ‘halt’ while logged into it, or by running ‘ec2-stop-instances i-XXXX’ (where i-XXXX is the instance ID from ‘ec2-my-instances’). The instance will then appear under ‘ec2-my-instances’ as “stopped”, and you can restart it with ‘ec2-start-instances i-XXXX’. Note that the restarted instance may get a different DNS name and IP address.

We are not billed for the time these instances are stopped. However, each period during which an instance is booted is billed by the hour, rounded up to the next hour (so running a new instance, stopping and starting it, and then terminating it within one hour causes us to be billed for two hours of instance time). Storage is billed at $.10 per GByte-month, plus a small usage charge ($.10 per million I/Os); the root volume for our instances is 20 GBytes.
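As a sanity check on the rates quoted above, here is the arithmetic for the stop-and-start-within-an-hour example (two billed hours) and for one month of root-volume storage (a sketch; the rates are the ones given in this section):

```shell
# Two billed hours at $1.60/hour (the stop/start-within-an-hour example).
instance_cost=$(awk 'BEGIN { printf "%.2f", 2 * 1.60 }')

# One month of the 20 GByte root volume at $0.10 per GByte-month.
storage_cost=$(awk 'BEGIN { printf "%.2f", 20 * 0.10 }')

echo "instance: \$$instance_cost, storage: \$$storage_cost per month"
```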