
CUDA 5 Multi-GPU Cluster via Amazon EC2 and StarCluster


As promised, here is a tutorial on configuring and running, say, a 20-node CUDA 5 multi-GPU cluster on Amazon's AWS cloud infrastructure. The trick to not paying the $2.10*20 = $42/hour on-demand cost is to use Spot Instances together with the awesome StarCluster Python package, which takes the pain out of creating clusters on AWS. For the purpose of this post we will stick to just 2 nodes, and I will point out where you can easily add more, all the way up to 20. So let's get started!

Prerequisites

The first thing we need to do is install StarCluster and configure our Amazon AWS credentials and keys. On my 64-bit Mac OS X machine I had to install pycrypto first, with the following commands (you may need to sudo):

➜ export ARCHFLAGS='-arch x86_64'
➜ easy_install pycrypto
...
➜ easy_install starcluster
...

Once installed, we run it with the help command, which offers to write a config template for us; we choose option 2:

➜ starcluster help
StarCluster - (http://web.mit.edu/starcluster) (v. 0.9999)
Software Tools for Academics and Researchers (STAR)
Please submit bug reports to starcluster@mit.edu

!!! ERROR - config file /Users/kashif/.starcluster/config does not exist

Options:
--------
[1] Show the StarCluster config template
[2] Write config template to /Users/kashif/.starcluster/config
[q] Quit

Please enter your selection: 2

>>> Config template written to /Users/kashif/.starcluster/config
>>> Please customize the config template

Next we need to look up our AWS security credentials and fill in the [aws info] section of the .starcluster/config file:

➜ cat ~/.starcluster/config
...
#############################################
## AWS Credentials and Connection Settings ##
#############################################
[aws info]
# This is the AWS credentials section (required).
# These settings apply to all clusters
# replace these with your AWS keys
AWS_ACCESS_KEY_ID = blahblah
AWS_SECRET_ACCESS_KEY = blahblahblahblah
# replace this with your account number
AWS_USER_ID= blahblah
...

Now would be a good time to create a key via:

➜ starcluster createkey cuda -o ~/.ssh/cuda.rsa
...
>>> keypair written to /Users/kashif/.ssh/cuda.rsa

and add its location to the .starcluster/config under the [key cuda] section.
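For reference, the relevant part of my config ends up looking something like this (adjust KEY_LOCATION if you wrote the key somewhere else):

[key cuda]
KEY_LOCATION = ~/.ssh/cuda.rsa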

It's also a good idea to create a ~/.aws-credentials-master file and fill it in with the same information, so that we can use the Amazon command line tools as well:

➜ cat ~/.aws-credentials-master
# Enter the AWS Keys without the < or >
# You can either use the AWS Accounts access keys and they can be found at
# http://aws.amazon.com under Account->Security Credentials
# or you can use the access keys of a user created with IAM
AWSAccessKeyId=blahblah
AWSSecretKey=blahblahblah

Now commands like:

➜ starcluster spothistory cg1.4xlarge
StarCluster - (http://web.mit.edu/starcluster) (v. 0.93.3)
Software Tools for Academics and Researchers (STAR)
Please submit bug reports to starcluster@mit.edu

>>> Current price: $0.35
>>> Max price: $2.10
>>> Average price: $0.46

should work as above.

Basic Idea

The plan is to take an official StarCluster HVM AMI, update it, and create an EBS-backed AMI from it. Then we will use this new AMI to run the cluster. The updated AMI will hopefully have the latest CUDA 5 as well as other goodies.

Customizing an Image Host

We first launch a new single-node cluster called imagehost as a spot instance, based off an existing StarCluster AMI, on a GPU-enabled instance type. We need to choose an AMI (machine image) that supports HVM so that we have access to the GPUs. We can list all the StarCluster AMIs via:

➜ starcluster listpublic
StarCluster - (http://web.mit.edu/starcluster) (v. 0.93.3)
Software Tools for Academics and Researchers (STAR)
Please submit bug reports to starcluster@mit.edu

>>> Listing all public StarCluster images...
32bit Images:
-------------
[0] ami-899d49e0 us-east-1 starcluster-base-ubuntu-11.10-x86 (EBS)
[1] ami-8cf913e5 us-east-1 starcluster-base-ubuntu-10.04-x86-rc3
[2] ami-d1c42db8 us-east-1 starcluster-base-ubuntu-9.10-x86-rc8
[3] ami-8f9e71e6 us-east-1 starcluster-base-ubuntu-9.04-x86

64bit Images:
--------------
[0] ami-4583572c us-east-1 starcluster-base-ubuntu-11.10-x86_64-hvm (HVM-EBS)
[1] ami-999d49f0 us-east-1 starcluster-base-ubuntu-11.10-x86_64 (EBS)
[2] ami-0af31963 us-east-1 starcluster-base-ubuntu-10.04-x86_64-rc1
[3] ami-2faa7346 us-east-1 starcluster-base-ubuntu-10.04-x86_64-qiime-1.4.0 (EBS)
[4] ami-8852a0e1 us-east-1 starcluster-base-ubuntu-10.04-x86_64-hadoop
[5] ami-a5c42dcc us-east-1 starcluster-base-ubuntu-9.10-x86_64-rc4
[6] ami-a19e71c8 us-east-1 starcluster-base-ubuntu-9.04-x86_64
[7] ami-06a75a6f us-east-1 starcluster-base-centos-5.4-x86_64-ebs-hvm-gpu-hadoop-rc2 (HVM-EBS)
[8] ami-12b6477b us-east-1 starcluster-base-centos-5.4-x86_64-ebs-hvm-gpu-rc2 (HVM-EBS)

total images: 13

and it's [0] ami-4583572c that we need to start as a single-node cluster, with a bid price higher than the current spot price:

➜ starcluster start -o -s 1 -b 0.35 -i cg1.4xlarge -n ami-4583572c imagehost
StarCluster - (http://web.mit.edu/starcluster) (v. 0.93.3)
Software Tools for Academics and Researchers (STAR)
Please submit bug reports to starcluster@mit.edu

>>> Using default cluster template: smallcluster
>>> Validating cluster template settings...
>>> Cluster template settings are valid
>>> Starting cluster...
>>> Launching a 1-node cluster...
>>> Launching master node (ami: ami-4583572c, type: cg1.4xlarge)...
>>> Creating security group @sc-imagehost...
>>> Creating placement group @sc-imagehost...
SpotInstanceRequest:sir-98fb8411
>>> Starting cluster took 0.042 mins

We can now check to see if our instance is available:

➜ starcluster listclusters --show-ssh-status imagehost
StarCluster - (http://web.mit.edu/starcluster) (v. 0.93.3)
Software Tools for Academics and Researchers (STAR)
Please submit bug reports to starcluster@mit.edu

-----------------------------------------
imagehost (security group: @sc-imagehost)
-----------------------------------------
Launch time: N/A
Uptime: N/A
Zone: N/A
Keypair: N/A
EBS volumes: N/A
Spot requests: 1 open
Cluster nodes: N/A
....
➜ starcluster listclusters --show-ssh-status imagehost
StarCluster - (http://web.mit.edu/starcluster) (v. 0.93.3)
Software Tools for Academics and Researchers (STAR)
Please submit bug reports to starcluster@mit.edu

-----------------------------------------
imagehost (security group: @sc-imagehost)
-----------------------------------------
Launch time: 2012-09-04 12:48:54
Uptime: 0 days, 00:02:38
Zone: us-east-1a
Keypair: cuda
EBS volumes: N/A
Spot requests: 1 active
Cluster nodes:
master running i-5654f92c ec2-50-19-21-200.compute-1.amazonaws.com (spot sir-98fb8411) (SSH: Up)
Total nodes: 1

And once it's up we can ssh into it:

➜ starcluster sshmaster imagehost
...
root@ip-10-16-20-37:~#

Install CUDA 5

We can now update the system:

$ apt-get update
...
$ apt-get upgrade
...
$ apt-get dist-upgrade

and reboot. Once back in, we are ready to install CUDA 5. First we remove the preinstalled NVIDIA drivers etc.:

$ sudo apt-get purge nvidia*
...

Next we adjust the linux-restricted-modules-common file so that it has:

$ cat /etc/default/linux-restricted-modules-common
DISABLED_MODULES="nv nvidia_new"

Next we remove the older CUDA version:

$ sudo rm -rf /usr/local/cuda

After that we install some dependencies of CUDA 5:

$ sudo apt-get install freeglut3-dev build-essential libx11-dev libxmu-dev libxi-dev libgl1-mesa-glx libglu1-mesa libglu1-mesa-dev

And finally we download the latest CUDA 5 and install it:

$ wget http://developer.download.nvidia.com/compute/cuda/5_0/rc/installers/cuda_5.0.24_linux_64_ubuntu11.10.run
....
$ chmod +x cuda_5.0.24_linux_64_ubuntu11.10.run
$ sudo ./cuda_5.0.24_linux_64_ubuntu11.10.run
Logging to /tmp/cuda_install_7078.log
Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 304.33? (yes/no/quit): yes
Install the CUDA 5.0 Toolkit? (yes/no/quit): yes
Enter Toolkit Location [ default is /usr/local/cuda-5.0 ]
Install the CUDA 5.0 Samples? (yes/no/quit): yes
Enter CUDA Samples Location [ default is /usr/local/cuda-5.0/samples ]
Installing the NVIDIA display driver...
Installing the CUDA Toolkit in /usr/local/cuda-5.0 ...
...

Once installed, we can check that CUDA is working by building and running the deviceQuery sample:

$ cd /usr/local/cuda/samples/C/1_Utilities/deviceQuery
$ make
g++ -m64 -I/usr/local/cuda-5.0/include -I. -I.. -I../../common/inc -I../../../shared/inc -o deviceQuery.o -c deviceQuery.cpp
g++ -m64 -o deviceQuery deviceQuery.o -L/usr/local/cuda-5.0/lib64 -lcuda -lcudart
mkdir -p ../../bin/linux/release
cp deviceQuery ../../bin/linux/release
$ ../../bin/linux/release/deviceQuery
[deviceQuery] starting...

./deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

Found 2 CUDA Capable device(s)

Device 0: "Tesla M2050"
CUDA Driver Version / Runtime Version 5.0 / 5.0
CUDA Capability Major/Minor version number: 2.0
Total amount of global memory: 2687 MBytes (2817982464 bytes)
(14) Multiprocessors x ( 32) CUDA Cores/MP: 448 CUDA Cores
GPU Clock rate: 1147 MHz (1.15 GHz)
Memory Clock rate: 1546 Mhz
Memory Bus Width: 384-bit
L2 Cache Size: 786432 bytes
Max Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536,65535), 3D=(2048,2048,2048)
Max Layered Texture Size (dim) x layers 1D=(16384) x 2048, 2D=(16384,16384) x 2048
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 32768
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 65535 x 65535 x 65535
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and execution: Yes with 2 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Concurrent kernel execution: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support enabled: Yes
Device is using TCC driver mode: No
Device supports Unified Addressing (UVA): Yes
Device PCI Bus ID / PCI location ID: 0 / 3
Compute Mode:

Device 1: "Tesla M2050"
CUDA Driver Version / Runtime Version 5.0 / 5.0
CUDA Capability Major/Minor version number: 2.0
Total amount of global memory: 2687 MBytes (2817982464 bytes)
(14) Multiprocessors x ( 32) CUDA Cores/MP: 448 CUDA Cores
GPU Clock rate: 1147 MHz (1.15 GHz)
Memory Clock rate: 1546 Mhz
Memory Bus Width: 384-bit
L2 Cache Size: 786432 bytes
Max Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536,65535), 3D=(2048,2048,2048)
Max Layered Texture Size (dim) x layers 1D=(16384) x 2048, 2D=(16384,16384) x 2048
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 32768
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 65535 x 65535 x 65535
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and execution: Yes with 2 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Concurrent kernel execution: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support enabled: Yes
Device is using TCC driver mode: No
Device supports Unified Addressing (UVA): Yes
Device PCI Bus ID / PCI location ID: 0 / 4
Compute Mode:

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 5.0, CUDA Runtime Version = 5.0, NumDevs = 2, Device = Tesla M2050, Device = Tesla M2050
[deviceQuery] test results...
PASSED

> exiting in 3 seconds: 3...2...1...done!

The last thing we need to do is to ensure that the device files /dev/nvidia* exist and have the correct file permissions. This can be done by creating a startup script e.g.:

$ cat /etc/init.d/nvidia
#!/bin/bash
PATH=/sbin:/bin:/usr/bin:$PATH

# Load the NVIDIA kernel module.
/sbin/modprobe nvidia

if [ "$?" -eq 0 ]; then
    # Count the number of NVIDIA controllers found.
    NVDEVS=`lspci | grep -i NVIDIA`
    N3D=`echo "$NVDEVS" | grep "3D controller" | wc -l`
    NVGA=`echo "$NVDEVS" | grep "VGA compatible controller" | wc -l`

    # Create a world-writable device file for each GPU...
    N=`expr $N3D + $NVGA - 1`
    for i in `seq 0 $N`; do
        mknod -m 666 /dev/nvidia$i c 195 $i
    done

    # ...and the control device.
    mknod -m 666 /dev/nvidiactl c 195 255
else
    exit 1
fi

$ sudo chmod +x /etc/init.d/nvidia
$ sudo update-rc.d nvidia defaults

And all should be working! We can now clean up by removing the downloaded files and log out.
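For example, assuming the .run installer is still sitting in the directory we downloaded it to:

$ rm cuda_5.0.24_linux_64_ubuntu11.10.run
$ exit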

Creating an EBS-Backed AMI

We can now create an AMI called starcluster-cuda5-ami of our updated CUDA 5 instance (ID i-5654f92c) by using:

➜ starcluster ebsimage i-5654f92c starcluster-cuda5-ami
StarCluster - (http://web.mit.edu/starcluster) (v. 0.93.3)
Software Tools for Academics and Researchers (STAR)
Please submit bug reports to starcluster@mit.edu

>>> Removing private data...
>>> Creating EBS image...
>>> Waiting for AMI ami-9f6ed8f6 to become available...
>>> create_image took 12.236 mins
>>> Your new AMI id is: ami-9f6ed8f6

And we now have an AMI that we can use for our cluster.

Cluster Template

Now we can set up the cluster template in the StarCluster config file, using the AMI we just created. ami-9f6ed8f6 is the node image we will use in a small cluster template:

...
[cluster smallcluster]
KEYNAME = cuda
CLUSTER_SIZE = 2
CLUSTER_USER = sgeadmin
CLUSTER_SHELL = bash
NODE_IMAGE_ID = ami-9f6ed8f6
NODE_INSTANCE_TYPE = cg1.4xlarge
SPOT_BID = x.xx

It's important to set SPOT_BID = x.xx (with your own bid), or else the full on-demand price will be charged, which is not what we want :-) Also, to run a bigger cluster just replace CLUSTER_SIZE = 2 with the number of nodes you need.

Finally in the [global] section of the config file we need to tell StarCluster to use this template:

[global]
DEFAULT_TEMPLATE=smallcluster

Start the Cluster

OK, so let's fire up the cluster with the command:

➜ starcluster start smallcluster
StarCluster - (http://web.mit.edu/starcluster) (v. 0.9999)
Software Tools for Academics and Researchers (STAR)
Please submit bug reports to starcluster@mit.edu

>>> Using default cluster template: smallcluster
>>> Validating cluster template settings...
>>> Cluster template settings are valid
>>> Starting cluster...
>>> Launching a 2-node cluster...
>>> Launching master node (ami: ami-9f6ed8f6, type: cg1.4xlarge)...
>>> Creating security group @sc-smallcluster...
Reservation:r-d03ff8b5
>>> Launching node001 (ami: ami-9f6ed8f6, type: cg1.4xlarge)
SpotInstanceRequest:sir-cdc0bc12
>>> Waiting for cluster to come up... (updating every 30s)
>>> Waiting for open spot requests to become active...
...
>>> Configuring cluster took 1.978 mins
>>> Starting cluster took 15.578 mins

And we can ssh into it via:

➜ starcluster sshmaster -u ubuntu smallcluster
...

Using the Cluster

CUDA 5 comes with a number of exciting new features, especially for multi-GPU programming (GPUDirect™), and in the next blog post I will show some of them, so stay tuned.
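In the meantime, here is a minimal sketch of my own (not code from the upcoming post) that enumerates the GPUs on a node and reports whether each pair supports peer-to-peer access, the mechanism GPUDirect builds on. The file name peer_check.cu is just an illustration; compile it on the master with nvcc (which lives in /usr/local/cuda-5.0/bin):

// peer_check.cu -- enumerate CUDA devices and check pairwise peer access.
// Build and run: nvcc peer_check.cu -o peer_check && ./peer_check
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess) {
        printf("cudaGetDeviceCount failed\n");
        return 1;
    }
    printf("Found %d CUDA device(s)\n", count);

    // Print the name of each device.
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s\n", i, prop.name);
    }

    // Check peer-to-peer access between every pair of devices.
    for (int i = 0; i < count; ++i) {
        for (int j = 0; j < count; ++j) {
            if (i == j) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, i, j);
            printf("  %d -> %d peer access: %s\n", i, j, canAccess ? "yes" : "no");
        }
    }
    return 0;
}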

By the way, after you are finished, don't forget to terminate the cluster via:

➜ starcluster terminate smallcluster

