Matt (on the left in the photo) had previously put the hardware in place, leaving Matteo to lead on installing Eucalyptus on the Cluster Controller (the only public-facing part of the infrastructure and the box which controls everything else in our cloud) and the first two Node Controllers (the boxes on which any virtual machine instances requested by end-users will run).
Because this cloud sits in the Eduserv data centre, which also hosts a variety of other services (some of which are sensitive), we had to partition the cloud machines onto a completely new network, firewalled off from everything else. This led to a couple of brief hiccups in the installation process because that network had neither an existing gateway machine nor a DHCP server.
It turns out that the Cluster Controller acts as a DHCP server and gateway for all the virtual machine instances created by the Node Controllers, but not for the Node Controllers themselves. This took us slightly by surprise. The Node Controllers need access to the Internet in order to download Ubuntu patches, hence the need for a gateway... no gateway, no patches :-(. Rather than assigning one of our limited number of available machines to run as a gateway (you can't use the Cluster Controller as the gateway because the Eucalyptus software takes over control of the routing and NAT tables and allocates everything dynamically), we got round the problem by running a Tinyproxy server on the Cluster Controller and routing HTTP requests from the Node Controllers out through that. We also circumvented the need for a DHCP server by manually assigning IP addresses to each of the Node Controllers.
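For anyone wanting to replicate the workaround, the configuration looks roughly like the sketch below. All of the IP addresses are invented for illustration (our actual addressing is different), but the file locations and directives are the standard ones on Ubuntu:

```
# /etc/tinyproxy/tinyproxy.conf on the Cluster Controller (sketch)
# Listen only on the private cloud network and only allow the Node Controllers
Port 8888
Listen 10.1.1.1
Allow 10.1.1.0/24

# /etc/apt/apt.conf.d/95proxy on each Node Controller
# Point apt at Tinyproxy so Ubuntu patches can be fetched despite there
# being no default gateway on this network
Acquire::http::Proxy "http://10.1.1.1:8888/";

# /etc/network/interfaces on each Node Controller
# No DHCP server on the network, so addresses are assigned statically;
# note the deliberate absence of a "gateway" line
auto eth0
iface eth0 inet static
    address 10.1.1.11
    netmask 255.255.255.0
```

The nice side effect is that the Node Controllers have no general route to the Internet at all; only HTTP traffic that explicitly goes via the proxy ever leaves the partitioned network.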
Apart from that, the rest of the installation went very smoothly and by the end of the day we had a Cluster Controller and two Node Controllers up and running. A brief test of the Web interface indicated that we could instantiate virtual machines on the Node Controllers without any problems.
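For those who prefer the command line to the Web interface, the same smoke test can be run with the euca2ools client. The image ID and keypair name below are placeholders, not the ones we actually used:

```
# Check that the Cluster Controller can see both Node Controllers and
# how many instances of each type they have capacity for
euca-describe-availability-zones verbose

# Create a keypair and launch a test instance (emi-XXXXXXXX stands in for
# whichever Eucalyptus machine image has been registered on the cloud)
euca-add-keypair mykey > mykey.private
euca-run-instances emi-XXXXXXXX -k mykey -t m1.small

# Watch the instance move from "pending" to "running"
euca-describe-instances
```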
We still have to install Eucalyptus on the remaining three Node Controllers and put in place our FAS 3140 SAN cluster, kindly loaned to us by NetApp via Q Associates, of which we'll use about 10 Tbytes for FleSRR. The loan of this kit is very much appreciated.
This work should be completed this week. Then we can move ahead with properly testing our cloud.