T3N: "An introduction to cloud infrastructure for developers"

Last month, the German magazine T3N published an article that I wrote in English and which my colleague Torsten translated into German. Here is the original text I wrote before translation.

Working with a cloud infrastructure is not yet common practice in the development community, and it is even less so with a local, on-premises, private cloud infrastructure.  Using a cloud infrastructure service requires understanding a few new paradigms.  The value of having such an infrastructure ready to serve your developers' needs is not yet widely understood, but it has a lot to offer.  This article tries to give a few pointers on how to use it and what to expect from it.


For most people, the word cloud generally means the latest over-inflated marketing buzzword, and I must admit that, too often, the word is hijacked by marketing people to put a fresh coat of paint on old applications.  Clearly, this is not what the cloud really is.  Cloud computing is, before anything else, the transformation of the computing economy towards a service model [1]: a world where one can purchase computing resources as simply as one uses electricity, or any other utility service.  That is the real promise of cloud computing.  For it to be successful, one should be able to choose freely between providers.  This means that providers should not each impose their own particularities, but that a common way of addressing any provider is defined.  Free software plays a key role in this, but that is not the subject of this article.  True choice in this world also means being able to run your tasks on your own hardware if the need arises; this is what the concept of an interoperable private cloud brings.

For the past number of years, developers have been learning to work jointly on projects. Rethinking one's development process is something every developer does on a continuous basis.  Procedural programming evolved into object-oriented programming.  File sharing evolved into version control, then into distributed version control. All of this improved developers' ability to share their work, to work on bigger projects, and to ensure proper processes.  However, one thing has not changed much over the years: the fight for hardware capacity to build and test the code they are producing. A corollary of this is that once someone has got hold of some new hardware, their first task is to re-assemble the tool set that makes it possible to work on it. This takes time, is prone to error, and has a steep learning curve. It is also highly inefficient in terms of wasted computing cycles: one does not spend all one's time compiling...  To solve this issue, a first solution was to share the processing power by creating a grid of compile servers. While this is nice, it forces everyone to use the same tool chain, which is great until products and releases become numerous.

In-house cloud computing adds a supplementary abstraction layer to simple compile grids: the machine image. Each user of a cloud computing infrastructure can prepare their own machine images the way they want, and instantiate them as many times as the infrastructure permits.  Since the infrastructure is multi-tenant, machine images from multiple users can run on the same infrastructure, each within its own virtual environment, from network to application, with isolation technologies preventing unwanted interactions.  Once a user is approved to use the infrastructure, they can start and stop instances of the images they want with no overhead. The same infrastructure can be shared not only amongst developers, but also with any other IT user who may need some processing capacity.

The best-known and most widely used public cloud infrastructure is undoubtedly Amazon's Elastic Compute Cloud (EC2).  As there is no standard cloud API at the moment, many open source projects have chosen to use the EC2 API as a reference and have implemented cloud infrastructure technologies around it. This is the case, for example, of the Nimbus [2], OpenNebula [3] and Eucalyptus [4] projects.
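To make this a bit more concrete, here is a minimal sketch, using the Python boto library, of how one could point an EC2-style client at a private Eucalyptus endpoint rather than at Amazon. The host name, port, path and credentials are placeholders for whatever your own installation gives you.

    # Minimal sketch: talking to an EC2-compatible private cloud with boto.
    # The endpoint, port, path and credentials are placeholders to adapt
    # to your own installation.
    import boto
    from boto.ec2.regioninfo import RegionInfo

    region = RegionInfo(name="my-private-cloud",
                        endpoint="cloud.example.com")    # your cloud front end

    conn = boto.connect_ec2(aws_access_key_id="YOUR-ACCESS-KEY",
                            aws_secret_access_key="YOUR-SECRET-KEY",
                            is_secure=False,
                            region=region,
                            port=8773,                   # typical Eucalyptus port
                            path="/services/Eucalyptus")

    # The same calls work against Amazon EC2 itself, since the API is shared.
    print(conn.get_all_zones())

The same connection object is reused in the small sketches further down.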

From the user's point of view, this API makes a few assumptions which can be a bit surprising to first-time users:

  • instances are not persistent across a reboot,
  • machine images are the only persistent form of a machine,
  • administrators define base architectures for machine image allocation (32- or 64-bit, RAM, etc.), and users decide which architecture they need,
  • persistent storage is available through other mechanisms.

The idea is that users prepare templates of virtual machines (called machine images), which can be instantiated as many times as needed, in a very object-oriented model.  Users can modify a machine image and save the result as a new machine image: this operation is called re-bundling.
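As a rough illustration of that object-oriented model, and reusing the boto connection sketched above, listing the available machine images and instantiating one could look like the following; the image id and key pair name are made-up examples.

    # Sketch: machine images are the persistent objects, instances are
    # created from them.  'conn' is the connection sketched earlier;
    # the image id and key pair below are hypothetical.
    for image in conn.get_all_images():
        print("%s %s %s" % (image.id, image.location, image.architecture))

    reservation = conn.run_instances("emi-12345678",
                                     instance_type="m1.small",
                                     key_name="my-keypair")
    instance = reservation.instances[0]
    print("%s is %s" % (instance.id, instance.state))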

When an image is instantiated, it is possible to pass it some data, which it will use to initialize itself.  Many people never re-bundle an image. They use a base OS image (such as the freely available yet maintained Ubuntu Server Edition) and invoke a script that customizes the instance based on the data that was passed to it.  As an image is not modifiable, updating an image means creating a new image.  Using the script-plus-data initialization method allows switching to the most up-to-date image provided, without having to go through the tedious process of reconfiguring and re-bundling manually each time.
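Here is a small sketch of that pattern, still with boto: the customization script is handed over as user data at launch time, and the base image is assumed to run user-data scripts at first boot, as the Ubuntu and alestic images of the time do. The image id and packages are only examples.

    # Sketch: launching a stock base image and customizing it through user
    # data instead of re-bundling.  Assumes 'conn' from the earlier sketch
    # and an image that executes user-data scripts at first boot.
    customization_script = """#!/bin/bash
    # hypothetical first-boot customization: install a build tool chain
    apt-get update
    apt-get install -y build-essential git-core
    """

    reservation = conn.run_instances("emi-12345678",         # made-up image id
                                     instance_type="m1.small",
                                     key_name="my-keypair",
                                     user_data=customization_script)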

In terms of persistent storage, instances can connect to two types of services, sketched in code after the list:

  • Simple Storage Service (S3), which is file-level storage, accessed very much like HTTP.  It is on S3 that machine images reside, among other things.
  • Elastic Block Storage (EBS), which is block-device storage that one can format and mount the way they want.
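Both services can be driven programmatically. The sketch below creates and attaches an EBS volume and then pushes a file to an S3/Walrus bucket with boto; the zone, device name, bucket name and Walrus endpoint are assumptions to adapt to your own setup, and 'conn' and 'instance' come from the earlier sketches.

    # Sketch: block storage (EBS) and file-level storage (S3/Walrus) from boto.
    # 'conn' and 'instance' are assumed from the earlier sketches.
    import boto
    from boto.s3.connection import OrdinaryCallingFormat

    # Block storage: a 10 GB volume attached to a running instance, where it
    # shows up as a raw block device to format and mount as you wish.
    volume = conn.create_volume(10, "my-cluster")     # size in GB, availability zone (placeholder)
    volume.attach(instance.id, "/dev/sdb")

    # File-level storage: store an object in a bucket over an HTTP-like interface.
    s3 = boto.connect_s3(aws_access_key_id="YOUR-ACCESS-KEY",
                         aws_secret_access_key="YOUR-SECRET-KEY",
                         is_secure=False,
                         host="cloud.example.com",    # Walrus endpoint (placeholder)
                         port=8773,
                         path="/services/Walrus",
                         calling_format=OrdinaryCallingFormat())
    bucket = s3.create_bucket("my-backups")
    key = bucket.new_key("build-artifacts.tar.gz")
    key.set_contents_from_filename("build-artifacts.tar.gz")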

Database storage can also be provided as a service in the cloud.  Amazon provides SimpleDB, which is a key/value type of database, but it does not yet appear to be widely used.  People still tend to deploy their own database services, either traditional (e.g. MySQL/PostgreSQL) or not (e.g. CouchDB).

If you are interested in learning some best practices for using Debian and Ubuntu base images within a cloud infrastructure environment, the site alestic.com [5] is the reference in this area.  A key sentence to remember when using cloud computing is that one should build for failure, not for resilience. Instead of building an architecture with very complex fail-over scenarios, try to think in terms of the redundancy that can be attained when you can inexpensively launch multiple machines, a bit like what RAID is to hard disk resilience [6].
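As a toy illustration of that mindset, and still assuming the boto connection from the earlier sketches, one can simply ask for several identical instances up front and top the group up when one disappears, rather than hardening a single machine; the image id is, as before, made up.

    # Sketch: redundancy through cheap, identical instances rather than
    # complex fail-over.  'conn' is the connection sketched earlier.
    DESIRED = 3
    IMAGE = "emi-12345678"          # made-up image id

    # Start several identical workers in one call.
    conn.run_instances(IMAGE, min_count=DESIRED, max_count=DESIRED,
                       instance_type="m1.small")

    def keep_fleet_alive(conn):
        """Trivial supervisor: count what is still running and top up."""
        running = [i for r in conn.get_all_instances()
                     for i in r.instances if i.state == "running"]
        missing = DESIRED - len(running)
        if missing > 0:
            conn.run_instances(IMAGE, min_count=missing, max_count=missing,
                               instance_type="m1.small")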

As I said earlier, building your own cloud infrastructure is entirely possible using open source software. Since all the projects I have mentioned use the same API, the behaviour described above should remain very similar.  Because Ubuntu has chosen to integrate with the Eucalyptus project to offer "Ubuntu Enterprise Cloud" [7] as part of Ubuntu Server Edition, this is obviously the architecture I am most familiar with.

The architecture of Eucalyptus has been designed as a modular set of five simple elements that can easily be scaled:

  • Cloud Controller (CLC)
  • Walrus Storage Controller (WS3)
  • Elastic Block Storage Controller (EBS)
  • Cluster Controller (CC)
  • Node Controller (NC)


Each element acts as an independent web service that exposes a Web Services Description Language (WSDL) document defining the API used to interact with it. It is a typical web service architecture.

The Cloud Controller (CLC) is the most visible element of the Eucalyptus architecture, as it provides the interface with which users of the cloud interact. The CLC also talks to the Cluster Controllers (CCs) and makes the top-level choices for allocating new instances.

The Walrus Storage Controller (WS3) and the Elastic Block Storage Controller (EBS) are functional equivalents of the corresponding storage interfaces in Amazon EC2.  WS3 can run on any machine you want but is generally installed with the CLC. EBS runs on the same machine(s) as the Cluster Controller and is configured automatically when the Cluster Controller is installed.

The Cluster Controller (CC) operates as the go-between between the Node Controllers and the Cloud Controller. As such, it needs access to both the Node Controller and Cloud Controller networks. It decides which Node Controller will run a given machine instance. It is also in charge of managing any virtual networks that the machine instances run in, and of routing traffic to and from them.  In simple deployments, the CLC and CC operate on the same machine.

The Node Controller (NC) software runs on the physical machines on which the machine images will be instantiated. The NC software's role is to interact with the OS and the hypervisor running on the node, as instructed by the Cluster Controller.

Deploying Ubuntu Enterprise Cloud on top of Ubuntu Server Edition can be done with Ubuntu 9.10 directly from the server installer, so it is very simple to test out and see if it can be of help; all the projects mentioned also have very thorough documentation on how to deploy them on many Linux distributions.

I am quite sure that cloud computing will soon become widely used, as the benefits it brings in terms of scalability and flexibility are not negligible.  The types of infrastructure it allows us to build are slightly different from traditional infrastructures, but you can expect benefits similar to those you saw when you switched from procedural to object-oriented programming.  I hope this read has provided you with some clues about what you can use it for and how.
