Connecting Technology and Business.

The Birth of Cloud Computing

Over the last 10 years we have been moving towards consuming technology in the same way we consume gas or electricity. A few names have been put forward for this model: Utility Computing, Grid Computing, Cloud Computing and, to a certain extent, even outsourcing or hosting. All of these mean more or less the same thing; however:

Utility and grid are not new terms and have been in use for many years.
Outsourcing became a dirty word in the late 1990s, when many businesses failed to realise the perceived benefits and went through a period of insourcing.
Hosting has been delivered successfully for many years, from renting rack space in someone else's Data Centre (co-location) to having a vendor provide a server for a business's use (e.g. virtual or physical web servers), with the business paying a monthly or annual fee for the service.
So is Cloud simply a rebranding of what we already have? To a certain extent it is exactly that: a reinvention of what we have already used for many years. However, there is more than just one element to a Cloud solution.

In the beginning…

To many, the first real use of computers in business was the huge Mainframes that powered the world's largest businesses in the 1960s and 70s. Mainframe technology can actually trace its roots back to the code-breaking machines of WWII, such as Colossus, widely regarded as the first programmable electronic computer. So why are Mainframes so important to understanding Cloud? The answer becomes quite apparent: nearly every component that a Cloud service requires existed in the computing environments of the late 60s and early 70s!

What is a Mainframe?

Mainframes can be described very simply as a core group of required components:
- CPU (the processor that is the thinking part of a computer)
- Memory (where calculations and results are kept while in use)
- Storage (long-term storage of files and documents, originally on large tape reels, now primarily disks)
- Network (allowing one computer to talk to another)
- Operating System (the chief program that tells all the components how to work together)
- Programs (individual pieces of software designed to perform a certain task)
So what has changed over the years? Arguably very little. All those components used then are still requirements now. They are just faster, cheaper, and far more widely available than ever before.

The problem with Mainframes...

Size - Many took up entire floors just to be able to run the monthly accounts.
Cost - If your business had anything much more sophisticated than an electronic typewriter, you would have had to invest a massive amount of money even for basic functionality.
Complexity - Not by modern-day standards, but you needed experienced and costly engineers to run, maintain and, even worse, repair them.

How were they used at the time?

Now to the interesting part! Due to the size, cost, complexity and overhead, many office buildings or campuses had a single Mainframe and shared out its computing power to the tenant businesses. Let's put that another way: you shared computing resources with other businesses on a single platform, paid a percentage towards the costs, and had IT services delivered to your office and users with no upfront investment or the associated cost of building your own infrastructure. Sound familiar?

Virtualisation was born

By the late 1960s it just seemed crazy to have all this computing power dedicated to running a solitary program, especially when loading the program could take many hours. A new idea was developed whereby a Kernel program was built onto the Mainframe hardware first. The Kernel's job was to talk to each of the programs and book a time slot for each one to take turns using the computing resources, thus ensuring the best possible use of the hardware. This was imaginatively called time sharing. That Kernel is what we would now call a Hypervisor, the basic component of a virtual environment. In the early 1970s IBM coined the phrases 'Virtual Machine' and 'Virtualisation'.
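The time-slot booking described above can be sketched as a simple round-robin scheduler. This is an illustrative toy, not how any historical Mainframe kernel was actually implemented; the program names and time-slice sizes are invented for the example.

```python
from collections import deque

def time_share(programs, quantum):
    """Round-robin time sharing: each program gets up to `quantum`
    units of the single CPU per turn until all its work is done."""
    queue = deque(programs.items())  # (name, remaining work units)
    schedule = []                    # the order in which slots were granted
    while queue:
        name, remaining = queue.popleft()
        used = min(quantum, remaining)
        schedule.append((name, used))
        if remaining > used:
            # Unfinished work goes to the back of the queue for another turn
            queue.append((name, remaining - used))
    return schedule

# Three 'programs' sharing one Mainframe, 2 units of CPU per slot
print(time_share({"payroll": 5, "inventory": 3, "billing": 2}, quantum=2))
```

Each program advances a little on every pass, so no single job monopolises the hardware while others wait to load, which is the whole point of time sharing.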

Why did we move away from Mainframes?

All those massive Mainframes running all those programs could never be sustainable, so it was deemed a better idea to have a smaller computer that could happily run a program or two for a business and sit in a cupboard or under a desk. Thus the modern-day server was born.

The problem with modern day servers...

Roll forward through the years and we have new problems: those single-box servers have multiplied. All of a sudden we need big, costly rooms that fill up with more and more servers, each running a new program, each requiring costly expertise to run and repair, and to deliver to the business the key applications it needs to function in modern times.
A solution to this problem becomes clear: wouldn't it be a fantastic idea to build one big computer that replaces all the smaller servers, virtualise it, and worry about just that one physical piece of hardware? In essence, virtualisation returns us to the days of the Mainframe; we have gone full circle.

Is Cloud virtualisation?

There is much talk, and indeed many incorrect statements, that Cloud is virtualisation and vice versa. So is virtualisation an exciting new technology? Is it a ground-breaking solution to business problems? Or is it in fact a 40-year-old solution that has been in constant use globally and has enjoyed new prominence over the last ten years? Modern virtualisation technologies are far more advanced, with many more features, than those of decades past, and at best they form part of a Cloud solution. Something that needs to be made clear, though, is that you do not have to have any form of virtualisation to build a Cloud solution; it is simply one component. What is true, however, is that virtual technologies are cheaper, faster and far more widely available now than ever before.
- Stuart James - Cloud - It is not a nebulous concept - a Whitepaper