Nothing is New Under the Sun Server

As Ecclesiastes said, "there is nothing new under the sun." Last week, we explored how much of the innovation in the tech business is really just the retooling of existing processes, while the real innovation lies in the technology itself, which enables those businesses.

It turns out, even in technology itself, sometimes the newest and most innovative item really is nothing new under the Sun (capitalization intended).

Back in the late 1990s and early 2000s, before the growth of Linux, commodity servers and Google, we used to buy a lot of very expensive computer hardware. I cannot recall precisely what our hardware capital budget was when I worked in financial services, but it was very large.

In 2000-2001, we often bought Sun E-series servers. They took up 8U, or about 1/5 to 1/6 of one of those standard data centre racks. They had up to 8 "boards" you could slide in, front or back, to customize the machine: add a board with CPUs, another with storage, perhaps one with lots of memory. However you configured it, the result was a fairly modular, high-powered computer. But it was a single computer.

[Image: Sun E-4500]

Fast forward just a few years, and Sun's hardware business had some problems. It turns out that it is a lot cheaper to buy 8 computers with 2 CPUs each than one really big computer with 16 CPUs. Not only is it a lot cheaper, it is also a lot more flexible. After all, for less money, I can get 8 individual computers, each of which can be managed and configured differently. One can handle email, another file processing, a third and fourth equities trading, and two more reports and analytics. And if you can get these computers to use commodity Intel chips and motherboards, the costs go way down.

Of course, for that to work, we need software that runs equally well (or better) across multiple computers rather than on one really big computer. The growth of Google and Internet companies of the second wave, which could not afford the IBM, Sun and SGI prices per unit of compute, drove the software side.

Nowadays, very few people write software that requires one big computer to run. Well-designed software runs in parallel on multiple computers, either running each task independently or, if it needs serious power, splitting the load up and joining the results together at the end. Whereas in the 1990s we might have deployed Web servers on one or two powerful, interconnected Sun servers, today we deploy 10 or 20 of them on cheap (or cloud) computers, and if a few fail, who cares?
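
To make that split-and-join pattern concrete, here is a minimal sketch in Python. The word-counting task and the data are made up for illustration, and the "workers" here are local processes; distributed frameworks apply the same idea across many cheap machines.

```python
# A minimal sketch of the "split the load, join the results" pattern.
from concurrent.futures import ProcessPoolExecutor

def count_words(chunk):
    """One independent unit of work: count the words in a chunk of lines."""
    return sum(len(line.split()) for line in chunk)

def split(lines, n):
    """Split the load into roughly n equal chunks."""
    size = max(1, len(lines) // n)
    return [lines[i:i + size] for i in range(0, len(lines), size)]

if __name__ == "__main__":
    lines = ["the quick brown fox jumps over the lazy dog"] * 10_000
    with ProcessPoolExecutor(max_workers=8) as pool:
        partial_counts = pool.map(count_words, split(lines, 8))
    # Join the results together at the end.
    print(sum(partial_counts))
```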

The basic economics of power consumption, cooling and data centre space, though, have not changed, nor has the need for all of this software to talk to each other. After all, two services on the same computer can talk to each other orders of magnitude more rapidly than two services on two different computers.

Enter blades (the computer kind, not the samurai kind). Physically, they look a lot like the old Sun E-class servers, with boards that pop in and out. They even have similar heights (10U for a blade centre vs. 8U for a Sun E-class). The key difference is that instead of configuring a single server with boards, each board is an independent and relatively cheap server that is reliably connected to all of the others (and, via external networks, to the rest of the world).

[Image: HP BladeSystem]

What does all of that have to do with the world circling around again? After all, 16 individual servers in a chassis are very different from a single server with 16 times the power!

The answer comes from application management.

The growth of virtualization has led to significant benefits for just about every company, but at the cost of managing many more servers and some performance degradation. For most companies, the cost is well worth it; otherwise, they would not bother.

The last 18-24 months have seen the growth of lightweight virtualization in the form of containers. As in the old days, each computer runs multiple services; as with modern server virtualization, each service thinks it is running independently.
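
As a minimal sketch of that idea (assuming Docker is installed and the "docker" Python SDK is available; the images and container names are purely illustrative), two services can share one physical host while each behaves as if it had the machine to itself:

```python
# A sketch only: assumes a local Docker daemon and `pip install docker`.
import docker

client = docker.from_env()

# Two services share one physical host, but each gets its own filesystem,
# process table and network stack, so each thinks it is running alone.
web = client.containers.run("nginx:alpine", detach=True, name="web")
cache = client.containers.run("redis:alpine", detach=True, name="cache")

for container in client.containers.list():
    print(container.name, container.status)
```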

The problem becomes, now that we have all of these lightweight containers everywhere, how do we coordinate deploying services across many of them?

A few projects have taken up the gauntlet, notably Kubernetes and Mesos. These projects make lots of independent servers look like one big server to you. You don't have to manage deploying your app to this server, and then this one, and then this one, and then....

Instead you tell the service, "make these eight servers look like one big server", and then say, "deploy my app."
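
As a hedged sketch of that second step, here is roughly what "deploy my app" looks like through the official Kubernetes Python client. The application name, image and replica count are placeholders; the point is that you describe the app once, and the cluster decides which of those eight servers actually run it.

```python
# A sketch only: assumes `pip install kubernetes` and a working kubeconfig.
# Names, image and replica count are placeholders.
from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="my-app"),
    spec=client.V1DeploymentSpec(
        replicas=8,  # eight copies, scheduled wherever the cluster sees fit
        selector=client.V1LabelSelector(match_labels={"app": "my-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "my-app"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="my-app",
                        image="registry.example.com/my-app:1.0",
                    )
                ]
            ),
        ),
    ),
)

# "Deploy my app": the scheduler, not you, picks the servers.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```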

In other words, you deploy applications by making many servers look like a single computer. If you prefer, you make a blade centre look like an E-series server.

Of course, today's options are superior to the old world:

  • I can deploy across multiple servers, whether they are individual or blades, and even across multiple independent blade centres.
  • I can deploy across multiple locations.
  • I can control where the boundaries between these "virtual single computers" are.
  • Most importantly, two blade centres with the same power as one or even two enterprise-class servers are still much, much cheaper.

Nonetheless, there is a delicious irony that modern, container-driven, open-source projects make our brave new world look a lot like our old one.

Summary

The patterns of technology go around in cycles. Understanding the benefits of the "latest" technology often depends on understanding the best technology that once was.

Knowing what worked, what didn't, and why can bring dramatic benefits to your operations today while saving you the time and pain of learning it all anew. Experience is always cheaper to have than to get.

Are your technology operations the best they can be? Do you know which patterns and technologies really fit your particular usage? Ask us to help you, and watch your operations thrive.