Data Centers, Singapore, and Cloud Foundry

by Roger Strukhoff, September 29, 2015
There is a clash of civilizations as the traditional world of data centers is increasingly confronted by the modern world of software development.

I was in Singapore recently, where it is very warm and very humid all the time. Located just 85 miles north of the equator, the nation-state is a top-tier global financial center and, consequently, home to numerous data centers, which generate plenty of heat of their own. Singapore’s perpetually sultry climate is hardly ideal for them.

But Singapore’s strategic location in the heart of Southeast Asia, along with a reputation for corruption-free, efficient business practices built over its 50-year history as an independent nation, makes it one of the more dynamic centers for technology adoption in the world.

I visited Singapore as co-chair of StackingIT, a new event that’s part of the larger DCD Converged series. “Converged” focuses on data centers at several events worldwide every year, and StackingIT is bringing the vast, complex world of open software to the traditional world of data centers. It feels a little bit like a shotgun wedding.


Tradition!

Data center operators are, generally speaking, a traditional lot. They live in a real world characterized by real estate, brick-and-mortar, heavy metal IT, air-conditioning, and the electricity that powers it all. Change is measured in increments, even when success is significant. A cent or two per kWh here and there can make a difference of millions of dollars in annual operating costs. An increase in cooling efficiency of a percent or two can reap similar savings.

The open software world, by contrast, is a world of abstraction. Success with platforms such as Cloud Foundry can mean orders-of-magnitude differences in the length of development cycles, and migration to entirely new “cultures.” Whether you’re discussing PaaS, IaaS, frameworks, languages, virtualization, or containers and microservices, this world is one of quantum change and transformation.

More diverse than one may think

The data center world is hardly monolithic, to borrow a contemporary term from the software world. Operations span a continuum of size, from a few racks to many tens of thousands of computers and related hardware. Some are on-premises, others co-located. They are increasingly virtualized, sometimes containerized, sometimes integrated to some degree with public-cloud services. Bare metal is the way to go for some.

But opinion among data center operators can appear monolithic when it comes to change. Their world is often focused on PUE (power usage effectiveness), the ratio of the total power consumed by a data center to the power consumed by its IT hardware. A theoretically ideal PUE would be 1.0: no overhead whatsoever for lighting, cooling, or anything else. A typical data center today runs a PUE of around 1.7.
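
To make the stakes concrete, here is a rough sketch of the PUE arithmetic; the load, price, and hours figures are illustrative assumptions, not numbers from any particular facility:

    # Rough PUE and cost sketch; all figures are illustrative assumptions.
    it_load_kw = 1_000          # power drawn by servers, storage, and network gear
    facility_load_kw = 1_700    # total power entering the facility, including cooling and lighting

    pue = facility_load_kw / it_load_kw   # PUE = total facility power / IT power
    print(f"PUE: {pue:.2f}")              # 1.70

    # What a 0.1 improvement in PUE is worth at an assumed $0.10 per kWh:
    hours_per_year = 24 * 365
    price_per_kwh = 0.10
    saved_kw = it_load_kw * 0.1           # 0.1 fewer facility watts per watt of IT load
    annual_savings = saved_kw * hours_per_year * price_per_kwh
    print(f"Annual savings from a 0.1 PUE improvement: ${annual_savings:,.0f}")  # $87,600

Scale that same 0.1 improvement up to a facility drawing tens of megawatts and the savings run into the millions of dollars a year, which is why operators obsess over it.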

But PUE doesn’t account for climate. So new data centers being built by the Chinese government in Harbin, for example, are going to require a lot less cooling than those in Singapore.

That said, there are plenty of data centers in hot climates all over the world, as proximity to demand is a much larger factor than pesky heat and humidity.

Vapor IO

Into this world comes some radical thinking by a company called Vapor.io. CEO Cole Crawford was a co-founder of OpenStack. Chief Architect Steven White was Director of Hardware Engineering at Nebula.

These guys talk about the disaggregated data center, about hyper-collapsed and “truly” data-defined data centers. They take an open approach to hardware, designing chamber-style systems that look nothing like traditional racks and promise vast efficiencies in cost and energy usage.

Crawford says Vapor technology enables data centers to be built for around $4 million per megawatt, versus $10 million (and sometimes upward) for conventional technology. He is not a fan of the PUE metric, preferring to focus instead on how much sheer compute power can be squeezed out of a given amount of space.

This thinking comes amidst a present-day world in which the amount of data being created has already reached roughly 4 zettabytes and is doubling every three years. A zettabyte is 1 million petabytes; at present growth rates, the world’s data will reach 1,000 zettabytes (aka a yottabyte) within 25 years.

Data centers already consume 2 to 3 percent of the world’s electricity, and more in localities with a concentration of them, such as Singapore. Imagining the amount of data to be handled increasing by 2^8 (or 256X) over 24 years brings into relief the enormous challenge facing the electrical grid unless vast new efficiencies in technology are achieved.
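
The arithmetic behind that projection is simple enough to sketch; the starting point and growth rate are just the figures quoted above:

    # Roughly 4 zettabytes today, doubling every three years.
    data_zb = 4.0
    years = 24
    doublings = years / 3                  # 8 doublings
    future_zb = data_zb * 2 ** doublings   # 4 * 256 = 1,024 ZB, about one yottabyte
    print(f"{doublings:.0f} doublings -> {future_zb:,.0f} zettabytes")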

Realizing the dimensions of this problem, the Vapor team has responded with its revolutionary design. It has already been put to use in test projects.

Meanwhile, a parallel revolution must occur with software. For its part, Vapor IO announced its Open Data Center Runtime Environment (DCRE) earlier this year. This is the first accepted contribution to the Open Compute Project Foundation, a collaborative effort that aims to leverage best practices and efficiencies gleaned by Facebook over the course of its massive data center operations.


They also announced Vapor Core, providing an intelligence layer on top of Open DCRE. “Vapor can now act as a gateway within the data center management network, mapping inbound connections to server serial consoles and enables further development of the data center,” according to the company.

The Yottabyte and Cloud Foundry

Open technology seems to provide the path for the technology industry as a whole to rise to the challenge of the coming Yottabyte Age. Community-based efforts are not flawless, and execution is always significantly more difficult than theory. But a new age of vendor lock-in would not be the first choice of IT buyers and users, whether at the personal or enterprise level.

This argument, in my mind, always circles back to PaaS and Cloud Foundry, which I see as the essential ingredient and catalyst in deploying modern cloud applications and services. The emergence of containers and microservices only accentuates the need for a platform that can manage them.

Certainly CF is already being used in a serious fashion by some of the world’s greatest companies, including such technology purveyors as Cisco, GE Software, Lockheed Martin, Philips, Orange, and Huawei. It is now intrinsic to the cloud strategies of IBM and HP. It is leaping its traditional private cloud bounds to reach the world of public cloud as well. In its catholicity, it is leveling the playing field, after a fashion, for buyers. You can no longer get fired for buying something other than IBM.

Commoditization?

Data center operators whom I met in Singapore—and at an earlier StackingIT event in San Francisco—often advocate for a “single SKU,” i.e., a more commoditized approach to the hardware they need to purchase in increasing amounts.

They’re correct in not wanting to get locked into various proprietary and expensive ways of building a better data center. But it’s unclear to me how much understanding exists about the need to transform the concept of a data center itself.

The idea of the software-defined data center (SDDC) has gained some traction in recent years, as virtualized deployment of widely distributed apps and services has become more common. Can you say “Hadoop”? But issues of security in general, compliance in particular, and data sovereignty quickly emerge to slow conversations about remaking the data center world into one big, distributed piece of software.

Why such growth?

It’s worth examining why today’s already massive data flows are expected to keep growing so sharply. Video and mobility are the two key protagonists today, worldwide. In fact, many developing countries (such as those in Southeast Asia) are experiencing a startling rise in data creation and consumption as their IT infrastructures leap from a thinly provisioned PC world to massive smartphone use.

But we ain’t seen nothing yet. The Internet of Things (IoT) will be the true heavy in this scene. I’m reminded again of automobiles versus chewing gum. Both businesses are wildly successful—certainly the Wrigley family had no reason to be shy around the Ford family a generation ago.

Today, the chewing gum (i.e., small sensors with small signals) is starting to become more popular. Within a generation, the many tens of billions of sensors deployed in a global IoT will be transmitting as much data as, if not more than, the larger personal and business systems.

Data center design will have to evolve as a result. The concept of data centers on the edge will become valid, and the size of these data centers will shrink drastically. Imagine a data center you can hold in your hand, housed unobtrusively along a smart traffic grid, locally processing terabytes of data while sending the small percentage that needs further analysis and storage back to a regional or central facility.
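
A minimal sketch of that filter-and-forward idea, using made-up sensor readings, a hypothetical node name, and an arbitrary threshold (nothing here reflects a real smart-grid deployment):

    # Hypothetical edge node: process raw readings locally, forward only what matters upstream.
    readings = [
        {"sensor": "intersection-12", "vehicles_per_min": 42},
        {"sensor": "intersection-13", "vehicles_per_min": 118},  # unusually heavy traffic
        {"sensor": "intersection-14", "vehicles_per_min": 37},
    ]

    CONGESTION_THRESHOLD = 100  # arbitrary cutoff for "worth sending to the regional facility"

    anomalies = [r for r in readings if r["vehicles_per_min"] > CONGESTION_THRESHOLD]
    summary = {
        "node": "edge-node-07",               # hypothetical identifier
        "readings_processed": len(readings),  # the raw data stays local
        "forwarded": anomalies,               # the small fraction sent upstream
    }
    print(summary)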

How much electricity-per-terabyte-processed (or some similar measurement) will they need? Can solar panels do the trick for them? And what about all those sensors? How do we power them?

The future is here!

Ruminating about the data centers of the future is more than an academic exercise. The world’s data is increasing quickly at this very moment, mobile devices continue to proliferate, and the IoT is becoming real in Smart Grids and Cities around the world.

Such rumination must include discussion of the software that will enable, influence, and ultimately manage all these data centers. The management team at Vapor certainly endorses this view. And I heard from several executives and developers in Singapore and San Francisco who are focused intensely on the software imperative. But, as stated at the top of this article, I also sense a shotgun wedding of sorts. StackingIT is bringing together the two giant worlds of software development and data center operations.

As with the ongoing DevOps discussions within the software world, there is one side that is all in and another that remains to be convinced. Operations professionals tend to be in the latter camp. They do, after all, live in the real world. Their conundrum is that their real world is changing quickly, dramatically, and inevitably.