What do Smarter Planets and Oil Refineries have in common?

Last week we paid our third visit in as many years to IBM’s Impact SOA conference. Comparing notes: if 2007’s event was about engaging the business, and 2008 was about mastering the basic blocking and tackling needed to get transaction-system-like performance and reliability, this year’s event was supposed to provide yet another forum for pushing IBM’s Smarter Planet corporate marketing. We’ll get back to that in a moment.

Of course, given that conventional wisdom or hype has called 2009 the year of the cloud (e.g., here and here), it shouldn’t be surprising that cloud-related announcements grabbed the limelight. To recap: IBM announced WebSphere Cloudburst, an appliance that automates rapid deployment of WebSphere images to the private cloud (whatever that is; we already provided our two cents on that), and it released BlueWorks, a new public cloud service for whiteboarding business processes that is IBM’s answer to Lombardi Blueprints.

But back to our regularly scheduled program: IBM has been pushing Smarter Planet since the fall. The campaign came in the wake of a period when a rapid run-up and volatility in natural resource prices, along with global instability, prompted renewed discussion of sustainability at decibel levels not heard since the late 70s. A Sam Palmisano speech delivered before the Council on Foreign Relations last November laid out what have since become IBM’s standard talking points. The gist of IBM’s case is that the world is more instrumented and networked than ever, which in turn provides the nervous system we need to make the world a better, cleaner, and, for companies, a more profitable place. A sample talking point: 67% of electrical power generation is lost to network inefficiencies, a figure cited amid the national debate over setting up smart grids.

IBM’s Smarter Planet campaign is hardly anything new. It builds on Metcalfe’s law, which posits that the value of a network grows with the square of the number of users connected to it. Put another way, a handful of sensors provides only narrow slices of disjoint data; fill that network in with hundreds or thousands of sensors, add some complex event processing logic, and now you can not only deduce what’s happening, but do things like predict what will happen or provide economic incentives that change human behavior so that everything comes out copasetic. Smarter Planet provides a raison d’etre for the Business Events Processing initiatives that IBM began struggling to get its arms around last fall. It not only makes use of IBM’s capacity for extreme-scale computing, but also prods the company to establish relationships with new sets of industrial process control and device suppliers that are quite different from its familiar world of ISVs and systems integrators.
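As a rough back-of-the-envelope illustration of that square-law effect, here is a quick sketch (a generic calculation, not anything drawn from IBM’s material):

```python
# Back-of-the-envelope illustration of Metcalfe's law: with n nodes there
# are n*(n-1)/2 possible pairwise links, so a network's potential "value"
# grows roughly with the square of the number of nodes.
def pairwise_links(n: int) -> int:
    """Number of distinct connections possible among n networked sensors."""
    return n * (n - 1) // 2

for sensors in (5, 100, 10_000):
    print(f"{sensors:>6} sensors -> {pairwise_links(sensors):>12,} possible links")
```

Five sensors give you ten possible links; ten thousand give you roughly fifty million, which is the whole argument for filling in the network.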

So, if you instrumented the grid, you could take advantage of transient resources such as wind that might be gusting in the Dakotas this hour and in the Texas Panhandle the next, so that you could even out generation to the grid and supplant more expensive gas-fired generation in Chicago. Or, as described by a Singaporean infrastructure official at the IBM conference, you can apply sensors to support congestion pricing, which rations scarce highway capacity based on demand; the net result is that it ramps prices up to what the market will bear at rush hour and funnels those revenues into expanding the subway system (too bad New York dropped the ball when a similar opportunity presented itself last year). The same principle could make supply chains far more transparent and demand-driven, with real-time predictive analytics, if you somehow correlate all that RFID data. The list of potential opportunities to optimize consumption of resources in a resource-constrained economy is limited only by the imagination.
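As a minimal sketch of the congestion-pricing logic, assuming a simple occupancy-indexed toll (the thresholds and multipliers below are made up for illustration and are not Singapore’s actual scheme):

```python
# Toy demand-based congestion pricing: the measured road occupancy drives
# the toll. Thresholds and rates are invented purely for illustration.
def congestion_toll(occupancy: float, base_toll: float = 2.0) -> float:
    """Scale the toll with measured occupancy (0.0 = empty road, 1.0 = jammed)."""
    if occupancy < 0.5:          # free-flowing traffic: charge the base rate
        return base_toll
    if occupancy < 0.8:          # approaching capacity: ramp the price up
        return base_toll * 2
    return base_toll * 4         # rush hour: charge what the market will bear

print(congestion_toll(0.3))      # 2.0
print(congestion_toll(0.9))      # 8.0
```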

In actuality, what IBM described is a throwback to common practices established in highly automated industrial process facilities, where closed-loop process control has been standard practice for decades. Take oil refineries for example. The facilities required to refine crude are extremely capital-intensive, the processes are extremely complex and intertwined, and the scales of production so huge that operators have little choice but to run their facilities flat out 24 x 7. With margins extremely thin, operators are under the gun to constantly monitor and tweak production in real time so it stays in the sweet spot where process efficiency, output, and costs are optimized. Such data is also used for predictive trending to prevent runaway reactions and avoid potential safety issues such as a dangerous build-up of pressure in a distillation column.
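For readers who haven’t lived in that world, here is a toy sketch of what closed-loop control means in code; the proportional controller, setpoint, gain, and one-line plant model are all invented for illustration and bear no relation to any real refinery system:

```python
# Toy closed-loop control in the spirit of the refinery example: a
# proportional controller nudges a unit toward a temperature setpoint.
def simulate(setpoint: float = 350.0, gain: float = 0.4, steps: int = 20) -> float:
    temperature = 300.0                      # starting temperature of the unit
    for _ in range(steps):
        error = setpoint - temperature       # measure deviation from target
        heat_input = gain * error            # controller output (proportional only)
        temperature += 0.5 * heat_input      # simplified plant response
    return temperature

print(round(simulate(), 1))                  # converges toward the 350.0 setpoint
```

The point is simply that measurement feeds back into action continuously, which is exactly what Smarter Planet proposes to do on a planetary scale.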

So at base, a Smarter Planet is hardly a radical idea; it seeks to emulate what has been standard practice in industrial process control going back at least 30 years.

Private Cloudburst

To this day we’ve had a hard time getting our arms around just what exactly a private cloud is. More to the point, where does it depart from server consolidation? The common thread is that both involve some form of consolidation. But if you look at the definition of cloud, the implication is that what differentiates a private cloud from server consolidation is a much greater degree of virtualization. Some folks, such as Forrester’s John Rymer, fail to see any difference at all.

The topic is relevant because, this being IBM Impact conference time, there are product announcements; in this case, the new WebSphere Cloudburst appliance. It manages, stores, and deploys IBM WebSphere server images to the cloud, providing a way to ramp up virtualized business services with the kind of dynamic response that cloud is supposed to enable. And since it is targeted at managing your resources inside the firewall, IBM is positioning the offering as an enabler for business services in the private cloud.

Before we start looking even more clueless than we already are, let’s set a few things straight. There’s no reason that you can’t have virtualization when you consolidate servers; in the long run it makes the most of your limited physical and carbon footprints. Instead, when we talk private clouds, we’re taking virtualization up a few levels. Not just the physical instance of a database or application, or its VM container, but now the actual services it delivers. Or as Joe McKendrick points out, it’s all about service orientation.

In actuality, that’s the mode you operate in when you take advantage of Amazon’s cloud. In its first generation, Amazon published APIs to its back end, but that approach hit a wall: preserving state over so many concurrent active and dormant connections could never scale. They may be RESTful services, but they are still services that abstract the data services Amazon provides if you decide to dip into its pool.

But we’ve been pretty skeptical up to now about private cloud; we’ve wondered what really sets it apart from a well-managed server consolidation strategy. And there hasn’t exactly been a lot of product out there that lets you manage an internal server farm beyond the kind of virtualization you get with a garden-variety hypervisor.

So we agree with Joe that it’s all about services. Services venture beyond hypervisor images to abstract the purpose and task a service performs from how or where it is physically implemented. Consequently, if you take the notion to its logical extent, a private cloud is not simply a virtualized bank of server clusters, but a virtualized collection of services that are made available wherever there is space and, if managed properly, as close to the point of consumption as demand and available resources (and the cost of those resources) permit.
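A hypothetical sketch of what that service abstraction might look like: callers ask for a capability by name, and a registry decides which physical endpoint serves it based on available headroom. Every name and number here is invented for illustration:

```python
# Sketch of a service abstraction: consumers resolve a service by name and
# never see where it physically runs. Hosts and capacities are made up.
from dataclasses import dataclass

@dataclass
class Endpoint:
    host: str
    free_capacity: float          # fraction of capacity currently unused

REGISTRY = {
    "order-validation": [Endpoint("dc1.example.com", 0.2),
                         Endpoint("dc2.example.com", 0.7)],
}

def resolve(service_name: str) -> Endpoint:
    """Pick the instance with the most headroom; callers only know the service name."""
    return max(REGISTRY[service_name], key=lambda e: e.free_capacity)

print(resolve("order-validation").host)   # -> dc2.example.com
```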

In all likelihood, early implementations of IBM’s Cloudburst, and anything like it that comes along, will initially be targeted at an identifiable server farm or cluster. In that sense, it is only a service abstraction away from what is really just another case of old-fashioned server consolidation (paired with IBM’s established z/VM, you could really turn out some throughput if you already have the big iron there). But taken to its logical extent, a private cloud that deploys service environments wherever there is demand and capacity, freed from the four walls of a single facility, will be the fruition of the idea.

Of course, there’s no free lunch. Private clouds are supposed to eliminate the uncertainty of running highly sensitive workloads outside the firewall. Being inside the firewall will not necessarily make the private cloud more secure than a public one, and by the way, it will not replace the need to implement proper governance and management now that you have more moving parts. That’s hopefully one lesson that SOA – dead or alive – should have taught us by now.