Forward Thinking

The emergence of technologies such as Complex Event Processing (CEP) and the growing take-up of BPM (business process management) and BAM (business activity monitoring) have called into question whether it continues to make sense to treat BI as a standalone solution. Scanning the event processing blogosphere, event processing consultant Tim Bass of SilkRoad makes the point very simply: “CEP, BI and BAM are simply today’s buzz words for IT processes that can be implemented in numerous ways to accomplish the very similar ‘things’… to take raw ‘sensory data’ and turn that data into knowledge that supports actions that are beneficial to organizations.”

Traditionally, BI was backward-focused, applying analytics to historic trends because of limitations in processing power and storage that today’s virtualization technologies have since made a mockery of.

Although the idea of converging BI with more current or forward-looking approaches has typically been associated with business issues such as sales trends, the same idea redounds back to IT in the data center. It may help to analyze historic usage patterns for repeatable events, such as the closing of the books at the end of a reporting period, but what happens when your company introduces a new product like an iPhone and is not prepared for the onslaught? At that point, historical patterns provide scant insight at best into a phenomenon that would be judged unpredictable.

It was with that in mind that we spoke with BMC today on the fruits of its recent acquisition of ProactiveNet, a tool that adopts a self-learning approach to IT infrastructure performance. By, in effect, “teaching itself” about usage patterns and changes to infrastructure and utilization, it projects those patterns into the future to detect potential threats to service levels before they erupt. Applied to the problem of maintaining service levels, it complements another product that BMC acquired a decade ago, now called Performance Assurance, that conducts predictive analysis for capacity planning purposes.
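
To make the self-learning idea a bit more concrete, here is a minimal sketch of how such a monitor might work: learn a baseline for each hour of the week, then flag readings that stray far outside it. This is purely illustrative; the hour-of-week bucketing and the three-sigma threshold are our own assumptions, not a description of how ProactiveNet is actually built.

```python
# Illustrative sketch only; not BMC ProactiveNet's actual algorithm.
# Learns a per-hour-of-week baseline for a metric (e.g., CPU utilization)
# and flags observations that drift well outside the learned range.
from collections import defaultdict
from statistics import mean, stdev

class SelfLearningBaseline:
    def __init__(self, threshold_sigma=3.0):
        self.history = defaultdict(list)    # hour-of-week -> observed values
        self.threshold_sigma = threshold_sigma

    def learn(self, hour_of_week, value):
        """Record an observation for the given hour-of-week bucket."""
        self.history[hour_of_week].append(value)

    def is_anomalous(self, hour_of_week, value):
        """Return True if the value deviates sharply from the learned baseline."""
        samples = self.history[hour_of_week]
        if len(samples) < 5:                # not enough history to judge
            return False
        mu, sigma = mean(samples), stdev(samples)
        return sigma > 0 and abs(value - mu) > self.threshold_sigma * sigma

# Usage: feed six weeks of history, then test a new reading.
monitor = SelfLearningBaseline()
for week in range(6):
    monitor.learn(hour_of_week=40, value=55 + week)      # roughly 55-60% CPU, same hour each week
print(monitor.is_anomalous(hour_of_week=40, value=92))   # True: a potential service-level threat
```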

For now, these tools are deployed for specialized purposes when paired with specific systems management consoles for which interfaces have been developed. But in the long run, the uses for predictive modeling could be endless, such as whenever any change is made to IT infrastructure. And, ideally, such predictive analyses should be translatable to higher-level views, so that a business process such as order fulfillment could be forward tracked to see whether a new promotion becomes so successful that it kills service levels. Likewise, if your organization exposes a web service and offers a service level commitment, you would want to know whether current usage patterns are likely to lead to an SLA compliance issue downstream. And all this ultimately impacts capacity management, which shouldn’t be a separate process.

It’s an ideal scenario for SOA, because you don’t necessarily want to run predictive analyses constantly; they soak up significant overhead. The ultimate solution is to have predictive analyses of IT infrastructure service levels and capacity requirements available as services that can be invoked dynamically, triggered by business rules and policies.
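
For the sake of illustration, here is a rough sketch of what policy-triggered predictive analysis as a service might look like. The endpoint, event fields, and thresholds are hypothetical; the point is simply that the expensive analysis runs only when a business rule says it should.

```python
# Hypothetical sketch of rule-triggered predictive analysis invoked as a service.
# The endpoint URL, event fields, and thresholds are illustrative assumptions.
import json
import urllib.request

PREDICTION_SERVICE_URL = "http://capacity.example.com/predict"   # hypothetical endpoint

def should_run_prediction(event):
    """Business rule: only invoke the (expensive) predictive service when a
    change is deployed or utilization crosses a policy threshold."""
    return event.get("type") == "infrastructure_change" or event.get("cpu_util", 0) > 0.80

def invoke_prediction(event):
    """Call the prediction service for the affected system over a 30-day horizon."""
    payload = json.dumps({"scope": event.get("system"), "horizon_days": 30}).encode()
    request = urllib.request.Request(PREDICTION_SERVICE_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.load(response)        # e.g., projected headroom and SLA risk

event = {"type": "infrastructure_change", "system": "order-fulfillment"}
if should_run_prediction(event):
    forecast = invoke_prediction(event)
```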

Cracks in the Facade

As reported in today’s Wall Street Journal, Verizon announced it would open up its mobile network next year to non-Verizon phones. Obviously, Google’s agitation for open networks has finally gotten a rise out of the nation’s second-largest carrier. More importantly, this move could signify a major shift in business practices across the industry, as carriers finally ditch their outdated turnkey system practices.

Verizon’s move doesn’t necessarily translate to a literal definition of open networks; the move is more akin to Microsoft publishing the file format interfaces to its Office software products. And, while the step could place the industry more in line with modern practices from the computing sector, there remains one important difference: while computers are commodities, handsets as a rule are not. Yes, voice handsets could be regarded as commodities, but the handset/mobile device market is so diverse that multiple types of products with different functions compete for different demographic slices of the marketplace. That lays waste to the computing industry’s established notion that 90% of the value now resides in the software.

But for mobile carriers, Verizon’s move signals a realization that their true value is the network and the services that the network provides. The impending auction of new bandwidth recovered from analog TV broadcasters underscores that point: with new spectrum, carriers can deliver more services. Yet existing business models that restrict freedom of choice would simply act as a speed bump. With fewer hardware options, customers could not as easily gain access to new models that could take advantage of these new services, and with a restricted market, handset manufacturers would be slower to roll out updated models.

Ergo, there is far more upside for network operators in opening their networks: they gain access to larger markets in which they can sell more valuable services to more customers.

While of course distinctions remain between telco network service providers and their IT counterparts, ZDNet blogger Dana Gardner suggested that recent trends of convergence are likely to promote more cross acquisitions in this space. Verizon’s move is entirely consistent with that.

Consequently, while it’s easy to conclude that Google’s rants for open spectrum might have prompted Verizon’s move, our take is that the advent of new spectrum finally brought the carrier to its senses.

Reverse Curve

Funny how quickly times change. As we hit the conference circuit last spring, we noticed that “recovery” was no longer the topic of the hour. We saw evidence that customers were feeling freer to buy software and that investors were feeling safe enough to invest in software companies.

In the past few months, the laws of gravity, in the form of a spreading subprime mortgage mess compounded by spiraling energy prices, have made themselves more than apparent. And so it was timely that this week, VC Peter Sobiloff of Insight Venture Partners provided some common sense in a Sand Hill blog post on how software companies can weather the downturn.

He states that vendors who are prepared for a downturn can use the time to position themselves for growth when the economy recovers. That reminds us of Wal-Mart’s post-9/11 strategy, when it used lean times to snap up more store sites so it could greet the next upturn with a nice head start. Of course, it didn’t hurt that Wal-Mart’s “Always Low Prices” strategy resonated with consumers whose pocketbooks were pinched. But it also reminds us that there is no silver bullet: Wal-Mart entered the recovery with more stores, but by then it found it no longer had the low-price field to itself, and besides, with the economy booming, consumer tastes grew pickier.

Sobiloff outlines some best practices, starting with the notion that companies should have three plans addressing great, so-so, and poor market conditions. More importantly, KPIs should be flexible, so that companies navigating tough markets are not evaluated by the same benchmarks as in boom times. He also suggests that downturns can pose an excellent opportunity for fine-tuning execution, a suggestion that flies in the face of the human instinct to desperately wring out every last penny to preserve margins. And Sobiloff calls for “componentizing” initiatives, so that you can scale your plans based on the resources you have at your disposal. Obvious examples include adding new functionality or expanding your company’s geographic footprint.

These are just a sampling of Sobiloff’s suggestions, which in essence recommend that companies treat down times as down time, when you can plan, make adjustments, experiment, and not get excessively hung up on the top line. We’d add that the plan should account for the fact that no two economic cycles are identical: on this go-round, Asia is still booming, and may prove more resilient if its domestic markets soak up some of the demand that might otherwise have been export driven. And for software companies, a downturn here might dictate a shift toward cultivating emerging markets.

Nonetheless, Sobiloff’s suggestions dictate a degree of patience that at first blush conflicts with the basic human nature of capitalists, and would obviously never cut it for public companies on Wall Street. But for companies still in the early stages of growth, or for those having been taken private, Sobiloff’s advice could provide a welcome dose of sanity.

So Many Paths to Nirvana

We’ve groused repeatedly about the gaps in the software development lifecycle, or more specifically, that communication and coordination have been haphazard at best when it comes to developing software. Aside from the usual excuses of budgets, time schedules, or politics, the crux of the problem is not only the crevice that divides software development from the business, but the numerous functional silos that divide the software development organization itself.

Software developers have typically looked down on QA specialists as failed or would-be developers; software engineers look down on developers as journeymen at best, cowboys at worst; while enterprise architects wonder why nobody wants to speak to them.

Not only do you have functional silos and jealousies, but the kinds of metadata, artifacts, and rhythms vary all over the map as you proceed through the different stages of the software lifecycle. Architecture deals with relatively abstract artifacts that have longer lifecycles, compared to code and test assets that are highly volatile. And depending on the nature of the business, requirements may be set in stone or in constant flux. No wonder the software delivery lifecycle has often resembled a game of telephone.

A decade ago, Rational pioneered the vision that tools covering different stages of software development belonged together. But it took a decade for the market that Rational created to actually get named – Application Lifecycle Management (ALM). And it took even longer for vendors that play in this space to figure out how the tooling should fit together.

What’s interesting is that, unlike other more thoroughly productized market segments, there has been a wide diversity among ALM providers on where the logical touch points are for weaving what should be an integrated process.
IBM/Rational has focused on links between change management, defect management, and project portfolio management.
Borland’s initial thrust has been establishing bi-directional flows from requirements to change management and testing, respectively.
Serena and MKS have crafted common repositories grafting source code control and change management with requirements.
Compuware attempts to federate all lifecycle activities as functions of requirements, from project management and source code changes to test and debugging.

But what about going upstream, where you define enterprise architecture and apply it to specific systems? That’s where Telelogic has placed its emphasis, initially tying requirements as inputs to enterprise architecture or vice versa. It has now extended that capability to its UML modeler and Java code generation tool through integration with the same repository. What would be interesting would be code generation from BPMN, the business process modeling notation that several years ago joined UML in the OMG family of modeling languages. For now, Telelogic’s Tau can generate UML from BPMN notation, but nothing more direct than that.

In looking at the different approaches by which vendors integrate their various ALM tooling, it’s not just a matter of connecting the dots. The dots that are connected represent different visions of where the most logical intersections in the software delivery lifecycle occur. Should the lifecycle be driven by enterprise architecture, or should we drive it as a function of requirements or testing? Or should we skip the developer stuff altogether and just generate byte code from a BPMN or UML model?

It’s an issue where the opportunity to play God might be all too tempting. The reality is, just as there is no such thing as a single grand unified software development process methodology, there is no single silver bullet when it comes to integrating the tools that are used for automating portions of the application lifecycle.

SOA and Interoperability

Although SOA is an architectural practice that isolates business logic or data from the way you execute or access it, for many, a major benefit is that it is supposed to promote interoperability. That of course is what the ever-expanding list of Oasis, W3C, and various vertical industry standards is supposed to be about.

Of course, we’ve all heard most if not all of this before, as the very idea of “open systems” was premised on interoperability via non-vendor specific interfaces. Back in the client/server era, it was the notion that a relational database on a UNIX server not only abstracted data from the application, but made that data accessible to other applications as well. One look at SAP’s implementation on Oracle put that truism to rest. Then there were others, like Java’s unrequited ideal of write once, run anywhere.

Admittedly, SOA is not necessarily synonymous with interoperability, but when paired with web services standards specifying the syntax by which services are described, listed, and requested, there was hope that service requests originating from a C# application could access a service originally written in Java, or vice versa. The WS-I was created to ensure basic levels of interoperability for SOAP (messaging), WSDL (service description), and WS-Security (token handling).
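
To see why that matters, keep in mind that a conformant SOAP request is just XML posted over HTTP; the consumer neither knows nor cares what language the service was written in. Here is a minimal sketch, with a hypothetical endpoint, namespace, and operation standing in for a WSDL-described service:

```python
# Minimal sketch of language-neutral web service invocation.
# The endpoint, namespace, and operation below are hypothetical.
import urllib.request

SOAP_ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetOrderStatus xmlns="http://example.com/orders">
      <OrderId>12345</OrderId>
    </GetOrderStatus>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    "http://services.example.com/OrderService",           # could be Java, C#, anything
    data=SOAP_ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/orders/GetOrderStatus"})
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))                # XML reply, parseable anywhere
```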

As a recent announcement from Mainsoft and IBM attests, that’s about as far as matters have gone. Mainsoft, which develops tools so you can access Windows and Microsoft .NET applications from non-Microsoft platforms, has written extensions for IBM WebSphere so that you can access content from, or develop content for, Microsoft’s SharePoint portal. In essence, you could use WebSphere Portal to act as surrogate parent for SharePoint when it comes to access control, an arrangement that makes IBM happy enough that it has decided to resell the Mainsoft tool.

But this is nothing like the vision of service-oriented portlet federation embodied in the WSRP (Web Services for Remote Portlets) Oasis standard, which standardizes communications at the portal component level so that portlets can invoke each other remotely across different portal platforms. Because Microsoft has not signed on to WSRP, the Mainsoft solution accesses SharePoint at a coarser level, where SharePoint exposes the entire pane of glass as a single web service that might contain one or more portlets.

We’re not trying to single out Microsoft here; for every Oasis standard, there is enough slip that some vendor’s implementation is inevitably going to conflict with somebody else’s. And so, as we noted several years back, when two vendors observe the same standard the same way, it’s unfortunately so exceptional that they will go to the trouble of issuing a press release about it.

There are clearly other reasons to adopt SOA, as David Linthicum’s recent post on When Not to SOA explained, in a double-negative sort of way. He calls SOA a fit when the enterprise is changing, the architecture is heterogeneous, and the value of change is high. Or as Joe McKendrick described from a recent conversation with Progress CTO Hub Vandervoort, the rate at which IT backlogs dwindle can be a sure indicator of SOA’s success (assuming most of that work involves integrating what’s already there).

We don’t quibble with the notion that SOA can abet integration, because it can reduce or eliminate some of the traditional guesswork. SOA contains the problem because it enforces a declarative approach where the service announces itself and the conditions under which you can connect to it. That’s a great advance over the black box of legacy systems, where behaviors and conditions were hardly self-evident.

We also see several benefits to adopting SOA. As Linthicum noted, it supports change and agility because, if you do SOA right, all the moving parts are loosely coupled (meaning you should be able to swap components in and out when markets or partnerships change). And we see another benefit: the discipline that results when you design using SOA principles.

But we feel that integration alone understates the case, because, as past and current experience shows, standards and interoperability are not synonymous.

We’re Not Dead Yet

After a long silence, BEA began showing its cards after the markets closed this afternoon, and the results were prettier than the Street expected. Q3 2008 earnings were up 59% over a year ago, with earnings per share at $0.14, compared to a paltry $0.01 a year ago. And Q3 wasn’t a fluke, as the steepest gain actually occurred the previous quarter, when earnings hit $0.12.

Before breaking out the champagne, however, keep in mind that these numbers are almost four months old. BEA wasn’t ready to report numbers for Q4, which ended on Halloween. But it predicted the party would continue during Q4, projecting revenues of $420 – 434 million with margins of 27 – 28%, which would be well over its initial projections. And it expects Asia/Pacific to propel BEA growth in FY 09.

“We have seen very significant profitability improvements over the past few quarters that were not visible to Wall Street investors,” boasted CEO Alfred Chuang.

But the cloud in the silver lining is that BEA’s new license sales have stagnated, declining a percentage point over a year ago, and that half its sales remain in its slowest region: the Americas. Aside from restating the books and getting Carl Icahn off its back, BEA’s top priority over the past year was getting firmly into the black.

All this recalls the line from Monty Python and the Holy Grail, “We’re not dead yet.” BEA’s numbers were better than expected, given that before trading closed, its shares had dropped 4% over the day (in case you’re wondering, Oracle’s dropped just over a percent). It’s human nature that the longer you don’t hear anything, the worse you expect things to turn out.

Nonetheless, when Larry Ellison threatened the previous day that Oracle’s spurned $17/share offer was history, and that if BEA returned to the table it would be for less money, another part of human nature kicked in: the desire for blood.

So tomorrow (in all likelihood, today by the time you read this), Larry is probably going to trash talk the numbers. He’ll claim that BEA’s new license sales are still sucking wind. And with conventional wisdom holding that North American technology spending is likely to slow over the coming year, Ellison is going to claim that BEA’s FY09 projections are too rosy. And then he’ll recede to the shadows.

With BEA shares having closed the day barely 3% below that magical $17 mark, we wouldn’t be surprised if Oracle discreetly returns to the table with a slightly sweeter offer.

IBM-Cognos Podcast now online

Until BEA finally releases its numbers later today, the IBM-Cognos deal remains the story that has dominated the conversation all this week. We had the chance to review some of the fallout from the deal by participating in an eBizQ podcast moderated by VP Beth Gold-Bernstein, with Aberdeen Group analyst Michael Dortch and IBM’s data warehousing program director Mark Andrews.

According to Dortch, while Cognos adds more pieces to what he terms IBM’s BI “erector set,” he maintains that “no one vendor has a lock on what business intelligence means.” As for why IBM took the bait, Andrews cited Cognos’ re-design of its product to sit atop a service-oriented architecture. “About four years ago, they actually went through the effort and, really [they were] the only BI vendor in the market that did this.”

You can now read the transcript or listen to the podcast.

Nature Abhors a BI Vacuum

While we’ve gone on record as stating that there’s little use for BI to remain a standalone market, nope, it’s not time to close the patent office.

The good news is that query, reporting, and OLAP have become the commodities that they were meant to be. Nonetheless, the BI market has become a story of two extremes: at one end, highly developed and consolidating enterprise BI, which backs basic query and reporting with enterprise performance management, master data management, customer information management, and of course, information integration and transformation. At the other end are largely rudimentary tools that in many cases are little better than do-it-yourself.

A couple of years ago, the idea of BI 2.0 drew a fair amount of buzz over the goal of making BI more event-driven, which in turn could transform it into a more operational tool. The buzz over BI 2.0 – a.k.a. event-driven BI, operational BI, or real-time BI – provided fresh evidence of the demand for new approaches that would liberate BI from its after-the-fact, historical analysis ghetto. Traditionally, BI was run on separate systems with separate data stores because the high overhead of processing analytic queries would otherwise bring transaction systems to their knees. Today, the combined impacts of virtualization, clustering, increased bandwidth, and of course Moore’s Law have largely made that a non-issue.

Although the vision behind BI 2.0 or whatever it was called added the valuable notion that analytics should bridge current and historic data, we believe that the vision as first conceived framed the issue too narrowly. Specifically, it perpetuated the traditional notion that BI was a separate application silo. In our view, driving BI through current events should simply be another function of your Business Activity Monitoring (BAM) system, because dashboards of current activity are useless if they lack analytics that can tell you why your Key Performance Indicators (KPIs) are suddenly heading south. And if you’re serious about adopting Business Process Management (BPM), analytics should provide the feedback loop to make BPM self-optimizing.
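
As a toy illustration of the feedback loop we have in mind, consider a BAM-style monitor that, when a KPI starts heading south, automatically runs a drill-down analytic rather than simply turning a dashboard widget red. The thresholds and sample data below are invented for the example.

```python
# Illustrative sketch of a BAM-style KPI monitor with an analytic drill-down.
# The KPI, threshold, and sample data are assumptions made up for this example.

def kpi_trending_down(values, window=3, drop_pct=0.10):
    """True if the KPI fell more than drop_pct over the last `window` readings."""
    if len(values) < window + 1:
        return False
    baseline, latest = values[-window - 1], values[-1]
    return baseline > 0 and (baseline - latest) / baseline > drop_pct

def drill_down(orders):
    """Toy analytic: break average cycle time out by region to explain the drop."""
    by_region = {}
    for order in orders:
        by_region.setdefault(order["region"], []).append(order["cycle_time_hrs"])
    return {region: sum(times) / len(times) for region, times in by_region.items()}

ontime_rate = [0.97, 0.96, 0.97, 0.95, 0.90, 0.84]    # on-time order fulfillment KPI
recent_orders = [
    {"region": "Americas", "cycle_time_hrs": 22},
    {"region": "EMEA",     "cycle_time_hrs": 58},     # the outliers dragging the KPI down
    {"region": "EMEA",     "cycle_time_hrs": 61},
]
if kpi_trending_down(ontime_rate):
    print(drill_down(recent_orders))    # surfaces which region is behind the slide
```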

Of course there’s plenty of work ahead to consummate this vision, but the acquisitions by Oracle, SAP, and now IBM leave little doubt that this is the direction in which enterprise BI will head.

But as we noted the other day, this still leaves a yawning gap at the entry level of the market. We had a glimpse of what was possible when Microsoft added OLAP extensions to SQL Server roughly five years ago. While Microsoft commoditized OLAP, we had to wait several more years until Microsoft began putting the pieces in place for Office Business Applications (OBA) to provide the front end, and then to introduce its BAM dashboard, PerformancePoint.

These are all useful steps, but Microsoft’s approach remains too much of a do-it-yourself toolkit that requires significant assembly. Admittedly, BI will always require somebody who knows how to model data, but there are several other pieces that must fall into place to make BI ubiquitous:
1. The tools must be easy to use, automating or eliminating the need for much of the complex integration and transformation that is standard practice for BI.
2. The technologies should be as non-intrusive as possible, and easy enough to minimize the need for application or database developers. (Note: we say minimize because data modeling remains an essential skill if BI analyses are to get credible views of the right data.)
3. It should be fast, which shouldn’t be an issue given today’s bandwidth and Moore’s Law.

Those pieces are now coming together with a vengeance. The accidental emergence of Web 2.0 technologies is providing the means to ease the creation of BI and make it a more dynamic, collaborative tool.

With wikis as your de facto enterprise knowledge management system, mashups as your PDQ application integration approach, and blogs as the new reporting channel, BI analyses can evolve from static reports into living, breathing, collaborative discovery. Imagine the collective wisdom that could result as analytics shared via wiki are mashed up with other data sources or analytics. In effect, you could end up layering new insights and contexts onto your analyses. Then animate this with rich Internet clients, whether Ajax or some of the more deluxe alternatives being proposed by Adobe and Microsoft, and you could start spreading analytics like wildfire.
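
As a trivial illustration of the mashup idea, imagine a forecast that someone has published to a team wiki as a CSV being layered onto another data feed, adding context the original analysis lacked. The data and regional breakdowns here are, of course, made up.

```python
# Toy mashup sketch: layer one shared analysis onto another data source.
# The "wiki" CSV and the regional share feed are fabricated for illustration.
import csv
import io

wiki_csv = """product,forecast_units
widget,1200
gadget,800
"""
regional_share = {"widget": {"Americas": 0.6, "EMEA": 0.4},
                  "gadget": {"Americas": 0.3, "EMEA": 0.7}}

mashup = []
for row in csv.DictReader(io.StringIO(wiki_csv)):
    for region, share in regional_share[row["product"]].items():
        mashup.append({"product": row["product"], "region": region,
                       "forecast_units": round(int(row["forecast_units"]) * share)})

print(mashup)   # the combined view adds regional context the original forecast lacked
```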

The emergence of software appliances and Software as a Service (SaaS) can in turn greatly contain the impact on your IT environment by reducing or eliminating infrastructure or integration requirements. And by taking advantage of 64-bit, multi-core processors and caching, you can crunch sufficiently large volumes of data to make the analyses credible.

No single player has yet put together all the ingredients, but new challengers like QlikTech are helping erode some of the barriers. For instance, QlikTech has adopted an innovative approach to building your data model by automating much of it. It takes a snapshot of your existing schema and uses a relatively simple, SQL-like scripting engine so you can draw your star schemas on the fly. It compresses data by up to 95% so you don’t need to double your network-attached storage appliances to capture the data from your SAP system, and with caching, it allows you to escape the limits of traditional indexes or dimensions so you can synthesize your own views, again on the fly.
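
To give a rough sense of the general technique, rather than of QlikTech’s actual engine, here is a toy sketch of dictionary-encoding a data snapshot in memory so that repeated values are stored only once and any dimension can be aggregated on the fly, with no pre-built cube.

```python
# Rough illustration of the general idea; not QlikTech's actual engine.
# Dictionary-encode a snapshot of a fact table so repeated values are stored
# once, then aggregate along any column on the fly, with no pre-built cube.
from collections import defaultdict

rows = [("DE", "widget", 120), ("DE", "gadget", 80),
        ("US", "widget", 200), ("US", "widget", 150)]   # (country, product, revenue)

def dictionary_encode(values):
    """Store each distinct value once; represent the column as small integer codes."""
    symbols = sorted(set(values))
    index = {value: code for code, value in enumerate(symbols)}
    return symbols, [index[value] for value in values]

countries, country_codes = dictionary_encode([r[0] for r in rows])
products,  product_codes = dictionary_encode([r[1] for r in rows])
revenue = [r[2] for r in rows]

def aggregate(codes, symbols, measure):
    """Group the measure by any encoded dimension: a view synthesized on the fly."""
    totals = defaultdict(int)
    for code, value in zip(codes, measure):
        totals[symbols[code]] += value
    return dict(totals)

print(aggregate(country_codes, countries, revenue))   # {'DE': 200, 'US': 350}
print(aggregate(product_codes, products, revenue))    # {'gadget': 80, 'widget': 470}
```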

While IBM, Oracle, and SAP appear to have sewn up the BI market (let’s not forget about SAS), a vacuum has been created at the low end that is begging to be filled. The good news is that the pieces are in place for innovators to step up to the plate and fill it.

Putting Us Out of Our Misery

Well, it finally happened. Barely a month after SAP announced it was buying Business Objects, and roughly 8 months after Oracle announced the same with Hyperion, IBM has closed the circle by announcing its intent to buy Cognos.

Unless you were living under a rock, the deal shouldn’t have caught you by surprise. Do the math. IBM’s two major enterprise software rivals have bought up two of the BI field’s three Tier 1 pure plays. And ever since acquiring Ascential 2 ½ years ago, IBM has been synthesizing an information server strategy whose prime missing link was business intelligence. Not that IBM and Cognos are strangers: 18 months ago the two concluded a strategic alliance, and today Cognos remains IBM’s prime BI partner – and vice versa.

Wall Street was obviously expecting a deal, given that Cognos’ shares surged almost 40% since last month’s SAP-Business Objects announcement.

IBM vehemently claims the deal was not defensive. Asked what took IBM so long, Software Group senior vice president Steve Mills spoke at length about the due diligence necessary for a $5 billion deal. While there’s little question that this was IBM’s deal to lose, IBM wasn’t the only possible suitor. HP, whose CEO came from Teradata, could have been a dark horse, but this horse would have been extremely dark. Yes, HP offers the Neoview data warehouse, but that’s no substitute for the more comprehensive strategy, converging information management, real-time business activity monitoring, and business process management, that it takes to elevate BI to the next level.

As we stated when Oracle announced its intentions with Hyperion, BI has little reason to remain a standalone market. As we argued with a colleague, the hockey stick phase of BI implementation happened a decade ago, when client/server, and later the web, introduced visual reporting and analysis tools, and when innovations in back-end data transformation provided the added push for takeoff. At that point there was value-add in tools, as there was a need to reconcile different approaches to building analytic data stores. Today that technology has matured to the point where enterprise platforms can federate data sources and reporting tools have grown commoditized, while adjacent disciplines ranging from enterprise performance management to BAM and BPM are beginning to erode the frontiers between historical, current, and future trend analysis.

We would agree with eBizQ’s Beth Gold-Bernstein, who maintains that “Cognos fits very well into the IBM stack.” Five years ago, Cognos revamped what became Cognos 8 as a J2EE- and SOA-based platform that should play quite well with IBM WebSphere and Information Server. She also astutely notes that the deal lacks the feed-forward predictive analyses that Software AG (which just agreed to OEM Cognos 8 into its CentraSite SOA middle tier) and Tibco have. And we concur with Dana Gardner that “IBM has wasted no time nor expense in cobbling together perhaps the global leadership position in data management in the most comprehensive sense,” naming Watchfire, DataMirror, and Princeton Softech as recent examples.

While the deal takes out the last of the Tier One BI pure plays, it does create room, if not a vacuum, for players targeting bottom-up solutions for SMB to fill.

Yes, Microsoft commoditized OLAP many years ago by building it into SQL Server as an option, but Microsoft’s approach is still largely a do-it-yourself jigsaw puzzle that does at least leverage the ubiquitous Office/SharePoint front end. There are more traditional mid-market players like Information Builders, whose BI business grew 13% last year but which has not publicly demonstrated that it is winning new penetration outside its 30-year-old base. And there are next-generation providers like QlikTech, which offers cached data snapshots that are supposed to overcome the limitations of traditional OLAP cubes.

In fact, an argument could be made that if you crossed Cognos’ recently acquired Applix technology, which caches data cubes on 64-bit boxes, with IBM’s DataPower appliances (IBM has gone on record saying it will spread the technology beyond its XML firewall roots), presto, you could have a relatively simple enterprise performance management box that you could plunk into an SMB.

But what we’re thinking about is a step beyond all of this. Call it BI 2.0: combine the simplicity of appliances, the performance and growing scalability of caching, and the dynamics of Web 2.0, so that you could compose analytics mashups that bridge the differences between historical, real-time, and predictive analyses. At this point nobody’s yet stepped up to the plate here, but when they do, they’ll definitely find a highly receptive audience.

Nope, it’s Not a Gphone

Yesterday, Google’s announcement of Android headlined the blogosphere, as much for what it was as for what it wasn’t. For us, Google’s announcement took us a few steps down memory lane, back to an era when software appliances were better known as turnkey systems.

Thanks to Moore’s Law, it is taken for granted today that the brunt of the value in an IT system lies in the software. In other words, once hardware became cheap or commoditized enough, you could cultivate a market for commercial software.

But until the early 90s, you couldn’t always take that for granted. If your application involved heavy number crunching, such as developing mechanical or electrical designs (computer-aided design, or CAD; electronic design automation, or EDA), or grinding out a capacity-constrained schedule for a large factory, you probably had to buy a specialized box powerful enough to run your software. And you paid a hefty premium for it. By the early 90s, Moore’s Law made a mockery of that argument, and the market started demanding that vendors shake off their proprietary boxes. Arguably, the emergence of a free market for portable software helped create the conditions that enabled the Internet to flourish.

Today turnkey systems have come back, but we don’t call them that. They’re available as self-contained software appliances that handle functions so compute-intensive, such as XML message parsing or perimeter security, that it makes sense to move the computing overhead off your network or server farms. And then there are the devices that are bought turnkey, but not out of choice: the portable communications devices on which enterprises (and society at large) increasingly rely.

In the case of appliances, turnkey packaging adds value because of the integration headaches it eliminates for such resource-hungry systems. By comparison, in the mobile device space, turnkey adds no value, except to the mobile carriers who profit from captive markets. As demand for unlocking the iPhone has demonstrated, the desire for freedom to separate the choice of device, carrier, and service plan is more than idle chatter. The Wall St. Journal’s Walt Mossberg made an especially elegant case:

“Suppose you own a Dell computer, and you decide to replace it with a Sony. You don’t have to get the permission of your Internet service provider to do so, or even tell the provider about it. You can just pack up the old machine and set up the new one…

This is the way digital capitalism should work, and, in the case of the mass-market personal-computer industry, and the modern Internet, it has created one of the greatest technological revolutions in human history, as well as one of the greatest spurts of wealth creation and of consumer empowerment….

So, it’s intolerable that the same country that produced all this has trapped its citizens in a backward, stifling system when it comes to the next great technology platform, the cellphone.”

Amen!

With the FCC planning to auction off new digital spectrum, Google secured a well-publicized victory in persuading the commission to allot a portion to providers that would support a more open systems approach. And since then, we’ve been waiting for the next shoe to drop: when would Google unveil its Gphone, or would it step up to the plate and actually bid on bandwidth (or do so through surrogates)?

Instead, yesterday Google announced Android, a first act toward opening a new mobile market that more resembles the mainstream of digital markets. It’s an open source platform for smart mobile devices that’s supposed to provide an alternative to Microsoft and Symbian. Android will include its own OS, an HTML web browser, and middleware, for which third parties are invited to create applications. As colleague Dana Gardner put it, the potential ramifications of Android could shake the PC industry, which is finding itself converging with mobile and home entertainment systems:

“Google with Android and the Open Handset Alliance, however, may blow open a marketplace through a common open platform that can then provide a lot more content, apps, data, media, and services. And that will feed the demand by developers, users, and ultimately advertisers that open platforms be provided on mobile devices.”

Gardner speculates that the atmosphere around Android reminds him of the early days of Java, with its idealistic goal of write once, run anywhere. He adds that the connection is more than coincidence, as Google CEO Eric Schmidt led development of Java while at Sun prior to 1997.

We certainly have no shortage of gripes with U.S. mobile carriers: reliance on a mishmash of standards that are largely incompatible with the rest of the world; coverage that is arguably inferior to that of any other developed nation; and restriction of choices that trap customers by linking hardware and software to carrier and wireless plan. As compute platforms opened up, the market for software exploded; the emergence of smarter mobile platforms combined with new bandwidth could certainly work similar wonders in the mobile space.

But let’s get off our soapbox for a moment and back to reality. Google simply announced an open source mobile platform yesterday. It’s not the first time that an open platform has been proposed for the mobile world (recall the Javaphone?).

We feel that the excitement over Google’s announcement yesterday was a bit overblown, as there is no assurance yet, FCC decision or not, that a Google-style open mobile market will actually materialize. But the measure of the excitement over this rather modest announcement reflects the reality that there is significant pent-up demand for mobile service innovation beyond gimmicks like your circle of five.