Category Archives: OS/Platforms

SpringSource: Back to our regularly scheduled program

With the ink not yet dry on VMware’s offer to buy SpringSource, it’s time for SpringSource to get back to its regularly scheduled program. That happened to be the unveiling of the Cloud Foundry developer preview: the announcement SpringSource had planned to get out before the program was interrupted by the wheels of finance.

Cloud Foundry, a recent SpringSource acquisition, brings SpringSource’s evolution from niche technology to lightweight stack provider full circle. Just as pre-Red Hat JBoss was considered a lightweight alternative to WebSphere and WebLogic, SpringSource is positioning itself as a kinder and gentler alternative to the growing JBoss-Red Hat stack. And that’s where the VMware connection comes into play, but more about that later.

The key, of course, is that SpringSource rides on the popularity of the Spring framework, around which the company was founded. The company claims the Spring framework now shows up in roughly half of all Java installations. Its success is attributable to the way Spring simplifies deployment to Java EE. But as popular as the Spring framework is, as an open source company SpringSource monetizes only a fraction of all Spring framework deployments. So over the past few years it has been surrounding the framework with a stack of lightweight technologies that complement it, encompassing:
• the Tomcat servlet container (a lightweight Java server) and the newer dm Server, which is based on OSGi technology;
• Hyperic, as the management stack;
• Groovy and Grails, which provide dynamic scripting native to the JVM and an accompanying framework to make Groovy programming easy; and
• Cloud Foundry, which provides SpringSource the technology to mount its offerings in the cloud.

From a mercenary standpoint, putting all the pieces out in a cloud enables SpringSource to more thoroughly monetize open source assets that otherwise generate revenue only through support subscriptions.

But in another sense, you could consider the SpringSource’s Cloud Foundry as the Java equivalent of what Microsoft plans to do with Azure. In both cases, the goal is Platform-as-a-Service offerings based on familiar technology (Java, .NET) that can run in and outside the cloud. Microsoft calls it Software + Services. What both also have in common is that they are still in preview and not likely to go GA until next year.

But beyond the fact that SpringSource’s offering is Java-based, the combination with VMware adds yet another, more basic differentiator. While Microsoft Azure is an attempt to preserve the Windows and Microsoft Office franchise, when you add VMware to the mix, the goal on SpringSource’s side is to make the OS irrelevant.

There are other intriguing possibilities with the link to VMware, such as the possibility that some of the principles of the Spring framework (e.g., dependency injection, which abstracts dependencies so developers don’t have to worry about writing all the necessary configuration files) might be applied to managing virtualization, which, untamed, could become quite a beast to manage. And as we mentioned last week in the wake of the VMware announcement, SpringSource could do with some JVM virtualization so that each time you need to stretch the processing of Java objects, you don’t have to blindly sprawl out another VM container.
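For readers who haven’t worked with the Spring framework, the dependency injection idea is easy to sketch in a few lines of plain Java, no container required; the class names here are purely illustrative, not part of any real API:

```java
// A minimal sketch of constructor-based dependency injection, the core idea
// behind Spring's wiring, shown here without Spring itself.

interface Greeter {
    String greet(String name);
}

class ConsoleGreeter implements Greeter {
    public String greet(String name) { return "Hello, " + name; }
}

// GreetingService declares its dependency in its constructor; it never
// constructs or looks up a Greeter itself. A container (or a test) supplies one.
class GreetingService {
    private final Greeter greeter;
    GreetingService(Greeter greeter) { this.greeter = greeter; }
    String welcome(String name) { return greeter.greet(name) + "!"; }
}

public class DiSketch {
    public static void main(String[] args) {
        // Wiring done by hand here; Spring derives it from configuration metadata.
        GreetingService svc = new GreetingService(new ConsoleGreeter());
        System.out.println(svc.welcome("cloud"));
    }
}
```

Spring’s contribution is automating that final wiring step from configuration files or annotations, so application code never hard-codes its concrete dependencies.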

Fleshing out the Cloud

VMware’s proposed $362 million acquisition of SpringSource is all about getting serious about competing with Salesforce’s Force.com and Google App Engine as the Platform-as-a-Service (PaaS) cloud built on the technology that everybody already uses.

This acquisition was a means to an end, pairing two companies that could not be less alike. VMware is a household name, sells software through traditional commercial licenses, and markets to IT operations. SpringSource is a grassroots, open source, developer-oriented firm whose business is a cottage industry by comparison. The cloud brought together two companies that each faced complementary limitations on their growth. VMware needed to grow out beyond its hardware virtualization niche if it was to regain its groove, while SpringSource needed to grow up and find deeper pockets to become anything more than a popular niche player.

The fact is that providing a virtualization engine, even if you pad it with management utilities that act like an operating system, is still a raw cloud with little pull unless you go higher up in the stack. Raw clouds appeal only to vendors that resell capacity or to large enterprises with the deep benches of infrastructure expertise to run their own virtual environments. For the rest of us, we need a player that provides a deployment environment, handles the plumbing, and marries it to a development environment. That is what Salesforce’s Force.com and Google’s App Engine are all about. VMware’s gambit is in a way very similar to Microsoft’s Software + Services strategy: use the software and platforms that you are already used to, rather than some new environment, in a cloud setting. There’s nothing more familiar to large IT environments than VMware’s ESX virtualization engine, and in the Java community, there’s nothing more familiar than the Spring framework, which – according to the company – accounts for roughly half of all Java installations.

With roughly $60 million in stock options for SpringSource’s 150-person staff, VMware is intent on keeping the people, as it knows nothing about the Java business. Normally, we’d question a deal like this because the companies are so dissimilar. But the fact that they are complementary pieces of a PaaS offering gives the combination stickiness.

For instance, VMware’s vSphere cloud management environment (in a fit of bravado, VMware calls it a cloud OS) can understand the resource consumption of VM containers; with SpringSource, it gets to peer inside the black box and understand why those containers are hogging resources. That provides more flexibility and smarts for optimizing virtualization strategies, and can help cloud customers answer the question: do we need to spin out more VMs, perform some load balancing, or re-apportion all those Spring tc (Tomcat) servlet containers?

The addition of SpringSource also complements VMware’s cloud portfolio in other ways. In his blog about the deal, SpringSource CEO Rod Johnson noted that the idea of pairing VMware’s Lab Manager (that’s the test lab automation piece that VMware picked up through the Akimbi acquisition) proved highly popular with Spring framework customers. In actuality, if you extend Lab Manager from simply spinning out images of testbeds to spinning out runtime containers, you would have VMware’s answer to IBM’s recently introduced WebSphere CloudBurst appliance.

VMware isn’t finished, however. The most glaring omission is the need for distributed caching of Java objects, to provide yet another avenue to scalability. If you rely only on spinning out more VMs, you get a highly rigid, one-dimensional cloud that will not provide the economies of scale and flexibility that clouds are supposed to provide. So we wouldn’t be surprised if GigaSpaces or Terracotta were next in VMware’s acquisition plans.

Private Cloudburst

To this day we’ve had a hard time getting our arms around just what exactly a private cloud is. More to the point, where does it depart from server consolidation? The common thread is that both involve some form of consolidation. But if you look at the definition of cloud, the implication is that what differentiates a private cloud from server consolidation is a much greater degree of virtualization. Some folks, such as Forrester’s John Rymer, fail to see any difference at all.

The topic is relevant because it’s IBM Impact conference time, and with that come product announcements – in this case, the new WebSphere CloudBurst appliance. It manages, stores, and deploys IBM WebSphere server images to the cloud, providing a way to ramp up virtualized business services with the kind of dynamic response that the cloud is supposed to enable. And since it is targeted at managing your resources inside the firewall, IBM is positioning this offering as an enabler for business services in the private cloud.

Before we start looking even more clueless than we already are, let’s set a few things straight. There’s no reason that you can’t have virtualization when you consolidate servers; in the long run it makes the most of your limited physical and carbon footprints. Instead, when we talk private clouds, we’re taking virtualization up a few levels. Not just the physical instance of a database or application, or its VM container, but now the actual services it delivers. Or as Joe McKendrick points out, it’s all about service orientation.

In actuality, that’s the mode you operate in when you take advantage of Amazon’s cloud. In its first generation, Amazon published APIs to its back end, but that approach hit a wall, given that preserving state over so many concurrent active and dormant connections could never scale. They may be RESTful services, but they are still services that abstract the data services Amazon provides if you decide to dip into its pool.

But we’ve been pretty skeptical up to now about the private cloud – we’ve wondered what really sets it apart from a well-managed server consolidation strategy. And there hasn’t exactly been a lot of product out there that lets you manage an internal server farm beyond the kind of virtualization you get with a garden-variety hypervisor.

So we agree with Joe that it’s all about services. Services venture beyond hypervisor images to abstract the purpose and task that a service performs from how or where it is physically implemented. Consequently, if you take the notion to its logical extent, a private cloud is not simply a virtualized bank of server clusters, but a virtualized collection of services made available wherever there is space and, if managed properly, as close to the point of consumption as demand and available resources (and the cost of those resources) permit.

In all likelihood, early implementations of IBM’s CloudBurst and anything like it that comes along will initially be targeted at an identifiable server farm or cluster. In that sense, it is only a service abstraction away from what is really just another case of old-fashioned server consolidation (paired with IBM’s established z/VM, you could really turn out some throughput if you already have the big iron there). But taken to its more logical extent, a private cloud that deploys service environments wherever there is demand and capacity, freed from the four walls of a single facility, will be the fruition of the idea.

Of course, there’s no free lunch. Private clouds are supposed to eliminate the uncertainty of running highly sensitive workloads outside the firewall. Being inside the firewall will not necessarily make the private cloud more secure than a public one, and by the way, it will not replace the need to implement proper governance and management now that you have more moving parts. That’s hopefully one lesson that SOA – dead or alive – should have taught us by now.

Oracle finally gets its database appliance

Thank you, Larry, for finally putting us out of our misery: Oracle has silenced the chattering classes (mea culpa) with a $9.50/share bid for Sun (almost smack dab in the middle between IBM’s original and revised lower bids).

In many ways this deal brings events full circle between Oracle and Sun. The obvious part is that the deal solidifies Oracle’s enterprise stack vs. IBM in that Oracle can now go fully mano a mano against IBM for the enterprise data center, database, platform and all. It also provides a welcome counterbalance to IBM for control over Java’s destiny. While the deal is likely to finally put NetBeans out of its misery, it means that there will be a competition over direction of the Java stack that is borne of realpolitik, not religion.

More importantly, it finally gives Solaris a reason for existence, as it returns to serving as Oracle’s reference platform. In a way, you could say the two companies were twins separated at birth, as both emerged as de facto reference platforms for UNIX in the 80s; the bond was sealed with Sun’s mid-90s purchase of some of Cray’s assets, which finally gave Sun an enterprise server on which Oracle could raise the ante against IBM. Aside from HP’s brief challenge with SAP in the mid 90s, Solaris has always been the biggest platform for Oracle.

But after the dot-com bust and the emergence of Linux, Solaris lost its relevance, as open source provided an 80/20 alternative that was good enough for most dot-coms. That left Sun with an identity crisis, debated much on these pages and elsewhere, as to its next act. Under Jonathan Schwartz’s watch, Sun tried becoming the enterprise counterweight to Red Hat – all the goodness of open source, MySQL too, but with the bulletproofing that Red Hat and SuSE were missing. As we noted a few weeks back: great idea, but not enough to support a $5 billion business.

Now Solaris becomes part of the Oracle enterprise stack – a marriage that makes sense as businesses investing in high end enterprise applications are going to expect umbrella deals. In other words, now Oracle has the complete deal to counter IBM. Oracle in the past has flirted with database appliances and certified implementations – now it doesn’t have to flirt anymore. More importantly, it provides a natural platform for Oracle to offer its own cloud.

The deal protects Sun’s – likely soon to be Oracle’s – hold on the Solaris installed base more than it protects the Oracle database, application, or middleware stack. Basically, UNIX hardware in all its shades is commodity, and more readily replaced than databases or applications. That’s why you saw Sun try to develop a software business over the years: it desperately needed something firmer to anchor Solaris. Oracle seals the deal.

Obviously, this one makes PeopleSoft and Siebel look like walks in the park, if you compare the scale of the deals. Miko Matsumura and Vinnie Mirchandani have their doubts as to how well this beast will swallow the prey.

CORRECTION: The PeopleSoft deal was larger and marked the beginning of Oracle’s grand acquisitions spree. But this deal marks a major new chapter in the way it could transform Oracle’s core business.

While there is plenty of discussion of how this changes the lineup of who delivers to the data center, we’ll focus on some of the interesting implications for developers.

For starters, my colleague Dana Gardner had an interesting take on what this means for MySQL, which he calls MyToast. We concur with the rest of his analysis, but depart from it on this one. First, this is open source, and in this case, open source where the genie is already out of the bottle. Were Oracle to try killing MySQL, there would be nothing to stop enterprising open source developers and some of the old MySQL team from developing a YourSQL. Secondly, MySQL was never going to seriously compete with Oracle, as the database, in spite of improvements, remains too underpowered. Our take is that Oracle could take the opportunity to cultivate the base and develop the makings of a lightweight middleware stack that for the most part would be found money rather than cannibalization of its core business.

The other interesting question concerns Java. Three words: NetBeans is history.

Sun’s problem was that the company was too much under the control of engineers – otherwise, how do you explain why the company kept painting itself into corners with technologies increasingly off the mainstream, like NetBeans, or the more recent JavaFX Java-native rich Internet client? Now that it “owns” the origins of the Java stack, we expect Oracle to provide a counterweight to IBM/Eclipse, but as mentioned earlier, it will be one borne of nuance rather than religion. You can see it already in Oracle’s bifurcated Eclipse strategy, where its core development platform, JDeveloper, is not Eclipse-based, but the recently acquired BEA stack is. In some areas, such as Java persistence, Oracle has taken lead billing. Anyway, as Eclipse has spread from developer tool to runtime platform, why would Oracle give up its position as a member of Eclipse’s board?

We see a different fate for JavaFX, however. If you recall, one of the first things Oracle did after closing the BEA acquisition was to drop BEA’s deal to bundle the Adobe Flash client as part of its Java development suite. Instead, Oracle’s RIA strategy consisted of donating its Java Server Faces (JSF) technology to Apache as the MyFaces project. As JSF is server-side technology for deploying the MVC framework to Java, we expect that Oracle will view JavaFX as the lightweight, Java-native rich-client alternative, providing web developers dual options for deploying rich functionality.

Open Source a decade later

As we and many others have opined, one of the greatest tremors to have reshaped the software business over the past decade has been the emergence of open source. Open source has changed the models of virtually every major player in the software business, from niche startups to household names. It’s hollowed out sectors like content management, where open source has replaced the entry-level package; unless you’re implementing content systems as an extension of an enterprise middle-tier platform strategy, open source platforms like Joomla or the social networking-oriented Drupal will provide a perfectly slick, professional website.

Multiple open source models, ranging from GNU copyleft to permissive Apache/BSD licensing, plus a wide range of community sourcing and open technology previews, have splattered the landscape. Of course, perennial issues like whether open source robs Peter to pay Paul or boosts innovation persist.

What’s new is the emergence of the cloud, a form of computing that has resisted the platform standardization that open source created – and that, in turn, makes open source possible. Behind proprietary cloud APIs, does open source still matter? We sat in on a recent Dana Gardner BriefingsDirect podcast that updated the discussion, along with Paul Fremantle, chief technology officer at WSO2 and a vice president with the Apache Software Foundation; Miko Matsumura, vice president and deputy CTO at Software AG; Richard Seibt, former CEO at SUSE Linux and founder of the Open Source Business Foundation; Jim Kobielus, senior analyst at Forrester Research; JP Morgenthal, independent analyst and IT consultant; and David A. Kelly, president of Upside Research.

Read the transcript here, and listen to the podcast here.

IBM’s Got Better Things to do with $7 Billion

We had a hard time imagining exactly why Sun was worth $7 billion to IBM, and upon completing the due diligence, evidently so did IBM. Yet in spite of a slightly reduced offer that, according to the New York Times, went from $9.55 a share to $9.40, we wonder what was going through the minds of Sun’s board, which, according to the Wall Street Journal, was split: Jonathan Schwartz’s faction supposedly in favor and, not surprisingly, Scott McNealy’s against. Evidently, even in retirement, McNealy still calls the shots.

Looking back, McNealy seemed more interested in being right than in adapting to structural changes in the marketplace that made Sun’s posturing irrelevant. As Sun was wasting its energy fighting rather than accommodating Microsoft over Java, IBM did an end run with Eclipse that shifted the center of the Java universe away from the JCP. Meanwhile, the emergence of Linux eroded the very foundations of Sun’s business.

We’ve had a hard time figuring what other exit strategy remains for Sun’s beleaguered shareholders. Yes, Sun just hedged its bets by signing a Solaris x86 distribution deal with HP for its ProLiant servers at the end of February, but for all practical purposes, there’s nobody that matches Fujitsu’s footprint as a Solaris OEM. As we’ve argued previously, Fujitsu would be the most logical resting place for Sun’s SPARC server business, and there’s some precedent for Fujitsu to make such investments, as it recently bought Siemens out of its x86 Fujitsu Siemens joint venture. Besides, as the largest Solaris OEM, Fujitsu has real skin in the game for its survival.

Sun’s problems are hardly new. While open source has become the mantra under Jonathan Schwartz’s watch, we have a hard time figuring out how it’s going to drive a $5 billion business built on lower-volume, high-margin sales. Some have drunk the Kool-Aid; Silicon Valley entrepreneur Sramana Mitra argued that Sun should fully walk Schwartz’s open source talk. OK, Sun’s Open Storage, CMT (Niagara), and x86 systems businesses have grown of late at double-digit rates, but for Sun to make the transition, it would have to become the open source counterpart of Dell.

But we’d agree with Mitra had Sun made the move when Schwartz took the helm back in 2006. With the economy much better then, and the market expecting Schwartz to make a real break with the past, think of how Sun could have reinvented itself had Schwartz, as one of his first moves, divested the SPARC business. Were Fujitsu interested, Sun could have received a few billion to fund a real makeover into a high-volume but lower-margin business; maybe some (or most) of it could have been used to shut Wall Street up and take the company private. Just imagine.

IBM’s $7 billion was simply a play to surround HP with more market share; with aggressive selling, IBM could seriously eat into Sun’s business without the buyout – servers are replaced far more readily than software. It has better things to do with its money.

Dana Blankenhorn had the best take on Sun’s fickleness, equating the company to the Dodgers’ Manny Ramirez, who walked away from a $5 million offer, only to take it several months later after nothing else surfaced.

IBM buying Sun? Why bother?

That was our first response when we saw a WSJ headline and a sampling of comments from the blogosphere, here and here, earlier this morning. And it still is.

Ever since the popping of the dot-com bubble, Sun has been trying to redefine itself. At core, Sun has always been a hardware company – initially CAD/CAM workstations, and then, thanks to the purchase of part of Cray’s assets, a server company. That was fine when Windows couldn’t provide the scale required for running websites, and before clustered Linux blades proved the viability of low-cost/no-cost computing, eating Sun’s lunch. Sun had Java, but ceded the business and much of the development tooling standards to IBM before the web development market fragmented with new, popular scripting languages.

So what should Sun do when it grows up? Back in 2003, we suggested Sun eat its young in classic Silicon Valley fashion: junk the software business, where it had never made money, and bite the bullet on UNIX, staking a new line in the sand for 64-bit Linux. A lot of our friends at Sun stopped returning calls and emails after that. Had Sun done so, it would have enjoyed a two-to-three-year head start, of course at the price of transitioning to the higher-volume, lower-margin business model with which it is still struggling.

Fast forward to the present, and Sun is several years into a strategy to become an open source company. A fine idea, had it begun prior to Jonathan Schwartz’s watch. But Sun’s boldest recent move, buying MySQL for a billion dollars, was great for grabbing attention but hardly a game-changer, in that this little database-that-could could not carry a $5 billion overall business (it would have made more sense a couple of years earlier had Sun already been well underway down a Linux road, which it wasn’t).

So what does IBM really have to gain from spending $6.5 billion? More share in UNIX servers? UNIX is not exactly a growing market these days. With Linux eating UNIX’s lunch, IBM has already been quite busy, thank you, pushing the middleware and management systems that do make money atop Linux, which doesn’t. Sun still has $3 billion in cash stashed away from the glory days that it’s burning through. IBM has $13 billion, and healthy margins to boot, so why bother? Migration of the tiny base of NetBeans users to Eclipse? Sorry, that bird’s already flown. A land-office business in MySQL (when IBM already has a stake in the more scalable EnterpriseDB)?

One could posit that this is a circling of the wagons following Cisco’s announcement of its Unified Computing systems initiative; however, Sun does not offer any of the missing networking pieces for IBM to respond to Cisco. It could also be interpreted as a move to blunt HP by adding more data center share – except that IBM already has the heft to counter HP and doesn’t need Sun’s incremental presence.

It’s also been speculated that IBM might pick up Sun and divest portions, such as the Solaris business to Fujitsu, as piece parts. But the question is, what family jewels are actually left?

Sun has been approaching various suitors over the past few months as it requires an exit strategy. But Sun will be a waste of IBM’s money, not to mention the time spent digesting a large acquisition of questionable added value. That leaves Fujitsu, Sun’s primary Solaris OEM, as the only logical suitor left standing.

Update: Progress Software’s Eric Newcomer, whose “night job” is co-chair of the OSGi Enterprise Expert group, has some interesting observations on what it’s been like to have been caught in the middle of the Eclipse vs. NetBeans battle.

The Network is the Computer

It’s funny how history takes some strange turns. Back in the 1980s, Sun began building its empire in the workgroup by combining two standards: UNIX boxes with TCP/IP networking built in. Sun’s The Network is the Computer message declared that computing was of little value without the network. Of course, Sun hardly had a lock on the idea: Bob Metcalfe devised the law stating that the value of a network is proportional to the square of the number of nodes connected, and Digital (DEC) (remember them?) actually scaled out the idea at the division level while Sun was elbowing its way into the workgroup.
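Metcalfe’s observation is easy to make concrete: a network of n nodes has n(n−1)/2 potential connections, so value grows roughly with the square of the node count. A few lines of Java (our own illustration, not anyone’s product) show the effect:

```java
// Metcalfe's law sketch: network value grows roughly with the square of the
// number of connected nodes, because every pair of nodes is a potential link.
public class Metcalfe {
    // Number of distinct node pairs in a network of n nodes: n*(n-1)/2.
    static long potentialLinks(long n) {
        return n * (n - 1) / 2;
    }

    public static void main(String[] args) {
        // Doubling the nodes roughly quadruples the potential links.
        System.out.println(potentialLinks(10)); // 45
        System.out.println(potentialLinks(20)); // 190
    }
}
```

That quadratic payoff is why the value accrued to the network rather than to any single box, the point Sun’s slogan was making.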

Funny that DEC was there first, but it got the equation only half right, bundling a proprietary OS to a standard networking protocol. Fast forward a decade, and Digital was history while Sun was the dot in dot-com. Go a few more years, as Linux made even a “standard” OS like UNIX look proprietary, and Sun suffered DEC’s fate (OK, it hasn’t been acquired, yet, and still has cash reserves, if it could only figure out what to do when it finally grows up), while bandwidth and blades got commodity enough that businesses started thinking the cloud might be a cheaper, more flexible alternative to the data center. Throw in a very wicked recession, and companies are starting to think that the numbers around the cloud – cheap bandwidth, commodity OS, commodity blades – might provide the avoided-cost dollars they’ve all been looking for. That is, if they can be assured that placing data out in the cloud won’t create any regulatory or privacy headaches.

So today it’s official. After dropping hints for months, Cisco has finally unveiled its Unified Computing System, in essence a prepackaged data center:

Blades + Storage Networking + Enterprise Networking in a box.

By now you’ve probably read the headlines: UCS is supposed to do what observers like Dana Gardner term bringing an iPhone-like unity to the piece parts that pass for data centers. It combines blades, network devices, storage management, and VMware’s virtualization platform (as you might recall, Cisco owns a $150 million chunk of VMware) to provide, in essence, a data center appliance in the cloud.

In a way, UCS is a closing of the circle that began with mainframe host/terminal architectures of a half century ago: a single monolithic architecture with no external moving parts.

Of course, just as Sun wasn’t the first to exploit TCP/IP networking but got the lion’s share of the credit, Cisco is hardly the first to bridge the gap between compute and networking nodes. Sun already has a Virtual Network Machines project for processing network traffic on general-purpose servers, while its Project Crossbow is supposed to make networks virtual as part of its OpenSolaris project. Sounds to us like a nice open source research project that’s limited to the context of the Solaris OS. Meanwhile, HP has ramped up its ProCurve business, which aims at the heart of Cisco territory. Ironically, the dancer left on the sidelines is IBM, which sold off its global networking business to AT&T over a decade ago, and its ROLM network switches nearly a decade before that.

It’s also not Cisco’s first foray beyond the base of the network OSI stack. Anybody remember Application-Oriented Networking? Cisco’s logic, building a level of content-based routing into its devices, was supposed to make the network “understand” application traffic. Yes, it secured SAP’s endorsement for the rollout, but who were you really going to sell this to in the enterprise? Application engineers didn’t care for the idea of ceding some of their domain to their network counterparts. On the other hand, Cisco’s successful foray into storage networking proves that the company is not a one-trick pony.

Several factors make UCS different this go-round. The commoditization of hardware and firmware and the emergence of virtualization and the cloud make the division of networking, storage, and data center OS artificial. The recession makes enterprises hungry for found money, while the maturation of the cloud gives providers incentive to buy prepackaged modules that cut acquisition costs and improve operating margins. Cisco’s lineup of partners is also impressive – VMware, Microsoft, Red Hat, Accenture, BMC, etc. – but names and testimonials alone won’t make UCS fly. The fact is that IT has no more hunger for data center complexity, the divisions between OS, storage, and networking no longer add value, and cloud providers need a rapid way of prefabricating their deliverables.

Nonetheless, we’ve heard lots of promises of all-in-one before. The good news is that this time around there’s lots of commodity technology and standards available. But if Cisco is to offer a real alternative to IBM, HP, or Dell, it’s got to make the data-center-in-a-box, or cloud-in-a-box, a reality.

Ubiquity vs. Ubiquity

Firing the first shot that tells you the summer’s over, Google yesterday unveiled Chrome, its skunkworks project to develop its own browser. Given the dynamics of the browser space (it’s not a market, but a means to an end: controlling the gateway to the web), it’s not surprising that reaction can be summarized as follows:

1. It’s part of Google’s grand plan to challenge Microsoft by adding the last piece to what would be Google’s enterprise desktop, app dev platform, and cloud.

2. It muddies the waters, given that Google just extended its sponsorship of the Mozilla Foundation for three more years.

3. Chrome is ultimately intended more for the kind of “power” browsing that would be required for the enterprise desktop or webtop. The obvious goodie here is completely independent tabbed browsing, where each tab is its own process – meaning one tab crashing won’t bring all the others down. It’s the kind of robustness that came to Windows beginning with NT and Windows 2000, where a single window no longer had to crash the entire client session, and it’s about time the Internet experience became similarly robust. This obviously oversimplifies all the possible wish lists of features that could improve the browsing experience, with security being the obvious piece, but more robust tabbed browsing is an obvious missing one.

4. Chrome originated because Google realized it had to own the entire stack and optimize the browser for the Google desktop if it were to present a viable alternative to Microsoft.

5. Google extended its Mozilla partnership because it couldn’t suddenly pull the plug and transition to a technology that is barely in alpha. Open source blogger Dana Blankenhorn contends the two are complementary: Google will ultimately pursue a dual-tiered strategy, aiming Firefox at consumers and Chrome at the enterprise.

Regardless of your take, whatever Google’s ulterior motives, consider the source. Google, much like Microsoft, is so gargantuan and has so many resources that its product, technology, and business development strategy is highly decentralized. The typical scenario is that multiple groups vie to develop the next great thing, and the one with the best technology, market plan, and/or political skills wins out. In large part that’s how one can explain inconsistencies in Microsoft’s strategies, as recently revealed with Oslo, where a new workflow engine was developed in competition with the existing BizTalk Server. So we’re not surprised that the group working to optimize delivery of Google Desktop on Firefox is different from the one hatching Chrome.

But our “aha” moment came when we recalled last week’s unveiling by Mozilla of its own alpha: a natural language text search facility in Firefox, code-named Ubiquity. In other words, Firefox was also treading on Google’s doorstep. So on one side you have the ubiquitous search and advertising engine, with a market cap of nearly $145 billion, striving to become the ubiquitous enterprise webtop and compute cloud; on the other, a tax-exempt not-for-profit that racked up maybe $6 million in sales in all of 2007 and has a respectable but hardly universal market presence. The answer is obvious: Firefox is clearly about the consumer, while Google is dead serious about the enterprise. Or as Blankenhorn put it in a prescient post filed just prior to the Chrome announcement, there are two Internets: the locked-down one in the office, which probably restricts you to Microsoft’s IE browser, and the home Internet, where you can use Firefox or something else.

Our take is that Chrome’s prime target is replacing IE on the corporate Internet, leaving the home one up for grabs. Our sense is that the home Internet is where Firefox’s Ubiquity is headed – if some third party mashed up that capability with a more graphical metaphor, it could provide a key enabler for monetizing the mobile web. But that’s another story.

Still Room for Billion-Dollar Plays: A Conversation with M.R. Rangaswami

On the eve of last year’s Software conference, Sand Hill Group principal M.R. Rangaswami spoke on the prospects for innovation in a consolidating software industry. Evidently there was some room left for innovation – witness Sun’s billion-dollar acquisition of MySQL. According to Rangaswami, it points to the fact that there’s life – and value – in open source software companies beyond Red Hat.

In fact, 2007 was a year of second acts, with Salesforce joining the ranks of billion-dollar software companies. On the eve of Software 2008 next week, we had a return engagement with M.R. to get his take on what’s gone down over the past year. His first point broke with conventional wisdom: despite ongoing consolidation, another software company actually could crack the established order. “People questioned whether there would ever be another billion-dollar software company again, although of course Marc Benioff doesn’t call it that,” he noted.

But Rangaswami added that conventional wisdom wasn’t totally off, pointing to the fact that a lot of promising break-outs are being gobbled up before they get the chance to go public – MySQL being Exhibit A. There’s plenty of cash among the old guard to snap up promising upstarts.

Nonetheless, there are invisible limits to the acquisition trend, most notably among SaaS (Software as a Service) providers. He ascribes the reluctance to the fact that conventional software firms are scared of the disruptive effect on-demand providers could have in cannibalizing their existing businesses.

Going forward, Rangaswami expects some retrenchment. We’d put it another way: with last year’s 5 – 6% growth in IT spending, it was almost impossible for any viable ISV not to make money. Even if, perish the thought, we had been CFO for some poor ISV last year, it would have ended up in the black in spite of us.

But this year, with IT spending growth anticipated in the more modest 1 – 2% range, if that, there’s going to be a lot of missed numbers. IBM cleaned up in Q1, but Oracle’s and Microsoft’s numbers failed to impress (Microsoft last year was coasting on Vista upgrades). Rangaswami advises ISVs to keep the lid on development costs (he expects to see more offshoring this year), but he also says ISVs should be “smarter” with their marketing budgets. “Do a lot more with programs that are online and use Web 2.0 technologies as opposed to some of the more traditional approaches,” he advised, pointing to channels like podcasts and YouTube. “Most people watch TV on YouTube these days,” he said, only slightly exaggerating.

Of course, Rangaswami says you can’t ignore the emergence of social computing, for which Facebook has for now become the poster child. We admit to being a bit put off by the superficial atmosphere of the place, and of course, not being under 35, why should we care what somebody did last night or who their favorite band is? But it’s become conventional wisdom that some form of social networking is bound to emerge for more professional purposes, like engaging your customers – something that goes beyond the forums and chat rooms of user groups, the occasional regional or annual conferences, or the resume-oriented purpose of LinkedIn. In fact, one recent startup, Ringside Networks, is offering a “social appserver” that lets businesses use Facebook apps to build their own communities on their own sites.

But Rangaswami asks: why not use some of the less serious aspects of social computing to conduct real business? Like getting your New England customers together at the next Red Sox game (just make sure one of your New York customers doesn’t slip onto the invite list by mistake).

The theme of this year’s Software 2008 conference is what Rangaswami terms “Platform Shift.” After the upheavals of the open systems and Internet eras, it appeared that the software industry was coalescing around the Java and .NET platforms. But then on demand began making the Java vs. .NET distinction irrelevant. For instance, if you want to write to Salesforce’s platform, you do it in a stored-procedure language that is like, but isn’t, Java. On the horizon you have Amazon’s EC2 cloud and the Google Apps platform, and you could probably consider Facebook yet another platform ecosystem (thousands of apps have already been written to it).

The good news is that tough times actually encourage customers to buy a couple of on-demand seats out of petty cash, because doing so sharply limits risk.

The result is that barriers to entry for new software solution providers are lower than ever. You don’t have to ask customers to install software, and you don’t have to build the on-demand infrastructure to host it. Just build the software, choose whose cloud you want to host it on, pay only by the drink, and start marketing. According to Rangaswami, the cloud might remove the hurdles to getting started, but getting your brand to emerge from the fog may prove the toughest challenge. “Sales and marketing in this new world will be totally different.”