
SOA what

There is a core disconnect between what gets analysts and journalists excited, and what gains traction with the customers who consume the technologies that keep our whole ecosystem in business. OK, guilty as charged: we analysts get off on hearing about what’s new and what’s pushing the envelope, but that’s the last thing that enterprise customers want to hear. Excluding reference customers (who have a separate set of motivations that often revolve around a vendor productizing something that would otherwise be custom developed), most want the tried and true, or at least innovative technology that has smoothed out its rough spots and is no longer version 1.

It’s a thought that crystallized as we traded impressions of this year’s IBM SOA Impact event with colleagues like Dorothy Alexander and Marcia Kaufman, who shared our perception that, while this year’s headlines and trends seemed a bit anticlimactic, there was real evidence that customers were actually “doing” whatever it is that we associate with SOA.

Forget the architectural journeys that you’ve heard about; SOA is an enterprise architectural pattern that is a means to an end. It’s not a new argument; it was central to the “SOA is dead” debate that flared up with Anne Thomas Manes’ famous or infamous post of almost a year and a half ago, and to the hand-wringing that ensued.

IBM’s so-called SOA conference, Impact, doesn’t include SOA in its name, but until now SOA was the implicit rationale for this WebSphere middleware stack conference to exist. Increasingly, though, it’s about the stack that SOA enables, and about the composite business applications that IBM’s SOA stack enables. IBM won’t call it the applications business. But when you put vertical industry frameworks, business rules, business process management, and analytics together, it’s not simply a plumbing stack. It becomes a collection of software tools and vertical industry templates that become the new de facto applications, bolting atop and alongside the core application portfolio that enterprises already have and are not likely to replace. In past years, this conference was used to introduce game changers, such as the acquisition of Webify that placed IBM Software firmly on the road to verticalizing its middleware.

This year the buzz was about something old becoming something new again. IBM’s acquisition of Cast Iron, dissected well by colleagues Dana Gardner and James Governor, reflects the fact that after all these years of talk about flattened architectures, especially the ESB style, enterprise integration (or application-to-application, A2A) hubs never went out of style. There are still plenty of instances of packaged apps out there that need to be interfaced. The problem is no different from a decade ago, when the first wave of EAI hubs emerged to productize systems integration of enterprise packages.

While the EAI business model never scaled well in its time because it demanded too much customization, accumulated experience, commoditization of templates, and the emergence of cheap appliances have since provided economic solutions to this model. More importantly, the emergence of multi-tenanted SaaS applications, like Salesforce.com, Workday, and many others, has imposed a relatively stable target data schema plus a need to integrate cloud and on-premises applications. Informatica has made a strong run with its partnership with Salesforce, but Informatica is part of a broader data integration platform that for some customers is overkill. By contrast, niche players like Cast Iron, which do only data translation, have begun to thrive with a blue-chip customer list.

Of course Cast Iron is not IBM’s first appliance play. That distinction goes to DataPower, which originally made its name with specialized appliances that accelerated compute-intensive XML processing and SSL encryption. While we were thinking about potential synergy, such as applying some of DataPower’s XML acceleration technology to A2A workloads, IBM’s middleware head Craig Hayman responded to us that IBM saw Cast Iron’s technology as a separate use case. But they did demonstrate that Cast Iron’s software could, and would, literally run on DataPower’s own iron.

Of course, you could say that Cast Iron overlaps the application connectors from IBM’s Crossworlds acquisition, but those connectors, which were really overlay applications (Crossworlds used to call them “collaborations”), have been repurposed by IBM as BPM technology for WebSphere Process Server. Arguably, there is much technology from IBM’s Ascential acquisition focused purely on data transformation that also overlaps here. But Cast Iron’s value add to IBM is the way those integrations are packaged, and the fact that they have been developed especially for integrations to and from SaaS applications – no more and no less. IBM has gained the right-sized tool for the job. IBM has decided to walk a safe tightrope here; it doesn’t want to weigh Cast Iron’s simplicity (a key strength) down with added bells and whistles from the rest of its integration stack. But the integration doesn’t have to go in one direction – weighing down Cast Iron with richer but more complex functionality. IBM could go the opposite direction and expose some of this A2A transformation as services that could be transformed and accelerated by the traditional DataPower line.

This is similar to the issue that IBM has faced with Lombardi, a deal that it closed back in January. It has taken the obvious first step in “blue washing” the flagship Lombardi Teamworks BPM product, which is now rebranded IBM WebSphere Lombardi Edition and bundled with WebSphere Application Server 7 and DB2 Express under the covers. The more pressing question is what to do with Lombardi’s elegantly straightforward Blueprint process definition tool and IBM WebSphere BlueWorks BPM, which is more a collaboration and best-practices definition tool than a modeling tool (and still in beta). The good news is that IBM is doing the right thing in not cluttering Blueprint (now rebranded IBM BPM Blueprint); the bad news is that IBM is sending mixed messages: a consistent branding umbrella, but continued uncertainty regarding product synergy or convergence.

Back to the main point, however: while SOA was the original impetus for the Impact event, it is now receding into a more appropriate supporting role.

VMforce: Marriage of Necessity

Go to any vendor conference and it gets hard to avoid what has become “The Obligatory Cloud Presentation” or “Slide.” It’s beyond this discussion to sort hype from reality, but potential benefits like the elasticity of the cloud have made the idea too difficult to dismiss, even if most large enterprises remain wary of trusting the brunt of their mission-critical systems to some external hoster, SAS 70 certification or otherwise.

So it’s not surprising that cloud has become a strategic objective for VMware and SpringSource, both before and after the acquisition that put the two together. VMware was busy forming its vCloud strategy to stay a step ahead of rivals that seek to commoditize VMware’s core virtualization hypervisor business, while SpringSource acquired Cloud Foundry to take its expanding Java stack to the cloud, as such options were becoming available for .NET and for emerging web languages and frameworks like Ruby on Rails.

Following last summer’s VMware-SpringSource acquisition, the obvious path would have placed SpringSource as the application development stack that would elevate vCloud from raw infrastructure-as-a-service to a full development platform. That remains the goal, but it’s hardly the shortest path to VMware’s goals. At this point, VMware is still getting its arms around the assets that are now under its umbrella with SpringSource. As we speculated last summer, some features of the Spring framework itself, such as dependency injection (which abstracts dependencies so developers don’t have to worry about writing all the necessary configuration files), might be applied to managing virtualization. But that’s for another time, another day. VMware’s more pressing need is to make vSphere the de facto standard for managing virtualization, and vCloud the de facto standard for cloud virtualization (actually, if you think about it, it is virtualization squared: OS instances virtualized from hardware, and hardware virtualized from infrastructure).

In turn, Salesforce wants to become the de facto cloud alternative to Google, Microsoft, IBM, and, when they get serious, Oracle and SAP. The dilemma is that Salesforce up until now has built its own walled garden. That was fine when the business was confined to CRM and to third-party AppExchange providers who piggybacked on Salesforce’s own multi-tenanted infrastructure using its proprietary Force.com environment with its “Java-like” Apex stored procedures language. But at the end of the day, Apex is not going to evolve into anything more than a niche Salesforce.com development platform, and Force.com is not about to challenge Microsoft .NET, or Java for that matter.

The challenge is that Salesforce, having made the modern incarnation of remote hosted computing palatable to the enterprise mainstream, now finds itself in a larger fishbowl: outgunned in sheer scale by Amazon and Google, and outside the enterprise Java mainstream. Benioff conceded as much at the VMforce launch yesterday, characterizing Java as “the No. 1 developer language in the enterprise.”

So VMforce is the marriage of two suitors, each needing its own leapfrog: VMware gains a ready-made cloud platform with existing brand recognition, and Salesforce gains access to the wider enterprise Java mainstream.

Apps written using the Spring Java stack will gain access to Force.com services such as search, identity and security, workflow, reporting and analytics, the web services integration API, and mobile deployment. But it also means dilution of some features that make the Force.com platform what it is; the biggest departure is from the Apex stored procedures architecture that runs directly inside the Salesforce.com relational database. Salesforce trades the scalability of a unitary architecture for scalability through a virtualized one.

It means that Salesforce morphs into a different creature, and now must decide whom it means to compete with, because it’s not just Oracle anymore. Our bet is that it splits the difference with Amazon, as other SaaS providers like IBM that don’t want to get weighed down by sunk costs have already done. If Salesforce wants to become the enterprise Java platform-as-a-service (PaaS) leader, it will have to ramp up capacity, and matching Amazon or Google in a capital investment race is a hopeless proposition.

Do we really need OSGi?

With the coming of Spring (framework, season, take your choice), but more to the point, with the concurrent announcements of OSGi Enterprise Edition 4.2 and the Eclipse Gemini and Virgo projects, debate over OSGi has been renewed. OSGi has seen great success where it is not seen – as the framework for dispensing Eclipse plug-ins, and as the invisible engine by which most of the household-name Java EE servers are now factored.

We’ve always been pretty bullish on what OSGi could do. It allows your server footprint to be truly dynamic – you can deploy and kill runtime components at will without taking the whole mess offline. There’s a potential sustainability appeal to any technology that helps reduce footprint, as fewer apps mean fewer servers, less power, and less space.
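A minimal sketch of what that dynamism looks like at the code level, using the standard OSGi BundleActivator lifecycle hooks; the FeedHandler service and its implementation are hypothetical names used purely for illustration:

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// Hypothetical service interface and implementation, for illustration only.
interface FeedHandler { void onTick(String symbol, double price); }

class NyseFeedHandler implements FeedHandler {
    public void onTick(String symbol, double price) { /* process the tick */ }
}

// A bundle's entry point: the framework calls start()/stop() when the
// bundle is started or stopped -- without restarting the JVM or the
// rest of the server.
public class FeedHandlerActivator implements BundleActivator {

    private ServiceRegistration registration;

    public void start(BundleContext context) {
        // Publish a service into the OSGi service registry; other live
        // bundles can discover and bind to it immediately.
        registration = context.registerService(
                FeedHandler.class.getName(), new NyseFeedHandler(), null);
    }

    public void stop(BundleContext context) {
        // Withdraw the service; consumers are notified and can rebind
        // to an alternative while the rest of the runtime keeps going.
        registration.unregister();
    }
}
```

Install, start, stop, or uninstall such a bundle from the framework console and the capability appears or disappears at runtime – that, in miniature, is the elasticity argument.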

Interestingly, OSGi could provide a lot of the elasticity at the appserver level that virtualization promises for OS images and that cloud promises for application deployment. And there’s the rub – OSGi is hardly the only path to keeping your web farm footprint contained. Significantly, while the goal is the same for each strategy – you only want as much resource as you need – they all take different ways of getting there. OSGi is a developer decision about which application or middleware modules (or which functionality) you actually want running at any time, while virtualization and cloud are largely IT operations decisions about images and about what and how much infrastructure to provision.

Although the goal is common, the choice of strategy differs based on where elasticity is needed; furthermore, these are not all-or-nothing decisions. Conceivably, if you have a highly variable application that requires not only different amounts of processing capacity but also different functionality at different times, then OSGi could complement your virtualization and/or cloud strategies. Let’s say you process market feeds, the composition and mix of which change by time of day and by which trading centers are active around the globe. Or your organization is number-crunching end-of-period reports. Those are a couple of possibilities.

The problem is one of knowledge and awareness. For most IT customers, OSGi is a black box. It’s the way that WebSphere and WebLogic are architected. But that makes a difference to the vendors, not the customers, because customers don’t know how to provision OSGi bundles, and there are no best practices for assembling bundles into bigger pieces that can be identified as tangible modules. There is still a lot of OSGi misinformation and still a lot of debate out there. Of course, while virtualization and cloud are much better known, there’s plenty of hype and debate about cloud, and concern about unchecked use of virtualization.

So a couple of years after OSGi gained critical-mass vendor acceptance, there remains a lack of tooling for configuring OSGi servers, not to mention best practices for deploying them. SpringSource, one of the first to develop an OSGi server, has now donated the technology to Eclipse as the reference implementation in Gemini, with Virgo becoming the technology development project. SpringSource’s commercial direction is tc Server, which commercializes the tiny Tomcat servlet container; as of March 8, VMware is pushing tc Server through its channels and, for the next couple of months, is giving away two production CPU licenses and 60 days of evaluation support to VMware customers.

SpringSource’s fork in the road symbolizes the existential dilemma facing OSGi: if your goal is to simply reduce your web container footprint, 10-MByte Tomcat containers should do just fine. It scales out quite nicely as well, with LinkedIn serving 40 million web pages daily on Tomcat. So again, we ask, why do we need OSGi?
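To put the footprint point in perspective, here is a hedged sketch of a complete, self-contained web server built on Tomcat’s embedded API (the org.apache.catalina.startup.Tomcat class that arrived with Tomcat 7); the servlet and port are placeholders:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.catalina.Context;
import org.apache.catalina.startup.Tomcat;

public class TinyServer {
    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();   // embedded container, on the order of 10 MB
        tomcat.setPort(8080);

        // No web.xml, no appserver: register one servlet programmatically.
        Context ctx = tomcat.addContext("", System.getProperty("java.io.tmpdir"));
        Tomcat.addServlet(ctx, "hello", new HttpServlet() {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                resp.getWriter().println("hello from a plain Tomcat container");
            }
        });
        ctx.addServletMapping("/*", "hello");

        tomcat.start();
        tomcat.getServer().await();     // block until shut down
    }
}
```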

Give us your answers.

HP analyst meeting 2010: First Impressions

Over the past few years, HP under Mark Hurd has steadily gotten its act together, refocusing on the company’s core strengths with an unforgiving eye on the bottom line. Sitting at HP’s annual analyst meeting in Boston this week, we found ourselves comparing notes with our impressions from last year. Last year, our attention was focused on Cloud Assure; this year, it’s the integration of EDS into the core business.

HP now bills itself as the world’s largest pure IT company and ninth in the Fortune 500. Of course, there’s the consumer side of HP that the world knows. But with the addition of EDS, HP finally has a credible enterprise computing story (as opposed to an enterprise server story). Now we’ll get plenty of flak from our friends at HP for that one, as HP has historically had the largest market share for SAP servers. But let’s face it: prior to EDS, the enterprise side of HP was primarily a distributed (read: Windows or UNIX) server business. Professional services was pretty shallow, with scant knowledge of the mainframes that remain the mainstay of corporate computing. Aside from communications and media, HP’s vertical industry practices were few and far between. HP still lacks the vertical breadth of IBM, but with EDS it has gained critical mass in sectors ranging from federal to manufacturing, transport, financial services, and retail, among others.

Having EDS also lends credibility to initiatives such as Application Transformation, a practice that helps enterprises prune, modernize, and rationalize their legacy application portfolios. Clearly, Application Transformation is not a purely EDS offering; it originated with Ann Livermore’s Enterprise Business group and draws upon HP Software assets such as discovery and dependency mapping, Universal CMDB, PPM, and the recently introduced IT Financial Management (ITFM) service. But to deliver, you need bodies – people who know the mainframe, where most of the apps being harvested or thinned out live. And that’s where EDS helps HP flesh this out into a real service.

But EDS is so 2009; the big news on the horizon is 3Com, a company that Cisco left in the dust before it rethought its product line and carved out a highly noticeable 30% market share for network devices in China. Once the deal is closed, 3Com will be front and center in HP’s converged computing initiative, which until now primarily consisted of blades and ProCurve VoIP devices. HP gains a much wider range of network devices to compete head-on as Cisco itself goes up the stack into a unified server business. Once the 3Com deal closes, HP will have to invest significant time, energy, and resources to deliver on the converged computing vision with an integrated product line, rather than a bunch of offerings that fill the squares of a PowerPoint matrix chart.

According to Livermore, the company’s portfolio is “well balanced.” We’d beg to differ where it comes to software, which accounts for a paltry 3% of revenues (a figure that our friends at HP insist understates the real contribution of software to the business).

It’s the side of the business that suffered from (choose one) benign or malign neglect prior to the Mark Hurd era. HP originated network node management software for distributed networks, an offering that eventually morphed into the former OpenView product line. Yet HP was so oblivious to its own software products that at one point its server folks promoted bundling a rival product from CA. Nonetheless, the old HP somehow managed not to kill off OpenView or OpenCall (the product now at the heart of HP’s communications and media solutions) – although we suspect that was probably more out of neglect than intent.

Under Hurd, software became strategic, a development that led to the transformational acquisition of Mercury, followed by Opsware. HP had the foresight to place the Mercury, Opsware, and OpenView products within the same business unit, as – in our view – the application lifecycle should encompass managing the runtime (although to this day HP has not really integrated OpenView with Mercury Business Availability Center; the products still appeal to different IT audiences). But there are still holes – modest ones on the ALM side, but major ones elsewhere, such as business intelligence, where Neoview sits alone, or the converged computing stack and cloud-in-a-box offerings, which could use strong identity management.

Yet if HP is to become a more well-rounded enterprise computing company, it needs more infrastructural software building blocks. To our mind, Informatica would make a great addition: it would draw more attention to Neoview as a credible BI business, not to mention that Informatica’s data transformation capabilities could play key roles in the Application Transformation service.

We’re concerned that, as integration of 3Com is going to consume considerable energy in the coming year, the software group may not have the resources to conduct the transformational acquisitions needed to more firmly entrench HP as an enterprise computing player. We hope that we’re proven wrong.

Oracle’s Sun Java Strategy: Business as Usual

In an otherwise pretty packed news day, we’d like to echo @mdl4’s sentiments about the respective importance of Apple’s and Oracle’s announcements: “Oracle finalized its purchase of Sun. Best thing to happen to Sun since Java. Also: I don’t give a sh#t about the iPad. I said it.”

There’s little new in observing that, on the platform side, Oracle’s acquisition of Sun is a means for turning the clock back to the days of turnkey systems in a post-appliance era. History truly has come full circle, as Oracle in its original database incarnation was one of the prime forces that helped decouple software from hardware. Fast forward to the present, and customers are tired of complexity and just want things that work. That idea was responsible for the emergence over the past decade of specialized appliances for tasks ranging from SSL encryption/decryption to XML processing, firewalls, email, and specialized web databases.

The implication here is that the concept is elevated to the enterprise level; instead of a specialized appliance, it’s your core instance of Oracle databases, middleware, or applications. And even there, it’s but a logical step forward from Oracle’s past practice of certifying specific configurations of its database on Sun (Sun was, and now has become again, Oracle’s reference development platform). That’s in essence the argument for Oracle to latch onto a processor architecture that is overmatched in investment by Intel’s x86 line. The argument could be raised that, in an era of growing interest in cloud, Oracle is fighting the last war. That would be the case – except for the certainty that your data center has just as much chance of dying as your mainframe.

At the end of the day, it’s inevitably a question of second source. Dana Gardner opines that Oracle will replace Microsoft as the hedge to IBM. Gordon Haff contends that alternate platform sources are balkanizing as Cisco/EMC/VMware butts its virtualized x86 head into the picture and customers look to private clouds the way they once idealized grids.

The highlight for us was what happens to Sun’s Java portfolio, and as it turns out, the results are not far from what we anticipated last spring: Oracle’s products remain the flagship offerings. Looking at the respective market shares, it would have been pretty crazy for Oracle to do otherwise.

The general theme was that – yes – Sun’s portfolio will remain the “reference” technologies for the JCP standards, but that these are really only toys that developers should play with. When they get serious, they’re going to keep using WebLogic, not Glassfish. Ditto for:
• Java software development. You can play around with NetBeans, which Oracle’s middleware chief Thomas Kurian characterized as a “lightweight development environment,” but again, if you really want to develop enterprise-ready apps for the Oracle platform, you will still use JDeveloper, which of course is written for Oracle’s umbrella ADF framework that underlies its database, middleware, and applications offerings. That’s identical to Oracle’s existing posture with the old (mostly) BEA portfolio of Eclipse developer tools. Actually, the only thing that surprised us was that Oracle didn’t simply take NetBeans and set it free – as in donating it to Apache or some more obscure open source body.
• SOA, where Oracle’s SOA Suite remains front and center while Sun’s offerings go on maintenance.

We’re also not surprised as to the prominent role of JavaFX in Oracle’s RIA plans; it fills a vacuum created when Oracle terminated BEA’s former arrangement to bundle Adobe Flash/Flex development tooling. In actuality, Oracle has become RIA agnostic, as ADF could support any of the frameworks for client display, but JavaFX provides a technology that Oracle can call its own.

There were some interesting distinctions around identity and access management, where Sun inherited some formidable technologies that, believe it or not, originated with Netscape. Oracle’s identity management suite will grab some provisioning technology from the Sun stack, but otherwise Oracle’s suite will remain the core attraction. Sun’s identity and access management won’t be put out to pasture, though; it will be promoted for midsized web installations.

There are much bigger pieces to Oracle’s announcements, but we’ll finish with what becomes of MySQL. In short, there’s nothing surprising in the announcement that MySQL will be maintained in a separate open source business unit – the EU would not have allowed otherwise. But we’ve never bought into the story that Oracle would kill MySQL. The two databases aim at different markets. Just about the only difference that Oracle’s ownership of MySQL makes – besides reuniting it under the same corporate umbrella as the InnoDB data store – is that, well, MySQL won’t morph into an enterprise database. Then again, even if MySQL had remained independent, it arguably was never going to evolve into the same class of database as Oracle, as the product would lose its beloved simplicity.

The more relevant question for MySQL is whether Oracle will fork development to favor Solaris on SPARC. This being open source, there would be nothing stopping the community from taking the law into its own hands.

Early thoughts on IBM buying Lombardi

This has been quite a busy day, having seen IBM’s announcement come over the wire barely after the alarm went off. Lombardi has always been the little BPM company that could. In contrast to rivals like Pegasystems, which has a very complex, rule-driven approach, Lombardi’s approach has always been characterized by simplicity. In that sense, its approach mimicked that of Fuego before it was acquired by BEA, which of course was eventually swallowed up by Oracle.

We’d echo Sandy Kemsley’s sense of letdown about hopes for a Lombardi IPO. But even had the IPO been done, it would only have postponed the inevitable. We agree with her that if IBM is doing this acquisition anyway, it makes sense to make Lombardi a first-class citizen within the WebSphere unit.

Not surprisingly, IBM values Lombardi for its simplicity. At first glance, it appears that Lombardi Teamworks, the flagship product, overlaps WebSphere BPM. Look under the hood, though, and WebSphere BPM is not a single engine but the product of several acquisitions and internal development, including the document-oriented processes of FileNet and the application integration processes from Crossworlds. So in fact Lombardi is another leg of the stool, and one that is considerably simpler than what IBM already has. In fact, this is very similar to how Oracle has positioned the old Fuego product alongside its enterprise BPM offering, which is built around IDS Scheer’s ARIS modeling language and tooling.

IBM’s strategy is that Lombardi provides a good way to open the BPM discussion at the department level. But significantly, IBM stated on the call that once the customer wants to scale up, it would move the discussion to its existing enterprise-scale BPM technology. As an example of how the two pieces would be positioned going forward, it cited a joint engagement at Ford, where Lombardi works with the engineering department while IBM works at the B2B trading partner integration level.

James Governor of RedMonk had a very interesting suggestion that IBM could leverage the Lombardi technologies atop some of its Lotus collaboration tools. We also see good potential synergies with the vertical industry frameworks as well.

The challenge for IBM is preserving the simplicity of the Lombardi products, which tend to be department-oriented and bottom-up, vs. the IBM offerings that are enterprise-scale and top-down. Craig Hayman, general manager of the application and integration middleware (WebSphere) division, admitted on the announcement call that IBM has “struggled” in departmental, human-centric applications. In part that is due to IBM’s top-down enterprise focus, and also to the fact that, all too often, IBM’s software is known more for richness than ease of use.

A good barometer of how IBM handles the Lombardi integration will be how it handles Lombardi Blueprint and IBM WebSphere BlueWorks BPM. Blueprint is a wonderfully simple hosted process definition service, while BlueWorks is also hosted but far more complex, with heavy strains of social computing. We have tried Blueprint and found it a very straightforward offering that simply codifies your processes, generating Word or PowerPoint documentation and BPMN models. The cool thing is that if you use it only for documentation, you have still gotten good value out of it – and in fact roughly 80% of Blueprint customers use it for just that. On the call, Hayman said that IBM plans to converge both products. That’s a logical move. But please, please, please, don’t screw up the simplicity of Blueprint. If necessary, make it a stripped-down face of BlueWorks.

Getting with the Program

Developers are a mighty stubborn bunch. Unlike the rest of the enterprise IT market, where a convergence of forces has favored a nobody-gets-fired-for-buying-IBM-Oracle-SAP-or-Microsoft mentality, developers have no such herding instincts. Developers do not always get with the [enterprise] program.

For evidence, recall what happened the last time the development market faced such consolidation. In the wake of web 1.0, the formerly fragmented development market – which used to revolve around dozens of languages and frameworks – congealed into Java vs. .NET camps. That was so 2002, however; in the interim, developers have gravitated toward choosing their own alternatives.

The result was an explosion of what former Burton Group analyst Richard Monson-Haefel termed the Rebel Frameworks (that was back in 2004), and more recently the resurgence of scripting languages. In essence, developers didn’t take the future as inevitable, and for good reason: the so-called future of development circa 2002 was built on the assumption that everyone would gravitate to enterprise-class frameworks. Java and .NET were engineered on the assumption that the future of enterprise and Internet computing would be based on complex, multitier distributed transactional systems. It was accompanied by a growing risk-averseness: buy only from vendors that you expect will remain viable. Not surprisingly, enterprise computing procurements narrowed to IOSM (IBM, Oracle, SAP, Microsoft).

But the developer community lives by a different dynamic. In an age of open source, expertise for development frameworks and languages gets dispersed; vendor viability becomes less of a concern. More importantly, developers just want to get the job done, and anyway, the tasks that they perform typically fall under the enterprise radar. Whereas a CFO may be concerned about the approach an ERP system employs to managing financial or supply chain processes, they are not going to care about development languages or frameworks.

The result is that developers remain independent minded, and that independence accounts for the popularity of alternatives to enterprise development platforms, with Ruby on Rails being the latest to enter the spotlight.

In one sense, Ruby’s path to prominence parallels Java’s, in that the language was originally invented for another purpose. But there the similarity ends: in Ruby’s case, no corporate entity really owned it. Ruby is a simple scripting language that became a viable alternative for web developers once David Heinemeier Hansson invented the Rails framework. The good news is that Rails makes it easy to use Ruby to write relatively simple web database applications. Examples of Rails’ simplicity include:
• Eliminating the need to write configuration files for mapping requests to actions
• Avoiding multi-threading issues because Rails will not pool controller (logic) instances
• Dispensing with object-relational mapping files; instead, Rails automates much of this and tends to use very simplified naming conventions.

The bad news is that there are performance limitations and difficulties in handling more complex distributed transaction applications. But the good news is that when it comes to web apps, the vast majority are quite rudimentary, thank you.

The result has been a wave of alternative stacks, such as LAMP (Linux, the Apache web server, MySQL, and either PHP, Python, or Perl) or, more recently, Ruby on Rails. At the other end of the spectrum, the Spring framework applies the same principle – simplification – to ease the pain of writing complex Java EE applications, but that’s not the segment addressed by PHP, MySQL, or Ruby on Rails. It reinforces the fact that, unlike the rest of the enterprise software market, developers don’t necessarily take orders from up top. Nobody told them to implement these alternative frameworks and languages.
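To make the Spring simplification point concrete, here is an illustrative sketch using Spring’s JdbcTemplate, which collapses the usual JDBC ceremony of connections, statements, result sets, and cleanup into a single call; the customers table and the DataSource wiring are hypothetical:

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;

public class CustomerQueries {
    private final JdbcTemplate jdbc;

    public CustomerQueries(DataSource dataSource) {
        // JdbcTemplate handles connection acquisition, statement creation,
        // resource cleanup, and exception translation behind the scenes.
        this.jdbc = new JdbcTemplate(dataSource);
    }

    // Hypothetical query against a hypothetical "customers" table.
    public List<String> customerNamesIn(String region) {
        return jdbc.query(
                "SELECT name FROM customers WHERE region = ?",
                new Object[] { region },
                new RowMapper<String>() {
                    public String mapRow(ResultSet rs, int rowNum) throws SQLException {
                        return rs.getString("name");
                    }
                });
    }
}
```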

The latest reminder of the strength of grassroots markets in the developer sector is Engine Yard’s securing of $19 million in Series C funding. The backing comes from some of the same players that also funded SpringSource (which was recently acquired by VMware). Some of the backing also comes from Amazon, whose Jeff Bezos outright owns 37signals, the Chicago-based provider of project management software that employs Heinemeier Hansson. For the record, there is plenty of RoR presence in Amazon Web Services.

Engine Yard is an Infrastructure-as-a-Service (IaaS) provider that has optimized the RoR stack for runtime. Although it is hardly the only cloud provider out there that supports RoR development, Engine Yard’s business is currently on a 2x growth streak. The funding stages the company for either an IPO or a buyout.

At this point the script sounds similar to SpringSource’s which, of course, just got acquired by VMware and is launching a development and runtime cloud that will eventually become VMware’s Java counterpart to Microsoft Azure. It’s tempting to wonder whether a similar path will become reality for Engine Yard. The answer is that the question itself is too narrow. It is inevitable that a development and runtime cloud paired with enterprise plumbing (e.g., OS, hypervisor) will materialize for Ruby on Rails. With its $19 million in funding, Engine Yard has the chance to gain critical-mass mindshare in the RoR community – but don’t rule out rivals like Joyent yet.

SpringSource: Back to our regularly scheduled program

With the ink not yet dry on VMware’s offer to buy SpringSource, it’s time for SpringSource to get back to its regularly scheduled program. That happened to be the unveiling of the Cloud Foundry developer preview: the announcement that SpringSource was going to make before the program got interrupted by the wheels of finance.

Cloud Foundry, a recent SpringSource acquisition, brings SpringSource’s evolution from niche technology to lightweight stack provider full circle. Just as pre-Red Hat JBoss was considered a lightweight alternative to WebSphere and WebLogic, SpringSource is positioning itself as a kinder and gentler alternative to the growing JBoss-Red Hat stack. And that’s where the VMware connection comes into play, but more about that later.

The key of course is that SpringSource rides on the popularity of the Spring framework, around which the company was founded. The company claims the Spring framework now shows up in roughly half of all Java installations. Its success is attributable to the way that Spring simplifies deployment to Java EE. But as popular as the Spring framework is, as an open source company SpringSource monetizes only a fraction of all Spring framework deployments. So over the past few years it has been surrounding the framework with a stack of lightweight technologies that complement it, encompassing:
• the Tomcat servlet container (a lightweight Java server) and the newer dm Server, which is based on OSGi technology;
• Hyperic as the management stack;
• Groovy and Grails, which provide dynamic scripting that is native to the JVM, plus an accompanying framework that makes Groovy programming easy; and
• Cloud Foundry, which provided SpringSource the technology to mount its offerings in the cloud.

From a mercenary standpoint, putting all the pieces out in a cloud enables SpringSource to more thoroughly monetize open source assets that otherwise generate revenue only through support subscriptions.

But in another sense, you could consider SpringSource’s Cloud Foundry the Java equivalent of what Microsoft plans to do with Azure. In both cases, the goal is a Platform-as-a-Service offering based on familiar technology (Java, .NET) that can run in and outside the cloud. Microsoft calls it Software + Services. What both also have in common is that they are still in preview and not likely to go GA until next year.

But beyond the fact that SpringSource’s offering is Java-based, the combination with VMware adds yet a more basic differentiator. While Microsoft Azure is an attempt to preserve the Windows and Microsoft Office franchise, when you add VMware to the mix, the goal on SpringSource’s side is to make the OS irrelevant.

There are other intriguing possibilities in the link to VMware, such as the possibility that some of the principles of the Spring framework (e.g., dependency injection, which abstracts dependencies so developers don’t have to worry about writing all the necessary configuration files) might be applied to managing virtualization, which, untamed, could become quite a beast to manage. And as we mentioned last week in the wake of the VMware announcement, SpringSource could do with some JVM virtualization, so that each time you need to stretch the processing of Java objects, you don’t have to blindly sprawl out another VM container.
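For readers who haven’t met it, here is a minimal sketch of what dependency injection buys, in Spring’s annotation style; all of the names are hypothetical. The consuming service never names a concrete implementation, which is precisely the kind of indirection one could imagine pointing at VM provisioning:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

// Hypothetical interface and implementation, for illustration only.
interface CapacityProvider {
    int availableSlots();
}

@Component
class VSphereCapacityProvider implements CapacityProvider {
    public int availableSlots() {
        return 42; // stub: imagine a call into a pool of VM containers
    }
}

@Service
class ProvisioningService {
    private final CapacityProvider capacity;

    // With component scanning enabled, the container injects whichever
    // CapacityProvider is on the classpath; the service never names a
    // concrete class, and no XML wiring file is needed.
    @Autowired
    ProvisioningService(CapacityProvider capacity) {
        this.capacity = capacity;
    }

    boolean canProvision() {
        return capacity.availableSlots() > 0;
    }
}
```

Swapping the CapacityProvider implementation requires no change to the consuming service, which is the property that makes the configuration-file problem largely disappear.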

Fleshing out the Cloud

VMware’s proposed $362 million acquisition of SpringSource is all about getting serious about competing with Salesforce.com and Google App Engine as the Platform-as-a-Service (PaaS) cloud built on the technology that everybody already uses.

This acquisition was a means to an end, pairing two companies that could not be more different. VMware is a household name, sells software through traditional commercial licenses, and markets to IT operations. SpringSource is a grassroots, open source, developer-oriented firm whose business is a cottage industry by comparison. The cloud brought together two companies that each faced complementary limitations on their growth. VMware needed to grow out beyond its hardware virtualization niche if it was to regain its groove, while SpringSource needed to grow up and find deeper pockets to become anything more than a popular niche player.

The fact is that providing a virtualization engine, even if you pad it with management utilities that act like an operating system, is still a raw cloud with little pull unless you go higher up the stack. Raw clouds appeal only to vendors that resell capacity or to large enterprises with deep benches of infrastructure expertise to run their own virtual environments. For the rest of us, we need a player that provides a deployment environment that handles the plumbing and is married to a development environment. That is what Salesforce’s Force.com and Google’s App Engine are all about. VMware’s gambit is in a way very similar to Microsoft’s Software + Services strategy: use the software and platforms that you are already used to, rather than some new environment, in a cloud setting. There’s nothing more familiar to large IT environments than VMware’s ESX virtualization engine, and in the Java community, there’s nothing more familiar than the Spring framework, which – according to the company – accounts for roughly half of all Java installations.

With roughly $60 million in stock options for SpringSource’s 150-person staff, VMware is intent on keeping the people, as it knows nothing about the Java virtualization business. Normally, we’d question a deal like this because the companies are so dissimilar. But the fact that they are complementary pieces of a PaaS offering gives the combination stickiness.

For instance, VMware’s vSphere cloud management environment (in a fit of bravado, VMware calls it a cloud OS) can understand the resource consumption of VM containers; with SpringSource, it gets to peer inside the black box and understand why those containers are hogging resources. That provides more flexibility and smarts for optimizing virtualization strategies, and can help cloud customers answer the question: do we need to spin out more VMs, perform some load balancing, or re-apportion all those Spring tc (Tomcat) servlet containers?

The addition of SpringSource also complements VMware’s cloud portfolio in other ways. In his blog about the deal, SpringSource CEO Rod Johnson noted that pairing with VMware’s Lab Manager (the test lab automation piece that VMware picked up through the Akimbi acquisition) proved highly popular with Spring framework customers. In actuality, if you extend Lab Manager from simply spinning out images of testbeds to spinning out runtime containers, you would have VMware’s answer to IBM’s recently introduced WebSphere CloudBurst appliance.

VMware isn’t finished, however. The most glaring omission is the need for distributed caching of Java objects to provide yet another path to scalability. If you rely only on spinning out more VMs, you get a highly rigid, one-dimensional cloud that will not provide the economies of scale and flexibility that clouds are supposed to provide. So we wouldn’t be surprised if GigaSpaces or Terracotta were next in VMware’s acquisition plans.

Software Abundance in a Downturn

The term “get” is journalism (remember that?) jargon for a hard-to-get interview. And so we’re jealous once more of one of RedMonk/Michael Cote’s latest gets: Grady Booch at last month’s Rational Software Conference.

In a rambling discussion, Booch made an interesting point during his sitdown about software being an abundant resource and how that jibes with the current economic slowdown. Although his eventual conclusion – that it pays to invest in software because it can help you deal with a downturn more effectively (and derive competitive edge) – was not surprising, the rationale was.

It’s that Booch calls software an abundant resource. In his terms, it’s fungible and flexible; there’s lots of it and lots of developers around; and better yet, it’s not an extractive natural resource subject to zero-sum economics. That’s for the most part true, although unless you’re getting your power off solar, some resource must be consumed to provide the juice to your computer.

Booch referred to Clay Shirky’s concept that a cognitive surplus now exists as a result of the leisure time freed up by the industrial revolution. He contends that highly accessible, dispersed computing networks have started to harness this cumulative cognitive resource. Exhibit A was his and IBM’s Martin Wattenberg’s back-of-the-envelope calculation that Wikipedia alone has provided an outlet for 100 million cumulative hours of collected human thought. That’s a lot of volunteer contribution to what, depending on your viewpoint, is the accumulation or the organization of human wisdom. Of course, other examples are the open source software that floats in the wild like the airborne yeasts that magically transform grains into marvelous Belgian lambics.

Booch implied that software has become an abundant resource, although he deftly avoided the trap of calling it “free,” as that term brings plenty of baggage with it. As pioneers of today’s software industry discovered back in the 1980s, the fact that software comes delivered on cheap media (followed today by cheap bandwidth) concealed the human capital that it represents. There are many arguments about what the value of software is today – is it proprietary logic, peace of mind, or the value of technical support? Whatever it is, there is value in software, and it is value that, unlike material goods, is not always directly related to supply and demand.

But of course there is a question as to the supply of software, or more specifically, the supply of minds. Globally this is a non-issue, but in the US the matter of whether there remains a shortage of computer science grads or a shortage of jobs for the few that are coming out of computer science schools is still up for debate.

There are a couple other factors to add to the equation of software abundance.

The first is “free” software; OK, Grady didn’t fall into that rat hole, but we will. You can use free stuff like Google Docs to save money on the cost of Microsoft Office, or you can use an open source platform like Linux to avoid the overhead of Windows. Both have their value, but their value is not going to make or break the business fortunes of a company. By nature, free software will be commodity software, because everybody can get it, so it confers no strategic advantage on the user.

The second is the cloud. It makes software that is already out there more readily accessible because, if you’ve got the bandwidth, we’ve got the beer. Your company can implement new software with less of the usual pain because it doesn’t have to do the installation and maintenance itself. Well, not totally – it depends on whether your provider uses the SaaS model, where it handles all the plumbing, or whether you’re using a raw cloud, where installation and management are a la carte. But assuming your company is using a SaaS provider or somebody that mediates the ugly cloud, software to respond to your business need is more accessible than ever. As with free or open source software, wide availability means the software will be commodity; however, if your company is consuming a business application such as ERP, CRM, MRO, or supply chain management, competitive edge will come from how you configure, integrate, and consume that software. That effort will be anything but free.

The bottom line is that Abundant Software is not about the laws of supply and demand. There is at once plenty of software and not enough software and developers to go around. Software is abundant, but not always the right software; and even when it is right, it takes effort to make it righter. Similarly, abundance doesn’t mean that the software that is going to get your company out of the recession is going to be cheap.

UPDATE — Google Docs is no longer free.