Category Archives: Open Source

The Second Wave of Analytics

Throughout the recession, business intelligence (BI) was one of the few growth markets in IT. Transactional systems that report “what” is happening are simply the price of entry for remaining in a market; BI and analytics systems answer the question of “why” something is happening and, ideally, provide actionable intelligence so you know “how” to respond. Not surprisingly, understanding the whys and hows is essential for maximizing the top line in growing markets, and for pinpointing the path to survival in down markets. The latter is why BI has remained one of the few growth areas in the IT and business applications space through the recession.

Analytic databases are cool again. Teradata, the analytic database provider with a 30-year track record, had its strongest Q2 in what was otherwise a lousy 2010 for most IT vendors. Over the past year, IBM, SAP, and EMC made major acquisitions in this space, while some of the loudest decibels at this year’s Oracle OpenWorld were over the Exadata optimized database machine. There are a number of upstarts with significant venture funding, ranging from Vertica to Cloudera, Aster Data, and ParAccel, that are charting solid growth; the sheer variety of their approaches reveals that the market is far from mature and that there remains plenty of demand for innovation.

We are seeing today a second wave of innovation in BI and analytics that matches the ferment and intensity of the 1995-96 era, when data warehousing and analytic reporting went commercial. There isn’t any one thing driving BI innovation. At one end of the spectrum you have Big Data, and at the other end, Fast Data: the actualization of real-time business intelligence. Advances in commodity hardware, memory density, and parallel programming models, plus the emergence of NoSQL, open source statistical programming languages, and the cloud, are bringing all of this within reach. There is more and more data everywhere that’s begging to be sliced, diced, and analyzed.

The amount of data being generated is mushrooming, but much of it will not necessarily be persisted to storage. For instance, if you’re a power company that wants to institute a smart grid, moving from monthly to daily meter reads multiplies your data volumes by a factor of 30, and if you decide to take readings every 15 minutes (96 a day), better multiply all that again by a factor of roughly 100. Much of this data will be consumed as events. Even where it is persisted, traditional relational models won’t handle the load; the issue is not only the overhead of operating all that iron, but the concurrent need for additional equipment, floor space, HVAC, and power that comes with it.
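
To make the arithmetic concrete, here’s a back-of-the-envelope sketch; the one million meters is our own hypothetical figure, not drawn from any utility:

```java
public class MeterReadVolumes {
    public static void main(String[] args) {
        long meters = 1000000L;          // hypothetical utility with one million meters
        int daysPerMonth = 30;
        int readsPerDay = 24 * 4;        // 96 readings a day at 15-minute intervals

        long monthlyReads = meters;                      // one read per meter per month
        long dailyReads = meters * daysPerMonth;         // ~30x the monthly volume
        long intervalReads = dailyReads * readsPerDay;   // roughly another 100x on top of that

        System.out.printf("Monthly reads:    %,d%n", monthlyReads);
        System.out.printf("Daily reads:      %,d%n", dailyReads);
        System.out.printf("15-minute reads:  %,d%n", intervalReads);
    }
}
```

One million meters goes from a million rows a month to nearly three billion, and that’s before anyone asks for the event stream in real time.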

Unlike the past, when the biggest databases were maintained inside the walls of research institutions, public sector agencies, or within large telcos or banks, today many of the largest data stores on the Internet are getting opened through APIs, such as from Facebook. Big databases are no longer restricted to use by big companies.

Compare that to the 1995-96 time period, when relational databases, which made enterprise data accessible, reached critical-mass adoption; rich Windows clients, which put powerful apps on the desktop, became the enterprise standard; and new approaches emerged for optimizing data storage and productizing the kind of enterprise reporting pioneered by Information Builders. And with it all came the debates over MOLAP vs. ROLAP, star vs. snowflake schemas, and ad hoc vs. standard reporting. Ever since, BI has become ingrained with enterprise applications, as reflected by recent consolidation with the acquisitions of Cognos, Business Objects, and Hyperion by IBM, SAP, and Oracle, respectively. How much more establishment can you get?

What’s old is new again. When SQL relational databases emerged in the 1980s, conventional wisdom held that the need for indexes and related features would limit their ability to perform or scale enough to support enterprise transactional systems. Moore’s Law and the emergence of client/server made a mockery of that argument, until the web, the proliferation of XML data, smart sensory devices, and the realization that unstructured data contained valuable morsels of market and process intelligence in turn made a mockery of the argument that relational was the enterprise database end-state.

In-memory databases are nothing new either, but the same hardware commoditization trends that helped mainstream SQL have also brought the costs of these engines down to earth.

What’s interesting is that there is no single source or style of innovation. Just as 1995 proved a year of discovery and debate over new concepts, today you are seeing a proliferation of approaches: different strategies for massively parallel, shared-nothing architectures; columnar databases; massive networked and hierarchical file systems; and SQL vs. programmatic approaches. It is not simply SQL vs. a single post-SQL model, but variations that mix and match SQL-like programming with various approaches to parallelism, data compression, and use of memory. And don’t forget the application of analytic models to complex event processing for identifying key patterns in long-running events, or for combing through streaming data that arrives in torrents too fast and large to ever consider putting into persistent storage.

This time, much of the innovation is coming from the open source world, as evidenced by projects like Hadoop, the Java-based distributed computing platform developed under Apache that implements the MapReduce parallel programming model pioneered by Google; the Hive project that makes MapReduce look like SQL; and the R statistical programming language. Google has added fuel to the fire by releasing to developers its BigQuery and Prediction API for analyzing large data sets and applying predictive algorithms.
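
For those who haven’t seen the programming model, here is a deliberately simplified sketch of the MapReduce idea — the canonical word count, split into a map phase and a reduce phase. It’s written in plain Java rather than against Hadoop’s actual API, so it only illustrates the shape of the model:

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class MiniMapReduce {

    // Map phase: emit a (word, 1) pair for every word in every input line.
    static List<Map.Entry<String, Integer>> map(List<String> lines) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String line : lines) {
            for (String word : line.toLowerCase().split("\\W+")) {
                if (!word.isEmpty()) {
                    pairs.add(new AbstractMap.SimpleEntry<>(word, 1));
                }
            }
        }
        return pairs;
    }

    // Shuffle and reduce phase: group the pairs by key and sum the counts.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, Integer> pair : pairs) {
            counts.merge(pair.getKey(), pair.getValue(), Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> input = Arrays.asList(
                "big data is getting bigger",
                "fast data is getting faster");
        System.out.println(reduce(map(input)));
    }
}
```

In a real Hadoop job the map and reduce functions are distributed across a cluster and the framework handles partitioning, shuffling, and fault tolerance; the point here is simply how little ceremony the model itself demands, and why Hive’s SQL-like layer on top of it is attractive.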

These are not simply technology innovations looking for problems, as use cases for Big Data or real-time analysis are mushrooming. Want to extend your analytics from structured data to blogs, emails, instant messaging, wikis, or sensory data? Want to convene the world’s largest focus group? There’s sentiment analysis to be conducted from Facebook; trending topics for Wikipedia; power distribution optimization for smart grids; or predictive analytics for use cases such as real-time inventory analysis for retail chains, or strategic workforce planning, and so on.

Adding icing to the cake was an excellent talk at a New York Technology Council meeting by Merv Adrian, a 30-year veteran of the data management field (who will soon be joining Gartner), who outlined the findings of a comprehensive multi-client study on analytic databases that can be downloaded free from Bitpipe.

Adrian speaks of a generational disruption in the database market that is attacking new forms of an age-old problem: how to deal with expanding datasets while maintaining decent performance. It’s as mundane as that. But the explosion of data, coupled with the commoditization of hardware and increasing bandwidth, has exacerbated matters to the point where we can no longer apply the brute-force approach of tweaking relational architectures. “Most of what we’re doing is figuring out how to deal with the inadequacies of existing systems,” he said, adding that the market and the state of knowledge have not yet matured to the point where we’re thinking about how the data management scheme should look logically.

So it’s not surprising that competition has opened wide for new approaches to solving the Big and Fast Data challenges; the market has not yet matured to the point where there are one or a handful of consensus approaches around which to build a critical mass of practitioners. And the spate of vendor acquisitions over the past year that Adrian describes is just a hint of things to come.

Watch this space.

SpringSource buying Gemstone: VMware’s written all over it

There they go again. Barely a month after announcing the acquisition of message broker Rabbit Technologies, SpringSource is adding yet one more piece to its middleware stack: it has announced the acquisition of Gemstone for its distributed data caching technology.

SpringSource’s Rod Johnson told us that he was planning to acquire such a technology even before VMware came into the picture, but make no mistake about it, VMware’s presence upped the ante.

SpringSource has been looking to fill out its stack vs. Oracle and IBM ever since its cornerstone acquisition of Covalent (which brought the expertise behind Apache Tomcat and bequeathed tc Server to the world) two years ago. Adding Gemstone’s Gemfire becomes SpringSource’s response to Oracle Coherence and IBM WebSphere XD. The technologies in question allow you to replicate data from varied sources into a single logical cache, which is critical if those sources are highly dispersed.

So what about VMware? Wasn’t SpringSource planning to grow its stack anyway? There are deeper stakes at play: VMware’s aspiration to make cloud and virtualization virtually synonymous, or at least to make virtualization essential to the cloud, falls apart if you don’t have a scalable, high-performance way to manage and access data. Enterprises using the cloud are not likely to move all their data there; they need a solution that allows hybrid strategies, which will invariably involve a mix of cloud- and on-premises-based data resources, to be managed and accessed efficiently. Distributed data caching is essential.
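
To make the “single logical cache” idea concrete, here is a minimal sketch of what application code against such a cache looks like. The Cache interface and InMemoryCache class below are hypothetical stand-ins of our own, not Gemfire’s (or Coherence’s) actual API; the point is that callers see one logical key space regardless of where the entries physically live:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical abstraction: callers address one logical key space,
// regardless of whether entries live on premises or in the cloud.
interface Cache<K, V> {
    V get(K key);
    void put(K key, V value);
}

// Trivial single-node stand-in; a real data grid would partition and
// replicate entries across many nodes behind this same interface.
class InMemoryCache<K, V> implements Cache<K, V> {
    private final Map<K, V> entries = new ConcurrentHashMap<>();
    public V get(K key) { return entries.get(key); }
    public void put(K key, V value) { entries.put(key, value); }
}

public class CacheDemo {
    public static void main(String[] args) {
        Cache<String, String> orders = new InMemoryCache<>();
        orders.put("order-42", "{\"customer\": \"acme\", \"total\": 199.00}");
        System.out.println(orders.get("order-42"));
    }
}
```

Swap the single-node implementation for a grid-backed one and the calling code doesn’t change, which is exactly the property a hybrid mix of cloud and on-premises data resources needs.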

So the next question is: why didn’t SpringSource, as a historically open source company that has always made open source acquisitions, buy open source Terracotta instead? Chances are, were SpringSource still independent, it probably would have, but VMware brings deeper pockets and deeper aspirations. Gemstone is the company that sold object-oriented databases back in the 90s, and once it grew obvious that it (and other OODBMS rivals like ObjectStore) wasn’t going to become the next Oracle, it adapted its expertise to caching. Gemfire emerged in 2002 and provided Wall Street and defense agencies an off-the-shelf alternative to homegrown development or a best-of-breed strategy. By comparison, although Terracotta boasts several Wall Street clients, its core base is in web caching for high-traffic, B2C-oriented websites.

Bottom line: VMware needs the scale.

There are other interesting pieces that Gemstone brings to the party. It is currently developing SQLFabric, a project that embeds the Apache Derby open source relational database into Gemfire to make its distributed data grid fully SQL-compliant, which would be very strategic to VMware and SpringSource. It also has a shot-in-the-dark project, MagLev, which is more a curiosity for the mother ship. Conceivably it could provide the impetus for SpringSource to extend to the Ruby environment, but would require a lot more development work to productize.
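
SQLFabric itself isn’t out yet, but its key ingredient, Apache Derby running embedded and in-process over plain JDBC, is easy to show. A minimal sketch, assuming derby.jar is on the classpath (the table and data are made up for illustration):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class EmbeddedDerbyDemo {
    public static void main(String[] args) throws Exception {
        // Derby runs inside the JVM; the "memory:" subprotocol keeps the
        // database entirely in RAM (use a directory path for a persistent one).
        Connection conn = DriverManager.getConnection("jdbc:derby:memory:demo;create=true");

        Statement stmt = conn.createStatement();
        stmt.executeUpdate("CREATE TABLE trades (id INT PRIMARY KEY, symbol VARCHAR(10), qty INT)");
        stmt.executeUpdate("INSERT INTO trades VALUES (1, 'VMW', 100)");

        ResultSet rs = stmt.executeQuery("SELECT symbol, qty FROM trades");
        while (rs.next()) {
            System.out.println(rs.getString("symbol") + " x " + rs.getInt("qty"));
        }
        conn.close();
    }
}
```

What SQLFabric would add, presumably, is distributing those tables across Gemfire’s data grid so the same SQL works against partitioned, replicated data rather than a single embedded instance.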

Obviously as the deal won’t close immediately, both entities must be coy about their plans other than the obvious commitment to integrate products.

But there’s another angle that will be worth exploring once the ink dries: SpringSource has been known for simplicity. The Spring framework provided a way to abstract all the complexity out of Java EE, while tc Server, based on Tomcat, carries but a subset of the bells and whistles of full Java EE stacks. But Gemfire is hardly simple, and the market for distributed data grids has been limited to organizations with extreme processing needs who have extreme expertise and extreme budgets. Yet the move to cloud will mean, as noted above, that the need for logical data grids will trickle down to more of the enterprise mainstream, although the scope of the problem won’t be as extreme. It would make sense for the Spring framework to extend its dependency injection to a “lite” version of Gemfire (Gemcloud?) to simplify the hassle of managing data inside and outside of the cloud.
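
“Gemcloud” is of course just our speculative name, but the wiring pattern is already standard Spring practice. Below is a rough sketch of how a cache could be handed to application code through Spring’s dependency injection, using a generic interface of our own invention rather than any actual Gemfire API, so the backing implementation (local today, grid- or cloud-based tomorrow) can be swapped without touching the service code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical cache abstraction the application codes against.
interface OrderCache {
    void put(String id, String order);
    String get(String id);
}

// Simple local implementation; a Gemfire-backed bean could replace it
// later without changing OrderService at all.
class LocalOrderCache implements OrderCache {
    private final Map<String, String> entries = new ConcurrentHashMap<String, String>();
    public void put(String id, String order) { entries.put(id, order); }
    public String get(String id) { return entries.get(id); }
}

// The service only knows the interface; Spring injects the implementation.
class OrderService {
    private final OrderCache cache;
    OrderService(OrderCache cache) { this.cache = cache; }
    String lookup(String id) { return cache.get(id); }
}

@Configuration
class CacheConfig {
    @Bean
    public OrderCache orderCache() { return new LocalOrderCache(); }

    @Bean
    public OrderService orderService() { return new OrderService(orderCache()); }
}

public class SpringCacheDemo {
    public static void main(String[] args) {
        AnnotationConfigApplicationContext ctx =
                new AnnotationConfigApplicationContext(CacheConfig.class);
        ctx.getBean(OrderCache.class).put("order-42", "100 shares of VMW");
        System.out.println(ctx.getBean(OrderService.class).lookup("order-42"));
        ctx.close();
    }
}
```

The @Bean method is the only place that knows which cache implementation is in play, which is exactly the hook a hypothetical “lite” Gemfire would plug into.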

Open Source a decade later

As we and many others have opined, one of the greatest tremors to have reshaped the software business over the past decade has been the emergence of open source. Open source has changed the models of virtually every major player in the software business, from niche startups to household names. It has hollowed out sectors like content management, where open source has replaced the entry-level package; unless you’re implementing content systems as an extension of an enterprise middle-tier platform strategy, open source platforms like Joomla or the more social networking-oriented Drupal will provide a perfectly slick, professional website.

Multiple open source models, ranging from GNU copyleft to permissive Apache/BSD licensing, plus a wide range of community sourcing and open technology previews, have splattered the landscape. Of course, perennial issues, like whether open source robs Peter to pay Paul or boosts innovation, continue to persist.

What’s new is the emergence of the cloud, a form of computing that has so far resisted the kind of platform standardization that open source both created and depends on. Behind proprietary cloud APIs, does open source still matter? We sat in on a recent Dana Gardner BriefingsDirect podcast that updated the discussion, along with Paul Fremantle, chief technology officer at WSO2 and a vice president with the Apache Software Foundation; Miko Matsumura, vice president and deputy CTO at Software AG; Richard Seibt, former CEO of SUSE Linux and founder of the Open Source Business Foundation; Jim Kobielus, senior analyst at Forrester Research; JP Morgenthal, independent analyst and IT consultant; and David A. Kelly, president of Upside Research.

Read the transcript here, and listen to the podcast here.

Ubiquity vs. Ubiquity

Firing the first shot that tells you the summer’s over, Google yesterday unveiled Chrome, its skunk works project to develop its own browser. Given the dynamics of the browser space (it’s not a market, but a means to an end, which is controlling the gateway to the web), it’s not surprising that reaction can be summarized as follows:

1. It’s part of Google’s grand plan to challenge Microsoft by adding the last piece to what would be Google’s enterprise desktop, app dev platform, and cloud.

2. It muddies the waters, given that Google just extended its sponsorship of the Mozilla Foundation for three more years.

3. Chrome is ultimately intended more for the kind of “power” browsing that would be required for the enterprise desk- or webtop. The obvious goodie here is completely independent tabbed browsing, where each tab runs as its own process – meaning one tab crashing won’t bring all the others down. It’s the kind of robustness that came to Windows beginning with NT and Windows 2000, where a single application crash no longer had to take down the entire client session, and it’s about time the Internet experience became similarly robust. This obviously oversimplifies all the possible wish lists for features that could improve the browsing experience, with security being the obvious piece, but more robust tabbed browsing is an obvious missing piece.

4. Chrome originated because Google realized it had to own the entire stack and optimize the browser for the Google desktop if it were to present a viable alternative to Microsoft.

5. Google extended its Mozilla partnership because it couldn’t suddenly pull the plug and transition to a technology that is barely in alpha phase. Open source blogger Dana Blankenhorn contends both are complementary: that Google will ultimately push a dual-tiered strategy, aiming Firefox at consumers and Chrome at the enterprise.

Regardless of your take on Google’s ulterior motives, consider the source. Google, much like Microsoft, is so gargantuan and has so many resources that its product, technology, and business development strategy is highly decentralized. The typical scenario is that multiple groups vie to develop the next great thing, and the one with the best technology, market plan, and/or political skills typically wins out. In large part that’s how one can explain inconsistencies in Microsoft’s strategies, as recently revealed with Oslo, where a new workflow engine was developed in competition with the existing BizTalk Server. So we’re not surprised that the group working to optimize delivery of Google Desktop on Firefox is different from the one hatching Chrome.

But our “aha” moment came when we recalled last week’s unveiling by Mozilla of its own alpha, in this case a natural-language text search facility in Firefox code-named Ubiquity. In other words, Firefox was also treading on Google’s doorstep. So now you have the ubiquitous search and advertising engine, with a market cap of nearly $145 billion, striving to become the ubiquitous enterprise webtop and compute cloud, and a tax-exempt not-for-profit corporation that racked up maybe $6 million in sales in all of 2007 and has a respectable but hardly universal market presence, and the answer is obvious: Firefox is clearly about the consumer while Google’s dead serious about the enterprise. Or as Blankenhorn stated it in a prescient post filed just prior to the Chrome announcement, there are two Internets – the locked-down one in the office, which probably restricts you to the Microsoft IE browser, and the home Internet, where you can use Firefox or something else.

Our take is that Chrome’s prime target is replacing IE on the corporate Internet, leaving the home one as table scraps up for grabs. Our sense is that the home side is where Firefox’s Ubiquity is headed – if some third party mashed up that capability using a more graphical metaphor, it could provide a key enabler to monetizing the mobile web. But that’s another story.

Back to the Future

We had an interesting conversation with Peter Cooper-Ellis, the guy who ran product management at BEA from the time it acquired WebLogic, who’s now taking on a similar role with SpringSource. Obviously, in the wake of the Oracle acquisition, it’s not surprising that Cooper-Ellis jumped ship.

But in making the jump to SpringSource, Cooper-Ellis has come full circle. As BEA was digesting its WebLogic acquisition, Cooper-Ellis was there when the Java stack was being built. Now at SpringSource, with its Eclipse Equinox OSGi-based server container, he’s part of an open source company that’s helping deconstruct it. So we explored some history with him and compared notes. To summarize, Cooper-Ellis sees a bit of history repeating itself today: a decade ago, it was the drive for a unified middle-tier stack to make the web interactive; today, it’s the goal of a dynamic, lightweight stack that uses simpler constructs. In other words, a technology framework that actually delivers on the old promise of Internet time.

Let’s rewind the tape a bit. In the 90s, BEA (originally called Independence Technology) was formed to make a business in middleware. It thought its business would come from transaction monitors, but that only addressed the tiny portion of the market with transaction loads huge enough to justify buying another layer of software. Instead, the killer app for middleware turned out to be the appserver. When the web caught on, there was demand to add the kind of data-driven interactivity that had become real with client/server. BEA bought WebLogic, a company that bet (and won) that EJBs would become a standard (which they did with J2EE in 1999).

The good news was that J2EE (later joined by rival .NET) provided the standard middle tier that made e-commerce bloom (if you’re going to sell something, you need a database behind your online ordering system). The bad news was that J2EE was obese, proving overkill for anybody who wasn’t an Amazon, eBay, online banking, or travel reservations site – it was engineered for transaction-intensive, highly distributed, data-centric websites. Not surprisingly, the complexity of J2EE subsequently spawned a backlash in favor of Plain Old Java Objects (POJOs), supported by a growing array of open source frameworks that then-Burton Group analyst Richard Monson-Haefel made famous in 2004 as the Rebel Frameworks. More recently, there has been surging interest in dynamic scripting languages that let you manipulate data using higher-level constructs.

But so far, all these technologies were about development, not run time. That’s where Eclipse Equinox comes in. Leveraging the OSGi component model that Eclipse embraced for the IDE, Equinox extends the idea to run time. Like Java, OSGi was conceived for different purposes (Java was for set-top boxes, while OSGi was the framework for provisioning services in the smart, networked home) – you could consider both as fraternal twins reunited at adolescence. Eclipse reincarnated OSGi as the dynamic service bundle, first for its IDE (where developers could swap different vendor plug-ins at will), and more recently as a new run time.

That’s why Cooper-Ellis views OSGi as giving Java appservers a second wind. In place of installing the entire Java EE stack, OSGi lets you provision only the features you need at run time. So if you add distributed nodes, take the JMS bundle; if traffic spikes, hot-deploy clustering support; and so on. The advantage is that if you don’t need these or other bundles, you can run the server on a very tiny footprint of code, which reduces overhead and potentially makes it blazingly fast.
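
For readers who haven’t touched OSGi, the unit of provisioning is the bundle: a jar whose manifest declares what it provides and requires, plus an optional activator the framework calls when the bundle starts or stops. A minimal sketch of such an activator follows; the ClusteringService is our own hypothetical example of the kind of capability you might hot-deploy, not an actual WebLogic or SpringSource interface:

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// Hypothetical capability this bundle contributes to the running server.
interface ClusteringService {
    void joinCluster(String nodeName);
}

public class ClusteringBundle implements BundleActivator {

    private ServiceRegistration registration;

    // Called by the OSGi framework when the bundle is started, which can
    // happen at run time without restarting the server.
    public void start(BundleContext context) {
        ClusteringService service = new ClusteringService() {
            public void joinCluster(String nodeName) {
                System.out.println(nodeName + " joined the cluster");
            }
        };
        registration = context.registerService(
                ClusteringService.class.getName(), service, null);
    }

    // Called when the bundle is stopped; the capability simply goes away.
    public void stop(BundleContext context) {
        registration.unregister();
    }
}
```

The bundle’s MANIFEST.MF would name ClusteringBundle as its Bundle-Activator and declare its Import-Package dependencies; the framework resolves those at install time, which is what makes provisioning features piecemeal practical.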

That was what BEA was trying to do with the micro-Service Architecture (mSA) it announced roughly 18 months before Oracle swooped in, and it was how BEA built WebLogic Event Server, its complex event streaming engine. The product used only the Java EE features it needed, such as availability management, security, and user management; dispensed with much of the rest of the stack; and supported development with POJOs, including support for the Spring framework.

OSGi/Eclipse Equinox is part of the same return to simplicity that originally spawned popularity of POJOs and the rebel frameworks. Beyond the Java space, it’s the same trend that has driven popularity of dynamic scripting languages as faster means to developing the relatively straightforward data-driven apps that are the mainstream of the web, and it’s also the driving force behind Ajax (which is easy enough that casual developers, business analysts, or web designers can grow dangerous). Each of these has catches and limitations, but they are evidence that for the rest of us, the 80/20 rule lives when it comes to developing for the web.

Life’s Getting Interesting Again

A conversation this week with database veteran Jnan Dash reminded us of one salient fact regarding computing, and more specifically software platforms: there never was, and never will be, a single grand unifying platform that totally flattens the playing field and eradicates all differences.

Dash should know, having been part of the teams that developed DB2 and, after that, Oracle; he currently keeps himself off the street by advising tools companies that have gotten past the startup phase. For now, his gig is advising Curl, developer of a self-contained language for Rich Internet Applications (RIA) that combines declarative GUI and OO business logic inside the same language, and which had the misfortune of emerging before its time (the term RIA had yet to be coined).

Curl provides an answer to unifying one piece of the process – developing the rich front end. But it’s a far cry from the false euphoria over “write once, run anywhere” that emerged during Web 1.0, when the train of thought was a single language (Java, or later C#) for logic on a single mid-tier back end, and a universal HTML, HTTP, and TCP/IP stack for connectivity to the front end. Of course, not all web browsers were fully W3C compliant, and in the end bandwidth killed the idea of Java applets (the original vision for RIA) while disputes between Sun and Microsoft gave rise to a Java/.NET duopoly on the back end. But the end result was not only a dumbed-down thin client that was little more than a green screen with a pretty face, but also a dumbed-down IDE market, as the Java/.NET duopoly effectively made development tooling a commodity. Frankly, it made the tools market quite boring.

That’s in marked contrast to the swirl of competition that characterized the 4GL client/server era a few years before, when the emergence of two key standards (SQL databases and Windows clients) provided a standard enough target to spawn a vibrant market of competing languages and IDEs that rapidly pushed innovation. Competition among VB, SQLWindows, PowerBuilder, Delphi, and others spawned a race for ease of use, a secondary market for visual controls, simplified database connectivity, and the birth of ideas like model-driven development and unified software development lifecycles.

What’s ironic is that today, roughly a decade later, we’re still trying to reach many of those goals. Significantly, as the technology became commodity, most of the innovation shifted to process methodology (witness the birth of the Agile Manifesto back in 2001).

While agile methodologies are continuing to evolve, we sense that the pendulum of innovation is shifting back to technology. In a talk on scaling agile at the Rational Software Development Conference last week, Scott Ambler told agile folks to, in effect, grow up and embrace some more traditional methods – like performing some modeling before you start – if you’re trying agile on an enterprise scale.

More to the point, the combined impact of the emergence of Web 2.0, the emergence of open source, and the desire to simplify development (exemplified by what former Burton analyst Richard Monson-Haefel, who’s now an evangelist with Curl, termed the J2EE rebel frameworks) has spawned a new diversity of technology approaches and architectures.

Quoted in an article by John Waters, XML co-inventor and Sun director of web technologies Tim Bray recently acknowledged some of the new diversity in programming languages. “Until a few years ago, the entire world was either Java or .NET… And now all of a sudden, we have an explosion of new languages. We are at a very exciting inflection point where any new language with a good design basis has a chance of becoming a major player in the software development scene.”

Beyond languages, a partial list of innovations might include:
• A variety of open source frameworks like Spring or Hibernate that are abstracting (and simplifying use of) Java EE constructs and promoting a back to basics movement with Plain Old Java Objects (POJOs) and rethinking of the appserver tier;
• Emergence of mashups as a new path for accessible development and integration;
• Competition between frameworks and approaches for integrating design and development of Internet apps too rich for Ajax;
• Emergence of RESTful-style development as a simpler alternative for data-driven SOA (a sketch follows this list); and in turn,
• New competition for what we used to call component-based development; e.g., whether components should be formed at the programming language level (EJB, .NET) vs. web services level (Service Component Architecture, or SCA).
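
On the REST bullet above, part of the appeal is how little machinery a resource needs. Here is a minimal sketch using JAX-RS annotations; OrderResource and its URIs are illustrative, not drawn from any particular product:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// A RESTful resource: the URI, the HTTP verb, and the representation are
// the whole contract, with no WSDL or generated stubs involved.
@Path("/orders")
public class OrderResource {

    // GET /orders/42 returns an XML representation of a single order.
    @GET
    @Path("{id}")
    @Produces("application/xml")
    public String getOrder(@PathParam("id") String id) {
        return "<order><id>" + id + "</id><status>shipped</status></order>";
    }
}
```

Contrast that with a WS-* endpoint, where the same operation would drag in a WSDL contract, generated proxies, and a SOAP envelope; that difference in ceremony is what “simpler alternative for data-driven SOA” means in practice.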

In short, there are no pat answers to developing or composing applications; it’s no longer simply a matter of choosing vanilla or chocolate for the back end and using generic IDEs to churn out logic and presentation. In other words, competition has returned to software development technologies and architectural approaches, making the marketplace interesting once again.

EnterpriseDB Drops the Next Shoe

On the heels of securing a new $10 million round of venture financing barely a couple of months ago, EnterpriseDB has deepened its management bench by naming Red Hat’s former head of North American sales as CEO. As you might recall, EnterpriseDB is the company commercializing the open source PostgreSQL database, focusing on the Oracle market.

Ed Boyajian, the former Red Hat exec, succeeds founder Andy Astor, who is now concentrating on partner development. Setting aside the fact that open source has since become accepted by the enterprise mainstream, EnterpriseDB is roughly where Red Hat was when Boyajian came on board six years ago. Back then, Red Hat was developing a model for commercializing Linux, which culminated in the dual-track Fedora open source and Red Hat Enterprise Linux commercial model. And while it’s premature for Boyajian to tell us his plans for EnterpriseDB (he formally joins the staff next week), it’s not brain surgery to conclude that the company will likely travel a similar track.

Behind the headlines, what’s significant is that Astor will now spend full-time on partnerships. That’s potentially pivotal given that IBM was one of the backers of EnterpriseDB’s last round of funding. Draw your own conclusions.

Rational Ushers in the Jazz Age

We admit that we’ve been sounding like a broken record, but for the past couple of years we’ve been wondering when Rational would finally emerge from its roughly decade-long slumber. After having pioneered the concept of application lifecycle management before the term was actually coined, the company rested on its laurels with a suite of largely standalone tooling silos glued together by tape and baling wire.

This year, Rational is finally fleshing out what others have already dubbed ALM 2.0. After unveiling Jazz, its SOA-based ALM integration backplane, a year ago, it’s finally releasing the first new products to natively leverage the technology. Rational is ushering in the Jazz Age with the obvious candidate, Team Concert, which has been in beta for the better part of the year and in effect forms the gateway to Jazz. Then it adds a couple of pieces that are nice-to-haves but hardly essential.

To recap, Team Concert is the collaboration portal that encompasses source code control, work item, and build management. Then there’s Requirements Composer, a kind of in-between tool, somewhat akin to Borland’s Caliber DefineIT as a whiteboarding medium, that replaces neither partner offerings like iRise, which prototypes the look, feel, and flow of applications, nor RequisitePro, where you formally enter detailed requirements for the record. Finally, there’s Quality Manager, a quality dashboard sitting above test management that’s intended to add business-level traceability to quality issues that traditionally were tracked, more cryptically, as software defects.

Admittedly, Rational is not first to the ALM 2.0 game, but as with others that have already taken steps (e.g., Microsoft, Serena, Compuware, MKS), it’s just the beginning of a journey. The challenge in integrating the application lifecycle has always been that the constituencies are so diverse, with each requiring different blends of assets at different rhythms. For instance, while each process, from requirements to modeling, coding, and testing, should be iterative, the length and speed of iterations will vary. Unless you are heavily into agile methodology, chances are that requirements development iterations will be longer-running processes compared to, say, coding and testing. And while the artifacts of coding and testing are largely machine readable, requirements are meant for humans to decipher.

Suffice it to say that as rev 1 offerings, IBM’s not telling anybody to pull out their ReqPro or ClearCase installations. In fact, given the huge landed investments that Rational customers have in their legacy tools, it’s always going to remain a matter of federation and coexistence. Besides, when you look at the sociology of software development orgs, you are always going to have bastions of trendy and legacy practices. That’s exactly the use case that IBM painted regarding its own internal beta implementations. For now, ClearCase, ClearQuest, and ReqPro will have interfaces to Jazz; later on the roadmap, IBM is promising native integration. (Other roadmap items cover enterprise reporting, project management, and tie-in of Rational Method Composer, not to mention a future enterprise edition of Team Concert.)

After flirtation with Eclipse-based individual productivity tools, we’re glad that IBM’s finally tackling the heart and soul of the Rational product line. It’s taken a while, and it required a changing of the guard (after the golden handcuffs came off Mike Devlin, a team from the WebSphere group took over) to pull off.

For Jazz, the most obvious contrast is Microsoft Team Foundation Server, targeted at performing a similar role for .NET development. But less noticed is the Eclipse Application Lifecycle Framework (ALF) project led by Compuware and Serena. From a marketing standpoint, it’s obvious why IBM wouldn’t want to share the spotlight with players whose penetration it already dwarfs. But no matter how public or “open” the development of Jazz, the burden of proof remains on IBM to demonstrate why the world needs yet one more vendor-specific ALM framework.

Still Room for Billion-Dollar Plays: A Conversation with M.R. Rangaswami

On the eve of last year’s Software conference, Sand Hill Group principal M.R. Rangaswami spoke on the prospects for innovation in a consolidating software industry. Evidently there was some room left for innovation, witness Sun’s billion dollar acquisition of MySQL. According to Rangaswami, it points to the fact that there’s life – and value – in open source software companies beyond Red Hat.

In fact, 2007 was a year of second acts, with Salesforce joining the ranks of billion-dollar software companies. On the eve of Software 2008 next week, we had a return engagement with M.R. to get his take on what’s gone down over the past year. His first point was that, breaking with conventional wisdom, another software company had actually cracked the established order despite ongoing consolidation. “People questioned whether there would ever be another billion-dollar software company again, although of course Marc Benioff doesn’t call it that,” he noted.

But Rangaswami added that conventional wisdom wasn’t totally off, referring to the fact that a lot of promising break-outs are being gobbled up before they get the chance to go public – MySQL being Exhibit A. There’s plenty of cash among the old guard to snap up promising upstarts.

Nonetheless, there are invisible limits to the acquisition trend, most notably among SaaS (Software as a Service) providers. He ascribes the reticence to the fact that conventional software firms are scared of the disruptive effects that on demand providers could have in cannibalizing their existing businesses.

Going forward, Rangaswami expects some retrenchment. We’d put it another way – with last year’s 5 – 6% growth in IT spending, it was almost impossible for any viable ISV to not make money. Even if, perish the thought, we had been CFO for some poor ISV last year, it would have been in the black in spite of us.

But this year, with IT spending growth anticipated in the more modest 1 – 2% range, if that, there’s going to be a lot of missed numbers. IBM cleaned up in Q1, but Oracle’s and Microsoft’s numbers failed to impress (Microsoft last year was coasting on Vista upgrades). Rangaswami advises ISVs to keep the lid on development costs (he expects to see more offshoring this year), but he also says that ISVs should be “smarter” with their marketing budgets. “Do a lot more with programs that are online and use Web 2.0 technologies as opposed to some of the more traditional approaches,” he advised, pointing to channels like podcasts and YouTube. “Most people watch TV on YouTube these days,” he said, only slightly exaggerating.

Of course, Rangaswami says you can’t ignore the emergence of social computing, for which Facebook has for now become the poster child. We admit to being a bit put off by the superficial atmosphere of the place, and of course, not being under 35, why should we care what somebody did last night or who their favorite band is? But it’s become conventional wisdom that some form of social networking is bound to emerge for more professional purposes, like engaging your customers, that goes beyond the forums and chat rooms of user groups, the occasional regional or annual conferences, or the resume-oriented purpose of LinkedIn. In fact, one recent startup, Ringside Networks, is offering a “social appserver” where businesses can use Facebook apps to build their own communities on their own sites.

But Rangaswami asks, why not use some of the less serious aspects of social computing to conduct real business? Like getting your New England customers together at the next Red Sox game (just make sure that one of your New York customers doesn’t slip onto the invite list by mistake).

The theme of this year’s Software 2008 conference is what Rangaswami terms “Platform Shift.” After the upheavals of the open systems and Internet eras, it appeared that the software industry was coalescing around the Java and .NET platforms. But then on demand began making the Java vs. .NET distinction irrelevant. For instance, if you want to write to Salesforce’s platform, you do it in a stored procedure-like language that resembles, but isn’t, Java. On the horizon you have Amazon’s EC2 cloud and the Google Apps platform, and you could probably also consider Facebook to be another platform ecosystem (there are thousands of apps already written to it).

The good news is that tough times actually encourage customers to buy a couple of on-demand seats out of petty cash, because doing so sharply limits risk.

The result is that barriers to entry for new software solution providers are lower than ever. You don’t have to ask customers to install software and you don’t have to build the on demand infrastructure to host it. Just build the software, then choose whose cloud you want to host it on, pay only by the drink, and start marketing. According to Rangaswami, the cloud might remove the hurdles to getting started, but getting your brand to emerge from the fog may prove the toughest challenge. “Sales and marketing in this new world will be totally different.”

Microsoft: Glasnost or “We Will Bury You?”

ZDNet’s Jason Perlow had some interesting observations upon returning from Microsoft’s Technology Summit last week. For starters, Perlow noted Microsoft’s acknowledgement of, and implied support for, the work that Novell’s Miguel de Icaza and colleagues on the Mono project are directing at porting key Microsoft desktop technologies to Linux. He speculated that the work of Microsoft’s Open Source Lab to achieve interoperability with key open source projects like Samba and Apache is being driven not only by EU directives to open up its interfaces, but also by the desire to increase the visibility and utilization of Microsoft platforms on open source systems.

But Perlow felt uncomfortable equating these olive branches with Glasnost. “It would be difficult to say that Steve Ballmer is Microsoft’s Mikhail Gorbachev – his patent and intellectual property saber rattling in the past year would seem to put him more firmly in the Nikita Khrushchev shoe-banging ‘We will bury you’ on the podium camp rather than be characterized like the reformative and cuddly Gorby.” Per our observations a couple weeks back at EclipseCon, it looks more like ping pong diplomacy, where Microsoft is testing the waters for a market where open source has gone mainstream.