Category Archives: Application Development

Guest Post: Beyond Agile

And now for something completely different. This week, we offer a guest post from my Ovum colleague and agile methodology expert Michael Azoff.

Software development is more art than science: more about sociology than computer science — Agile has demonstrated that. The dream of computer scientists back in the 1970s, the era of the birth of computing, was that all you needed was a perfect specification and that programmers simply had to implement that spec. And what was implied? Maybe one day, you could automate that step and remove the need for a human programmer. Of course that dream didn’t happen and could never be fulfilled. The reasons are twofold: change and people.

* Change: because you can never nail down a spec perfectly upfront for most projects. Change is introduced during the lifetime of the project, so even if you had that perfect spec it can easily go stale.
* People: because for anything but the smallest projects, you need a team or multiple teams. And when people interact, there is scope for miscommunication and misunderstandings.

It is not a joke when project leaders, looking back on large-scale project failures, say that if they could rewind history and try again, they would pick the ten ablest developers instead of the hundred who were actually used, and get the job done in a fraction of the time.

Fast forward to today: Agile methodology has reached beyond the innovators and visionaries and has arguably gone mainstream. In practice that means various contortions and customizations of Agile methodologies exist, entwined with other processes and methodologies found in organizations, including neo-waterfall.

Neo-waterfall is an interesting case. I use that term because I do not believe developers ever did strict waterfall — if they had, the job would never have been accomplished. So there was a hint of agile even in classic waterfall. Developers generally do what is necessary to get the job done and present the results to management in whatever form management expects. Some form of iteration is essential (call it rework, doing it twice, or whatever) because most software requirements are unique, and getting the implementation right the first time is difficult.

So now we have a situation where Agile adoption has reached the masses and organizations are ready to try it alongside other options, or, in some cases, using only Agile and nothing else. The question is: where do we go from here? Have we now solved the software development problem? (To recognize that there is a problem, read Fred Brooks’ The Mythical Man-Month). First of all, the overall (research and anecdotal) evidence is in favor of Agile: it is a step in the right direction (actually major strides forward). Agile methodologies solve development project management problems better than other known methodologies and processes.

However, Agile is not the end of the software development road. There is a “beyond Agile.” The idea is to retain the strengths of Agile and improve its weaker areas.

On the strengths side: the values and principles expressed in the Agile Manifesto; the philosophy of adaptability and continuous learning (there is good overlap with Lean thinking here); the embracing of change; the emphasis on delivering business value; the iteration heartbeat; the retrospectives that make continuous learning happen; the use of testing throughout the lifecycle; gaining feedback from users; getting the business involved; applying macro-management to the team, with a multi-skilled team self-organizing. The list continues: pair programming, test-driven development, etc.

However, what will change in ‘beyond Agile’ are the areas that Agile has addressed less well. The emphasis in early-phase Agile has been on small teams where possible. The problem is that some enterprise projects are very large scale and need a lot of teams working on a global basis. Various Agile development groups have addressed these issues, but there is no consensus. The use of architecture and modeling also varies across these approaches. I expect some new form of Agile-friendly architecture and modeling will emerge. Certainly, the technology needs to improve: nothing quite beats the adaptability and versatility (the agility) of programming languages — and creating software by drawing UML diagrams alone is dreadfully dull.

Another fault line has to do with QA and testing. I have listened to developers describe how they had to bypass the (non-Agile) QA facility within their organization because it became a bottleneck, taking on the job of QA themselves. That illustrates how QA and development have become separated in some organizations. I envisage that ‘beyond Agile’ will see QA and testing (the whole range, not just developer testing) become better integrated with development. While Agile developers have embraced quality and testing, the expertise in traditional QA and testing should not be lost.

Managing Agile stories in vast numbers running into the thousands, dealing with their interdependencies, and handling the transformation from business orientation to technical orientation — this is another area that could benefit from refinement.

While Agile expands its reach beyond development into operations in DevOps, and into business development (where Lean thinking is already established), the question is whether in the future the practices will be recognizably Agile or follow a new development wave. My hunch is that it will be recognizably rooted in our current understanding of Agile. It would be just fine if Agile became so established and traditional that we called it simply ‘software development’, without further distinction.

Event notice: Special 10th anniversary: The Agile Alliance’s Agile2011 Conference, Aug 8-12, 2011, will be revisiting Salt Lake City, Utah, where the Agile Manifesto was written back in 2001. I’m told the original signatories of the Agile Manifesto will be on stage to debate the progress of Agile.

How should enterprises navigate forks in the Hudson?

A South Jersey neighbor of ours — runner, educator, and open source mischief maker Bob Bickel — recently blogged a status report on what’s been going on with the Jenkins open source project ever since it split off from Hudson.

That’s prompted us to wade in to ask the question that’s been glossed over by the theatrics: what about the user?

For background: This is a case of a promising grassroots technology that took off beyond expectation and became a victim of its own success: governance just did not keep up with the project’s growing popularity and attractiveness to enterprise developers. The sign of a mature open source project is that its governing body has successfully balanced the conflicting pressures of constant innovation vs. the need to slow things down for stable, enterprise-ready releases. Hudson failed in this regard.

That led to unfortunate conflicts that degenerated to stupid, petty, and entirely avoidable personality squabbles that in the end screwed the very enterprise users that were pumping much of the oxygen in. We know the actors on both sides – who in their everyday roles are normal, reasonable people that got caught up in mob frenzy. Both sides shot themselves in the feet as matters careened out of control. Go to SD Times if you want the blow by blow.

So what is Hudson – or Jenkins – and why is it important?

Hudson is an open source continuous integration (CI) server project that grew very popular among Java developers. The purpose of a CI server is to support the agile practice of continuous integration by maintaining the latest copy of the truth: it continuously checks out, builds, and tests the codebase as changes are committed. The project was the brainchild of Kohsuke Kawaguchi, formerly of Sun and Oracle and now at CloudBees.
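For readers who want a more concrete picture, here is a minimal conceptual sketch of that core loop. It is purely illustrative (not Hudson or Jenkins code), and the SourceRepository, Builder, and BuildResult types are hypothetical stand-ins for the real integrations.

```java
// Conceptual sketch of a CI server's core loop -- illustrative only,
// not actual Hudson/Jenkins code. The nested types are hypothetical
// stand-ins for real SCM and build integrations.
public class ContinuousIntegrationLoop {

    interface SourceRepository {
        String latestRevision();          // e.g. a commit identifier
        void checkout(String revision);   // fetch that revision locally
    }

    interface BuildResult {
        boolean succeeded();
        String summary();
    }

    interface Builder {
        BuildResult build(String revision);  // compile and run the tests
    }

    private String lastBuiltRevision = "";

    // Poll the repository; whenever a new revision appears, build and test it,
    // so the server always holds the latest known state of the "truth".
    public void pollOnce(SourceRepository repo, Builder builder) {
        String head = repo.latestRevision();
        if (head.equals(lastBuiltRevision)) {
            return;  // nothing new to integrate
        }
        repo.checkout(head);
        BuildResult result = builder.build(head);
        lastBuiltRevision = head;
        System.out.println("Built " + head + ": " + result.summary());
        // A real CI server would also notify the team, archive artifacts,
        // and expose the status on a dashboard.
    }
}
```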

Since the split, the project has forked into the Hudson and Jenkins branches, with Jenkins attracting the vast majority of committers and much livelier mailing list activity. Bickel has given us a good snapshot from the Jenkins side, with which he’s aligned: a diverse governance body has been established that plans to publish the results of its meetings and to commit not only to continuing the crazy schedule of weekly builds, but also to “stable” quarterly releases. The plan is to go “stable” with the recent 1.400 release, for which a stream of patches is underway.

So most of the committers have gone to Jenkins. Case closed? From the Hudson side, Jason van Zyl of Sonatype, whose business was built around Apache Maven, states that the essential plug-ins are already in the existing Hudson version, and that the current work is more about consolidating the technology already in place, testing it, and refactoring to comply with JSR 330, built around the dependency injection technology popularized by the Spring framework. Although the promises are to keep the APIs stable, this is going to be a rewrite of the innards of Hudson.
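For context on what JSR 330 compliance means in practice, here is a minimal, generic example of JSR 330-style constructor injection using the javax.inject annotations. It illustrates the standard, not Hudson’s actual internals; the ArtifactRepository and BuildService types are invented for the example, and a JSR 330-compliant container (Guice or Spring, for instance) is assumed to do the wiring at runtime.

```java
import javax.inject.Inject;
import javax.inject.Named;
import javax.inject.Singleton;

// Generic illustration of JSR 330 (javax.inject) dependency injection --
// not Hudson's actual internals. The types here are invented for the example.
interface ArtifactRepository {
    void store(String artifactId);
}

@Singleton
class FileArtifactRepository implements ArtifactRepository {
    public void store(String artifactId) {
        System.out.println("Storing " + artifactId + " on disk");
    }
}

@Named("buildService")               // gives the component a lookup name
class BuildService {
    private final ArtifactRepository repository;

    @Inject                          // a JSR 330 container supplies this dependency
    BuildService(ArtifactRepository repository) {
        this.repository = repository;
    }

    void publish(String artifactId) {
        repository.store(artifactId);
    }

    public static void main(String[] args) {
        // Without a container you can still wire things by hand; a JSR 330
        // container (Guice, Spring, etc.) automates this step by reading the annotations.
        BuildService service = new BuildService(new FileArtifactRepository());
        service.publish("hudson-core-1.400.jar");
    }
}
```

The appeal of standardizing on these annotations is that components declare what they need rather than how to construct it, which is the kind of decoupling that makes an internal rewrite easier to attempt while holding the public APIs steady.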

Behind the scenes, Sonatype is competing on the natural affinity between Maven and Hudson, which share a large mass of common users, while the emerging force behind Jenkins is CloudBees, which wants to establish itself as the leading platform for Java development in the cloud.

So if you’re wondering what to do, join the crowd. There are bigger commercial forces at work, but as far as you’re concerned, you want stable releases that don’t break the APIs you already use. Jenkins must prove it’s not just the favorite of the hard core, but that its governance structure has grown up to provide stability and assurance to the enterprise, while Hudson must prove that the new rewrite won’t destabilize the old, and that it has managed to retain the enterprise base in spite of all the noise otherwise.

Stay tuned.

April 28, 2011 update. Bob Bickel has reported to me that since the “divorce,” Jenkins has drawn 733 commits vs. 172 for Hudson.

May 4, 2011 update. Oracle has decided to submit the Hudson project to the Eclipse Foundation. Eclipse board member Mik Kersten voices his support of this effort. Oracle says it didn’t consider this before because going to Eclipse was originally perceived as being too heavyweight. This leaves us wondering: why didn’t Oracle propose this earlier? Where was the common sense?

Big Data analytics in the cloud could be HP’s enterprise trump card

Unfortunately, scheduling conflicts have kept us from attending Leo Apotheker’s keynote today before the HP Analyst Summit in San Francisco. But yesterday, he tipped his cards for his new software vision for HP before a group of investment analysts. HP’s software focus is not to reinvent the wheel – at least when it comes to enterprise apps. Apotheker has to put to rest any notion that he’s about to stage a grudge match and buy the company that dismissed him. There is already plenty of coverage here, interesting comment from Tom Foremski (we agree with him about SAP being a non-starter), and the Software Advice guys who are conducting a poll.

To some extent this has been little surprise with HP’s already stated plans for WebOS and its recently announced acquisition of Vertica. We do have one question though: what happened to Converged Infrastructure?

For now, we’re not revisiting the acquisition stakes, although if you follow the #HPSummit Twitter tag, you’ll probably see lots of ideas floating around after 9am Pacific time today. We’ll instead focus on the kind of company HP wants to be, based on its stated objectives.

1. Develop a portfolio of cloud services from infrastructure to platform services and run the industry’s first open cloud marketplace that will combine a secure, scalable and trusted consumer app store and an enterprise application and services catalog.

This hits two points on the checklist. The first is providing a natural market for all those PCs that HP sells. The second is that HP wants to venture higher up the food chain than just selling lots of iron, which certainly makes sense. Where we have a question is the breadth: offering cloud services to consumers, the enterprise, and developers sounds at first blush like HP wants its cloud to be all things to all people.

The good news is that HP has a start on the developer side, where it has been offering performance testing services for years – but it is now catching up to providers like CollabNet (with which it is aligned, and which would make a logical acquisition candidate) and Rally in offering higher value planning services for the app lifecycle.

In the other areas – consumer apps and enterprise apps – HP is starting from square one. It obviously must separate the two, as cloud is just about the only thing that the two have in common.

For the consumer side, HP (like Google Android and everyone else) is playing catchup to Apple. It is not simply a matter of building it and expecting they will come. Apple has built an entire ecosystem around its iOS platform that has penetrated content and retail – challenging Amazon, not just Salesforce or a would-be HP, using its user experience as the basis for building a market for an audience that is dying to be captive. For its part, HP hopes to build WebOS to have the same “Wow!” factor as the iPhone/iPad experience. It’s got a huge uphill battle on its hands.

For the enterprise, it’s a more wide-open space where only Salesforce’s AppExchange has made any meaningful mark. Again, the key is a unifying ecosystem, with the most likely outlet being enterprise outsourcing customers of HP’s Enterprise Services (the former EDS operation). The key principle is that when you build a marketplace, you have to identify who your customers are and give them a reason to visit. A key challenge, as we’ve stated in our day job, is that enterprise applications are not the equivalent of those $2.99 apps that you’ll see in the App Store. The experience at Salesforce – the classic inversion of the long tail – is that the market is primarily for add-ons to the Salesforce.com CRM application or for use of the Force.com development platform, but most entries simply get buried deep down the list.

Enterprise apps marketplaces are not simply going to provide a cheaper channel for solutions that still require consultative selling. We’ve suggested that they adhere more to the user group model, which also includes forums, chats, exchanges of ideas, and, by the way, places to get utilities that can make enterprise software programs more useful. Enterprise app stores are not an end in themselves, but a means for reinforcing a community — whether it be for a core enterprise app or, for HP more likely, for the community of outsourcing customers that it already has.

2. Build webOS into a leading connectivity platform.
HP clearly hopes to replicate Apple’s success with iOS here – the key being that it wants to extend the next-generation Palm platform to its base of PCs and other devices. This one’s truly a Hail Mary pass designed to rescue the Palm platform from irrelevance in a market where iOS, Android, Adobe Flash, BlackBerry, and Microsoft Windows 7/Silverlight are battling it out. Admittedly, mobile developers have always tolerated fragmentation as a fact of life in this space – but of course that was when the stakes (with feature phones) were rather modest. With smart devices – in all their varied form factors from phone to tablet – becoming the next major consumer (and to some extent, enterprise) frontier, there is a fresh battle for mindshare. That mindshare will be built on the size of the third-party app ecosystems that these platforms attract.

As Palm was always more an enterprise than a consumer platform – before the BlackBerry eclipsed it – HP’s likely WebOS venue will be the enterprise space. That is another uphill battle, against Microsoft (which has the office apps), BlackBerry (with its substantial corporate email base), and yes, Apple, where enterprise users are increasingly sneaking iPhones in the back door, just as they did with PCs 25 years ago.

3. Build presence with Big Data
Like (1), this also hits a key checkbox for where to sell all those HP PCs. HP has had a half-hearted presence with the discontinued Neoview business. The Vertica acquisition was clearly the first one that had Apotheker’s stamp on it. Of HP’s announced strategies, this is the one that aligns closest with the enterprise software strategy that we’ve all expected Apotheker to champion. Obviously Vertica is the first step here – and there are many logical acquisitions that could fill this out, as we’ve noted previously regarding Tibco, Informatica, and Teradata. The important point is that classic business intelligence never really suffered through the recession, and arguably, big data is becoming the next frontier for a BI capability that is no longer just a nice-to-have, but increasingly an expected cost of competition.

What’s interesting so far is that in all the talk about Big Data, there’s been relatively scant attention paid to utilizing the cloud to provide the scaling needed to conduct such analytics. We foresee a market where organizations that don’t necessarily want to buy all that infrastructure, and that run large advanced analytics jobs on an event-driven basis, consume the cloud for their Hadoop – or Vertica – runs. Big Data analytics in the cloud could be HP’s enterprise trump card.

The Second Wave of Analytics

Throughout the recession, business intelligence (BI) was one of the few growth markets in IT. Given that transactional systems that report “what” is happening are simply the price of entry for remaining in a market, BI and analytics systems answer the question of “why” something is happening, and ideally, provide intelligence that is actionable so you can know ‘how’ to respond. Not surprisingly, understanding the whys and hows are essential for maximizing the top line in growing markets, and pinpointing the path to survival in down markets. The latter reason is why BI has remained one of the few growth areas in the IT and business applications space through the recession.

Analytic databases are cool again. Teradata, the analytic database provider with a 30-year track record, had its strongest Q2 in what was otherwise a lousy 2010 for most IT vendors. Over the past year, IBM, SAP, and EMC made major acquisitions in this space, while some of the loudest decibels at this year’s Oracle OpenWorld were over the Exadata optimized database machine. There are a number of upstarts with significant venture funding, ranging from Vertica to Cloudera, Aster Data, and ParAccel, that are not only charting solid growth but whose varied approaches reveal that the market is far from mature and that there remains plenty of demand for innovation.

We are seeing today a second wave of innovation in BI and analytics that matches the ferment and intensity of the 1995-96 era, when data warehousing and analytic reporting went commercial. There isn’t any one thing driving BI innovation. At one end of the spectrum, you have Big Data, and at the other end, Fast Data — the actualization of real-time business intelligence. Advances in commodity hardware, memory density, and parallel programming models, together with the emergence of NoSQL, open source statistical programming languages, and the cloud, are bringing all of this within reach. There is more and more data everywhere that’s begging to be sliced, diced, and analyzed.

The amount of data being generated is mushrooming, but much of it will not necessarily be persisted to storage. For instance, if you’re a power company that wants to institute a smart grid, moving from monthly to daily meter reads multiplies your data volumes by a factor of 30, and if you decide to take readings every 15 minutes, multiply all of that again by a factor of nearly 100. Much of this data will be consumed as events. Even if some of it is persisted, traditional relational models won’t handle the load. The issue is not only the overhead of operating all that iron, but the concurrent need for additional equipment, space, HVAC, and power.
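As a quick back-of-the-envelope check on those multipliers (a sketch using only the cadences cited above; the exact 15-minute figure is 96 readings per day, which rounds to the factor of 100 cited):

```java
// Back-of-the-envelope check on the smart-meter reading multipliers cited above.
public class MeterReadVolumes {
    public static void main(String[] args) {
        double daysPerMonth = 30.0;
        double readsPerDay15Min = 24 * 4;          // one read every 15 minutes = 96 per day

        double monthlyReads = 1;                   // one read per meter per month
        double dailyReads = daysPerMonth;          // ~30x the monthly volume
        double quarterHourReads = daysPerMonth * readsPerDay15Min; // ~2,880x the monthly volume

        System.out.printf("daily vs monthly: %.0fx%n", dailyReads / monthlyReads);
        System.out.printf("15-minute vs daily: %.0fx%n", quarterHourReads / dailyReads);
        System.out.printf("15-minute vs monthly: %.0fx%n", quarterHourReads / monthlyReads);
    }
}
```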

Unlike the past, when the biggest databases were maintained inside the walls of research institutions, public sector agencies, or within large telcos or banks, today many of the largest data stores on the Internet are getting opened through APIs, such as from Facebook. Big databases are no longer restricted to use by big companies.

Compare that to the 1995-96 time period, when relational databases, which made enterprise data accessible, reached critical-mass adoption; rich Windows clients, which put powerful apps on the desktop, became the enterprise standard; and new approaches emerged for optimizing data storage and productizing the kind of enterprise reporting pioneered by Information Builders. And with it all came the debates over OLAP (or MOLAP) vs. ROLAP, star vs. snowflake schema, and ad hoc vs. standard reporting. Ever since, BI has become ingrained in enterprise applications, as reflected by the recent consolidations that saw Cognos, Business Objects, and Hyperion acquired by IBM, SAP, and Oracle respectively. How much more establishment can you get?

What’s old is new again. When SQL relational databases emerged in the 1980s, conventional wisdom was that the need for indexes and related features would limit their ability to perform or scale to support enterprise transactional systems. Moore’s Law and emergence of client/server helped make mockery of that argument until the web, proliferation of XML data, smart sensory devices, and realization that unstructured data contained valuable morsels of market and process intelligence, in turn made mockery of the argument that relational was the enterprise database end-state.

In-memory databases are nothing new either, but the same hardware commoditization trends that helped mainstream SQL have also brought the costs of these engines down to earth.

What’s interesting is that there is no single source or style of innovation. Just as 1995 proved a year of discovery and debate over new concepts, today you are seeing a proliferation of approaches, ranging from different strategies for massively parallel, shared-nothing architectures to columnar databases, massive networked and hierarchical file systems, and SQL vs. programmatic approaches. It is not simply SQL vs. a single post-SQL model, but variations that mix and match SQL-like programming with various approaches to parallelism, data compression, and use of memory. And don’t forget the application of analytic models to complex event processing for identifying key patterns in long-running events, or combing through streaming data that arrives in torrents too fast and too large to ever consider putting into persistent storage.

This time, much of the innovation is coming from the open source world, as evidenced by projects like Hadoop, the Java-based distributed computing platform inspired by Google’s work and developed as an Apache project; the MapReduce parallel programming model developed by Google; the Hive project that makes MapReduce look like SQL; and the R statistical programming language. Google has added fuel to the fire by releasing to developers its BigQuery and Prediction API for analyzing large sets of data and applying predictive algorithms.
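To give a concrete flavor of the MapReduce model mentioned above, here is a toy, single-process word count sketch of its two phases. Real Hadoop jobs express the same idea through the framework’s Mapper and Reducer APIs and run distributed across a cluster; this is just the shape of the computation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy, single-process illustration of the MapReduce model: a word count.
// Real Hadoop/MapReduce jobs express the same two phases through the
// framework's APIs and run them distributed over a cluster.
public class ToyMapReduce {
    public static void main(String[] args) {
        List<String> documents = List.of(
                "big data is big",
                "fast data is fast");

        // Map phase: each input record is turned into (word, 1) pairs.
        List<Map.Entry<String, Integer>> mapped = new ArrayList<>();
        for (String doc : documents) {
            for (String word : doc.split("\\s+")) {
                mapped.add(Map.entry(word, 1));
            }
        }

        // Shuffle + reduce phase: values are grouped by key and summed.
        Map<String, Integer> reduced = new HashMap<>();
        for (Map.Entry<String, Integer> pair : mapped) {
            reduced.merge(pair.getKey(), pair.getValue(), Integer::sum);
        }

        System.out.println(reduced);  // e.g. {big=2, data=2, fast=2, is=2}
    }
}
```

Hive’s contribution, as noted above, is to let you express this kind of job as a SQL-like query rather than writing the map and reduce steps by hand.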

These are not simply technology innovations looking for problems, as use cases for Big Data or real-time analysis are mushrooming. Want to extend your analytics from structured data to blogs, emails, instant messaging, wikis, or sensory data? Want to convene the world’s largest focus group? There’s sentiment analysis to be conducted from Facebook; trending topics for Wikipedia; power distribution optimization for smart grids; or predictive analytics for use cases such as real-time inventory analysis for retail chains, or strategic workforce planning, and so on.

Adding icing to the cake was an excellent talk at a New York Technology Council meeting by Merv Adrian, a 30-year veteran of the data management field (who will soon be joining Gartner) who outlined the content of a comprehensive multi-client study on analytic databases that can be downloaded free from Bitpipe.

Adrian speaks of a generational disruption occurring in the database market, one that is attacking new forms of age-old problems: how to deal with expanding datasets while maintaining decent performance. It is as mundane as that. But the explosion of data, coupled with the commoditization of hardware and increasing bandwidth, has exacerbated matters to the point where we can no longer apply the brute force approach of tweaking relational architectures. “Most of what we’re doing is figuring out how to deal with the inadequacies of existing systems,” he said, adding that the market and state of knowledge have not yet matured to the point where we’re thinking about how the data management scheme should look logically.

So it’s not surprising that competition has opened wide for new approaches to solving the Big and Fast Data challenges; the market has not yet matured to the point where there are one or a handful of consensus approaches around which to build a critical mass practitioner base. But when Adrian describes the spate of vendor acquisitions over the past year, it’s just a hint of things to come.

Watch this space.

IBM’s Software Complex

Sometimes the news is that there is no news. Well, Steve Mills did tell us that IBM is investing the bulk of its money in software and that between now and 2015, it would continue to make an average of $4 – 5 billion worth of strategic acquisitions per year. In other words, it would continue on its current path, making acquisitions for the strategic value of the technology, with the guideline of having them become revenue accretive within 2 – 4 years. Again, nothing new, as if there were anything wrong with that.

The blend of acquisition and organic development is obviously bulking up the Software Group’s product portfolio, which in itself is hardly a bad thing; there is more depth and breadth. But the issue that IBM has always faced is that of complexity. The traditional formula has always been: we have the pieces, and we have the services to put them together for you. Players like Oracle compete with a packaged apps strategy; in more specialized areas such as project portfolio management, rivals like HP and CA Technologies say they have one product where IBM splits it in two.

IBM continues to deny that it is in the apps business, but its architectural slides tell a different story: a stack based on middleware; horizontal “solutions” such as the SPSS Decision Manager offering (more about that shortly); vertical industry frameworks that specify processes, best practices, and other assets that can be used to compose industry solutions; and then, at the top of the stack, solutions that IBM and/or its services group develops. It’s at the peak of the stack that the difference between “solutions” and “applications” becomes academic. Reviewing Oracle’s yet-to-be-released Fusion applications reveals a similar architecture that composes solutions from modular building blocks.

So maybe IBM feels self-conscious about the term application as it doesn’t want to be classed with Oracle or SAP, or maybe it’s the growing level of competition with Oracle that made Mills rather defensive in responding to an analyst’s question about the difference between IBM’s and Oracle’s strategy. His response was that IBM’s is more of a toolkit approach that layers atop the legacy that will always be there, which is reasonable, although the tone was more redolent of “you [Oracle] can’t handle the truth.”

Either way, whether you sell a solution or a packaged application at the enterprise level, assembly will still be required. Services will be needed to integrate it and/or train your people. Let’s be adults and get that debate behind us. For IBM, it’s time to get back to Issue One: Defusing Complexity. When you’re dealing with enterprise software, there will always be complexity. But when it comes to richness or simplicity, IBM tends to aim for the former. The dense slides with small print made the largely middle-aged analyst audience more self-conscious than normal about the inadequacies of their graduated eyeglasses or contacts.

OK, if you’re IBM facing an analyst crowd, you don’t want to oversimplify the presentation into the metaphorical equivalent of the large print weekly for the visually impaired. You must prove that you have depth. You need to show a memorable, coherent message (Smarter Planet was great when it débuted two years ago). But most importantly, you need to have coherent packaging and delivery to market.

IBM Software Group has done a good job of repurposing technologies across brands to fill defined product needs; it still has its work cut out for its goal of making the software group sales force brand agnostic. That is going to take time.

As a result, good deeds don’t go unpunished, with IBM’s challenges with SPSS Decision Manager a great case in point. The product, an attempt to craft a wider market for SPSS capabilities, blends BI analytics from Cognos, rules management from Ilog, and event processing from WebSphere Business Events to develop a predictive analytics solution for fashioning business strategy aimed at line of business users.

For instance, if you are in the claims processing group of an auto insurance company, you can use form-based interfaces to vary decision rules and simulate the results to ensure that accident calls from 19 year old drivers or those who have not yet contacted the police are not fast tracked for settlement.
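To make that concrete, the kind of rule being varied might look something like the sketch below. This is purely illustrative and is not how SPSS Decision Manager represents rules (there, a business user would edit the thresholds through the form-based interface and simulate the outcome against historical claims); the damage cap is an invented third condition.

```java
// Illustrative sketch of the kind of fast-track rule described above --
// not how SPSS Decision Manager actually represents rules. The first two
// thresholds come from the example in the text (driver aged 19, police not
// yet contacted); the damage cap is invented for illustration.
public class ClaimTriage {

    static class Claim {
        final int driverAge;
        final boolean policeContacted;
        final double estimatedDamage;

        Claim(int driverAge, boolean policeContacted, double estimatedDamage) {
            this.driverAge = driverAge;
            this.policeContacted = policeContacted;
            this.estimatedDamage = estimatedDamage;
        }
    }

    // A claim is eligible for fast-track settlement only when no risk flag applies.
    static boolean eligibleForFastTrack(Claim claim) {
        if (claim.driverAge <= 19) return false;          // young-driver flag
        if (!claim.policeContacted) return false;          // no police report yet
        if (claim.estimatedDamage > 10_000) return false;  // hypothetical damage cap
        return true;
    }

    public static void main(String[] args) {
        Claim risky = new Claim(19, false, 4_000);
        Claim routine = new Claim(45, true, 2_500);
        System.out.println(eligibleForFastTrack(risky));   // false
        System.out.println(eligibleForFastTrack(routine)); // true
    }
}
```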

The problem with Decision Manager is that it is not a single SKU or install; IBM has simply pre-integrated components that you still must buy a la carte. IBM Software is already integrating product technologies; it now needs to attend to integrating delivery.

Pragmatism over Harmony

Times like this morning, we can really appreciate that the east coast is the right coast, as you get a nice early start on the day. We caught the announcement of IBM’s agreement with Oracle to shift JDK development efforts from Apache Harmony to Oracle’s (formerly Sun’s) OpenJDK project for about 5 minutes as we were literally running to the JetBlue gate at JFK on the way to SFO. By the following morning, bright and early west coast time, the verdict was out: the announcement clearly knocks the pillars out from under Google’s Dalvik JVM project, which was based on Apache Harmony.

Good summaries of the announcement are available from Gavin Clarke; great commentary is available from Shain McGlaun, Carlo Daffara, and Sacha Labourey.

As Thinkovation’s Gary Barnett tweeted to us yesterday, the announcement was Deja moo all over again. We recall sitting in the audience at JavaOne 2005, where Steve Mills appeared in a video reiterating support for the Java Community Process in spite of differences with Sun over its more-than-equal role in governing the JCP, and, of course, in spite of IBM’s successful effort in cultivating Eclipse as the rival to Sun in the Java tooling space.

One of the critiques of Sun over the years regarding Java was that (1) it couldn’t make a viable business of it and (2) its stewardship over the JCP and the OpenJDK was too heavy-handed. Well, the first critique would hardly apply to Oracle, with its recent filing of litigation against Google over the so-called clean-room Dalvik JVM, which Google created based on content from the Apache Harmony JDK (the rival to OpenJDK in which IBM was heavily invested).

IBM’s Bob Sutor characterized the agreement as “the pragmatic choice,” providing a way to work with Oracle from the inside to open up access to the Java certification tests that it denied Apache.

Clearly the bystander is Google; with IBM’s agreement to dump Apache Harmony, Google is left exposed, as its Dalvik code uses class libraries from the Harmony project that IBM has now abandoned. In essence, IBM didn’t have a dog in the Oracle/Google fight over Android: if you can’t beat ‘em, join ‘em. Google has become the giant pin cushion: it sits on piles of search ad dollars yet, as a player in the Java community, remains an outsider whose not-invented-here mentality has not exactly conjured grassroots support.

Labourey, formerly of JBoss, summarized the prospects for the Red Hats of the world to essentially accept the new status quo. For Google, it’s another fact on the ground that, in the end, will likely result in its coming to a royalty settlement with Oracle sooner rather than later.

Oct. 20 Update — The Register’s Gavin Clarke has a scoop on the fallout at Apache.

Leo Apotheker to target HP’s forgotten business

Ever since its humble beginnings in the Palo Alto garage, HP has always been kind of a geeky company – in spite of Carly Fiorina’s superficial attempts to prod HP towards a vision thing during her aborted tenure. Yet HP keeps talking about getting back to that spiritual garage.

Software has long been the forgotten business of HP. Although – surprisingly – the software business was resuscitated under Mark Hurd’s reign (revenues have more than doubled as of a few years ago), software remains almost a rounding error in HP’s overall revenue pie.

Yes, Hurd gave the software business modest support. Mercury Interactive was acquired under his watch, giving the business a degree of critical mass when combined with the legacy OpenView business. But during Hurd’s era, there were much bigger fish to fry beyond all the internal cost cutting, for which Wall Street cheered but insiders jeered. Converged Infrastructure has been the mantra, reminding one and all that HP was still very much a hardware company. The message remains loud and clear with HP’s recent 3PAR acquisition at a heavily inflated $2.3 billion, which was concluded in spite of the interim leadership vacuum.

The dilemma that HP faces is that, yes, it is the world’s largest hardware company (they call it technology), but the bulk of that is from personal systems. Ink, anybody?

The converged infrastructure strategy was a play at the CTO’s office. Yet HP is a large enough company that it needs to compete in the leagues of IBM and Oracle, and for that it needs to get meetings with the CEO. Ergo the rumors of feelers made to IBM Software’s Steve Mills, the successful offer to Leo Apotheker, and the agreement for Ray Lane to serve as non-executive chairman.

Our initial reaction was one of disappointment; others have felt similarly. But Dennis Howlett feels that Apotheker is the right choice “to set a calm tone” and signal that there won’t be a massive, debilitating reorg in the short term.

Under Apotheker’s watch, SAP stagnated, hit by the stillborn Business ByDesign and the hike in maintenance fees that, for the moment, made Oracle look warmer and fuzzier. Of course, you can’t blame all of SAP’s issues on Apotheker; the company was in a natural lull cycle as it was seeking a new direction in a mature ERP market. The problem with SAP is that, defensive acquisition of Business Objects notwithstanding, the company has always been limited by a “not invented here” syndrome that has tended to blind the company to obvious opportunities – such as inexplicably letting strategic partner IDS Scheer slip away to Software AG. Apotheker’s shortcoming was not providing the strong leadership to jolt SAP out of its inertia.

Instead, Apotheker’s – and Ray Lane’s for that matter – value proposition is that they know the side of the enterprise market that HP doesn’t. That’s the key to this transition.

The next question becomes acquisitions. HP has a lot on its plate already. It took at least 18 months for HP to digest the $14 billion acquisition of EDS, providing a critical mass IT services and data center outsourcing business. It is still digesting nearly $7 billion of subsequent acquisitions of 3Com, 3PAR, and Palm to make its converged infrastructure strategy real. HP might be able to get backing to make new acquisitions, but the dilemma is that Converged Infrastructure is a stretch in the opposite direction from enterprise software. So it’s not just a question of whether HP can digest another acquisition; it’s an issue of whether HP can strategically focus in two different directions that ultimately might come together, but not for a while.

So let’s speculate about software acquisitions.

SAP, the most logical candidate, is, in a narrow sense, relatively “affordable,” given that its stock is roughly 10 – 15% off its 2007 high. But SAP would obviously be the most challenging given the scale; it would be difficult enough for HP to digest SAP under normal circumstances, but with all the converged infrastructure stuff on its plate, it’s back to the question of how you can be in two places at once. Infor is a smaller company, but as it is also a polyglot of many smaller enterprise software firms, it would present HP with additional integration headaches that it doesn’t need.

HP may have little choice but to make a play for SAP if IBM or Microsoft were unexpectedly to bid actively. Otherwise, its best bet is to revive the relationship, which would give both companies the time to acclimate. But in a rapidly consolidating technology market, who has the luxury of time these days?

A stab at Salesforce.com would be logical, as it would reinforce HP Enterprise Services’ (formerly EDS) outsourcing and BPO business. It would be far easier for HP to get its arms around this business. The drawback is that Salesforce.com would not be very extensible as an application because it uses a proprietary stored-procedures database architecture. That would make it difficult to integrate with a prospective ERP SaaS acquisition, which would otherwise be the next logical step to growing the enterprise software footprint.

Informatica is often brought up – if HP is to salvage its Neoview BI business, it would need a data integration engine to help bolster it. Better yet, buy Teradata, which is one of the biggest resellers of Informatica PowerCenter – that would give HP a far more credible presence in the analytics space. Then it will have to ward off Oracle, which has an even more pressing need for Informatica to fill out the data integration piece in its Fusion middleware stack. But with Teradata, there would at least be a real anchor for the Informatica business.

HP has to decide what kind of company it needs to be as Tom Kucharvy summarized well a few weeks back. Can HP afford to converge itself in another direction? Can it afford not to? Leo Apotheker has a heck of a listening tour ahead of him.

Stack envy: Impressions for Oracle OpenWorld 2010

Last year, the anticipation of the unveiling of Fusion apps was palpable. Although we’re not finished with Oracle OpenWorld 2010 yet – we still have the Fusion middleware analyst summit tomorrow and still have loose ends regarding Oracle’s Java strategy – by now our overall impressions are fairly code complete.

In his second conference keynote – which unfortunately turned out to be almost a carbon copy of his first – Larry Ellison boasted that they “announced more new technology this week than anytime in Oracle’s history.” Of course, that shouldn’t be a heavy lift given that Oracle is a much bigger company with many more products across the portfolio, and with Sun, has a much broader hardware/software footprint at that.

On the software end – and post-Sun acquisition, we have to make that distinction – it’s hard to follow up last year’s unveiling of Fusion apps. The Fusion apps are certainly a monster in size, with over 5000 tables and 10,000 task flows, representing five years of development. Among other things, the embedded analytics provide the context long missing from enterprise apps like ERP and CRM, which previously required you to slip into another module as a separate task. There is also good integration of process modeling, although for now BPM models developed using either of Oracle’s modeling tools won’t be executable; Fusion apps will not change the model-then-develop paradigm.

A good sampling of coverage and comment can be found from Ray Wang, Dennis Howlett, Therese Poletti, Stefan Ried, and for the Java side, Lucas Jellema.

The real news is that Fusion apps, excluding manufacturing, will be in limited release by year end and general release in Q1. That’s pretty big news.

But at the conference, Fusion apps took a back seat to Exadata, the database appliance unveiled last year (originally built on HP hardware, and soon to be SPARC-based), and the Exalogic cloud-in-a-box unwrapped this year. It’s no mystery that growth in the enterprise apps market has been flat for quite some time, with the main greenfield opportunities going forward being midsized businesses or the BRIC world region. Yet Fusion apps will be overkill for small to midsized enterprises that won’t need such a rich palette of functionality (NetSuite is more likely their speed), which leaves the emerging economies as the prime growth target. The reality is that most enterprises are not about to replace the very ERP systems that they implemented as part of modernization or Y2K remediation efforts a decade ago. At best, Fusion will be a gap filler, picking up where current enterprise applications leave off, which provides a potential growth opportunity for Oracle, but not exactly a blockbuster one.

Nonetheless, as Oracle was historically a software company, the bulk of attendees, along with the press and analyst community (including us), pretty much tuned out all the hardware talk. That likely explains why, if you subscribed to the #oow10 Twitter hashtag, you heard nothing but frustration from software bigots like ourselves and others who got sick of the all-Exadata/Exalogic-all-the-time treatment during the keynotes.

In a memorable metaphor, Ellison stated that one Exalogic device can schedule the entire Chinese rail system, and that two of them could run Facebook – to which a Twitter user retorted, how many enterprises have the computing load of a Facebook?

Frankly, Larry Ellison has long been at the point in his life where he can afford to disregard popular opinion. Give a pure hardware talk Sunday night, then do it almost exactly again on Wednesday (although on the second go-round we were also treated to a borscht belt routine taking Salesforce’s Marc Benioff down more than a peg on who has the real cloud). Who is going to say no to the guy who sponsored and crewed on the team that won the America’s Cup?

But if you look at the dollars and sense opportunity for Oracle, it’s all about owning the full stack that crunches and manages the data. Even in a recession, if there’s anything that’s growing, it’s the amount of data that’s floating around. Combine the impacts of broadband, sensory data, and lifestyles that are becoming more digital, and you have the makings for the data counterpart to Metcalfe’s Law. Owning the hardware closes the circle. Last year, Ellison spoke of his vision to recreate the unity of the IBM System 360 era, because at the end of the day, there’s nothing that works better than software and hardware that are tuned for each other.

So if you want to know why Ellison is talking about almost nothing else except hardware, it’s not only because it’s his latest toy (OK, maybe it’s partly that). It’s because if you run the numbers, there’s far more growth potential to the Exadata/Exalogic side of the business than there is for Fusion applications and middleware.

And if you look at the positioning, owning the entire stack means deeper account control. It’s the same strategy behind the entire Fusion software stack, which uses SOA to integrate internally and with the rest of the world. But Fusion apps and middleware remain optimized for an all-Oracle Fusion environment, underpinned by a declarative Application Development Framework (ADF) and tooling that is designed specifically for that stack.

So on one hand, Oracle’s pitch that big database processing works best on optimized hardware can sound attractive to CIOs that are seeking to optimize one of their nagging points of pain. But the flipside is that, given Oracle’s reputation for aggressive sales and pricing, will the market be receptive to giving Oracle even more control? To some extent the question is moot; with Oracle having made so many acquisitions, enterprises that followed a best of breed strategy can easily find themselves unwittingly becoming all-Oracle shops by default.

Admittedly, the entire IT industry is consolidating, but each player is vying for different combinations of the hardware, software, networking, and storage stack. Arguably, applications are the most sensitive layer of the IT technology stack because that is where the business logic lies. As Oracle asserts greater claim to that part of the IT stack and everything around it, it requires a strategy for addressing potential backlash from enterprises seeking second sources when it comes to managing their family jewels.

Please Keep the LightSwitch on

It seems almost quaint to think that once upon a time, you really had to be a rocket scientist to develop software. OK, correct that: computer scientist. Your IDE was a cryptic command line text editor, and you freelanced debugging manually. That’s OK; that was during the cowboy days of appdev, when ideas like objects, components, or models were considered the stuff of idle dreams. Besides, what self-respecting programmer (we didn’t call them developers back then) would ever condescend to using somebody else’s code? Real coders only need command lines, and they don’t need formalized architecture to tell them how to program.

Roughly 25 years ago, what was then Borland introduced the integrated development environment, and several years after that, Microsoft blew the lid off that market with the first programming language that was really designed for, to borrow Apple’s terminology, “the rest of us”: Visual Basic. For the first time, here was a language that was fairly easy to learn, offered lots of flexibility, and, by tapping the innovations of visual development, made software development more intuitive. As God’s gift to liberal arts majors, it meant they could now get paid for doing something other than waiting tables, driving cabs, or teaching art history or philosophy.

Of course, lowering barriers to entry lets the unwashed masses in, and yes, there is a sound argument that allowing anybody to program would lower the quality of coding. Yet democratizing development became essential, because in the early 90s the coming boom in client/server, followed by web development, unleashed an enormous appetite for applications that there weren’t enough computer scientists in the world to deliver. Even with bandwidth bringing millions of Indian, Chinese, and Ukrainian developers online, supply is still mismatched with demand. While you might think about outsourcing large projects or maintenance, it is simply impractical to task teams located a dozen time zones away (not to mention across language and cultural barriers) with churning out the kinds of quick, tactical applications that some agile team in a corner could crank out in days.

Not surprisingly, the democratization of development unleashed by Visual Basic, and almost every development tool after it, made it possible for the IT profession to meet demand; it didn’t create the demand. But with all that sloppy coding out there, robust frameworks like Java EE and .NET emerged to clean up the mess, requiring disciplined practices like strongly typed coding. And as the laws of physics predict a counter-reaction to every action, dynamic scripting languages like PHP and Ruby emerged to provide the ease and lightness that the top-down frameworks forgot.

Anyway, it is difficult to make it through a vendor briefing call these days without hearing bromides about how they are making their tools accessible to “business developers” – as if there were such a class of people in the business who do software development. What they are really saying is that they have tools that let business stakeholders with day jobs craft, on the side, quick little productivity or business insight apps with drag-and-drop. It’s the same thing we have heard from players like Zoho, which seem more like cloud platforms for developing trivial apps of little meaning.

Therefore our ears perked up with Microsoft’s release of LightSwitch, which provides a simpler path to developing real, data-centric .NET applications. We’ll spare you the details because Andrew Brust has explained them much better than we could; he is hoping that LightSwitch might become part of “a long overdue turnaround” from Microsoft’s last decade of “courting complexity.”

We share his hopes, but our optimism is a bit more measured. Microsoft doesn’t exactly have a great track record backing innovation these days. A couple years ago, it had a similar kind of great idea with Oslo, an innovative attempt to make modeling of data-driven applications (do we see a pattern here?) more developer-centric through a coding-oriented approach. Less than a year after unveiling Oslo, Microsoft backtracked and made it a development pattern for SQL Server. Let’s hope that on this go round, Microsoft has the patience and perseverance to keep the LightSwitch on.

HP buys Fortify, it’s about time!

What took HP so long? Store that thought.

As we’ve stated previously, security is one of those things that has become everybody’s business. It was traditionally the role of security professionals, who focused more on perimeter security; but the exposure of enterprise apps, processes, and services to the Internet opens huge back doors that developers unwittingly leave open to buffer overflows, SQL injection, cross-site scripting, and you name it. Security was never part of the computer science curriculum.
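To illustrate just one of those back doors, here is a minimal sketch of a SQL injection hole and the parameterized-query fix; the users table and the inputs are invented for the example.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Minimal illustration of a SQL injection hole and its fix.
// The "users" table and input values are invented for the example.
public class LoginDao {

    // VULNERABLE: user input is concatenated straight into the SQL text,
    // so an input like  ' OR '1'='1  changes the meaning of the query.
    boolean loginUnsafe(Connection conn, String user, String password) throws SQLException {
        String sql = "SELECT 1 FROM users WHERE name = '" + user
                + "' AND password = '" + password + "'";
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            return rs.next();
        }
    }

    // SAFER: a parameterized query keeps the input as data, never as SQL.
    boolean loginSafe(Connection conn, String user, String password) throws SQLException {
        String sql = "SELECT 1 FROM users WHERE name = ? AND password = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, user);
            stmt.setString(2, password);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next();
            }
        }
    }
}
```

Static (“white box”) analysis tools of the kind Fortify sells are designed to flag patterns like the first method, while the “black box” tools probe for the same weakness from the outside.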

But as we noted when IBM Rational acquired Ounce Labs, developers need help. They will need to become more aware of security issues but realistically cannot be expected to become experts. Otherwise, developers are caught between a rock and a hard place – the pressures of software delivery require skills like speed and agility, and a discipline of continuous integration, while security requires the mental processes of chess players.

At this point, most development/ALM tools vendors have not actively pursued this additional aspect of QA; there are a number of point tools in the wild that may not necessarily be integrated. The exceptions are IBM Rational and HP, which have been in an arms race to incorporate this discipline into QA. Both have so-called “black box” testing capabilities via acquisition – where you throw ethical hacks at the problem and then figure out where the soft spots are. It’s the security equivalent of functionality testing.

Last year IBM Rational raised the ante with the acquisition of Ounce Labs, providing “white box” static scans of code – in essence, applying debugger-type approaches. Ideally, both should be complementary: just as you debug and then dynamically test code for bugs, do the same for security – white box static scan, then black box hacking test.

Over the past year, HP and Fortify have been in a mating dance as HP pulled its DevInspect product (an also-ran to Fortify’s offering) and began jointly marketing Fortify’s SCA product as HP’s white box security testing offering. In addition to generating the tests, Fortify’s SCA manages this stage as a workflow, and with integration to HP Quality Center, auto-populates defect tracking. We’ll save discussion of Fortify’s methodology for some other time, but suffice it to say that HP previously planned to integrate security issue tracking into its Assessment Management Platform (AMP), which provides a higher-level dashboard focused on managing policy and compliance, vulnerability and risk management, distributed scanning operations, and alerting thresholds.

In our mind, we wondered what took HP so long to consummate this deal. Admittedly, while the software business unit grew under now-departed CEO Mark Hurd, it remains a small fraction of the company’s overall business. And with the company’s direction of “Converged Infrastructure,” its resources are heavily preoccupied with digesting Palm and 3Com (not to mention EDS). The software group therefore didn’t have a blank check, and given Fortify’s 750-strong global client base, we don’t think that the company was going to come cheap (the acquisition price was not disclosed). With the mating ritual having predated IBM’s Ounce acquisition last year, buying Fortify was just a matter of time. At least a management interregnum didn’t stall it.

Finally!