It certainly struck a nerve. The recent essay “IT Doesn’t Matter,” written by Nicholas Carr for Harvard Business Review, prompted the expected cacophony of rebuttals from the usual suspects. To recap, the HBR piece equated IT with industrial-revolution innovations such as railroads and electricity. The premise was that such innovations follow a common lifecycle: they (1) revolutionize commerce, (2) become essential, and (3) wind up as universal commodities.
For the record, we agree with one aspect of the HBR analysis: that baseline front office, back office, and web-enabled applications have become part of the price of competition. Beyond that, we believe the HBR piece missed a crucial point: unlike electricity or railroads, IT systems can embed the intellectual property (IP) that differentiates how enterprises compete. And we don’t think that IP has become a commodity — unless that’s another deception of The Matrix.
OK, now that we’ve gotten that off our chest, we’ll ask the more pressing question: why has the HBR piece knocked everyone for a loop? (1) It’s the economy, stupid, and more importantly, (2) the IT community is atoning for past sins.
We’ll focus on the latter. Recall the run-up to Year 2000, when the rush to finish Y2K repairs coincided with the emergence of the Internet and a hyperactive economy? There was a lot of silly money in IT driving many organizations to be first — too often at all costs. But that was just temporary insanity.
Of greater concern is the cost-plus mentality that has always plagued enterprise-scale IT projects, even before the Internet. Recall those SAP or CASE projects where consulting costs outpaced software, sometimes by ratios up to 10:1? No wonder enterprise IT projects were often equated to DOD $500 toilet seat procurements. No wonder the HBR piece drew far more attention than it deserved.
Have we learned our lessons? We’re worried about the inflated expectations accompanying the emergence of web services. Yes, a critical mass of vendors may eventually agree on a core set of standards that might transform applications into interchangeable services, but if anybody ever thinks that will finally make systems integration easy, there will be another HBR article coming.
The legal maneuverings over Linux are getting ‘curiouser’ and ‘curiouser.’ Only a couple of months back, SCO (formerly Caldera) filed suit against IBM alleging violations of its UNIX license. SCO recently followed up with a mass mailing to corporate IT executives alleging that use of Linux could also constitute a similar breach of intellectual property agreements.
In so doing, we believe that SCO’s actions are the software industry’s equivalent of terrorism. If at first you can’t succeed in ruling the world, why not bring the rest of civilization down with you? As we concluded in an earlier note, SCO’s moves are nothing more than an exit strategy for a company that has consistently failed in the marketplace.
Nobody will win after all this except … err, Microsoft shareholders? In an almost comic turn of events, Microsoft just signed a formal licensing agreement for SCO’s UNIX technology. While Microsoft’s motives might sound understandable — why risk more litigation when the previous bout almost broke up the company — we believe that SCO’s potential legal threat to Microsoft was laughable at best.
We can only conclude that the licensing of SCO UNIX is Microsoft’s strategy to drive a new wedge into the Linux community, a sector whose growth poses a far more formidable threat than the empty roars emanating out of SCO.
Yesterday’s first anniversary of “the new HP” provided the coming out party for the company’s new enterprise services strategy. HP’s moniker, the “Adaptive Enterprise,” is its take on the real-time enterprise, something that sounds a lot like IBM’s On Demand computing or Microsoft’s “one degree of separation” campaign.
Like its rivals, HP is couching its message in business terms. HP will provide the expertise, software, and services to tune or transform the infrastructure. But it will hand off to partners the rest of the job, at the higher-margin business process, application architecture, and systems integration levels. Driving the point home, HP announced a new $3 billion 10-year deal to manage Procter & Gamble’s worldwide IT infrastructure.
The pillars of HP’s go-to-market strategy bore the label “Darwin Reference Architecture,” which was more survival philosophy than formal technology blueprint. Darwin declares a commitment to the usual array of standards (which include .NET and J2EE), and states that HP will offer infrastructure services and systems management software while relying on partnerships with leading software vendors and integrators.
On the plus side, HP leads in Intel servers and Linux (although it trades places with Dell for overall Intel market share), has a successful track record transforming itself after swallowing Compaq, and offers leading management software (OpenView, the core component of Utility Data Center). The minuses: HP’s absence at the higher levels of the stack, a drawback that is more serious given HP’s aspirations to help its customers transform their businesses, not just their infrastructures.
Of note, part of the announcement was HP offering a web services management roadmap — something that rivals IBM and BMC have yet to declare (CA has; we’ll discuss them in an upcoming note). Specifically, HP will inspect SOAP headers to manage access, authorization, and conformance with service levels. It is also adding a subscription-based provisioning engine to OpenView for helping customers monetize web services.
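To make the idea concrete, here is a minimal sketch of what SOAP-header inspection for access control could look like at a management intermediary. This is our illustration, not HP's actual OpenView API: the policy table, the `authorize` function, and the caller names are all assumptions; only the SOAP envelope and WS-Security namespaces are standard.

```python
# Sketch: authorize a web service call by inspecting the SOAP header.
# The AUTHORIZED policy table and all identifiers are hypothetical.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
WSSE_NS = ("http://docs.oasis-open.org/wss/2004/01/"
           "oasis-200401-wss-wssecurity-secext-1.0.xsd")

# Hypothetical policy: which callers may invoke which services.
AUTHORIZED = {"acme-portal": {"GetQuote", "PlaceOrder"}}

def authorize(envelope_xml: str, service: str) -> bool:
    """Return True if the WS-Security Username in the SOAP header
    is authorized to call the named service."""
    root = ET.fromstring(envelope_xml)
    header = root.find(f"{{{SOAP_NS}}}Header")
    if header is None:
        return False  # no header means no credentials: reject
    user = header.find(f".//{{{WSSE_NS}}}Username")
    if user is None:
        return False
    return service in AUTHORIZED.get(user.text, set())

envelope = f"""<soap:Envelope xmlns:soap="{SOAP_NS}">
  <soap:Header>
    <wsse:Security xmlns:wsse="{WSSE_NS}">
      <wsse:UsernameToken>
        <wsse:Username>acme-portal</wsse:Username>
      </wsse:UsernameToken>
    </wsse:Security>
  </soap:Header>
  <soap:Body/>
</soap:Envelope>"""

print(authorize(envelope, "GetQuote"))   # authorized caller and service
print(authorize(envelope, "DeleteAll"))  # service not in the policy
```

The point of the sketch is that the management layer never touches the SOAP body; everything it needs for access and authorization decisions rides in the header.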
And, just as IBM is embedding Tivoli into its own products, HP aims to embed OpenView web services management features in third-party products. It is already building reference implementations showing how OpenView service-level compliance features can be enforced for third-party software applications, and it plans to develop channel programs.
HP needs to get its consulting group up to speed on adaptive infrastructure knowledge and practices quickly. At present, only a couple hundred of its 20,000-plus consultants have learned the new system of metrics for the critical job of infrastructure adaptability assessment.
But that challenge is dwarfed by something more basic. Although HP plans to keep infrastructure assessments to itself, it must find ways of bringing partners in earlier in the cycle and making the handoff seamless, because web services are supposed to dissolve the boundaries between infrastructure and business process. Without successful teaming and coordination with the partners who handle the upper half of the transformation job, HP risks the old adage about the surgery being successful (and you know the rest).
At the company’s annual shareholders’ meeting last week, IBM president Sam Palmisano announced that 61% of last year’s revenues came from software and services. At IBM’s annual industry analyst conference, also last week, company execs got the message. “I would rather just show you chips,” joked Bill Zeitler, senior VP of the Systems Group.
IBM’s current push, On Demand Computing, was quite a switch from last year’s “eServer” theme. On Demand refers to the notion of transforming systems into utilities. In other words, getting access to business services and computing resources transparently, in real (or right) time, anywhere inside or outside the enterprise. On Demand is based on old concepts, like capacity on demand and outsourcing, and newer concepts, such as virtualization (where all resources are made to look like a single entity) and services-oriented architectures (where modular, integratable software components replace conventional applications).
Our take? Good start, but the devil’s in the details.
We were impressed, for instance, by how Tivoli management capabilities such as monitoring, access control, and provisioning are already embedded into WebSphere, DB2, and Lotus. Ditto for IBM’s clear explanation that if you want to do things like correlate WebSphere events with other systems, you need Tivoli.
Other pieces proved less complete. For instance, IBM’s Business Consulting Services group (the folks from PwC) has not yet ironed out pricing details with its counterparts at Global Services for new business transformation offerings (where business processes, and the IT services that support them, are outsourced on a combination of fixed and variable costs).
Or consider that IBM has not yet articulated its strategy for managing web services at the systems level. Admittedly, neither have IBM’s rivals (read: HP). Tivoli needs to do more than simply track end-to-end performance data covering how fast a SOAP message is answered; it needs to inspect SOAP headers as part of On Demand performance, security, and provisioning. In all fairness, this discipline remains a work in progress, as we learned at our XML One panel session on the topic a couple of months back.
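The distinction is worth spelling out. Timing a round trip tells you a service is slow; only the header tells you which consumer to charge the breach against. The sketch below is our illustration of that idea, not Tivoli's design: the `ConsumerID` header field, its namespace, and the SLA logic are all assumptions.

```python
# Sketch: combine round-trip timing with SOAP-header inspection so
# latency can be attributed to the consumer named in the header.
# The ConsumerID element and its namespace are hypothetical.
import time
import xml.etree.ElementTree as ET
from collections import defaultdict

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
APP_NS = "urn:example:mgmt"  # assumed namespace for the consumer ID

latency_by_consumer = defaultdict(list)  # milliseconds, per consumer

def record_call(envelope_xml: str, handler) -> str:
    """Invoke handler on the envelope, timing it and logging the
    latency under the ConsumerID found in the SOAP header."""
    root = ET.fromstring(envelope_xml)
    cid = root.findtext(f"{{{SOAP_NS}}}Header/{{{APP_NS}}}ConsumerID",
                        default="unknown")
    start = time.perf_counter()
    response = handler(envelope_xml)
    latency_by_consumer[cid].append((time.perf_counter() - start) * 1000)
    return response

def sla_breaches(threshold_ms: float) -> dict:
    """Consumers whose worst observed latency exceeds the threshold."""
    return {cid: max(ms) for cid, ms in latency_by_consumer.items()
            if max(ms) > threshold_ms}

sample = f"""<s:Envelope xmlns:s="{SOAP_NS}">
  <s:Header><m:ConsumerID xmlns:m="{APP_NS}">acme</m:ConsumerID></s:Header>
  <s:Body/>
</s:Envelope>"""

record_call(sample, lambda xml: "<ok/>")  # stand-in for the real service
```

Without the header lookup, all of this collapses into a single anonymous latency number, which is exactly the limitation described above.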
IBM currently has more of the pieces in place for its On Demand vision, from servers to software and services. But whether the entire package needs to come from one vendor is another issue. While past successes from enterprise players like SAP made mockeries of multi-vendor best-of-breed strategies, the fact that IBM — and others — are basing their real-time computing utility visions on open standards means that such strategies should theoretically have better chances this go round.
So how is IBM doing? Sizing up the competition, IBM’s Nancy Breiman, VP of competitive sales, named HP (with Utility Data Center) as the main rival. She graded rivals from B to D, with IBM implicitly getting an A; we believe she was a bit generous. More realistically, we would rate IBM at C+, given the huge amount of work remaining to fill out the vision, and its rivals C- to F.