Category Archives: ITIL

What’s a Service? Who’s Responsible?

Abbott and Costello aside, one of the most charged, ambiguous, and overused terms in IT today is Service. At its most basic, a service is a function or operation that performs a task. For IT operations, a service is a function performed by a computing facility on behalf of the organization. For software architecture, a Service in the formal, capital "S" sense is a loosely coupled function or process designed to be abstracted from the software application, the physical implementation, and the data source; in the more generic lower-case "s" sense, a service is simply a function performed by software. And if you look at the Wikipedia definition, a service can refer to processes that are performed down at the OS level.

Don’t worry, we’ll keep the discussion above OS level to stay relevant — and to stay awake.

So why are we getting hung up on this term? Because it was all over the rather varied day we had today, as we split our time among (1) HP's annual IT analyst conference for its Technology Solutions Group (that's the 1/3 of the company that's not PCs or printers); (2) a meeting of the SOA Consortium; and (3) yet another meeting with MKS, an ALM vendor that just signed an interesting resale deal with BMC, one that starts with integrating the IT service desk with issue and defect management in the application lifecycle.

Services, in both their software and IT operations senses, were all over our agenda today; we just couldn't duck the term. But this is more than a coincidence of terminology: there is an underlying interdependency between the design and deployment of a software service and the IT services that are required to run it.

It was core to the presentation that we delivered to the SOA Consortium today, as our belief is that you cannot manage a SOA or application lifecycle without adequate IT Service Management (ITSM, a discipline for running IT operations that is measured and tracked by the services it delivers). We drew a diagram that was deservedly torn apart by our colleagues on the call, Beth Gold-Bernstein and Todd Biske. UPDATE: Beth has a picture of the diagram in her blog. In our diagram, we showed how, at run time, there is an intersection between the SOA lifecycle and ITSM – or more specifically, ITIL version 3 (ITIL is the best-known framework for implementing ITSM). Both of them maintained that interaction is necessary throughout the lifecycle; for instance, when the software development team is planning a service, it needs to get IT operations (ITO) in the loop to brace for the release of the service – especially if the service is likely to drastically ramp up demand on the infrastructure.

The upshot of our discussion was not simply that services are joined, figuratively, at the head and neck bone – the software and IT operations implementations – but that at the end of the day, somebody's got to be accountable for ensuring that services are developed and deployed responsibly. In other words, just making the business case for a service is not adequate if you can't ensure that the infrastructure will be able to handle it. Lacking the second piece of the equation, you'd wind up with a scenario where the surgery is successful but the patient dies. With the functional silos that comprise most IT organizations today, that would mean responsibility dispersed between the software (or in some cases, enterprise) architect and their equivalent(s) in IT Operations. In other words, everybody's responsible, and nobody's responsible.

The idea came up that maybe what's needed is a service ownership role that transcends the usual definition (today, the service owner is typically the business stakeholder that sponsored development, and/or the architect that owns the design or software implementation). That is, a sort of uber role that ensures that the service (1) responds to a bona fide business need; (2) is consistent with enterprise architecture standards and does not needlessly duplicate what is already in place; and (3) won't break IT infrastructure or physical delivery (e.g., assures that ITO is adequately prepared).

While the last thing that the IT organization needs is yet another layer of management, it may need another layer of responsibility.

UPDATE: Todd Biske has provided some more detail on what the role of a Service Manager would entail.

Does SOA Need Another Governance Silo?

Turns out that the new year wouldn't be complete without yet another SOA-is-dead flame war, this one touched off by Anne Thomas Manes' provocative comments to the effect that SOA is dead, long live services. As inveterate SOA blogger Joe McKendrick has noted, it's a debate that's come and gone over the years, and in its latest incarnation it has drawn plenty of reaction, both defensive and on target – namely, that the problem is that practitioners get hung up on technology, not solutions. Or as Manes later clarified, it's about tangibles like services, and solid practices like applying application portfolio management, that deliver business value, not just technology for its own sake.

We could be glib and respond that Francisco Franco is still dead, but Manes' clarification struck a chord. All too often in software development, we leap, then look. We were reminded of that with an interesting announcement this week from SOA Software. Their contention is that there is a major gap at the front end of the SOA lifecycle, at least when it comes to vendor-supplied solutions. Specifically, it is over managing service portfolios – making investment decisions as to whether a service is worth developing, or worth continuing.

SOA Software contends that service repositories are suited for managing the design and development lifecycles of the service, while run-time management is suited for tracking consumption, policy compliance, service contract compliance, and quality of service monitoring. However, existing SOA governance tools omit the portfolio management function.

Well, there’s a gap when it comes to portfolio management of services, except that there isn’t: there is an established market and practice for project portfolio management (PPM), which applies financial portfolio analysis techniques to analyzing software development projects to help decision makers identify which projects should get greenlighted, and which existing efforts should have the plugs pulled.

The downside to PPM is that it's damn complex, and it mandates comprehensive data collection encompassing timesheet data, all the data relating to what's paid to software vendors and consultants, and infrastructure consumption. We also have another beef: in most IT organizations, new software development or implementation projects account for 10% of budgets or less. The bottom line is that PPM is complex and hard, and anyway, shouldn't it also cover the 80 – 90% of the software budget that is devoted to maintenance?

But anyway, SOA Software contends that PPM is overkill for managing service portfolios. Their new offering, Service Portfolio Manager, is essentially a “lite” PPM tool that is applied specifically to services. Their tool factors in four basic artifacts: existing (as-is) business processes or application functionality; identifiers for candidate services such as “customer load qualifier;” ranking of business priorities; and metadata for services that are greenlighted for development and production.
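
To make that concrete, here is a minimal sketch of what a "lite" portfolio entry covering those four kinds of artifacts might look like. The ServiceCandidate structure and its field names are our own illustration, not SOA Software's actual data model.

    from dataclasses import dataclass, field

    @dataclass
    class ServiceCandidate:
        """Illustrative portfolio entry for a candidate service (hypothetical schema)."""
        identifier: str            # e.g., "customer load qualifier"
        as_is_functionality: list  # existing business processes or application functionality it touches
        business_priority: int     # ranking assigned by business stakeholders (1 = highest)
        status: str = "proposed"   # proposed -> approved -> in development -> in production
        metadata: dict = field(default_factory=dict)  # populated once the service is greenlighted

    def rank_candidates(candidates):
        """Order candidate services by business priority to support go/no-go decisions."""
        return sorted(candidates, key=lambda c: c.business_priority)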

We understand that SOA Software is attempting to be pragmatic about all this. They claim most of their clients have either not bothered with PPM, or have not been successful in implementing it because of its scope and overhead, and that it's better to manage service portfolios, even as a special case, than not manage them at all. And they see plenty of demand from their client base for a more manageable, service-oriented PPM alternative.

But we have to wonder if it makes sense to erect yet another governance silo, or if SOA really merits a special case. The problem is that if we view SOA as a special case, we wind up with yet another managerial silo and more process complexity. It also divorces SOA – or services, if you prefer – from the business it is supposed to support. If SOA is a special case, then it must be an island of technology that has to be managed uniquely. In the long run, that will only increase management costs, and in the end reinforce the notion that SOA is a workaround to the bottlenecks of enterprise, application, or process integration, and a band-aid for poor or nonexistent enterprise architecture.

It also further isolates SOA or services from the software development lifecycle (SDLC), of which they should be an integral part. While services are not monolithic applications but extensions or composites of applications and other artifacts such as feeds, they are still software. From a governance standpoint, the criteria for developing and publishing services should not be distinct from those for developing and implementing software.

And while we’re at it, we also believe that the run-time governance of SOA or services cannot be divorced from the physical aspects of running IT infrastructure. Service level management of SOA services is directly impacted by how effectively IT delivers business services, which is the discipline of IT Service Management (ITSM). When there is a problem with publishing a service, it should become an incident that is managed, not within its own SOA cocoon, but as an IT service event that might involve problems with software or infrastructure. In the long run, service repositories should be federated with CMDBs that track how different elements of IT infrastructure are configured.

In the short run, SOA Software's Service Portfolio Manager is a pragmatic solution for early adopters who have drunk the SOA Kool-Aid and mainstreamed service implementation, but lack adequate governance when it comes to the SDLC (and enterprise architecture, by implication). In the long run, it should serve as a wakeup call to simplify PPM by applying the 80/20 rule, making it more usable rather than spawning special-case implementations.

Software and Data Center Lifecycles

Reporting from HP Software Universe, colleague Dana Gardner provided an interesting account of a keynote delivered by BTO software unit general manager (and Opsware alum) Ben Horowitz on the synergies between the application lifecycle and the data center lifecycle. It's a topic that's not just tailor-made for HP Software (whose acquisitions include Mercury, covering the software lifecycle, and Opsware, which complements the former OpenView in data center operations), but also a hot button for us. Traditionally, the IT organization has been heavily silo'ed: not only are there walls between different players in the software group (e.g., architects, developers, QA), but also between software and operations. So while software development folks are supposed to performance- and integration-test code, when it comes to migrating code to production, the process has been one of throwing production-ready code over the wall to operations.

While there has always been a disconnect between software development and the data center, the gap has become even more glaring with the emergence of SOA and its promises of enabling enterprise agility. That is, if you can make services so flexible that you can swap pieces out (like selecting a different weather forecasting service for transportation routing), and make them responsive to the business through enforcement of service contracts, how can you deliver when you can't control whether the underlying IT infrastructure can handle the load and provide response times that comply with the contract? Significantly, none of the tools that handle run-time SOA governance have trap doors that automatically re-provision capacity. In an era of risk aversion, the last thing that data center stewards want is software developers hijacking iron. When we spoke with Tim Hall, product director for HP's SOA Center, after the product was released, he told us that "Dynamic flexing of resources is a nice idea that won't sit well with the operations people."

Gardner reported HP's Horowitz describing the roles that the business, security specialists, IT operations, and QA (note that developers were omitted) play in the transition from design to run time.

What makes HP's proposition thinkable is an emerging awareness of the need to bring a process management mentality to IT operations. While most organizations observe software development lifecycle processes in the breach, there is a consciousness that developing software should encompass collecting and validating requirements, developing or mapping to an architecture, generating code, and testing. With some of the newer agile methodologies, many of these steps are performed concurrently and in smaller chunks, but they are still supposed to be performed. What's new is the IT operations side, notably where the latest version of the ITIL framework takes a lifecycle view of the management and delivery of IT services. There are some parallels with the SDLC: Service Strategy has a logical fit with Requirements; Service Design fits well with Development (although its likeness to architecture or design may not be apparent); Service Transition maps roughly to release and deployment; Service Operation deals with day-to-day operations and incidents, which the SDLC does not address; while Continual Service Improvement relates well to the maintenance and upgrade part of the SDLC.
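
To make those parallels concrete, here is a minimal sketch of the mapping as a simple lookup table; the SDLC labels are our own shorthand rather than terms taken from either framework.

    # Rough mapping of ITIL v3 lifecycle phases to their closest SDLC counterparts.
    # The SDLC labels are illustrative shorthand, not official terminology.
    ITIL_TO_SDLC = {
        "Service Strategy": "Requirements gathering and validation",
        "Service Design": "Architecture and development",
        "Service Transition": "Release and deployment",
        "Service Operation": "Production support and incidents (no direct SDLC counterpart)",
        "Continual Service Improvement": "Maintenance and upgrades",
    }

    def closest_sdlc_phase(itil_phase):
        """Return the rough SDLC counterpart for an ITIL v3 lifecycle phase."""
        return ITIL_TO_SDLC.get(itil_phase, "no obvious SDLC counterpart")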

While HP’s Horowitz might not have been speaking about ITIL v3 literally in his keynote – and while not all data center organizations are gung ho about ITIL itself – there is growing awareness inside the data center that you can’t just run operations by reflex anymore.

ESBs, Service Connectivity, and SneakerNet

All too often, when I come across one of Todd Biske's writings, my conclusion is, "Why didn't I ever think of that?" A former colleague from the weekly BriefingsDirect podcasts, Biske could always be counted on to come up with common-sense insights so basic that they never occurred to you. In his latest post, The Future of ESBs, he reiterates a conclusion he came to some time ago: that "the capabilities associated with the [ESB] space really belong in the hands of operations rather than the hands of developers." That's because when you eliminate all the extraneous embellishments from the definition of what ESBs are, at the end of the day, they are all about connecting services. Nothing more, nothing less. He adds that when you really compare the costs (and, we presume, benefits) of ESBs, it should be with those of other network intermediaries, such as switches, load balancers, and proxying appliances – rather than the software-based offerings of the middleware providers who dominate the ESB market.

The dilemma, of course, is that the network folks have had difficulty navigating up the stack. Recall Cisco's Application-Oriented Networking (AON), where routers were supposed to get a lot smarter about prioritizing SAP traffic? The notion of attaining telco-like five-nines availability and service levels out of the basic plumbing of service, process, or (forbid the thought) application-level integration has long been a holy grail. We haven't heard much from Cisco about AON lately.

The problem is mindset, not technology. Network admins are used to thinking at the packet level; the software folks are used to thinking in terms of components, services, or application modules; while the business folks wonder why they can't get their reports on time or why there's so much latency when they attempt a supply chain optimization. The dilemma gets compounded when we start talking about run-time governance of SOA, a situation we were reminded of last week when Tibco announced the addition of Service Level Performance Manager to their ActiveMatrix SOA management stack. Obviously, when gauging whether service contracts are fulfilled, you need the ability, at the level of the named service, to see whether it is available and whether latency is within what the service contract calls for. The problem is that, while software development factors like how a service is designed and whether the SOAP headers and XML schemas are properly formed will impact performance, at some point, when demand for a service spikes, you've got to start provisioning underlying infrastructure.

As we’ve written previously, there’s still a disconnect between SOA run time governance and underlying infrastructure management that for now is handled through email or SneakerNet. At the technology level, nobody is yet talking about how service level management issues identified by tools from folks like Tibco or AmberPoint are going to automatically generate trouble tickets in Remedy or Peregrine. At the mindset level, nobody is yet talking about mapping SOA run time governance to ITIL processes.

Until we do, the goal of unified service connectivity, performance, and SLA management will remain a tale of two IT silos.

ITIL Deeds Don’t Go Unpunished

It shouldn’t be surprising as to why IT’s automation needs often fall to the bottom of the stack: Because most companies are not in the technology business, investments in people, processes, or technologies that are designed to improve IT only show up as expenses on the bottom line. And so while IT is the organization that is responsible for helping the rest of the business adopt various automated solutions, IT often goes begging when it comes to investing in its own operational improvements.

In large part that explains the 20-year “overnight success” of ITIL. Conceived as a framework for codifying what IT actually does for a living, the latest revisions of ITIL describe a service lifecycle that provides opportunity for IT to operate more like a services business that develops, markets, and delivers those services to the enterprise as a whole. In other words, it’s supposed to elevate areas that used to pass for “help desk” and “systems management” into conversations that could migrate from middle manager to C-level.

Or at least that's what ITIL is cracked up to be. If you codify what an IT service is, then define the actions that are involved with every step in its lifecycle, you have the outlines of a business process that could be made repeatable. And as you codify processes, you gain opportunities to attach more consistent metrics that track performance.
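
As a sketch of what that codification might look like in practice, here is an illustrative service definition that names the lifecycle steps involved in delivering it and attaches a metric to each; the service, steps, and metrics are examples of our own, not prescriptions from the ITIL books.

    # Illustrative only: a codified IT service with repeatable lifecycle steps and a metric per step.
    EMAIL_SERVICE = {
        "service": "Corporate email",
        "steps": [
            {"name": "Provision mailbox", "metric": "time to fulfill request (hours)"},
            {"name": "Apply change", "metric": "percentage of changes approved by the change advisory board"},
            {"name": "Resolve incident", "metric": "mean time to restore service (minutes)"},
        ],
    }

    def metrics_for(service_definition):
        """Collect the metric attached to each lifecycle step, ready for consistent reporting."""
        return {step["name"]: step["metric"] for step in service_definition["steps"]}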

We’ve studied ITIL and have spoken with organizations on their rationale for adoption. Although the hammer of regulatory compliance might look to be the obvious impetus (e.g., if you are concerned about regulations covering corporate governance or protecting the sanctity of customer identity, you want audit trails that include data center operation), we also found that scenarios such as corporate restructuring or merger and acquisition also played a hand. At an ITIL forum convened by IDC in New York last week, we found that was exactly the case for Motorola, Toyota Financial Services, and for Hospital Corp. of America – each of whom sat on a panel to reflect on their experiences.

They spoke of establishing change advisory boards to cut down on the incidence of unexpected changes (which tend to break systems), formalizing all service requests (to reduce the common practice of buttonholing), reporting structures (which, not surprisingly, varied widely from organization to organization), and what to put in the Configuration Management Database (CMDB) that is stipulated by the ITIL framework as the definitive store defining what you are running and how you are running it.

But we also came across comments that, for all its lofty goals, ITIL could wind up erecting new silos for an initiative that was supposed to break them down. One attendee, from a major investment bank, was concerned that adoption of the framework would bring pressure for certifications that would wind up pigeonholing professionals into discrete tasks, a la Frederick Taylor. Another mused about excessive reliance on arbitrary metrics for the sake of metrics, because that is what looks good in management presentations. Others questioned whether initiatives such as adding a change advisory board, or adding a layer of what were, in effect, "account representatives" between IT and the various business operating units it serves, would in turn create new layers of bureaucracy.

What's more interesting is that the concerns of attendees were hardly isolated voices in the wilderness. John Willis, who heads Zabovo, a third-party consulting firm specializing in IBM Tivoli tools, recently posted the responses from the Tivoli mailing list to the question of whether ITIL actually matters. There was no shortage of answers. Not surprisingly, there were plenty of detractors. One respondent characterized ITIL as "merely chang[ing] the shape of the hoops I jump through, and not the fact that there are hoops…" Another termed it a "$6 buzzword/management fad," while another claimed that ITIL makes much ado over the obvious: "The Helpdesk is NOT an ITIL process, it is merely a function…"

But others stated that, despite the hassles, the real need is to define processes and criteria more realistically. "Even when I'm annoyed, I still believe ITIL or ITIL-like processes should be here to stay, but management should be more educated on what constitutes a serious change to the environment…" Others claimed that ITIL formalizes what should already be your IT organization's best practices and doesn't really invent anything new. "Most shops already perform many of the processes and don't recognize how close they are to being ITIL compliant already. It is often a case of refining a few things."

Probably the comment that best summarized the problem is that many organizations, not surprisingly, are “forgetting that ITIL is not an end in and of itself. Rather, it is a means to an end,” adding that this point is often “lost on both the critics and cheerleaders of Service Management.”

We’re not surprised that adopting of ITIL is striking nerves. It’s redolent of the early battles surrounding those old MRP II, and later, ERP projects, that often degenerated into endless business process reengineering efforts for their own sake. Ironically, promoters of early Class A MRP II initiatives justified their efforts on the ideal that integration would enable everyone to read from the same sheet of music and therefore enable the organization to act in unison. In fact, many of those early implementations simply cemented the functional or organizational walls that they were supposed to tear down. And although, in the name of process reengineering, they were supposed to make the organizations more responsive, in fact, many of those implementations wound up enshrining many of the inflexible planning practices that drove operational cost structures through the roof.

The bottom line is that IT could use a dose of process, but not the arbitrary kind that looks good in management PowerPoints. The ITIL framework presents the opportunity to rationalize the best practices that already are, or should be, in place. Ideally, it could provide the means for correlating aspects of service delivery, such as maintaining uptime and availability, with metrics that actually reflect the business value of meeting those goals. For the software industry, ITIL provides a nice common target for developing packaged solutions, just as the APICS framework did for enterprise applications nearly 30 years ago. That's the upside.

The downside is that, like any technical profession, IT has historically operated in its own silo away from the business, where service requests were simply thrown over the wall with little or no context. As a result, the business has hardly understood what they are getting for their IT dollars, while IT has had little concept of the bigger picture on why they are doing their jobs, other than to keep the machines running. IT has good reason to fear process improvement exercises which don’t appear grounded in reality.

ITIL is supposed to offer the first step – defining what IT does as a service, and the tasks or processes that are involved in delivering it. The cultural gap that separates IT from the business is both the best justification for adopting ITIL and the greatest obstacle to achieving its goals.

HP Incubates Opsware

HP Software's "Extreme Makeover" took another step at the end of last week when it closed the Opsware deal. Since the deal was announced almost exactly two months ago, we've been wondering whether it would be a replay of the Mercury deal, which amounted to more of a reverse acquisition. Yes, HP bought Mercury, but the Mercury execs left standing (the ones tainted by indictment were long gone by that point) quickly took the helm in sales, marketing, and product development.

The Opsware script will be a bit different. As with the Mercury deal, the acquired company's CEO – in this case Ben Horowitz – now takes over product R&D for the combined entity. But the rest of Opsware is being, in HP's terms, "incubated." That amounts to HP leaving the Opsware growth machine alone for now.

That makes sense, for a limited time. Much of Opsware's appeal was that it was the fresh player on the block for managing change to IT infrastructure in real time, something that until now has been beyond the grasp of "mainstream" systems management frameworks from the usual suspects. And the company was just beginning to break into the black.

With last week’s announcement of System 7, Opsware has made significant strides towards addressing a key weakness: the lack of integration between its server, network, and storage automation pieces. It’s glued them together with its process automation piece (from its own iConclude acquisition), where changes in provisioning of IT infrastructure from server back to storage and network can now be automatically triggered by an ITIL workflow, or where any action taken in any of the modules can now automatically bring up the appropriate workflow.

So HP is probably smart in not wanting to rain on Opsware’s parade, for now.

In the long run, Opsware must become part of the mother ship because its products have too many potential synergies with the rest of HP Software. There’s a long list, but we’ll just give a couple examples: there are obvious tie-ins between HP’s Service Desk and Business Availability Centers with Opsware’s change management capabilities. In other cases, HP’s portfolio could provide the depth missing from Opsware’s offerings, with the CMDB (configuration management database, the system of record for the layout of your IT infrastructure) being the prime example.

HP’s strategy reflects the common dilemma that software firms face when they acquire companies that are younger in the growth curve. Assuming that you’ve bought a company to add to your product set (as opposed to expanding your market footprint, like EMC and VMware), you’re going to find yourself in a balancing act. You don’t want your legacy culture to smother the innovation machine that you’ve just acquired, but you also don’t want to miss out on the synergies. Besides, isn’t that why you bought the company in the first place?

Breaching the Blood Brain Barrier

A month after Software AG unveiled its roadmap for converging webMethods products, it is releasing the first of the new or enhanced offerings. What piqued our interest was one aspect of the release, where Software AG is starting to seed webMethods BAM (Business Activity Monitoring) dashboards to other parts of the stack. In this case, they’re extending the webMethods Optimize BAM tool from BPM to the B2B piece.

So why does this matter? As its name implies, BAM is about monitoring business processes. But if you think about it, it could just as well apply to the operational aspects of deploying SOA, from trending compliance with service level agreements down to the nitty gritty, such as the speed at which the XML in SOAP messages is being parsed.

So far so good. What Software AG is trying to do is extend the same dashboarding engine that has been owned by the line-of-business folks, who want to monitor high-level processes, to the software development folks, who are charged with exposing those processes as web services.

But when it comes down to the thorny issue of monitoring compliance with service level agreements (SLAs), Software AG’s moves are just a very modest first step. With a common dashboarding engine, you might be able to get software developers to improve the efficiency of a web service through programmatic modifications, but at the end of the day (and hopefully a lot earlier!), you have to run the services on physical IT infrastructure. And as we’ve noted in the past, when it comes to fixing service level issues, today’s processes, technologies, and organizational structures remain highly silo’ed. The software development folks own the SOA implementation, while IT operations own the data center.

It’s an issue that HP Software, which has undergone a reverse acquisition by Mercury (yes, HP bought it, but many ex-Mercury execs are now running it) is striving to bridge. And with Software AG’s latest moves to extend Optimize, it’s a goal that’s on their horizon as well.

The challenge, however, is that as the IT operations folks embrace ITIL and business service optimization or management tools (a.k.a. retooled offerings from systems management vendors), you may wind up with multiple islands of automation, each operating its own silo'ed dashboard claiming to show the truth about service levels – whether those service levels pertain to how fast IT resolves an incident, how fast the database runs, or how available a specific web service is.

Software AG says that it eventually wants to integrate metadata from its CentraSite SOA repository with the CMDBs (configuration management databases) of ITIL-oriented tools. We wonder how they, and their presumed ITIL vendor partner, will sell the idea to their respective constituencies, and more importantly, who's ultimately going to claim accountability for ensuring that web services meet their SLAs.

HP Buys Opsware – Or is it the other way around?

HP’s announcement that it plans to buy Opsware represents something of a changing of the guard. HP’s $1.6 billion offer, roughly a 35% premium over last week’s close, is for a $100 million company whose claim to fame is managing change across servers, networks, and recently, storage.

Today, Opsware’s change automation systems tend to work alongside classic infrastructure management tools, such as what used to be known as HP OpenView. Over the past year, Opsware has bulked itself up with several acquisitions of its own, including IT process automation – where you embed data center best practices as configurable, repeatable, policy-driven workflows. And it has added storage management, plus a sweetheart deal with Cisco for OEM’ing and reselling its network change management system as part of the Cisco. Although Cisco wasn’t happy about the disclosure, Opsware did announce during the Q4 earnings call that Cisco had resold $5 million worth of its network automation tool.

For HP Software, the Opsware acquisition comes after a year of roughly 80% growth – although the bulk of that was attributable to the Mercury acquisition. HP Software is one of those units that HP somehow never got around to killing – although not for lack of trying (we recall HP's server unit concluding a deal with CA that undercut its own HP OpenView offering). And it reported an operating profit of 8.5% – not stellar, but it at least reflected the fact that software is finally becoming a viable business at HP.

In part it’s attributable to the fact that infrastructure management folks are finally getting some respect with the popularity of ITIL – that is, ITIL defines something that even a c-level executive could understand. The challenge of course is that most classic infrastructure management tools simply populated consoles with cryptic metrics and nuisance alarms, not to mention the fact that at heart they were very primitive toolkits that took lots of care and custom assembly to put together. They didn’t put together the big picture that ITIL demanded regarding quantifying service level agreement compliance, as opposed to network node operation.

What's changed at HP Software is that the Mercury deal represented something of a reverse acquisition, as key Mercury software executives (at least, the ones who weren't canned and indicted) are now largely driving HP Software's product and go-to-market strategy. Although branding's only skin deep, it's nonetheless significant that HP ditched its venerable OpenView brand in favor of Mercury's Business Technology Optimization.

Consequently, we think there are potentially some very compelling synergies between Opsware's change management, HP's Service Management and CMDB, and Mercury's quality centers, which not only test software and manage defects, but also provide tools for evaluating software and project portfolios. We're probably dreaming here, but it would be really cool if somehow we could correlate the cost of a software defect not only to the project at large (and whether it and other defects place that project at risk), but also to changes in IT infrastructure configurations and service levels. The same would go for managing service levels in SOA, where HP/Mercury is playing catch-up to AmberPoint and SOA Software.

This is all very blue sky. Putting all these pieces together requires more than just a blending of product architectures; it also requires the ability to address IT operations, software development, and the business. Once the deal closes, HP Software's got its work cut out.

Close the Patent Office?

Rumor had it that, back in 1899, U.S. Commissioner of Patents Charles H. Duell declared that everything that could be invented had been invented. In actuality, Duell’s supposed comment and his recommendation that the Patent Office be closed down proved an urban myth. Of course, that didn’t prevent the metaphor from being abused by pundits such as yours truly, or by presidential speechwriters.

Next week, debate over that urban legend – as applied to the software industry – will be the central theme of Software 2007, an annual gathering of the folks who buy, sell, and run software companies. Specifically, it's the question of whether consolidation is killing innovation.

Clearly, unless you’ve been living under a rock, it’s kind of hard to not notice that software industry consolidation has sharply accelerated since 2000. And according to conventional wisdom, when you have fewer voices there should be fewer new ideas.

But let’s first ask if anybody cares. Is there such a thing as too much innovation?

Ask any SAP customer how much they look forward to upgrades. Of course, in most cases, version upgrades are not necessarily innovation. They simply augment (or detract from) existing functionality.

And then there's innovation that in actuality is little more than feature creep. Case in point: Microsoft Word 2003. An upgrade from Word 2000 – which we fondly remember because, for us, it worked just fine, thank you. Yet the 2003 upgrade added "smarter" formatting that was supposed to be innovative. But in actuality, it ended up making our life more complicated and less productive. Obviously there's such a thing as half-baked innovation.

Too much innovation – good or bad – can clearly be disruptive. But obviously, were the march of technology to cease tomorrow, you'd never be able to solve that perplexing integration problem, or discover how to rationalize IT service management.

So let’s return to the main point: if innovation is a good thing, does consolidation stifle it?

Acquisition strategies where vendors are swallowed up for their maintenance streams certainly validate the conventional wisdom; witness CA's practices during the Charles Wang era.

But when a vendor acquires a microscopic startup, it's usually about mainstreaming, not killing, innovation. For instance, when BEA bought 12-person SolarMetric about a year and a half ago, it was a shortcut to adding a critical piece of Enterprise JavaBeans 3.0 object/relational mapping technology to its products.

Arguably, the goal of most niche startups is not to become the next Microsoft or Google. For instance, SolarMetric was not likely to find a mass enterprise market for such a niche technology – who would go to the trouble of buying something like an object/relational mapper a la carte? Such a technology had to be part of a broader product, offered by a vendor with sufficient market reach.

Whether acquisition spurs or suppresses innovation depends on culture and, obviously, on where the vendor is in its product or market life cycle. When IBM bought Rational, Rational's peak of innovation was behind it. Not surprisingly, in the years since, IBM has failed to renew the core product. Maybe IBM should have taken the steps that Rational failed to take while it was still independent, but the innovation slowed well before IBM's watch began.

Similarly, when you look at the dramatic consolidation in the ERP market, from the standpoint of innovation, it’s made relatively little difference because ERP is a pretty mature technology. Whatever innovation is occurring is happening at the edges — such as deconstructing functionality into plug and play services.

Consolidation hasn’t prevented innovations, such as software-as-a-service (SaaS), the emergence of globalized development, or the rise of open source and service-oriented architecture (SOA). It also hasn’t prevented emergence of ITIL (IT Infrastructure Libraries), which is providing ERP-like roadmaps for IT organizations on how to deliver service, and for vendors to integrate their systems management and help desk offerings.

Focusing on open source and SaaS: both have lowered barriers to entry for new players – or, in the case of companies like Ingres, enabled a resurrection from the dead. And both have disrupted the industry's business model, with open source pushing commodity software down to commodity prices, and SaaS replacing the huge up-front purchase with pay-as-you-go subscriptions. SaaS also lowers the risk of test-driving new software, because customers can yank their subscriptions at any time – theoretically giving new startups more shots at winning business.

Admittedly, not all these changes are as radical as they seem. Subscription pricing revives the practice of renting software that prevailed before the emergence of packaged software in the 1980s (and that, for SAS, never went away). In fact, subscriptions look suspiciously like the 15 – 20% annual maintenance charges that customers have been paying all along – charges that some customers claimed were a form of extortion. But subscriptions do eliminate the huge up-front cost, and that has posed a major revenue challenge for traditional vendors.

Nonetheless, if you agree with Ingres president Roger Burkhardt, open source has had another major impact on the software market: it drives out useless innovation, such as what happened with Microsoft Word. “Open source takes the onus off upgrade creep,” Burkhardt told us. “We get the same subscription revenue as before, with no changes for new features.”

Paradoxically, while open source takes the pressure off innovation, theoretically it also lowers the barriers to it, as anyone who sweats through each incremental 0.2.x release of Linux will attest. On the other hand, constant innovation – even if useful – causes chaos. Here again, the changes aren’t as great as they seem: most commercial distributors of open source products tend to tamp down the release cycles.

Clearly, the ways in which rising trends like open source and SaaS promote innovation have nothing to do with the pace of industry consolidation. These new modes of delivering software to market are becoming mainstream in spite of the fact that the desktop platform and enterprise software markets have become highly consolidated.

M.R. Rangaswami, cofounder of the Sand Hill Group and a veteran of the enterprise software industry, predicts that multiple business models will proliferate throughout the industry. That, to us, does not exactly sound like a recipe for killing innovation. In the long run, customer appetite for new technologies will dictate the rate of innovation.

What’s in a Name?

When CA announced last week that it would drastically simplify product branding, our first reaction was, “What, are you crazy?”

It’s the latest in a spate of changes intended to demonstrate that this is not your father’s CA, and definitely not the one that blindly consumed companies for their maintenance streams and typically emphasized marketing over product development. Consequently, you’ll no longer see reruns of product names like “CA Unicenter TNG,” which modestly stood for “The Next Generation.”

A new decade and a major accounting scandal later, the company has been trying to shed that identity with a more focused acquisition strategy targeting, for the most part, startups that could add synergy and new IP to the company. It’s been the bright spot in what have been several years of financial pain.

And so the branding simplification could be viewed as the latest example of an effort to clarify just what CA is. In the new CA, the CA Wily Introscope product, which monitors J2EE application performance, will probably be called something like CA Introscope, while CA Unicenter Service Desk will likely be renamed CA Service Desk.

But there are several risks. The first is that CA might wind up confusing its installed base, although in actuality, that risk will vary by product. For most of the recent acquisitions – startups like Wily Technology, whose brands weren't exactly indelible – the risk should be pretty low. For the well-established lines, like Unicenter or the IDMS legacy database, it might be another story.

The Unicenter case is especially interesting. Ingrained for 15 years as CA's management framework for distributed systems, the unitary branding implied a unitary product.

Under the new scheme of things, the Unicenter brand will be dropped. That sounds like a dumb move until you realize that, excluding IBM (which is retaining the Tivoli brand), each of CA's major rivals is doing the same. For HP after the Mercury acquisition, it's goodbye "OpenView," hello "Business Technology Optimization," and for BMC, it's goodbye "Patrol," hello "Business Service Management."

(What's even more amazing is that HP (courtesy of Mercury), BMC, and CA – which uses the moniker "Business Service Optimization" – will now have brand names that sound like virtual clones of one another. Our first reaction was, couldn't any of these guys come up with something more original and memorable? But we digress…)

Why are the systems management folks dropping the familiar brand names? One word: ITIL. With the ITIL framework stipulating use of a Configuration Management Database (CMDB), the prime result is that each vendor in this space is reengineering its products to incorporate one. More to the point, the emergence of the CMDB has revealed the ugly truth that systems management products have never been as unified as their common brandings implied – thereby rendering the old brand names rather meaningless.

Indeed, the real risk to CA is the degree of effort it must invest to change more than 1,000 product names. In the wake of the announcement, it published a 34-page, single-spaced PDF listing the entire catalog, and estimated that it would take 12 – 18 months to change all the names. That's a process that will soak up significant marketing resources to change websites and collateral and to craft new campaigns. The new branding may help CA burnish its new identity, but given that the company's earnings have remained fairly flat over the past year, is it an investment that it can afford?