Category Archives: ITIL

The Next Enterprise App

During the first Internet bubble, one prominent IT journal promoted itself with the tagline that IT equaled the business.

Obviously that was a bunch of hype, but a chance conversation with a colleague from a financial data firm reminded us of the central role that IT still plays. His firm was dumping Documentum as its content management platform in favor of a homegrown system because managing content is the heart of its business, and that business was simply too specialized to be addressed by anything off the shelf.

But in most cases, exceptions like these prove the rule. For most firms, technology helps them run some aspect of the business, but doesn’t define it. And that explains why packages have become facts of life across large enterprises and, increasingly, small-to-midsize businesses (SMBs).

Admittedly, enterprise apps aren’t exactly a hot growth sector. Aside from targeting SMBs (which will generate barely a fraction of the revenues of the Global 2000 heyday), most enterprise players like SAP are focusing on expanding their footprint within their existing base.

Actually, let’s correct ourselves. While ERP is mostly a mature market, there’s at least one enterprise segment that’s begging for common solutions: the management and delivery of IT services.

Problem is, this has long been a fragmented, technical, and highly custom market. Infrastructure management players like BMC, CA, HP, and IBM have traditionally sold to various silos within IT operations, with products that were essentially tools requiring significant time and money to customize and integrate.

Yet, with regulatory compliance initiatives consuming greater chunks of IT bandwidth, something’s gotta give. When DBAs tweak databases, system admins provision servers, desktop support adds new users, or operations phases in a new Oracle or SAP upgrade, how do you document that their activities aren’t compromising the sanctity of financial data or the privacy of customer records? The only practical solution is adopting common processes that leave audit trails.

Over the past couple years, a new category of software with the incredibly awkward name of “Run Book Automation” has emerged to orchestrate some of the processes required for managing and delivering IT service. It’s drawn startups like Opalis and iConclude plus attention from the usual suspects. You model the workflows that it takes to handle a trouble ticket or provision a new user, tie in the appropriate management systems, then dashboard or report how the processes are orchestrated. Call it BPM (Business Process Management) for IT.
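To make the concept concrete, here is a minimal sketch of what modeling such a run book might look like, assuming a hypothetical “provision new user” workflow; the step names, systems, and functions are illustrative, not any vendor’s actual API:

from datetime import datetime, timezone

# Hypothetical sketch: a "provision new user" run book modeled as ordered steps.
# Step names and system labels are illustrative, not any vendor's actual API.
PROVISION_USER_RUNBOOK = [
    {"step": "open_ticket",      "system": "service_desk"},
    {"step": "create_account",   "system": "directory"},
    {"step": "grant_erp_access", "system": "erp"},
    {"step": "notify_manager",   "system": "email"},
    {"step": "close_ticket",     "system": "service_desk"},
]

def run_runbook(runbook, requested_by):
    """Execute each step in order and return an audit trail for compliance reporting."""
    audit_trail = []
    for task in runbook:
        # In a real product, this is where the orchestrator would call out to the
        # management system (service desk, directory, ERP agent, and so on).
        audit_trail.append({
            "step": task["step"],
            "system": task["system"],
            "requested_by": requested_by,
            "completed_at": datetime.now(timezone.utc).isoformat(),
        })
    return audit_trail

if __name__ == "__main__":
    for entry in run_runbook(PROVISION_USER_RUNBOOK, requested_by="desktop_support"):
        print(entry)

The point is less the code than the byproduct: every executed step leaves a timestamped record, which is exactly the audit trail the compliance folks are asking for.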

“Run Book Automation” is a pretty awful label because it implies that vendors are still designing this as a technical solution addressing their usual data center constituencies.

But executing standard workflows, and documenting that users are provisioned according to them, is critical to the folks who own the business apps and conduct the audits. They’re the people who control the budgets, and they’re not going to buy “run book automation.” But they might pony up for something like “Business Service Management,” “Business Technology Orchestration,” or maybe even BPM for IT.

Changing labels is the easy part. The challenge is to offer comparable functionality, proving to the folks writing the checks that this is a real market with products (not tools) that can be compared. Equally daunting for vendors is building an effective go-to-market strategy that reaches a higher-level business audience.

Fortunately there’s an answer here. The IT Infrastructure Library (or ITIL) is a framework that defines, for instance, the elements that comprise incident management or change management. More importantly, ITIL initiatives are being embraced by a critical mass of the Global 2000, primarily as one of the pillars of their SOX, HIPAA, or Basel II compliance initiatives.

For Run Book Automation (or whatever you want to call it) vendors, ITIL provides the blueprint for developing standard orchestrations that could become the next off-the-shelf enterprise application. OK, ITIL itself doesn’t prescribe implementation, so you can’t design solutions from it alone. But organizations like the IT Service Management Institute are beginning to develop formalized bodies of knowledge that might fill the gap.

All this is redolent of what happened the last time a cross-industry group formalized processes. Nearly 30 years ago, the American Production & Inventory Control Society (APICS) defined the processes for managing manufacturing inventories, something that eventually broadened to areas like finance and costing. The result, MRP and MRP II (and later, ERP), provided a fat target for technology vendors to develop packaged solutions. It eventually spawned a $25 billion market.

Eyes Off the Prize

At first glance, there’s something about the HP board saga that doesn’t ring true. On one side, you have a chairwoman who was a student of governance suffering a lapse of judgment that could find her (and others) in legal jeopardy. Meanwhile, a dissident board member whose success was frequently built by throwing away the rules proved to be the one who blew the whistle.

What’s wrong with this picture?

An article in yesterday’s Wall Street Journal documenting the dissension and politics on the HP board provided good insight into a culture that allowed the situation to careen out of control.

You could spin the saga in any number of ways: the outsider battling against entrenched interests, the innocent being set up, the woman of the year battling to be taken seriously in a highly macho boardroom culture, or the need to vet board nominees more thoroughly.

But reality is hardly black and white. Although the smoking gun has yet to be uncovered, it’s clear that the pretexting occurred because project leadership lost sight of the big picture.

To recap, the investigation began in the wake of boardroom leaks during debates over former CEO Carly Fiorina’s future. The premise was solid: boardroom leaks compromise corporate governance. But at some point the investigation veered off course, once somebody cleared the use of legally questionable tactics to obtain the phone records of suspect board members.

Consequently, a project conceived with the goal of sound governance ultimately compromised it. Governance broke down when somebody at the top authorized the pretexting, or failed to manage subordinates closely enough to keep the process on solid ground.

While the consequences of the HP case may prove far more severe than a blown budget or project schedule, the scenario should still look rather familiar to any seasoned IT professional. Put another way, how often do project teams get so wrapped up in the details that they lose sight of the overall goals?

Consider a major global investment bank’s compliance project. One of the bank’s compliance strategies is building systems that adequately separate and document risk. As part of the development cycle, IT is documenting the business requirements so that the system delivers the right performance, supports the necessary scalability, maintains appropriate security levels, and contains the right functionality.

Unfortunately, in the quest to document requirements, the team has found itself being evaluated on its progress in feeding those requirements into a tool that manages them. Yet there are no metrics in place, and nobody is taking ownership for vetting the accuracy, reliability, or quality of the requirements. Ultimately, if the system is built on faulty requirements, compliance will prove problematic.

The consequences of project tunnel vision could be exacerbated as IT organizations get serious about adopting best practices frameworks such as COBIT for governance, ITIL for service management, or ISO 17799 for IT security. The goals may be admirable, but as the HP board learned the hard way, the devil is in details that block the big picture.

Governance Vendor, Merge Thyself

It’s pretty hard to keep secrets these days. After months of speculation, HP put Mercury out of its misery, announcing it would acquire the scandal-rocked firm for roughly a 25% premium on its most recent closing price. It wasn’t a bad deal for Mercury shareholders: HP’s offer amounted to roughly double the firm’s street value after Mercury announced the firing of its three top executives over allegations of stock option gaming last fall.

On one level, the acquisition of Mercury adds a sad footnote to a company whose reputation wasn’t brought down by performance, but by allegations of executive greed.

On another level, it adds another feather in the cap for HP, whose OpenView business has recently been on a roll (its Q1 numbers were 34% higher than those of a year ago).

It’s easy in hindsight to realize that Mercury’s governance strategy ultimately had to play on a wider field that included IT operations. Increasingly, the company was finding itself competing not with the Compuwares of the world, but with the BMCs, CAs, and IBMs.

Having begun life as a software testing vendor that played heavily in the application development space, Mercury shifted gears roughly three years ago when it bought project portfolio management provider Kintana. The shift higher up the totem pole to IT governance made sense in that it widened Mercury’s horizons from software QA to IT QA.

Even had Mercury’s corporate governance scandal never occurred, the company would have found its IT governance strategy hitting the wall sooner or later. Take service desk. Just over a month ago, it acquired a small firm that provided a service desk (what used to be called help desk) tool that it billed as being driven by ITIL (the IT Infrastructure Library).

To clarify, ITIL is becoming a more popular term among CIOs because it provides a framework of best practices for delivering and managing IT service. ITIL has grown in importance, not only because IT organizations need to document their quality of service, but also because many of the ITIL processes, such as change management, have compliance ramifications.

While Mercury’s ITIL-based service desk approach was a natural extension of its IT governance offerings, it proved an incomplete solution. By contrast, most service desk offerings are linked to IT operations systems such as BMC Patrol, CA Unicenter, HP OpenView, or IBM Tivoli. Mercury’s was instead linked to a change management system, an approach that it claimed was more business-focused. Yet a service desk that cannot automatically generate trouble tickets when an IT operations system reports an outage does little to help meet service level agreements.
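For illustration, here is roughly what that missing linkage looks like, assuming a hypothetical monitoring event feed and ticket queue rather than any actual vendor interface:

# Hypothetical sketch of monitoring-to-service-desk linkage: when an operations
# system reports an outage, a trouble ticket is opened automatically.
OUTAGE_SEVERITIES = {"critical": 1, "major": 2, "minor": 3}

def on_monitoring_event(event, ticket_queue):
    """Turn an outage event from a (hypothetical) operations console into a ticket."""
    if event.get("status") != "down":
        return None  # Only outages generate tickets in this sketch.
    ticket = {
        "summary": "Outage on {}: {}".format(event["host"], event["service"]),
        "priority": OUTAGE_SEVERITIES.get(event.get("severity", "minor"), 3),
        "source": "it_operations_monitor",
    }
    ticket_queue.append(ticket)
    return ticket

if __name__ == "__main__":
    queue = []
    on_monitoring_event(
        {"host": "erp-db-01", "service": "oracle", "status": "down", "severity": "critical"},
        queue,
    )
    print(queue)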

Consequently, it wasn’t surprising that HP was the rumored suitor for Mercury. While HP, like its IT operations rivals, has promoted the business aspects of systems management, it lacked the IT governance dashboards, project portfolio management, and analytics of CA and IBM. Mercury therefore makes a good fit.

But we have one quibble with published analyst comments that HP’s acquisition of Mercury adds SOA management. Aside from the acquisition of web services registry provider Systinet, completed earlier this year, Mercury had not yet developed solutions or strategies for run-time governance of SOA environments. Ironically, that’s something the HP acquisition could solve. HP’s SOA Manager, which provides run-time governance, has until now been somewhat of an orphan in the HP software group. Paired with Mercury’s Systinet, HP might finally get the chance to plug the gap.

Lies, Damn Lies, and Statistics

More often than not IT organizations are judged guilty until proven innocent — and why not? When they had the money, they didn’t always spend it wisely.

Not surprisingly, the question of measurement keeps rearing its ugly head. Yeah, it’s “ugly,” because few enterprises know what they are actually getting for their IT dollars. Meanwhile, solution vendors are overly eager to trot out their ROI white papers, while subjective ROI studies from sources such as Nucleus Research continue to draw press.

Naturally, that piqued our interest in a fairly new category of software that helps IT demonstrate how well it is meeting service level agreements (SLAs). The idea is especially relevant since maintaining existing systems typically hogs 80% or more of corporate IT budgets.

SLAs measure factors like performance (how well the system handles the load), availability (how often it is up), and reliability (how often it runs without failure). There are numerous bibles from which these measures are derived, including the IT Infrastructure Library (ITIL), which defines the elements of IT service management; ISO 9000 and Six Sigma for defect elimination; and ISO 17799 for security policy audits.
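As a quick illustration of how a couple of those measures are typically quantified (the figures below are invented, not drawn from any benchmark), availability is commonly computed either from observed uptime or from mean time between failures and mean time to repair:

# Illustrative only: common formulas for availability measures.
# The sample figures are invented, not drawn from any benchmark.

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def availability_from_downtime(total_hours, downtime_hours):
    """Availability measured directly from observed uptime over a period."""
    return (total_hours - downtime_hours) / total_hours

if __name__ == "__main__":
    # A system that fails every 720 hours on average and takes 2 hours to restore...
    print("{:.4%}".format(availability(720, 2)))                  # ~99.72%
    # ...or one month (730 hours) with 1.5 hours of recorded downtime.
    print("{:.4%}".format(availability_from_downtime(730, 1.5)))  # ~99.79%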

Yet, guidelines or not, SLAs remain subjective. How do you define “good” service? Which parameters are relevant to your company or industry? Consequently, you can’t benchmark SLAs like EPA gas mileage ratings.

Nonetheless, if an organization has bothered to do the homework and define what good service means, it may be able to take advantage of various emerging tools employing dashboards and business intelligence techniques to demonstrate quality of service and whether the enterprise is getting its money’s worth from IT.

We were intrigued by a new tool from Euclid that ventures beyond passive reporting of factors (e.g., response times) normally seen on systems management or help desk consoles. For instance, it can show how fast a business process executes, how often users demand changes to an application or database, or who owns a particular problem. And it provides mechanisms for prioritizing which problems get resolved first. On the horizon, the usual suspects (BMC, CA, IBM, and HP) will also get into the act. For instance, HP recently provided a glimpse of a future business impact analysis tool that will measure the financial impact of IT infrastructure failures or performance degradations.

We believe that the attention to gauging service levels and business value is a positive, if not necessary, development. However, providing vivid dashboard readouts is the easy part of the problem. The real heavy lifting will be defining what comprises “good” service, and what the value or cost of a properly transacted, delayed, or failed process is. And in many cases, it will require that companies capture cost data that has traditionally fallen through the cracks as “overhead.” The solution, activity-based costing (ABC), was first proposed in the 1980s. The fact that we’re still even discussing the idea shows how far most organizations have yet to go.
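To see why ABC matters here, consider a deliberately simplified, made-up example: rather than lumping IT support into a single overhead pool, costs are assigned to activities and then charged out by the events that drive them.

# Simplified, made-up illustration of activity-based costing for IT service:
# overhead is assigned to activities, then charged out by the events that drive them.

activity_costs = {            # monthly cost pools ($)
    "handle_trouble_ticket": 120_000,
    "provision_user":         45_000,
    "apply_change":           90_000,
}
activity_volumes = {          # cost drivers for the same month
    "handle_trouble_ticket": 4_000,   # tickets handled
    "provision_user":          300,   # new users provisioned
    "apply_change":            450,   # approved changes applied
}

for activity, cost in activity_costs.items():
    unit_cost = cost / activity_volumes[activity]
    print("{}: ${:,.2f} per event".format(activity, unit_cost))

Once costs are visible per event, a failed change that has to be done twice shows up as twice the cost, instead of vanishing into a general overhead line.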

Flashback: The Year of Living Dangerously

Give yourself a pat on the back for this one. Midnight came on December 31, 1999, and chances are, the worst thing that happened was that your IT systems displayed a few errant report screens that rolled over from 1999 to 19100. Otherwise, your organization stayed open, or reopened for business as usual on Monday, January 3, 2000. So much for Y2K. In the end, it looked like somebody called a war, but nobody came.
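For anyone who never saw one of those screens, the “19100” glitch typically came from reports that stored the year as a two-digit offset from 1900 and glued a literal “19” in front of it; a simplified reconstruction (not any particular system’s code) looks like this:

# Simplified reconstruction of the classic "19100" display bug: many legacy
# programs kept the year as years-since-1900 and printed a literal "19" prefix.
def legacy_report_year(years_since_1900):
    return "19" + str(years_since_1900)

print(legacy_report_year(99))    # "1999"  -- fine
print(legacy_report_year(100))   # "19100" -- the errant screen on Jan 1, 2000

# The fix most shops applied: compute the full year instead of prefixing.
def fixed_report_year(years_since_1900):
    return str(1900 + years_since_1900)

print(fixed_report_year(100))    # "2000"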

By the middle of last year, most of us became pretty bullish on surviving the date change. Before that, however, it looked like Chicken Little was calling the shots. Remember the fears about the lights going out and chaos breaking loose? The myths about long-lost COBOL programmers commanding $100,000 salaries, or the dire warnings that you’d better book Y2K consultant time early, because their rates would skyrocket the longer you waited?

Fast forward to the present. Your systems rolled over with little incident. You didn’t have to worry about finding COBOL programmers or consulting help. Your company probably relied on your services, rather than hiring consultants, because people like you knew best where the date problems were, and knew the business well enough to know which date fixes had to be done, and which ones could pass. Maybe your company deferred some important IT projects, or maybe it accelerated a few long-awaited system replacements.

Congratulations. Your company survived the disease. Now, the question is, will it survive the cure?

For most organizations, Y2K was the moral equivalent of ERP. Y2K projects forced us to take stock of our IT assets, in many cases, for the first time. It drove organizations to clear the cobwebs, identifying what systems were still in use, how they were used, and which ones could be retired. So far, no pain, all gain.

However, the headaches started when your company began scouring those 15- to 20-year-old COBOL programs. If your company was lucky, it still had a few of the original developers around. If not, it was time for the sleuth work.

Once the assessments turned to the bedrock systems your company used every day, it was time to haul out the change management process, because those systems were too big to fix in one shot. It was time for a reality check with users. What parts of the system were most used, and which parts were the least valuable? What workarounds were possible? In what sequence should we fix the parts to minimize disruptions?

With the change management process updated, it was time to dig in. Renovate the code, followed by the never-ending testing. Again, the upside was the opportunity to engage users, whose acceptance was the final test.

The final step was dusting off those contingency plans. Of course, with Y2K work going so well, it was easy to grow complacent, but Y2K was a potential disaster too universal to ignore. Even if your company held up its end of the bargain, what would happen if the power went out, the phones failed, or your business partners began sending corrupt data?

Even the best-prepared organizations had to revisit their contingency plans. A South Florida company, which had plenty of experience planning for hurricane disruptions to headquarters, had to revise its plans because they didn’t adequately factor in local processing problems at its regional distribution centers.

In the end, many of us gained a clearer picture of our IT assets. We updated our change management processes, beefed up testing procedures, and hopefully improved overall software quality. We engaged users from planning through final acceptance testing stages. If we were lucky, we had the chance to replace older desktops with new, standardized, more maintainable configurations and accelerate some new enterprise transaction system projects.

According to Gartner Group, we spent $300 billion to $600 billion worldwide to fix Y2K problems, and if we improved internal practices to boot, the investment should prove worthwhile. Significant pain, but plenty of gain.

However, it’s human nature to mobilize for emergencies, then rest on our laurels. If we relax too much, all those updated asset inventories and change management practices could easily fall out of date. Lacking strong follow-through, the Y2K cure could prove worse than the disease.