Category Archives: Systems Management

The Next Enterprise App

During the first Internet bubble, one prominent IT journal promoted itself with the tagline that IT equaled the business.

Obviously that was a bunch of hype, but a chance conversation with a colleague from a financial data firm reminded us of the central role that IT still plays. His firm was dumping Documentum as its content management platform in favor of a homegrown system because managing content lay at the heart of its business. And that business was simply too specialized to be addressed by anything off the shelf.

But exceptions like these prove the rule. For most firms, technology helps them run some aspect of the business, but doesn’t define it. And that explains why packages have become facts of life across large enterprises and, increasingly, small-to-midsize businesses (SMBs).

Admittedly, enterprise apps aren’t exactly a hot growth sector. Aside from targeting SMBs (which will generate barely a fraction of the revenues of the Global 2000 heyday), most enterprise players like SAP are focusing on expanding their footprint within their existing base.

Actually, let’s correct ourselves. While ERP is mostly a mature market, there’s at least one enterprise segment that’s begging for common solutions: the management and delivery of IT services.

Problem is, this has long been a fragmented, technical, and highly custom market. Infrastructure management players like BMC, CA, HP, and IBM have traditionally sold to varying silos within IT operations, with products that were really tools, requiring significant time and money to customize and integrate.

Yet, with regulatory compliance initiatives consuming greater chunks of IT bandwidth, something’s gotta give. When DBAs tweak databases, system admins provision servers, desktop support adds new users, or operations phases in a new Oracle or SAP upgrade, how do you document that their activities aren’t compromising the sanctity of financial data or the privacy of customer records? The only practical solution is adopting common processes that leave audit trails.

Over the past couple years, a new category of software with the incredibly awkward name of “Run Book Automation” has emerged to orchestrate some of the processes required for managing and delivering IT service. It’s drawn startups like Opalis and iConclude plus attention from the usual suspects. You model the workflows that it takes to handle a trouble ticket or provision a new user, tie in the appropriate management systems, then dashboard or report how the processes are orchestrated. Call it BPM (Business Process Management) for IT.
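To make that concrete, here’s a minimal sketch in Python of what modeling such a workflow might look like. Everything in it is hypothetical: the step names, the AuditLog class, and the stubbed calls stand in for hooks into real ticketing, identity, and access-control systems.

```python
from datetime import datetime, timezone

class AuditLog:
    """Hypothetical audit trail: every orchestrated step leaves a record."""
    def __init__(self):
        self.entries = []

    def record(self, step, actor, detail):
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "step": step,
            "actor": actor,
            "detail": detail,
        })

def provision_user(username, approver, audit):
    """Illustrative run book: each step stands in for a management-system call."""
    audit.record("ticket_opened", "help_desk", f"provision request for {username}")

    # Step 1: manager approval. A real orchestration would invoke a
    # workflow or identity-management product here, not return a constant.
    approved = True
    audit.record("approval", approver, "approved" if approved else "rejected")
    if not approved:
        return False

    # Step 2: create the account (stub for a directory or HR system call).
    audit.record("account_created", "sys_admin", f"account {username} created")

    # Step 3: grant application roles (stub for an access-control call).
    audit.record("roles_granted", "sys_admin", f"{username}: standard role set")
    return True

audit = AuditLog()
provision_user("jsmith", "dept_manager", audit)
for entry in audit.entries:  # the audit trail is what the compliance folks care about
    print(entry)
```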

“Run Book Automation” is a pretty awful label because it implies that vendors are still designing this as a technical solution addressing their usual data center constituencies.

But executing standard workflows, and documenting that users are provisioned according to them, is critical to the folks who own the business apps and face the audits. They’re the people who control the budgets, and they’re not going to buy “run book automation.” But they might pony up for something like “Business Service Management,” “Business Technology Orchestration,” or maybe even BPM for IT.

Changing labels is the easy part. The harder challenge is to offer comparable functionality, proving to the folks writing the checks that this is a real market with products (not tools) that can be evaluated side by side. Equally daunting for vendors is building an effective go-to-market strategy that reaches a higher-level business audience.

Fortunately there’s an answer here. The IT Infrastructure Library (ITIL) is a framework that defines the elements of processes such as incident management and change management. More importantly, ITIL initiatives are being embraced by a critical mass of the Global 2000, primarily as a pillar of their SOX, HIPAA, or Basel II compliance initiatives.

For Run Book Automation (or whatever you want to call it) vendors, ITIL provides the blueprint for developing standard orchestrations that could become the next off-the-shelf enterprise application. OK, ITIL itself doesn’t prescribe implementation, so you can’t design solutions from it. But organizations like the IT Service Management Institute are beginning to develop formalized bodies of knowledge that might fill the gap.
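To illustrate, here’s a minimal sketch of how an ITIL-style incident lifecycle might be encoded as a standard, reusable orchestration. The stage names follow ITIL’s commonly cited incident management flow, but the structure and owners are our own hypothetical shorthand, not anything ITIL or a vendor prescribes.

```python
# Hypothetical encoding of an ITIL-style incident lifecycle as a standard,
# reusable orchestration definition. The stage names follow ITIL's commonly
# cited incident management flow; the owners are illustrative stubs.

INCIDENT_WORKFLOW = [
    ("identify_and_log",     "service_desk"),
    ("categorize",           "service_desk"),
    ("prioritize",           "service_desk"),
    ("investigate_diagnose", "level2_support"),
    ("resolve_and_recover",  "level2_support"),
    ("close",                "service_desk"),
]

def run_incident(description):
    """Walk an incident through each stage, recording who handled what."""
    trail = [f"incident opened: {description}"]
    for stage, owner in INCIDENT_WORKFLOW:
        # A real product would dispatch to people and management systems here.
        trail.append(f"{stage}: handled by {owner}")
    return trail

for line in run_incident("email outage, building 3"):
    print(line)
```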

All this is redolent of what happened the last time a cross-industry group formalized processes. Nearly 30 years ago, the American Production & Inventory Control Society (APICS) defined the processes for managing manufacturing inventories, something that eventually broadened to areas like finance and costing. The result, MRP and MRP II (and later, ERP), provided a fat target for technology vendors to develop packaged solutions. It eventually spawned a $25 billion market.

Blood-Brain Barrier

Once upon a time, maintaining service levels was a simple yet difficult matter. At least you knew who was responsible for tuning databases, monitoring infrastructure, and safeguarding access control. The hard part of course was getting everything working as promised.

Service-related issues have traditionally been dealt with piecemeal, at the perimeter, data center, and inside application and database silos. Even after web applications exposed databases to the outside world, the action that mattered was confined (1) to the application inside the firewall, as the domain of security specialists or DBAs, or (2) out there in the cloud, where it was the service provider’s problem.

You had, in effect, a barrier between the circulatory and nervous systems, where interaction, and responsibility for it, was carefully circumscribed.

Life isn’t as simple anymore. With services eroding the silos demarcating internal applications from one another, not to mention from the outside world, it’s growing difficult to delineate where the system admin’s responsibility leaves off and the software developer’s or process owner’s begins. In a Service-Oriented Architecture (SOA), key aspects of business logic are often intertwined with service levels.

Consider an order fulfillment process. The customer’s procurement system triggers a series of events, culminating in a requirement to receive confirmation from you, the supplier. In so doing, the customer system fires a request to your order processing system, which in turn triggers an orchestrated process involving inventory checks, approval workflows (where warranted), queries to logistics providers, followed by final acknowledgment. When you guarantee a platinum-level partner or customer priority service, you are implicitly promising that your infrastructure will deliver at a specified performance level.
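Here’s a minimal sketch of that orchestration in Python, with the service-level clock running across every step. The tiers, timings, and stubbed calls are all illustrative assumptions, not any particular vendor’s product.

```python
import time

# Hypothetical service-level terms by partner tier; the figures are illustrative.
SLA_SECONDS = {"platinum": 2.0, "gold": 10.0, "standard": 60.0}

def check_inventory(item):      # stub for an inventory-system query
    return True

def needs_approval(quantity):   # stub for an approval-workflow rule
    return quantity > 1000

def query_logistics(item):      # stub for a call to a logistics provider
    return "ship_date: T+2"

def fulfill_order(partner_tier, item, quantity):
    """Orchestrated fulfillment: the SLA clock runs across every step."""
    started = time.monotonic()

    if not check_inventory(item):
        return "backordered"
    if needs_approval(quantity):
        pass  # would route to a human approver, pausing the orchestration
    shipping = query_logistics(item)

    elapsed = time.monotonic() - started
    if elapsed > SLA_SECONDS[partner_tier]:
        # The business promise and the infrastructure's behavior meet here.
        print(f"warning: {partner_tier} SLA of {SLA_SECONDS[partner_tier]}s missed")
    return f"confirmed ({shipping})"

print(fulfill_order("platinum", "widget-42", 50))
```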

In such a scenario, who’s responsible for ensuring that the request is genuine, comes from an approved customer or partner, and requires response within a given timeframe? We’ll understand if you’re drawing a blank.

When silos of functionality break down, so does the division of labor. System admins, charged with infrastructure health and perimeter protection, are in over their heads when it comes to the subtleties of contractual service guarantees. Likewise, the software and process folks are understandably edgy about banking on infrastructure that’s outside their control.

Not surprisingly, the market for governance solutions reflects the current organizational disconnect. Systems management providers like BMC, CA, HP and IBM are accustomed to calling on systems admins or operations, while SOA run-time governance upstarts like Actional (now part of Sonic), AmberPoint, and SOA Software speak to software shops.

There’s been a bit of consolidation. HP’s acquisition of Talking Blocks a couple of years back and IBM’s more recent acquisition of Cyanea provide hints that the artificial boundaries between the infrastructure and service management markets may eventually dissolve.

But that won’t happen before customer organizations overcome cultural inertia.

When you first deploy a handful of services in an SOA pilot, you can readily babysit them yourself. When you ramp up to a dozen services or more, you need a mechanism to enforce policy at run time. And once you really get serious about SOA deployment, that’s when you need better-coupled infrastructure management.
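As a rough illustration of what run-time policy enforcement means, consider a minimal sketch in which an intermediary checks each request against declarative policies before it reaches a service. The policy rules and request shape are hypothetical.

```python
# Minimal sketch of run-time policy enforcement: an intermediary checks each
# request against declarative policies before it reaches the service.
# The policy rules and request shape here are hypothetical.

POLICIES = {
    "order_service": {
        "allowed_callers": {"portal", "partner_gateway"},
        "max_payload_bytes": 65536,
    },
}

def enforce(service, caller, payload):
    policy = POLICIES.get(service)
    if policy is None:
        raise PermissionError(f"no policy registered for {service}")
    if caller not in policy["allowed_callers"]:
        raise PermissionError(f"{caller} may not call {service}")
    if len(payload) > policy["max_payload_bytes"]:
        raise ValueError("payload exceeds policy limit")
    return True  # request may proceed to the service

enforce("order_service", "portal", b"<order>...</order>")
```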

Nope, you won’t want to breach the blood-brain barrier outright because your perimeter must remain sacrosanct. But you’ll need some way to improve circulation.

Butterfly Wings

If you buy technology or track the market, by now you’re probably hearing catch phrases like real-time, adaptive, or on-demand enterprises in your sleep. Theoretically, if you subscribe to the logic, your company can respond at the drop of a hat, updating all the relevant systems and giving your employees, customers, and business partners the same snapshot of the way things really are right now. As a competitive strategy, that’s supposed to blow your rivals away.

The dark side, however, is the resulting stress that all this places on data centers, storage devices, and networks. Like the Amazon butterfly, whose wing flapping is eventually connected to some gale that pounds the coast of Maine, the impact of a major customer changing an order at the end of the quarter could wreak havoc if your infrastructure is not properly sized and your applications not adequately integrated.

Traditionally, getting all this under control has required a disparate array of best-of-breed tools that monitor network devices, server utilization, database I/O, storage, applications, and other components — not to mention the systems used for technical support and problem resolution. Compounding the situation, IT infrastructure is the domain of different organizations covering the data center, network infrastructure, and security. As we noted a few months back, look at identity management: the matter often degrades into such a tug of war between system admins and developers that one of the few companies that markets products to both [Novell] has yet to integrate them.

So we view IBM’s acquisition of Candle as a good step. It’s a logical move for IBM, whose Tivoli framework lacks the end-to-end server performance management functions of Candle’s flagship Omegamon products, not to mention the fact that Candle’s customer base is almost 100% IBM. And it’s an exit strategy for privately held Candle, whose revenues have been stagnating for years.

But it’s not the end of the conversation for BMC or CA, because nobody — not even IBM — has been able to integrate all the tools you need to monitor, manage, and repair IT infrastructures. Look at web services: when you submit a service request, you’re asking for authorization to access and claim processing, storage, and network resources. Yet, for the most part, web services resource management is still largely viewed as the domain of application developers. And, HP excepted, none of the major systems management players has come up with a strategy for integrating that.

Surgery and Survival

Yesterday’s first anniversary of “the new HP” provided the coming-out party for the company’s new enterprise services strategy. HP’s moniker, the “Adaptive Enterprise,” is its take on the real-time enterprise, something that sounds a lot like IBM’s On Demand computing or Microsoft’s one degree of separation campaigns.

Like its rivals, HP is couching its message in business terms. HP will provide the expertise, software, and services to tune or transform the infrastructure. But it will hand off to partners the rest of the job, at the higher-margin business process, application architecture, and systems integration levels. Driving the point home, HP announced a new $3 billion 10-year deal to manage Procter & Gamble’s worldwide IT infrastructure.

The pillars of HP’s go-to-market strategy bear the label “Darwin Reference Architecture,” which is more survival philosophy than formal technology blueprint. Darwin declares a commitment to the usual array of standards (including .NET and J2EE), and states that HP will offer infrastructure services and systems management software while relying on partnerships with leading software vendors and integrators.

On the plus side, HP leads in Intel servers and Linux (although it trades places with Dell for overall Intel market share), has a successful track record of transforming itself after swallowing Compaq, and offers leading management software (OpenView, the core component of Utility Data Center). The minus: HP’s absence at the higher levels of the stack, a drawback made more serious by its aspirations to help customers transform their businesses, not just their infrastructures.

Of note, the announcement included a web services management roadmap from HP, something that rivals IBM and BMC have yet to declare (CA has; we’ll discuss them in an upcoming note). Specifically, HP will inspect SOAP headers to manage access, authorization, and conformance with service levels. It is also adding a subscription-based provisioning engine to OpenView to help customers monetize web services.
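To give a feel for what SOAP header inspection involves, here is a minimal Python sketch. It is a generic illustration, not HP’s actual implementation: the application namespace, token registry, and element names are invented for the example.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
# Hypothetical application namespace for the illustration:
APP_NS = "http://example.com/ws-management"

AUTHORIZED_TOKENS = {"partner-123": "platinum"}  # illustrative token registry

def inspect_headers(envelope_xml):
    """Pull identity and service-level hints out of the SOAP header block."""
    root = ET.fromstring(envelope_xml)
    header = root.find(f"{{{SOAP_NS}}}Header")
    if header is None:
        raise PermissionError("no SOAP header: cannot authorize request")
    token_el = header.find(f"{{{APP_NS}}}AuthToken")
    if token_el is None or token_el.text not in AUTHORIZED_TOKENS:
        raise PermissionError("unknown or missing auth token")
    return AUTHORIZED_TOKENS[token_el.text]  # service tier drives SLA handling

envelope = f"""<soap:Envelope xmlns:soap="{SOAP_NS}" xmlns:m="{APP_NS}">
  <soap:Header><m:AuthToken>partner-123</m:AuthToken></soap:Header>
  <soap:Body><m:GetQuote/></soap:Body>
</soap:Envelope>"""

print(inspect_headers(envelope))  # -> "platinum"
```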

And just as IBM is embedding Tivoli into its own products, HP has similar goals for embedding OpenView web services management features in third-party products. It is already building reference implementations to show how OpenView can enforce service level compliance for third-party applications, and plans to develop channel programs.

HP needs to get its consulting group up to speed on adaptive infrastructure knowledge and practices quickly. At present, only a couple hundred of its 20,000-plus consultants have learned the new system of metrics for the critical job of infrastructure adaptability assessment.

But that challenge is dwarfed by something more basic. Although HP plans to keep infrastructure assessments to itself, it must identify ways of bringing partners in earlier in the cycle and making the handoff seamless. That’s because web services are supposed to dissolve the boundaries between infrastructure and business process. Without successful teaming and coordination with partners (who handle the upper half of the transformation job), HP risks the old adage about the surgery being successful — and you know the rest.

The Softer Side of IBM

At the company’s annual shareholders’ meeting last week, IBM president Sam Palmisano announced that 61% of last year’s revenues came from software and services. At IBM’s annual industry analyst conference, also last week, company execs got the message. “I would rather just show you chips,” joked Bill Zeitler, senior VP of the Systems Group.

IBM’s current push, On Demand Computing, was quite a switch from last year’s “eServer” theme. On Demand refers to the notion of transforming systems into utilities. In other words, getting access to business services and computing resources transparently, in real (or right) time, anywhere inside or outside the enterprise. On Demand is based on old concepts, like capacity on demand and outsourcing, and newer concepts, such as virtualization (where all resources are made to look like a single entity) and service-oriented architectures (where modular, integratable software components replace conventional applications).

Our take? Good start, but the devil’s in the details.

We were impressed, for instance, by how Tivoli management capabilities such as monitoring, access control, and provisioning are already embedded into WebSphere, DB2, and Lotus. Ditto for IBM’s clear explanation that if you want to do things like correlate WebSphere events with other systems, that’s where you need Tivoli.

Other pieces proved less complete. For instance, IBM’s Business Consulting Services group (the folks from PwC) has not yet ironed out pricing details with its counterparts at Global Services for the new business transformation offerings (where business processes, and the IT services that support them, are outsourced on a combination of fixed and variable costs).

Nor has IBM articulated its strategy for managing web services at the systems level. Admittedly, neither have IBM’s rivals (read: HP). For instance, Tivoli needs to do more than simply track end-to-end performance data covering how fast a SOAP message is answered. It needs to inspect SOAP headers as part of On Demand performance, security, and provisioning. In all fairness, this discipline remains a work in progress, as we learned at our XML One panel session on the topic a couple of months back.

IBM currently has more of the pieces in place for its On Demand vision, from servers to software and services. But whether the entire package needs to come from one vendor is another issue. While past successes from enterprise players like SAP made a mockery of multi-vendor best-of-breed strategies, the fact that IBM — and others — are basing their real-time computing utility visions on open standards means that such strategies should theoretically have better chances this go-round.

So how is IBM doing? Sizing up the competition, IBM’s Nancy Breiman, VP of competitive sales, named HP (with Utility Data Center) as the main rival. She graded rivals from B to D, with IBM implicitly getting an A; we believe she was a bit generous. More realistically, we would rate IBM a C+, given the huge amount of work remaining to fill out the vision, and IBM’s rivals C- to F.

BMC Aims Downmarket

Systems management has traditionally been the domain of high-end corporate sales. A glance at BMC’s own product description for PATROL sums it up best: “PATROL for Enterprise Management provides the business process, enterprise-wide view of your environment. It provides a central point of control for all applications, computers, LANs, WANs and communications devices throughout the enterprise.” These solutions are often overkill or unaffordable for SMEs.

Yet the emergence of dot coms means that there are a lot of new, small companies with big-company computing needs. And in most cases, they lack the DBAs or network managers to meet them.

That’s in large part the explanation behind the emergence of application service providers. Yet ASPs alone only allow aspiring dot coms to export their problems, not necessarily solve them. Emerging dot coms and recently spun-off business units require products and solutions, not just services, to manage their computing requirements.

Software vendors like BMC need opportunities like this to distract from their post-Y2K blues. For BMC, as for much of the enterprise software industry, 1999 and 2000 have been a tale of two cities. The company, which grew at a 30% clip last year, just reported its third straight disappointing quarter: September 30 figures showed revenues 7% below the same quarter last year, with per-share profits coming in at only half of Wall Street’s expectations.

At Oracle OpenWorld, BMC announced the formation of a new business unit, DataOne, that will have over 300 professionals—including sales and marketing, product development, customer service, and consulting. The new unit’s mission is to come out with simpler, cheaper, almost shrink-wrapped versions of BMC’s PATROL (distributed systems management) products—like WebDBA, a $495/seat product released in the summer that allows DBAs to manage Oracle databases remotely through any web browser.

Aside from WebDBA, DataOne does not yet have any specific products, nor has it firmed up product plans or timelines. So why announce now? According to Anthony Brown, director of marketing for DataOne, “This is a good time to give our customers an understanding of where we’re going with our technology,” adding that Oracle OpenWorld just happened to fall at the time they were ready to go public.

(At the show, BMC also announced the acquisition of Sylvain Faust, a Canadian software firm that developed tools for tuning SQL statements, which will also be folded into the new business unit.)

The new DataOne business unit inherits roughly 30 PATROL products covering SQL development, database change management, performance tuning and optimization, maintenance optimization, backup and recovery, and business information management. Aside from some front-end beautification, these are currently the same PATROL offerings aimed at Global 2000 organizations.

The long-term goal is to come out with integrated solutions aimed at companies scraping by with one or two inexperienced DBAs, who require easy-to-use solutions that make liberal use of preconfigured templates. In most cases, WebDBA will serve as the front end to these new solution sets.

To accelerate products out the door, BMC has begun adopting larger-scale beta testing programs. A program associated with the initial WebDBA release attracted 100 customers, and BMC claims that over 2,000 customers have volunteered for the next beta phases, aimed at adding new utilities to the WebDBA palette. Although DataOne has not yet announced product plans, BMC promises to begin offering incremental integration between WebDBA and some of these utilities within the next 60 days. Among the early targets, BMC will likely address change management first, followed by performance monitoring and maintenance optimization.

BMC’s moves are not surprising. With existing enterprise markets flattening, it has to figure out some way of penetrating smaller organizations, which are largely virgin markets. The task is daunting; database management is full of arcane concepts, such as disk defragmentation and tablespace reorganization, that typically require years of experience to master. In fact, the need for simpler, preconfigured tools extends beyond small organizations, because even large enterprises have a hard time finding experienced DBAs in tight labor markets.

BMC’s moves are hardly unique. On the database side, the recent Sun/Oracle/Veritas alliance to certify specific database/hardware configurations reflects the fact that fast-growing enterprises need databases-to-go. In the ERP space, giants like SAP have introduced fast-track programs such as ASAP, which make liberal use of preconfigured templates. The difficulty of the task is reflected in the fact that ASPs have struggled to keep customers happy with plain-vanilla configurations of enterprise software. And although databases are, at a certain level, more generic, in a distributed computing world there are always bound to be variations in the way functions or tables are deployed.

Although Sun et al. have so far resisted expanding their database/hardware pre-certification programs, such efforts are the logical place for ventures like BMC’s DataOne to gain traction.