The New Industrial Revolution

The word “globalization” has always been a loaded term, meaning different things to different people at different times. Until recently, white-collar professionals thought that globalization affected the other guy; for them, it was synonymous with expanding markets for their company’s wares and services.

Fast forward to the present, and in the back office, globalization is probably associated with that sucking sound of software jobs going to India. If you take the headlines too seriously, you might be tempted to believe that San Jose will become the next Akron.

But go ask manufacturers about globalization, and you’ll probably get a more nuanced view. Yes, many of them closed obsolete plants in the 1980s when the Japanese invaded with faster, better, cheaper. Ironically, the Japanese had built that edge on the continuous-improvement ideas preached by Americans like W. Edwards Deming and Joseph Juran.

Fast forward to the ’90s. The same Japanese automakers were busily setting up shop along I-85 and opening design centers in Southern California, and their supplier bases were becoming more domestic too. The bottom line: the market changed, consumer tastes began shifting faster, and being close to the end customer suddenly became a competitive advantage. And today, those same Japanese automakers are playing catch-up ball with Detroit in SUVs.

Don’t fool yourself: even with the resurgence of domestic manufacturing, production remains as global as ever. What has changed is the value chain itself. It still makes sense to make commodity parts abroad but to differentiate products locally. And it makes even more sense for labor in developed nations to have higher-level skills, such as computer literacy and the ability to take on wider responsibility, to justify their higher pay scales.

The same will be true in software. Global fiber backbones will continue enabling countries with the right educational systems, skills bases, and business climates to join the software value chain. In developed regions, geeks in turn will have to stop being just geeks. Mastery over Java, .NET, or XML will no longer suffice. In the new software value chain, geeks must learn to communicate, comprehend business plans, and add management skills to avoid becoming tomorrow’s laid-off coal miners. They might even have to get a life.

The jobs will still be here; it’s no surprise that the high-end offshore firms are bulking up their North American and European presences. As manufacturers learned the hard way, being faster, better, cheaper cannot be done by remote control.

Hi Ho or Ho Hum?

Call it coincidence, but within the past week, two strangely similar deals reshaped the fragmented BI sector: Business Objects + Crystal Decisions, and Hyperion + Brio. Some background: while SAS and Teradata are the 16-ton gorillas dominating the high-end, terabyte-scale market, the rest of the field consists of midsize players dying to break out.

Business Objects and Cognos battle over the ad hoc query and reporting space, which targets power users. By contrast, Crystal has blown away virtually all competition (including Brio) for the high-volume, standard reports that are created once and distributed across enterprises and business units. Crystal also boasts a huge OEM business whose contribution is more mindshare than revenue, with Hyperion having been one of its licensees.

At first glance, the combination of Business Objects and Crystal appears a marriage of strength, while Hyperion’s Brio acquisition appears reactive. In reality, both deals are defensive, primarily aimed at putting together critical mass, and secondarily intended to provide more complete soup-to-nuts suites in a sector that until now resisted them.

Of course, amassing size in a consolidating software market is not a bad thing, unless the assembled parts have little or no synergy. Virtually every BI player has traveled the M&A route before, with mixed results. In this case, Business Objects and Crystal have more apparent synergies, but both players admit that it will be impractical to merge the two product sets into a single line. Ironically, Hyperion may find it easier to unite products, since conceptually it’s not a huge leap to replace the embedded Crystal Reports with Brio. And it may find some benefit in adding Brio’s Metrics Builder to its existing corporate performance dashboard products. But the deal won’t garner Hyperion much additional market share, given Brio’s also-ran history.

The real question is what BI companies want to be when they grow up. Business Objects aims to be the “pure play” query and reporting player. Although it (and everybody else) has ventured into analytic apps, it realizes the need to avoid treading on SAP’s turf. Hyperion intends to continue emphasizing financial consolidation and planning, while Cognos is trying to serve all of the above. To its credit, Cognos has just come out with a pretty cool next-generation web services reporting capability, though it has yet to prove itself.

Yet, as all this consolidation takes place, new demand for “real-time” business intelligence is emerging that may, or may not, utilize the capabilities these vendors have spent years developing. In that context, this week’s M&A is more about shoring up the past than about looking to the future.

Lies, Damn Lies, and Statistics

More often than not, IT organizations are judged guilty until proven innocent. And why not? When they had the money, they didn’t always spend it wisely.

Not surprisingly, the question of measurement keeps rearing its ugly head. Yeah, it’s “ugly,” because few enterprises know what they are actually getting for their IT dollars. Meanwhile, solution vendors are all too eager to trot out their ROI white papers, while subjective ROI studies from sources such as Nucleus Research continue to draw press coverage.

Naturally, that piqued our interest in a fairly new category of software that helps IT demonstrate how well it is meeting service level agreements (SLAs). The idea is especially relevant since maintaining existing systems typically consumes 80% or more of corporate IT budgets.

SLAs measure factors like performance, availability (how often the system is up), and reliability (how consistently it handles the load without failure). There are numerous bibles from which these measures are derived, including the IT Infrastructure Library (ITIL), which defines the elements of IT service management; ISO 9000 and Six Sigma for defect elimination; and ISO 17799 for security policy audits.
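
To make those measures concrete, here is a minimal sketch in Python of how availability and a simple reliability figure (mean time between failures) might be computed from an outage log and checked against a target. The outage records, the 30-day window, and the “three nines” target are invented for illustration; none of the standards above prescribes this particular calculation.

    from datetime import timedelta

    # Hypothetical outage log for a 30-day measurement window (all figures invented).
    outages = [
        {"cause": "database failover", "downtime": timedelta(minutes=42)},
        {"cause": "network switch reboot", "downtime": timedelta(minutes=7)},
    ]

    window = timedelta(days=30)
    total_downtime = sum((o["downtime"] for o in outages), timedelta())

    # Availability: fraction of the window the system was actually up.
    availability = 1 - total_downtime / window

    # Reliability, expressed here as mean time between failures (MTBF).
    mtbf = (window - total_downtime) / max(len(outages), 1)

    # Compare against a target the organization itself has to define; here, "three nines".
    availability_target = 0.999
    print(f"Availability: {availability:.4%} (target {availability_target:.1%})")
    print(f"MTBF: {mtbf}")
    print("SLA met" if availability >= availability_target else "SLA missed")

The arithmetic is the easy part; as the rest of this piece argues, deciding which targets are the right ones is not.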

Yet, guidelines or not, SLAs remain subjective. How do you define “good” service? Which parameters are relevant to your company or industry? Consequently, you can’t benchmark SLAs like EPA gas mileage ratings.

Nonetheless, if an organization has bothered to do the homework and define what good service means, it may be able to take advantage of various emerging tools that employ dashboards and business intelligence techniques to demonstrate quality of service, and to show whether the enterprise is getting its money’s worth from IT.

We were intrigued by a new tool from Euclid that ventures beyond the passive reporting of factors (e.g., response times) normally seen on systems management or help desk consoles. For instance, it could show how fast a business process executes, how often users demand changes to an application or database, or who owns a particular problem. And it provides mechanisms for prioritizing which problems get resolved first. On the horizon, the usual suspects (BMC, CA, IBM, and HP) will also get into the act. For instance, HP recently provided a glimpse of a future business impact analysis tool that will measure the financial impact of IT infrastructure failures or performance degradations.
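
To illustrate what “prioritizing which problems get resolved first” might look like in practice, the sketch below ranks open problems by a crude business-impact score. The tickets, field names, and weights are our own assumptions, not Euclid’s (or anyone else’s) actual algorithm.

    # A hypothetical scoring scheme for deciding which problems get resolved first.
    # Field names, weights, and sample tickets are assumptions for illustration only.
    problems = [
        {"id": "P-101", "process": "order entry", "users_affected": 240,
         "cost_per_hour": 5000, "hours_open": 3},
        {"id": "P-102", "process": "ad hoc query", "users_affected": 12,
         "cost_per_hour": 200, "hours_open": 20},
    ]

    def impact_score(p):
        # Estimated business cost to date, nudged upward by breadth of impact.
        return p["cost_per_hour"] * p["hours_open"] + 10 * p["users_affected"]

    for p in sorted(problems, key=impact_score, reverse=True):
        print(f"{p['id']} ({p['process']}): impact score {impact_score(p):,}")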

We believe that the attention to gauging service levels and business value is a positive, if not necessary, development. However, providing vivid dashboard readouts is the easy part of the problem. The real heavy lifting will be defining what constitutes “good” service, and what the value or cost of a properly transacted, delayed, or failed process is. In many cases, it will also require that companies capture cost data that has traditionally fallen through the cracks as “overhead.” One solution, activity-based costing (ABC), was first proposed in the 1980s. The fact that we’re still even discussing the idea shows how far most organizations have yet to go.
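
For readers who have not run across ABC, here is a back-of-the-envelope sketch of the idea: shared IT cost pools are allocated to business processes according to the activity drivers each process consumes, rather than being spread as undifferentiated overhead. All names and figures below are invented.

    # Activity-based costing in miniature; every name and figure is illustrative.
    cost_pools = {"help desk": 300_000, "server operations": 500_000}  # annual cost, $

    # How much of each activity's cost driver every process consumes per year.
    drivers = {
        "help desk": {"order entry": 1_200, "reporting": 300},            # support tickets
        "server operations": {"order entry": 4_000, "reporting": 6_000},  # server-hours
    }

    process_cost = {}
    for activity, pool in cost_pools.items():
        rate = pool / sum(drivers[activity].values())   # cost per unit of driver
        for process, usage in drivers[activity].items():
            process_cost[process] = process_cost.get(process, 0.0) + rate * usage

    for process, cost in process_cost.items():
        print(f"{process}: ${cost:,.0f} of IT cost allocated")

The allocation itself is trivial; capturing the driver data in the first place is where most organizations fall down.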