Category Archives: Legacy Systems

HP analyst meeting 2010: First Impressions

Over the past few years, HP under Mark Hurd has steadily gotten its act together, refocusing on the company’s core strengths with an unforgiving eye on the bottom line. Sitting at HP’s annual analyst meeting in Boston this week, we found ourselves comparing notes with our impressions from last year. Last year, our attention was focused on Cloud Assure; this year, it’s the integration of EDS into the core business.

HP now bills itself as the world’s largest pure-play IT company and ninth in the Fortune 500. Of course, there’s the consumer side of HP that the world knows. But with the addition of EDS, HP finally has a credible enterprise computing story (as opposed to an enterprise server story). Now we’ll get plenty of flak from our friends at HP for that one – HP has historically had the largest market share for SAP servers. But let’s face it: prior to EDS, the enterprise side of HP was primarily a distributed (read: Windows or UNIX) server business. Professional services were pretty shallow, with scant knowledge of the mainframes that remain the mainstay of corporate computing. Aside from communications and media, HP’s vertical industry practices were few and far between. HP still lacks the vertical breadth of IBM, but with EDS it has gained critical mass in sectors ranging from federal to manufacturing, transport, financial services, and retail, among others.

Having EDS also lends credibility to initiatives such as Application Transformation, a practice that helps enterprises prune, modernize, and rationalize their legacy application portfolios. Clearly, Application Transformation is not a purely EDS offering; it originated with Ann Livermore’s Enterprise Business group and draws upon HP Software assets such as discovery and dependency mapping, Universal CMDB, PPM, and the recently introduced IT Financial Management (ITFM) service. But to deliver, you need bodies – people who know the mainframe, where most of the apps being harvested or thinned out reside. And that’s where EDS helps HP flesh this out into a real service.

But EDS is so 2009; the big news on the horizon is 3Com, a company that Cisco left in the dust before it rethought its product line and eked out a highly noticeable 30% market share for network devices in China. Once the deal closes, 3Com will be front and center in HP’s converged computing initiative, which until now consisted primarily of blades and ProCurve VoIP devices. HP gains a much wider range of network devices to compete head-on as Cisco itself goes up the stack into a unified server business. Once the 3Com deal is closed, HP will have to invest significant time, energy, and resources to deliver on the converged computing vision with an integrated product line, rather than a bunch of offerings that fill the squares of a PowerPoint matrix chart.

According to Livermore, the company’s portfolio is “well balanced.” We’d beg to differ when it comes to software, which accounts for a paltry 3% of revenues (a figure that our friends at HP insist understates the real contribution of software to the business).

It’s the side of the business that suffered from (choose one) benign or malign neglect prior to the Mark Hurd era. HP originated network node management software for distributed networks, an offering that eventually morphed into the former OpenView product line. Yet HP was so oblivious to its own software products that at one point its server folks promoted bundling a rival product from CA. Nonetheless, somehow the old HP managed not to kill off OpenView or OpenCall (the product now at the heart of HP’s communications and media solutions) – although we suspect that was probably more out of neglect than intent.

Under Hurd, software became strategic, a development that led to the transformational acquisition of Mercury, followed by Opsware. HP had the foresight to place the Mercury, Opsware, and OpenView products within the same business unit, as – in our view – the application lifecycle should encompass managing the runtime (although to this day HP has not really integrated OpenView with Mercury Business Availability Center; the products still appeal to different IT audiences). But there are still holes – modest ones on the ALM side, but major ones elsewhere, such as business intelligence, where Neoview sits alone, or the converged computing stack and cloud-in-a-box offerings, which could use strong identity management.

Yet if HP is to become a more well-rounded enterprise computing company, it needs more infrastructural software building blocks. To our mind, Informatica would make a great addition, one that would draw more attention to Neoview as a credible BI business – not to mention that Informatica’s data transformation capabilities could play a key role in HP’s Application Transformation service.

We’re concerned that, as integration of 3Com consumes considerable energy in the coming year, the software group may not have the resources to make the transformational acquisitions needed to more firmly entrench HP as an enterprise computing player. We hope that we’re proven wrong.

Skeletons and Demons

A few months back, we had an interesting discussion on the history of the relational database (RDBMS) with Oracle VP Ken Jacobs, a guy also known as Mr. DBA. An outgrowth of IBM research, RDBMSs languished until midrange platforms made the idea feasible, creating unexpected openings for startups like Oracle.

A revolutionary notion at the time, RDBMS systems theoretically altered the balance of power, making information more accessible to business analysts. Nonetheless, it wasn’t until the emergence of user-friendly client/server applications that business users could finally do things like write reports without the help of programmers or DBAs.

Over the next 25 years, RDBMS systems scaled up and out. Nonetheless, even today, they still don’t house the world’s largest transaction systems. Go to a bank, call the phone company, file an insurance claim, or scream over a utility bill, and in all likelihood the data came from some tried-and-true 25-year-old legacy system. The growth of RDBMS systems notwithstanding, the truism that 70% of the world’s data resides on legacy systems remains as valid as ever.

Like Rome, most of the world’s classic transaction systems weren’t built in a day. Attaining such levels of resilience typically took years of trial and error. And, chances are, the minds behind all that resilience are no longer around. Anywhere.

Those systems were opened a bit by the web, which added HTML screen scrapers. Today, with short-term economic uncertainties and long-term upheavals calling for more agile or — dare we say it — real-time enterprises, the need to pry open legacy systems has grown more acute.

Data profiling tool providers, such as Relativity Technologies, and consulting firms, such as offshore outsourcer Cognizant Technologies, have been pushing the notion of legacy renewal for several years. Evidently, their rumblings are being heard. Now IBM Global Services is joining the crowd, announcing new legacy renewal service offerings leveraging the domain knowledge and life cycle tools of its recent PwC Consulting and Rational acquisitions.

What caught our ear was that ROI tools would be part of the mix. While not unusual – vendor ROI tools are pretty common these days for obvious reasons – we wondered: how can you quantify projects like this, which typically have plenty of unknowns? Yes, Y2K projects generally dealt successfully with skeletons in the closet, but in a tighter spending environment, we’ll be interested to see how IBM factors in the uncertainties of programmers past.

Mainframes for the Masses

As we noted a few weeks back, Linux markets are entering adolescence, with platform vendors assuming the initiative from back-room geeks. As with UNIX before it, platform vendors are pushing, or being pushed by, popular demand.

Sun is being pushed. Finally biting the bullet last week, Sun announced it would open up SPARC servers to Linux. Conversely, IBM is treating Linux as a new opportunity – witness today’s announcement of the new z800 series. A new “mini mainframe” with 10 models, the z800 offers two OS options: almost-pure Linux, or a ‘lite’ version of z/OS called z/OS.e. IBM is slashing prices dramatically on the new boxes, which start at under $400k for hardware and three years’ maintenance.

That’s not as brave as it sounds because, by omitting support for core CICS and IMS transaction processing applications, the bargain-basement z800s are aimed strictly at what IBM calls “new workloads”: web serving, file and print serving, and enterprise applications that largely require UNIX, not mainframe, professional skills. To us, the obvious market looks like service providers, not data centers.

What a deal – or is it? Although Linux looks like UNIX, running z800 boxes still requires some classic mainframe administrator skills because they utilize z/VM, an underlying piece of the mainframe OS. Then there are software vendor licensing practices to contend with. Remember, IBM has been chopping mainframe prices for years, but ISVs have typically reclaimed those savings with classic MIPS pricing schemes that raised licensing fees with every box upgrade.
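To make the licensing dynamic concrete, here is a toy back-of-the-envelope calculation (all rates are hypothetical round numbers of our own invention, not actual ISV price lists): under capacity-based MIPS pricing, a hardware upgrade automatically inflates the software bill, while the flat per-CPU pricing common in the UNIX/NT world leaves it unchanged.

```python
# Toy comparison of MIPS-based vs. per-CPU software licensing.
# All dollar figures and rates are hypothetical illustrations only.

def mips_license_cost(mips, rate_per_mips=100):
    """Annual fee that scales with the box's rated capacity in MIPS."""
    return mips * rate_per_mips

def per_cpu_license_cost(cpus, rate_per_cpu=5000):
    """Flat annual fee per processor, as prevails on UNIX/NT."""
    return cpus * rate_per_cpu

# Upgrading from a 100-MIPS box to a 250-MIPS box (still 2 CPUs):
before_upgrade = mips_license_cost(100)   # $10,000
after_upgrade = mips_license_cost(250)    # $25,000 - the ISV reclaims the hardware savings
flat_either_way = per_cpu_license_cost(2) # $10,000 before and after

print(before_upgrade, after_upgrade, flat_either_way)
```

The point of the sketch: with MIPS pricing, every capacity bump is a 2.5x software surcharge in this example, which is exactly how ISVs have historically clawed back IBM’s hardware price cuts.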

Both hurdles are clearly surmountable. On the technical side, IBM’s Project eLiza, aimed at making mainframes self-managing and self-healing, could reduce mainframe skills requirements. And on the marketing side, if IBM can persuade mainframe ISVs to adopt the CPU-based licensing schemes that prevail in the UNIX/NT world, software costs would grow more competitive. IBM is dangling the carrot of services to ISVs (this market will certainly need them), but fulfilling that promise will demand Herculean efforts to keep IBM Global Services at bay.

That’s a lot of ifs, but then, we said similar things about UNIX a decade ago.

IBM WebSphere Finally Supports Mainframe

It’s little surprise that a key ingredient of IBM WebSphere’s success has been its support of existing IBM technologies – especially DB2 databases – plus the availability of bodies from IBM Global Services, which has built an e-business practice around WebSphere. If you’re a confirmed IBM shop, WebSphere has long been your logical choice for web-enabling legacy applications.

But until now, running WebSphere on the mainframe was essentially like running a port of an open systems tool in a legacy environment. WebSphere 4.0 for z/OS and OS/390, released this week, adds native support for core mainframe services such as CICS, the transaction monitor that IBM claims runs 40 billion transactions every day. Specifically, WebSphere 4.0 allows Java application developers to plug into these transaction services without having to write custom code. WebSphere 4.0 does the same thing with other established mainframe building blocks such as Parallel Sysplex; IBM’s RACF security and access control programs; and WLM, IBM’s workload manager used for load balancing.

Version 4.0 is also the first release of WebSphere that is officially J2EE (Java 2 Enterprise Edition)-compliant. Not only that, but it’s the “deepest and broadest” implementation of J2EE, claimed Scott Hebner, middleware marketing director for IBM Software. He said that IBM passed more J2EE compliance tests than any other appserver vendor, including all the mandatory tests and 70% of the optional ones run by Sun. For instance, IBM claims that its implementation of JMS (Java Messaging Service, one of the J2EE standards) is more mature than that of BEA, its primary rival in the appserver space; according to IBM, BEA WebLogic lacks the ability to import transactions originating as MQSeries messages (IBM’s market-leading messaging middleware). BEA was not available for comment at press time.

To place matters into perspective, IBM’s J2EE fervor is rather recent. The company admits to having been slower to become J2EE-compliant than its Java appserver rivals. “We took a balanced approach to supporting standards since we originally did not believe that J2EE was the only [important] platform,” said Hebner. He claimed that other standards, such as XML-based UDDI and SOAP, were equally important (see adjacent story). Hebner added that WebSphere supported all the J2EE essentials anyway. “We supported EJB 1.1 ‘minus’,” he said. For instance, while IBM supported essential EJB features such as session beans, until now it didn’t support peripheral ones such as XML descriptors. “Our clients weren’t using them [XML descriptors] anyway,” he maintained.

The WebSphere 4.0 announcements were accompanied by an important upgrade to VisualAge, IBM’s umbrella IDE (integrated development environment) used for developing many of the applications that run on WebSphere. For the first time, IBM is offering VisualAge as a suite, bundling all languages – including C, C++, COBOL, and Java – into the same package. In so doing, IBM is following the lead of other tools vendors such as Microsoft, Rational, Sybase and, just recently, CA.