Category Archives: IT Value/ROI

Can Project Portfolio Management Become Easy?

The old adage about the shoemaker's son going barefoot has long been associated with IT organizations, which have often been the last to adopt the kinds of automated solutions for running their own shops that they have implemented for the rest of the enterprise.

Of course, packaging up what IT does is difficult because, compared to line organizations, technology group processes and activities are so diverse. On one hand, IT is regarded as a digital utility responsible for keeping the digital lights on; not surprisingly, that is the dominant impression given that, according to a wide range of surveys, infrastructure and software maintenance easily consumes 70% or more of the IT budget (depending on whose survey you read). With what's left over, IT is supposed to operate in project delivery mode, delivering the technologies that support business innovation. In that mode, IT juggles several roles, including systems integration, application development, and project management. And, if IT is to function properly, it is also supposed to govern itself.

With ITIL emerging as (fill in the blank) a process enabler or yet another layer of red tape, the goal has been to make the utility side of IT a more consistent business, with repeatable processes that should improve service levels and reduce cost. Adherence to the ITIL framework, or to the broader discipline of IT Service Management, is supposed to make the business feel that it is getting more value for its IT dollars; better service can translate to competitive edge, especially if you heavily leverage the web as a channel for dealing with business partners or customers. But the brass ring is supposed to come from the way IT overtly supports business innovation through project delivery. And like any business, IT needs to manage its investments, a concern that has driven the emergence of Project Portfolio Management (PPM) as a means for IT organizations to evaluate how well projects are meeting their budgets, schedules, and defined goals. In some ways, it's tempting to label PPM as ERP for IT, since it is intended as an application for planning where IT should direct its resources.

Five years ago, Mercury's (now part of HP) acquisition of Kintana, which was intended to grow the company out of its development/testing niche, began putting PPM on the map. Over the next few years, each of the major development tools players followed, acquiring its way into the space.

Of course, the devil's in the details. PPM is not a simple answer to a complex problem, as it requires decent data from IT projects: not only completed milestones from a project management system, but also feedback or reports from quality management or requirements analysis tools, to ensure that the software – no matter how mission-critical – isn't getting bogged down with insurmountable defects or veering away from its intended scope. Significantly, at least one major player – IBM – is rethinking its whole PPM approach, with the result that it will likely split its future solutions into separate project, portfolio, and program management streams.
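To make the data problem concrete, here is a minimal sketch of the kind of roll-up a PPM tool has to perform: combining signals from project management, quality, and requirements tools into a single portfolio view. The field names, thresholds, and scoring rule below are entirely hypothetical and purely illustrative; real PPM suites use far richer models.

```python
from dataclasses import dataclass

@dataclass
class ProjectStatus:
    name: str
    budget_spent_pct: float      # percent of budget consumed (from financials)
    milestones_done_pct: float   # percent of milestones completed (project management system)
    open_critical_defects: int   # from quality management tooling
    scope_changes: int           # from requirements analysis tooling

def health(p: ProjectStatus) -> str:
    """Crude traffic-light rating; the thresholds are arbitrary placeholders."""
    overrun = p.budget_spent_pct - p.milestones_done_pct
    if p.open_critical_defects > 25 or overrun > 20:
        return "red"
    if p.scope_changes > 10 or overrun > 10:
        return "yellow"
    return "green"

portfolio = [
    ProjectStatus("CRM rollout", 80, 55, 31, 4),
    ProjectStatus("Portal upgrade", 40, 45, 3, 2),
]
for p in portfolio:
    print(f"{p.name}: {health(p)}")
```

The hard part, of course, is not the scoring logic but getting all of those feeds to exist and stay current.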

Against this backdrop, Innotas has carved a different path. Founded by veterans of Kintana, the company's goal is to make this new generation a kinder, gentler PPM for the rest of us. Delivered as a SaaS offering, the company not surprisingly differentiates itself using the Siebel/Salesforce.com metaphor (and in fact is a marketing partner on Salesforce's AppExchange). While we haven't yet caught a demo to attest that Innotas PPM really is simpler, the company did grow 4x to a hundred customers last year, expects to double this year (recession notwithstanding), and just received a modest $6 million shot of Series C funding for the usual late-round goal of expanding sales and marketing.

The dilemma of governance is that, lacking bounds, you can often wind up spinning your wheels chasing the last detail, whether you actually need it or not. Not surprisingly for a top-down solution, and one that's aimed at IT, the PPM market has been fairly limited. Significantly, in the press release, Innotas referred to the size of the SaaS market rather than the PPM market to promote its upside potential. While that sidesteps the question of how big the PPM market really is, there's always the classic China-market argument of a relatively empty market just waiting to be educated. In this case, Innotas points to the fact that barely 300 of the Fortune 1000 have bought PPM tools so far, and that doesn't even count the 50,000 midmarket companies that have yet to be penetrated.

Our comeback is that, like any midmarket solution, the burden is on the vendor to make the case that midmarket companies, with their more modest IT software portfolios, have resource management problems complex enough to warrant such a solution. Nonetheless, the PPM market is in need of solutions that can at least give you an 80% answer, because most organizations don't have the time or resources to maintain fully staffed program management offices whose mission is to do exactly that.

SOA Benefits: Too Much Reuse of Reuse?

Ever since the dawn of structured software development, arguments have been put forth that, if we only architected [fill in the blank: entity relationships, objects, components, processes, or services], software development organizations would magically grow more productive because they could now systematically reuse their work. The fact that reuse has been such a holy grail for so many years reveals how elusive it has always been.

And so Joe McKendrick summarized the recent spate of arguments over SOA and reuse quite succinctly this morning: “What if you built a service-oriented architecture and nothing got reused? Is it still of value to the business, or is it a flop?” The latest round of discussion was sparked by a recent AMR Research report stating that too much of the justification for SOA projects was being driven by reuse. McKendrick cited ZapThink’s David Linthicum, who curtly responded that he’s already “been down this road, several times in fact,” adding that “The core issue is that reuse, as a notion, is not core to the value of SOA…never has, never will.” Instead, he pointed to agility as the real driver for SOA.

To give you an idea of how long this topic has been bandied about, we point to an article that we wrote for the old Software Magazine back in January 1997, where we quoted a Gartner analyst who predicted that by 2000, at least 60% of all new software applications would be built from components: reusable, self-contained pieces of software that perform generic functions.

And in the course of our research, we had the chance to speak with an enterprise architect on two occasions – in 1997 and again last year – and found she was still with the same organization (a near miracle in this era of rapid turnover). A decade ago, her organization embraced component-based development to the point of changing job roles in the software development organization:
Architects, who have the big picture, with knowledge of technology architecture, the requirements of the business, and where the functionality gaps are. They work with business analysts in handling requirements analysis.
Provisioners, who perform analysis and coding of software components. They handle horizontal issues such as how to design complex queries and optimize database reads.
Assemblers, who are the equivalent of developers. As the label implies, they are the ones who physically put the pieces together.

So how did that reorg of 1997 take hold? The EA admitted that it took several reorgs for the new division of labor to sink in, and even then it was adjusted for reality. “When you had multiple projects, scheduling conflicts often arose. It turned out that you needed dedicated project teams that worked with each other rather than pools. You couldn’t just throw people into different projects that called for assemblers.”

And even with the revamping of roles, the goals of reuse were also adjusted for reality. You didn’t get exact reuse, because components – now services – tended to evolve over time. At best, the component or service that you developed became a starting point, not an end point.

So as we see it, there are several hurdles here.

The first is culture. Like any technical, engineering-oriented profession, developers pride themselves on creativity and cleverness, and consider it un-macho to reuse somebody else’s work – because of the implication that they cannot improve on it.

The second is technology and the laws of competition.
1. During the days of CASE, conventional wisdom was that we could reuse key processes around the enterprise if we adequately modeled the data.
2. We eventually realized that entity relationships did not adequately address business logic, so we moved on to objects, followed by coarser-grained components, with the idea that if we built that galactic enterprise repository in the sky, software functionality could be harvested like low-hanging fruit.
3. Then we realized the futility of grand enterprise modeling schemes, because the pace of change in modern economies meant that any enterprise model would become obsolete the moment it was created. And so we pursued SOA, with the idea that we could compose and dynamically orchestrate our way to agility.
4. Unfortunately, as that concept was a bit difficult to digest (add moving parts to any enterprise system, and you could make the audit and compliance folks nervous), we lazily fell back on that old warhorse: reuse.
5. The problem with reuse took a new twist. Maybe you could make services accessible, even declarative, to eliminate the messiness of integration. But you couldn’t easily eliminate the issue of context. Aside from extremely low-value, low-level commodity services like authentication, higher-level business requirements are much more specific and hard to replicate, as the sketch following this list suggests.
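To make that last point concrete, here is a small hypothetical sketch; the services, signatures, and policy rules are invented for illustration and are not drawn from any real SOA project.

```python
# Hypothetical contrast between a commodity service and a context-heavy
# business service. Everything here is invented for illustration.

def lookup_credentials(user: str) -> str:
    # Stand-in for a real credential store.
    return "stored-hash-for-" + user

def authenticate(user: str, password_hash: str) -> bool:
    """Low-level commodity service: the contract is identical for every caller,
    so reuse across applications is cheap (and the value is correspondingly low)."""
    return lookup_credentials(user) == password_hash

def approve_credit(order_total: float, region: str, channel: str,
                   risk_profile: dict) -> bool:
    """Higher-level business service: the signature already drags in regional
    rules, channel policies, and risk context. Another business unit with
    different policies can rarely call this as-is; at best it becomes a
    starting point to copy and adapt."""
    limit = risk_profile.get("credit_limit", 0.0)
    if region == "EU" and channel == "web":
        limit *= 0.9  # hypothetical regional policy baked into the logic
    return order_total <= limit

if __name__ == "__main__":
    print(authenticate("alice", "stored-hash-for-alice"))                        # True
    print(approve_credit(12_000.0, "EU", "web", {"credit_limit": 10_000.0}))     # False
```

The authentication stub travels anywhere; the credit-approval logic is welded to one business context, which is exactly where the reuse argument breaks down.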

What’s amazing is how the reuse argument continues to endure as we now try justifying SOA. You’d think that after 20 years, we’d finally start updating our arguments.

The Grind

All too often, software projects are late, over budget, or end up with less functionality than first planned. If these were consumer products, most software development shops would probably find themselves under FTC scrutiny.

Despite the high-tech veneer, software development practices probably have more in common with 17th-century English blacksmith shops. Forget the fancy tools: most software is still crafted by hand.

Production is closer to black art than science. Credit or blame developer culture. Composed largely of former (or would-be) chess players, philosophers, and artists (OK, there are engineers and scientists too), the field has always valued creativity. Not surprisingly, when you get this mix of people and priorities together, the results are often a breakthrough, chaos, or something in between. Call that a mismatch with the intended market, because development of business software requires creativity tempered with discipline and consistency.

Not surprisingly, attempts to industrialize software development have typically fallen short. Approaches ranging from CASE (Computer-Aided Software Engineering) to Extreme Programming (XP) have been plagued by overreliance on purely top-down or bottom-up emphases.

For instance, component-based development (a form of CASE “lite”) aimed to streamline the process through development of modular software that could be reused. The approach fell apart for several reasons: components were too complex to build, developers and business analysts alike had difficulty forecasting future needs, and developers viewed reuse as an attack on their manhood because it stifled creativity. Not surprisingly, when we studied Java development practices several years back, we found most developers dispensing with Enterprise JavaBeans (EJB) components in favor of simpler, modular, but less reusable Java servlets.

In industry sectors with processes common enough to provide sufficiently broad markets for software vendors, packages replaced internal development. However, when a company buys a package, it becomes hostage to the marketing priorities and fortunes of its software vendor. (How would you feel if you were a PeopleSoft customer today?)

In markets too narrow for package providers, other approaches have emerged. For instance, we’ve recently heard about the notion of co-sourcing, where multiple companies in a sector cooperate on development of commodity software, such as for regulatory compliance.

And a few days ago, we caught an announcement from Unisys of a new “Business Blueprints” strategy that could provide yet another alternative. The blueprints are, in effect, general frameworks for business processes in sectors untouched by package vendors. Essentially a controlled development solution, the offering relies on strict adoption of the Rational Unified Process (RUP), with Unisys boasting an efficient development process that saves time and money by ensuring that code is driven by requirements, gets thorough testing, and is carefully controlled for changes that could blow the scope of a project.

While vendors always bend over backwards to make reference customers successful, we were impressed with Unisys’ successes at ING Barings, where it developed the first phase of an insurance policy management system. Using cost estimation tools, the Unisys team predicted costs of this multi-million dollar project within a few percentage points, using that uncanny accuracy to prevent scope creep.

It could be argued that such results are only possible through heroic measures. But, during a period when offshore development is competing for programmers’ livelihoods, maybe a little heroism wouldn’t hurt.

Skeletons and Demons

A few months back, we had an interesting discussion on the history of the relational database (RDBMS) with Oracle VP Ken Jacobs, a guy also known as Mr. DBA. An outgrowth of IBM research, RDBMSs languished until midrange platforms made the idea feasible, creating unexpected openings for startups like Oracle.

A revolutionary notion at the time, RDBMS systems theoretically altered the balance of power, making information more accessible to business analysts. Nonetheless, it wasn’t until the emergence of user-friendly client/server applications that business users could finally do things like write reports without the help of programmers or DBAs.

Over the next 25 years, RDBMS systems scaled up and out. Nonetheless, even today, they still don’t house the world’s largest transaction systems. Go to a bank, call the phone company, file an insurance claim, or scream over a utility bill, and in all likelihood the data came from some tried-and-true 25-year-old legacy system. The growth of RDBMS systems notwithstanding, the truism that 70% of the world’s data resides on legacy systems remains as valid as ever.

Like Rome, most of the world’s classic transaction systems weren’t built in a day. Attaining such levels of resilience typically took years of trial and error. And, chances are, the minds behind all that resilience are no longer around. Anywhere.

Those systems were opened a bit by the web, which added HTML screen scrapers. Today, with short-term economic uncertainties and long-term upheavals calling for more agile or — dare we say it — real-time enterprises, the need to pry open legacy systems has grown more acute.

Data profiling tool providers, such as Relativity Technologies, and consulting firms, such as offshore outsourcer Cognizant Technologies, have been pushing the notion of legacy renewal for several years. Evidently, their rumblings are being heard. Now IBM Global Services is joining the crowd, announcing new legacy renewal service offerings leveraging the domain knowledge and life cycle tools of its recent PwC Consulting and Rational acquisitions.

What caught our ear was that ROI tools would be part of the mix. While not unusual (vendor ROI tools are pretty common these days, for obvious reasons), we wondered how you can quantify projects like this, which typically carry plenty of unknowns. Yes, Y2K projects in general dealt successfully with skeletons in the closet, but in a tighter spending environment, we’ll be interested to see how IBM factors in the uncertainties left behind by programmers past.

Mirror Mirror on the Wall

We’ve all heard the maxim that there are lies, damn lies, and statistics. Nowhere is that truer than in the Windows vs. Linux cost of ownership debate. This week, IDC released a Microsoft-sponsored study concluding, surprise, surprise, that Windows 2000 Server proved cheaper over a simulated five-year period. It provided varying sets of numbers for a representative set of server functions.

Yes, Linux costs less to buy than Windows, but that’s trivial in the grand scheme of things. According to IDC, the big differentiator is that Windows admins can be hired more cheaply than UNIX (read: Linux) counterparts, and that the server management tools for Linux are less mature. Now contrast that with numbers published by the Robert Francis Group a few months back (sponsored by IBM, which pushes Linux) concluding that Linux webservers are cheaper. RFG’s data was based on the assumption that you get what you pay for: while Linux folks are paid more, they’re more experienced and able to handle larger workloads.
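To see how the staffing assumptions, rather than the license price, drive each conclusion, here is a back-of-the-envelope calculation; the figures are invented for illustration and do not reproduce either study's actual numbers.

```python
# Hypothetical five-year TCO comparison for a small web server farm.
# All numbers are invented to show how the admin-productivity assumption,
# not the license price, decides the outcome.
servers = 20
years = 5

def five_year_tco(license_per_server, admin_salary, servers_per_admin):
    admins = servers / servers_per_admin
    return servers * license_per_server + admins * admin_salary * years

# IDC-style assumption: Windows admins cost less, workloads per admin are comparable.
windows = five_year_tco(license_per_server=3000, admin_salary=70000, servers_per_admin=10)
linux_a = five_year_tco(license_per_server=0,    admin_salary=90000, servers_per_admin=10)

# RFG-style assumption: Linux admins cost more but each handles twice the servers.
linux_b = five_year_tco(license_per_server=0,    admin_salary=90000, servers_per_admin=20)

print(f"Windows: ${windows:,.0f}  Linux (same workload/admin): ${linux_a:,.0f}  "
      f"Linux (2x workload/admin): ${linux_b:,.0f}")
```

Flip the single assumption about how many servers one admin can handle, and the same arithmetic supports either vendor's press release.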

So what should a poor CIO/CFO believe? Both sets of studies had valid methodologies but debatable assumptions. While their numbers are technically valid, the reality is that Microsoft’s licensing schemes are currently more onerous than Linux’s (today’s plans tend to push upgrades at customers). In response, Linux continues to gain market penetration, with a recent Goldman Sachs survey indicating that 39% of respondents at large U.S.-based global organizations have at least “some” Linux.

Yes, big vendor support for Linux is increasing dramatically, but the market remains in its infancy. Today, Linux primarily competes with UNIX, but tomorrow it will have Windows dead in its sights.

So let’s not get carried away here. Studies are easily skewed. A couple of years back, we conducted our own Linux TCO study. The timing of the study in effect limited the sample to pioneers who tended to be more ambitious and outside the IT mainstream, so of course our Linux numbers looked quite good. But we believe that our conclusion from back in 2000 remains valid: when (not if) Linux gets really competitive with Windows, Microsoft will drop its pricing fast.

Real Returns

Nobody’s surprised that IT budgets are still dropping and that, according to Meta Group veteran benchmarking analyst Howard Rubin, IT execs are demanding faster ROIs. So, not surprisingly, our ears perked up when we came across a recent study from Nucleus Research showing poor ROIs for Siebel CRM projects.

The kicker was that Nucleus drew its non-scientific sample from Siebel reference clients pulled from the company’s website. “These were supposed to be happy clients,” noted Rebecca Wettemann, VP of Research. (We found similar results when we polled CA Unicenter users 5 years ago. Some ISVs are just not vigilant enough in qualifying their references.)

The best Siebel ROIs came when projects had a tangible focus, such as call center or sales force automation; results foundered when the system was expanded into other areas without business justification. Usability, the competence of consultants, and customization were the major problems.

Of course, all enterprise packages are hard to implement. But we believe CRM has unique challenges because it isn’t as mission-critical as back-office packages like ERP, and because it serves a tougher audience. When order entry, inventory, or general ledger systems fail, so goes the company. But if sales force automation fails, sales staff can often rely on their wits. And, forgive us for cultural stereotyping, but we think that sales and marketing folks are looser cannons than back-office folks when it comes to toeing the line on new systems.

Comparing results, we recall a 1999 Meta Group ERP ROI study that also pinpointed problems, such as 2- to 4-year times to benefit and negative net present values (an ROI derivative). Placed in perspective, the ERP installations in the Meta study were much larger, averaging roughly 3 – 5x the cost per end user compared to Siebel. Back then, ERP was justified strategically because who could compete in the 21st century without it? How times have changed.
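To see why long times to benefit translate into negative net present values, here is a minimal worked example; the cash flows and discount rate are hypothetical, not figures from the Meta Group or Nucleus studies.

```python
# Sketch of why a 2- to 4-year time-to-benefit can push an enterprise project
# to a negative NPV. All cash flows and the discount rate are invented.

def npv(rate, cash_flows):
    """Net present value: discount each year's net cash flow back to year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: heavy implementation spend; benefits only ramp up in years 3-5.
project = [-5_000_000, -1_000_000, 0, 1_500_000, 2_500_000, 3_000_000]
print(f"NPV at 10%: ${npv(0.10, project):,.0f}")   # comes out around -$1.2 million
```

Push the benefits out a couple of years and even a project whose nominal returns exceed its costs ends up under water once discounting is applied.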

The Nucleus study was a good start, spotlighting key problems and outlining actual costs, but it fell short in quantifying the benefits. From our experience, that’s the hard part, because the prime benefit of enterprise systems is improved business execution, not productivity savings. Few know how to count those benefits, but to be fair, we’d like to see numbers from Nucleus showing the other side of the story.