Category Archives: Supply Chain Management

The Other Shoe Drops: SAP puts ERP on HANA

It was never a question of whether SAP would bring its flagship product, Business Suite, to HANA, but when. And when I saw this while parking the car at my physical therapist’s over the holidays, I should’ve suspected that something was up: SAP at long last was about to announce … this.

From the start, SAP has made clear that HANA was not a technical curiosity to be positioned as some high-end niche product or sideshow. In the long run, SAP was going to take HANA to Broadway.

SAP product rollouts on HANA have proceeded in logical, deliberate fashion. Start with the lowest-hanging fruit, analytics, because that is the sweet spot of the embryonic market for in-memory data platforms. Then work up the food chain, with the CRM introduction in the middle of last year – there’s an implicit value proposition for having a customer database on a real-time system, especially while your call center reps are on the phone and would like to either soothe, cross-sell, or upsell the prospect. Get some initial customer references with a special-purpose transactional product in preparation for taking it to the big time.

There’s no question that in-memory can have real impact, from simplifying deployment to speeding up processes and enabling more real-time agility. Your data integration architecture is much simpler and the amount of data you physically must store is smaller. SAP provides a cute video that shows how HANA cuts through the clutter.

For starters, when data is in memory, you don’t have to denormalize or resort to tricks like sharding or striping of data to enhance access to “hot” data. You also don’t have to create staging servers to perform ETL if you want to load transaction data into a data warehouse. Instead, you submit commands or routines that, thanks to processing speeds that SAP claims are up to 1000x faster than disk, convert the data almost instantly to the form in which you need to consume it. And when you have data in memory, you can perform more ad hoc analyses. In the case of production and inventory planning (a.k.a., the MRP portion of ERP), you could run simulations when weighing the impact of changing or submitting new customer orders, or judging the impact of changing sourcing strategies when commodity prices fluctuate. Beta customer John Deere achieved positive ROI based solely on the benefits of implementing it for pricing optimization (SAP has roughly a dozen customers in ramp-up for Business Suite on HANA).
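To make that concrete, here is a minimal sketch of the kind of what-if simulation described above; this is our illustration, not SAP’s code, and the tables and quantities are invented stand-ins for live ERP data:

```python
# A what-if MRP query over in-memory tables: no ETL, no staging server,
# just recompute the plan against live data. (Illustrative sketch only.)
import pandas as pd

# Hypothetical in-memory tables of open orders and on-hand inventory.
orders = pd.DataFrame({"sku": ["A", "A", "B"], "qty": [100, 50, 200]})
inventory = pd.DataFrame({"sku": ["A", "B"], "on_hand": [220, 180]})

def simulate_new_order(sku: str, qty: int) -> pd.DataFrame:
    """Project inventory as if a new customer order were accepted."""
    what_if = pd.concat(
        [orders, pd.DataFrame({"sku": [sku], "qty": [qty]})],
        ignore_index=True,
    )
    demand = what_if.groupby("sku", as_index=False)["qty"].sum()
    plan = inventory.merge(demand, on="sku", how="left").fillna(0)
    plan["projected"] = plan["on_hand"] - plan["qty"]
    return plan  # a negative projection flags a shortage to re-source

print(simulate_new_order("B", 50))
```

The point isn’t the dozen lines of code; it’s that the same question posed to a disk-based warehouse would have waited on last night’s batch load.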

It’s not a question of whether you can run ERP in real time. No matter how fast you construct or deconstruct your business planning, there is still a supply chain that introduces its own lag time. Instead, the focus is on how to make enterprise planning more flexible, enhanced with built-in analytics.

But how hungry are enterprises for such improvements? To date, SAP has roughly 500 HANA installs, primarily for Business Warehouse (BW), where the in-memory data store was a logical upgrade for analytics and where demand for in-memory is more established. But on the transactional side, it’s more of an uphill battle, as enterprises are not clamoring to conduct forklift replacements of their ERP systems, not to mention their databases as well. Changing both is no trivial matter; in fact, changing databases is even rarer because of the specialized knowledge required. Swap out your database, and you might as well swap out your DBAs.

The best precedent is Oracle, which introduced Fusion Applications two years ago. Oracle didn’t necessarily see Fusion as a replacement for E-Business Suite, JD Edwards, or PeopleSoft. Instead it viewed Fusion Apps as a gap filler for new opportunities among its installed base or the rare greenfield enterprise install. We’d expect no less from SAP.

Yet in the exuberance of rollout day, SAP was speaking of the transformative nature of HANA, claiming it “Reinvents the Real-Time Enterprise.” It’s not the first time that SAP has positioned HANA in such terms.

Yes, HANA is transformative when it comes to how you manage data and run applications, but let’s not get dragged down another path to enterprise transformation. We’ve seen that movie before, and few of us want to sit through it again.

HP does a 180 – Now it’s Apotheker’s Company

HP chose the occasion of its Q3 earnings call to drop the bomb. The company that under Mark Hurd’s watch focused on Converged Infrastructure, spending almost $7 billion to buy Palm, 3Com, and 3PAR, is now pulling a 180: ditching both the PC and Palm hardware businesses, and making an offer to buy Autonomy, one of the last major independent enterprise content management players, for roughly $11 billion.

At first glance, the deal makes perfect sense, given Leo Apotheker’s enterprise software orientation. From that standpoint, Apotheker has made some shrewd moves, putting aging enterprise data warehouse brand Neoview out of its misery, following up weeks later with the acquisition of Advanced SQL analytics platform provider Vertica. During the Q3 earnings call, Apotheker stated the obvious as to his comfort factor with Autonomy: “I have spent my entire professional life in software and it is a world that I know well. Autonomy is very complementary.”

There is potential synergy between Autonomy and Vertica, with Autonomy CEO Mike Lynch (who will stay on as head of the unit, reporting to Apotheker) suggesting that Autonomy’s user screens provide the long-missing front end to Vertica, and that both would be bound by a common “information layer.” Of course, with the acquisition not being final, he did not give details on what that layer is, but for now we’d assume that integration will be at the presentation and reporting layer. There is clearly a lot more potential here — Vertica for now only holds structured data while Autonomy’s IDOL system holds everything else. In the long run we’d love to see federated metadata and also an extension of Vertica to handle unstructured data, just as Advanced SQL rivals like Teradata’s Aster Data already do.

Autonomy, according to my Ovum colleague Mike Davis who has tracked the company for years, is one of only three ECM providers that have mastered the universal document viewer – Oracle’s Stellent and an Australian open source player being the others. In contrast to HP (more about that in a moment), Autonomy is quite healthy with the latest quarterly revenues up 16% year over year, operating margins in the mid 40% range, and a run rate that will take the company to its first billion dollar year.

Autonomy is clearly a gem, but HP paid dearly for it. During Q&A on the earnings call, a Wall Street analyst took matters back down to earth, asking whether HP really got such a good deal, given that it was paying roughly 15% of its market cap for a company that will add only about 1% to its revenues.

Great, expensive acquisition aside, HP’s not doing so well these days. Excluding a few bright spots, such as its Fortify security software business, most of HP’s units are running behind last year. Q3 net revenue of $31.2 billion was up only 1% over last year, but down 2% when adjusted for constant currency. By contrast, IBM’s most recent results were up 12% and 5%, respectively, when currency adjusted. Dennis Howlett tweeted that it was now HP’s turn to undergo IBM’s near-death experience.

More specifically, HP Software was the bright spot with 20% growth year over year and 19.4% operating margin. By contrast, the printer and ink business – long HP’s cash cow – dropped 1% year over year with the economy dampening demand from the commercial side, not to mention supply chain disruptions from the Japanese tsunami.

By contrast, services grew only 4%, and the business is about to kick off yet another round of transformation. John Visentin, who ran HP’s Enterprise Services in the Americas region, comes in to succeed Ann Livermore. The problem is, as Ovum colleague John Madden puts it, HP’s services business “has been in a constant state of transformation,” and some customers’ patience is wearing thin. Ever since acquiring EDS, HP has been trying – and trying – to raise the legacy outsourcing business higher up the value chain, with its sights set, literally, in the cloud.

The trick is that as HP aims higher up the software and services food chain, it is dealing with a market that has longer sales cycles and long-term customer relationships that prize stability. Admittedly, when Apotheker was named CEO last fall, along with the appointment of enterprise software veteran Ray Lane to the board, the conventional wisdom was that HP would train its focus on enterprise software. So to that extent, HP’s strategy over the past 9 months has been almost consistent – save for earlier pronouncements on the strategic role of the tablet and WebOS business inherited with Palm.

But HP has been around for much longer than 9 months, and its latest shifts in strategy must be viewed with a longer perspective. Traditionally an engineering company, HP grew into a motley assortment of businesses. Before spinning off its geeky Agilent unit in 1999, HP consisted of test instruments, midrange servers and PCs, a token software business, and lest we forget, that printer business. Since then:
• The 2001 acquisition of Compaq, under Carly Fiorina’s watch, cost a cool $25 billion. It pitted HP against Dell and gave the company an even more schizoid split personality as both consumer and enterprise brand.
• Under Mark Hurd’s reign, software might have grown a bit (HP did purchase Mercury, after having come close to killing off its OpenView business), but the focus was directed at infrastructure – storage, switches, and mobile devices as part of the Converged Infrastructure initiative.
• In the interim, HP swallowed EDS, succeeding at what it failed to do with its earlier ill-fated pitch for PwC.

Then (1) Hurd gets tossed out and (2) almost immediately lands at Oracle; (3) Oracle pulls support for HP Itanium servers, (4) HP sues Oracle, and (5) its Itanium business sinks through the floor.

That sets the scene for today’s announcements that HP is “evaluating a range of options” (code speak for likely divestment) for its PC and tablet business – although it will keep WebOS on life support as its last gasp in the mobile arena. A real long shot: HP’s only hope for WebOS might be Android OEMs not exactly tickled pink about Google’s going into the handset business by buying Motorola’s mobile unit.

There is a logical rationale for dropping those businesses – PCs have always been a low-margin business in both sales and service, in spite of what HP claimed was an extremely efficient supply chain. Although a third of its business, PCs generated only 13% of HP’s profits, and have been declining in revenue for several years. PCs were big enough to provide a distraction and low enough margin to become a drain. And with Palm, HP gained an elegant OS, but with a damaged brand that was too late to become the iOS alternative – Google had a 5-year head start. Another one bites the dust.

Logical moves, but it’s fair to ask: what is an HP? Given HP’s twists, turns, and about-faces, that’s a difficult question to answer. OK, HP is shedding its consumer businesses – except printers and ink, because in normal times they are too lucrative – but HP still has all this infrastructure business. It hopes to rationalize all of this by becoming a provider of cloud infrastructure and related services, with a focus on information management solutions.

As mentioned above, enterprises crave stability, yet HP’s track record over the past decade has been anything but. To be an enterprise provider, a technology vendor must demonstrate a consistent strategy and staying power, because enterprise clients don’t want to be left with orphaned technologies. To its credit, today’s announcements show the fruition of Apotheker’s enterprise software-focused strategy. But HP’s enterprise software customers and prospects need assurance that HP won’t pull another about-face when it comes time for Apotheker’s successor.

Postscript: Of course we all know how this one ended up. One good 180 deserved another. Exit Apotheker stage left. Enter Meg Whitman stage right. Reality has been reversed.

IBM offers to buy Sterling Commerce

We should have seen this one coming. IBM’s offer to buy Sterling Commerce for $1.4 billion from AT&T closes a major gap in the WebSphere portfolio, extending IBM’s array of internal integrations externally to B2B. It’s a logical extension, and IBM is hardly the first to travel this path: webMethods began life as a B2B integration firm before morphing into EAI, and later SOA and BPM middleware, before getting acquired by Software AG. In turn, Tibco recently added Foresight Software as an opportunistic extension for taking advantage of a booming market in healthcare B2B transactions.

But neither Software AG’s nor Tibco’s moves approach the scope of Sterling Commerce’s footprint in B2B trading partner management, a business that grew out of its heritage as one of the major EDI (electronic data interchange) hubs. The good news is the degree of penetration that Sterling has; the other (we won’t call it “bad”) news is all the EDI legacy, which provides great fodder for IBM’s Global Business Services arm to address a broader application modernization opportunity.

Sterling’s base has been heavily in downstream EDI and related trading partner management support for retailers, manufacturers, and transportation/freight carriers. Its software products cover B2B/EDI integration, partner onboarding into partner communities (an outgrowth of the old hub and spoke patterns between EDI trading partners), invoicing, payments, order fulfillment, and multi-channel sales. In effect, this gets IBM deeper into the supply chain management applications market as it already has Dynamic Inventory Optimization (DIOS) from the Maximo suite (which falls under the Tivoli umbrella), not to mention the supply chain optimization algorithms that it inherited as part of the Ilog acquisition which are OEM’ed to partners (rivals?) like SAP and JDA.

Asked if acquisition of Sterling would place IBM in competition with its erstwhile ERP partners, IBM reiterated its official line that it picks up where ERP leaves off – but that line is getting blurrier.

But IBM’s challenge is prioritizing the synergies and integrations. As there is still a while before this deal closes – approvals from AT&T shareholders are necessary first – IBM wasn’t about to give a roadmap. But they did point to one no-brainer: infusing IBM WebSphere vertical industry templates for retail with Sterling content. But there are many potential synergies looming.

At top of mind are BPM and business rules management, which could make trading partner relationships more dynamic. There are obvious opportunities for WebSphere Business Modeler’s Dynamic Process Edition, WebSphere Lombardi Edition’s modeling, and/or Ilog’s business rules. For instance, a game-changing event such as Apple’s iPad entering or creating a new market for tablets could provide the impetus for changes to product catalogs, pricing, promotions, and so on; a BPM or business rules model could facilitate such changes as an orchestration layer that acts in conjunction with some of the Sterling multi-channel and order fulfillment suites. Other examples include master data management, which can be critical when managing the sale of families of like products through the channel; and of course Cognos/BI, which can be used for evaluating the profitability or growth potential of B2B relationships.
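To illustrate the pattern, and only the pattern: here is a generic sketch of externalized business rules (our invention, not IBM’s or Ilog’s actual APIs), where reacting to a market event means adding a rule rather than rewriting the fulfillment process:

```python
# Pricing policy expressed as data-driven rules, kept outside the
# orchestration logic. (Hypothetical sketch, not a vendor API.)
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    condition: Callable[[dict], bool]  # predicate over an order
    action: Callable[[dict], dict]     # transformation of the order

def apply_rules(order: dict, rules: list) -> dict:
    """Run every matching rule against the order, in sequence."""
    for rule in rules:
        if rule.condition(order):
            order = rule.action(order)
    return order

# A game-changing event (say, a new tablet category) is handled by
# adding a rule, not by redeploying the order fulfillment suite.
tablet_launch = Rule(
    condition=lambda o: o["category"] == "tablet",
    action=lambda o: {**o, "price": o["price"] * 0.9, "promo": "launch"},
)

print(apply_rules({"category": "tablet", "price": 500.0}, [tablet_launch]))
```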

Altimeter Group’s Ray Wang voiced a question that was on many of our minds: why would AT&T give up Sterling? IBM responded about the potential partnership opportunities, but to our mind, AT&T has its hands full attaining network parity with Verizon Wireless and is just not a business solutions company.

GM, Toyota & NUMMI: Opportunity lost

It’s no April Fools’ joke: NUMMI, the pioneering joint venture of GM and Toyota, is closing its doors today. NUMMI proved yet another bold new beginning that met its doom.

Its fate reminded us of another such failed startup. Back in 1987, we wrote a story for Managing Automation magazine about Volkswagen’s closing of its New Stanton, Pennsylvania plant under the headline, “The End of a New Beginning” (our article is not online, so we pointed to a NY Times filing). It was the tale of a successful foreign automaker opening a beachhead and creating jobs in the Rust Belt, only to find that it could not replicate the success of the original Beetle for a new generation.

Ironically, VW’s American foray fell victim to the inroads of Japanese automakers, who cashed in on American demand for better-quality cars. It was the very wave that spurred Toyota’s pioneering joint venture with GM at the latter’s highly troubled Fremont, California plant. The joint venture, New United Motor Manufacturing Inc. (NUMMI), was born of Toyota’s need for a partner to help it learn how to replicate its famed production system with an American workforce, and GM’s need to learn Toyota’s secret sauce.

Sadly, that same tagline fits today’s sorry occasion, which marks the closing of NUMMI, the former GM/Toyota joint venture in Fremont, California. NPR’s This American Life ran an excellent account of NUMMI’s quarter century and its demise. The story is painful for GM, which drank the Kool-Aid too little and too late; for Toyota, where success has bred sloppiness; and for the NUMMI workers. Having visited the plant for a series of articles on NUMMI’s adoption of a supervisory system to automate some of its quality assurance practices, we also are feeling the pangs.

We met the people on the plant floor who internalized the practice of Kaizen; when we visited the plant in 1992, NUMMI-made Toyota 2×4 light pickups, Toyota Corollas, and GM Geo Prizms were ranked 1st, 8th, and 12th in customer satisfaction, respectively, by J.D. Power. The plant was one of the few US sites to increase production during the recession of the early 90s. NUMMI’s production peaked in 2005.

The plant represented a hope that American grit, determination, and know-how could rise to the challenge of offshore manufacturing. If the Japanese could apply the lessons of Juran and Deming, why couldn’t we turn around and do the same?

We were there because growth had overwhelmed the staff’s ability to continue tracking quality and scheduling operations manually. With Toyota’s philosophy that empowered workers know best how to manage the assembly line, the impetus for the project to implement computerized systems for managing operations came from the rank and file, not from top management. The computer-integrated monitoring system (CIMS) project was led by the assistant maintenance manager, not by plant senior management. Bidders initially couldn’t believe that authority for approving bids on a six-figure project would rest with such an operating-level group.

It was a weird confluence of history: an up-and-coming automaker giving a competitor the chance to learn its secrets of success. But NUMMI’s success wasn’t easily replicated inside GM. For starters, it relied on Toyota’s automotive parts supply ecosystem, which was already compliant with Toyota’s Kaizen practices; additionally, the labor contract was changed. Neither of those conditions existed elsewhere inside GM. The company lacked a master plan for applying lessons being handed to it on a golden platter. It didn’t get serious until Jack Smith – one of the people who helped negotiate the NUMMI agreement – became CEO in 1992; and even then, change faced ingrained resistance from workers, unions, plant management, and suppliers. Booming SUV business in the 90s concealed the skeletons still inside GM’s closet.

NUMMI wasn’t GM’s only blown opportunity; it had a chance to reinvent the car plant with a similar worker empowerment scheme at Saturn. After being incubated under Roger Smith in the 80s, Saturn entered its death spiral as GM failed to invest in new models to broaden the line and transferred production to other plants run along traditional adversarial lines.

GM’s impending bankruptcy caused it to pull out of NUMMI last year; now Toyota, which has seen its sails trimmed by the recession, has followed suit as it retrenches to its non-union manufacturing base concentrated in the Sunbelt. This is occurring as Toyota, ironically, faces quality issues of its own as the company let some of its famed Kaizen practices slide in the face of growth.

America built itself up through determination and grit; back in the 1980s, the thought was that if we rolled up our sleeves and applied American ingenuity and the American spirit, we could triumph over adversity. Or, in this case, that if we embraced world-class quality, we would magically become competitive again. But our luck ran out when trade barriers came down and the world grew flat. Until the recent wave of plant closings, the world had roughly twice the automotive production capacity that it needed, with most of it in the wrong places (e.g., located in mature rather than growing markets).

NUMMI succeeded by the old rules of the game, but in a market where demand was sinking faster than supply, NUMMI’s isolated west coast location away from the hub of the industry (now located mostly along I-85 in the south) sealed its fate. NUMMI gave Toyota its stepping stone into the US market, but it was a beachhead no longer needed.

The irony is that Toyota has surpassed GM in more ways than one. It has not only become the world’s largest carmaker, but as recent headlines of bungled recalls have revealed, has also adopted much of GM’s sloppiness.

Fusion Apps finally out of wraps

Sorry for the pathetic rhyme, but we waited bloody long enough for the privilege of writing it. Like almost every attendee at the just-concluded Oracle OpenWorld, we were waiting for Oracle to finally lift the wraps on Fusion Apps. Staying cool and minimizing our carbon footprint, we weren’t physically at Moscone, but instead watched the webcasts and monitored the Twitter stream from our home office.

The level of anticipation over Fusion Apps was palpable. But it was hardly suspense, as it seemed that a good cross-section of the Twitterati were analysts, reference customers, consultants, or other business partners who had had their NDA sneak peeks (we had ours back in June) but had to keep their lips sealed until last night.

There was also plenty of impatience for Oracle to finally get on with a message that was being drowned out by its sudden obsession with hardware. Ellison spent most of his keynote time pumping up its Exadata cache memory database storage appliance and issuing a $10 million challenge to IBM that it can’t match Oracle’s database benchmarks on Sun. Yup, if the Sun acquisition goes through, Oracle’s no longer strictly a software company, and although the Twitterati counted its share of big iron groupies, the predominant mood was that hardware was a distraction. “This conference has been hardware heavy from the start. Odd for a software conference,” tweeted Forrester analyst Paul Hamerman. “90 minutes into the keynote, nothing yet on Fusion apps,” “Larry clearly stalling with all this compression mumbo jumbo,” “Larry please hurry up and tell the world about Fusion Apps, fed up of saying YES it does exist to your skeptics,” and so on read the Twitter stream. There was fear that Oracle would simply tease us in a manner akin to Jon Stewart’s “we’ll have to leave it there” dig at CNN: “I am afraid that Larry soon will tell that as time has run out he will tell about Fusion applications in next OOW.” A rousing 20-minute speech from Governor Arnold Schwarzenegger served as a welcome relief from Ellison’s newly found affection for big iron toys.

Ellison came back after the Governator pleaded with the audience to stick around awhile and drop some change around California, as the state is broke. The break gave him the chance to drift over to Oracle Enterprise Manager, which at least got the conversation off hardware. Ellison described some evolutionary enhancements whereby Oracle can track your configurations through Enterprise Manager and automatically manage patching. As we’ve noted previously, Oracle has compelling solutions for all-Oracle environments, among them a declarative framework for developing apps and specifying what to monitor and auto-patch.

But the spiel on Enterprise Manager provided a useful back door to the main topic, as Ellison showed how it could automate management of the next generation of Oracle apps. Ellison got the audience’s attention with the words, “We are code complete for all of this.”

Well, almost everything: Oracle has completed work on all modules except manufacturing.

Ellison then gave a demo that was quite similar to one that we saw under NDA back in the summer. While ERP emerged with and was designed for client/server architectures, Fusion has emerged with a full Java EE and SOA architecture; it is built around Oracle Fusion middleware 11g and uses Oracle BPEL Process Manager to run processes as orchestrations of processes exposed from the Fusion apps or other legacy applications. That makes the architecture of Fusion Apps clean and flexible.

It uses SOA to loosely couple with, rather than tightly integrate with, other Fusion processes or processes exposed by existing back-end applications, which should make Fusion Apps more pliant and less prone to outage. That allows workflows in Fusion to be dynamic and flexible. If an order in the supply chain is held up, the process can be dynamically changed without bringing down order fulfillment processes for orders that are working correctly. It also allows Oracle to embed business intelligence throughout the suite, so that you don’t have to leave the application to perform analytics. For instance, in an HR process used for locating the right person for a job, you can dig up an employee’s salary history, and instead of switching to a separate dashboard, you can retrieve and display the relevant pieces of information necessary to see comparisons and make a decision.
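A toy sketch of that loose-coupling claim (ours, not Oracle’s code): each order is an independent process instance, so holding one problem order doesn’t bring down the rest of the flow.

```python
# Each order flows through loosely coupled service steps; rerouting a
# held-up order leaves healthy in-flight orders untouched. (Illustrative.)
def credit_ok(order: dict) -> bool:
    return order["credit_ok"]

def reserve_stock(order: dict) -> dict:
    return {**order, "reserved": True}

def hold_for_review(order: dict) -> dict:
    return {**order, "status": "held"}

def fulfill(order: dict) -> dict:
    return {**order, "status": "fulfilled"}

def orchestrate(order: dict) -> dict:
    # Routing is decided per instance at run time; changing this logic
    # doesn't require taking down the services it orchestrates.
    if not credit_ok(order):
        return hold_for_review(order)
    return fulfill(reserve_stock(order))

orders = [{"id": 1, "credit_ok": True}, {"id": 2, "credit_ok": False}]
print([orchestrate(o)["status"] for o in orders])  # ['fulfilled', 'held']
```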

Fusion’s SOA architecture also allows Oracle to abstract security and access control by relying on its separate, Fusion middleware-based Identity Manager product. The same goes for communications, where instant messaging systems can be pulled in (we didn’t see any integration with wikis or other Web 2.0 social computing mechanisms, but we assume they can be integrated as services). It also applies to user interfaces, where you can use different rich internet clients by taking advantage of Oracle’s ADF framework in JDeveloper.

Oracle concedes the obvious: outside of the midmarket, there is no greenfield market for ERP, and therefore, Fusion Apps are intended to supplement what you already have, not necessarily replace it. That includes Oracle’s existing applications, for which it currently promises at least another decade of support. But at this point, Oracle is not being any more specific about rollout other than to say it would happen sometime next year.

Software Abundance in a Downturn

A “get” is a journalism (remember that?) term for a hard-to-land interview. And so we’re jealous once more about one of RedMonk/Michael Cote’s latest gets: Grady Booch at last month’s Rational Software Conference.

In a rambling sit-down discussion, Booch made an interesting point about software being an abundant resource and how that jibes with the current economic slowdown. Although his eventual conclusion – that it pays to invest in software because it can help you deal with a downturn more effectively (and derive competitive edge) – was not surprising, the rationale was.

It’s this: Booch calls software an abundant resource. Using his terms, it’s fungible and flexible; there’s lots of it and lots of developers around; and better yet, it’s not an extractive natural resource subject to zero-sum economics. That’s for the most part true, although, unless you’re getting your power off solar, some resource must be consumed to provide the juice to your computer.

Booch referred to Clay Shirky’s concept that a cognitive surplus now exists as a result of the leisure time freed up by the industrial revolution. Shirky contends that highly accessible, dispersed computing networks have started to harness this cumulative cognitive resource. Exhibit A was his and IBM Martin Wattenberg’s back-of-the-envelope calculation that Wikipedia alone has provided an outlet for 100 million cumulative hours of collected human thought. That’s a lot of volunteer contribution to what, depending on your viewpoint, is a contribution to or organization of human wisdom. Of course other examples are the open source software that floats in the wild like the airborne yeasts that magically transform grains into marvelous Belgian lambics.

Booch implied that software has become an abundant resource, although he deftly avoided the trap of calling it “free,” as that term brings with it plenty of baggage. As pioneers of today’s software industry discovered back in the 1980s, the fact that software comes delivered on cheap media (followed today by cheap bandwidth) concealed the human capital value it represents. There are many arguments about what the value of software is today – is it the proprietary logic, the peace of mind, or the technical support? Regardless of what it is, there is value in software, and it is value that, unlike material goods, is not always directly related to supply and demand.

But of course there is a question as to the supply of software, or more specifically, the supply of minds. Globally this is a non-issue, but in the US the matter of whether there remains a shortage of computer science grads or a shortage of jobs for the few that are coming out of computer science schools is still up for debate.

There are a couple other factors to add to the equation of software abundance.

The first is “free” software; OK, Grady didn’t fall into that rat hole, but we will. You can use free stuff like Google Docs to save the cost of Microsoft Office, or you can use an open source platform like Linux to avoid the overhead of Windows. Both have their value, but that value is not going to make or break the business fortunes of the company. By nature, free software will be commodity software because everybody can get it, so it confers no strategic advantage on the user.

The second is the cloud. It makes the software that is out there more readily accessible because, if you’ve got the bandwidth, we’ve got the beer. Your company can implement new software with less of the usual pain because it doesn’t have to do the installation and maintenance itself. Well, not totally – it depends on whether your provider is using the SaaS model, where they handle all the plumbing, or whether you’re using a raw cloud, where installation and management is a la carte. But assuming your company is using a SaaS provider or somebody that mediates the ugly cloud, software to respond to your business need is more accessible than ever. As with free or open source, the fact that this is widely available means that the software will be commodity; however, if your company is consuming a business application such as ERP, CRM, MRO, or supply chain management, competitive edge will come from how you configure, integrate, and consume that software. That effort will be anything but free.

The bottom line is that abundant software is not about the laws of supply and demand. There is at once plenty of, and not enough, software and software developers to go around. Software is abundant, but not always the right software, and even if it is right, it takes effort to make it righter. Similarly, being abundant doesn’t mean that the software that is going to get your company out of the recession is going to be cheap.

UPDATE — Google Docs is no longer free.

What do Smarter Planets and Oil Refineries have in common?

Last week we paid our third visit in as many years to IBM’s Impact SOA conference. Comparing notes, if 2007’s event was about engaging the business, and 2008 was about attaining the basic blocking and tackling to get transaction system-like performance and reliability, this year’s event was supposed to provide yet another forum for pushing IBM’s Smarter Planet corporate marketing. We’ll get back to that in a moment.

Of course, given that conventional wisdom or hype has called 2009 the year of the cloud (e.g., here and here), it shouldn’t be surprising that cloud-related announcements grabbed the limelight. To recap: IBM announced WebSphere CloudBurst, an appliance that automates rapid deployment of WebSphere images to the private cloud (whatever that is — we already provided our two cents on that), and it released BlueWorks, a new public cloud service for whiteboarding business processes that is IBM’s answer to Lombardi Blueprint.

But back to our regularly scheduled program: IBM has been pushing Smarter Planet since the fall. It came in the wake of a period when rapid run-ups and volatility in natural resource prices and global instability prompted renewed discussions over sustainability at decibel levels not heard since the late 70s. A Sam Palmisano speech delivered before the Council on Foreign Relations last November laid out what have since become IBM’s standard talking points. The gist of IBM’s case is that the world is more instrumented and networked than ever, which in turn provides the nervous system so we can make the world a better, cleaner, and, for companies, more profitable place. A sample: 67% of electrical power generation is lost to network inefficiencies, at a time of national debate over setting up smart grids.

IBM’s Smarter Planet campaign is hardly anything new. It builds on Metcalfe’s law, which posits that the value of a network is proportional to the square of the number of users connected to it. Put another way, a handful of sensors provides only narrow slices of disjoint data; fill that network in with hundreds or thousands of sensors, add some complex event processing logic, and now you not only can deduce what’s happening, but do things like predict what will happen or provide economic incentives that change human behavior so that everything comes out copasetic. Smarter Planet provides a raison d’etre for the Business Events Processing initiatives that IBM began struggling to get its arms around last fall. It also makes use of IBM’s capacity for extreme scale computing, but also prods it to establish relationships with new sets of industrial process control and device suppliers that are quite different from the world of ISVs and systems integrators.
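For the flavor of it, here is a minimal sketch of the complex event processing idea (our illustration, not an IBM product API; the window size, threshold, and sensor name are invented): one reading is noise, but a rule over many readings becomes a deduction.

```python
# Correlate a sliding window of sensor readings instead of reacting to
# single data points. (Illustrative sketch only.)
from collections import deque

WINDOW = 5          # readings to correlate per sensor (assumed)
THRESHOLD = 100.0   # sustained-sag level worth acting on (assumed)

windows = {}

def on_reading(sensor_id: str, value: float):
    buf = windows.setdefault(sensor_id, deque(maxlen=WINDOW))
    buf.append(value)
    # The "complex" part: the deduction spans the window, not one point.
    if len(buf) == WINDOW and sum(buf) / WINDOW < THRESHOLD:
        return f"sustained sag on {sensor_id}: predict stress, shed load"
    return None

alert = None
for v in [104, 101, 99, 98, 97]:
    alert = on_reading("feeder-7", v) or alert
print(alert)
```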

So, if you instrumented the grid, you could take advantage of transient resources such as winds that this hour might be gusting in the Dakotas, and in the next hour, in the Texas Panhandle, so that you could even out generation to the grid and supplant more expensive gas-fired generation in Chicago. Or, as described by a Singaporean infrastructure official at the IBM conference, you can apply sensors to support congestion pricing, which rations scarce highway capacity based on demand, with the net result that it ramps up prices to what the market will bear at rush hour and funnels those revenues into expanding the subway system (too bad New York dropped the ball when a similar opportunity presented itself last year). The same principle could make supply chains far more transparent and driven by demand, with real-time predictive analytics, if you could somehow correlate all that RFID data. The list of potential opportunities, all of which optimize consumption of resources in a resource-constrained economy, is limited only by the imagination.

In actuality, what IBM described is a throwback to common practices established in highly automated industrial process facilities, where closed-loop process control has been standard practice for decades. Take oil refineries for example. The facilities required to refine crude are extremely capital-intensive, the processes are extremely complex and intertwined, and the scales of production so huge that operators have little choice but to run their facilities flat out 24 x 7. With margins extremely thin, operators are under the gun to constantly monitor and tweak production in real time so it stays in the sweet spot where process efficiency, output, and costs are optimized. Such data is also used for predictive trending to prevent runaway reactions and avoid potential safety issues such as a dangerous build-up of pressure in a distillation column.
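The loop itself is textbook control theory; a toy proportional controller (with illustrative constants, not any refinery’s actual tuning) captures the measure-compare-adjust cycle the operators run continuously:

```python
# Closed-loop control in miniature: measure the process, compare to the
# setpoint, nudge the input, repeat. (Illustrative sketch only.)
SETPOINT = 350.0  # target column temperature, degrees (invented)
KP = 0.4          # proportional gain (invented)

def plant(temp: float, heat: float) -> float:
    """Toy process model: temperature drifts toward the applied heat."""
    return temp + 0.5 * (heat - temp)

temp, heat = 300.0, 300.0
for _ in range(20):
    error = SETPOINT - temp   # deviation from the sweet spot
    heat += KP * error        # corrective tweak each cycle
    temp = plant(temp, heat)  # the process responds
    # In a real plant, the error trend also feeds predictive alarming.

print(round(temp, 1))  # settles near the 350-degree setpoint
```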

So at base, a Smarter Planet is hardly a radical idea; it seeks to emulate what has been standard practice in industrial process control going back at least 30 years.

IBM Puts the Rules Down

In the aftermath of IBM’s announcement of its intent to buy Ilog, it would be all too easy for us to reflect back on a conversation about survival in the software industry with Ilog CEO Pierre Haren last winter at its annual user conference. In Haren’s description, the typical life of a software vendor runs like this: first you get a handful of successful references, then you replicate to at least 20-30 successful accounts, and then you start thinking about what your company wants to do when it grows up. Start specializing your solution for vertical or other specialized sectors of the market, or you must change your role and move on. Haren’s implicit message: eat or be eaten.

We won’t take the cheap shot about IBM swallowing up Ilog because this deal makes too much sense.

Both companies know each other quite well: they have been partners in one way or another for a dozen-odd years, Ilog’s business rules engine fills a key gap in the WebSphere Process Server BPM line, and, most notably, we saw IBM’s SOA strategy VP Sandy Carter keynote Ilog’s conference. IBM’s not going to haul out the big guns for any sub-$200 million software company.

Ilog has had a case of multiple attention disorder for a number of years. Otherwise, how could you explain that a company of Ilog’s size would have not one or two, but three separate product families targeting almost completely different markets? Or that a $180 million company could support 500 partners? Ilog was best known to us and the enterprise software world as one of a handful of providers of industrial-strength business rules management systems. That is, when your rules for conducting business are so conditional and intertwined, you need a separate system to keep them from gumming up into a hairball. That condition tends to typify the world’s largest financial institutions. That’s enough for one business.

But Ilog had two other product lines, one of them being an optimization engine that was OEM’ed to virtually every major supply chain management vendor, from SAP and Oracle to i2, Manugistics, Manhattan Associates and others. And by the way, it also had a cottage industry business selling visualization tools to ISVs.

So how do all these pieces fit together? Just about the only common thread we could think of was the case of a supply chain customer that not only uses the optimization engine, but has such a complex supply chain that it needs to manage all the rules and policy permutations separately. And not to leave loose ends untied, it needed a vivid graphics engine to visualize supply chain analyses for better exception management.

Suffice it to say, that is not why Ilog had three separate business units. The company happened to grow satisfactorily, showing profits for seven straight years, so that it never had to face the uncomfortable question of refocusing. Had it stayed independent, it might have had to do so; while revenues grew roughly $20 million this year to $180 million, profits sank from $4.9 million last year to a paltry $500k this year. Blame it on currency fluctuations (based in France, Ilog would have had to discount in the US to keep customers happy), not to mention the mortgage crisis that has cratered the financial sector.

The good news is that Ilog is a great fit for IBM. Its rules engine adds a piece missing from WebSphere Process Server, and in fact has excellent synergy with a raft of IBM products that start with Business Events (apply sophisticated rules to piecing together subtle patterns emerging from torrents of data), FileNet content server, WebSphere Business Fabric (the old Webify acquisition, providing frameworks for building vertical industry SOA templates), and the list goes on. And that’s only the BRMS part. IBM Global Business Services and its Fishkill fab are customers of Ilog’s optimization engine, while Tivoli’s Netcool node manager uses Ilog’s visualization.

The sad part of the deal is that the acquisition will likely abort Ilog’s interesting foray into Microsoft’s Oslo vision, where it provides the business rules underpinning. Even if IBM wants to maintain the business, we’d be surprised if Microsoft followed suit. Ilog went to the effort, not simply of porting Java-based JRules, but of writing a fully .NET-native product. That’s analogous to what happened with Rational, whose Microsoft Visual Studio partnership originally dwarfed its ties with IBM.

Colleague James Taylor says that the acquisition portends the end of the rules management market as it will likely set off a wave of consolidation by major application/middle tier vendors. CIO UK’s Martin Veitch adds that “IBM is continuing to dance around the margins of enterprise applications” with the Ilog deal. We’d agree, just as with the previous acquisition of Webify and the bulking up of WebSphere Process Server, that it’s getting harder to see where tools leave off and applications pick up. In an era where all these pieces become service-oriented and theoretically composable, maybe that’s irrelevant.

Veitch takes issue with the broader implications for IBM and Oracle – that “These companies have become planets to be explored rather than recognisable fiefdoms of even 10 years ago,” and that “a lot of people are unimpressed by the levels of integration and R&D that follow the incessant deal-making.” Well, part of that may be to satisfy Wall Street, but the march toward agglomeration has become something of a self-fulfilling prophecy. That is, a $500 million software company is no longer considered large enough to be viable, and if customers are afraid for vendor survival, that reinforces the trend for IBM, Oracle, SAP, and Microsoft to gobble up what’s left. That’s a larger issue that gets beyond the pay grade of this post, but it ironically provides subtle reinforcement of what Haren told us roughly six months ago: that once a market gets to the billion-dollar-or-so level, it becomes prey for “bottom fishers” that push niche providers back into their niche.

Shaking Hands

B2B trading hype notwithstanding, the premise of linking trading partners electronically is as old as enterprise computing itself. In the late 1960s, the trucking industry’s experiments with exchanging routine transactions such as purchase orders and shipping notices eventually led to Electronic Data Interchange (EDI). And although EDI standards eventually emerged, securing electronic “handshakes” between trading partners remained elusive because different companies applied the standards differently. Consequently, only top-tier companies could afford the time and expense of making EDI work.

The crux of the problem is that, while it’s relatively simple to standardize lower-level protocols, such as opening or closing a message, going higher up the value chain to specify, for instance, what fields to include in a forecast cuts to the heart of how companies differentiate themselves. So, while my company is proud of its lean forecasts, your company includes customer data because that provides more insight into demand patterns for your products. Not surprisingly, ambitious standards efforts, such as ebXML, which attempted to address all facets of establishing electronic handshakes, failed to gain traction.

With web services promising to democratize B2B commerce for businesses of all shapes and sizes, the challenge of creating electronic handshakes grows even steeper. Not surprisingly, web services bodies have effectively addressed specifying how a service is described, listed, and requested, and now they are branching out to higher-level functions such as asserting what kind of security is enforced, how identity is declared, how business processes are choreographed, and how quality-of-service requirements are described. Another body, WS-I, is providing standard test cases for all the handshaking. And, echoing EDI history, where various vertical sectors such as automotive defined extensions to standards, the same thing is occurring with the specification of XML business vocabularies for areas like law, insurance, and financial services.

But what’s missing is a standard framework for describing or assembling service contracts. Admittedly, some pieces of the puzzle are falling into place, such as WSDL, which describes the service; UDDI, which provides a registry of services; LegalXML, which provides contract language; WS-ReliableMessaging, which addresses how to specify the way service messages are to be delivered; and BPEL, which “choreographs” business processes. But there is no standard framework for organizing all the service and identity descriptors that in aggregate define the relationship between customer and provider, and the services to which customers are entitled. For now, that’s the domain of individual products, such as Infravio’s new X-Registry, which provides a metadata repository for such descriptors.
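To make the gap tangible, here is a purely hypothetical sketch of the kind of bundle such a framework might organize; no such standard exists, and every field name below is invented. It simply gathers in one place the descriptors that today live in separate specs:

```python
# What a service contract might aggregate if a standard framework existed.
# (Entirely hypothetical; these descriptors currently live in WSDL, UDDI,
# LegalXML, WS-ReliableMessaging, and BPEL, with no umbrella standard.)
from dataclasses import dataclass, field

@dataclass
class ServiceContract:
    wsdl_url: str          # WSDL: what the service looks like
    uddi_key: str          # UDDI: where the service is listed
    legal_terms: str       # LegalXML-style contract language
    delivery_policy: str   # WS-ReliableMessaging-style guarantees
    process_ref: str       # BPEL process that choreographs it
    entitlements: list = field(default_factory=list)  # who gets what

contract = ServiceContract(
    wsdl_url="https://example.com/orders?wsdl",
    uddi_key="uddi:example.com:orders",
    legal_terms="net-30 payment, 99.9% availability SLA",
    delivery_policy="exactly-once delivery",
    process_ref="OrderToCash",
    entitlements=["gold tier: priority queue"],
)
print(contract.wsdl_url)
```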

Admittedly, standards are no panacea, as they won’t ensure that handshakes succeed. They simply harmonize the formats and vocabularies for building and testing messages and content. Still, we’d like to see the standards community attempt a WS-Contract framework that describes service provider relationships. History tells us that would be an uphill battle, but we think it would be one worth fighting.

Radio Frequency what?

With the dot-com era long over, it’s hard to imagine anybody getting exercised over technology anymore. Yet, in recent weeks, the national media — from Time and Newsweek to the New York Times, Wall Street Journal, CNN, and Fortune — have all run headline stories on Radio Frequency Identification (RFID).

Of course, it would be very understandable if you were to ask, “Radio frequency what?”

Wal-Mart’s recent announcement requiring suppliers to affix RFID tags to every case and pallet by 2005-2006 has put the story in the headlines. When Wal-Mart announced similar dictates over bar codes a decade ago, the consequences almost pushed rivals like K-Mart out of business.

Yet, virtually all of the stories focus on the RFID feature least likely to be adopted, for now: Having RF tags ID each box of product. Sounds like we’ve got another case of technohype on our hands.

Theoretically, affixing RF tags to every package of razor blades or cornflakes could reduce pilferage, eliminate checkout lines, and, for durable goods, provide ways for appliances to “talk back” when they need repair. Yet the articles in the press focus on inflated fears that those innocent little tags could tell retailers something about what you do when you wake up in the morning.

Nothing could be further from reality. Let’s assume that retailers and manufacturers really wanted to know how you consume products. Do you really think they could afford to occasionally drive trucks around your neighborhood scanning every box of cornflakes in the area? Could they even manage all that data? As one RF equipment supplier told us, if Wal-Mart tagged every item in every store and tracked every movement, that would generate 7.5 million terabytes daily. And that’s just inside the store. Using Gartner storage cost estimates, we figured that level of tracking would easily cost Wal-Mart millions upon millions of dollars.
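For a sense of scale, here is the back-of-envelope math, with our own illustrative per-terabyte cost plugged in (Gartner’s actual estimates are not reproduced here):

```python
# Storage cost of item-level tracking, back of the envelope.
# The daily volume comes from the article; the unit cost is invented.
DAILY_TB = 7.5e6       # 7.5 million terabytes per day, per the RF supplier
COST_PER_TB = 10.0     # assumed illustrative dollars per terabyte stored

daily = DAILY_TB * COST_PER_TB
yearly = daily * 365
print(f"${daily:,.0f} per day, ${yearly:,.0f} per year, storage alone")
# -> $75,000,000 per day, $27,375,000,000 per year, storage alone
```

Even at a giveaway unit cost, the bill lands in the billions per year before anyone has mined a single byte.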

OK, item-level RF tags might eventually be economical for some aspects of inventory tracking, but regardless of how cheap the tags get, the challenge of mining unheard-of quantities of data is likely to doom any kind of Big Brother scheme. Furthermore, it’s a safe bet that retailers will finesse the issue by embedding kill features that disable RF tags after checkout.

The real problem is that we still get overexcited by technology. Remember when the Internet was going to change everything? That’s exactly why Gartner Group publishes a “hype cycle” that includes a “trough of disillusionment” that eventually flattens out to a “plateau of productivity.” Obviously, the hoopla over RFID and privacy reflects the same misdirected hype, making yet another trough of disillusionment likely.

At case and pallet level, RFID will make supply chain management more productive. But you can bet that story won’t sell as many newspapers as the one about privacy advocates worrying what could happen once an RFID tag detects that you have opened yet another box of birth control products.