The Java camp patted itself on the back yesterday, holding a coming out party for the almost two dozen vendors whose products just won J2EE 1.3 (Java 2 Enterprise Edition) certification. Among the highlights of J2EE 1.3 are new connectivity standards for application, messaging, and XML web services integration.
Between the lines, however, questions emerged about whether the Java Community Process (JCP), which controls standards development, could fragment. So far, the JCP has shown remarkable cohesion, in spite of IBM’s worries that Sun was pulling a few too many strings. And, aside from Microsoft, which withdrew from Java by mutual consent, and some clean-room experiments by HP, nobody has successfully engineered a different Java.
But could the JCP’s broad mission prove too much of a good thing? As long as the JCP limits itself to “plumbing” functions that lack intrinsic value, it’s on safe ground. Messaging extensions make sense, but APIs for business workflows? They’re another question.
Another pitfall: as the agenda grows too broad, it might grow overcautious. For instance, in tackling SOAP (an emerging, simplified messaging protocol for XML web services), the JCP lumped it under an umbrella spec covering all “on-the-wire protocols.” Are we getting a complex answer to a simple problem, and consequently, is that slowing down the standards process?
Some Java community members are already taking the law into their own hands. Last month, IBM and HP announced UDDI4J (UDDI, a web services registry, for Java) to provide a quicker alternative to the proposed official JAXR spec, which would cover registries of all types.
We don’t know how far left-field responses like UDDI4J will go (even IBM’s representative at the J2EE event yesterday hadn’t heard of it), but they should serve as a wake-up call to the JCP to place realistic boundaries on its mission.
As the old saying goes, success has many fathers, but failure is an orphan. By that logic, it would be too easy to blame external factors, like the collapse of Enron, for giving K-Mart’s investors and insurers the cold feet that eventually pushed the retailer over the Chapter 11 cliff this morning.
For K-Mart, its smart bet on the Martha Stewart product line was probably the exception that proved the rule. K-Mart had powerful mindshare assets, from the blue lights to entry-level Martha Stewart linens and window treatments. However, K-Mart’s poor technology systems meant that the retailer was unable to capitalize on its natural advantages. Maybe the only thing that the retailer understood was how fast the money was flowing out.
Technology obviously wasn’t the sole culprit. K-Mart ended up squeezed on both ends: between Wal-Mart for economies of scale and Target for style. Management dithered back and forth over whether the retailer should emphasize its blue lights or Martha Stewart’s tablecloths.
Nonetheless, technology helped keep K-Mart in the dark during those years when the rural upstart from Arkansas was quietly grabbing market share. Whereas Wal-Mart began adopting computerized inventory management systems nearly 30 years ago, K-Mart only began seriously integrating its inventory systems within the past five years. And, when Wal-Mart helped pioneer “quick response” programs that gave selected top-tier suppliers access to point-of-sale and distribution data, the best that K-Mart could muster was to turn its inventory systems over to suppliers whose forecasting systems, in some cases, were even less sophisticated (K-Mart eventually retrenched here).
While technology can’t compensate for poor management, it can improve managers’ chances of learning from their mistakes.
It’s tempting to dismiss USinternetworking’s (USi) recent Chapter 11 filing as just the latest casualty of the dot-bomb implosion. Sure, much of the appeal of Application Service Providers (ASPs), of which USi claimed to be the largest, was their promise to provide mature IT infrastructure to young, fast-growing firms like dot coms. Ironically, USi positioned itself as the adult of the crowd owing to its large capitalization and a non-dot-com client list boasting names like Rand McNally, Rohm & Haas, Hershey, and Providian Financial.
In fact, ASPs weren’t anything new. As the latest incarnation of traditional service bureaus and time-sharing services, ASPs were supposed to implement and run applications cheaper because they ran the same things for everybody.
But the economies of scale never happened. ASPs fell victim to the same problem that drove ERP projects into cost overruns and delays: every company is different, requiring custom changes to the software. ASPs like USi promised to avoid those potholes by only allowing packages to be “configured,” that is, changed from a limited menu of choices. But recall that while ERP vendors like SAP made the same claims, their projects still came in at 5 to 10 times the cost of the software and hardware. In the end, ASPs like USi couldn’t “discipline” their customers.
The conventional wisdom today is that neo-ASPs will resemble the service bureaus of old. Sure, ADP never looked sexy, but it consistently made money processing payrolls. The most likely candidates to succeed are Managed Security Providers, who focus on firewalls, user authentication, and virus protection. They will survive because the talent pool of security professionals, like Java programmers before them, isn’t meeting demand.
But once the hysteria of 9/11 subsides, Managed Security Providers and similar downsized ASPs will thrive only if they keep their ambitions in check. Ironically, a chastened stock market, which previously fed the worst of the ASP excesses, might actually exert some useful discipline.