When Real-Time Isn’t Fast Enough: Notes from TUCON 2008

You can tell you’re at a techie conference when the CEO’s opening address steps you through the generations of IT history. Tibco CEO Vivek Ranadivé, who roughly a decade ago wrote a book on the power of real-time information, is now telling us that real-time isn’t fast enough; you need to be able to predict the foreseeable future based on patterns that are emerging today. What was kind of amusing was that as he was drawing Tibco’s vision of event-driven architecture (EDA), events took a turn of their own: his slide presentation anticipated what Ranadivé was going to talk about next, advancing from the EDA architectural diagram to the next slide before he was through. Check out Dana Gardner’s ZDNet blog for another account of the keynote.

TUCON 2008 provided some interesting tidbits about a company in transition. Like most major software players, Tibco came off a good 2007, and Q1 2008 wasn’t too shabby either (like the rest of the software industry, we’re waiting for Q3 and Q4 to see if Tibco and its peers can handle the truth). Like most active mid-tier players, the company is embarking on a SOA transition that hasn’t always been well explained to its customers. In part that’s because Tibco has had to layer over a legacy, the penalty paid for being first on the block, but mostly it’s because the company has not adequately communicated how all the pieces fit together. With Tibco ONE, which seeks to unify the design experience across all its tools, buses, and process engines, it’s made an auspicious start. But to date, the company lacks a web page explaining what the ActiveMatrix brand is, and the vision and message behind it. Enter “Tibco ActiveMatrix” in Google today and, aside from point product listings, you’ll get a SOA landing page with a list of ActiveMatrix products underneath it.

The highlight of Tibco’s announcements was a welcome addition to run-time SOA governance, Service Performance Manager, which cleverly bundles some of its BusinessEvents complex event processing technology to tell you whether you’re meeting your service level agreements, at least from a performance standpoint. It starts from the assumption that service levels are a classic complex event processing problem, especially given the kinds of highly scaled, highly distributed networks that Tibco customers tend to have. We attended a live podcast recording session led by Dana Gardner, featuring independent analyst Joe McKendrick, IDC analyst Sandy Rogers, Tibco SOA marketing manager Rourke McNamara, and Allstate technology solutions VP Anthony Abbatista, that plumbed the topic in more detail; it should be published soon by Gardner and Tibco.
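To make the CEP framing concrete, here’s a minimal sketch (ours, not Tibco’s BusinessEvents rule language) of the kind of continuously evaluated rule involved: keep a sliding window over recent service calls and raise an event when the running average blows the SLA target. All names and thresholds are invented.

```python
from collections import deque
from statistics import mean

WINDOW_SIZE = 100   # recent invocations to consider (hypothetical)
SLA_MS = 250.0      # response-time target from the SLA (hypothetical)

latencies = deque(maxlen=WINDOW_SIZE)

def on_invocation(response_time_ms: float) -> None:
    """Feed each service invocation's latency into the sliding window."""
    latencies.append(response_time_ms)
    if len(latencies) == WINDOW_SIZE and mean(latencies) > SLA_MS:
        emit_breach(mean(latencies))

def emit_breach(avg_ms: float) -> None:
    # A real CEP deployment would publish an event onto the bus;
    # for the sketch we just report it.
    print(f"SLA breach: average latency {avg_ms:.1f} ms exceeds {SLA_MS} ms")
```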

What’s missing (and Tibco is hardly alone here) is the link with the actual systems performance side of the house. Ironically, an announcement made at the show that BMC will OEM ActiveMatrix as the SOA platform to integrate its Business Service Management (BSM) products was a bit anticlimactic, as it was not about adding tie-ins between SOA service performance management and IT service management. The challenge is that, if service levels on a service are tanking, it would be nice to automatically open a trouble ticket, and/or to trigger something like a hypervisor manager to automatically spawn new VMware or similar containers.
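If that link existed, the glue could be as simple as the sketch below. To be clear, neither endpoint is a real API – the URLs and payloads are invented, and the standardized hand-off they imply is exactly what the next paragraph says is missing.

```python
import json
from urllib import request

def on_sla_breach(service: str, avg_ms: float) -> None:
    """Hypothetical hand-off from SOA governance to IT service management."""
    # Open a trouble ticket in the (invented) ITSM system...
    post("https://itsm.example.com/tickets",
         {"summary": f"{service} breaching SLA at {avg_ms:.0f} ms"})
    # ...and ask an (equally invented) hypervisor manager for more capacity.
    post("https://hypervisor.example.com/instances",
         {"image": service, "count": 1})

def post(url: str, payload: dict) -> None:
    req = request.Request(url, data=json.dumps(payload).encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)
```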

The problem is that the issue has yet to be adequately defined; there are no standards for specifying how such an interprocess communication might be initiated; and finally, there’s the age-old problem of organizational silos: service level management of SOA is part of run-time governance, which tends to be the domain of the software development organization, while IT service levels in the data center are the domain of service operations. Even were the technologies and standards adopted, you’d have the challenge of marketing a joint solution to two different entry points in the IT organization, something that will only happen when a CIO bangs heads together.

A couple of other interesting developments in the pipe today included some early fruits of a budding technology partnership between Tibco and Microsoft that, in truth, have little to do with each other. The first is the ability for Microsoft .NET development shops with Tibco licenses to invoke Tibco’s EMS messaging technology from Windows Communication Foundation (WCF) in place of something like Microsoft’s own MSMQ. This was the result of joint customers like Allstate saying: we’ve got a sizable .NET installed base, and we want our .NET apps to talk to Tibco’s buses without lots of translation. From a more strategic standpoint, the relationship makes sense because, aside from Eclipse-based tooling, Tibco is not your typical Java EE provider; it relies on buses rather than servers. So there’s no reason why Tibco shouldn’t have closer ties to Microsoft.

The other Microsoft-related development was a statement of direction that, going forward, Tibco’s richer clients would use Microsoft Silverlight rather than Adobe Flex. Now of course, the natural question is: didn’t Tibco already have a rich web client strategy with General Interface, and does this mean that GI (which is based on Ajax) will fall by the wayside? The answer is yes and no. GI still lives, but according to technology VP Matt Quinn, when it comes to charting the extremely complex network topologies that ActiveMatrix must manage, JavaScript simply runs out of gas. Quinn added that in this case the decision was strictly the result of a technology shootout, as opposed to something more strategic. In essence, Silverlight can talk back to any server, while Flex has more dependence on its own servers. Admittedly, with Microsoft there are always questions of cross-platform support, but for now it promises support for all the usual browser suspects.

The final piece for today was Tibco doing something really weird: a software company going into the custom appliance business. It’s for a special use case: high-end financial services houses (and maybe later telcos) that are so neck-deep in algorithmic trading that they need to process huge throughputs with practically no latency. Tibco is packaging Rendezvous (the message bus on which the company was founded) in a box, the idea being to provide supercharged throughput to the financial services clients who are demanding it.

Although folks like Oracle gave the practice a bad name in the past (recall the “Raw Iron” 8i database appliance?), recently it’s not been so unusual for ISVs to embed their stuff in silicon. But typically it’s commodity silicon using commodity pieces of the Linux kernel. What makes this different is that Tibco is doing custom silicon – think of its boxes as ASICs on steroids. In the short run, Tibco can count on an addressable market numbering in the hundreds. When it comes to silicon, that’s probably a few seconds of production, which means huge expense and high risk for so modest a market. We understand the need from Tibco’s customer base, and of course the unusual nature of the problem, but we wonder how, in an age of commodity hardware, a company inexperienced with this side of the market is going to successfully navigate upwind.

Leaving Las Vegas

Maybe we should have entitled this entry, “How to Kill a Conference.”

Let’s explain. For the past several years, one of the events we’ve most looked forward to has been Sand Hill Group’s annual Software conference. Brought together by industry veteran M.R. Rangaswami, the conference was an intimate gathering of software industry business executives and financiers, providing a place where one could truly put one’s ear to the ground and get a sense of where the money was flowing. In fact, even more valuable than the agenda (which boasted an A-list of speakers, ranging from Steve Ballmer to Marc Benioff and Hasso Plattner) were the networking opportunities.

We had the misfortune of attending this year’s gathering, which was hardly anything like its predecessors. Previously an intimate gathering of C-level software executives, venture capitalists, and angel investors in Santa Clara, this year it was airlifted wholesale to Vegas and merged with Interop, a gathering of network gearheads. In place of an intimate event with maybe a couple thousand attendees, you had probably 10x the crowds, most of whom were looking at converged routers.

Now of course, there’s obviously a case to be made for letting a tiny event take advantage of discounted room deals by piggybacking on a larger one. The problem is that, in their infinite wisdom, the Interop folks ran both events as one conference – which wouldn’t have been a bad idea if there were some commonality or intersection between the audiences. But software executives and network geeks?

Of course, Sand Hill Group is a small, specialized group that helps emerging software vendors formulate go-to-market and business plans; it’s not a media, analyst, or conference organization. So we fully understood its decision to sell the conference to TechWeb Networks. But as for TechWeb, just what were these people thinking? We presume they paid good money to take on the event, so we’re wondering if anybody there ever passed Demographics 101. They could have easily paired it with another of their events, Enterprise 2.0, where there would have been far more synergy. Unfortunately for us, the result was largely a wasted day getting lost in Vegas, not to mention all the needless pestering over routers and switches that clogged our voicemail and email ahead of time.

They made their mistake, hopefully we won’t repeat ours.

WOA or Whoah?

Over the past couple of days, we’ve seen a backlash to a backlash. Web-Oriented Architecture (WOA) has been seen as a reaction, or backlash, to the complexity of SOA. But as no good deed goes unpunished, there’s much concern that with the debates over WOA vs. SOA, the ideal of service orientation gets lost in the message. Weeks of give and take over the matter culminated in the welcome return of Dana Gardner’s much-missed BriefingsDirect Analyst Insights podcasts. Yesterday, we joined Forrester Research analyst Jim Kobielus, independent analyst Joe McKendrick, Current Analysis principal analyst Brad Shimmin, and Procullux Ventures director Phil Wainewright. There seemed to be an emerging consensus that WOA complements SOA, or can provide a simpler onramp to it.

But a reader comment suggesting that maybe we’re muddying the waters – “Can we please stop the proliferation of WOA when folks can barely still digest SOA??” – sparked yet another round of debate, not only from yesterday’s crew, but also from consultant (and WOA evangelist) Dion Hinchcliffe, Jon Collins, plus Neil Macehiter and Neil Ward-Dutton.

Here are a few excerpts:

“Not sure how things work in the US, but in Europe folks need time to digest stuff and make it relevant to themselves. That’s still happening in the real world around SOA, at least over here. All the chatter about WOA appears pretty pie-in-the-sky and hype-driven, too vendor- and tech-focused for the folks we speak to… the “pie-in-the-sky” etc stuff isn’t necessarily my perspective, but it certainly is the perspective of the people I speak to here in Europe. So we have to moderate our message and show how new ideas fit in with existing ideas…”

“I also agree with Dana’s blog about dropping the acronyms altogether…”

“The problem is that Gartner’s business is predicated on creating acronyms (that’s where WOA came from after all) and they are, as [we] were discussing earlier in the week, a market maker. Is there a WOA MQ? My guess (0.9 probability) is that if there isn’t there will be soon…”

“Too bad human nature will never allow it. If we see or understand something that’s unique, we are compelled to name it…”

“Don’t underestimate the Silicon Valley get-rich-quick thinking and seduction process. There’s a deep dichotomy: Silicon Valley methods vs ITIL reasoning…”

“WOA simply reflects the set of emergent network and application architectures that are working today on a large scale on the Web, getting results for a great many organizations by using slightly different techniques and a fairly different mindset than we’ve used in SOA…”

“I sit in trepidation waiting for the first press release I get with the acronym “WOA” in the title, knowing it will be from some obscure company that has somehow managed to fit their quart into the latest pint pot…”

“I don’t believe SOA is going away or out of fashion, I’ve predicted a potentially bright future for it as WOA ideas help deliver on the fuller promise of SOA…”

“SOA isn’t just seen as a technical integration-centric activity, but also a business architecture / business transformation…”

“I for one am not seeing much of that kind of strategic-level analysis and thinking in the WOA-side of things… as you say, that’s a pretty far cry from using SOA as a way to enable the transformation and reinvention of business processes and business models. And that’s an enterprise perspective that probably needs to be “married” to WOA and its pointedly consumption-oriented approach since it could be a great enabler of such activities. It does remain to be seen if it’s a meaningful driver of them however…”

To hear the podcast that prompted today’s discourse, click here.

Still Room for Billion-Dollar Plays: A Conversation with M.R. Rangaswami

On the eve of last year’s Software conference, Sand Hill Group principal M.R. Rangaswami spoke on the prospects for innovation in a consolidating software industry. Evidently there was some room left for innovation; witness Sun’s billion-dollar acquisition of MySQL. According to Rangaswami, it points to the fact that there’s life – and value – in open source software companies beyond Red Hat.

In fact, 2007 was a year of second acts, with Salesforce joining the ranks of billion-dollar software companies. On the eve of Software 2008 next week, we had a return engagement with M.R. to get his take on what’s gone down over the past year. The first point he made punctured the conventional wisdom that, given ongoing consolidation, another software company could never again crack the established order. “People questioned whether there would ever be another billion-dollar software company again, although of course Marc Benioff doesn’t call it that,” he noted.

But Rangaswami added that conventional wisdom wasn’t totally off, pointing to the fact that a lot of promising break-outs are being gobbled up before they get the chance to go public – MySQL being Exhibit A. There’s plenty of cash among the old guard to snap up promising upstarts.

Nonetheless, there are invisible limits to the acquisition trend, most notably among SaaS (Software as a Service) providers. He ascribes the reticence to the fact that conventional software firms are scared of the disruptive effects that on demand providers could have in cannibalizing their existing businesses.

Going forward, Rangaswami expects some retrenchment. We’d put it another way: with last year’s 5–6% growth in IT spending, it was almost impossible for any viable ISV not to make money. Even if, perish the thought, we had been CFO of some poor ISV last year, it would have been in the black in spite of us.

But this year, with IT spending growth anticipated in the more modest 1–2% range, if that, there are going to be a lot of missed numbers. IBM cleaned up in Q1, but Oracle’s and Microsoft’s numbers failed to impress (Microsoft last year was coasting on Vista upgrades). Rangaswami advises ISVs to keep the lid on development costs (he expects to see more offshoring this year), but he also says that ISVs should be “smarter” with their marketing budgets. “Do a lot more with programs that are online and use Web 2.0 technologies as opposed to some of the more traditional approaches,” he advised, pointing to channels like podcasts and YouTube. “Most people watch TV on YouTube these days,” he said, only slightly exaggerating.

Of course, Rangaswami says you can’t ignore the emergence of social computing, for which Facebook has for now become the poster child. We admit to being a bit put off by the superficial atmosphere of the place; not being under 35, why should we care what somebody did last night or who their favorite band is? But it’s become conventional wisdom that some form of social networking is bound to emerge for more professional purposes, like engaging your customers, that goes beyond the forums and chat rooms of user groups, the occasional regional or annual conferences, or the resume-oriented purpose of LinkedIn. In fact, one recent startup, Ringside Networks, is offering a “social appserver” with which businesses can use Facebook apps to build their own communities on their own sites.

But Rangaswami asks, why not use some of the less serious aspects of social computing to conduct real business? Like getting your New England customers together at the next Red Sox game (just make sure that one of your New York customers doesn’t slip onto the invite list by mistake).

The theme of this year’s Software 2008 conference is what Rangaswami terms “Platform Shift.” After the upheavals of the open systems and Internet eras, it appeared that the software industry was coalescing around the Java and .NET platforms. But then on demand began making the Java vs. .NET differences irrelevant. For instance, if you want to write to Salesforce’s platform, you do it in a stored-procedure language that is like, but isn’t, Java. On the horizon you have Amazon’s EC2 cloud and the Google Apps platform, and you could probably also consider Facebook another platform ecosystem (there are thousands of apps already written to it).

The good news is that tough times actually encourage customers to buy a couple of on demand seats for petty cash because it sharply limits risk.

The result is that barriers to entry for new software solution providers are lower than ever. You don’t have to ask customers to install software and you don’t have to build the on demand infrastructure to host it. Just build the software, then choose whose cloud you want to host it on, pay only by the drink, and start marketing. According to Rangaswami, the cloud might remove the hurdles to getting started, but getting your brand to emerge from the fog may prove the toughest challenge. “Sales and marketing in this new world will be totally different.”

Write Once (on the Web), Run Anywhere

The concept – or, some might say, “markitecture” – of Web-Oriented Architecture (WOA) is hardly new, but for some reason, in the past few weeks, debates over WOA vs. SOA have suddenly flared up like a tornado materializing in Kansas. Probably it’s no coincidence that the catalyst was the observation by Burton Group analyst Anne Thomas Manes that “SOA is not working in most organizations” because of a lack of will to share services across organizational silos. Microsoft’s new Live Mesh raises the profile of WOA even further – more about that in a moment.

What’s interesting is that Manes’ conclusions attributed the problem more to culture than to technology complexity. Subsequent arguments varied: Dana Gardner contends that SOA became all too much the end and not the means; Software AG’s Miko Matsumura and StrikeIron’s David Linthicum view WOA as an onramp to SOA; while TechTarget’s SearchSOA columnist Michael Meehan, after interviewing a number of experts, concluded that WOA was simply hype and markitecture – “an empty suit if you will.”

For the record, we’re struck by the incredible parallels between WOA and Ajax, in that both stretch common web building blocks in ways their creators never foresaw. Both hold a similar appeal for web developers: the technologies are already familiar, and both rely on a grab bag of poorly structured technologies with few if any rules governing how you apply them.

Cut to the chase. Like the Democratic primaries, the latest reminder that we just can’t put the WOA vs. SOA debate to rest is Microsoft’s unveiling of Live Mesh. The elevator pitch is that it’s Microsoft’s emerging on demand environment for storing and sharing things between PCs and other kinds of devices: phones (smart and dumb), game stations, PDAs, and so on. We won’t repeat what’s already been reported, but check out Mary Jo Foley, who’s got a Top Ten list describing what Live Mesh is, and eWeek’s Daryl Taft, who’s described the underlying development platform in detail.

Live Mesh uses a freebie new technology from Microsoft called FeedSync that makes syndication feeds bi-directional. It surrounds FeedSync with – you guessed it – a webby development environment. “If you want to live in a software and services world, you must start with what works on the web, the lowest programming model, built on open protocols,” explained Abhay Parasnis, Microsoft’s product unit manager for Live Mesh.

So let’s call WOA the lowest common denominator.

Under the hood, Live Mesh’s plumbing runs on raw XML sent over HTTP, using a RESTful programming model for obtaining services, plus the familiar ATOM or RSS feeds to support pubsub communications. Above that, Microsoft provides different paths so that .NET, Ajax, and dynamic scripting languages can all write to the platform. This being Microsoft, rendering via WPF and Silverlight is supported as well. OK, if you want Java, AIR, or Flex, better wait for another Mono project.
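For flavor, here’s roughly what that “raw XML over HTTP” style boils down to: a plain GET against a resource URI and a walk through the Atom entries, with no SOAP envelope anywhere. The feed URL is a placeholder of our own, not the actual Live Mesh API.

```python
from urllib.request import urlopen
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
FEED_URL = "https://mesh.example.com/devices/feed"  # hypothetical resource URI

with urlopen(FEED_URL) as resp:   # a plain HTTP GET, nothing fancier
    root = ET.parse(resp).getroot()

for entry in root.iter(ATOM + "entry"):
    # Each entry is just XML; pubsub here amounts to re-polling the feed.
    print(entry.findtext(ATOM + "title"), entry.findtext(ATOM + "updated"))
```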

Live Mesh is intended as a loosely coupled platform in that it supports the familiar store-and-forward and pubsub modes. You might call it a variation on service-oriented architecture, because the services are supposed to be abstracted from whatever device or software platform they are implemented on, but there’s no trace of a web services stack here.

In other words, if you want to write once, run anywhere, Microsoft is telling you to use WOA.

IT Forecast is Partly Cloudy (II)

Last week, we opened Pandora’s box on the inevitability of the cloud, and we spoke of the tension between the SOA and WOA camps as to which is the best means for getting services from, or providing services to, the cloud.

With the Web 2.0 Expo this week, you can bet there’s plenty of noise about the cloud. For some, it’s so much noise that the whole notion of cloud computing, or the cloud itself, has become rather foggy.

One of the arguments over SOA is that the web services standards that are used for implementation have generated intimidating layers of complexity, and that web-oriented alternatives (e.g., WOA) are far more straightforward and far better fits for the web development skills that are already commonplace.

So there will be a flurry of announcements. A few examples: Kapow Technologies is launching an on demand mashup server, providing black-box capabilities like Excel plug-ins for data services out in the cloud. Meanwhile, Serena is teaming with Capgemini to launch a sandbox enabling business professionals to design and compose mashups without programming skills.

One of the more interesting announcements, from a lineage standpoint, is the emergence from its cocoon of SnapLogic, a startup with a WOA-oriented takeoff on RSS that it promotes as “Really Simple Integration.” Started by Informatica founder Gaurav Dhillon, SnapLogic represents a closing of the circle for simplified data access. Just as Informatica was the first to adopt a visual, component-based approach to developing database integrations, SnapLogic is doing the same with resources that are accessible over the web.

It’s based on an HTTP server that supports RESTful services; a repository of metadata written in HTML; generic resources for reading, transforming, and writing data; support for Java and various dynamic scripting languages on the server; and multiple web output formats, including HTML pages, RSS or ATOM feeds, and JSON (a JavaScript-based data interchange format).

In RESTful style, each data source is treated as a resource. In turn, access to those resources can be managed, not by adding tokens or other entitlement technologies, but by giving each individual (or class of individuals) a separate URI for each resource it may access. Imagine, if you will, a table where the columns are data sources and the rows are specific users. Such a table could be fed by directories and internal access control tools, or by the HTML metadata repository, rather than adding a separate layer of complexity for access and authentication.
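In code, that table is about as simple as access control gets. The sketch below is our own invention, not SnapLogic’s implementation: a grant exists if, and only if, a (user, data source) pair maps to a URI.

```python
from typing import Optional

# Hypothetical grants table: columns are data sources, rows are users.
# No entry means no URI, which means no access.
GRANTS = {
    ("alice", "orders"):    "https://data.example.com/alice/orders",
    ("alice", "customers"): "https://data.example.com/alice/customers",
    ("bob",   "orders"):    "https://data.example.com/bob/orders",
}

def resource_uri(user: str, source: str) -> Optional[str]:
    """Access control by construction, rather than by token checking."""
    return GRANTS.get((user, source))

print(resource_uri("alice", "orders"))    # a URI: access granted
print(resource_uri("bob", "customers"))   # None: the resource simply isn't there
```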

By providing a clever way for RESTful services to become reusable, SnapLogic helps flesh out the vision of WOA, which could be nicknamed: technology that is just good enough to get the data you need, wherever it sits out in the cloud. Don’t mistake the elegance of simplicity for completeness, though; although web-oriented approaches essentially take the user-friendliness of client/server database apps to the web, the simplicity of the architecture rules out embedding sophisticated capabilities such as the federated identity, orchestration, and security assertions that come with the WS-* standards. That’s not necessarily a bad thing if your app doesn’t involve highly sensitive data or require high performance. And if it does, there’s no reason it can’t be implemented within a secured environment where all the necessary governance and performance are applied extrinsically.

But what’s interesting is that, with the emergence of the cloud, SnapLogic and StrikeIron offer approachable alternatives that let you have your data services without the reengineering baggage.

Not Your Father’s Data Cleansing

Back at some data warehousing conference circa 1995, we recall sitting at a briefing on data integration when a marketer for a rival vendor stood up and made a spectacle, shouting that the dirty secret of data integration was that nobody was paying attention to the quality of the data going in. We followed up with an article characterizing data quality as “a motherhood and apple pie issue,” adding, “It’s hard to be against it, but it’s still something of a sleeper in the information technology community. As a result, few have any idea how much manpower it will take to do the job.” We came across a major chemical manufacturer that spent three years reconciling 23 payroll systems and 18 general ledgers for its SAP migration, and a major bank that invested 80 staff-years cleaning up 20 million customer records.

Suffice it to say that since that time, the idea has sunk in. At the time, data quality was largely the domain of niche tools; fast forward, it’s no longer a separate discipline (a journal strictly devoted to the topic having folded). Instead, data quality is simply a pillar of data integration platforms, such as those from IBM, Informatica and Business Objects.

The dilemma, of course, is that data quality is a concept that’s often in the eye of the beholder. Specifically, it has been defined largely by the problem it tackled.

So the earliest natural market was in rationalizing customer records. The earliest tools performed rudimentary pattern matching on names and addresses to correct spelling and syntax errors. It was a quick hit because the problem was readily definable, and there was an obvious ready market: merchants that wanted a single view of the customer. When you spoke of data quality back in the ’90s, this is generally what you meant.
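A toy version of that rudimentary matching, under heavy assumptions of our own: normalize the obvious abbreviations, then score string similarity. Real tools of the era used far richer rules and postal reference data; the threshold here is arbitrary.

```python
from difflib import SequenceMatcher

def normalize(record: str) -> str:
    """Lowercase, strip punctuation, and fold common abbreviations."""
    folds = {"street": "st", "avenue": "ave", "mister": "mr"}
    words = record.lower().replace(".", "").replace(",", " ").split()
    return " ".join(folds.get(w, w) for w in words)

def same_customer(a: str, b: str, threshold: float = 0.9) -> bool:
    """Treat two records as the same customer if they are nearly identical."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

print(same_customer("Mr. John Smith, 12 Main Street",
                    "MISTER JOHN SMITH 12 MAIN ST"))   # True
```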

However, data quality was also viewed as a solution for reconciling differences when converging multiple data sources. You could easily wind up with a case where two rights make a wrong, especially where properly validated data sources were conceived at different times, according to different standards or naming conventions. While the data might already be correct, more recent systems could reflect newer corporate standards or changing regulatory requirements dictating more detailed data. For instance, in the wake of corporate governance standards, retention of email records grew far more stringent.

Over the years, data quality also began to encompass tasks such as data profiling, where you size up the data so you can optimize your data cleansing and integration strategies, rather than developing them reactively as you stumble along. There’s also a case to be made for data integration and data quality being used as forensic tools.

More recently, as enterprises have embraced globalization, they have found record diversity increasing even further. That’s the gap Informatica is addressing with its acquisition of Identity Systems from Nokia; with the deal, data quality has come full circle, as the new piece addresses a different aspect of the customer data quality issue on which the business began. Surprisingly, given the familiar space, there are relatively few overlaps with Informatica’s existing data quality tools, in that Identity Systems is focused more on reconciling whether different records refer to the same person than on whether the records are all spelled the same.

Admittedly, you could do a more primitive form of the same task with a traditional name/address verification tool; however, getting rough matches is not the same thing as providing authoritative answers when you deal with names as common as Smith or Jones in North America, or for that matter Bin Laden in the Mideast. Similarly, having versions of name/address verification tools in different languages isn’t new either. But again, this is all child’s play compared to the challenge of verifying identity, which requires pattern matching that also accounts for context, not to mention real-time search capabilities (most data quality tools have traditionally been batch-driven).

Demand for identity verification capabilities is obvious, given the plagues of identity theft, financial fraud, and world terrorism, not to mention more positive goals such as providing real-time credit verifications or managing patient electronic health records for care that may be delivered through multiple entities.

As with any acquisition, Informatica faces some fancy footwork, because its strategy is to remain within its data integration market and not compete with its application partners. For instance, where does the boundary between the identity verification and customer management aspects of master data management lie?

Identity Systems is a logical addition to Informatica’s catalog as a standalone solution or, better yet, as an advanced option for its existing data quality tooling. While not every client will need such detailed identity matching, for those that do, separating identity from customer record data quality would be artificial. There are also potential synergies with last year’s Itemfield acquisition (now branded Complex Data Exchange), which integrates unstructured data (e.g., text, EDI documents). That could prove especially useful in money laundering applications, where forensic accountants must piece together paper trails while ensuring the documents all concern the same suspect transaction.

IT Forecast is Partly Cloudy (I)

While some of us have been busy sweating our taxes, the brunt of attention shifted to San Francisco this week, where Salesforce.com and Google chose to publicly consummate their infatuation. The object of the affection is Google Apps, which is being hyped as the Great White Hope against Microsoft Office as an on demand alternative. Specifically, Google Apps, Gmail, Calendar, and Google Talk will be integrated with the Salesforce.com platform, with the add-on available to subscribers for as little as $50 per user annually. By contrast, Microsoft Office Live is far earlier in its evolution, and for now is perceived as more of an afterthought.

In spite of the odd-couplish qualities (Salesforce’s over-the-top marketing vs. Google’s Zen approach), the pairing makes sense for one basic reason: neither is Microsoft (the enemy of thy enemy). And it adds a key pillar to Salesforce’s PaaS (platform-as-a-service) strategy, which is to become the de facto enterprise desktop and back office platform. Of course, there’s one critical little hurdle, as Josh Greenbaum has been pointing out over the past year: Google’s licensing terms are not exactly enterprise-friendly. Specifically, Google gains a worldwide license to reproduce your content for the purpose of promoting its services. What this illustrates is that, for all its enterprise aspirations, Google is still very much oriented toward a consumer-focused business model.

But such legal distractions are but a sideshow. Google could easily afford a lawyer smart enough to adjust its licensing if it could tear itself away for a moment from its sling-everything-at-the-wall strategy. The challenge is that Google is still very much the hyperactive, precocious child who has yet to find direction: he or she could grow up to be a rocket scientist, a poet, or simply somebody who gets rich stuffing merchant fliers around the neighborhood.

Whether Google gets serious about the enterprise market or not, and whether you want your application future to live inside Marc Benioff’s walled garden or not, there’s no question that cloud computing has become more than a passing fad.

While this does not portend an immediate mass migration, it points to the inevitability that enterprise computing must increasingly embrace the cloud. Not necessarily for everything (that will depend on the organization), but with finite budgets and resources, IT must, to paraphrase Geoffrey Moore, reexamine what exactly is its core while dispensing with the context (supporting activities). While Marc Benioff splashes the “No Software” logo around, if you strip out the hype, the matter for IT is not necessarily to outsource, but to decide where it makes sense to let somebody else run the infrastructure.

Dana Gardner, in a rambling post today, asked – and volunteered some answers on – how the cloud and so-called webby apps (apps that use basic web technologies, nothing fancy) might become part of SOA. In other words, do you really need to go through WS-religion and elaborate enterprise architecture exercises when there might be some services available in the cloud that are ripe for the picking?

Of course, if you really want to get religious, there’s the debate over what Dion Hinchcliffe and others have termed Web-Oriented Architecture (WOA), which they claim is a more doable alternative to SOA. “The left-hand turn that Web services took early on in the Internet story (circa 1999-2000) with SOAP, WSDL, UDDI, and WS-I Basic Profile turned out to be definitely not the right answer for the vast majority of integration scenarios,” Hinchcliffe wrote. In other words, while SOA has a finite set of well-defined endpoints, WOA is just the opposite: it consists of resources with assigned endpoints that can be located by search engines rather than UDDI registries; that are accessed using RESTful approaches via HTTP rather than SOAP; and whose service contracts are implicit, rather than formally spelled out in a WSDL web service description.
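The contrast is easiest to see side by side. Here’s the same hypothetical customer lookup phrased both ways – invented endpoints, no real service behind them: in the WOA style the resource is the URI and the contract is implicit; in the WS-* style there’s one service endpoint, an explicit envelope, and a WSDL somewhere describing it all.

```python
from urllib import request

# WOA style: locate the resource by URI and GET it. The "contract" is
# implicit in the URI structure and the representation returned.
woa_req = request.Request("https://api.example.com/customers/42")

# WS-* style: POST an explicit SOAP envelope to a single service endpoint
# whose operations and types a WSDL formally spells out.
soap_body = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><GetCustomer><id>42</id></GetCustomer></soap:Body>
</soap:Envelope>"""
soa_req = request.Request("https://api.example.com/CustomerService",
                          data=soap_body.encode("utf-8"),
                          headers={"Content-Type": "text/xml",
                                   "SOAPAction": "GetCustomer"})
```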

Regardless of whether you are embracing SOA or WOA, Gardner explains that there is no free lunch to integration if you take the cloud seriously. Because much of this is happening outside the firewall, you’ll have to establish clear data governance policies to set rules of engagement and permissible use.

Gardner then ventures over to the precipice by declaring, “Perhaps it’s time to fully divorce data from applications, and wed it all instead to people and groups, guided by roles and permissions, and therefore no longer co-located with applications or even enterprises.” He drills down further, discussing the reality that data is growing more portable than ever, blurring the line between public and internal (private) clouds, and speculates about the organizational ramifications of all this. It’s the opening to a much larger discussion.

But for now, we’ll review the first of a couple of interesting plays at virtualizing data services from the cloud. We had a chance to speak with Dave Linthicum, who recently took over as CEO of StrikeIron, a five-year-old, 25–30 person firm offering an online marketplace of data services out in the cloud. The occasion: this quarter, the company did what few others in the software business have done, growing its customer base 20%. We’d term the venture SOA without the religion, which is especially apt since Linthicum has not been terribly sanguine of late over the prospects for top-down, enterprise-architectural SOA approaches.

Instead, StrikeIron offers a marketplace of data services. Say you’d like to qualify customers for your CRM application, or you want to feed demand data to your suppliers. Instead of hooking up to Dun & Bradstreet or your suppliers yourself, you get access to a syndicated feed from D&B (or other commercial sources), or you get a feed published for you. You still have to perform some integration on your end. Unlike Grand Central Communications, an ill-fated predecessor with which Linthicum was previously associated, StrikeIron is not masquerading as a self-contained enterprise integration hub. But at least you’re not looking at extensive enterprise architecture exercises. And subscription pricing eliminates the bulk of the upfront design and architectural costs normally associated with SOA implementation.

In that sense, StrikeIron is an answer to the WOA critics who contend that web services took a left-hand turn toward complexity. Next week (after the NDA expires), we’ll spotlight a provider from the WOA side with its own response.

Integration Competency Centers: Optimization vs. Investment

Whenever we hang out at enterprise architecture conferences, our first question to those folks is: how do you make yourselves relevant? All too often, the ideals of enterprise architecture get short-circuited by tactical needs (the point of pain must be resolved yesterday); budget constraints (we don’t want to pay more for up-front architecting so somebody else gets a free ride from reuse); and of course politics (no explanation needed).

Evidently the same barb has been aimed at proponents of Integration Competency Centers (ICCs) ever since Gartner began writing about them circa 2002, and since Integration Consortium head John Schmidt and now-Informatica colleague David Lyle authored a book back in 2005 on how to create and run them.

In response, Schmidt, who recently joined Informatica to head its newly formed ICC consulting practice, collaborated with colleagues on a white paper (registration is required) outlining some practical suggestions for making the economic case. For Informatica’s customer base, Schmidt is preaching to the already-converted: a survey of the base indicates that 43% either have a center established or are in the midst of rolling one out.

The first half of the paper, offering a primer on ICCs, literally borrows a few pages from Schmidt’s book. There’s no single kind of integration center; missions range from pure advice to delivering actual services, with varying levels of control. For instance, some may simply provide a repository of best practices, act in a standards advisory or policing role, or deliver actual services.

Finally, midway through, the paper cuts to the chase, starting by laying out the models for how ICCs are financed: a la carte by project, as centrally funded operations taxed as part of IT overhead, or anything in between.

Our take is that the case for any kind of central funding is obviously tougher today, given the near-term likelihood of the economy and IT budgets heading south.

Furthermore, central funding often clouds the transparency increasingly being demanded of IT, not to mention that, just as with taxes, it’s easy to see the costs and difficult to enumerate the value. Significantly, the paper says that while transparency is a goal, there is a clear limit. “Note that cost transparency does not mean detailed cost accounting; that is a mistake and invites your customer (internal or otherwise) to run your business,” it says, offering as an example the fact that you don’t expect a restaurant to disclose the recipes for the dinners it serves you: “Transparency does not mean you should allow the customer to wander into the kitchen to see how the food is prepared or to provide details about where the ingredients are purchased.”

If funding is more project-based, that dredges up the issue of who pays the freight for sound architecture, especially if the goal is some degree of reuse. Whether you are building a data integration platform or a service-oriented architecture (or both), there’s going to be extra effort expended up front to ensure that the architecture does not result in a one-off solution. And that raises the question: why should my project pay for somebody else’s reuse? Short of central funding, budgetary or salary incentives are probably the best route for rewarding the finer aspects of human nature.

But that brings up a related dilemma facing ICCs: if shared infrastructure is involved, how do you make the cost case? As the paper notes, the typical hurdles are pinpointing and communicating the right outcome metrics, the mismatch between project cycles and budgeting/internal planning horizons, and securing stakeholder buy-in, especially given the typical pace of staff turnover.

The paper listed four possible strategies: the quick win that comes from small, incremental projects targeting immediate pain points; executive vision, which is great as long as you have the same C-level team in place; riding the “wave” of a large project with tangible ROI in which you implement some of the future building blocks (a sly way of slipping in upfront payment for future benefits); and, probably the toughest of the bunch, creating that wave (e.g., we’ve just consummated a dozen acquisitions and we’re drowning in redundancy).

What was useful were the case study examples backing these methods, which illustrated some of the real-world hurdles you’d face. For instance, the typical hassle of getting the numbers for the business case, which often degenerates into political tugs of war: teams reluctant to generate time or cost numbers on the excuse that nobody has time to collect them, masking lukewarm support for the idea of integration centers or fear of losing control to some central authority. Or another case where an IT executive was leery of quoting the cost of a la carte development, so as not to publicize how costly his team of developers really was – with the resolution being to split the difference (more modest costs that were still just high enough to support the business case).

For the example of a team that mounted a “create the wave” strategy, it took six months to assemble all the numbers – 5 years of history, 3 years of projections, and monthly project costs for 2 years (the spreadsheet exceeded 13 MB) – showing that a $20 million investment would yield a $25 million annual net operational saving.
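For what it’s worth, the headline arithmetic pencils out to a sub-year payback; a trivial sanity check:

```python
investment = 20_000_000       # one-time cost from the case study
annual_saving = 25_000_000    # projected net operational saving per year

print(f"simple payback: {investment / annual_saving:.1f} years")   # 0.8 years
print(f"3-year net benefit: ${3 * annual_saving - investment:,}")  # $55,000,000
```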

In many ways, the outlook for forming ICCs mirrors that for SOA, or any other EA initiative. If your organization is already invested in a strategy, it pays to have somebody tasked with optimizing it; but with the economy going south, for the rest of us the “quick win” approach described by the authors is going to be the most realistic. And to deal with the inevitable resistance, maybe it doesn’t make sense to initially call it a competency center, because to cynics and budget hawks that might sound too much akin to the 100-year troop commitment myth. Instead, the most expedient course would probably be to promote it as a way to optimize and stretch existing dollars, rather than invest new ones.

Is SOA Getting Boring? A Conversation with Steve Mills

At IBM’s annual Impact SOA bash this week, software group head Steve Mills stated that the next frontier for SOA is really not a frontier at all: it’s the basic blocking and tackling of getting Enterprise Service Bus (ESB) backbones to deliver the high levels of ACID reliability and fault recovery now taken for granted with OLTP transaction systems. In other words, when you start thinking about enterprise SOA, you’d better expect the rollback, compensation, and high availability features that are taken for granted with online transaction systems.
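For readers who haven’t lived through OLTP, “compensation” here means the saga-style pattern sketched below – our illustration, not any vendor’s ESB machinery: each step in a composite transaction registers an undo action, and a failure replays the undos in reverse.

```python
def run_with_compensation(steps):
    """Run (action, compensate) pairs; on failure, undo completed steps in reverse."""
    completed = []
    try:
        for action, compensate in steps:
            action()
            completed.append(compensate)
    except Exception:
        for compensate in reversed(completed):
            compensate()   # e.g., release the reservation, refund the charge
        raise

# Hypothetical composite: reserve inventory, then charge the card.
run_with_compensation([
    (lambda: print("reserve inventory"), lambda: print("release inventory")),
    (lambda: print("charge card"),       lambda: print("refund charge")),
])
```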

By comparison, IBM’s message last year was that it was time for SOA to graduate from IT and get driven by the business.

No matter; when we sat down with Mills afterward, we asked him if this meant that SOA was getting, well, kind of humdrum. No more quibbling about whose federated identity standard to latch onto; what matters most are the basics of enterprise systems. Replying tongue in cheek that SOA has always been boring, Mills added that the question no longer centers on whether SOA will work. But he notes that with more moving parts, delivering that reliability presents more of a challenge.

Of course, it took about 20 years for enterprise databases to achieve that kind of rock-solid assurance, but applying the lessons learned should make the journey quicker today. Nonetheless, compared to database transactions, SOA can involve far more complicated use cases. For starters, there’s the architecture, which calls for a middle-tier abstraction layer that separates the service from whatever physical systems implement it. Of course, you could argue that the golden age of transaction processing introduced its own middleware: transaction monitors.

Nonetheless, the dynamic nature of SOA, where services can be orchestrated and service providers swapped at run time, could make delivering ACID reliability for run-of-the-mill OLTP systems look almost like child’s play. Troubleshooting could require serious detective work. For instance, when a customer history service is composited from order history and account identifiers in ERP plus interaction history from CRM, where do you start looking when the service fails to execute?
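One mitigation for that detective work is to make the composite itself name the culprit. A sketch of our own, with stubbed-out backends standing in for ERP and CRM:

```python
def erp_orders(cust_id):       raise TimeoutError("ERP not responding")
def erp_accounts(cust_id):     return {"account": "A-17"}
def crm_interactions(cust_id): return [{"contact": "2008-04-02"}]

def customer_history(cust_id):
    sources = {"ERP/orders": erp_orders,
               "ERP/accounts": erp_accounts,
               "CRM/interactions": crm_interactions}
    history = {}
    for name, fetch in sources.items():
        try:
            history[name] = fetch(cust_id)
        except Exception as exc:
            # Provenance turns "the service failed" into "ERP/orders failed".
            raise RuntimeError(f"{name} failed: {exc}") from exc
    return history

customer_history(42)   # raises RuntimeError: ERP/orders failed: ERP not responding
```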

There’s yet another parallel between SOA and the evolution of databases. Twenty years ago, there were debates over whether SQL databases could handle the load and deliver the performance of legacy databases or file systems. The answer was to throw Moore’s Law at the problem. Today, there are similar questions regarding SOA, because if web services standards are used, that means a lot of fat, resource-hungry XML messages whizzing around. Mills’ answer is that there’s a glut of underutilized processing capacity out there and a crying need for virtualization to make that iron available for XML.

Obviously, SOA plays to IBM’s strengths: large systems, and ways of integrating them. But the world of run-time SOA governance remains fragmented, which explained AmberPoint’s presence in the vendor exhibit area.

That hits on another point, which is IBM’s contention that it is best situated to glue everything together. Mills contended that SOA brings pressure on the SAPs and Oracles of the world to open their APIs using SCA (Service Component Architecture) and SDO (Service Data Objects). Obliquely, it points to new competition for where new functionality gets situated: inside the application stack, or at the BPM or service composition layer in the middle. Mills denies that there’s a power struggle going on. In his words, the question of who wins or loses when you add SOA to the mix is “one of those silly debates… We’re not saying what we do is more important than what SAP does.”

Well, maybe it’s silly, but IBM has plenty of accounts where there’s a lot of SAP and Oracle, so there’s going to be competition for that middle tier.

But what about the other half of the enterprise market that uses homegrown rather than packaged apps? Later that day, we sat in on another session exploring how IBM is building a composite apps business for that segment (more specifically, for the banking, insurance, telco, and healthcare sectors). This was a follow-up to a session we caught a year ago, just as the ink had dried on IBM’s Webify acquisition, which brought the underlying technology for building vertical industry SOA frameworks.

Balance-of-power questions abounded there as well, although IBM is taking pains not to call these composite applications, but composite solutions. The inference is that solutions are supersets of applications and, thanks to SOA, far more dynamic.

With that assumption, IBM is still playing linchpin: it owns the underlying framework, and, reprising the role performed by its global business services group, it also plays arbiter in recommending solutions. This year, it introduced tooling that allows customers to add their own framework extensions, or to get IBM to integrate third parties not currently on its preferred partner list. Three partners – Kana, for contact management; Seec, for insurance business components; and Chordiant, for “customer experience solutions” – testified that of course it’s a two-way partnership (they’re also veteran IBM business partners).

But IBM is clearly first among equals: the framework is its own, it is responsible for level 1 support, and it is therefore positioned to assert account control. Realizing that the technology has surged ahead of the business model, though, IBM and its partners are still shaping the rules of engagement for composites as they go along.

From a technology standpoint, SOA might be getting a lot more boring. The impacts on vendor business relationships, however, are for now anything but.