Who makes the business case for SOA?

A few days ago, TechTarget’s Rich Seeley posed the question, “Who owns the SOA business case?” He contacted us and several colleagues for his article, published last Friday. Frankly, we were a bit stumped. Our best guess is that it’s not the business’s job to specify an architecture per se. You can’t expect business people to ask for “an SOA.” They have a business requirement, and IT responds. Ultimately, the job of selecting and justifying a technical architecture should fall to technology architects. Anyway, as our colleagues at ZapThink put it, SOA is an architectural practice; it isn’t a product or technology that you buy or install.

But somehow we don’t feel that we really answered Seeley’s question.

In Seeley’s article, Software AG/webMethods’ Miko Matsumura suggested that there should be joint responsibility, using the metaphor of hiring an electrician: the electrician knows the wiring, but you own the house. Current Analysis’ Brad Shimmin chimed in that the situation varies: the business owns it if it has a specific functional requirement, while IT owns it if the initiative is something like infrastructure modernization.

We agreed as far as those statements went, but something still didn’t jibe. The fact is, organizations embrace SOA not to modernize software or enterprise architectures for their own sake, but to make their businesses more agile. But wait a minute, haven’t we heard those promises before?

Enterprise apps of the 80s and 90s were supposed to make your organization more agile because everyone would now read from the same page. Then enterprise application integration was supposed to provide the integration that your enterprise apps originally promised but never delivered. Then web-enabling all those apps and integrations was supposed to make your enterprise move at Internet speed because everyone could access those systems from any browser.

Rubbing salt into the wound, those enterprise app projects weren’t simply migrations, but opportunities to reengineer your enterprise. All too often, once organizations started reengineering, well, you know the rest of the tale…

So SOA isn’t the first time that we’ve heard the statement that business and IT are all in this together.

But there is a difference with SOA because of one core assumption: SOA is supposed to be loosely coupled. Data should no longer be tied to particular databases, and processes should no longer be joined at the hip to specific business applications. Using SOA, you should be able to configure new processes by composing services exposed by all those apps and data sources. The good news is that this sounds a heckuva lot less traumatic than reengineering your enterprise. The flipside, however, is that maybe SOA shouldn’t be treated as purely a technology architecture decision, as we first assumed.
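
To make the composition idea concrete, here’s a minimal sketch of what stitching a new process out of existing services can look like. The service names, endpoints, and data shapes below are invented for illustration; the point is that the new process depends only on the service contracts, not on the databases or packaged apps behind them.

```typescript
// Hypothetical data shapes for two services exposed by existing apps.
interface Customer { id: string; creditLimit: number; }
interface Order { customerId: string; total: number; }

// A customer-lookup service, presumably fronting some packaged CRM app.
async function getCustomer(id: string): Promise<Customer> {
  const res = await fetch(`https://example.internal/customer-service/customers/${id}`);
  return res.json();
}

// An order-entry service, presumably fronting the ERP system.
async function submitOrder(order: Order): Promise<{ orderId: string }> {
  const res = await fetch("https://example.internal/order-service/orders", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(order),
  });
  return res.json();
}

// A new process composed from the two services: check credit, then place the order.
// Nothing here knows which database or application sits behind either call.
export async function placeOrderIfWithinCredit(order: Order): Promise<string | null> {
  const customer = await getCustomer(order.customerId);
  if (order.total > customer.creditLimit) return null; // reject: over the credit limit
  const { orderId } = await submitOrder(order);
  return orderId;
}
```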

In his ZDNet blog earlier today, Dana Gardner helped crystallize why we felt so off base. He stated that raising the question of the business value of SOA might be “jumping the gun.” He points to the practical landscape in most organizations, where no single entity typically owns specific business processes. Consequently, bottom-up approaches, process by process, may be self-defeating. Similarly, top-down mandates may prove too arbitrary, failing to acknowledge the interdependencies of all the moving parts or the varying velocities at which different parts of the organization are prepared for change.

Citing Dr. Paul Brown, whose book Gardner recently reviewed, he concluded that the business process would be the right level at which to assign “ownership” of SOA. Remember, the rationale for SOA isn’t SOA itself, but improving the way the business functions, which at the end of the day means improving the flexibility and responsiveness of business processes.

Ownership of SOA would therefore be the domain, preferably, of multifunctional teams choreographed by architects or evangelists whose role is to provide the big picture. In that sense, SOA would be owned from the middle, from which it would spread down to line organizations while gradually influencing enterprise architecture by osmosis.

Of course, multifunctional teams were also supposed to be the core of the reengineering efforts that accompanied all those SAP implementations. In both cases the tactic was the same (get broad-based input), even if the ultimate strategy was different. But it does bring up one key issue: the ends may be different, but could the problems be the same? Just because SOA differs architecturally from ERP doesn’t guarantee that cross-functional teams won’t vastly overrun their mission.

No answer is perfect, but by process of elimination, Gardner’s notions probably provide the most practical response to the question of who “owns” the justification of SOA.

HP Incubates Opsware

HP Software’s “Extreme Makeover” took another step at the end of last week when it closed the Opsware deal. Since the deal was announced almost exactly two months ago, we’ve been wondering whether it would be a replay of the Mercury deal, which amounted to more of a reverse acquisition. Yeah, HP bought Mercury, but many of the Mercury principals left standing (the ones tainted by indictment were long gone by that point) quickly took the helm in sales, marketing, and product development.

The Opsware script will be a bit different. As in the Mercury deal, an executive from the acquired company takes a top role: Opsware CEO Ben Horowitz now takes over product R&D for the combined entity. But the rest of Opsware is being, as HP terms it, “incubated.” It amounts to HP leaving the Opsware growth machine alone for now.

That makes sense, for a limited time. Much of Opsware’s appeal was that it was the fresh player on the block, managing change to IT infrastructure in real time in a way that has until now been beyond the grasp of “mainstream” systems management frameworks from the usual suspects. And the company was just beginning to break into the black.

With last week’s announcement of System 7, Opsware has made significant strides toward addressing a key weakness: the lack of integration between its server, network, and storage automation pieces. It has glued them together with its process automation piece (from its own iConclude acquisition), so that changes in the provisioning of IT infrastructure, from server back to storage and network, can now be automatically triggered by an ITIL workflow, and any action taken in any of the modules can now automatically bring up the appropriate workflow.
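
For readers who haven’t lived inside run-book automation, the wiring pattern is roughly what’s sketched below. To be clear, this is not Opsware’s API; the event shape, module names, and workflow registry are invented purely to illustrate how an action in one module can trigger an ITIL-style workflow.

```typescript
// Illustrative pattern only: infrastructure-change events trigger workflows.
type ChangeEvent = {
  module: "server" | "network" | "storage";
  action: string;   // e.g. "provision", "patch", "decommission"
  target: string;   // the affected resource
};

type Workflow = (event: ChangeEvent) => Promise<void>;

// Hypothetical registry mapping event types to ITIL-style workflows.
const workflows = new Map<string, Workflow>();

workflows.set("server:provision", async (event) => {
  // In a real run book this step might allocate storage, update the network,
  // and record the change for approval and audit.
  console.log(`Running provisioning workflow for ${event.target}`);
});

// Any action taken in any module looks up and launches the appropriate workflow.
export async function onChange(event: ChangeEvent): Promise<void> {
  const workflow = workflows.get(`${event.module}:${event.action}`);
  if (workflow) {
    await workflow(event);
  } else {
    console.warn(`No workflow registered for ${event.module}:${event.action}`);
  }
}
```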

So HP is probably smart in not wanting to rain on Opsware’s parade, for now.

In the long run, Opsware must become part of the mother ship because its products have too many potential synergies with the rest of HP Software. There’s a long list, but we’ll just give a couple of examples: there are obvious tie-ins between HP’s Service Desk and Business Availability Center offerings and Opsware’s change management capabilities. In other cases, HP’s portfolio could provide the depth missing from Opsware’s offerings, with the CMDB (configuration management database, the system of record for the layout of your IT infrastructure) being the prime example.

HP’s strategy reflects the common dilemma that software firms face when they acquire companies that are earlier in the growth curve. Assuming you’ve bought a company to add to your product set (as opposed to expanding your market footprint, as EMC did with VMware), you’re going to find yourself in a balancing act. You don’t want your legacy culture to smother the innovation machine you’ve just acquired, but you also don’t want to miss out on the synergies. Besides, isn’t that why you bought the company in the first place?

Will Frameworks Tame Ajax?

With AjaxWorld convening this week, we’re hearing a lot from vendors promoting tools that could help enforce some form of discipline over Ajax development. We hear a lot about frameworks that not only eliminate the need for raw JavaScript coding, but also automate event handling, connect to structured or unstructured data, or deliver all of the above.

Oracle’s Ted Farrell, who’ll be keynoting, is making the case for Ajax frameworks. He argues that
1. Technologies come and go.
2. You can’t always find the widgets you need if you’re dealing with enterprise legacy.
3. Enterprise mashups can’t be treated as casually as consumer-driven ones.

Let’s drill down point by point. First, if you look at the history of the web, we’ve had multiple generations of browsers that have not always fully supported W3C standards. Over the years, there have been numerous attempts to make web pages more dynamic, beginning with Java applets a decade ago and more recently encompassing Ajax and newer rich frameworks from the likes of Adobe, Microsoft, and Laszlo Systems. So give this point to Farrell: web technologies come and go.

Second, if you’re trying to go up against legacy apps that may not have been moved to the web, or exposed through standard formats like portlets or web services, you’re going to have a heckuva time unless you’ve found some tool to wrap or tag them. Beyond mainframe apps, just look at the diverse stable of apps now under Oracle’s tent. Legacy Oracle users may have apps written in Oracle Forms, while PeopleSoft, Siebel, and J.D. Edwards apps were written in a mélange of C, C++, Java, or even RPG. Oracle Fusion, anybody?

Finally, there’s the case that enterprise mashups must be treated more seriously. Take that Google map where you’ve just mashed up the names and locations of your customers in a particular region. It may be fine to post it on an intranet for the sales team, but if those names somehow leaked out to your external website, or were pried open via SQL injection, you’ve got a HIPAA, Gramm-Leach-Bliley, or even PCI (Payment Card Industry) violation on your hands.
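
For anyone who hasn’t watched an injection attack up close, here’s a minimal sketch of the exposure. The table, columns, and the query(sql, params) client interface are all hypothetical; the contrast between concatenated and parameterized SQL is the point.

```typescript
// A hypothetical database client with a query(sql, params) method.
interface DbClient {
  query(sql: string, params?: unknown[]): Promise<unknown[]>;
}

// Risky: user input concatenated straight into SQL. A crafted region value
// such as "West' OR '1'='1" would pull every customer, not just one region's.
export async function customersByRegionUnsafe(db: DbClient, region: string) {
  return db.query(`SELECT name, address FROM customers WHERE region = '${region}'`);
}

// Safer: a parameterized query keeps the input as data, not executable SQL.
export async function customersByRegion(db: DbClient, region: string) {
  return db.query("SELECT name, address FROM customers WHERE region = ?", [region]);
}
```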

OK, so you decide that a framework might not be such a bad idea. Well, most of the Ajax frameworks out there are currently focused on abstracting out the JavaScript, giving you declarative, 4GL-like tooling to piece those pages together. Others leverage the middleware you already have to access back-end internal assets that might not be exposed on a web page. And a few are starting to provide links to governance so that JavaScript objects deployed at run time don’t somehow get hijacked along the way.
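
To show what “abstracting out the JavaScript” buys you, compare the hand-wired plumbing below with a declarative description of the same widget. The element IDs, the /customers endpoint, and the widget descriptor are all invented; no particular vendor’s framework looks exactly like this.

```typescript
// Raw Ajax-era plumbing: hand-wired DOM lookups, event handling, and fetches.
export function wireUpCustomerSearch(): void {
  const button = document.getElementById("search")!;
  button.addEventListener("click", async () => {
    const term = (document.getElementById("term") as HTMLInputElement).value;
    const res = await fetch(`/customers?q=${encodeURIComponent(term)}`);
    const names: string[] = await res.json();
    document.getElementById("results")!.innerHTML =
      names.map((n) => `<li>${n}</li>`).join("");
  });
}

// What a declarative framework might let you write instead: describe the widget,
// its data source, and its bindings, and let the tooling generate the plumbing.
export const customerSearchWidget = {
  type: "searchList",
  dataSource: { url: "/customers", queryParam: "q" },
  trigger: "#search",
  bindTo: "#results",
};
```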

Ultimately, whether you add a framework to your Ajax development depends on how formal or informal your organization’s mashup strategy is. Given the outlaw lineage of this form of development (remember, the term Ajax itself was coined by a guy whose first and middle names were “Jesse James”), until now the idea of a mashup strategy was something of an oxymoron. Nonetheless, some enterprises are indeed starting to take Web 2.0, and more specifically Ajax-style programming, more seriously.

Whether an organization can, or will, have a strategy for mashups rests on a number of factors. First, how controlling is IT: is there a strong tradition of governance, or is it an afterthought? Who’s doing Web 2.0 development, and why? Is Web 2.0 development now considered a formal extension of your enterprise software development portfolio and handled by your software development organization? Or are line organizations taking the law into their own hands to do Web 2.0 because it’s easy, and it’s a workaround to IT’s endless backlogs? And are the mashups intended to live for the moment, using assets that are easily accessible, and then be disposed of? Or are mashups supposed to carry a formal lifecycle?

One of Farrell’s arguments for frameworks is that they’re meant to abstract your Web 2.0 development, not only from the vagaries of back-end systems, but also from the technology churn that the web environment has proven to be. We can’t argue with that.

But that argument is valid only if your organization doesn’t keep swapping out the Ajax framework itself. If your organization lacks a Web 2.0 strategy, and/or your developers are not from the IT mother ship, you might buy frameworks, but they may be only as permanent as the PDQ mashups that they churn out.

Depending on your point of view, there’s hope. Outlaw technologies like PCs and the Web initially snuck in through the back door, only to eventually get accepted and “civilized” by IT. Frameworks might get the last laugh after all.

Where’s the Meaning?

How often have you heard blown project budgets blamed on unanticipated systems integration costs? For good reason, nobody wants to do customized point-to-point integrations if they can help it: it’s difficult if not impossible to leverage the work.

But in one respect, such integrations kept one potentially messy issue contained. When working against a designated source and target, you became so intimately familiar with the data you were integrating that you didn’t have to worry about its context or meaning.

Nonetheless, when you think about reusing software assets, context stares you in the face. For instance, what if you want to reuse a process for tracking customer preferences in another business unit, only to learn that privacy laws prevent the use of some portions of that data? And if another part of your business has a different definition of what constitutes a customer, the divergent meanings become show-stoppers.
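
A toy example makes the mismatch plain. The two record layouts below are invented, but they show how two systems can both say “customer” while meaning different things, and how privacy-sensitive fields tag along for the ride.

```typescript
// In the billing system, a customer is anyone with an account, active or not.
export interface BillingCustomer {
  accountId: string;
  name: string;
  status: "active" | "closed";
}

// In the marketing system, a "customer" is a contact who has opted in, plus
// preference data whose reuse elsewhere may be restricted by privacy law.
export interface MarketingCustomer {
  contactId: string;
  name: string;
  optedIn: boolean;
  preferences: string[]; // reuse of this data may be legally constrained
}

// Any "reusable" customer-preference process has to reconcile the two models,
// and decide what to do with records that exist in one system but not the other.
```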

Admittedly, given the difficulty of attaining software reuse, concerns about context or the meaning of data remained academic. eBizQ’s Beth Gold-Bernstein recalled being at the event where IBM announced SNA and told everybody to start building their enterprise data dictionaries. “I worked with organizations that did that. They had the books on their shelves, but it didn’t do anything. They were just books on the shelves.”

And in fact, thinking about systems that can automatically derive meaning or context from data conjures up some of the original goals of Artificial Intelligence, which was supposed to produce software that could think. Japan mounted a fifth-generation computing project back in the 1980s that was supposed to leapfrog the West with AI software, replicating its successes with lean manufacturing. We’re not terribly sure whether the Japanese effort actually got as far as generating shelfware.

About a decade ago, web pioneer and W3C director Tim Berners-Lee began pushing the idea of a Semantic Web that would be searchable not only by keywords, but by real meaning. Along the way, the W3C developed several standards, including the Resource Description Framework (RDF) and the Web Ontology Language (OWL), that specify how to represent entity relationships and meanings using XML. But today we’re still on Web 2.0, which is a more dynamic and interactive, but hardly semantic, place.
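
For the uninitiated, RDF boils down to subject-predicate-object triples. The sketch below models a couple of triples as plain data rather than in actual RDF/XML or OWL syntax, and the URIs are made up rather than drawn from any real ontology; it’s only meant to show what encoding relationships, not just keywords, makes possible.

```typescript
// A triple asserts a relationship: subject --predicate--> object.
type Triple = { subject: string; predicate: string; object: string };

const triples: Triple[] = [
  {
    subject: "http://example.com/people#alice",
    predicate: "http://example.com/vocab#worksFor",
    object: "http://example.com/orgs#acme",
  },
  {
    subject: "http://example.com/orgs#acme",
    predicate: "http://example.com/vocab#locatedIn",
    object: "http://example.com/places#boston",
  },
];

// With meaning encoded as relationships, a query can follow links instead of
// matching keywords: "who works for an organization located in Boston?"
const orgsInBoston = triples
  .filter((t) => t.predicate.endsWith("#locatedIn") && t.object.endsWith("#boston"))
  .map((t) => t.subject);

const peopleWorkingThere = triples
  .filter((t) => t.predicate.endsWith("#worksFor") && orgsInBoston.includes(t.object))
  .map((t) => t.subject);

console.log(peopleWorkingThere); // ["http://example.com/people#alice"]
```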

The emergence of SOA has made the possibility of software reuse less academic. According to IT architectural consultant Todd Biske, a consistent semantic model is critical to SOA if your services are going to be adequately consumed. Without such a model, suggests Biske, it’ll be harder for users to figure out if the service is what they’re looking for.

While it falls short of semantics in the true sense, the use of metadata has exploded through integration middleware and SOA registries/repositories that provide descriptors to help you, or some automated process, find the right data or service. There are also tools from providers like Software AG that are starting to infer relationships between different web services. This is all tactical semantics with a lower-case “s”: it provides descriptors that present, at best, a card catalog of “what” information is out there and, from a technical standpoint, “how” to access it.
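
Here’s roughly what that card-catalog level of metadata looks like. The fields and values below are invented, not taken from CentraSite or any particular registry; note that they capture the “what” and the “how,” but say nothing about what the data actually means.

```typescript
// A registry-style descriptor: enough to find and call a service,
// not enough to know what its data means in business terms.
export interface ServiceDescriptor {
  name: string;
  description: string; // free text a human has to interpret
  endpoint: string;    // the "how": where to call it
  contract: string;    // e.g. a WSDL or schema location
  tags: string[];      // keywords, not formal semantics
}

export const customerLookup: ServiceDescriptor = {
  name: "CustomerLookup",
  description: "Returns customer records by region",
  endpoint: "https://example.internal/services/customer-lookup",
  contract: "https://example.internal/services/customer-lookup?wsdl",
  tags: ["customer", "CRM", "read-only"],
};
```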

It may be lower-case “semantic web,” but it’s a useful one. And that’s similar to the lower-case “ai” that spawned modest pieces of functionality that didn’t make machines smarter per se, but made them more convenient (e.g., context-based menus).

Our sense is also that we’re ages away from the Semantic Web, or Semantic Services, with a capital “S.” Current Analysis principal analyst and longtime Network World contributor Jim Kobielus likened the challenge to a “boil the ocean” initiative during a recent Dana Gardner podcast. Few have covered the topic as extensively. In a recent Network World column, Kobielus summarized the prospects: most vendors are taking a wait-and-see attitude. For instance, Microsoft, which is sponsoring a project code-named Astoria to extend ADO.NET with a new entity data model that would implement some of the W3C semantic web standards, has yet to say whether it will implement any of the technology in SQL Server.

Kobielus believes that it will take at least another decade before any of this is commercialized. While our gut tells us he’s being optimistic, we find it hard to argue with his facts. Besides, he adds, it took a full half-century for hypertext to advance from “Utopian Vision” to something taken for granted today on the web.

Breaching the Blood-Brain Barrier

A month after Software AG unveiled its roadmap for converging webMethods products, it is releasing the first of the new or enhanced offerings. What piqued our interest was one aspect of the release, where Software AG is starting to seed webMethods BAM (Business Activity Monitoring) dashboards to other parts of the stack. In this case, they’re extending the webMethods Optimize BAM tool from BPM to the B2B piece.

So why does this matter? As its name implies, BAM is about monitoring business processes. But if you think about it, it could just as well apply to the operational aspects of deploying SOA, from trending compliance with service level agreements down to the nitty gritty, such as the speed at which the XML in SOAP messages is being parsed.
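
To ground that, here’s a back-of-the-envelope sketch of the kind of operational metric such a dashboard could track for a web service. The record shape, the function names, and the 500 ms threshold in the usage note are all invented for illustration.

```typescript
// One monitored call to a web service.
interface ServiceCall {
  service: string;
  latencyMs: number; // end-to-end response time
  parseMs: number;   // time spent parsing the SOAP/XML payload
  failed: boolean;
}

// Fraction of calls in a window that met the SLA (succeeded within the threshold).
export function slaCompliance(calls: ServiceCall[], maxLatencyMs: number): number {
  if (calls.length === 0) return 1;
  const ok = calls.filter((c) => !c.failed && c.latencyMs <= maxLatencyMs).length;
  return ok / calls.length;
}

// Average XML parse time, the kind of nitty-gritty a developer-facing view might show.
export function avgParseTimeMs(calls: ServiceCall[]): number {
  if (calls.length === 0) return 0;
  return calls.reduce((sum, c) => sum + c.parseMs, 0) / calls.length;
}

// e.g. slaCompliance(lastHourCalls, 500) === 0.98 means 98% of calls beat 500 ms.
```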

So far so good. What Software AG is trying to do is extend the same dashboarding engine that has been owned by the line-of-business folks, who want to monitor high-level processes, to the software development folks, who are charged with exposing those processes as web services.

But when it comes down to the thorny issue of monitoring compliance with service level agreements (SLAs), Software AG’s moves are just a very modest first step. With a common dashboarding engine, you might be able to get software developers to improve the efficiency of a web service through programmatic modifications, but at the end of the day (and hopefully a lot earlier!), you have to run the services on physical IT infrastructure. And as we’ve noted in the past, when it comes to fixing service level issues, today’s processes, technologies, and organizational structures remain highly silo’ed. The software development folks own the SOA implementation, while IT operations own the data center.

It’s an issue that HP Software, which has undergone a reverse acquisition by Mercury (yes, HP bought it, but many ex-Mercury execs are now running it), is striving to bridge. And with Software AG’s latest moves to extend Optimize, it’s a goal that’s on its horizon as well.

The challenge, however, is that as the IT operations folks embrace ITIL and business service optimization or management tools (a.k.a. retooled offerings from systems management vendors), you may wind up with multiple islands of automation, each operating its own silo’ed dashboard claiming to show the truth about service levels, whether those service levels pertain to how fast IT resolves an incident, how fast the database runs, or how available a specific web service is.

Software AG says that it eventually wants to integrate metadata from its CentraSite SOA repository with the CMDBs (configuration management databases) of ITIL-oriented tools. We wonder how it, and its presumed ITIL vendor partner, will sell the idea to their respective constituencies, and more importantly, who’s ultimately going to claim accountability for ensuring that web services meet the SLAs.