It’s been an interesting week, as we’ve had a chance to dialog with enterprise architects and the BPM community at back-to-back conferences presented by The Open Group and OMG, respectively. And as we hinted during our last posting, one theme that emerged as we conversed live via blog (and later on stage) with Dana Gardner, Todd Biske, Beth Gold-Bernstein, and Eric Knorr at the Open Group EA Practitioner’s Conference was how isolated SOA developers appear to be from both groups.
Specifically, the folks doing SOA projects are perceived to be estranged from the enterprise architects who are supposed to (depending on the organization) enforce or promote technology standards and best practices across the enterprise. And they are seen as equally estranged from the domain experts and process owners in the business who manage business processes. Whether you are an EA or a business analyst, chances are you view SOA developers as just the latest generation of cowboy programmer.
In the case of EA, it’s that programmers make de facto architectural choices in delivering projects that could wreak havoc down the road when it comes to reusability, maintainability, or accountability to enterprise compliance policies. According to Gold-Bernstein, most web services being exposed today are rudimentary data services that in many cases are conventional remote procedure calls (RPCs) with web service interfaces that are SOA in name only.
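The distinction Gold-Bernstein draws can be sketched in a few lines of code. This is purely a hypothetical illustration (every class, method, and field name below is invented for the sketch, not drawn from any system she cited): slapping a web service interface on a chatty, field-at-a-time RPC facade leaves callers coupled to the underlying data model, whereas a coarse-grained service exposes a business-level operation.

```python
# Hypothetical illustration: an RPC-style "service" vs. a coarser-grained
# business service. All names here are invented for the sketch.

class CustomerRpcFacade:
    """Fine-grained, chatty interface: one remote call per field.

    Wrapping these methods in WSDL yields a 'web service' that is SOA
    in name only; callers still depend on the underlying data model.
    """
    def __init__(self, db):
        self.db = db

    def get_name(self, customer_id):
        return self.db[customer_id]["name"]

    def get_credit_limit(self, customer_id):
        return self.db[customer_id]["credit_limit"]


class CustomerAccountService:
    """Coarse-grained service aligned with a business operation."""
    def __init__(self, db):
        self.db = db

    def open_account_summary(self, customer_id):
        # One call returns a business-level document, hiding the data model.
        record = self.db[customer_id]
        return {
            "customer": record["name"],
            "credit_limit": record["credit_limit"],
            "standing": "good" if record["credit_limit"] > 0 else "review",
        }


db = {42: {"name": "Acme Corp", "credit_limit": 10000}}
print(CustomerAccountService(db).open_account_summary(42))
```

The point of the contrast: reusing the second interface doesn’t require knowing how customer records are stored, which is what makes it a service rather than an RPC with angle brackets.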
Adding insult to injury, SOA developers are accused of unleashing web services with scant awareness of the subtleties of the business process(es) that they are exposing.
Of course, it’s easy to dump on developers because they are the poor folks down on the front lines who actually must deliver tangible results for sponsors who are rightly concerned about their own projects. And lacking the right funding mechanism, project sponsors are not exactly willing to bear the costs alone of generalizing their solutions for the rest of the enterprise. Although it might be reasonable to expect developers to observe enterprise architectural standards, you can’t knock them for failing to develop reusable assets if nobody’s going to pay the necessary surtax.
In the case of business process management, SOA developers are similarly estranged from process owners. In this instance, it’s a matter of reconciling who makes the architectural decisions for integrating business processes. A couple days after the Open Group, we presented our take on the cultural divide between business process and SOA architects/developers at OMG’s BPM Think Tank.
You can download our presentation, BPM’s from Venus, SOA’s from Mars, here.
In a nutshell, our take was that the divide constitutes the latest battleground between business and IT. Both sides use different tools (BPM’s building blocks provide the process context that is executed, but not necessarily represented, by web services), and the two groups are fighting over whether to integrate processes through the BPM tool or through dynamic BPEL web services orchestrations. In the vendor community, the sides are polarized between the J2EE integration platform vendors (e.g., BEA, IBM, SAP, and others, who tend to dominate Oasis) and the BPM pure plays (Lombardi, Savvion, Metastorm, Pegasystems, and others). World peace wasn’t exactly helped by the recent BPEL4People proposal from the J2EE/Oasis players, which met a wall of silence from the BPM pure plays.
We shared the stage with consultant Brenda Michelson, coordinator of OMG’s SOA Consortium, where CIOs share their pain and gain over SOA practices. Her message was also one of overcoming isolation – in this case, that to realize the promise of SOA, architects and developers alike should cross-train in the language of business. She pointed to one organization’s success story, where the resident SOA geek who only spoke WS-speak spent roughly a year immersing himself in night business courses and studying what made his organization tick. Today that SOA architect speaks the vocabulary of the business, and his advice is now sought after by the very people who formerly avoided him.
Maybe there’s hope.
Keynoting the Open Group’s Enterprise Architecture Practitioner’s Conference this afternoon in Austin, colleague David Linthicum made a bold prediction: that within five years, SOA would be absorbed into the discipline of Enterprise Architecture.
He characterized the current scenario in most organizations: the EAs, who tend to take the long view in planning which practices, platforms, and architectures should become enterprise standards, are largely speaking past the teams doing SOA projects, who are concerned with meeting deadlines and delivering tactical results in ways that may work at cross-purposes to what the EAs are advocating.
In essence, SOA project teams are simply exposing services from the silos that already exist, whereas EAs say, rethink those silos so you don’t end up reinventing the wheel in the future. It’s an argument that we’ve been hearing since the days of component-based development.
As you might guess, my colleagues (and fellow panelists later this afternoon) Todd Biske and Dana Gardner also had a few things to say about this. Both agree that ultimately this is the goal – you can’t harness the benefits of SOA if you simply expose the same old silos with interfaces that are just a bit more modern.
But both see a few roadblocks in the way.
Biske pointed to a credibility problem: EAs are often derided as “paper pushers” who are disconnected from the reality of what’s happening at the project level. Biske places the onus on EA to create usable assets that can be consumed at the project level, but he adds that this is the challenge EAs face with any of their creations: becoming relevant in the real world. Gardner added that both sides are working toward the same goal of making the organization more agile, but that “in many cases those planning SOAs are not in sync with those that are keeping the trains running on time.”
While we’ll have a lot more fun with this later today, what’s funny is that it reminds us of another message we’ll be throwing out before a BPM audience at OMG’s BPM Think Tank in a couple days: the BPM folks believe that the SOA folks don’t understand what really comprises a business process, whereas the SOA folks feel the BPM folks have no idea what it takes to make things execute.
In this case, just change a few nouns and adjectives: the SOA project folks know nothing about what SOA really is, whereas the EA folks have no idea how to make it all work.
HP’s announcement that it plans to buy Opsware represents something of a changing of the guard. HP’s $1.6 billion offer, roughly a 35% premium over last week’s close, is for a $100 million company whose claim to fame is managing change across servers, networks, and recently, storage.
Today, Opsware’s change automation systems tend to work alongside classic infrastructure management tools, such as what used to be known as HP OpenView. Over the past year, Opsware has bulked itself up with several acquisitions of its own, including IT process automation – where you embed data center best practices as configurable, repeatable, policy-driven workflows. And it has added storage management, plus a sweetheart deal with Cisco to OEM and resell its network change management system under the Cisco brand. Although Cisco wasn’t happy about the disclosure, Opsware did announce during its Q4 earnings call that Cisco had resold $5 million worth of its network automation tool.
For HP, the Opsware acquisition comes after a year of roughly 80% growth – although the bulk of that was attributable to the Mercury acquisition. HP Software is one of those units that HP somehow never got around to killing – although not for lack of trying (we recall HP’s server unit concluding a deal with CA that undercut its own HP OpenView offering). And it reported an operating profit of 8.5% — although not stellar, at least it reflected the fact that software is finally becoming a viable business at HP.
In part it’s attributable to the fact that infrastructure management folks are finally getting some respect with the popularity of ITIL – that is, ITIL defines something that even a C-level executive could understand. The challenge, of course, is that most classic infrastructure management tools simply populated consoles with cryptic metrics and nuisance alarms, not to mention that at heart they were very primitive toolkits that took lots of care and custom assembly to put together. They didn’t deliver the big picture that ITIL demanded: quantifying service level agreement compliance, as opposed to network node operation.
What’s changed at HP Software is that the Mercury deal represented something of a reverse acquisition, as key Mercury software executives (at least, the ones who weren’t canned and indicted) are now largely driving HP Software’s product and go-to-market strategy. Although branding’s only skin deep, it’s nonetheless significant that HP ditched its venerable OpenView brand in favor of Mercury’s Business Technology Optimization.
Consequently, we think there are potentially some very compelling synergies between Opsware’s change management, HP’s Service Management and CMDB, and Mercury’s quality centers, which not only test software and manage defects but provide tools for evaluating software and project portfolios. We’re probably dreaming here, but it would be really cool if somehow we could correlate the cost of a software defect not only to the project at large (and whether it and other defects place that project at risk), but also to changes in IT infrastructure configurations and service levels. The same would go for managing service levels in SOA, where HP/Mercury is playing catch-up to AmberPoint and SOA Software.
This is all very blue sky. Putting all these pieces together requires more than just a blending of product architectures; it also requires the ability to address IT operations, software development, and the business. Once the deal closes, HP Software’s got its work cut out for it.
Ideally, we’d like to believe that the course of IT history is a story of upward progress, even if there are a few peaks and valleys along the way. But sometimes, when you think something’s a no-brainer, reality has a way of adding complications.
Our colleague, Todd Biske, voiced such concerns when he remarked that project-based culture can hamper SOA rollouts because it compromises longer-term architectural goals with short-term deliverables that might yield only fleeting benefit. But Biske believes, against all odds, that things will get better – or in this case, that time will prove that architecting for SOA is what we’d term a no-brainer.
And so we thought similar things about the relationship between BPM and SOA. At this point there appears to be consensus that there is a true synergy: promoting process agility is entirely consistent with SOA’s loose coupling. And, it seems to be a no-brainer to expose processes as web services to take advantage of all the standards-based integration. Not surprisingly, you’d be hard pressed today to find a BPM tool that doesn’t expose processes as web services.
Yet beneath the surface, there remains a deep cultural war, as we learned while writing about BPEL for People a few weeks back. As it turned out, the debate wasn’t over exposing processes as services, but over which layer (read: which side of the organization) should control the flex points – that is, the points where you orchestrate or chain multiple processes or workflows together.
The BPM folks claim orchestration is too complex for the IT folks to handle inside the web services stack, whereas the IT folks claim that the business folks just don’t understand how automated processes really execute. That question will again rear its ugly head at OMG’s BPM Think Tank next week, where we’ll be joining a panel chaired by Lombardi’s Phil Gilbert.
It will likely also prove grist for yet another session next week, on the future of SOA. As part of The Open Group EA Practitioners Conference, we’ll be sitting in on a panel led by ZDNet blogger and BriefingsDirect podcast host Dana Gardner that will examine how various technologies – including BPM – will fit into SOA (or vice versa), and/or whether BPM might subsume SOA – and with it, IT as well.
And we’ll be exploring whether SOA might be impacted by the kind of mission creep that has plagued the Java world. If you look at what seems to be an almost endless list of Oasis technical committees, the answer appears fairly obvious. And significantly, the Java community now seems to be atoning for past sins. If you look at Java EE 6 (which was just approved this week), it adapts the pruning mechanism from Java SE 6 to pare some of the bloat. One obvious candidate is the container-managed persistence (CMP) feature of Enterprise JavaBeans, which proved too complex for anyone but rocket scientists to implement.
The question is, at some point will we find ourselves going back to basics when it comes to implementing SOA? Will simple building blocks prove adequate, or will we discover that we can’t do without federated trust or quality of service mechanisms, and so on? More about that once we’ve heard from wiser heads next week.
After previews of almost nine months, Salesforce finally made it official: next month, Apex will formally be released. As Salesforce’s own development language, Apex is something of a hybrid – it runs in the database like a stored procedure language, but as a modern language built more like Java, it’s about objects and components rather than procedures. It’s intended as a way to extend Salesforce’s CRM application, or to write apps that run on their own in Salesforce’s recently released Platform Edition.
And so we asked when Salesforce first announced it last fall: does the world need yet another new software development platform? And one that’s proprietary to boot?
Salesforce points to the fact that the plumbing of Apex had to be different from that of Java or C#. That is, it’s a language that lives inside a database, and within the confines of a multi-tenanted environment. So it doesn’t need all the richness of general-purpose languages, which deal with areas like visual representation.
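To make that plumbing argument concrete, here is a purely hypothetical sketch (every name below is invented; this is not Salesforce’s implementation) of the kind of convention a multi-tenanted platform bakes in and a general-purpose language leaves to the programmer: every read and write is implicitly scoped to the calling tenant.

```python
# Hypothetical sketch of multi-tenant plumbing. In a shared database,
# tenant isolation must be enforced by the platform itself rather than
# trusted to application code. All names here are invented for illustration.

class MultiTenantStore:
    def __init__(self):
        self.rows = []  # every row carries a tenant_id stamped by the platform

    def insert(self, tenant_id, row):
        self.rows.append({"tenant_id": tenant_id, **row})

    def query(self, tenant_id, predicate=lambda r: True):
        # Tenant scoping is applied before any user-supplied filter runs,
        # so no query a tenant writes can reach another tenant's rows.
        return [r for r in self.rows
                if r["tenant_id"] == tenant_id and predicate(r)]


store = MultiTenantStore()
store.insert("org_a", {"account": "Acme"})
store.insert("org_b", {"account": "Globex"})

# A tenant sees only its own rows, no matter what it asks for.
print(store.query("org_a"))
```

A general-purpose language gives you no such guarantee out of the box, which is the crux of Salesforce’s argument for rolling its own.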
OK, so you need different plumbing to handle multi-tenancy, but back to the other part of our question: does the world need another proprietary language?
The answer is obvious, until you look at the dynamics of the marketplace. The politically correct answer is that Salesforce should open up the language, and either open source it or contribute it to a standards body where it could become a formal or de facto standard platform for writing apps that run as Software-as-a-Service (SaaS) in multi-tenanted environments. The realistic answer is: why would Salesforce’s rivals (and you know who they are) be willing to give founder’s credit to Marc Benioff?
But there are several possible scenarios where such a language could become a standard for writing SaaS applications. Should Salesforce take the first step, it could coalesce a gang of dwarfs (as it has with the AppExchange) who want to hop on the SaaS bandwagon. While Salesforce would wind up leaving some money on the table, it would gain influence (and indirectly, business) from Apex becoming the de facto standard.
Or Oracle or SAP could jump into the act, taking the moral high ground and trying to draw the dwarfs around them. In SAP’s case, it could make the subtle differentiation that its language is actually more versatile for SaaS because it would support a hybrid single/multi-tenanted model (which is how SAP is handling its hosted CRM).
In the end, the determining factor as to whether we end up with the equivalent of a Java for SaaS will revolve around the skills base.
To that end, it helps to compare the genesis of Java and Apex. Java emerged at a point when the web needed an application deployment environment, and it needed a level of interoperability akin to SQL. That is, enough to ensure an adequate skills pool, but with room for proprietary extensions (e.g., dialects of SQL, deployment descriptors for J2EE appservers).
It’s unclear whether the SaaS platform has the same interoperability requirements, because today you have SOA and web services for all that. But if, and only if, the SaaS world requires developer (as opposed to software) portability, you may see Salesforce and rivals rapidly making tracks to the standards bodies.
Compared to past years, the unveiling of Oracle 11g database was downright modest. In place of booking Radio City, this year it was a few blocks away in the Equitable Building auditorium. Of course, we recall sitting in the same place nearly 20 years ago as we saw IBM’s John Akers, Digital’s Ken Olson, and HP’s John Young announcing formation of the Open Software Foundation, the precursor of today’s Open Group.
The tone of the announcement was anticlimactic if not downright boring. But maybe that’s not a bad thing. Oracle president Charles Phillips doesn’t give the kind of shows that Larry Ellison used to. So in an extended monotone, he rattled off the highlights of 11g which were, for the most part, about feeds and speeds and continued absorption of functions that you used to go to third parties for (like advanced compression, SQL-flavored OLAP, broader data encryption, data recovery utilities, and various other bells and whistles).
To us, the highlight was something that Oracle calls “real application testing,” which mercifully, won’t be abbreviated as “RAT.” It adapts features associated with the functional testing world – the “robots” from folks like Mercury that recorded user sessions so you could exercise the UI. In this case, Oracle’s facility (which it likened to a DVR of your database) could sit there for hours or even days to record all the transactions of your database. So instead of having to concoct your own transaction tests, you can play back actual sessions on a testbed against whatever change or upgrade you’re contemplating. In fact, Oracle says it will subsequently extend this technology to its apps portfolio.
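Oracle likened the facility to a DVR for your database, and while the internals weren’t disclosed, the record-and-replay idea itself is easy to sketch. Everything below is our own toy illustration of the concept, not Oracle’s Real Application Testing API:

```python
# Generic record/replay sketch -- our illustration of the "DVR" idea,
# not Oracle's actual Real Application Testing interface.

class Recorder:
    def __init__(self, database):
        self.database = database
        self.log = []

    def execute(self, statement):
        result = self.database.execute(statement)
        self.log.append((statement, result))  # capture workload + outcome
        return result

    def replay(self, candidate_database):
        """Re-run the captured workload against a changed/upgraded system
        and report any statements whose results diverge."""
        diffs = []
        for statement, expected in self.log:
            actual = candidate_database.execute(statement)
            if actual != expected:
                diffs.append((statement, expected, actual))
        return diffs


class ToyDatabase:
    """Toy stand-in for a real database: handles 'GET key' lookups only."""
    def __init__(self, data):
        self.data = data

    def execute(self, statement):
        _, key = statement.split()
        return self.data.get(key)


prod = Recorder(ToyDatabase({"rate": 5}))
prod.execute("GET rate")                      # real traffic gets captured

upgraded = ToyDatabase({"rate": 5})           # behaves identically
print(prod.replay(upgraded))                  # no divergence: []
```

The payoff is exactly what Oracle described: instead of concocting synthetic transaction tests, you exercise the contemplated change with traffic your production system actually saw.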
But the other enhancements were more incremental in nature – like new capabilities to store XML in binary form to reduce file size, which may come in useful in SOA environments where services represent data in XML. But Oracle won’t have a lock on this as XML database appliances from IBM and others could give Oracle a cheaper run for its money.
Additionally, new capabilities to store “materialized data views” (composite slices of data) in OLAP cubes accessible by SQL commands represent yet another step in the commoditization of BI – but that’s an area where Oracle is playing catch-up to Microsoft, which has spread BI to the masses through SQL Server’s OLAP Services.
Overall, the matter-of-fact mood reflects the fact that databases have become yet another enterprise platform. That is, in some organizations, databases are the center of software architecture, with which other software must be compatible (in other orgs, that role might be played by the OS or ERP system).
But back to the point, when it comes to platforms, customers want as few surprises or disruptions as possible. Watching Larry Ellison has always been entertaining, but in today’s more matter-of-fact IT marketplace, customers are likely to sleep more soundly after (if not through) Phillips’ more boring pronouncements.