08.30.07

Is it time for “Guerilla SOA”?

Posted in Application Development, SOA & Web Services at 1:31 pm by Tony Baer

In his blog yesterday, Joe McKendrick alerted us to an agile development-like idea being pushed by Jim Webber, SOA practice lead at ThoughtWorks, which Webber terms “Guerilla SOA.” It’s a reaction to heavyweight, top-down approaches to SOA deployments, which tend to feature liberal doses of appservers, ESBs, registries and repositories, and of course, a large dollop of architecture.

In essence, it’s the latest turn on the notion of architecting software for reuse, which dates back to the days of COBOL copybooks, CASE methodologies, and later, component-based and object-oriented development. Webber is concerned that such top-down approaches typically push architecture at the expense of business priorities, not to mention strangling a project in analysis paralysis. Instead, he urges a more tactical approach, which he calls “Guerilla SOA,” that applies SOA to specific business problems pinpointed by stakeholders.

Webber went on to describe alternatives to the SOAP and WSDL web services standards that could make Guerilla SOA doable. They included “MEST,” which applies REST-like approaches (simple requests for data in place of more complex SOAP messages) to message exchanges, and SSDL, which simplifies WSDL service descriptions and appends elements of BPEL process orchestration and CDL choreography to reduce the amount of back-and-forth chatter for more complex, contract-driven services.
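To make the contrast concrete, here’s a minimal sketch in Python, with purely hypothetical endpoints and payloads (it is not actual MEST or SSDL syntax), of the difference between a SOAP-style call and the plain resource request that REST-like approaches favor:

```python
# Purely illustrative: contrast a SOAP-style call with the plain
# message exchange that REST/MEST-style approaches favor.
# The endpoint URLs and payloads below are hypothetical.
import urllib.request

# SOAP-style: the request is wrapped in an envelope the server must parse.
soap_body = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetOrderStatus xmlns="http://example.com/orders">
      <OrderId>12345</OrderId>
    </GetOrderStatus>
  </soap:Body>
</soap:Envelope>"""
soap_req = urllib.request.Request(
    "http://example.com/orderService",          # hypothetical endpoint
    data=soap_body.encode("utf-8"),
    headers={"Content-Type": "text/xml",
             "SOAPAction": "GetOrderStatus"})

# REST-style: the same question is just a request against a resource URI.
rest_req = urllib.request.Request(
    "http://example.com/orders/12345/status")   # hypothetical resource

# Either request would be sent and read the same way, e.g.:
# with urllib.request.urlopen(rest_req) as resp: print(resp.read())
```

The point of the sketch is the asymmetry in ceremony: the SOAP call carries an envelope, a namespace, and an action header before any business payload, while the REST-style exchange expresses the same intent in the request itself.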

McKendrick said that Webber’s approach reminded him of BEA AquaLogic chief architect Paul Patrick’s notion of “SOA Neighborhoods,” where localized clusters of services emerge to serve departmental or business unit level requirements, and eventually scale through federation.

To us, Guerilla SOA brings on a sense of déjà vu all over again. Barely a month ago, we heard SOA developers (exactly the types Webber is talking about) take it on the chin from enterprise architects and BPM owners. Namely, SOA project teams are considered estranged from EAs because they tend to make short-term tactical decisions that may not be architecturally sound over the long haul, and they are similarly isolated from the business folks who own BPM, who feel that SOA developers (or anybody from the software development organization) just don’t understand the subtleties of business processes.

Guerilla SOA also conjures up notions of Agile development, an umbrella concept covering development, testing, and project management techniques designed to be much lighter weight than conventional approaches. Agile stipulates “just enough” planning to avoid analysis paralysis, and it typically encourages close collaboration between the business and the different players involved in designing, developing, and testing software. The term may not be as sexy, but perhaps a more apt label for Webber’s “Guerilla SOA” would be “Agile SOA.”

Webber brings up some useful points. IT organizations are under the gun to demonstrate that they are more productive and add value. They are still living down years of delivering projects late, over budget, and under scope. And in the post-Y2K, post-bubble environment, they have had to prove their relevance to cynical CxOs who bought the kitty litter that IT doesn’t matter. And although one of the oft-stated benefits of SOA is reuse (which is akin to saying your software development machine runs more efficiently), CEOs and CFOs are justified in asking why they should care. In fact, the ultimate benefit of SOA is business agility: if properly implemented, it is architected with loose coupling that enables you to swap pieces in and out as conditions or underlying technologies change.
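To illustrate what that loose-coupling claim means in practice, here’s a minimal sketch; the service names and rules are hypothetical, not anyone’s actual SOA:

```python
# A minimal sketch of the loose-coupling argument: callers depend on a
# service contract, not a concrete implementation, so pieces can be
# swapped as conditions or technologies change. All names hypothetical.
from abc import ABC, abstractmethod

class CreditCheckService(ABC):
    """The contract that service consumers code against."""
    @abstractmethod
    def approve(self, customer_id: str, amount: float) -> bool: ...

class InHouseCreditCheck(CreditCheckService):
    def approve(self, customer_id: str, amount: float) -> bool:
        return amount < 10_000  # simplistic in-house rule

class BureauCreditCheck(CreditCheckService):
    def approve(self, customer_id: str, amount: float) -> bool:
        return True  # would call an external bureau in real life

def process_order(svc: CreditCheckService, customer_id: str, amount: float):
    # The caller never changes when the implementation behind the
    # contract does -- that is the business-agility claim in a nutshell.
    return "approved" if svc.approve(customer_id, amount) else "declined"

print(process_order(InHouseCreditCheck(), "C-1", 5_000.0))
print(process_order(BureauCreditCheck(), "C-1", 50_000.0))
```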

Nonetheless, watching his video podcast, we didn’t sense that Webber had a good answer to the question of how SOA guerillas could scale their efforts beyond small projects. When asked whether a service built for specific scenarios could scale, he responded: “The nice thing is that it works in either [case] because you have specific priorities that the business gives you at any given time and you focus on those, and as long as the business can keep coming to you and saying ‘I now need this process implemented,’ then you can scale up ad infinitum until the point where you automated all of the processes of a given domain…”

We feel that didn’t answer the question. Sure, if you’re working for the same group, you might have enough logical continuity to avoid retracing your steps. But what about other teams? Or what if the project grows large enough that, to respond to a new requirement, you create a new service from scratch because that gets the job done much quicker? That’s exactly how spaghetti code (where you have tangles of point programs) gets created: while it’s quicker to produce, you really don’t want to end up maintaining all that crud.

In other words, you’re back where you started. Except that this time, you’ve replaced spaghetti code with spaghetti web services.

We believe that Webber’s Guerilla SOA asks some valid questions about over-engineered SOA, but we don’t feel it yet provides enough answers to avoid software development’s age-old headaches.

08.22.07

The China Syndrome

Posted in IT Infrastructure, Security at 2:41 pm by Tony Baer

We had an interesting discussion this morning with software development security guru Brian Chess, but afterwards we couldn’t help but wonder whether the topic itself was something of an oxymoron. Given that disclosures of breaches of sensitive data like credit card or social security numbers are becoming alarmingly routine, it seems that software security is being observed, quite literally, only in the breach.

The problem has grown worse with the spread of the Internet, which puts vulnerable assets within reach, and with the legacy of Visual Basic and its descendants, which have made software development easier and more accessible to music, art history, and philosophy majors, who have supplemented what, in this country, is a declining supply of computer scientists.

The challenge, of course, is that software developers are not security experts and never have been. Traditionally, security has been the domain of specialists who devise measures at the perimeter, and who handle authentication and access control inside the firewall. Not surprisingly, the traditional attitude among developers is that security is somebody else’s problem: when you’re done developing and testing the code for conventional software bugs and defects, you throw it over the wall to the security folks.

Although he doesn’t use those words, Chess would likely characterize such a strategy as the software equivalent of a Maginot Line defense. He’s authored a new book discussing common software security holes and how they can be fixed.

He concedes that not all security holes can be caught. But he claims that the most common ones, such as cross-site scripting (which lets attackers slip malicious script into the pages served to other users), SQL injection (which lets attackers smuggle their own commands into database queries), or buffer overflows (which let attackers overrun a memory buffer and trample whatever lies beyond it), can easily be eliminated with automated tools without bringing software development to its knees. And even if eliminating these holes solves only half the problem, at least we’re halfway better off.
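For the flavor of it, here’s a minimal sketch of the best-known of these holes, SQL injection, and the parameterized query that closes it (the table and data are, of course, invented):

```python
# A minimal sketch of the kind of hole Chess says tooling can catch:
# SQL injection, and the parameterized query that closes it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111-....')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is spliced into the SQL string, so the OR
# clause makes the WHERE condition true for every row in the table.
rows = conn.execute(
    "SELECT card FROM users WHERE name = '" + user_input + "'").fetchall()
print(rows)  # leaks every card number

# Fixed: a parameterized query treats the input strictly as data.
rows = conn.execute(
    "SELECT card FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- no user is literally named "alice' OR '1'='1"
```

The fix is mechanical, which is exactly why Chess argues that automated tools can flag this class of defect without slowing development down.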

Of course, Chess has an agenda: his company, Fortify Software, provides “white box” testing that puts software through a battery of tests to identify security holes.

But agenda or not, he has a point.

Chess doesn’t expect developers to become security experts, but he does expect them to be deputized on the front lines, at least to test for the obvious holes. He believes security folks should set the strategy and be the overseers of last resort, but contends that they can’t handle the security challenge entirely on their own anymore. The world has simply gotten too interconnected for that. He’s optimistic that developers can change their ways; it wasn’t so long ago that most considered the very idea of defect management databases and bug tracking systems unmanly. He also believes that corporations will likely enforce a change of ethos, because cracks in security (such as the unsecured wireless network at TJ Maxx that eventually opened a hole for somebody to filch credit card numbers) will force their IT departments to move security to the front burner.

Nonetheless, we’ve got some doubts. We believe that the TJ Maxxes of the world will be compelled by the market to get their acts together. But we also realize that in a global market, there’s always room for shoddy merchandise: in this case, low-cost code where you get what you pay for. India, which began the globalization trend, has competed on quality. China, whose manufacturing sector has come under suspicion lately, will also likely clean up its act. But we fear that development centers in emerging countries, desperate for hard currency, will offer development services at the lowest possible cost, with little regard for security. And we wouldn’t be surprised if a market emerges for those services, security warts and all.

08.16.07

Ménage à Trois

Posted in OS/Platforms at 2:57 pm by Tony Baer

At first glance, Citrix’s $500 million offer for hypervisor provider XenSource fills an obvious gap in Citrix’s desktop virtualization product line: while Citrix built its business offering a terminal server deployment path for Windows (and other OSs), XenSource completes the picture by providing a way to virtualize the client side as well. It also nicely steals the headlines a day after VMware’s partial IPO.

But the real buzz is not so much about the deal itself as about what happens with Microsoft, which happens to be the silent 16-ton gorilla sitting in the back of the room. Citrix has had a technology-sharing arrangement with Microsoft for roughly a decade, while XenSource has enjoyed access to Microsoft’s budding Viridian technology (which will form the core of Microsoft’s upcoming Windows hypervisor) as part of Microsoft’s policy of supporting interoperability with Linux environments. Yet when Microsoft actually productizes Viridian as part of Windows Server 2008, it will compete directly with the Xen technology.

As my colleague Dana Gardner summed it up, “The move further cements an already strong relation with Microsoft on the part of Citrix, but complicates the picture when it comes to open source.” That is, because XenSource’s technology is open source, the company doesn’t actually own it. Or as industry analyst Brian Madden asked, “What exactly did they just pay $500M for?” We’ll be talking about this tomorrow as we record the next BriefingsDirect podcast.

We’ll take the safe route on this one. Citrix had a hole in its desktop virtualization offerings, and as the best-known emerging rival to VMware, XenSource was the ripest fruit for the picking. Nonetheless, as Citrix is not an open source company, we’d concur with 451 Group analyst Rachel Chalmers, quoted by eWeek’s Peter Galli today, that the combined Citrix/XenSource would likely spin the Xen project out into a nonprofit open source foundation, a la Eclipse. In essence, Citrix would be buying a company using the Red Hat subscription model.

Our take is that the deal won’t necessarily shake the coopetition that Microsoft and Citrix have engaged in for years. If you recall, Citrix provided the first Windows terminal server as part of a technology-sharing deal with Microsoft; then Microsoft chimed in with its own, followed by Citrix’s fancy footwork to extend its terminal server across multiple OSs. When Microsoft finally unleashes Viridian as part of Windows Server 2008, its need for Linux interoperability won’t go away.

08.15.07

The Future of SOA

Posted in SOA & Web Services at 1:23 am by Tony Baer

It’s always kinda scary to predict the future of anything, but at the Open Group’s Enterprise Architecture Practitioners Conference in Austin, Texas last month, we had the pleasure of sitting on a Future of SOA panel discussion, moderated by our colleague Dana Gardner, that confronted the issue head-on. Along with Eric Knorr, executive editor-at-large at InfoWorld; Beth Gold-Bernstein, vice president of the ebizQ Learning Center; and Todd Biske, principal architect at MomentumSI, we debated David Linthicum’s prediction that within five years, SOA will become just another enterprise architecture discipline. While few on the panel disagreed that it should happen, some of us wondered whether Linthicum’s timeline was a bit optimistic.

You can now read a full transcript of the discussion or listen to the podcast from Dana Gardner’s BriefingsDirect page.

08.10.07

SOA Appliances – Forward Into the Past?

Posted in SOA & Web Services at 2:35 pm by Tony Baer

Although back in school my history profs adhered to the party line that history doesn’t repeat itself (because that oversimplifies matters), in actuality it does. As we’ve observed on numerous occasions, in this business history definitely goes round in cycles. One of the most recent examples is the emergence of XML appliances to offload some of the most compute-intensive tasks involved with SOA.

That was one of the topics of a recent Dana Gardner podcast, where we put Jim Ricotta, IBM’s GM of appliances in the software group (and head of DataPower before IBM acquired it), on the spot. Ricotta indicated that SOA has been a sweet spot for appliances for a couple of reasons.

First, because the SOA market is built on standards, the prospective market is wide enough for vendors to address. He drew a historical parallel with routers: “If you look at networking products, what really made routers and other types of networking such big horizontal businesses was that there were standards. The first routers were software products that ran on UNIX boxes.” The other driver is that SOA, like the seven-layer ISO architecture for open networking before it, was designed in such a way that you can plop devices into different tiers without disrupting the rest of your technology stack.

Of course, appliances are hardly new. Back in the ’80s we called them turnkey systems: you bought a special piece of hardware for that CAD/CAM, finite scheduling, data warehouse, or other system because (1) the hardware of the day was too slow, or (2) more likely, the vendor just wanted to lock you in. On that score, maybe we can say that history has changed.

But as fellow panelist Todd Biske pointed out, while SOA’s tiered architecture may enable you to plunk in appliances, you shouldn’t buy them simply because they’re available. Dana has now posted the session online, which included Jim Kobielus, Brad Shimmin, Biske, and myself, and which later covered BPEL4People. We learned some valuable pointers about architecting with and without appliances from the session, and maybe you will too.

08.03.07

Not Your Father’s Data Archive

Posted in Data Management, Database at 10:24 pm by Tony Baer

Breaking up the dog days of summer, IBM announced its intention to acquire Princeton Softech, a firm best known for its database archiving tools. Aside from the obvious market synergy (IBM databases and customers account for a majority of Princeton’s installed base), Princeton brings an interesting mixed bag of capabilities that fit well with IBM’s emerging data governance initiatives.

In one sense, the issue that Princeton addresses is age-old: you don’t want archival data clogging up your most expensive storage arrays. But IBM is not buying just your father’s information lifecycle management (ILM, itself a modernized take on hierarchical storage management) tool. It’s buying a supplier whose tools infuse meaning into data that’s being offloaded and provide a policy engine for determining where and how to store it. And because it can superimpose a business-subject view that, to us, borders on providing a form of semantic data access (that is, you can associate data by logic, as opposed to strict entity relationships), it makes the archives far more accessible.
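To illustrate the idea (this is purely our own sketch, not Princeton Softech’s actual engine), a policy engine plus business-subject tags might look something like this:

```python
# A purely illustrative sketch of the two ideas described above: a
# policy engine deciding where data lands, plus business-subject
# metadata that travels with the archive. All names are invented.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Record:
    subject: str        # business-subject tag, e.g. "customer:acme/invoices"
    last_touched: date  # currency drives the tiering decision

def archive_tier(rec: Record, today: date = None) -> str:
    """Toy policy: subject and currency decide the storage tier."""
    today = today or date.today()
    age = today - rec.last_touched
    if "litigation-hold" in rec.subject:
        return "compliance-archive"     # keep retrievable for audits
    if age > timedelta(days=365 * 7):
        return "offline-tape"
    if age > timedelta(days=90):
        return "low-cost-disk"
    return "primary-array"

rec = Record("customer:acme/invoices", date(2001, 3, 15))
print(archive_tier(rec))  # -> "offline-tape"
```

The business-subject tag is what makes the archive navigable later: an auditor can ask for everything under “customer:acme” without knowing the underlying entity relationships.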

According to Princeton Softech chairman and CEO Paul Winn, those capabilities have proven to be trump cards for sales over the past 12 to 15 months. Whereas the largest chunk of its business used to be ILM-driven (e.g., finding the least expensive place to store data based on currency or other factors), compliance-related factors now account for much of the growth. Specifically, if you are investigating a financial transaction or documenting compliance with privacy protection laws, having metadata or business objects to guide you through the archive can prove a valuable forensic aid.

In the long run there could be synergy with IBM’s Information Server, providing a semantic overlay that covers both live and archived data. Looking for data by its meaning rather than its description may, in the long run, provide a more structured alternative to enterprise search technology. IBM agrees that “it’s on the table,” but we get the feeling that at this point semantic integration is not the first thing on its to-do list.

Princeton also brings assets such as test data management (not that unique) and data archive and migration tools for major enterprise applications (primarily Oracle so far, but eventually SAP and Microsoft). The combination will readily complement IBM Software’s Data Management group’s Master Data Management and Information Server offerings, and IBM Tivoli access control and security solutions. As an IBM acquisition, Princeton Softech may not carry the scope of Ascential, but it’s an equally logical fit.