It seems almost quaint to think that once upon a time, you really had to be a rocket scientist (OK, correction: computer scientist) to develop software. Your IDE was a cryptic command-line text editor, and you debugged by hand. That’s OK, that was during the cowboy days of appdev, when ideas like objects, components, or models were considered the stuff of idle dreams. Besides, what self-respecting programmer (we didn’t call them developers back then) would ever condescend to using somebody else’s code? Real coders only needed command lines, and they didn’t need formalized architecture to tell them how to program.
Roughly 25 years ago, what was then Borland introduced the integrated development environment, and several years after that, Microsoft blew the lid off that market with the first programming language really designed for, to borrow Apple’s terminology, “the rest of us”: Visual Basic. For the first time, here was a language that was fairly easy to learn, offered lots of flexibility, and, by tapping the innovations of visual development, made software development more intuitive. As God’s gift to liberal arts majors, it meant they could now get paid for doing something other than waiting tables, driving cabs, or teaching art history or philosophy.
Of course, lowering barriers to entry lets the unwashed masses in, and yes, there is a sound argument that allowing anybody to program would lower the quality of coding. Yet democratizing development became essential because, in the early 90s, the coming boom in client/server, followed by web development, unleashed an appetite for applications so enormous that there weren’t enough computer scientists in the world to satisfy it. Even with bandwidth bringing millions of Indian, Chinese, and Ukrainian developers online, supply remains mismatched with demand. While you might outsource large projects or maintenance, it is simply impractical to task teams located a dozen time zones away (not to mention across language or cultural barriers) with churning out the kinds of quick, tactical applications that some agile team in a corner could crank out in days.
Not surprisingly, the democratization of development unleashed by Visual Basic, and by almost every development tool that followed, made it possible for the IT profession to meet demand; it didn’t create that demand. But with all that sloppy coding out there, the emergence of robust frameworks like Java EE and .NET was an attempt to clean up the mess by requiring disciplined practices such as strongly typed coding. And just as the laws of physics predict a counter-reaction to every action, dynamic scripting languages like PHP and Ruby emerged to provide the ease and light weight that the top-down frameworks forgot.
Anyway, it is difficult to make it through a vendor briefing call these days without hearing bromides about how they are making their tools accessible to “business developers” – as if there were such a class of people in the business who do software development. What they are really saying is that they have tools that let business stakeholders with day jobs craft quick little productivity or business-insight apps on the side with drag-and-drop tools. It’s the same thing we have heard from players like Zoho, which seem more like cloud platforms for developing trivial apps of little consequence.
So our ears perked up at Microsoft’s release of LightSwitch, which provides a simpler path to developing real data-centric .NET applications. We’ll spare you the details because Andrew Brust has explained them much better than we could; he hopes that LightSwitch might become part of “a long overdue turnaround” from Microsoft’s last decade of “courting complexity.”
We share his hopes, but our optimism is a bit more measured. Microsoft doesn’t exactly have a great track record backing innovation these days. A couple of years ago, it had a similarly great idea with Oslo, an innovative attempt to make modeling of data-driven applications (do we see a pattern here?) more developer-centric through a coding-oriented approach. Less than a year after unveiling Oslo, Microsoft backtracked and repositioned it as a development pattern for SQL Server. Let’s hope that on this go-round, Microsoft has the patience and perseverance to keep the LightSwitch on.
In spite of a belated challenge from Microsoft, Adobe’s Flash framework has arguably remained the de facto standard for formal Rich Internet Applications (RIAs). But that status has been called into question by its latest cold war with Apple.
Steve Jobs has slammed Adobe for being lazy; his motives, of course, are debatable, as we’ll get into below. Yes, Flash is buggy, and there are lots of security holes. That’s because, as a full RIA client framework, the technology is being called upon to play a much wider role than the one for which it was originally designed: bringing multimedia to static web pages.
We were reminded of this when an Asian reporter contacted us to ask whether Flash’s very market survival was now in question. But let’s get real; the only reason we’re having this discussion is Apple’s rejection of Flash for the new and overly hyped iPad.
The conflict between Apple and Adobe is nothing new, and in a way is rather ironic. Adobe PostScript was the technology that helped make the Mac what it is for creative professionals, as PostScript made the Mac the de facto standard for desktop publishing. Fast-forward to the present, and Apple views Adobe technology as a threat to its revenue stream.
Apple has honed a very clever business model for its mobile products that is actually a throwback to the golden days of turnkey systems, circa 1979. This model, in which the hardware supplier controls what software goes on the machine and hands the customer a box with functionality that is ready to go, gives the hardware provider control over the revenue stream. The only difference between 1979 and now is that, while the hardware provider used to supply the software, today it comes from third parties who pay for the privilege of selling content to the iPod audience, and a mix of content and software to the iPhone, and now the iPad, market.
The problem for Apple, however, is that the Flash framework could give third-party software and content providers a bypass around Apple’s Berlin Wall and its fees. Adobe is therefore an existential threat to Apple’s annuity stream.
Consequently, while Steve Jobs isn’t off base in criticizing Flash’s technical vulnerabilities, the real driver is cold hard cash.
Whether denial of access to the iPad is a real threat to Adobe is debatable, because there are questions as to whether the iPad will have the same transformational impact on the mobile Internet space that the iPod and iPhone had on music and cell phones. Based on what’s out now, we think the iPad is more hype than substance and actually represents a step back for Internet users to a Web 0.9 experience: the iPad lacks multitasking, not to mention the Flash content that is ubiquitous across the web. Others are obviously rushing to come out with their iPad wannabes, most of them likely with Flash support. A new tablet market category will emerge and steal thunder from the netbook.
Admittedly, multitasking could be fixed in a forthcoming rev, but we think Apple has drawn a line in the sand regarding Flash. Maybe Apple has something up its sleeve, like its own answer to Flash, Silverlight, or JavaFX. Or maybe Apple eventually promotes HTML 5 as its RIA strategy. That’s the draft W3C standard that would bring RIA support right back into the mother ship, eliminating the need for those pesky add-ons or reliance on loosey-goosey Ajax. But HTML 5 is way off in the future. Currently in working draft and deficient in areas such as security and codec support, it won’t likely win W3C approval until 2011 at the earliest, and after that, it will be years before it reaches critical-mass adoption, if ever.
But let’s just pretend that the iPad has the same transformative impact on the market as the iPod or iPhone. Say that by 2011 there’s a definite trend away from netbooks to tablets, Apple’s rivals roll out their wannabes, and web developers find that much of their audience is drifting off Flash. (Fat chance.) That’s where things could get really weird. Microsoft, which has been watching from the sidelines, wants a game changer. It must decide which is its worst enemy: Apple or Adobe. If the former, it scraps Silverlight for Flash, because what use is there in being #3? If the latter, it embraces HTML 5 under the guise of industry-standards support. Sound unlikely? Actually, there’s a precedent. Years ago, Bill Gates promoted Dynamic HTML as Microsoft’s industry-standard alternative to Java clients (we saw him make the pitch at a Gartner event back in 1999). Who’da thunk that DHTML would eventually become one of the pillars that made Ajax possible?
Back to our original point: the iPad is overhyped, it will gain some market share, but it won’t kill off Flash.
Thanks go out to Oracle this morning for finally putting us out of our suspense. AmberPoint was one of a dwindling group of still-standing independents delivering runtime governance for SOA environments.
It’s a smart move for Oracle, as it patches some gaps in its Enterprise Manager offering, not only in SOA runtime governance but also in business transaction management and – potentially – better visibility into non-Oracle systems. Of course, that visibility will in part depend on the kindness of strangers, as AmberPoint partners like Microsoft and Software AG might not be feeling the same degree of love going forward.
We’re surprised that AmberPoint was able to stay independent as long as it did, because the task it performs is simply one piece of managing the runtime. When you manage whether services are connecting and delivering the right service levels to the right consumers, you are ultimately looking at a larger problem, because services do not exist on their own desert island. Neither should runtime SOA governance. As we’ve stated again and again, it makes little sense to isolate runtime governance from IT Service Management. The good news is that with the Oracle acquisition, there are potential opportunities not only for converging runtime SOA governance with application management but also, as Oracle digests the Sun acquisition, for providing full visibility down to the infrastructure level.
But let’s not get ahead of ourselves: the emergence of a unified Oracle-on-Sun turnkey stack won’t happen overnight. And the challenge of delivering an integrated solution will be as much cultural as technical, as the jurisdictional boundary between software development and IT operations blurs. But we digress.
Nonetheless, over the past couple of years, AmberPoint itself has begun reaching out from its island of SOA runtime by extending its visibility into business transaction management. AmberPoint is hardly alone here; we’ve seen a number of upstarts like AppDynamics and BlueStripe (typically formed by veterans of Wily and HP/Mercury) burrowing into the space of instrumenting transactions from hop to hop. Transaction monitoring and optimization will become the next battleground of application performance management, and it is one that IBM, BMC, CA, HP, and Compuware are hardly likely to watch passively from the sidelines.
As for whether runtime SOA governance demands a Switzerland-style independent vendor approach, that leaves it up to the last one standing, SOA Software, to fight the good fight. Until now, AmberPoint and SOA Software have competed for the affections of Microsoft; AmberPoint has offered an Express web services monitoring product that is a free plug-in for Visual Studio (a version is also available for Java); SOA Software offers extensive .NET versions of its service policy, portfolio, repository, and service manager offerings.
Nonetheless, although AmberPoint isn’t saying anything outright about the WebLogic share of its 300-customer installed base, that platform was first among equals when it came to R&D investment and presence. BEA previously OEM’ed the AmberPoint management platform, an arrangement that Oracle ironically discontinued; well, in this case, the story ends happily ever after. As for SOA Software, we would be surprised if this deal didn’t push it into a closer embrace with Microsoft.
Postscript: Thanks to Anne Thomas Manes for updating me on AmberPoint’s alliances. They are/were with SAP, Tibco, and HP, in addition to Microsoft. The Software AG relationship has faded in recent years.
Of course, all this M&A rearranges the dance floor in interesting ways. Oracle currently OEMs HP’s Systinet as its SOA registry, an arrangement that might get awkward now that Oracle is getting into the hardware business. That places virtually all of AmberPoint’s relationships in question.
Application platforms are often like chunks of private real estate. If you have some sort of relationship, you are invited in. If you want to go next door, you will probably have to go back onto the street before ringing the neighbor’s doorbell because, unless you are very familiar with the people you are visiting, they will probably not let you cut directly through the back yard to the neighbor’s place. Chances are there will be some sort of fence, natural barrier, or No Trespassing sign in the way.
It’s the same with application platforms. If you are a Java developer working on a Java-based web application, you might be treated as part of the family. Of course, that depends on whether you are writing against a compatible back end that uses the same transaction and persistence models. If so, you can figuratively cut through the backyard without going back out to the street. If not, you might as well be writing against .NET or some other platform, meaning you could not write to it directly.
If you are writing a complex, highly distributed, transactional web application, you will probably have to treat each Java or .NET instance as an independent transaction system, each requiring its own housekeeping when it comes to running an application. That is, if System A works fine but System B crashes, you have to treat each as a self-contained system; you cannot roll System A back just because B crashed.
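To make the problem concrete, here is a toy Java sketch (our illustration, not JTA or any vendor’s API) of two autonomous transactional systems: once System A commits locally, a crash in System B leaves the pair inconsistent, because no coordinator spans both.

```java
// Toy stand-in for a platform-local transaction with its own commit state.
class LocalTx {
    final String name;
    boolean committed = false;
    LocalTx(String name) { this.name = name; }
    void commit() { committed = true; }
}

public class SplitTransactionDemo {
    public static void main(String[] args) {
        LocalTx systemA = new LocalTx("Java side");
        LocalTx systemB = new LocalTx(".NET side");

        systemA.commit();              // System A commits its local work...
        boolean systemBCrashed = true; // ...then System B crashes mid-flight.

        if (systemBCrashed) {
            // A's commit is already durable; with no coordinator spanning
            // both systems, we cannot roll A back, so they are out of sync.
            System.out.println("A committed=" + systemA.committed
                    + ", B committed=" + systemB.committed + " -> inconsistent");
        }
    }
}
```

The point of the sketch is simply that each side has its own commit state and nothing ties them together; that missing coordination is exactly what the options discussed below try to supply.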
In highly complex transactional environments, there are good reasons to keep different zones autonomous to avoid single points of failure. But depending on the application, there are equally if not more compelling reasons to treat them as a single virtual entity. Until now, the most effective way of doing so was to drop down to neutral, loosely coupled architectures such as SOA, which are theoretically supposed to enable the dynamic chaining of independent entities through self-contained service transactions that manage themselves. Or you could have taken your chances writing your own transactional logic, but that’s a pretty kludgy approach that gets awfully brittle each time some piece of code or an underlying platform changes.
JNBridge has just introduced a more traditionally tightly coupled option, which lets you write a Java application against a .NET source so that if one goes down, it rolls back the other and the systems do not fall out of sync. An evolutionary development in the progression of Java-.NET interoperability, JNBridge’s feature will be useful for exceptional situations where you have heterogeneous systems that must be tightly coupled.
In the end, the decision between SOA and a direct bridge using the JNBridge approach boils down to the architectural argument of whether it makes more sense to write tightly or loosely coupled systems. JNBridge CTO Wayne Citrin makes an interesting argument on the topic. Tightly coupled systems have their benefits, especially if the logic is relatively static and performance requirements are strict. Otherwise, a loosely coupled approach becomes the better option if either of the moving parts is likely to evolve independently and/or has the potential for reuse. At least JNBridge now gives you the choice in a scenario where No Trespassing signs were traditionally the rule.
Developers are a mighty stubborn bunch. Unlike the rest of the enterprise IT market, where a convergence of forces has favored a “nobody gets fired for buying IBM, Oracle, SAP, or Microsoft” mentality, developers have no such herding instincts. Developers do not always get with the [enterprise] program.
For evidence, recall what happened the last time the development market faced such consolidation. In the wake of web 1.0, the formerly fragmented development market – which used to revolve around dozens of languages and frameworks – congealed into Java vs. .NET camps. That was so 2002, however; in the interim, developers have gravitated toward alternatives of their own choosing.
The result was an explosion of what former Burton Group analyst Richard Monson-Haefel termed the Rebel Frameworks (that was back in 2004), and more recently a resurgence of scripting languages. In essence, developers didn’t take that future as inevitable, and for good reason: the so-called future of development circa 2002 was built on the assumption that everyone would gravitate to enterprise-class frameworks. Java and .NET were engineered on the assumption that the future of enterprise and Internet computing would be based on complex, multitier, distributed transactional systems. It was accompanied by a growing risk-averseness: buy only from vendors that you expect will remain viable. Not surprisingly, enterprise computing procurements narrowed to IOSM (IBM, Oracle, SAP, Microsoft).
But the developer community lives by a different dynamic. In an age of open source, expertise in development frameworks and languages gets dispersed, and vendor viability becomes less of a concern. More importantly, developers just want to get the job done, and anyway, the tasks they perform typically fall under the enterprise radar. Whereas a CFO may be concerned about the approach an ERP system employs to manage financial or supply chain processes, they are not going to care about development languages or frameworks.
The result is that developers remain independent minded, and that independence accounts for the popularity of alternatives to enterprise development platforms, with Ruby on Rails being the latest to enter the spotlight.
In one sense, Ruby’s path to prominence parallels Java’s in that the language was originally invented for another purpose. But there the similarity ends because, in Ruby’s case, no corporate entity really owned it. Ruby is a simple scripting language that became a viable alternative for web developers once David Heinemeier Hansson invented the Rails framework. The good news: Rails makes it easy to use Ruby to write relatively simple web database applications. Examples of Rails’ simplicity include:
• Eliminating the need to write configuration files for mapping requests to actions
• Avoiding multi-threading issues because Rails will not pool controller (logic) instances
• Dispensing with object-relational mapping files; instead, Rails automates much of this and relies on very simple naming conventions.
The bad news is that Rails has performance limitations and difficulties handling more complex distributed transactional applications. The good news is that when it comes to web apps, the vast majority are quite rudimentary, thank you.
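The naming-convention point in the list above is worth making concrete. Here is a rough Java sketch (our illustration; Rails implements this in Ruby, and real ActiveRecord also handles irregular plurals) of how a model class name can be mechanically turned into a table name, which is why no mapping file is needed:

```java
public class RailsNamingSketch {
    // Derive a table name from a model class name, Rails-convention style.
    static String tableNameFor(String className) {
        // CamelCase -> snake_case, e.g. "LineItem" -> "line_item"
        String snake = className
                .replaceAll("([a-z0-9])([A-Z])", "$1_$2")
                .toLowerCase();
        // Deliberately naive pluralization for illustration only.
        return snake.endsWith("s") ? snake + "es" : snake + "s";
    }

    public static void main(String[] args) {
        System.out.println(tableNameFor("Order"));    // orders
        System.out.println(tableNameFor("LineItem")); // line_items
    }
}
```

Because the mapping is a pure function of the class name, the framework can look up the table without any configuration file at all; that is convention over configuration in a nutshell.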
The result has propelled a wave of alternative stacks, such as LAMP (Linux, the Apache web server, MySQL, and either PHP, Python, or Perl) or, more recently, Ruby on Rails. At the other end of the spectrum, the Spring framework takes the same principle – simplification – to ease the pain of writing complex Java EE applications, but that’s not the segment addressed by PHP, MySQL, or Ruby on Rails. It reinforces the fact that, unlike the rest of the enterprise software market, developers don’t necessarily take orders from up top. Nobody told them to implement these alternative frameworks and languages.
The latest reminder of the strength of grassroots markets in the developer sector is Engine Yard’s securing of $19 million in Series C funding. The backing comes from some of the same players that also funded SpringSource (which was recently acquired by VMware). Some of the backing also comes from Amazon, whose Jeff Bezos is a personal investor in 37signals, the Chicago-based provider of project management software that employs Heinemeier Hansson. For the record, there is plenty of RoR presence in Amazon Web Services.
Engine Yard is an Infrastructure-as-a-Service (IaaS) provider that has optimized the RoR stack for runtime. Although hardly the only cloud provider that supports RoR development, Engine Yard is currently on a 2x growth streak. The funding positions the company either for an IPO or a buyout.
At this point the script sounds similar to SpringSource’s, which, of course, just got acquired by VMware and is launching a development and runtime cloud that will eventually become VMware’s Java counterpart to Microsoft Azure. It’s tempting to wonder whether a similar path awaits Engine Yard. The answer is that the question itself is too narrow. It is inevitable that a development and runtime cloud paired with enterprise plumbing (e.g., OS, hypervisor) will materialize for Ruby on Rails. With its $19 million in funding, Engine Yard has the chance to gain critical-mass mindshare in the RoR community – but don’t rule out rivals like Joyent yet.
With the ink not yet dry on VMware’s offer to buy SpringSource, it’s time for SpringSource to get back to its regularly scheduled program. That happens to be the unveiling of the Cloud Foundry developer preview: the announcement SpringSource was going to make before the program got interrupted by the wheels of finance.
Cloud Foundry, a recent SpringSource acquisition, brings SpringSource’s evolution from niche technology to lightweight stack provider full circle. Just as pre-Red Hat JBoss was considered a lightweight alternative to WebSphere and WebLogic, SpringSource is positioning itself as a kinder, gentler alternative to the growing JBoss-Red Hat stack. And that’s where the VMware connection comes into play, but more about that later.
The key, of course, is that SpringSource rides on the popularity of the Spring framework, around which the company was founded. The company claims the Spring framework now shows up in roughly half of all Java installations. Its success is attributable to the way Spring simplifies deployment to Java EE. But as popular as the Spring framework is, as an open source company, SpringSource monetizes only a fraction of all Spring framework deployments. So over the past few years it has been surrounding the framework with a stack of complementary lightweight technologies, encompassing:
• The Tomcat servlet container (a lightweight Java server) and the newer dm Server, which is based on OSGi technology;
• Hyperic as the management stack;
• Groovy and Grails, which provide dynamic scripting native to the JVM and an accompanying framework that makes Groovy programming easy; and
• Cloud Foundry, which gave SpringSource the technology to mount its offerings in the cloud.
From a mercenary standpoint, putting all the pieces out in a cloud lets SpringSource more thoroughly monetize open source assets that otherwise generate revenue only through support subscriptions.
But in another sense, you could consider SpringSource’s Cloud Foundry the Java equivalent of what Microsoft plans to do with Azure. In both cases, the goal is a Platform-as-a-Service offering based on familiar technology (Java, .NET) that can run in and outside the cloud. Microsoft calls it Software + Services. What both also have in common is that they are still in preview and not likely to go GA until next year.
But beyond the fact that SpringSource’s offering is Java-based, the combination with VMware adds yet a more basic differentiator. While Microsoft Azure is an attempt to preserve the Windows and Microsoft Office franchise, when you add VMware to the mix, the goal on SpringSource’s side is to make the OS irrelevant.
There are other intriguing possibilities in the link to VMware, such as the possibility that some of the principles of the Spring framework (e.g., dependency injection, which abstracts dependencies so developers don’t have to write all the necessary configuration files) might be applied to managing virtualization, which, untamed, could become quite a beast. And as we mentioned last week in the wake of the VMware announcement, SpringSource could do with some JVM virtualization so that each time you need to stretch the processing of Java objects, you don’t have to blindly sprawl out another VM container.
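For readers less familiar with the dependency-injection idea mentioned above, here is a minimal, hypothetical Java sketch of plain constructor injection, without the Spring container itself; in Spring, the hand-wiring shown in `main` would come from container configuration instead:

```java
// The component declares what it needs via an interface...
interface MessageStore {
    String fetch();
}

// ...and a hypothetical implementation satisfies it.
class InMemoryStore implements MessageStore {
    public String fetch() { return "hello from the injected store"; }
}

class Greeter {
    private final MessageStore store;
    // The dependency is injected; Greeter never constructs or configures it.
    Greeter(MessageStore store) { this.store = store; }
    String greet() { return store.fetch(); }
}

public class DiSketch {
    public static void main(String[] args) {
        // In Spring, this wiring would come from the container/config;
        // here we do it by hand to show the principle.
        Greeter greeter = new Greeter(new InMemoryStore());
        System.out.println(greeter.greet());
    }
}
```

Because the wiring lives outside the component, swapping `InMemoryStore` for something else touches no `Greeter` code, which is exactly the kind of abstraction that, speculatively, could be pointed at virtualization plumbing as well.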
VMware’s proposed $362 million acquisition of SpringSource is all about getting serious in competing with Salesforce.com and Google App Engine as the Platform-as-a-Service (PaaS) cloud with the technology that everybody already uses.
This acquisition was a means to an end, pairing two companies that could not be less alike. VMware is a household name that sells software through traditional commercial licenses and markets to IT operations. SpringSource is a grassroots, open source, developer-oriented firm whose business is a cottage industry by comparison. The cloud brought together two companies that each faced complementary limits on their growth. VMware needed to grow out beyond its hardware virtualization niche if it was to regain its groove, while SpringSource needed to grow up and find deeper pockets to become anything more than a popular niche player.
The fact is that providing a virtualization engine, even if you pad it with management utilities that act like an operating system, still gives you a raw cloud with little pull unless you go higher up the stack. Raw clouds appeal only to vendors that resell capacity or to large enterprises with deep benches of infrastructure expertise to run their own virtual environments. For the rest of us, we need a player that provides a deployment environment, handles the plumbing, and is married to a development environment. That is what Salesforce’s Force.com and Google’s App Engine are all about. VMware’s gambit is in a way very similar to Microsoft’s Software + Services strategy: use the software and platforms you are already used to, rather than some new environment, in a cloud setting. There’s nothing more familiar to large IT environments than VMware’s ESX virtualization engine, and in the Java community, there’s nothing more familiar than the Spring framework, which – according to the company – accounts for roughly half of all Java installations.
With roughly $60 million in stock options for SpringSource’s 150-person staff, VMware is intent on keeping the people, as it knows nothing about the Java business. Normally, we’d question a deal like this because the companies are so dissimilar. But the fact that they are complementary pieces of a PaaS offering gives the combination stickiness.
For instance, VMware’s vSphere cloud-management environment (in a fit of bravado, VMware calls it a cloud OS) can understand the resource consumption of VM containers; with SpringSource, it gets to peer inside the black box and understand why those containers are hogging resources. That provides more flexibility and smarts for optimizing virtualization strategies, and can help cloud customers answer the question: do we need to spin out more VMs, perform some load balancing, or re-apportion all those Spring tc (Tomcat) servlet containers?
The addition of SpringSource also complements VMware’s cloud portfolio in other ways. In his blog about the deal, SpringSource CEO Rod Johnson noted that pairing with VMware’s Lab Manager (that’s the test-lab automation piece VMware picked up through the Akimbi acquisition) proved highly popular with Spring framework customers. In actuality, if you extend Lab Manager from simply spinning out images of testbeds to spinning out runtime containers, you would have VMware’s answer to IBM’s recently introduced WebSphere CloudBurst appliance.
VMware isn’t finished, however. The most glaring omission is the need for distributed Java object caching to provide yet another avenue to scalability. If you rely only on spinning out more VMs, you get a highly rigid, one-dimensional cloud that will not provide the economies of scale and flexibility that clouds are supposed to deliver. So we wouldn’t be surprised if GigaSpaces or Terracotta were next on VMware’s acquisition list.
Let’s remind you that we included the word “in” in the title.
We’re not experts on BPM standards, but we’ve never been great fans of BPEL either. It’s one of those necessary things: if you want to make a business process executable, you’ll probably need something like BPEL. Imagine slicing and dicing a process into its constituent workflows, then exploding those workflows into a series of atomic steps that, at the end of the day, look more like computer log files. That’s BPEL for you.
Business stakeholders have long disdained BPEL, with a few deep-pocketed ones springing for BPM tools that use their own richer, proprietary syntax to generate Java. It has remained a niche, as classic BPM systems satisfied primarily those organizations with processes complex enough, but of high enough value, to be beyond the scope of packaged software.
So the riddle is whether you can make rich business process models portable. The XPDL folks claim you can, and they have done a thorough mapping to BPMN 1.1 to prove it. XPDL backers claim it gives you the best of both worlds: a rich, workflow-oriented language, and portability. Detractors, like IDS Scheer’s Sebastian Stein, say it’s fine as long as you don’t mind wading through a 216-page spec to do it. As for vendor support, if you’re talking to IBM, Oracle, or SAP, pulling teeth would be easier.
The big enterprise/middle-tier players would rather shift the subject to BPMN, which does provide the kinds of visual flowcharts and terms that domain experts and process designers understand, and also makes provision for translating those process flows into executables. Sounds fine, maintains Bruce Silver, except that until now the coupling to executables has been too tight. The existing 1.x version requires that service interfaces and data mappings be specified, which ironically makes it web-services-friendly but not really service-oriented. Yes, BPMN requires you to specify the services that process steps are supposed to fire, but in doing so it violates a key precept of SOA: there should be no dependencies. None. Zilch. Not even to an otherwise loosely coupled service. (Silver hopes that BPMN 2.0 will more effectively support portability.)
But at the end of the day, you have a process that you need to automate, it’s not covered by Oracle or SAP, and you don’t have a quarter million dollars for a big BPM tool. Active Endpoints, which began life as an OEM supplier of BPEL technology, has taken the approach of saying, leave it to developers. Not the rocket scientist Java EE or C# folks, but the departmental developers used to VB.
OK, maybe $30 – 50k is a bit rich for a departmental app, but in fact it’s probably more the sweet spot these days for corporate IT, which needs to stretch its dollars. But the guiding logic looks quite similar to what drove all those departmental VB apps that snuck through the back door under the radar of corporate IT: business units needed solutions and couldn’t afford to wait at the back of the line for corporate IT to burn through its project backlogs.
ActiveVOS, which is Active Endpoints’ product (a mouthful for a small company, yes?), takes a RAD approach to making BPEL just a little less BPEL. Instead of showing endless lines of BPEL XML, it aggregates the BPEL into process steps that look a bit like the workflow diagrams business process analysts consume. It also provides capabilities like process rewind, which works a lot like transaction rollback: if a live process starts to go bad, you get a bit of a do-over and can roll it back (data and all) to the last decent step. And yes, it will translate BPMN because, as you might recall, BPMN was designed to translate to an executable (we won’t dredge up all the baggage again).
Given that the product is still young – sales only picked up last year with the release of version 6 – it piqued our interest that in the first crappy quarter of this year, the company’s business still grew 20%.
Obviously, there is no single silver-bullet approach to BPM that will work for all comers. Maybe that’s partly why the BPM market has been slow to develop, or at least slow to coalesce into something clearly identifiable. On the other hand, maybe that’s not the point: the real prize is keeping application development as closely aligned with the business as possible in an era of reduced budgets, whatever it takes. ActiveVOS’s emergence reflects the fact that departmental VB-style development remains very much alive, and in many cases is the shortest path between two points. And if it leverages BPEL, so be it – that’s the execution language that WS-* web services understand.
It’s one of a number of emerging model-driven approaches that hope to make what was traditionally tactical application development more coherent and less random (the same point behind Microsoft Oslo, for instance). The rationale is that models are easier to manage and replicate, as they abstract physical implementation from content or core logic. We’re currently studying various approaches to what we’d call Business Whiteboarding, which gives business people, not developers, simpler onramps for formally declaring what a process is, or what their core business requirements are – so there’s less of a game of telephone and finger-pointing, and more accountability for the software that results at the end of a project request.
Chalk up another ancient family feud that has evaporated: Microsoft and OMG have agreed to bury the hatchet. The announcement was carried in a terse press release backed by a stilted, condescending video of server & tools VP Bob Muglia being interviewed by a junior PR or marketing aide who was obviously reading her lines off a teleprompter.
Microsoft and OMG had been butting heads going back to the dawn of client/server, as each backed its own definition of what would become the component architecture for the next generation of object-oriented software. Microsoft had its COM model, intended as its answer for enterprise Windows development, vs. CORBA, the model promoted by the rest of the industry (read: UNIX and legacy server providers) as the multi-vendor alternative. In other words, Microsoft’s de facto standard vs. the rest of the industry’s “open” one.
COM and CORBA eventually battled to a draw: COM was too platform-limited, while CORBA was too complex. Both eventually ceded ground to more “modern” architectures: the .NET Framework, which provided a multi-language runtime requiring managed (i.e., strongly typed) code, and J2EE, which provided a more accessible answer to CORBA. In turn, .NET and J2EE had their share of struggles: Microsoft waged a multi-year ground war to get VB developers to adopt more disciplined development under .NET, and J2EE remained too complex until open source alternatives such as Spring and Hibernate forced the JCP’s hand with the lighter-weight EJB 3.
We had a feeling that something was afoot when, at TechEd, Bill Gates’ valedictory keynote endorsed UML for Microsoft’s emerging Oslo business process initiative.
Ironically, Microsoft was one of the original backers of UML, and – justifiably in our view – backed away when UML itself grew too complex in its 2.0 version. To recap, UML is a visual modeling notation for describing the logical architecture of a software program. The problem was that with version 2.0, UML got larded with additional diagrams for modeling physical deployment of a program, a development that unfortunately made model-driven development (an OMG initiative) look far too complex for ordinary mortals. While some architects simply stuck to UML 1.x diagrams, others such as Microsoft began looking for more intuitive domain-specific language (DSL) alternatives.
Microsoft is still working with groups like the Business Process Alliance regarding DSLs. But more to the point, it realized that at the end of the day, UML remained the de facto standard of model-driven software development. Rather than start from scratch, Microsoft figured that the most direct path to Oslo, which is based on the notion of model-driven development, had to run through UML.
Consequently, Microsoft vs. OMG was just one of those wars that had to end with a whimper.
In the aftermath of IBM’s announcement of its intent to buy Ilog, it would be all too easy for us to reflect back on a conversation about survival in the software industry with Ilog’s CEO Pierre Haren last winter at its annual user conference. In Haren’s description of the typical life of a software vendor, first you get a handful of successful references, then you replicate to at least 20 – 30 successful accounts, and then you start thinking about what your company wants to do when it grows up: either specialize your solution for vertical or other niche sectors of the market, or change your role and move on. Haren’s implicit message: eat or be eaten.
We won’t take the cheap shot about IBM swallowing up Ilog because this deal makes too much sense.
Both companies know each other quite well, having been partners in one way or another for a dozen-odd years; Ilog’s business rules engine fills a key gap in the WebSphere Process Server BPM line; and, most tellingly, we saw IBM’s SOA strategy VP Sandy Carter keynote Ilog’s conference. IBM’s not going to haul out the big guns for just any sub-$200 million software company.
Ilog has had a case of multiple attention disorder for a number of years. Otherwise, how could you explain that a company of Ilog’s size would have not one or two, but three separate product families targeting almost completely different markets? Or that a $180 million company could support 500 partners? Ilog was best known to us and the enterprise software world as one of a handful of providers of industrial-strength business rules management systems. That is, when your rules for conducting business are so conditional and intertwined, you need a separate system to keep them from gumming up into a hairball. That condition tends to typify the world’s largest financial institutions. That’s enough for one business.
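For readers who haven’t lived with one, the core idea of a business rules management system is that the conditional logic lives as data, separate from application code, so the business can change policy without redeploying applications. Here’s a toy Python sketch of the pattern – this is emphatically not JRules syntax, and every rule and field name is hypothetical:

```python
# Toy illustration of the rules-engine pattern: rules are data, evaluated
# in priority order, first match wins. (Hypothetical rules, not JRules.)

RULES = [
    # (name, condition, action)
    ("vip_fast_track",   lambda o: o["customer_tier"] == "vip", "approve"),
    ("big_order_review", lambda o: o["amount"] > 50_000,        "manual_review"),
    ("bad_credit_block", lambda o: o["credit_score"] < 550,     "reject"),
    ("default_approve",  lambda o: True,                        "approve"),
]

def decide(order: dict) -> str:
    """Return the action of the first rule whose condition matches."""
    for name, condition, action in RULES:
        if condition(order):
            return action
    return "manual_review"  # unreachable with a catch-all rule, but safe

order = {"customer_tier": "standard", "amount": 80_000, "credit_score": 700}
print(decide(order))  # -> manual_review
```

When a bank has thousands of such rules, intertwined and frequently amended by regulators and product managers, keeping them in application code really does gum up into a hairball – hence a dedicated system to author, version, and execute them.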
But Ilog had two other product lines, one of them being an optimization engine that was OEM’ed to virtually every major supply chain management vendor, from SAP and Oracle to i2, Manugistics, Manhattan Associates and others. And by the way, it also had a cottage industry business selling visualization tools to ISVs.
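What an “optimization engine” does in a supply chain context, reduced to a toy: pick the allocation that minimizes cost subject to capacity constraints. Ilog’s engine (CPLEX) solves industrial-scale versions of this with linear and constraint programming; the brute-force Python sketch below, with entirely hypothetical numbers, just shows the shape of the problem:

```python
from itertools import product

# Toy supply-chain allocation: ship 100 units from two warehouses at
# minimum cost, respecting each warehouse's capacity. Brute force over
# 10-unit increments stands in for what a real LP solver does at scale.

DEMAND = 100
CAPACITY = {"warehouse_a": 70, "warehouse_b": 60}
UNIT_COST = {"warehouse_a": 4.0, "warehouse_b": 5.5}

def best_allocation():
    """Enumerate feasible (a, b) splits and return the cheapest one."""
    best, best_cost = None, float("inf")
    steps = range(0, DEMAND + 1, 10)
    for a, b in product(steps, steps):
        if a + b != DEMAND:
            continue  # must exactly meet demand
        if a > CAPACITY["warehouse_a"] or b > CAPACITY["warehouse_b"]:
            continue  # respect capacity constraints
        cost = a * UNIT_COST["warehouse_a"] + b * UNIT_COST["warehouse_b"]
        if cost < best_cost:
            best, best_cost = (a, b), cost
    return best, best_cost

allocation, cost = best_allocation()
print(allocation, cost)  # -> (70, 30) 445.0
```

The cheaper warehouse gets loaded to capacity and the remainder spills over – exactly the intuition, just scaled down from the millions of variables a supply chain vendor’s OEM’ed engine has to handle.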
So how do all these pieces fit together? Just about the only common thread we could think of was the case of a supply chain customer that not only uses the optimization engine, but has such a complex supply chain that it needs to manage all the rules and policy permutations separately. And not to leave loose ends untied, it needed a vivid graphics engine to visualize supply chain analyses so it could conduct better exception management.
Suffice it to say, that is not why Ilog had three separate business units. The company happened to grow satisfactorily, showing profits for seven straight years, so that it never had to face the uncomfortable question of refocusing. Had it stayed independent, it might have had to do so; while revenues grew roughly $20 million this year to $180 million, profits sank from $4.9 million last year to a paltry $500k this year. Blame it on currency fluctuations (based in France, Ilog would have had to discount in the US to keep customers happy), not to mention the mortgage crisis that has cratered the financial sector.
The good news is that Ilog is a great fit for IBM. Its rules engine adds a piece missing from WebSphere Process Server, and in fact has excellent synergy with a raft of IBM products, starting with WebSphere Business Events (which applies sophisticated rules to piecing together subtle patterns emerging from torrents of data), the FileNet content server, and WebSphere Business Fabric (the old Webify acquisition, providing frameworks for building vertical industry SOA templates) – and the list goes on. And that’s only the BRMS part. IBM Global Business Services and its Fishkill fab are customers of Ilog’s optimization engine, while Tivoli’s Netcool node manager uses Ilog’s visualization.
The sad part of the deal is that the acquisition will likely abort Ilog’s interesting foray into Microsoft’s Oslo vision, where it provides the business rules underpinning. Even if IBM wants to maintain that business, we’d be surprised if Microsoft stayed on board. Ilog went to the effort not simply of porting the Java-based JRules, but of writing a fully native .NET product. That’s analogous to what happened with Rational, whose Microsoft Visual Studio partnership originally dwarfed its ties with IBM.
Colleague James Taylor says that the acquisition portends the end of the rules management market as it will likely set off a wave of consolidation by major application/middle tier vendors. CIO UK’s Martin Veitch adds that “IBM is continuing to dance around the margins of enterprise applications” with the Ilog deal. We’d agree, just as with the previous acquisition of Webify and the bulking up of WebSphere Process Server, that it’s getting harder to see where tools leave off and applications pick up. In an era where all these pieces become service-oriented and theoretically composable, maybe that’s irrelevant.
Veitch takes issue with the broader implications for IBM and Oracle – that “These companies have become planets to be explored rather than recognisable fiefdoms of even 10 years ago,” and that “a lot of people are unimpressed by the levels of integration and R&D that follow the incessant deal-making.” Well, part of that may be to satisfy Wall Street, but the march toward agglomeration has become something of a self-fulfilling prophecy. That is, a $500 million software company is no longer considered large enough to be viable, and if customers are afraid for vendor survival, that reinforces the trend for IBM, Oracle, SAP, and Microsoft to gobble up what’s left. That’s a larger issue, above the pay grade of this post, but it ironically provides subtle reinforcement of what Haren told us roughly six months ago: that once a market gets to the billion-dollar level, it becomes prey for “bottom fishers” that push niche providers back into their niche.