As we’re all too aware, the tech field has always been susceptible to the fad of the buzzword, which of course gave birth to another buzzword, as popularized by Gartner’s Hype Cycles. But in essence the tech field is no different from the worlds of fashion or the latest wave in electronic gizmos – there’s always going to be some new gimmick on the block.
But when it comes to cloud, we’re just as guilty as the next would-be prognosticator, as it figured into several of our top predictions for 2009. In a year of batten-down-the-hatches psychology, anything that saves or postpones costs, avoids long-term commitment, and preserves all options (to scale up or ramp down) is going to be quite popular, and under certain scenarios, cloud services support all of that.
And so it shouldn’t be surprising that roughly a decade after Salesforce.com re-popularized the concept (remember, today’s cloud is yesterday’s time-sharing), the cloud is beginning to shake up how software developers approach application development. But in studying the extent to which the cloud has impacted software development for our day job at Ovum, we came across some interesting findings that had their share of surprises.
ALM vendors, like their counterparts on the applications side, are still figuring out how the cloud will impact their business. While there is no shortage of hosted tools addressing different tasks in the software development lifecycle (SDLC), major players such as IBM/Rational have yet to show their cards. In fact, there was a huge gulf in cloud-readiness between IBM and HP, whose former Mercury unit has been offering hosted performance testing capabilities for 7 or 8 years, and is steadily expanding hosted offerings to much of the rest of its BTO software portfolio.
More surprising was the difficulty of defining what Platform-as-a-Service (PaaS) actually means. There is the popular definition and then the purist one. For instance, cloud service providers such as Salesforce.com employ the term PaaS liberally in promoting their Force.com development platform. In actuality, development for the Force.com platform uses coding tools that run not on Salesforce’s servers, but locally on the developer’s own machine. Only once the code is compiled is it migrated to the developer’s Force.com sandbox, where it is tested and staged prior to deployment. For now, the same principle applies to Microsoft Azure.
That throws plenty of ambiguity on the term PaaS – does it refer to development inside the cloud, or development of apps that run in the cloud? The distinction is important, not only to resolve marketplace confusion and realistically manage developer expectations, but also to highlight the reality that apps designed to run inside a SaaS provider’s cloud are going to be architecturally different from those deployed locally. Under the Salesforce definition of PaaS, apps that run in its cloud are designed around the fact that the Salesforce engine handles all the underlying plumbing. In this case, it also highlights the very design of Salesforce’s Apex programming language, which is essentially a stored-procedures variant of Java. It’s a style of development popular from the early days of client/server, when the design pattern of embedding logic inside the database was viewed as a realistic workaround to the bottlenecks of code running from fat clients. Significantly, it runs against common design patterns for highly distributed applications, and of course against the principles of SOA, which call for loosely coupling the logic and abstracting it from the physical implementation. In plain English, this means that developers of apps that run in the cloud may have to make some very stark architectural choices.
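To make that architectural fork concrete, here is a minimal Python sketch – all names are hypothetical, and bear no relation to actual Apex or Force.com APIs. The first function welds business logic to a platform’s data engine, stored-procedure style; the second writes the same logic against an abstract service interface, so the physical implementation (hosted or on-premises) can be swapped out:

```python
from abc import ABC, abstractmethod

# Tightly coupled: business logic calls a platform-specific data API
# directly (hypothetical names), in the spirit of stored-procedure-style
# development. Porting this app means rewriting this logic.
def apply_discount_tightly_coupled(platform_db, order_id):
    order = platform_db.query(f"SELECT total FROM Order WHERE id = '{order_id}'")
    return order["total"] * 0.9

# Loosely coupled: the same logic written against an abstract service
# interface, so the underlying plumbing can vary without touching it.
class OrderStore(ABC):
    @abstractmethod
    def get_total(self, order_id: str) -> float: ...

def apply_discount(store: OrderStore, order_id: str) -> float:
    return store.get_total(order_id) * 0.9

# One possible backing implementation; a cloud-hosted store could
# implement the same interface.
class InMemoryOrderStore(OrderStore):
    def __init__(self, orders):
        self.orders = orders

    def get_total(self, order_id):
        return self.orders[order_id]

store = InMemoryOrderStore({"A-100": 200.0})
print(apply_discount(store, "A-100"))  # 180.0
```

The point of the sketch is not the discount math but the dependency: the first version cannot leave the platform, the second can.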
Nonetheless, there’s nothing bad about all that — it’s just that when you write logic inside a PaaS platform, it is like writing to any application platform. It’s nothing different, better, or worse than writing to Oracle or SAP. The only difference is that the code lives in the cloud, and there may be good operational, budgetary, or strategic reasons for that.
UPDATE: In a recent column, Andrew Brust described how Microsoft came to terms with this issue during its current rollout of SQL Data Services.
The confusion over PaaS could be viewed as a battle over vendor lock-in. It would be difficult to port an application running in the Salesforce cloud to another cloud provider, or transition it to on-premises, because the logic is tightly coupled to Salesforce’s plumbing. This sets the stage for future differentiation of players like Microsoft, whose Software + Services is supposed to make the transition between cloud and on-premises seamless; in actuality, that will prove more difficult unless the applications are written in a strictly loosely-coupled, service-oriented manner. But that’s another discussion that applies to all cloud software, not just ALM tools.
But the flipside of this issue is that there are very good reasons why much of what passes for PaaS involves on-premises development. And that in turn provides keen insights as to which SDLC tasks work best in the cloud and which do not.
The main don’ts consist of anything having to do with source code, for two reasons: network latency and IP protection. The first is obvious: who wants to write a line of code and wait for it to register with the system, only to find out that the server or network connection went down and you have to retype your code all over again? Imagine how aggravating that would be with highly complex logic; obviously no developer, sane or otherwise, would have such patience. Ditto for code check-in/check-out, or for running the usual array of static checks and debugging. Developers have enough things to worry about without having to wait for the network to respond.
Of more concern, however, is the issue of IP protection: while your program is in source code and not yet compiled or obfuscated, anybody can get to it. The code is naked, in a language that any determined hacker can read. Now consider that unless you’re automating a lowly task like queuing up a line of messages or printers, your source code is business logic that represents in software how your company does business. Would any developer wishing to remain on the payroll the following week dare place code in an online repository that, no matter how rigorous the access control, could be laid open by determined hackers for whatever nefarious purpose?
If you keep your logic innocuous or sufficiently generic (such as using hosted services like Zoho or Bungee Connect), developing online may be fine (we’ll probably get hate mail for that). Otherwise, it shouldn’t be surprising that no ALM vendor has yet placed, or is likely to place, code-heavy IDEs or source code control systems online. OK, Mozilla has opened the Bespin project, but just because you could write code online doesn’t mean you should.
Conversely, anything that is resource-intensive, like performance testing, does well in the cloud because, unless you’re a software vendor, you don’t produce major software releases constantly. You need lots of resources, occasionally, to load and performance test those apps (whose code by that point is compiled anyway). That’s a great use of the cloud, as HP’s Mercury has been demonstrating since around 2001.
Similarly, anything having to do with the social or collaboration aspects of software development lends itself well to the cloud. Project management, scheduling, task lists, requirements, and defect management all suit the cloud, as these are at their core group functions where communication is essential to keeping projects in sync and all members of the team – wherever they are located – literally on the same page. Of course, there is a huge caveat here: if your company designs embedded software that goes into products, it is not a good candidate for the cloud. Imagine a rival getting hold of Apple’s project plans for the next version of the iPhone.
Thank you, Larry, for finally putting us out of our misery, as Oracle has silenced the chattering classes (mea culpa) with a $9.50/share bid for Sun (almost smack dab in the middle between IBM’s original and revised lower bids).
In many ways this deal brings events full circle between Oracle and Sun. The obvious part is that the deal solidifies Oracle’s enterprise stack vs. IBM in that Oracle can now go fully mano a mano against IBM for the enterprise data center, database, platform and all. It also provides a welcome counterbalance to IBM for control over Java’s destiny. While the deal is likely to finally put NetBeans out of its misery, it means that there will be a competition over direction of the Java stack that is borne of realpolitik, not religion.
More importantly, it finally gives Solaris a reason for existence as it returns to serving as Oracle’s reference platform. In a way, you could say the two companies were twins separated at birth, as both emerged as de facto reference platforms for UNIX in the 80s; the bond was sealed with Sun’s purchase of some of the assets of Cray in the mid 90s, which finally gave Sun an enterprise server on which Oracle could raise the ante on IBM. Aside from HP’s brief challenge with SAP in the mid 90s, Solaris has always been the biggest platform for Oracle.
But after the dot-com bust and the emergence of Linux, Solaris lost its relevance as open source provided an 80/20 alternative that was good enough for most dot-coms. That left Sun with an identity crisis, debated much on these pages and elsewhere, as to its next act. Under Jonathan Schwartz’s watch, Sun tried becoming the enterprise counterweight to Red Hat – all the goodness of open source, MySQL too, but with the bulletproofing that Red Hat and SuSE were missing. As we noted a few weeks back: great idea, but not enough to support a $5 billion business.
Now Solaris becomes part of the Oracle enterprise stack – a marriage that makes sense as businesses investing in high end enterprise applications are going to expect umbrella deals. In other words, now Oracle has the complete deal to counter IBM. Oracle in the past has flirted with database appliances and certified implementations – now it doesn’t have to flirt anymore. More importantly, it provides a natural platform for Oracle to offer its own cloud.
The deal protects Sun’s – likely soon to be Oracle’s – hold on the Solaris installed base more than it protects the Oracle database, application, or middleware stack. Basically, UNIX hardware is a commodity, more readily replaced than databases or applications. That’s why you saw Sun try to develop a software business over the years: it desperately needed something firmer to anchor Solaris. Oracle seals the deal.
Obviously, this one makes PeopleSoft and Siebel look like walks in the park, if you compare the scale of the deal. Miko Matsumura and Vinnie Mirchandani have their doubts as to how well this beast will swallow its prey.
CORRECTION: The PeopleSoft deal was larger and marked the beginning of Oracle’s grand acquisitions spree. But this deal marks a major new chapter in the way it could transform Oracle’s core business.
While there is plenty of discussion of how this changes the lineup of who delivers to the data center, we’ll focus on some of the interesting implications for developers.
For starters, my colleague Dana Gardner had an interesting take on what this means for MySQL, which he calls MyToast. We concur with the rest of his analysis – but depart from it on this one. First, this is open source, and in this case, open source where the genie is already out of the bottle. Were Oracle to try killing MySQL, there would be nothing to stop enterprising open source developers and some of the old MySQL team from developing a YourSQL. Secondly, MySQL was never going to seriously compete with Oracle, as the database, in spite of improvements, remains too underpowered. Our take is that Oracle could take the opportunity to cultivate the base and develop the makings of a lightweight middleware stack that would for the most part be found money, rather than cannibalization of its core business.
The other interesting question concerns Java. Three words: NetBeans is history.
Sun’s problem was that the company was too much under the control of engineers – otherwise, how to explain why the company kept painting itself into corners with technologies increasingly off the mainstream, like NetBeans, or the more recent JavaFX Java-native rich internet client? Now that it “owns” the origins of the Java stack, we expect Oracle to provide a counterweight to IBM/Eclipse, but as mentioned earlier, it will be one borne of nuance rather than religion. You can see it already in Oracle’s bifurcated Eclipse strategy, where its core development platform, JDeveloper, is not Eclipse-compliant, but the recently acquired BEA stack is. In some areas, such as Java persistence, Oracle has taken lead billing. Anyway, as Eclipse has spread from developer tool to runtime platform, why would Oracle give up its position as a member of Eclipse’s board?
We see a different fate for JavaFX, however. If you recall, one of the first things that Oracle did after closing the BEA acquisition was drop BEA’s deal to bundle the Adobe Flash client as part of its Java development suite. Instead, Oracle’s RIA strategy consisted of donating its JavaServer Faces (JSF) technology to Apache as the MyFaces project. As JSF is server-side technology for deploying the MVC framework to Java, we expect that Oracle will position JavaFX as the lightweight Java-native rich client alternative, providing web developers dual alternatives for deploying rich functionality.
Let’s remind you that we included the word “in” in the title.
We’re not experts on BPM standards, but we’ve never been great fans of BPEL either. It’s one of those necessary things: if you want to make a business process executable, you’ll probably need something like BPEL. Imagine slicing and dicing a process down into its constituent workflows, then exploding those workflows into a series of atomic steps that at the end of the day look more like computer log files. That’s BPEL for you.
Business stakeholders have long disdained BPEL, with a few deep-pocketed ones springing for BPM tools that use their own richer, proprietary syntax to generate Java. It’s remained a niche, as classic BPM systems satisfied primarily those organizations with processes sufficiently complex, but of high enough value, to fall beyond the scope of packaged software.
So the riddle is whether you can make rich business process models portable. The XPDL folks claim you can, and they have done a thorough mapping to BPMN 1.1 to prove it. XPDL backers claim it gives you the best of both worlds: a rich, workflow-oriented language, and portability. Detractors, like IDS Scheer’s Sebastian Stein, say it’s fine as long as you don’t mind wading through a 216-page spec to do it. As for vendor support, if you’re talking to IBM, Oracle, or SAP, pulling teeth would be easier.
The big enterprise/middle tier players would rather shift the subject to BPMN, which does provide the kinds of visual flow charts and terms that domain experts and process designers understand, and also makes provision for translating those process flows to executables. Sounds fine, except, maintains Bruce Silver, that until now the coupling to executables has been too tight. The existing 1.x version requires that those service interfaces and data mappings be specified, which ironically makes them web services-friendly but not really service-oriented. Yes, BPMN requires you to specify the services that process steps are supposed to fire, but that violates a key precept of SOA, which is that there should be no dependencies. None. Zilch. Not even to an otherwise loosely-coupled service. (Silver hopes that BPMN 2.0 will more effectively support portability.)
But at the end of the day, you have a process that you need to automate, it’s not covered by Oracle or SAP, and you don’t have a quarter million dollars for a big BPM tool. Active Endpoints, which began life as an OEM supplier of BPEL technology, has taken the approach of saying, leave it to developers. Not the rocket scientist Java EE or C# folks, but the departmental developers used to VB.
OK, maybe the $30 – 50k is a bit rich for a departmental app, but in fact, it’s probably more the sweet spot these days for corporate IT which needs to stretch its dollars. But the guiding logic looks quite similar to what drove all those departmental VB apps that often snuck through the back door under the radar of corporate IT: business units needed solutions and couldn’t afford to wait at the back of the line for corporate IT to burn through its project backlogs.
ActiveVOS, Active Endpoints’ product (a mouthful for a small company, yes?), takes a RAD approach to making BPEL just a little less BPEL. Instead of presenting endless lines of BPEL XML, it aggregates the BPEL into process steps that look a bit like the workflow diagrams that business process analysts consume. It also provides capabilities like process rewind, which is a lot like transaction rollback – if a live process starts to go bad, you get a bit of a do-over and can roll it back (data and all) to the last decent step. And yes, it will translate BPMN, because as you might recall, BPMN was designed to translate to an executable (we won’t dredge up all the baggage again).
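The rewind idea is easy to sketch. The toy Python below is our own illustration of the concept, not ActiveVOS’s actual mechanism: snapshot process state before each step, and on a bad step roll the process (data and all) back to the last decent one.

```python
import copy

# Toy sketch of "process rewind": snapshot the process state before each
# step so a bad step can be undone, transaction-rollback style.
class Process:
    def __init__(self, state):
        self.state = state
        self.snapshots = []  # (step_name, state) captured before each step

    def run_step(self, name, step_fn):
        # Capture a deep copy first, so later mutations can't corrupt it.
        self.snapshots.append((name, copy.deepcopy(self.state)))
        step_fn(self.state)

    def rewind(self):
        # Restore the state captured just before the most recent step.
        name, state = self.snapshots.pop()
        self.state = state
        return name

proc = Process({"approved": False, "total": 100})

proc.run_step("apply_surcharge", lambda s: s.update(total=s["total"] + 25))
proc.run_step("approve", lambda s: s.update(approved=True))

# The approval turns out to be premature: rewind to the state before it.
rewound = proc.rewind()
print(rewound, proc.state)  # approve {'approved': False, 'total': 125}
```

Real engines have to reconcile rewinds with external side effects (messages already sent, partners already invoked), which is where compensation logic comes in; the snapshot idea is just the data half of the story.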
Given that the product is still young – sales only picked up last year with the release of version 6 – it piqued our interest that in the first crappy quarter of this year, the company’s business still grew 20%.
Obviously, there is no single silver-bullet approach to BPM that will work for all comers. Maybe in part that’s been a speed bump impeding development of the BPM market, or at least one that is obviously identifiable. On the other hand, maybe that’s not the point, as the prize really is to keep application development as closely aligned with the business as possible in an era of reduced budgets, whatever it takes. ActiveVOS’s emergence reflects the fact that departmental VB-style development remains very much alive, and in many cases, the shortest path between two points. And if it leverages BPEL, so be it – that’s the execution language that WS-* web services understand.
It’s one of a number of emerging model-driven approaches that hope to make what was traditionally tactical application development more coherent and less random (the same point behind Microsoft Oslo, for instance). The rationale is that models are easier to manage and replicate, as they abstract the physical implementation from content or core logic. We’re currently studying various approaches to what we’d call Business Whiteboarding, which gives business people, not developers, simpler onramps to more formally declare what a process is, or what their core business requirements are, so there’s less of a game of telephone and finger-pointing, and more accountability for the software that results at the end of a project request.
As we and many others have opined, one of the greatest tremors to have reshaped the software business over the past decade has been the emergence of open source. Open source has changed the models of virtually every major player in the software business, from niche startups to household names. It’s hollowed out sectors like content management, where open source has replaced the entry-level package; unless you’re implementing content systems as an extension of an enterprise middle-tier platform strategy, open source platforms like Joomla or the social networking-oriented Drupal will provide a perfectly slick, professional website.
Multiple open source models, ranging from GNU copyleft to Apache/BSD-style permissive licensing, plus a wide range of community sourcing and open technology previews, have scattered across the landscape. Of course, perennial issues like whether open source robs Peter to pay Paul or boosts innovation persist.
What’s new is the emergence of the cloud, a form of computing that has resisted the platform standardization that open source both created and depends upon. Behind proprietary cloud APIs, does open source still matter? We sat in on a recent Dana Gardner BriefingsDirect podcast that updated the discussion, along with Paul Fremantle, chief technology officer at WSO2 and a vice president with the Apache Software Foundation; Miko Matsumura, vice president and deputy CTO at Software AG; Richard Seibt, former CEO at SUSE Linux and founder of the Open Source Business Foundation; Jim Kobielus, senior analyst at Forrester Research; JP Morgenthal, independent analyst and IT consultant; and David A. Kelly, president of Upside Research.
Read the transcript here, and listen to the podcast here.
We had a hard time imagining exactly why Sun was worth $7 billion to IBM, and upon completing its due diligence, evidently so did IBM. Yet in spite of a slightly reduced offer that, according to the New York Times, went from $9.55 a share to $9.40, we wonder what was going through the minds of Sun’s board, which according to the Wall Street Journal was split: Jonathan Schwartz’s faction supposedly in favor and, not surprisingly, Scott McNealy’s against. Evidently, even in retirement, McNealy still calls the shots.
Looking back, McNealy seemed more interested in being right than in adapting to structural changes in the marketplace that made Sun’s posturing irrelevant. As Sun was wasting its energy fighting rather than accommodating Microsoft over Java, IBM did an end-around with Eclipse that shifted the center of the Java universe away from the JCP. Meanwhile, the emergence of Linux eroded the very foundations of Sun’s business.
We’ve had a hard time figuring out what other exit strategy remains for Sun’s beleaguered shareholders. Yes, Sun hedged its bets by signing a Solaris x86 distribution deal with HP for its ProLiant servers at the end of February, but for all practical purposes, nobody matches Fujitsu’s footprint as a Solaris OEM. As we’ve argued previously, Fujitsu would be the most logical resting place for Sun’s SPARC server business, and there’s some precedent for Fujitsu to make such investments, as it recently bought Siemens out of its x86 Fujitsu Siemens joint venture. Besides, as the largest Solaris OEM, Fujitsu has real skin in the game for its survival.
Sun’s problems are hardly new. While open source has become the mantra under Jonathan Schwartz’s watch, we have a hard time figuring out how it’s going to drive a $5 billion business built on lower-volume, high-margin sales. Some have drunk the Kool-Aid; Silicon Valley entrepreneur Sramana Mitra argued that Sun should fully walk Schwartz’s open source talk. OK, Sun’s Open Storage, CMT (Niagara), and x86 systems businesses have lately grown at double-digit rates, but for Sun to make the transition, it would have to become the open source counterpart of Dell.
But we’d have agreed with Mitra had Sun made the move when Schwartz took the helm back in 2006. With the economy much better then, and the market expecting Schwartz to make a real break with the past, think of how Sun could have reinvented itself had Schwartz, as one of his first moves, divested the SPARC business. Had Fujitsu been interested, Sun could have received a few billion, which could have funded a real makeover into a high-volume but lower-margin business; maybe some (or most) of it could have been used to shut Wall Street up and take the company private. Just imagine.
IBM’s $7 billion was simply a play to surround HP with more market share; with aggressive selling, it could seriously eat into Sun’s business without the buyout – servers are replaced far more readily than software. IBM has better things to do with its money.
Dana Blankenhorn had the best take on Sun’s fickleness, equating the company to the Dodgers’ Manny Ramirez, who walked away from a $5 million offer, only to take it several months later after nothing else surfaced.