This guest post comes from Ovum colleague Michael Azoff.
Agile practices have been around for over twenty years. The Agile Manifesto was written a decade after agile practices first emerged (under different names, of course; the term ‘agile’ itself was coined at the 2001 manifesto meeting). There are also plenty of proof points around what works in agile and when to apply it. If you are still asking for agile to prove itself, you have missed how far software development has progressed.
Going back to waterfall is not an option because it has inherent faults, and those faults are visible all around in many failed IT projects. Ultimately, if waterfall is not broken for you, then don’t fix it. But you should consider alternatives to waterfall if your software development processes or organization have become dysfunctional; over time, you may also find it difficult to recruit developers for legacy processes, but that is another issue.
Ken Schwaber, a co-originator of Scrum, has said that only 25% of Scrum deployments succeed. The question, then, is what happens in the other 75% that fail. The problem can be examined at three levels of maturity: intra-team agility, extra-team agility, and business agility.
Teams may not be perfectly pure about their agile adoption, and we can get into the discussions Jeff Sutherland has had about ‘ScrumBut’ scenarios (i.e. Scrum, but without some Scrum practices). At some point, however, the team’s partial adoption of Scrum leads to failure. It could also be that cultural impediments prevent certain agile practices from taking root: a highly hierarchical organization will be antithetical to the practice of self-organizing agile teams, for example.
The interface between the business and an agile team can harbor impediments. For example, processes on the business side may originally have evolved to support waterfall and now constrain a team that has transitioned to agile. In this scenario, the failure of agile is a problem that spans beyond intra-team agile adoption and across the business-IT interface.
The biggest challenge and opportunity is with the organization as a whole: Can the business transform its agility? Can the business become agile and thereby make the agile IT department an integral part of the business, rather than a department in the basement that no executive visits? Today, many major businesses are essentially IT businesses and divorcing the IT team from the business becomes a serious handicap – witness successful businesses in technology, financial services, retail and more, where IT and the business are integral and are agile about it.
There is no magic recipe for agile adoption, and in practice the most successful agile transformations are those where the team goes through a learning process of self-discovery. Introducing agile practices, using trial and error, learning through experience, and seeing what works and what does not allows the team to evolve its agility and fit it to the constraints of the organization’s culture.
Organizations need support, training, and coaching in their agile transformation, but the need for business agility grows with the scale of the IT project. Large-scale agile projects can be swamped by business waterfall processes that impede their agility at levels above core software development. Interestingly, there are cases where agility at the higher levels is introduced and succeeds while intra-team processes remain waterfall. There is no simple ‘right’ way to adopt agile. It all depends on the individual case, but as long as we are agile about agile adoption, we can avoid agile failure, or at least improve on what went before. Failure in adopting agile is not a reason to give up on agile, but to re-think the problem and see what can be improved, incrementally.
And now for something completely different. This week, we offer a guest post from my Ovum colleague and agile methodology expert Michael Azoff.
Software development is more art than science: more about sociology than computer science — Agile has demonstrated that. The dream of computer scientists back in the 1970s, an early era of computing, was that all you needed was a perfect specification, and that programmers simply had to implement that spec. And what was implied? Maybe one day you could automate that step and remove the need for a human programmer. Of course, that dream didn’t happen and could never be fulfilled. The reasons are twofold: change and people.
* Change: because you can never nail down a spec perfectly upfront for most projects. Change is introduced during the lifetime of the project, so even if you had that perfect spec it can easily go stale.
* People: because for anything but the smallest projects, you need a team or multiple teams. And when people interact, there is scope for miscommunication and misunderstandings.
It is no joke when project leaders, looking back on large-scale project failures, say that if they could rewind history and try again, rather than the hundred developers that were used, they would pick the ten ablest developers and get the job completed, and in short order.
Fast forward to today: Agile methodology has reached beyond the innovators and visionaries and has arguably gone mainstream. In practice that means various contortions and customizations of Agile methodologies exist, entwined with other processes and methodologies found in organizations, including neo-waterfall.
Neo-waterfall is an interesting case. I use that term because I do not believe developers ever did strict waterfall — if they had, the job would never have been accomplished. So there was a hint of agile even in classic waterfall. Developers generally do what is necessary to get the job done and present the results to management in whatever form management expects. Some form of iteration is essential, call it rework or doing it twice or whatever, because most software requirements are unique and implementing them perfectly right the first time is difficult.
So now we have a situation where Agile adoption has reached the masses and organizations are ready to try it alongside other options, or, in some cases using only Agile and nothing else. The question is where do we go from here? Have we now solved the software development problem? (To recognize that there is a problem, read Fred Brooks’ The Mythical Man-Month). First of all, the overall (research and anecdotal) evidence is in favor of Agile: it is a step in the right direction (actually major strides forward). Agile methodologies solve development project management problems better than other known methodologies and processes.
However, Agile is not the end of the software development road. There is a “beyond Agile.” The idea is to retain the strengths of Agile and improve its weaker areas.
On the strengths side: the values and principles as expressed in the Agile Manifesto; the philosophy of adaptability and continuous learning (there is good overlap with Lean thinking here); the embracing of change; the emphasis on delivering business value; the iteration heartbeat; the retrospectives for making continuous learning happen; the use of testing throughout the lifecycle; gaining feedback from users; getting the business involved; applying macro-management to the team, with a multi-skilled team self-organizing. The list continues: pair programming, test-driven development, etc.
However, what will change in ‘beyond Agile’ are the areas that Agile has addressed less well. The emphasis in early-phase Agile has been on small teams where possible. The problem is that some enterprise projects are very large scale and need many teams on a global basis. Various Agile development groups have addressed these issues, but there is no consensus. The use of architecture and modeling also varies across these approaches. I expect some new form of Agile-friendly architecture and modeling will emerge. Certainly, the technology needs to improve: nothing quite beats the adaptability and versatility (the agility) of programming languages — and creating software by drawing UML diagrams alone is dreadfully dull.
Another fault line has to do with QA and testing. I have listened to developers describe how they had to bypass the (non-Agile) QA facility within their organization because it became a bottleneck, taking on the job of QA themselves. That illustrates how QA and development have become separated in some organizations. ‘Beyond Agile’, I envisage, will see QA and testing (the whole range, not just developer testing) become better integrated with development. While Agile developers have embraced quality and testing, the expertise in traditional QA and testing should not be lost.
Managing Agile stories, vast numbers running into the thousands, and dealing with interdependencies and the transformation from business orientation to technical orientation, is another area that could benefit from refinement.
While Agile expands its reach beyond development into operations in DevOps, and into business development (where Lean thinking is already established), the question is whether in the future the practices will be recognizably Agile or follow a new development wave. My hunch is that it will be recognizably rooted in our current understanding of Agile. It would be just fine if Agile became so established and traditional that we called it simply ‘software development’, without further distinction.
Event notice: Special 10th anniversary: The Agile Alliance’s Agile2011 Conference, Aug 8-12, 2011, will be revisiting Salt Lake City, Utah, where the Agile Manifesto was written back in 2001. I’m told the original signatories of the Agile Manifesto will be on stage to debate the progress of Agile.
A South Jersey neighbor of ours — runner, educator, and open source mischief maker Bob Bickel — recently blogged a status report on what’s been going on with the Jenkins open source project ever since it split off from Hudson.
That’s prompted us to wade in to ask the question that’s been glossed over by the theatrics: what about the user?
For background: This is a case of a promising grassroots technology that took off beyond expectation and became a victim of its own success: governance just did not keep up with the project’s growing popularity and attractiveness to enterprise developers. The sign of a mature open source project is that its governing body has successfully balanced the conflicting pressures of constant innovation vs. the need to slow things down for stable, enterprise-ready releases. Hudson failed in this regard.
That led to unfortunate conflicts that degenerated to stupid, petty, and entirely avoidable personality squabbles that in the end screwed the very enterprise users that were pumping much of the oxygen in. We know the actors on both sides – who in their everyday roles are normal, reasonable people that got caught up in mob frenzy. Both sides shot themselves in the feet as matters careened out of control. Go to SD Times if you want the blow by blow.
So what is Hudson – or Jenkins – and why is it important?
Hudson is a continuous integration (CI) server open source project that grew very popular with Java developers. The purpose of a CI server is to support the agile practice of continuous integration with a server that maintains the latest copy of the truth. The project was the brainchild of Kohsuke Kawaguchi, formerly of Sun and Oracle and now a CloudBees employee.
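Stripped to its essentials, a CI server’s job is a polling loop: notice a new revision, run the build and tests, and record the result per revision. The toy sketch below captures that control flow in Python; the revisions and build outcomes are invented, and a real server like Hudson would shell out to the SCM and build tools rather than call stub functions.

```python
# A toy continuous integration loop: poll for new revisions, build each
# one once, and record pass/fail per revision. SCM access and the build
# command are stubbed out so the control flow is the focus.

def ci_cycle(get_revision, run_build, state):
    """One polling cycle. `state` maps revision id -> build result."""
    rev = get_revision()
    if rev not in state:          # new commit detected: trigger a build
        state[rev] = run_build(rev)
    return state

# Hypothetical usage: revisions arrive over successive polls; the build
# passes unless the revision is a known-bad one.
revisions = iter(["a1", "a1", "b2", "c3"])

def fake_build(rev):
    return "FAILED" if rev == "b2" else "PASSED"

state = {}
for _ in range(4):
    ci_cycle(lambda: next(revisions), fake_build, state)

# state now records one result per distinct revision:
# {"a1": "PASSED", "b2": "FAILED", "c3": "PASSED"}
```

The second poll of `"a1"` triggers no rebuild, which is the whole point of a CI server keeping a record of the latest known-good truth.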
Since the split, it has forked into the Hudson and Jenkins branches, with Jenkins having attracted the vast majority of committers and much livelier mailing list activity. Bickel has given us a good snapshot from the Jenkins side with which he’s aligned: a diverse governance body has been established that plans to publish results of its meetings and commit, not only to continuing the crazy schedule of weekly builds, but “stable” quarterly releases. The plan is to go “stable” with the recent 1.400 release, for which a stream of patches is underway.
So most of the committers have gone to Jenkins. Case closed? From the Hudson side, Jason van Zyl of Sonatype, whose business was built around Apache Maven, states that the essential plug-ins are already in the existing Hudson version, and that the current work is more about consolidating the technology already in place, testing it, and refactoring to comply with JSR 330, built around the dependency injection technology popularized by the Spring framework. Although the promises are to keep the APIs stable, this is going to be a rewrite of the innards of Hudson.
Behind the scenes, Sonatype is competing on the natural affinity between Maven and Hudson, which share a large mass of common users, while the emerging force behind Jenkins is Cloudbees, which wants to establish itself as the leading platform for Java development in the cloud.
So if you’re wondering what to do, join the crowd. There are bigger commercial forces at work, but as far as you’re concerned, you want stable releases that don’t break the APIs you already use. Jenkins must prove it’s not just the favorite of the hard core, but that its governance structure has grown up to provide stability and assurance to the enterprise, while Hudson must prove that the new rewrite won’t destabilize the old, and that it has managed to retain the enterprise base in spite of all the noise otherwise.
April 28, 2011 update. Bob Bickel has reported to me that since the “divorce,” Jenkins has drawn 733 commits vs. 172 for Hudson.
May 4, 2011 update. Oracle has decided to submit the Hudson project to the Eclipse Foundation. Eclipse board member Mik Kersten voices his support of this effort. Oracle says it didn’t consider this before because going to Eclipse was originally perceived as being too heavyweight. This leaves us wondering: why didn’t Oracle propose this earlier? Where was the common sense?
Today’s announcement of CollabNet’s acquisition of Danube is yet another indicator of the mainstreaming of agile development processes. Managing agile development was formerly the domain of purpose-built tools from providers like Rally Software and VersionOne; today, virtually every ALM tools provider claims to support agile in some way, shape, or form.
Even CollabNet did, although it was through a fairly kludgey template atop its TeamForge core planning platform, which was not originally designed for agile processes.
Until now, CollabNet was best known for helping reinvent the ALM market by being one of the first tools vendors to actually profit from open source. One of its co-founders, Brian Behlendorf, was also a cofounder of the Apache Software Foundation, a group that took a commercial-friendly approach to open source licensing. One of the earliest projects of that foundation was Subversion, a source code change and configuration management (SCCM) tool around which CollabNet was founded.
Open source found its calling during the deflationary period of the early 00s, after the combined impacts of the end of Y2K, the popping of the dot-com bubble, and the post-9/11 recession imploded IT budgets and, with them, software vendor sales. In the ALM space, CollabNet got its mojo as the usual suspects – Borland, Compuware, Rational, and Serena – found their pipelines eroding. In tough times, developers were no longer going to pay a lot for this muffler: if it was commodity technology, they would not pay for it. CollabNet saw the open source wave coming and figured out that if you layered value-add atop it, IT organizations would be glad to put their money where their mouths were.
Having caught the open source wave, CollabNet missed the agile one. On one hand that was kind of surprising, but on another it wasn’t. CollabNet represented one form of grassroots movement that paradoxically came to fruition from the top down: exploit commodity technology to simplify management of global software development. Global IT organizations thirsted for cheaper, simpler alternatives to proprietary household names like ClearCase, PVCS, or ChangeMan, and Subversion provided it. Building atop Subversion, CollabNet developed a planning system that coordinated source code check-in, testing, builds, and releases. CollabNet Enterprise and its current incarnation, TeamForge, capitalized on such demand from large global IT organizations, while overlooking what was happening at ground level, within isolated enclaves across its client base. Instead, Rally, joined by niche players like VersionOne and Danube, focused on the planning needs of development teams that embraced agile development methodologies. More recently, household names like IBM Rational, HP, Serena – and yes, CollabNet – hopped that bandwagon, exclaiming, “We’re agile!”
But Danube provides the lighter-weight planning capability that CollabNet was missing. The acquisition being announced today plugs a gaping hole in CollabNet’s catalog. In the short term, CollabNet will link Danube’s ScrumWorks product (the Danube company name will disappear, though the ScrumWorks brand will survive) to TeamForge by enabling defect reports and commits tracked in TeamForge to update ScrumWorks. But both products will still be driven by separate back-end repositories.
CollabNet can be excused for not having a full product integration roadmap, but at some point it is going to have to bite the bullet with ScrumWorks. While offerings from powers-that-be like IBM or Micro Focus (which inherited the Borland catalog and what was left of Compuware’s developer products) are not driven by a single engine, the CollabNet portfolio is not so diverse and complex as to support that argument. Furthermore, the acquisition of Danube has placed CollabNet up against Rally and Serena, which provide unified, broader suites (for Rally, minus source code control) that include a key piece underrepresented in the combined TeamForge/ScrumWorks portfolio: requirements management. ScrumWorks has some limited user story management capabilities. Now that it has bitten off agile, CollabNet needs to also make a serious stab at upgrading its requirements capabilities.
It’s not like product convergence is foreign to CollabNet; last year it completed a more ambitious migration following the 2007 acquisition of SourceForge. It was hardly a straightforward process: product convergence meant migrating to the more modern, scalable architecture of the acquired product. It took 18 months, and even then the products were only about 75% integrated. But it did migrate the core base. The transition should not be so painful with ScrumWorks: you can use a common data engine and selectively expose it through a lighter-weight process skin.
For those of us on the outside looking in, it might be tempting to pigeonhole the agile software development community as a highly vocal but homogeneous minority that gained its voice through the Agile Manifesto. In actuality, the manifesto was simply a call to action to, in essence, value working software over working plans. The authors didn’t pretend to monopolize knowledge on the topic and purposely made their manifesto aspirational rather than prescriptive.
And so, not surprisingly, in a world of software development methodologies, there is no such thing as a single agile methodology. There are many. There is Scrum, so named for the quick huddles in rugby that call for fast decisions based on a basic assessment of conditions on the ground. Proponents call scrum responsive, while critics assail it for lacking the big picture. In actuality, properly managed scrums do have master plans for direction and context. Other variants such as eXtreme Programming (XP) drive development through testing to ensure 100% test coverage. In actuality, agile methodology is a spectrum that can borrow practices from scrum, XP, or other methodologies as long as it meets the goal of working software with minimal red tape.
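XP’s test-first discipline can be shown in miniature: the test is written before the production code exists, and then only enough code is written to make it pass. The function and behavior below are invented for illustration; an XP team would run such tests under a framework like JUnit or pytest rather than by hand.

```python
# Step 1: the test is written first, before parse_version exists.
# Running it at this point would fail, which is the point of test-first.
def test_parse_version():
    assert parse_version("2.4.1") == (2, 4, 1)
    assert parse_version("10.0") == (10, 0)

# Step 2: the simplest implementation that makes the failing test pass.
def parse_version(text):
    return tuple(int(part) for part in text.split("."))

# Step 3: run the test; a test runner would normally discover and run it.
test_parse_version()
```

Each new feature starts the cycle again with a new failing test, which is how XP teams keep coverage near 100% without treating testing as a separate phase.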
More recently, debate has emerged over yet another refinement of agile – Lean Development, which borrows many of the total quality improvement and continuous waste reduction principles of lean manufacturing. Lean is dedicated to elimination of waste, but not at all costs (like Six Sigma). Instead, it is about continuous improvement in quality, which will lead to waste reduction. It aims to root out rework, which is a great waste of time and money, and a process that subtracts rather than adds value to the product. Lean techniques were at the crux of new enhancements to Danube’s agile planning offering which we discussed a few weeks back.
The best-known proponent of lean development, Mary Poppendieck, draws the metaphor that software development is very much like a manufacturing process. Having spent serious time studying and writing about lean manufacturing, and having spent time with the people who implemented lean at the pioneering NUMMI auto plant nearly 20 years ago, we can very much relate to Poppendieck’s metaphors.
In essence, developing software is like making a durable good like a car, appliance, military transport, machine tool, or consumer electronics product. In essence you are building complex products that are expected to have a long service life, and which may require updates or repairs. We also contend that the metaphor means that software should also carry a warranty, where the manufacturer vouches for the product’s performance over a specific operating life.
But not surprisingly, there is hardly any consensus over lean in the agile community. We came across a well-considered dissent by Pillar Technology consultant Daryl Kulak that criticizes the very idea of comparing software development and manufacturing in the same sentence. Kulak maintains that, unlike manufacturing, software is not composed of the kinds of uniform discrete parts found in manufacturing. There’s a bit of irony there, as proponents of SOA, and before that component-based development, often called for just that. But Kulak is correct that most software is not uniform discrete parts, at least according to common methods of software construction and design. And besides, there is always some local environmental variable that will cause variations in how software is configured to run on different targets.
That’s a key part of Kulak’s argument: “We don’t discover its actual character until we’ve started building it.” He also argues that the core systems-oriented principles that are at the heart of lean could lead to “‘mechanical Agile,’ where we view our teams as machines and our people as a kind of robot. This type of team can only follow orders and cannot innovate or adapt.”
Kulak and others have good reason for their cynicism, as it wasn’t that long ago that CFOs actually believed outsourcers’ promises that software factories would make software development predictable in time and cost. That had great appeal during the post-2000 recession. Obviously – in retrospect – software development is not so cut and dried: business needs are always changing (that’s why we have agile), and software deployment environments always have their unique idiosyncrasies.
(There has been a more recent refinement of the software factories idea that focuses not so much on offshore factories as on patterns and industrializing aspects of software development, e.g., the portions of an app that could be exposed as a reusable service. But that doesn’t eliminate people from the equation; it amps up the architecture.)
But back to the core issue: lean cynics miss the point. When we visited the NUMMI plant, we were struck by how lean actually spurred production line workers to get creative and develop solutions to problems that they carefully tracked on the plant floor. We saw all those hand-drawn Pareto charts on the walls, and in fact the reason we were there was to study implementation of the first computerized production management system that would help people track quality trends more thoroughly. What we saw was hardly a mechanical process, but one that encouraged people to ask questions, then base their decisions on a combination of hard data and personal experience. It was actually quite a step up from traditional command-and-control production line management.
There’s no reason why this approach shouldn’t work in software. This doesn’t mean that lean approaches will work in all software projects, as conducting value stream analyses could get disruptive for projects, such as the ever evolving Facebook apps of the Obama 2008 campaign. But the same goes for agile – not all projects are cut out for agile methodologies.
So, software developers: stop thinking that you’re so unique. Obviously, lean manufacturing methods won’t apply verbatim to software development, but look at the ideas behind them. Stop getting hung up on minor details, like the fact that software isn’t a piece part.
Update: The NUMMI auto plant in Fremont, California has unfortunately become a casualty of the restructuring of GM and contraction in the US auto market. So it would be tempting to state that NUMMI was a case where the surgery was successful but the patient died. The lesson for software developers is that process only ensures that the product will be developed; it doesn’t ensure success of the product.
VMware’s proposed $362 million acquisition of SpringSource is all about getting serious in competing with Salesforce.com and Google App Engine as the Platform-as-a-Service (PaaS) cloud with the technology that everybody already uses.
This acquisition was a means to an end, pairing two companies that could not be less alike. VMware is a household name, sells software through traditional commercial licenses, and markets to IT operations. SpringSource is a grassroots, open source, developer-oriented firm whose business is a cottage industry by comparison. The cloud brought together two companies that each faced complementary limitations on their growth: VMware needed to grow out beyond its hardware virtualization niche if it was to regain its groove, while SpringSource needed to grow up and find deeper pockets to become anything more than a popular niche player.
The fact is that providing a virtualization engine, even if you pad it with management utilities that act like an operating system, is still a raw cloud with little pull unless you go higher up the stack. Raw clouds appeal only to vendors that resell capacity or to large enterprises with the deep benches of infrastructure expertise to run their own virtual environments. For the rest of us, we need a player that provides a deployment environment, handles the plumbing, and is married to a development environment. That is what Salesforce’s Force.com and Google’s App Engine are all about. VMware’s gambit is in a way very similar to Microsoft’s Software + Services strategy: use the software and platforms that you are already used to, rather than some new environment, in a cloud setting. There’s nothing more familiar to large IT environments than VMware’s ESX virtualization engine, and in the Java community, there’s nothing more familiar than the Spring framework, which – according to the company – accounts for roughly half of all Java installations.
With roughly $60 million in stock options for SpringSource’s 150-person staff, VMware is intent on keeping the people, as it knows nothing about the Java business. Normally, we’d question a deal like this because the companies are so dissimilar. But the fact that they are complementary pieces of a PaaS offering gives the combination stickiness.
For instance, VMware’s vSphere cloud management environment (in a fit of bravado, VMware calls it a cloud OS) can understand the resource consumption of VM containers; with SpringSource, it gets to peer inside the black box and understand why those containers are hogging resources. That provides more flexibility and smarts for optimizing virtualization strategies, and can help cloud customers answer the question: do we need to spin out more VMs, perform some load balancing, or re-apportion all those Spring TC (Tomcat) servlet containers?
The addition of SpringSource also complements VMware’s cloud portfolio in other ways. In his blog about the deal, SpringSource CEO Rod Johnson noted that pairing the Spring framework with VMware’s Lab Manager (the test lab automation piece that VMware picked up through the Akimbi acquisition) proved highly popular with Spring framework customers. In actuality, if you extend Lab Manager from simply spinning out images of testbeds to spinning out runtime containers, you would have VMware’s answer to IBM’s recently introduced WebSphere CloudBurst appliance.
VMware isn’t finished, however. The most glaring omission is the need for distributed Java object caching to provide yet another path to scalability. If you rely only on spinning out more VMs, you get a highly rigid, one-dimensional cloud that will not provide the economies of scale and flexibility that clouds are supposed to provide. So we wouldn’t be surprised if GigaSpaces or Terracotta were next in VMware’s acquisition plans.
Although this month our first order of business is to demolish the governance silos separating management of the application lifecycle (ALM) from IT Service Management (ITSM), and then of course achieve world peace, it’s impossible to ignore the Agile software development world’s upcoming annual event. Sadly, for the third straight year we’ll miss the Agile software conference. We’re getting into the stream of tools briefings that for the most part revolve around the common theme of addressing some hurdle to scaling agile practices up or out.
This week we’ve been briefed by VersionOne, which has chosen to focus on adding support for the lean software development variant of agile in its Summer 2009 release. It’s very easy to get confused here, as on the face of it, agile is a lean approach to software development if you define lean as a lighter-weight process that aims to slice through the red tape.
But if you adhere more closely to the definition of lean that emerged out of the just-in-time, continuous improvement-oriented practices of the manufacturing world, there are methods that depart from your typical scrum or form of extreme programming. The core thread of lean is eliminating waste through value stream mapping (essentially, documenting the workflow that a product undergoes, with the steps and cycle times for adding value) so you can decide where to attack waste and bloat. Then, as with core agile practices, you assume planning goals are moving targets; the subtle distinction in lean is that you make key decisions as late as possible, then deliver as fast as possible (before the goalposts change).
Having a background in manufacturing, and having visited the famous GM/Toyota NUMMI assembly plant joint venture as it was implementing its first computerized production and quality tracking systems 20 years ago, we can relate to the idea of lean. In the software development world, Mary and Tom Poppendieck have become the best-known evangelists; suffice it to say that in the agile world there is plenty of debate on the merits of lean: whether lean requires more systematic planning and upfront analysis of waste compared to traditional agile, and whether that upfront work leads to analysis paralysis. Clearly, the ability to analyze waste up front does require more of a big-picture view than many software development practitioners either have in their veins or are trained for. We’ll pass the buck here by saying that the choice of methodology should depend on the project.
VersionOne has attacked the problem by adding features such as visualizing a value stream, with the ability to drill down on each step to apply, in effect, what we’d call development policies. That is, you may decide that certain parameters, such as the amount of work-in-process (uncompleted work), must be enforced so that development teams do not get in over their heads with too many deliverables at any one time. It can also reveal findings on cycle time that can help you whittle away at the scope of a particular development iteration, and, of course, critical metrics such as defect density, so you can connect the dots between development steps, code characteristics, project scope, and quality issues.
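The logic behind capping work-in-process can be made concrete with Little’s Law from queueing theory, which the lean literature leans on heavily: average cycle time equals WIP divided by throughput. The team numbers below are invented for illustration.

```python
def cycle_time(wip, throughput_per_day):
    """Little's Law: average time a work item spends in process,
    given the number of items in flight and the completion rate."""
    return wip / throughput_per_day

# A team finishing 4 stories per day with 12 stories in flight:
assert cycle_time(12, 4) == 3.0   # each story averages 3 days in process

# Halving WIP halves cycle time at the same throughput, which is why
# lean-oriented planning tools let you enforce a WIP cap per step.
assert cycle_time(6, 4) == 1.5
```

The same arithmetic explains why teams “in over their heads with too many deliverables” see every deliverable slow down: more in flight means longer in process, not more finished.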
What would be nice would be to have some simulation capabilities, so you could test drive changing planning assumptions to see how it might impact developer productivity or the ability to meet delivery commitments. Alas, that feature will have to wait, but it’s always helpful when a tools provider steps into the debate over software development methodology, as it takes those debates out of the realm of theory.
Publication of the Agile Manifesto back in 2001 formalized a belief in the development community that the business goals of software development projects were in actuality moving targets. Since then, agile has emerged as a mostly bottom-up movement among true believers. In a handful of cases, such as within software vendors (whose business is software), agile has become a matter of enterprise urgency. In scattered cases, such as at BMC, there have been successes with enterprise-scale agile development (e.g., multiple teams across multiple sites). But for the most part agile has been perceived as a cottage industry in a developer community that often views itself in a David vs. Goliath situation.
There is little question that agile is a good fit for the kind of incremental development that adds to or extends existing software portfolios. If the problems are well contained, they are well suited for the compact teams that are typically involved with agile projects. Furthermore, as incremental software, extensions and add-ons do not run up against what is agile's greatest weakness: the difficulty of maintaining consistent direction and big-picture vision in a methodology where the assumptions are revisited with each iteration or release cycle.
All that is an urban myth, however, as true agile development does require the basic scoping and planning demanded of larger projects. While the business environment is hardly static these days, no CEO or CFO in his or her right mind would sign off on an open-ended project that might evolve in random directions. Admittedly, the approach to planning an agile project might differ from a more conventional one, but planning, scoping, and high-level requirements definition are essential so that the agile project – as changing as it is – operates under the right assumptions, and so that project leaders adequately manage customer expectations. In agile speak, by the way, such mega planning cycles are appropriately called “epics.”
The other side of the coin is that for agile methodology to measurably impact software development, it must be able to scale up to the point where multiple teams can coexist on larger projects. Just because software development is incremental today doesn’t mean that large projects are a thing of the past. On the other hand, while it is critical not to pigeonhole agile into small projects, there is no single methodology that fits all projects. In some cases, you may have a large project that remains better suited for waterfall.
Over the past few years, agile planning tools providers have been gradually adding features that extend the rings of agile planning past the small group holed up by the water cooler. And in coming weeks leading up to the Agile 2009 conference, you’re likely to see other announcements from the likes of Rally, ThoughtWorks and others.
This week it’s Danube’s turn. They’ve come up with a clever search and aggregation facility within their project planner that identifies and groups project goals and objectives, getting you beyond the rigid hierarchical schemes of most project planners – agile or otherwise. The rationale is that multiple release cycles may share different mixes of common planning goals, and that within a single release cycle, the order of priority of those goals might vary. For instance, in planning a customer portal, the requirement of concealing the customer’s actual identity is always important, but it may matter more when it comes to page navigation or entry of credit card numbers than when it comes to setting a page display or shipping preference. Within a conventional hierarchical planning scheme there is no way to make that distinction.
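The difference between a rigid hierarchy and this kind of search-and-aggregation scheme can be sketched in a few lines. This is our own illustration, not Danube's data model: each goal carries free-form tags plus per-release priorities, so the same goal can surface in different release plans at different priorities instead of living at one fixed spot in a tree.

```python
# Illustrative goal records (all names, tags, and releases hypothetical):
# a goal is relevant to any release it carries a priority for.
goals = [
    {"name": "conceal customer identity", "tags": {"security", "portal"},
     "priority": {"r1-navigation": 1, "r2-preferences": 3}},
    {"name": "one-page checkout", "tags": {"usability", "portal"},
     "priority": {"r1-navigation": 2}},
    {"name": "audit logging", "tags": {"security", "compliance"},
     "priority": {"r2-preferences": 1}},
]

def goals_for(release, tag=None):
    """Gather all goals relevant to a release (optionally filtered by
    tag), ordered by that release's own priorities."""
    hits = [g for g in goals
            if release in g["priority"] and (tag is None or tag in g["tags"])]
    return sorted(hits, key=lambda g: g["priority"][release])

# The identity-concealment goal ranks first in one release and lower in
# another -- something a single fixed hierarchy cannot express.
print([g["name"] for g in goals_for("r1-navigation")])
print([g["name"] for g in goals_for("r2-preferences", tag="security")])
```

In a conventional planner the goal would sit under exactly one parent with one priority; here the query, not the tree, decides how goals group and rank for each release.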
Danube’s innovation typifies the state of agile planning – tools providers are beginning to add the bells and whistles that can improve the ability to communicate and coordinate across larger or more complex projects. But it also reveals that the solution is hardly a done deal. For instance, while Danube now facilitates a more sophisticated means of managing requirements and goals for more involved agile projects, it lacks the next logical piece of the puzzle: traceability. As you consider goals for each iteration, it helps to have context, and in regulated situations it is essential to understand where and why those requirements or goals originated, and how and when they were or are addressed. Danube has change logs, but in the next release we’d like to see them raise the ante with some ability to track the bread crumbs that will inevitably accumulate during the course of an agile project.
Ever get the feeling that software development has degenerated into a cat and mouse game? The business defines a requirement, provides a budget that covers about two-thirds of the cost, specifies an 8-week deadline, and then changes requirements midstream because its needs – and the competitive environment – changed. The result is a game of dodge ball in which the winner is whoever manages to duck the finger pointing.
The practice of Enterprise Architecture (EA) was invented to make IT proactive rather than reactive. So, come and give us an impossible deadline, and we’ll implement a consistent process that, after analysis, determines what is possible, and then enforces a process to ensure that the possible happens. In that sense EA is much like Business Process Management (BPM) for IT’s software or solution delivery business, in that you try to embed your best practices when implementing systems projects.
At the end of the day, the goal of EA is to make all the processes related to implementing and supporting IT systems consistent. Yes, it can be about determining the standards for physical architecture, such as preferred database, OS, and application choices, SOA policy, and all that. But more broadly, it’s a form of Business Process Management (BPM) applied to IT’s systems activities. EAs have many different frameworks to choose from for codifying how IT responds to the business in making the decisions governing specification, implementation, and maintenance of systems. Among the best known are the Zachman Framework, the granddaddy, which takes a matrix approach to identifying all the facets of implementing IT systems, and TOGAF, the Open Group-sponsored framework, which takes more of an iterative, process-centered approach.
Enterprise Architecture has a strong band of adherents, primarily in large enterprises or public bodies that also have strong process focuses. The actual power and influence of the enterprise architect himself or herself obviously varies from one organization to the next. The Open Group’s EA conference this week still had a pretty strong turnout in spite of the fact that layoffs and budget cuts are dominating the headlines.
But Enterprise Architecture has a branding problem: try to pitch architecture, promote the benefit that it is supposed to be transformational, and, if you work in a more typical enterprise, you’re likely to get one of two responses as you are thrown out of the office:
1. Transformation sounds too much like Reengineering, which translates to all pain and little gain.
2. What the heck is enterprise architecture? I need software!
Cut to the chase: how can IT successfully pitch enterprise architecture? Should it pitch EA at all? And how is the idea even thinkable during a recession, when IT budgets are being slashed and people are being laid off?
We were reminded of the issue of keeping EA relevant as we sat on a panel at the Open Group’s Enterprise Architect Practitioner’s Conference in San Diego this week, hosted by Dana Gardner, along with Forrester Research principal EA analyst Henry Peyret; Chris Forde, VP Technology Integrator for American Express and chair of the Open Group’s Architecture Forum; Janine Kemmeren, enterprise architect for Getronics Consulting and chair of the Architecture Forum Strategy Work Group; and Jane Varnus, architecture consultant in the Enterprise Architecture Department of Bank of Montreal.
Gardner polled the audience on what kind of ROI was realistic for EA initiatives; an overwhelming majority of attendees stated that a 2-year payback was reasonable. The problem is that today, money is just not that patient. As one consultant joked to us after the session, anyone proposing a 2-year payback should enter the CxO’s office with another job offer in their back pocket. Consider this: if you had proposed a project in July 2008 for a natural resources company based on oil prices exceeding $140/barrel, your plans would have been fine until the collapse of Lehman Brothers two months later.
EA could borrow some lessons from the agile software development community, which has proven that in some cases (not all), lighter-weight processes may be adequate and even preferable. With agile development predicated on the assumption that requirements are a moving target, it takes a looser approach to requirements, called “stories,” which can change at the end of each sprint or iteration – anything from two to six weeks. Conceived for web development, agile is not the be-all and end-all, and would likely not be a wise choice for implementing SAP. If you keep in mind that the goal of EA is not process for its own sake, but rather ingraining consistent policy and methodology for decision making, the notion of “lite” EA is not all that outlandish. Your organization just has to decide what the pain points are and address the relevant processes accordingly.
TOGAF 9 is an encouraging step in this direction in that it has made the framework more modular and its pieces more self-explanatory. No longer do you have to borrow from different parts of the framework all the concepts and processes you need to conduct a planned systems migration, and the modularity makes TOGAF easier to implement piecemeal. We agree with Forrester’s Peyret that the world is more virtual, scattered, and networked, and that EA frameworks need to account for this reality. But at the same time, EA needs to provide a lighter-weight answer for smaller organizations that desire the consistency but lack the time, resources, or depth to undertake classic EA.
And while we’re at it, lose the name. The term enterprise architecture is awfully vague – it could mean process architecture or physical architecture, not to mention the reengineering and capital-T transformation connotations. When pitching EA, and especially EA lite, outside the choir, how about using terms that connote steady, consistent decision making and predictable results? If you’ve got a better idea on how to brand EA, please let us and the world know.
Update: A full transcript of the session is available here; you can listen to the podcast here.
A conversation this week with database veteran Jnan Dash reminded us of one salient fact regarding computing, and more specifically, software platforms: there never was, and never will be, a single grand unifying platform that totally flattens the playing field and eradicates all differences.
Dash should know, having been part of the teams that developed DB2 and, after that, Oracle; he currently keeps himself off the street by advising tools companies that have gotten past the startup phase. For now, his gig is advising Curl, developer of a self-contained language for Rich Internet Applications (RIA) that combines a declarative GUI and OO business logic inside the same language – and which had the misfortune of emerging before its time (the term RIA had yet to be coined).
Curl provides an answer to unifying one piece of the process – developing the rich front end. But it’s a far cry from the false euphoria over “write once, run anywhere” that emerged during Web 1.0, where the thinking centered on a single language (Java or, later, C#) for logic on a single mid-tier back end, and a universal HTML, HTTP, and TCP/IP stack for connectivity to the front end. Of course, not all web browsers were fully W3C compliant; in the end, bandwidth killed the idea of Java applets (the original vision for RIA), and disputes between Sun and Microsoft gave rise to a Java/.NET duopoly on the back end. The end result was not only a dumbed-down thin client that was little more than a green screen with a pretty face, but also a dumbed-down IDE market, as the Java/.NET duopoly effectively made development tooling a commodity. Frankly, it made the tools market quite boring.
That’s in marked contrast to the swirl of competition that characterized the 4GL client/server era a few years earlier, when the emergence of two key standards (SQL databases and Windows clients) provided a stable enough target to spawn a vibrant market of competing languages and IDEs that rapidly pushed innovation. Competition between VB, SQLWindows, PowerBuilder, Delphi, and others spawned a race for ease of use, a secondary market for visual controls, simplified database connectivity, and the birth of ideas like model-driven development and unified software development lifecycles.
What’s ironic is that today, roughly a decade later, we’re still trying to reach many of those goals. Significantly, as the technology became a commodity, most of the innovation shifted to process methodology (witness the birth of the Agile Manifesto back in 2001).
While agile methodologies are continuing to evolve, we sense that the pendulum of innovation is shifting back to technology. In a talk on scaling agile at the Rational Software Development Conference last week, Scott Ambler told agile folks to, in effect, grow up and embrace some more traditional methods – like perform some modeling before you start – if you’re trying agile on an enterprise scale.
More to the point, the combined impact of the emergence of Web 2.0, the rise of open source, and a desire to simplify development – exemplified by what former Burton analyst Richard Monson-Haefel (who’s now an evangelist with Curl) termed the J2EE rebel frameworks – spawned a new diversity of technology approaches and architectures.
Quoted in an article by John Waters, XML co-inventor and Sun director of web technologies Tim Bray recently acknowledged some of the new diversity in programming languages. “Until a few years ago, the entire world was either Java or .NET… And now all of a sudden, we have an explosion of new languages. We are at a very exciting inflection point where any new language with a good design basis has a chance of becoming a major player in the software development scene.”
Beyond languages, a partial list of innovations might include:
• A variety of open source frameworks like Spring or Hibernate that are abstracting (and simplifying use of) Java EE constructs and promoting a back to basics movement with Plain Old Java Objects (POJOs) and rethinking of the appserver tier;
• Emergence of mashups as a new path for accessible development and integration;
• Competition between frameworks and approaches for integrating design and development of Internet apps too rich for Ajax;
• Emergence of RESTful style development as simpler alternatives for data-driven SOA; and in turn,
• New competition for what we used to call component-based development; e.g., whether components should be formed at the programming language level (EJB, .NET) vs. web services level (Service Component Architecture, or SCA).
In short, there are no pat answers to developing or composing applications; it’s no longer simply a matter of choosing vanilla or chocolate for the back end and using generic IDEs to churn out logic and presentation. In other words, competition has returned to software development technologies and architectural approaches, making the marketplace interesting once again.