Category Archives: Application Lifecycle Management (ALM)

Big Data and the Product Lifecycle

Our Twitter feed went silent for a few days last week as we spent some time at a conference where chance conversations, personal reunions, and discovery were the point. In fact, this was one of the few events where attendees – like us – didn’t have our heads down, buried in our computers. We’re speaking of Cyon Research’s COFES 2012 design engineering software conference, where we had the opportunity to explore the synergy of Big Data and the product lifecycle, why ALM and PLM systems can’t play nice, and how to keep a handle on finding the right data as product development adopts a 24/7 follow-the-sun strategy. It wasn’t an event of sessions in the conventional sense, but lots of hallways where you spent most of your time in chance, impromptu meetings. And it was a great chance to hook up with colleagues whom we hadn’t caught up with in years.

There were plenty of contrarian views. A couple of keynotes in the conventional sense each took different shots at the issue of risk. Retired Ford product lifecycle management director Richard Riff took aim at conventional wisdom when it comes to product testing. After years of ingrained lean, six sigma, and zero defects practices – not to mention Ford’s old slogan that quality is job one – Riff countered with a provocative notion: sometimes the risk of not testing is the better path. It comes down to balancing the cost of defects against the cost of testing, the likely incidence of defects, and the reliability of testing. While we couldn’t repeat the math, in essence it amounted to a lifecycle cost approach to testing. He claimed that the method even accounted for intangible factors, such as social media buzz or loss of reputation, referring to recent, highly publicized quality issues at some of Ford’s rivals.
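For illustration only, here’s a minimal back-of-the-envelope sketch of that kind of tradeoff; the figures and the cost model itself are our own simplifying assumptions, not Riff’s math.

```java
// Back-of-the-envelope comparison of "test" vs. "don't test" lifecycle cost.
// All figures and the cost model itself are illustrative assumptions, not Riff's math.
public class TestingTradeoff {
    public static void main(String[] args) {
        double defectProbability = 0.05;          // assumed likelihood a defect ships
        double costPerEscapedDefect = 2_000_000;  // warranty, recalls, reputation (assumed)
        double costOfTesting = 60_000;            // assumed cost of the test campaign
        double testReliability = 0.85;            // assumed fraction of defects the tests catch

        double costIfNotTested = defectProbability * costPerEscapedDefect;
        double costIfTested = costOfTesting
                + defectProbability * (1 - testReliability) * costPerEscapedDefect;

        System.out.printf("Expected cost without testing: $%,.0f%n", costIfNotTested);
        System.out.printf("Expected cost with testing:    $%,.0f%n", costIfTested);
        System.out.println(costIfTested < costIfNotTested
                ? "Testing pays off" : "Skipping the test is cheaper");
    }
}
```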

Xerox PARC computing legend Alan Kay made the case for reducing risk through a strategy that applied a combination of object-oriented design (of which he was one of the pioneers – along with the GUI, of course) and what sounded to us like domain-specific languages. More specifically, the idea is that software describes the function, and other programs automatically generate the code that executes it. Kay decried the instability that we have come to accept with software design – which reminded us that since the mainframe days, we have become all too accustomed to hearing that the server is down. Showing some examples of ancient Roman design (e.g., a 2,000-year-old bridge in Spain that still carries cars today and looks well intact), he insisted that engineers can do better.

Some credit to host Brad Holtz, who deciphered that there really was a link between our diverging interests: Big Data and meshing software development with the product lifecycle. By the definition of Big Data – volume, variability, velocity, and value – Big Data is nothing new to the product lifecycle. CAD files, models, and simulations are extremely data-intensive and contain a variety of data types encompassing graphical and alphanumeric data. Today, the brass ring for the modeling and simulation world is implementing co-simulations, where models drive one another (the results of one drive the inputs of the other).
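To make the co-simulation notion concrete, here’s a toy sketch of two models exchanging results each time step; the models, coupling, and numbers are our own illustrative assumptions, not any particular vendor’s solver.

```java
// Toy co-simulation loop: a thermal model and a structural model exchange results
// each time step. The models and coupling are illustrative assumptions only.
public class CoSimulation {
    interface Model { double step(double inputFromOtherModel); }

    public static void main(String[] args) {
        Model thermal = load -> 20.0 + 0.5 * load;                    // temperature driven by load
        Model structural = temperature -> 100.0 - 0.2 * temperature;  // load driven by temperature

        double temperature = 20.0, load = 100.0;
        for (int step = 0; step < 10; step++) {
            temperature = thermal.step(load);     // result of one model...
            load = structural.step(temperature);  // ...drives the other
            System.out.printf("step %d: T=%.2f, load=%.2f%n", step, temperature, load);
        }
    }
}
```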

But is anybody looking at the bigger picture? Modeling has traditionally been silo’ed – for instance, models are not typically shared across product teams, projects, or product families. Yet new technologies could provide the economical storage and processing power to make it possible to analyze and compare the utilization and reliability of different models for different scenarios – with the possible result being metamodels that provide frameworks for optimizing model development and parameters for specific scenarios. All this is highly data-intensive.

What about the operational portion of the product lifecycle? Today, it’s rare for products not to have intelligence baked into controllers. Privacy issues aside (they must be dealt with), machinery connected to networks can feed back performance data; vehicles can yield data while in the repair shop, or, thanks to mobile devices, provide operational data while on the move. Add to that reams of publicly available data from services such as NOAA or the FAA, and now there is context placed around performance data (did bad weather cause performance to drop?). Such data could feed processes ranging from MRO (maintenance, repair, and operation) and warranty, to providing feedback loops that can validate product tests and simulation models.
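As a rough illustration of placing context around performance data, here’s a sketch that pairs machine readings with nearby weather observations; the record types, field names, and thresholds are hypothetical, and no real NOAA or FAA interface is implied.

```java
// Illustrative join of machine performance readings with weather observations to add
// context ("did bad weather cause performance to drop?"). Types, field names, and the
// one-hour matching window are all hypothetical.
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.Optional;

public class ContextualizedTelemetry {
    record Reading(String machineId, Instant at, double output) {}
    record Weather(Instant at, double precipitation) {}

    // Find the weather observation closest in time to a reading, within one hour.
    static Optional<Weather> nearestWeather(Instant at, List<Weather> observations) {
        return observations.stream()
                .filter(w -> Duration.between(w.at(), at).abs().toHours() <= 1)
                .min((a, b) -> Duration.between(a.at(), at).abs()
                        .compareTo(Duration.between(b.at(), at).abs()));
    }

    // Flag readings where output dropped while the weather was bad (assumed thresholds).
    static void flagWeatherRelatedDrops(List<Reading> readings, List<Weather> weather) {
        for (Reading r : readings) {
            nearestWeather(r.at(), weather).ifPresent(w -> {
                if (r.output() < 80.0 && w.precipitation() > 10.0) {
                    System.out.println(r.machineId()
                            + " underperformed during heavy precipitation at " + r.at());
                }
            });
        }
    }
}
```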

Let’s take another angle – harvesting publicly available data for the business. For instance, businesses could use disaster preparedness models to help their scenario planning, as described in this brief video snippet from last year’s COFES conference. Emerging organizations, such as the Center for Understanding Change, aim to make this a reality by making available models and expertise developed through tax dollars in the national laboratory system.

Big Data and connectivity can also be used to overcome gaps in locating expertise and speed product development. Adapting techniques from the open source software world, where software is developed collaboratively by voluntary groups of experts in the field, crowdsourcing is invading design and data science (we especially enjoyed our conversation with Kaggle’s Jeremy Howard).

A personal note on the sessions – the conference marked a reunion with folks with whom we have crossed paths over more than 20 years. Our focus on application development led us to engineered systems, an area of white space between software engineering and classic product engineering disciplines. And as noted above, that in turn brought us full circle to our roots covering the emergence of CAD/CAM in the 80s, as we had the chance to reconnect with many who continue to advance the engineering discipline. What a long, fun trip it’s been.

Big Data analytics in the cloud could be HP’s enterprise trump card

Unfortunately, scheduling conflicts kept us from attending Leo Apotheker’s keynote today at the HP Analyst Summit in San Francisco. But yesterday, he tipped his cards on his new software vision for HP before a group of investment analysts. HP’s software focus is not to reinvent the wheel – at least where it comes to enterprise apps. Apotheker had to put to rest any notion that he’s about to stage a grudge match and buy the company that dismissed him. There is already plenty of coverage here, interesting comment from Tom Foremski (we agree with him about SAP being a non-starter), and the Software Advice guys, who are conducting a poll.

To some extent this has been little surprise with HP’s already stated plans for WebOS and its recently announced acquisition of Vertica. We do have one question though: what happened to Converged Infrastructure?

For now, we’re not revisiting the acquisition stakes, although if you follow the #HPSummit Twitter tag today, you’ll probably see lots of ideas floating around after 9am Pacific time. We’ll instead focus on the kind of company HP wants to be, based on its stated objectives.

1. Develop a portfolio of cloud services from infrastructure to platform services and run the industry’s first open cloud marketplace that will combine a secure, scalable and trusted consumer app store and an enterprise application and services catalog.

This hits two points on the checklist. The first is providing a natural market for all those PCs that HP sells. The second is stating that HP wants to venture higher up the food chain than just selling lots of iron, which certainly makes sense. The last part is where we have a question: offering cloud services to consumers, the enterprise, and developers sounds at first blush like HP wants its cloud to be all things to all people.

The good news is that HP has a start on the developer side, where it has been offering performance testing services for years – but it is now catching up to providers like CollabNet (with which it is aligned, and which would make a logical acquisition candidate) and Rally in offering higher-value planning services for the app lifecycle.

In the other areas – consumer apps and enterprise apps – HP is starting from square one. It obviously must separate the two, as cloud is just about the only thing that the two have in common.

For the consumer side, HP (like Google Android and everyone else) is playing catchup to Apple. It is not simply a matter of building it and expecting they will come. Apple has built an entire ecosystem around its iOS platform that has penetrated content and retail – challenging Amazon, not just Salesforce or a would-be HP, using its user experience as the basis for building a market for an audience that is dying to be captive. For its part, HP hopes to build WebOS to have the same “Wow!” factor as the iPhone/iPad experience. It’s got a huge uphill battle on its hands.

For the enterprise, it’s a more wide-open space where only Salesforce’s AppExchange has made any meaningful mark. Again, the key is a unifying ecosystem, with the most likely outlet being enterprise outsourcing customers for HP’s Enterprise Services (the former EDS operation). The key principle is that when you build a marketplace, you have to identify who your customers are and give them a reason to visit. A key challenge, as we’ve stated in our day job, is that enterprise applications are not the equivalent of those $2.99 apps that you’ll see in the AppStore. The experience at Salesforce – the classic inversion of the long tail – is that the market is primarily for add-ons to the Salesforce.com CRM application or use of the Force.com development platform, but that most entries simply get buried deep down the list.

Enterprise apps marketplaces are not simply going to provide a cheaper channel for solutions that still require consultative sells. We’ve suggested that they adhere more to the user group model, which also includes forums, chats, exchanges of ideas, and by the way, places to get utilities that can make enterprise software programs more useful. Enterprise app stores are not an end in themselves, but a means for reinforcing a community — whether it be for a core enterprise app – or for HP, more likely, for the community of outsourcing customers that it already has.

2. Build webOS into a leading connectivity platform.
HP clearly hopes to replicate Apple’s success with iOS here – the key being that it wants to extend the next-generation Palm platform to its base of PCs and other devices. This one’s truly a Hail Mary pass designed to rescue the Palm platform from irrelevance in a market where iOS, Android, Adobe Flash, Blackberry, and Microsoft Windows 7/Silverlight are battling it out. Admittedly, mobile developers have always tolerated fragmentation as a fact of life in this space – but of course that was when the stakes (with feature phones) were rather modest. With smart devices – in all their varied form factors from phone to tablet – becoming the next major consumer (and to some extent, enterprise) frontier, there’s a fresh battle for mindshare. That mindshare will be built on the size of the third-party app ecosystem that these platforms attract.

As Palm was always more an enterprise than a consumer platform – before the Blackberry eclipsed it – HP’s likely WebOS venue will be the enterprise space. That is another uphill battle, against Microsoft (which has the office apps), Blackberry (with its substantial corporate email base), and yes, Apple, where enterprise users are increasingly sneaking iPhones in the back door, just like they did with PCs 25 years ago.

3. Build presence with Big Data
Like (1), this also hits a key checkbox for where to sell all those HP PCs. HP has had a half-hearted presence with the discontinued Neoview business. The Vertica acquisition was clearly the first one that had Apotheker’s stamp on it. Of HP’s announced strategies, this is the one that aligns closest with the enterprise software strategy that we’ve all expected Apotheker to champion. Obviously Vertica is the first step here – and there are many logical acquisitions that could fill this out, as we’ve noted previously regarding Tibco, Informatica, and Teradata. The important point is that classic business intelligence never really suffered through the recession, and arguably big data is becoming the next frontier for BI – not just a nice-to-have, but increasingly an expected cost of competition.

What’s interesting so far is that in all the talk about Big Data, there’s been relatively scant attention paid to using the cloud to provide the scale needed to conduct such analytics. We foresee a market where organizations that don’t necessarily want to buy all that infrastructure – and that run large, advanced analytics on an event-driven basis – consume the cloud for their Hadoop, or Vertica, runs. Big Data analytics in the cloud could be HP’s enterprise trump card.

HP buys Fortify, it’s about time!

What took HP so long? Hold that thought.

As we’ve stated previously, security is one of those things that has become everybody’s business. Traditionally the province of security professionals focused on perimeter defense, it is now a developer problem as well: the exposure of enterprise apps, processes, and services to the Internet opens huge back doors that developers unwittingly leave open to buffer overflows, SQL injection, cross-site scripting, and you name it. Security was never part of the computer science curriculum.
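For a concrete picture of the kind of back door we mean, here’s a minimal sketch of a SQL injection hole and its standard parameterized-query fix; the table and column names are hypothetical, while the JDBC calls are standard.

```java
// Minimal sketch of the SQL injection back door mentioned above, and its standard fix.
// Table and column names are hypothetical; the JDBC calls are standard.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LoginDao {
    // Vulnerable: user input is concatenated straight into the SQL text, so an input
    // such as "' OR '1'='1" changes the query's meaning.
    ResultSet findUserUnsafe(Connection conn, String username) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery("SELECT * FROM users WHERE name = '" + username + "'");
    }

    // Safer: a parameterized query keeps the input as data, never as SQL.
    ResultSet findUserSafe(Connection conn, String username) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        stmt.setString(1, username);
        return stmt.executeQuery();
    }
}
```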

But as we noted when IBM Rational acquired Ounce Labs, developers need help. They will need to become more aware of security issues but realistically cannot be expected to become experts. Otherwise, developers are caught between a rock and a hard place – the pressures of software delivery require skills like speed and agility, and a discipline of continuous integration, while security requires the mental processes of chess players.

At this point, most development/ALM tools vendors have not actively pursued this additional aspect of QA; there are a number of point tools in the wild that may not necessarily be integrated. The exceptions are IBM Rational and HP, which have been in an arms race to incorporate this discipline into QA. Both have so-called “black box” testing capabilities via acquisition – where you throw ethical hacks at the problem and then figure out where the soft spots are. It’s the security equivalent of functionality testing.

Last year IBM Rational raised the ante with its acquisition of Ounce Labs, providing “white box” static scans of code – in essence, applying debugger-type approaches. Ideally, both should be complementary – just as you debug, then dynamically test code for bugs, do the same for security: white box static scan, then black box hacking test.

Over the past year, HP and Fortify have been in a mating dance as HP pulled its DevInspect product (an also-ran to Fortify’s offering) and began jointly marketing Fortify’s SCA product as HP’s white box security testing offering. In addition to generating the tests, Fortify’s SCA manages this stage as a workflow and, with integration to HP Quality Center, autopopulates defect tracking. We’ll save discussion of Fortify’s methodology for some other time, but suffice it to say that it was previously part of HP’s plans to integrate security issue tracking into its Assessment Management Platform (AMP), which provides a higher-level dashboard focused on managing policy and compliance, vulnerability and risk management, distributed scanning operations, and alerting thresholds.

In our mind, we wondered what took HP so long to consummate this deal. Admittedly, while the software business unit has grown under now-departed CEO Mark Hurd, it remains a small fraction of the company’s overall business. And with the company’s direction of “Converged Infrastructure,” its resources are heavily preoccupied with digesting Palm and 3Com (not to mention EDS). The software group therefore didn’t have a blank check, and given Fortify’s 750-strong global client base, we don’t think that the company was going to come cheap (the acquisition price was not disclosed). With the mating ritual having predated IBM’s Ounce acquisition last year, buying Fortify was just a matter of time. At least a management interregnum didn’t stall it.

Finally!

First Takes from Rational Innovate 2010

To paraphrase Firesign Theatre, how can you be in two places at once when you’re not anywhere at all? We would have preferred being in at least two places, if not more, today, what with Microsoft TechEd in New Orleans, IBM Rational’s Innovate conference in Orlando, and for spare change, PTC’s media and analyst day just a cab ride away.

Rational’s message – that software is the invisible glue of smarter products – was much more business-grounded than its call a year ago for collaborative software development, which we criticized back then as more like a call for repairing your software process than for improving your core business.

The ongoing name changes in the conference reflect Rational’s repositioning, for which the Telelogic acquisition closes the circle. Two years ago, the event was called the Rational Software Development Conference; last year they eliminated the word “Development,” and this year “Software” was replaced with “Innovate.” The vanishing of software from the conference title is consistent with the invisible glue motif.

Software is the means, not the end. Your business needs to automate its processes or make better products in a better way; software gets you there. As IBM’s message is Smarter Planet, Rational has emphasized Smarter Products rather than smarter business processes. It’s not just a matter of force-fitting to a corporate slogan; Rational estimates compound annual growth of its systems-of-systems business (ergo, mostly the Telelogic side of the business) to be well into the double digits over the next 4 – 5 years, compared to a fraction of that for its more traditional enterprise software modernization and IT optimization businesses.

Telelogic played the starring role for Smart Products. The core of the strategy is a newly announced Integrated Product Management umbrella of products and services for helping companies gain better control over their product lifecycles. Great lead, but for now scant detail. IBM’s strategy leverages Telelogic’s foothold with companies making complex engineered products, along with other assets such as IBM’s vertical industry frameworks. We also see strong potential synergy with Maximo, which completes the product lifecycle with the piece that follows the product through its service life.

IBM’s product management strategy places it on a collision course with the PLM community. IBM lays claim to managing the logical constraints of product development – coming from its base in requirements and portfolio management. By contrast, IBM claims that PLM vendors know only the physical constraints. The PLM folks – especially PTC – are developing their own counter strategies. For starters, there is PTC’s plan to offer Eclipse tooling that will start with its own branded support of CollabNet’s open source Subversion source code repository. Folks, this is the beginning of a grudge match that for now is only reinforcing the culture/turf divides demarcating software from the more traditional physical engineering disciplines.

Rational also introduced a workbench idea, a promising way to use SOA to mix and mash capabilities from across its wide portfolio to address specific vertical industry problems or horizontal software development requirements. For now, though, it is mostly vision. These workbenches take products – mostly Jazz offerings – and mash them up using the SOA architecture of the Jazz framework to create configurable integrations for addressing specific business and software delivery problems. We saw a demo of an early example: purpose-built integrations with role-based views for correlating requirements to functional and performance tests, through to specific builds, accessed through the tools that BAs, testers, and developers use. On the horizon, IBM Rational is planning vertical workbenches that apply Rational tools to some of its vertical industry frameworks, addressing segments such as Android mobile device development, cybersecurity, and smarter cities.

The idea for the workbenches is that they would not be rigid products but configurable mixes and matches of Rational and partner content, through interfaces developed through OSLC, IBM’s not-really-open-source community for building Jazz interfaces. They are a good use for IBM’s varied software and industry framework portfolio, but they will be challenging to sell, as these are not standard catalog products and, ideally, not customized systems integrations either. Sales needs to think out of the box to sell these workbenches, while customers need assurance that they are not paying for one-off systems integration engagements.

The good news is that with IBM’s expanding cloud offerings for Rational, these workbenches could be readily composed and delivered through the cloud on much shorter lead times than conventional packaged software. Aiding the workbenches is a new flexible token licensing system that expands on a model originated by Telelogic. Tokens are generic licenses that give you access to any piece of software within a designated group, allowing the customer to mix and match software, or IBM to do so through its Rational workbenches. IBM is combining it with the idea of term licensing to make this suited for cloud customers who are allergic to perpetual licensing. For now, tokens are available only for Telelogic and Jazz offerings, but IBM Software Group is investigating applying it to the other brands.
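To illustrate the token idea in the abstract, here’s a minimal sketch of a shared token pool that any product in an entitled group can draw from; the class, token costs, and checkout rules are our own assumptions, not IBM’s actual licensing mechanics.

```java
// Toy token-pool license model: any product in the entitled group draws tokens from a
// shared pool while it is in use. Token costs and rules here are illustrative only.
import java.util.Map;
import java.util.Set;

public class TokenPool {
    private final Set<String> entitledGroup;       // products the tokens may be spent on
    private final Map<String, Integer> tokenCost;  // tokens each product consumes
    private int availableTokens;

    public TokenPool(Set<String> entitledGroup, Map<String, Integer> tokenCost, int tokens) {
        this.entitledGroup = entitledGroup;
        this.tokenCost = tokenCost;
        this.availableTokens = tokens;
    }

    // Check a product out if it belongs to the group and enough tokens remain.
    public synchronized boolean checkout(String product) {
        Integer cost = tokenCost.get(product);
        if (!entitledGroup.contains(product) || cost == null || cost > availableTokens) {
            return false;
        }
        availableTokens -= cost;
        return true;
    }

    // Return the tokens when the product is released, freeing them for another product.
    public synchronized void release(String product) {
        availableTokens += tokenCost.getOrDefault(product, 0);
    }
}
```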

So how do you cost justify these investments for the software side of smarter products? Rational GM Danny Sabbah’s keynote on software econometrics addressed the costing issue based on Rational’s invisible glue, means not end premise. We agree with Sabbah that traditional metrics for software development, such as defect rates, are simply internal metrics that fail to express business value. Instead, Sabbah urges measuring business outcomes of software development.

Sabbah’s arguments are hardly new. They rehash old debates over the merits of “hard” vs. “soft,” or tangible vs. intangible, costing. Traditionally, new capital investments, such as buying new software (or paying to develop it), were driven by ROI calculations that computed how much money you’d save; in most cases, those savings came from direct labor. Those were considered “hard” numbers because it was fairly straightforward to calculate how much labor some piece of automation would save. Savings are OK for the bottom line but do nothing for the top line. However, if you automated a process that allowed you to shorten product lead time by 3 weeks, how much money would you make with the extra selling time, and by getting to market earlier, how much benefit would accrue from becoming first to market? Common sense says that, all other factors being equal, getting a jump in sales should translate to more revenue and, in turn, bolster your competitive position. But such numbers were considered “soft” because there were few ways to scientifically quantify the benefits.
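To put the “soft” numbers in concrete terms, here’s a small worked sketch of the lead-time argument; every figure (weekly revenue, margin, the three-week gain, the first-mover uplift) is a hypothetical assumption for illustration.

```java
// Worked version of the "soft" benefit argument: what is three weeks of extra selling
// time worth? All figures are hypothetical assumptions, not data from the post.
public class LeadTimeBenefit {
    public static void main(String[] args) {
        double weeklyRevenue = 500_000;   // assumed revenue once the product ships
        double grossMargin = 0.30;        // assumed margin
        int weeksGained = 3;              // lead time shortened by automation
        double firstMoverUplift = 0.05;   // assumed share gain from being earlier to market

        double extraSellingProfit = weeklyRevenue * grossMargin * weeksGained;
        double annualUplift = weeklyRevenue * 52 * grossMargin * firstMoverUplift;

        System.out.printf("Profit from %d extra selling weeks: $%,.0f%n", weeksGained, extraSellingProfit);
        System.out.printf("Assumed annual first-mover uplift:  $%,.0f%n", annualUplift);
        // The arithmetic is easy; the hard part is defending the assumptions --
        // which is why these numbers were historically dismissed as "soft."
    }
}
```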

Sabbah’s plea for software econometrics simply revives this debate.

HP analyst meeting 2010: First Impressions

Over the past few years, HP under Mark Hurd has steadily gotten its act together in refocusing on the company’s core strengths with an unforgiving eye on the bottom line. Sitting at HP’s annual analyst meeting in Boston this week, we found ourselves comparing notes with our impressions from last year. Last year, our attention was focused on Cloud Assure; this year, it’s the integration of EDS into the core business.

HP now bills itself as the world’s largest purely IT company and ninth in the Fortune 500. Of course, there’s the consumer side of HP that the world knows. But with the addition of EDS, HP finally has a credible enterprise computing story (as opposed to being an enterprise server company). Now we’ll get plenty of flak from our friends at HP for that one, as HP has historically had the largest market share for SAP servers. But let’s face it: prior to EDS, the enterprise side of HP was primarily a distributed (read: Windows or UNIX) server business. Professional services was pretty shallow, with scant knowledge of the mainframes that remain the mainstay of corporate computing. Aside from communications and media, HP’s vertical industry practices were few and far between. HP still lacks the vertical breadth of IBM, but with EDS it has gained critical mass in sectors ranging from federal to manufacturing, transport, financial services, and retail, among others.

Having EDS also makes credible initiatives such as Application Transformation, a practice that helps enterprises prune, modernize, and rationalize their legacy application portfolios. Clearly, Application Transformation is not a purely EDS offering; it was originated by Ann Livermore’s Enterprise Business group and draws upon HP Software assets such as discovery and dependency mapping, Universal CMDB, PPM, and the recently introduced IT Financial Management (ITFM) service. But to deliver, you need bodies – people who know the mainframe, which is where most of the apps being harvested or thinned out reside. And that’s where EDS helps HP flesh this out into a real service.

But EDS is so 2009; the big news on the horizon is 3Com, a company that Cisco left in the dust before it rethought its product line and eked out a highly noticeable 30% market share for network devices in China. Once the deal is closed, 3Com will be front and center in HP’s converged computing initiative, which until now has consisted primarily of blades and ProCurve VoIP devices. HP gains a much wider range of network devices to compete head-on as Cisco itself goes up the stack into a unified server business. Once the 3Com deal is closed, HP will have to invest significant time, energy, and resources to deliver on the converged computing vision with an integrated product line, rather than a bunch of offerings that fill the squares of a PowerPoint matrix chart.

According to Livermore, the company’s portfolio is “well balanced.” We’d beg to differ where it comes to software, which accounts for a paltry 3% of revenues (a figure that our friends at HP insist understates the real contribution of software to the business).

It’s the side of the business that suffered from (choose one) benign or malign neglect prior to the Mark Hurd era. HP originated network node management software for distributed networks, an offering that eventually morphed into the former OpenView product line. Yet HP was so oblivious to its own software products that at one point its server folks promoted bundling of a rival product from CA. Nonetheless, somehow the old HP managed not to kill off OpenView or OpenCall (the product now at the heart of HP’s communications and media solutions) – although we suspect that was probably more out of neglect than intent.

Under Hurd, software became strategic, a development that led to the transformational acquisition of Mercury, followed by Opsware. HP had the foresight to place the Mercury, Opsware, and OpenView products within the same business unit, as – in our view – the application lifecycle should encompass managing the runtime (although to this day HP has not really integrated OpenView with Mercury Business Availability Center; the products still appeal to different IT audiences). But there are still holes – modest ones on the ALM side, but major ones elsewhere, like in business intelligence, where Neoview sits alone. Or in the converged computing stack and cloud-in-a-box offerings, which could use strong identity management.

Yet if HP is to become a more well-rounded enterprise computing company, it needs more infrastructural software building blocks. To our mind, Informatica would make a great addition that would bring more attention to Neoview as a credible BI business, not to mention that Informatica’s data transformation capabilities could play a key role in HP’s Application Transformation service.

We’re concerned that, as integration of 3Com is going to consume considerable energy in the coming year, the software group may not have the resources to conduct the transformational acquisitions needed to more firmly entrench HP as an enterprise computing player. We hope that we’re proven wrong.

Software Development is like Manufacturing

For those of us on the outside looking in, it might be tempting to pigeonhole the agile software development community as a highly vocal but homogeneous minority that gained its voice through the Agile Manifesto. In actuality, the manifesto was simply a call to action to, in essence, value working software over working plans. The authors didn’t pretend to monopolize knowledge on the topic and purposely made their manifesto aspirational rather than prescriptive.

And so, not surprisingly, in a world of software development methodologies, there is no such thing as an agile methodology. There are many. There is Scrum, so named for the quick huddles in Rugby that call for fast decisions based on a basic assessment of conditions on the ground. Proponents call scrum responsive while critics assail it for lacking the big picture. In actuality, properly managed scrums do have master plans for direction and context. Other variants such as eXtreme Programming (XP) drive development through testing to ensure 100% test coverage. In actuality, agile methodology is a spectrum that can borrow practices from scrum, XP or other methodologies as long as it meets the goal of working software with minimal red tape.

More recently, debate has emerged over yet another refinement of agile – Lean Development, which borrows many of the total quality improvement and continuous waste reduction principles of lean manufacturing. Lean is dedicated to elimination of waste, but not at all costs (like Six Sigma). Instead, it is about continuous improvement in quality, which will lead to waste reduction. It aims to root out rework, which is a great waste of time and money, and a process that subtracts rather than adds value to the product. Lean techniques were at the crux of new enhancements to Danube’s agile planning offering which we discussed a few weeks back.

The best known proponent of lean development, Mary Poppendieck, makes the metaphor that software development is very much like a manufacturing process. Having spent serious time studying and writing about lean manufacturing, and having spent time with the people who implemented lean at the pioneering NUMMI auto plant nearly 20 years ago, we can very much relate to Poppendieck’s metaphors.

In essence, developing software is like making a durable good – a car, appliance, military transport, machine tool, or consumer electronics product. You are building complex products that are expected to have a long service life and that may require updates or repairs. We also contend that the metaphor means software should carry a warranty, where the manufacturer vouches for the product’s performance over a specified operating life.

But not surprisingly, there is hardly any consensus over lean in the agile community. We came across a well-considered dissent by Pillar Technology consultant Daryl Kulak that criticizes the very idea of comparing software development and manufacturing in the same sentence. Kulak maintains that software is not composed of the kinds of uniform, discrete parts found in manufacturing. There’s a bit of irony there, as proponents of SOA, and before that component-based development, often called for just that. But Kulak is correct that most software is not uniform discrete parts, at least according to common methods of software construction and design. And besides, there is always some local environmental variable that will cause variations in how software is configured to run on different targets.

That’s a key part of Kulak’s argument: “We don’t discover its actual character until we’ve started building it.” He also argues that the core systems-oriented principles at the heart of lean could lead to “mechanical Agile,” where “we view our teams as machines and our people as a kind of robot. This type of team can only follow orders and cannot innovate or adapt.”

Kulak and others have good reason for their cynicism, as it wasn’t that long ago that CFOs actually believed the promises of outsourcers that software factories would make software development predictable in time and cost. That had great appeal during the post 2000 recession. Obviously – in retrospect – software development is not so cut and dried – business needs are always changing (that’s why we have agile) and software deployment environments always have their unique idiosyncrasies.

(There has been a more recent refinement of the software factories idea that focuses, not so much on offshore factories. Instead the emphasis is more on patterns and industrializing aspects of software development, e.g., the portions of an app that could be exposed as a reusable service. But that doesn’t eliminate people from the equation, it amps up the architecture.)

But back to the core issue: the lean cynics miss the point. When we visited the NUMMI plant, we were struck by how lean actually spurred production line workers to get creative and develop solutions to problems that they carefully tracked on the plant floor. We saw all those hand-drawn Pareto charts on the walls, and in fact the reason we were there was to study implementation of the first computerized production management system that would help people track quality trends more thoroughly. What we saw was hardly a mechanical process, but one that encouraged people to ask questions, then base their decisions on a combination of hard data and personal experience. It was actually quite a step up from traditional command-and-control production line management.

There’s no reason why this approach shouldn’t work in software. This doesn’t mean that lean approaches will work in all software projects, as conducting value stream analyses could get disruptive for projects, such as the ever evolving Facebook apps of the Obama 2008 campaign. But the same goes for agile – not all projects are cut out for agile methodologies.

So to software developers, stop thinking that you’re so unique. Obviously, lean manufacturing methods won’t apply verbatim to software development, but look at the ideas behind it. Stop getting hung up on minor details, like software isn’t a piece part.

Update: The NUMMI auto plant in Fremont, California has unfortunately become a casualty of the restructuring of GM and contraction in the US auto market. So it would be tempting to state that NUMMI was a case where the surgery was successful but the patient died. The lesson for software developers is that process only ensures that the product will be developed; it doesn’t ensure success of the product.

Productizing Lean Software Development

Although this month our first order of business is to demolish the governance silos separating managing the application lifecycle (ALM) from IT Service Management (ITSM), and then of course achieve world peace, it’s impossible to ignore the Agile software development world’s upcoming annual event. Sadly, for the third straight year we’ll miss the Agile software conference. We’re getting into the stream of tools briefings that for the most part revolve around the common theme of addressing some hurdle to scaling up or out agile practices.

This week we’ve been briefed by VersionOne, which has chosen to focus on adding support for the lean software development variant of agile in its Summer 2009 release. It’s very easy to get confused here, as on the face of it, agile is a lean approach to software development if you define lean as a lighter-weight process that aims to slice through the red tape.

But if you adhere more closely to the definition of lean that emerged out of the just-in-time, continuous improvement-oriented practices of the manufacturing world, there are methods that depart from your typical Scrum or form of extreme programming. The core thread of lean is eliminating waste through value stream mapping (essentially, documenting the workflow that a product undergoes, with the steps and cycle times for adding value) so you can decide where to attack waste and bloat. Then, as with core agile practices, you assume planning goals are moving targets; the subtle distinction in lean is that you make key decisions as late as possible, then deliver as fast as possible (before the goalposts change).

Having a background in manufacturing, and having visited the famous GM/Toyota NUMMI assembly plant joint venture as it was implementing its first computerized production and quality tracking systems 20 years ago, we can relate to the idea of lean. In the software development world, Mary and Tom Poppendieck have become the best-known evangelists; suffice it to say that in the agile world there is plenty of debate on the merits of lean: whether lean requires more systematic planning and upfront analysis of waste compared to traditional agile, and whether that upfront work leads to analysis paralysis. Clearly, the ability to analyze waste up front does require more of a big-picture view than many software development practitioners either have in their veins or are trained for. We’ll pass the buck here by saying that the choice of methodology should depend on the project.

VersionOne has attacked the problem by adding features such as visualizing a value stream, with the ability to drill down on each step to apply, in effect, what we’d call development policies. That is, you may decide that there are certain parameters, such as the amount of work-in-process (uncompleted work), that must be enforced so that development teams do not get in over their heads with too many deliverables at any one time. It can also reveal findings on cycle time that can help you whittle away at the scope of a particular development iteration, and, of course, critical metrics such as defect density, so you can connect the dots between development steps, code characteristics, project scope, and quality issues.
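As a rough illustration of what such development policies look like in practice, here’s a minimal sketch of a work-in-process limit check and a cycle-time calculation; the classes, step names, and limits are hypothetical, not VersionOne’s implementation.

```java
// Toy work-in-process (WIP) limit and cycle-time metrics of the kind a lean planning
// tool might enforce. Step names and the WIP limit are hypothetical assumptions.
import java.time.Duration;
import java.time.Instant;
import java.util.List;

public class LeanMetrics {
    record WorkItem(String id, String step, Instant started, Instant finished) {}

    // A development policy: refuse to pull new work into a step that is at its WIP limit.
    static boolean canStartWork(List<WorkItem> board, String step, int wipLimit) {
        long inProcess = board.stream()
                .filter(w -> w.step().equals(step) && w.finished() == null)
                .count();
        return inProcess < wipLimit;
    }

    // Average cycle time for completed items, a basic input for spotting bloated steps.
    static Duration averageCycleTime(List<WorkItem> board) {
        List<WorkItem> done = board.stream().filter(w -> w.finished() != null).toList();
        if (done.isEmpty()) return Duration.ZERO;
        long totalSeconds = done.stream()
                .mapToLong(w -> Duration.between(w.started(), w.finished()).getSeconds())
                .sum();
        return Duration.ofSeconds(totalSeconds / done.size());
    }
}
```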

What would be nice would be to have some simulation capabilities, so you could test drive changing planning assumptions to see how it might impact developer productivity or the ability to meet delivery commitments. Alas, that feature will have to wait, but it’s always helpful when a tools provider steps into the debate over software development methodology, as it takes those debates out of the realm of theory.

The Little Method that Might

Publication of the Agile Manifesto back in 2001 formalized a belief in the development community that the business goals of software development projects were in actuality moving targets. Since then, agile has emerged as a mostly bottom-up movement among true believers. In a handful of cases, such as within software vendors (whose business is software), agile has become a matter of enterprise urgency. In scattered cases, such as at BMC, there have been successes with enterprise-scale agile development (e.g., multiple teams across multiple sites). But for the most part agile has been perceived as a cottage industry in a developer community that often views itself in a David vs. Goliath situation.

There is little question that agile makes a good fit for the kind of incremental development that adds to or extends existing software portfolios. If the problems are well contained, they are well suited for the compact teams that are typically involved with agile projects. Furthermore, as incremental software, extensions and add-ons do not run up against what is agile’s greatest weakness: maintaining consistent direction and big-picture vision in a methodology where the assumptions are revisited with each iteration or release cycle.

All that is an urban myth, however, as true agile development does require the basic scoping and planning required for larger projects. While the business environment is hardly static these days, no CEO or CFO in his or her right mind would sign their name to an open-ended project that might evolve in random directions. Admittedly, the approach to planning for an agile project might differ from a more conventional one, but planning, scoping, and high-level requirements planning are essential so that the agile project – as changing as it is – operates under the right assumptions and project leaders adequately manage customer expectations. In agile speak, by the way, such mega planning cycles are appropriately called “epics.”

The other side of the coin is that for agile methodology to measurably impact software development, it must be able to scale up to the point where multiple teams can coexist on larger projects. Just because software development is incremental today doesn’t mean that large projects are a thing of the past. On the other hand, while it is critical not to pigeonhole agile into small projects, there is no single methodology that fits all projects. In some cases, you may have a large project that remains better suited for waterfall.

Over the past few years, agile planning tools providers have been gradually adding features that extend the rings of agile planning past the small group holed up by the water cooler. And in coming weeks leading up to the Agile 2009 conference, you’re likely to see other announcements from the likes of Rally, ThoughtWorks and others.

This week it’s Danube’s turn. They’ve come up with a clever search and aggregation facility within their project planner to identify and group project goals and objectives, which gets you beyond the rigid hierarchical schemes of most project planners – agile or otherwise. The rationale for such a feature is that multiple release cycles may share different mixes of common planning goals, and that within one release cycle, the order of priority of those goals might vary. For instance, in planning a customer portal, the requirement of concealing the customer’s actual identity is always important, but it may be more important when it comes to page navigation or entry of credit card numbers than when it comes to setting a page display or shipping preference. Within a conventional hierarchical planning scheme there is no way to distinguish that.
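To show what grouping goals outside a rigid hierarchy might look like, here’s a minimal sketch that tags planning goals and ranks them per release; it’s our own illustration, not Danube’s design.

```java
// Toy illustration of grouping planning goals by tag with per-release priorities,
// rather than forcing them into a single hierarchy. Not Danube's actual design.
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class GoalPlanner {
    record Goal(String name, Set<String> tags, Map<String, Integer> priorityByRelease) {}

    // The same goal (e.g., "conceal customer identity") can rank differently per release.
    static List<Goal> goalsForRelease(List<Goal> goals, String tag, String release) {
        return goals.stream()
                .filter(g -> g.tags().contains(tag))
                .sorted(Comparator.comparingInt(
                        (Goal g) -> g.priorityByRelease().getOrDefault(release, Integer.MAX_VALUE)))
                .toList();
    }
}
```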

Danube’s innovation typifies the state of agile planning – tools providers are beginning to add the bells and whistles that can improve the ability to communicate and coordinate across larger or more complex projects. But it also reveals that the solution is hardly a done deal. For instance, while Danube can now facilitate a more sophisticated means of managing requirements and goals for more involved agile projects, it lacks the next logical piece of the puzzle: traceability. As you consider goals for each iteration, it helps to have context, and in regulated situations, it is essential to understand where and why those requirements or goals originated, and how and when they were or are addressed. Danube has change logs, but in the next release we’d like to see them raise the ante with some ability to track the breadcrumbs that will inevitably accumulate during the course of an agile project.

Shutting the barn doors

Security is one of those things that has become everybody’s business. Well, maybe not quite everybody’s, but for software developers, the growing reality of web-based application architectures means that this is something they have to worry about, even if they were never taught about back doors, buffer overflows, or SQL injection in their computer science programs.

Back when software programs were entirely internal, or even during Web 1.0, when Internet applications consisted of document dispensaries or remote database access, security could be adequately controlled through traditional perimeter protection. We’ve said it before: as applications evolved to full web architectures that graduated from remote queries against a database to more dynamic interaction between applications, perimeter protection became the 21st century equivalent of a Maginot Line.

Security is a black box to most civilians, and for good reason. Even in the open source world, where you have the best minds constantly hacking away, users of popular open source programs like Firefox are still on the receiving end of an ongoing array of patches and updates. It’s a cat and mouse game: hackers are constantly discovering new back doors that even the brightest software development minds couldn’t imagine.

While in an ideal world developers would never write bugs or leave doors open, in the real world they need automated tools that ferret out what their training never provided, or what they wouldn’t be able to uncover through manual checks anyway. A couple of years ago, IBM Rational acquired Watchfire, whose AppScan does so-called “black box” testing, or ethical hacking, of an app once it’s on a testbed; today, IBM bought Ounce Labs, whose static (or “white box”) testing provides the other half of the equation.
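For a rough feel of what “white box” testing looks for, here’s a deliberately naive sketch that flags SQL built by string concatenation in source code; real static analyzers such as Ounce’s are vastly more sophisticated, so treat this toy scan as purely illustrative.

```java
// Deliberately naive "white box" check: scan source lines for SQL built by string
// concatenation, a common injection risk. Real static analyzers are far more capable.
import java.util.List;
import java.util.regex.Pattern;

public class NaiveStaticScan {
    private static final Pattern CONCATENATED_SQL = Pattern.compile(
            "\"\\s*(SELECT|INSERT|UPDATE|DELETE)\\b.*\"\\s*\\+", Pattern.CASE_INSENSITIVE);

    static void scan(String fileName, List<String> sourceLines) {
        for (int i = 0; i < sourceLines.size(); i++) {
            if (CONCATENATED_SQL.matcher(sourceLines.get(i)).find()) {
                System.out.printf("%s:%d possible SQL injection: query built by concatenation%n",
                        fileName, i + 1);
            }
        }
    }

    public static void main(String[] args) {
        // Hypothetical source line being scanned.
        scan("LoginDao.java", List.of(
                "String q = \"SELECT * FROM users WHERE name = '\" + username + \"'\";"));
    }
}
```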

With the addition of Ounce, IBM Rational claims it has the only end-to-end web security testing solution. For its part, HP, like IBM, previously acquired a black box tester (SPI Dynamics) and currently covers white box testing through a partnership with Fortify (we wouldn’t be surprised if at some point HP ties the knot on that one as well). But for IBM Rational, it means they have put together the basic piece parts, not an end-to-end solution yet; Ounce needs to be integrated with AppScan first. In a discussion with colleague Bola Rotibi, we agreed that presenting a testbed, no matter how unified, is just the first step. She suggested modeling – a kind of staged approach where a model is tested first to winnow out architectural weaknesses. To that we could see an airtight case for a targeted solution with requirements that makes security testing an exercise driven by corporate (and where appropriate, regulatory) policy.

While the notion of application security testing is fairly new, the theme about proactive testing early in the application lifecycle is anything but. The more things change, the more they don’t.

Oracle Fusion 11g Middleware: Executed According to Plan

Today’s announcement by Oracle of the rollout of Fusion Middleware 11g is a bit anticlimactic in that the details are pretty much according to the plan that came out exactly a year ago today. Although the Fusion stack is comprised of multiple parts, internally developed and acquired, the highlight is that it represents the fruition of the BEA acquisition. Oracle had Fusion middleware prior to acquiring BEA, but there’s little question that BEA was the main event. WebLogic filled the donut hole in the middle of the Fusion stack with a server that was far more popular than Oracle Containers for Java EE (OC4J). Singlehandedly, BEA catapulted Oracle Fusion into becoming a major player in middleware.

Oracle largely stuck to the previously announced roadmap for convergence of the BEA products, with the only major surprises being in the details. As planned, Oracle incorporated WebLogic as the strategic Java platform and JDeveloper as the primary development environment, kept dual business process modeling paths, and drove master data management, data integration, and identity management largely with Oracle offerings plus some added BEA content.

Although the Oracle Fusion product portfolio came from far more diverse sources than BEA (as Oracle was obviously a more aggressive acquirer), the result is far more unified than anything that BEA ever fielded. Before getting swallowed by Oracle, BEA had multiple portal, development, and integration technologies lacking a common framework. By comparison, Oracle has emphasized a common framework for mashing the pieces together.

That’s rooted in Oracle’s heritage for developing native tools and utilities, dating back to the Oracle Forms 4GL and the various utilities for managing the Oracle database; the tools were sufficiently native that they typically were confined to Oracle shops. But that approach to native tooling morphed with development of a broader framework that is optimized for Oracle platforms. It’s an outgrowth of the mentality at Oracle that good is the enemy of best, and that what Oracle is building is a platform rather than discrete products.

It’s an approach that also makes Oracle’s tagline of Fusion being standards-based more nuanced. Yes, the Fusion products are designed to support Oracle’s “hot pluggable” best-of-breed strategy of working with other vendors’ products, but for designing and managing the Fusion environment, Oracle has you surrounded with native tooling if you want it. Call it a subtle pull for encouraging customers to add more Oracle content.

That explains how, 6 – 7 years ago, Oracle began developing what has become the Application Development Framework (ADF) as its own model-view-controller alternative to the Apache Struts framework that it previously used in early versions of the JDeveloper Java tool. That approach has carried through to this day with JDeveloper, which provides a higher level, declarative approach to development that would not fit with traditional Eclipse IDEs. And that approach applies to Oracle Enterprise Manager (EM), which does not necessarily compete with BMC, CA, HP, or IBM Tivoli in application management, but provides the last mile of declarative deployment, monitoring, and performance testing capabilities for the Fusion platform.

Bringing together the Oracle and BEA technologies resulted in some synergies where the value was greater than the sum of the parts. A good example is the pairing of BEA’s quasi-real-time JRockit JVM with the Oracle Coherence data grid, a distributed caching layer for Java objects. In essence, JRockit juices up the performance of Coherence, which is used whenever you need higher performance with frequently used objects; conversely, Coherence provides a high-end, enterprise clustered platform that provides an excellent use case for JRockit.
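For readers unfamiliar with Coherence, the basic usage pattern looks roughly like the sketch below; the cache name and types are hypothetical, and the API details are from memory, so treat it as illustrative rather than definitive.

```java
// Rough sketch of the classic Coherence usage pattern: a named, distributed cache of
// Java objects shared across a cluster. Cache name and types are hypothetical; treat
// the API details as illustrative.
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class PriceCacheExample {
    public static void main(String[] args) {
        // Join the cluster and obtain (or create) a named distributed cache.
        NamedCache prices = CacheFactory.getCache("product-prices");

        // Frequently used objects live in the grid, close to the application tier,
        // instead of being re-fetched from the database on every request.
        prices.put("SKU-1234", 19.99);
        Double cached = (Double) prices.get("SKU-1234");
        System.out.println("Cached price: " + cached);

        CacheFactory.shutdown();
    }
}
```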

As noted, while the broad outlines of Fusion 11g are hardly any mystery, there are some interesting departures that occurred along the way. One of the more notable was in BPM, where Oracle added another option to its runtime strategy for Oracle BPM Suite. Originally, Oracle BPEL Process Manager was to be the runtime, requiring BPM users to map their process models to BPEL, essentially an XML-based sequential programming language that lacks process semantics. A year later, OMG is putting the finishing touches on BPMN 2.0, a process modeling notation that has added support for executable models. And so with the release of 11g, Oracle BPM Suite users will gain the option of bypassing BPEL as long as their processes are not that transactionally complex.

Make no mistake about it, the Fusion 11g migration was a huge reengineering project, involving nearly 2000 development projects and over 5000 product enhancements. So it’s a shame that Oracle did not take the opportunity to re-architect its middleware stack by migrating it to a microkernel architecture, with OSGi being the most prominent example. Oracle WebLogic Server is OSGi-based, but the BPM/SOA stack is not. Oracle remains mum as to whether it plans to adopt a microkernel architecture throughout the rest of the Fusion stack.

So why are we all hot and bothered about this? OSGi – or the principle of dynamic, modular microkernels in general – offers the potential to vastly reduce Java’s footprint through deployment of highly compact servers that contain only the Java modules necessary to run. The good news is that this is potentially a highly economical, energy-efficient, space-efficient green strategy. The bad news is that it’s not enough for the vendor to adopt a microkernel; the user has to learn how to selectively and dynamically deploy the modules.
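For context on what those “modules” look like in OSGi terms, here’s a minimal bundle activator sketch against the standard org.osgi.framework API; the bundle and what it does are hypothetical.

```java
// Minimal OSGi bundle: the framework starts and stops it dynamically, so a server can
// be composed of only the bundles it actually needs. The bundle's job here is hypothetical.
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class ExampleActivator implements BundleActivator {
    @Override
    public void start(BundleContext context) {
        // Register services, open resources, etc. when the module is activated.
        System.out.println("Bundle " + context.getBundle().getSymbolicName() + " started");
    }

    @Override
    public void stop(BundleContext context) {
        // Release resources when the module is deactivated -- without restarting the server.
        System.out.println("Bundle " + context.getBundle().getSymbolicName() + " stopped");
    }
}
```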

But as we noted last week, OSGi seems to have lost its momentum of late. In our Ovum research last year, we believed that OSGi was going to become the de facto standard for Java platforms as IBM and SpringSource fully migrated their stacks, and as rivals provided at least tacit support. A year later, Oracle’s silence is deafening. We believe that Oracle’s pending acquisition of Sun adds some interesting dynamics to the plot, as Sun has continued to speak out of both sides of its mouth on the topic: supporting OSGi for its open source GlassFish Java platform, while putting its weight behind Project Jigsaw, which aims to redefine Java modularity as JSR 294. Unfortunately, the announcement of Fusion 11g has not cleared up matters.