Reporting from HP Software Universe, colleague Dana Gardner provided an interesting account of a keynote delivered by BTO software unit general manager (and Opsware alum) Ben Horowitz on the synergies between the application lifecycle and the data center lifecycle. It’s a topic that’s not just tailor-made for HP Software (whose acquisitions include Mercury, covering the software lifecycle, and Opsware, which complements the former OpenView in data center operations), but a hot button for us as well. Traditionally, the IT organization has been heavily silo’ed: not only are there walls between different players in the software group (e.g., architects, developers, QA), but also between software and operations. So while software development folks are supposed to performance- and integration-test code, when it comes to migrating code to production, the process has been one of throwing production-ready code over the wall to operations.
While there has always been a disconnect between software development and the data center, the gap has become even more glaring with the emergence of SOA and its promises for enabling enterprise agility. That is, if you can make services so flexible that you can swap pieces out (like selecting a different weather forecasting service for transportation routing), and make them responsive to the business through enforcement of service contracts, how can you deliver when you can’t control whether the underlying IT infrastructure can handle the load and provide response times that comply with the contract? Significantly, none of the tools that handle run-time SOA governance have trap doors that automatically re-provision capacity. In an era of risk aversion, the last thing that data center stewards want is software developers hijacking iron. When we spoke with Tim Hall, product director for HP’s SOA Center, after the product was released, he told us that “Dynamic flexing of resources is a nice idea that won’t sit well with the operations people.”
Gardner reported HP’s Horowitz describing the roles that the business, security specialists, IT operations, and QA (note that developers were omitted) play in the transition from design to run time.
What makes HP’s proposition thinkable is an emerging awareness of the need to bring a process management mentality to IT operations. While most organizations observe software development lifecycle processes in the breach, there is a consciousness that developing software should encompass collecting and validating requirements, developing or mapping to an architecture, generating code, and testing. With some of the newer agile methodologies, many of these steps are performed concurrently and in smaller chunks, but they are still supposed to be performed. What’s new is the IT operations side; notably, the latest version of the ITIL framework takes a lifecycle view of the management and delivery of IT services. There are some parallels with the SDLC: Service Strategy has a logical fit with Requirements; Service Design fits well with Development (although the likeness to architecture or design may not be apparent); Service Transition covers the handoff of services into production, which is not addressed in the SDLC; while Continual Service Improvement relates well to the maintenance and upgrade part of the SDLC.
While HP’s Horowitz might not have been speaking about ITIL v3 literally in his keynote – and while not all data center organizations are gung ho about ITIL itself – there is growing awareness inside the data center that you can’t just run operations by reflex anymore.