What took HP so long? Store that thought.
As we’ve stated previously, security has become everybody’s business. Traditionally the province of security professionals focused on perimeter defense, it now lands on developers as well: exposing enterprise apps, processes, and services to the Internet opens huge back doors that developers unwittingly leave open to buffer overflows, SQL injection, cross-site scripting, and more. Security was never part of the computer science curriculum.
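To make those back doors concrete, here is a minimal sketch (the table, user, and credentials are all invented for illustration) of the classic SQL injection hole next to the parameterized query that closes it:

```python
import sqlite3

# Hedged toy setup: an invented users table with one account.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # String concatenation lets attacker-controlled input rewrite the query.
    query = ("SELECT COUNT(*) FROM users WHERE name = '" + name +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchone()[0] > 0

def login_safe(name, password):
    # Parameterized placeholders keep input as data, never as SQL.
    query = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone()[0] > 0

# The classic tautology payload bypasses the password check entirely.
payload = "' OR '1'='1"
print(login_vulnerable("alice", payload))  # True  -- the back door
print(login_safe("alice", payload))        # False -- input stays data
```

The point is that nothing in the vulnerable version looks broken to a developer who was never taught to distrust input, which is exactly why automated tooling matters.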
But as we noted when IBM Rational acquired Ounce Labs, developers need help. They will need to become more aware of security issues but realistically cannot be expected to become experts. Developers are caught between a rock and a hard place: the pressures of software delivery demand speed, agility, and a discipline of continuous integration, while security requires the deliberation of a chess player.
At this point, most development/ALM tools vendors have not actively pursued this additional aspect of QA; there are a number of point tools in the wild that are not necessarily integrated. The exceptions are IBM Rational and HP, which have been in an arms race to incorporate this discipline into QA. Both gained so-called “black box” testing capabilities via acquisition – where you throw ethical hacks at the problem and then figure out where the soft spots are. It’s the security equivalent of functionality testing.
Last year IBM Rational raised the ante with the acquisition of Ounce Labs, providing “white box” static scans of code – in essence, applying debugger-type approaches. Ideally, the two should be complementary – just as you debug and then dynamically test code for bugs, do the same for security: white box static scan, then black box hacking test.
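To give a hedged sense of what a white box scan does – real static analyzers trace tainted data flow across an entire codebase, while this toy merely pattern-matches one vulnerability class – consider flagging string-built SQL in source files:

```python
import re

# Naive signature: a SQL keyword followed later by string concatenation,
# a common marker of injectable queries. Treat this purely as a sketch of
# the idea; commercial static analyzers do real data-flow analysis.
SQL_CONCAT = re.compile(r"""(SELECT|INSERT|UPDATE|DELETE)\b.*['"]\s*\+""",
                        re.IGNORECASE)

def scan_source(lines):
    """Return (line_number, text) pairs that look like injectable SQL."""
    return [(n, line.strip()) for n, line in enumerate(lines, start=1)
            if SQL_CONCAT.search(line)]

sample = [
    'query = "SELECT * FROM users WHERE name = \'" + name + "\'"',
    'query = "SELECT * FROM users WHERE name = ?"  # parameterized',
]
print(scan_source(sample))  # flags line 1 only
```

Even this crude check illustrates why static scanning catches things black box hacking may miss: the defect is visible in the source long before the app is deployed to a testbed.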
Over the past year, HP and Fortify have been in a mating dance: HP pulled its DevInspect product (an also-ran to Fortify’s offering) and began jointly marketing Fortify’s SCA product as HP’s white box security testing offering. In addition to generating the tests, Fortify’s SCA manages this stage as a workflow, and with integration to HP Quality Center, auto-populates defect tracking. We’ll save discussion of Fortify’s methodology for another time, but suffice it to say that HP already planned to integrate security issue tracking into its Assessment Management Platform (AMP), which provides a higher-level dashboard focused on managing policy and compliance, vulnerability and risk management, distributed scanning operations, and alerting thresholds.
We wondered what took HP so long to consummate this deal. Admittedly, while the software business unit has grown under now-departed CEO Mark Hurd, it remains a small fraction of the company’s overall business. And with the company’s direction of “Converged Infrastructure,” its resources are heavily preoccupied with digesting Palm and 3Com (not to mention EDS). The software group therefore didn’t have a blank check, and given Fortify’s 750-strong global client base, we don’t think that the company was going to come cheap (the acquisition price was not disclosed). With the mating ritual having predated IBM’s Ounce acquisition last year, buying Fortify was just a matter of time. At least a management interregnum didn’t stall it.
Security is one of those things that have become everybody’s business. Well maybe not quite everybody, but for software developers, the growing reality of web-based application architectures means that this is something that they have to worry about, even if they were never taught about back doors, buffer overflows, or SQL injection in their computer science programs.
Back when software programs were entirely internal, or even during Web 1.0 when Internet applications consisted of document dispensaries or remote database access, security could be adequately controlled through traditional perimeter protection. We’ve said it before: as applications evolved to full web architectures that graduated from remote database queries to dynamic interaction between applications, perimeter protection became the 21st-century equivalent of a Maginot Line.
Security is a black box to most civilians, and for good reason. Even in the open source world, where the best minds are constantly hacking away, users of popular programs like Firefox are still on the receiving end of an ongoing stream of patches and updates. It’s a cat and mouse game: hackers are constantly discovering new back doors that even the brightest software development minds couldn’t imagine.
In an ideal world, developers would never write bugs or leave doors open. In the real world, they need automated tools that ferret out what their training never provided, or what they wouldn’t be able to uncover through manual checks anyway. A couple of years ago, IBM Rational acquired Watchfire, whose AppScan does so-called “black box” testing, or ethical hacking, of an app once it’s on a testbed; today, IBM bought Ounce Labs, whose static (or “white box”) testing provides the other half of the equation.
With the addition of Ounce, IBM Rational claims it has the only end-to-end web security testing solution. For its part, HP, like IBM, previously acquired a black box tester (SPI Dynamics) and currently covers white box testing through a partnership with Fortify (we wouldn’t be surprised if at some point HP ties the knot on that one as well). In reality, IBM Rational has assembled the basic piece parts but does not yet have an end-to-end solution; Ounce needs to be integrated with AppScan first. In a discussion with colleague Bola Rotibi, we agreed that a testbed, no matter how unified, is just the first step. She suggested modeling – a staged approach in which a model is tested first to winnow out architectural weaknesses. To that we’d add requirements management, which would make security testing an exercise driven by corporate (and, where appropriate, regulatory) policy.
While the notion of application security testing is fairly new, the theme about proactive testing early in the application lifecycle is anything but. The more things change, the more they don’t.
To this day we’ve had a hard time getting our arms around just what exactly a private cloud is. More to the point, where does it depart from server consolidation? The common thread is that both involve some form of consolidation. But if you look at the definition of cloud, the implication is that what differentiates private cloud from server consolidation is a much greater degree of virtualization. Some folks, such as Forrester’s John Rymer, see no difference at all.
The topic is relevant because it’s IBM Impact conference time, which means product announcements – in this case, the new WebSphere Cloudburst appliance. It manages, stores, and deploys IBM WebSphere Server images to the cloud, providing a way to ramp up virtualized business services with the kind of dynamic response that cloud is supposed to enable. And since it is targeted at managing your resources inside the firewall, IBM is positioning the offering as an enabler for business services in the private cloud.
Before we start looking even more clueless than we already are, let’s set a few things straight. There’s no reason that you can’t have virtualization when you consolidate servers; in the long run it makes the most of your limited physical and carbon footprints. Instead, when we talk private clouds, we’re taking virtualization up a few levels. Not just the physical instance of a database or application, or its VM container, but now the actual services it delivers. Or as Joe McKendrick points out, it’s all about service orientation.
In actuality, that’s the mode you operate in when you take advantage of Amazon’s cloud. In their first generation, Amazon published APIs to their back end, but that approach hit a wall given that preserving state over so many concurrent active and dormant connections could never scale. It may be RESTful services, but they are still services that abstract the data services that Amazon provides if you decide to dip into their pool.
But we’ve been pretty skeptical up to now about private cloud – we’ve wondered what really sets it apart from a well-managed server consolidation strategy. And there’s not exactly been a lot of product out there that lets you manage an internal server farm beyond the kind of virtualization that you get with a garden variety hypervisor.
So we agree with Joe that it’s all about services. Services venture beyond hypervisor images to abstract the purpose and task a service performs from how or where it is physically implemented. Consequently, if you take the notion to its logical extent, a private cloud is not simply a virtualized bank of server clusters, but a virtualized collection of services made available wherever there is capacity – and, if managed properly, as close to the point of consumption as demand and available resources (and the cost of those resources) permit.
In all likelihood, early implementations of IBM’s Cloudburst and anything like it will initially be targeted at an identifiable server farm or cluster. In that sense, it is only a service abstraction away from what is really just another case of old-fashioned server consolidation (paired with IBM’s established z/VM, you could really turn out some throughput if you already have the big iron). But taken to its logical extent – a private cloud that deploys service environments wherever there is demand and capacity, freed from the four walls of a single facility – it will become the fruition of the idea.
Of course, there’s no free lunch. Private clouds are supposed to eliminate the uncertainty of running highly sensitive workloads outside the firewall. Being inside the firewall will not necessarily make the private cloud more secure than a public one, and by the way, it will not replace the need to implement proper governance and management now that you have more moving parts. That’s hopefully one lesson that SOA – dead or alive – should have taught us by now.
Last week we spent a couple of lovely but unseasonably cold early spring days locked inside a hotel near the Boston convention center for HP’s annual analyst conference for the third of the company that is not PCs or printers. While much of what we heard or saw was under non-disclosure, we won’t be shot if we tell you about a 20-minute demonstration given by Caleb Sima on how Web 2.0 apps can be honey pots for hackers. You can restrict access to your site as strictly as possible and use SSL to go out over secure HTTP, but if your Web 2.0 site uses a rich browser client to perform all the authentication locally, you may as well have built a Maginot Line.
The point was to demonstrate a new freebie utility from HP, SWFScan, which scans Flash files for security holes: you point it at a website, and it decompiles the code and identifies vulnerabilities. OK, pretty abstract sounding. But Sima did a live demo, conducting a Google search for websites with rich Flash clients that included logins, then picking a few actual sites at random (well, maybe he did the search ahead of time to get an idea of what he’d pick up, but that’s showbiz). Enter the URL into the tool and it scans the web page, decompiling the SWF (the vector graphics file format of Flash that contains the ActionScript) and displaying in plain English all the instances of password entry and processing. So why bother with the trouble of network sniffing when all you have to do is run a botnet that automates Google searches, hits web pages, decompiles the code, and forges logins? Sima then showed the same thing with database queries, giving hackers even simpler alternatives to SQL injection.
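Conceptually – and this is a hypothetical sketch, not SWFScan’s actual algorithm – once client code has been decompiled to text, finding hardcoded credentials is little more than a string search. The decompiled snippet below is invented for illustration:

```python
import re

# Hypothetical decompiler output: a Flash-style client that validates the
# login locally -- exactly the anti-pattern demonstrated in the talk.
decompiled = """
function checkLogin(user:String, pass:String):Boolean {
    var adminPassword:String = "letmein42";
    return (user == "admin" && pass == adminPassword);
}
"""

# Flag any assignment whose variable name smells like a credential.
CRED = re.compile(
    r'(\w*(?:pass|secret|pwd)\w*)\s*(?::\w+)?\s*=\s*"([^"]+)"',
    re.IGNORECASE)

findings = [(m.group(1), m.group(2)) for m in CRED.finditer(decompiled)]
print(findings)  # [('adminPassword', 'letmein42')]
```

No network sniffing required: if the secret ships inside the client, a grep over the decompiled source hands it over.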
Point taken. Web 2.0 is fine as long as authentication is conducted using Web 1.0 design.
Ever get the feeling that software development has degenerated into a cat and mouse game? The business defines a requirement, provides a budget that covers about two-thirds of the cost, specifies an 8-week deadline, and then changes the requirements midstream because its needs changed along with the competitive environment. The result is a game of dodge ball where the winner is somehow able to duck the finger pointing.
The practice of Enterprise Architecture (EA) was invented to make IT proactive rather than reactive. Give us an impossible deadline, and we’ll apply a consistent process that, after analysis, determines what’s possible, then enforces the discipline to ensure that the possible happens. In that sense EA is much like Business Process Management (BPM) for IT’s software or solution delivery business, in that you try to embed your best practices when implementing systems projects.
At the end of the day, the goal of EA is to make all the processes related to implementing and supporting IT systems consistent. Yes, it can be about determining standards for physical architecture, such as preferred database, OS, and application choices, SOA policy, and all that. But more broadly, it’s a form of Business Process Management (BPM) applied to IT’s system activities. EAs have many frameworks to choose from for codifying how IT responds to the business in making the decisions governing specification, implementation, and maintenance of systems; among the best known are the Zachman Framework, the granddaddy, which takes a matrix approach to identifying all the facets of implementing IT systems, and TOGAF, the Open Group-sponsored framework, which takes a more iterative, process-centered approach.
Enterprise Architecture has a strong band of adherents, primarily in large enterprises or public bodies that also have a strong process focus. The actual power and influence of the enterprise architect himself or herself obviously varies from one organization to the next. The Open Group’s EA conference this week still had a pretty strong turnout in spite of the fact that layoffs and budget cuts are dominating the headlines.
But Enterprise Architecture has a branding problem: try pitching architecture, promote the claim that it is supposed to be transformational, and, if you work in a more typical enterprise, you’re likely to get one of two responses as you are thrown out of the office:
1. Transformation sounds too much like Reengineering, which translates to all pain and little gain.
2. What the heck is enterprise architecture? I need software!
Cut to the chase: how can IT successfully pitch enterprise architecture? Should it pitch EA at all? And how viable is the idea during a recession, when IT budgets are getting slashed and people are getting laid off?
We were reminded of the issue of keeping EA relevant as we sat on a panel at the Open Group’s Enterprise Architect Practitioner’s Conference in San Diego this week hosted by Dana Gardner, along with Forrester Research principal EA analyst Henry Peyret; Chris Forde, VP Technology Integrator for American Express and Chair of the Open Group’s Architecture Forum; Janine Kemmeren, Enterprise Architect for Getronics Consulting and Chair of the Architecture Forum Strategy Work Group; and Jane Varnus, Architecture Consultant for Enterprise Architecture Department, of Bank of Montreal.
Gardner polled the audience on what kind of ROI was realistic for EA initiatives; an overwhelming majority of attendees stated that a 2-year payback was reasonable. The problem is that today, money is just not that patient. As one consultant joked to us after the session, anyone proposing a 2-year payback should enter the CxO’s office with another job offer in their back pocket. Consider this: if you proposed a project in July 2008 for a natural resources company based on oil prices exceeding $140/barrel, your plans would have been fine until the collapse of Lehman Brothers 2 months later.
EA could borrow some lessons from the agile software development community, which has proven that in some cases (not all), lighter-weight processes may be adequate and even preferable. Agile development is predicated on the assumption that requirements are a moving target, so it takes a looser approach to requirements, called “stories,” that can change at the end of each sprint or iteration, which could be anything from 2 to 6 weeks. Conceived for web development, agile is not the be-all and end-all, and likely would not be a wise choice for implementing SAP. But if you keep in mind that the goal of EA is not process for its own sake, but ingraining consistent policy and methodology for decision making, the notion of “lite” EA is not all that outlandish. Your organization just has to decide what the pain points are and address the relevant processes accordingly.
TOGAF 9 is an encouraging step in this direction: it has made the framework more modular and its pieces more self-explanatory. No longer do you have to cherry-pick concepts and processes from different parts of the framework to conduct a planned systems migration, and the modularity makes TOGAF easier to implement piecemeal. We agree with Forrester’s Peyret that the world is more virtual, scattered, and networked, and that EA frameworks need to account for this reality. But at the same time, EA needs to provide a lighter-weight answer for smaller organizations that desire the consistency but lack the time, resources, or depth to undertake classic EA.
And while we’re at it, lose the name. The term enterprise architecture is awfully vague – it could mean process architecture, physical architecture, not to mention the reengineering and capital “T” transformation connotations. When pitching EA, and especially EA lite outside the choir, how about using terms that connote steady, consistent decision making and predictable results? If you’ve got a better idea on how to brand EA, please let us and the world know.
Update: A full transcript of the session is available here; you can listen to the podcast here.
Turns out that the new year wouldn’t be complete without yet another “SOA is dead” flame war, touched off by Anne Thomas Manes’s provocative comments to the effect that SOA is dead, long live services. As inveterate SOA blogger Joe McKendrick has noted, it’s a debate that’s come and gone over the years, and in its latest incarnation has drawn plenty of reaction, both defensive and on target – that the problem is that practitioners get hung up on technology, not solutions. Or as Manes later clarified, it’s about tangibles like services, and solid practices like application portfolio management, that deliver business value – not technology for its own sake.
We could be glib and respond that Francisco Franco is still dead, but Manes’s clarification struck a chord. All too often in software development, we leap, then look. We were reminded of that by an interesting announcement this week from SOA Software. Their contention is that there is a major gap at the front end of the SOA lifecycle, at least when it comes to vendor-supplied solutions: managing service portfolios – making investment decisions as to whether a service is worth developing, or worth continuing.
SOA Software contends that service repositories are suited for managing the design and development lifecycles of the service, while run-time management is suited for tracking consumption, policy compliance, service contract compliance, and quality of service monitoring. However, existing SOA governance tools omit the portfolio management function.
Well, there’s a gap when it comes to portfolio management of services, except that there isn’t: there is an established market and practice for project portfolio management (PPM), which applies financial portfolio analysis techniques to analyzing software development projects to help decision makers identify which projects should get greenlighted, and which existing efforts should have the plugs pulled.
The downside to PPM is that it’s damn complex, and it mandates comprehensive data collection encompassing timesheet data, everything paid to software vendors and consultants, and infrastructure consumption. We have another beef as well: in most IT organizations, new software development or implementation projects account for 10% of budgets or less. The bottom line is that PPM is complex and hard – and anyway, shouldn’t it also cover the 80 – 90% of the software budget that is devoted to maintenance?
But anyway, SOA Software contends that PPM is overkill for managing service portfolios. Their new offering, Service Portfolio Manager, is essentially a “lite” PPM tool that is applied specifically to services. Their tool factors in four basic artifacts: existing (as-is) business processes or application functionality; identifiers for candidate services such as “customer load qualifier;” ranking of business priorities; and metadata for services that are greenlighted for development and production.
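As a hedged sketch of what “lite” portfolio scoring might look like – the weights, criteria, threshold, and service names below are all invented, not SOA Software’s actual model – ranking candidate services can reduce to a weighted sum against business priorities:

```python
# All weights, criteria, scores, and service names are hypothetical,
# purely to illustrate lightweight portfolio scoring.
weights = {"revenue_impact": 0.4, "reuse_potential": 0.3, "compliance": 0.3}

candidates = {
    "customer-qualifier": {"revenue_impact": 5, "reuse_potential": 4, "compliance": 3},
    "report-formatter":   {"revenue_impact": 2, "reuse_potential": 3, "compliance": 1},
}

def score(svc):
    # Weighted sum of 1-5 scores against ranked business priorities.
    return sum(weights[c] * v for c, v in candidates[svc].items())

THRESHOLD = 3.0  # arbitrary greenlight cutoff
ranked = sorted(candidates, key=score, reverse=True)
for svc in ranked:
    verdict = "greenlight" if score(svc) >= THRESHOLD else "defer"
    print(f"{svc}: {score(svc):.1f} -> {verdict}")
```

The appeal of the “lite” approach is visible even in a toy: a handful of ranked criteria and a cutoff, rather than the timesheet-level data collection full PPM demands.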
We understand that SOA Software is attempting to be pragmatic about all this. They claim most of their clients have either not bothered with PPM or have not been successful in implementing it because of its scope and overhead; in their view, it’s better to manage services even if only as a special case. And they see plenty of demand from their client base for a more manageable, service-oriented PPM alternative.
But we have to wonder whether it makes sense to erect yet another governance silo, or whether SOA really merits a special case. If we view SOA as a special case, we wind up with yet another managerial silo and more process complexity. It also divorces SOA – or services, if you prefer – from the business it supports. If SOA is a special case, then it must be an island of technology that has to be managed uniquely. In the long run, that will only increase management costs, and in the end reinforce the notion that SOA is a workaround to the bottlenecks of enterprise, application, or process integration, and a band-aid for poor or nonexistent enterprise architecture.
It also further isolates SOA or services from the software development lifecycle (SDLC), of which they should be an integral part. While services are not monolithic applications – they are extensions or composites of applications and other artifacts such as feeds – they are still software. From a governance standpoint, the criteria for developing and publishing services should not be distinct from those for developing and implementing software.
And while we’re at it, we also believe that the run-time governance of SOA or services cannot be divorced from the physical aspects of running IT infrastructure. Service level management of SOA services is directly impacted by how effectively IT delivers business services, which is the discipline of IT Service Management (ITSM). When there is a problem with publishing a service, it should become an incident that is managed, not within its own SOA cocoon, but as an IT service event that might involve problems with software or infrastructure. In the long run, service repositories should be federated with CMDBs that track how different elements of IT infrastructure are configured.
In the short run, SOA Software’s Service Portfolio Manager is a pragmatic solution for early adopters who have drunk the SOA Kool-Aid and mainstreamed service implementation, but lack adequate governance when it comes to the SDLC (and enterprise architecture, by implication). In the long run, it should serve as a wake-up call to simplify PPM by applying the 80/20 rule, making it more usable rather than spawning special-case implementations.
The old adage of the shoemaker’s son going barefoot has long been associated with IT organizations, because they have often been the last to adopt the kinds of automated solutions to run their shops that they have implemented for the rest of the enterprise.
Of course, packaging up what IT does is difficult because, compared to line organizations, technology group processes and activities are so diverse. On one hand, IT is regarded as a digital utility responsible for keeping the digital lights on; not surprisingly, that is the dominant impression given that, according to a wide range of surveys, IT infrastructure and software maintenance easily consumes 70% or more of the budget (depending on whose survey you read). As for what’s left over, that’s where IT is supposed to operate in project delivery mode, delivering the technologies that are supposed to support business innovation. In that role, IT juggles several roles including systems integration, application development, and project management. And, if IT is to function properly, it is supposed to govern itself.
With ITIL emerging as a – fill in the blank – process enabler or yet another layer of red tape, the goal has been to make the utility side of IT a more consistent business with repeatable processes that improve service levels and reduce cost. Adherence to the ITIL framework, or the broader discipline of IT Service Management, is supposed to make the business feel it is getting more value for its IT dollars (better service can translate to competitive edge, especially if you heavily leverage the web as a channel for dealing with business partners or customers). But the brass ring is supposed to come from the way IT overtly supports business innovation in project delivery. And like any business, IT needs to manage its investments – a concern that has driven the emergence of Project Portfolio Management (PPM) as a means for IT organizations to evaluate how well projects are meeting their budgets, schedules, and defined goals. In some ways, it’s tempting to label PPM as ERP for IT, as it’s intended as an application for planning where IT should direct its resources.
Five years ago, the acquisition of Kintana by Mercury (now part of HP), intended to grow the company out of its development/testing niche, began putting PPM on the map. That was followed over the next few years by each of the major development tools players acquiring its way into this space.
Of course, the devil’s in the details. PPM is not a simple answer to a complex problem, as it requires decent data from IT projects that could encompass, not only accomplished milestones from a project management system, but feedback or reports from quality management or requirements analysis tools to ensure that the software – no matter how mission-critical – isn’t getting bogged down with insurmountable defects or veering away from intended scope. Significantly, at least one major player – IBM – is rethinking its whole PPM approach, with the result being that it will likely split its future solutions into separate project, portfolio, and program management streams.
Against this backdrop, Innotas has carved a different path. Founded by veterans of Kintana, its goal is to make the new generation a kinder, gentler PPM for the rest of us. Delivered as a SaaS offering, the company not surprisingly differentiates itself using the Siebel/Salesforce.com metaphor (and in fact is a marketing partner on Salesforce’s App Exchange). While we haven’t yet caught a demo to attest that Innotas PPM really is simpler, the company did grow 4x to a hundred customers last year, expects to double this year (recession notwithstanding), and just received a modest $6 million shot of Series C funding to do the usual late-round expansion of sales and marketing.
The dilemma of governance is that lacking bounds, you can often wind up spinning wheels just to get the last detail, whether you actually need it or not. Not surprisingly for a top-down solution, and one that’s aimed at IT, the PPM market has been fairly limited. Significantly, in the press release, Innotas referred to the size of the SaaS rather than the PPM market to promote its upside potential. While that begs the question, there’s always the classic China market argument of a relatively empty market just waiting to be educated. In this case, Innotas points to the fact that barely 300 of the Fortune 1000 have bought PPM tools so far, and that doesn’t even count the 50,000 midmarket companies that have yet to be penetrated.
Our comeback is that, like any midmarket solution, the burden is on the vendor to make the case that midmarket companies, with their more modest IT software portfolios, have resource management problems complex enough to warrant such a solution. Nonetheless, the PPM market needs solutions that can at least give you an 80% answer, because most organizations don’t have the time or resources to maintain fully staffed program management offices whose mission is to do exactly that.
Ever since the dawning of structured software development, arguments have been put forth that, if we only architected [fill in the blank] entity relationships, objects, components, processes, or services, software development organizations would magically grow more productive because they could now systematically reuse their work. The fact that reuse has been such a holy grail for so many years reveals how elusive it has always been.
And so Joe McKendrick summarized the recent spate of arguments over SOA and reuse quite succinctly this morning: “What if you built a service-oriented architecture and nothing got reused? Is it still of value to the business, or is it a flop?” The latest round of discussions was sparked by a recent AMR Research report stating that too much of the justification for SOA projects was being driven by reuse. McKendrick cited ZapThink’s David Linthicum, who curtly responded that he’s already “been down this road, several times in fact,” adding that “The core issue is that reuse, as a notion, is not core to the value of SOA…never has, never will.” Instead, he pointed to agility as the real driver for SOA.
To give you an idea of how long this topic has been bandied about, consider an article we wrote for the old Software Magazine back in January 1997, in which we quoted a Gartner analyst who predicted that by 2000, at least 60% of all new software applications would be built from components – reusable, self-contained pieces of software that perform generic functions.
And in the course of our research, we had the chance to speak with an enterprise architect on two occasions – in 1997 and again last year – who was still with the same organization (a near miracle in this era of rapid turnover). A decade ago, her organization embraced component-based development to the point of changing job roles in the software development organization:
• Architects, who have the big picture, with knowledge of technology architecture, the requirements of the business, and where the functionality gaps are. They work with business analysts in handling requirements analysis.
• Provisioners, who perform analysis and coding of software components. They handle horizontal issues such as how to design complex queries and optimize database reads.
• Assemblers, who are the equivalent of developers. As the label implies, they are the ones who physically put the pieces together.
So how did the 1997 reorg take hold? The EA admitted that it took several reorgs for the new division of labor to sink in, and even then it was adjusted for reality. “When you had multiple projects, scheduling conflicts often arose. It turned out that you needed dedicated project teams that worked with each other rather than pools. You couldn’t just throw people into different projects that called for assemblers.”
And even with a revamping of roles, the goals of reuse also adjusted for reality. You didn’t get exact reuse, because components – now services – tended to evolve over time. At best, that component or service that you developed became a starting point, but not a goal.
So as we see it, there are several hurdles here.
The first is culture. Like any technical, engineering-oriented profession, developers pride themselves on creativity and cleverness, and consider it un-macho to reuse somebody else’s work – because of the implication that they cannot improve on it.
The second is technology and the laws of competition.
1. During the days of CASE, conventional wisdom was that we could reuse key processes around the enterprise if we adequately modeled the data.
2. We eventually realized that entity relationships did not adequately address business logic, so we moved to objects, followed by coarser-grained components, with the idea that if we built that galactic enterprise repository in the sky, software functionality could be harvested like low-hanging fruit.
3. Then we realized the futility of grand enterprise modeling schemes, because the pace of change in modern economies meant that any enterprise model would become obsolete the moment it was created. And so we pursued SOA, with the idea that we could compose and dynamically orchestrate our way to agility.
4. Unfortunately, as that concept was a bit difficult to digest (add moving parts to any enterprise system, and you could make the audit and compliance folks nervous), we lazily fell back to that old warhorse: reuse.
5. The problem with reuse took a new twist. Maybe you could make services accessible, even declarative, to eliminate the messiness of integration. But you couldn’t easily eliminate the issue of context. Aside from extremely low-value, low-level commodity services like authentication, higher-level business requirements are much more specific and hard to replicate.
What’s amazing is how the reuse argument continues to endure as we now try justifying SOA. You’d think that after 20 years, we’d finally start updating our arguments.
It shouldn’t be surprising as to why IT’s automation needs often fall to the bottom of the stack: because most companies are not in the technology business, investments in people, processes, or technologies that are designed to improve IT only show up as expenses on the bottom line. And so while IT is the organization that is responsible for helping the rest of the business adopt various automated solutions, IT often goes begging when it comes to investing in its own operational improvements.
In large part that explains the 20-year “overnight success” of ITIL. Conceived as a framework for codifying what IT actually does for a living, the latest revisions of ITIL describe a service lifecycle that provides opportunity for IT to operate more like a services business that develops, markets, and delivers those services to the enterprise as a whole. In other words, it’s supposed to elevate areas that used to pass for “help desk” and “systems management” into conversations that could migrate from middle manager to C-level.
Or at least that’s what it’s cracked up to be. If you codify what an IT service is, then define the actions involved with every step in its lifecycle, you have the outlines of a business process that could be made repeatable. And as you codify processes, you gain opportunities to attach more consistent metrics that track performance.
We’ve studied ITIL and have spoken with organizations on their rationale for adoption. Although the hammer of regulatory compliance might look to be the obvious impetus (e.g., if you are concerned about regulations covering corporate governance or protecting the sanctity of customer identity, you want audit trails that include data center operation), we also found that scenarios such as corporate restructuring or merger and acquisition also played a hand. At an ITIL forum convened by IDC in New York last week, we found that was exactly the case for Motorola, Toyota Financial Services, and for Hospital Corp. of America – each of whom sat on a panel to reflect on their experiences.
They spoke of establishing change advisory boards to cut down on the incidence of unexpected changes (that tend to break systems), formalizing all service requests (to reduce common practices of buttonholing), reporting structures (which, not surprisingly for different organizations, varied widely), and what to put in the Configuration Management Database (CMDB) that is stipulated by the ITIL framework as the definitive store defining what you are running and how you are running it.
But we also came across comments that, for all its lofty goals, ITIL could wind up erecting new silos for an initiative that was supposed to break them down. One attendee, from a major investment bank, was concerned that with adoption of the framework would come pressure for certifications that would wind up pigeonholing professionals into discrete tasks, a la Frederick Taylor. Another mused about excessive reliance on arbitrary metrics for the sake of metrics, because that is what looks good in management presentations. Others questioned whether initiatives, such as adding a change advisory board or adding a layer of what were, in effect, “account representatives” between IT and the various business operating units it serves, would in turn create new layers of bureaucracy.
What’s more interesting is that the concerns of attendees were hardly isolated voices in the wilderness. John Willis, who heads Zabovo, a third-party consulting firm specializing in IBM Tivoli tools, recently posted the responses from the Tivoli mailing list on the question of whether ITIL actually matters. There was no shortage of answers. Not surprisingly, there were plenty of detractors. One respondent characterized ITIL as “merely chang[ing] the shape of the hoops I jump through, and not the fact that there are hoops…” Another termed it a “$6 buzzword/management fad,” while another claimed that ITIL makes much ado over the obvious. “The Helpdesk is NOT an ITIL process, it is merely a function…”
But others stated that, despite the hassles, the real problem is defining processes or criteria more realistically. “Even when I’m annoyed, I still believe ITIL or ITIL-like processes should be here to stay, but management should be more educated on what constitutes a serious change to the environment…” Others claimed that ITIL formalizes what should be your IT organization’s best practices and doesn’t really invent anything new. “Most shops already perform many of the processes and don’t recognize how close they are to being ITIL compliant already. It is often a case of refining a few things.”
Probably the comment that best summarized the problem is that many organizations, not surprisingly, are “forgetting that ITIL is not an end in and of itself. Rather, it is a means to an end,” adding that this point is often “lost on both the critics and cheerleaders of Service Management.”
We’re not surprised that adoption of ITIL is striking nerves. It’s redolent of the early battles surrounding those old MRP II, and later, ERP projects, that often degenerated into endless business process reengineering efforts for their own sake. Ironically, promoters of early Class A MRP II initiatives justified their efforts on the ideal that integration would enable everyone to read from the same sheet of music and therefore enable the organization to act in unison. In fact, many of those early implementations simply cemented the functional or organizational walls that they were supposed to tear down. And although, in the name of process reengineering, they were supposed to make the organizations more responsive, in fact, many of those implementations wound up enshrining many of the inflexible planning practices that drove operational cost structures through the roof.
The bottom line is that IT could use a dose of process, but not the arbitrary kind that looks good in management PowerPoints. The ITIL framework presents the opportunity to rationalize the best practices that already are or should be in place. Ideally, it could provide the means for correlating aspects of service delivery, such as maintaining uptime and availability, with metrics that actually reflect the business value of meeting those goals. For the software industry, ITIL provides a nice common target for developing packaged solutions, just as the APICS framework did for enterprise applications nearly 30 years ago. That’s the upside.
The downside is that, like any technical profession, IT has historically operated in its own silo away from the business, where service requests were simply thrown over the wall with little or no context. As a result, the business has hardly understood what they are getting for their IT dollars, while IT has had little concept of the bigger picture on why they are doing their jobs, other than to keep the machines running. IT has good reason to fear process improvement exercises which don’t appear grounded in reality.
ITIL is supposed to offer the first step: defining what IT does as a service, and the tasks or processes involved in delivering it. The cultural gap that separates IT from the business is both the best justification for adopting ITIL, and the greatest obstacle towards achieving its goals.
While open source has drawn a halo for the community development model, recent findings from Fortify Software reveal that some worms, snakes, and other nasty creatures may be invading the sanctuary. Fortify has identified a new exploit to which it has assigned the arcane name “build-process injection.” Translated into English, it means that Trojan horses may now be hiding inside that open source code your developers just checked out.
Our first reaction to this was something akin to, “Is nothing sacred anymore?” We recalled crashing some local Linux user group meetings back in the 90s. And we walked away impressed that somehow the idealism of Woodstock had resurfaced in virtual developer communities who were populating, in the words of Eric S. Raymond, bazaars of free intellectual property around the closed cathedrals of proprietary software. Open source developers develop software because that’s what they like to do, and because they wanted to seize the initiative of innovation back from greedy corporate interests who guarded IP with layers of barbed wire.
And, as success stories like Linux or JBoss attest, open source has proven an extremely effective, if occasionally chaotic, development model that opens up the world’s largest virtual software R&D team. Doing good could also mean doing well.
Well, we shouldn’t have been so naïve as to think that reality wouldn’t at some point crash this virtual oasis of trust. Heck, you’d have to be living under a rock to remain unaware of the increasingly ubiquitous presence of worms, viruses, Trojan horses, and bots across the Internet. How can you be sure that at this very moment, your computer isn’t unconsciously spewing out offers to recover lost fortunes for some disgraced Nigerian politician?
Designers of open source Trojan horses had it figured out all too well, targeting the human behavior patterns prevailing inside the community. Like the infamous Love Bug back in 2000, these attacks take advantage of the fact that in certain situations, we’ll all let our guard down. With the Love Bug, it was the temptation to satisfy some feeling of unrequited love.
Yup, we were one of the victims. In our case, the love bug came from a rather attractive public relations contact, who had a rather melodic name. Oh, and by the way, did we say that she happened to be quite attractive? Well, the morning after found us cleaning up quite a proverbial mess.
With the new open source exploits, the attack targets the level of trust that normally exists inside the community. While developers might be wary of, or prohibited by corporate policy from, downloading executables from outside the firewall, many implicitly trust that development server. In this case, you download what you think is a fragment of code from the repository, only to find (at some point) that it’s a Trojan horse that will wreak havoc whenever it decides to.
Given the success of the open source development model, it was inevitable that reality would bite at some point. With the maturity of the open source world, the community will hopefully start engaging in the software equivalent of safe sex.