Category Archives: Security

HP buys Fortify, it’s about time!

What took HP so long? Store that thought.

As we’ve stated previously, security is one of those things that have become everybody’s business. It has traditionally been the province of security professionals focused on perimeter defense, but the exposure of enterprise apps, processes, and services to the Internet opens huge back doors that developers unwittingly leave open to buffer overflows, SQL injection, cross-site scripting, and you name it. Security was never part of the computer science curriculum.

But as we noted when IBM Rational acquired Ounce Labs, developers need help. They will need to become more aware of security issues but realistically cannot be expected to become experts. Without that help, developers are caught between a rock and a hard place: the pressures of software delivery demand speed, agility, and a discipline of continuous integration, while security demands the deliberation of a chess player.

At this point, most development/ALM tools vendors have not actively pursued this additional aspect of QA; there are a number of point tools in the wild that are not necessarily integrated. The exceptions are IBM Rational and HP, which have been in an arms race to incorporate this discipline into QA. Both gained so-called “black box” testing capabilities via acquisition – where you throw ethical hacks at the problem and then figure out where the soft spots are. It’s the security equivalent of functionality testing.

Last year IBM Rational raised the ante with its acquisition of Ounce Labs, providing “white box” static scans of code – in essence, applying debugger-type approaches. Ideally, the two should be complementary – just as you debug, then dynamically test code for bugs, do the same for security: white box static scan, then black box hacking test.
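To make the white box/black box distinction concrete, here is a minimal Python sketch (ours, not Fortify’s or IBM’s) of the classic flaw a static scan flags before the code ever runs: untrusted input concatenated into a SQL statement. A black box test would find the same hole from the outside by throwing the malicious string at a live form.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")

def find_user_unsafe(username):
    # The pattern a static scan flags: untrusted input concatenated into SQL.
    # A username of "x' OR '1'='1" returns every row in the table.
    query = "SELECT id, email FROM users WHERE name = '%s'" % username
    return conn.execute(query).fetchall()

def find_user_safe(username):
    # The fix the scanner points you toward: the driver binds the value,
    # so it can never be reinterpreted as SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

print(find_user_unsafe("x' OR '1'='1"))  # leaks the whole table
print(find_user_safe("x' OR '1'='1"))    # returns nothing
```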

Over the past year, HP and Fortify have been in a mating dance as HP pulled its DevInspect product (an also-ran to Fortify’s offering) and began jointly marketing Fortify’s SCA product as HP’s white box security testing offering. In addition to generating the tests, Fortify’s SCA manages this stage as a workflow, and with integration to HP Quality Center, auto-populates defect tracking. We’ll save discussion of Fortify’s methodology for another time, but suffice it to say that HP already planned to integrate security issue tracking into its Assessment Management Platform (AMP), which provides a higher-level dashboard focused on managing policy and compliance, vulnerability and risk management, distributed scanning operations, and alerting thresholds.

We wondered what took HP so long to consummate this deal. Admittedly, while the software business unit has grown under now-departed CEO Mark Hurd, it remains a small fraction of the company’s overall business. And with the company’s direction of “Converged Infrastructure,” its resources are heavily preoccupied with digesting Palm and 3Com (not to mention EDS). The software group therefore didn’t have a blank check, and given Fortify’s 750-strong global client base, we don’t think the company was going to come cheap (the acquisition price was not disclosed). With the mating ritual having predated IBM’s Ounce acquisition last year, buying Fortify was just a matter of time. At least a management interregnum didn’t stall it.

Finally!

Shutting the barn doors

Security is one of those things that have become everybody’s business. Well maybe not quite everybody, but for software developers, the growing reality of web-based application architectures means that this is something that they have to worry about, even if they were never taught about back doors, buffer overflows, or SQL injection in their computer science programs.

Back when software programs were entirely internal, or even during Web 1.0, when Internet applications consisted of document dispensaries or remote database access, security could be adequately controlled through traditional perimeter protection. We’ve said it before: as applications evolved to full web architectures that graduated from remote queries against a database to dynamic interaction between applications, perimeter protection became the 21st century equivalent of a Maginot Line.

Security is a black box to most civilians, and for good reason. Even in the open source world, where you have the best minds constantly hacking away, users of popular programs like Firefox are still on the receiving end of an ongoing array of patches and updates. It’s a cat and mouse game: hackers are constantly discovering new back doors that even the brightest software development minds couldn’t imagine.

In an ideal world, developers would never write bugs or leave doors open. In the real world, they need automated tools that ferret out what their training never provided, or what they wouldn’t be able to uncover through manual checks anyway. A couple of years ago, IBM Rational acquired Watchfire, whose AppScan does so-called “black box” testing, or ethical hacking, of an app once it’s on a testbed; today, IBM bought Ounce Labs, whose static (or “white box”) testing provides the other half of the equation.

With the addition of Ounce, IBM Rational claims it has the only end-to-end web security testing solution. For its part, HP, like IBM, also previously acquired a black box tester (SPI Dynamics) and currently covers white box testing through a partnership with Fortify (we wouldn’t be surprised if at some point HP ties the knot on that one as well). But for IBM Rational, this means they have assembled the basic piece parts, not yet an end-to-end solution; Ounce needs to be integrated with AppScan first. And in a discussion with colleague Bola Rotibi, we agreed that a testbed, no matter how unified, is just the first step. She suggested modeling – a staged approach where a model is tested first to winnow out architectural weaknesses. To that we’d add integration with requirements, making security testing an exercise driven by corporate (and, where appropriate, regulatory) policy.

While the notion of application security testing is fairly new, the theme about proactive testing early in the application lifecycle is anything but. The more things change, the more they don’t.

Private Cloudburst

To this day we’ve had a hard time getting our arms around just what exactly a private cloud is. More to the point, where does it depart from server consolidation? The common thread is that both involve some form of consolidation. But if you look at the definition of cloud, the implication is that what differentiates a private cloud from server consolidation is a much greater degree of virtualization. Some folks, such as Forrester’s John Rymer, fail to see any difference at all.

The topic is relevant because it’s IBM Impact conference time, and that means product announcements. In this case, it’s the new WebSphere CloudBurst appliance, which manages, stores, and deploys WebSphere Application Server images to the cloud, providing a way to ramp up virtualized business services with the kind of dynamic response that cloud is supposed to enable. And since it is targeted at managing your resources inside the firewall, IBM is positioning this offering as an enabler for business services in the private cloud.

Before we start looking even more clueless than we already are, let’s set a few things straight. There’s no reason that you can’t have virtualization when you consolidate servers; in the long run it makes the most of your limited physical and carbon footprints. Instead, when we talk private clouds, we’re taking virtualization up a few levels. Not just the physical instance of a database or application, or its VM container, but now the actual services it delivers. Or as Joe McKendrick points out, it’s all about service orientation.

In actuality, that’s the mode you operate in when you take advantage of Amazon’s cloud. In their first generation, Amazon published APIs to their back end, but that approach hit a wall given that preserving state over so many concurrent active and dormant connections could never scale. It may be RESTful services, but they are still services that abstract the data services that Amazon provides if you decide to dip into their pool.

But we’ve been pretty skeptical up to now about private cloud – we’ve wondered what really sets it apart from a well-managed server consolidation strategy. And there’s not exactly been a lot of product out there that lets you manage an internal server farm beyond the kind of virtualization that you get with a garden variety hypervisor.

So we agree with Joe that it’s all about services. Services venture beyond hypervisor images to abstract the purpose and task that a service performs from how or where it is physically implemented. Consequently, if you take the notion to its logical extent, a private cloud is not simply a virtualized bank of server clusters, but a virtualized collection of services made available wherever there is capacity – and, if managed properly, as close to the point of consumption as demand and available resources (and the cost of those resources) permit.

In all likelihood, early implementations of IBM’s CloudBurst and anything like it that comes along will initially be targeted at an identifiable server farm or cluster. In that sense, it is only a service abstraction away from what is really just another case of old-fashioned server consolidation (paired with IBM’s established z/VM, you could really turn out some throughput if you already have the big iron there). But taken to its logical extent – a private cloud that deploys service environments wherever there is demand and capacity, freed from the four walls of a single facility – that will be the fruition of the idea.

Of course, there’s no free lunch. Private clouds are supposed to eliminate the uncertainty of running highly sensitive workloads outside the firewall. Being inside the firewall will not necessarily make the private cloud more secure than a public one, and by the way, it will not replace the need to implement proper governance and management now that you have more moving parts. That’s hopefully one lesson that SOA – dead or alive – should have taught us by now.

Web 2.0 Maginot Line

Last week we spent a couple of lovely but unseasonably cold early spring days locked inside a hotel near the Boston convention center for HP’s annual analyst conference covering the third of the company that is not PCs or printers. While much of what we heard or saw was under non-disclosure, we won’t be shot for telling you about a 20-minute demonstration by Caleb Sima on how Web 2.0 apps can be honey pots for hackers. You can restrict access to your site as strictly as possible and use SSL to go out over secure HTTP, but if your Web 2.0 site uses a rich browser client to perform all the authentication locally, you may as well have built a Maginot Line.

The point was to demonstrate a new freebie utility from HP, SWFScan, which scans Flash files for security holes; you point it at a website, and it decompiles the code and identifies vulnerabilities. OK, pretty abstract sounding. But Sima did a live demo, conducting a Google search for websites with rich Flash clients that included logins, then picking a few actual sites at random (well, maybe he did the search ahead of time to get an idea of what he’d pick up, but that’s showbiz). Enter the URL into the tool and it scans the web page, decompiling the SWF (the vector graphics file format of Flash that contains the ActionScript) and displaying in plain English all the instances of password entry and processing. So why bother with the trouble of network sniffing when all you have to do is run a botnet that automates Google searches, hits web pages, decompiles the code, and forges logins? Sima then showed the same thing with database queries, giving hackers yet simpler alternatives to SQL injection.
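For the curious, here is a rough Python sketch of the workflow a tool like SWFScan automates. The swf-decompiler command is a hypothetical stand-in (SWFScan bundles its own decompiler), but the fetch, decompile, and grep sequence is essentially what Sima demonstrated.

```python
import re
import subprocess
import urllib.request

# Hypothetical command-line decompiler; SWFScan bundles its own. Any tool
# that turns a .swf back into ActionScript source would slot in here.
DECOMPILER = ["swf-decompiler", "--dump-actionscript"]

def scan_swf(url):
    """Fetch a Flash file and grep the decompiled source for credential logic."""
    with open("target.swf", "wb") as f:
        f.write(urllib.request.urlopen(url).read())
    result = subprocess.run(DECOMPILER + ["target.swf"],
                            capture_output=True, text=True)
    # Anything the client does with passwords is readable by anyone who asks.
    for lineno, line in enumerate(result.stdout.splitlines(), 1):
        if re.search(r"password|passwd|login|secret", line, re.IGNORECASE):
            print(f"{lineno}: {line.strip()}")
```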

Point taken. Web 2.0 is fine as long as authentication is conducted using Web 1.0 design.

Innocence Lost

While open source has drawn a halo for the community development model, recent findings from Fortify Software reveal that some worms, snakes, and other nasty creatures may be invading the sanctuary. Fortify has identified a new exploit to which it has assigned the arcane name “build-process injection.” Translated to English, it means that Trojan horses may now be hiding inside that open source code your developers just checked out.

Our first reaction to this was something akin to, “Is nothing sacred anymore?” We recalled crashing some local Linux user group meetings back in the 90s. And we walked away impressed that somehow the idealism of Woodstock had resurfaced in virtual developer communities who were populating, in the words of Eric S. Raymond, bazaars of free intellectual property around the closed cathedrals of proprietary software. Open source developers develop software because that’s what they like to do, and because they want to seize the initiative of innovation back from greedy corporate interests who guard IP behind layers of barbed wire.

And, as success stories like Linux or JBoss attest, open source has proven an extremely effective, if occasionally chaotic development model that opens up the world’s largest virtual software R&D team. Doing good could also mean doing profitable.

Well, we shouldn’t have been so naïve as to think that reality wouldn’t at some point crash this virtual oasis of trust. Heck, you’d have to be living under a rock to remain unaware of the increasingly ubiquitous presence of worms, viruses, Trojan horses, and bots across the Internet. How can you be sure that at this very moment, your computer isn’t unwittingly spewing out offers to recover lost fortunes for some disgraced Nigerian politician?

The designers of open source Trojan horses had it figured out all too well, targeting the human behavior patterns prevailing inside the community. Like the infamous Love Bug back in 2000, they took advantage of the fact that in certain situations, we all let our guard down. With the Love Bug, it was the temptation to satisfy some feeling of unrequited love.

Yup, we were one of the victims. In our case, the love bug came from a rather attractive public relations contact, who had a rather melodic name. Oh and by the way did we say that she happened to be quite attractive? Well, the morning after found us cleaning up quite a proverbial mess.

With the new open source exploits, the attack targets the level of trust that normally exists inside the community. While developers might be wary of, or prohibited by corporate policy from, downloading executables from outside the firewall, most implicitly trust the development server. In this case, you download what you think is a fragment of code from the repository, only to find (at some point) that it’s a Trojan horse that will wreak havoc whenever it decides to.
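There is no single fix, but one obvious piece of hygiene is refusing to trust any downloaded artifact, internal repository or not, unless it matches a checksum published through a separate channel (a cryptographic signature is stronger still). A minimal Python sketch, with the caveat that a checksum hosted on the same compromised server proves nothing:

```python
import hashlib
import urllib.request

def fetch_verified(url, expected_sha256):
    """Download an artifact, refusing it unless its hash matches the published one."""
    data = urllib.request.urlopen(url).read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"checksum mismatch for {url}: "
                           f"got {digest}, expected {expected_sha256}")
    return data
```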

Given the success of the open source development model, it was inevitable that reality would bite at some point. Given the maturity of the open source world, the community will hopefully start engaging in the software equivalent of safe sex.

The China Syndrome

We had an interesting discussion this morning with software development security guru Brian Chess, but afterwards we couldn’t help wondering whether the topic itself was something of an oxymoron. Given that disclosures of breaches of sensitive data like credit card or Social Security numbers are becoming alarmingly routine, it seems that software security is observed only, quite literally, in the breach.

The problem has grown worse with the spread of the Internet, which provides access to vulnerable assets, and with the legacy of Visual Basic and its descendants, which made software development easier and more accessible to music, art history, and philosophy majors, who have supplemented what (in this country) is a declining supply of computer scientists.

The challenge of course is that software developers are not security experts and never have been. Traditionally, security has been the domain of specialists who devise measures at the perimeter, and with authentication and access control inside the firewall. Not surprisingly, the traditional attitude among developers is that security is somebody else’s problem, and that when you’re done developing and testing the code for conventional software bugs and defects, you typically throw it over the wall to the security folks.

Although he doesn’t use those words, Chess would likely characterize such a strategy as the software equivalent of a Maginot Line defense. He’s authored a new book discussing common software security holes and how they can be fixed.

He concedes that not all security holes can be caught. But he claims that the most common ones, such as cross-site scripting (which lets attackers run their own scripts in your users’ browsers), SQL injection (which tricks a database into disgorging or corrupting its data), or buffer overflows (which overwrite memory in ways that let security measures be bypassed), can easily be eliminated with automated tools without bringing software development to its knees. And even if eliminating these holes solves only half the problem, at least we’re halfway better off.
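Cross-site scripting is a good illustration of just how mechanical the fix can be. A hedged sketch in Python (any decent web framework’s templating engine does the equivalent escaping for you):

```python
import html

def render_comment_unsafe(comment):
    # Reflecting raw input back into a page: a "comment" containing a
    # <script> tag runs in every visitor's browser.
    return "<div class='comment'>%s</div>" % comment

def render_comment_safe(comment):
    # Escaping turns markup characters into inert entities.
    return "<div class='comment'>%s</div>" % html.escape(comment)

print(render_comment_safe("<script>steal(document.cookie)</script>"))
# -> <div class='comment'>&lt;script&gt;steal(document.cookie)&lt;/script&gt;</div>
```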

Of course, Chess has an agenda, as his company, Fortify Software, provides “white box” testing that runs code through a battery of static scans to identify security holes.

But agenda or not, he has a point.

Chess doesn’t expect developers to become security experts, but he expects them to become deputized on the front lines, to at least test for the obvious holes. He believes security folks should set the strategy and be the overseers of last resort, but contends that they can’t handle the security challenge entirely on their own anymore. The world has simply gotten too interconnected for that. He’s optimistic that developers can change their ways; it wasn’t too long ago that most considered the idea of defect management databases and bug tracking systems as unmanly. He also believes that corporations will likely enforce a change of ethos, because cracks in security (such as the unsecured wireless network at TJ Maxx that eventually opened a hole for somebody to filch credit card numbers) will push security to the front burner in their IT departments.

Nonetheless, we’ve got some doubts. We believe that the TJ Maxxes of the world will be compelled by the market to get their acts together. But we also realize that in a global market, there’s always room for shoddy merchandise: in this case, low-cost code where you get what you pay for. India, which began the globalization trend, has competed on quality. China, whose manufacturing sector has come under suspicion lately, will also likely clean up its act. But we fear that development centers in emerging countries, desperate for hard currency, will offer development services at the lowest possible cost, with little regard for security. And we wouldn’t be surprised if a market emerges for those services, security warts and all.

Athens and Sparta — Part II

After IBM announced it was buying web application security tool provider Watchfire, we predicted that HP would soon respond. We were right but we goofed on the target. Instead of buying Cenzic, HP chose SPI Dynamics, a provider whose tools fit more squarely with Mercury’s test tools.

But there’s a bit of irony in all this, which has to do with division of labor. The need to vet software security certainly belongs in the software lifecycle, which HP says is the logic behind the SPI buy. The only hitch is that software developers and QA specialists are not trained in the area. The kinds of software defects they look for aren’t the ones that spawn memory leaks, or open vulnerabilities to SQL injection (which can cause databases to dump their data) or buffer overflows.

Typically, security is vetted after the fact by its own specialists. If you’re lucky, it’ll happen before the app enters production, but more likely, serious analysis of real or potential holes won’t happen until after there’s an incident. At best, developers don’t see security issues until they’re asked to rework the code.

So it was interesting to hear SPI Dynamics cofounder Caleb Sima concede that, although his company has unit and system test tools that would fit in nicely with HP/Mercury’s Quality Center offerings, only 12% of the customer base uses them (he did claim that half the base is kicking the tires on those tools). The rest use the company’s best-known tools: the ones that scan websites in production for existing security holes. Ironically, the company’s strong point is not with the QA folks who are the sweet spot of the HP division into which it would fall.

The other side of the story is that this adds only one piece to HP’s web app security strategy. Significantly, SPI has partnerships with other niche providers that could fill many of the gaps, such as Citadel, which offers automated vulnerability remediation; GuardedNet, which offers a security information management platform; Teros, which conducts discovery; and F5 Networks, which provides an XML load-leveling appliance. In fact, SPI Dynamics has worked jointly with these folks on the Application Vulnerability Description Language (AVDL), an XML language that is now an OASIS standard.

So we’ll venture another prediction: We wouldn’t be surprised if HP buys one or more of SPI Dynamics’ partners in coming months to pick up where SPI leaves off.

Athens and Sparta

We’ve noted in the past that, when it comes to safeguarding service levels in SOA, there’s been a disconnect – the service level agreements hammered out by business process owners are typically enforced using tools targeted at software developers. There’s been relatively little connection between monitoring service level agreements with SOA tools and dealing with the realities of the data center.

All too often, the same has proven true when it comes to enforcing security of web applications. Software developers, who are supposed to be the intellects or artists of IT, typically know little about IT security. Conversely, security folks, who act as the armed guards or soldiers of the data center in repelling intrusions and hacks, know little of software architecture.

So we were intrigued yesterday when IBM disclosed its intention to buy Watchfire. Until now, you typically didn’t see security checks within the design and development phases of the software lifecycle. But IBM’s offer could take what is currently a niche tool used by security specialists once the web app is live, or just about to be, and inject the process at several points along the software lifecycle. That’s because IBM is the first household name to show an interest in tooling that probes application security soft spots.

Watchfire is part of a small but growing collection of providers who automate the ethical hacking of web applications (others include SPI Dynamics and Cenzic). In Watchfire’s case, it stores signatures of known security breaches, much as antivirus tools store not the virus itself but its signature. (Cenzic takes a different tack, recording end-to-end sessions through the browser.)
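To illustrate the signature idea (our toy sketch, not a description of Watchfire’s or Cenzic’s actual engines): a scanner fires canned probe payloads at a parameter and looks for telltale fingerprints in the response.

```python
import urllib.parse
import urllib.request

# Toy signature database: probe payload -> response fragment that betrays a hole.
# Commercial scanners maintain thousands of these; both entries are illustrative.
SIGNATURES = [
    ("'", "You have an error in your SQL syntax"),  # naive SQL injection probe
    ("<script>1</script>", "<script>1</script>"),   # unescaped reflection suggests XSS
]

def probe(base_url, param):
    """Fire each probe at one query parameter and report signature matches."""
    for payload, fingerprint in SIGNATURES:
        url = "%s?%s=%s" % (base_url, param, urllib.parse.quote(payload))
        try:
            body = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        except OSError:
            continue
        if fingerprint in body:
            print("possible hole: payload %r matched %r" % (payload, fingerprint))
```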

Watchfire has a fairly impressive, 800-strong customer base, which is concentrated in financial services, healthcare, and government. Nine of the top 10 global banks are Watchfire customers. IBM is proposing to add it to the Rational brand, with targeted integrations to Tivoli.

Although Watchfire has been until now primarily a tool used by security specialists, it has a loose arrangement to exchange data with Fortify, a tool that checks application security vulnerabilities at the code level. Significantly, Fortify is also a Rational partner, and once the Watchfire deal is closed, could become another logical acquisition target for IBM as its tools could have an even better fit with Rational’s testing tools.

What’s interesting is that rival Cenzic is predicting there will be more consolidation in this space. On one side, application life cycle management vendors like Borland, Compuware, and Serena are logical suitors, as security testing should be added to the QA stage of the life cycle.

But we’d like to make a bit of a further reach: How about HP? Like IBM, it also has testing, IT governance, and infrastructure management offerings. Roughly half of Cenzic’s installed base uses HP/Mercury testing tools, and the company has been certified to interface with Mercury Quality Center. Of course, as Mercury dominates the test market, Cenzic’s ties are hardly unique.

But that doesn’t mean HP/Mercury shouldn’t one-up IBM here. Maybe it’s a poker face, or maybe HP has other meat on its plate, but Cenzic’s marketing VP Mandeep Khera maintained that the two companies have not had any marketing-related discussions since HP completed the Mercury acquisition.

Divided at Birth

In many ways, the IM market is déjà vu all over again. In an age of universal, standards-based Internet connectivity, IM remains a bastion of proprietary technology.

So if you’re on Microsoft Live Messenger, don’t even think about pinging buddies who happen to be using AOL’s AIM system unless you use a special gateway. And so you end up with segregated communities that can speak to each other only with great difficulty, if at all. It’s kind of funny that this still exists in the Internet age.

But 15 years ago, that’s exactly how email worked. When the business world began using the Internet, proprietary email systems added gateways. It was complicated and expensive, but the precedent was set. When ISPs emerged, providing direct access to the Internet with free, standards-based email, you could say that the rest was history.

Obviously, the transition forced huge dislocations on the first generation email industry, but of course, it opened up new e-business and service opportunities that far dwarfed what came before it.

Consequently, at first glance you can’t help but conclude that by keeping their technologies proprietary, IM vendors are shooting themselves in the foot. Even if standards commoditized their IM services, think of the additional higher-value services that could be unleashed as a result.

Well maybe.

Looking at email as precedent, the good news is that connectivity has grown virtual, cheap, and ubiquitous. But the bad news hits you in the face when you log in every morning, courtesy of the spam, malware, and phishing attacks that clog your corporate networks and personal mailboxes.

Don’t give the AOLs or Microsofts of the world too much credit here. Their prime motivation remains protecting their turf. To date, they’ve barely paid lip service to supporting interoperability standards. The usual suspects, including IBM, Microsoft, Yahoo, and AOL, each developed their own dialects of SIMPLE, which meant that there was effectively no standard.

But what broke the ice was Google’s endorsement last year of XMPP, the protocol developed by open source IM server provider Jabber. As a consumer brand, it’s too hard to ignore Google. And just this week, IBM bit the bullet by agreeing to add support of the upstart standard.

Nonetheless, the opening up of IM is not about standards.

Look at Google. It lists roughly a half dozen, mostly minor, third party IM systems with which it interoperates via XMPP. Now, Google is not the kind of provider that would waste its time with custom links. Its standard practice is to publish a single API and expect that many will come.

But supporting XMPP doesn’t guarantee interoperability. For IM services linking to Google, it depends on the platform, and in the case of Trillian, only applies to the paid deluxe rather than the free downloaded version.

Nonetheless, reluctance to open up IM networks isn’t limited to vendors. Corporate customers are equally leery. They obviously don’t want IM chats to sag under the weight of spam, and security managers are not exactly gung ho about the prospect of their own carefully authenticated users exchanging chats with buddies going by anonymous handles like “pigsty123.”

Not surprisingly, corporate IM providers are approaching interoperability with 10-foot poles. This week, IBM signed interoperability agreements with AOL, Google, and Yahoo using variants of the rival standards that are in place. The condition for shaking hands is preserving the authentication mechanisms, so that a Sametime user with credentials won’t compromise the security of the group using AOL Instant Messenger.

IBM’s announcement this week wasn’t its first bout with interoperability. Several years ago, IBM and AOL agreed to support embedding of each other’s clients, so a Sametime user could sign on as an AOL AIM user or vice versa.

Not surprisingly, the idea broke down because neither provider could exert control over the foreign clients that were now running native within their supposedly protected enclaves.

It Can’t Happen Here?

As we’ve noted time and again, service-oriented architectures (SOAs) promise to do wonders for integration, both inside and outside the firewall. Reliance on loosely coupled connections that survive software upgrades, declarative approaches that demystify the process of making connections, and the adoption of consensus standards such as SOAP messaging and XML identifiers and content will make connectivity far more accessible than ever.

But here’s the rub. The strengths of SOAP and XML are also their greatest weaknesses. Because SOAP uses HTTP, which is designed to pass through firewalls, SOAP messages could provide attractive vectors for the writers of all the evil malware that infects Windows PCs. And because XML is wordy and easy to manipulate, how much would it take for some hacker to design a payload so complex to parse that it exposes service providers to denial-of-service attacks?

We were pondering this while walking the aisles of a recent web services conference for Wall Street. Significantly, when we popped the question of whether there have ever been documented virus, worm, or denial-of-service attacks via web services, the stock answer was, “Nothing that’s been announced publicly.”

In all likelihood, there have probably been few if any attacks up until now because the vast worlds of Outlook address books and category killer sites like Amazon or Yahoo present meatier targets for hackers. But as enterprises expose higher value transactions through SOAs and web services, attackers bent on economic destruction rather than ego gratification are likely to shift their sights.

The immediate question is whether the basic building blocks of web services – SOAP and XML – are in their own way just as vulnerable as Windows and Internet Explorer. In Windows and IE, the problems are endemic to the platform; for web services, the vulnerability is the distributed nature of web services, the accessibility of the core building blocks (XML can be read by non-programmers), and the lack of mechanisms, best practices, or standards outside of identification or message authentication. Compounding matters, because web services are standards based, they are well suited for interchangeability. You can replicate, aggregate, or disaggregate service requests or service content. And XML itself is very resource-intensive. Bottom line? XML and SOAP could present inviting targets for hackers.

Of course, you can set policies limiting the size of XML payloads that are processed and other measures covering origin of requests, and so on. And you can devote an armada of specialized appliances for decryption, scanning, parsing, content or message routing, and other resource-intensive tasks. But just as clever programmers can defeat passwords and similar measures, manipulating XML to wreak havoc shouldn’t be rocket science. And, given that each of these tasks is usually handled by products from an array of hardware, software, and platform vendors, there is always the chance that cleverly constructed hacks could seep through the cracks.
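To make the parsing threat concrete, consider the classic entity-expansion attack, which hides exponential growth inside a few lines of perfectly legal XML. A present-day Python sketch using the third-party defusedxml package shows a truncated version of the payload and a parser that refuses to play along:

```python
import defusedxml.ElementTree as ET

# Truncated "billion laughs" payload: each entity expands to ten copies of the
# one before it. The full ten-level version balloons to roughly 3 GB of "lol"s.
PAYLOAD = """<?xml version="1.0"?>
<!DOCTYPE lolz [
  <!ENTITY lol "lol">
  <!ENTITY lol2 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
  <!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
]>
<lolz>&lol3;</lolz>"""

try:
    ET.fromstring(PAYLOAD)
except Exception as exc:
    # defusedxml raises EntitiesForbidden rather than expanding the entities.
    print("payload rejected:", repr(exc))
```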

To repeat, the sky is NOT yet falling on the sanctity of web services. We’re encouraged by innovations such as Sarvega’s approach of sniffing XML streams before they are assembled into documents. But sooner or later, hacks and malware will become reality, meaning that service requests will have to be vetted for threats far beyond requestor or message integrity.