09.14.12

What will Splunk be when it grows up?

Posted in Big Data, Data Management, Fast Data, IT Infrastructure, Systems Management at 5:20 pm by Tony Baer

Much of the hype around Big Data is that not only are people generating more data, but so are machines. Machine data has always been there – it was traditionally collected by dedicated systems such as network node managers, firewall systems, SCADA systems, and so on. But that’s where the data stayed.

Machine data is obviously pretty low level stuff. Depending on the format of data spewed forth by devices, it may be highly cryptic or may actually contain text that is human intelligible. It was traditionally considered low-density data that was digested either by specific programs or applications or by specific people – typically systems operators or security specialists.

Splunk’s reason for existence is putting this data onto a common data platform and indexing it to make it searchable as a function of time. The operative notion is that the data could then be shared or correlated across applications, such as weblogs. Its roots are in the underside of IT infrastructure management systems, where Splunk is often the embedded data engine. An increasingly popular use case is security, where Splunk can reach across network, server, storage, and web domains to provide a picture of exploits that could be end-to-end, at least within the data center.
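To make the time-as-index idea concrete, here’s a deliberately toy Python sketch of what indexing heterogeneous log lines by timestamp looks like in principle. It is emphatically not Splunk’s engine; the log formats, source names, and field layout are our own invented examples.

    import re
    from bisect import insort
    from datetime import datetime

    class TimeIndex:
        """Toy time-ordered index over raw log lines (illustrative only)."""

        # assume a common "YYYY-MM-DD HH:MM:SS ..." prefix; real machine data is messier
        TS_PATTERN = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+(.*)$")

        def __init__(self):
            self._events = []  # kept sorted by timestamp: (datetime, source, text)

        def ingest(self, source, line):
            match = self.TS_PATTERN.match(line)
            if not match:
                return  # unparseable lines would need fallback handling in real life
            ts = datetime.strptime(match.group(1), "%Y-%m-%d %H:%M:%S")
            insort(self._events, (ts, source, match.group(2)))

        def search(self, keyword, start, end):
            """Return events in [start, end] whose text contains keyword."""
            return [(ts, src, text) for ts, src, text in self._events
                    if start <= ts <= end and keyword in text]

    # correlate a web log with a firewall log over the same time window
    idx = TimeIndex()
    idx.ingest("weblog",   "2012-09-14 17:20:03 GET /checkout 500")
    idx.ingest("firewall", "2012-09-14 17:20:01 DENY tcp 10.0.0.7:443")
    window = (datetime(2012, 9, 14, 17, 20, 0), datetime(2012, 9, 14, 17, 21, 0))
    print(idx.search("500", *window))

The point is simply that once everything is keyed by time, a web log and a firewall log become queryable over the same window, which is the basis for the cross-domain correlation described above.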

There’s been a bit of hype around the company, which IPO’ed earlier this year and reported a strong Q2. Consumer technology still draws the headlines (just look at how much the release of the iPhone 5 drowned out almost all other tech news this week). But given Facebook’s market dive, maybe the turn of events on Wall Street could be characterized as revenge of the enterprise, given the market’s previous infatuation with the usual suspects in the consumer space – mobile devices, social networks, and gaming.

Splunk has a lot of headroom. With machine data proliferating and the company promoting its offering as an operational intelligence platform, Splunk is well-positioned as a company that leverages Fast Data. While Splunk is not split-second or deterministic real-time, its ability to build searchable indexes on the fly positions it nicely for tracking volatile environments as they change, rather than waiting until after the fact (although Splunk can also be used for retrospective historical analysis).

But Splunk faces real growing pains, both up the value chain, and across it.

While Splunk’s heritage is in IT infrastructure data, the company bills itself as being about the broader category of machine data analytics. And there’s certainly lots of it around, given the explosion of sensory devices that are sending log files from all over the place, inside the four walls of a data center or enterprise, and out. There’s the Internet of Things. IBM’s Smarter Planet campaign over the past few years has raised general awareness of how instrumented and increasingly intelligent Spaceship Earth is becoming. Maybe we’re jaded, but it’s become common knowledge that the world is full of sensory points, whether it is traffic sensors embedded in the pavement, weather stations, GPS units, smartphones, biomedical devices, industrial machinery, oil and gas recovery and refining, not to mention the electronic control modules sitting between driver and powertrain in your car.

And within the enterprise, there may be plenty of resistance to getting the bigger picture. For instance, while ITO owns infrastructure data, marketing probably owns the Omniture logs; yet absent the means to correlate the two, it may not be possible to get the answer on why the customer did or did not make the purchase online.

For a sub-$200-million firm, this is a lot of ground to cover. Splunk knows the IT and security markets but lacks the breadth of an IBM to address all of the other segments across national intelligence, public infrastructure, smart utility grids, or healthcare verticals, to name a few. And it has no visibility above IT operations or appdev organizations. Splunk needs to pick its targets.

Splunk is trying to address scale – that’s where the Big Data angle comes in. Splunk is adding features to increase its scale, with the new 5.0 release adding federated indexing to boost performance over larger bodies of data. But for real scale, that’s where integration with Hadoop comes in, acting as a near-line archive for Splunk data that might otherwise be purged. Splunk offers two forms of connectivity: HadoopConnect, which provides a way to stream and transform Splunk data to populate HDFS; and Shuttl, a slower archival feature that treats Hadoop as a tape library (the data is heavily compressed with GZip). It’s definitely a first step – HadoopConnect, as the name implies, establishes connectivity, but the integration is hardly seamless or intuitive yet. It uses Splunk’s familiar fill-in-the-blank interface (we’d love to see something more point-and-click), with the data in Hadoop retrievable, but without Splunk’s familiar indexes (unless you re-import the data back into Splunk). On the horizon, we’d love to see Splunk tackle the far more challenging problem of getting its indexes to work natively inside Hadoop, maybe with HBase.
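For readers who want a feel for what the near-line archival pattern amounts to, here is a minimal sketch of the generic approach: compress aged buckets and push them into HDFS via the standard Hadoop command line. The directory names and file layout are hypothetical, and this is not Shuttl’s actual implementation or configuration.

    import gzip
    import shutil
    import subprocess
    from pathlib import Path

    # Hypothetical local bucket directory and HDFS target path (our assumptions).
    LOCAL_BUCKETS = Path("/opt/indexdata/frozen")
    HDFS_ARCHIVE = "/archive/machine-data"

    def archive_bucket(bucket: Path) -> None:
        """Gzip an aged bucket file and push it to HDFS as a near-line archive copy."""
        gz_path = Path(str(bucket) + ".gz")
        with open(bucket, "rb") as src, gzip.open(gz_path, "wb") as dst:
            shutil.copyfileobj(src, dst)
        # 'hdfs dfs -put' is the standard Hadoop CLI for copying local files into HDFS
        subprocess.run(
            ["hdfs", "dfs", "-put", str(gz_path), f"{HDFS_ARCHIVE}/{gz_path.name}"],
            check=True,
        )
        gz_path.unlink()  # the local compressed copy is no longer needed

    for bucket in sorted(LOCAL_BUCKETS.glob("*.raw")):
        archive_bucket(bucket)

The trade-off is the same one the tape-library analogy suggests: the data is cheap to keep, but you give up the indexes until you pull it back.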

Then there’s the eternal question of making machine data meaningful to the business. Splunk’s search-based interface today is intuitive to developers and systems admins, as it requires knowledge of the types of data elements being stored. But it won’t work for anybody who doesn’t work with the guts of applications or computing infrastructure. Yet conveying that message becomes more critical as Splunk is used to correlate log files with higher level sources – for instance, correlating abandoned shopping carts with underlying server data to see if the missed sale was attributable to system bugs or to the buyer changing her mind.
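A toy sketch of that kind of correlation might look like the following Python snippet; the session and status fields, timestamps, and two-minute window are invented for illustration, not anything Splunk actually emits.

    from datetime import datetime, timedelta

    # Hypothetical, pre-parsed events; field names are our own assumptions.
    cart_abandons = [
        {"session": "s42", "ts": datetime(2012, 9, 14, 17, 21, 5)},
        {"session": "s77", "ts": datetime(2012, 9, 14, 17, 25, 40)},
    ]
    server_errors = [
        {"session": "s42", "status": 503, "ts": datetime(2012, 9, 14, 17, 20, 58)},
    ]

    def correlate(abandons, errors, window=timedelta(minutes=2)):
        """Pair each abandoned cart with server errors seen for the same session
        shortly before the abandonment – a crude proxy for 'system bug vs. changed mind'."""
        pairs = []
        for cart in abandons:
            culprits = [e for e in errors
                        if e["session"] == cart["session"]
                        and timedelta(0) <= cart["ts"] - e["ts"] <= window]
            pairs.append((cart["session"], culprits))
        return pairs

    for session, culprits in correlate(cart_abandons, server_errors):
        verdict = "likely system issue" if culprits else "no error seen; buyer changed her mind?"
        print(session, verdict)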

The log files that record how different elements of IT infrastructure perform are, in aggregate, telling a story about how well your organization is serving the customer. Yet the perennial challenge of all systems-level management platforms has been conveying the business impact of the events that generated those log files. For those who don’t have to dye their hair gray, there are distant memories of providers like CA, IBM, and HP promoting how their panes of glass displaying data center performance could tell a business message. There’s been the challenge for ITIL adopters to codify the running of processes in the data center to support the business. The list of stillborn attempts to convey business meaning to the underlying operations is endless.

So maybe given the hype of the IPO, the relatively new management team that is in place, and the reality of Splunk’s heritage, it shouldn’t be surprising that we heard two different messages and tones.

From recently-appointed product SVP Guido Schroeder, we heard talk of creating a semantic metadata layer that would, in effect, create de facto business objects. That shouldn’t be surprising, as during his previous incarnation he oversaw the integration of Business Objects into the SAP business. For anyone who has tracked the BI business over the years, the key to success has been creation of a metadata layer that not only codified the entities, but made it possible to attain reuse in ad hoc query and standard reporting. Schroeder and the current management team are clearly looking to take Splunk above IT operations to CIO level.

But attend almost any session at the conference, and the enterprise message was largely missing. That shouldn’t be surprising, as the conference itself was aimed at the people who buy Splunk’s tools – and they tend to be down more in the depths of operations.

There were a few exceptions. One of the sessions in the Big Data track, led by Stuart Hirst, CTO of Converging Data, an Australian big data consulting firm, communicated the importance of preserving the meaning of data as it moves through the lifecycle. In this case, it was a counter-intuitive pitch against the conventional wisdom of Big Data, which is to ingest the data now and explore and classify it later. As Splunk data is ingested, it is time-stamped to provide a chronological record. Although this may be low level data, as you bring more of it together, there should be a record of lineage, not to mention sensitivity (e.g., whether customer-facing systems are involved).
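In practice, preserving that meaning amounts to attaching lineage and sensitivity metadata at ingest time rather than after the fact. Here is a minimal sketch of the idea; the field names and source identifiers are our own assumptions, not anything prescribed by Splunk or by Hirst.

    from datetime import datetime, timezone

    def tag_event(raw_line, source, hops=None, customer_facing=False):
        """Wrap a raw log line with the metadata that should travel with it:
        ingest timestamp, lineage (where it has been), and a sensitivity flag."""
        return {
            "ingested_at": datetime.now(timezone.utc).isoformat(),
            "source": source,
            "lineage": [source] + (hops or []),   # systems the record has passed through
            "customer_facing": customer_facing,   # sensitivity marker for downstream handling
            "raw": raw_line,
        }

    event = tag_event("GET /checkout 500", source="web-frontend-03",
                      hops=["load-balancer-1"], customer_facing=True)
    print(event["lineage"], event["customer_facing"])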

From that standpoint, the notion of adding a semantic metadata layer atop its indexing sounds quite intuitive – assign higher level meanings to buckets of log data that carry some business process meaning. For that, Splunk would have to rely on external sources – the applications and databases that run atop the infrastructure whose log files it tracks. That’s a tall order, and one that will require partners – not to mention the question of how to decide which entities should be defined in the first place. Unfortunately, the track record for cross-enterprise repositories is not great; maybe existing MDM implementations around customer or product could provide a beginning frame of reference.

But we’re getting way, way ahead of ourselves here. Splunk is the story of an engineering-oriented company that is seeking to climb higher up the value chain in the enterprise. Yet, as it seeks to engage higher level people within the customer organization, Splunk can’t afford to lose track of the base that has been responsible for its success. Splunk’s best route upward is likely through partnering with enterprise players like SAP. That doesn’t deal with the question of how to expand its footprint to follow machine data wherever it is generated, but then again, that’s a question for another day. First things first, Splunk needs to pick its target(s) carefully.

10.01.10

Leo Apotheker to target HP’s forgotten business

Posted in Application Development, Business Intelligence, Data Management, Database, Enterprise Applications, IT Infrastructure, IT Services & Systems Integration, Networks, Outsourcing, SaaS (Software as a Service), Storage, Systems Management, Technology Market Trends at 1:35 pm by Tony Baer

Ever since its humble beginnings in the Palo Alto garage, HP has always been kind of a geeky company – in spite of Carly Fiorina’s superficial attempts to prod HP towards a vision thing during her aborted tenure. Yet HP keeps talking about getting back to that spiritual garage.

Software has long been the forgotten business of HP. Although – surprisingly – the software business was resuscitated under Mark Hurd’s reign (revenues have more than doubled as of a few years ago), software remains almost a rounding error in HP’s overall revenue pie.

Yes, Hurd gave the software business modest support. Mercury Interactive was acquired under his watch, giving the business a degree of critical mass when combined with the legacy OpenView business. But during Hurd’s era, there were much bigger fish to fry beyond all the internal cost cutting, for which Wall Street cheered but insiders jeered. Converged Infrastructure has been the mantra, reminding one and all that HP was still very much a hardware company. The message remains loud and clear with HP’s recent 3PAR acquisition at a heavily inflated $2.3 billion, which was concluded in spite of the interim leadership vacuum.

The dilemma that HP faces is that, yes, it is the world’s largest hardware company (they call it technology), but the bulk of that is from personal systems. Ink, anybody?

The converged infrastructure strategy was a play at the CTO’s office. Yet HP is a large enough company that it needs to compete in the leagues of IBM and Oracle, and for that it needs to get meetings with the CEO. Ergo the rumors of feelers made to IBM Software’s Steve Mills, the successful offer to Leo Apotheker, and the agreement for Ray Lane to come on as non-executive chairman.

Our initial reaction was one of disappointment; others have felt similarly. But Dennis Howlett feels that Apotheker is the right choice “to set a calm tone” and signal that there won’t be a massive and debilitating reorg in the short term.

Under Apotheker’s watch, SAP stagnated, hit by the stillborn Business ByDesign and the hike in maintenance fees that, for the moment, made Oracle look warmer and fuzzier. Of course, you can’t blame all of SAP’s issues on Apotheker; the company was in a natural lull cycle as it was seeking a new direction in a mature ERP market. The problem with SAP is that, defensive acquisition of Business Objects notwithstanding, the company has always been limited by a “not invented here” syndrome that has tended to blind the company to obvious opportunities – such as inexplicably letting strategic partner IDS Scheer slip away to Software AG. Apotheker’s shortcoming was not providing the strong leadership to jolt SAP out of its inertia.

Instead, Apotheker’s – and Ray Lane’s for that matter – value proposition is that they know the side of the enterprise market that HP doesn’t. That’s the key to this transition.

The next question becomes acquisitions. HP has a lot on its plate already. It took at least 18 months for HP to digest the $14 billion acquisition of EDS, providing a critical mass IT services and data center outsourcing business. It is still digesting nearly $7 billion of subsequent acquisitions of 3Com, 3PAR, and Palm to make its converged infrastructure strategy real. HP might be able to get backing to make new acquisitions, but the dilemma is that Converged Infrastructure is a stretch in the opposite direction from enterprise software. So it’s not just a question of whether HP can digest another acquisition; it’s an issue of whether HP can strategically focus in two different directions that ultimately might come together, but not for a while.

So let’s speculate about software acquisitions.

SAP, the most logical candidate, is, in a narrow sense, relatively “affordable” given that its stock is roughly 10 – 15% off its 2007 high. But SAP would obviously be the most challenging given the scale; it would be difficult enough for HP to digest SAP under normal circumstances, but with all the converged infrastructure stuff on its plate, it’s back to the question of how you can be in two places at once. Infor is a smaller company, but as it is also a polyglot of many smaller enterprise software firms, it would present HP with additional integration headaches that it doesn’t need.

HP may have little choice but to make a play for SAP if IBM or Microsoft were unexpectedly to actively bid. Otherwise, its best bet is to revive the relationship, which would give both companies the time to acclimate. But in a rapidly consolidating technology market, who has the luxury of time these days?

Salesforce.com would be a logical target, as it would reinforce HP Enterprise Services’ (formerly EDS) outsourcing and BPO business. It would be far easier for HP to get its arms around this business. The drawback is that Salesforce.com would not be very extensible as an application because it uses a proprietary stored-procedures database architecture. That would make it difficult to integrate with a prospective ERP SaaS acquisition, which would otherwise be the next logical step to growing the enterprise software footprint.

Informatica is often brought up – if HP is to salvage its Neoview BI business, it would need a data integration engine to help bolster it. Better yet, buy Teradata, which is one of the biggest resellers of Informatica PowerCenter – that would give HP a far more credible presence in the analytics space. Then it will have to ward off Oracle, which has an even more pressing need for Informatica to fill out the data integration piece in its Fusion middleware stack. But with Teradata, there would at least be a real anchor for the Informatica business.

HP has to decide what kind of company it needs to be as Tom Kucharvy summarized well a few weeks back. Can HP afford to converge itself in another direction? Can it afford not to? Leo Apotheker has a heck of a listening tour ahead of him.

03.10.10

HP analyst meeting 2010: First Impressions

Posted in Application Development, Application Lifecycle Management (ALM), Business Intelligence, Cloud, Data Management, IT Infrastructure, IT Services & Systems Integration, Legacy Systems, Networks, Outsourcing, Systems Management at 12:34 am by Tony Baer

Over the past few years, HP under Mark Hurd has steadily gotten its act together in refocusing on the company’s core strengths with an unforgiving eye on the bottom line. Sitting at HP’s annual analyst meeting in Boston this week, we found ourselves comparing notes with our impressions from last year. Last year, our attention was focused on Cloud Assure; this year, it’s the integration of EDS into the core business.

HP now bills itself as the world’s largest purely IT company and ninth in the Fortune 500. Of course, there’s the consumer side of HP that the world knows. But with the addition of EDS, HP finally has a credible enterprise computing story (as opposed to an enterprise server company). Now we’ll get plenty of flak from our friends at HP for that one – as HP has historically had the largest market share for SAP servers. But let’s face it: prior to EDS, the enterprise side of HP was primarily a distributed (read: Windows or UNIX) server business. Professional services was pretty shallow, with scant knowledge of the mainframes that remain the mainstay of corporate computing. Aside from communications and media, HP’s vertical industry practices were few and far between. HP still lacks the vertical breadth of IBM, but with EDS it has gained critical mass in sectors ranging from federal to manufacturing, transport, financial services, and retail, among others.

Having EDS also makes credible initiatives such as Application Transformation, a practice that helps enterprises prune, modernize, and rationalize their legacy application portfolios. Clearly, Application Transformation is not a purely EDS offering; it was originated by Ann Livermore’s Enterprise Business group and draws upon HP Software assets such as discovery and dependency mapping, Universal CMDB, PPM, and the recently introduced IT Financial Management (ITFM) service. But to deliver, you need bodies – people who know the mainframe, where most of the apps being harvested or thinned out live. And that’s where EDS helps HP flesh this out into a real service.

But EDS is so 2009; the big news on the horizon is 3Com, a company that Cisco left in the dust before it rethought its product line and eked out a highly noticeable 30% market share for network devices in China. Once the deal is closed, 3Com will be front and center in HP’s converged computing initiative, which until now primarily consisted of blades and ProCurve VoIP devices. HP gains a much wider range of network devices to compete head-on as Cisco itself goes up the stack to a unified server business. Once the 3Com deal closes, HP will have to invest significant time, energy, and resources to deliver on the converged computing vision with an integrated product line, rather than a bunch of offerings that fill the squares of a PowerPoint matrix chart.

According to Livermore, the company’s portfolio is “well balanced.” We’d beg to differ where it comes to software, which accounts for a paltry 3% of revenues (a figure that our friends at HP insist understates the real contribution of software to the business).

It’s the side of the business that suffered from (choose one) benign or malign neglect prior to the Mark Hurd era. HP originated network node management software for distributed networks, an offering that eventually morphed into the former OpenView product line. Yet HP was so oblivious to its own software products that at one point its server folks promoted bundling of a rival product from CA. Nonetheless, somehow the old HP managed not to kill off OpenView or OpenCall (the product now at the heart of HP’s communications and media solutions) – although we suspect that was probably more out of neglect than intent.

Under Hurd, software became strategic, a development that led to the transformational acquisition of Mercury, followed by Opsware. HP had the foresight to place the Mercury, Opsware, and OpenView products within the same business unit because – in our view – the application lifecycle should encompass managing the runtime (although to this day HP has not really integrated OpenView with Mercury Business Availability Center; the products still appeal to different IT audiences). But there are still holes – modest ones on the ALM side, but major ones elsewhere, like in business intelligence, where Neoview sits alone, or in the converged computing stack and cloud-in-a-box offerings, which could use strong identity management.

Yet if HP is to become a more well-rounded enterprise computing company, it needs more infrastructural software building blocks. To our mind, Informatica would make a great addition that would point more attention to Neoview as a credible BI business, not to mention that Informatica’s data transformation capabilities could play key roles with its Application Transformation service.

We’re concerned that, as integration of 3Com is going to consume considerable energy in the coming year, the software group may not have the resources to conduct the transformational acquisitions that are needed to more firmly entrench HP as an enterprise computing player. We hope that we’re proven wrong.

03.24.09

What’s a Service? Who’s Responsible?

Posted in Application Development, Application Lifecycle Management (ALM), IT Infrastructure, ITIL, SOA & Web Services, Systems Management at 11:50 pm by Tony Baer

Abbott and Costello aside, one of the most charged, ambiguous, and overused terms in IT today is Service. At its most basic, a service is a function or operation that performs a task. For IT operations, a service is a function performed by a computing facility on behalf of the organization. For software architecture, a service in the formal, capital “S” form is a loosely coupled function or process that is designed to be abstracted from the software application, physical implementation, and the data source; as a more generic lower case “s,” a service is simply a function that is performed by software. And if you look at the Wikipedia definition, a service can refer to processes that are performed down at the OS level.

Don’t worry, we’ll keep the discussion above OS level to stay relevant — and to stay awake.

So why are we getting hung up on this term? It’s because it was all over the rather varied day that we had today, having split our time at (1) HP’s annual IT analyst conference for its Technology Solutions Group (that’s the 1/3 of the company that’s not PCs or printers); (2) a meeting of the SOA Consortium; and (3) yet another meeting with MKS, an ALM vendor that just signed an interesting resale deal with BMC that starts with the integration of IT Service Desk with issue and defect management in the application lifecycle.

Services in both their software and IT operations senses were all over our agenda today; we just couldn’t duck it. But more than just a coincidence of terminology, there is actually an underlying interdependency between the design and deployment of a software service and the IT services that are required to run it.

It was core to the presentation that we delivered to the SOA Consortium today, as our belief is that you cannot manage a SOA or application lifecycle without adequate IT Service Management (ITSM, a discipline for running IT operations that is measured or tracked by the services it delivers). We drew a diagram that was deservedly torn apart by our colleagues on the call, Beth Gold-Bernstein and Todd Biske. UPDATE: Beth has a picture of the diagram in her blog. In our diagram, we showed how, at run time, there is an intersection between the SOA lifecycle and ITSM – or more specifically, ITIL version 3 (ITIL is the best known framework for implementing ITSM). Both maintained that interaction is necessary throughout the lifecycle; for instance, when the software development team is planning a service, they need to get ITO in the loop to brace for release of the service – especially if the service is likely to drastically ramp up demand on the infrastructure.

The result of our discussion was not simply that services are joined, figuratively, at the head and neck bone – the software and IT operations implementations – but that at the end of the day, somebody’s got to be accountable for ensuring that services are being developed and deployed responsibly. In other words, just making the business case for a service is not adequate if you can’t ensure that the infrastructure will be able to handle it. Lacking the second piece of the equation, you’d wind up with a scenario of the surgery being successful but the patient dying. With the functional silos that comprise most IT organizations today, that would mean responsibility dispersed between the software (or in some cases, enterprise) architect and their equivalent(s) in IT Operations. In other words, everybody’s responsible, and nobody’s responsible.

The idea came up that maybe what’s needed is a service ownership role that transcends the usual definition (today, the service owner is typically the business stakeholder that sponsored development, and/or the architect that owns the design or software implementation). That is, a sort of uber role that ensures that the service (1) responds to a bona fide business need (2) is consistent with enterprise architectural standards and does not needlessly duplicate what is already in place, and (3) won’t break IT infrastructure or physical delivery (e.g., assure that ITO is adequately prepared).

While the last thing that the IT organizations needs is yet another layer of management, it may need another layer of responsibility.

UPDATE: Todd Biske has provided some more detail on what the role of a Service Manager would entail.

03.17.09

The Network is the Computer

Posted in Cloud, IT Infrastructure, IT Services & Systems Integration, Linux, Networks, OS/Platforms, Storage, Systems Management, Technology Market Trends, Virtualization at 1:50 pm by Tony Baer

It’s sometimes funny how history takes strange turns. Back in the 1980s, Sun began building its empire in the workgroup by combining two standards: UNIX boxes with TCP/IP networks built in. Sun’s The Network is the Computer message declared that computing was of little value without the network. Of course, Sun hardly had a lock on the idea: Bob Metcalfe devised the law stating that the value of a network grows with the square of the number of nodes connected, and Digital (DEC) (remember them?) actually scaled out the idea at the division level while Sun was still elbowing its way into the workgroup.

Funny that DEC was there first but only got the equation half right – bundling a proprietary OS to a standard networking protocol. Fast forward a decade and Digital was history, while Sun was the dot in dot com. Go a few more years out, as Linux made even a “standard” OS like UNIX look proprietary, and Sun suffers DEC’s fate (OK, it hasn’t been acquired yet and still has cash reserves, if it could only figure out what to do when it finally grows up), while bandwidth and blades get commodity enough that businesses start thinking the cloud might be a cheaper, more flexible alternative to the data center. Throw in a very wicked recession, and companies are starting to think that the numbers around the cloud – cheap bandwidth, commodity OS, commodity blades – might provide the avoided-cost dollars they’ve all been looking for. That is, if they can be assured that placing data out in the cloud won’t create any regulatory or privacy headaches.

So today it becomes official. After dropping hints for months, Cisco has finally announced its Unified Computing System, which is to provide, in essence, a prepackaged data center:

Blades + Storage Networking + Enterprise Networking in a box.

By now you’ve probably read the headlines – that UCS is supposed to do what observers like Dana Gardner term bringing an iPhone-like unity to the piece parts that pass for data centers. It would combine blades, network devices, storage management, and VMware’s virtualization platform (as you might recall, Cisco owns a $150 million chunk of VMware) to provide, in essence, a data center appliance in the cloud.

In a way, UCS is a closing of the circle that began with mainframe host/terminal architectures of a half century ago: a single monolithic architecture with no external moving parts.

Of course, just as Sun wasn’t the first to exploit TCP/IP networking but got the lion’s share of the credit, Cisco is hardly the first to bridge the gap between compute and networking nodes. Sun already has a Virtual Network Machines Project for processing network traffic on general-purpose servers, while its Project Crossbow is supposed to make networks virtual as well as part of its OpenSolaris project. Sounds to us like a nice open source research project that’s limited to the context of the Solaris OS. Meanwhile, HP has ramped up its ProCurve business, which aims at the heart of Cisco territory. Ironically, the dancer left on the sidelines is IBM, which sold off its global networking business to AT&T over a decade ago, and its ROLM network switches nearly a decade before that.

It’s also not Cisco’s first foray out of the base of the network OSI stack. Anybody remember Application-Oriented Networking? Cisco’s logic – building a level of content-based routing into its devices – was supposed to make the network “understand” application traffic. Yes, it secured SAP’s endorsement for the rollout, but who were you really going to sell this to in the enterprise? Application engineers didn’t care for the idea of ceding some of their domain to their network counterparts. On the other hand, Cisco’s successful foray into storage networking proves that the company is not a one-trick pony.

What makes UCS different this go-round are several factors. The commoditization of hardware and firmware, plus the emergence of virtualization and the cloud, makes the division of networking, storage, and data center OS artificial. The recession makes enterprises hungry for found money, and the maturation of the cloud incents cloud providers to buy pre-packaged modules to cut acquisition costs and improve operating margins. Cisco’s lineup of partners is also impressive – VMware, Microsoft, Red Hat, Accenture, BMC, etc. – but names and testimonials alone won’t make UCS fly. The fact is that IT has no more hunger for data center complexity, the divisions between OS, storage, and networking no longer add value, and cloud providers need a rapid way of prefabricating their deliverables.

Nonetheless, we’ve heard lots of promises of all-in-one before. The good news is that this time around there’s lots of commodity technology and standards available. But if Cisco is to offer a real alternative to IBM, HP, or Dell, it’s got to make the datacenter-in-a-box – or cloud-in-a-box – a reality.

05.15.08

A Governance Day in May

Posted in Application Development, Data Management, Enterprise Integration, IT Infrastructure, SOA & Web Services, Systems Management at 1:22 pm by Tony Baer

Just mention the word “governance,” and most people will equate it with auditors or attorneys who scold you about all the wrong things you’re doing, or warn that a gap in the access control system could put your CEO in jail. No wonder governance is such a hard sell, and why Forrester Research analyst Mike Gilpin and Software AG chief marketing officer Ivo Totev concurred that when it comes to SOA governance, maybe we need another name.

Nonetheless, on a beautiful day in May, roughly 150 customers and prospects showed up for a SOA governance summit presented by Software AG at a hotel in the heart of Times Square. Maybe the turnout wasn’t so surprising given the locale. With the financial sector having seen Enron and corporate options scandals, not to mention the subprime meltdown, if there was ever a sector that begs governance, this one’s the baby.

It was also a pretty sophisticated audience when it came to SOA background; virtually everybody in the room had already gotten their feet wet with SOA projects, and roughly 60% were already involved in some form of SOA governance effort. We had the chance to moderate a panel with Gilpin, Software AG’s Miko Matsumura and Jim Bole, plus HCL Technologies consultant (and pragmatist) Rama Kanneganti, most of whom had spoken earlier in the morning.

Gilpin and Kanneganti gave the audience healthy doses of realism.

Gilpin maintained that, at least for larger enterprises, the deeper they get into SOA, the more likely they’ll wind up with four, five, or more enterprise service buses. In other words, don’t fool yourself with the myth that you’ll have a single monolithic grand unification architecture, despite the best efforts of your enterprise architects. Just as most organizations never succeeded with the galactic enterprise data warehouse or that top-down enterprise data model, life in a modern post-M&A world is just too complex and heterogeneous. He noted that no vendors have yet picked up the mantle on how to integrate all those integration stacks, but in the next year or two you’ll start seeing commercial products rolling out. Nonetheless, he conceded that selling an integration stack of integration stacks may not be the easiest pitch that you’ve ever made to a CFO. For starters, if you end up using the same rationale that was purposed towards justifying those initial SOA investments (e.g., more reuse, agility, better IT/business alignment), the response is likely to be jaded. Gilpin’s dose of reality was not without its ironies: as you implement SOA to unify your app and data silos, you could wind up creating yet another über silo.

When it was Kanneganti’s turn, his talk was a welcome departure from the usual SOA-and-see-the-light presentations in its acknowledgment that in the trenches, SOA may seem so abstract that it may be difficult at first to gain support through acclamation. He presented several lessons, such as:
1. “Dumbing down the technology” by eliminating the aura of mystery and the abstractness of service abstraction. Take advantage of social computing tools to tell stakeholders what’s really happening on the project and what it means (a challenge, because architecture is not as easy to explain as, say, implementing a functional module of a business application); develop common plumbing services to get some quick wins (things like error handlers seemed a bit low level to us, and he later admitted that, no, those are not the first services you should develop, but they’re useful when you want to scale out the project); and then show how an approach that stays close to the basics can result in predictable production of services.
2. To reduce risk, organize something like an informal Center of Excellence, but pitch it more as “a center of getting things done” that contends with issues such as ownership of data or processes.
3. When it comes to the usual role of EAs, which is to promote better, consistent architecture, use a carrot rather than stick approach.

In between, Miko (we’re violating our usual editorial convention of last names because, well, his first name is so heavily branded) provided a talk that was, literally, a stretch. Relating some metaphors of evolution (reptilian functions are still present in human brainstems, and we’re genetically 95% equivalent to chimpanzees), he explained how SOA projects must navigate entrenched tribal turf rivalries, such as when services transition from design (software development) to run time (operations). And from that, he launched into a discussion of his latest initiative, looking at the synergies of service provisioning and virtualization – which we feel is the tip of the iceberg of a larger issue. Namely, how can you manage compliance with service level agreements or contracts as part of run time governance without some logical ties to the IT operations and IT service management/ITIL worlds? It’s an area for which vendors on both sides of the aisle – the Software AGs, the HPs, and the IBMs – have yet to provide adequate answers. But, bringing the point back home, it’s a question of dealing with potentially warring tribes, as a debate rekindled by Todd Biske on the future of ESBs brought out a few weeks back.

As if we can’t get rid of this tribal mania yet – well, maybe they’re not a tribe, but a group of (choose one) soothsayers, high priests, or medicine men within the tribe. We’re talking about EAs here. When we posed the question of what SOA governance is, the panel concurred that it is a combination of people and processes, with technology as a tool to apply the processes that people (in or out of their tribes) have worked out. Bole volunteered that business processes might be a better way to sell SOA, and ultimately SOA governance, to the enterprise. We pressed again, asking, at the end of the day, who’s accountable for SOA governance? One of the answers was that maybe, just maybe, SOA governance could finally make EAs relevant.

09.24.07

HP Incubates Opsware

Posted in IT Infrastructure, ITIL, Systems Management at 8:57 am by Tony Baer

HP Software’s “Extreme Makeover” took another step at the end of last week when it closed the Opsware deal. Since the deal was announced almost exactly two months ago, we’ve been wondering whether it would be a replay of the Mercury deal, which amounted to more of a reverse acquisition. Yeah, HP bought Mercury, and some Mercury execs were left standing (the ones tainted by indictment were long gone by that point). Many of those Mercury principals quickly took the helm in sales, marketing, and product development.

The Opsware script will be a bit different. As with the Mercury deal, the acquired company’s chief – Opsware CEO Ben Horowitz – now takes over product R&D for the combined entity. But the rest of Opsware is being, as HP terms it, “incubated.” It amounts to HP leaving the Opsware growth machine alone for now.

That makes sense, for a limited time. Much of Opsware’s appeal was that it was the fresh player on the block for managing change to IT infrastructure in real-time that until now has been beyond the grasp of “mainstream” systems management frameworks from the usual suspects. And the company’s growth curve was just beginning to break into the black.

With last week’s announcement of System 7, Opsware has made significant strides towards addressing a key weakness: the lack of integration between its server, network, and storage automation pieces. It has glued them together with its process automation piece (from its own iConclude acquisition), so that changes in provisioning of IT infrastructure from server back to storage and network can now be automatically triggered by an ITIL workflow, and any action taken in any of the modules can now automatically bring up the appropriate workflow.
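The underlying pattern – binding change events in any module to the workflow that should respond to them – is simple enough to sketch generically. The module, action, and workflow names below are made up for illustration; this is not Opsware’s actual API or product behavior.

    # A generic event-to-workflow dispatcher illustrating the pattern described above.
    WORKFLOWS = {
        ("server", "provisioned"):   "verify-baseline-and-register-in-monitoring",
        ("network", "acl_changed"):  "revalidate-firewall-policy",
        ("storage", "lun_expanded"): "rebalance-backup-schedule",
    }

    def on_change(module: str, action: str, target: str) -> None:
        """Look up and launch the workflow bound to a change event in any module."""
        workflow = WORKFLOWS.get((module, action))
        if workflow is None:
            print(f"no workflow bound to {module}/{action}; queuing for manual review")
            return
        print(f"change on {target}: launching workflow '{workflow}'")

    on_change("server", "provisioned", "web-frontend-07")
    on_change("network", "acl_changed", "core-switch-2")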

So HP is probably smart in not wanting to rain on Opsware’s parade, for now.

In the long run, Opsware must become part of the mother ship because its products have too many potential synergies with the rest of HP Software. There’s a long list, but we’ll just give a couple examples: there are obvious tie-ins between HP’s Service Desk and Business Availability Centers with Opsware’s change management capabilities. In other cases, HP’s portfolio could provide the depth missing from Opsware’s offerings, with the CMDB (configuration management database, the system of record for the layout of your IT infrastructure) being the prime example.

HP’s strategy reflects the common dilemma that software firms face when they acquire companies that are younger in the growth curve. Assuming that you’ve bought a company to add to your product set (as opposed to expanding your market footprint, like EMC and VMware), you’re going to find yourself in a balancing act. You don’t want your legacy culture to smother the innovation machine that you’ve just acquired, but you also don’t want to miss out on the synergies. Besides, isn’t that why you bought the company in the first place?

09.07.07

Breaching the Blood Brain Barrier

Posted in IT Infrastructure, ITIL, SOA & Web Services, Systems Management at 2:32 pm by Tony Baer

A month after Software AG unveiled its roadmap for converging webMethods products, it is releasing the first of the new or enhanced offerings. What piqued our interest was one aspect of the release, where Software AG is starting to seed webMethods BAM (Business Activity Monitoring) dashboards to other parts of the stack. In this case, they’re extending the webMethods Optimize BAM tool from BPM to the B2B piece.

So why does this matter? As its name implies, BAM is about monitoring business processes. But if you think about it, it could just as well apply to the operational aspects of deploying SOA, from trending compliance with service level agreements down to the nitty gritty, such as the speed at which the XML in SOAP messages is being parsed.
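To make the nitty-gritty end of that spectrum concrete, here is a small Python sketch that times the parsing of a SOAP envelope and compares it against a service level threshold. The 5 ms threshold and the envelope itself are invented for illustration; this is not how Optimize instruments anything.

    import time
    import xml.etree.ElementTree as ET

    # Hypothetical SLA threshold for parse latency, in milliseconds.
    PARSE_SLA_MS = 5.0

    soap_message = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body><getQuote><symbol>SAG</symbol></getQuote></soap:Body>
    </soap:Envelope>"""

    start = time.perf_counter()
    ET.fromstring(soap_message)          # parse the SOAP envelope
    elapsed_ms = (time.perf_counter() - start) * 1000

    status = "within SLA" if elapsed_ms <= PARSE_SLA_MS else "SLA breach"
    print(f"SOAP parse took {elapsed_ms:.2f} ms ({status})")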

So far so good. What Software AG is doing is trying to extend the same dashboarding engine that has been owned by the line-of-business folks, who want to monitor high level processes, to the software development folks, who are charged with exposing those processes as web services.

But when it comes down to the thorny issue of monitoring compliance with service level agreements (SLAs), Software AG’s moves are just a very modest first step. With a common dashboarding engine, you might be able to get software developers to improve the efficiency of a web service through programmatic modifications, but at the end of the day (and hopefully a lot earlier!), you have to run the services on physical IT infrastructure. And as we’ve noted in the past, when it comes to fixing service level issues, today’s processes, technologies, and organizational structures remain highly silo’ed. The software development folks own the SOA implementation, while IT operations own the data center.

It’s an issue that HP Software, which has undergone a reverse acquisition by Mercury (yes, HP bought it, but many ex-Mercury execs are now running it), is striving to bridge. And with Software AG’s latest moves to extend Optimize, it’s a goal that’s on its horizon as well.

The challenge, however, is that as the IT operations folks embrace ITIL and business service optimization or management tools (a.k.a. retooled offerings from systems management vendors), you may wind up with multiple islands of automation that each operate their own silo’ed dashboards claiming to show the truth about service levels – whether those service levels pertain to how fast IT resolves an incident, how fast the database runs, or how available a specific web service is.

Software AG says that it eventually wants to integrate metadata from its CentraSite SOA repository with the CMDBs (configuration management databases) of ITIL-oriented tools. We wonder how it, and its presumed ITIL vendor partner, will sell the idea to their respective constituencies, and more importantly, who’s ultimately going to claim accountability for ensuring that web services meet the SLAs.

07.23.07

HP Buys Opsware – Or is it the other way around?

Posted in IT Infrastructure, ITIL, Systems Management at 2:20 pm by Tony Baer

HP’s announcement that it plans to buy Opsware represents something of a changing of the guard. HP’s $1.6 billion offer, roughly a 35% premium over last week’s close, is for a $100 million company whose claim to fame is managing change across servers, networks, and recently, storage.

Today, Opsware’s change automation systems tend to work alongside classic infrastructure management tools, such as what used to be known as HP OpenView. Over the past year, Opsware has bulked itself up with several acquisitions of its own, including IT process automation – where you embed data center best practices as configurable, repeatable, policy-driven workflows. And it has added storage management, plus a sweetheart deal with Cisco for OEM’ing and reselling its network change management system as part of the Cisco product line. Although Cisco wasn’t happy about the disclosure, Opsware did announce during its Q4 earnings call that Cisco had resold $5 million worth of its network automation tool.

For HP, the Opsware acquisition comes after a year of roughly 80% growth – although the bulk of that was attributable to the Mercury acquisition. HP Software is one of those units that HP somehow never got around to killing – although not for lack of trying (we recall HP’s server unit concluding a deal with CA that undercut its own HP OpenView offering). And it reported an operating profit of 8.5% — although not stellar, at least it reflected the fact that software is finally becoming a viable business at HP.

In part it’s attributable to the fact that infrastructure management folks are finally getting some respect with the popularity of ITIL – that is, ITIL defines something that even a c-level executive could understand. The challenge of course is that most classic infrastructure management tools simply populated consoles with cryptic metrics and nuisance alarms, not to mention the fact that at heart they were very primitive toolkits that took lots of care and custom assembly to put together. They didn’t put together the big picture that ITIL demanded regarding quantifying service level agreement compliance, as opposed to network node operation.

What’s changed at HP Software is that the Mercury deal represented something of a reverse acquisition, as key Mercury software executives (at least, the ones who weren’t canned and indicted) are now largely driving HP Software’s product and go to market strategy. Although branding’s only skin deep, it’s nonetheless significant that HP ditched its venerable OpenView brand in favor of Mercury’s Business Technology Optimization.

Consequently, we think there are potentially some very compelling synergies between Opsware’s change management, HP’s Service Management and CMDB, and Mercury’s quality centers, which not only test software and manage defects but provide tools for evaluating software and project portfolios. We’re probably dreaming here, but it would be really cool if somehow we could correlate the cost of a software defect, not only to the project at large (and whether it and other defects place that project at risk), but also to changes in IT infrastructure configurations and service levels. The same would go for managing service levels in SOA, where HP/Mercury is playing catch-up to AmberPoint and SOA Software.

This is all very blue sky. Putting all these pieces together requires more than just a blending of product architecture; it also requires the ability to address IT operations, software development, and the business. Once the deal closes, HP Software’s got its work cut out for it.

02.09.07

What’s in a Name?

Posted in ITIL, Systems Management, Technology Market Trends at 7:04 am by Tony Baer

When CA announced last week that it would drastically simplify product branding, our first reaction was, “What, are you crazy?”

It’s the latest in a spate of changes intended to demonstrate that this is not your father’s CA, and definitely not the one that blindly consumed companies for their maintenance streams and typically emphasized marketing over product development. Consequently, you’ll no longer see reruns of product names like “CA Unicenter TNG,” which modestly stood for “The Next Generation.”

A new decade and a major accounting scandal later, the company has been trying to shed that identity with a more focused acquisition strategy targeting, for the most part, startups that could add synergy and new IP to the company. It’s been the bright spot in what have been several years of financial pain.

And so the branding simplification could be viewed as the latest example of an effort to clarify just what CA is. So in the new CA, the CA Wily Introscope product, that monitors J2EE application performance, will probably be called something like CA Introscope, while CA Unicenter Service Desk will likely be renamed CA Service Desk.

But there are several risks. The first is that CA might wind up confusing its installed base, although in actuality, that risk will vary by product. For most of the recent acquisitions, such as startups like Wily Technologies that had brands that weren’t exactly indelible, the risk should be pretty low. For the well-established lines, like Unicenter or the IDMS legacy database, it might be another story.

The Unicenter case is especially interesting. Ingrained for 15 years as CA’s management framework for distributed systems, the unitary branding implied unitary product.

Under the new scheme of things, the Unicenter brand will be dropped. Sounds like a dumb move until you realize that, excluding IBM (which is retaining the Tivoli brand), each of CA’s major rivals is doing the same. For HP after the Mercury acquisition, it’s goodbye “OpenView,” hello “Business Technology Optimization,” and for BMC, it’s goodbye “Patrol,” hello “Business Service Management.”

(What’s even more amazing is that HP, Mercury, and CA – which uses the moniker “Business Service Optimization” – will now have brand names that sound like virtual clones of one another. Our first reaction was, couldn’t any of these guys come up with something more original and memorable? But we digress…)

Why are the systems management folks dropping the familiar brand names? One word: ITIL. With the ITIL framework stipulating use of a Configuration Management Database (CMDB), the prime result is that each vendor in this space is reengineering its products to incorporate one. More to the point, the emergence of the CMDB has revealed the ugly truth that systems management products have never been as unified as their common brandings implied – thereby rendering the old brand names rather meaningless.

Indeed, the real risk to CA is the degree of effort it must invest to change more than 1000 product names. In the wake of the announcement, it published a 34-page, single-spaced PDF listing the entire catalog, estimating it would take 12 – 18 months to change all the names. That’s a process that will soak up significant marketing resources to change websites, update collateral, and craft new campaigns. Maybe the new branding will help CA burnish its new identity, but given that the company’s earnings have remained fairly flat over the past year, is it an investment that it can afford?
