10.01.10

Leo Apotheker to target HP’s forgotten business

Posted in Application Development, Business Intelligence, Data Management, Database, Enterprise Applications, IT Infrastructure, IT Services & Systems Integration, Networks, Outsourcing, SaaS (Software as a Service), Storage, Systems Management, Technology Market Trends at 1:35 pm by Tony Baer

Ever since its humble beginnings in the Palo Alto garage, HP has always been kind of a geeky company – in spite of Carly Fiorina’s superficial attempts to prod HP towards a vision thing during her aborted tenure. Yet HP keeps talking about getting back to that spiritual garage.

Software has long been the forgotten business of HP. Although – surprisingly – the software business was resuscitated under Mark Hurd’s reign (revenues had more than doubled as of a few years ago), software remains almost a rounding error in HP’s overall revenue pie.

Yes, Hurd gave the software business modest support. Mercury Interactive was acquired under his watch, giving the business a degree of critical mass when combined with the legacy OpenView business. But during Hurd’s era, there were much bigger fish to fry beyond all the internal cost cutting that Wall Street cheered but insiders jeered. Converged Infrastructure has been the mantra, reminding us one and all that HP was still very much a hardware company. The message remains loud and clear with HP’s recent 3PAR acquisition at a heavily inflated $2.3 billion, a deal concluded in spite of the interim leadership vacuum.

The dilemma that HP faces is that, yes, it is the world’s largest hardware company (they call it technology), but the bulk of that is from personal systems. Ink, anybody?

The converged infrastructure strategy was a play at the CTO’s office. Yet HP is a large enough company that it needs to compete in the leagues of IBM and Oracle, and for that it needs to get meetings with the CEO. Ergo, the rumors of feelers made to IBM Software’s Steve Mills, the successful offer to Leo Apotheker, and the agreement for Ray Lane to serve as non-executive chairman.

Our initial reaction was one of disappointment; others have felt similarly. But Dennis Howlett feels that Apotheker is the right choice “to set a calm tone” and signal that there won’t be a massive, debilitating reorg in the short term.

Under Apotheker’s watch, SAP stagnated, hit by the stillborn Business ByDesign and the hike in maintenance fees that, for the moment, made Oracle look warmer and fuzzier. Of course, you can’t blame all of SAP’s issues on Apotheker; the company was in a natural lull cycle as it was seeking a new direction in a mature ERP market. The problem with SAP is that, defensive acquisition of Business Objects notwithstanding, the company has always been limited by a “not invented here” syndrome that has tended to blind the company to obvious opportunities – such as inexplicably letting strategic partner IDS Scheer slip away to Software AG. Apotheker’s shortcoming was not providing the strong leadership to jolt SAP out of its inertia.

Instead, Apotheker’s – and Ray Lane’s for that matter – value proposition is that they know the side of the enterprise market that HP doesn’t. That’s the key to this transition.

The next question becomes acquisitions. HP has a lot on its plate already. It took at least 18 months for HP to digest the $14 billion acquisition of EDS, which gave it a critical-mass IT services and data center outsourcing business. It is still digesting nearly $7 billion of subsequent acquisitions of 3Com, 3PAR, and Palm to make its converged infrastructure strategy real. HP might be able to get backing to make new acquisitions, but the dilemma is that Converged Infrastructure is a stretch in the opposite direction from enterprise software. So it’s not just a question of whether HP can digest another acquisition; it’s an issue of whether HP can strategically focus in two different directions that ultimately might come together, but not for a while.

So let’s speculate about software acquisitions.

SAP, the most logical candidate, is, in a narrow sense, relatively “affordable” given that its stock is roughly 10 – 15% off its 2007 high. But SAP would obviously be the most challenging given its scale; it would be difficult enough for HP to digest SAP under normal circumstances, but with all the converged infrastructure stuff on its plate, it’s back to the question of how you can be in two places at once. Infor is a smaller company, but as it is also a polyglot of many smaller enterprise software firms, it would present HP with additional integration headaches that it doesn’t need.

HP may have little choice but to make a play for SAP if IBM or Microsoft were unexpectedly to actively bid. Otherwise, its best bet is to revive the relationship which would give both companies the time to acclimate. But in a rapidly consolidating technology market, who has the luxury of time these days?

Salesforce.com would be a logical target, as it would reinforce HP Enterprise Services’ (formerly EDS) outsourcing and BPO business. It would be far easier for HP to get its arms around this business. The drawback is that Salesforce.com would not be very extensible as an application because it uses a proprietary stored-procedures database architecture. That would make it difficult to integrate with a prospective ERP SaaS acquisition, which would otherwise be the next logical step to growing the enterprise software footprint.

Informatica is often brought up – if HP is to salvage its Neoview BI business, it would need a data integration engine to help bolster it. Better yet, buy Teradata, which is one of the biggest resellers of Informatica PowerCenter – that would give HP a far more credible presence in the analytics space. Then it would have to ward off Oracle, which has an even more pressing need for Informatica to fill out the data integration piece of its Fusion middleware stack. But with Teradata, there would at least be a real anchor for the Informatica business.

HP has to decide what kind of company it needs to be as Tom Kucharvy summarized well a few weeks back. Can HP afford to converge itself in another direction? Can it afford not to? Leo Apotheker has a heck of a listening tour ahead of him.

03.10.10

HP analyst meeting 2010: First Impressions

Posted in Application Development, Application Lifecycle Management (ALM), Business Intelligence, Cloud, Data Management, IT Infrastructure, IT Services & Systems Integration, Legacy Systems, Networks, Outsourcing, Systems Management at 12:34 am by Tony Baer

Over the past few years, HP under Mark Hurd has steadily gotten its act together in refocusing on the company’s core strengths with an unforgiving eye on the bottom line. Sitting at HP’s annual analyst meeting in Boston this week, we found ourselves comparing notes with our impressions from last year. Last year, our attention was focused on Cloud Assure; this year, it’s the integration of EDS into the core business.

HP now bills itself as the world’s largest purely IT company and ninth in the Fortune 500. Of course, there’s the consumer side of HP that the world knows. But with the addition of EDS, HP finally has a credible enterprise computing story (as opposed to an enterprise server company). Now we’ll get plenty of flak from our friends at HP for that one – as HP has historically had the largest market share for SAP servers. But let’s face it; prior to EDS, the enterprise side of HP was primarily a distributed (read: Windows or UNIX) server business. Professional services was pretty shallow, with scant knowledge of the mainframes that remain the mainstay of corporate computing. Aside from communications and media, HP’s vertical industry practices were few and far between. HP still lacks the vertical breadth of IBM, but with EDS it has gained critical mass in sectors ranging from federal to manufacturing, transport, financial services, and retail, among others.

Having EDS also makes credible initiatives such as Application Transformation, a practice that helps enterprises prune, modernize, and rationalize their legacy application portfolios. Clearly, Application Transformation is not a purely EDS offering; it originated with Ann Livermore’s Enterprise Business group and draws upon HP Software assets such as discovery and dependency mapping, Universal CMDB, PPM, and the recently introduced IT Financial Management (ITFM) service. But to deliver, you need bodies – people who know the mainframe, where most of the apps being harvested or thinned out live. And that’s where EDS helps HP flesh this out into a real service.

But EDS is so 2009; the big news on the horizon is 3Com, a company that Cisco left in the dust before 3Com rethought its product line and eked out a highly noticeable 30% market share for network devices in China. Once the deal is closed, 3Com will be front and center in HP’s converged computing initiative, which until now has consisted primarily of blades and ProCurve VoIP devices. HP gains a much wider range of network devices to compete head-on as Cisco itself goes up the stack into a unified server business. After the 3Com deal closes, HP will have to invest significant time, energy, and resources to deliver on the converged computing vision with an integrated product line, rather than a bunch of offerings that fill the squares of a PowerPoint matrix chart.

According to Livermore, the company’s portfolio is “well balanced.” We’d beg to differ when it comes to software, which accounts for a paltry 3% of revenues (a figure that, our friends at HP reiterated, understates the real contribution of software to the business).

It’s the side of the business that suffered from (choose one) benign or malign neglect prior to the Mark Hurd era. HP originated network node management software for distributed networks, an offering that eventually morphed into the former OpenView product line. Yet HP was so oblivious to its own software products that at one point its server folks promoted bundling a rival product from CA. Nonetheless, somehow the old HP managed not to kill off OpenView or OpenCall (the product now at the heart of HP’s communications and media solutions) – although we suspect that was probably more out of neglect than intent.

Under Hurd, software became strategic, a development that led to the transformational acquisition of Mercury, followed by Opsware. HP had the foresight to place the Mercury, Opsware, and OpenView products within the same business unit as – in our view – the application lifecycle should encompass managing the runtime (although to this day HP has not really integrated OpenView with Mercury Business Availability Center; the products still appeal to different IT audiences). But there are still holes – modest ones on the ALM side, but major ones elsewhere, like in business intelligence, where Neoview sits alone, or in the converged computing stack and cloud-in-a-box offerings, which could use strong identity management.

Yet if HP is to become a more well-rounded enterprise computing company, it needs more infrastructural software building blocks. To our mind, Informatica would make a great addition that would help position Neoview as a credible BI business, not to mention that Informatica’s data transformation capabilities could play a key role in HP’s Application Transformation service.

We’re concerned that, as integration of 3Com is going to consume considerable energy in the coming year, the software group may not have the resources to pursue the transformational acquisitions that are needed to more firmly entrench HP as an enterprise computing player. We hope that we’re proven wrong.

05.11.09

What do Smarter Planets and Oil Refineries have in common?

Posted in BPM, Business Intelligence, Cloud, Data Management, Green, Java, Middleware, Networks, SaaS (Software as a Service), SOA & Web Services, Supply Chain Management, Technology Market Trends at 12:20 pm by Tony Baer

Last week we paid our third visit in as many years to IBM’s Impact SOA conference. Comparing notes, if 2007’s event was about engaging the business, and 2008 was about attaining the basic blocking and tackling to get transaction system-like performance and reliability, this year’s event was supposed to provide yet another forum for pushing IBM’s Smarter Planet corporate marketing. We’ll get back to that in a moment.

Of course, given that conventional wisdom or hype has called 2009 the year of the cloud (e.g., here and here), it shouldn’t be surprising that cloud-related announcements grabbed the limelight. To recap: IBM announced WebSphere CloudBurst, an appliance that automates rapid deployment of WebSphere images to the private cloud (whatever that is — we already provided our two cents on that), and it released BlueWorks, a new public cloud service for whiteboarding business processes that is IBM’s answer to Lombardi Blueprints.

But back to our regularly scheduled program: IBM has been pushing Smarter Planet since the fall. It came in the wake of a period when a rapid run-up and volatility in natural resource prices, plus global instability, prompted renewed discussion of sustainability at decibel levels not heard since the late 70s. A Sam Palmisano speech delivered before the Council on Foreign Relations last November laid out what have since become IBM’s standard talking points. The gist of IBM’s case is that the world is more instrumented and networked than ever, which in turn provides the nervous system so we can make the world a better, cleaner, and, for companies, a more profitable place. A sample: 67% of electrical power generation is lost to network inefficiencies – a figure aired amid the national debate over setting up smart grids.

IBM’s Smarter Planet campaign is hardly anything new. It builds on Metcalfe’s law, which posits that the value of a network grows with the square of the number of users connected to it. Put another way, a handful of sensors provides only narrow slices of disjoint data; fill that network in with hundreds or thousands of sensors, add some complex event processing logic to it, and now you can not only deduce what’s happening, but do things like predict what will happen or provide economic incentives that change human behavior so that everything comes out copasetic. Smarter Planet provides a raison d’etre for the Business Events Processing initiatives that IBM began struggling to get its arms around last fall. It not only makes use of IBM’s capacity for extreme-scale computing, but also prods the company to establish relationships with new sets of industrial process control and device suppliers that are quite different from the world of ISVs and systems integrators.
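To make that concrete, here’s a minimal sketch – entirely ours, with hypothetical feeder names, thresholds, and window sizes – of the kind of complex event processing rule that turns a stream of raw sensor readings into a single higher-level event:

```python
from collections import defaultdict, deque

WINDOW = 60          # seconds of readings to keep per feeder (hypothetical)
SAG_THRESHOLD = 0.95 # per-unit voltage below which we count a sag (hypothetical)
MIN_SENSORS = 5      # how many sensors must agree before we raise an event

windows = defaultdict(deque)  # feeder_id -> recent (timestamp, sensor_id, volts_pu)

def ingest(reading, emit):
    """Process one (feeder_id, sensor_id, timestamp, volts_pu) reading and
    emit a 'voltage-sag' event when enough sensors on a feeder agree."""
    feeder, sensor, ts, volts = reading
    win = windows[feeder]
    win.append((ts, sensor, volts))
    # Drop readings that have fallen out of the time window
    while win and ts - win[0][0] > WINDOW:
        win.popleft()
    # Count distinct sensors currently reporting a sag
    sagging = {s for (t, s, v) in win if v < SAG_THRESHOLD}
    if len(sagging) >= MIN_SENSORS:
        emit({"event": "voltage-sag", "feeder": feeder,
              "sensors": sorted(sagging), "at": ts})

# Usage: feed readings as they arrive and print any correlated events.
if __name__ == "__main__":
    sample = [("feeder-7", f"s{i}", 100 + i, 0.93) for i in range(6)]
    for r in sample:
        ingest(r, emit=print)
```

The point isn’t the code; it’s that once readings from many sensors land in one place, a few lines of correlation logic can detect – and, with a model behind it, predict – conditions that no single sensor can see.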

So, if you instrumented the grid, you could take advantage of transient resources such as winds that this hour might be gusting in the Dakotas and in the next hour in the Texas Panhandle, so that you could even out generation to the grid and supplant more expensive gas-fired generation in Chicago. Or, as described by a Singaporean infrastructure official at the IBM conference, you can apply sensors to support congestion pricing, which rations scarce highway capacity based on demand, with the net result that it ramps up prices to what the market will bear at rush hour and funnels those revenues to expanding the subway system (too bad New York dropped the ball when a similar opportunity presented itself last year). The same principle could make supply chains far more transparent and driven by demand, with real-time predictive analytics, if you could somehow correlate all that RFID data. The list of potential opportunities, all of which optimize consumption of resources in a resource-constrained economy, is limited only by the imagination.

In actuality, what IBM described is a throwback to common practices established in highly automated industrial process facilities, where closed-loop process control has been standard practice for decades. Take oil refineries for example. The facilities required to refine crude are extremely capital-intensive, the processes are extremely complex and intertwined, and the scales of production so huge that operators have little choice but to run their facilities flat out 24 x 7. With margins extremely thin, operators are under the gun to constantly monitor and tweak production in real time so it stays in the sweet spot where process efficiency, output, and costs are optimized. Such data is also used for predictive trending to prevent runaway reactions and avoid potential safety issues such as a dangerous build-up of pressure in a distillation column.
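For readers who haven’t lived inside a plant, a stripped-down sketch of that closed-loop pattern looks something like the following. The setpoint, gain, and alarm slope are made up for illustration; real refineries run this on dedicated DCS hardware, not a Python loop:

```python
SETPOINT_PSI = 150.0   # desired column pressure (hypothetical)
ALARM_SLOPE = 2.0      # psi-per-reading rise that triggers a trend alarm
GAIN = 0.05            # proportional gain for the relief valve correction

def control_step(pressure, valve_position, history):
    """One pass of a closed control loop: measure, compare to setpoint,
    nudge the relief valve, and watch the trend for a runaway build-up."""
    error = pressure - SETPOINT_PSI
    valve_position = min(1.0, max(0.0, valve_position + GAIN * error))
    history.append(pressure)
    alarm = None
    if len(history) >= 3:
        recent = history[-3:]
        slope = (recent[-1] - recent[0]) / (len(recent) - 1)
        if slope > ALARM_SLOPE:
            alarm = f"pressure rising {slope:.1f} psi/reading - investigate"
    return valve_position, alarm

# Usage: simulate a slow pressure excursion and watch the trend alarm fire.
valve, history = 0.5, []
for p in (150, 153, 157, 162, 168):
    valve, alarm = control_step(p, valve, history)
    print(f"pressure={p} valve={valve:.2f} alarm={alarm}")
```

Swap “pressure” for “feeder voltage” or “pallet location” and the loop is the same: measure, correlate, act, and trend.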

So at base, a Smarter Planet is hardly a radical idea; it seeks to emulate what has been standard practice in industrial process control going back at least 30 years.

03.17.09

The Network is the Computer

Posted in Cloud, IT Infrastructure, IT Services & Systems Integration, Linux, Networks, OS/Platforms, Storage, Systems Management, Technology Market Trends, Virtualization at 1:50 pm by Tony Baer

It’s funny how history sometimes takes strange turns. Back in the 1980s, Sun began building its empire in the workgroup by combining two standards: UNIX boxes with TCP/IP networks built in. Sun’s The Network is the Computer message declared that computing was of little value without the network. Of course, Sun hardly had a lock on the idea: Bob Metcalfe devised the law stating that the value of a network grows with the square of the number of nodes connected, and Digital (DEC) (remember them?) actually scaled out the idea at the division level while Sun was still elbowing its way into the workgroup.

Funny that DEC was there first but only got the equation half right – bundling a proprietary OS with a standard networking protocol. Fast forward a decade and Digital was history, while Sun was the dot in dot com. Go a few more years and, as Linux makes even a “standard” OS like UNIX look proprietary, Sun suffers DEC’s fate (OK, they haven’t been acquired yet and still have cash reserves, if only they could figure out what to do when they finally grow up), and bandwidth and blades get commodity enough that businesses start thinking the cloud might be a cheaper, more flexible alternative to the data center. Throw in a very wicked recession and companies are starting to think that the numbers around the cloud – cheap bandwidth, commodity OSes, commodity blades – might provide the avoided-cost dollars they’ve all been looking for. That is, if they can be assured that placing data out in the cloud won’t create any regulatory or privacy headaches.

So today it’s official. After dropping hints for months, Cisco has finally announced its Unified Computing System, which is to provide, in essence, a prepackaged data center:

Blades + Storage Networking + Enterprise Networking in a box.

By now you’ve probably read the headlines – that UCS is supposed to, as observers like Dana Gardner put it, bring an iPhone-like unity to the piece parts that pass for data centers. It would combine blades, network devices, storage management, and VMware’s virtualization platform (as you might recall, Cisco owns a $150 million chunk of VMware) to provide, in essence, a data center appliance in the cloud.

In a way, UCS is a closing of the circle that began with mainframe host/terminal architectures of a half century ago: a single monolithic architecture with no external moving parts.

Of course, just as Sun wasn’t the first to exploit TCP/IP networking but got the lion’s share of the credit, Cisco is hardly the first to bridge the gap between the compute and networking node. Sun already has a Virtual Network Machines Project for processing network traffic on general-purpose servers, while its Project Crossbow, part of the OpenSolaris project, is supposed to make networks virtual as well. Sounds to us like a nice open source research project that’s limited to the context of the Solaris OS. Meanwhile HP has ramped up its ProCurve business, which aims at the heart of Cisco territory. Ironically, the dancer left on the sidelines is IBM, which sold off its global networking business to AT&T over a decade ago, and its ROLM network switches nearly a decade before that.

It’s also not Cisco’s first foray out of the base of the network OSI stack. Anybody remember Application-Oriented Networking? Cisco’s logic – building a level of content-based routing into its devices – was supposed to make the network “understand” application traffic. Yes, it secured SAP’s endorsement for the rollout, but who were you really going to sell this to in the enterprise? Application engineers didn’t care for the idea of ceding some of their domain to their network counterparts. On the other hand, Cisco’s successful foray into storage networking proves that the company is not a one-trick pony.

What makes UCS different this go-round are several factors. The commoditization of hardware and firmware, along with the emergence of virtualization and the cloud, makes the division between networking, storage, and the datacenter OS artificial. The recession makes enterprises hungry for found money, while the maturation of the cloud incents cloud providers to buy pre-packaged modules to cut acquisition costs and improve operating margins. Cisco’s lineup of partners is also impressive – VMware, Microsoft, Red Hat, Accenture, BMC, etc. – but names and testimonials alone won’t make UCS fly. The fact is that IT has no more hunger for data center complexity, the divisions between OS, storage, and networking no longer add value, and cloud providers need a rapid way of prefabricating their deliverables.

Nonetheless, we’ve heard lots of promises of all-in-one before. The good news is that this time around there’s lots of commodity technology and standards available. But if Cisco is to offer a real alternative to IBM, HP, or Dell, it’s got to make the datacenter-in-a-box, or cloud-in-a-box, a reality.

08.12.08

Worldwide Wait 2.0

Posted in e-Commerce, Mobile, Networks, Technology Market Trends, Web 2.0 Apps at 2:40 pm by Tony Baer

A hallmark of Web 2.0 is that the web is supposed to become more dynamic. That dynamism has been energized by critical mass broadband penetration, which in the U.S. now reaches over half of all households.

But unless you’re lucky enough (like us) to live within the Verizon FIOS service area, the future that’s supposedly already here is … not here yet. We’ve seen several fresh reminders over the past few weeks about the lack of connectivity, and how the issue is related to the fact that, while China is building cities, superhighways, metro lines, and networks, our physical and electronic infrastructure remains stuck in the 1960s.

No wonder that between 2001 and now, the U.S. dropped from fourth to 15th in broadband penetration. A proposed remedy from FCC chairman Kevin Martin – funding DSL-equivalent free WiMax access through royalties on wireless spectrum – would contribute but a drop in the bucket.

Over the past few weeks, we’ve been reminded of the penalties that the U.S. is paying for letting the ball drop when it comes to Internet infrastructure. We’ve also been reminded about the inertia of the media and entertainment industry in fully embracing the new technologies to revive what is a stagnant (in the case of music) or threatened (in the case of film) market. And we’ve been reminded about the resulting difference between hype and reality when it comes to the capabilities of the dynamic, location-based Internet that supposedly is already here today — but in reality is not.

Here are a few cases.

Basic Connectivity. About a month ago, we spent a lovely week up on the Maine coast. People who move to Deer Isle do so because they cherish the isolation — it’s 40 miles to the nearest McDonalds. But unless you’re lucky enough to live on Highway 15, the main road, chances are you’re still relying on dial-up Internet access. That is, if you’re lucky enough to get a dial-up line of any kind, because the copper wire phone system on Deer Isle is fully tapped out; you need to wait for somebody to move or die before getting a new line. About 18 months ago, Verizon sold off the landlines to FairPoint Communications, which subsequently decided that the infrastructure was too obsolete to continue investing in. It promises — someday — to replace copper with fiber. You want mobile instead? Only a single minor carrier provides cell phone coverage. By contrast, back in 2003 we vacationed on the other side of the Gulf of Maine in Nova Scotia, where virtually every town of any size had not only broadband but cellular coverage.

The hype of 3G. Adding 3G support to the iPhone was supposed to make it a true mobile Internet device. Maybe it does — it certainly has a great UI and operating environment — but don’t take the Apple commercials literally, as this entry from agile development and Ruby on Rails tools firm 37signals attests. Our mobile infrastructure — built on a divide-and-conquer rather than an interchangeable, standards-based strategy — continues to deliver coverage that is spotty and inferior to the rest of the developed world.

Internet Home Media. There has been lots of press over the idea of dynamic movie downloads from the likes of Netflix. But when it comes down to old-fashioned home entertainment — the stuff where you’re going to put that 100-inch home theater flat screen and 5.1 surround sound to use — don’t count on Internet streaming just yet, wrote colleague Andrew Brust recently.

****

There are several issues here:

1. A national failure to mobilize to renew our nation’s infrastructure (we’re too hung up on keeping taxes low and letting the market sort it out to pay for it), which touches broader policy issues.
2. The inertia of certain sectors that feel threatened but could otherwise profit if they could only think out of the box.
3. Hype continues to outrace reality.

09.26.05

Where the Bus Stops

Posted in Enterprise Integration, Networks, SOA & Web Services at 2:39 am by Tony Baer

Who cares about the next big thing? The answer’s obvious: the IT infrastructure we have today will remain in place for a good long while. Paradoxically, that’s why a new technology, Service-Oriented Architecture (SOA), is advancing past the pilot stage across many organizations.

Services don’t replace legacy applications; they expose them and add value in new ways. Log onto FatLens, a concert ticket site, click on Coldplay, and you can find tickets available for the next concert in your area. Beneath the hood, FatLens is making web services calls to eBay for listings and buyer authentication. By contrast, traditional web applications would have forced you to navigate manually, hit or miss, through portals or search engines.
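For the curious, a consumer-side service call is only a few lines of code. The endpoint, parameters, and token below are purely hypothetical stand-ins, not eBay’s actual API:

```python
import requests  # assumes the ubiquitous requests library is available

LISTINGS_URL = "https://api.example.com/listings"  # hypothetical endpoint

def find_tickets(artist, postal_code, token):
    """Call a (hypothetical) listings web service instead of making the
    user navigate portals or search engines by hand."""
    resp = requests.get(
        LISTINGS_URL,
        params={"artist": artist, "near": postal_code},
        headers={"Authorization": f"Bearer {token}"},  # buyer authentication
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["listings"]

# Usage: the site assembles its page from whatever the service returns.
# for listing in find_tickets("Coldplay", "10001", token="..."):
#     print(listing["venue"], listing["date"], listing["price"])
```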

Services are evolving, just as web applications did a generation earlier.

Back then, Web apps moved from static to interactive. In the process, vendors had to make tough choices. IBM ditched the San Francisco framework once Sun’s J2EE gathered steam. Today IBM sells more J2EE than Sun.

Fast forward. Like FatLens, most services today are manual – the end user is a human. On the horizon, organizations will find it profitable to daisy-chain multiple processes into dynamic workflows where services consume each other. For instance, you might design an app that automatically requests a credit history service as a cost-cutting strategy. But if you’d like to make this a moneymaker, you might refine it into a process designed to stimulate business from your best customers. You would then orchestrate credit checks with policy-driven services that also retrieve buying history and generate promotions.
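A back-of-the-envelope sketch of that orchestration might look like the following; the service stubs and policy thresholds are ours, purely for illustration:

```python
# Hypothetical service stubs; in a real deployment each would be a web
# service call to a separate provider.
def credit_history(customer_id):
    return {"score": 720}

def buying_history(customer_id):
    return {"lifetime_spend": 14_500}

def generate_promotion(customer_id, tier):
    return {"code": f"{tier.upper()}-10OFF"}

def process_order(customer_id, order_total):
    """Daisy-chain services: run the credit check first, then, for customers
    the policy flags as high value, pull buying history and attach a
    promotion -- turning a cost-cutting check into a moneymaker."""
    credit = credit_history(customer_id)
    if credit["score"] < 600:            # policy threshold (made up)
        return {"approved": False}
    result = {"approved": True}
    history = buying_history(customer_id)
    if history["lifetime_spend"] > 10_000 or order_total > 1_000:
        result["promotion"] = generate_promotion(customer_id, tier="best")
    return result

# Usage
print(process_order("cust-42", order_total=250))
```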

But as you automate, familiar issues arise: whether the requestor is entitled to that service, whether their identity has been faked, and whether their credit request message has been corrupted or falsely generated. And as you get serious, you will inevitably deal with quality-of-service issues, worrying about availability and reliability so your customers do not suffer service interruptions or broken transactions.

Web apps dealt with parallel issues. For them the answer was the appserver, which focused housekeeping in a central middleware stack. But is that the right solution for services which are more self-contained than web requests, where all the logic was deployed back on the server?

That’s where the debate over Enterprise Service Buses (ESBs) arises: they provide a flatter, more distributed pipeline that can mediate, route, and transform messages – and, depending on your viewpoint, orchestrate more complex workflows on demand. In other words, they deliver many of the functions of appservers in an environment where services make logic portable and dynamic.
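To ground what “mediate, route, and transform” means in practice, here’s a toy content-based router – no vendor’s ESB in particular, just the pattern, with made-up message types and queue names:

```python
import json

def transform(message):
    """Normalize an inbound message before routing (a trivial 'transform')."""
    payload = json.loads(message)
    payload["amount"] = float(payload.get("amount", 0))
    return payload

ROUTES = {
    "credit.request": "queue://credit-service",      # hypothetical endpoints
    "order.created":  "queue://fulfillment-service",
}

def route(message):
    """Mediate: validate, transform, then pick a destination from the
    message's content rather than a hard-wired point-to-point link."""
    payload = transform(message)
    destination = ROUTES.get(payload.get("type"), "queue://dead-letter")
    return destination, payload

# Usage
dest, body = route('{"type": "credit.request", "amount": "1250.00"}')
print(dest, body)
```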

No, you probably won’t rip out your J2EE appservers for ESBs, but as you add services, will you buy appservers or ESBs going forward?

Obviously, ESB pure plays have little to lose by promoting maximum functionality. As for the incumbents, IBM is playing it both ways: finally admitting that ESBs are a product (after several years of denial), it will let you buy simple or complex buses. By contrast, BEA is selling only a simple ESB, on the rationale that complex orchestrated actions are best handled by appservers. Our take? BEA is trying to protect its WebLogic business.

Yet the march of technology could throw many of BEA’s rivals into the same position.

A few months back, Cisco unveiled “application-oriented routing,” moving functions like content routing and message or requestor authentication onto smart routers. It makes sense: as tasks get commoditized, commodity technologies emerge. There’s plenty of precedent: routers replacing custom gateways, piles of simple Intel blades supplanting multi-processor servers, and the list goes on.

By that logic, ESBs could find themselves waystops as well, with network devices absorbing repetitive content routing, XML processing, authentication, authorization, and load balancing tasks, while higher-level process design and management devolve to the application level. At that point, would the ESB folks be willing to eat their young?

02.13.04

Head Bone and Neck Bone

Posted in IT Infrastructure, Networks, Security at 12:48 am by Tony Baer

In most IT organizations, data centers and networking are managed by different groups. And reflecting the organizational stovepipes, the vendors serving both groups are quite separate. In large part, that’s why IBM exited the network business several years ago, selling global backbones to AT&T and switching products to Cisco. From a sales and marketing standpoint, keeping the server and network product lines separate made ample sense. Yet in this era of viruses, worms, and hacker attacks, keeping both domains separate amounts to a security nightmare.

Traditionally, the data center and the network folks met through network node management software, using rudimentary SNMP interfaces to read the status of network nodes. HP, which largely invented the business, has an arm’s-length alliance with Cisco that enables the HP OpenView console to configure Cisco devices and monitor their availability. Now IBM has upped the ante, announcing an alliance to cross-fertilize its product lines with Cisco’s. For opening shots, both companies are primarily focusing on integrating Cisco network security features into IBM ThinkPad clients, covering VPN clients, intrusion detection, and access control. It’s just the first step — with the exception of intrusion detection, integration of network protection features into servers will come later.

Of course, to get a truly united front, we’d like to see the common umbrella extended to features such as firewalls and antivirus protection. However, from a business standpoint, that would make things more complicated because it would require participation by third parties. Significantly, past efforts to forge unified security suites have had rough going, as Network Associates could testify. Nonetheless, for anybody who didn’t perceive the need to converge data center and network protection, the invasion of the MyDoom worm over the past couple of weeks was a cold wake-up call.

03.20.03

Honey I Shrank the Database

Posted in Networks, Security, Wireless at 12:07 am by Tony Baer

Each day, it’s becoming clearer that WiFi is just the latest in a long list of back-door computing technologies that have made it into the enterprise. Like PCs, mobile phones, PDAs, and Blackberries before it, WiFi is a grassroots technology that IT groups are reacting to rather than planning for. Today’s announcement of Cisco’s acquisition of Linksys, the leading maker of consumer broadband hubs, simply confirms this trend.

Hubs are becoming the latest must-have device for anyone with a cable modem or DSL. Although Linksys also makes conventional wired hubs, it’s clear that wireless is becoming the preferred way of sharing broadband connections at home.

The corporate angle here is that, for a large proportion of white-collar workers, accessing back office systems at night or on weekends has long been part of the workweek. Admittedly, when access to the Internet was through a single dedicated connection, passwords proved adequate for keeping out kids under the age of 6, while VPNs did their job in rendering transactions invisible to the Internet.

However, the new wireless LAN technologies currently lack adequate encryption measures. Although these security holes should be patched within 6 – 12 months, in the interim systems administrators must further refine access control policies. The bright side is that organizations that effectively manage security in environments rife with quasi-legal MP3 downloads should be better prepared when wireless LANs take center stage inside the enterprise.

11.22.02

The Little Technology That Will

Posted in Networks, Technology Market Trends, Wireless at 10:30 pm by Tony Baer

For road warriors among us, the promise of an untethered world is a mixed blessing. While some foresee huge productivity improvements from being always connected, for most of us, the very notion probably conjures up scenes of entrapment inside noisy commuter trains with everybody’s cell phones and pagers going off.

And, as one former itinerant consultant confided to us, isolation can be a blessing. There is a reason why libraries are supposed to be quiet, and, he noted, a reason why the isolation of cross-country flights can provide the most productive hours of consulting projects. Nonetheless, the unexpected success of Blackberries indicates pent-up demand for at least some mobile connectivity.

Consequently, we expect WiFi to become “the next little thing” in wireless. Cheap and simple, with local transmitters costing less than $500 apiece, WiFi lets almost anybody set up a hot spot.

In that context, we were amused reading accounts of a wireless forum at Comdex this week. The carrier and technology establishment is being caught off guard by this guerilla technology, just as the phone carriers were when dial-up Internet flooded local switches a few years back.

Among key hurdles are billing and network handoffs, because mobile broadband — whether WiFi or 3G — will involve multiple carriers. While the problem has continued to dog even terrestrial carriers, that hasn’t shut down long distance service. Other objections were raised that WiFi won’t adequately cover the landscape. However, Qualcomm’s notion that 3G will make WiFi obsolete was pretty laughable, given the company’s track record supplying the proprietary technologies that have rendered U.S. cell phones useless in the rest of the world.

WiFi is the little technology that will. Although security questions remain, they will get solved years before 3G networks ever reach critical mass. With Intel about to embed WiFi into its P4 mobile chips next year, the installed base will quickly mushroom.

We expect that hotels could prove the tipping point, since WiFi is cheaper and quicker than extending T1 lines to every room, a captive market exists (road warriors have to sleep somewhere), and there is an existing billing mechanism. Furthermore, the environment is conducive but not ubiquitous (road warriors leave their rooms every morning). Our main concern, however, is that those poor souls won’t lose too much sleep in the process.