Modest Innovation from the bottom up

Ever since it was introduced several years back, we’ve been fascinated with the Eclipse Mylyn project. Leveraging the success of the Eclipse IDE as a platform for multiple open source development tools, Mylyn presents developers with task lists, along with the relevant files and artifacts, generated by the various Eclipse tools covering the development lifecycle. What’s kind of cool is the technology’s relative simplicity. Rather than implement a highly complex centralized workflow engine, it functions in publish/subscribe listening mode; the only step developers must take to activate Mylyn’s context-based task filtering is to click open a workspace for a task, and to close it when the task is done or the bug resolved. Otherwise, Mylyn updates the task list automatically based on what’s in the underlying repository of the source tool.
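To make the publish/subscribe idea concrete, here is a minimal sketch of the pattern as we understand it. The class and method names below are our own invention for illustration, not the actual Mylyn API: a repository publishes task changes, and a task list view refreshes only when the change concerns the task the developer has opened.

    // Minimal, hypothetical sketch of the pub/sub pattern described above.
    // These classes are illustrative only -- they are not the Mylyn API.
    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    public class TaskContextSketch {

        // The "repository" side: publishes task updates to whoever is listening.
        static class TaskRepository {
            private final List<Consumer<String>> listeners = new ArrayList<>();

            void subscribe(Consumer<String> listener) {
                listeners.add(listener);
            }

            void taskUpdated(String taskId) {
                // No central workflow engine: just notify subscribers.
                listeners.forEach(l -> l.accept(taskId));
            }
        }

        // The "task list" side: reacts only when the active task changes.
        static class TaskListView implements Consumer<String> {
            private String activeTask;

            void activate(String taskId) { this.activeTask = taskId; } // developer opens the task
            void deactivate()            { this.activeTask = null; }   // task done, bug resolved

            @Override
            public void accept(String taskId) {
                if (taskId.equals(activeTask)) {
                    System.out.println("Refreshing context for active task " + taskId);
                }
            }
        }

        public static void main(String[] args) {
            TaskRepository repo = new TaskRepository();
            TaskListView view = new TaskListView();
            repo.subscribe(view);

            view.activate("BUG-42");
            repo.taskUpdated("BUG-42"); // view refreshes automatically
            repo.taskUpdated("BUG-99"); // ignored: not the active task
        }
    }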

Mylyn’s a cool example of bottom-up, modest thinking aimed at addressing an everyday problem rather than trying to solve world hunger.

Web 2.0 Maginot Line

Last week we spent a couple of lovely but unseasonably cold early spring days locked inside a hotel near the Boston convention center for HP’s annual analyst conference for the 1/3 of the company that is not PCs or printers. While much of what we heard or saw was under non-disclosure, we won’t be shot if we tell you about a 20-minute demonstration given by Caleb Sima on how Web 2.0 apps can be honey pots for hackers. You can restrict access to your site as strictly as possible and use SSL so that traffic goes out over secure HTTP, but if your Web 2.0 site uses a rich browser client that performs all of the authentication locally, you may as well have built a Maginot Line.

The point was to demonstrate a new freebie utility from HP, SWFScan, which scans Flash files for security holes; you point it at a website, and it decompiles the code and identifies vulnerabilities. OK, pretty abstract sounding. But Sima did a live demo, conducting a Google search for websites with rich Flash clients that included logins, then picking a few actual sites at random (well, maybe he did the search ahead of time to get an idea of what he’d pick up, but that’s showbiz). After he entered the URL, the tool scanned the web page, decompiled the SWF (the vector graphics file format of Flash that contains the ActionScript), and then displayed in plain English all the instances of password entry and processing. So why bother with the trouble of network sniffing when all you have to do is run a botnet that automates Google searches, hits web pages, decompiles the code, and forges log-ins? Sima then showed the same thing with database queries, giving hackers yet simpler alternatives to SQL injection.
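For readers who haven’t watched a decompile like this, here is a minimal sketch of the anti-pattern, written in Java rather than ActionScript and with an invented credential: authentication performed entirely in the client, against a secret that ships with the client. Whatever the downloaded binary contains, a decompiler can read back out.

    // Illustrative only: a client-side login check of the kind SWFScan surfaces.
    // The user name and password here are made up; in a real Flash client the
    // equivalent logic would sit in the ActionScript inside the SWF, where any
    // decompiler can recover it.
    public class ClientSideLogin {

        // Shipping the secret with the client means shipping it to the attacker.
        private static final String ADMIN_PASSWORD = "letmein";

        static boolean authenticate(String user, String password) {
            // "Local" authentication: no server round trip, no server-side check.
            return "admin".equals(user) && ADMIN_PASSWORD.equals(password);
        }

        public static void main(String[] args) {
            System.out.println(authenticate("admin", "letmein")); // true -- for anyone who decompiled it
        }
    }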

Point taken. Web 2.0 is fine, as long as authentication is still conducted Web 1.0 style – on the server, not in the client.

What’s a Service? Who’s Responsible?

Abbott and Costello aside, one of the most charged, ambiguous, and overused terms in IT today is Service. At its most basic, a service is a function or operation that performs a task. For IT operations, a service is a function, delivered by a computing facility, that performs a task for the organization. For software architecture, a service in the formal, capital-“S” sense is a loosely coupled function or process that is designed to be abstracted from the software application, the physical implementation, and the data source; in the more generic, lower-case “s” sense, a service is simply a function performed by software. And if you look at the Wikipedia definition, a service can also refer to processes performed down at the OS level.

Don’t worry, we’ll keep the discussion above OS level to stay relevant — and to stay awake.
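To pin down the capital-“S” sense before we go further, here is a minimal sketch of what “abstracted from the application, the physical implementation, and the data source” looks like in code. The names are hypothetical: callers depend only on the contract, and the implementation behind it can change without their knowledge.

    // A minimal sketch of a loosely coupled Service: callers see a contract,
    // not the application, the physical implementation, or the data source.
    // All names here are hypothetical, for illustration only.
    import java.util.List;

    interface CustomerLookupService {            // the contract callers bind to
        List<String> findCustomersByRegion(String region);
    }

    // One possible implementation; it could just as well front a mainframe,
    // a packaged application, or a web service without the caller changing.
    class InMemoryCustomerLookup implements CustomerLookupService {
        @Override
        public List<String> findCustomersByRegion(String region) {
            return "EMEA".equals(region) ? List.of("Acme GmbH", "Example SA") : List.of();
        }
    }

    public class ServiceSketch {
        public static void main(String[] args) {
            CustomerLookupService service = new InMemoryCustomerLookup();
            System.out.println(service.findCustomersByRegion("EMEA"));
        }
    }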

So why are we getting hung up on this term? Because it was all over the rather varied day we had today, split among (1) HP’s annual IT analyst conference for its Technology Solutions Group (that’s the 1/3 of the company that’s not PCs or printers); (2) a meeting of the SOA Consortium; and (3) yet another meeting with MKS, an ALM vendor that just signed an interesting resale deal with BMC, one that starts with the integration of the IT service desk with issue and defect management in the application lifecycle.

Services, in both their software and IT operations senses, were all over our agenda today; we just couldn’t duck the term. But it is more than a coincidence of terminology: there is actually an underlying interdependency between the design and deployment of a software service and the IT services that are required to run it.

It was core to the presentation that we delivered to the SOA Consortium today, as our belief is that you cannot manage a SOA or application lifecycle without adequate IT Service Management (ITSM, a discipline for running IT operations that is measured and tracked by the services delivered). We drew a diagram that was deservedly torn apart by our colleagues on the call, Beth Gold-Bernstein and Todd Biske. UPDATE: Beth has a picture of the diagram in her blog. In our diagram, we showed how, at run time, there is an intersection between the SOA lifecycle and ITSM – or more specifically, ITIL version 3 (ITIL is the best known framework for implementing ITSM). Both maintained that interaction is necessary throughout the lifecycle; for instance, when the software development team is planning a service, it needs to get IT operations (ITO) in the loop to brace for the release of the service – especially if the service is likely to drastically ramp up demand on the infrastructure.

The result of our discussion was not simply that services are joined, figuratively, at the head and the neck bone – the software and IT operations implementations – but that at the end of the day, somebody’s got to be accountable for ensuring that services are developed and deployed responsibly. In other words, just making the business case for a service is not adequate if you can’t ensure that the infrastructure will be able to handle it. Lacking the second piece of the equation, you’d wind up with a scenario where the surgery is successful but the patient dies. With the functional silos that comprise most IT organizations today, that would mean responsibility dispersed between the software (or in some cases, enterprise) architect and their equivalent(s) in IT operations. In other words, everybody’s responsible, and nobody’s responsible.

The idea came up that maybe what’s needed is a service ownership role that transcends the usual definition (today, the service owner is typically the business stakeholder that sponsored development, and/or the architect that owns the design or software implementation). That is, a sort of uber role that ensures that the service (1) responds to a bona fide business need, (2) is consistent with enterprise architectural standards and does not needlessly duplicate what is already in place, and (3) won’t break the IT infrastructure or physical delivery (e.g., assures that ITO is adequately prepared).

While the last thing the IT organization needs is yet another layer of management, it may need another layer of responsibility.

UPDATE: Todd Biske has provided some more detail on what the role of a Service Manager would entail.

IBM buying Sun? Why bother?

That was our first response when we saw a WSJ headline and a sampling of comments from the blogosphere, here and here, earlier this morning. And it still is.

Ever since the popping of the dot-com bubble, Sun has been trying to redefine itself. At its core, Sun has always been a hardware company – initially CAD/CAM workstations, and then, thanks to the purchase of part of Cray’s assets, a server company. That was fine when Windows couldn’t provide the scale required for running websites, and before clustered Linux blades proved the viability of low-cost/no-cost computing, eating Sun’s lunch. Sun had Java, but it ceded the business and much of the development tooling standards to IBM before the web development market fragmented with new, popular scripting languages.

So what should Sun do when it grows up? Back in 2003, we suggested Sun eat its young in classic Silicon Valley fashion: junk the software business, where it has never made money, and bite the bullet on Unix, staking a new line in the sand for 64-bit Linux. A lot of our friends at Sun stopped returning calls and emails after that. Had Sun done so, it would have enjoyed a 2 – 3 year head start, of course at the price of transitioning to the higher volume, lower margin business model with which it is still struggling.

Fast forward to the present, and Sun is several years into a strategy to become an open source company. Fine idea, had it begun prior to Jonathan Schwartz’s watch. But Sun’s boldest recent move, buying MySQL for a billion dollars, was great for grabbing attention yet hardly a game-changer, in that this little database-that-could could not carry a $5 billion overall business (it would have made more sense a couple of years earlier, had Sun already been well underway down a Linux road, which it wasn’t).

So what does IBM really have to gain from spending $6.5 billion? More share in UNIX servers? UNIX is not exactly a growing market these days. With Linux eating UNIX’s lunch, IBM has already been quite busy, thank you, pushing the middleware and management systems that do make money atop Linux, which doesn’t. Sun still has $3 billion in cash stashed away from the glory days that it’s burning through. IBM has $13 billion, and healthy margins to boot, so why bother? Migration of the tiny base of NetBeans users to Eclipse? Sorry, that bird’s already flown. A land-office business in MySQL (when IBM already has a stake in the more scalable EnterpriseDB)?

One could posit that this is a circling of the wagons following Cisco’s announcement of its Unified Computing System initiative; however, Sun does not offer any of the missing networking pieces IBM would need to respond to Cisco. It could also be interpreted as a move to blunt HP by adding more data center share – except that IBM already has the heft to counter HP and doesn’t need Sun’s incremental presence.

It’s also been speculated that IBM might pick up Sun and divest portions, such as the Solaris business to Fujitsu, as piece parts. But the question is, what family jewels are actually left?

Sun has been approaching various suitors over the past few months because it needs an exit strategy. But Sun would be a waste of IBM’s money, not to mention the time spent digesting a large acquisition of questionable added value. That leaves Fujitsu, Sun’s primary Solaris OEM, as the only logical suitor left standing.

Update: Progress Software’s Eric Newcomer, whose “night job” is co-chair of the OSGi Enterprise Expert group, has some interesting observations on what it’s been like to have been caught in the middle of the Eclipse vs. NetBeans battle.

The Network is the Computer

It’s funny how history sometimes takes strange turns. Back in the 1980s, Sun began building its empire in the workgroup by combining two standards: UNIX boxes with TCP/IP networks built in. Sun’s The Network is the Computer message declared that computing was of little value without the network. Of course, Sun hardly had a lock on the idea: Bob Metcalfe devised the law stating that the value of a network is proportional to the square of the number of nodes connected, and Digital (DEC) (remember them?) actually scaled out the idea at the division level while Sun was elbowing its way into the workgroup.
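For reference, Metcalfe’s formulation rests on simple arithmetic: a network of n nodes supports on the order of n(n-1)/2 distinct connections, so its value grows roughly with the square of the number of nodes. In LaTeX form:

    V(n) \;\propto\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\approx\; \frac{n^{2}}{2}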

Funny that DEC was there first, but they only got the equation half right – bundling a proprietary OS to a standard networking protocol. Fast forward a decade and Digital was history, and Sun was the dot in dot com. Go a few more years later, as Linux made even a “standard” OS like UNIX look proprietary, and Sun suffers DEC’s fate (OK, they haven’t been acquired yet and still have cash reserves, if they could only figure out what to do when they finally grow up), while bandwidth and blades get commodity enough that businesses start thinking the cloud might be a cheaper, more flexible alternative to the data center. Throw in a very wicked recession and companies are starting to think that the numbers around the cloud – cheap bandwidth, commodity OS, commodity blades – might provide the avoided-cost dollars they’ve all been looking for. That is, if they can be assured that placing data out in the cloud won’t create any regulatory or privacy headaches.

So today it becomes official. After dropping hints for months, Cisco has finally announced its Unified Computing System, which is to provide, in essence, a prepackaged data center:

Blades + Storage Networking + Enterprise Networking in a box.

By now you’ve probably read the headlines – that UCS is supposed to do what observers like Dana Gardner term bringing an iPhone-like unity to the piece parts that pass for data centers. It would combine blade servers, network devices, storage management, and VMware’s virtualization platform (as you might recall, Cisco owns a $150 million chunk of VMware) to provide, in essence, a data center appliance in the cloud.

In a way, UCS is a closing of the circle that began with mainframe host/terminal architectures of a half century ago: a single monolithic architecture with no external moving parts.

Of course, just as Sun wasn’t the first to exploit TCP/IP networking but got the lion’s share of the credit, Cisco is hardly the first to bridge the gap between the compute and networking nodes. Sun already has a Virtual Network Machines Project for processing network traffic on general-purpose servers, while its Project Crossbow is supposed to make networks virtual as well, as part of its OpenSolaris project. That sounds to us like a nice open source research project that’s limited to the context of the Solaris OS. Meanwhile, HP has ramped up its ProCurve business, which aims at the heart of Cisco territory. Ironically, the dancer left on the sidelines is IBM, which sold off its global networking business to AT&T over a decade ago, and its ROLM network switches nearly a decade before that.

It’s also not Cisco’s first foray outside the base of the network OSI stack. Anybody remember Application-Oriented Networking? Cisco’s logic – building a level of content-based routing into its devices – was supposed to make the network “understand” application traffic. Yes, it secured SAP’s endorsement for the rollout, but who were you really going to sell this to in the enterprise? Application engineers didn’t care for the idea of ceding some of their domain to their network counterparts. On the other hand, Cisco’s successful foray into storage networking proves that the company is not a one-trick pony.

What makes UCS different this time around are several factors. The commoditization of hardware and firmware, plus the emergence of virtualization and the cloud, makes the division between networking, storage, and the data center OS artificial. The recession makes enterprises hungry for found money, while the maturation of the cloud gives cloud providers an incentive to buy prepackaged modules to cut acquisition costs and improve operating margins. Cisco’s lineup of partners is also impressive – VMware, Microsoft, Red Hat, Accenture, BMC, etc. – but names and testimonials alone won’t make UCS fly. The fact is that IT has no more hunger for data center complexity, the divisions between OS, storage, and networking no longer add value, and cloud providers need a rapid way of prefabricating their deliverables.

Nonetheless, we’ve heard lots of all-in-one promises before. The good news is that this time around there’s lots of commodity technology and standards available. But if Cisco is to offer a real alternative to IBM, HP, or Dell, it’s got to make the data-center-in-a-box, or cloud-in-a-box, a reality.

Let My Handsets Go!

We’ve always been amazed at how, in North America, mobile carriers perpetuated a captive business model that in the computing world went out of style nearly two decades ago. So it was ironic when the release of yet another proprietary system – the iPhone – opened the first crack in the captive turnkey systems environment that was the North American mobile market.

Since then, the iPhone has carved out a powerful niche in the market, Google’s noises have prodded the FCC to reserve new spectrum for open devices, and Microsoft has made a limited impact with Windows Mobile. But for the core of the mobile market, it’s still been business as usual. Until now, handset makers have been the odd men out.

However, as mobile devices morph into computing platforms, something has to give if you ever expect to see a critical mass market for mobile apps. It will have to follow the same script as the PC, which provided a critical mass target that led to the explosion of what became the consumer software market.

Today’s announcement of the Eclipse Pulsar initiative is the handset makers’ revenge. Led by Motorola, Nokia, and Genuitec, with participation from IBM, RIM, and Sony Ericsson Mobile Communications, Pulsar is a new Eclipse effort to develop a common IDE that would support different handsets through a series of device profile plug-ins. The idea is that handset makers have better things to do than keep reinventing the wheel with their own unique development tools.

Pulsar is a modest but important first step towards creating a coherent target for mobile app developers. Until now, the market for mobile handset environments has been very fragmented, with no single OS or presentation layer holding more than 10% of the market. Because the carriers controlled what went onto their networks, handset makers had little motivation to agree on standards of any sort when it came to development targets. Apple’s iPhone App Store was the shot across the bow that prompted them to act.

Pulsar won’t rationalize the mobile app development market on its own, as there will still be a proliferation of profiles that govern which apps can make it onto which devices. But in the long run it will make it more economically attractive for device makers to rationalize their offerings so that software developers gain critical mass targets. Ideally, device profiles should hide the complexity of writing to different handset models, just as printer drivers for PCs have eliminated the burden for software developers to account for different printers. It will open wide opportunities for players like Adobe, whose Open Screen Project encourages developers to write their own Flash mobile players.
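To illustrate the driver analogy, here is a minimal, hypothetical sketch of the device-profile idea; none of these names come from Pulsar itself. The application targets a profile contract, and per-handset plug-ins fill in the details, much as an application prints to a driver rather than to a specific printer.

    // Hypothetical sketch: the app writes to a profile contract, and
    // per-handset plug-ins supply the device-specific details.
    interface DeviceProfile {
        String name();
        int screenWidth();
        int screenHeight();
        boolean supportsTouch();
    }

    class BasicPhoneProfile implements DeviceProfile {
        public String name()           { return "basic-phone"; }
        public int screenWidth()       { return 176; }
        public int screenHeight()      { return 220; }
        public boolean supportsTouch() { return false; }
    }

    class TouchHandsetProfile implements DeviceProfile {
        public String name()           { return "touch-handset"; }
        public int screenWidth()       { return 320; }
        public int screenHeight()      { return 480; }
        public boolean supportsTouch() { return true; }
    }

    public class ProfileSketch {
        // The app targets the profile, not the handset -- like printing to a driver.
        static void render(DeviceProfile profile) {
            System.out.printf("Laying out %dx%d UI for %s (touch: %b)%n",
                    profile.screenWidth(), profile.screenHeight(),
                    profile.name(), profile.supportsTouch());
        }

        public static void main(String[] args) {
            render(new BasicPhoneProfile());
            render(new TouchHandsetProfile());
        }
    }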

In actuality, since handsets play such widely different roles for different classes of customers, it will never be that simple. Instead, what is likely is that handsets will divide into different classes – from basic phone to PDA, gaming or entertainment platform, and so on – with each type of device likely fitting within some form of de facto standard. Pressure for rationalization will come from the fact that device makers want to draw more application software support, which in turn makes them more attractive to consumers in what becomes an open market.