With its last acquisition of the year narrowly sneaking in before the holidays, IBM announced it is acquiring solidDB, an established Finland-based provider of in-memory database technology, for an undisclosed sum.
With over 3 million deployments worldwide, solidDB certainly has a real presence in what has historically proven a niche market. In fact, given solidDB’s vintage, it’s obvious that the technology is hardly brand new. We recall specialized in-memory databases dating back to the previous decade, with providers like Harte-Hanks offering products geared toward the direct marketing industry.
But in recent years real-time databases have threatened to escape from, or at least broaden, their niche. In fact, telcos, which are the embodiment of real-time processing, have comprised the sweet spot of solidDB’s base. Other uses in retail, and especially RFID (which we’re still waiting to finally become The Next Big Thing), are obvious given the need to process massive amounts of data as it’s being generated. The emergence of embedded apps has fueled the revival of the old object database mainstay ObjectStore, which Progress rechristened as a caching mechanism for its Sonic JMS (and later ESB) offerings when it acquired the technology five years ago.
The acquisition makes eminent sense for IBM, not least because it finally provides a response to Oracle’s snapping up of the TimesTen in-memory database a couple of years ago. But given solidDB’s base in telco, we think there is obvious synergy with IBM’s Tivoli Netcool (which came through the Micromuse acquisition) that IBM should be able to leverage going forward.
The end of the year is obviously the time for the “look back,” and although we weren’t planning any grand pronouncements that 2007 was the year of social computing or anything like that (more in a moment), perhaps it’s fitting that we stumbled across an announcement from Red Hat of a changing of the guard.
(For the record, 2007 was not the year of social computing for us. But in 2008 it might be, as you start seeing more enterprise apps like Salesforce to Salesforce absorbing social computing techniques.)
Opening his farewell Red Hat blog with Jack Kerouac’s line that “the only people for me are the mad ones,” outgoing CEO and president Matthew Szulik painted the kind of wild-west picture of open source that has become something of a stereotype.
“And over the phone, in the middle of my sales pitch, corporate types at Dell, IBM and HP and others would hear the constant banging of soda cans dropping in the soda machine and would ask if there were fights going on outside my office. So, after a while, I told the prospective investors that YES there were fights going on. And yes, these fights happened frequently. It’s how people at Red Hat settled technical issues like software bugs and features in new releases.”
In all fairness, flying soda cans at Red Hat were hardly unique to open source, as any veteran of knock-down debates in Bill Gates’ office would attest.
But the passing of the baton at Red Hat is more symbolic in another way: while the origins of open source were characterized by evangelist Eric S. Raymond as a thriving, chaotic bazaar, in actuality, today open source has spawned a variety of technology development and go-to-market models.
Consequently, while you have the idealism and the bug-patching efficiency of the world’s biggest virtual software R&D lab, open source for some providers has become big business. Just ask Red Hat shareholders, after the company reported Q3 earnings exceeding $20 million, up roughly 40% over a year ago. Or consider, from a go-to-market standpoint, how Red Hat has morphed with the JBoss acquisition into a platform company that not only offers an OS and middle tier, but now offers certified bundles that would look familiar to any veteran IBM, Sun, or Oracle customer.
In other words, being open source doesn’t mean that some bundles won’t be more equal than others. Yes, you can run the Linux ports of rival J2EE platforms on Red Hat Enterprise Linux, while JBoss continues to run on Windows (and has an interoperability agreement with Microsoft to juice performance), but Red Hat has become another commercial platform company, just like any other vendor in the open source or proprietary worlds.
There’s nothing wrong with this unless you’re an open source idealist who is committed to a democratic ideal where all software is free and the playing field for interoperability is completely level. In the real world, enterprise customers want products that they know will work.
David Heinemeier Hansson got back to us today and provided more insight on his partiality towards RESTful programming. In essence, his thoughts are that REST is more in sync with how the web works, and therefore a better match for web development.
“Making it easy to do web applications with RESTful APIs is making it easy to do web applications that are native and true to the web. The RESTful principles all stem from best practices of how to get the most out of HTTP and the infrastructure available for the web. Thankfully, this approach also maps incredibly well to the programming notion of CRUD, which makes the development process easier and more naturally divided according to areas of concern.”
He admits that RESTful approaches may take some getting used to.
“It takes a little while to wrap your head around resources and REST, but once it clicks, the old ways of RPC seem like a very unfulfilling way to work.”
By implication, he’s saying what works better for the web works better for Rails.
Forgetting the hype over Web 2.0, the second generation of web development has spawned a huge back-to-basics movement, reflected in the popularity of dynamic scripting languages and POJOs (Plain Old Java Objects). And among the most popular alternatives is Ruby, a dynamic object-oriented language well suited to database-focused applications, whose potential blossomed with the Rails framework.
This week, the second version of Ruby on Rails finally came out after about four years of work. One of the highlights of Ruby on Rails 2.0 probably isn’t that surprising: promotion of the use of REST over more complex SOAP-based web services.
Ruby emerged, along with a number of other technologies ranging from the trio of “P”-named dynamic scripting languages (Perl, Python, and PHP) to the movement toward POJOs, as a developer reaction against the mounting complexity of J2EE/Java EE and .NET, both of which were crafted for bringing the complexities of enterprise software development to the web.
Ruby’s origins were similar to Java’s in that it was crafted for a completely different purpose; while Java originated as a friendlier abstraction of C++ intended for set-top boxes, Ruby emerged as a latter-day, friendlier continuation of the object-oriented Smalltalk language for more general-purpose applications (some say Perl and Python also influenced Ruby). In turn, the Rails framework, which debuted in 2004, put Ruby on the map, combining an easy-to-program OO language with a framework for building database-centric web applications.
The hoopla over Rails 2.0 is that the enhancements are clearly skewed toward promoting the use of REST rather than SOAP for data services. Writing in his blog, Rails creator David Heinemeier Hansson of consulting firm 37signals was pretty unequivocal: “It’ll probably come as no surprise that Rails has picked a side in the SOAP vs REST debate. Unless you absolutely have to use SOAP for integration purposes, we strongly discourage you from doing so.”
Heinemeier Hansson’s argument isn’t necessarily that REST is better, but that it is better suited for Rails.
If you look at SOAP, the answer is pretty obvious, because SOAP is clearly designed to do more things. You can do data services with SOAP, but as a general-purpose messaging container, it’s also designed to do much more. Specifically, it’s designed to be extensible, so you can use all those wonderful OASIS WS-* standards to add capabilities such as asserting trust, specifying token types, or requesting reliable delivery. By contrast, REST (a.k.a. RESTful programming) is not a standard like SOAP, but a style designed for retrieving or storing resources, end of story.
Consequently, Ruby on Rails (RoR) and RESTful-style programming are well aligned; you could say they were made for each other. One offers a simplified framework for getting, storing, and acting on data, while the other describes the style (and implies the use of web standards) for doing it.
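To make the verb-to-CRUD mapping concrete, here is a minimal plain-Ruby sketch (not Rails itself; the class and method names are our own illustration) of the idea underpinning RESTful resources: each HTTP verb applied to a resource corresponds to exactly one of the four CRUD actions on a store.

```ruby
# Illustrative sketch only: a RESTful dispatcher in plain Ruby.
# Each HTTP verb maps to one CRUD action on an in-memory store,
# the way Rails 2.0's resource routing maps verbs to controller actions.
class ResourceStore
  def initialize
    @rows = {}
    @next_id = 0
  end

  # Dispatch an (HTTP verb, id, payload) triple to the matching CRUD action.
  def dispatch(verb, id = nil, payload = nil)
    case verb
    when :post   then create(payload)      # POST   /orders    -> Create
    when :get    then read(id)             # GET    /orders/1  -> Read
    when :put    then update(id, payload)  # PUT    /orders/1  -> Update
    when :delete then destroy(id)          # DELETE /orders/1  -> Delete
    end
  end

  private

  def create(payload)
    id = (@next_id += 1)
    @rows[id] = payload
    id                      # return the new resource's identifier
  end

  def read(id)
    @rows[id]
  end

  def update(id, payload)
    @rows[id] = payload
  end

  def destroy(id)
    @rows.delete(id)
  end
end

store = ResourceStore.new
id = store.dispatch(:post, nil, { item: "widget" })  # Create
store.dispatch(:put, id, { item: "gadget" })         # Update in place
```

The point of the sketch is that the verb itself carries the intent, so no RPC-style method name ever appears in the URL.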
It shouldn’t be surprising why IT’s automation needs often fall to the bottom of the stack: because most companies are not in the technology business, investments in people, processes, or technologies designed to improve IT show up only as expenses on the bottom line. And so while IT is the organization responsible for helping the rest of the business adopt automated solutions, IT often goes begging when it comes to investing in its own operational improvements.
In large part that explains the 20-year “overnight success” of ITIL. Conceived as a framework for codifying what IT actually does for a living, the latest revisions of ITIL describe a service lifecycle that provides opportunity for IT to operate more like a services business that develops, markets, and delivers those services to the enterprise as a whole. In other words, it’s supposed to elevate areas that used to pass for “help desk” and “systems management” into conversations that could migrate from middle manager to C-level.
Or at least that’s what it’s cracked up to be. If you codify what an IT service is, then define the actions involved at every step of its lifecycle, you have the outline of a business process that can be made repeatable. And as you codify processes, you gain opportunities to attach more consistent metrics that track performance.
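The codify-then-measure logic can be sketched in a few lines of Ruby. This is purely illustrative (the stage names and class are our own, not drawn from the ITIL publications): once every service request moves through defined stages with timestamps, a consistent metric like time-to-close falls out of the record for free.

```ruby
require 'time'

# Illustrative sketch: a codified service request lifecycle.
# Stage names are hypothetical, not ITIL's official terminology.
class ServiceRequest
  STAGES = [:submitted, :approved, :implemented, :closed]

  attr_reader :history

  def initialize(opened_at)
    @history = { submitted: opened_at }
  end

  # Record when the request reached a stage, refusing undefined stages --
  # the "repeatable process" part of the argument.
  def advance(stage, at)
    raise ArgumentError, "unknown stage #{stage}" unless STAGES.include?(stage)
    @history[stage] = at
  end

  # A consistent metric attached to the codified process:
  # elapsed seconds from submission to closure.
  def time_to_close
    return nil unless @history[:closed]
    @history[:closed] - @history[:submitted]
  end
end

req = ServiceRequest.new(Time.parse("2007-12-20 09:00"))
req.advance(:approved,    Time.parse("2007-12-20 11:00"))
req.advance(:implemented, Time.parse("2007-12-20 15:00"))
req.advance(:closed,      Time.parse("2007-12-20 17:00"))
```

Nothing here is sophisticated; that is the point. The metric exists only because the process was codified first.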
We’ve studied ITIL and have spoken with organizations on their rationale for adoption. Although the hammer of regulatory compliance might look to be the obvious impetus (e.g., if you are concerned about regulations covering corporate governance or protecting the sanctity of customer identity, you want audit trails that include data center operations), we also found that scenarios such as corporate restructuring or merger and acquisition played a role. At an ITIL forum convened by IDC in New York last week, we found that was exactly the case for Motorola, Toyota Financial Services, and Hospital Corp. of America – each of which sat on a panel to reflect on its experiences.
They spoke of establishing change advisory boards to cut down on the incidence of unexpected changes (which tend to break systems), formalizing all service requests (to reduce the common practice of buttonholing), reporting structures (which, not surprisingly, varied widely across organizations), and what to put in the Configuration Management Database (CMDB) that the ITIL framework stipulates as the definitive store defining what you are running and how you are running it.
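A toy sketch conveys what a CMDB buys a change advisory board (the fields and names here are hypothetical, and far simpler than any real CMDB product): once each configuration item records what it depends on, the downstream impact of a proposed change can be computed rather than guessed.

```ruby
# Illustrative sketch of a CMDB: configuration items (CIs) plus their
# dependency relationships. Assumes the dependency graph is acyclic.
class CMDB
  def initialize
    @items = {}  # name => { type:, depends_on: [names] }
  end

  def register(name, type:, depends_on: [])
    @items[name] = { type: type, depends_on: depends_on }
  end

  # Everything that directly or transitively depends on the given CI --
  # the "blast radius" a change advisory board would review before
  # approving a change to it.
  def impacted_by(name)
    direct = @items.select { |_, ci| ci[:depends_on].include?(name) }.keys
    direct + direct.flat_map { |d| impacted_by(d) }
  end
end

cmdb = CMDB.new
cmdb.register("db01",  type: :database)
cmdb.register("app01", type: :app_server, depends_on: ["db01"])
cmdb.register("web01", type: :web_server, depends_on: ["app01"])
```

Asking the store what a change to the database touches returns the app and web tiers above it, which is exactly the conversation a change advisory board exists to have.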
But we also came across comments that, for all its lofty goals, ITIL could wind up erecting new silos for an initiative that was supposed to break them down. One attendee, from a major investment bank, was concerned that adoption of the framework would bring pressure for certifications that would wind up pigeonholing professionals into discrete tasks, a la Frederick Taylor. Another mused about excessive reliance on arbitrary metrics for the sake of metrics, because that is what looks good in management presentations. Others questioned whether initiatives such as adding a change advisory board, or adding a layer of what were in effect “account representatives” between IT and the various business operating units it serves, would in turn create new layers of bureaucracy.
What’s more interesting is that these concerns were hardly isolated voices in the wilderness. John Willis, who heads Zabovo, a third-party consulting firm specializing in IBM Tivoli tools, recently posted responses from the Tivoli mailing list on the question of whether ITIL actually matters. There was no shortage of answers. Not surprisingly, there were plenty of detractors. One respondent characterized ITIL as “merely chang[ing] the shape of the hoops I jump through, and not the fact that there are hoops…” Another termed it a “$6 buzzword/management fad,” while another claimed that ITIL makes much ado over the obvious: “The Helpdesk is NOT an ITIL process, it is merely a function…”
But others stated that, despite the hassles, the real problem is defining processes or criteria more realistically. “Even when I’m annoyed, I still believe ITIL or ITIL-like processes should be here to stay, but management should be more educated on what constitutes a serious change to the environment…” Others claimed that ITIL formalizes what should already be your IT organization’s best practices and doesn’t really invent anything new. “Most shops already perform many of the processes and don’t recognize how close they are to being ITIL compliant already. It is often a case of refining a few things.”
Probably the comment that best summarized the problem is that many organizations, not surprisingly, are “forgetting that ITIL is not an end in and of itself. Rather, it is a means to an end,” adding that this point is often “lost on both the critics and cheerleaders of Service Management.”
We’re not surprised that adoption of ITIL is striking nerves. It’s redolent of the early battles surrounding those old MRP II and, later, ERP projects that often degenerated into endless business process reengineering efforts for their own sake. Ironically, promoters of early Class A MRP II initiatives justified their efforts on the ideal that integration would let everyone read from the same sheet of music and therefore enable the organization to act in unison. In fact, many of those early implementations simply cemented the functional or organizational walls they were supposed to tear down. And although, in the name of process reengineering, they were supposed to make organizations more responsive, many of those implementations wound up enshrining the inflexible planning practices that drove operational cost structures through the roof.
The bottom line is that IT could use a dose of process, but not the arbitrary kind that looks good in management PowerPoints. The ITIL framework presents the opportunity to rationalize the best practices that already are or should be in place. Ideally, it could provide the means for correlating aspects of service delivery, such as maintaining uptime and availability, with metrics that actually reflect the business value of meeting those goals. For the software industry, ITIL provides a nice common target for developing packaged solutions, just as the APICS framework did for enterprise applications nearly 30 years ago. That’s the upside.
The downside is that, like any technical profession, IT has historically operated in its own silo away from the business, where service requests were simply thrown over the wall with little or no context. As a result, the business has hardly understood what it is getting for its IT dollars, while IT has had little concept of the bigger picture of why it is doing its job, other than to keep the machines running. IT has good reason to fear process improvement exercises that don’t appear grounded in reality.
ITIL is supposed to offer the first step: defining what IT does as a service, and the tasks or processes involved in delivering it. The cultural gap that separates IT from the business is both the best justification for adopting ITIL and the greatest obstacle to achieving its goals.