03.15.11

Big Data analytics in the cloud could be HP’s enterprise trump card

Posted in Application Development, Application Lifecycle Management (ALM), Big Data, Business Intelligence, Cloud, Data Management, Enterprise Applications, OS/Platforms, Outsourcing, Rich Internet Apps., SaaS (Software as a Service) at 10:47 am by Tony Baer

Unfortunately, scheduling conflicts have kept us from attending Leo Apotheker’s keynote today before the HP Analyst Summit in San Francisco. But yesterday, he tipped his cards on his new software vision for HP before a group of investment analysts. HP’s software focus is not to reinvent the wheel – at least where it comes to enterprise apps. Apotheker has to put to rest any notion that he’s about to stage a grudge match and buy the company that dismissed him. There is already plenty of coverage here, interesting comment from Tom Foremski (we agree with him about SAP being a non-starter), and the Software Advice guys who are conducting a poll.

To some extent this has been little surprise with HP’s already stated plans for WebOS and its recently announced acquisition of Vertica. We do have one question though: what happened to Converged Infrastructure?

For now, we’re not revisiting the acquisition stakes, although if you follow the #HPSummit Twitter tag, you’ll probably see lots of ideas floating around after 9am Pacific time today. We’ll instead focus on the kind of company HP wants to be, based on its stated objectives.

1. Develop a portfolio of cloud services from infrastructure to platform services and run the industry’s first open cloud marketplace that will combine a secure, scalable and trusted consumer app store and an enterprise application and services catalog.

This hits two points on the checklist: providing a natural market for all those PCs that HP sells, and signaling that HP wants to venture higher up the food chain than just selling lots of iron. That certainly makes sense. The last part is where we have a question: offering cloud services to consumers, the enterprise, and developers sounds at first blush like HP wants its cloud to be all things to all people.

The good news is that HP has a start on the developer side, where it has been offering performance testing services for years – but it is now catching up to providers like CollabNet (with which it is aligned, and which would make a logical acquisition candidate) and Rally in offering higher-value planning services for the app lifecycle.

In the other areas – consumer apps and enterprise apps – HP is starting from square one. It obviously must separate the two, as cloud is just about the only thing that the two have in common.

For the consumer side, HP (like Google Android and everyone else) is playing catchup to Apple. It is not simply a matter of building it and expecting they will come. Apple has built an entire ecosystem around its iOS platform that has penetrated content and retail – challenging Amazon, not just Salesforce or a would-be HP, using its user experience as the basis for building a market for an audience that is dying to be captive. For its part, HP hopes to build WebOS to have the same “Wow!” factor as the iPhone/iPad experience. It’s got a huge uphill battle on its hands.

For the enterprise, it’s a more wide open space where only Salesforce’s AppExchange has made any meaningful mark. Again, the key is a unifying ecosystem, with the most likely outlet being enterprise outsourcing customers for HP’s Enterprise Services (the former EDS operation). The key principle is that when you build a marketplace, you have to identify who your customers are and give them a reason to visit. A key challenge, as we’ve stated in our day job, is that enterprise applications are not the equivalent of those $2.99 apps that you’ll see in the App Store. The experience at Salesforce – the classic inversion of the long tail – is that the market is primarily for add-ons to the Salesforce.com CRM application or use of the Force.com development platform, but that most entries simply get buried deep down the list.

Enterprise apps marketplaces are not simply going to provide a cheaper channel for solutions that still require consultative sells. We’ve suggested that they adhere more to the user group model, which also includes forums, chats, exchanges of ideas, and by the way, places to get utilities that can make enterprise software programs more useful. Enterprise app stores are not an end in themselves, but a means for reinforcing a community — whether it be for a core enterprise app – or for HP, more likely, for the community of outsourcing customers that it already has.

2. Build webOS into a leading connectivity platform.
HP clearly hopes to replicate Apple’s success with iOS here – the key being that it wants to extend the next-generation Palm platform to its base of PCs and other devices. This one’s truly a Hail Mary pass designed to rescue the Palm platform from irrelevance in a market where iOS, Android, Adobe Flash, Blackberry, and Microsoft Windows 7/Silverlight are battling it out. Admittedly, mobile developers have always tolerated fragmentation as a fact of life in this space – but of course that was when the stakes (with feature phones) were rather modest. With smart devices – in all their varied form factors from phone to tablet – becoming the next major consumer (and to some extent, enterprise) frontier, there’s a fresh battle for mindshare. That mindshare will be built on the size of the third-party app ecosystem that these platforms attract.

As Palm was always more an enterprise than a consumer platform – before the Blackberry eclipsed it – HP’s likely WebOS venue will be the enterprise space. That is another uphill battle, against Microsoft (which has the office apps), Blackberry (with its substantial corporate email base), and yes, Apple, where enterprise users are increasingly sneaking iPhones in the back door, just like they did with PCs 25 years ago.

3. Build presence with Big Data
Like (1), this also hits a key checkbox for where to sell all those HP PCs. HP has had a half-hearted presence with the discontinued Neoview business. The Vertica acquisition was clearly the first one that had Apotheker’s stamp on it. Of HP’s announced strategies, this is the one that aligns closest with the enterprise software strategy that we’ve all expected Apotheker to champion. Obviously Vertica is the first step here – and there are many logical acquisitions that could fill this out, as we’ve noted previously regarding Tibco, Informatica, and Teradata. The important point is that classic business intelligence never really suffered through the recession, and arguably, big data is the next frontier for BI – which is becoming not just a nice-to-have, but increasingly an expected cost of competition.

What’s interesting so far is that in all the talk about Big Data, there’s been relatively scant attention paid to utilizing the cloud to provide the scaling to conduct such analytics. We foresee a market where organizations that don’t necessarily want to buy all that iron, and that run large advanced analytics on an event-driven basis, consume the cloud for their Hadoop – or Vertica – runs. Big Data analytics in the cloud could be HP’s enterprise trump card.

12.13.10

The Second Wave of Analytics

Posted in Big Data, Business Intelligence, Cloud, Data Management, Database, e-Commerce, Java, Open Source, SaaS (Software as a Service) at 3:17 am by Tony Baer

Throughout the recession, business intelligence (BI) was one of the few growth markets in IT. Transactional systems that report “what” is happening are simply the price of entry for remaining in a market; BI and analytics systems answer the question of “why” something is happening and, ideally, provide actionable intelligence so you know “how” to respond. Not surprisingly, understanding the whys and hows is essential for maximizing the top line in growing markets and for pinpointing the path to survival in down markets – the latter being why BI remained one of the few growth areas in the IT and business applications space through the recession.

Analytic databases are cool again. Teradata, the analytic database provider with a 30-year track record, had its strongest Q2 in what was otherwise a lousy 2010 for most IT vendors. Over the past year, IBM, SAP, and EMC made major acquisitions in this space, while some of the loudest decibels at this year’s Oracle OpenWorld were over the Exadata optimized database machine. There are a number of upstarts with significant venture funding – ranging from Vertica to Cloudera, Aster Data, ParAccel and others – that are not only charting solid growth, but also pursuing such a varied range of approaches that it’s clear the market is far from mature and there remains plenty of demand for innovation.

We are seeing today a second wave of innovation in BI and analytics that matches the ferment and intensity of the 1995-96 era when data warehousing and analytic reporting went commercial. There isn’t any one thing that is driving BI innovation. At one end of the spectrum, you have Big Data, and at the other end, Fast Data — the actualization of real-time business intelligence. Advances in commodity hardware, memory density, and parallel programming models, along with the emergence of NoSQL, open source statistical programming languages, and the cloud, are bringing this all within reach. There is more and more data everywhere that’s begging to be sliced, diced and analyzed.

The amount of data being generated is mushrooming, but much of it will not necessarily be persisted to storage. For instance, if you’re a power company that wants to institute a smart grid, moving from monthly to daily meter reads multiplies your data volumes by a factor of 30, and if you decide to take readings every 15 minutes, multiply all that again by a factor of nearly 100. Much of this data will be consumed as events. Even for the portion that is persisted, traditional relational models won’t handle the load. The issue is not only the overhead of operating all that iron, but with it the concurrent need for additional equipment, space, HVAC, and power.
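To make those multipliers concrete, here is a back-of-the-envelope sketch in Python; the meter count (1 million) and per-read payload (200 bytes) are illustrative assumptions of ours, not figures from the smart-grid example above.

    # Back-of-the-envelope arithmetic behind the smart-grid example.
    # The meter count and per-read payload are illustrative assumptions.

    meters = 1_000_000
    bytes_per_read = 200

    monthly = meters * 1 * bytes_per_read                # one read per meter per month
    daily = meters * 30 * bytes_per_read                 # roughly 30x the monthly volume
    quarter_hourly = meters * 30 * 96 * bytes_per_read   # 96 reads per day, ~100x the daily volume

    for label, volume in [("monthly reads", monthly),
                          ("daily reads", daily),
                          ("15-minute reads", quarter_hourly)]:
        print(f"{label:>16}: {volume / 1e9:8.3f} GB per month")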

Unlike the past, when the biggest databases were maintained inside the walls of research institutions, public sector agencies, or within large telcos or banks, today many of the largest data stores on the Internet are getting opened through APIs, such as from Facebook. Big databases are no longer restricted to use by big companies.

Compare that to the 1995-96 time period, when relational databases, which made enterprise data accessible, reached critical mass adoption; rich Windows clients, which put powerful apps on the desktop, became the enterprise standard; and new approaches emerged for optimizing data storage and productizing the kind of enterprise reporting pioneered by Information Builders. And with it all came the debates over OLAP (or MOLAP) vs. ROLAP, star vs. snowflake schemas, and ad hoc vs. standard reporting. Ever since, BI has become ingrained with enterprise applications, as reflected by recent consolidations with the acquisitions of Cognos, Business Objects, and Hyperion by IBM, SAP, and Oracle. How much more establishment can you get?

What’s old is new again. When SQL relational databases emerged in the 1980s, conventional wisdom was that the need for indexes and related features would limit their ability to perform or scale to support enterprise transactional systems. Moore’s Law and the emergence of client/server made a mockery of that argument – until the web, the proliferation of XML data, smart sensory devices, and the realization that unstructured data contained valuable morsels of market and process intelligence in turn made a mockery of the argument that relational was the enterprise database end-state.

In-memory databases are nothing new either, but the same hardware commoditization trends that helped mainstream SQL have also brought the costs of these engines down to earth.

What’s interesting is that there is no single source or style of innovation. Just as 1995 proved a year of discovery and debate over new concepts, today you are seeing a proliferation of approaches ranging from different strategies for massively parallel, shared-nothing architectures; columnar databases; massive networked and hierarchical file systems; and SQL vs. programmatic approaches. It is not simply SQL vs. a single post-SQL model, but variations that mix and match SQL-like programming with various approaches to parallelism, data compression, and use of memory. And don’t forget the application of analytic models to complex event processing for identifying key patterns in long-running events, or combing through streaming data that is arriving in torrents too fast and large to ever consider putting into persistent storage.
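As a rough illustration of one of those approaches, here is a minimal Python sketch of why columnar layouts favor analytic workloads; the table, column names, and values are made up, but the principle is the one columnar databases exploit.

    # An aggregate over one column touches a single contiguous array
    # instead of every row; low-cardinality columns also compress well.

    rows = [
        {"customer": "A", "region": "east", "amount": 120.0},
        {"customer": "B", "region": "west", "amount": 75.5},
        {"customer": "C", "region": "east", "amount": 210.0},
    ]

    # Row-oriented: scan whole records to total one field.
    row_total = sum(r["amount"] for r in rows)

    # Column-oriented: the same table stored as one array per column.
    columns = {
        "customer": ["A", "B", "C"],
        "region":   ["east", "west", "east"],
        "amount":   [120.0, 75.5, 210.0],
    }
    col_total = sum(columns["amount"])   # reads only the column of interest

    # A column like "region" lends itself to run-length or dictionary encoding.
    assert row_total == col_total == 405.5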

This time, much of the innovation is coming from the open source world, as evidenced by projects like Hadoop, the Java-based distributed computing platform inspired by Google’s published infrastructure; the MapReduce parallel programming model developed by Google; the Hive project, which makes MapReduce look like SQL; and the R statistical programming language. Google has added fuel to the fire by releasing to developers its BigQuery and Prediction API for analyzing large sets of data and applying predictive algorithms.
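For readers who haven’t seen the MapReduce model up close, here is a minimal, self-contained Python sketch of the idea – not Hadoop’s actual API. Word counting is the canonical example; Hive would express roughly the same job as a SQL GROUP BY.

    # A toy MapReduce: map emits (key, value) pairs, the framework shuffles
    # them by key, and reduce collapses each key's values into a result.
    # Hive equivalent, roughly: SELECT word, COUNT(*) FROM docs GROUP BY word;

    from collections import defaultdict
    from itertools import chain

    def map_phase(document):
        # Emit one (word, 1) pair per word in the document.
        return [(word, 1) for word in document.split()]

    def shuffle(pairs):
        # Group values by key, as the framework does between the two phases.
        grouped = defaultdict(list)
        for key, value in pairs:
            grouped[key].append(value)
        return grouped

    def reduce_phase(key, values):
        return key, sum(values)

    documents = ["big data in the cloud", "fast data in the cloud"]
    pairs = chain.from_iterable(map_phase(doc) for doc in documents)
    counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
    print(counts)   # {'big': 1, 'data': 2, 'in': 2, 'the': 2, 'cloud': 2, 'fast': 1}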

These are not simply technology innovations looking for problems, as use cases for Big Data or real-time analysis are mushrooming. Want to extend your analytics from structured data to blogs, emails, instant messaging, wikis, or sensory data? Want to convene the world’s largest focus group? There’s sentiment analysis to be conducted from Facebook; trending topics for Wikipedia; power distribution optimization for smart grids; or predictive analytics for use cases such as real-time inventory analysis for retail chains, or strategic workforce planning, and so on.

Adding icing to the cake was an excellent talk at a New York Technology Council meeting by Merv Adrian, a 30-year veteran of the data management field (who will soon be joining Gartner) who outlined the content of a comprehensive multi-client study on analytic databases that can be downloaded free from Bitpipe.

Adrian speaks of a generational disruption occurring in the database market, one that attacks new forms of an age-old problem: how to deal with expanding datasets while maintaining decent performance. It’s as mundane as that. But the explosion of data, coupled with the commoditization of hardware and increasing bandwidth, has exacerbated matters to the point where we can no longer apply the brute force approach of tweaking relational architectures. “Most of what we’re doing is figuring out how to deal with the inadequacies of existing systems,” he said, adding that the market and state of knowledge have not yet matured to the point where we’re thinking about how the data management scheme should look logically.

So it’s not surprising that competition has opened wide for new approaches to solving the Big and Fast Data challenges; the market has not yet matured to the point where there are one or a handful of consensus approaches around which to build a critical mass practitioner base. But when Adrian describes the spate of vendor acquisitions over the past year, it’s just a hint of things to come.

Watch this space.

10.01.10

Leo Apotheker to target HP’s forgotten business

Posted in Application Development, Business Intelligence, Data Management, Database, Enterprise Applications, IT Infrastructure, IT Services & Systems Integration, Networks, Outsourcing, SaaS (Software as a Service), Storage, Systems Management, Technology Market Trends at 1:35 pm by Tony Baer

Ever since its humble beginnings in the Palo Alto garage, HP has always been kind of a geeky company – in spite of Carly Fiorina’s superficial attempts to prod HP towards a vision thing during her aborted tenure. Yet HP keeps talking about getting back to that spiritual garage.

Software has long been the forgotten business of HP. Although – surprisingly – the software business was resuscitated under Mark Hurd’s reign (revenues have more than doubled as of a few years ago), software remains almost a rounding error in HP’s overall revenue pie.

Yes, Hurd gave the software business modest support. Mercury Interactive was acquired under his watch, giving the business a degree of critical mass when combined with the legacy OpenView business. But during Hurd’s era, there were much bigger fish to fry beyond all the internal cost cutting, for which Wall Street cheered but insiders jeered. Converged Infrastructure has been the mantra, reminding one and all that HP was still very much a hardware company. The message remains loud and clear with HP’s recent 3PAR acquisition at a heavily inflated $2.3 billion, which was concluded in spite of the interim leadership vacuum.

The dilemma that HP faces is that, yes, it is the world’s largest hardware company (they call it technology), but the bulk of that is from personal systems. Ink, anybody?

The converged infrastructure strategy was a play at the CTO’s office. Yet HP is a large enough company that it needs to compete in the leagues of IBM and Oracle, and for that it needs to get meetings with the CEO. Ergo, the rumors of feelers made to IBM Software’s Steve Mills, the successful offer to Leo Apotheker, and the agreement for Ray Lane to serve as non-executive chairman.

Our initial reaction was one of disappointment; others have felt similarly. But Dennis Howlett feels that Apotheker is the right choice “to set a calm tone” – that there won’t be a massive and debilitating reorg in the short term.

Under Apotheker’s watch, SAP stagnated, hit by the stillborn Business ByDesign and the hike in maintenance fees that, for the moment, made Oracle look warmer and fuzzier. Of course, you can’t blame all of SAP’s issues on Apotheker; the company was in a natural lull cycle as it was seeking a new direction in a mature ERP market. The problem with SAP is that, defensive acquisition of Business Objects notwithstanding, the company has always been limited by a “not invented here” syndrome that has tended to blind the company to obvious opportunities – such as inexplicably letting strategic partner IDS Scheer slip away to Software AG. Apotheker’s shortcoming was not providing the strong leadership to jolt SAP out of its inertia.

Instead, Apotheker’s – and Ray Lane’s for that matter – value proposition is that they know the side of the enterprise market that HP doesn’t. That’s the key to this transition.

The next question becomes acquisitions. HP has a lot on its plate already. It took at least 18 months for HP to digest the $14 billion acquisition of EDS, providing a critical mass IT services and data center outsourcing business. It is still digesting nearly $7 billion of subsequent acquisitions of 3Com, 3PAR, and Palm to make its converged infrastructure strategy real. HP might be able to get backing to make new acquisitions, but the dilemma is that Converged Infrastructure is a stretch in the opposite direction from enterprise software. So it’s not just a question of whether HP can digest another acquisition; it’s an issue of whether HP can strategically focus in two different directions that ultimately might come together, but not for a while.

So let’s speculate about software acquisitions.

SAP, the most logical candidate, is, in a narrow sense, relatively “affordable” given that its stock is roughly 10 – 15% off its 2007 high. But SAP would obviously be the most challenging given the scale; it would be difficult enough for HP to digest SAP under normal circumstances, but with all the converged infrastructure stuff on its plate, it’s back to the question of how you can be in two places at once. Infor is a smaller company, but as it is also a polyglot of many smaller enterprise software firms, it would present HP with additional integration headaches that it doesn’t need.

HP may have little choice but to make a play for SAP if IBM or Microsoft were unexpectedly to actively bid. Otherwise, its best bet is to revive the relationship, which would give both companies the time to acclimate. But in a rapidly consolidating technology market, who has the luxury of time these days?

Salesforce.com would be a logical stab, as it would reinforce HP Enterprise Services’ (formerly EDS) outsourcing and BPO business. It would be far easier for HP to get its arms around this business. The drawback is that Salesforce.com would not be very extensible as an application because it uses a proprietary stored-procedures database architecture. That would make it difficult to integrate with a prospective ERP SaaS acquisition, which would otherwise be the next logical step to growing the enterprise software footprint.

Informatica is often brought up – if HP is to salvage its Neoview BI business, it would need a data integration engine to help bolster it. Better yet, buy Teradata, which is one of the biggest resellers of Informatica PowerCenter – that would give HP a far more credible presence in the analytics space. Then it would have to ward off Oracle, which has an even more pressing need for Informatica to fill out the data integration piece of its Fusion middleware stack. But with Teradata, there would at least be a real anchor for the Informatica business.

HP has to decide what kind of company it needs to be as Tom Kucharvy summarized well a few weeks back. Can HP afford to converge itself in another direction? Can it afford not to? Leo Apotheker has a heck of a listening tour ahead of him.

09.23.10

Stack envy: Impressions for Oracle OpenWorld 2010

Posted in Application Development, BPM, Business Intelligence, Data Management, Database, Enterprise Applications, Enterprise Integration, Java, Middleware, SaaS (Software as a Service), SOA & Web Services at 2:19 am by Tony Baer

Last year, the anticipation of the unveiling of Fusion apps was palpable. Although we’re not finished with Oracle OpenWorld 2010 yet – we still have the Fusion middleware analyst summit tomorrow and still have loose ends regarding Oracle’s Java strategy – by now our overall impressions are fairly code complete.

In his second conference keynote – which unfortunately turned out to be almost a carbon copy of his first – Larry Ellison boasted that they “announced more new technology this week than anytime in Oracle’s history.” Of course, that shouldn’t be a heavy lift given that Oracle is a much bigger company with many more products across the portfolio, and with Sun, has a much broader hardware/software footprint at that.

On the software end – and post-Sun acquisition, we have to make that distinction – it’s hard to follow up last year’s unveiling of Fusion apps. The Fusion apps are certainly a monster in size, with over 5,000 tables and 10,000 task flows, representing five years of development. Among other things, the embedded analytics provide the context long missing from enterprise apps like ERP and CRM, which previously required you to slip into another module as a separate task. There is also good integration of process modeling, although BPM models developed using either of Oracle’s modeling tools won’t yet be executable. For now, Fusion apps will not change the paradigm of model, then develop.

A good sampling of coverage and comment can be found from Ray Wang, Dennis Howlett, Therese Poletti, Stefan Ried, and for the Java side, Lucas Jellema.

The real news is that Fusion apps, excluding manufacturing, will be in limited release by year end and general release in Q1. That’s pretty big news.

But at the conference, Fusion apps took a back seat to Exadata, the HP-based (and soon to be SPARC-based) database appliance unveiled last year, and the Exalogic cloud-in-a-box unwrapped this year. It’s no mystery that growth in the enterprise apps market has been flat for quite some time, with the main greenfield opportunities going forward being midsized businesses or the BRIC world region. Yet Fusion apps will be overkill for small-to-midsized enterprises that won’t need such a rich palette of functionality (NetSuite is more likely their speed), which leaves the emerging economies as the prime growth target. The reality is most enterprises are not about to replace the very ERP systems that they implemented as part of modernization or Y2K remediation efforts a decade ago. At best, Fusion will be a gap filler, picking up where current enterprise applications leave off, which provides potential growth opportunity for Oracle, but not exactly a blockbuster one.

Nonetheless, as Oracle was historically a software company, the bulk of attendees along with the press and analyst community (including us) pretty much tuned out all the hardware talk. That likely explains why, if you subscribed to the #oow10 Twitter hashtag, you heard nothing but frustration from software bigots like ourselves and others who got sick of the all-Exadata/Exalogic-all-the-time treatment during the keynotes.

In a memorable metaphor, Ellison stated that one Exalogic device can schedule the entire Chinese rail system, and that two of them could run Facebook – to which a Twitter user retorted, how many enterprises have the computing load of a Facebook?

Frankly, Larry Ellison has long been at the point in his life where he can afford to disregard popular opinion. Give a pure hardware talk Sunday night, then do it almost exactly again on Wednesday (although on the second go-round we were also treated to a borscht belt routine taking Salesforce’s Marc Benioff down more than a peg on who has the real cloud). Who is going to say no to the guy who sponsored and crewed on the team that won the America’s Cup?

But if you look at the dollars and sense opportunity for Oracle, it’s all about owning the full stack that crunches and manages the data. Even in a recession, if there’s anything that’s growing, it’s the amount of data that’s floating around. Combine the impacts of broadband, sensory data, and lifestyles that are becoming more digital, and you have the makings for the data counterpart to Metcalfe’s Law. Owning the hardware closes the circle. Last year, Ellison spoke of his vision to recreate the unity of the IBM System 360 era, because at the end of the day, there’s nothing that works better than software and hardware that are tuned for each other.

So if you want to know why Ellison is talking about almost nothing else except hardware, it’s not only because it’s his latest toy (OK, maybe it’s partly that). It’s because if you run the numbers, there’s far more growth potential to the Exadata/Exalogic side of the business than there is for Fusion applications and middleware.

And if you look at the positioning, owning the entire stack means deeper account control. It’s the same strategy behind the entire Fusion software stack, which uses SOA to integrate internally and with the rest of the world. But Fusion apps and middleware remain optimized for an all-Oracle Fusion environment, underpinned by a declarative Application Development Framework (ADF) and tooling that is designed specifically for that stack.

So on one hand, Oracle’s pitch that big database processing works best on optimized hardware can sound attractive to CIOs that are seeking to optimize one of their nagging points of pain. But the flipside is that, given Oracle’s reputation for aggressive sales and pricing, will the market be receptive to giving Oracle even more control? To some extent the question is moot; with Oracle having made so many acquisitions, enterprises that followed a best of breed strategy can easily find themselves unwittingly becoming all-Oracle shops by default.

Admittedly, the entire IT industry is consolidating, but each player is vying for different combinations of the hardware, software, networking, and storage stack. Arguably, applications are the most sensitive layer of the IT technology stack because that is where the business logic lies. As Oracle asserts greater claim to that part of the IT stack and everything around it, it requires a strategy for addressing potential backlash from enterprises seeking second sources when it comes to managing their family jewels.

10.08.09

Getting with the Program

Posted in .NET, Application Development, Cloud, Data Management, Database, Java, SaaS (Software as a Service) at 5:44 pm by Tony Baer

Developers are a mighty stubborn bunch. Unlike the rest of the enterprise IT market, where a convergence of forces has favored a “nobody gets fired for buying IBM, Oracle, SAP, or Microsoft” mentality, developers have no such herding instincts. Developers do not always get with the [enterprise] program.

For evidence, recall what happened the last time that the development market faced such consolidation. In the wake of web 1.0, the formerly fragmented development market – which used to revolve around dozens of languages and frameworks – congealed down to Java vs .NET camps. That was so 2002, however, as in the interim, developers have gravitated towards choosing their own alternatives.

The result was an explosion of what former Burton Group analyst Richard Monson Haefel termed the Rebel Frameworks (that was back in 2004), and, more recently, the resurgence of scripting languages. In essence, developers didn’t take the future as inevitable, and for good reason: the so-called future of development circa 2002 was built on the assumption that everyone would gravitate to enterprise-class frameworks. Java and .NET were engineered on the assumption that the future of enterprise and Internet computing would be based on complex, multitier distributed transactional systems. It was accompanied by a growing risk-averseness: buy only from vendors that you expect will remain viable. Not surprisingly, enterprise computing procurements narrowed to IOSM (IBM, Oracle, SAP, Microsoft).

But the developer community lives by a different dynamic. In an age of open source, expertise for development frameworks and languages gets dispersed; vendor viability becomes less of a concern. More importantly, developers just want to get the job done, and anyway, the tasks that they perform typically fall under the enterprise radar. Whereas a CFO may be concerned over the approach an ERP system employs for managing financial or supply chain processes, they are not going to care about development languages or frameworks.

The result is that developers remain independent minded, and that independence accounts for the popularity of alternatives to enterprise development platforms, with Ruby on Rails being the latest to enter the spotlight.

In one sense, Ruby’s path to prominence parallels Java in that the language was originally invented for another purpose. But there the similarity ends as, in Ruby’s case, no corporate entity really owned it. Ruby is a simple scripting language that became a viable alternative for web developers once David Heinemeier Hansson invented the Rails framework. The good news is that Rails makes it easy to use Ruby to write relatively simple web database applications. Examples of Rails’ simplicity include the following (a rough sketch of the naming convention appears after the list):
• Eliminating the need to write configuration files for mapping requests to actions
• Avoiding multi-threading issues because Rails will not pool controller (logic) instances
• Dispensing with object-relational mapping files; instead, Rails automates much of this and tends to use very simplified naming conventions.
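Here is a minimal Python sketch – not Rails itself, and with hypothetical class names – of the convention-over-configuration idea behind that last point: the table name is derived from the model class name rather than declared in a mapping file.

    # The pluralization here is deliberately naive compared with Rails' own.

    def table_name_for(model_class):
        """Derive a table name by lowercasing and naively pluralizing the class name."""
        name = model_class.__name__.lower()
        return name + "es" if name.endswith("s") else name + "s"

    class Post:
        pass

    class Address:
        pass

    print(table_name_for(Post))     # -> posts
    print(table_name_for(Address))  # -> addresses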

The bad news is that there are performance limitations and difficulties in handling more complex distributed transaction applications. But the good news is that when it comes to web apps, the vast majority are quite rudimentary, thank you.

The result has propelled a wave of alternative stacks, such as LAMP (Linux-Apache web server-MySQL-and either PHP, Python, or Perl) or, more recently, Ruby on Rails. At the other end of the spectrum, the Spring framework takes the same principle – simplification – to ease the pain of writing complex Java EE applications – but that’s not the segment addressed by PHP, MySQL, or Ruby on Rails. It reinforces the fact that, unlike the rest of the enterprise software market, developers don’t necessarily take orders from up top. Nobody told them to implement these alternative frameworks and languages.

The latest reminder of the strength of grassroots markets in the developer sector is Engine Yard’s securing of $19 million in Series C funding. The backing comes from some of the same players that also funded SpringSource (which was recently acquired by VMware). Some of the backing also comes from Amazon, whose Jeff Bezos is an investor in 37signals, the Chicago-based provider of project management software that employs Heinemeier Hansson. For the record, there is plenty of RoR presence in Amazon Web Services.

Engine Yard is an Infrastructure-as-a-Service (IaaS) provider that has optimized the RoR stack for runtime. Although hardly the only cloud provider out there that supports RoR development, Engine Yard’s business is currently on a 2x growth streak. The funding stages the company either for an IPO or a buyout.

At this point the script sounds similar to SpringSource which, of course, just got acquired by VMware, and is launching a development and runtime cloud that will eventually become VMware’s Java counterpart to Microsoft Azure. It’s tempting to wonder whether a similar path will become reality for Engine Yard. The answer is that the question itself is too narrow. It is inevitable that a development and runtime cloud paired with enterprise plumbing (e.g., OS, hypervisor) will materialize for Ruby on Rails. With its $19 million funding, Engine Yard has the chance to gain critical mass mindshare in the RoR community – but don’t rule out rivals like Joyent yet.

07.06.09

Software Abundance in a Downturn

Posted in Application Development, Cloud, Desktop Apps, Enterprise Applications, SaaS (Software as a Service), Supply Chain Management, Technology Market Trends at 3:33 pm by Tony Baer

The term “get” is journalism jargon (remember journalism?) for a hard-to-get interview. And so we’re jealous once more of one of RedMonk/Michael Cote’s latest gets: Grady Booch at last month’s Rational Software Conference.

In a rambling discussion, Booch made an interesting point during his sitdown about software being an abundant resource and how that jibes with the current economic slowdown. Although his eventual conclusion – that it pays to invest in software because it can help you deal with a downturn more effectively (and derive competitive edge) – was not surprising, the rationale was.

It’s that Booch calls software an abundant resource. Using his terms, it’s fungible, flexible; there’s lots of it and lots of developers around; and better yet, it’s not a natural extractive resource subject to zero-sum economics. That’s for the most part true although, unless you’re getting your power off solar, some resource must be consumed to provide the juice to your computer.

Booch referred to Clay Shirky’s concept that a cognitive surplus now exists as a result of the leisure time freed up by the industrial revolution. He contends that highly accessible, dispersed computing networks have started to harness this cumulative cognitive resource. Exhibit A was his and IBM’s Martin Wattenberg’s back-of-the-envelope calculation that Wikipedia alone has provided an outlet for 100 million cumulative hours of collected human thought. That’s a lot of volunteer effort toward what, depending on your viewpoint, is either the creation or the organization of human wisdom. Of course other examples are the open source software that floats in the wild like the airborne yeasts that magically transform grains into marvelous Belgian Lambics.

Booch implied that software has become an abundant resource, although he deftly avoided the trap of calling it “free,” as that term brings plenty of baggage with it. As pioneers of today’s software industry discovered back in the 1980s, the fact that software comes delivered on cheap media (followed today by cheap bandwidth) concealed the human capital value that it represented. There are many arguments over what the value of software is today – is it the proprietary logic, the peace of mind, or the technical support? Regardless of what it is, there is value in software, and it is value that, unlike material goods, is not always directly related to supply and demand.

But of course there is a question as to the supply of software, or more specifically, the supply of minds. Globally this is a non-issue, but in the US the matter of whether there remains a shortage of computer science grads or a shortage of jobs for the few that are coming out of computer science schools is still up for debate.

There are a couple other factors to add to the equation of software abundance.

The first is “free” software; OK, Grady didn’t fall into that rat hole but we will. You can use free stuff like Google Docs to save money on the cost of Microsoft Office, or you can use an open source platform like Linux to avoid the overhead of Windows. Both have their value, but their value is not going to make or break the business fortunes of the company. By nature, free software will be commodity software because everybody can get it, so it confers no strategic advantage to the user.

The second is the cloud. It makes software that is around more readily accessible because, if you’ve got the bandwidth, we’ve got the beer. Your company can implement new software with less of the usual pain because it doesn’t have to do the installation and maintenance itself. Well not totally – it depends on whether your provider is using the SaaS model where they handle all the plumbing or whether you’re using a raw cloud where installation and management is a la carte. But assuming your company is using a SaaS provider or somebody that mediates the ugly cloud, software to respond to your business need is more accessible than ever. As with free or open source, the fact that this is widely available means that the software will be commodity; however, if your company is consuming a business application such as ERP, CRM, MRO, or supply chain management, competitive edge will come in how you configure, integrate and consume that software. That effort will be anything but free.

The bottom line is that Abundant Software is not about the laws of supply and demand. There is both plenty of, and not enough, software and software developers to go around. Software is abundant, but not always the right software, or if it is right, it takes effort to make it righter. Similarly, being abundant doesn’t mean that the software that is going to get your company out of the recession is going to be cheap.

UPDATE — Google Docs is no longer free.

06.08.09

In need of a trigger: Report from Rational Software Conference 2009

Posted in Application Development, Application Lifecycle Management (ALM), Business Intelligence, Cloud, SaaS (Software as a Service) at 11:33 am by Tony Baer

Rational Software Conference 2009 last week was supposed to be “as real as it gets,” but in the light of day proved a bit anticlimactic. A year after ushering in Jazz, a major new generation of products, Rational has not yet made the compelling business case for it. The hole at the middle of the doughnut remains not the “what” but the “why.” Rational uses the calling cry of Collaborative ALM to promote Jazz, but that is more like a call for repairing your software process as opposed to improving your core business. Collaborative might be a good term to trot out in front of the CxO, but not without a business case justifying why software development should become more collaborative.

The crux of the problem is that although Rational has omitted the term Development from its annual confab, it still speaks the language of a development tools company.

With Jazz products barely a year old, if that, you wouldn’t expect there to be much of a Jazz installed base yet. But in isolated conversations (our sample was hardly scientific), we heard most customers telling us that Jazz to them was just another new technology requiring new server applications, which at $25,000 – $35,000 and up are not an insignificant expense; they couldn’t understand the need for adding something like Requirements Composer, which makes it easier for business users to describe their requirements, if they already had RequisitePro for requirements management. They hear that future versions of Rational’s legacy products are going to be Jazz-based (their data stores will be migrated to the Jazz repository), but that is about as exciting to them as the prospect of another SAP version upgrade. All pain for little understood gain.

There are clear advantages to the new Jazz products, but Rational has not yet made the business case. Rational Insight, built on Cognos BI technology, provides KPIs that in many cases are over the heads of development managers. A Jazz product such as Requirements Composer could theoretically stand on its own for lightweight software development processes if IBM sprinkled in the traceability that still requires RequisitePro. The new Measured Capability Improvement Framework (MCIF) productizes the gap analysis assessments that Rational has performed over the years for its clients regarding software processes, with the addition of prescriptive measures that could make such assessments actionable.

But who in Rational is going to sell it? There is a small program management consulting group that could make a credible push, but the vast majority of Rational’s sales teams are still geared towards shorter-fuse tactical tools sales. Yet beyond the tendency of sales teams to focus on products like Build Forge (one of its better acquisitions), the company has not developed the national consulting organization it needs to do solution sells. That should have cleared the way for IBM’s Global Business Services to create a focused Jazz practice, but so far GBS’s Jazz activity is mostly ad hoc, engagement-driven. In some cases, Rational has been its own worst enemy as it talks strategic solutions at the top, while having mindlessly culled some of its most experienced process expertise for software development during last winter’s IBM Resource Action.

Besides telling Rational to do selective rehires, we’d suggest a cross-industry effort to raise the consciousness of this profession. It needs a precursor to MCIF, because the market is just not ready for it yet outside of the development shops that have awareness of frameworks like CMMI. This is missionary stuff, as organizations (and potential partners) like the International Institute of Business Analysis (IIBA) are barely established (a precedent might be organizations like Catalyze, which has heavy sponsorship from iRise). A logical partner might be the program management profession, which is tasked with helping CIOs effectively target their limited software development resources.

Other highlights of the conference included Rational’s long-awaited disclosure of its cloud strategy, and plans for leveraging the Telelogic acquisition to drive its push into “Smarter Products.” According to our recent research for Ovum, the cloud is transforming the software development tools business, with dozens of vendors already having made plays to offer various ALM tools as services. Before this, IBM Rational made some baby steps, such as offering hosted versions of its AppScan web security tests. It is now opening technology previews of private cloud instances that can be hosted inside the firewall, or virtually, using preconfigured Amazon Machine Images of Rational tooling on Amazon’s EC2 raw cloud. Next year Rational will unveil public cloud offerings.

Rational’s cloud strategy is part of a broader-based strategy for IBM Software Group, which in the long run could use the cloud as the chance to, in effect, “mash up” various tools across brands to respond to specific customer pain points, such as application quality throughout the entire lifecycle including production (e.g., Requirements Composer, Quality Manager, some automated testing tools, and Tivoli ITCAM). Ironically, the use case “mashups” that are offered by Rational as cloud-based services might provide the very business use cases that are currently missing from its Jazz rollout.

But IBM Rational still has lots of pieces to put together first, like for starters figuring out how to charge. In our Ovum research we found that core precepts of SaaS including multi-tenancy and subscription pricing may not always apply to ALM.

Finally there’s the “Smarter Products” push, which is Rational’s Telelogic-based tie-in to IBM’s Smarter Planet campaign. It reflects the fact that the software content in durable goods is increasing to the point where it is no longer just a control module that is bolted on; increasingly, software is defining the product. Rational’s foot in the door is that many engineered-product companies (like those in aerospace) are already heavy users of Telelogic DOORS, which is well set up for tracking requirements of very complex systems, and potentially, “systems of systems” where a meta-control layer governs multiple smart products or processes performed by smart products.

The devil is in the details, as Rational/Telelogic has not yet established the kinds of strategic partnerships with PLM companies like Siemens, PTC or Dassault for joint product integration and go-to-market initiatives for converging application lifecycle management with its counterpart for managing the lifecycle of engineered products (Dassault would be a likely place to start, as IBM has had a longstanding reselling arrangement). Roles, responsibilities, and workflows have yet to be developed or templated, leaving the whole initiative with the reality that, for now, every solution is a one-off. The organizations that Rational and the PLM companies are targeting are heavily silo’ed. Smarter Products as a strategy offers inviting long-term growth possibilities for IBM Rational, but at the same time, requires lots of spadework first.

06.03.09

A Silver Lining in the Cloud

Posted in Application Development, BPM, Cloud, Enterprise Integration, Middleware, Rich Internet Apps., SaaS (Software as a Service), SOA & Web Services at 7:13 pm by Tony Baer

Tibco has always been about data – and more recently, processes – in motion. Its heritage is as a company that connects data and applications, providing the mediation that routes and integrates data, and governs the whole process, on its way to its final destination.

So it shouldn’t be surprising in this year of the cloud and virtualization that Tibco has become the latest IT software infrastructure provider to offer a way for its customers to take advantage of the cloud. Initially that will be Amazon EC2, but going forward there are likely to be other clouds – public and private – that Tibco will support.

Its offering, now in beta, is branded Silver, based on the notion that there is a silver lining in the cloud. In this case the lining is mediation and governance of an environment that provides the kind of elasticity that would not otherwise be feasible with dedicated internal environments.

Not surprisingly, Silver is a manifestation in the cloud of most, but not all, of Tibco’s Active Matrix SOA middleware for composing, integrating, and transporting services, plus governance of the process to monitor service levels. To get an idea of what services Silver provides, just look at Tibco’s Active Matrix tooling.

For the cloud, Tibco extended many of its Active Matrix tools with new caching and user and session management capabilities to preserve state within a virtual environment. To get a very simple idea of how Silver – Active Matrix in the cloud – differs from using the tools on premises: you compose using a tool that closely resembles the Eclipse-based Active Matrix Business Studio, build a deployment archive, and then set it free. By contrast, if you were composing the same composite service-oriented application on premises, you would have to set up the testing and staging environments, then configure it for deployment on a local server. Tibco manages the underlying plumbing, providing the load balancing, failover, fault tolerance, provisioning and de-provisioning of machine instances (in this case Amazon EC2 AMIs), and service level monitoring.

Silver is still a beta, which should be pretty obvious if you go to the Silver website; it contains only barebones information at this point. Silver is essentially a pure-play Platform-as-a-Service (PaaS) offering that enables you to compose service-oriented applications for the cloud, in the cloud. But as Tibco has always been a technology-driven company, it does a good job of explaining the tooling it offers while not exactly fleshing out the use cases covering the why.

As further evidence that Silver is still a work in progress, Tibco has not taken advantage of all the assets it has to truly tap the potential of composition in the cloud. For instance, while you can orchestrate and compose, you cannot necessarily model and execute the business processes that its BPM suite offers. However, it’s very understandable that BPM did not make it into the beta as that is a major chunk of technology that only addresses a portion of what its customer base needs.

But what is more surprising is that Silver for now ignores one of the most obvious use cases for the cloud: the ability to compose mashups on the fly, putting a front end on the services that customers are composing (the company has its own Ajax tools). As the cloud is in essence a lightweight approach to application deployment, so are mashups to integration. It would be logical icing on the cake were Tibco to pitch Silver as an easy composition environment for piecing together processes, services, and cool pieces of the web so business users could readily gain the flexibility of orchestration, and the accessibility and ease of deployment of the cloud.

05.11.09

What do Smarter Planets and Oil Refineries have in common?

Posted in BPM, Business Intelligence, Cloud, Data Management, Green, Java, Middleware, Networks, SaaS (Software as a Service), SOA & Web Services, Supply Chain Management, Technology Market Trends at 12:20 pm by Tony Baer

Last week we paid our third visit in as many years to IBM’s Impact SOA conference. Comparing notes, if 2007’s event was about engaging the business, and 2008 was about attaining the basic blocking and tackling to get transaction system-like performance and reliability, this year’s event was supposed to provide yet another forum for pushing IBM’s Smarter Planet corporate marketing. We’ll get back to that in a moment.

Of course, given that conventional wisdom or hype has called 2009 the year of the cloud (e.g., here and here), it shouldn’t be surprising that cloud-related announcements grabbed the limelight. To recap: IBM announced WebSphere CloudBurst, an appliance that automates rapid deployment of WebSphere images to the private cloud (whatever that is — we already provided our two cents on that), and it released BlueWorks, a new public cloud service for whiteboarding business processes that is IBM’s answer to Lombardi Blueprint.

But back to our regularly scheduled program: IBM has been pushing Smarter Planet since the fall. It came in the wake of a period when the rapid run-up and volatility in natural resource prices and global instability prompted renewed discussions over sustainability at decibel levels not heard since the late 70s. A Sam Palmisano speech delivered before the Council on Foreign Relations last November laid out what have since become IBM’s standard talking points. The gist of IBM’s case is that the world is more instrumented and networked than ever, which in turn provides the nervous system so we can make the world a better, cleaner, and for companies, a more profitable place. A sample talking point: 67% of electrical power generation is lost to network inefficiencies – this during a period of national debate over setting up smart grids.

IBM’s Smarter Planet campaign is hardly anything new. It builds on Metcalfe’s law, which posits that the value of a network grows with the square of the number of users connected to it. Put another way, a handful of sensors provides only narrow slices of disjoint data; fill that network in with hundreds or thousands of sensors, add some complex event processing logic to it, and now you can not only deduce what’s happening, but do things like predict what will happen or provide economic incentives that change human behavior so that everything comes out copasetic. Smarter Planet provides a raison d’etre for IBM’s Business Events Processing initiatives that it began struggling to get its arms around last fall. It not only makes use of IBM’s capacity for extreme scale computing, but also prods it to establish relationships with new sets of industrial process control and device suppliers that are quite different from the world of ISVs and systems integrators.

So, if you instrumented the grid, you could take advantage of transient resources such as winds that this hour might be gusting in the Dakotas, and in the next hour, in the Texas Panhandle, so that you could even out generation to the grid and supplant more expensive gas-fired generation in Chicago. Or, as described by a Singaporean infrastructure official at the IBM conference, you can apply sensors to support congestion pricing, which rations scarce highway capacity based on demand, with the net result that it ramps up prices to what the market will bear at rush hour and funnels those revenues to expanding the subway system (too bad New York dropped the ball when a similar opportunity presented itself last year). The same principle could make supply chains far more transparent and driven by demand with real-time predictive analytics if you could somehow correlate all that RFID data. The list of potential opportunities, all of which optimize consumption of resources in a resource-constrained economy, is limited only by the imagination.

In actuality, what IBM described is a throwback to common practices established in highly automated industrial process facilities, where closed-loop process control has been standard practice for decades. Take oil refineries for example. The facilities required to refine crude are extremely capital-intensive, the processes are extremely complex and intertwined, and the scales of production so huge that operators have little choice but to run their facilities flat out 24 x 7. With margins extremely thin, operators are under the gun to constantly monitor and tweak production in real time so it stays in the sweet spot where process efficiency, output, and costs are optimized. Such data is also used for predictive trending to prevent runaway reactions and avoid potential safety issues such as a dangerous build-up of pressure in a distillation column.
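For readers outside the process industries, here is a toy Python sketch of the closed-loop pattern described above – measure, compare to a setpoint, adjust, repeat. Real refinery control uses far more sophisticated schemes (PID and model-predictive control), and the setpoint, gain, and plant response here are made up.

    # A toy proportional controller: each pass closes the loop between
    # measurement and adjustment, nudging the process toward its setpoint.

    setpoint = 350.0       # target column temperature, degrees C (illustrative)
    temperature = 300.0    # measured value
    gain = 0.4             # proportional gain

    for step in range(10):
        error = setpoint - temperature      # how far we are from the sweet spot
        adjustment = gain * error           # controller output
        temperature += adjustment           # idealized plant response
        print(f"step {step}: temperature={temperature:.1f} error={error:.1f}")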

So at base, a Smarter Planet is hardly a radical idea; it seeks to emulate what has been standard practice in industrial process control going back at least 30 years.

04.27.09

Can Software Development Aspire to the Cloud?

Posted in Application Development, Cloud, Java, SaaS (Software as a Service), Technology Market Trends at 1:00 am by Tony Baer

As we’re all too aware, the tech field has always been all too susceptible to the fad of the buzzword – which of course gave birth to another buzzword, popularized by Gartner’s Hype Cycles. But in essence the tech field is no different from the worlds of fashion or the latest wave in electronic gizmos – there’s always going to be some new gimmick on the block.

But when it comes to cloud, we’re just as guilty as the next would-be prognosticator, as it figured into several of our top predictions for 2009. In a year of batten-down-the-hatches psychology, anything that saves or postpones costs and avoids long-term commitment while preserving all options (to scale up or ramp down) is going to be quite popular, and under certain scenarios, cloud services support all that.

And so it shouldn’t be surprising that roughly a decade after Salesforce.com re-popularized the concept (remember, today’s cloud is yesterday’s time-sharing), the cloud is beginning to shake up how software developers approach application development. But in studying the extent to which the cloud has impacted software development for our day job at Ovum, we came across some interesting findings that in some cases had their share of surprises.

ALM vendors, like their counterparts on the applications side, are still figuring out how the cloud will impact their business. While there is no shortage of hosted tools addressing different tasks in the software development lifecycle (SDLC), major players such as IBM/Rational have yet to show their cards. In fact, there was a huge gulf of difference in cloud-readiness between IBM and HP, whose former Mercury unit has been offering hosted performance testing capabilities for 7 or 8 years, and is steadily expanding hosted offerings to much of the rest of its BTO software portfolio.

More surprising was the difficulty of defining what Platform-as-a-Service (PaaS) actually means. There is the popular definition and then the purist one. For instance, cloud service providers such as Salesforce.com employ the term PaaS liberally in promoting their Force.com development platform; in actuality, development for the Force.com platform uses coding tools that don’t run on Salesforce’s servers, but locally on the developer’s own machines. Only once the code is compiled is it migrated to the developer’s Force.com sandbox, where it is tested and staged prior to deployment. For now, the same principle applies to Microsoft Azure.

That throws plenty of ambiguity on the term PaaS – does it refer to development inside the cloud, or development of apps that run in the cloud? The distinction is important, not only to resolve marketplace confusion and realistically manage developer expectations, but also to highlight the reality that apps designed for running inside a SaaS provider’s cloud are going to be architecturally different than those deployed locally. Using the Salesforce definition of PaaS, apps that run in its cloud are designed on the assumption that the Salesforce engine handles all the underlying plumbing. In this case, it also highlights the very design of Salesforce’s Apex programming language, which is essentially a stored-procedures variant of Java. It’s a style of development popular from the early days of client/server, where the design pattern of embedding logic inside the database was viewed as a realistic workaround to the bottlenecks of code running from fat clients. Significantly, it runs against common design patterns for highly distributed applications, and of course against the principles of SOA, which are to loosely couple the logic and abstract it from the physical implementation. In plain English, this means that developers of apps to run in the cloud may have to make some very stark architectural choices.
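To make that contrast concrete, here is a rough Python sketch – not Apex, and with hypothetical platform objects, query calls, and endpoint names – of the two styles: logic written against a platform’s own data objects versus the same logic behind a neutral service contract.

    import json
    from urllib import request

    def apply_discount_inside_platform(platform_db):
        """Tightly coupled: logic written against the platform's own data
        objects and persistence calls, portable only to that platform."""
        for order in platform_db.query("Order"):      # hypothetical platform query API
            if order["amount"] > 10_000:
                order["discount"] = 0.05
            platform_db.save(order)                   # hypothetical platform persistence

    def apply_discount_via_service(order, url="https://example.invalid/discounts"):
        """Loosely coupled: the same logic exposed behind a neutral HTTP/JSON
        contract, so the caller need not know where the service runs."""
        payload = json.dumps(order).encode("utf-8")
        req = request.Request(url, data=payload,
                              headers={"Content-Type": "application/json"})
        with request.urlopen(req) as resp:
            return json.loads(resp.read())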

Nonetheless, there’s nothing bad about all that — just that when you write logic inside a PaaS platform, it is like writing to any application platform. It’s nothing different, better, or worse than writing to Oracle or SAP. The only difference is that the code exists in the cloud, for which there may be good operational, budgetary, or strategic reasons to do so.

UPDATE: In a recent column, Andrew Brust described how Microsoft came to terms with this issue during its current rollout of SQL Data Services.

The confusion over PaaS could be viewed as a battle over vendor lock-in. It would be difficult to port an application running in the Salesforce cloud to another cloud provider, or to transition it to on premises, because the logic is tightly coupled to Salesforce’s plumbing. This sets the stage for future differentiation of players like Microsoft, whose Software + Services is supposed to make the transition between cloud and on premises seamless; in actuality, that will prove more difficult unless the applications are written in a strict, loosely coupled, service-oriented manner. But that’s another discussion that applies to all cloud software, not just ALM tools.

But the flipside of this issue is that there are very good reasons why much of what passes for PaaS involves on-premises development. And that in turn provides keen insights as to which SDLC tasks work best in the cloud and which do not.

The main don’ts consist of anything having to do with source code, for two reasons: Network latency and IP protection. The first one is obvious: who wants to write a line of code and wait until it gets registered into the system, only to find out that the server or network connection went down and you’d better retype your code again. Imagine how aggravating that would be with highly complex logic; obviously no developer, sane or otherwise, would have such patience. And ditto for code check-in/check out, or for running the usual array of static checks and debugs. Developers have enough things to worry about without having to wait for the network to respond.

Of more concern, however, is the issue of IP protection: while your program is in source code and not yet compiled or obfuscated, anybody can get to it. The code is naked; it’s in a form that any determined hacker can read. Now consider that unless you’re automating a lowly task like queuing up a line of messages or printers, your source code is business logic that represents in software how your company does business. Would any developer wishing to remain on the payroll the following week dare place code in an online repository that, no matter how rigorous the access control, could be blown away by determined hackers for whatever nefarious purpose?

If you keep your logic innocuous or sufficiently generic (such as using hosted services like Zoho or Bungee Connect), developing online may be fine (we’ll probably get hate mail on that). Otherwise, it shouldn’t be surprising that no ALM vendor has yet or is likely to place code-heavy IDEs or source code control systems online. OK, Mozilla has opened the Bespin project, but just because you could write code online doesn’t mean you should.

Conversely, anything that is resource-intensive, like performance testing, does well with the cloud because, unless you’re a software vendor, you don’t produce major software releases constantly. You occasionally need lots of resources to load and performance test those apps (whose code is compiled by that point anyway). That’s a great use of the cloud, as HP’s Mercury has been doing since around 2001.

Similarly, anything having to do with the social or collaboration aspects of software development lends itself well to the cloud. Project management, scheduling, task lists, requirements, and defect management all suit the cloud well, as these are at core group functions where communication is essential to keeping projects in sync and all members of the team – wherever they are located – literally on the same page. Of course, there is a huge caveat here – if your company designs embedded software that goes into products, it is not a good candidate for the cloud: imagine someone getting hold of Apple’s project plans for the next version of the iPhone.
