BI crashing into the database

Flattening of Big Data architecture has become something of an epidemic. The sheer scale of Big Data has forced the middle and bottom layers of the stack – analytics and data – to converge; the accessibility of SQL married to the scale of Hadoop has driven a similar result. And now the top, middle, and in some cases lower levels of the stack are converging, with BI and transformation sitting atop an increasingly ambitious data tier.

It began with the notion of making BI more self-service: give ordinary people the ability to make ad hoc queries without waiting for IT to clear its backlog. Tools like Tableau, QlikTech, and Spotfire have popularized visualization with intuitive front ends, typically backed by some form of data caching mechanism for materializing views PDQ. Originally these approaches may have amounted to putting lipstick on a pig (e.g., big, ugly, complex SQL databases), but in many cases these tools are packing more back-end functionality so they don’t simply paint pictures, but quickly assemble them. They are increasingly embedding their own transformation tools. While they are eliminating the ETL tier, they are definitely not eliminating the “T” – although that message tends to get blotted out by marketing hyperbole. That in turn is leading to the next step, which is elevating the cache into a full-bore in-memory database. So now you’ve collapsed not only the ETL middle tier, but also the back-end data tier. We’re seeing platforms, like SiSense, that would otherwise be classed as Advanced SQL or NewSQL databases.

That phenomenon is also working its way into the NoSQL side, where we see Platfora packaging not only an in-memory caching tier for Hadoop, but also the means to marshal and transform data and views on the fly.

This is not simply a tale of flattening architecture for its own sake. The ramifications are basic changes to the analytics workflow and lifecycle. Instead of planning your data structures and queries ahead of time, you generate schema and views on demand. Flattening the architecture facilitates this new style.

Traditionally with SQL – both for data warehousing and transaction systems – the process was completely different. You modeled the data and specified the tables at design time based on your knowledge of the content of the data and the queries and reports you were going to generate.

As you might recall, NoSQL was a reaction to the constraints of imposing a schema on the database at design time. By contrast, NoSQL did not necessarily do away with structure; it simply allowed the process to become more flexible. Collect the data, and when it’s time to harvest it, explore it, discover the problem, and then derive the structure. And because in most NoSQL platforms you still retain the data in raw form, you can generate a different schema as the nature of the problem, business challenge, or content of the data itself changes. Just run another series of MapReduce or similar processes to generate new views. Nonetheless, this view of flexible schema was born with assumptions of a batch processing environment.
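
To make this collect-first, structure-later approach concrete, here is a minimal, hypothetical Java sketch – not any particular platform’s format: the raw record stays untouched in storage, and a structure is imposed only at the moment of analysis. The field layout and names are assumptions for illustration.

    import java.util.Arrays;
    import java.util.List;

    // Minimal sketch of deriving structure at read time: raw, tab-delimited log
    // lines stay as-is in storage; today's question imposes today's structure.
    // Field positions and names are hypothetical.
    public class SchemaOnRead {
        static class PageView {
            final String userId;
            final String url;
            final long timestampMillis;
            PageView(String userId, String url, long timestampMillis) {
                this.userId = userId;
                this.url = url;
                this.timestampMillis = timestampMillis;
            }
        }

        // A different question tomorrow could parse the same raw lines into a
        // completely different shape.
        static PageView parse(String rawLine) {
            String[] f = rawLine.split("\t");
            return new PageView(f[0], f[1], Long.parseLong(f[2]));
        }

        public static void main(String[] args) {
            List<String> rawLines = Arrays.asList("u42\t/products/widget\t1361894400000");
            for (String line : rawLines) {
                System.out.println(parse(line).url);
            }
        }
    }

In a Hadoop context, the same parse step would typically sit inside a Map function so that it runs in parallel across the raw files.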

What’s changed is the declining cost of silicon-based storage: Flash (SSD) and memory (DRAM). That has allowed those cute SQL D-I-Y visualization tools to morph into in-memory data platforms, because it is now cheap enough to gang terabytes of memory together. Likewise, it has cleared the way for Oracle and SAP to release in-memory platforms. And on the NoSQL side, it is making the notion of dynamic views from Hadoop thinkable.

We’re only at the beginning of the great rethink of analytic data views, schematizing processes, and architectural refactoring. It fires a shot across the bow of traditional BI players who have built their solutions on traditional schema at design rather than run time; in their case, it will require some significant architectural redesign. The old way will not disappear, as core end-of-period reporting and similar processes will not go away. There will always be a need for data warehouses and BI/reporting tools that provide repeatable, baseline query and reporting. And the analytic and data protection/housekeeping functions provided by established platforms will continue to be in demand. Astute BI and DW vendors should consider these new options as additive; they will be challenged by upstarts offering highly discounted pricing. For established vendors, the key is emphasizing the value-add while they provide the means for taking advantage of the new, more flexible style of schema on demand.

Sadly, just as we’re at the beginning of a new era of dynamic schema and dynamic analytics, there is also a lot of noise, like the dubious proposition that we can eliminate ETL. Folks, we are eliminating a tier, not the process itself. Even with Hadoop, when you analyze data, you inevitably end up forming it into a structure so you can grind out your analytics.

Disregard the noise and hype. You’re not going to replace your data warehouses for routine, mandated processes. But new analytics will become more organic. It’s not simply a phenomenon in the Hadoop world, but in SQL as well. Your analytics infrastructure will flatten, and your schema and analytics will grow more flexible and organic.

Intel’s Hadoop Sleeper

One of the more out-of-the-blue announcements to have dropped last week at Strata was Intel’s entry into the Hadoop space. Our first reactions were (1) what does Intel know about the software market (actually they do own McAfee) and (2) the last thing the Hadoop ecosystem needs is yet another distro to further confuse matters. We had a chance to briefly review it during our Strata write-up, but since then we’ve had a chance for a more detailed drill down. There’s some interesting technology under the covers that could accelerate Hadoop thanks to optimization at the Xeon chip instruction set level.

The headline is that it will speed data crunching; the press release cites an internal benchmark of reducing the analysis of a terabyte of data from 4 hours down to 7 minutes. Admittedly, we don’t know what kind of analysis it was, or whether certain forms of processing will benefit more from chip-level optimization than others, but we’ll accept the general message.

Update: The spec was based on the TeraSort benchmark.

The headline of the initial release pertains to an area that has so far posed an unmet challenge in Hadoop: embedding encryption functions at the hardware level. It uses a specific instruction set in Xeon designed for the Advanced Encryption Standard (AES). It addresses a key conundrum in Hadoop data security: if you have sensitive data, it would be nice to encrypt it, but encryption is such a compute-intensive operation that applying it at terabyte scale would seem impractical. For now, IBM and Dataguise are the only ones providing such capability for Hadoop, and even then, encryption is selective; at the hardware level, the performance differences could be significant. Admittedly, hardware-based encryption is not new; there are appliances on the market that already handle it, but until now nothing that works inside Hadoop.
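
Intel hasn’t published the internals of its implementation here, but as a rough, hedged illustration of the workload in question, here is a minimal Java sketch using the standard JCE APIs; on Xeon chips with AES-NI, recent JVMs map these calls onto the hardware instructions, which is where that kind of speedup would come from. The card number is dummy data.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.IvParameterSpec;

    // Minimal sketch: standard AES encryption via the JCE. On AES-NI capable Xeons,
    // the JVM can execute these primitives in hardware with no code changes.
    public class AesSketch {
        public static void main(String[] args) throws Exception {
            // Generate a 128-bit AES key. Key management at cluster scale is the hard
            // part in practice – which is what the Rhino project mentioned below targets.
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(128);
            SecretKey key = keyGen.generateKey();

            // Encrypt a sensitive value; the ciphertext is what would land on disk.
            Cipher enc = Cipher.getInstance("AES/CBC/PKCS5Padding");
            enc.init(Cipher.ENCRYPT_MODE, key);
            byte[] cipherText = enc.doFinal("4111-1111-1111-1111".getBytes("UTF-8"));
            byte[] iv = enc.getIV();

            // Decryption recovers the original – unlike masking, which is irreversible.
            Cipher dec = Cipher.getInstance("AES/CBC/PKCS5Padding");
            dec.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
            System.out.println(new String(dec.doFinal(cipherText), "UTF-8"));
        }
    }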

Like Cloudera, Intel is also competing with its own proprietary Hadoop management tooling, which in this case incorporates patented auto-tuning technology for optimizing clusters.

The Intel release is based on technology developed for HPC clusters, and was initially developed for several Chinese telco clients. It is a hybrid of proprietary and open source technologies. Obviously, optimization at the chip level is proprietary, but accompanying refinements to HDFS, HBase, and Hive to work with Intel processors are being contributed back to open source. Additionally, Intel is co-sponsoring a new open source project, GraphLab, which will develop a graph processing framework to rival the existing Apache Giraph project; both are intended to provide more efficient alternatives to MapReduce for graph processing. Intel is also backing several nascent open source initiatives, including Panthera, a SQL parser announced last fall, and Rhino, a framework for encryption and key management intended to be seeded across various Hadoop projects (components); don’t confuse the latter with the similarly named server-side JavaScript engine.

Clearly, Intel’s entry into the Hadoop space raises the bar on performance. There are a number of different paths to the same summit: Cloudera’s Impala is a framework that is supposed to speed SQL processing while avoiding the bottleneck of MapReduce; Hortonworks is proposing the Tez runtime and Stinger interactive query projects to accomplish a similar end; Greenplum has fused its MPP SQL database with Hadoop, using HDFS for storage, under the Pivotal HD system; while MapR swaps out HDFS altogether for a higher-performance, NFS-compatible file system.

Intel enters an increasingly competitive and forking Hadoop market where there is contention at almost every layer of the stack. Although Intel has sizable enterprise software businesses, they sit mostly at the lower levels of the stack (e.g., hardware optimization, security). So it is new to the database level of the stack. On one hand, the Big Data software unit of Intel will be at most a tiny speck of its overall business; but as a means to an end (selling more Hadoop boxes), it is potentially very strategic.

The question is whether Intel is better off competing head to head with its own hardware-optimized platform, or forming great OEM deals with the players that are gaining critical mass in the Hadoop space, which could sell even more Xeon servers. Intel has announced a number of opening-round partners, including several data platforms on the other side of the Hadoop divide. Among the most interesting are SAP, where chip-optimized Hadoop would form a natural side-by-side install with the in-memory HANA database; and Teradata, whose Aster unit already offers an appliance with rival Hortonworks (could the Intel distro become a virtual extension of the Teradata Extreme Data Appliance?).

There’s little question that Hadoop could use a hardware kick.

Strata 2013 debrief: Enterprise-ready Hadoop Wars heat up

We’re in the thick of analyst conference season – Informatica last week, SAS tomorrow. So on this Sunday afternoon between gigs, we’re digesting what went down at Strata 2013 in Santa Clara last week. It was kind of a frustrating day in that we had limited time, were scheduled wall to wall with meetings, and missed what were likely some fascinating sessions. But we got a sense of some dominant themes: Harden Hadoop for the enterprise, and take the SQL world to Hadoop.

The Hadoop vendor ecosystem is filling in – new players with their own distros, and new capabilities focused on making Hadoop more enterprise grade. The field is early enough that the approaches are still quite diverse – it’s time to invent, not consolidate. Let the games proceed.

EMC got the jump early in the week by announcing the grafting of its own Greenplum Advanced SQL analytic data store onto Hadoop – basically, the Greenplum MPP database squooched (wanted an excuse to use a “word” like that) atop HDFS. Tastes like a SQL analytic database, scales like Hadoop. Cloudera Impala will soon go GA, branded as RTQ (Real-Time Query). Not to be outdone, Hortonworks, which works through the official Apache Hadoop project itself, announced a couple of responding initiatives: the Tez runtime and Stinger interactive query engine. You wouldn’t be seeing all these efforts to make Hadoop interactive if the demand weren’t out there; while Hadoop as a platform for extending the range of analytics has become very compelling to enterprises, they clearly expect that the platform must be SQL-interactive if it is to become a part of their analytic system portfolio.

While we’ve been expending electrons on the SQLization of Hadoop, the next stage of hardening is rapidly emerging: make Hadoop and Hadoop data more governable and secure. This involves capabilities such as data masking (where you permanently obliterate sensitive pieces of data), data encryption (where you can recover the original data), activity monitoring (who does what), data lineage (where the data came from, and who has done what to it), and of course more fine-grained access control (preferably role-based) that picks up where Kerberos authentication leaves off. The pieces are just beginning to fall into place.
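
To make the distinction between the first two capabilities concrete, here is a minimal, hypothetical Java sketch: masking permanently destroys the sensitive digits, whereas encryption keeps them recoverable for anyone holding the key. The regex and card format are illustrative assumptions, not any vendor’s product logic.

    // Minimal sketch contrasting masking with encryption: masking permanently
    // obliterates the sensitive digits, so there is nothing to recover later.
    public class MaskingSketch {
        // Replace every digit except the last four with an asterisk.
        static String maskCardNumber(String pan) {
            return pan.replaceAll("\\d(?=\\d{4})", "*");
        }

        public static void main(String[] args) {
            System.out.println(maskCardNumber("4111111111111111")); // ************1111
        }
    }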

Dataguise, a niche player in data obfuscation that relaunched itself in the Hadoop space last year, has had an encryption product out for roughly six months and has drawn several customers; it promotes a self-learning feature that discovers sensitive data (e.g., credit card numbers), selectively encrypts it, and then acts only when data is changed. IBM already has capabilities in Optim that are typically used when pulling data from an external database; a user-defined function can mask data in Hadoop, or mask it as it is drawn from Hadoop. IBM offers data masking and activity monitoring, a capability that Cloudera just announced. Specifically, Cloudera’s new Navigator tool places agents (like everybody else, they characterize them as “lightweight”) on HDFS, Hive, and HBase, and you can configure them. For instance, the traffic on Hive is likely to be a fraction of that for HBase, which is more interactive, so you can configure monitoring of event changes to data accordingly. And then we came across Revelytix, which focuses on data lineage.

Then out of the blue, Intel swooped in with the announcement of its own Hadoop distribution. As if that was the last thing the world needed. But Intel has carved some interesting angles: it is utilizing the native instruction set of the Xeon processor to move encryption and I/O optimization directly into the chip. Intel’s play addresses the issue that these processes are resource-heavy, a point where the sheer size of Hadoop data stores adds insult to injury. And that is not to mention that embedding encryption in hardware lessens the load on developers. Intel has drawn a number of partners including SAP, where integration with the HANA in-memory platform offers some interesting Fast Data possibilities. So far we’ve missed signals with Intel, but will speak with them later next week to get a better idea of where they hope to take hardware optimization with Hadoop.

Loose ends: Time is running out on us, but coming out of this week, there are several issues running in the back of our minds:
• Hive – we thought this was a done deal. Hive is one of the earliest components of Hadoop; it was designed when MapReduce was the predominant processing pattern and the jobs that spawn its metadata were batch in nature. We were surprised that the debate over Hive’s use remains very, very live. The issue is how dynamic Hive can become – yes, it can support interactive queries, but is it based on metadata that is current? We sense that this will become another area for vendor differentiation.
• Apache Hadoop project – This could be spin, but there is sniping behind the scenes that the Hadoop project is no longer so broad-based when it comes to contributions. The flip side is that arguments over whether a particular vendor has enough (or any) committers ring a bit hollow. The operative question for enterprises is whether their distro of Hadoop is and will remain well supported.
• Resource management – this one has multiple angles. Of course there is debate over YARN. It is supposed to be the über resource manager of Hadoop, so MapReduce jobs don’t collide with those of other frameworks that may have different (and conflicting) demands on processing and data access. There’s active debate over whether YARN has sufficiently weaned itself off its MapReduce batch lineage, or whether it should be a batch-oriented sub-manager in a scheme where there is yet another layer of control. The counterargument is that this may make life (or at least levels of control) far too complex. Expect vendor differentiation here.

SQL collides into Hadoop

It’s going to be quite a whirlwind this week. Informatica Analyst conference tomorrow, wall-to-wall meetings at Strata Wed afternoon, before heading off to the mountains of Colorado for the SAS Analyst meet next week.

So EMC Greenplum caught us at an opportune time in the news cycle. They chose the day before Strata to saturate the airwaves with the announcement that they are staking their future on Hadoop.

Specifically, they have come up with a product with a proliferation of brands. EMC is the parent company, Greenplum is the business unit and the Advanced SQL MPP database, Pivotal HD is the branding of their new SQL/Hadoop offering, and guess what, did we forget to tell you about the HAWQ engine that provides the interactivity with Hadoop? Sounds like branding by committee.

(OK, HAWQ is the code name for the project, but Greenplum has been promoting it quite prominently. Maybe they might want to tone it down, as the HAWQ domain is already claimed.)

But it is an audacious move for Greenplum. While its rivals are putting Hadoop either at arm’s length or in Siamese-twin deployments, Greenplum is putting its engine directly into Hadoop, sitting atop HDFS. The idea is that to the enterprise user, it looks like Greenplum, but with the scale-out of HDFS underneath. Greenplum is not alone in looking to a singular Hadoop destination for Big Data analytics; Cloudera is also pushing heavily in that direction with Impala. And while Hortonworks has pushed coexistence with its HCatalog Apache incubator project and close OEM partnerships with Teradata Aster and Microsoft, it is responding with the announcement of the Tez runtime and Stinger interactive Hadoop query projects.

These developments simply confirm what we’ve been saying: SQL is converging with Hadoop, and for good reason. For Big Data – and Hadoop – to get accepted into the mainstream enterprise, it has to become a first-class citizen with IT, the data center, and the business. That means (1) reasonably mapping to the skills (SQL, Java) that already exist, and extending from there; (2) fitting in with the databases, applications, and systems management practices (e.g., storage, virtualization) by which the data center operates; and (3) starting the analytics by covering the ground that the business understands.

For enterprises, these announcements represent vendors placing stakes in the ground; these are newly announced products that in many cases are still using pre-release technology. But it is important to understand the direction of the announcements, what this means for the analytics that your shop produces, and how your future needs are to be met.

Clearly, Hadoop and its programming styles (for now MapReduce remains the best known) offer a new approach for a new kind of analytic to connect the dots. But for enterprises, the journey to Big Data and Hadoop will be more evolutionary.

Fast Data — the TV show

We’ve been talking about Fast Data over the past year, and so has Oracle. Last week we had the chance to make it a dialogue when we were interviewed by Hasan Rizvi, who heads Oracle’s middleware business as Executive Vice President of Oracle Fusion Middleware and Java. The podcast, which will also include an appearance by Oracle customer Turkcell, will go live on February 27. You can sign up for it here.

Hadoop spinning YARNs

With the Strata 2013 Santa Clara conference about to kick into high gear a week from now, we’re bracing for a wave of SQL-related announcements. You won’t hear a lot about this in the vendor announcements, but behind the scenes, there’s a major disruption occurring that will determine whether MapReduce and other products or frameworks play friendly with each other on Hadoop.

MapReduce has historically been the yin to Hadoop’s yang. Historically, the literature about Hadoop invariably mentioned MapReduce, often within the same sentence. So excuse us for having wondered, once upon a naïve time, if they were synonymous.

MapReduce is the processing framework that put Hadoop on the map because it so effectively took advantage of Hadoop’s scalable Internet data center-style architecture. In and of itself, MapReduce is a generic idea for massively parallel computing: break a job into multiple concurrent threads (Map) and then consolidate them (Reduce) to get a result. The MapReduce framework itself was written for Hadoop’s architecture. It pushes Map operations directly to Hadoop data nodes; each operation is completely self-contained (i.e., it supports massively parallel, shared-nothing operation); it treats data as the key-value pairs that Hadoop uses; and it works directly with Hadoop’s JobTracker and TaskTracker to provide a closed-loop process for checking and submitting correctly formed jobs and tracking their progress to completion (where the results of each Map are shuffled together as part of the Reduce phase).
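
For readers who have never written one, the canonical word-count job shows the shape of the framework; this is a minimal sketch against the Hadoop mapreduce API, trimmed of error handling and configuration detail.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
        // Map: runs on the data nodes, close to each block of input, and emits
        // (word, 1) key-value pairs.
        public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();
            @Override
            protected void map(LongWritable offset, Text line, Context context)
                    throws IOException, InterruptedException {
                for (String token : line.toString().split("\\s+")) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }

        // Reduce: the framework shuffles all values for a given key to one reducer,
        // which consolidates them into a single count.
        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> counts, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable c : counts) {
                    sum += c.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenMapper.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Iterative algorithms chain several such jobs together, with the output path of one cycle becoming the input path of the next.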

A key advantage of MapReduce is that it treats not only individual Map operations as self-contained, but also each MapReduce cycle as a self-contained operation. That allows huge flexibility to solve problems iteratively through a chained series of MapReduce cycles. Such a process proved extremely effective for crunching through petabytes of data.

Yet, that advantage is also a drawback: each MapReduce cycle is extremely read/write-intensive, as each MapReduce step is written to disk (rather than cached), which makes the process time-consuming and best suited for batch operation. If anything, the trend in enterprise analytics has been towards interactive and in some cases real-time operation, but Hadoop has been off limits to that – until recently.

As we’ve noted, with the convergence of SQL and Hadoop, we believe that the predominant theme this year for Hadoop development is rationalization with the SQL world. While of course there is batch processing in the SQL world, the dominant mode is interactive. But this doesn’t rule out innovation in other directions with Hadoop, as the platform’s flexibility could greatly extend and expand the types of analytics. Yes, there will be other types of batch analytics, but it’s hard to ignore the young elephant in the room: interactive Hadoop.

Enter YARN. As we said, there was a good reason why we used to get confused between Hadoop and MapReduce. Although you could run jobs in any style that could scan and process HDFS files, the only framework you could directly manage with core Hadoop was MapReduce. YARN takes the resource management piece out of MapReduce. That means (1) MapReduce can just be MapReduce and (2) you can use the same resource manager to run other processing frameworks. YARN is a major element of the forthcoming Hadoop 2.0, which we expect to see formally released around Q3.
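
As a concrete, hedged illustration of that separation: the sketch below submits an ordinary MapReduce job onto a YARN-managed cluster purely through configuration. The host name is a placeholder; the point is that nothing in the job itself is YARN-specific, because resource management is now YARN’s concern rather than MapReduce’s.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Minimal sketch, assuming a Hadoop 2.x client: an identity pass-through job
    // (the default Mapper and Reducer copy input to output) handed to YARN for
    // scheduling via configuration alone. Host names are hypothetical.
    public class SubmitOnYarn {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("mapreduce.framework.name", "yarn");              // delegate scheduling to YARN
            conf.set("yarn.resourcemanager.address", "rm-host:8032");  // placeholder ResourceManager
            Job job = Job.getInstance(conf, "identity-job-on-yarn");
            job.setJarByClass(SubmitOnYarn.class);
            job.setOutputKeyClass(LongWritable.class);   // default TextInputFormat keys
            job.setOutputValueClass(Text.class);         // default TextInputFormat values
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }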

That’s the end goal with YARN; it’s still a work in progress, as is all of Hadoop 2.0. At this point, YARN has been tested at scale at Yahoo – over 30,000 nodes and 14 million applications, reports Arun Murthy in his blog (as release manager, he’s in charge of herding cats for Hadoop 2.0). OK, so YARN has been tested at scale (MapReduce could do the same thing), but it still needs some API work.

So what other frameworks will be supporting YARN? We had a chat with Arun a couple of weeks back to get a better idea of what’s shakin’. It’s still early days; for now, Apache HAMA (a rather unfortunate name in our view), which you could imagine as MapReduce’s scientific computing cousin, supports YARN. Others are still works in progress. Giraph, an incubating Apache project that addresses graph processing, will likely join the fray. Others include Spark, a framework for in-memory cluster computing (it provides the engine behind Shark, an Apache Hive-compatible data warehousing system that is supposed to run 100x faster than Hive). Pervasive Software (about to be acquired by Actian) has gone on record that its DataRush engine will run under YARN. We wouldn’t be surprised if Twitter Storm, which focuses on distributed, real-time processing of streaming data, also comes under the YARN umbrella.

There are of course other frameworks emerging that may or may not support YARN. Cloudera already supports a prerelease version of YARN as part of CDH 4, but it has not stated whether Impala, its own open source SQL-compatible MPP data warehousing framework, will run under YARN. Speaking of Impala, there are a number of other approaches emerging for making Hadoop more interactive or real-time, such as Platfora, which adapts a common approach from the relational world for tiering “hot” data into memory. There are others, like Hadapt and Splice Machine, that are inserting SQL directly into HDFS.

The 64-petabyte question, of course, is whether everybody is going to play nice and rational and make their frameworks or products work with YARN. In essence, it’s a power-grab question – should I let the operation of my own product or framework be governed by a neutral resource manager, or can that resource manager fully support my product’s style of execution? The answer is a matter of both technology and market maturity.

On the technology end, there’s the question of whether YARN can get beyond its MapReduce (batch) roots. The burden of proof is on the YARN project folks for demonstrating, not only that their framework works at scale and supports the necessary APIs, but also that it can support other styles such as interactive or real-time streaming modes, and that it can balance workloads as approaches from the database world, such as data tiering, require their own unique optimizations.

The commercial end of the argument is where the boundary between open source and commercial value-add (proprietary or non-Hadoop open source) lies. It’s a natural rite of passage for any open source platform that becomes a victim of its own success. And it’s a question for enterprises to consider when they make their decision: ultimately, it’s about the ecosystem or club that they want to belong to.

The Other Shoe Drops: SAP puts ERP on HANA

It was never a question of whether SAP would bring its flagship product, Business Suite, to HANA, but when. And when I saw this while parking the car at my physical therapist over the holidays, I should’ve suspected that something was up: SAP at long last was about to announce … this.

From the start, SAP has made clear that HANA was not a technical curiosity to be positioned as some high-end niche product or sideshow. In the long run, SAP was going to take HANA to Broadway.

SAP product rollouts on HANA have proceeded in logical, deliberate fashion. Start with the lowest hanging fruit, analytics, because that is the sweet spot of the embryonic market for in-memory data platforms. Then work up the food chain, with the CRM introduction in the middle of last year – there’s an implicit value proposition for having a customer database on a real-time system, especially while your call center reps are on the phone and would like to either soothe, cross-sell, or upsell the prospect. Get some initial customer references with a special purpose transactional product in preparation for taking it to the big time.

There’s no question that in-memory can have real impact, from simplifying deployment to speeding up processes and enabling more real-time agility. Your data integration architecture is much simpler and the amount of data you physically must store is smaller. SAP provides a cute video that shows how HANA cuts through the clutter.

For starters, when data is in memory, you don’t have to denormalize or resort to tricks like sharding or striping of data to enhance access to “hot” data. You also don’t have to create staging servers to perform ETL if you want to load transaction data into a data warehouse. Instead, you submit commands or routines that, thanks to processing speeds that SAP claims are up to 1000x faster than disk, convert the data almost instantly to the form in which you need to consume it. And when you have data in memory, you can perform more ad hoc analyses. In the case of production and inventory planning (a.k.a. the MRP portion of ERP), you could run simulations when weighing the impact of changing or submitting new customer orders, or judging the impact of changing sourcing strategies when commodity prices fluctuate. Beta customer John Deere achieved positive ROI based solely on the benefits of implementing it for pricing optimization (SAP has roughly a dozen customers in ramp-up for Business Suite on HANA).
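
A hedged illustration of the “no staging server” point: with the transactional tables resident in memory, an ad hoc aggregate can be issued straight against them over JDBC rather than being exported to a separate warehouse first. The connection URL, credentials, and table and column names below are placeholders, not SAP’s actual schema.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Minimal sketch: an ad hoc aggregation run directly against in-memory
    // transactional tables over JDBC. All identifiers here are hypothetical.
    public class AdHocOrderAnalysis {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:example://in-memory-host:30015", "analyst", "secret");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT material, SUM(quantity) AS open_demand " +
                     "FROM sales_order_items WHERE status = 'OPEN' " +
                     "GROUP BY material ORDER BY open_demand DESC")) {
                while (rs.next()) {
                    System.out.println(rs.getString("material") + " -> " + rs.getLong("open_demand"));
                }
            }
        }
    }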

It’s not a question of whether you can run ERP in real time. No matter how fast you construct or deconstruct your business planning, there is still a supply chain that introduces its own lag time. Instead, the focus is on how to make enterprise planning more flexible, enhanced with built-in analytics.

But how hungry are enterprises for such improvements? To date, SAP has roughly 500 HANA installs, primarily for Business Warehouse (BW), where the in-memory data store was a logical upgrade for analytics and where demand for in-memory is more established. But on the transactional side, it’s more of an uphill battle, as enterprises are not clamoring to conduct forklift replacements of their ERP systems, not to mention their databases as well. Changing both is no trivial matter, and in fact, changing databases is even rarer because of the specialized knowledge that is required. Swap out your database, and you might as well swap out your DBAs.

The best precedent is Oracle, which introduced Fusion Applications two years ago. Oracle didn’t necessarily see Fusion as a replacement for E-Business Suite, JD Edwards, or PeopleSoft. Instead it viewed Fusion Apps as a gap filler for new opportunities among its installed base or for the rare greenfield enterprise install. We’d expect no less from SAP.

Yet in the exuberance of rollout day, SAP was speaking of the transformative nature of HANA, claiming it “Reinvents the Real-Time Enterprise.” It’s not the first time that SAP has positioned HANA in such terms.

Yes, HANA is transformative when it comes to how you manage data and run applications, but let’s not get caught down another path to enterprise transformation. We’ve seen that movie before, and few of us want to sit through it again.

So what does IBM mean when they say they’re in the solutions business?

Ever since IBM exited the applications business, it has been steadily inching its way back up the value chain from pure infrastructure software. IBM has over the past few years unleashed a string of initiatives seeking to deliver not only infrastructure software and the integration services to accompany it, but gradually more bits of software that deliver content aimed at the needs of specific scenarios in specific verticals. Naturally, with a highly diversified organization like IBM, there have been multiple initiatives with, of course, varying levels of success.

It started with the usual scenario among IT service providers seeking to derive reusable content from client engagements. Then followed a series of acquisitions for capabilities targeted at vertical industries: fixed asset management for capital-intensive sectors such as manufacturing or utilities; product information management for consumer product companies; commerce for B2B transactions; online marketing analytic capabilities, and so on. Then came the acquisition of Webify in 2007, which we thought would lead to a new generation of SOA-based, composite vertical applications (disclosure: we were still drinking the SOA Kool-Aid at the time). At the time, IBM announced there would be Business Fabric SOA frameworks for telco, banking, and insurance, which left us waiting for the shoe to drop for more sectors. Well, that’s all they wrote.

Last year, IBM Software Group (SWG) reorganized into two uber-organizations: Middleware, led by Robert Leblanc, and Solutions, led by Mike Rhodin. Both presented at SWG’s 2011 analyst forum on what the reorg meant. What was interesting was that, for organizational purposes, this was a very ecumenical definition of Middleware: it included many of the familiar products from the Information Management, Tivoli, and Rational brand portfolios, and as such was far more encompassing (e.g., it also included the data layer).

More to the point, once you get past middleware infrastructure, what’s left? At his presentation last year, Rhodin outlined five core areas: Business Analytics and Optimization; Smarter Commerce; Social Business; Smarter Cities; and Watson Solutions. And he outlined IBM’s staged process for developing new markets, expressed as incubation, where the missionary work is done; “make a market,” where the product and market are formally defined and materialized; and “scale a market,” which is self-explanatory. Beyond that, we still wondered what makes an IBM solution.

This year, Rhodin fleshed out the answer. To paraphrase, Rhodin said that “it’s not about creating 5000 new products, but creating new market segments.” Rhodin defined segments as markets that are large enough to have visible impact on a $100 billion corporation’s top line. Not $100 million markets, but instead, add a zero or two to it.

An example is Smarter Cities, which began with the customary reference customer engagements to define a solution space. IBM had some marquee urban infrastructure engagements with Washington DC, Singapore, Stockholm, and other cities, out of which came its Intelligent Operations Center. IBM is at an earlier stage with Watson Solutions, with engagements at WellPoint (for approving procedures) and Memorial Sloan-Kettering Cancer Center (healthcare delivery) fleshing out a Smart Healthcare solution.

Of these, Smarter Analytics (not to be confused with Smart Analytics System – even big companies sometimes run out of original brand names) is the most mature.

The good news is that we have a better idea of what IBM means when it says solutions – it’s not individual packaged products per se, but groups of related software products, services, and systems. And we know at very high level where IBM is going to focus its solutions efforts.

Plus ça change… IBM has always been about software, services, and systems – although in recent years the first two have taken front stage. The flip side is that some of these solutions areas are overly broad. Smarter Analytics is a catch-all covering the familiar areas of business intelligence and performance management (much of the Cognos portfolio), predictive analytics and analytical decision management (much of the SPSS portfolio), and analytic applications (Cognos products tailored to specific line organizations like sales, finance, and operations).

It hasn’t been in doubt that for IBM, solutions mean addressing the line of business rather than just IT. That’s certainly a logical strategy for IBM to spread its footprint within the Global 2000. The takeaway from getting a better definition of IBM’s Solutions business is that it gives us an idea of the scale and the acquisition opportunities that they’re after.

Hadoop and Replication

Conventional wisdom is that once Big Data is at rest, don’t move it or shake it. Akin to “don’t fold, spindle, or mutilate.” But seriously, if mainstream enterprises adopt Hadoop, they will expect it to become more robust. And so you start looking at things like data replication, or at least replication of the NameNode or other components that govern how and where data resides in Hadoop and how operations are performed against it.

So here’s an interesting one to watch: WANdisco buying AltoStor. They are applying replication technology developed for Subversion to Hadoop. We’re gonna check this one out.

It’s happening: Hadoop and SQL worlds are converging

With Strata, IBM IOD, and Teradata Partners conferences all occurring this week, it’s not surprising that this is a big week for Hadoop-related announcements. The common thread of announcements is essentially, “We know that Hadoop is not known for performance, but we’re getting better at it, and we’re going to make it look more like SQL.” In essence, Hadoop and SQL worlds are converging, and you’re going to be able to perform interactive BI analytics on it.

The opportunity and challenge of Big Data from new platforms such as Hadoop is that it opens a new range of analytics. On one hand, Big Data analytics have updated and revived programmatic access to data, which happened to be the norm prior to the advent of SQL. There are plenty of scenarios where programmatic approaches are far more efficient, such as dealing with time series data or graph analysis to map many-to-many relationships. The same trend leverages in-memory data grids such as Oracle Coherence, IBM WebSphere eXtreme Scale, GigaSpaces, and others, where programmatic development (usually in Java) proved more efficient for accessing highly changeable data in web applications where traditional paths to the database would have been I/O-constrained. Conversely, Advanced SQL platforms such as Greenplum and Teradata Aster have provided support for MapReduce-like programming because, even with structured data, sometimes using a Java programmatic framework is a more efficient way to rapidly slice through volumes of data.

Until now, Hadoop has not been for the SQL-minded. The initial path was: find someone to do data exploration inside Hadoop, but once you’re ready to do repeatable analysis, ETL (or ELT) it into a SQL data warehouse. That’s been the pattern with Oracle Big Data Appliance (use Oracle loader and data integration tools) and most Advanced SQL platforms; most data integration tools provide Hadoop connectors that spawn their own MapReduce programs to ferry data out of Hadoop. Some integration tool providers, like Informatica, offer tools to automate parsing of Hadoop data. Teradata Aster and Hortonworks have been talking up the potential of HCatalog – in actuality an enhanced version of Hive with RESTful interfaces, cost optimizers, and so on – to provide a more SQL-friendly view of data residing inside Hadoop.

But when you talk analytics, you can’t simply write off the legions of SQL developers that populate enterprise IT shops. And beneath the veneer of chaos, there is an implicit order to most so-called “unstructured” data that is within the reach of programmatic transformation approaches that in the long run could likely be automated or packaged inside a tool.

At Ovum, we have long believed that for Big Data to cross over to the mainstream enterprise, it must become a first-class citizen with IT and the data center. The early pattern of skunk-works projects, led by elite, highly specialized teams of software engineers at Internet firms to solve Internet-style problems (e.g., ad placement, search optimization, customer online experience), does not fit: those are not the problems of mainstream enterprises. Nor is the model of recruiting high-priced talent to work exclusively on Hadoop sustainable for most organizations. It means that Big Data must be consumable by the mainstream of SQL developers.

Making Hadoop more SQL-like is hardly new
Hive and Pig became Apache Hadoop projects because of the need for SQL-like metadata management and data transformation languages, respectively; HBase emerged because of the need for a table store to provide a more interactive face – although, as a very sparse, rudimentary column store, it does not provide the efficiency of an optimized SQL database (or the extreme performance of some columnar variants). Sqoop in turn provides a way to pipeline SQL data into Hadoop, a use case that will grow more common as organizations look to Hadoop to provide scalable and cheaper storage than commercial SQL. While these Hadoop subprojects did not exactly make Hadoop look like SQL, they provided building blocks that many of this week’s announcements leverage.
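
As one small example of those building blocks in action: Hive’s metadata layer already lets a SQL developer reach data in HDFS through an ordinary JDBC connection. The sketch below assumes a HiveServer2 endpoint; the host and the weblogs table are hypothetical.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Minimal sketch: querying data that lives in HDFS through Hive's SQL-like
    // layer. Assumes a HiveServer2 endpoint; identifiers are hypothetical.
    public class HiveQuerySketch {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://hive-host:10000/default", "analyst", "");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT page, COUNT(*) AS hits FROM weblogs GROUP BY page")) {
                // Behind the scenes this HiveQL compiles into MapReduce jobs, which is
                // exactly the latency problem the announcements below aim to address.
                while (rs.next()) {
                    System.out.println(rs.getString("page") + " -> " + rs.getLong("hits"));
                }
            }
        }
    }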

Progress marches on
One train of thought is that if Hadoop can look more like a SQL database, more operations could be performed inside Hadoop. That’s the theme behind Informatica’s long-awaited enhancement of its PowerCenter transformation tool to work natively inside Hadoop. Until now, PowerCenter could extract data from Hadoop, but the extracts would have to be moved to a staging server where the transformation would be performed for loading to the familiar SQL data warehouse target. The new offering, PowerCenter Big Data Edition, now supports an ELT pattern that uses the power of MapReduce processes inside Hadoop to perform transformations. The significance is that PowerCenter users now have a choice: load the transformed data to HBase, or continue loading to SQL.

There is growing support for packaging Hadoop inside a common hardware appliance with Advanced SQL. EMC Greenplum was the first out of the gate with DCA (Data Computing Appliance), which bundles its own distribution of Apache Hadoop (not to be confused with Greenplum MR, a software-only product that is accompanied by a MapR Hadoop distro). Teradata Aster has just joined the fray with Big Analytics Appliance, bundling the Hortonworks Data Platform; this move was hardly surprising given their growing partnership around HCatalog, an enhancement of the SQL-like Hive metadata layer of Hadoop that adds features such as a cost optimizer and RESTful interfaces that make the metadata accessible without the need to learn MapReduce or Java. With HCatalog, data inside Hadoop looks like another Aster data table.

Not coincidentally, there is a growing array of analytic tools that are designed to execute natively inside Hadoop. For now they are from emerging players like Datameer (providing a spreadsheet-like metaphor, which just announced an app store-like marketplace for developers), Karmasphere (providing an application development tool for Hadoop analytic apps), or a more recent entry, Platfora (which caches subsets of Hadoop data in memory with an optimized, high-performance fractal index).

Yet, even with Hadoop analytic tooling, there will still be a desire to disguise Hadoop as a SQL data store, and not just for data mapping purposes. Hadapt has been promoting a variant where it squeezes SQL tables inside HDFS file structures – not exactly a no-brainer as it must shoehorn tables into a file system with arbitrary data block sizes. Hadapt’s approach sounds like the converse of object-relational stores, but in this case, it is dealing with a physical rather than a logical impedance mismatch.

Hadapt promotes the ability to query Hadoop directly using SQL. Now, so does Cloudera. It has just announced Impala, a SQL-based alternative to MapReduce for querying the SQL-like Hive metadata store, supporting most but not all forms of SQL processing (based on SQL 92; Impala lacks triggers, which Cloudera deems low priority). Both Impala and MapReduce rely on parallel processing, but that’s where the similarity ends. MapReduce is a blunt instrument, requiring Java or other programming languages; it splits a job into multiple, concurrent, pipelined tasks where each step along the way reads data, processes it, writes it back to disk, and then passes it to the next task. Conversely, Impala takes a shared-nothing, MPP approach to processing SQL jobs against Hive; using HDFS, Cloudera claims roughly 4x performance against MapReduce; if the data is in HBase, Cloudera claims performance multiples of up to a factor of 30. For now, Impala only supports row-based views, but with columnar (on Cloudera’s roadmap), performance could double. Cloudera plans to release a real-time query (RTQ) offering that, in effect, is a commercially supported version of Impala.

By contrast, Teradata Aster and Hortonworks promote a SQL MapReduce approach that leverages HCatalog, an incubating Apache project (a superset of Hive) that Cloudera does not currently include in its roadmap. For now, Cloudera claims bragging rights for performance with Impala; over time, Teradata Aster will promote the manageability of its single appliance, and with the appliance it has the opportunity to counter with hardware optimization.

The road to SQL/programmatic convergence
Either way – and this is of interest only to purists – any SQL extension to Hadoop will be outside the Hadoop project. But again, that’s an argument for purists. What’s more important to enterprises is getting the right tool for the job – whether it is the flexibility of SQL or raw power of programmatic approaches.

SQL convergence is the next major battleground for Hadoop. Cloudera is for now shunning HCatalog, an approach backed by Hortonworks and partner Teradata Aster. The open question is whether Hortonworks can instigate a stampede of third parties to overcome Cloudera’s resistance. It appears that beyond Hive, the SQL face of Hadoop will become a vendor-differentiated layer.

Part of the convergence will involve a mix of cross-training and tooling automation. Savvy SQL developers will cross-train to pick up some of the Java or Java-like programmatic frameworks that will be emerging. Tooling will help lower the bar, reducing the degree of specialized skills necessary. And for programming frameworks, in the long run MapReduce won’t be the only game in town. It will always be useful for large-scale jobs requiring brute-force, parallel, sequential processing. But the emerging YARN framework, which deconstructs MapReduce to generalize the resource management function, will provide the management umbrella for ensuring that different frameworks don’t crash into one another by trying to grab the same resources. But YARN is not yet ready for primetime – for now it only supports the batch job pattern of MapReduce. And that means that YARN is not yet ready for Impala, or vice versa.

Of course, mainstreaming Hadoop – and Big Data platforms in general – is more than just a matter of making it all look like SQL. Big Data platforms must be manageable and operable by the people who are already in IT; they will need some new skills and will have to grow accustomed to some new practices (like exploratory analytics), but the new platforms must also look and act familiar enough. Not all announcements this week were about SQL; for instance, MapR is throwing down a gauntlet to the Apache usual suspects by extending its management umbrella beyond the proprietary NFS-compatible file system that is its core IP to the MapReduce framework and HBase, making a similar promise of high performance. On the horizon, EMC Isilon and NetApp are proposing alternatives promising a more efficient file system, but at the “cost” of separating the storage from the analytic processing. And at some point, the Hadoop vendor community will have to come to grips with capacity utilization issues, because in the mainstream enterprise world, no CFO will approve the purchase of large clusters or grids that get only 10 – 15% utilization. Keep an eye on VMware’s Project Serengeti.

They must be good citizens in data centers that need to maximize resources (e.g., virtualization, optimized storage); they must comply with existing data stewardship policies and practices; and they must fully support existing enterprise data and platform security practices. These are all topics for another day.
