Behind Every Successful Mashup…

The cool thing about mashups is that they are quick and easy, and the results are highly visible and accessible. Because they are so easy to assemble, mashups are often treated as disposable apps, and therefore are rarely built with the same rigor as formally designed applications.

Admittedly, given the ease with which you can overlay your contacts list atop a Google map, issues such as architectural integrity, customer privacy protection, or access control may not be at the forefront of your mind. Who cares about data breaches if the mashup remains your own private productivity tool?

Like a game of telephone, things get complicated once somebody else grabs your idea and adds a refinement, which in turn prompts somebody else to spruce up the mashup further. Soon, the modest little screen that you thought would remain ephemeral morphs into the enterprise equivalent of a runaway YouTube video. Once that happens, it's time for someone to start worrying about issues like data currency, reliability, and access privileges. And when you take the next step and start mashing up feeds from your inventory system and UPS to make delivery promises, you have to exercise the same discipline that you would if this were a regular enterprise application.
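
To make that concrete, here's a rough sketch of what the inventory-plus-UPS mashup might look like under the hood. The feed URLs and record shapes are invented for illustration; substitute whatever your inventory system and carrier actually expose.

```typescript
// A rough sketch of the inventory-plus-carrier mashup. The endpoints and
// record shapes are invented; substitute whatever your systems expose.
interface InventoryRecord { sku: string; onHand: number; warehouse: string }
interface TransitEstimate { warehouse: string; destinationZip: string; businessDays: number }

// Join the two feeds and only promise a date when a warehouse both has
// stock and has a transit estimate to the destination.
async function deliveryPromise(sku: string, destinationZip: string): Promise<Date | null> {
  const [inventory, transit] = await Promise.all([
    fetch(`https://inventory.example.com/stock?sku=${sku}`)
      .then((r) => r.json() as Promise<InventoryRecord[]>),
    fetch(`https://shipping.example.com/estimates?zip=${destinationZip}`)
      .then((r) => r.json() as Promise<TransitEstimate[]>),
  ]);

  for (const stock of inventory.filter((i) => i.onHand > 0)) {
    const leg = transit.find((t) => t.warehouse === stock.warehouse);
    if (leg) {
      const promised = new Date();
      // Simplified: treats business days as calendar days.
      promised.setDate(promised.getDate() + leg.businessDays);
      return promised;
    }
  }
  return null; // no promise is better than a broken one
}
```

Even this toy version forces the questions raised above: what happens when one of those feeds is down, stale, or simply wrong?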

We were reminded of this after a conversation with Informatica’s Ashutosh Kulkarni, who recently delivered a presentation at the AJAXWorld West conference on the role of data services in enterprise mashups. His message was that enterprise mashups face the same integration challenges as enterprise applications. Coming from a company that specializes in data integration and transformation, Kulkarni emphasized that you have to pay special attention to the data that you mash up. Call it the Web 2.0 equivalent of the old axiom, “Garbage in, garbage out.”
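
In code terms, "garbage in, garbage out" starts with refusing to display records you can't trust. Here is a minimal sketch, assuming a hypothetical order feed; the field names and the US ZIP rule are stand-ins for whatever data-quality rules your own sources demand.

```typescript
// A minimal feed-validation guard, assuming a hypothetical order feed.
// Field names and the US ZIP rule are placeholders for your own rules.
interface OrderRecord {
  orderId: string;
  customerZip: string;
  updatedAt: string; // ISO 8601 timestamp from the source system
}

function isUsable(record: Partial<OrderRecord>): record is OrderRecord {
  if (!record.orderId || !record.customerZip || !record.updatedAt) return false;
  if (!/^\d{5}(-\d{4})?$/.test(record.customerZip)) return false; // malformed ZIP
  return Number.isFinite(Date.parse(record.updatedAt)); // reject unparseable timestamps
}

// Only validated records reach the screen; the rest are flagged for follow-up.
function filterFeed(raw: Partial<OrderRecord>[]): OrderRecord[] {
  const good = raw.filter(isUsable);
  if (good.length < raw.length) {
    console.warn(`${raw.length - good.length} records rejected by feed validation`);
  }
  return good;
}
```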

Mashups are simply the latest in a long line of technologies that have democratized software development and data access.

It started with the PC, which replaced dumb terminals with real computers on end users' desktops. Succeeding innovations such as visual IDEs broadened ownership of software development beyond computer scientists, creating new career paths for underemployed liberal arts majors. Later, visual reporting tools freed business users from having to constantly bug IT for routine or ad hoc reports, while the web thrust graphic designers to the forefront when it came to crafting the corporate image. And let's not forget Linux, itself initially the domain of departmental web developers finding new uses for decommissioned 486 PCs.

In each case, these democratizing technologies were not taken terribly seriously at the start. It took several years for IT to start exercising control over PC procurement, and several more to impose corporate standards on renegade 4GL and web development, and so on.

Web 2.0 is obviously the heir to this long tradition of back-door innovation. Ajax-style development has simplified matters to the point where assembling dynamic web pages has become an almost casual task. Some of the latest tools let you pull it all off without having to learn JavaScript.

The result is that with each new wave of technology democratization, you can no longer assume that your practitioners are computer scientists. You can no longer assume that those churning out new apps, mashups or not, are necessarily internalizing architectural best practices or considering the compliance implications of their work.

Consequently, all that rigor must now be made explicit. But at the same time, you don't want to tie innovators' hands with red tape. You need a middle tier that keeps the folks innovating at the operational level from wreaking havoc, yet allows them to remain productive.
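
What might that middle tier look like? Here is a deliberately small sketch: a proxy that mashup builders call instead of the source system, which checks a caller key and strips out any fields they aren't cleared to see. The internal hostname, keys, and field list are all made up for illustration.

```typescript
import { createServer } from "node:http";

// A sketch of a governed middle tier (Node 18+ for the built-in fetch).
// Mashup builders call this proxy instead of the source system; it checks a
// caller key and strips fields they are not cleared to see. The hostname,
// keys, and field list below are all placeholders.
const ALLOWED_FIELDS = ["customerId", "city", "state"]; // no PII flows downstream
const VALID_KEYS = new Set(["mashup-team-1", "mashup-team-2"]); // stand-in for real auth

createServer(async (req, res) => {
  const key = req.headers["x-api-key"];
  if (typeof key !== "string" || !VALID_KEYS.has(key)) {
    res.writeHead(403).end("unknown caller");
    return;
  }
  try {
    // Forward the request to the (hypothetical) internal service,
    // then whitelist the fields in its response.
    const upstream = await fetch(`https://customers.internal.example.com${req.url}`);
    const rows: Record<string, unknown>[] = await upstream.json();
    const scrubbed = rows.map((row) =>
      Object.fromEntries(ALLOWED_FIELDS.map((f) => [f, row[f]] as [string, unknown])),
    );
    res.writeHead(200, { "content-type": "application/json" });
    res.end(JSON.stringify(scrubbed));
  } catch {
    res.writeHead(502).end("upstream unavailable");
  }
}).listen(8080);
```

The point isn't this particular proxy; it's that the guardrails live in one place, so the folks assembling screens don't have to reinvent them, or skip them.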

To take an obvious case, suppose you're trying to promise same-day delivery to a client in downtown Boston. If you rely on Google Maps for your routing, getting an accurate result can be a matter of chance. For instance, while the map of downtown Boston has been updated to reflect the recently completed Big Dig highway project, the satellite imagery hasn't. You could end up scheduling a delivery to an address that doesn't yet exist in that image.
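
One way to guard against that, at least as a sketch, is to treat data currency as something you check rather than assume. If each reference layer carries a last-updated date (an assumption; not every provider exposes one), the mashup can decline to make a promise whenever a layer it depends on is older than your cutoff.

```typescript
// A data-currency guard. The layers and dates below are illustrative only.
interface DataLayer {
  name: string;
  lastUpdated: Date; // assumes the provider publishes or tags an update date
}

// Only promise delivery if every layer the routing depends on is fresh enough.
function currentEnoughToPromise(layers: DataLayer[], maxAgeDays: number): boolean {
  const cutoff = new Date();
  cutoff.setDate(cutoff.getDate() - maxAgeDays);
  return layers.every((layer) => layer.lastUpdated >= cutoff);
}

// The downtown Boston case: current roads, stale imagery -- no promise yet.
const layers: DataLayer[] = [
  { name: "road network", lastUpdated: new Date("2007-09-01") },
  { name: "satellite imagery", lastUpdated: new Date("2004-06-15") },
];
console.log(currentEnoughToPromise(layers, 365)); // false until the imagery catches up
```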

The same concerns, of course, also apply to enterprise data. Just because it's there doesn't mean it's reliable. Naturally, these are the problems that have long faced anyone with experience in classic IT projects ranging from ERP to data federation, business process management, and BI applications. As enterprises embrace mashups as a new form of PDQ development for more casual applications that may or may not be disposable, they still need to exercise the same vigilance when it comes to the data that populates the screens.

At the end of the day, mashups that evolve into enterprise mashups are not like enterprise applications. They are enterprise apps. The only difference is that they are pieced together much, much faster.