The emergence of technologies such as Complex Event Processing (CEP) and the growing take-up of BPM (business process management) and BAM (business activity monitoring) have called into question whether it continues to make sense to treat BI as a standalone solution. Scanning the event processing blogosphere, event processing consultant Tim Bass of SilkRoad makes the point very simply: “CEP, BI and BAM are simply today’s buzz words for IT processes that can be implemented in numerous ways to accomplish the very similar ‘things’… to take raw ‘sensory data’ and turn that data into knowledge that supports actions that are beneficial to organizations.”
Traditionally, BI was backward-looking, applying analytics to historic trends because of limitations in processing power and storage that today’s virtualization technologies have made a mockery of.
Although the idea of converging BI with more current or forward-looking approaches has typically been associated with business issues such as sales trends, the same idea redounds back to IT in the data center. It may help to analyze historic usage patterns for repeatable events, such as the closing of the books at the end of a reporting period. But what happens when your company introduces a new product like an iPhone and is not prepared for the onslaught? At that point, historical patterns provide scant insight at best into a phenomenon that would be judged unpredictable.
It was with that in mind that we spoke with BMC today about the fruits of its recent acquisition of ProactiveNet, a tool that learns your IT operating environment on its own and projects patterns forward to detect potential threats to service levels. ProactiveNet takes a self-learning approach to IT infrastructure performance: by, in effect, “teaching itself” about usage patterns and about changes to infrastructure and utilization, it projects trends out into the future and flags potential problems before they erupt. Applied to the problem of maintaining service levels, it complements another product that BMC acquired a decade ago, now called Performance Assurance, that conducts predictive analysis for capacity planning purposes.
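BMC doesn’t publish ProactiveNet’s internals, but the general self-learning baselining idea is easy to illustrate. Here is a minimal Python sketch assuming nothing more than a rolling mean-and-deviation baseline; the class name, window size, and three-sigma threshold are all illustrative, not BMC’s actual algorithm:

```python
from collections import deque
from statistics import mean, stdev

class SelfLearningBaseline:
    """Learns a rolling baseline for a metric and flags readings that
    deviate from it. A toy illustration of baseline-style monitoring,
    not BMC's actual algorithm."""

    def __init__(self, window=288, threshold_sigmas=3.0):
        # window: how many recent samples the baseline "remembers"
        # (e.g., 288 five-minute samples = one day of history)
        self.history = deque(maxlen=window)
        self.threshold_sigmas = threshold_sigmas

    def observe(self, value):
        """Record a new reading; return True if it looks abnormal."""
        abnormal = False
        if len(self.history) >= 30:  # need enough samples to trust the baseline
            mu = mean(self.history)
            sigma = stdev(self.history)
            if sigma > 0 and abs(value - mu) > self.threshold_sigmas * sigma:
                abnormal = True
        # Keep learning either way, so the baseline adapts to new norms.
        self.history.append(value)
        return abnormal

monitor = SelfLearningBaseline()
for cpu_pct in [42, 45, 41, 44, 43, 40, 46, 44, 43, 42] * 3 + [95]:
    if monitor.observe(cpu_pct):
        print(f"Potential service-level threat: CPU at {cpu_pct}%")
```

The point of the rolling window is that the baseline keeps adapting: if a “new normal” emerges after an infrastructure change, yesterday’s anomaly gradually becomes today’s expected behavior.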
For now, these tools are deployed for specialized purposes, paired with the specific systems management consoles for which interfaces have been developed. But in the long run, the uses for predictive modeling could be endless, kicking in whenever any change is made to IT infrastructure. Ideally, such predictives should be translatable to higher-level views, so that a business process such as order fulfillment could be forward-tracked to see if a new promotion becomes so successful that it kills service levels. Likewise, if your organization exposes a web service and offers a service-level commitment, you could forecast whether current usage patterns are likely to lead to an SLA compliance issue downstream. And all of this ultimately feeds capacity management, which shouldn’t be a separate process.
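To make that concrete, here is the simplest imaginable version of such forward tracking: a naive straight-line projection of utilization against an SLA ceiling. The function, sample data, and 85 percent limit are all invented for illustration; a real capacity model would account for seasonality, bursts, and nonlinear saturation:

```python
from statistics import linear_regression  # Python 3.10+

def hours_until_sla_breach(samples, sla_limit):
    """Fit a straight-line trend to recent utilization samples and
    estimate how many hours remain before the SLA limit is crossed.
    Deliberately naive: real capacity models are far richer."""
    hours = list(range(len(samples)))
    slope, intercept = linear_regression(hours, samples)
    if slope <= 0:
        return None  # flat or declining usage: no breach projected
    breach_hour = (sla_limit - intercept) / slope
    return max(breach_hour - hours[-1], 0.0)

# Hourly utilization (%) climbing during a promotion; SLA headroom caps at 85%.
usage = [52, 55, 57, 61, 63, 66, 70, 72]
eta = hours_until_sla_breach(usage, sla_limit=85)
if eta is not None:
    print(f"Projected SLA breach in roughly {eta:.0f} hours")
```

Naive as it is, even this level of projection answers a question pure historical reporting can’t: not “what happened,” but “when do we hit the wall.”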
It’s an ideal scenario for SOA: you don’t necessarily want to run predictives constantly, because they soak up significant overhead. The ultimate solution is to have predictive analyses of IT infrastructure service levels and capacity requirements available as services, invoked dynamically and triggered by business rules and policies.
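Building on the hours_until_sla_breach sketch above, here is one hypothetical way such a rules-gated, on-demand invocation might look; the event hook, policy names, and thresholds are all made up for illustration:

```python
# Hypothetical sketch: wrapping the forecast as an on-demand check gated
# by business rules, rather than a constantly running job. Reuses
# hours_until_sla_breach from the earlier sketch.

POLICIES = {
    "min_utilization_to_bother": 60,   # don't burn cycles on idle systems
    "alert_if_breach_within_hours": 8,
}

def on_business_event(event, recent_usage):
    """Triggered by a business event (new promotion, config change, etc.).
    Only invokes the expensive predictive analysis when policy says
    it's worth the overhead."""
    if recent_usage[-1] < POLICIES["min_utilization_to_bother"]:
        return None  # policy: skip the prediction entirely
    eta = hours_until_sla_breach(recent_usage, sla_limit=85)
    if eta is not None and eta <= POLICIES["alert_if_breach_within_hours"]:
        return f"{event}: SLA breach projected in ~{eta:.0f}h, act now"
    return f"{event}: no near-term SLA risk projected"

print(on_business_event("promo-launch", [52, 55, 57, 61, 63, 66, 70, 72]))
```

The design point is that the expensive analysis runs only when an event and a policy say it should, which is exactly the overhead argument for packaging it as a service.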