Superlatives are the norm when talking about Big Data. We all see Big predictions of Big numbers and High growth in petabytes, connected devices, data containers, and the like, along with the Tidal changes they bring.
But I think the “Big” adjective best applies to Complexity. How do IT shops manage all these 1s and 0s and set the table for meaningful analytics?
At the most basic level, there are two seemingly contradictory approaches to laying the analytics foundation: consolidate data onto fewer platforms, or analyze it selectively across platforms in a decentralized fashion. In practice, the two complement each other well as part of an iterative data management model in which IT continuously repositions data sets to support evolving analytics needs.
Let’s examine each, then look at the squishy middle, where most companies will reside.