Big Blue places management and disaster recovery at the top of the chart: organizations must gain visibility into their infrastructure and prepare for worst-case scenarios if they want to get their big data under control. Achieving operational efficiency is an even more complicated task.

IBM lays out the prerequisites for this next stage: an organization must be able to scale up rapidly and cost-efficiently; it must be able to do the same with backup; and all applications and processes need to be optimized to meet business requirements. The Big Data leader also stresses the need for tight security, a clearly defined set of policies governing the usage of data, and the ability to effectively audit the entire stack.

Replication is number nine on the list, followed immediately by virtualization: users need reliable access to data, while admins require tools that help them make use of all the resources available on the network. IBM’s final two best practices are archiving, for the purpose of future analysis, and constant availability.

Courtesy of “Taming Big Data: 12 Best Practices for Analysts”