The need to move toward virtualized and more resilient disaster-recovery architectures

Growing concerns about natural disasters, information technology (IT) complexity, and increasing cyber-attacks, together with the sensitivity of financial systems, in which corporations may lose millions of dollars per minute if key business processes are unavailable, are driving corporations to develop more resilient disaster-recovery (DR) architectures. For many years, corporations have had critical business functions that rely on tape-based methods for DR. However, due to pressure from regulatory groups such as the FFIEC (Federal Financial Institutions Examination Council), there is a growing requirement to recover business functions faster than tape solutions allow. As a result, application owners are challenged with more aggressive recovery-time objectives that necessitate recovery solutions offering faster, near-continuous recovery of critical business functions. However, moving mission-critical workloads from tape to a near-continuous method can be very expensive, particularly for legacy, multisite, heterogeneous workloads whose business process and governance models prevent them from moving to a cloud-computing model. Nevertheless, to offset DR costs, emerging technologies such as cloud computing and virtualization can be combined with existing underutilized server capacity to form effective and affordable DR solutions that accommodate heterogeneous and legacy workloads.