The following events prompted a greater focus on architecting cloud-based applications across multiple zones and locations for improved resilience:
Superstorm Sandy, Oct 29-30: Datacenters in New York and New Jersey were impacted by the storm, with effects ranging from downtime caused by flooding to days spent running on generator power. Sandy caused more than a single outage; it tested the resilience and determination of the datacenter industry on an unprecedented scale.
Go Daddy DNS outage, Sept 10: Go Daddy is one of the biggest DNS service providers, hosting 5 million websites and managing more than 50 million domain names, which made the Sept 10 outage one of the most disruptive incidents of 2012. The six-hour outage was caused by corrupted data in router tables.
Amazon outage, June 29-30: The AWS EC2 cloud computing service powers some of the web's most popular sites and services, including Netflix, Heroku, Pinterest, Quora, Hootsuite, and Instagram. A system of strong thunderstorms, known as a derecho, rolled through northern Virginia and knocked out power to the AWS Ashburn datacenter. The backup generators failed to operate properly, and the emergency power in the Uninterruptible Power Supply (UPS) systems was depleted.
Calgary datacenter fire, July 11: A datacenter fire in the Shaw Communications facility in Calgary, Alberta delayed hundreds of surgeries at local hospitals. The fire disabled both the primary and backup systems that supported key public services. This was a wake-up call for government agencies to ensure that datacenters supporting emergency services have adequate failover systems.
Unforeseen events like the preceding ones are exactly why having a well-planned DR strategy is so important.
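To make the multi-zone idea at the start of this section concrete, the following is a minimal sketch in Python using the boto3 AWS SDK to spread EC2 instances across every available Availability Zone in a region. The region name, AMI ID, and instance type are hypothetical placeholders, not values from the text.

    import boto3

    # Hypothetical region; any AWS region with multiple Availability Zones works.
    REGION = "us-east-1"
    ec2 = boto3.client("ec2", region_name=REGION)

    # Discover the Availability Zones currently available in the region.
    response = ec2.describe_availability_zones(
        Filters=[{"Name": "state", "Values": ["available"]}]
    )
    zones = [z["ZoneName"] for z in response["AvailabilityZones"]]

    # Launch one instance in each zone so that the loss of a single
    # datacenter (zone) leaves the application running elsewhere.
    for zone in zones:
        ec2.run_instances(
            ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
            InstanceType="t3.micro",
            MinCount=1,
            MaxCount=1,
            Placement={"AvailabilityZone": zone},
        )

In practice, a load balancer or DNS failover would route traffic across these instances, but the placement call above is the core of zone-level redundancy.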