
Amazon Explains Its Cloud Disaster


Infiltrator


NEW YORK (CNNMoney) -- Amazon on Friday issued a detailed analysis of, and apology for, last week's massive crash of its cloud service, an event that brought down dozens of websites.

The disruption to Amazon (AMZN, Fortune 500) Web Services' Elastic Compute Cloud, or EC2, limited customers' access to much of the information stored in the company's East Coast regional data centers. About 75 sites crashed because of the outage.

Until now, Amazon had stayed relatively silent about the cause. But after completing a post-mortem assessment of the mess, the company issued a technically detailed, 5,700-word explanation of what went wrong.

The event -- the first prolonged, widespread outage EC2 has suffered since launching five years ago -- was a technical perfect storm. A mistake made by Amazon's engineers triggered a cascade of other bugs and glitches.

"As with any complicated operational issue, this one was caused by several root causes interacting with one another," Amazon wrote.

On April 21, AWS tried to upgrade capacity in one storage section of its regional network in Northern Virginia. That section is called an "availability zone." There are multiple availability zones in each region, with information spread across several zones in order to protect against data loss or downtime.
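For a concrete illustration of that design, here is a minimal sketch using the boto3 Python SDK that places a copy of a storage volume in each of several availability zones. The region, zone names, and volume size are placeholder values for the example, not details from the incident.

# Minimal sketch: spread EBS volumes across multiple availability zones
# so a single-zone failure does not take out every copy of the data.
# Region, zone names, and size are placeholder values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

zones = ["us-east-1a", "us-east-1b", "us-east-1c"]  # hypothetical zone names
for zone in zones:
    volume = ec2.create_volume(
        AvailabilityZone=zone,  # each copy lives in a different zone
        Size=100,               # GiB, illustrative value
        VolumeType="gp2",
    )
    print(f"Created volume {volume['VolumeId']} in {zone}")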

The upgrade required some traffic to be rerouted. Instead of redirecting the traffic within its primary network, Amazon accidentally sent it to a backup network. That secondary network isn't designed to handle such a massive flood of traffic. It got overwhelmed and clogged up, cutting a number of storage nodes off from the network.

When Amazon fixed the traffic flow, a failsafe triggered: The storage volumes essentially freaked out and began searching for a place to back up their data. That kicked off a "re-mirroring storm," filling up all the available storage space. When storage volumes couldn't find any way to back themselves up, they got "stuck." At the problem's peak, about 13% of the availability zone's volumes were stuck.
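The "re-mirroring storm" is easier to picture with a toy model. The short Python sketch below is a deliberate simplification, not Amazon's EBS logic: every volume that loses contact with its mirror immediately grabs spare space for a new replica, and once the spare pool is exhausted the remaining volumes get "stuck." All numbers are invented for illustration.

# Toy model of a re-mirroring storm. Not Amazon's implementation; the
# numbers and behavior are assumptions chosen to illustrate the cascade.
TOTAL_VOLUMES = 1000        # volumes in the availability zone
VOLUME_SIZE = 1             # space one replica needs (arbitrary units)
FREE_SPACE = 80             # spare capacity left for new replicas
DISCONNECTED = 200          # volumes cut off from their mirrors

free_space = FREE_SPACE
stuck = 0

# Every disconnected volume searches for somewhere to re-mirror at once.
for _ in range(DISCONNECTED):
    if free_space >= VOLUME_SIZE:
        free_space -= VOLUME_SIZE   # replica created, spare pool shrinks
    else:
        stuck += 1                  # nowhere left to copy to: volume is "stuck"

print(f"{stuck} of {TOTAL_VOLUMES} volumes stuck "
      f"({stuck / TOTAL_VOLUMES:.0%} of the zone)")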

But why did a problem in one availability zone ripple out to affect a whole region? That's precisely the kind of glitch Amazon's infrastructure is supposed to prevent.

Turns out EC2 had a few bugs. Amazon describes them in detail in its analysis, but the gist is that the master system that coordinates all communication within the region had design flaws. It got overwhelmed, suffered a "brown out," and turned an isolated problem into a widespread one.
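Amazon's actual fixes are spelled out in its analysis. As a generic illustration of the kind of isolation that keeps one zone's trouble from browning out a region-wide coordination service, here is a hypothetical per-zone circuit breaker in Python; it is not Amazon's design, just a sketch of the principle.

# Generic sketch of per-zone request isolation for a shared regional service.
# NOT Amazon's design: it only illustrates the idea that requests from one
# troubled availability zone shouldn't exhaust the coordination service
# that the whole region depends on.
import time
from collections import defaultdict

class ZoneCircuitBreaker:
    """Trip a zone after repeated failures so the region keeps serving others."""

    def __init__(self, failure_threshold=50, cooldown_seconds=60):
        self.failure_threshold = failure_threshold
        self.cooldown_seconds = cooldown_seconds
        self.failures = defaultdict(int)
        self.tripped_until = {}

    def allow(self, zone):
        until = self.tripped_until.get(zone)
        if until is not None and time.monotonic() < until:
            return False            # zone is tripped: shed its load for now
        return True

    def record_failure(self, zone):
        self.failures[zone] += 1
        if self.failures[zone] >= self.failure_threshold:
            self.tripped_until[zone] = time.monotonic() + self.cooldown_seconds
            self.failures[zone] = 0

# Usage sketch: only forward requests from zones that aren't tripped.
breaker = ZoneCircuitBreaker()
if breaker.allow("us-east-1a"):
    pass  # hand the request to the shared coordination service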

Interestingly, those bugs and design flaws have always been in place -- but they wouldn't have been discovered if Amazon hadn't goofed up and set off a domino chain.

Amazon says that knowing about and repairing those weaknesses will make EC2 even stronger. The company has already made several fixes and adjustments, and plans to deploy additional ones over the next few weeks. The mistake presented "many opportunities to protect the service against any similar event reoccurring," Amazon said.

Of course, Amazon's customers aren't so thrilled to have been guinea pigs in this cloud-crash learning experience. Amazon offered a mea culpa, and said it would give all customers in the affected availability zone a credit for 10 days of free service.

"We want to apologize," the company said in a prepared statement. "We know how critical our services are to our customers' businesses and we will do everything we can to learn from this event and use it to drive improvement across our services."

Source: http://money.cnn.com/2011/04/29/technology/amazon_apology/index.htm


Another reason to think twice before migrating your data to the cloud: "Reliability."


Another reason to think twice before migrating your data to the cloud: "Reliability."

Actually, I think this is an argument for standardization of cloud APIs, so that customers with failing nodes on EC2 can quickly and easily replicate those nodes to another provider's servers in the event of a failure at their primary cloud provider.
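One existing step in that direction is an abstraction layer such as Apache Libcloud, which exposes several providers through one Python API. The sketch below is illustrative only; the credentials, image, size, and node name are placeholders.

# Sketch of provider-agnostic provisioning with Apache Libcloud.
# Credentials, images, sizes, and names are placeholders; the point is that
# the same create_node() call can target a second provider if the primary
# one is down.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def spin_up(provider, key, secret, name):
    driver = get_driver(provider)(key, secret)
    image = driver.list_images()[0]   # pick a real image in practice
    size = driver.list_sizes()[0]     # pick a real size in practice
    return driver.create_node(name=name, image=image, size=size)

try:
    node = spin_up(Provider.EC2, "EC2_KEY", "EC2_SECRET", "web-1")
except Exception:
    # Primary provider unavailable: fail over to a second one.
    node = spin_up(Provider.RACKSPACE, "RS_USER", "RS_KEY", "web-1")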


Actually, I think this is an argument for standardization of cloud APIs, so that customers with failing nodes on EC2 can quickly and easily replicate those nodes to another provider's servers in the event of a failure at their primary cloud provider.

I agree. These are the things you have to consider as a customer: what can you do in this situation? What are your available options if the cloud goes down, and how can you prevent it from happening in the first place?

