
AWS US-EAST-1 Is Down Again — What Happened and Who Got Hit

May 8, 2026

If your AWS-hosted services went down last night, you weren't alone. Amazon Web Services has been working to address a power outage that impaired services served from the US-EAST-1 region. The cause? Overheating at a Northern Virginia data center, and the fallout hit some very big names.

Here's everything that happened and what it means.

What Happened

A May 7 incident report time-stamped 5:25 PM PDT states that AWS spotted problems in the use1-az4 Availability Zone of the US-EAST-1 region. EC2 instances and EBS volumes hosted on the affected hardware lost power during the thermal event.

The disruption affected just one of US-EAST-1's six Availability Zones, use1-az4. The region itself is one of the company's most heavily used globally.

AWS attributed the incident to an increase in temperatures within a single data center, which in some cases impaired instances in the Availability Zone. The hyperscaler has yet to confirm exactly how the overheating occurred.

The Timeline

At 6:47 PM PDT, AWS said it was continuing to work towards mitigating the increased temperatures, warning that other AWS services depending on affected EC2 instances and EBS volumes in the Availability Zone may also experience impairments.

At 8:06 PM PDT, AWS said it was actively working to restore temperatures to normal levels, though progress was slower than originally anticipated.

By 10:11 PM PDT, AWS said it had brought additional cooling capacity online, allowing some affected racks to recover, and that it was working to restore the remaining racks in a controlled and safe manner.

AWS shifted traffic away from the impacted zone but warned that some customers would continue to see their EC2 instances and EBS volumes as impaired until full recovery was achieved. No ETA was provided.

Who Got Hit

The blast radius was significant. Coinbase had core exchange functions disrupted for more than five hours. Other reported victims include the CME Group trading platform and major gambling company FanDuel.

One reader described the experience to The Register: "All my servers in that region have just gone inaccessible, and the AWS Dashboard is misbehaving, with even the status page timing out."

It's no surprise that financial platforms and crypto exchanges were among the first to feel the pain: these services run some of the most latency-sensitive, availability-critical workloads on AWS infrastructure.

US-EAST-1's Problem History

This isn't a one-off. US-EAST-1 was the site of major outages in 2021 and again in October 2025. AWS executives have acknowledged that the region isn't inherently more fragile than others, but that it operates at a larger scale than any other, which puts extra stress on services.

For developers and engineers who've been through previous outages, this will feel familiar. US-EAST-1 is AWS's most used region globally, which means when it goes down, the collateral damage is always outsized.

What This Means for Your Architecture

Every time US-EAST-1 has an incident, it makes the same argument a little louder: single-region deployments are a liability.

If you're running production workloads exclusively in US-EAST-1, this outage is your reminder to take multi-region seriously. A few things worth reviewing after this:

- Make sure critical workloads are distributed across at least two Availability Zones, and ideally two regions.
- Know exactly which of your services failed and how long recovery took. If the answer is hours, your architecture needs work.
- Keep backups in a separate region entirely, not just a separate Availability Zone (a snapshot-copy sketch appears further down).
- Set up health checks and automated failover rather than manual intervention when things go down (see the example right after this list).
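
On that last point, here's a rough sketch of what DNS-level failover can look like with Route 53 and boto3. The hosted zone ID, domain, endpoint IPs, and /health path below are placeholders for illustration; the health check and failover record types themselves are standard Route 53 features.

```python
import uuid

import boto3

route53 = boto3.client("route53")

# Placeholder values -- substitute your own hosted zone and endpoints.
HOSTED_ZONE_ID = "Z0000000EXAMPLE"
DOMAIN = "api.example.com"
PRIMARY_IP = "203.0.113.10"    # e.g. a us-east-1 endpoint
SECONDARY_IP = "203.0.113.20"  # e.g. a us-west-2 endpoint

# Health check that probes the primary endpoint over HTTPS.
health_check = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "IPAddress": PRIMARY_IP,
        "Port": 443,
        "Type": "HTTPS",
        "ResourcePath": "/health",
        "RequestInterval": 30,   # seconds between probes
        "FailureThreshold": 3,   # consecutive failures before "unhealthy"
    },
)
health_check_id = health_check["HealthCheck"]["Id"]

# Failover record pair: Route 53 answers with PRIMARY while its health
# check passes, and flips to SECONDARY automatically when it fails.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": DOMAIN,
                    "Type": "A",
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": PRIMARY_IP}],
                    "HealthCheckId": health_check_id,
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": DOMAIN,
                    "Type": "A",
                    "SetIdentifier": "secondary",
                    "Failover": "SECONDARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": SECONDARY_IP}],
                },
            },
        ]
    },
)
```

The point is that the switch happens with no human in the loop: when the primary endpoint stops answering, DNS starts pointing at the secondary within the TTL window, which matters at 2 AM when your on-call engineer can't even load the AWS status page.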

The outage also raises a broader question about infrastructure concentration. When a single overheating event at one data center in Virginia can knock Coinbase offline for five hours, the fragility of cloud dependency becomes very real very fast.
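
On the backup point in the checklist above: a minimal sketch of what cross-region backup can look like for EBS, assuming boto3 and a placeholder snapshot ID. Note that copy_snapshot is called against the destination region's client, not the source's.

```python
import boto3

# copy_snapshot is invoked in the *destination* region.
ec2_west = boto3.client("ec2", region_name="us-west-2")

# Placeholder ID -- substitute a real snapshot from us-east-1.
SOURCE_SNAPSHOT_ID = "snap-0123456789abcdef0"

response = ec2_west.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId=SOURCE_SNAPSHOT_ID,
    Description="Cross-region DR copy -- survives a us-east-1 outage",
)
print("New snapshot in us-west-2:", response["SnapshotId"])
```

A snapshot sitting in us-west-2 stays usable even when US-EAST-1 is having a bad night, which is exactly what a same-region, different-AZ copy cannot guarantee.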

Conclusion

AWS will fix the overheating, restore the racks, and publish a post-mortem. They always do. But the pattern with US-EAST-1 is hard to ignore — this is the third significant incident in five years for the same region.

The cloud is still the right call for most workloads. But resilient cloud architecture means designing for failure, not hoping it won't happen. Last night's outage is a useful reminder that hope is not a reliability strategy.

We'll update this post as AWS releases its full post-mortem.