AWS Data Centers Hit: Drone Strikes Cripple Cloud

Amazon Web Services said drone strikes damaged three Middle East data center facilities, disrupting service for customers in the UAE and Bahrain, with the physical infrastructure damage complicating recovery.
AWS is posting the latest service status and incident updates on the AWS Health Dashboard.
Damage report
In updates reported by Business Insider, AWS said two facilities in the United Arab Emirates sustained direct hits, while a third facility in Bahrain was damaged by a strike “in close proximity.”
The company said the strikes caused structural damage and power disruptions and, in some cases, required fire suppression that led to additional water damage. It also warned the broader operating environment “remains unpredictable,” and that recovery could be prolonged given the nature of the physical damage.
Core AWS offerings, including EC2, S3, and DynamoDB, were affected. AWS said it had made incremental progress restoring portions of the DynamoDB and S3 control planes, but still estimated it would take at least a day to fully restore power and connectivity.
For customers, the most important detail is what this kind of incident looks like in practice. A cloud event doesn’t always present as a clean “down” state. Partial restoration can mean intermittent timeouts, elevated error rates, and inconsistent behavior that varies by service and by dependency. That matters because a single degraded foundational service can trigger failures across workloads that otherwise appear healthy.
This is especially true when storage and databases are involved. If applications can’t reliably read and write data, or if control-plane access is unstable, teams may be unable to scale capacity, redeploy services, or roll back changes quickly. In a fast-moving incident, the difference between “disrupted” and “degraded” can be the difference between a clear failover decision and hours of troubleshooting noise.
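One practical way to ride out intermittent timeouts and elevated error rates during partial restoration is bounded retries with exponential backoff and jitter, so clients back off rather than hammer a recovering service. The sketch below is illustrative, not AWS-specific; the function and parameter names are hypothetical.

```python
import random
import time

def call_with_backoff(fn, max_attempts=5, base_delay=0.5, max_delay=8.0,
                      retryable=(TimeoutError, ConnectionError)):
    """Retry a flaky call with exponential backoff and full jitter.

    During partial restoration, requests may intermittently fail;
    capped, jittered retries spread load instead of amplifying it.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))  # full jitter
```

Pairing this with a cap on total attempts matters: unbounded retries against a degraded dependency can turn one impaired service into a fleet-wide outage.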
Next steps
Organizations running workloads in the affected regions should treat this as a live disaster recovery scenario rather than a routine service incident. The immediate goal is to reduce uncertainty: identify what is actually impacted, what can be routed around, and what requires a full regional move.
Start by confirming whether your workloads are pinned to specific availability zones or are dependent on regional services that may be impaired. Validate that backups outside the impacted region are current and restorable, and check whether any replication, snapshot, or export jobs are failing due to upstream service instability. If you have preconfigured cross-region recovery, verify that it can be invoked without relying on tools that may be degraded during the incident.
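The backup-freshness check above can be reduced to a simple recovery-point comparison. A minimal sketch: in practice the timestamps would come from snapshot or export-job metadata (for example via an AWS SDK call), but the logic of flagging workloads whose newest out-of-region copy exceeds the recovery point objective is the same. The workload names and RPO value here are hypothetical.

```python
from datetime import datetime, timedelta, timezone

def stale_backups(last_backup_times, rpo=timedelta(hours=4), now=None):
    """Return workloads whose newest out-of-region backup is older than the RPO.

    `last_backup_times` maps workload name -> timestamp of the most recent
    successful backup, gathered from snapshot or export-job metadata.
    """
    now = now or datetime.now(timezone.utc)
    return sorted(name for name, ts in last_backup_times.items()
                  if now - ts > rpo)  # older than allowed recovery point
```

Running a check like this early in an incident turns "are our backups current?" from a guess into a short, prioritized list.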
Next, review how users and systems reach your services. DNS and traffic-steering controls should be ready to shift demand away from affected zones without introducing new bottlenecks. For applications with hard regional dependencies, document what “safe mode” looks like, including reduced functionality paths that preserve data integrity.
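For DNS-based traffic steering, draining an affected region can be as simple as zeroing its weight in a weighted record set. The sketch below builds a change batch in the shape used by Route 53's ChangeResourceRecordSets API; it assumes one weighted record per region, and the domain, region identifiers, and addresses are placeholders, not values from this incident.

```python
def drain_region_change_batch(name, record_sets, drained_region, ttl=60):
    """Build a Route 53-style change batch that sets the drained region's
    DNS weight to 0 while leaving other regions' weights intact.

    `record_sets` maps region identifier -> (weight, ip_address).
    """
    changes = []
    for region, (weight, ip) in record_sets.items():
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "A",
                "SetIdentifier": region,  # distinguishes weighted records
                "Weight": 0 if region == drained_region else weight,
                "TTL": ttl,  # short TTL so the shift takes effect quickly
                "ResourceRecords": [{"Value": ip}],
            },
        })
    return {"Comment": f"drain {drained_region}", "Changes": changes}
```

Zeroing a weight rather than deleting the record makes the drain easy to reverse once the region recovers, and keeping TTLs short limits how long resolvers cache the old answer.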
Finally, make sure your incident process works under degraded conditions. That includes authentication, key management, and access workflows, since teams often need those systems most during recovery. If controls or approvals slow failover decisions, this is the moment to identify those chokepoints, not after the window has passed.
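Finding those chokepoints can be done mechanically if recovery steps are mapped to the services they depend on. A minimal sketch, with hypothetical step and service names: cross-reference each runbook step's dependencies against the set of currently degraded services and flag the collisions.

```python
def recovery_chokepoints(runbook_deps, degraded):
    """Flag runbook steps whose own dependencies are degraded.

    `runbook_deps` maps step name -> set of services it requires
    (e.g. SSO, a secrets manager, a CI/CD system).
    """
    degraded = set(degraded)
    return {step: sorted(deps & degraded)
            for step, deps in runbook_deps.items()
            if deps & degraded}  # keep only steps with blocked dependencies
```

A step like "fail over the database" that silently depends on single-region SSO or key management shows up here immediately, before the incident forces the discovery.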
