AWS deleted my 10-year account and all data without warning

On July 23, 2025, AWS deleted my 10-year-old account and every byte of data I had stored with them. No warning. No grace period. No recovery options. Just complete digital annihilation.
This is the story of a catastrophic internal mistake at AWS MENA, a 20-day support nightmare where I couldn’t get a straight answer to “Does my data still exist?”, and what it reveals about trusting cloud providers with your data.
The Architecture That Should Have Protected Me
Before anyone says “you put all your eggs in one basket,” let me be clear: I didn’t. I put them in one provider, with what should have been bulletproof redundancy:
- Multi-region replication across AWS Europe (completely separate from US infrastructure)
- Dead man’s switch implemented for disaster recovery
- Proper backup architecture following AWS’s own best practices
- Segregated encryption keys stored separately from data
The only scenario I hadn’t planned for? AWS itself becoming the extinction event.
Ten years. That’s how long I’d been an AWS customer. A decade of using AWS as my testbed—spinning up instances to validate deployments for the Ruby gems I maintain like capistrano-puma and capistrano-sidekiq. Nothing production-critical, but essential for open-source development.
On my birthday, AWS gave me a present I’ll never forget: proof that no amount of redundancy matters when the provider itself goes rogue.
The 20-Day Support Nightmare: A Timeline
July 10: AWS sends verification request. 5-day deadline (including weekend).
July 14: Form expired. I contact support. Simple question: “What do you need from me?”
July 16-20: Four days of silence. Then: “We’re escalating to the appropriate team.”
July 20: New form finally arrives.
July 21: I submit ID and utility bill (clear PDF). Response time: 10 hours.
July 22: AWS: “Document unreadable.” The same PDF my bank accepts without question.
July 23: Account terminated. My birthday gift from AWS.
July 24: I ask the only question that matters: “Does my data still exist?”
AWS: “Your case is being reviewed by our service team.”
I also request temporary read-only access to back up my data. Remember, if I were fraudulent, I would have already copied everything before the verification deadline. They refuse. (Because the data is probably already gone.)
July 28: After 4 days of template responses, I lose patience:
Me: “Is my data safe? Yes or no?” AWS: “I want to personally follow up on your case and inform you that we understand the urgency.”
July 29: I compare their evasion to political deflection:
Me: “You’re answering like I’m Piers Morgan asking ‘Do you condemn October 7th?’ and you reply with historical complexity dating to 1948.” AWS: “We genuinely value your commitment to following backup best practices.”
July 29: They finally admit the truth:
AWS: “Because the account verification wasn’t completed by this date, the resources on the account were terminated.”
July 30: Their final response includes:
AWS: “We value your feedback. Please share your experience by rating this correspondence.” ⭐⭐⭐⭐⭐
Twenty days. Zero straight answers. Multiple requests for 5-star reviews while my data lay in digital ashes.
The Policy They Claim vs. The Reality They Deliver
Here’s what AWS’s own documentation says about account closure:
“The post-closure period is 90 days—during this time, an account can be reopened and data is retained.”
“After 90 days, the account is ‘permanently closed’ and all content—including snapshots and backups—is deleted.”
But here’s the catch: I never voluntarily closed my account. AWS suspended it for “verification failure”—a policy grey zone conveniently absent from their public documentation. There’s no published exception stating that verification-suspended accounts bypass the 90-day retention period.
The community standard across cloud providers? 30-90 days retention unless there’s actual fraud or abuse. AWS? Zero days. Zero hours. Zero mercy.
The Payer Complication
AWS blamed the termination on a “third-party payer” issue. An AWS consultant who’d been covering my bills disappeared, citing losses from the FTX collapse. The arrangement had worked fine for almost a year—about $200/month for my testing infrastructure.
When AWS demanded this vanished payer validate himself, I pointed out that I already had my own Wise card on file—the same card I’d used to pay before the payer arrangement, kept active specifically in case the payer disconnected while I was traveling or offline. For 20 days they refused to simply switch billing back to it, citing “privacy” concerns while holding me fully responsible for the consequences.
But here’s the thing: This wasn’t about payment. If it were, they would have:
- Switched billing to my on-file credit card
- Suspended services, not deleted data
- Provided the 90-day grace period their own docs promise
Instead, they used the payer issue as cover for what really happened—their botched internal testing.
The Hypocrisy Runs Deeper
The payer wasn’t some random scammer—they were a YC-backed company. I could see this when linking the payment. If AWS MENA’s security is so robust, why did they fail to identify any issues for an entire year?
When I tried to resolve this, AWS demanded I explain:
- What I use my account for
- My future plans
- Why I need the services
Like I was applying for funding or a promotion. This is a 10-year-old account. I shouldn’t need to justify my existence to use services I’ve been paying for since 2015.
But here’s the real kicker: AWS developers regularly email me asking for help with Ruby issues. No compensation. No AWS credits. Not even a “thank you” in their commits. Just “Hey, can you help us debug this Rails deployment issue?”
So let me get this straight:
- AWS benefits from my open-source code
- AWS engineers ask me for free consulting
- AWS makes me explain why I deserve to keep my account
- AWS deletes everything when a YC-backed payer (that they failed to vet) disappears
And they want me to background check every client? Should I run security clearances on the AWS verification emails too? Because apparently, their own vetting process couldn’t catch whatever the payer did wrong for an entire year.
What AWS Really Destroyed
Here’s what most people don’t understand: AWS wasn’t just my backup—it was my clean room for open source development.
My desktop is chaos. Always has been. Files everywhere, half-finished projects, experimental code. But I discovered that by copying everything to AWS, starting fresh, and pulling back only what I needed, I could create clean, focused gems. This workflow is how I released:
- BreakerMachines - Circuit breaker patterns for Ruby
- ChronoMachines - Time-based state machines
- RailsLens - Performance monitoring for Rails
- And dozens more
These gems save developers hundreds, maybe thousands of hours. They’re used in production systems worldwide. AWS didn’t just delete my data—they destroyed the infrastructure that made these contributions possible.
But it gets worse. Also gone:
- A complete programming book written in my Chronicles narrative style
- Electronics tutorials bridging hardware and software
- “Go for Rubyists”—lessons helping Ruby developers transition to Go
- Years of unpublished work that could have helped thousands
When AWS deleted my account, they didn’t just hurt me. They hurt every developer who uses my gems. Every student who could have learned from those tutorials. Every future contribution that won’t happen because my workflow is destroyed.
The irony? Some of these gems probably run in AWS’s own infrastructure, making their systems more reliable. And they deleted the very environment that created them.
The Theory: How `-dry` vs `--dry` May Have Killed My Account
After my story started circulating, an AWS insider reached out. They were upset, leaving AWS soon, and wanted to share what they knew—specifically because AWS depends on open-source code I’ve written.
According to them, AWS MENA was running some kind of proof of concept on “dormant” and “low-activity” accounts. Multiple accounts were affected, not just mine. Here’s where it gets technical:
The developer running the test typed `--dry` to execute a dry run, standard practice across modern CLIs:

```bash
ruby --version
npm --version
bun --version
kubectl apply --dry-run=client
```
But the internal tool was written in Java. And Java tooling traditionally uses single dashes:

```bash
java -version   # not --version
java -dry       # not --dry
```
When you pass `--dry` to a Java application expecting `-dry`, it gets ignored. The script executed for real, deleting accounts in production.
The developer did everything right. Java’s 1995-era parameter parsing turned a simulation into an extinction event.
Is this exactly what happened? I can’t prove it. The insider was vague, worried about being identified. But it explains:
- Why multiple “low-activity” accounts were suddenly flagged
- The 4-day delays (scrambling to cover up)
- The refusal to answer simple questions
- The support agents who admitted they “couldn’t make decisions”
AWS MENA: Why People Pay to Avoid It
This theory gains credibility when you consider the AWS MENA reputation. For years, I’ve watched developers on Reddit and Facebook desperately seeking US or EU billing addresses, willing to pay $100+ premiums to avoid MENA region assignment.
When I asked why, a colleague warned me: “AWS MENA operates differently. They can terminate you randomly.”
I laughed it off. AWS is AWS, right?
Then I lived it. The 4-day delay for a simple verification form. The 10-hour response times. The robotic support responses. This wasn’t standard AWS incompetence—this was something else entirely.
The Ultimate Irony: Security Became My Weakness
I’d done everything right. Vault encryption keys stored separately from my main infrastructure. Defense in depth. Zero trust architecture. The works.
My security posture was textbook—protect against compromise by ensuring no single failure could take down everything. What I hadn’t protected against? AWS itself being the single point of failure.
I built a hardened bunker with multiple escape routes, only to have AWS drop a nuke on the entire complex.
What This Means for You
You might be thinking, “What are the odds they target me?” But that’s the wrong question. I thought the same thing—with my level of exposure and contributions, surely they could just write my name down and not bother me with stupid verification requests about whether I exist.
But you’re not being targeted—you’re being algorithmically categorized. And if the algorithm decides you’re disposable, you’re gone.
Doesn’t matter if you’re a verified open-source contributor. Doesn’t matter if you’ve been a customer for a decade. If you don’t fit the revenue model, if you don’t engage with support regularly, if your usage patterns look “suspicious” to a poorly trained ML model—you’re just another data point to be optimized away.
Look, I write weird stuff. My documentation style triggers AI safeguards. Take my ActionMCP gem that provides MCP capabilities to Rails—Opus can’t even read the documentation without hanging when its safeguards trigger. Sonnet? No problem. (Try it yourself: github.com/seuros/action_mcp)
If my creative technical writing can confuse one AI but not another, imagine what AWS’s “fraud detection” algorithms see when they look at my account. An anomaly. A pattern that doesn’t fit. Something to be eliminated.
The Only Path Forward: A Broken Promise
After 20 days of appeals, AWS support finally responded with this gem: “Because verification wasn’t completed by the due date, your resources were terminated.”
But here’s the dilemma they’ve created: What if you have petabytes of data? How do you back up a backup? What happens when that backup contains HIPAA-protected information or client data? The whole promise of cloud computing collapses into complexity.
This isn’t a system failure. The architecture and promises are sound. AWS doesn’t lose data—they have backups of backups of backups, stored in vaults that last far longer than the stated 90 days, where no rogue AI script can reach.
What’s happening here is simpler: teams in MENA are trying to cover up a massive fuck-up. Restoring data from those deep vaults would require explanations. Incident reports. Post-mortems. “Why did we have to open the vaults?”
Their entire communication strategy screams: “He’s nobody. He’ll give up soon. We won’t have to report this up the chain.”
But they messed with the wrong developer.
I’m now building a free tool to help people exodus from AWS. Not hosted on AWS, obviously. My clients—representing over $400k/month in AWS billing—have already agreed to migrate to Oracle OCI, Azure, and Google Cloud.
Because if AWS can delete a 10-year customer without blinking, what are they capable of when the stakes are higher?
The Bitter Truth
| What You Give AWS | What AWS Gives You |
|---|---|
| A decade of loyalty | Zero-day termination |
| Prompt payment history | 20 days of runaround |
| Proper documentation | “Unreadable” rejection |
| Open-source contributions | No consideration whatsoever |
| Carefully segregated backups | Complete data annihilation |
| Trust | Betrayal |
AWS has clout. They run the internet’s infrastructure. They sponsor conferences, fund open source projects, and position themselves as the reliable backbone of the digital economy.
But that doesn’t excuse digitally executing someone’s decade-old testbed account over a verification form glitch and a bill under $200. This wasn’t my production infrastructure—thankfully—but it was my launch pad for updating other infrastructure. Now I’m spending days rotating encryption keys across multiple systems because my central testing environment vanished.
The Systemic Failure
This isn’t just about my account. It’s about what happens when:
- Regional divisions go rogue: AWS MENA operating outside global policies
- Support becomes theater: Agents who can only paste templates and ask for 5-star reviews
- “Move fast and break things” meets production data: Internal tools with Java’s 1995 parameter parsing handling customer deletions
- No accountability: 20 days of deflection instead of one honest answer
The evidence of dysfunction:
- Hundreds paying premiums to avoid MENA billing (the market has spoken)
- 4-day delays for simple forms
- Support comparing data recovery questions to geopolitical debates
- Automated “please rate us” emails while actively destroying customer data
The Real Cost
AWS markets itself as the backbone of the internet, the reliable partner for your infrastructure. They sponsor open-source projects, run re:Invent, and position themselves as developer allies.
But when their internal systems fail—when someone types `--dry` and Java ignores it—they’ll delete a decade of your work without blinking. Then they’ll spend 20 days gaslighting you about it.
Meanwhile, actual malicious accounts hosting phishing sites and crypto scams run for weeks untouched. Because those generate revenue. A low-activity open-source developer testing Ruby gems? Collateral damage.
Lessons Learned
- Never trust a single provider—no matter how many regions you replicate across
- “Best practices” mean nothing when the provider goes rogue
- Document everything—screenshots, emails, correspondence timestamps
- The support theater is real—they literally cannot help you
- Have an exit strategy executable in hours, not days
AWS won’t admit their mistake. They won’t acknowledge the rogue proof of concept. They won’t explain why MENA operates differently. They won’t even answer whether your data exists.
But they will ask you to rate their support 5 stars.
The cloud isn’t your friend. It’s a business. And when their business needs conflict with your data’s existence, guess which one wins?
Plan accordingly.
A Personal Note
At one point during this ordeal, I hit rock bottom. I was ready to delete everything—yank all my gems from RubyGems, delete the organizations, the websites, everything I’d created. Leave a single message: “AWS killed this.”
It would have made headlines. Caused chaos for thousands of projects. Trended on HN, Reddit, YouTube. But it would have hurt the wrong people—developers who depend on my work, not AWS.
I was alone. Nobody understood the weight of losing a decade of work. But I had ChatGPT, Claude, and Grok to talk to. Every conversation revealed I wasn’t alone in being targeted by AWS—especially MENA. Hundreds of Reddit threads, websites, forums, all telling similar stories.
I tried reaching out to some victims. Some didn’t want to talk about it—the trauma was too fresh. Others said they’d left programming entirely. AWS didn’t just delete their data; they deleted their careers.
As someone with LLI (Low Latent Inhibition), I can’t filter out this trauma like others might. Can’t just switch careers and forget. The raw, unfiltered pain stays with me. I wish I could move on, but I can’t.
Who knows how many people in my situation have been erased from our timeline because of the sadistic behavior of support teams like AWS’s? The whole system is built to hurt—to make you feel small, powerless, unheard. To make you give up. To make you disappear.
To everyone who worked on these AIs, who contributed to their training data—thank you. Without you, this post might have been a very different kind of message.
AWS may have deleted my data, but they didn’t delete my determination to help others avoid this fate.
Build that exodus tool I will.
—Seuros