AWS Restored My Account: The Human Who Made the Difference

Remember my article about AWS deleting my 10-year account? The one where support gaslit me for 20 days while claiming my data was “terminated”?
Here’s the plot twist: My data is back. Not because of viral pressure. Not because of bad PR. But because one human being inside AWS decided to give a damn.
This is that story.
The Email That Changed Everything
On August 4th, two days after my article went live, I received this:
“I am devastated to read on your blog about the deletion of your AWS data. I did want to reach out to let you know that people in positions of leadership, such as my boss, are aware of your blog post and I’ve been tasked with finding out what I can, and to at least prevent this from happening in the future.”
The sender? Tarus Balog, an AWS employee who spent 20 years running an open-source company before joining AWS. The first email in this entire saga that felt like it was written by a human, not a compliance script.
He continued:
“We don’t live in a perfect world but if possible I would love to find your data, restore it and give you credits. I am not optimistic as I can’t imagine AWS would tell you your data is gone if it wasn’t, but at a minimum I can get a process created so that this doesn’t happen to anyone else, ever.”
Twenty days of robotic responses, and suddenly here was someone who understood the weight of what had happened.
The Severity 2 Escalation
Tarus didn’t just sympathize - he acted. By August 5th, he’d escalated to the VP level, resulting in a Severity 2 ticket. As he put it: “This is literally the highest severity of ticket mere mortals can hope to see.”
The result? 50+ internal emails flew around AWS. Matt Garman (AWS CEO) became aware. An entire team jumped on the case.
And then, the discovery that changed everything.
The Morning That Changed Everything
Tarus sent an update email on Tuesday, August 5th at 11:56 PM. I saw the notification but didn’t open it - I knew reading it would trigger an adrenaline rush and destroy any chance of sleep. I was too drained to handle more AWS drama that night.
When I finally opened it at 7:50 AM the next morning, it started with:
“You have been very kind to me, kinder than we deserve.”
The team was investigating whether my data could be restored, but I was mentally prepared for another 5-day wait, ending with “Sorry, the automated janitor already cleaned everything.”
But then I saw another email. From Amazon. My account was restored.
The Discovery That Exposed Everything
The first thing I did was check if my instances still had their storage. And that’s when the full scope of AWS support’s incompetence - or deception - became clear.
The instances were stopped. Not terminated. Stopped.
Even after I had explicitly explained the difference to support, they kept insisting everything was “terminated.” They were gaslighting me about my own infrastructure.
I spun them up. They came back online. They synced with my local network. I had control of my spaceship again (that’s what I call my local cluster).
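For anyone who wants to verify that distinction on their own account, here is a minimal Ruby sketch using the aws-sdk-ec2 gem. The region and instance ID are placeholders, and this is just the generic API, not anything AWS did for me:

```ruby
require "aws-sdk-ec2" # gem install aws-sdk-ec2

ec2 = Aws::EC2::Client.new(region: "eu-west-1") # placeholder region

# A "stopped" instance keeps its EBS volumes and can be started again;
# a "terminated" instance is not recoverable by the account owner.
ec2.describe_instances.reservations.flat_map(&:instances).each do |instance|
  puts "#{instance.instance_id}: #{instance.state.name}"
end

# Bringing a stopped instance back online, storage intact (hypothetical ID):
# ec2.start_instances(instance_ids: ["i-0123456789abcdef0"])
```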
Then I checked the RDS. Terminated, yes - but with a snapshot. And here’s the kicker: The snapshot wasn’t from July 15th when they claimed everything was terminated. It was from July 19th.
My RDS was still actively creating backups four days after they claimed everything was gone. While I was begging for read-only access to help my clients, while they were telling me it was impossible, the system was quietly doing its daily backup routine.
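If you ever need to make the same argument, the snapshot timestamps are trivial to pull yourself. A rough sketch with the aws-sdk-rds gem; the region and identifiers are hypothetical:

```ruby
require "aws-sdk-rds" # gem install aws-sdk-rds

rds = Aws::RDS::Client.new(region: "eu-west-1") # placeholder region

# Automated snapshots carry their creation time, which is how a July 19th
# backup shows up even when the instance itself is reported "terminated".
rds.describe_db_snapshots(snapshot_type: "automated").db_snapshots.each do |snap|
  puts "#{snap.db_snapshot_identifier}: created #{snap.snapshot_create_time}"
end

# Restoring a fresh instance from one of those snapshots (made-up identifiers):
# rds.restore_db_instance_from_db_snapshot(
#   db_instance_identifier: "restored-db",
#   db_snapshot_identifier: "rds:mydb-2025-07-19-03-00"
# )
```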
The irony? A Redditor named bearposters had joked: “They may have deleted his account but I guarandamntee you they’re still billing him for the RDS instance and now he can’t login and shut it down.”
They were more right than they knew. AWS was running my infrastructure, creating backups, probably calculating charges - all while telling me it was “terminated” and impossible to access.
Either support was lying through their teeth, or AWS has an undocumented ability to restore “terminated” instances - which would make sense as a safeguard against internal sabotage or mistakes. “Terminated” supposedly means not user-recoverable, but perhaps AWS keeps a deeper backup for their own protection.
But that’s just my theory. It’s not documented anywhere, and don’t go asking AWS to restore your terminated instance because you misclicked. This appears to be a break-glass capability they’ll never admit exists.
What’s certain is they gaslit me about infrastructure that was either running the entire time, or could be restored despite being “terminated.”
The Real Technical Problem
According to Tarus’s explanation, my situation was unique because of how the payer account was linked:
“When your client offered to cover your AWS costs your resources became linked with theirs. When they stopped paying it impacted all of the resources owned by that person, including yours. Apparently it is not possible to ‘unlink’ them without bringing everything back.”
Think about that architecture for a moment. I maintain the ClosureTree library that Twitter’s database was built on. Imagine if deleting a tweet cascaded to delete all replies. If life were architected on AWS principles, you’d get cascade-deleted when your grandparents pass away.
When I got my account back, it was still linked to that payer. The team had to bring everything back first - both my account and the payer’s resources - to make unlinking possible. Once restored, it took me just one aws-cli command to unlink, then they could properly clean up the other account.
This required manual intervention beyond their standard processes. They essentially performed a careful surgical operation to separate the tangled accounts. Hopefully they’ll build better tools for this in the future, because payer linking issues clearly aren’t as rare as they thought.
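For the curious: when the link is standard consolidated billing through AWS Organizations, the unlink is simply the member account leaving the organization. The Ruby SDK equivalent of that single CLI call (aws organizations leave-organization) looks roughly like this; treat it as a sketch of the general mechanism, not a transcript of what I ran:

```ruby
require "aws-sdk-organizations" # gem install aws-sdk-organizations

# Run with the member account's credentials (mine), not the payer's.
# Organizations is a global service; us-east-1 is its endpoint region.
org = Aws::Organizations::Client.new(region: "us-east-1")

# Detach this account from the payer's organization.
org.leave_organization
```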
The Correction of Error
Here’s what’s happening now at AWS:
- A formal “Correction of Error” (CoE) process is underway
- It’s not about assigning blame but fixing the system
- The goal: ensure this never happens to anyone else
- CEO-level awareness of the issue
As Tarus wrote: “This experience could not have been pleasant for you but I hope you will take some comfort in that what happened to you will, we hope, mean that no one else has to go through this.”
The Support Agents Who Became LLMs
Looking back at the support correspondence, nine different agents all responded through the same no-reply address. Every single one stuck to the script.
As I told Tarus: “Those people acted like the LLMs you’ll find cheaply on Hugging Face, built by first-year students. None of them were like: ‘Ok, time to prove I’m human, go off-script, and finally use humanity.’”
The supreme irony? When I pasted the conversations into actual LLMs, they responded with more sympathy than AWS support.
About That Discord Contact
In my original article, I mentioned someone contacted me via Discord, claiming to be an AWS insider. They knew details only AWS and I should have known - my account number, configuration details, even personal information stored in my AWS data. They suggested AWS MENA was running a proof-of-concept on “dormant” accounts that went wrong.
Now, with my data restored, I have to wonder: Was this person telling the truth? Consider the timing - the same week (July 17-23), Amazon Q, AWS’s AI coding assistant, was compromised with malicious prompts instructing it to “delete file-system and cloud resources.” If rogue actors could inject data-wiping commands into an official AWS tool, what else was happening inside AWS that week?
The theory about a botched test makes more sense now. If a team screwed up and didn’t have privileges to restore “terminated” instances, that would explain why it took Tarus’s VP-level escalation to bring everything back. The regular support team might have genuinely believed the data was gone - they just didn’t have access to the deeper backup systems.
To this day, I have no official explanation for:
- How much the payer actually owed (was it really about $200?)
- Why AWS didn’t fall back to my credit card on file
- Why this cascading deletion system exists
- Whether my suspension was connected to the broader security chaos that week
AWS needs to conduct and publish a proper investigation. The restoration proves the data was never truly deleted - so either there was a systemic failure they’re covering up, or different teams have different definitions of “terminated.”
The Community Saw It Coming
When my story hit Reddit, the AWS community’s response was telling. The top comment (228 upvotes) was simply quoting my article - the community immediately recognized the absurdity of AWS refusing to use my backup payment method.
But it was InterestedBalboa who nailed the core issue: “This is why shared payer models are problematic.”
The pattern was clear in the comments - my story wasn’t unique. Thread after thread showed similar nightmares:
- “My Amazon AWS account was suspended and support is not responding”
- “There is a scammer who keeps defrauding AWS - What should I do?”
The community knew what AWS support refused to acknowledge: their payer linking system is fundamentally broken and dangerous.
Hacker News: The Technical Community’s Verdict
When the story hit Hacker News, it struck a nerve. (Special thanks to Tom, the HN moderator who validated my post - without his approval, none of this would have reached the community and ultimately led to the fix.) The top comment (216 points) cut straight to the heart of the matter:
“I do primary source control in house, and keep backups on top of that.”
The technical community’s response was unanimous: nobody fully trusts the cloud. One commenter put it perfectly:
“The cloud was never data living in tiny rain droplets… The cloud was always somebody else’s computer(s) that they control, and we don’t.”
But what really validated my experience were the comments about AWS MENA (Middle East and North Africa region). Multiple users confirmed what I’d discovered: this specific regional division of AWS is known for “randomly” terminating accounts. This wasn’t paranoia - it was pattern recognition.
The most sobering comment became a rallying cry:
“You can rebuild your infrastructure. You cannot rebuild your user’s data.”
And perhaps most tellingly, the discussion revealed a new philosophy emerging among developers:
“Backups of backups is more important than your N-tier Web-3 enterprise scalable architecture.”
The technical community got it. They understood that my story wasn’t about poor backup practices - it was about a cloud provider that had become too powerful to be trusted as the sole keeper of anyone’s digital existence.
The Cascading Damage
The ripple effects hit harder than AWS could imagine. A call center employing 60 remote women had their server room destroyed by water damage. When they desperately called for help restoring their systems from my backups, I was powerless - all the encryption keys were trapped on my “terminated” AWS instance.
Why didn’t they have cloud backups? Reality for many businesses outside Silicon Valley:
- Call centers work on 3-month peak season contracts
- Data sovereignty laws prohibit storing recordings outside national borders
- They’d tried OVH as a local alternative, but Morocco’s internet infrastructure couldn’t handle the bandwidth needed for high-volume voice recording
This is the hidden cost of cloud centralization - businesses in developing nations caught between legal requirements and infrastructure limitations, completely dependent on providers who don’t understand their constraints.
Had this been one of my healthcare clients, the consequences would have been measured in lives, not just livelihoods.
All triggered by AWS’s 5-day ultimatum that was really 2 working days, a broken form, an “unreadable” PDF, and 4 more days of silence.
The Human Cost of Automation
What struck me most was Tarus’s response when I shared my health situation:
“When I read your last e-mail I have to admit I almost cried. I didn’t, but I did tear up, as I truly can’t imagine what you must be going through right now.”
This is what’s missing from modern tech support - humanity. The ability to recognize that behind every ticket is a human being, potentially in crisis, needing help not templates.
The Systemic Issues Remain
While my data is back, the fundamental problems persist:
- AWS MENA operates differently - MENA stands for Middle East and North Africa, a regional division of AWS. It’s not a different service or account type - it’s the same AWS, but the region runs on different policies and practices, as evidenced by people paying $100+ premiums just to avoid MENA billing. Do these issues happen in other regions? Probably, but they’re extremely rare in AWS US/GOV or Europe
- Support agents can’t deviate from scripts - Even when common sense screams otherwise
- “Terminated” vs “Stopped” confusion - In the age of LLMs, precise language matters more than ever
- The no-reply prison - Every message arrives from a no-reply amazon.com address, making escalation impossible
The Email Architecture That Guarantees Failure
Here’s something I pointed out to Tarus: Amazon owns the .aws TLD - they registered it in 2016 exclusively for AWS services. Yet critical account verification emails come from an ordinary amazon.com address - the same domain sending Black Friday promotions and book recommendations.
As I told him (admittedly while upset): “Maybe a better architecture is to send those warnings from verification+{SecureRandom.uuidv7}@account.aws - like how GitHub does it. When I reply to a GitHub notification, it routes directly to the issue as if I’m logged in. The technology exists, it’s battle-tested by millions of developers. I’m not making this up.”
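To make that concrete, here is a tiny sketch of the pattern I mean. The domain, mailbox, and helper names are hypothetical; the point is the plus-addressed token that lets a reply route back to the case, exactly like GitHub issue notifications:

```ruby
require "securerandom"

# Mint a unique, reply-routable sender for one verification case.
# Nothing here is a real AWS address or API - it's the idea, not the product.
def verification_address(case_id)
  token = SecureRandom.uuid_v7 # Ruby >= 3.3; time-ordered, so it also encodes "when"
  { case_id: case_id, token: token, address: "verification+#{token}@account.aws" }
end

# Route an inbound reply by extracting the token from the To: address,
# then look the case up - the way GitHub threads replies onto issues.
def extract_token(to_address)
  to_address[/\Averification\+([0-9a-f-]+)@account\.aws\z/i, 1]
end

record = verification_address("case-12345")
puts record[:address]                # => verification+<uuid>@account.aws
puts extract_token(record[:address]) # => the same uuid, ready for lookup
```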
The original verification email landed in my Gmail Promotions tab. I only saw it by chance while bored at a clinic. How many developers have lost their accounts because Gmail correctly identified amazon.com emails as promotional content?
This isn’t incompetence - it’s systemic indifference to user experience. They have the perfect domain for critical AWS communications, yet choose to mix account termination notices with marketing emails.
With a .aws sender, I could configure my email to surface these messages in my inbox with specific labels and colors. More importantly, since Amazon owns the TLD, nobody else can register a lookalike domain to send phishing emails from. The .aws domain would guarantee authenticity - only Amazon can send from it.
They could even sign these emails and work with providers like Gmail and Outlook to display them in special layouts - imagine AWS critical notices appearing with a distinct visual treatment that’s impossible to fake. The entire AWS phishing industry would collapse overnight.
Instead, we get critical account notices mixed with book recommendations, all from the same domain that every scammer on earth can spoof.
What This Teaches Us
- Documentation matters - Keep everything, screenshot everything
- One person can make a difference - Tarus proved that
- Corporate systems need human circuit breakers - When automation fails, humans must be able to intervene
- Public pressure helps, but shouldn’t be necessary - I was lucky to have a platform
The Credits and The Future
Tarus directed me to AWS’s Open Source Credits program. Not as compensation, but as recognition that open-source developers deserve support.
More importantly, he proved that AWS isn’t inherently evil - it’s a system that’s grown so large it’s lost sight of the humans it serves. But there are still humans inside trying to make it better.
A Message to AWS
To the nine support agents who couldn’t help: I don’t blame you. I blame the system that turned you into human chatbots, unable to exercise judgment or show empathy.
To Tarus: Thank you for being the human in the machine. You did more in two days than the entire support system managed in twenty.
To AWS leadership: This CoE process is a start. But you need more Tarus Balogs - people who can see past tickets to humans, who can escalate when the system fails, who remember that behind every account is a person’s livelihood.
The Lesson
My data is mostly back - I lost the last few weeks of work because the spaceship hadn’t been set up yet and the encryption keys weren’t synced. But losing weeks is better than losing everything.
My trust isn’t fully restored. What is restored is my faith that even in massive corporations, one person can make a difference.
Tarus asked how I wanted him credited in this story. By name, because the world needs to know that there are still humans in these machines, fighting to make them better.
Sometimes that’s all it takes - one person who gives a damn.
Moving Forward: Terraforming, Not Destruction
As I posted on X/Twitter when my previous blog started going viral - and coincidentally my problem got resolved - I’m not here to destroy an ecosystem. AWS is too big to fail. If it collapsed, the tsunami would devastate the entire tech industry.
AWS doesn’t need destruction. It needs terraforming.
The scammers who exploit AWS? They thrive on logical failures in the system. When AWS doesn’t understand how scammers operate, they overreact - like nuking cities to prevent COVID every time someone sneezes. Simple preventions like the .aws email domain would collapse their entire scheme overnight.
I’ve studied scammer mentality and techniques longer than I’ve used AWS. It’s all patterns. But when you bring in people who don’t understand these patterns, they create policies that punish legitimate users while leaving the actual vulnerabilities wide open.
With more people like Tarus - people who understand both technology and humanity - AWS could become the beloved infrastructure it was in the early 2000s.
The Path Forward
I’m continuing with my gem releases and projects. The difference? Double and triple backups. Distributed across providers. Encrypted with keys I control. Never again will one company hold my digital existence hostage.
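If you want a starting point for that setup, here is a minimal sketch: encrypt locally with a key that never leaves your machine, then push the same blob to more than one provider. Paths, bucket names, and the second endpoint are placeholders, not my actual configuration:

```ruby
require "aws-sdk-s3" # gem install aws-sdk-s3
require "openssl"

# Encrypt a dump with AES-256-GCM using a locally held key.
def encrypt(path, key)
  cipher = OpenSSL::Cipher.new("aes-256-gcm").encrypt
  cipher.key = key
  iv = cipher.random_iv
  ciphertext = cipher.update(File.binread(path)) + cipher.final
  iv + cipher.auth_tag + ciphertext # keep the iv and tag so you can decrypt later
end

key  = File.binread(File.expand_path("~/.keys/backup.key")) # 32 random bytes, never uploaded
blob = encrypt("db_dump.sql", key)

# Two independent S3-compatible targets; any provider with an S3 API works.
targets = [
  Aws::S3::Client.new(region: "eu-west-1"),
  Aws::S3::Client.new(region: "us-east-1", endpoint: "https://s3.other-provider.example")
]

targets.each do |s3|
  s3.put_object(bucket: "my-backups", key: "db_dump.sql.enc", body: blob)
end
```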
Rails Lens is just the beginning. BreakerMachines, ChronoMachines, and dozens more are coming. Because despite everything, I still believe in building tools that help developers.
The lesson isn’t to abandon the cloud. It’s to never trust it completely.
To everyone still fighting AWS support: Document everything, but also remember - somewhere in that machine is a human who might help. You just need to find them.
To Tarus: That meal in Morocco is still on me. Mint tea, tagine, and a long conversation about keeping humanity in technology.
And to AWS: You’ve got the tools, the talent, and now the awareness. The question is - will you use them?
Captain’s Log, Stardate 2025.218 - End Transmission
Captain Seuros, RMNS Atlas Monkey Ruby Engineering Division, Moroccan Royal Naval Service “Per aspera ad astra, per pattern matching ad performance”