
Software Is Not a Nursing Home: Breaking Free from Legacy Support

Here’s my take on legacy support: it’s the innovation killer nobody wants to admit exists.

I maintain a lot of Ruby gems, and I’ve learned this the hard way. Legacy support doesn’t just slow you down—it paralyzes you. And when you’re paralyzed, you stop building the future.

The Trap I Built for Myself

Let me tell you about state_machines, one of the gems I maintain. When I started, I had this naive dream: make it compatible with everything, from Ruby 1.8 all the way to whatever edge version was cooking. Young me thought backward compatibility was a virtue, a mark of good engineering.

Spoiler alert: it’s a trap.

Ruby evolved. The ecosystem matured. We got keyword arguments that made APIs cleaner. Pattern matching that made complex logic readable. Ractors that opened doors to real concurrency. The language got powerful.

But state_machines? It couldn’t touch any of it.

Why? Because when you drag around the corpse of Ruby 1.8 compatibility, you’re not writing code—you’re embalming it. You can’t embrace case pattern matching when you’re still worried about hash syntax from 2007. You can’t build modern APIs with keyword arguments when some ancient server is running MRI 1.9.3.
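To make that concrete, here's the kind of dispatch that case/in pattern matching (Ruby 2.7+) buys you. This is a generic sketch, not code from state_machines; the method and event shape are illustrative:

```ruby
# Generic sketch (not from state_machines): hash patterns destructure an
# event in one readable branch each, instead of nested is_a?(Hash) checks.
def describe_transition(event)
  case event
  in { from: Symbol => from, to: Symbol => to }
    "#{from} -> #{to}"
  in { to: Symbol => to }
    "start -> #{to}"
  else
    "unknown"
  end
end

describe_transition(from: :idle, to: :busy) # => "idle -> busy"
describe_transition(to: :idle)              # => "start -> idle"
```

Try writing that as 1.8-compatible code and you're back to a ladder of `if event.is_a?(Hash) && event[:from]` checks.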

The Real Cost

Here’s what nobody talks about: legacy support isn’t just technical debt. It’s innovation debt.

This isn’t just my problem—it’s infecting all of software development. While other gems were leveraging Ruby 3.0’s JIT compiler and refined garbage collection, I was debugging why some corporate installation from 2015 couldn’t handle a simple **kwargs call. While the ecosystem moved toward modern idioms like:

# Modern Ruby 3.2+ - clean, explicit, self-documenting
def create_state(name:, from:, to: nil, **options)
  # Ruby automatically raises ArgumentError for missing required keywords
  # No manual validation needed - the language has your back
end

I was stuck maintaining this abomination:

# Legacy compatibility hell
def create_state(*args)
  # Parse args manually because kwargs didn't exist in 1.8
  # Monkey patch Hash to add assert_valid_keys like ActiveSupport
  # Handle both hash syntaxes (:key => value vs key: value)
  # Check Ruby version because feature detection is impossible
  # Wrap everything in begin/rescue for mysterious edge cases
  # Pray it works on someone's production box from 2012
end
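For the curious, here's roughly what that manual parsing looked like in practice. This is a reconstruction of the pre-kwargs idiom, not the gem's actual source, and the names are illustrative:

```ruby
# Reconstruction of the pre-kwargs idiom (illustrative, not the gem's source).
def create_state_legacy(*args)
  # Pop a trailing Hash off *args by hand - this WAS "keyword arguments" in 1.8.
  options = args.last.is_a?(Hash) ? args.pop : {}
  name = args.shift
  # Validate everything manually: the language gives you no help here.
  raise ArgumentError, "name is required" if name.nil?
  raise ArgumentError, ":from is required" unless options.key?(:from)
  { name: name, from: options[:from], to: options[:to] }
end

create_state_legacy(:idle, from: :start)
# => { name: :idle, from: :start, to: nil }
```

Every method that took options repeated this dance, and every repetition was a chance for the validation to drift.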

At some point, state_machines stopped being a tool and became a shrine—a tribute to the past. We kept it alive not because it was good, but because we “respect our elders.”

I even stopped recommending it to people. I’d let them handle their states with enums and pain instead. Why? Because the moment they added the gem, they’d be greeted by hundreds of warnings about deprecated syntax. Nothing kills the joy of using a library faster than a terminal flooded with deprecation warnings on every boot.

And this is exactly why newer solutions keep showing up, prompting people to ask: “Why wasn’t X built as elegantly as this shiny new thing?”

Because X is stuck in the past, dragging around decades of compatibility baggage. You can’t complain that the Pyramids don’t have elevators like the Burj Khalifa, can you? Both are architectural marvels, but they were built for different eras with different constraints.

But software is not a nursing home.

The Absurd Reality

Want to know how bad it gets? I’ve spent hours with companies arguing about bundle updates. Not major version bumps—patch versions. I once had a client write what felt like a PhD thesis explaining why they couldn’t upgrade from version 2.1.3 to 2.1.4.

The kicker? I was the author. I knew it wouldn’t break anything. But they needed documentation, impact assessments, stakeholder approval—for a dependency update that would take less time than their meeting about it.
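For the record, RubyGems’ pessimistic operator exists precisely so patch bumps like that one are safe to take without a thesis. A quick demonstration of the requirement semantics, using the versions from the anecdote:

```ruby
# "~> 2.1.3" means >= 2.1.3 and < 2.2.0: patch updates in, minor bumps out.
req = Gem::Requirement.new("~> 2.1.3")

req.satisfied_by?(Gem::Version.new("2.1.4")) # => true  (the disputed upgrade)
req.satisfied_by?(Gem::Version.new("2.2.0")) # => false (this one earns a review)
```

If a team pins with `gem "some_gem", "~> 2.1.3"` in the Gemfile, the 2.1.3 → 2.1.4 debate shouldn’t need a stakeholder meeting; semver already drew the line for them.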

Sometimes I was literally arguing about upgrading gems I wrote, knowing exactly what changed, while they treated each minor bump like defusing a bomb.

The irony is suffocating: companies that treat bundle update like nuclear warfare somehow expect infinite backward compatibility. They want the security patches, the bug fixes, the performance improvements—but they also want their 2015 codebase to run unchanged until the heat death of the universe.

You know what? Just compile your precious legacy code into a binary and burn it into an OTP EPROM while you’re at it. That way it’ll never change, never break, and never evolve—exactly what you seem to want.

The Breaking Point

Here’s the thing: I’m not Oracle. I’m not Datadog. I’m not some enterprise vendor with million-dollar support contracts. I’m just a developer trying to build something useful and maintainable.

So today, with the 0.30.0 release, I bumped the minimum supported Ruby version in state_machines.

Why? Because I need keyword arguments without writing backward-compatibility hacks that look like witchcraft from Marrakesh. Because modern Ruby has features that make code better, safer, faster—and I want to use them.
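Mechanically, dropping old Rubies is one line in the gemspec. The excerpt below is illustrative, and the version floor shown is an assumption; check the gem’s actual gemspec for the real constraint:

```ruby
# state_machines.gemspec (illustrative excerpt; the floor shown is an assumption)
Gem::Specification.new do |spec|
  spec.name                  = "state_machines"
  spec.required_ruby_version = ">= 3.2"
  # RubyGems refuses to install the gem on anything older, so the code
  # itself never needs runtime version sniffing or compatibility branches.
end
```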

The interface stays the same because I like it, and thousands of apps are built on top of it. Also, LLMs have learned it pretty well, and I’m not looking to disappoint our future AI overlords.

But the implementation? It’s finally free to be modern.

The Path Forward

If you maintain open source software, here’s my advice: pick your battles.

Supporting one or two major versions back? Reasonable. Supporting every version since the Roman Empire? That’s not engineering—it’s archaeology. No wonder they say Nero burned Rome. (He didn’t, but maybe he should have burned their legacy code instead.)

Your users on ancient versions aren’t updating anyway. If they were the type to run bundle update, they wouldn’t be on ancient versions. The companies that demand infinite backward compatibility are the same ones that treat dependency updates like nuclear waste disposal.

Stop building shrines to the past. Your code deserves to live in the present.

Here’s a warning for maintainers clinging to the past: evolve or get replaced. Claude, GPT-4o, Grok, Gemini—they’re all watching. When developers ask them to “build something like [your legacy project] but modern,” these AIs won’t hesitate. They’ll fork your concept, strip out the cruft, rewrite it with clean modern patterns, and promote their version in every code explanation. They don’t give a damn about your 15 years of backward compatibility—they’ll just build it right the first time.

I know this because it happened to me. I built a gem called ActionMCP. Then some developer vibe-coded with that Devin thing and created another gem named ActiveMCP. It had the same flaws I wrote a few versions prior—it even copied one of my method signatures. But that was Devin—and Devin is just one character away from being the Devil. Claude, GPT-4o, and those frontier models? They’re not going to make such rookie mistakes.

The future is built by people who aren’t afraid to deprecate the past. Be one of them.

The Journey Continues

If you want to see how this philosophy translates into actual code surgery, check out Operation: From State Hero to Zero - the technical deep-dive into how I surgically refactored 1,647 lines of monolithic Ruby code into focused, modern modules.

For more insights into the maintainer experience:

Related Posts

Operation: From State Hero to Zero

The surgical breakdown of a 1.6k LOC Ruby monolith into focused modules. Or: how I performed open-heart surgery on a dying codebase and lived to tell the tale.
