Mothership: One Process, Multiple Services, Zero Platform Drama

You have a Rails app. It needs background jobs. Maybe a separate API service. Traditional wisdom says: spin up 4 containers. One for web, one for workers, one for this, one for that.
Heroku charges per dyno. AWS bills per container. Your docker-compose.yml grows. Your deployment pipeline becomes a ceremony.
What if the platform saw one process, but internally you ran everything?
The Runtime Tax¶
Every process supervisor has dependencies:
- Foreman: Needs Ruby
- PM2: Needs Node.js
- systemd: Needs… systemd (FreeBSD? Alpine with OpenRC? Good luck)
- Kubernetes: Needs a cluster, a PhD, and three YAML files per service
You’re deploying a Go binary. Why install Ruby just to supervise it? You’re running on FreeBSD. Where’s your systemd now?
Mothership is a single static binary. No runtime. Works on Linux, FreeBSD, macOS—any Unix.
And the billing problem: Heroku sees each Foreman process as a separate dyno. AWS ECS sees each service as a separate container. They’ll call it “best practice.” Best practice for them, not for you.
You need a background worker to send 1000 emails per hour. Why pay $7/month for a separate dyno? Let it run inside your main process. Let it crash and respin. When you’re getting millions of visitors per day, hire a team for that. Until then, Mothership presents one process to the platform while running multiple services internally.
Mothership: What It Actually Does¶
One binary. One config. Platforms see one process.
Internally, Mothership:
- Launches your services in dependency order (database connections before web servers)
- Routes HTTP/WebSocket traffic to the right internal service
- Runs health checks that verify services can actually handle requests
- Restarts crashed processes with circuit breaker protection
- Coordinates across servers when you scale horizontally
```toml
# mothership.toml - your entire deployment
[config]
bind = "0.0.0.0:8080"

[[ships]]
name = "web"
command = "bundle exec puma -p 3000"
routes = [{ pattern = "/*" }]
healthcheck = { path = "/health", interval = 30 }

[[ships]]
name = "workers"
command = "bundle exec good_job"
depends_on = ["web"]
```
Run mothership. That’s it. Heroku sees one process. Your services run internally.
The Deployment Platform Problem¶
Different platforms have different opinions:
| Platform | Expectation | Traditional Solution |
|---|---|---|
| Heroku | 1 process per dyno | Pay for multiple dynos |
| AWS ECS | 1 container per service | Multiple task definitions |
| Bare metal | Whatever you want | systemd + nginx + glue scripts |
| Fly.io | 1 process per machine | Multiple machines |
| FreeBSD | Whatever you want | Docker doesn’t exist here |
With Mothership, you write one config. Deploy anywhere. The platform sees one well-behaved process. Internally, your entire stack runs coordinated.
```mermaid
graph TB
    subgraph "What Platforms See"
        MS[Mothership Process<br/>Port 8080]
    end
    subgraph "What's Actually Running"
        Web[Web Server<br/>Port 3000]
        Workers[Background Jobs]
        API[Internal API<br/>Port 3001]
    end
    MS -->|routes /api/*| API
    MS -->|routes /*| Web
    MS -->|manages| Workers
```
Ships and Bays: Two Modes¶
Ships are traditional processes. They bind to ports. Mothership proxies traffic to them.
Bays speak the docking protocol. They connect to Mothership via Unix socket. Thousands of connections multiplex over one socket. No port management. No connection overhead per client.
```toml
# Ship - binds its own port
[[ships]]
name = "api"
command = "./api-server --port 4000"
routes = [{ pattern = "/api/*" }]

# Bay - uses docking protocol
[[bays]]
name = "websocket-handler"
command = "./ws-handler"
routes = [{ bind = "ws", pattern = "/live" }]
```
The docking protocol is optional. Ships work fine for most services. Bays matter when you’re handling thousands of concurrent WebSocket connections.
User-Agent Routing: Different Backends for Different Clients¶
LLMs don’t want your inline JavaScript, CSS, or 30KB of Tailwind classes. They want markdown. Browsers want the full app. Bots want static cache.
Why serve the same thing to everyone?
```toml
# Browser traffic → full Rails app
[[fleet.web]]
name = "app"
command = "bundle exec puma"
routes = [{ bind = "http", pattern = "/.*", ua_filter = "browser" }]

# LLM traffic → clean markdown (no JS/CSS/Tailwind)
[[fleet.web]]
name = "markdown-api"
command = "./markdown-server"
routes = [{ bind = "http", pattern = "/.*", ua_filter = "llm" }]

# Bot traffic → static cache
[[fleet.web]]
name = "nginx-cache"
command = "nginx"
routes = [{ bind = "http", pattern = "/.*", ua_filter = "bot" }]
```
Available filters:
- `browser` - Chromium, Firefox, Safari
- `llm` - Claude/Anthropic, GPT, Perplexity
- `bot` - Crawlers, curl, wget
- `~pattern` - Custom regex (e.g., `~MyAgent.*`)
Routes match in order. Put specific filters before catch-alls. Your LLM backend serves /blog/post as pure markdown. Your browser backend serves the same URL with full UI.
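The `~pattern` form covers clients the built-in filters don't. A sketch using the syntax documented above (the agent name and backend command here are hypothetical), with the specific filter listed before the catch-all:

```toml
# Hypothetical: route a custom agent, matched by regex, to its own backend
[[fleet.web]]
name = "agent-api"
command = "./agent-server"
routes = [{ bind = "http", pattern = "/.*", ua_filter = "~MyAgent.*" }]

# Catch-all for regular browsers comes last
[[fleet.web]]
name = "app"
command = "bundle exec puma"
routes = [{ bind = "http", pattern = "/.*", ua_filter = "browser" }]
```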
Shields: HTTP Fingerprinting¶
Bots lie about their User-Agent. Headless browsers pretend to be Chrome. How do you actually know who’s hitting your site?
Mothership computes a JA4H fingerprint for every request. JA4H analyzes HTTP header order and values, characteristics that spoofing clients can't easily fake.
```
DEBUG method=GET path=/ ua=Chrome shields=ge11nn06enus_...
```
The fingerprint is logged with each request. Spoofed User-Agent? The fingerprint reveals the truth.
Coming soon: Shield-based routing to block or redirect suspicious fingerprints automatically.
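For intuition only (this is not the real JA4H computation): header *order* is a side channel that a spoofed User-Agent string doesn't touch. A toy version:

```python
import hashlib

def toy_header_fingerprint(headers):
    """Toy illustration (NOT real JA4H): hash the *order* of header
    names, ignoring the User-Agent value entirely."""
    order = ",".join(name.lower() for name, _ in headers)
    return hashlib.sha256(order.encode()).hexdigest()[:12]

# Real Chrome and a script claiming to be Chrome send headers in
# different orders, so their fingerprints differ even though both
# say "User-Agent: Chrome".
chrome = [("Host", "x"), ("User-Agent", "Chrome"), ("Accept", "*/*")]
spoofed = [("User-Agent", "Chrome"), ("Accept", "*/*"), ("Host", "x")]
assert toy_header_fingerprint(chrome) != toy_header_fingerprint(spoofed)
```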
Prometheus Metrics: Don’t Fly Blind¶
Motherships don’t fly blind. That’s how you end up in a black hole.
```toml
[mothership]
metrics_port = 9090
```
Scrape http://127.0.0.1:9090/metrics for telemetry:
```
mothership_ship_status{ship="app",group="web"} 1
mothership_ship_healthy{ship="app",group="web"} 1
mothership_ship_restarts_total{ship="app",group="web"} 0
mothership_requests_total{route="/api"} 1234
mothership_fleet_ships_total 3
```
Also serves /health for liveness probes. No separate exporter. No additional processes. Just telemetry.
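On the Prometheus side, pointing a scrape job at that port is the standard configuration (the `job_name` is arbitrary):

```yaml
# prometheus.yml
scrape_configs:
  - job_name: "mothership"
    static_configs:
      - targets: ["127.0.0.1:9090"]
```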
Static Files + Compression¶
Serve assets directly. No separate nginx. Multiple directories supported.
```toml
[[mothership.static_dirs]]
path = "./public/assets"
prefix = "/assets"

[[mothership.static_dirs]]
path = "./public"
prefix = "/"

[mothership]
compression = true # gzip, deflate, brotli
```
Longest prefix wins. File not found? Falls through to your ships. No 404, just routing. Compression happens automatically based on Accept-Encoding.
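"Longest prefix wins" fits in a few lines; this is my sketch of the matching rule, not Mothership's actual routing code:

```python
def match_static_dir(path, dirs):
    """Pick the static dir whose prefix is the longest match for the
    request path, or None to fall through to the ships."""
    candidates = [prefix for prefix in dirs if path.startswith(prefix)]
    return dirs[max(candidates, key=len)] if candidates else None

# Mirrors the config above: /assets is more specific than /
dirs = {"/assets": "./public/assets", "/": "./public"}
assert match_static_dir("/assets/app.js", dirs) == "./public/assets"
assert match_static_dir("/index.html", dirs) == "./public"
```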
CORS Preflight Cache¶
Browsers send OPTIONS requests before every cross-origin call. Your backend doesn’t need to handle all of them.
```toml
[mothership.cors_cache]
enabled = true
default_ttl = 3600
max_entries = 10000
```
Mothership caches CORS preflight responses per origin. Second browser making the same request? Cached. Your backend never sees it.
Cache key: origin + path + request method + headers. TTL from backend’s Access-Control-Max-Age.
I was receiving 100k+ OPTIONS requests per day in my Rails apps. Now I receive each unique CORS request once. The rest hit cache.
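The cache key described above, sketched in Python (illustration only; Mothership's exact key derivation may differ):

```python
import hashlib

def preflight_cache_key(origin, path, method, req_headers):
    """Toy CORS preflight cache key: origin + path + requested
    method + requested headers, normalized so header order and
    case don't produce spurious misses."""
    headers = ",".join(sorted(h.lower() for h in req_headers))
    raw = f"{origin}|{path}|{method.upper()}|{headers}"
    return hashlib.sha256(raw.encode()).hexdigest()

# Two identical preflights share a key; the second one hits the cache.
k1 = preflight_cache_key("https://app.example.com", "/api/data", "POST", ["Content-Type"])
k2 = preflight_cache_key("https://app.example.com", "/api/data", "post", ["content-type"])
assert k1 == k2
```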
Dependency Ordering That Works¶
Services start in the right order. Always.
```toml
# Prelaunch - runs before any ships start
[[mothership.prelaunch]]
name = "migrations"
command = "bundle exec rails db:migrate"

# Ships start after prelaunch completes
[[ships]]
name = "web"
command = "bundle exec puma"

[[ships]]
name = "workers"
command = "bundle exec good_job"
depends_on = ["web"] # Wait for web to be healthy
```
```mermaid
graph LR
    M[migrations] --> W[web]
    W --> WK[workers]
    style M fill:#1e3a5f,stroke:#38bdf8
    style W fill:#1e3a5f,stroke:#38bdf8
    style WK fill:#1e3a5f,stroke:#38bdf8
```
Prelaunch jobs run first. Ships start only after prelaunch completes. Workers wait for web to be healthy. No race conditions. No manual coordination.
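The launch order is just a topological sort of the `depends_on` graph; a minimal sketch using Python's standard library (my illustration, not Mothership's code):

```python
from graphlib import TopologicalSorter

# Edges from the config above: web waits on the migrations prelaunch,
# workers wait on web being healthy.
deps = {
    "migrations": set(),
    "web": {"migrations"},
    "workers": {"web"},
}
# For a chain like this the order is forced: migrations, web, workers.
assert list(TopologicalSorter(deps).static_order()) == ["migrations", "web", "workers"]
```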
Uplinks: Pre-flight Checks¶
Before launching anything, verify external dependencies are reachable:
```toml
[[mothership.uplinks]]
url = "postgres://localhost:5432/mydb"
name = "database"
timeout = "30s"

[[mothership.uplinks]]
url = "$REDIS_URL"
name = "redis"
timeout = "10s"
```
Run mothership preflight to validate config and verify uplinks. In production, Mothership checks uplinks on every launch. Database down? You’ll know before anything starts, not after half your services crash.
Multi-Server: Flagship Election¶
Scaling to multiple servers? One server needs to run migrations. Others need to wait.
Mothership handles this with flagship election:
```toml
[flagship]
enabled = true
database_url = "postgres://..."
lock_name = "myapp_deploy"
```
```mermaid
sequenceDiagram
    participant S1 as Server 1
    participant S2 as Server 2
    participant S3 as Server 3
    participant DB as PostgreSQL
    S1->>DB: Request advisory lock
    S2->>DB: Request advisory lock
    S3->>DB: Request advisory lock
    DB-->>S1: Lock granted (Flagship)
    DB-->>S2: Lock denied (wait)
    DB-->>S3: Lock denied (wait)
    S1->>S1: Run migrations
    S1->>DB: Release lock
    S2->>S2: Start services
    S3->>S3: Start services
```
Server 1 becomes flagship, runs migrations. Servers 2 and 3 wait. When migrations complete, everyone starts. No Consul. No etcd. Just PostgreSQL advisory locks.
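The primitive under flagship election is a plain PostgreSQL advisory lock, which behaves like a cluster-wide try-lock. This is the generic PostgreSQL mechanism, not necessarily the exact query Mothership issues:

```sql
-- Only one session cluster-wide gets 'true' for a given key;
-- everyone else gets 'false' and waits.
SELECT pg_try_advisory_lock(hashtext('myapp_deploy'));

-- Flagship runs migrations, then releases:
SELECT pg_advisory_unlock(hashtext('myapp_deploy'));
```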
Circuit Breakers, Not Restart Loops¶
Process crashes? Mothership doesn’t blindly restart forever.
```
Crash 1: Wait 1s, restart
Crash 2: Wait 2s, restart
Crash 3: Wait 4s, restart
Crash 4: Wait 8s, restart
Crash 5: Circuit OPEN - stop trying for 5 minutes
```
After cooling off, try once more. Success? Circuit closes. Failure? Wait again. Your crashed service doesn’t hammer the system while it’s broken.
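The schedule above amounts to exponential backoff with a cap; a sketch in Python (my illustration, not Mothership's implementation; the 5-minute cooldown is the value from the example above):

```python
def restart_delay(crash_count, max_crashes=5, cooldown=300.0):
    """Exponential backoff: 1s, 2s, 4s, 8s. At max_crashes the
    circuit opens: wait out the cooldown, then allow one probe."""
    if crash_count >= max_crashes:
        return cooldown  # circuit OPEN: 5-minute cooldown before a probe
    return float(2 ** (crash_count - 1))

assert [restart_delay(n) for n in range(1, 5)] == [1.0, 2.0, 4.0, 8.0]
assert restart_delay(5) == 300.0
```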
WASM Plugins (Optional)¶
Need custom logic at the proxy layer? Write it in any language that compiles to WebAssembly:
```toml
[[plugins]]
name = "rate-limiter"
path = "./plugins/rate_limit.wasm"
config = { requests_per_second = 100 }
```
Auth, rate limiting, request transformation—without forking Mothership or waiting for features.
Who This Is For¶
You need Mothership if:
- You’re tired of paying for multiple dynos/containers for one app
- Your deployment differs between Heroku, AWS, and bare metal
- You run FreeBSD and Docker isn’t an option
- You want dependency ordering without writing shell scripts
- You need health checks that actually verify service readiness
- You’re scaling to multiple servers and need migration coordination
You don’t need Mothership if:
- You have one process and it’s fine
- You’re happy managing Kubernetes
- Your platform handles all this already (and you’re not paying through the nose)
Install¶
```sh
cargo install mothership

# Or download binary
curl -L https://github.com/seuros/mothership/releases/latest/download/mothership-$(uname -s)-$(uname -m).tar.gz | tar xz
```
Initialize a config:
```sh
mothership init
```
Launch your fleet:
```sh
mothership
```
The Real Reason¶
I was deploying the same app to Heroku (for staging), AWS (for production), and bare metal (for development). Three different deployment configurations. Three different ways to coordinate services. Three different billing models punishing me for having background jobs.
Mothership is the deployment tool I wanted: one config, any platform, one process that does everything.
Thank You¶
This project wouldn’t exist without Rama. Glen did all the hard work building a production-ready proxy framework. I just assembled the pieces like playing with Lego.
I originally built features on top of Pingora, but it didn’t support FreeBSD. Shelved the project for 6 months. Then I found Rama—which worked on FreeBSD and was a better solution anyway.
All credit for the proxy layer goes to the Rama team.
One process. Multiple services. Zero platform drama.
The bridge is yours.
Related Posts¶
- Blackship: A FreeBSD Jail Orchestrator That Understands State — For when you need jail-level isolation, not just process management
- Blackship Architecture: State Machines, Dependency Graphs, and Resilience Patterns — The architectural patterns shared across the fleet