A distributed file store built on top of AWS CloudShell's persistent storage.
For fun only. This is a proof-of-concept exploring what's possible with free cloud infrastructure, NAT traversal, and erasure coding. Don't store anything you can't afford to lose — CloudShell environments are ephemeral and AWS could change the rules at any time.
Every AWS region gives you a free CloudShell environment with ~1GB of persistent disk. Across all supported regions, that's a decent amount of free, globally distributed storage. What if you could stitch them together into a single fault-tolerant file store?
That's what this does. Files are split into chunks, erasure-coded so any 6 of 9 shards can rebuild the data, optionally encrypted, and scattered across CloudShell nodes worldwide. Lose 3 entire AWS regions? Your files are fine.
There are four key problems to solve: getting a shell, punching through NAT, moving data, and surviving failures.
CloudShell has no public API for programmatic access. By inspecting the browser's network requests, an internal JSON API was found at cloudshell.{region}.amazonaws.com, authenticated with standard AWS SigV4 signatures. This allows programmatically creating environments, starting sessions, and sending heartbeats to keep them alive — all without touching the AWS console.
Once session credentials are obtained, session-manager-plugin (AWS's SSM client) provides an interactive shell. Through this shell a Python agent is uploaded and started.
The laptop and CloudShell are both behind NAT — neither can directly reach the other. A classic UDP hole-punching technique solves this:
```
┌──────────┐         STUN          ┌──────────┐
│  Laptop  │ ◄─────discover─────►  │  Agent   │
│          │    public IP:port     │(CS node) │
└────┬─────┘                       └────┬─────┘
     │                                  │
     │  1. Both STUN to learn public endpoints
     │  2. Laptop tells agent where to punch (via shell)
     │  3. Both send UDP packets to each other simultaneously
     │  4. NAT mappings open in both directions
     │  5. QUIC connection established over the punched hole
     │                                  │
     └──────────── QUIC/UDP ────────────┘
          (encrypted, multiplexed)
```
QUIC runs over the punched UDP hole, providing multiplexed streams, TLS encryption, and keepalive packets every 5 seconds to prevent NAT mappings from expiring. If a connection dies, the daemon detects it and automatically re-bootstraps the agent through the still-alive SSM session.
Files are split into 1MB chunks. Each chunk is Reed-Solomon encoded into 9 shards (6 data + 3 parity):
```
File ──► Chunk 0 ──► RS(6,3) ──► S0 S1 S2 S3 S4 S5 S6 S7 S8
         Chunk 1 ──► RS(6,3) ──► S0 S1 S2 S3 S4 S5 S6 S7 S8
         Chunk 2 ──► RS(6,3) ──► ...
                                 │  │  │  │  │  │  │  │  │
                                 ▼  ▼  ▼  ▼  ▼  ▼  ▼  ▼  ▼
                      distributed round-robin across regions
                   (offset per chunk so all regions get shards)
```
Any 6 of the 9 shards can reconstruct a chunk — meaning 3 entire regions can go down with no data loss. Shards rotate across regions per chunk, ensuring even distribution.
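The per-chunk rotation is just modular arithmetic. A minimal sketch of one plausible placement rule — the exact offset formula is an assumption, since the text only says placement is round-robin with a per-chunk offset:

```go
package main

import "fmt"

const dataShards, parityShards = 6, 3

// regionFor maps shard s of chunk c to a region index, rotating by the
// chunk index so parity shards don't pile up on the same regions.
// (Assumed formula; the real daemon may rotate differently.)
func regionFor(chunk, shard, nRegions int) int {
	return (shard + chunk) % nRegions
}

func main() {
	for c := 0; c < 3; c++ {
		fmt.Printf("chunk %d:", c)
		for s := 0; s < dataShards+parityShards; s++ {
			fmt.Printf(" S%d→r%d", s, regionFor(c, s, 9))
		}
		fmt.Println()
	}
}
```

With 9 shards over 9 regions, losing any 3 regions costs exactly 3 shards per chunk, which RS(6,3) tolerates.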
With --key, every shard is encrypted with AES-256-GCM before leaving the laptop. Each shard gets a unique nonce. CloudShell nodes store opaque ciphertext.
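The per-shard encryption maps directly onto Go's standard library. A sketch of the seal/open pair — deriving the key with a single SHA-256 of the passphrase is an assumption to keep it short (a real tool would use a proper KDF like scrypt or argon2), and prepending the nonce to the ciphertext is one common convention:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// sealShard encrypts one shard with AES-256-GCM. A fresh random nonce is
// generated per shard and prepended to the ciphertext.
func sealShard(passphrase string, shard []byte) ([]byte, error) {
	key := sha256.Sum256([]byte(passphrase)) // assumed KDF; use scrypt/argon2 for real
	block, err := aes.NewCipher(key[:])
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, shard, nil), nil
}

// openShard splits off the nonce and decrypts; GCM's auth tag also catches
// any tampering by a node.
func openShard(passphrase string, sealed []byte) ([]byte, error) {
	key := sha256.Sum256([]byte(passphrase))
	block, err := aes.NewCipher(key[:])
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	n := gcm.NonceSize()
	return gcm.Open(nil, sealed[:n], sealed[n:], nil)
}

func main() {
	ct, _ := sealShard("my secret passphrase", []byte("shard bytes"))
	pt, _ := openShard("my secret passphrase", ct)
	fmt.Println("roundtrip ok:", string(pt) == "shard bytes")
}
```

Because GCM is authenticated, a node that corrupts or swaps a shard fails decryption outright rather than returning garbage.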
A background integrity checker runs every 10 minutes, spot-checking random shards across all nodes and flagging any that have gone missing.
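A spot check reduces to re-hashing fetched shard bytes against digests recorded at upload. This sketch shows the shape of that loop; the manifest layout, shard IDs, and `fetch` callback are hypothetical stand-ins for the real agent RPC:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"math/rand"
)

func digest(b []byte) string {
	s := sha256.Sum256(b)
	return hex.EncodeToString(s[:])
}

// verifyShard re-hashes shard bytes fetched from a node and compares them
// against the digest recorded at upload time.
func verifyShard(data []byte, wantHex string) bool {
	return digest(data) == wantHex
}

// spotCheck samples up to k shards from the manifest and returns the IDs
// that fail verification. (Hypothetical manifest shape and fetch callback.)
func spotCheck(manifest map[string]string, fetch func(id string) []byte, k int) []string {
	ids := make([]string, 0, len(manifest))
	for id := range manifest {
		ids = append(ids, id)
	}
	rand.Shuffle(len(ids), func(i, j int) { ids[i], ids[j] = ids[j], ids[i] })
	if k > len(ids) {
		k = len(ids)
	}
	var bad []string
	for _, id := range ids[:k] {
		if !verifyShard(fetch(id), manifest[id]) {
			bad = append(bad, id)
		}
	}
	return bad
}

func main() {
	store := map[string][]byte{"f/c0/s3": []byte("shard A"), "f/c0/s4": []byte("shard B")}
	manifest := map[string]string{"f/c0/s3": digest(store["f/c0/s3"]), "f/c0/s4": digest(store["f/c0/s4"])}
	store["f/c0/s4"] = nil // simulate a reclaimed CloudShell environment
	fmt.Println("flagged:", spotCheck(manifest, func(id string) []byte { return store[id] }, 10))
}
```

A flagged shard is recoverable as long as 6 of its 9 siblings survive: decode the chunk, re-encode, and re-place the missing shard.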
```
go build -o cs-daemon ./cmd/daemon

# Pick your regions
./cs-daemon --regions us-east-1,us-west-2,eu-west-1 --addr :8080

# Or all default regions
./cs-daemon --regions all --addr :8080

# With encryption
./cs-daemon --regions all --addr :8080 --key "my secret passphrase"
```

Open http://localhost:8080. Upload files by drag-and-drop. Click a file to see its shard distribution across regions. Delete individual shards to simulate failures, then repair. Add new regions and redistribute.
- Go 1.22+
- AWS credentials (~/.aws/credentials or environment variables)
- session-manager-plugin
- Ephemeral — CloudShell environments sleep after inactivity and can be reclaimed
- ~1GB per region — the persistent storage quota
- Bandwidth — bottlenecked by CloudShell's network and your home connection
- NAT dependent — UDP hole punching works with most residential NATs but will fail with symmetric NAT (common in corporate networks). Aggressive NATs may also drop mappings despite keepalive
- Single writer — no concurrent upload coordination
Sometimes you just want to know if something is possible. Can free cloud shells become an erasure-coded, encrypted storage network spread across the globe? Turns out they can. It's not practical for anything serious, but it was a fun weekend rabbit hole.
