FastAPI backend for tax-benefit policy microsimulations using PolicyEngine's UK and US models.
- Level 2 (Reports): AI-generated documents (future)
- Level 1 (Analyses): operations on simulations (`economy_comparison_*`)
- Level 0 (Simulations): single world-state calculations (`simulate_household_*`, `simulate_economy_*`)
See docs/DESIGN.md for the full design including future endpoints.
- Client submits request to FastAPI (Cloud Run)
- API resolves the country package version → versioned Modal app name via Modal Dicts
- API creates job record in Supabase and spawns a function on the versioned Modal app
- Modal runs calculation with pre-loaded PolicyEngine models (sub-1s cold start)
- Modal writes results directly to Supabase
- Client polls API until job status = "completed"
Each deploy creates a versioned Modal app named `policyengine-v2-us{X}-uk{Y}` (e.g., `policyengine-v2-us1-592-4-uk2-75-1`). Old versions remain deployed and accessible. Cloud Run routes to the correct version via v2-specific Modal Dict registries (`api-v2-us-versions`, `api-v2-uk-versions`).
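The naming scheme can be inferred from the example above: dots in the country-package versions become dashes. A sketch of that mapping (an inference from the example, not the deploy script's actual code):

```python
def versioned_app_name(us_version: str, uk_version: str) -> str:
    """Build the Modal app name for a (US, UK) package-version pair.

    Inferred from the example policyengine-v2-us1-592-4-uk2-75-1, which
    corresponds to policyengine-us 1.592.4 + policyengine-uk 2.75.1.
    """
    us = us_version.replace(".", "-")  # dots replaced with dashes in the observed scheme
    uk = uk_version.replace(".", "-")
    return f"policyengine-v2-us{us}-uk{uk}"
```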
Key files:
- `src/policyengine_api/modal/app.py` - versioned app definition (dynamic name from env vars)
- `src/policyengine_api/modal/images.py` - country images with exact version pins (`==`)
- `src/policyengine_api/modal/deploy.py` - entry point for `modal deploy`
- `src/policyengine_api/version_resolver.py` - resolves country+version to Modal app name
- `scripts/update_version_registry.py` - updates Modal Dicts after deploy
- `.github/scripts/modal-deploy-versioned.sh` - deploy script (generates app name, deploys, updates registry)
Deploy: `POLICYENGINE_US_VERSION=X POLICYENGINE_UK_VERSION=Y .github/scripts/modal-deploy-versioned.sh <environment>`
| Function | Purpose |
|---|---|
| `simulate_household_uk` | Single UK household calculation |
| `simulate_household_us` | Single US household calculation |
| `simulate_economy_uk` | Single UK economy simulation |
| `simulate_economy_us` | Single US economy simulation |
| `economy_comparison_uk` | UK economy comparison (decile impacts, budget impact) |
| `economy_comparison_us` | US economy comparison |
- Framework: FastAPI with async endpoints
- Database: Supabase (Postgres) via SQLModel
- Compute: Modal.com serverless functions
- Package manager: UV
- Formatting: Ruff
- Testing: Pytest with pytest-asyncio
- Deployment: Terraform on GCP Cloud Run
```bash
make install           # install dependencies with uv
make dev               # start supabase + api via docker compose
make test              # run unit tests
make integration-test  # full integration tests
make format            # ruff formatting
make lint              # ruff linting with auto-fix
make modal-deploy      # deploy Modal.com serverless functions
```

- `src/policyengine_api/api/` - FastAPI routers
- `src/policyengine_api/models/` - SQLModel database models
- `src/policyengine_api/services/` - database and storage services
- `src/policyengine_api/modal/` - versioned Modal.com serverless functions
- `src/policyengine_api/version_resolver.py` - version → Modal app name resolution
- `supabase/migrations/` - SQL migrations
- `terraform/` - GCP Cloud Run infrastructure
- `docs/` - Next.js docs site + DESIGN.md
SQLModel for database schemas, Pydantic `BaseModel` for request/response schemas. All calculation endpoints are async (submit job → poll for results). Modal functions use the Supabase connection pooler for IPv4 compatibility. Analysis logic lives in the `policyengine` package; the API is a thin orchestration layer.
Never commit directly to main. PRs trigger tests; merging to main deploys to Cloud Run via Terraform.
Use gh CLI for GitHub operations to ensure Actions run correctly.
This project uses Alembic for database migrations. See .claude/skills/database-migrations.md for detailed guidelines.
Key rules:
- All schema changes go through Alembic migrations (never use `SQLModel.metadata.create_all()`)
- After modifying a model: `uv run alembic revision --autogenerate -m "Description"`
- Apply migrations: `uv run alembic upgrade head`
Local development:

```bash
supabase start                   # Start local Supabase
uv run python scripts/init.py    # Run migrations + apply RLS policies
uv run python scripts/seed.py    # Seed data
```

`scripts/init.py --reset` drops and recreates everything (destructive).
The agent endpoint (`/agent/stream`) runs the Claude Code CLI inside a Modal sandbox. Hard-won lessons:
- Modal secrets must explicitly set env var names. When creating: `modal secret create anthropic-api-key ANTHROPIC_API_KEY=sk-ant-...`. Just having a secret named "anthropic-api-key" doesn't automatically set `ANTHROPIC_API_KEY`.
- `--dangerously-skip-permissions` doesn't work as root. Modal containers run as root, and Claude Code blocks this flag for security. Don't use it.
- `sb.exec()` doesn't close stdin. This causes Claude to hang waiting for input. Wrap the command in a shell: `sb.exec("sh", "-c", "claude ... < /dev/null 2>&1")`.
- Claude Code has first-run onboarding. Pre-accept it during the image build:

  ```python
  .run_commands(
      "mkdir -p /root/.claude && "
      'echo \'{"hasCompletedOnboarding": true, "hasAcknowledgedCostThreshold": true}\' '
      "> /root/.claude/settings.json",
  )
  ```

- `--output-format stream-json` requires `--verbose`. Otherwise you get an error.
- Modal image caching. Changes to `.run_commands()` may not rebuild if earlier layers are cached. Add a cache-busting change (a new env var, a modified command) to force a rebuild.
- Test locally before deploying. Use `modal.Sandbox.create()` directly in a Python script to debug without waiting for Cloud Run deploys.
- MCP SSE doesn't work in Modal containers. Claude Code with MCP works locally but exits immediately after init in Modal (both sandbox and function). Workaround implemented: `stream_policy_analysis` uses a system prompt with API documentation instead of MCP, and Claude makes direct HTTP calls via Bash/curl.
- `subprocess.Popen` needs `stdin=DEVNULL`. Same issue as the sandbox: if stdin is left as a pipe (the default), Claude hangs waiting for input. Always use `stdin=subprocess.DEVNULL`.
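Putting the sandbox lessons together, a hedged sketch. The app name, image, and secret name here are placeholders (the real image pre-installs Claude Code and bakes in the onboarding `settings.json`); this is not the project's actual agent code.

```python
def run_claude_in_sandbox(prompt: str) -> None:
    """Run the Claude Code CLI in a Modal sandbox, applying the lessons above."""
    import modal  # deferred so this sketch imports without Modal configured

    sb = modal.Sandbox.create(
        app=modal.App.lookup("claude-sandbox", create_if_missing=True),
        image=modal.Image.debian_slim(),  # placeholder; real image installs Claude Code
        secrets=[modal.Secret.from_name("anthropic-api-key")],
    )
    # Wrap in `sh -c` with stdin redirected from /dev/null so Claude never
    # blocks on input, and pass --verbose because stream-json requires it.
    proc = sb.exec(
        "sh", "-c",
        f"claude -p '{prompt}' --output-format stream-json --verbose < /dev/null 2>&1",
    )
    for line in proc.stdout:
        print(line, end="")
    sb.terminate()
```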