Atlas is an EVM blockchain explorer (indexer + API + frontend) for ev-node based chains.
| Layer | Tech |
|---|---|
| Server | Rust, tokio, Axum, sqlx, alloy, tokio-postgres (binary COPY), tower-http |
| Database | PostgreSQL (partitioned tables) |
| Frontend | React, TypeScript, Vite, Tailwind CSS, Bun |
| Deployment | Docker Compose, nginx (unprivileged, port 8080→80) |
```
atlas/
├── backend/
│   ├── Cargo.toml            # Workspace — all dep versions live here
│   ├── crates/
│   │   ├── atlas-common/     # Shared types, DB pool, error handling, Pagination
│   │   └── atlas-server/     # Unified server: indexer + API in a single binary
│   │       └── src/
│   │           ├── main.rs   # Startup: migrations, pools, spawn indexer, serve API
│   │           ├── config.rs # Unified config from env vars
│   │           ├── indexer/  # Block fetcher, batch writer, metadata fetcher
│   │           └── api/      # Axum REST API + SSE handlers
│   └── migrations/           # sqlx migrations (run once at startup)
├── frontend/
│   ├── src/
│   │   ├── api/              # Typed API clients (axios)
│   │   ├── components/       # Shared UI components
│   │   ├── hooks/            # React hooks (useBlocks, useLatestBlockHeight, …)
│   │   ├── pages/            # One file per page/route
│   │   └── types/            # Shared TypeScript types
│   ├── Dockerfile            # Multi-stage: oven/bun:1 → nginx-unprivileged:alpine
│   └── nginx.conf            # SPA routing + /api/ reverse proxy to atlas-server:3000
├── docker-compose.yml
└── .env.example
```
The indexer and API run as concurrent tokio tasks in a single `atlas-server` binary. The indexer pushes block events directly to SSE subscribers via an in-process `broadcast::Sender<()>`. If the indexer task fails, the API keeps running (graceful degradation); the indexer retries with exponential backoff.
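The retry behavior can be sketched as a capped exponential backoff schedule. This is a minimal sketch: the base delay and cap here are illustrative assumptions, not values taken from the code.

```rust
use std::time::Duration;

/// Capped exponential backoff: base * 2^attempt, saturating at `max`.
/// A 1s base and 60s cap are assumptions for illustration.
fn backoff_delay(attempt: u32, base: Duration, max: Duration) -> Duration {
    base.saturating_mul(2u32.saturating_pow(attempt)).min(max)
}

fn main() {
    let base = Duration::from_secs(1);
    let max = Duration::from_secs(60);
    // Delays grow 1s, 2s, 4s, ... until they hit the cap.
    for attempt in 0..8 {
        println!("attempt {attempt}: retry in {:?}", backoff_delay(attempt, base, max));
    }
}
```

In the real binary the sleep between retries would live inside the spawned indexer task, so a failing RPC endpoint never blocks the API task.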
- API pool: 20 connections (configurable via `API_DB_MAX_CONNECTIONS`), `statement_timeout = '10s'`
- Indexer pool: 20 connections (configurable via `DB_MAX_CONNECTIONS`), same timeout — kept separate so API load can't starve the indexer
- Binary COPY client: separate `tokio-postgres` direct connection (bypasses the sqlx pool), conditional TLS based on `sslmode` in `DATABASE_URL`
- Migrations: run once with a dedicated 1-connection pool with no `statement_timeout` (index builds can take longer than 10s)
The indexer publishes block updates through `broadcast::Sender<()>`. The SSE handler (`GET /api/events`) subscribes to this broadcast channel and refreshes independently of the database write path.
The `blocks` table can have 80M+ rows. `OFFSET` on large pages causes 30s+ full index scans. Instead, pages are resolved with a keyset cursor:

```rust
// cursor = max_block - (page - 1) * limit — uses clamped limit(), not raw offset()
let limit = pagination.limit(); // clamped to 100
let cursor = (total_count - 1) - (pagination.page.saturating_sub(1) as i64) * limit;
// Query: WHERE number <= $cursor ORDER BY number DESC LIMIT $1
```

`total_count` comes from `MAX(number) + 1` (O(1), not `COUNT(*)`).
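The cursor math can be exercised in isolation. `Pagination` below is a minimal stand-in for the real type in atlas-common; its field names and the clamping bounds are assumptions for illustration.

```rust
/// Minimal stand-in for the real Pagination type (assumed shape).
struct Pagination {
    page: u32,
    limit: i64,
}

impl Pagination {
    /// Clamp the client-supplied limit; 1..=100 per the rules above.
    fn limit(&self) -> i64 {
        self.limit.clamp(1, 100)
    }
}

/// Keyset cursor for blocks listed newest-first: page 1 starts at the
/// chain head, page N starts (N-1) * limit blocks below it.
fn cursor(total_count: i64, p: &Pagination) -> i64 {
    (total_count - 1) - (p.page.saturating_sub(1) as i64) * p.limit()
}

fn main() {
    // Client asks for limit=500; it gets clamped to 100, so page 3
    // starts 200 blocks below the head, not 1000.
    let p = Pagination { page: 3, limit: 500 };
    let total_count = 80_000_000; // MAX(number) + 1
    assert_eq!(p.limit(), 100);
    assert_eq!(cursor(total_count, &p), 79_999_799);
    // Query shape: WHERE number <= cursor ORDER BY number DESC LIMIT limit
}
```

Using the clamped `limit()` in both the cursor and the `LIMIT` clause keeps the two consistent even when a client sends an oversized limit.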
For large tables (transactions, addresses), use `pg_class.reltuples` instead of `COUNT(*)`:

```rust
// handlers/mod.rs — get_table_count(pool, table_name)
// Partition-aware: sums child reltuples, falls back to parent
// For tables < 100k rows: falls back to exact COUNT(*)
```

`TimeoutLayer::with_status_code(StatusCode::REQUEST_TIMEOUT, Duration::from_secs(10))` wraps all routes except SSE — returns 408 if any handler exceeds 10s.
```rust
pub struct AppState {
    pub pool: PgPool,                                       // API pool only
    pub block_events_tx: broadcast::Sender<()>,             // shared with indexer
    pub da_events_tx: broadcast::Sender<Vec<DaSseUpdate>>,  // shared with DA worker
    pub head_tracker: Arc<HeadTracker>,
    pub rpc_url: String,
    pub da_tracking_enabled: bool,
    pub chain_id: u64,
    pub chain_name: String,
}
```

When `ENABLE_DA_TRACKING=true`, a background DA worker queries ev-node for Celestia inclusion heights per block. `EVNODE_URL` is required only in that mode. Updates are pushed to SSE clients via an in-process `broadcast::Sender<Vec<DaSseUpdate>>`. The SSE handler streams `da_batch` events for incremental updates and emits `da_resync` when a client falls behind and should refetch visible DA state.
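The lag-to-resync mapping can be sketched without the tokio dependency. The types below are illustrative stand-ins (`Recv::Lagged` mirrors tokio's `broadcast::error::RecvError::Lagged(n)`; the payload is simplified from the real `DaSseUpdate`).

```rust
/// Events the DA SSE stream can emit, per the description above.
#[derive(Debug)]
enum DaEvent {
    Batch(Vec<String>), // da_batch: incremental DA updates (payload simplified)
    Resync,             // da_resync: client lagged, must refetch visible DA state
}

/// Outcome of reading from the broadcast channel; a stand-in for
/// tokio's broadcast recv() result without pulling in the crate.
enum Recv {
    Update(Vec<String>),
    Lagged(u64), // n updates were dropped for this receiver
}

/// Map a channel read to the SSE event: any lag means updates were
/// dropped, so the client is told to resync rather than patch state.
fn to_sse(recv: Recv) -> DaEvent {
    match recv {
        Recv::Update(batch) => DaEvent::Batch(batch),
        Recv::Lagged(_missed) => DaEvent::Resync,
    }
}

fn main() {
    assert!(matches!(to_sse(Recv::Update(vec!["h100".into()])), DaEvent::Batch(_)));
    assert!(matches!(to_sse(Recv::Lagged(12)), DaEvent::Resync));
}
```

Emitting a single `da_resync` instead of replaying missed batches keeps the broadcast channel bounded and makes slow clients self-healing.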
- Base URL: `/api` (proxied by nginx to `atlas-server:3000`)
- Fast polling endpoint: `GET /api/height` → `{ block_height, indexed_at, features: { da_tracking } }` — serves from `head_tracker` first and falls back to `indexer_state` when the in-memory head is empty. Used by the navbar as a polling fallback when SSE is disconnected and by feature-flag consumers.
- Chain status: `GET /api/status` → `{ chain_id, chain_name, block_height, total_transactions, total_addresses, indexed_at }` — full chain info, fetched once on page load.
- `GET /api/events` → SSE stream of `new_block`, `da_batch`, and `da_resync` events. Primary live-update path for the navbar counter, blocks page, block detail DA status, and DA resync handling. Falls back to `/api/height` polling on disconnect.
- Rust: idiomatic — use `.min()`, `.max()`, `|=`, `+=` over manual if/assign
- SQL: never use `OFFSET` for large tables — use keyset/cursor pagination
- Migrations: use `run_migrations(&database_url)` (not `&pool`) to get a timeout-free connection
- Frontend: uses Bun (not npm/yarn). Lockfile is `bun.lock` (text, Bun ≥ 1.2). Build with `bunx vite build` (skips tsc type check).
- Docker: frontend image uses `nginxinc/nginx-unprivileged:alpine` (non-root, port 8080). Server uses `alpine` with `ca-certificates`.
- Tests: add unit tests for new logic in a `#[cfg(test)] mod tests` block in the same file. Run with `cargo test --workspace`.
- Commits: authored by the user only — no Claude co-author lines.
Key vars (see `.env.example` for the full list):
| Var | Used by | Default |
|---|---|---|
| `DATABASE_URL` | all | required |
| `RPC_URL` | server | required |
| `CHAIN_NAME` | server | "Unknown" |
| `DB_MAX_CONNECTIONS` | indexer pool | 20 |
| `API_DB_MAX_CONNECTIONS` | API pool | 20 |
| `BATCH_SIZE` | indexer | 100 |
| `FETCH_WORKERS` | indexer | 10 |
| `ADMIN_API_KEY` | API | none |
| `API_HOST` | API | 127.0.0.1 |
| `API_PORT` | API | 3000 |
| `ENABLE_DA_TRACKING` | server | false |
| `EVNODE_URL` | server | none |
| `DA_RPC_REQUESTS_PER_SECOND` | DA worker | 50 |
| `DA_WORKER_CONCURRENCY` | DA worker | 50 |
```bash
# Start full stack
docker compose up -d

# Rebuild after code changes
docker compose build atlas-server && docker compose up -d atlas-server

# Backend only (no Docker)
cd backend && cargo build --workspace
```

- `run_migrations` takes `&str` (database URL), not `&PgPool`
- The blocks cursor uses `pagination.limit()` (clamped), not `pagination.offset()` — they diverge when the client sends `limit > 100`
- `bun.lock`, not `bun.lockb` — Bun ≥ 1.2 uses the text lockfile format
- SSE uses in-process broadcast, not PG `NOTIFY` — no `PgListener` needed
pagination.limit()(clamped), notpagination.offset()— they diverge when client sendslimit > 100 bun.locknotbun.lockb— Bun ≥ 1.2 uses text format- SSE uses in-process broadcast, not PG NOTIFY — no PgListener needed