Pull Request Overview
This PR refactors the backend architecture by extracting Prisma database code into a separate @metro-now/database app to improve code organization and separation of concerns.
- Created a dedicated database app with its own package configuration and build setup
- Moved all Prisma-related scripts and dependencies from backend to the new database app
- Updated import statements across backend services to use the new database package
Reviewed Changes
Copilot reviewed 12 out of 36 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| package.json | Added database-specific scripts at root level for easier database management |
| apps/database/package.json | Created new database app with Prisma dependencies and scripts |
| apps/database/tsconfig.json | Added TypeScript configuration for the database app |
| apps/database/index.ts | Created entry point that re-exports Prisma client |
| apps/database/.env.example | Added database environment configuration template |
| apps/backend/package.json | Removed Prisma dependencies and scripts, added workspace dependency |
| apps/backend/src/modules/*.ts | Updated imports to use new database package instead of direct Prisma client |
| README.md | Added development setup instructions for PostgreSQL and Redis |
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
📝 Walkthrough

Replaces Prisma with a Kysely-based database package, extracts import/GTFS sync into a new dataloader app, adds DB migrations/seeding, introduces logging and cache utilities, rewires backend modules to DatabaseModule/DatabaseService, and updates CI/tooling and workspace scripts.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant Backend as Backend (GraphQL)
    participant Resolver as GraphQL Resolver
    participant Cache as CacheManager/Keyv
    participant Database as DatabaseService (Kysely)
    participant DB as PostgreSQL
    User->>Backend: GraphQL query (stops/platforms/routes)
    Backend->>Resolver: resolve field
    Resolver->>Cache: cacheManager.wrap(CACHE_KEY)
    alt cache hit
        Cache-->>Resolver: cached payload
    else cache miss
        Resolver->>Database: loadPlatforms / loadStops / loadRoutes
        Database->>DB: SQL select(s)
        DB-->>Database: rows
        Database-->>Resolver: assembled records
        Resolver->>Cache: cacheManager.set(CACHE_KEY, payload)
    end
    Resolver-->>Backend: response payload
    Backend-->>User: GraphQL response
```
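The cache-aside semantics in this diagram can be sketched in TypeScript. This is a minimal illustration of the wrap-on-miss pattern, not code from the PR; `MiniCache` stands in for cache-manager/Keyv and `loadPlatforms` for the DatabaseService loader:

```typescript
// Minimal sketch of the cache-aside flow: `wrap` returns the cached
// payload on a hit and only invokes the loader on a miss, then stores
// the result. The real service would also apply a TTL.
type Loader<T> = () => Promise<T>;

class MiniCache {
    private store = new Map<string, unknown>();

    async wrap<T>(key: string, load: Loader<T>): Promise<T> {
        if (this.store.has(key)) {
            return this.store.get(key) as T; // cache hit
        }
        const value = await load(); // cache miss: query the database
        this.store.set(key, value); // equivalent of cacheManager.set(CACHE_KEY, payload)
        return value;
    }
}

async function demo(): Promise<void> {
    const cache = new MiniCache();
    let dbCalls = 0;
    const loadPlatforms = async () => {
        dbCalls += 1; // stands in for DatabaseService SQL selects
        return [{ id: "platform-1" }];
    };

    await cache.wrap("platforms", loadPlatforms);
    await cache.wrap("platforms", loadPlatforms); // served from cache
    console.log(`database hit ${dbCalls} time(s)`); // → database hit 1 time(s)
}

demo();
```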
```mermaid
sequenceDiagram
    autonumber
    actor Cron as Scheduler / Manual Trigger
    participant Dataloader as Dataloader App
    participant Pid as PidImportService
    participant Gtfs as GtfsService
    participant Validator as SyncSnapshotValidator
    participant Repo as SyncRepository
    participant DB as PostgreSQL
    Cron->>Dataloader: trigger syncEverything
    Dataloader->>Pid: getStopSnapshot()
    Pid->>Dataloader: StopSnapshot
    Dataloader->>Gtfs: getGtfsSnapshot(platformIds)
    Gtfs->>Dataloader: GtfsSnapshot
    Dataloader->>Validator: validate(snapshot)
    Validator-->>Dataloader: valid / throw
    Dataloader->>Repo: persist(snapshot)
    Repo->>DB: begin transaction + advisory lock
    Repo->>DB: upserts and relation sync (inserts/deletes)
    DB-->>Repo: ack / results
    Repo-->>Dataloader: success / skipped
    Dataloader-->>Cron: sync result (status, counts)
```
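The "upserts and relation sync (inserts/deletes)" step usually boils down to a set difference between persisted ids and snapshot ids. A hedged sketch (the function name and shapes are illustrative, not taken from SyncRepository):

```typescript
// Given the ids currently persisted and the ids present in the new
// snapshot, compute which relations to insert and which to delete so
// the table converges on the snapshot without rewriting unchanged rows.
function diffRelations(
    existing: readonly string[],
    desired: readonly string[],
): { toInsert: string[]; toDelete: string[] } {
    const existingSet = new Set(existing);
    const desiredSet = new Set(desired);
    return {
        toInsert: desired.filter((id) => !existingSet.has(id)),
        toDelete: existing.filter((id) => !desiredSet.has(id)),
    };
}

const { toInsert, toDelete } = diffRelations(
    ["stop-1", "stop-2", "stop-3"],
    ["stop-2", "stop-3", "stop-4"],
);
console.log(toInsert, toDelete); // toInsert: ["stop-4"], toDelete: ["stop-1"]
```

Running both lists inside the same transaction as the advisory lock keeps the sync atomic with respect to concurrent triggers.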
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
Suggested labels
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
apps/backend/src/modules/platform/platform.service.ts (1)
45-59: Use the transactional client for raw SQL inside $transaction
Calls on the root client (`this.prisma.$queryRaw`) execute outside the interactive transaction; use the `transaction` instance so your query runs within the same transaction context. (prisma.io)

```diff
-const platformsWithDistance = await this.prisma.$queryRaw<
+const platformsWithDistance = await transaction.$queryRaw<
     { id: string; distance: number }[]
 >`
     SELECT
         "Platform"."id",
         earth_distance(
             ll_to_earth("Platform"."latitude", "Platform"."longitude"),
             ll_to_earth(${latitude}, ${longitude})
         ) AS "distance"
     …
 `;
```

package.json (1)
5-27: Ensure Prisma client is generated on install (add root postinstall)

Run the existing root db:generate during postinstall so @prisma/client is generated for CI and local installs.
File: package.json (root)

```diff
 "scripts": {
     "xcode": "xed ./apps/mobile/metro-now",
     "dev": "pnpm --recursive --parallel run dev",
     "build": "pnpm --recursive --parallel run build",
     "backend:build": "pnpm run -r --filter @metro-now/backend build",
     "backend:dev": "pnpm run -r --filter @metro-now/backend dev",
     "web:dev": "pnpm run -r --filter @metro-now/web dev",
     "db:studio": "pnpm run -r --filter @metro-now/database prisma:studio",
     "db:generate": "pnpm run -r --filter @metro-now/database prisma:generate",
     "db:migrate:create": "pnpm run -r --filter @metro-now/database prisma:migrate:create",
     "db:migrate:deploy": "pnpm run -r --filter @metro-now/database prisma:migrate:deploy",
     "db:push": "pnpm run -r --filter @metro-now/database prisma:push",
     "db:seed": "pnpm run -r --filter @metro-now/database prisma:seed",
     "db:reset": "pnpm run -r --filter @metro-now/database prisma:reset",
     "db:status": "pnpm run -r --filter @metro-now/database prisma:status",
+    "postinstall": "pnpm db:generate",
     "precommit": "pnpm format",
     "format": "pnpm --recursive --parallel format && pnpm exec prettier . --write",
     "app:format": "pnpm --filter @metro-now/mobile format",
     "format:check": "pnpm -r format:check && pnpm exec prettier . --check",
     "docker:up:dev": "docker compose up postgres redis-stack",
     "docker:up": "docker compose up -d --build",
     "docker:down": "docker compose down --remove-orphans --volumes"
 },
```
🧹 Nitpick comments (6)
apps/database/.env.example (1)
1-1: Drop quotes around DATABASE_URL (fixes dotenv-linter warning and avoids misparsing).

Quotes aren't needed and can be misread by some tools.
```diff
-DATABASE_URL="postgresql://postgres_user:postgres_password@localhost:5432/metro-now?schema=public"
+DATABASE_URL=postgresql://postgres_user:postgres_password@localhost:5432/metro-now?schema=public
```

Tip: remind users to URL-encode passwords with special characters.
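To make the URL-encoding tip concrete, Node's built-in `encodeURIComponent` handles reserved characters; this is a generic sketch, not code from this repository:

```typescript
// Build a Postgres connection URL from credentials that may contain
// reserved characters (:, @, /, ?). encodeURIComponent escapes them
// so the URL parses unambiguously.
function buildDatabaseUrl(
    user: string,
    password: string,
    host: string,
    port: number,
    db: string,
): string {
    return `postgresql://${encodeURIComponent(user)}:${encodeURIComponent(password)}@${host}:${port}/${db}`;
}

// A password like "p@ss:w/rd" would otherwise break URL parsing.
console.log(buildDatabaseUrl("postgres_user", "p@ss:w/rd", "localhost", 5432, "metro-now"));
// → postgresql://postgres_user:p%40ss%3Aw%2Frd@localhost:5432/metro-now
```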
README.md (1)
32-34: Clarify that docker:up:dev is foreground; suggest detaching or using docker:up.
`docker compose up postgres redis-stack` blocks the terminal. Either add `-d` or instruct to run it in a separate terminal; or reference `pnpm docker:up` here.

```diff
-# Start PostgreSQL and Redis for development
-pnpm docker:up:dev
+# Start PostgreSQL and Redis for development (detached)
+pnpm docker:up
```

apps/database/package.json (1)
1-29: Reduce cross-package env coupling in Prisma scripts.

These scripts read `../backend/.env.local`. Prefer a `.env` colocated in `apps/database` (or env passed by the caller) so the database package isn't coupled to backend pathing. Example:

```diff
-"prisma:migrate:deploy": "dotenv -e ../backend/.env.local -- prisma migrate deploy",
+"prisma:migrate:deploy": "dotenv -e ./.env.local -- prisma migrate deploy",
```

If you keep the current wiring, document it prominently.

apps/backend/src/modules/import/import.service.ts (1)
apps/backend/src/modules/import/import.service.ts (1)
112-115: Make VehicleType mapping robust to external inputs.
`VehicleType?.[route.vehicleType.toUpperCase()] ?? null` will silently null unknown values and trusts casing. Prefer an explicit map with safe fallback; surface unexpected values for observability.

```diff
-vehicleType:
-    VehicleType?.[route.vehicleType.toUpperCase()] ?? null,
+vehicleType: toVehicleType(route.vehicleType),
```

Add once near the top of the file:
```ts
const VEHICLE_TYPE_MAP: Record<string, VehicleType> = {
    METRO: VehicleType.METRO,
    TRAM: VehicleType.TRAM,
    BUS: VehicleType.BUS,
    // extend as needed…
};

function toVehicleType(v: string): VehicleType | null {
    const key = v?.toUpperCase();
    if (key in VEHICLE_TYPE_MAP) return VEHICLE_TYPE_MAP[key]!;
    // TODO: metric/log for unknown values
    return null;
}
```

apps/backend/src/modules/route/route.service.ts (1)
41-41: Follow-on TS errors are downstream of missing Prisma types

Errors at Lines 41, 48, 66 (missing Prisma members) and Lines 55/58 (`unknown` from `group(...)`) should resolve once the generated Prisma model types are available from `@metro-now/database`. If noise persists, add explicit generics to `group`:

```diff
-group(route.GtfsRouteStop, ({ directionId }) => directionId)
+group<typeof route.GtfsRouteStop[number]>(
+    route.GtfsRouteStop,
+    s => s.directionId
+)
```

But prefer fixing the generation/export first.
Also applies to: 48-48, 66-66, 55-59
apps/backend/src/modules/prisma/prisma.service.ts (1)
11-13: Guard against undefined envs when composing datasourceUrl

If any env is missing, you'll emit an invalid URL (e.g., `schema=undefined`). Consider defaulting or deferring to `DATABASE_URL`:

```diff
-datasourceUrl: `postgresql://${process.env.POSTGRES_USER}:${process.env.POSTGRES_PASSWORD}@${process.env.DB_HOST}:${process.env.DB_PORT}/${process.env.POSTGRES_DB}?schema=${process.env.DB_SCHEMA}`,
+datasourceUrl:
+    process.env.DATABASE_URL ??
+    `postgresql://${process.env.POSTGRES_USER}:${process.env.POSTGRES_PASSWORD}` +
+    `@${process.env.DB_HOST}:${process.env.DB_PORT}/${process.env.POSTGRES_DB}` +
+    (process.env.DB_SCHEMA ? `?schema=${process.env.DB_SCHEMA}` : ""),
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (12)
- README.md (1 hunks)
- apps/backend/package.json (2 hunks)
- apps/backend/src/modules/import/import.service.ts (1 hunks)
- apps/backend/src/modules/platform/platform.service.ts (1 hunks)
- apps/backend/src/modules/prisma/prisma.service.ts (1 hunks)
- apps/backend/src/modules/route/route.service.ts (1 hunks)
- apps/backend/src/modules/stop/stop.service.ts (1 hunks)
- apps/database/.env.example (1 hunks)
- apps/database/index.ts (1 hunks)
- apps/database/package.json (1 hunks)
- apps/database/tsconfig.json (1 hunks)
- package.json (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2024-11-21T21:43:06.313Z
Learnt from: krystxf
PR: krystxf/metro-now#49
File: package.json:15-15
Timestamp: 2024-11-21T21:43:06.313Z
Learning: In the `metro-now` project, `swiftformat` is used as an external dependency, and the `app:format` script in `package.json` works correctly even though `swiftformat` is not listed as a dependency in the project's `package.json` files.
Applied to files:
- apps/backend/package.json
- package.json
🪛 dotenv-linter (3.3.0)
apps/database/.env.example
[warning] 1-1: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
🪛 GitHub Actions: Docker
apps/backend/src/modules/platform/platform.service.ts
[error] 28-28: Prisma: Namespace has no exported member 'PlatformSelect'.
[error] 56-56: Prisma: Property 'sql' does not exist on type 'typeof Prisma'.
[error] 58-58: Prisma: Property 'empty' does not exist on type 'typeof Prisma'.
[error] 129-129: Prisma: Namespace has no exported member 'PlatformWhereInput'.
[error] 1-1: Nest build failed due to 18 TypeScript error(s) reported in backend package.
apps/backend/src/modules/route/route.service.ts
[error] 41-41: Prisma: Namespace has no exported member 'GtfsRouteSelect'.
[error] 48-48: Prisma: Namespace has no exported member 'GtfsRouteGetPayload'.
[error] 55-55: Type 'unknown' is not assignable to type '{ directionId: any; }'.
[error] 58-58: Type 'unknown' is not assignable to type '...'.
[error] 66-66: Prisma: Namespace has no exported member 'GtfsRouteWhereInput'.
[error] 1-1: Nest build failed due to 18 TypeScript error(s) reported in backend package.
apps/backend/src/modules/prisma/prisma.service.ts
[error] 1-1: Nest build failed due to 18 TypeScript error(s) reported in backend package.
apps/backend/src/modules/stop/stop.service.ts
[error] 17-17: Prisma: Namespace has no exported member 'StopSelect'.
[error] 32-32: Prisma: Namespace has no exported member 'StopWhereInput'.
[error] 66-66: Prisma: Namespace has no exported member 'StopWhereInput'.
[error] 90-90: Prisma: Namespace has no exported member 'StopWhereInput'.
[error] 1-1: Nest build failed due to 18 TypeScript error(s) reported in backend package.
apps/backend/package.json
[error] 1-1: Nest build failed due to 18 TypeScript error(s) reported in backend package.
apps/backend/src/modules/import/import.service.ts
[error] 1-1: Nest build failed due to 18 TypeScript error(s) reported in backend package.
🔇 Additional comments (7)
apps/database/tsconfig.json (1)
1-18: TS config looks solid for a small package.

Strict mode, declarations, and maps are on. No blockers from my side.
apps/backend/src/modules/import/import.service.ts (1)
139-186: Sanity-check platform.stopId derivation.
`stopId: platform.id.split("Z")[0]` assumes a constant delimiter/pattern. If not guaranteed by the source, this can associate platforms with the wrong stops.

Would you like a guard (regex + runtime check) and a metric for mismatches?
apps/backend/src/modules/platform/platform.service.ts (1)
1-3: Action required: install dependencies and generate Prisma client (verification blocked)
- Verification failed: generation/build couldn't run because node_modules/binaries are missing (errors: "sh: 1: prisma: not found", "sh: 1: rimraf: not found").
- rg shows imports from "@metro-now/database" in: apps/backend/src/modules/stop/stop.service.ts, apps/backend/src/modules/platform/platform.service.ts, apps/backend/src/modules/prisma/prisma.service.ts, apps/backend/src/modules/route/route.service.ts, apps/backend/src/modules/import/import.service.ts.
- Run: pnpm install && pnpm db:generate && pnpm -w run backend:build. After generating the Prisma client and rebuilding, the reported missing types (PlatformSelect, Prisma.sql, Prisma.empty, PlatformWhereInput) should resolve.
apps/backend/src/modules/stop/stop.service.ts (2)
17-17: Type references depend on generated Prisma model types

The `satisfies Prisma.StopSelect` (Line 17) and `Prisma.StopWhereInput` (Lines 32/66/90) will typecheck once Prisma model types are available. No further code change needed after fixing generation/export.

Also applies to: 32-32, 66-66, 90-90
2-2: Use a type-only import; Prisma types are already re-exported by @metro-now/databaseapps/database/index.ts exports "@prisma/client" (apps/database has a prisma:generate script and CI runs Prisma generate), so StopSelect/StopWhereInput are available — update the import in apps/backend/src/modules/stop/stop.service.ts:
```diff
-import { Prisma } from "@metro-now/database";
+import type { Prisma } from "@metro-now/database";
```

Likely an incorrect or invalid review comment.
apps/backend/package.json (1)
24-24: Dependency swap looks right

Replacing `@prisma/client` with `@metro-now/database` aligns with the migration. Ensure `@metro-now/database` lists `@prisma/client` (runtime) and `prisma` (CLI) in its own package where generation happens.

apps/backend/src/modules/prisma/prisma.service.ts (1)
2-2: Ensure @metro-now/database re-exports a fully generated PrismaClient
- Evidence: apps/database/index.ts does `export * from "@prisma/client";` apps/database/package.json lists `@prisma/client` and has a `prisma:generate` script.
- Problem: no generated client found under ./apps/database/node_modules/.prisma/client (client appears hoisted), so importing PrismaClient from @metro-now/database can fail at runtime if the generated client isn't present for consumers.
- Action: run `prisma generate` during CI/install (or add a postinstall/workspace generate step) so node_modules/.prisma/client exists in the runtime environment that runs apps/backend; verify the generated client is available at runtime. Locations: apps/database/index.ts, apps/database/package.json, apps/backend/src/modules/prisma/prisma.service.ts.
```diff
@@ -1,5 +1,5 @@
 import { Injectable } from "@nestjs/common";
-import { Prisma } from "@prisma/client";
+import { Prisma } from "@metro-now/database";
```
🧩 Analysis chain
Prisma model types missing via @metro-now/database; generate and re‑export properly
The switch to @metro-now/database breaks model‑specific types (GtfsRouteSelect, GtfsRouteGetPayload, GtfsRouteWhereInput). This usually happens when the Prisma client in the database package hasn’t been generated or the Prisma namespace isn’t re‑exported as types. Ensure:
- `apps/database` runs `prisma generate` (prepare/postinstall or via workspace script before backend build).
- `apps/database/index.ts` explicitly re-exports the Prisma namespace as types.
Minimal code tweak (type‑only import) to avoid value import, though the root cause still needs the generation step:
```diff
-import { Prisma } from "@metro-now/database";
+import type { Prisma } from "@metro-now/database";
```

To restore CI determinism, reintroduce a workspace generate step in backend prebuild (see package.json comment).
Run to verify database pkg wiring:
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "Check database index.ts re-exports:"
fd -a '^index\.(ts|tsx|cts|mts)$' apps/database | xargs -I{} sed -n '1,120p' {}
echo "Check prisma generate hooks in apps/database/package.json:"
jq -r '.scripts | to_entries[] | select(.key|test("prepare|postinstall|generate|prisma")).key + ": " + .value' apps/database/package.json || true
echo "Check root/workspace db scripts:"
jq -r '.scripts | to_entries[] | select(.key|test("^db:")).key + ": " + .value' package.json || true
echo "Check backend prebuild:"
jq -r '.scripts.prebuild' apps/backend/package.json
```

Length of output: 1252
Import Prisma types as type-only and run workspace prisma:generate before backend build
- apps/database already re-exports @prisma/client (`export * from "@prisma/client";`) and contains a `prisma:generate` script; the root workspace has `db:generate`. apps/backend prebuild currently does not run it (`rimraf dist && pnpm typegen`).
- Change the import in apps/backend/src/modules/route/route.service.ts to a type-only import:
```diff
-import { Prisma } from "@metro-now/database";
+import type { Prisma } from "@metro-now/database";
```

- Add a workspace generate step to apps/backend/package.json prebuild so the generated client exists before building, e.g. `"prebuild": "rimraf dist && pnpm -w run db:generate && pnpm typegen"`

🤖 Prompt for AI Agents
In apps/backend/src/modules/route/route.service.ts around line 2, the import of
Prisma is not marked type-only which can cause runtime dependency on the
generated Prisma client that may not exist; change the import to a type-only
import (e.g., "import type { Prisma } from ...") and update
apps/backend/package.json to run the workspace db:generate before build by
adding a prebuild script that ensures the generated client exists (for example
run rimraf dist && pnpm -w run db:generate && pnpm typegen) so the backend build
does not fail due to missing generated Prisma client.
```diff
@@ -0,0 +1 @@
+export * from "@prisma/client";
```
Re-export Prisma namespace explicitly (unblocks compile errors for Prisma.sql/empty and generated model types).
Some TS setups don’t forward the namespace cleanly with export *. Explicit re-exports fix missing members like Prisma.sql, Prisma.empty, and model‑scoped types (e.g., PlatformSelect).
```diff
-export * from "@prisma/client";
+export { Prisma, PrismaClient } from "@prisma/client";
+export * from "@prisma/client";
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
export { Prisma, PrismaClient } from "@prisma/client";
export * from "@prisma/client";
```
🤖 Prompt for AI Agents
In apps/database/index.ts around line 1, re-exporting the entire Prisma package
with `export *` misses the Prisma namespace and some generated model types
(e.g., Prisma.sql, Prisma.empty, PlatformSelect); explicitly re-export the
Prisma namespace and any needed generated types from "@prisma/client" so
consumers can import Prisma.* and model-scoped types reliably (add an explicit
export for Prisma and export the specific generated types your codebase
requires).
```json
{
    "name": "@metro-now/database",
    "main": "index.ts",
    "types": "index.ts",
    "scripts": {
        "prisma:studio": "dotenv -e ../backend/.env.local -- prisma studio",
        "prisma:generate": "prisma generate",
        "prisma:migrate:create": "dotenv -e ../backend/.env.local -- prisma migrate dev",
        "prisma:migrate:deploy": "dotenv -e ../backend/.env.local -- prisma migrate deploy",
        "prisma:push": "dotenv -e ../backend/.env.local -- prisma db push",
        "prisma:seed": "prisma db seed",
        "prisma:reset": "dotenv -e ../backend/.env.local -- prisma migrate reset",
        "prisma:status": "dotenv -e ../backend/.env.local -- prisma migrate status"
    },
    "dependencies": {
        "@prisma/client": "5.20.0"
    },
    "devDependencies": {
        "typescript": "^5.8.3",
        "ts-node": "^10.9.2"
    },
    "optionalDependencies": {
        "dotenv-cli": "^8.0.0",
        "prisma": "^5.20.0"
    },
    "prisma": {
        "seed": "dotenv -e ../backend/.env.local -- ts-node prisma/seed.ts"
    }
}
```
🛠️ Refactor suggestion
🧩 Analysis chain
Publish built artifacts and generate Prisma client on install (prevents backend TS errors).
Point main/types to dist, add a build step, and run prisma generate automatically so downstream packages always see generated types (e.g., Prisma.PlatformSelect). Also keep Studio/migrate scripts.
```diff
 {
     "name": "@metro-now/database",
-    "main": "index.ts",
-    "types": "index.ts",
+    "main": "dist/index.js",
+    "types": "dist/index.d.ts",
     "scripts": {
+        "build": "tsc -p tsconfig.json",
+        "prepare": "pnpm run build",
+        "postinstall": "pnpm run prisma:generate",
         "prisma:studio": "dotenv -e ../backend/.env.local -- prisma studio",
         "prisma:generate": "prisma generate",
         "prisma:migrate:create": "dotenv -e ../backend/.env.local -- prisma migrate dev",
         "prisma:migrate:deploy": "dotenv -e ../backend/.env.local -- prisma migrate deploy",
         "prisma:push": "dotenv -e ../backend/.env.local -- prisma db push",
         "prisma:seed": "prisma db seed",
         "prisma:reset": "dotenv -e ../backend/.env.local -- prisma migrate reset",
         "prisma:status": "dotenv -e ../backend/.env.local -- prisma migrate status"
     },
```

Optional: add an exports map for Node resolution hygiene.
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "PWD: $(pwd)"
echo "---- root package.json (upto 200 lines) ----"
[ -f package.json ] && sed -n '1,200p' package.json || echo "no root package.json"
echo "---- apps/database/package.json ----"
[ -f apps/database/package.json ] && sed -n '1,200p' apps/database/package.json || echo "no apps/database/package.json"
echo "---- apps/database listing ----"
ls -la apps/database || true
echo "---- apps/database/tsconfig.json ----"
[ -f apps/database/tsconfig.json ] && sed -n '1,200p' apps/database/tsconfig.json || echo "no apps/database/tsconfig.json"
echo "---- root tsconfig.json ----"
[ -f tsconfig.json ] && sed -n '1,200p' tsconfig.json || echo "no root tsconfig.json"
echo "---- apps/database/dist exists? ----"
[ -d apps/database/dist ] && { echo "dist exists:"; ls -la apps/database/dist || true; } || echo "no apps/database/dist"
echo "---- search repo for prisma/schema and related scripts ----"
rg -n --hidden --glob '!node_modules/**' "schema.prisma|prisma generate|prisma:generate|prisma:studio" || true
echo "---- search repo for @metro-now/database usage ----"
rg -n --hidden --glob '!node_modules/**' "@metro-now/database" || true
echo "---- search for 'main\": \"index.ts' occurrences ----"
rg -n --hidden --glob '!node_modules/**' '"main"\s*:\s*"index\.ts"' || true
echo "---- search for prepare/postinstall/build scripts ----"
rg -n --hidden --glob '!node_modules/**' '"prepare"\s*:|"postinstall"\s*:|"build"\s*:' || true
echo "---- pnpm markers ----"
[ -f pnpm-workspace.yaml ] && echo "pnpm-workspace.yaml present" || echo "no pnpm-workspace.yaml"
[ -f pnpm-lock.yaml ] && echo "pnpm-lock.yaml present" || echo "no pnpm-lock.yaml"
rg -n --hidden --glob '!node_modules/**' '"packageManager"\s*:\s*".*pnpm' || true
echo "---- done ----"
```

Length of output: 5392
Publish built artifacts and run Prisma generate on install
tsconfig outputs to ./dist but package.json points at index.ts and has no build — add build/prepare/postinstall so workspace builds the package and generated Prisma types are present.
File: apps/database/package.json
```diff
 {
     "name": "@metro-now/database",
-    "main": "index.ts",
-    "types": "index.ts",
+    "main": "dist/index.js",
+    "types": "dist/index.d.ts",
     "scripts": {
+        "build": "tsc -p tsconfig.json",
+        "prepare": "pnpm run build",
+        "postinstall": "pnpm run prisma:generate",
         "prisma:studio": "dotenv -e ../backend/.env.local -- prisma studio",
         "prisma:generate": "prisma generate",
         "prisma:migrate:create": "dotenv -e ../backend/.env.local -- prisma migrate dev",
         "prisma:migrate:deploy": "dotenv -e ../backend/.env.local -- prisma migrate deploy",
         "prisma:push": "dotenv -e ../backend/.env.local -- prisma db push",
         "prisma:seed": "prisma db seed",
         "prisma:reset": "dotenv -e ../backend/.env.local -- prisma migrate reset",
         "prisma:status": "dotenv -e ../backend/.env.local -- prisma migrate status"
     },
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
{
    "name": "@metro-now/database",
    "main": "dist/index.js",
    "types": "dist/index.d.ts",
    "scripts": {
        "build": "tsc -p tsconfig.json",
        "prepare": "pnpm run build",
        "postinstall": "pnpm run prisma:generate",
        "prisma:studio": "dotenv -e ../backend/.env.local -- prisma studio",
        "prisma:generate": "prisma generate",
        "prisma:migrate:create": "dotenv -e ../backend/.env.local -- prisma migrate dev",
        "prisma:migrate:deploy": "dotenv -e ../backend/.env.local -- prisma migrate deploy",
        "prisma:push": "dotenv -e ../backend/.env.local -- prisma db push",
        "prisma:seed": "prisma db seed",
        "prisma:reset": "dotenv -e ../backend/.env.local -- prisma migrate reset",
        "prisma:status": "dotenv -e ../backend/.env.local -- prisma migrate status"
    },
    "dependencies": {
        "@prisma/client": "5.20.0"
    },
    "devDependencies": {
        "typescript": "^5.8.3",
        "ts-node": "^10.9.2"
    },
    "optionalDependencies": {
        "dotenv-cli": "^8.0.0",
        "prisma": "^5.20.0"
    },
    "prisma": {
        "seed": "dotenv -e ../backend/.env.local -- ts-node prisma/seed.ts"
    }
}
```
🤖 Prompt for AI Agents
In apps/database/package.json (lines 1-29) the package currently points at
source files (index.ts) and lacks build/prepare/postinstall hooks so published
package won't include compiled output or generated Prisma types; update
package.json to point main/types to the compiled outputs (e.g., dist/index.js
and dist/index.d.ts), add a "build" script to compile TypeScript to ./dist, add
a "prepare" (or "prepublishOnly") script that runs the build, and add a
"postinstall" script that runs prisma generate (or a combined script that
sources ../backend/.env.local then runs prisma generate) so Prisma client types
are generated on install; ensure dev/optional deps used for these steps are
available in the workspace or move prisma/@prisma/client to proper dependency if
needed.
Actionable comments posted: 2
🧹 Nitpick comments (5)
apps/dataloader/src/app.ts (3)
30-34: Add graceful shutdown (SIGINT/SIGTERM) and keep a server handle.

Prevents dropped connections on deploys and lets the process exit cleanly.
```diff
-app.listen(port, () => {
+const server = app.listen(port, () => {
     console.log(`🚀 Dataloader app listening on port ${port}`);
     console.log(`📍 Health check available at http://localhost:${port}/health`);
     console.log(`📍 Status endpoint available at http://localhost:${port}/status`);
 });
```

Place after the block:
```ts
for (const sig of ['SIGINT', 'SIGTERM'] as const) {
    process.on(sig, () => {
        console.log(`Received ${sig}. Shutting down...`);
        server.close(() => process.exit(0));
        setTimeout(() => process.exit(1), 10_000).unref();
    });
}
```
3-3: Minor hardening: hide Express signature.

```diff
 const app = express();
+app.disable('x-powered-by');
```
21-21: Avoid hard-coding version; make it env/config driven.

Use an env var (or load from package.json if `resolveJsonModule` is enabled).

```diff
-version: '1.0.0',
+version,
```

Add near the top:
```diff
+const version = process.env.APP_VERSION ?? '1.0.0';
```

apps/dataloader/package.json (2)
1-14: Prevent accidental publish and specify supported Node.

```diff
 {
     "name": "@metro-now/dataloader",
     "version": "1.0.0",
+    "private": true,
+    "engines": { "node": ">=18" },
```
7-10: Faster DX: simplify dev watch loop.

Current `nodemon` builds then restarts Node each change; consider running TS directly with `tsx`.

```diff
-"build": "tsc",
-"start": "node dist/app.js",
-"dev:watch": "nodemon --watch src --ext ts --exec \"npm run build && npm start\"",
+"build": "tsc",
+"start": "node dist/app.js",
+"dev": "tsx watch src/app.ts",
```

And add the dev dep:

```diff
 "devDependencies": {
+    "tsx": "^4",
     "@types/express": "^4.17.21",
     "@types/node": "^20.10.0",
     "nodemon": "^3.0.2",
     "typescript": "^5.3.0"
 }
```

Also applies to: 18-23
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (4)
- apps/dataloader/.gitignore (1 hunks)
- apps/dataloader/package.json (1 hunks)
- apps/dataloader/src/app.ts (1 hunks)
- apps/dataloader/tsconfig.json (1 hunks)
✅ Files skipped from review due to trivial changes (2)
- apps/dataloader/.gitignore
- apps/dataloader/tsconfig.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: ci (macos-latest)
🔇 Additional comments (1)
apps/dataloader/src/app.ts (1)
1-1: No action required — esModuleInterop enabled.
apps/dataloader/tsconfig.json has "esModuleInterop": true, so the default import from 'express' is safe; switching to `import type` for purely type imports is optional.
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
compose.yaml (1)
51-55: ⚠️ Potential issue | 🔴 Critical

Ensure database migrations are executed before backend startup in the Docker compose setup.
The backend container now runs only pnpm start:prod without any migration step. The DatabaseService only verifies connectivity via SELECT 1 during module initialization; it does not run migrations. Migration scripts exist in apps/database (migrate:deploy, migrate:rollback) but are not triggered during the compose startup. Without migrations applied, the backend will encounter schema mismatches and runtime failures when accessing database tables or columns.

Add an apps/database service to compose.yaml to execute pnpm migrate:deploy before the backend container starts, or document an alternative migration strategy in the README and ensure it is executed as part of the initialization workflow.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@compose.yaml` around lines 51 - 55, The compose file currently starts the backend with only "pnpm start:prod" and does not run DB migrations, so add a startup step that runs the migration scripts in apps/database (e.g., run "pnpm migrate:deploy") before the backend boots: either add a dedicated "apps_database" service that runs "pnpm migrate:deploy" and make the backend service depend_on it (or use a healthcheck/wait-for mechanism) or modify the backend service command to run the migration command from apps/database before executing "pnpm start:prod"; reference the migration npm scripts "migrate:deploy" / "migrate:rollback" and the backend startup currently using "pnpm start:prod" and ensure DatabaseService's simple SELECT 1 connectivity check is not relied upon as a migration step.
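The gating described above can be sketched with compose's `service_completed_successfully` condition; the service names and the pnpm filter below are illustrative assumptions, not taken from the repository's actual compose.yaml:

```yaml
services:
    migrate:
        build: .
        # one-shot container: applies pending migrations, then exits
        command: pnpm --filter @metro-now/database migrate:deploy

    backend:
        build: .
        command: pnpm start:prod
        depends_on:
            migrate:
                # start the backend only after the migration container exits 0
                condition: service_completed_successfully
```

Note that `service_completed_successfully` requires the Compose v2 spec; older setups need a wait-for wrapper script instead.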
🧹 Nitpick comments (20)
apps/dataloader/src/services/cron.service.ts (1)
34-42: Disabled jobs are not registered, preventing later enablement via API.

When config.enabled is false, the method returns early without adding the job to the map. This means disabled jobs cannot be started later via startJob(). If runtime toggling is desired, consider registering the job but not starting it.

♻️ Alternative: Register disabled jobs without starting
addJob(config: CronJobConfig, task: () => Promise<void>): void {
    if (this.jobs.has(config.name)) {
        this.removeJob(config.name);
    }

-   if (!config.enabled) {
-       logger.info("Skipping disabled cron job", { name: config.name });
-       return;
-   }
-
    if (!cron.validate(config.schedule)) {
        throw new Error(`Invalid cron schedule: ${config.schedule}`);
    }

    const managedJob: ManagedCronJob = {
        // ... create task as before
    };

    this.jobs.set(config.name, managedJob);
+
+   if (!config.enabled) {
+       logger.info("Registered disabled cron job", { name: config.name });
+       return;
+   }
+
+   managedJob.task.start();
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dataloader/src/services/cron.service.ts` around lines 34 - 42, The addJob method currently returns early when config.enabled is false, so disabled entries are not stored and cannot later be toggled via startJob; change addJob (handling CronJobConfig) to always register the job into this.jobs (and call removeJob first if present) but only start/schedule it when config.enabled is true—log "Skipping starting disabled cron job" instead of returning—so startJob can later find and start the stored job; update references to this.jobs, addJob, startJob, removeJob, and logger.info accordingly.

lib/shared/src/env.ts (1)
13-16: Consider coupling REDIS_HOST and REDIS_PORT validation.

Both fields are independently optional, but a partial configuration (e.g., only REDIS_HOST without REDIS_PORT) may cause runtime issues when constructing the Redis connection. Consider using .refine() to ensure they're either both present or both absent.

♻️ Optional: Add coupled validation
 export const redisEnvSchema = z.object({
     REDIS_HOST: z.string().min(1).optional(),
     REDIS_PORT: z.coerce.number().int().positive().optional(),
-});
+}).refine(
+    (data) => (data.REDIS_HOST !== undefined) === (data.REDIS_PORT !== undefined),
+    { message: "REDIS_HOST and REDIS_PORT must both be set or both be unset" }
+);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/shared/src/env.ts` around lines 13 - 16, The schema redisEnvSchema currently allows REDIS_HOST and REDIS_PORT to be set independently which can lead to invalid partial configs; update redisEnvSchema to add a .refine(...) check that enforces both REDIS_HOST and REDIS_PORT are either both present or both absent (inspect values of REDIS_HOST and REDIS_PORT inside the predicate) and provide a clear message for the failure case so consumers know they must supply both values together.

apps/dataloader/src/services/pid-import.service.ts (2)
12-14: Consider making PidImportService stateless with static methods.

The class has no instance state or injected dependencies. Either use static methods or instantiate it as a singleton to clarify the intended usage pattern.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dataloader/src/services/pid-import.service.ts` around lines 12 - 14, PidImportService currently has no instance state; make its intent explicit by converting its methods to static (e.g., declare class PidImportService with static getStopSnapshot, static buildStopSnapshot, static getStops) so callers can call PidImportService.getStopSnapshot() without instantiating, or alternatively export a single instantiated singleton (e.g., export const pidImportService = new PidImportService()) and mark the constructor private to prevent multiple instances; update all call sites that use getStopSnapshot/buildStopSnapshot/getStops to match the chosen pattern.
110-113: Type assertion on enum lookup masks invalid values.

The as keyof typeof VehicleType assertion bypasses type checking. If line.type.toUpperCase() produces an unexpected value, it silently becomes undefined (then null). Consider explicit validation or a safer lookup pattern.

♻️ Safer enum lookup
+const getVehicleType = (type: string): VehicleType | null => {
+    const upperType = type.toUpperCase();
+    return Object.values(VehicleType).includes(upperType as VehicleType)
+        ? (upperType as VehicleType)
+        : null;
+};
+
 routes.set(routeId, {
     id: routeId,
     name: routeName,
-    vehicleType:
-        VehicleType[
-            line.type.toUpperCase() as keyof typeof VehicleType
-        ] ?? null,
+    vehicleType: getVehicleType(line.type),
     isNight: null,
 });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dataloader/src/services/pid-import.service.ts` around lines 110 - 113, The enum lookup using VehicleType[line.type.toUpperCase() as keyof typeof VehicleType] should be made safe by validating the uppercase string before indexing; compute the candidate key from line.type (e.g. const key = line.type?.toUpperCase()), check that key exists on VehicleType (e.g. key in VehicleType or Object.prototype.hasOwnProperty.call(VehicleType, key)), and only then assign vehicleType = VehicleType[key as keyof typeof VehicleType], otherwise set vehicleType = null; update the code near the vehicleType assignment in pid-import.service.ts to perform this explicit validation instead of using the type assertion.

apps/dataloader/src/config/env.ts (1)
31-37: All matching env files are loaded, allowing later files to override earlier values.

The current implementation loads every existing file from ENV_FILES rather than stopping at the first match. This means variables defined in multiple files will use the value from the last loaded file. If this cascading behavior is intentional, consider adding a comment to document it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dataloader/src/config/env.ts` around lines 31 - 37, The loop over ENV_FILES currently loads every existing env file (using resolve/process.cwd() to build filePath and process.loadEnvFile), causing later files to override earlier values; update the logic to stop after the first successful load (i.e., after process.loadEnvFile(filePath) perform a break) so only the highest-priority matching file is used, or if cascading overrides are intended, add an explicit comment above the for-loop documenting that ENV_FILES are intentionally loaded in order and later files override earlier ones.

apps/dataloader/src/utils/fetch.utils.ts (1)
17-20: Caller-provided signal will be silently overwritten.

If init contains a signal property, it gets overwritten by the timeout signal. Consider merging signals using AbortSignal.any() (available in Node 18.17.0+) or documenting this limitation.

♻️ Optional fix using AbortSignal.any
 export const fetchWithTimeout = async (
     url: URL | string,
     init: RequestInit = {},
     timeoutMs = DEFAULT_TIMEOUT_MS,
 ): Promise<Response> => {
+    const timeoutSignal = AbortSignal.timeout(timeoutMs);
+    const combinedSignal = init.signal
+        ? AbortSignal.any([init.signal, timeoutSignal])
+        : timeoutSignal;
+
     try {
         return await fetch(url, {
             ...init,
-            signal: AbortSignal.timeout(timeoutMs),
+            signal: combinedSignal,
         });
     } catch (error) {
         throw new Error(formatFetchError(url, error));
     }
 };

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dataloader/src/utils/fetch.utils.ts` around lines 17 - 20, The current fetch call overwrites any caller-provided init.signal with AbortSignal.timeout(timeoutMs), so merge the signals instead of overwriting: if AbortSignal.any is available use AbortSignal.any([init.signal, AbortSignal.timeout(timeoutMs)]) to create a combined signal; otherwise implement a fallback by creating a local AbortController that listens to the caller's init.signal and calls controller.abort() on either the caller signal or a timeout (using AbortSignal.timeout or setTimeout), then pass controller.signal to fetch. Update the fetch invocation that uses init, signal, AbortSignal.timeout, and timeoutMs accordingly or document the limitation if you choose not to merge.

apps/backend/src/types/parent.ts (1)
1-1: Consider using unknown[] instead of never[] for stricter yet more practical typing.

The current (...args: never[]) constraint is overly restrictive. While getMany() has no parameters and satisfies this constraint, methods like getMultiple() and getOne() accept parameters. Using unknown[] maintains stricter typing compared to any while allowing functions with any parameters:

Suggested change
-type AnyFunction = (...args: never[]) => unknown;
+type AnyFunction = (...args: unknown[]) => unknown;

This preserves the improvement over the previous any typing while avoiding potential edge cases with the contravariant parameter constraint.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/types/parent.ts` at line 1, Replace the overly restrictive parameter type in the AnyFunction alias: change the rest-parameter type from never[] to unknown[] so functions with any parameter shapes (e.g., getMany, getMultiple, getOne) type-check correctly; update the type alias named AnyFunction accordingly and ensure callers using AnyFunction continue to compile with the new unknown[] parameter signature.

apps/backend/src/constants/cache.ts (1)
30-42: Consider consolidating uniqueStrings into @metro-now/shared.

This utility appears generic and could be useful elsewhere in the monorepo. The AI summary mentions @metro-now/shared is already a dependency in cache-batch.ts. If that package exists, consider moving this helper there to avoid duplication.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/constants/cache.ts` around lines 30 - 42, The helper functions uniqueStrings and uniqueSortedStrings are generic utilities that should be consolidated into the shared package; move their implementations into `@metro-now/shared` (export them from that package), remove them from apps/backend/src/constants/cache.ts, and update any callers (e.g., import sites like cache-batch.ts which already depends on `@metro-now/shared`) to import uniqueStrings and uniqueSortedStrings from `@metro-now/shared` instead; ensure the shared package exports are updated and run the build/tests to confirm no import breakages.

apps/backend/src/modules/golemio/golemio.service.ts (1)
35-39: Consider including the URL in the error message for easier debugging.

The error message includes status code and text, but including the full URL would help identify which endpoint failed.
💡 Enhanced error message
 if (!res.ok) {
     throw new Error(
-        `Failed to fetch Golemio data: ${res.status} ${res.statusText}`,
+        `Failed to fetch Golemio data from ${url}: ${res.status} ${res.statusText}`,
     );
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/modules/golemio/golemio.service.ts` around lines 35 - 39, The error thrown when a Golemio fetch fails only contains status and statusText; update the throw in the failure branch to include the request URL (use the local URL variable if present or fall back to res.url) so the message becomes something like "Failed to fetch Golemio data: <URL> <status> <statusText>"; locate the check around res.ok in golemio.service.ts and modify the Error construction used there to include the URL.

apps/backend/src/config/cache-module.config.ts (1)
10-14: Add radix parameter to parseInt for explicit base-10 parsing.

While Number.parseInt defaults to base 10 for standard decimal strings, explicitly passing the radix improves clarity and avoids potential edge cases.

💡 Suggested fix
 new Keyv({
     store: new KeyvRedis(
-        `redis://${process.env.REDIS_HOST || "localhost"}:${Number.parseInt(process.env.REDIS_PORT || "6379")}`,
+        `redis://${process.env.REDIS_HOST || "localhost"}:${Number.parseInt(process.env.REDIS_PORT || "6379", 10)}`,
     ),
 }),

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/config/cache-module.config.ts` around lines 10 - 14, The call to Number.parseInt in the KeyvRedis constructor should pass an explicit radix to ensure base-10 parsing; update the code where Keyv is constructed (the new Keyv({...} with store: new KeyvRedis(...)) and change Number.parseInt(process.env.REDIS_PORT || "6379") to Number.parseInt(process.env.REDIS_PORT || "6379", 10) so the port is parsed as base 10.

apps/backend/src/modules/route/route.resolver.ts (1)
11-13: Consider handling edge cases in route ID normalization.

The helper correctly handles the "L" prefix, but consider what happens with empty strings or whitespace-only inputs. If these are invalid inputs, you might want to validate or throw early.
💡 Defensive enhancement
 private toDatabaseRouteId(id: string): string {
+    if (!id || !id.trim()) {
+        throw new Error("Route ID cannot be empty");
+    }
     return id.startsWith("L") ? id : `L${id}`;
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/modules/route/route.resolver.ts` around lines 11 - 13, The toDatabaseRouteId helper should defensively handle empty or whitespace-only inputs: in the toDatabaseRouteId(id: string) function trim the incoming id, then if the trimmed value is empty throw a clear Error (e.g., "Invalid route id: empty or whitespace"), otherwise apply the existing logic to ensure it starts with "L" (return trimmed if startsWith("L") else `L${trimmed}`). This ensures invalid inputs are rejected early while preserving current prefix normalization.

apps/backend/src/utils/cache-batch.spec.ts (1)
10-18: Consider adding TTL verification in mock.

The mock mset ignores the ttl property. Consider adding an assertion to verify TTL values are passed correctly if TTL handling is critical.

💡 TTL verification
 mset: jest.fn(
-    async (entries: Array<{ key: string; value: unknown }>) => {
+    async (entries: Array<{ key: string; value: unknown; ttl: number }>) => {
         for (const entry of entries) {
             store.set(entry.key, entry.value);
+            // Could verify: expect(entry.ttl).toBeGreaterThan(0);
         }
         return entries;
     },
 ),

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/utils/cache-batch.spec.ts` around lines 10 - 18, The mock mset implementation ignores the ttl on each entry—update the mock for mset (the async function receiving entries) to accept entries typed with an optional ttl, and either (a) assert passed TTLs inside the mock (e.g., verify entry.ttl matches expected values) or (b) store ttl alongside the value in the in-memory store (e.g., store.set(entry.key, { value: entry.value, ttl: entry.ttl })) so tests can verify TTL handling; change references to entries and store.set in the mock accordingly and ensure tests supply and validate the ttl field.

apps/dataloader/src/types/sync.types.ts (1)
74-82: Consider using discriminated union for SyncRunResult.

Currently counts and reason are both optional, but semantically counts should only be present on success and reason only on skipped. Using a discriminated union (like SyncPersistenceResult) would enforce this at the type level.

💡 Discriminated union approach
-export type SyncRunResult = {
-    status: "success" | "skipped";
-    trigger: SyncTrigger;
-    startedAt: Date;
-    finishedAt: Date;
-    durationMs: number;
-    counts?: SyncCounts;
-    reason?: string;
-};
+type SyncRunResultBase = {
+    trigger: SyncTrigger;
+    startedAt: Date;
+    finishedAt: Date;
+    durationMs: number;
+};
+
+export type SyncRunResult =
+    | (SyncRunResultBase & { status: "success"; counts: SyncCounts })
+    | (SyncRunResultBase & { status: "skipped"; reason: string });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dataloader/src/types/sync.types.ts` around lines 74 - 82, Replace the broad SyncRunResult type with a discriminated union so that status discriminates the payload: define one variant for status: "success" that includes trigger: SyncTrigger, startedAt, finishedAt, durationMs and a required counts: SyncCounts (no reason), and another variant for status: "skipped" that includes trigger, startedAt, finishedAt, durationMs and a required reason: string (no counts); update any code that constructs or narrows SyncRunResult to use the status discriminator (e.g., switch/if on result.status) so TypeScript can infer the correct fields and adjust any references to counts/reason accordingly.

apps/dataloader/src/utils/logger.spec.ts (1)
17-29: Use try/finally to guarantee transport cleanup.

This test touches global logger state; cleanup should be guaranteed even if an await throws.
💡 Suggested refactor
 test("logger writes entries to the configured transport", async () => {
     const transport = new TestTransport();
     logger.setTransport(transport);

-    logger.info("hello", { foo: "bar" });
-    await logger.flush();
-    logger.setTransport(null);
-
-    assert.equal(transport.entries.length, 1);
-    assert.equal(transport.entries[0]?.level, "info");
-    assert.equal(transport.entries[0]?.message, "hello");
-    assert.deepEqual(transport.entries[0]?.context, { foo: "bar" });
+    try {
+        logger.info("hello", { foo: "bar" });
+        await logger.flush();
+
+        assert.equal(transport.entries.length, 1);
+        assert.equal(transport.entries[0]?.level, "info");
+        assert.equal(transport.entries[0]?.message, "hello");
+        assert.deepEqual(transport.entries[0]?.context, { foo: "bar" });
+    } finally {
+        logger.setTransport(null);
+    }
 });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dataloader/src/utils/logger.spec.ts` around lines 17 - 29, Wrap the test body in a try/finally so the transport is always restored: set the TestTransport via logger.setTransport(transport) before the try, perform logger.info("hello", { foo: "bar" }) and await logger.flush() inside the try, and call logger.setTransport(null) in the finally block; keep references to TestTransport, logger.setTransport, logger.info, and logger.flush so cleanup happens even if the await throws.

apps/backend/src/modules/infotexts/infotexts.resolver.ts (1)
24-27: Short-circuit empty related-platform IDs before service call.

A tiny guard avoids unnecessary DB-layer work when relatedPlatforms is empty.

💡 Suggested refactor
 const ids = infotext.relatedPlatforms.map(({ id }) => id);
+if (ids.length === 0) {
+    return [];
+}
 return this.platformService.getGraphQLByIds(ids);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/modules/infotexts/infotexts.resolver.ts` around lines 24 - 27, Add a fast-return guard in the resolver to avoid calling platformService.getGraphQLByIds when there are no related platform IDs: check infotext.relatedPlatforms (or the derived ids array) and if empty return an empty array immediately; locate this logic around the ids variable and the call to this.platformService.getGraphQLByIds(ids) in the InfotextsResolver and return [] before invoking platformService when ids.length === 0.

apps/backend/src/modules/infotexts/__test__/infotext.resolver.spec.ts (1)
20-27: Mock may need enhancement for more comprehensive tests.

The current mock provides selectFrom: jest.fn() which suffices for the basic "should be defined" test. If you add tests that exercise actual queries, you'll need to mock the Kysely query chain (e.g., selectFrom().selectAll().where().execute()).

Example enhanced mock for future tests
{
    provide: DatabaseService,
    useValue: {
        db: {
            selectFrom: jest.fn().mockReturnValue({
                selectAll: jest.fn().mockReturnThis(),
                where: jest.fn().mockReturnThis(),
                execute: jest.fn().mockResolvedValue([]),
            }),
        },
    },
},

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/modules/infotexts/__test__/infotext.resolver.spec.ts` around lines 20 - 27, The mock for DatabaseService only stubs selectFrom and should be enhanced to mimic the Kysely fluent API so tests that run queries can chain methods; update the DatabaseService mock (the db.selectFrom mock) to return an object implementing selectAll, where, and execute (each as jest.fn() with selectAll/where returning this and execute returning a resolved value) so tests invoking selectFrom().selectAll().where().execute() behave correctly.

apps/backend/src/modules/stop/stop.service.ts (1)
318-320: Consider preserving input order instead of deduplicating to Set.

Array.from(new Set(ids)) removes duplicates but may change the order. If callers expect results in the same order as input ids, consider using a different deduplication approach or documenting the behavior.

♻️ Order-preserving deduplication
-return Array.from(new Set(ids))
-    .map((id) => stopsById.get(id))
-    .filter((stop): stop is StopGraphQLRecord => stop !== null);
+const seen = new Set<string>();
+return ids
+    .filter((id) => {
+        if (seen.has(id)) return false;
+        seen.add(id);
+        return true;
+    })
+    .map((id) => stopsById.get(id))
+    .filter((stop): stop is StopGraphQLRecord => stop !== null);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/modules/stop/stop.service.ts` around lines 318 - 320, The current return builds Array.from(new Set(ids)) which deduplicates but can reorder inputs; update the logic in the function that uses ids and stopsById so duplicates are removed while preserving the original ids order: iterate over the original ids, track seen ids (e.g., with a local Set or Map), push the id only if not seen, then map those ordered unique ids to stopsById.get(id) and filter to drop nulls as StopGraphQLRecord; keep references to ids, stopsById, and StopGraphQLRecord so the change is localized.

apps/dataloader/src/services/sync-repository.service.ts (1)
304-324: Consider optimizing bulk relation deletes for large datasets.

The delete query builds an OR of AND conditions, which can become slow with large batches (up to 500 rows). For better performance, consider using a CTE with VALUES or a temp-table approach.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dataloader/src/services/sync-repository.service.ts` around lines 304 - 324, The current delete builds an OR-of-ANDs via transaction.deleteFrom("PlatformsOnRoutes") ... where(expressionBuilder.or(...chunk.map(...))) which will be slow for large chunks; replace this pattern with a set-based delete using a CTE/VALUES (or a temp table) that lists (platformId, routeId) and then deletes by joining that CTE to PlatformsOnRoutes (e.g. DELETE ... USING (VALUES ...) AS v(platformId,routeId) WHERE PlatformsOnRoutes.platformId = v.platformId AND PlatformsOnRoutes.routeId = v.routeId). Update the code that constructs the delete (the block using transaction.deleteFrom("PlatformsOnRoutes") and expressionBuilder/or with chunk.map) to generate the CTE/VALUES payload from chunk and execute a single join-based delete for much better performance on batches up to 500 rows.

apps/backend/src/modules/platform/platform.service.ts (1)
272-277: Same order-preservation consideration as in StopService.

The Array.from(new Set(ids)) pattern here has the same potential order change as noted in stop.service.ts. Consider applying consistent behavior across both services.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/modules/platform/platform.service.ts` around lines 272 - 277, The current deduplication using Array.from(new Set(ids)) can reorder ids; update the logic in platform.service.ts to preserve the original ids order while removing duplicates and null platform entries: iterate the original ids array, maintain a seen Set to skip duplicate ids, for each unseen id get platformsById.get(id) and only push non-null PlatformGraphQLRecord results into the output array. Keep references to the existing ids variable and platformsById Map so the change is local and consistent with StopService’s ordering behavior.

apps/dataloader/src/services/sync.service.ts (1)
103-113: Object spread merge may silently overwrite properties if snapshots share keys.

If stopSnapshot and gtfsSnapshot have overlapping property names, the spread will silently prefer gtfsSnapshot values. Consider explicitly listing properties or using a type-safe merge to prevent accidental data loss.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dataloader/src/services/sync.service.ts` around lines 103 - 113, The current createSnapshot method merges stopSnapshot and gtfsSnapshot with object spread which can silently overwrite keys; replace the spread merge in createSnapshot so you explicitly construct and return a SyncSnapshot by enumerating and assigning each expected property from stopSnapshot and gtfsSnapshot (or perform a controlled, type-safe merge that reconciles overlapping keys), using importService.getStopSnapshot and gtfsService.getGtfsSnapshot values (e.g., pick/rename conflicting fields or merge nested structures) rather than relying on ...stopSnapshot, ...gtfsSnapshot.
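The explicit-merge idea can be sketched as follows; the snapshot field names below are hypothetical, chosen only to show an overlapping key being reconciled deliberately instead of being decided by spread order:

```typescript
// Hypothetical shapes: the real SyncSnapshot lives in the dataloader and may differ.
type StopSnapshot = { stops: string[]; platforms: string[] };
type GtfsSnapshot = { routes: string[]; platforms: string[] };

type SyncSnapshot = {
    stops: string[];
    routes: string[];
    platforms: string[];
};

const mergeSnapshots = (
    stopSnapshot: StopSnapshot,
    gtfsSnapshot: GtfsSnapshot,
): SyncSnapshot => ({
    stops: stopSnapshot.stops,
    routes: gtfsSnapshot.routes,
    // overlapping key: reconcile explicitly instead of letting spread order decide
    platforms: [
        ...new Set([...stopSnapshot.platforms, ...gtfsSnapshot.platforms]),
    ],
});
```

Because every field is assigned by name, adding a new overlapping property to either snapshot surfaces as a compile-time decision rather than a silent overwrite.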
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/backend/package.json`:
- Line 20: The test:e2e npm script is missing the workspace-prebuild step that
ensures `@metro-now/shared` and `@metro-now/database` are built; update the
"test:e2e" script in package.json so it runs the same prebuild command used by
the regular test flow (build the workspace packages `@metro-now/shared` and
`@metro-now/database`) before invoking Jest (i.e., prepend or incorporate the
workspace build step that exists for the other test script so test:e2e always
builds those packages first).
- Line 73: The Jest coverage glob under the "collectCoverageFrom" key currently
uses the literal pattern "**/*.(t|j)s" which doesn't match files; update the
value for "collectCoverageFrom" to a proper glob that matches .ts and .js files
(for example use a brace expansion like "**/*.{ts,js}") so Jest will collect
coverage from TypeScript and JavaScript sources; locate the string
"collectCoverageFrom" and replace the pattern "**/*.(t|j)s" with the corrected
pattern.
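For reference, a corrected fragment; the surrounding "jest" key placement is an assumption based on the typical Nest package.json layout:

```json
"jest": {
    "collectCoverageFrom": ["**/*.{ts,js}"]
}
```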
In `@apps/backend/src/modules/departure/departure-board.service.ts`:
- Around line 20-57: In resolvePlatformIds, avoid passing huge parameter lists
to the DB by de-duplicating and capping platformIds and stopIds before running
the two Platform queries: create deduped arrays (e.g. via new Set) for
platformIds and stopIds and then slice them to a sensible maximum (based on
limit) before using them in the .where("id", "in", ...) and .where("stopId",
"in", ...) calls; keep references to the existing
variables/directPlatforms/stopPlatforms and retain the final
uniqueSortedStrings([...]).slice(0, limit) behavior but ensure the DB queries
only receive the deduped-and-sliced input arrays.
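The dedupe-then-cap step described in this comment can be sketched as a small helper; the name dedupeAndCap is illustrative, not code from the PR:

```typescript
// Dedupe (Set preserves first-occurrence order) and cap before the ids reach
// the `in (...)` clauses, so the DB never sees an oversized parameter list.
const dedupeAndCap = (ids: string[], max: number): string[] =>
    [...new Set(ids)].slice(0, max);

// e.g. before querying:
// const cappedPlatformIds = dedupeAndCap(platformIds, limit);
// const cappedStopIds = dedupeAndCap(stopIds, limit);
```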
In `@apps/backend/src/modules/departure/departure.resolver.ts`:
- Around line 36-52: If resolvePlatformIds(...) returns an empty array, return
an empty list immediately instead of calling fetchDepartureBoard with
platformIds: []; update the resolver method (the code that calls
this.departureBoardService.resolvePlatformIds and then
this.departureBoardService.fetchDepartureBoard) to check if
resolvedPlatformIds.length === 0 and return [] (or the appropriate empty
response shape) early; keep the normalizedLimit and other logic for non-empty
results and only call fetchDepartureBoard when there are resolvedPlatformIds.
In `@apps/backend/src/modules/log/log.service.ts`:
- Around line 47-111: The current logGraphQLRequest implementation chains writes
on the single Promise pendingWrite, causing an unbounded in-memory backlog under
load; replace this single-writer chain with a bounded, concurrency-controlled
approach: remove the append-to-pendingWrite pattern and instead implement a
limited-concurrency writer (e.g., a semaphore/worker pool) or a bounded
in-memory buffer with periodic batching and overflow-dropping for entries, and
have the writer flush using database.db.insertInto("Log").values(...).execute()
in batches or with up to N concurrent requests; ensure the implementation
references and replaces pendingWrite and logGraphQLRequest, enforces a max
buffer size or concurrency limit, and drops or stalls new logs (not grow memory)
when the limit is reached.
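One way to realize the bounded-buffer writer described in this comment; writeBatch stands in for the real insertInto("Log").values(...).execute() call, and the class name is an assumption, not code from the PR:

```typescript
// Bounded log writer sketch: entries beyond maxBuffered are dropped instead of
// growing an unbounded promise chain; a single flush loop drains the buffer in
// batches of batchSize.
class BoundedLogWriter<T> {
    private buffer: T[] = [];
    private flushing = false;

    constructor(
        private readonly writeBatch: (entries: T[]) => Promise<void>,
        private readonly maxBuffered = 1000,
        private readonly batchSize = 100,
    ) {}

    // Returns false when the entry was dropped due to backpressure.
    add(entry: T): boolean {
        if (this.buffer.length >= this.maxBuffered) {
            return false; // drop instead of queueing unboundedly
        }
        this.buffer.push(entry);
        void this.flush();
        return true;
    }

    private async flush(): Promise<void> {
        if (this.flushing) return; // single in-flight writer
        this.flushing = true;
        try {
            while (this.buffer.length > 0) {
                const batch = this.buffer.splice(0, this.batchSize);
                await this.writeBatch(batch);
            }
        } finally {
            this.flushing = false;
        }
    }
}
```

A semaphore over individual inserts would work too; the buffer variant has the advantage of batching rows into fewer INSERT statements.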
In `@apps/backend/src/modules/stop/stop.resolver.ts`:
- Around line 25-31: The resolver currently treats an explicit empty ids array
as a request for all stops; change the branching in the resolver so only
undefined/null ids triggers getAllGraphQL(), and an empty array returns an empty
array immediately. In practice, adjust the condition around
stopService.getGraphQLByIds in the resolver (the block that calls
this.stopService.getGraphQLByIds(ids)) to first check Array.isArray(ids) and if
ids.length === 0 return [] (or if ids is undefined/null call getAllGraphQL()),
otherwise call getGraphQLByIds and apply offset/limit as before.
In `@apps/database/migrate.ts`:
- Around line 82-85: The usage banner in the default case is out of sync with
the command switch: update the console.log help lines in the default branch so
they list the actual accepted commands (deploy, latest for apply and rollback
for revert) instead of "<up|down>"; locate the default case in migrate.ts and
change the two console.log strings that print the usage/description to mention
"ts-node migrate.ts <deploy|latest|rollback>" (or similar wording) and map
deploy/latest to apply/pending migrations and rollback to revert the last
migration so the help text matches the switch logic.
In `@apps/database/seed.ts`:
- Around line 98-106: The seed currently maps routes to NewRoute objects with
isNight set to null, which discards the parsed Route.isNight value; update the
mapping that builds routeValues (the routes.map callback creating NewRoute
entries) to assign isNight: route.isNight (or route.isNight ?? null if you want
to keep nullable typing) instead of null so the seeded rows preserve the
original isNight attribute.
In `@apps/dataloader/src/config/env.ts`:
- Line 35: Add an explicit Node.js engine constraint to package.json to ensure
runtime support for process.loadEnvFile (requires Node >=20.12.0): open
package.json and add or update the "engines" field to "node": ">=20.12.0" so
installs/resolutions and CI enforce the minimum version used by code paths like
process.loadEnvFile in apps/dataloader/src/config/env.ts; after changing
package.json, regenerate lockfiles/CI config as needed to ensure environments
meet this constraint.
In `@apps/dataloader/src/services/pid-import.service.ts`:
- Around line 100-101: The code risks a runtime error when computing routeName
because it calls line.name.trim() without ensuring line.name is a string; update
the logic that sets routeName (next to routeId) to coerce or guard line.name
similarly to routeId—e.g. use String(line.name).trim() or check for
null/undefined and default to ''—so that routeName assignment never calls
.trim() on a non-string value.
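The guard amounts to a tiny helper; a sketch (the name `safeTrim` is illustrative):

```typescript
// Coerce a possibly missing or non-string value to a trimmed string,
// defaulting to "" so .trim() is never called on a non-string.
function safeTrim(value: unknown): string {
    if (value === null || value === undefined) return "";
    return String(value).trim();
}
```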
In `@lib/shared/src/database.ts`:
- Line 77: The connection-string builder currently interpolates raw
POSTGRES_USER and POSTGRES_PASSWORD into the URL (the return statement building
the postgresql://... string) which breaks when credentials contain special
characters; update that code to pass env.POSTGRES_USER and env.POSTGRES_PASSWORD
through encodeURIComponent() before interpolation so both username and password
are URL-encoded when constructing the connection string (leave host, port, DB
name and schema unchanged).
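A sketch of the encoded builder under an assumed env shape (the real `lib/shared/src/database.ts` also appends a schema parameter and may differ in field names):

```typescript
type DbEnv = {
    POSTGRES_USER: string;
    POSTGRES_PASSWORD: string;
    POSTGRES_HOST: string;
    POSTGRES_PORT: string;
    POSTGRES_DB: string;
};

// Build a postgres URL with URL-encoded credentials so characters like
// "@", ":", or "/" in the password cannot corrupt the connection string.
function buildConnectionString(env: DbEnv): string {
    const user = encodeURIComponent(env.POSTGRES_USER);
    const password = encodeURIComponent(env.POSTGRES_PASSWORD);
    return `postgresql://${user}:${password}@${env.POSTGRES_HOST}:${env.POSTGRES_PORT}/${env.POSTGRES_DB}`;
}
```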
---
Outside diff comments:
In `@compose.yaml`:
- Around line 51-55: The compose file currently starts the backend with only
"pnpm start:prod" and does not run DB migrations, so add a startup step that
runs the migration scripts in apps/database (e.g., run "pnpm migrate:deploy")
before the backend boots: either add a dedicated "apps_database" service that
runs "pnpm migrate:deploy" and make the backend service depend_on it (or use a
healthcheck/wait-for mechanism) or modify the backend service command to run the
migration command from apps/database before executing "pnpm start:prod";
reference the migration npm scripts "migrate:deploy" / "migrate:rollback" and
the backend startup currently using "pnpm start:prod" and ensure
DatabaseService's simple SELECT 1 connectivity check is not relied upon as a
migration step.
---
Nitpick comments:
In `@apps/backend/src/config/cache-module.config.ts`:
- Around line 10-14: The call to Number.parseInt in the KeyvRedis constructor
should pass an explicit radix to ensure base‑10 parsing; update the code where
Keyv is constructed (the new Keyv({...} with store: new KeyvRedis(...)) and
change Number.parseInt(process.env.REDIS_PORT || "6379") to
Number.parseInt(process.env.REDIS_PORT || "6379", 10) so the port is parsed as
base 10.
In `@apps/backend/src/constants/cache.ts`:
- Around line 30-42: The helper functions uniqueStrings and uniqueSortedStrings
are generic utilities that should be consolidated into the shared package; move
their implementations into `@metro-now/shared` (export them from that package),
remove them from apps/backend/src/constants/cache.ts, and update any callers
(e.g., import sites like cache-batch.ts which already depends on
`@metro-now/shared`) to import uniqueStrings and uniqueSortedStrings from
`@metro-now/shared` instead; ensure the shared package exports are updated and run
the build/tests to confirm no import breakages.
In `@apps/backend/src/modules/golemio/golemio.service.ts`:
- Around line 35-39: The error thrown when a Golemio fetch fails only contains
status and statusText; update the throw in the failure branch to include the
request URL (use the local URL variable if present or fall back to res.url) so
the message becomes something like "Failed to fetch Golemio data: <URL> <status>
<statusText>"; locate the check around res.ok in golemio.service.ts and modify
the Error construction used there to include the URL.
In `@apps/backend/src/modules/infotexts/__test__/infotext.resolver.spec.ts`:
- Around line 20-27: The mock for DatabaseService only stubs selectFrom and
should be enhanced to mimic the Kysely fluent API so tests that run queries can
chain methods; update the DatabaseService mock (the db.selectFrom mock) to
return an object implementing selectAll, where, and execute (each as jest.fn()
with selectAll/where returning this and execute returning a resolved value) so
tests invoking selectFrom().selectAll().where().execute() behave correctly.
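Such a chainable mock can be built without Jest by returning the same object from each intermediate call (the factory name is illustrative):

```typescript
// Minimal stand-in for Kysely's fluent builder: selectAll()/where() return
// the chain itself, and execute() resolves the canned rows.
function createSelectFromMock<Row>(rows: Row[]) {
    const chain = {
        selectAll: () => chain,
        where: () => chain,
        execute: () => Promise.resolve(rows),
    };
    return { selectFrom: (_table: string) => chain };
}
```

In a Jest spec each method would typically be a `jest.fn()` wrapper around the same behavior so call counts and arguments can also be asserted.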
In `@apps/backend/src/modules/infotexts/infotexts.resolver.ts`:
- Around line 24-27: Add a fast-return guard in the resolver to avoid calling
platformService.getGraphQLByIds when there are no related platform IDs: check
infotext.relatedPlatforms (or the derived ids array) and if empty return an
empty array immediately; locate this logic around the ids variable and the call
to this.platformService.getGraphQLByIds(ids) in the InfotextsResolver and return
[] before invoking platformService when ids.length === 0.
In `@apps/backend/src/modules/platform/platform.service.ts`:
- Around line 272-277: The current deduplication using Array.from(new Set(ids))
can reorder ids; update the logic in platform.service.ts to preserve the
original ids order while removing duplicates and null platform entries: iterate
the original ids array, maintain a seen Set to skip duplicate ids, for each
unseen id get platformsById.get(id) and only push non-null PlatformGraphQLRecord
results into the output array. Keep references to the existing ids variable and
platformsById Map so the change is local and consistent with StopService’s
ordering behavior.
In `@apps/backend/src/modules/route/route.resolver.ts`:
- Around line 11-13: The toDatabaseRouteId helper should defensively handle
empty or whitespace-only inputs: in the toDatabaseRouteId(id: string) function
trim the incoming id, then if the trimmed value is empty throw a clear Error
(e.g., "Invalid route id: empty or whitespace"), otherwise apply the existing
logic to ensure it starts with "L" (return trimmed if startsWith("L") else
`L${trimmed}`). This ensures invalid inputs are rejected early while preserving
current prefix normalization.
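The suggested validation amounts to a few lines; a sketch:

```typescript
// Normalize a route id to the database's "L"-prefixed form, rejecting
// empty or whitespace-only input early with a clear error.
function toDatabaseRouteId(id: string): string {
    const trimmed = id.trim();
    if (trimmed.length === 0) {
        throw new Error("Invalid route id: empty or whitespace");
    }
    return trimmed.startsWith("L") ? trimmed : `L${trimmed}`;
}
```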
In `@apps/backend/src/modules/stop/stop.service.ts`:
- Around line 318-320: The current return builds Array.from(new Set(ids)) which
deduplicates but can reorder inputs; update the logic in the function that uses
ids and stopsById so duplicates are removed while preserving the original ids
order: iterate over the original ids, track seen ids (e.g., with a local Set or
Map), push the id only if not seen, then map those ordered unique ids to
stopsById.get(id) and filter to drop nulls as StopGraphQLRecord; keep references
to ids, stopsById, and StopGraphQLRecord so the change is localized.
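This and the PlatformService comment above describe the same order-preserving deduplication, which can be sketched as a generic helper (the name `resolveInOrder` is illustrative):

```typescript
// Deduplicate ids while preserving first-seen order, then resolve each id
// against a lookup map, dropping ids that have no record.
function resolveInOrder<T>(
    ids: readonly string[],
    byId: ReadonlyMap<string, T>,
): T[] {
    const seen = new Set<string>();
    const result: T[] = [];
    for (const id of ids) {
        if (seen.has(id)) continue;
        seen.add(id);
        const record = byId.get(id);
        if (record !== undefined) result.push(record);
    }
    return result;
}
```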
In `@apps/backend/src/types/parent.ts`:
- Line 1: Replace the overly restrictive parameter type in the AnyFunction
alias: change the rest-parameter type from never[] to unknown[] so functions
with any parameter shapes (e.g., getMany, getMultiple, getOne) type-check
correctly; update the type alias named AnyFunction accordingly and ensure
callers using AnyFunction continue to compile with the new unknown[] parameter
signature.
In `@apps/backend/src/utils/cache-batch.spec.ts`:
- Around line 10-18: The mock mset implementation ignores the ttl on each
entry—update the mock for mset (the async function receiving entries) to accept
entries typed with an optional ttl, and either (a) assert passed TTLs inside the
mock (e.g., verify entry.ttl matches expected values) or (b) store ttl alongside
the value in the in-memory store (e.g., store.set(entry.key, { value:
entry.value, ttl: entry.ttl })) so tests can verify TTL handling; change
references to entries and store.set in the mock accordingly and ensure tests
supply and validate the ttl field.
In `@apps/dataloader/src/config/env.ts`:
- Around line 31-37: The loop over ENV_FILES currently loads every existing env
file (using resolve/process.cwd() to build filePath and process.loadEnvFile),
causing later files to override earlier values; update the logic to stop after
the first successful load (i.e., after process.loadEnvFile(filePath) perform a
break) so only the highest-priority matching file is used, or if cascading
overrides are intended, add an explicit comment above the for-loop documenting
that ENV_FILES are intentionally loaded in order and later files override
earlier ones.
In `@apps/dataloader/src/services/cron.service.ts`:
- Around line 34-42: The addJob method currently returns early when
config.enabled is false, so disabled entries are not stored and cannot later be
toggled via startJob; change addJob (handling CronJobConfig) to always register
the job into this.jobs (and call removeJob first if present) but only
start/schedule it when config.enabled is true—log "Skipping starting disabled
cron job" instead of returning—so startJob can later find and start the stored
job; update references to this.jobs, addJob, startJob, removeJob, and
logger.info accordingly.
In `@apps/dataloader/src/services/pid-import.service.ts`:
- Around line 12-14: PidImportService currently has no instance state; make its
intent explicit by converting its methods to static (e.g., declare class
PidImportService with static getStopSnapshot, static buildStopSnapshot, static
getStops) so callers can call PidImportService.getStopSnapshot() without
instantiating, or alternatively export a single instantiated singleton (e.g.,
export const pidImportService = new PidImportService()) and mark the constructor
private to prevent multiple instances; update all call sites that use
getStopSnapshot/buildStopSnapshot/getStops to match the chosen pattern.
- Around line 110-113: The enum lookup using VehicleType[line.type.toUpperCase()
as keyof typeof VehicleType] should be made safe by validating the uppercase
string before indexing; compute the candidate key from line.type (e.g. const key
= line.type?.toUpperCase()), check that key exists on VehicleType (e.g. key in
VehicleType or Object.prototype.hasOwnProperty.call(VehicleType, key)), and only
then assign vehicleType = VehicleType[key as keyof typeof VehicleType],
otherwise set vehicleType = null; update the code near the vehicleType
assignment in pid-import.service.ts to perform this explicit validation instead
of using the type assertion.
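The validated lookup can be sketched as follows; the enum members here are illustrative stand-ins for whatever `VehicleType` actually declares:

```typescript
enum VehicleType {
    METRO = "METRO",
    TRAM = "TRAM",
    BUS = "BUS",
}

// Validate the uppercased key before indexing into the enum, so unknown
// values yield null instead of undefined sneaking through a cast.
function parseVehicleType(raw: string | null | undefined): VehicleType | null {
    const key = raw?.toUpperCase();
    if (key !== undefined && key in VehicleType) {
        return VehicleType[key as keyof typeof VehicleType];
    }
    return null;
}
```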
In `@apps/dataloader/src/services/sync-repository.service.ts`:
- Around line 304-324: The current delete builds an OR-of-ANDs via
transaction.deleteFrom("PlatformsOnRoutes") ...
where(expressionBuilder.or(...chunk.map(...))) which will be slow for large
chunks; replace this pattern with a set-based delete using a CTE/VALUES (or a
temp table) that lists (platformId, routeId) and then deletes by joining that
CTE to PlatformsOnRoutes (e.g. DELETE ... USING (VALUES ...) AS
v(platformId,routeId) WHERE PlatformsOnRoutes.platformId = v.platformId AND
PlatformsOnRoutes.routeId = v.routeId). Update the code that constructs the
delete (the block using transaction.deleteFrom("PlatformsOnRoutes") and
expressionBuilder/or with chunk.map) to generate the CTE/VALUES payload from
chunk and execute a single join-based delete for much better performance on
batches up to 500 rows.
In `@apps/dataloader/src/services/sync.service.ts`:
- Around line 103-113: The current createSnapshot method merges stopSnapshot and
gtfsSnapshot with object spread which can silently overwrite keys; replace the
spread merge in createSnapshot so you explicitly construct and return a
SyncSnapshot by enumerating and assigning each expected property from
stopSnapshot and gtfsSnapshot (or perform a controlled, type-safe merge that
reconciles overlapping keys), using importService.getStopSnapshot and
gtfsService.getGtfsSnapshot values (e.g., pick/rename conflicting fields or
merge nested structures) rather than relying on ...stopSnapshot,
...gtfsSnapshot.
In `@apps/dataloader/src/types/sync.types.ts`:
- Around line 74-82: Replace the broad SyncRunResult type with a discriminated
union so that status discriminates the payload: define one variant for status:
"success" that includes trigger: SyncTrigger, startedAt, finishedAt, durationMs
and a required counts: SyncCounts (no reason), and another variant for status:
"skipped" that includes trigger, startedAt, finishedAt, durationMs and a
required reason: string (no counts); update any code that constructs or narrows
SyncRunResult to use the status discriminator (e.g., switch/if on result.status)
so TypeScript can infer the correct fields and adjust any references to
counts/reason accordingly.
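The discriminated union can be sketched like this; the concrete fields of `SyncCounts` and the `SyncTrigger` values are assumptions for illustration:

```typescript
type SyncTrigger = "cron" | "manual";

type SyncRunBase = {
    trigger: SyncTrigger;
    startedAt: Date;
    finishedAt: Date;
    durationMs: number;
};

type SyncCounts = { stops: number; routes: number };

// "status" discriminates the payload: success carries counts, skipped a reason.
type SyncRunResult =
    | (SyncRunBase & { status: "success"; counts: SyncCounts })
    | (SyncRunBase & { status: "skipped"; reason: string });

function describeRun(result: SyncRunResult): string {
    switch (result.status) {
        case "success":
            return `synced ${result.counts.stops} stops`; // counts is narrowed here
        case "skipped":
            return `skipped: ${result.reason}`; // reason is narrowed here
    }
}
```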
In `@apps/dataloader/src/utils/fetch.utils.ts`:
- Around line 17-20: The current fetch call overwrites any caller-provided
init.signal with AbortSignal.timeout(timeoutMs), so merge the signals instead of
overwriting: if AbortSignal.any is available use AbortSignal.any([init.signal,
AbortSignal.timeout(timeoutMs)]) to create a combined signal; otherwise
implement a fallback by creating a local AbortController that listens to the
caller's init.signal and calls controller.abort() on either the caller signal or
a timeout (using AbortSignal.timeout or setTimeout), then pass controller.signal
to fetch. Update the fetch invocation that uses init, signal,
AbortSignal.timeout, and timeoutMs accordingly or document the limitation if you
choose not to merge.
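A sketch of the signal-merging helper (name illustrative); `AbortSignal.any` is available from Node 20.3, with a manual fallback for older runtimes:

```typescript
// Combine a caller-provided signal with another (e.g. a timeout signal)
// so that either one aborting aborts the merged signal.
function mergeSignals(a: AbortSignal, b: AbortSignal): AbortSignal {
    if (typeof AbortSignal.any === "function") {
        return AbortSignal.any([a, b]);
    }
    // Fallback: forward the first abort from either source.
    const controller = new AbortController();
    const forward = (source: AbortSignal) => {
        if (source.aborted) controller.abort(source.reason);
        else source.addEventListener(
            "abort",
            () => controller.abort(source.reason),
            { once: true },
        );
    };
    forward(a);
    forward(b);
    return controller.signal;
}
```

The fetch call would then pass `mergeSignals(init.signal ?? new AbortController().signal, AbortSignal.timeout(timeoutMs))`, or only the timeout signal when the caller supplied none.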
In `@apps/dataloader/src/utils/logger.spec.ts`:
- Around line 17-29: Wrap the test body in a try/finally so the transport is
always restored: set the TestTransport via logger.setTransport(transport) before
the try, perform logger.info("hello", { foo: "bar" }) and await logger.flush()
inside the try, and call logger.setTransport(null) in the finally block; keep
references to TestTransport, logger.setTransport, logger.info, and logger.flush
so cleanup happens even if the await throws.
In `@lib/shared/src/env.ts`:
- Around line 13-16: The schema redisEnvSchema currently allows REDIS_HOST and
REDIS_PORT to be set independently which can lead to invalid partial configs;
update redisEnvSchema to add a .refine(...) check that enforces both REDIS_HOST
and REDIS_PORT are either both present or both absent (inspect values of
REDIS_HOST and REDIS_PORT inside the predicate) and provide a clear message for
the failure case so consumers know they must supply both values together.
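The predicate itself — the function you would hand to `.refine(...)` — can be written and tested standalone (name illustrative):

```typescript
// Both-or-neither check for paired env vars: REDIS_HOST and REDIS_PORT
// must be supplied together or not at all.
function redisPairIsConsistent(env: {
    REDIS_HOST?: string;
    REDIS_PORT?: string;
}): boolean {
    const hasHost = env.REDIS_HOST !== undefined && env.REDIS_HOST !== "";
    const hasPort = env.REDIS_PORT !== undefined && env.REDIS_PORT !== "";
    return hasHost === hasPort;
}
```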
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 5ace93df-749b-4076-96cc-05810beb449f
⛔ Files ignored due to path filters (17)
- `apps/database/dist/index.d.ts` is excluded by `!**/dist/**`
- `apps/database/dist/index.d.ts.map` is excluded by `!**/dist/**`, `!**/*.map`
- `apps/database/dist/index.js` is excluded by `!**/dist/**`
- `apps/database/dist/index.js.map` is excluded by `!**/dist/**`, `!**/*.map`
- `lib/shared/dist/database.d.ts` is excluded by `!**/dist/**`
- `lib/shared/dist/database.d.ts.map` is excluded by `!**/dist/**`, `!**/*.map`
- `lib/shared/dist/database.js` is excluded by `!**/dist/**`
- `lib/shared/dist/database.js.map` is excluded by `!**/dist/**`, `!**/*.map`
- `lib/shared/dist/env.d.ts` is excluded by `!**/dist/**`
- `lib/shared/dist/env.d.ts.map` is excluded by `!**/dist/**`, `!**/*.map`
- `lib/shared/dist/env.js` is excluded by `!**/dist/**`
- `lib/shared/dist/env.js.map` is excluded by `!**/dist/**`, `!**/*.map`
- `lib/shared/dist/index.d.ts` is excluded by `!**/dist/**`
- `lib/shared/dist/index.d.ts.map` is excluded by `!**/dist/**`, `!**/*.map`
- `lib/shared/dist/index.js` is excluded by `!**/dist/**`
- `lib/shared/dist/index.js.map` is excluded by `!**/dist/**`, `!**/*.map`
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (104)
- .github/workflows/backend-ci.yaml
- .github/workflows/check-format.yaml
- .prettierignore
- apps/backend/.env.local.example
- apps/backend/.eslintignore
- apps/backend/.eslintrc.js
- apps/backend/e2e/tests/import.e2e-spec.ts
- apps/backend/e2e/tests/platform.e2e-spec.ts
- apps/backend/e2e/tests/stop.e2e-spec.ts
- apps/backend/package.json
- apps/backend/src/app.module.ts
- apps/backend/src/common/graphql-error.ts
- apps/backend/src/config/cache-module.config.ts
- apps/backend/src/constants/cache.ts
- apps/backend/src/main.ts
- apps/backend/src/modules/database/database.module.ts
- apps/backend/src/modules/database/database.service.ts
- apps/backend/src/modules/dataloader/platforms-by-stop.loader.ts
- apps/backend/src/modules/dataloader/routes-by-platform.loader.ts
- apps/backend/src/modules/dataloader/stop-by-platform.loader.ts
- apps/backend/src/modules/departure/departure-board.service.ts
- apps/backend/src/modules/departure/departure-v1.service.ts
- apps/backend/src/modules/departure/departure-v2.service.ts
- apps/backend/src/modules/departure/departure.controller.ts
- apps/backend/src/modules/departure/departure.module.ts
- apps/backend/src/modules/departure/departure.resolver.ts
- apps/backend/src/modules/golemio/golemio.module.ts
- apps/backend/src/modules/golemio/golemio.service.ts
- apps/backend/src/modules/hello/__test__/hello.resolver.spec.ts
- apps/backend/src/modules/hello/hello.resolver.ts
- apps/backend/src/modules/import/gtfs.service.ts
- apps/backend/src/modules/import/import.controller.ts
- apps/backend/src/modules/import/import.module.ts
- apps/backend/src/modules/import/import.service.ts
- apps/backend/src/modules/infotexts/__test__/infotext.resolver.spec.ts
- apps/backend/src/modules/infotexts/infotexts.module.ts
- apps/backend/src/modules/infotexts/infotexts.resolver.ts
- apps/backend/src/modules/infotexts/infotexts.service.ts
- apps/backend/src/modules/log/graphql-query-logging.plugin.ts
- apps/backend/src/modules/log/log.module.ts
- apps/backend/src/modules/log/log.service.spec.ts
- apps/backend/src/modules/log/log.service.ts
- apps/backend/src/modules/platform/platform.controller.ts
- apps/backend/src/modules/platform/platform.module.ts
- apps/backend/src/modules/platform/platform.resolver.ts
- apps/backend/src/modules/platform/platform.service.ts
- apps/backend/src/modules/platform/schema.graphql
- apps/backend/src/modules/prisma/prisma.module.ts
- apps/backend/src/modules/prisma/prisma.service.ts
- apps/backend/src/modules/route/route.resolver.ts
- apps/backend/src/modules/route/route.service.ts
- apps/backend/src/modules/status/status.controller.ts
- apps/backend/src/modules/status/status.service.ts
- apps/backend/src/modules/stop/stop-data.module.ts
- apps/backend/src/modules/stop/stop.controller.ts
- apps/backend/src/modules/stop/stop.module.ts
- apps/backend/src/modules/stop/stop.resolver.ts
- apps/backend/src/modules/stop/stop.service.ts
- apps/backend/src/schema/env.schema.ts
- apps/backend/src/scripts/generate-types.ts
- apps/backend/src/types/parent.ts
- apps/backend/src/utils/array.spec.ts
- apps/backend/src/utils/cache-batch.spec.ts
- apps/backend/src/utils/cache-batch.ts
- apps/backend/src/utils/delay.ts
- apps/backend/tsconfig.build.json
- apps/database/index.ts
- apps/database/migrate.ts
- apps/database/migrations/0001_initial_schema.ts
- apps/database/package.json
- apps/database/seed.ts
- apps/database/seeds/platforms-on-routes.json
- apps/database/seeds/platforms.json
- apps/database/seeds/routes.json
- apps/database/seeds/stops.json
- apps/database/tsconfig.json
- apps/dataloader/package.json
- apps/dataloader/src/app.ts
- apps/dataloader/src/config/env.ts
- apps/dataloader/src/schema/pid-stops.schema.ts
- apps/dataloader/src/services/cron.service.ts
- apps/dataloader/src/services/database-log-store.service.ts
- apps/dataloader/src/services/database.service.ts
- apps/dataloader/src/services/gtfs.service.ts
- apps/dataloader/src/services/pid-import.service.ts
- apps/dataloader/src/services/sync-repository.service.ts
- apps/dataloader/src/services/sync-snapshot-validator.service.ts
- apps/dataloader/src/services/sync-snapshot-validator.spec.ts
- apps/dataloader/src/services/sync.service.ts
- apps/dataloader/src/types/sync.types.ts
- apps/dataloader/src/utils/csv.utils.ts
- apps/dataloader/src/utils/fetch.utils.ts
- apps/dataloader/src/utils/logger.spec.ts
- apps/dataloader/src/utils/logger.ts
- apps/dataloader/tsconfig.json
- biome.json
- compose.yaml
- lib/shared/package.json
- lib/shared/src/database.ts
- lib/shared/src/env.ts
- lib/shared/src/index.ts
- lib/shared/tsconfig.json
- package.json
- pnpm-workspace.yaml
💤 Files with no reviewable changes (10)
- apps/backend/.eslintignore
- apps/backend/.env.local.example
- apps/backend/src/modules/import/import.module.ts
- apps/backend/src/modules/prisma/prisma.module.ts
- apps/backend/e2e/tests/import.e2e-spec.ts
- apps/backend/.eslintrc.js
- apps/backend/src/modules/import/gtfs.service.ts
- apps/backend/src/modules/import/import.controller.ts
- apps/backend/src/modules/prisma/prisma.service.ts
- apps/backend/src/modules/import/import.service.ts
✅ Files skipped from review due to trivial changes (21)
- apps/backend/tsconfig.build.json
- apps/backend/src/main.ts
- pnpm-workspace.yaml
- .prettierignore
- apps/backend/src/modules/hello/hello.resolver.ts
- apps/backend/src/modules/departure/departure.controller.ts
- apps/backend/src/common/graphql-error.ts
- apps/backend/src/utils/array.spec.ts
- apps/backend/src/scripts/generate-types.ts
- apps/backend/src/modules/status/status.controller.ts
- apps/backend/src/modules/platform/schema.graphql
- apps/backend/src/modules/hello/__test__/hello.resolver.spec.ts
- apps/backend/src/utils/delay.ts
- apps/backend/src/modules/platform/platform.controller.ts
- apps/database/tsconfig.json
- lib/shared/package.json
- lib/shared/tsconfig.json
- apps/database/package.json
- apps/dataloader/tsconfig.json
- apps/dataloader/package.json
- biome.json
🚧 Files skipped from review as they are similar to previous changes (3)
- package.json
- apps/backend/src/modules/route/route.service.ts
- apps/dataloader/src/app.ts
```diff
-"prisma:migrate:deploy": "dotenv -e .env.local -- prisma migrate deploy",
-"prisma:push": "dotenv -e .env.local -- prisma db push",
-"prisma:seed": "prisma db seed"
 "test:e2e": "jest --config ./e2e/jest-e2e.json"
```
test:e2e should prepare workspace deps too.
Line 16 now builds @metro-now/shared and @metro-now/database before Jest, but Line 20 still skips that step. After a clean checkout—or after changing either workspace package—pnpm test:e2e can still run against missing or stale output even though pnpm test passes.
🔁 Suggested fix

```diff
-"test:e2e": "jest --config ./e2e/jest-e2e.json"
+"test:e2e": "pnpm --filter @metro-now/shared build && pnpm --filter @metro-now/database build && jest --config ./e2e/jest-e2e.json"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-"test:e2e": "jest --config ./e2e/jest-e2e.json"
+"test:e2e": "pnpm --filter @metro-now/shared build && pnpm --filter @metro-now/database build && jest --config ./e2e/jest-e2e.json"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/backend/package.json` at line 20, The test:e2e npm script is missing the
workspace-prebuild step that ensures `@metro-now/shared` and `@metro-now/database`
are built; update the "test:e2e" script in package.json so it runs the same
prebuild command used by the regular test flow (build the workspace packages
`@metro-now/shared` and `@metro-now/database`) before invoking Jest (i.e., prepend
or incorporate the workspace build step that exists for the other test script so
test:e2e always builds those packages first).
```diff
-"collectCoverageFrom": [
-    "**/*.(t|j)s"
-],
+"collectCoverageFrom": ["**/*.(t|j)s"],
```
🧩 Analysis chain
🌐 Web query:
In Jest 29 / micromatch glob syntax, does `collectCoverageFrom: ["**/*.(t|j)s"]` match both `.ts` and `.js` files, or should it be written as `["**/*.{ts,js}"]` (or `["**/*.+(ts|js)"]`) instead?
💡 Result:
No, `collectCoverageFrom: ["**/*.(t|j)s"]` does NOT match both `.ts` and `.js` files in Jest 29 using micromatch glob syntax. The pattern `.(t|j)s` looks like the extglob syntax `@(t|j)` or `+(t|j)` for alternatives, but a plain `(t|j)` without a preceding extglob quantifier (`?`, `+`, `!`, `@`) is treated as a literal group or regex character class, not as alternatives. It would match files ending with `.` followed by `t`, `|`, or `j` and then `s` (like `.ts`, `.|s`, `.js`), but not properly `.ts` and `.js` as intended. Official Jest docs show the correct brace syntax: `**/*.{js,jsx}` or `**/*.[jt]s?(x)` for JS/TS files [1,3,4]. Micromatch supports brace expansion `{ts,js}`, which expands to `**/*.ts` or `**/*.js` [6,7,13]. Use `["**/*.{ts,js}"]`, `["**/*.+(ts|js)"]` (extglob), or `["**/*.[jt]s"]` instead. To match exactly `.ts` and `.js`, `**/*.{ts,js}` is recommended and used in Jest examples.
Citations:
- 1: https://jestjs.io/docs/29.6/configuration
- 2: https://jestjs.io/docs/configuration
- 3: http://jestjs.io/docs/29.7/configuration
- 4: https://github.com/micromatch/micromatch
- 5: https://github.com/micromatch/micromatch/blob/master/.verb.md
- 6: https://www.pkgpulse.com/blog/picomatch-vs-micromatch-vs-minimatch-glob-pattern-matching-nodejs-2026
Fix the coverage glob pattern to properly match TypeScript and JavaScript files.
The pattern `**/*.(t|j)s` uses parentheses, which micromatch treats literally rather than as alternatives. This will cause Jest to silently miss most source files during coverage collection.
Corrected pattern

```diff
-"collectCoverageFrom": ["**/*.(t|j)s"],
+"collectCoverageFrom": ["**/*.{ts,js}"],
```

📝 Committable suggestion
```diff
-"collectCoverageFrom": ["**/*.(t|j)s"],
+"collectCoverageFrom": ["**/*.{ts,js}"],
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/backend/package.json` at line 73, The Jest coverage glob under the
"collectCoverageFrom" key currently uses the literal pattern "**/*.(t|j)s" which
doesn't match files; update the value for "collectCoverageFrom" to a proper glob
that matches .ts and .js files (for example use a brace expansion like
"**/*.{ts,js}") so Jest will collect coverage from TypeScript and JavaScript
sources; locate the string "collectCoverageFrom" and replace the pattern
"**/*.(t|j)s" with the corrected pattern.
```typescript
async resolvePlatformIds({
    platformIds,
    stopIds,
    metroOnly,
    limit = 100,
}: {
    platformIds: readonly string[];
    stopIds: readonly string[];
    metroOnly?: boolean;
    limit?: number;
}): Promise<string[]> {
    const directPlatforms =
        platformIds.length === 0
            ? []
            : await this.database.db
                  .selectFrom("Platform")
                  .select("id")
                  .where("id", "in", [...platformIds])
                  .$if(Boolean(metroOnly), (query) =>
                      query.where("isMetro", "=", true),
                  )
                  .execute();
    const stopPlatforms =
        stopIds.length === 0
            ? []
            : await this.database.db
                  .selectFrom("Platform")
                  .select("id")
                  .where("stopId", "in", [...stopIds])
                  .$if(Boolean(metroOnly), (query) =>
                      query.where("isMetro", "=", true),
                  )
                  .execute();

    return uniqueSortedStrings([
        ...directPlatforms.map((platform) => platform.id),
        ...stopPlatforms.map((platform) => platform.id),
    ]).slice(0, limit);
```
Cap and de-duplicate IDs before querying Platform.
The limit is only applied after both IN (...) queries. A caller can still send thousands of platformIds/stopIds, forcing large parameter lists and scans even though only the first 100 resolved IDs are ever used downstream.
♻️ Suggested guardrail

```diff
 }): Promise<string[]> {
+    const limitedPlatformIds = uniqueSortedStrings([...platformIds]).slice(
+        0,
+        limit,
+    );
+    const limitedStopIds = uniqueSortedStrings([...stopIds]).slice(
+        0,
+        limit,
+    );
+
     const directPlatforms =
-        platformIds.length === 0
+        limitedPlatformIds.length === 0
             ? []
             : await this.database.db
                   .selectFrom("Platform")
                   .select("id")
-                  .where("id", "in", [...platformIds])
+                  .where("id", "in", limitedPlatformIds)
                   .$if(Boolean(metroOnly), (query) =>
                       query.where("isMetro", "=", true),
                   )
                   .execute();
     const stopPlatforms =
-        stopIds.length === 0
+        limitedStopIds.length === 0
             ? []
             : await this.database.db
                   .selectFrom("Platform")
                   .select("id")
-                  .where("stopId", "in", [...stopIds])
+                  .where("stopId", "in", limitedStopIds)
                   .$if(Boolean(metroOnly), (query) =>
                       query.where("isMetro", "=", true),
                   )
                   .execute();
```
.execute();🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/backend/src/modules/departure/departure-board.service.ts` around lines
20 - 57, In resolvePlatformIds, avoid passing huge parameter lists to the DB by
de-duplicating and capping platformIds and stopIds before running the two
Platform queries: create deduped arrays (e.g. via new Set) for platformIds and
stopIds and then slice them to a sensible maximum (based on limit) before using
them in the .where("id", "in", ...) and .where("stopId", "in", ...) calls; keep
references to the existing variables/directPlatforms/stopPlatforms and retain
the final uniqueSortedStrings([...]).slice(0, limit) behavior but ensure the DB
queries only receive the deduped-and-sliced input arrays.
```typescript
const resolvedPlatformIds =
    await this.departureBoardService.resolvePlatformIds({
        platformIds,
        stopIds,
    });
const normalizedLimit = Math.max(1, Math.min(limit, 100));
const json = await this.departureBoardService.fetchDepartureBoard({
    platformIds: resolvedPlatformIds,
    params: {
        includeMetroTrains: true,
        limit: normalizedLimit,
        minutesBefore: 1,
        mode: "departures",
        order: "real",
        skip: "canceled",
    },
});
```
Return early when no platform IDs resolve.
Unlike DepartureServiceV2, this path still calls fetchDepartureBoard() with platformIds: []. Unknown stopIds/platformIds should return [], not make an empty upstream request.
Suggested change

```diff
 const resolvedPlatformIds =
     await this.departureBoardService.resolvePlatformIds({
         platformIds,
         stopIds,
     });
+if (resolvedPlatformIds.length === 0) {
+    return [];
+}
 const normalizedLimit = Math.max(1, Math.min(limit, 100));
 const json = await this.departureBoardService.fetchDepartureBoard({
     platformIds: resolvedPlatformIds,
```

📝 Committable suggestion
```diff
 const resolvedPlatformIds =
     await this.departureBoardService.resolvePlatformIds({
         platformIds,
         stopIds,
     });
+if (resolvedPlatformIds.length === 0) {
+    return [];
+}
 const normalizedLimit = Math.max(1, Math.min(limit, 100));
 const json = await this.departureBoardService.fetchDepartureBoard({
     platformIds: resolvedPlatformIds,
     params: {
         includeMetroTrains: true,
         limit: normalizedLimit,
         minutesBefore: 1,
         mode: "departures",
         order: "real",
         skip: "canceled",
     },
 });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/backend/src/modules/departure/departure.resolver.ts` around lines 36 -
52, If resolvePlatformIds(...) returns an empty array, return an empty list
immediately instead of calling fetchDepartureBoard with platformIds: []; update
the resolver method (the code that calls
this.departureBoardService.resolvePlatformIds and then
this.departureBoardService.fetchDepartureBoard) to check if
resolvedPlatformIds.length === 0 and return [] (or the appropriate empty
response shape) early; keep the normalizedLimit and other logic for non-empty
results and only call fetchDepartureBoard when there are resolvedPlatformIds.
@Injectable()
export class LogService implements OnModuleDestroy {
    private pendingWrite: Promise<void> = Promise.resolve();

    constructor(private readonly database: DatabaseService) {}

    logGraphQLRequest({
        durationMs,
        errors = [],
        requestContext,
    }: {
        durationMs: number;
        errors?: readonly GraphQLError[];
        requestContext: GraphQLRequestContextWillSendResponse<BaseContext>;
    }): void {
        const level = errors.length > 0 ? "error" : "info";
        const statusCode = requestContext.response.http.status ?? 200;
        const logEntry: NewLog = {
            id: randomUUID(),
            service: SERVICE_NAME,
            level,
            message: "GraphQL request completed",
            context: sanitizeContext({
                operationName:
                    requestContext.operationName ??
                    requestContext.request.operationName ??
                    null,
                operationType: requestContext.operation?.operation ?? null,
                query:
                    requestContext.source ??
                    requestContext.request.query ??
                    null,
                variables: requestContext.request.variables ?? null,
                extensions: requestContext.request.extensions ?? null,
                queryHash: requestContext.queryHash ?? null,
                durationMs,
                statusCode,
                requestIsBatched: requestContext.requestIsBatched,
                httpMethod: requestContext.request.http?.method ?? null,
                search: requestContext.request.http?.search ?? null,
                userAgent:
                    requestContext.request.http?.headers.get("user-agent") ??
                    null,
                responseKind: requestContext.response.body.kind,
                resultKeys: getResultKeys(requestContext.response.body),
                hasErrors: errors.length > 0,
                errorCount: errors.length,
                errors: errors.map(serializeGraphQLError),
            }),
            createdAt: new Date(),
        };

        this.pendingWrite = this.pendingWrite
            .then(async () => {
                await this.database.db
                    .insertInto("Log")
                    .values(logEntry)
                    .execute();
            })
            .catch((error) => {
                console.error("Failed to write backend GraphQL log", {
                    error:
                        error instanceof Error ? error.message : String(error),
                });
            });
This turns request logging into a single-writer DB queue.
Every GraphQL request appends one more insert behind pendingWrite, so sustained traffic can build an unbounded in-memory backlog while writes drain one at a time. If persistent request logging is intentionally coming back, it needs batching or a bounded/drop strategy instead of a single promise chain.
Based on learnings, the logger service was intentionally removed from the codebase as it was considered poor quality.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/backend/src/modules/log/log.service.ts` around lines 47 - 111, The
current logGraphQLRequest implementation chains writes on the single Promise
pendingWrite, causing an unbounded in-memory backlog under load; replace this
single-writer chain with a bounded, concurrency-controlled approach: remove the
append-to-pendingWrite pattern and instead implement a limited-concurrency
writer (e.g., a semaphore/worker pool) or a bounded in-memory buffer with
periodic batching and overflow-dropping for entries, and have the writer flush
using database.db.insertInto("Log").values(...).execute() in batches or with up
to N concurrent requests; ensure the implementation references and replaces
pendingWrite and logGraphQLRequest, enforces a max buffer size or concurrency
limit, and drops or stalls new logs (not grow memory) when the limit is reached.
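The bounded/drop strategy suggested above can be sketched as follows. This is a hypothetical illustration, not code from the PR: `BoundedLogBuffer`, its size limit, and the `flush` callback (standing in for a batched `insertInto("Log").values(batch).execute()`) are all assumptions.

```typescript
// Hypothetical sketch of a bounded log queue with batched flushing; names and
// limits are illustrative, not from the PR. `flush` stands in for a batched
// Kysely insert such as db.insertInto("Log").values(batch).execute().
type LogEntry = { message: string; createdAt: Date };

class BoundedLogBuffer {
    private buffer: LogEntry[] = [];
    private dropped = 0;

    constructor(
        private readonly maxSize: number,
        private readonly flush: (batch: LogEntry[]) => Promise<void>,
    ) {}

    // Enqueue an entry; drop it (and count the drop) once the buffer is full,
    // so memory stays bounded under sustained traffic.
    enqueue(entry: LogEntry): boolean {
        if (this.buffer.length >= this.maxSize) {
            this.dropped += 1;
            return false;
        }
        this.buffer.push(entry);
        return true;
    }

    // Drain everything currently buffered in a single batched write.
    async drain(): Promise<number> {
        const batch = this.buffer.splice(0, this.buffer.length);
        if (batch.length > 0) {
            await this.flush(batch);
        }
        return batch.length;
    }

    get droppedCount(): number {
        return this.dropped;
    }
}
```

A periodic timer (or the `OnModuleDestroy` hook) would call `drain()`, so the service holds at most one bounded batch in memory instead of an ever-growing promise chain.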
default:
    console.log("Usage: ts-node migrate.ts <up|down>");
    console.log(" up / deploy / latest - Apply all pending migrations");
    console.log(" down / rollback - Revert the last migration");
Update the usage banner to match the accepted commands.
The switch accepts deploy, latest, and rollback, but the help text still advertises only <up|down>.
Suggested change
- console.log("Usage: ts-node migrate.ts <up|down>");
+ console.log(
+ "Usage: ts-node migrate.ts <up|down|deploy|latest|rollback>",
+ );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/database/migrate.ts` around lines 82 - 85, The usage banner in the
default case is out of sync with the command switch: update the console.log help
lines in the default branch so they list the actual accepted commands (deploy,
latest for apply and rollback for revert) instead of "<up|down>"; locate the
default case in migrate.ts and change the two console.log strings that print the
usage/description to mention "ts-node migrate.ts <deploy|latest|rollback>" (or
similar wording) and map deploy/latest to apply/pending migrations and rollback
to revert the last migration so the help text matches the switch logic.
if (routes.length > 0) {
    const routeValues: NewRoute[] = routes.map((route) => ({
        id: route.id,
        name: route.name,
        vehicleType: null,
        isNight: null,
        createdAt: timestamp,
        updatedAt: timestamp,
    }));
Preserve isNight from the seed payload.
Route already parses isNight, but the insert hardcodes it to null, so every seeded route loses that attribute.
Suggested change
const routeValues: NewRoute[] = routes.map((route) => ({
id: route.id,
name: route.name,
vehicleType: null,
- isNight: null,
+ isNight: route.isNight,
createdAt: timestamp,
updatedAt: timestamp,
}));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/database/seed.ts` around lines 98 - 106, The seed currently maps routes
to NewRoute objects with isNight set to null, which discards the parsed
Route.isNight value; update the mapping that builds routeValues (the routes.map
callback creating NewRoute entries) to assign isNight: route.isNight (or
route.isNight ?? null if you want to keep nullable typing) instead of null so
the seeded rows preserve the original isNight attribute.
const filePath = resolve(process.cwd(), relativePath);

if (existsSync(filePath)) {
    process.loadEnvFile(filePath);
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Check Node.js version requirements in the project
# Check package.json engines field
cat package.json | jq '.engines.node // "not specified"'
# Check .nvmrc or .node-version if they exist
cat .nvmrc 2>/dev/null || cat .node-version 2>/dev/null || echo "No version file found"
# Check Dockerfile for Node version
rg -n "FROM node:" --type=dockerfile 2>/dev/null || rg -n "FROM node:" -g 'Dockerfile*'Repository: krystxf/metro-now
Length of output: 117
🌐 Web query:
Node.js process.loadEnvFile API introduction version
💡 Result:
The Node.js process.loadEnvFile API was introduced in Node.js v21.7.0 and v20.12.0. It was initially experimental and became stable in v24.10.0.
Citations:
- 1: https://nodejs.org/api/process.html
- 2: https://nodejs.org/en/blog/release/v21.7.0
- 3: v21.7.0 proposal nodejs/node#51932
- 4: https://nodejs.org/en/blog/release/v20.12.0
🏁 Script executed:
# Check for additional version specifications in CI/CD and deployment configs
fd -type f \( -name ".github" -o -name ".gitlab-ci.yml" -o -name ".circleci" -o -name "renovate.json" -o -name ".tool-versions" \)
# Search for any .yml or .yaml files that might contain Node version specs
rg -g '*.yml' -g '*.yaml' -i 'node.*version|node.*20' -n
# Check package-lock.json or pnpm-lock.yaml for Node version hints
fd -name "package-lock.json" -o -name "pnpm-lock.yaml" -o -name "yarn.lock" | head -5Repository: krystxf/metro-now
Length of output: 3859
🏁 Script executed:
# Check root package.json directly for engines field
if [ -f "package.json" ]; then
echo "=== Root package.json engines field ==="
cat package.json | jq '.engines // "engines field not found"'
fi
# Also check if there's any explicit minimum version constraint elsewhere
echo -e "\n=== Checking for Node version enforcement ==="
rg -i "20\.12|20\.11|engines.*node" -g "package.json" -g "*.md" -A 2Repository: krystxf/metro-now
Length of output: 173
Add explicit Node.js version constraint to prevent runtime errors.
process.loadEnvFile requires Node.js 20.12.0 or later. The project currently has no explicit engines field in package.json, and dependency constraints in pnpm-lock.yaml allow Node.js 20.11.1 (e.g., ^20.11.1), which lacks this API. Add an engines field to package.json specifying node: ">=20.12.0" to enforce the minimum required version.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/dataloader/src/config/env.ts` at line 35, Add an explicit Node.js engine
constraint to package.json to ensure runtime support for process.loadEnvFile
(requires Node >=20.12.0): open package.json and add or update the "engines"
field to "node": ">=20.12.0" so installs/resolutions and CI enforce the minimum
version used by code paths like process.loadEnvFile in
apps/dataloader/src/config/env.ts; after changing package.json, regenerate
lockfiles/CI config as needed to ensure environments meet this constraint.
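Alongside (or instead of) raising the engines floor, the call site could feature-detect the API. A hedged sketch follows; the fallback parser is a simplified stand-in, not a full dotenv implementation, and `loadEnv` is a hypothetical helper, not code from the PR:

```typescript
import { readFileSync } from "node:fs";

// Minimal stand-in parser for KEY=VALUE lines; skips comments and blanks.
// Simplified for illustration -- it does not handle quoting or escapes.
function parseEnvFile(contents: string): Record<string, string> {
    const result: Record<string, string> = {};
    for (const line of contents.split("\n")) {
        const trimmed = line.trim();
        if (trimmed === "" || trimmed.startsWith("#")) continue;
        const eq = trimmed.indexOf("=");
        if (eq === -1) continue;
        result[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
    }
    return result;
}

function loadEnv(filePath: string): void {
    // process.loadEnvFile only exists on Node >= 20.12.0 / 21.7.0.
    const loadEnvFile = (
        process as unknown as { loadEnvFile?: (path: string) => void }
    ).loadEnvFile;
    if (typeof loadEnvFile === "function") {
        loadEnvFile.call(process, filePath);
        return;
    }
    // Fallback for older runtimes; variables already set in the environment win.
    const parsed = parseEnvFile(readFileSync(filePath, "utf8"));
    for (const [key, value] of Object.entries(parsed)) {
        process.env[key] ??= value;
    }
}
```

An explicit `engines` constraint is still the cleaner fix; the guard only avoids a hard crash on Node 20.11.x.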
const routeId = String(line.id).trim();
const routeName = line.name.trim();
Potential runtime error if line.name is not a string.
line.name.trim() will throw if line.name is undefined or not a string. Consider adding a guard similar to how line.id is converted with String().
🛡️ Defensive fix
const routeId = String(line.id).trim();
-const routeName = line.name.trim();
+const routeName = String(line.name ?? "").trim();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/dataloader/src/services/pid-import.service.ts` around lines 100 - 101,
The code risks a runtime error when computing routeName because it calls
line.name.trim() without ensuring line.name is a string; update the logic that
sets routeName (next to routeId) to coerce or guard line.name similarly to
routeId—e.g. use String(line.name).trim() or check for null/undefined and
default to ''—so that routeName assignment never calls .trim() on a non-string
value.
    );
}

return `postgresql://${env.POSTGRES_USER}:${env.POSTGRES_PASSWORD}@${env.DB_HOST}:${env.DB_PORT}/${env.POSTGRES_DB}?schema=${env.DB_SCHEMA}`;
URL-encode credentials to handle special characters.
If POSTGRES_USER or POSTGRES_PASSWORD contains special characters (e.g., @, :, /, #), the constructed URL will be malformed. Use encodeURIComponent() on credentials.
🛡️ Proposed fix
- return `postgresql://${env.POSTGRES_USER}:${env.POSTGRES_PASSWORD}@${env.DB_HOST}:${env.DB_PORT}/${env.POSTGRES_DB}?schema=${env.DB_SCHEMA}`;
+ const user = encodeURIComponent(env.POSTGRES_USER!);
+ const password = encodeURIComponent(env.POSTGRES_PASSWORD!);
+ const schema = encodeURIComponent(env.DB_SCHEMA!);
+ return `postgresql://${user}:${password}@${env.DB_HOST}:${env.DB_PORT}/${env.POSTGRES_DB}?schema=${schema}`;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/shared/src/database.ts` at line 77, The connection-string builder
currently interpolates raw POSTGRES_USER and POSTGRES_PASSWORD into the URL (the
return statement building the postgresql://... string) which breaks when
credentials contain special characters; update that code to pass
env.POSTGRES_USER and env.POSTGRES_PASSWORD through encodeURIComponent() before
interpolation so both username and password are URL-encoded when constructing
the connection string (leave host, port, DB name and schema unchanged).
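The encoding fix can be illustrated in isolation. In this sketch the env field names are simplified assumptions, not the exported shape of `lib/shared/src/database.ts`:

```typescript
// Illustrative sketch: URL-encode credentials before interpolating them into a
// Postgres connection string. Field names are simplified for the example.
interface DbEnv {
    user: string;
    password: string;
    host: string;
    port: number;
    database: string;
    schema: string;
}

function buildConnectionString(env: DbEnv): string {
    const user = encodeURIComponent(env.user);
    const password = encodeURIComponent(env.password);
    const schema = encodeURIComponent(env.schema);
    return `postgresql://${user}:${password}@${env.host}:${env.port}/${env.database}?schema=${schema}`;
}
```

With a password like `p@ss:w/rd`, the naive template produces a malformed URL (extra `@` and `:` separators), while the encoded form round-trips cleanly.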
Actionable comments posted: 1
♻️ Duplicate comments (1)
apps/database/seed.ts (1)
108-113: ⚠️ Potential issue | 🟡 Minor

Preserve `isNight` from the seed payload.

Line 112 hardcodes `isNight` to `null`, so seeded routes lose the parsed value.

Suggested fix

await insertInBatches(
    transaction,
    "Route",
    routes.map((route) => ({
        id: route.id,
        name: route.name,
        vehicleType: null,
-       isNight: null,
+       isNight: route.isNight,
        createdAt: timestamp,
        updatedAt: timestamp,
    })),
);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/database/seed.ts` around lines 108 - 113, the seed mapping for routes sets isNight to null, losing the parsed value; update the routes.map block (the mapping that builds objects with id, name, vehicleType, isNight, createdAt) to preserve the incoming value by assigning isNight: route.isNight (or route.isNight ?? null if you want an explicit null fallback), so seeded routes retain their isNight flag instead of always being null.
🧹 Nitpick comments (1)
apps/mobile/metro-now/metro-nowTests/get-remaining-time.test.swift (1)
15-20: Inconsistent naming: some test functions still have the `test` prefix.

The renaming was applied inconsistently in this file:

- Renamed (no `test` prefix): `getRemainingTimeSeconds` (line 15), `getRemainingTimeMinutes` (line 44), `getRemainingTimeHours` (line 68)
- Not renamed (still has `test` prefix): `testGetRemainingTimeSeconds` (lines 30, 84), `testGetRemainingTimeMinutes` (line 98), `testGetRemainingTimeHours` (line 122)

Consider applying the naming change consistently across all test functions in this file.

♻️ Suggested renames for consistency

- func testGetRemainingTimeSeconds(
+ func getRemainingTimeSeconds(

(Apply similar changes to `testGetRemainingTimeMinutes` and `testGetRemainingTimeHours` in `GetRemainingTimePastTests`.)
Verify each finding against the current code and only fix it if needed. In `@apps/mobile/metro-now/metro-nowTests/get-remaining-time.test.swift` around lines 15 - 20, Rename the remaining test functions so naming is consistent: remove the "test" prefix from testGetRemainingTimeSeconds (both occurrences), testGetRemainingTimeMinutes, and testGetRemainingTimeHours so they match the already-renamed getRemainingTimeSeconds, getRemainingTimeMinutes, and getRemainingTimeHours; update any internal references/calls or test-suite registrations that use those identifiers (including equivalents in GetRemainingTimePastTests) to the new names to avoid unresolved symbol errors.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 9c5253c4-d861-49c3-87b8-d89e687efe88
📒 Files selected for processing (25)
- .prettierignore
- apps/database/migrate.ts
- apps/database/migrations/0001_initial_schema.ts
- apps/database/seed.ts
- apps/mobile/metro-now/metro-now Watch App/pages/closest-stop-list/ClosestStopListViewModel.swift
- apps/mobile/metro-now/metro-now Watch App/pages/platform/PlatformDetailDeparturesListView.swift
- apps/mobile/metro-now/metro-now Watch App/pages/platform/PlatformDetailView.swift
- apps/mobile/metro-now/metro-now Watch App/pages/stop/StopDepartureListItemPlaceholderView.swift
- apps/mobile/metro-now/metro-now Watch App/pages/stop/StopDeparturesView.swift
- apps/mobile/metro-now/metro-now/models/StopsViewModel.swift
- apps/mobile/metro-now/metro-now/pages/closest-stop/ClosestStopPageViewModel.swift
- apps/mobile/metro-now/metro-now/pages/closest-stop/PlatformDeparturesListView.swift
- apps/mobile/metro-now/metro-now/pages/search/SearchPageSorting.swift
- apps/mobile/metro-now/metro-now/pages/search/stop-detail/SearchPageDetailViewModel.swift
- apps/mobile/metro-now/metro-now/pages/search/utils/normalizeForSearch.swift
- apps/mobile/metro-now/metro-now/pages/settings/utils/getGithubIssueUrl.swift
- apps/mobile/metro-now/metro-now/pages/stop-detail/StopMetroDeparturesViewModel.swift
- apps/mobile/metro-now/metro-now/pages/welcome/WelcomePageView.swift
- apps/mobile/metro-now/metro-nowTests/endpoint.test.swift
- apps/mobile/metro-now/metro-nowTests/get-remaining-time.test.swift
- apps/mobile/metro-now/metro-nowTests/route-vehicle-type.tests.swift
- apps/mobile/metro-now/metro-nowTests/shorten-stop-name.test.swift
- apps/web/src/app/map/map-layers/heatmap-layer.ts
- apps/web/src/app/robots.ts
- biome.json
💤 Files with no reviewable changes (5)
- apps/mobile/metro-now/metro-now/pages/closest-stop/ClosestStopPageViewModel.swift
- apps/mobile/metro-now/metro-now Watch App/pages/closest-stop-list/ClosestStopListViewModel.swift
- apps/mobile/metro-now/metro-now/pages/search/stop-detail/SearchPageDetailViewModel.swift
- apps/mobile/metro-now/metro-now/pages/stop-detail/StopMetroDeparturesViewModel.swift
- .prettierignore
✅ Files skipped from review due to trivial changes (13)
- apps/mobile/metro-now/metro-now Watch App/pages/platform/PlatformDetailView.swift
- apps/mobile/metro-now/metro-now/pages/search/utils/normalizeForSearch.swift
- apps/mobile/metro-now/metro-now/pages/welcome/WelcomePageView.swift
- apps/mobile/metro-now/metro-now/pages/settings/utils/getGithubIssueUrl.swift
- apps/mobile/metro-now/metro-now/pages/closest-stop/PlatformDeparturesListView.swift
- apps/mobile/metro-now/metro-now/pages/search/SearchPageSorting.swift
- apps/mobile/metro-now/metro-now Watch App/pages/stop/StopDepartureListItemPlaceholderView.swift
- apps/mobile/metro-now/metro-now Watch App/pages/stop/StopDeparturesView.swift
- apps/mobile/metro-now/metro-now Watch App/pages/platform/PlatformDetailDeparturesListView.swift
- apps/mobile/metro-now/metro-now/models/StopsViewModel.swift
- apps/web/src/app/robots.ts
- biome.json
- apps/database/migrate.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/database/migrations/0001_initial_schema.ts
await transaction.deleteFrom("PlatformsOnRoutes").execute();
await transaction.deleteFrom("Route").execute();
await transaction.deleteFrom("Platform").execute();
await transaction.deleteFrom("Stop").execute();
Handle GtfsRouteStop before deleting Platform to avoid FK failures.
If GtfsRouteStop contains rows, deleting Platform at Line 71 can fail due to ON DELETE RESTRICT on GtfsRouteStop_stopId_fkey.
Suggested fix
await db.transaction().execute(async (transaction) => {
await transaction.deleteFrom("PlatformsOnRoutes").execute();
+ await transaction.deleteFrom("GtfsRouteStop").execute();
await transaction.deleteFrom("Route").execute();
await transaction.deleteFrom("Platform").execute();
await transaction.deleteFrom("Stop").execute();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/database/seed.ts` around lines 69 - 72, The deletion order must remove
GtfsRouteStop rows before deleting Platform to avoid FK constraint failures;
update the seed transaction to call
transaction.deleteFrom("GtfsRouteStop").execute() (or otherwise clear
GtfsRouteStop) prior to transaction.deleteFrom("Platform").execute(), keeping
the existing deletions for "PlatformsOnRoutes", "Route", and "Stop" but ensuring
"GtfsRouteStop" is deleted before "Platform".
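More generally, a safe deletion order can be derived from the FK graph instead of being maintained by hand. The helper below is a hypothetical sketch, not part of the PR; the table names and edges mirror this seed discussion (with `GtfsRouteStop -> Platform` as the edge the original order missed) but are assumptions about the actual schema:

```typescript
// Hypothetical helper: given child -> referenced-parent FK edges, emit a
// deletion order in which every referencing table precedes the tables it
// references. Shown only to illustrate the ordering constraint.
function deletionOrder(references: Record<string, string[]>): string[] {
    const order: string[] = [];
    const seen = new Set<string>();
    const visit = (table: string): void => {
        if (seen.has(table)) return;
        seen.add(table);
        // Every table holding an FK to `table` must be cleared first.
        for (const [child, parents] of Object.entries(references)) {
            if (parents.includes(table)) visit(child);
        }
        order.push(table);
    };
    for (const table of Object.keys(references)) visit(table);
    return order;
}

// Edges assumed from the seed discussion above.
const fkEdges: Record<string, string[]> = {
    PlatformsOnRoutes: ["Platform", "Route"],
    GtfsRouteStop: ["Platform"],
    Platform: ["Stop"],
    Route: [],
    Stop: [],
};
```

For this graph, `deletionOrder(fkEdges)` always places `GtfsRouteStop` and `PlatformsOnRoutes` before `Platform`, and `Platform` before `Stop`, matching the fix suggested above.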
What