This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
- Specifications and design docs: All specs, design documents, and technical documentation should be saved to and read from the `specs/` directory
- CLAUDE.md: This file contains general guidance and architectural patterns
- README files: Each package and example has its own README for specific usage instructions
- Claude Skills: Specialized skills for common tasks are in `.claude/skills/`:
  - `pr-changeset`: REQUIRED - Use when modifying any package code to create proper changeset files
  - `plugin-version`: REQUIRED - Use when modifying any Claude plugin code to bump plugin and marketplace versions
OpenSaas Stack is a Next.js-based stack for building admin-heavy applications with built-in access control. It uses a config-first approach similar to KeystoneJS but modernized for Next.js App Router and designed to be AI-agent-friendly with automatic security guardrails.
This is a pnpm monorepo with:

- `packages/core`: Core stack (config system, access control, generators)
- `packages/cli`: CLI tools (generators via bin scripts)
- `packages/ui`: Admin UI components (composable React components)
- `packages/auth`: Better-auth integration (authentication & sessions)
- `packages/mcp`: DEPRECATED - MCP functionality moved to core and auth packages
- `packages/tiptap`: Rich text editor integration (third-party field example)
- `examples/blog`: Basic blog example
- `examples/custom-field`: Custom field types demonstration
- `examples/composable-dashboard`: Composable UI components
- `examples/auth-demo`: Authentication integration
- `examples/mcp-demo`: MCP server integration
- `examples/tiptap-demo`: Tiptap rich text editor integration
- `specs/`: Design documents and specifications
```sh
# Install all dependencies
pnpm install

# Build all packages
pnpm build

# Build in development mode (watch)
pnpm dev

# Clean build artifacts
pnpm clean
```

```sh
cd packages/core

# Build the core package
pnpm build

# Run tests
pnpm test

# Run tests with UI
pnpm test:ui

# Run tests with coverage
pnpm test:coverage
```

```sh
cd examples/blog

# Generate Prisma schema and types from opensaas.config.ts
pnpm generate

# Push schema to database (creates/updates SQLite file)
pnpm db:push

# Generate Prisma Client
npx prisma generate

# Open Prisma Studio
pnpm db:studio

# Run development server
pnpm dev

# Build for production
pnpm build
```

```sh
# Run test scripts directly
cd examples/blog
npx tsx test.ts # or any other .ts file
```

The stack's primary innovation is its access control engine that automatically secures database operations. Understanding this is critical for working with the codebase.
Key files:

- `packages/core/src/context/index.ts` - Context wrapper that intercepts all Prisma operations
- `packages/core/src/access/engine.ts` - Access control execution logic
- `packages/core/src/access/types.ts` - Type definitions for access control
How it works:

1. User defines access control in `opensaas.config.ts` using `AccessControl` functions
2. Operations go through the context wrapper: `context.db.post.update()` instead of `prisma.post.update()`
3. The access control engine checks operation-level access (can the user perform this action?)
4. Access filters are merged with Prisma where clauses (which records can they access?)
5. Field-level access controls which fields are readable/writable
6. Operations return `null` or `[]` on access denial (silent failures prevent info leakage)
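Step 4 above (merging an access filter into the user's where clause) can be sketched as a plain object merge. This is an illustrative toy, not the stack's actual engine code; the function name and types are hypothetical:

```typescript
// Hypothetical sketch of merging an access filter with a user-supplied
// Prisma-style `where` clause. Not the stack's internals.
type Where = Record<string, unknown>

function mergeAccessFilter(userWhere: Where, accessFilter: Where | boolean): Where | null {
  if (accessFilter === false) return null // operation denied outright
  if (accessFilter === true) return userWhere // no scoping needed
  // Combine so BOTH the user's filter and the access filter must match
  return { AND: [userWhere, accessFilter] }
}

const merged = mergeAccessFilter(
  { id: 'post-1' },
  { authorId: { equals: 'user-123' } },
)
console.log(JSON.stringify(merged))
```

Because the filters are AND-ed, a record that exists but falls outside the access filter simply never matches, which is what produces the silent `null`/`[]` results.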
Access Control Types:

- Operation-level: Controls query/create/update/delete access at the list level
- Field-level: Controls read/create/update access for individual fields
- Filter-based: Returns Prisma filters to scope access (e.g., `{ authorId: { equals: userId } }`)
- Boolean: Returns `true` (allow) or `false` (deny)
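A single access rule can return either shape. The sketch below is illustrative (the session shape and helper name are assumptions, not the stack's exact types):

```typescript
// An access rule can deny with `false`, allow with `true`, or scope with a filter.
// Session shape and names are hypothetical for this sketch.
type AccessResult = boolean | Record<string, unknown>

const isAuthor = ({ session }: { session?: { userId: string } }): AccessResult =>
  session ? { authorId: { equals: session.userId } } : false

console.log(JSON.stringify(isAuthor({ session: { userId: 'u1' } })))
console.log(isAuthor({}))
```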
The hooks system provides data transformation and side effects during database operations. Hooks are available at both the list level and field level. The hooks API is compliant with Keystone's hooks specification.
Key files:

- `packages/core/src/hooks/index.ts` - List-level hooks
- `packages/core/src/config/types.ts` - Field-level hook types
Hook Types:

- Data Transformation Hooks: `resolveInput` and `resolveOutput` - Transform data going in or out
- Side Effect Hooks: `beforeOperation` and `afterOperation` - Perform actions without modifying data
- Validation Hooks: `validate` (or `validateInput` for backwards compatibility) - Custom validation logic
Hook execution order (write operations - create/update):

1. List-level `resolveInput` - Transform input data at list level
2. Field-level `resolveInput` - Transform individual field values (e.g., hash passwords)
3. List-level `validate` - Custom validation logic
4. Field validation - Built-in rules (isRequired, length, min/max)
5. Field-level access control - Filter writable fields
6. Field-level `beforeOperation` - Side effects for individual fields
7. List-level `beforeOperation` - Side effects at list level
8. Database operation
9. List-level `afterOperation` - Side effects at list level
10. Field-level `afterOperation` - Side effects for individual fields
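The write path above can be condensed into a toy pipeline. Everything here (the hooks, the stubbed database call) is illustrative, not the stack's engine:

```typescript
// Toy walkthrough of the write path: transform, validate, side effects,
// database operation, then after-hooks. Not the stack's implementation.
type Data = Record<string, unknown>
const trace: string[] = []

function runCreate(inputData: Data): Data {
  // 1-2. resolveInput hooks transform a copy of the input
  trace.push('resolveInput')
  const resolvedData: Data = { ...inputData, slug: String(inputData.title).toLowerCase() }
  // 3-4. validation runs on the resolved data
  trace.push('validate')
  if (!resolvedData.title) throw new Error('title is required')
  // 5-7. field access filtering + beforeOperation side effects
  trace.push('beforeOperation')
  // 8. database operation (stubbed here)
  trace.push('db.create')
  const item = { id: '1', ...resolvedData }
  // 9-10. afterOperation hooks see the created item
  trace.push('afterOperation')
  return item
}

const item = runCreate({ title: 'Hello' })
console.log(trace.join(' -> '), item.slug)
```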
Hook execution order (read operations - query):

1. Database operation
2. Field-level access control - Filter readable fields
3. Field-level `resolveOutput` - Transform individual field values (e.g., wrap passwords)
Hook Arguments (Keystone-compliant):

All hooks receive these common arguments:

- `listKey` - The name of the list being operated on
- `operation` - The operation type (`'create'`, `'update'`, or `'delete'`). For `resolveOutput` hooks, this is `'query'`
- `context` - The AccessContext object
List-level hooks additionally receive:

- `resolveInput`: `{ listKey, operation, inputData, resolvedData, item, context }` - Returns the modified `resolvedData`
- `validate`: `{ listKey, operation, inputData, resolvedData, item, context, addValidationError }` - Use `addValidationError(msg)` to report validation failures
- `beforeOperation`:
  - create/update: `{ listKey, operation, inputData, resolvedData, context }`
  - delete: `{ listKey, operation, item, context }`
- `afterOperation`:
  - create: `{ listKey, operation, inputData, item, resolvedData, context }`
  - update: `{ listKey, operation, inputData, originalItem, item, resolvedData, context }`
  - delete: `{ listKey, operation, originalItem, context }`
Field-level hooks additionally receive:

- `fieldKey` - The name of the field (use `fieldKey`, not `fieldName`)
- `resolveInput`: `{ listKey, fieldKey, operation, inputData, item, resolvedData, context }` - Access the field value via `resolvedData[fieldKey]`; returns the modified field value
- `validate`:
  - create/update: `{ listKey, fieldKey, operation, inputData, item, resolvedData, context, addValidationError }`
  - delete: `{ listKey, fieldKey, operation, item, context, addValidationError }`
- `beforeOperation`:
  - create: `{ listKey, fieldKey, operation, inputData, resolvedData, context }`
  - update: `{ listKey, fieldKey, operation, inputData, item, resolvedData, context }`
  - delete: `{ listKey, fieldKey, operation, item, context }`
- `afterOperation`:
  - create: `{ listKey, fieldKey, operation, inputData, item, resolvedData, context }`
  - update: `{ listKey, fieldKey, operation, inputData, originalItem, item, resolvedData, context }`
  - delete: `{ listKey, fieldKey, operation, originalItem, context }`
- `resolveOutput`: `{ operation, value, item, listKey, fieldName, context }` (query operations only)
Key Concepts:

- `inputData` - The original data passed to the operation (before any transformations)
- `resolvedData` - The data after transformations (updated by `resolveInput` hooks)
- `item` - The existing item from the database (undefined for create, present for update/delete)
- `originalItem` - The item before the operation (undefined for create, present for update/delete in `afterOperation`)
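The `inputData` / `resolvedData` distinction can be shown with a toy `resolveInput`: the original input is never mutated, only the resolved copy changes. This is an illustrative sketch, not the stack's code:

```typescript
// inputData stays untouched; resolveInput returns a transformed copy.
const inputData = { title: '  Hello  ' }

function resolveInput({ resolvedData }: { resolvedData: typeof inputData }) {
  return { ...resolvedData, title: resolvedData.title.trim() }
}

const resolvedData = resolveInput({ resolvedData: { ...inputData } })
console.log(inputData.title)    // original, untouched
console.log(resolvedData.title) // transformed
```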
List-level hook use cases:

- `resolveInput`: Auto-set publishedAt when status changes to "published"
- `validate`: Business logic validation (e.g., "title cannot contain spam")
- `beforeOperation`: Logging, sending notifications
- `afterOperation`: Cache invalidation, webhooks, comparing previous and new values using `originalItem`
Field-level hook use cases:

- `resolveInput`: Hash passwords, normalize phone numbers, resize images
- `resolveOutput`: Wrap passwords with HashedPassword class, format dates
- `beforeOperation`: Log field changes, validate external constraints
- `afterOperation`: Update search indexes, invalidate CDN caches, clean up old files by comparing `originalItem` field values
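The password-hashing use case can be sketched as a field-level `resolveInput`. The hook shape mirrors the arguments documented above, but the hashing scheme is purely illustrative (real code should use the auth library's hasher, not hand-rolled scrypt):

```typescript
import { scryptSync, randomBytes } from 'node:crypto'

// Hedged sketch of a field-level resolveInput hook that hashes a password
// field. Names and hashing scheme are illustrative only.
const passwordResolveInput = ({ resolvedData, fieldKey }: {
  resolvedData: Record<string, string>
  fieldKey: string
}) => {
  const plain = resolvedData[fieldKey]
  if (plain === undefined) return plain // field not being written
  const salt = randomBytes(16).toString('hex')
  const hash = scryptSync(plain, salt, 32).toString('hex')
  return `${salt}:${hash}` // stored value replaces the plaintext
}

const stored = passwordResolveInput({ resolvedData: { password: 'hunter2' }, fieldKey: 'password' })
console.log(stored)
```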
Key files:

- `packages/core/src/config/types.ts` - Type definitions
- `packages/core/src/config/index.ts` - Config builder functions
- `packages/core/src/config/plugin-engine.ts` - Plugin execution engine
Users define their schema in `opensaas.config.ts`:

```ts
export default config({
  plugins: [
    authPlugin({ emailAndPassword: { enabled: true } }),
    ragPlugin({ provider: openaiEmbeddings({ apiKey: '...' }) }),
  ],
  db: { provider: 'sqlite', url: 'file:./dev.db' },
  lists: {
    Post: list({
      fields: { title: text({ validation: { isRequired: true } }) },
      access: { operation: { query: () => true, update: isAuthor } },
      hooks: { resolveInput: async ({ resolvedData }) => resolvedData },
    }),
  },
})
```

Overview: The stack uses a plugin system for extending functionality. Plugins can inject lists, add hooks, register MCP tools, and participate in code generation.
Key files:

- `packages/core/src/config/plugin-engine.ts` - Dependency resolution and execution
- `packages/auth/src/config/plugin.ts` - Auth plugin implementation
- `packages/rag/src/config/plugin.ts` - RAG plugin implementation
Plugin Capabilities:

- Inject Lists: Add auto-generated lists (e.g., User, Session from authPlugin)
- Extend Lists: Add fields or hooks to existing lists
- Hook Chaining: Multiple plugins can add hooks that execute in sequence
- Deep Merging: Plugins safely merge fields, hooks, and access control
- Lifecycle Hooks: `beforeGenerate` and `afterGenerate` for code generation control
- Dependency Resolution: Automatic execution ordering via topological sort
Plugin Pattern:

```ts
export function myPlugin(config: MyConfig): Plugin {
  return {
    name: 'my-plugin',
    version: '0.1.0',
    dependencies: ['auth'], // Optional: depends on auth plugin
    init: async (context) => {
      // Add lists
      context.addList('MyList', list({ fields: {...} }))
      // Extend existing lists
      context.extendList('User', { fields: { myField: text() } })
      // Store plugin data for runtime
      context.setPluginData('my-plugin', config)
    },
    beforeGenerate: async (config) => {
      // Modify config before schema generation
      return config
    },
    afterGenerate: async (files) => {
      // Post-process generated files
      return files
    },
  }
}
```

Runtime Access:

Plugin data is stored in `config._pluginData[pluginName]`:

```ts
const authConfig = config._pluginData.auth // NormalizedAuthConfig
const ragConfig = config._pluginData.rag // NormalizedRAGConfig
```

Key files:

- `packages/cli/src/generator/prisma.ts` - Generates `prisma/schema.prisma`
- `packages/cli/src/generator/prisma-config.ts` - Generates `prisma.config.ts`
- `packages/cli/src/generator/types.ts` - Generates `.opensaas/types.ts`
- `packages/cli/src/generator/context.ts` - Generates `.opensaas/context.ts`
Run `pnpm generate` to convert `opensaas.config.ts` into a Prisma schema and TypeScript types.

Generated files:

- `prisma/schema.prisma` - Prisma schema with models (no datasource URL)
- `prisma.config.ts` - Prisma 7 CLI configuration with datasource URL for `db:push` and migrations
- `.opensaas/types.ts` - TypeScript type definitions
- `.opensaas/context.ts` - Context factory with Prisma Client
Architecture: Generators delegate to field builder methods rather than using switch statements. Each field type provides its own generation logic through getPrismaType() and getTypeScriptType() methods.
Prisma 7 Configuration:

Prisma 7 requires two separate configurations:

- CLI configuration (`prisma.config.ts` at project root): Used by `prisma db push`, `prisma migrate dev`, etc. Contains the datasource URL from environment variables.
- Runtime configuration (in `opensaas.config.ts`): Used by application code. Provides database adapters via `prismaClientConstructor`.
This separation allows CLI commands to work while keeping the runtime flexible with custom adapters.
Extending the Generated Prisma Schema:

The `extendPrismaSchema` function in the database configuration allows you to modify the generated Prisma schema before it's written to disk. This is useful for advanced Prisma features not directly supported by the config API.

```ts
export default config({
  db: {
    provider: 'postgresql',
    prismaClientConstructor: (PrismaClient) => {
      // ... adapter setup
    },
    extendPrismaSchema: (schema) => {
      // Modify the schema as needed
      let modifiedSchema = schema

      // Example: Add multi-schema support for PostgreSQL
      modifiedSchema = modifiedSchema.replace(
        /(datasource db \{[^}]+provider\s*=\s*"postgresql")/,
        '$1\n  schemas = ["public", "auth"]',
      )

      // Example: Add @@schema attribute to all models
      modifiedSchema = modifiedSchema.replace(
        /^(model \w+\s*\{[\s\S]*?)(^}$)/gm,
        (match, modelContent) => {
          if (!modelContent.includes('@@schema')) {
            return `${modelContent}\n  @@schema("public")\n}`
          }
          return match
        },
      )

      return modifiedSchema
    },
  },
  // ... rest of config
})
```

Common use cases:

- Multi-schema support: Add Prisma's multi-schema support for PostgreSQL
- Custom attributes: Add model-level or field-level attributes not exposed in the config API
- Output path modifications: Adjust the Prisma Client output path
- Preview features: Enable Prisma preview features via datasource or generator configuration
Field-Level extendPrismaSchema for Relationships:

Relationship fields also support `extendPrismaSchema` in their `db` config for more granular control. This is useful for self-referential relationships that need custom `onDelete` or `onUpdate` actions.

```ts
lists: {
  Category: list({
    fields: {
      name: text({ validation: { isRequired: true } }),
      parent: relationship({
        ref: 'Category.children',
        db: {
          foreignKey: true,
          extendPrismaSchema: ({ fkLine, relationLine }) => ({
            fkLine,
            relationLine: relationLine.replace(
              '@relation(',
              '@relation(onDelete: SetNull, onUpdate: Cascade, ',
            ),
          }),
        },
      }),
      children: relationship({ ref: 'Category.parent', many: true }),
    },
  }),
}
```

The function receives:

- `fkLine`: The foreign key field line (e.g., `"parentId String?"`) - only present for single relationships that own the FK
- `relationLine`: The relation field line (e.g., `"parent Category? @relation(...)"`)

Field-level `extendPrismaSchema` is applied before the global `db.extendPrismaSchema`, allowing both granular and broad modifications.
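That ordering (field-level first, then global) can be sketched as two string transforms where the global one sees the field-level output. The transforms here are toy stand-ins, not the generator's actual code:

```typescript
// Toy illustration: the field-level transform edits one relation line, and
// the global transform then runs over the schema containing that edit.
const fieldLevel = (relationLine: string) =>
  relationLine.replace('@relation(', '@relation(onDelete: SetNull, ')

const globalLevel = (schema: string) => `${schema}\n  @@schema("public")\n}`

let line = 'parent Category? @relation(fields: [parentId], references: [id])'
line = fieldLevel(line) // granular change applied first
const schema = globalLevel(`model Category {\n  ${line}`) // broad change after
console.log(schema)
```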
Key file: `packages/core/src/fields/index.ts`

Core field types:

- `text()` - String field with validation (isRequired, length)
- `integer()` - Number field with validation (isRequired, min, max)
- `checkbox()` - Boolean field
- `timestamp()` - Date/time field with auto-now support
- `password()` - String field (excluded from reads)
- `select()` - Enum field with predefined options
- `relationship()` - Foreign key relationship (one-to-one, one-to-many)
- `json()` - JSON field for storing arbitrary JSON data
- `virtual()` - Computed field not stored in the database, computed via hooks

Third-party field types:

- `richText()` from `@opensaas/stack-tiptap/fields` - Rich text editor with JSON storage

Field Builder Methods:

Each field builder function returns an object with these methods:

- `getZodSchema(fieldName, operation)` - Validation schema generation
- `getPrismaType(fieldName)` - Prisma type and modifiers (e.g., `{ type: "String", modifiers: "?" }`)
- `getTypeScriptType()` - TypeScript type and optionality (e.g., `{ type: "string", optional: true }`)

This allows field types to be fully self-contained and extensible without modifying core stack code.
Virtual fields now support custom scalar types (like Decimal for financial precision) through three different approaches:

1. Primitive type strings (for built-in JavaScript types):

```ts
fields: {
  fullName: virtual({
    type: 'string',
    hooks: {
      resolveOutput: ({ item }) => `${item.firstName} ${item.lastName}`,
    },
  })
}
```

2. Import strings (for custom types, explicit format):

```ts
fields: {
  totalPrice: virtual({
    type: "import('decimal.js').Decimal",
    hooks: {
      resolveOutput: ({ item }) => {
        return new Decimal(item.price).times(item.quantity)
      },
    },
  })
}
```

3. Type descriptor objects (recommended for custom types):

```ts
import Decimal from 'decimal.js'

fields: {
  totalPrice: virtual({
    type: { value: Decimal, from: 'decimal.js' },
    hooks: {
      resolveOutput: ({ item }) => {
        return new Decimal(item.price).times(item.quantity)
      },
    },
  }),
  // With a custom name (when the constructor name doesn't match the export)
  customField: virtual({
    type: {
      value: MyClass,
      from: '@myorg/types',
      name: 'MyExportedType', // Optional
    },
    hooks: {
      resolveOutput: ({ item }) => new MyClass(item.data),
    },
  })
}
```

Type generation:

- Primitive strings are used as-is in generated types
- Import strings and type descriptors generate proper TypeScript import statements
- The type generator automatically collects and deduplicates imports from all fields
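The collect-and-deduplicate step can be sketched as grouping type names by module. This is a hypothetical helper illustrating the idea, not the generator's actual code (real Decimal usage is a default import; named imports here are just for the sketch):

```typescript
// Hypothetical sketch: group imported type names by module and emit one
// import line per module, deduplicating repeats.
type TypeDescriptor = { name: string; from: string }

function collectImports(fields: TypeDescriptor[]): string[] {
  const byModule = new Map<string, Set<string>>()
  for (const { name, from } of fields) {
    if (!byModule.has(from)) byModule.set(from, new Set())
    byModule.get(from)!.add(name)
  }
  return [...byModule].map(
    ([from, names]) => `import { ${[...names].sort().join(', ')} } from '${from}'`,
  )
}

const imports = collectImports([
  { name: 'Decimal', from: 'decimal.js' },
  { name: 'Decimal', from: 'decimal.js' }, // duplicate collapses
  { name: 'MyExportedType', from: '@myorg/types' },
])
console.log(imports.join('\n'))
```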
Use cases:

- Financial calculations: Use `Decimal` from `decimal.js` for precise currency calculations
- Custom data structures: Return domain-specific types from virtual fields
- Third-party libraries: Integrate types from any npm package

This feature addresses the need for precision in financial applications (billing, invoicing, e-commerce) where JavaScript's number type loses precision for monetary values.
The stack provides optional Better-auth integration through @opensaas/stack-auth.
Key files:

- `packages/auth/src/config/index.ts` - Config wrappers `withAuth()` and `authConfig()`
- `packages/auth/src/lists/index.ts` - Auto-generated auth lists (User, Session, Account, Verification)
- `packages/auth/src/server/index.ts` - Better-auth server setup
- `packages/auth/src/client/index.ts` - Client-side auth hooks
- `packages/auth/src/ui/index.ts` - Pre-built UI components (SignInForm, SignUpForm, etc.)

How it works:

1. `withAuth()` wraps your config and merges in auth lists (User, Session, Account, Verification)
2. `authConfig()` configures Better-auth plugins and session fields
3. The generator creates a Prisma schema with auth tables
4. Better-auth handles the OAuth flow and session management
5. The context automatically includes the session in all access control functions
6. Session fields are configurable (e.g., `['userId', 'email', 'name', 'role']`)
See: packages/auth/CLAUDE.md for detailed patterns and examples/auth-demo for usage.
The stack provides Model Context Protocol server integration through @opensaas/stack-core/mcp and @opensaas/stack-auth/mcp.
Key files:

- `packages/core/src/mcp/handler.ts` - Auth-agnostic MCP HTTP handlers
- `packages/core/src/mcp/types.ts` - MCP session types
- `packages/auth/src/mcp/better-auth.ts` - Better-auth OAuth adapter

How it works:

1. Enable MCP in config with `mcp: { enabled: true, auth: { type: 'better-auth', loginPage: '/sign-in' } }`
2. The core runtime generates CRUD tools for each list (query, create, update, delete)
3. The auth adapter provides the session from Better-auth's OAuth flow with AI assistants
4. All tools respect existing access control rules
5. Custom tools can be added per-list for specialized operations
Migration Note: The @opensaas/stack-mcp package is deprecated. Use @opensaas/stack-core/mcp for MCP handlers and @opensaas/stack-auth/mcp for Better-auth integration.
See: packages/core/CLAUDE.md and packages/auth/CLAUDE.md for detailed patterns, and examples/mcp-demo for usage.
The stack uses consistent case conventions across different contexts:
List Names in Config: Always use PascalCase

```ts
lists: {
  User: list({ ... }), // Good
  BlogPost: list({ ... }), // Good
  AuthUser: list({ ... }), // Good
  user: list({ ... }), // Bad - don't use lowercase
  blog_post: list({ ... }), // Bad - don't use snake_case
}
```

Case Conversions:

- Prisma Models: PascalCase (e.g., `AuthUser`, `BlogPost`)
- Prisma Client Properties: camelCase (e.g., `prisma.authUser`, `prisma.blogPost`)
- Context DB Properties: camelCase (e.g., `context.db.authUser`, `context.db.blogPost`)
- Admin UI URLs: kebab-case (e.g., `/admin/auth-user`, `/admin/blog-post`)

Utility Functions:

```ts
import { getDbKey, getUrlKey, getListKeyFromUrl } from '@opensaas/stack-core'

getDbKey('AuthUser') // 'authUser' - for accessing context.db and prisma
getUrlKey('AuthUser') // 'auth-user' - for constructing URLs
getListKeyFromUrl('auth-user') // 'AuthUser' - for parsing URLs
```

The stack automatically generates a context factory in `.opensaas/context.ts` that abstracts away Prisma client management:
```ts
// In your app code (e.g., server actions)
import { getContext } from '@/.opensaas/context'

// Anonymous access
const context = await getContext()
const posts = await context.db.post.findMany()

// Authenticated access
const context = await getContext({ userId: 'user-123' })
const myPosts = await context.db.post.findMany()
```

Prisma Client Constructor (Required for Prisma 7):
Prisma 7 requires database adapters. You must provide a `prismaClientConstructor` function in your config:

```ts
// opensaas.config.ts - SQLite example
import { PrismaBetterSQLite3 } from '@prisma/adapter-better-sqlite3'
import Database from 'better-sqlite3'

export default config({
  db: {
    provider: 'sqlite',
    url: process.env.DATABASE_URL || 'file:./dev.db',
    prismaClientConstructor: (PrismaClient) => {
      const db = new Database(process.env.DATABASE_URL || './dev.db')
      const adapter = new PrismaBetterSQLite3(db)
      return new PrismaClient({ adapter })
    },
  },
  // ... rest of config
})
```

```ts
// PostgreSQL example
import { PrismaPg } from '@prisma/adapter-pg'
import pg from 'pg'

export default config({
  db: {
    provider: 'postgresql',
    url: process.env.DATABASE_URL,
    prismaClientConstructor: (PrismaClient) => {
      const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL })
      const adapter = new PrismaPg(pool)
      return new PrismaClient({ adapter })
    },
  },
})
```

```ts
// Neon serverless PostgreSQL example
import { PrismaNeon } from '@prisma/adapter-neon'
import { neonConfig } from '@neondatabase/serverless'
import ws from 'ws'

export default config({
  db: {
    provider: 'postgresql',
    url: process.env.DATABASE_URL,
    prismaClientConstructor: (PrismaClient) => {
      neonConfig.webSocketConstructor = ws
      const adapter = new PrismaNeon({
        connectionString: process.env.DATABASE_URL,
      })
      return new PrismaClient({ adapter })
    },
  },
})
```

The generated context will use your custom constructor to instantiate PrismaClient with the appropriate adapter.
Access-controlled operations return `null` (single record) or `[]` (multiple records) when access is denied, rather than throwing errors. This prevents information leakage about whether records exist.

Always check for null:

```ts
const post = await context.db.post.update({ where: { id }, data })
if (!post) {
  // Either doesn't exist OR user doesn't have access
  return { error: 'Access denied' }
}
```

Fields `id`, `createdAt`, and `updatedAt` are automatically:
- Added to Prisma schema
- Excluded from access control (always readable)
- Excluded from field-level write operations
Relationships support two ref formats: `'ListName.fieldName'` (bidirectional) or `'ListName'` (list-only).

Bidirectional relationships (both sides define the relationship):

- One-to-many: `posts: relationship({ ref: 'Post.author', many: true })`
- Many-to-one: `author: relationship({ ref: 'User.posts' })`

List-only relationships (only one side defines the relationship):

- Many-to-one: `category: relationship({ ref: 'Category' })`
- One-to-many: `tags: relationship({ ref: 'Tag', many: true })`

How list-only refs work:

- When you use `ref: 'Category'` (no field specified), the stack automatically creates a synthetic relation field on the Category model
- The synthetic field is named `from_<SourceList>_<field>` (e.g., `from_Post_category`)
- This matches Keystone's behavior and is useful when you don't need to access the relationship from both sides
- Prisma generates foreign keys automatically and uses named relations for list-only refs
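The synthetic-field naming rule above boils down to a simple template (the helper itself is illustrative; the real generator lives in the core package):

```typescript
// The from_<SourceList>_<field> naming rule as a tiny helper.
const syntheticFieldName = (sourceList: string, field: string) =>
  `from_${sourceList}_${field}`

console.log(syntheticFieldName('Post', 'category')) // from_Post_category
```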
Example:

```ts
// Blog example with both patterns
lists: {
  User: list({
    fields: {
      name: text(),
      // Bidirectional: User has many Posts
      posts: relationship({ ref: 'Post.author', many: true }),
    },
  }),
  Category: list({
    fields: {
      name: text(),
      // No relationship field needed here!
    },
  }),
  Post: list({
    fields: {
      title: text(),
      // Bidirectional: Post belongs to User
      author: relationship({ ref: 'User.posts' }),
      // List-only: Post belongs to Category
      category: relationship({ ref: 'Category' }),
    },
  }),
}
```

Generated Prisma schema:

```prisma
model Category {
  id                 String   @id @default(cuid())
  name               String
  from_Post_category Post[]   @relation("Post_category") // Auto-generated
  createdAt          DateTime @default(now())
  updatedAt          DateTime @updatedAt
}

model Post {
  id         String    @id @default(cuid())
  title      String
  authorId   String?
  author     User?     @relation(fields: [authorId], references: [id])
  categoryId String?
  category   Category? @relation("Post_category", fields: [categoryId], references: [id])
  createdAt  DateTime  @default(now())
  updatedAt  DateTime  @updatedAt
}
```

Problem: When migrating from KeystoneJS to OpenSaaS Stack, many-to-many relationship join tables have different naming conventions, which can cause data loss if not handled correctly.
Join Table Naming Strategies:

- Prisma (default): Uses alphabetically-sorted names like `_LessonToTeacher`
- Keystone: Uses field-location-based names like `_Lesson_teachers`
- Custom: Use per-field `db.relationName` for full control

Configuration Options:

Option 1: Global Keystone Naming (recommended for migrations)

Set `joinTableNaming: 'keystone'` in your database config to automatically apply Keystone naming to all M2M relationships:
```ts
export default config({
  db: {
    provider: 'postgresql',
    joinTableNaming: 'keystone', // Auto-apply Keystone naming
    prismaClientConstructor: (PrismaClient) => {
      // ... your adapter setup
    },
  },
  lists: {
    Lesson: {
      fields: {
        title: text(),
        // Prisma creates implicit join table _Lesson_teachers
        teachers: relationship({ ref: 'Teacher.lessons', many: true }),
      },
    },
    Teacher: {
      fields: {
        name: text(),
        lessons: relationship({ ref: 'Lesson.teachers', many: true }),
      },
    },
  },
})
```

Option 2: Per-Field Relation Name (recommended for fine-grained control)

Use `db.relationName` on individual relationships to specify custom names:

```ts
export default config({
  db: {
    provider: 'postgresql',
    prismaClientConstructor: (PrismaClient) => {
      // ... your adapter setup
    },
  },
  lists: {
    Lesson: {
      fields: {
        title: text(),
        // Only need to set on ONE side of the relationship
        teachers: relationship({
          ref: 'Teacher.lessons',
          many: true,
          db: { relationName: 'Lesson_teachers' },
        }),
      },
    },
    Teacher: {
      fields: {
        name: text(),
        // Automatically uses the same relationName from the other side
        lessons: relationship({ ref: 'Lesson.teachers', many: true }),
      },
    },
  },
})
```

Option 3: Hybrid (per-field overrides global)

Combine both for flexibility:

```ts
export default config({
  db: {
    provider: 'postgresql',
    joinTableNaming: 'keystone', // Default for most relationships
  },
  lists: {
    Lesson: {
      fields: {
        // Uses global Keystone naming → _Lesson_students
        students: relationship({ ref: 'Student.lessons', many: true }),
        // Per-field overrides global → _CustomTeachers
        teachers: relationship({
          ref: 'Teacher.lessons',
          many: true,
          db: { relationName: 'CustomTeachers' },
        }),
      },
    },
  },
})
```

Generated Prisma Schema:
```prisma
model Lesson {
  id        String    @id @default(cuid())
  title     String?
  teachers  Teacher[] @relation("Lesson_teachers")
  createdAt DateTime  @default(now())
  updatedAt DateTime  @updatedAt
}

model Teacher {
  id        String   @id @default(cuid())
  name      String?
  lessons   Lesson[] @relation("Lesson_teachers")
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
}

// Note: Prisma automatically creates the implicit join table _Lesson_teachers
// No explicit model needed - controlled via the @relation("name") attribute
```

Migration Guide:
1. Before Migration: Identify all many-to-many relationships in your Keystone schema
2. Choose Strategy:
   - For full migration: Use global `joinTableNaming: 'keystone'`
   - For specific tables: Use per-field `db.relationName`
3. Run Generator: Generate the Prisma schema with `pnpm generate`
4. Verify Schema: Check that relation names match (e.g., `@relation("Lesson_teachers")`)
5. Test Migration: Use `prisma db pull` to introspect the existing database and compare schemas
6. Apply Changes: Use `prisma db push` to sync the schema (it should detect no changes if the names match)
Validation:

- If both sides specify `db.relationName`, they must match or an error is thrown
- You only need to set `db.relationName` on one side of a bidirectional relationship
- Per-field `db.relationName` takes precedence over global `joinTableNaming`
Important Notes:

- Keystone naming uses deterministic selection for bidirectional many-to-many relationships (alphabetically sorted)
- Prisma automatically creates join tables named `_relationName` when you use `@relation("relationName")`
- For new projects, use the default Prisma naming unless you need Keystone compatibility or have specific naming requirements
- Edit TypeScript files in `packages/core/src/`
- Build with `pnpm build` (or `pnpm dev` for watch mode)
- Test changes in examples:

```sh
cd examples/blog
pnpm generate   # Regenerate if config types changed
npx tsx test.ts # Run test script
```
IMPORTANT: Field types are fully self-contained. Do NOT add switch statements to core or UI packages.

1. Define the field type in `packages/core/src/config/types.ts`:

```ts
export type MyCustomField = BaseFieldConfig & {
  type: 'myCustom'
  customOption?: string
}
```

2. Create the field builder in `packages/core/src/fields/index.ts`:

```ts
export function myCustom(options?: Omit<MyCustomField, 'type'>): MyCustomField {
  return {
    type: 'myCustom',
    ...options,
    getZodSchema: (fieldName, operation) => {
      // Return Zod schema for validation
      return z.string().optional()
    },
    getPrismaType: (fieldName) => {
      // Return Prisma type and modifiers
      return { type: 'String', modifiers: '?' }
    },
    getTypeScriptType: () => {
      // Return TypeScript type and optionality
      return { type: 'string', optional: true }
    },
  }
}
```

3. Register a UI component (optional, for the admin UI):

```ts
import { registerFieldComponent } from '@opensaas/stack-ui'
import { MyCustomFieldComponent } from './components/MyCustomField'

registerFieldComponent('myCustom', MyCustomFieldComponent)
```
Key Principle: The field config object drives ALL behavior. Generators, validators, and UI components delegate to field methods. Never add switch statements based on field type in core or UI packages.
The UI package (`@opensaas/stack-ui`) offers multiple levels of abstraction through specialized exports:

```ts
// Full admin UI (all-in-one solution)
import { AdminUI } from '@opensaas/stack-ui'

// Primitives (shadcn/ui components for custom UIs)
import { Button, Input, Dialog, Card, Table } from '@opensaas/stack-ui/primitives'

// Composable field components
import { TextField, SelectField, RelationshipField } from '@opensaas/stack-ui/fields'

// Standalone composable components
import { ItemCreateForm, ItemEditForm, ListTable } from '@opensaas/stack-ui/standalone'

// Server utilities
import { getAdminContext } from '@opensaas/stack-ui/server'
```

1. Full AdminUI - Complete admin interface with routing:

```tsx
<AdminUI context={context} config={config} />
```

2. Standalone Components - Drop-in CRUD components:

```tsx
import { ItemCreateForm, ListTable } from '@opensaas/stack-ui/standalone'

// Create form in a custom page
<ItemCreateForm
  listKey="Post"
  context={context}
  onSuccess={(item) => router.push(`/posts/${item.id}`)}
/>

// Table in a custom layout
<ListTable
  listKey="Post"
  context={context}
  columns={['title', 'author', 'createdAt']}
/>
```

3. Primitives - Build custom UIs with shadcn components:

```tsx
import { Card, Button, Dialog } from '@opensaas/stack-ui/primitives'

<Card>
  <Button onClick={handleAction}>Custom Action</Button>
</Card>
```

See: `examples/composable-dashboard` for complete working examples of all composability patterns.
The UI layer uses a component registry pattern to avoid switch statements and enable extensibility.
Two approaches for custom field components:

1. Global Registration - Register a component for reuse across multiple fields:

```ts
import { registerFieldComponent } from '@opensaas/stack-ui'
import { ColorPickerField } from './components/ColorPickerField'

// Register once at app startup
registerFieldComponent('color', ColorPickerField)

// Use in multiple fields by referencing the fieldType
fields: {
  favoriteColor: text({ ui: { fieldType: 'color' } }),
  themeColor: text({ ui: { fieldType: 'color' } }),
}
```

2. Per-Field Override - Pass a component directly for one-off customization:

```ts
import { SlugField } from './components/SlugField'

fields: {
  slug: text({
    ui: { component: SlugField }, // Used only for this field
  })
}
```

Component Resolution Priority:

1. `ui.component` (per-field override) - highest priority
2. `ui.fieldType` (global registry lookup by custom type name)
3. `fieldConfig.type` (default registry lookup by field type)
See: examples/custom-field for a complete working example demonstrating both patterns.
The stack supports third-party field packages as separate npm packages. This allows developers to add rich functionality without bloating the core stack.
Example: @opensaas/stack-tiptap - Rich text editor integration
Package Structure:

```
packages/my-field/
├── src/
│   ├── fields/
│   │   └── myField.ts             # Field builder with Zod/Prisma/TS generators
│   ├── components/
│   │   └── MyFieldComponent.tsx   # React component (client-side)
│   ├── styles/
│   │   └── my-field.css           # Optional styles
│   └── index.ts                   # Public exports
├── package.json
└── README.md
```
Key Requirements:
1. **Field Builder** - Must implement `BaseFieldConfig`:

   ```ts
   import type { BaseFieldConfig } from '@opensaas/stack-core'

   export type MyField = BaseFieldConfig & {
     type: 'myField'
     // Your custom options
   }

   export function myField(options?): MyField {
     return {
       type: 'myField',
       ...options,
       getZodSchema: (fieldName, operation) => { /* ... */ },
       getPrismaType: (fieldName) => { /* ... */ },
       getTypeScriptType: () => { /* ... */ },
     }
   }
   ```

2. **React Component** - Must accept standard field props:

   ```ts
   export interface MyFieldProps {
     name: string
     value: any
     onChange: (value: any) => void
     label: string
     error?: string
     disabled?: boolean
     required?: boolean
     mode?: 'read' | 'edit'
     // Your custom UI options from fieldConfig.ui
   }
   ```

3. **Client-Side Registration** - Due to Next.js server/client boundaries:

   ```ts
   // lib/register-fields.ts
   'use client'
   import { registerFieldComponent } from '@opensaas/stack-ui'
   import { MyFieldComponent } from '@my-org/my-field'

   registerFieldComponent('myField', MyFieldComponent)
   ```

   Then import in admin page:

   ```ts
   // app/admin/[[...admin]]/page.tsx
   import '../../../lib/register-fields' // Side-effect import
   ```

4. **FieldConfig Extensibility** - Core types support third-party fields:

   ```ts
   // FieldConfig union includes BaseFieldConfig to allow custom types
   export type FieldConfig =
     | TextField
     | IntegerField
     | ...
     | BaseFieldConfig // Allows third-party fields
   ```
See:
- `packages/tiptap/` - Complete reference implementation
- `examples/tiptap-demo/` - Usage example with client-side registration
The blog example's test script (README test code) exercises all access control paths:
- Anonymous vs. authenticated users
- Published vs. draft posts
- Author vs. non-author access
- Field-level access (internalNotes)
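These paths boil down to simple predicates over the session and the item. A minimal sketch of the read rule the script exercises (types and names here are hypothetical, not the blog example's actual code):

```typescript
// Hypothetical shapes: the stack leaves the session type up to you
type Session = { userId: string } | null

interface Post {
  authorId: string
  published: boolean
}

// Anyone can read published posts; drafts are visible only to their author
function canReadPost(session: Session, post: Post): boolean {
  return post.published || (session !== null && session.userId === post.authorId)
}
```

An anonymous (`null`) session therefore sees only published posts, while an authenticated author also sees their own drafts.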
This project uses ESM (`"type": "module"` in package.json):
- All imports must include `.js` extensions (not `.ts`)
- Use `import type` for type-only imports
- Config: `moduleResolution: "bundler"`, `module: "ESNext"`
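Those compiler options correspond to a tsconfig excerpt like this (a minimal sketch; each package's actual tsconfig sets more than these two fields):

```json
{
  "compilerOptions": {
    "module": "ESNext",
    "moduleResolution": "bundler"
  }
}
```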
The session object passed to access control functions is user-defined. The stack only requires that it exists; it doesn't enforce a structure. Common pattern:

```ts
{
  userId: string
} // or null for anonymous
```

The context uses generic typing to preserve Prisma Client types:

```ts
const context = await getContext<typeof prisma>(config, prisma, session)
// context.db operations are fully typed
```

The UI layer automatically passes custom UI options from field configs to components:
```ts
// In config
fields: {
  content: richText({
    ui: {
      placeholder: 'Write your content...',
      minHeight: 300,
      maxHeight: 800,
    },
  })
}

// Component automatically receives these as props
export function MyField({ placeholder, minHeight, maxHeight, ...baseProps }) {
  // UI options are automatically passed through
}
```

The FieldRenderer extracts `component` and `fieldType` from ui options, then passes all remaining options to the component. This allows field types to define custom UI behaviors without modifying core stack code.
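As an illustration of that extraction (a sketch, not the actual FieldRenderer source; all names here are illustrative):

```typescript
// The renderer consumes `component` and `fieldType` itself and forwards
// every remaining ui option to the field component as props
type UiOptions = { component?: unknown; fieldType?: string } & Record<string, unknown>

function splitUiOptions(ui: UiOptions) {
  const { component, fieldType, ...componentProps } = ui
  return { component, fieldType, componentProps }
}

const { componentProps } = splitUiOptions({
  fieldType: 'richText',
  placeholder: 'Write your content...',
  minHeight: 300,
})
// componentProps: { placeholder: 'Write your content...', minHeight: 300 }
```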
Current generators are basic:
- No migration support (use `prisma db push`)
- No introspection support
- Limited Prisma features (no raw queries, transactions, etc.)
Tests use Vitest. Run from the core package:

```bash
cd packages/core
pnpm test
```

This monorepo uses changesets for versioning and publishing. Every change to a package must be accompanied by a new changeset file.
IMPORTANT: When working with Claude Code, you MUST use the pr-changeset skill to create changeset files. The changeset CLI doesn't work in the Claude Code environment, so use the skill instead.
The pr-changeset skill:
- Automatically creates changeset files in the `.changeset/` directory
- Enforces versioning rules (patch for bug fixes, minor for features, major only when explicitly requested)
- Provides templates and examples for proper changeset format
- Ensures consistent changeset descriptions across the project
Versioning Rules:
- patch: Bug fixes only (max 2 lines)
- minor: New features or enhancements (include usage examples)
- major: Breaking changes (only when user explicitly requests, include migration guide)
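For reference, a changeset is a small markdown file in `.changeset/` with YAML frontmatter naming the affected package and bump type. A hypothetical patch changeset (package name real, description invented) might look like:

```
---
'@opensaas/stack-core': patch
---

Fix generated Zod schema for optional text fields
```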
Manual changeset creation (if not using Claude Code):
1. Create a changeset: run `pnpm changeset`, then follow the prompts to select packages and version bumps.
2. Commit your changes, including the changeset file. Version bumping and publishing are handled automatically by changesets during release in a GitHub Action.
Claude plugins in claude-plugins/* use direct semver versioning in JSON files. Whenever you modify plugin code, skills, commands, agents, or marketplace root files, you must bump the version using the plugin-version skill.
IMPORTANT: When working with Claude Code, you MUST use the plugin-version skill to bump plugin versions. It handles both the plugin's own plugin.json and the matching entry in .claude-plugin/marketplace.json.
The plugin-version skill:
- Detects which plugin directories changed (`claude-plugins/opensaas-stack/`, `claude-plugins/opensaas-migration/`, `.claude-plugin/`)
- Determines patch vs minor bump (patch for fixes, minor for new capabilities, major only when explicitly requested)
- Directly edits version fields in JSON; no changeset files are created
- Keeps `plugin.json` and `marketplace.json` plugin entries in sync
- Only bumps `marketplace.metadata.version` when the marketplace structure itself changed (not just plugin version numbers)
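As a sketch of the sync the skill maintains (paths and version values are illustrative):

```
// claude-plugins/opensaas-stack/plugin.json (exact path may differ)
{ "name": "opensaas-stack", "version": "0.4.1" }

// .claude-plugin/marketplace.json: the matching plugins[] entry
{ "plugins": [{ "name": "opensaas-stack", "version": "0.4.1" }] }
```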
- Data passed as props to a component marked with `"use client"` must be serialisable and must only contain the minimum data required to make that component work
- Avoid the `any` type, and do not use type casting. All types must be strongly typed to ensure type safety; the `unknown` and `any` types must never be exposed as an external type and are only to be used internally (within a package) where absolutely necessary
- All new examples must have a package name of `opensaas-<example-name>-example` to ensure consistency across the monorepo
- When installing packages, first check whether the package is already in use in another package or example, then make sure the versions match across all packages and examples to avoid installing multiple versions of the same package
- When adding a new example, always use the `create-opensaas-app` script from @packages/create-opensaas-app; this ensures the example is set up correctly and keeps the init script up to date with any changes
- Always run `pnpm lint`, `pnpm manypkg fix`, and `pnpm format` to ensure code quality and consistency before committing any changes
- The repo URL is https://github.com/OpenSaasAU/stack and the docs site is https://stack.opensaas.au/