diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md new file mode 100644 index 0000000..78e48b9 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -0,0 +1,23 @@ +--- +name: Bug report +about: Report a bug +--- + +## Description + + + +## Steps to reproduce + +1. +2. + +## Expected behavior + + + +## Environment + +- OS: +- Node version: +- Think version: diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md new file mode 100644 index 0000000..ed268cf --- /dev/null +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -0,0 +1,10 @@ +## Summary + + + +## Test plan + +- [ ] `npm run lint` passes +- [ ] `npm run test:ports` passes +- [ ] `npm run test:m1` passes +- [ ] Docs updated if user-facing diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 49e5785..ca129c3 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -20,8 +20,8 @@ jobs: matrix: node-version: [22] steps: - - uses: actions/checkout@v4 - - uses: actions/setup-node@v4 + - uses: actions/checkout@v5 + - uses: actions/setup-node@v5 with: node-version: ${{ matrix.node-version }} cache: npm diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index 295b953..63e8d38 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -17,8 +17,8 @@ jobs: sanity: runs-on: ubuntu-latest steps: - - uses: actions/checkout@v4 - - uses: actions/setup-node@v4 + - uses: actions/checkout@v5 + - uses: actions/setup-node@v5 with: node-version: 22 cache: npm @@ -49,7 +49,7 @@ jobs: needs: sanity runs-on: ubuntu-latest steps: - - uses: actions/checkout@v4 + - uses: actions/checkout@v5 with: fetch-depth: 0 diff --git a/AGENTS.md b/AGENTS.md index 5baa55c..fd6d696 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -19,6 +19,7 @@ Do not audit the repository by recursively walking the filesystem. Follow the au ### 2. The Bedrock - **`ARCHITECTURE.md`**: The authoritative structural reference (Git, WARP, Minds). +- **`docs/INFRASTRUCTURE_DOCTRINE.md`**: The runtime-first engineering standards (MANDATORY). - **`docs/VISION.md`**: Core tenets and the capture doctrine. - **`docs/method/process.md`**: Repo work doctrine (Backlog lanes, Cycle loop). diff --git a/CHANGELOG.md b/CHANGELOG.md index fb88bc7..9e317a2 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -10,6 +10,15 @@ Release discipline: - `package.json` version is bumped on the release commit - a Git tag is created on the commit that lands on `main` for that release +## Unreleased + +- fixed MCP tool result envelopes so structured content matches each registered output schema again +- fixed checkpoint-backed reads to use public `@git-stunts/git-warp` package exports instead of private `node_modules` internals +- fixed cached writer retries across raw capture follow-through, annotations, reflect writes, migrations, and enrichment patches +- fixed enrichment search-index invalidation and per-repo cache scoping, and counted semantic-parse receipts in enrichment results +- documented `--annotate`, `--enrich`, and `--topics` in CLI help and validated stray positional text for enrichment/topic commands +- cleaned whitespace in the infrastructure doctrine and reflect command source so diff checks pass + ## [0.7.0] - 2026-04-11 - added `think --doctor` health check command — reports think directory, local repo, graph model version, entry count, and upstream reachability (with `git ls-remote` connectivity test) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 0611731..cc1759b 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -90,7 +90,7 @@ npm run benchmark:browse ## Coding standard -New JavaScript should follow [System-Style JavaScript](./docs/SYSTEMS_STYLE_JAVASCRIPT.md): runtime-backed domain concepts, boundary validation, explicit ownership of behavior, and narrow seams instead of object-shape soup. +New JavaScript should follow **[Infrastructure Doctrine](./docs/INFRASTRUCTURE_DOCTRINE.md)** and [System-Style JavaScript](./docs/SYSTEMS_STYLE_JAVASCRIPT.md): runtime-backed domain concepts, boundary validation, explicit ownership of behavior, and narrow seams instead of object-shape soup. Read [`ARCHITECTURE.md`](./ARCHITECTURE.md) before making structural changes. Do not let storage concerns leak into normal UX, and do not let surface-specific concerns infect the capture core. diff --git a/GUIDE.md b/GUIDE.md index 7e1ed16..6dd066f 100644 --- a/GUIDE.md +++ b/GUIDE.md @@ -26,7 +26,7 @@ Return to your archive through high-fidelity browse or context-aware recall. ### 4. Pressure-Testing (Reflect) Move beyond simple capture by challenging your ideas through structured prompt families. -- **Run**: `think --reflect` +- **Run**: `think --reflect` (CLI-only; MCP reflect is not yet available) - **Modes**: `challenge`, `constraint`, `sharpen` ## Big Picture: System Orchestration @@ -40,7 +40,7 @@ Think is a tiered engine designed to keep capture cheap while enabling rich re-e ## Orientation Checklist - [ ] **I am setting up a new machine**: Start with `README.md` Quick Start. -- [ ] **I want to separate my agent's thoughts**: Use `THINK_REPO_DIR` in an agent wrapper script. +- [ ] **I want to separate my agent's thoughts**: See [Mind Orchestration](./docs/MIND_ORCHESTRATION.md) for the multi-mind pattern. - [ ] **I need to backup my archive**: Configure `THINK_UPSTREAM_URL`. - [ ] **I am debugging the TUI**: Start with `ADVANCED_GUIDE.md`. - [ ] **I am contributing to Think**: Read `docs/method/process.md` and `docs/BEARING.md`. diff --git a/README.md b/README.md index ef70f42..41f04dc 100644 --- a/README.md +++ b/README.md @@ -21,7 +21,7 @@ Unlike traditional note-taking apps that prioritize organization over ingestion, ## Quick Start ### 1. Local Setup -Clone, install dependencies, and capture your first thought. +Requires **Node.js >= 22** and **Git**. Clone, install, capture. ```bash npm install node ./bin/think.js "first captured thought" diff --git a/docs/AMBIENT_CONTEXT.md b/docs/AMBIENT_CONTEXT.md new file mode 100644 index 0000000..3190a4d --- /dev/null +++ b/docs/AMBIENT_CONTEXT.md @@ -0,0 +1,121 @@ +# Ambient Context and Recall + +How Think collects, persists, and uses project context for capture +provenance and recall matching. + +## Overview + +When a thought is captured, Think records the ambient project context +— the working directory, git remote, branch, and derived project +tokens — alongside the raw text. This context powers `--remember`, +which matches prior thoughts by project affinity rather than requiring +explicit search terms. + +## Collection + +Two levels of context are collected at different points in the capture +flow: + +### Capture ambient context (`getCaptureAmbientContext`) + +Collected synchronously during `saveRawCapture`. Cheap — no git +probes. + +| Field | Source | Notes | +|-------|--------|-------| +| `cwd` | `path.resolve(cwd)` | Resolved working directory | +| `projectName` | Derived from cwd basename | Fallback when no git | +| `projectTokens` | Derived from projectName | Lowercased, split on non-alphanumeric | + +### Full ambient context (`getAmbientProjectContext`) + +Collected during `finalizeCapturedThought` follow-through. Runs three +`git` probes via `spawnSync`: + +| Field | Git command | Notes | +|-------|-------------|-------| +| `gitRoot` | `rev-parse --show-toplevel` | Absolute path to repo root | +| `gitRemote` | `config --get remote.origin.url` | Origin remote URL | +| `gitBranch` | `branch --show-current` | Current branch name | +| `projectName` | Derived from gitRemote → gitRoot → cwd | Priority order | +| `projectTokens` | All candidates, lowercased, split | Used for recall matching | + +### Resolution priority + +The `projectName` is derived in priority order: +1. Last segment of `gitRemote` URL (sans `.git`) +2. Basename of `gitRoot` +3. Basename of `cwd` + +### Where resolution happens + +The CLI and MCP layers resolve ambient context at the boundary +(`process.cwd()`) and pass it into the store functions. The store +layer does not read `process.cwd()` or shell out to git directly. + +## Persistence + +Context fields are stored as WARP node properties on the entry: + +| Property | Source | +|----------|--------| +| `ambientCwd` | `cwd` | +| `ambientGitRoot` | `gitRoot` (follow-through only) | +| `ambientGitRemote` | `gitRemote` (follow-through only) | +| `ambientGitBranch` | `gitBranch` (follow-through only) | + +The two-phase write means `ambientCwd` is available immediately after +capture, while git fields are backfilled during follow-through. This +preserves capture latency. + +## Recall matching (`--remember`) + +### Ambient recall (no query) + +`buildAmbientRememberScope(cwd)` resolves the current project context +and matches stored entries by affinity: + +| Match kind | Condition | Score | Tier | +|------------|-----------|-------|------| +| `ambient_git_remote` | Entry's `ambientGitRemote` matches current | 100 | 3 | +| `ambient_git_root` | Entry's `ambientGitRoot` matches current | 50 | 3 | +| `ambient_cwd` | Entry's `ambientCwd` matches current | 25 | 3 | +| `ambient_git_branch` | Entry's `ambientGitBranch` matches current | 15 | 3 | +| `project_tokens_text` | Entry text contains any current project token | 5 per token | 3 | + +Results are sorted by score (highest first), then by recency. + +### Explicit recall (with query) + +`buildExplicitRememberScope(query)` splits the query into terms and +matches against entry text. Terms are lowercased and split on +non-alphanumeric boundaries. + +| Match kind | Condition | Score | +|------------|-----------|-------| +| `query_terms` | Entry text contains query terms | 1 per term | + +### Provenance + +Capture provenance (`CaptureProvenance`) is separate from ambient +context. It records how the thought entered the system: + +| Field | Source | Values | +|-------|--------|--------| +| `ingress` | Capture surface | `url`, `shortcut`, `selected_text`, `share` | +| `sourceApp` | Originating application | Free text (trimmed) | +| `sourceURL` | Source URL | `http:` or `https:` only | + +Provenance is normalized at the boundary via +`normalizeCaptureProvenance` and persisted as entry properties +(`captureIngress`, `captureSourceApp`, `captureSourceURL`). + +## Files + +| File | Role | +|------|------| +| `src/project-context.js` | Collection and token generation | +| `src/capture-provenance.js` | Provenance normalization | +| `src/store/capture.js` | Persistence (saveRawCapture, finalize) | +| `src/store/remember.js` | Recall scope and matching | +| `src/store/queries.js` | Query execution (rememberThoughts) | diff --git a/docs/INFRASTRUCTURE_DOCTRINE.md b/docs/INFRASTRUCTURE_DOCTRINE.md new file mode 100644 index 0000000..5329bd1 --- /dev/null +++ b/docs/INFRASTRUCTURE_DOCTRINE.md @@ -0,0 +1,201 @@ +# How to write TypeScript infrastructure that *actually* lasts. + +This is the authoritative doctrine for Think. It is a refined, battle-tested version of the original "Runtime Truth Wins" philosophy. Infrastructure code (persistence, replication, crypto, conflict resolution, migrations, audit logs) cannot afford weak assumptions. They create long-lived, expensive bugs. + +--- + +### Rule 0: Runtime Truth Wins (Non-Negotiable) + +When the program is running, only one question matters: + +**What is actually true right now, in memory, under execution?** + +Everything else — types, comments, tests, design docs — is secondary. If they disagree with runtime reality, they are lying. Fix the reality first, then update the documentation. + +**Hierarchy of Truth** + +``` +1. Runtime (constructors, invariants, methods, errors) +2. Boundary parsers & schemas +3. Tests (executable specification) +4. TypeScript types (checked documentation) +5. IDE / static analysis +6. Design docs & comments +``` + +TypeScript is #4 — a powerful servant, never the master. + +--- + +### Core Philosophy + +- Prioritize **truth-seeking** over cleverness. +- Favor **boring, explicit, and robust**. +- Default to **immutability**. +- Treat **portability as a feature** (browser-first mindset). +- Make correctness cheap; performance comes after. + +--- + +### Language Policy + +**TypeScript is the primary language.** Strong IDE support and ecosystem make it the right default. + +**Banned without mercy:** +- `any` +- `unknown` escaping boundaries +- Type assertions (`as`) +- `enum` +- `throw new Error("string")` +- Magic numbers & strings +- Boolean trap parameters +- Anonymous option bags in public APIs + +**Encouraged:** +- Classes for domain concepts with invariants or behavior +- `readonly` + `private` fields + `Object.freeze()` +- Branded classes for cross-realm safety +- Rust → Wasm when TypeScript is insufficient (performance, memory safety, hostile parsing) + +**Canonical Boundary Pattern** + +```typescript +function parsePatchFromWire(bytes: Uint8Array): PatchV2 { + const raw = cborDecode(bytes); // untrusted + return PatchV2.fromDecoded(raw); // validates + constructs trusted domain object +} + +function applyPatch(patch: PatchV2): Result { ... } +``` + +--- + +### Architecture + +**Hexagonal (Ports & Adapters) — Mandatory** + +Core domain logic must never depend on host-specific APIs (Node globals, `fs`, `Buffer`, `process`, etc.). All external concerns go behind clean ports. + +**Browser-First Mindset** + +Prefer web-standard primitives: +- `Uint8Array`, `TextEncoder`, `URL`, `crypto.subtle` +- Keep core logic portable across browsers, Node, Deno, and workers. + +--- + +### Object Model – The Four Pillars + +1. **Value Objects** — Invariant-rich, immutable, equality by value +2. **Entities** — Identity + lifecycle +3. **Outcomes / Results** — Rich classes (preferred over tagged unions when behavior differs) +4. **Domain Errors** — Typed, contextual, first-class + +**Example: Value Object** + +```typescript +class EventId { + readonly writerId: WriterId; + readonly lamport: Lamport; + + private readonly brand = Symbol.for('grok.EventId'); + + constructor(writerId: string, lamport: number) { + this.writerId = WriterId.from(writerId); + this.lamport = Lamport.from(lamport); + Object.freeze(this); + } + + static is(value: unknown): value is EventId { + return value instanceof EventId + || (value != null && (value as any)[EventId.prototype.brand] === true); + } + + equals(other: EventId): boolean { + return this.writerId.equals(other.writerId) && this.lamport === other.lamport; + } +} +``` + +**Preferred Outcomes** + +```typescript +class OpApplied { ... } +class OpSuperseded { ... } + +// Clean polymorphic dispatch +if (outcome instanceof OpSuperseded) { ... } +``` + +--- + +### Principles + +**P1: Domain Concepts Demand Runtime Forms** +If it has invariants, identity, or behavior — give it a class. + +**P2: Validation at Construction & Boundaries** +Constructors are synchronous and establish invariants or throw. Raw data becomes trusted only here. + +**P3: Behavior Belongs on the Owner** +Prefer polymorphism over type-tag switching. + +**P4: Schemas Are Boundary Guards Only** +Use Zod (or similar) at system edges. Keep domain classes clean. + +**P5: Serialization Is Codec Territory** +Domain objects should not know about JSON, CBOR, protobuf, etc. + +**P6: Immutability by Default** +Trusted objects should be difficult to mutate after construction. Use `readonly`, `freeze`, and return new values for transformations. + +**P7: Determinism & Replayability** +- All time comes from `ClockPort` +- All randomness from `RandomPort` +- All side effects through ports +Your core should be deterministic and replayable. + +**P8: Single Source of Truth** +The runtime model rules. Types, tests, and docs document it. + +**P9: Runtime Dispatch When Appropriate** +`instanceof` is excellent inside the same realm. Use branding + `static is()` for cross-realm (workers, iframes). + +--- + +### Practices + +- One meaningful class or export per file, named after the concept. +- Parameter objects must have semantic meaning. +- Branch on error types, never `err.message`. +- Prefer composition over deep inheritance. +- No floating promises. +- Raw plain objects are for transport/logging only — not for domain meaning. + +--- + +### Anti-Patterns I Strongly Dislike + +- Shape soup (giant unions + endless type guards) +- God classes +- Leaking host APIs into core +- Treating types as the source of truth +- Parsing error messages like a raccoon in a dumpster + +--- + +### Review Checklist (Before Merging) + +- Does every important domain concept have a runtime-backed class? +- Any `any`, `unknown`, or `as` sneaking in? +- Are invariants enforced at construction time? +- Does behavior live on the owning type? +- Could this core logic run in a browser? +- Are time, randomness, and side effects properly abstracted? +- Are we mutating trusted domain objects? + +--- + +**This is infrastructure.** It should feel like building a reliable, inspectable machine — not gluing components together with hope. + +**Runtime truth wins.** Types are there to help you stay honest, not to replace reality. diff --git a/docs/MIND_ORCHESTRATION.md b/docs/MIND_ORCHESTRATION.md new file mode 100644 index 0000000..8a69ffc --- /dev/null +++ b/docs/MIND_ORCHESTRATION.md @@ -0,0 +1,121 @@ +# Mind Orchestration + +Think supports multiple **minds** — separate thought archives that +live side by side under `~/.think/`. Each mind is a self-contained +git-backed repository with its own captures, sessions, and derived +artifacts. + +## What is a mind? + +Any directory under `~/.think/` that contains a git repository +(a `.git/` subdirectory) is a mind. The directory name is the +mind's display name. + +``` +~/.think/ + repo/ → "default" mind (the original single-mind path) + claude/ → "claude" mind + work/ → "work" mind + metrics/ → NOT a mind (no .git/) +``` + +The special directory `~/.think/repo` displays as **"default"** for +backward compatibility — it's the mind Think uses when no other is +selected. + +## Creating a mind + +```bash +mkdir ~/.think/work +cd ~/.think/work +git init +``` + +That's it. Think discovers it automatically on the next browse launch. +No configuration files, no registration step. The filesystem is the +registry. + +## Discovery + +`discoverMinds()` in `src/minds.js` scans `~/.think/` for directories +containing a valid git repo. Results are sorted: default first, then +alphabetical by name. Non-directory entries and directories without +`.git/` are ignored. + +## Browsing minds + +### Splash screen + +When you launch `think --browse`, the splash screen shows the active +mind's name (e.g., `◀ default ▶`). If multiple minds exist: + +- **Tab** — cycle to the next mind +- **Shift+Tab** — cycle to the previous mind +- **Left/Right arrows** — cycle shaders manually (within the current mind) +- **Enter** — open the selected mind + +Each mind gets a **deterministic shader** derived from its name via +a djb2 hash. The same mind always looks the same visually. + +When only one mind exists, the splash behaves exactly as before — +no mind label, Tab cycles shaders. + +### Browse TUI + +Inside the browse TUI, press **`m`** to open the mind switcher — a +command palette listing all discovered minds. Select one to switch. +The browse session tears down and re-bootstraps with the new mind's +data. + +The header shows the active mind name when multiple minds exist +(e.g., `THINK BROWSE [claude]`). + +## Capture + +Capture always goes to the default mind (`~/.think/repo`) or +whatever `THINK_REPO_DIR` points to. Mind selection in browse is +read-only — it does not change which mind receives new captures. + +To capture into a specific mind, set the environment variable: + +```bash +THINK_REPO_DIR=~/.think/work think "work thought" +``` + +## THINK_REPO_DIR interaction + +When `THINK_REPO_DIR` is set, it overrides the default mind for +both capture and browse. The mind switcher in the TUI is limited to +a single-element list containing the overridden path. + +When `THINK_REPO_DIR` is not set, Think discovers all minds under +`~/.think/` and uses `~/.think/repo` as the default. + +## Agent isolation + +Agents can maintain their own thought archives by using separate +mind directories: + +```bash +# Create a mind for an agent +mkdir -p ~/.think/claude && cd ~/.think/claude && git init + +# Agent captures into its own mind +THINK_REPO_DIR=~/.think/claude think "agent thought" + +# Human browses the agent's mind in the TUI (press m to switch) +think --browse +``` + +This keeps human and agent thought streams separate without +configuration files or process isolation — just filesystem +boundaries. + +## Limitations + +- **No cross-mind search** — `think --remember` searches the default + mind only. Cross-mind recall is a backlog item. +- **No mind creation from CLI** — `think --mind=` is a backlog + item. For now, use `mkdir + git init`. +- **No per-mind themes** — all minds share the same plum palette. + Per-mind color themes are a backlog item. diff --git a/docs/design/0009-clarify-reflect-mcp-status/clarify-reflect-mcp-status.md b/docs/design/0009-clarify-reflect-mcp-status/clarify-reflect-mcp-status.md new file mode 100644 index 0000000..04ae03d --- /dev/null +++ b/docs/design/0009-clarify-reflect-mcp-status/clarify-reflect-mcp-status.md @@ -0,0 +1,47 @@ +--- +title: "Clarify Reflect MCP status" +legend: "SURFACE" +cycle: "0009-clarify-reflect-mcp-status" +source_backlog: "docs/method/backlog/asap/SURFACE_clarify-reflect-mcp-status.md" +--- + +# Clarify Reflect MCP status + +Source backlog item: `docs/method/backlog/asap/SURFACE_clarify-reflect-mcp-status.md` +Legend: SURFACE + +## Sponsors + +- Human: James +- Agent: Claude + +## Hill + +Docs accurately describe what's available on each surface — no +implied capabilities that don't exist yet. + +## Playback Questions + +### Human + +- [ ] Does GUIDE.md clarify that reflect is CLI-only? + +### Agent + +- [ ] Does agent isolation advice mention the multi-mind pattern? + +## Accessibility / Localization / Agent Inspectability + +Not applicable — documentation fix only. + +## Non-goals + +- Not adding MCP reflect support in this cycle + +## Backlog Context + +The README implies full MCP support for reflect, but it's currently +CLI-first. Update README and MCP docs to state that reflect is a +CLI-first experience with MCP support in the backlog. + +Source: documentation-quality audit 2026-04-11 §1.1, §2.1. diff --git a/docs/design/0010-document-mind-orchestration/document-mind-orchestration.md b/docs/design/0010-document-mind-orchestration/document-mind-orchestration.md new file mode 100644 index 0000000..3ee31d4 --- /dev/null +++ b/docs/design/0010-document-mind-orchestration/document-mind-orchestration.md @@ -0,0 +1,51 @@ +--- +title: "Document mind orchestration" +legend: "SURFACE" +cycle: "0010-document-mind-orchestration" +source_backlog: "docs/method/backlog/up-next/SURFACE_document-mind-orchestration.md" +--- + +# Document mind orchestration + +Source backlog item: `docs/method/backlog/up-next/SURFACE_document-mind-orchestration.md` +Legend: SURFACE + +## Sponsors + +- Human: James +- Agent: Claude + +## Hill + +A new operator can create, discover, and browse multiple minds after +reading one document. + +## Playback Questions + +### Human + +- [ ] Does the doc explain how to create a mind? +- [ ] Does it explain how discovery works? +- [ ] Does it explain human/agent separation? + +### Agent + +- [ ] Does the doc explain the TUI mind switcher? +- [ ] Does it explain THINK_REPO_DIR interaction? +- [ ] Is the doc linked from README and GUIDE? + +## Accessibility and Assistive Reading + +- Not applicable — prose documentation. + +## Localization and Directionality + +- Not applicable. + +## Agent Inspectability and Explainability + +- Not applicable. + +## Non-goals + +- No new code — documentation only diff --git a/docs/design/0011-doctor-inconsistent-skip-logic/doctor-inconsistent-skip-logic.md b/docs/design/0011-doctor-inconsistent-skip-logic/doctor-inconsistent-skip-logic.md new file mode 100644 index 0000000..40c747c --- /dev/null +++ b/docs/design/0011-doctor-inconsistent-skip-logic/doctor-inconsistent-skip-logic.md @@ -0,0 +1,60 @@ +--- +title: "Doctor checks have inconsistent skip logic" +legend: "CORE" +cycle: "0011-doctor-inconsistent-skip-logic" +source_backlog: "docs/method/backlog/bad-code/CORE_doctor-inconsistent-skip-logic.md" +--- + +# Doctor checks have inconsistent skip logic + +Source backlog item: `docs/method/backlog/bad-code/CORE_doctor-inconsistent-skip-logic.md` +Legend: CORE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Graph model and entry count checks skip when `repoOk` is false OR +callback is missing. But upstream reports "ok" when URL is set but +no `checkUpstreamReachable` callback is provided — giving false +confidence that the upstream was validated. + +Standardize: all checks should skip if they lack the means to verify. + +File: `src/doctor.js` diff --git a/docs/design/0011-shaderForMind-no-input-validation/shaderForMind-no-input-validation.md b/docs/design/0011-shaderForMind-no-input-validation/shaderForMind-no-input-validation.md new file mode 100644 index 0000000..c3490df --- /dev/null +++ b/docs/design/0011-shaderForMind-no-input-validation/shaderForMind-no-input-validation.md @@ -0,0 +1,57 @@ +--- +title: "shaderForMind lacks input validation" +legend: "CORE" +cycle: "0011-shaderForMind-no-input-validation" +source_backlog: "docs/method/backlog/bad-code/CORE_shaderForMind-no-input-validation.md" +--- + +# shaderForMind lacks input validation + +Source backlog item: `docs/method/backlog/bad-code/CORE_shaderForMind-no-input-validation.md` +Legend: CORE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +`shaderForMind(name, shaderCount)` does not validate that +`shaderCount > 0`. If 0 or negative, `Math.abs(hash) % shaderCount` +produces `NaN` or `Infinity` silently. + +File: `src/minds.js` diff --git a/docs/design/0011-unused-browseStartMs-field/unused-browseStartMs-field.md b/docs/design/0011-unused-browseStartMs-field/unused-browseStartMs-field.md new file mode 100644 index 0000000..64bd372 --- /dev/null +++ b/docs/design/0011-unused-browseStartMs-field/unused-browseStartMs-field.md @@ -0,0 +1,36 @@ +--- +title: "Unused browseStartMs field in windowed model" +legend: "CORE" +cycle: "0011-unused-browseStartMs-field" +source_backlog: "docs/method/backlog/bad-code/CORE_unused-browseStartMs-field.md" +--- + +# Unused browseStartMs field in windowed model + +Source backlog item: `docs/method/backlog/bad-code/CORE_unused-browseStartMs-field.md` +Legend: CORE + +## Sponsors + +- Human: James +- Agent: Claude + +## Hill + +No dead fields in the browse model. + +## Playback Questions + +### Agent + +- [ ] Is `browseStartMs` absent from model.js? + +## All postures + +Not applicable — dead code removal. + +## Backlog Context + +`browseStartMs` was added to the windowed browse model during cycle +0004 for a fade-in approach that was later replaced. The field is set +in `createWindowedBrowseModel` but never read. Remove it. diff --git a/docs/design/0012-audit-plain-object-model/audit-plain-object-model.md b/docs/design/0012-audit-plain-object-model/audit-plain-object-model.md new file mode 100644 index 0000000..fc6c018 --- /dev/null +++ b/docs/design/0012-audit-plain-object-model/audit-plain-object-model.md @@ -0,0 +1,63 @@ +--- +title: "Core entry and session concepts are still plain objects" +legend: "CORE" +cycle: "0012-audit-plain-object-model" +source_backlog: "docs/method/backlog/bad-code/CORE_audit-plain-object-model.md" +--- + +# Core entry and session concepts are still plain objects + +Source backlog item: `docs/method/backlog/bad-code/CORE_audit-plain-object-model.md` +Legend: CORE + +## Sponsors + +- Human: James +- Agent: Claude + +## Hill + +`createEntry` and `createReflectSession` return domain class instances +with validated construction, not anonymous bags. + +## Playback Questions + +### Agent + +- [ ] Are `Entry` and `ReflectSession` classes with constructor validation? +- [ ] Can existing callers use them without changing property access? +- [ ] Do all existing tests pass unchanged? + +## All postures + +Not applicable — internal refactor, no behavior change. + +## Non-goals + +- Not migrating consumers to use instanceof checks yet (that's the + next cycle: CORE_ssjr-src-store-model) +- Not adding methods to the classes yet — fields only + +## Design + +Replace the two factory functions with classes: + +```js +class Entry { + constructor(text, writerId, { kind, source }) { + // validate, assign fields, freeze + } +} + +class ReflectSession { + constructor(writerId, { seedEntryId, ... }) { + // validate, assign fields, freeze + } +} +``` + +Callers continue to use `entry.id`, `entry.text`, etc. — property +access is identical. The classes are frozen to preserve immutability. + +Two callers for each: `capture.js` and `reflect.js`. 164 property +accesses across 18 files remain unchanged. diff --git a/docs/design/0012-buildStatsSparkline-duplication/buildStatsSparkline-duplication.md b/docs/design/0012-buildStatsSparkline-duplication/buildStatsSparkline-duplication.md new file mode 100644 index 0000000..3a85b8f --- /dev/null +++ b/docs/design/0012-buildStatsSparkline-duplication/buildStatsSparkline-duplication.md @@ -0,0 +1,59 @@ +--- +title: "buildStatsSparkline duplicates logic from formatStats" +legend: "SURFACE" +cycle: "0012-buildStatsSparkline-duplication" +source_backlog: "docs/method/backlog/bad-code/SURFACE_buildStatsSparkline-duplication.md" +--- + +# buildStatsSparkline duplicates logic from formatStats + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_buildStatsSparkline-duplication.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Both `formatStats()` and `buildStatsSparkline()` in `src/mcp/format.js` +do the same `buckets.map(b => b.count).reverse()` → `sparkline()`. +`formatStats` does it inline AND `buildStatsSparkline` is exported for +`read.js`. Either inline everywhere or have `formatStats` call the +shared function — don't do both. + +File: `src/mcp/format.js` diff --git a/docs/design/0013-ssjr-src-store-model-js/ssjr-src-store-model-js.md b/docs/design/0013-ssjr-src-store-model-js/ssjr-src-store-model-js.md new file mode 100644 index 0000000..49e5b0b --- /dev/null +++ b/docs/design/0013-ssjr-src-store-model-js/ssjr-src-store-model-js.md @@ -0,0 +1,46 @@ +--- +title: "Raise SSJR grades for `src/store/model.js`" +legend: "CORE" +cycle: "0013-ssjr-src-store-model-js" +source_backlog: "docs/method/backlog/bad-code/CORE_ssjr-src-store-model-js.md" +--- + +# Raise SSJR grades for `src/store/model.js` + +Source backlog item: `docs/method/backlog/bad-code/CORE_ssjr-src-store-model-js.md` +Legend: CORE + +## Sponsors + +- Human: James +- Agent: Claude + +## Hill + +`src/store/model.js` uses validated domain constants for entry kinds +and bucket periods instead of magic strings, and does not read +process globals directly. + +## Playback Questions + +### Agent + +- [ ] Are ENTRY_KINDS and BUCKET_PERIODS exported validated sets? +- [ ] Does `getCurrentTime` no longer read `process.env` directly? +- [ ] Does `createWriterId` no longer read `os.hostname` directly? +- [ ] Do all existing tests pass? + +## All postures + +Not applicable — internal refactor. + +## Non-goals + +- Not moving comparators onto Entry (methods cycle) +- Not changing the public function signatures + +## Backlog Context + +Current SSJR sanity check: `Hex D`, `P1 F`, `P2 C`, `P3 C`, `P4 C`, `P5 B`, `P6 B`, `P7 C`. + +This is the worst core modeling hotspot. Meaning-heavy concepts like entries and sessions are still emitted as plain objects with loose `kind` fields. Start by introducing real domain types for entries, sessions, and related identifiers so construction establishes trust instead of downstream code patching shape assumptions together. diff --git a/docs/design/0014-audit-provenance-url-schemes/audit-provenance-url-schemes.md b/docs/design/0014-audit-provenance-url-schemes/audit-provenance-url-schemes.md new file mode 100644 index 0000000..1bbc330 --- /dev/null +++ b/docs/design/0014-audit-provenance-url-schemes/audit-provenance-url-schemes.md @@ -0,0 +1,55 @@ +--- +title: "Provenance URLs accept any scheme" +legend: "CORE" +cycle: "0014-audit-provenance-url-schemes" +source_backlog: "docs/method/backlog/bad-code/CORE_audit-provenance-url-schemes.md" +--- + +# Provenance URLs accept any scheme + +Source backlog item: `docs/method/backlog/bad-code/CORE_audit-provenance-url-schemes.md` +Legend: CORE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Capture provenance currently accepts any syntactically valid URL, including schemes that should not be treated like ordinary safe links. + +Think should narrow provenance URL acceptance to explicit safe schemes before more surfaces start rendering or exporting those fields. diff --git a/docs/design/0015-audit-capture-path-sync-git/audit-capture-path-sync-git.md b/docs/design/0015-audit-capture-path-sync-git/audit-capture-path-sync-git.md new file mode 100644 index 0000000..6f64e45 --- /dev/null +++ b/docs/design/0015-audit-capture-path-sync-git/audit-capture-path-sync-git.md @@ -0,0 +1,55 @@ +--- +title: "Capture path still shells out to `git` synchronously" +legend: "CORE" +cycle: "0015-audit-capture-path-sync-git" +source_backlog: "docs/method/backlog/bad-code/CORE_audit-capture-path-sync-git.md" +--- + +# Capture path still shells out to `git` synchronously + +Source backlog item: `docs/method/backlog/bad-code/CORE_audit-capture-path-sync-git.md` +Legend: CORE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +`saveRawCapture()` calls `getAmbientProjectContext(process.cwd())`, and that helper runs three `spawnSync('git', ...)` probes. + +The capture path is supposed to be sacred. This host work belongs behind a bounded adapter or cache, not inline in persistence. diff --git a/docs/design/0016-ssjr-src-capture-provenance-js/ssjr-src-capture-provenance-js.md b/docs/design/0016-ssjr-src-capture-provenance-js/ssjr-src-capture-provenance-js.md new file mode 100644 index 0000000..a1fcda8 --- /dev/null +++ b/docs/design/0016-ssjr-src-capture-provenance-js/ssjr-src-capture-provenance-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/capture-provenance.js`" +legend: "CORE" +cycle: "0016-ssjr-src-capture-provenance-js" +source_backlog: "docs/method/backlog/bad-code/CORE_ssjr-src-capture-provenance-js.md" +--- + +# Raise SSJR grades for `src/capture-provenance.js` + +Source backlog item: `docs/method/backlog/bad-code/CORE_ssjr-src-capture-provenance-js.md` +Legend: CORE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex B`, `P1 B`, `P3 B`, `P6 B`. + +The boundary normalization is disciplined, but provenance is still just a plain object. Introduce a small runtime-backed provenance form so the invariant lives on the value instead of in helper conventions spread across callers. diff --git a/docs/design/0017-audit-undocumented-ambient-context-and-recall/audit-undocumented-ambient-context-and-recall.md b/docs/design/0017-audit-undocumented-ambient-context-and-recall/audit-undocumented-ambient-context-and-recall.md new file mode 100644 index 0000000..6c55389 --- /dev/null +++ b/docs/design/0017-audit-undocumented-ambient-context-and-recall/audit-undocumented-ambient-context-and-recall.md @@ -0,0 +1,38 @@ +--- +title: "Ambient context and recall behavior are underdocumented" +legend: "CORE" +cycle: "0017-audit-undocumented-ambient-context-and-recall" +source_backlog: "docs/method/backlog/bad-code/CORE_audit-undocumented-ambient-context-and-recall.md" +--- + +# Ambient context and recall behavior are underdocumented + +Source backlog item: `docs/method/backlog/bad-code/CORE_audit-undocumented-ambient-context-and-recall.md` +Legend: CORE + +## Sponsors + +- Human: James +- Agent: Claude + +## Hill + +A contributor can understand the ambient context and recall pipeline +from one document. + +## Playback Questions + +### Agent + +- [ ] Does the doc exist in docs/? +- [ ] Does it explain collection, normalization, persistence, and recall matching? + +## All postures + +Not applicable — internal documentation. + +## Backlog Context + +The behavior that powers ambient capture context, remember scoring, and provenance flow is spread across `src/project-context.js`, `src/store/capture.js`, `src/store/queries.js`, and `src/capture-provenance.js`. + +There is no single contributor-facing doc that explains what gets collected, when it gets normalized, and how it affects recall. That makes the behavior harder to change safely. diff --git a/docs/design/0018-ssjr-src-store-capture-js/ssjr-src-store-capture-js.md b/docs/design/0018-ssjr-src-store-capture-js/ssjr-src-store-capture-js.md new file mode 100644 index 0000000..460067b --- /dev/null +++ b/docs/design/0018-ssjr-src-store-capture-js/ssjr-src-store-capture-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/store/capture.js`" +legend: "CORE" +cycle: "0018-ssjr-src-store-capture-js" +source_backlog: "docs/method/backlog/bad-code/CORE_ssjr-src-store-capture-js.md" +--- + +# Raise SSJR grades for `src/store/capture.js` + +Source backlog item: `docs/method/backlog/bad-code/CORE_ssjr-src-store-capture-js.md` +Legend: CORE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex C`, `P1 D`, `P2 C`, `P3 C`, `P4 C`, `P5 B`, `P6 B`, `P7 C`. + +Core capture persistence still operates on raw entry objects plus `kind`-based assumptions. Introduce real runtime-backed entry and provenance forms so construction, persistence, and follow-through stop depending on ambient shape trust. diff --git a/docs/design/0019-audit-unvalidated-read-models/audit-unvalidated-read-models.md b/docs/design/0019-audit-unvalidated-read-models/audit-unvalidated-read-models.md new file mode 100644 index 0000000..eef2dec --- /dev/null +++ b/docs/design/0019-audit-unvalidated-read-models/audit-unvalidated-read-models.md @@ -0,0 +1,38 @@ +--- +title: "Store runtime reconstructs trusted entries from raw graph props" +legend: "CORE" +cycle: "0019-audit-unvalidated-read-models" +source_backlog: "docs/method/backlog/bad-code/CORE_audit-unvalidated-read-models.md" +--- + +# Store runtime reconstructs trusted entries from raw graph props + +Source backlog item: `docs/method/backlog/bad-code/CORE_audit-unvalidated-read-models.md` +Legend: CORE + +## Sponsors + +- Human: James +- Agent: Claude + +## Hill + +`getStoredEntry` returns a frozen `StoredEntry` class with validated +fields instead of a raw property bag. + +## Playback Questions + +### Agent + +- [ ] Is `StoredEntry` a class with Object.freeze? +- [ ] Do all existing tests pass unchanged? + +## All postures + +Not applicable — internal refactor. + +## Backlog Context + +`src/store/runtime.js` turns raw graph node properties directly into store entry objects without a schema or runtime-backed constructor boundary. + +This is a core correctness risk because every read surface downstream inherits whatever that raw graph shape happens to be. diff --git a/docs/design/0020-audit-no-error-taxonomy/audit-no-error-taxonomy.md b/docs/design/0020-audit-no-error-taxonomy/audit-no-error-taxonomy.md new file mode 100644 index 0000000..a83e048 --- /dev/null +++ b/docs/design/0020-audit-no-error-taxonomy/audit-no-error-taxonomy.md @@ -0,0 +1,55 @@ +--- +title: "Cross-surface failures still lack a typed error taxonomy" +legend: "CORE" +cycle: "0020-audit-no-error-taxonomy" +source_backlog: "docs/method/backlog/bad-code/CORE_audit-no-error-taxonomy.md" +--- + +# Cross-surface failures still lack a typed error taxonomy + +Source backlog item: `docs/method/backlog/bad-code/CORE_audit-no-error-taxonomy.md` +Legend: CORE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +CLI, MCP, and store paths still throw or translate many failures as raw `Error` objects or generic strings. + +Think needs a smaller set of owned failure types so human and machine surfaces can report the same truth consistently. diff --git a/docs/design/0021-audit-git-binary-path-trust/audit-git-binary-path-trust.md b/docs/design/0021-audit-git-binary-path-trust/audit-git-binary-path-trust.md new file mode 100644 index 0000000..c25a122 --- /dev/null +++ b/docs/design/0021-audit-git-binary-path-trust/audit-git-binary-path-trust.md @@ -0,0 +1,55 @@ +--- +title: "Git execution still trusts ambient PATH lookup" +legend: "CORE" +cycle: "0021-audit-git-binary-path-trust" +source_backlog: "docs/method/backlog/bad-code/CORE_audit-git-binary-path-trust.md" +--- + +# Git execution still trusts ambient PATH lookup + +Source backlog item: `docs/method/backlog/bad-code/CORE_audit-git-binary-path-trust.md` +Legend: CORE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Think invokes `git` by bare command name from `src/project-context.js` and `src/git.js`. + +That is acceptable for a local developer tool until it is not. The repo should resolve and trust one Git binary intentionally instead of inheriting whatever PATH happens to provide. diff --git a/docs/design/0022-audit-query-reshape-pipeline/audit-query-reshape-pipeline.md b/docs/design/0022-audit-query-reshape-pipeline/audit-query-reshape-pipeline.md new file mode 100644 index 0000000..26a1e77 --- /dev/null +++ b/docs/design/0022-audit-query-reshape-pipeline/audit-query-reshape-pipeline.md @@ -0,0 +1,55 @@ +--- +title: "Query layer repeatedly re-shapes the same entry data" +legend: "CORE" +cycle: "0022-audit-query-reshape-pipeline" +source_backlog: "docs/method/backlog/bad-code/CORE_audit-query-reshape-pipeline.md" +--- + +# Query layer repeatedly re-shapes the same entry data + +Source backlog item: `docs/method/backlog/bad-code/CORE_audit-query-reshape-pipeline.md` +Legend: CORE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +`src/store/queries.js` keeps remapping entries into new anonymous shapes for recent, remember, browse, inspect, and stats callers. + +That increases coupling and makes it harder to trust that all surfaces are talking about the same domain object. diff --git a/docs/design/0023-audit-warp-handle-reuse/audit-warp-handle-reuse.md b/docs/design/0023-audit-warp-handle-reuse/audit-warp-handle-reuse.md new file mode 100644 index 0000000..2852683 --- /dev/null +++ b/docs/design/0023-audit-warp-handle-reuse/audit-warp-handle-reuse.md @@ -0,0 +1,59 @@ +--- +title: "openWarpApp handle reuse" +legend: "CORE" +cycle: "0023-audit-warp-handle-reuse" +source_backlog: "docs/method/backlog/bad-code/CORE_audit-warp-handle-reuse.md" +--- + +# openWarpApp handle reuse + +Source backlog item: `docs/method/backlog/bad-code/CORE_audit-warp-handle-reuse.md` +Legend: CORE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +`openWarpApp` is called multiple times across `saveRawCapture` and +`finalizeCapturedThought`, creating redundant repository handles. +Implement a simple singleton cache in `src/store/runtime.js` that +reuses open app handles for the same `repoDir` during a single +execution tick. + +Source: code-quality audit 2026-04-11 §4.2. diff --git a/docs/design/0024-ssjr-src-store-runtime-js/ssjr-src-store-runtime-js.md b/docs/design/0024-ssjr-src-store-runtime-js/ssjr-src-store-runtime-js.md new file mode 100644 index 0000000..997ee8d --- /dev/null +++ b/docs/design/0024-ssjr-src-store-runtime-js/ssjr-src-store-runtime-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/store/runtime.js`" +legend: "CORE" +cycle: "0024-ssjr-src-store-runtime-js" +source_backlog: "docs/method/backlog/bad-code/CORE_ssjr-src-store-runtime-js.md" +--- + +# Raise SSJR grades for `src/store/runtime.js` + +Source backlog item: `docs/method/backlog/bad-code/CORE_ssjr-src-store-runtime-js.md` +Legend: CORE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex C`, `P1 D`, `P2 D`, `P3 C`, `P4 D`, `P5 B`, `P6 B`, `P7 D`. + +This file is the core/runtime seam with the most architectural strain. It mixes graph access, host-specific opening, raw prop normalization, and `kind`-driven reconstruction of domain meaning. Break it up and introduce typed read models so the runtime seam stops leaking host details and shape soup into the store core. diff --git a/docs/design/0025-ssjr-src-verbose-js/ssjr-src-verbose-js.md b/docs/design/0025-ssjr-src-verbose-js/ssjr-src-verbose-js.md new file mode 100644 index 0000000..27bd94e --- /dev/null +++ b/docs/design/0025-ssjr-src-verbose-js/ssjr-src-verbose-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/verbose.js`" +legend: "SURFACE" +cycle: "0025-ssjr-src-verbose-js" +source_backlog: "docs/method/backlog/bad-code/SURFACE_ssjr-src-verbose-js.md" +--- + +# Raise SSJR grades for `src/verbose.js` + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_ssjr-src-verbose-js.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex A`, `P1 B`, `P2 B`, `P3 B`, `P4 B`, `P6 B`. + +The reporter is small and stable, but event payloads are still just shaped objects. Tighten the reporting contract so event names and payload structure derive from one runtime-backed source of truth instead of ambient convention. diff --git a/docs/design/0026-audit-mcp-service-shape-soup/audit-mcp-service-shape-soup.md b/docs/design/0026-audit-mcp-service-shape-soup/audit-mcp-service-shape-soup.md new file mode 100644 index 0000000..c992501 --- /dev/null +++ b/docs/design/0026-audit-mcp-service-shape-soup/audit-mcp-service-shape-soup.md @@ -0,0 +1,55 @@ +--- +title: "MCP service layer still shuffles raw objects" +legend: "SURFACE" +cycle: "0026-audit-mcp-service-shape-soup" +source_backlog: "docs/method/backlog/bad-code/SURFACE_audit-mcp-service-shape-soup.md" +--- + +# MCP service layer still shuffles raw objects + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_audit-mcp-service-shape-soup.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +`src/mcp/service.js` is already called out in `docs/BEARING.md` as shape-soup debt, and the audit agrees. It mostly normalizes inputs, calls store functions, and returns anonymous result bags. + +That is acceptable for a tiny adapter, but this one is now large enough to deserve explicit request and result forms. diff --git a/docs/design/0027-audit-mcp-contract-holes/audit-mcp-contract-holes.md b/docs/design/0027-audit-mcp-contract-holes/audit-mcp-contract-holes.md new file mode 100644 index 0000000..b907c28 --- /dev/null +++ b/docs/design/0027-audit-mcp-contract-holes/audit-mcp-contract-holes.md @@ -0,0 +1,55 @@ +--- +title: "MCP contracts still have `z.any()` holes" +legend: "SURFACE" +cycle: "0027-audit-mcp-contract-holes" +source_backlog: "docs/method/backlog/bad-code/SURFACE_audit-mcp-contract-holes.md" +--- + +# MCP contracts still have `z.any()` holes + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_audit-mcp-contract-holes.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +`src/mcp/server.js` still uses `z.any()` for important outputs like migration results, remember matches and scope, browse session context, and inspect entry payloads. + +That weakens integration trust exactly where Think claims MCP parity with the CLI core. diff --git a/docs/design/0028-audit-cli-options-bag/audit-cli-options-bag.md b/docs/design/0028-audit-cli-options-bag/audit-cli-options-bag.md new file mode 100644 index 0000000..42d8e84 --- /dev/null +++ b/docs/design/0028-audit-cli-options-bag/audit-cli-options-bag.md @@ -0,0 +1,55 @@ +--- +title: "CLI parsing still depends on one large options bag" +legend: "SURFACE" +cycle: "0028-audit-cli-options-bag" +source_backlog: "docs/method/backlog/bad-code/SURFACE_audit-cli-options-bag.md" +--- + +# CLI parsing still depends on one large options bag + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_audit-cli-options-bag.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +`src/cli/options.js` builds a large procedural options object and validates it later through command-specific conditionals. + +The result is serviceable but structurally mushy. Parsing and validation should return a smaller, more explicit runtime-backed parsed-command form. diff --git a/docs/design/0029-ssjr-src-cli-options-js/ssjr-src-cli-options-js.md b/docs/design/0029-ssjr-src-cli-options-js/ssjr-src-cli-options-js.md new file mode 100644 index 0000000..ec88d9c --- /dev/null +++ b/docs/design/0029-ssjr-src-cli-options-js/ssjr-src-cli-options-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/cli/options.js`" +legend: "SURFACE" +cycle: "0029-ssjr-src-cli-options-js" +source_backlog: "docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-options-js.md" +--- + +# Raise SSJR grades for `src/cli/options.js` + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-options-js.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex B`, `P1 C`, `P2 B`, `P3 C`, `P4 B`, `P6 C`, `P7 C`. + +The parser currently produces a large mutable-feeling options bag and command resolution depends on stringly post-processing. Introduce explicit parsed-command and parsed-option forms so validation and dispatch stop depending on shape soup. diff --git a/docs/design/0030-ssjr-src-store-queries-js/ssjr-src-store-queries-js.md b/docs/design/0030-ssjr-src-store-queries-js/ssjr-src-store-queries-js.md new file mode 100644 index 0000000..4e073f3 --- /dev/null +++ b/docs/design/0030-ssjr-src-store-queries-js/ssjr-src-store-queries-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/store/queries.js`" +legend: "CORE" +cycle: "0030-ssjr-src-store-queries-js" +source_backlog: "docs/method/backlog/bad-code/CORE_ssjr-src-store-queries-js.md" +--- + +# Raise SSJR grades for `src/store/queries.js` + +Source backlog item: `docs/method/backlog/bad-code/CORE_ssjr-src-store-queries-js.md` +Legend: CORE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex C`, `P1 D`, `P2 C`, `P3 C`, `P4 C`, `P5 B`, `P6 B`, `P7 C`. + +The query layer reconstructs many domain records by hand and then re-shapes them again for callers. Move toward runtime-backed read models so query code returns trusted objects instead of repeatedly rebuilding loosely related plain-object views. diff --git a/docs/design/0031-ssjr-src-mcp-service-js/ssjr-src-mcp-service-js.md b/docs/design/0031-ssjr-src-mcp-service-js/ssjr-src-mcp-service-js.md new file mode 100644 index 0000000..669583b --- /dev/null +++ b/docs/design/0031-ssjr-src-mcp-service-js/ssjr-src-mcp-service-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/mcp/service.js`" +legend: "SURFACE" +cycle: "0031-ssjr-src-mcp-service-js" +source_backlog: "docs/method/backlog/bad-code/SURFACE_ssjr-src-mcp-service-js.md" +--- + +# Raise SSJR grades for `src/mcp/service.js` + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_ssjr-src-mcp-service-js.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex B`, `P1 C`, `P2 B`, `P3 B`, `P4 B`, `P5 B`, `P6 B`, `P7 B`. + +This is the exact shape-soup debt already called out in BEARING. The service layer mostly shuffles plain objects between boundaries and store calls; introduce runtime-backed request and result forms so the MCP surface owns fewer soft contracts. diff --git a/docs/method/backlog/bad-code/HT-007-remediation-payloads-in-json-errors.md b/docs/design/0032-HT-007-remediation-payloads-in-json-errors/HT-007-remediation-payloads-in-json-errors.md similarity index 50% rename from docs/method/backlog/bad-code/HT-007-remediation-payloads-in-json-errors.md rename to docs/design/0032-HT-007-remediation-payloads-in-json-errors/HT-007-remediation-payloads-in-json-errors.md index 91bf283..cf5fb9f 100644 --- a/docs/method/backlog/bad-code/HT-007-remediation-payloads-in-json-errors.md +++ b/docs/design/0032-HT-007-remediation-payloads-in-json-errors/HT-007-remediation-payloads-in-json-errors.md @@ -1,5 +1,55 @@ +--- +title: "HT-007 — Remediation Payloads in JSON Errors" +legend: "none" +cycle: "0032-HT-007-remediation-payloads-in-json-errors" +source_backlog: "docs/method/backlog/bad-code/HT-007-remediation-payloads-in-json-errors.md" +--- + # HT-007 — Remediation Payloads in JSON Errors +Source backlog item: `docs/method/backlog/bad-code/HT-007-remediation-payloads-in-json-errors.md` +Legend: none + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + Legend: [CORE — Core Bedrock](../../legends/CORE.md) ## Idea diff --git a/docs/design/0032-ssjr-src-store-migrations-js/ssjr-src-store-migrations-js.md b/docs/design/0032-ssjr-src-store-migrations-js/ssjr-src-store-migrations-js.md new file mode 100644 index 0000000..9abf575 --- /dev/null +++ b/docs/design/0032-ssjr-src-store-migrations-js/ssjr-src-store-migrations-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/store/migrations.js`" +legend: "CORE" +cycle: "0032-ssjr-src-store-migrations-js" +source_backlog: "docs/method/backlog/bad-code/CORE_ssjr-src-store-migrations-js.md" +--- + +# Raise SSJR grades for `src/store/migrations.js` + +Source backlog item: `docs/method/backlog/bad-code/CORE_ssjr-src-store-migrations-js.md` +Legend: CORE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex B`, `P1 D`, `P2 C`, `P3 C`, `P4 C`, `P6 B`, `P7 D`. + +The migration engine is graph-correct, but it reasons about node meaning almost entirely through raw props and `kind` checks. Introduce typed migration facts or per-kind migration helpers so the updater stops being a large conditional over graph shapes. diff --git a/docs/design/0033-audit-cli-dispatch-chain/audit-cli-dispatch-chain.md b/docs/design/0033-audit-cli-dispatch-chain/audit-cli-dispatch-chain.md new file mode 100644 index 0000000..f10519d --- /dev/null +++ b/docs/design/0033-audit-cli-dispatch-chain/audit-cli-dispatch-chain.md @@ -0,0 +1,55 @@ +--- +title: "CLI dispatch is still a stringly `if/else` ladder" +legend: "SURFACE" +cycle: "0033-audit-cli-dispatch-chain" +source_backlog: "docs/method/backlog/bad-code/SURFACE_audit-cli-dispatch-chain.md" +--- + +# CLI dispatch is still a stringly `if/else` ladder + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_audit-cli-dispatch-chain.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +The top-level CLI command path in `src/cli.js` is still an `if/else` dispatch chain keyed by strings. + +It works, but it keeps command behavior, help identity, and reporting identity softer than they should be. diff --git a/docs/design/0034-ssjr-src-cli-output-js/ssjr-src-cli-output-js.md b/docs/design/0034-ssjr-src-cli-output-js/ssjr-src-cli-output-js.md new file mode 100644 index 0000000..ab0cab7 --- /dev/null +++ b/docs/design/0034-ssjr-src-cli-output-js/ssjr-src-cli-output-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/cli/output.js`" +legend: "SURFACE" +cycle: "0034-ssjr-src-cli-output-js" +source_backlog: "docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-output-js.md" +--- + +# Raise SSJR grades for `src/cli/output.js` + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-output-js.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex A`, `P1 B`, `P2 B`, `P3 B`, `P4 B`, `P5 B`, `P6 B`. + +Output is centralized well, but the event and message contracts are still largely implied. Make the reporting/result forms more explicit so streams and callers share one runtime-backed output model. diff --git a/docs/design/0035-ssjr-src-mcp-result-js/ssjr-src-mcp-result-js.md b/docs/design/0035-ssjr-src-mcp-result-js/ssjr-src-mcp-result-js.md new file mode 100644 index 0000000..a0179ef --- /dev/null +++ b/docs/design/0035-ssjr-src-mcp-result-js/ssjr-src-mcp-result-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/mcp/result.js`" +legend: "SURFACE" +cycle: "0035-ssjr-src-mcp-result-js" +source_backlog: "docs/method/backlog/bad-code/SURFACE_ssjr-src-mcp-result-js.md" +--- + +# Raise SSJR grades for `src/mcp/result.js` + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_ssjr-src-mcp-result-js.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex A`, `P1 B`, `P2 B`, `P4 B`. + +This helper is tiny, but it still duplicates the text-plus-structured MCP result contract procedurally. Consider a dedicated result form so the contract lives in one runtime-backed place instead of in shape-building glue. diff --git a/docs/design/0036-ssjr-src-git-js/ssjr-src-git-js.md b/docs/design/0036-ssjr-src-git-js/ssjr-src-git-js.md new file mode 100644 index 0000000..ec67ff1 --- /dev/null +++ b/docs/design/0036-ssjr-src-git-js/ssjr-src-git-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/git.js`" +legend: "CORE" +cycle: "0036-ssjr-src-git-js" +source_backlog: "docs/method/backlog/bad-code/CORE_ssjr-src-git-js.md" +--- + +# Raise SSJR grades for `src/git.js` + +Source backlog item: `docs/method/backlog/bad-code/CORE_ssjr-src-git-js.md` +Legend: CORE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex A`, `P1 B`, `P2 B`, `P3 B`, `P4 B`, `P6 B`, `P7 B`. + +This adapter is in the right layer, but push/init outcomes and retry semantics are still mostly plain-object conventions. Introduce a few explicit runtime-backed outcomes or error types so callers stop interpreting raw shell results directly. diff --git a/docs/design/0037-audit-prompt-metrics-raw-parse/audit-prompt-metrics-raw-parse.md b/docs/design/0037-audit-prompt-metrics-raw-parse/audit-prompt-metrics-raw-parse.md new file mode 100644 index 0000000..2787efa --- /dev/null +++ b/docs/design/0037-audit-prompt-metrics-raw-parse/audit-prompt-metrics-raw-parse.md @@ -0,0 +1,55 @@ +--- +title: "Prompt metrics parsing is still a raw JSONL pipeline" +legend: "CORE" +cycle: "0037-audit-prompt-metrics-raw-parse" +source_backlog: "docs/method/backlog/bad-code/CORE_audit-prompt-metrics-raw-parse.md" +--- + +# Prompt metrics parsing is still a raw JSONL pipeline + +Source backlog item: `docs/method/backlog/bad-code/CORE_audit-prompt-metrics-raw-parse.md` +Legend: CORE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +`src/store/prompt-metrics.js` reads the whole file, parses line-by-line into anonymous objects, and lets downstream aggregation assume shape. + +The failure mode is lenient, but the core contract stays soft and memory behavior will only get worse as the metrics file grows. diff --git a/docs/design/0038-ssjr-src-cli-js/ssjr-src-cli-js.md b/docs/design/0038-ssjr-src-cli-js/ssjr-src-cli-js.md new file mode 100644 index 0000000..162b55a --- /dev/null +++ b/docs/design/0038-ssjr-src-cli-js/ssjr-src-cli-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/cli.js`" +legend: "SURFACE" +cycle: "0038-ssjr-src-cli-js" +source_backlog: "docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-js.md" +--- + +# Raise SSJR grades for `src/cli.js` + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-js.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex B`, `P1 B`, `P2 B`, `P3 C`, `P4 B`, `P6 B`, `P7 C`. + +The top-level dispatcher still routes through command strings and a long conditional chain. Move toward command objects or a command registry that owns behavior so the CLI shell becomes thinner and less tag-oriented. diff --git a/docs/design/0039-ssjr-src-project-context-js/ssjr-src-project-context-js.md b/docs/design/0039-ssjr-src-project-context-js/ssjr-src-project-context-js.md new file mode 100644 index 0000000..599a28b --- /dev/null +++ b/docs/design/0039-ssjr-src-project-context-js/ssjr-src-project-context-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/project-context.js`" +legend: "CORE" +cycle: "0039-ssjr-src-project-context-js" +source_backlog: "docs/method/backlog/bad-code/CORE_ssjr-src-project-context-js.md" +--- + +# Raise SSJR grades for `src/project-context.js` + +Source backlog item: `docs/method/backlog/bad-code/CORE_ssjr-src-project-context-js.md` +Legend: CORE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex A`, `P1 C`, `P2 B`, `P3 B`, `P4 C`, `P6 B`. + +Ambient project context is useful, but it is still represented as a raw bag of strings and token arrays. Give the context a firmer runtime-backed shape so project-name, token, and query-term invariants do not live only in helper conventions. diff --git a/docs/design/0040-audit-prompt-metrics-io-port/audit-prompt-metrics-io-port.md b/docs/design/0040-audit-prompt-metrics-io-port/audit-prompt-metrics-io-port.md new file mode 100644 index 0000000..4a4c717 --- /dev/null +++ b/docs/design/0040-audit-prompt-metrics-io-port/audit-prompt-metrics-io-port.md @@ -0,0 +1,58 @@ +--- +title: "Prompt metrics testability: IOPort abstraction" +legend: "CORE" +cycle: "0040-audit-prompt-metrics-io-port" +source_backlog: "docs/method/backlog/bad-code/CORE_audit-prompt-metrics-io-port.md" +--- + +# Prompt metrics testability: IOPort abstraction + +Source backlog item: `docs/method/backlog/bad-code/CORE_audit-prompt-metrics-io-port.md` +Legend: CORE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Testing macOS panel telemetry requires reading from a physical +`.jsonl` file on disk. Refactor `prompt-metrics.js` to accept an +optional IOPort that abstracts the filesystem, allowing tests to +run against in-memory buffers. + +Source: code-quality audit 2026-04-11 §3.3. diff --git a/docs/design/0041-ssjr-src-mcp-server-js/ssjr-src-mcp-server-js.md b/docs/design/0041-ssjr-src-mcp-server-js/ssjr-src-mcp-server-js.md new file mode 100644 index 0000000..f10aa21 --- /dev/null +++ b/docs/design/0041-ssjr-src-mcp-server-js/ssjr-src-mcp-server-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/mcp/server.js`" +legend: "SURFACE" +cycle: "0041-ssjr-src-mcp-server-js" +source_backlog: "docs/method/backlog/bad-code/SURFACE_ssjr-src-mcp-server-js.md" +--- + +# Raise SSJR grades for `src/mcp/server.js` + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_ssjr-src-mcp-server-js.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex A`, `P1 C`, `P3 B`, `P6 C`, `P7 B`. + +Boundary schemas are strong here, but command definitions are still spread across repeated schema/result wiring. Consolidate the MCP tool registry so names, schemas, and execution contracts derive from one runtime-backed command definition. diff --git a/docs/design/0042-ssjr-src-cli-graph-gate-js/ssjr-src-cli-graph-gate-js.md b/docs/design/0042-ssjr-src-cli-graph-gate-js/ssjr-src-cli-graph-gate-js.md new file mode 100644 index 0000000..3f0d355 --- /dev/null +++ b/docs/design/0042-ssjr-src-cli-graph-gate-js/ssjr-src-cli-graph-gate-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/cli/graph-gate.js`" +legend: "SURFACE" +cycle: "0042-ssjr-src-cli-graph-gate-js" +source_backlog: "docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-graph-gate-js.md" +--- + +# Raise SSJR grades for `src/cli/graph-gate.js` + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-graph-gate-js.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex B`, `P1 B`, `P2 B`, `P3 B`, `P4 B`, `P6 B`, `P7 B`. + +The graph gate has the right responsibility, but its migration decisions and outcomes are still plain-object and string-driven. Move toward a named gate/policy result that owns the branching semantics. diff --git a/docs/design/0043-ssjr-bin-think-mcp-js/ssjr-bin-think-mcp-js.md b/docs/design/0043-ssjr-bin-think-mcp-js/ssjr-bin-think-mcp-js.md new file mode 100644 index 0000000..1b3c837 --- /dev/null +++ b/docs/design/0043-ssjr-bin-think-mcp-js/ssjr-bin-think-mcp-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `bin/think-mcp.js`" +legend: "SURFACE" +cycle: "0043-ssjr-bin-think-mcp-js" +source_backlog: "docs/method/backlog/bad-code/SURFACE_ssjr-bin-think-mcp-js.md" +--- + +# Raise SSJR grades for `bin/think-mcp.js` + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_ssjr-bin-think-mcp-js.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex B`, `P1 B`, `P2 B`, `P3 B`, `P4 B`, `P6 B`. + +This entrypoint is thin, but it still carries soft-contract glue. Keep it as a pure adapter shell, avoid re-declaring runtime contracts here, and make sure command/result shaping stays owned by the MCP modules beneath it. diff --git a/docs/design/0044-ssjr-src-cli-environment-js/ssjr-src-cli-environment-js.md b/docs/design/0044-ssjr-src-cli-environment-js/ssjr-src-cli-environment-js.md new file mode 100644 index 0000000..8357acb --- /dev/null +++ b/docs/design/0044-ssjr-src-cli-environment-js/ssjr-src-cli-environment-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/cli/environment.js`" +legend: "SURFACE" +cycle: "0044-ssjr-src-cli-environment-js" +source_backlog: "docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-environment-js.md" +--- + +# Raise SSJR grades for `src/cli/environment.js` + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-environment-js.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex A`, `P1 B`, `P2 B`, `P3 B`, `P4 B`, `P6 B`. + +This file is small and well-placed, but it still exposes ambient booleans and raw environment reads as loose helpers. A tiny runtime-backed environment capability object would make these decisions less ad hoc. diff --git a/docs/design/0045-ssjr-src-cli-help-js/ssjr-src-cli-help-js.md b/docs/design/0045-ssjr-src-cli-help-js/ssjr-src-cli-help-js.md new file mode 100644 index 0000000..6767a1d --- /dev/null +++ b/docs/design/0045-ssjr-src-cli-help-js/ssjr-src-cli-help-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/cli/help.js`" +legend: "SURFACE" +cycle: "0045-ssjr-src-cli-help-js" +source_backlog: "docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-help-js.md" +--- + +# Raise SSJR grades for `src/cli/help.js` + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-help-js.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex B`, `P1 B`, `P2 C`, `P3 B`, `P4 C`, `P6 B`. + +Help is still mostly a string registry with conventions around topics and commands. Tighten the boundary between command definition and help rendering so the text surface derives from one runtime-backed source of truth. diff --git a/docs/design/0046-ssjr-src-cli-commands-capture-js/ssjr-src-cli-commands-capture-js.md b/docs/design/0046-ssjr-src-cli-commands-capture-js/ssjr-src-cli-commands-capture-js.md new file mode 100644 index 0000000..d40790b --- /dev/null +++ b/docs/design/0046-ssjr-src-cli-commands-capture-js/ssjr-src-cli-commands-capture-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/cli/commands/capture.js`" +legend: "SURFACE" +cycle: "0046-ssjr-src-cli-commands-capture-js" +source_backlog: "docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-commands-capture-js.md" +--- + +# Raise SSJR grades for `src/cli/commands/capture.js` + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-commands-capture-js.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex B`, `P1 B`, `P2 B`, `P3 C`, `P4 B`, `P6 B`, `P7 B`. + +Capture orchestration is solid, but it still returns and reports mostly raw outcome shapes. Introduce explicit capture result forms so persistence, migration follow-through, and backup reporting stop leaning on ambient object conventions. diff --git a/docs/design/0047-ssjr-src-cli-commands-read-js/ssjr-src-cli-commands-read-js.md b/docs/design/0047-ssjr-src-cli-commands-read-js/ssjr-src-cli-commands-read-js.md new file mode 100644 index 0000000..647cbb3 --- /dev/null +++ b/docs/design/0047-ssjr-src-cli-commands-read-js/ssjr-src-cli-commands-read-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/cli/commands/read.js`" +legend: "SURFACE" +cycle: "0047-ssjr-src-cli-commands-read-js" +source_backlog: "docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-commands-read-js.md" +--- + +# Raise SSJR grades for `src/cli/commands/read.js` + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-commands-read-js.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex C`, `P1 C`, `P2 B`, `P3 D`, `P4 B`, `P5 B`, `P6 C`, `P7 D`. + +This command surface is doing too much with too many raw result shapes. Split command-specific presentation into smaller owned modules and replace command/result switching with behavior that lives on the types or handlers that own it. diff --git a/docs/design/0048-ssjr-bin-think-js/ssjr-bin-think-js.md b/docs/design/0048-ssjr-bin-think-js/ssjr-bin-think-js.md new file mode 100644 index 0000000..d4c7725 --- /dev/null +++ b/docs/design/0048-ssjr-bin-think-js/ssjr-bin-think-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `bin/think.js`" +legend: "SURFACE" +cycle: "0048-ssjr-bin-think-js" +source_backlog: "docs/method/backlog/bad-code/SURFACE_ssjr-bin-think-js.md" +--- + +# Raise SSJR grades for `bin/think.js` + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_ssjr-bin-think-js.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex A`, `P1 B`, `P2 B`, `P3 B`, `P4 B`, `P6 B`. + +The CLI entrypoint is structurally correct, but it still depends on convention-heavy wiring. Keep the file narrowly host-facing and make sure command and error contracts remain derived from the owning runtime modules instead of being repeated here. diff --git a/docs/design/0049-ssjr-src-browse-benchmark-js/ssjr-src-browse-benchmark-js.md b/docs/design/0049-ssjr-src-browse-benchmark-js/ssjr-src-browse-benchmark-js.md new file mode 100644 index 0000000..7681289 --- /dev/null +++ b/docs/design/0049-ssjr-src-browse-benchmark-js/ssjr-src-browse-benchmark-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/browse-benchmark.js`" +legend: "SURFACE" +cycle: "0049-ssjr-src-browse-benchmark-js" +source_backlog: "docs/method/backlog/bad-code/SURFACE_ssjr-src-browse-benchmark-js.md" +--- + +# Raise SSJR grades for `src/browse-benchmark.js` + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_ssjr-src-browse-benchmark-js.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex B`, `P1 C`, `P2 C`, `P3 C`, `P4 C`, `P5 B`, `P6 C`, `P7 D`. + +The benchmark harness leans on raw item shapes and tag-driven branching. Pull the benchmark-facing concepts into small runtime-backed helper forms so benchmark logic stops switching on loose `type` values and duplicated structure. diff --git a/docs/design/0050-ssjr-src-cli-interactive-js/ssjr-src-cli-interactive-js.md b/docs/design/0050-ssjr-src-cli-interactive-js/ssjr-src-cli-interactive-js.md new file mode 100644 index 0000000..fd9e04e --- /dev/null +++ b/docs/design/0050-ssjr-src-cli-interactive-js/ssjr-src-cli-interactive-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/cli/interactive.js`" +legend: "SURFACE" +cycle: "0050-ssjr-src-cli-interactive-js" +source_backlog: "docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-interactive-js.md" +--- + +# Raise SSJR grades for `src/cli/interactive.js` + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-interactive-js.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex A`, `P1 B`, `P2 B`, `P3 B`, `P4 B`, `P6 B`, `P7 B`. + +The interactive shell helpers are structurally fine, but they still pass around a lot of loose prompt/render state. Keep the host concerns here, while moving reusable interaction semantics into runtime-backed forms where they matter. diff --git a/docs/design/0051-ssjr-src-store-js/ssjr-src-store-js.md b/docs/design/0051-ssjr-src-store-js/ssjr-src-store-js.md new file mode 100644 index 0000000..68c5678 --- /dev/null +++ b/docs/design/0051-ssjr-src-store-js/ssjr-src-store-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/store.js`" +legend: "CORE" +cycle: "0051-ssjr-src-store-js" +source_backlog: "docs/method/backlog/bad-code/CORE_ssjr-src-store-js.md" +--- + +# Raise SSJR grades for `src/store.js` + +Source backlog item: `docs/method/backlog/bad-code/CORE_ssjr-src-store-js.md` +Legend: CORE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex B`, `P1 B`, `P2 B`, `P3 B`, `P4 B`. + +The barrel is convenient, but it is also a soft-contract choke point. Keep the export surface intentional and derived from the owning modules so the store API does not drift into an undifferentiated namespace. diff --git a/docs/design/0052-ssjr-src-store-prompt-metrics-js/ssjr-src-store-prompt-metrics-js.md b/docs/design/0052-ssjr-src-store-prompt-metrics-js/ssjr-src-store-prompt-metrics-js.md new file mode 100644 index 0000000..80c065e --- /dev/null +++ b/docs/design/0052-ssjr-src-store-prompt-metrics-js/ssjr-src-store-prompt-metrics-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/store/prompt-metrics.js`" +legend: "CORE" +cycle: "0052-ssjr-src-store-prompt-metrics-js" +source_backlog: "docs/method/backlog/bad-code/CORE_ssjr-src-store-prompt-metrics-js.md" +--- + +# Raise SSJR grades for `src/store/prompt-metrics.js` + +Source backlog item: `docs/method/backlog/bad-code/CORE_ssjr-src-store-prompt-metrics-js.md` +Legend: CORE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex B`, `P1 C`, `P2 C`, `P3 B`, `P4 C`, `P6 B`, `P7 B`. + +Prompt metrics are handled as tolerant raw records, which is useful at the boundary but too loose in the core summarization path. Introduce an explicit parsed metric record form so invalid lines are rejected once and downstream aggregation deals in trusted values. diff --git a/docs/design/0053-ssjr-src-store-remember-js/ssjr-src-store-remember-js.md b/docs/design/0053-ssjr-src-store-remember-js/ssjr-src-store-remember-js.md new file mode 100644 index 0000000..aa98727 --- /dev/null +++ b/docs/design/0053-ssjr-src-store-remember-js/ssjr-src-store-remember-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/store/remember.js`" +legend: "CORE" +cycle: "0053-ssjr-src-store-remember-js" +source_backlog: "docs/method/backlog/bad-code/CORE_ssjr-src-store-remember-js.md" +--- + +# Raise SSJR grades for `src/store/remember.js` + +Source backlog item: `docs/method/backlog/bad-code/CORE_ssjr-src-store-remember-js.md` +Legend: CORE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex B`, `P1 C`, `P2 B`, `P3 B`, `P4 B`, `P6 B`, `P7 B`. + +Remember matching is coherent, but scopes and matches are still plain objects with implied invariants. Introduce explicit runtime-backed scope and match forms so ranking and recall receipts are less dependent on convention. diff --git a/docs/design/0054-ssjr-src-cli-commands-reflect-js/ssjr-src-cli-commands-reflect-js.md b/docs/design/0054-ssjr-src-cli-commands-reflect-js/ssjr-src-cli-commands-reflect-js.md new file mode 100644 index 0000000..a52094b --- /dev/null +++ b/docs/design/0054-ssjr-src-cli-commands-reflect-js/ssjr-src-cli-commands-reflect-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/cli/commands/reflect.js`" +legend: "REFLECT" +cycle: "0054-ssjr-src-cli-commands-reflect-js" +source_backlog: "docs/method/backlog/bad-code/REFLECT_ssjr-src-cli-commands-reflect-js.md" +--- + +# Raise SSJR grades for `src/cli/commands/reflect.js` + +Source backlog item: `docs/method/backlog/bad-code/REFLECT_ssjr-src-cli-commands-reflect-js.md` +Legend: REFLECT + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex C`, `P1 C`, `P2 B`, `P3 C`, `P4 B`, `P5 B`, `P6 C`, `P7 C`. + +Reflect orchestration is still built from raw result bags and conditional branching. Introduce firmer runtime-backed session/result forms so the command layer stops reinterpreting plain objects from the store. diff --git a/docs/design/0055-ssjr-src-store-derivation-js/ssjr-src-store-derivation-js.md b/docs/design/0055-ssjr-src-store-derivation-js/ssjr-src-store-derivation-js.md new file mode 100644 index 0000000..dc0d461 --- /dev/null +++ b/docs/design/0055-ssjr-src-store-derivation-js/ssjr-src-store-derivation-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/store/derivation.js`" +legend: "REFLECT" +cycle: "0055-ssjr-src-store-derivation-js" +source_backlog: "docs/method/backlog/bad-code/REFLECT_ssjr-src-store-derivation-js.md" +--- + +# Raise SSJR grades for `src/store/derivation.js` + +Source backlog item: `docs/method/backlog/bad-code/REFLECT_ssjr-src-store-derivation-js.md` +Legend: REFLECT + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex B`, `P1 D`, `P2 C`, `P3 C`, `P4 C`, `P5 B`, `P6 B`, `P7 D`. + +Derived artifacts and receipts are currently raw objects with a lot of `kind`-driven branching. Pull seed quality, session attribution, and derived receipt concepts into runtime-backed forms so reflect derivation is less dependent on tag-switching. diff --git a/docs/design/0056-ssjr-src-store-reflect-js/ssjr-src-store-reflect-js.md b/docs/design/0056-ssjr-src-store-reflect-js/ssjr-src-store-reflect-js.md new file mode 100644 index 0000000..ab75b43 --- /dev/null +++ b/docs/design/0056-ssjr-src-store-reflect-js/ssjr-src-store-reflect-js.md @@ -0,0 +1,55 @@ +--- +title: "Raise SSJR grades for `src/store/reflect.js`" +legend: "REFLECT" +cycle: "0056-ssjr-src-store-reflect-js" +source_backlog: "docs/method/backlog/bad-code/REFLECT_ssjr-src-store-reflect-js.md" +--- + +# Raise SSJR grades for `src/store/reflect.js` + +Source backlog item: `docs/method/backlog/bad-code/REFLECT_ssjr-src-store-reflect-js.md` +Legend: REFLECT + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Current SSJR sanity check: `Hex B`, `P1 D`, `P2 C`, `P3 C`, `P4 C`, `P5 B`, `P6 B`, `P7 D`. + +Reflect sessions and entries are still modeled as mutable-looking raw objects plus `kind` checks. Introduce runtime-backed session, prompt-plan, and reflect-entry forms so reflect behavior lives on owned types instead of being spread across patch logic and conditionals. diff --git a/docs/design/0057-audit-cli-generic-errors/audit-cli-generic-errors.md b/docs/design/0057-audit-cli-generic-errors/audit-cli-generic-errors.md new file mode 100644 index 0000000..d6a460d --- /dev/null +++ b/docs/design/0057-audit-cli-generic-errors/audit-cli-generic-errors.md @@ -0,0 +1,55 @@ +--- +title: "CLI still hides too much behind a generic top-level error" +legend: "SURFACE" +cycle: "0057-audit-cli-generic-errors" +source_backlog: "docs/method/backlog/bad-code/SURFACE_audit-cli-generic-errors.md" +--- + +# CLI still hides too much behind a generic top-level error + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_audit-cli-generic-errors.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +`src/cli.js` catches unexpected failures and tells the default human path only `Something went wrong`. + +That keeps output terse, but it also weakens self-serve recovery and makes production debugging slower than necessary. diff --git a/docs/method/backlog/bad-code/SURFACE_audit-missing-pr-issue-templates.md b/docs/design/0058-SURFACE_audit-missing-pr-issue-templates/SURFACE_audit-missing-pr-issue-templates.md similarity index 77% rename from docs/method/backlog/bad-code/SURFACE_audit-missing-pr-issue-templates.md rename to docs/design/0058-SURFACE_audit-missing-pr-issue-templates/SURFACE_audit-missing-pr-issue-templates.md index 20394d3..fb03112 100644 --- a/docs/method/backlog/bad-code/SURFACE_audit-missing-pr-issue-templates.md +++ b/docs/design/0058-SURFACE_audit-missing-pr-issue-templates/SURFACE_audit-missing-pr-issue-templates.md @@ -1,3 +1,9 @@ +--- +id: SURFACE_audit-missing-pr-issue-templates +blocks: [] +blocked_by: [] +--- + # Repo is missing pull-request and issue templates `.github/` currently contains workflows, but no pull-request template and no issue templates. diff --git a/docs/design/0058-audit-missing-code-of-conduct/audit-missing-code-of-conduct.md b/docs/design/0058-audit-missing-code-of-conduct/audit-missing-code-of-conduct.md new file mode 100644 index 0000000..b17ab6e --- /dev/null +++ b/docs/design/0058-audit-missing-code-of-conduct/audit-missing-code-of-conduct.md @@ -0,0 +1,55 @@ +--- +title: "Repo is missing a `CODE_OF_CONDUCT.md`" +legend: "SURFACE" +cycle: "0058-audit-missing-code-of-conduct" +source_backlog: "docs/method/backlog/bad-code/SURFACE_audit-missing-code-of-conduct.md" +--- + +# Repo is missing a `CODE_OF_CONDUCT.md` + +Source backlog item: `docs/method/backlog/bad-code/SURFACE_audit-missing-code-of-conduct.md` +Legend: SURFACE + +## Sponsors + +- Human: Backlog operator +- Agent: Implementation agent + +## Hill + +TBD + +## Playback Questions + +### Human + +- [ ] TBD + +### Agent + +- [ ] TBD + +## Accessibility and Assistive Reading + +- Linear truth / reduced-complexity posture: TBD +- Non-visual or alternate-reading expectations: TBD + +## Localization and Directionality + +- Locale / wording / formatting assumptions: TBD +- Logical direction / layout assumptions: TBD + +## Agent Inspectability and Explainability + +- What must be explicit and deterministic for agents: TBD +- What must be attributable, evidenced, or governed: TBD + +## Non-goals + +- [ ] TBD + +## Backlog Context + +Think already has `CONTRIBUTING.md`, `CHANGELOG.md`, and `SECURITY.md`, but it still lacks the normal conduct policy file contributors expect in a public repository. + +That is process debt, not product debt, but it still degrades the repo's outer quality. diff --git a/docs/method/backlog/bad-code/SURFACE_audit-manual-agent-bootstrap.md b/docs/design/0059-batch-audit-fixes/SURFACE_audit-manual-agent-bootstrap.md similarity index 84% rename from docs/method/backlog/bad-code/SURFACE_audit-manual-agent-bootstrap.md rename to docs/design/0059-batch-audit-fixes/SURFACE_audit-manual-agent-bootstrap.md index 7eb0c37..6bb86d5 100644 --- a/docs/method/backlog/bad-code/SURFACE_audit-manual-agent-bootstrap.md +++ b/docs/design/0059-batch-audit-fixes/SURFACE_audit-manual-agent-bootstrap.md @@ -1,3 +1,9 @@ +--- +id: SURFACE_audit-manual-agent-bootstrap +blocks: [] +blocked_by: [] +--- + # Manual agent bootstrap is still hand-rolled The current onboarding story still asks users to assemble agent wrappers and MCP config manually. See the `agent-think` heredoc in `docs/GUIDE.md` and the separate MCP setup prose in `README.md`. diff --git a/docs/method/backlog/bad-code/SURFACE_audit-no-release-readiness-checklist.md b/docs/design/0059-batch-audit-fixes/SURFACE_audit-no-release-readiness-checklist.md similarity index 79% rename from docs/method/backlog/bad-code/SURFACE_audit-no-release-readiness-checklist.md rename to docs/design/0059-batch-audit-fixes/SURFACE_audit-no-release-readiness-checklist.md index 9d5c4ce..5f0c8f4 100644 --- a/docs/method/backlog/bad-code/SURFACE_audit-no-release-readiness-checklist.md +++ b/docs/design/0059-batch-audit-fixes/SURFACE_audit-no-release-readiness-checklist.md @@ -1,3 +1,9 @@ +--- +id: SURFACE_audit-no-release-readiness-checklist +blocks: [] +blocked_by: [] +--- + # Repo lacks a release-readiness smoke bundle Think has CI, tests, and design discipline, but there is no small, explicit release-readiness checklist or smoke command that proves the CLI, MCP surface, and macOS surface are all still coherent before a handoff. diff --git a/docs/method/backlog/bad-code/SURFACE_audit-readme-missing-git-requirement.md b/docs/design/0059-batch-audit-fixes/SURFACE_audit-readme-missing-git-requirement.md similarity index 80% rename from docs/method/backlog/bad-code/SURFACE_audit-readme-missing-git-requirement.md rename to docs/design/0059-batch-audit-fixes/SURFACE_audit-readme-missing-git-requirement.md index 32385e8..876c5c2 100644 --- a/docs/method/backlog/bad-code/SURFACE_audit-readme-missing-git-requirement.md +++ b/docs/design/0059-batch-audit-fixes/SURFACE_audit-readme-missing-git-requirement.md @@ -1,3 +1,9 @@ +--- +id: SURFACE_audit-readme-missing-git-requirement +blocks: [] +blocked_by: [] +--- + # README install requirements omit Git `README.md` lists Node.js and the optional macOS toolchain as requirements, but Think shells out to `git` for storage, ambient context, and backup. diff --git a/docs/method/backlog/bad-code/SURFACE_audit-readme-test-local-platform-scope.md b/docs/design/0059-batch-audit-fixes/SURFACE_audit-readme-test-local-platform-scope.md similarity index 79% rename from docs/method/backlog/bad-code/SURFACE_audit-readme-test-local-platform-scope.md rename to docs/design/0059-batch-audit-fixes/SURFACE_audit-readme-test-local-platform-scope.md index 6df0da1..90bf2b4 100644 --- a/docs/method/backlog/bad-code/SURFACE_audit-readme-test-local-platform-scope.md +++ b/docs/design/0059-batch-audit-fixes/SURFACE_audit-readme-test-local-platform-scope.md @@ -1,3 +1,9 @@ +--- +id: SURFACE_audit-readme-test-local-platform-scope +blocks: [] +blocked_by: [] +--- + # README does not mark `test:local` as Darwin-only `README.md` presents `npm run test:local` as a generic verification command, but `package.json` and `CONTRIBUTING.md` make clear that it includes the macOS Swift suite and is Darwin-only. diff --git a/docs/method/backlog/bad-code/SURFACE_audit-stdin-pola.md b/docs/design/0059-batch-audit-fixes/SURFACE_audit-stdin-pola.md similarity index 83% rename from docs/method/backlog/bad-code/SURFACE_audit-stdin-pola.md rename to docs/design/0059-batch-audit-fixes/SURFACE_audit-stdin-pola.md index ef7b57d..36acd59 100644 --- a/docs/method/backlog/bad-code/SURFACE_audit-stdin-pola.md +++ b/docs/design/0059-batch-audit-fixes/SURFACE_audit-stdin-pola.md @@ -1,3 +1,9 @@ +--- +id: SURFACE_audit-stdin-pola +blocks: [] +blocked_by: [] +--- + # Plain `think` silently ignores piped stdin `think` intentionally requires `--ingest` for stdin capture, but the current no-diagnostic behavior still violates normal shell expectations. The docs explain it, but the interface itself does not help the surprised caller. diff --git a/docs/method/backlog/bad-code/SURFACE_audit-surface-capability-docs.md b/docs/design/0059-batch-audit-fixes/SURFACE_audit-surface-capability-docs.md similarity index 81% rename from docs/method/backlog/bad-code/SURFACE_audit-surface-capability-docs.md rename to docs/design/0059-batch-audit-fixes/SURFACE_audit-surface-capability-docs.md index 7ad777b..7d7e78d 100644 --- a/docs/method/backlog/bad-code/SURFACE_audit-surface-capability-docs.md +++ b/docs/design/0059-batch-audit-fixes/SURFACE_audit-surface-capability-docs.md @@ -1,3 +1,9 @@ +--- +id: SURFACE_audit-surface-capability-docs +blocks: [] +blocked_by: [] +--- + # CLI, MCP, and macOS parity is not documented as one surface Think's surfaces are philosophically aligned, but the repo does not publish one capability matrix proving which operations exist on CLI, `--json`, MCP, and macOS. diff --git a/docs/design/0060-graph-v4-enrichment-schema/graph-v4-enrichment-schema.md b/docs/design/0060-graph-v4-enrichment-schema/graph-v4-enrichment-schema.md new file mode 100644 index 0000000..3c283e9 --- /dev/null +++ b/docs/design/0060-graph-v4-enrichment-schema/graph-v4-enrichment-schema.md @@ -0,0 +1,54 @@ +--- +title: "Graph v4: enrichment schema extension" +legend: "CORE" +cycle: "0060-graph-v4-enrichment-schema" +source_backlog: "docs/method/backlog/asap/CORE_graph-v4-enrichment-schema.md" +--- + +# Graph v4: enrichment schema extension + +Source backlog item: `docs/method/backlog/asap/CORE_graph-v4-enrichment-schema.md` +Legend: CORE + +## Sponsors + +- Human: James +- Agent: Claude + +## Hill + +The WARP graph schema supports enrichment nodes and edges at v4. +Migration creates standing classification nodes. No enrichment +logic yet — just the schema. + +## Playback Questions + +### Agent + +- [ ] Does GRAPH_MODEL_VERSION = 4? +- [ ] Does migration create 7 classification nodes? +- [ ] Does PRODUCT_READ_LENS include all new prefixes? +- [ ] Do existing tests still pass after migration? +- [ ] Does --doctor report graph model v4 after migration? + +## All postures + +Not applicable — internal schema extension. + +## Backlog Context + +Extend the WARP graph schema for the enrichment pipeline. No new +features — just the foundation so enrichment stages have somewhere +to write. + +- New node prefixes: topic, classification, entity, annotation, + link, evolution, pipeline_run +- New edge labels: about, classified_as, mentions, annotates, + links_from, links_to, evolves, enriches, alias_of, covers, + similar_to, summarizes +- Standing classification nodes (7): question, decision, + observation, action_item, idea, reference, unclassified +- Match lens update for PRODUCT_READ_LENS +- Graph model version 3 → 4 migration + +Design: docs/design/enrichment-pipeline.md diff --git a/docs/design/0061-annotate-command/annotate-command.md b/docs/design/0061-annotate-command/annotate-command.md new file mode 100644 index 0000000..79bac56 --- /dev/null +++ b/docs/design/0061-annotate-command/annotate-command.md @@ -0,0 +1,59 @@ +--- +title: "think --annotate" +legend: "CORE" +cycle: "0061-annotate-command" +source_backlog: "docs/method/backlog/asap/CORE_annotate-command.md" +--- + +# think --annotate + +Source backlog item: `docs/method/backlog/asap/CORE_annotate-command.md` +Legend: CORE + +## Sponsors + +- Human: James +- Agent: Claude + +## Hill + +A user can attach a note to an existing capture via `--annotate` +and see it when inspecting that entry. + +## Playback Questions + +### Human + +- [ ] Can I annotate a capture from the CLI? +- [ ] Does --inspect show annotations? + +### Agent + +- [ ] Does --json --annotate emit structured JSONL? +- [ ] Does the annotation create a graph node with an annotates edge? +- [ ] Does annotation text survive as attached content? +- [ ] Does annotation reject empty text? +- [ ] Does annotation reject a nonexistent entry? + +## All postures + +Not applicable — CLI command, no visual/locale concern. + +## Non-goals + +- No browse TUI `a` key in this cycle (follow-up) +- No MCP tool in this cycle (follow-up) + +## Backlog Context + +First enrichment surface. Attach a user-authored note to an existing +capture without mutating the original. + +``` +think --annotate= "this turned out to be wrong" +``` + +New node: `annotation:` with `annotates` edge to the +target entry. CLI text, --json, MCP tool, and browse TUI (`a` key). + +Design: docs/design/enrichment-pipeline.md (step 2) diff --git a/docs/design/0062-auto-tags-stage/auto-tags-stage.md b/docs/design/0062-auto-tags-stage/auto-tags-stage.md new file mode 100644 index 0000000..f9e1a78 --- /dev/null +++ b/docs/design/0062-auto-tags-stage/auto-tags-stage.md @@ -0,0 +1,120 @@ +--- +title: "auto_tags enrichment stage" +legend: "CORE" +cycle: "0062-auto-tags-stage" +source_backlog: "docs/method/backlog/asap/CORE_auto-tags-stage.md" +--- + +# auto_tags enrichment stage + +Source backlog item: `docs/method/backlog/asap/CORE_auto-tags-stage.md` +Legend: CORE + +## Sponsors + +- Human: James +- Agent: Claude + +## Hill + +After capture, the enrichment pipeline extracts topic keywords from +the thought and links the thought to topic nodes in the graph. Users +can query "thoughts about X" by traversing topic graph edges. + +## Playback Questions + +### Human + +- [ ] After capturing a thought about "performance optimization", + can I find it by querying topic:performance? +- [ ] Do topics only become graph nodes after appearing in multiple + thoughts (promotion threshold)? + +### Agent + +- [ ] Does `extractTopics(text, corpus)` return relevant keywords + without an LLM? +- [ ] Does the auto_tags stage create `about` edges from thoughts + to topic nodes? +- [ ] Does a receipt artifact track what was extracted and when? +- [ ] Are candidate topics below the threshold stored on the receipt + (not as graph nodes)? +- [ ] Does re-running the stage on the same thought produce the same + result (idempotent)? +- [ ] Does a new CLI command (`--topics`) list all promoted topics? + +## Accessibility and Assistive Reading + +- Linear truth: topics are plain text labels. No visual-only + representation. +- `--json --topics` provides machine-readable topic list. + +## Localization and Directionality + +- Topic names are normalized to lowercase. No locale-specific + normalization in this cycle. + +## Agent Inspectability and Explainability + +- The `auto_tags` receipt artifact records: extracted topics, method + used, topics promoted to nodes, deriver version. +- `--inspect` shows the receipt alongside other derivations. + +## Non-goals + +- No LLM — keyword extraction is corpus-statistical or pattern-based +- No TF-IDF in this first cut — use simple noun-phrase extraction + with stopword filtering. TF-IDF requires a corpus index that + doesn't exist yet. +- No topic merging/aliases (separate cycle) +- No browse TUI topic panel (separate cycle) + +## Design + +### Topic extraction algorithm (v1: simple) + +1. Lowercase the thought text +2. Split on whitespace and punctuation +3. Remove stopwords (common English words) +4. Remove tokens < 3 characters +5. Count unique tokens +6. Return tokens that appear in the text, sorted by position + +This is intentionally simple. TF-IDF and noun-phrase extraction +are future improvements — the stage architecture supports swapping +the algorithm via `deriverVersion`. + +### Promotion threshold + +Default: 2. A topic becomes a graph node after it appears across +2+ distinct thoughts. Below that, the topic name is stored in the +`auto_tags` receipt as a candidate. + +When a candidate crosses the threshold during a pipeline run, the +stage: +1. Creates `topic:` node +2. Backfills `about` edges for all prior thoughts that had the + candidate in their receipt + +### Graph mutations per thought + +``` +thought --about--> topic: (one per promoted topic) +artifact --derived_from--> thought (receipt) +``` + +### CLI: `--topics` + +```bash +think --topics # list all promoted topics +think --topics --json # JSONL topic list +``` + +### Files to create/modify + +- `src/store/enrichment/auto-tags.js` — extraction + stage logic +- `src/store/enrichment/stopwords.js` — stopword list +- `src/cli/commands/read.js` — `runTopics` command +- `src/cli/options.js` — `--topics` flag +- `src/cli.js` — dispatch +- `src/store/constants.js` — topic promotion threshold constant diff --git a/docs/design/0063-semantic-parse-stage/semantic-parse-stage.md b/docs/design/0063-semantic-parse-stage/semantic-parse-stage.md new file mode 100644 index 0000000..1ac271d --- /dev/null +++ b/docs/design/0063-semantic-parse-stage/semantic-parse-stage.md @@ -0,0 +1,94 @@ +--- +title: "semantic_parse enrichment stage" +legend: "CORE" +cycle: "0063-semantic-parse-stage" +source_backlog: "docs/method/backlog/asap/CORE_semantic-parse-stage.md" +--- + +# semantic_parse enrichment stage + +Source backlog item: `docs/method/backlog/asap/CORE_semantic-parse-stage.md` +Legend: CORE + +## Sponsors + +- Human: James +- Agent: Claude + +## Hill + +After enrichment, every thought has at least one `classified_as` +edge to a standing classification node. Users can query "all +questions" or "all decisions" by traversing the graph. + +## Playback Questions + +### Human + +- [ ] After enriching, can I find all my questions? +- [ ] Does a thought get multiple classifications when it matches + multiple patterns? + +### Agent + +- [ ] Does `classifyThought(text)` return correct types for + questions, decisions, observations, action items, and ideas? +- [ ] Does a thought that matches no pattern get `unclassified`? +- [ ] Does the enrichment pipeline create `classified_as` edges? +- [ ] Does a receipt artifact track the classification result? +- [ ] Is the stage idempotent (re-run doesn't duplicate edges)? + +## Accessibility and Assistive Reading + +- Linear truth: classifications are plain text labels. +- `--json` provides machine-readable classification data. + +## Localization and Directionality + +- English-only pattern matching in this cycle. + +## Agent Inspectability and Explainability + +- The `semantic_parse` receipt artifact records: classifications + matched, patterns triggered, confidence (1.0 for pattern match). + +## Non-goals + +- No LLM — pattern matching only +- No entity extraction (separate stage) +- No browse TUI classification filter (separate cycle) + +## Design + +### Classification patterns + +| Type | Patterns | +|------|----------| +| `question` | Contains `?`, starts with "how", "what", "why", "when", "where", "who", "can", "should", "could", "would", "is there", "do we" | +| `decision` | Contains "I decided", "we decided", "decision:", "going with", "chose to", "picking" | +| `observation` | Contains "I noticed", "I observed", "it seems", "turns out", "interesting that", "realized" | +| `action_item` | Contains "need to", "todo", "must", "should do", "action:", "next step", "follow up" | +| `idea` | Contains "what if", "idea:", "concept:", "maybe we could", "imagine", "proposal" | +| `reference` | Contains URL (http/https), "see:", "ref:", "link:", "source:" | + +A thought can match multiple patterns → multiple `classified_as` +edges. If no pattern matches → `classification:unclassified`. + +### Graph mutations + +``` +thought --classified_as--> classification:question +thought --classified_as--> classification:action_item (multi-class) +artifact --derived_from--> thought (receipt) +``` + +### Integration with enrichment pipeline + +Add as a second stage in `runEnrichmentPipeline()`, after auto_tags. +Uses the same capture iteration — no extra graph reads. + +### Files to create/modify + +- `src/store/enrichment/semantic-parse.js` — classifyThought + patterns +- `src/store/enrichment/runner.js` — add stage to pipeline +- Tests: port-level for classifyThought, acceptance for graph edges diff --git a/docs/design/0064-recent-default-limit/recent-default-limit.md b/docs/design/0064-recent-default-limit/recent-default-limit.md new file mode 100644 index 0000000..96b2601 --- /dev/null +++ b/docs/design/0064-recent-default-limit/recent-default-limit.md @@ -0,0 +1,56 @@ +--- +title: "--recent loads entire history by default" +legend: "CORE" +cycle: "0064-recent-default-limit" +source_backlog: "docs/method/backlog/asap/CORE_recent-default-limit.md" +--- + +# --recent loads entire history by default + +Source backlog item: `docs/method/backlog/asap/CORE_recent-default-limit.md` +Legend: CORE + +## Sponsors + +- Human: James +- Agent: Claude + +## Hill + +`--recent` shows a bounded window with a total count, not the +entire archive. + +## Playback Questions + +### Agent + +- [ ] Does --recent without --count default to 50? +- [ ] Does text output show a trailer with total count? +- [ ] Does --json output include a total count? +- [ ] Does --count=N still override the default? + +## All postures + +Not applicable — bugfix. + +## Non-goals + +- Not fixing the capture buffer limit (git-warp issue) + +## Backlog Context + +`listRecent` calls `listEntriesByKind(read, 'capture')` which +materializes every capture in the graph. With a large repo (317MB +codex mind), this hits the git-warp 10MB buffer limit. + +Fix: default to last 50 entries. Show total count in output so the +user knows how many more exist. `--count=N` already works for +explicit limits. + +Output should include a trailer: +``` +(showing 50 of 1234 captures) +``` + +Triggered by: codex-think capture failing with "Buffer limit +exceeded: 10485760 bytes" on a 317MB repo. diff --git a/docs/design/0065-eliminate-full-graph-materialization/eliminate-full-graph-materialization.md b/docs/design/0065-eliminate-full-graph-materialization/eliminate-full-graph-materialization.md new file mode 100644 index 0000000..aec442f --- /dev/null +++ b/docs/design/0065-eliminate-full-graph-materialization/eliminate-full-graph-materialization.md @@ -0,0 +1,132 @@ +--- +title: "Eliminate full graph materialization anti-pattern" +legend: "CORE" +cycle: "0065-eliminate-full-graph-materialization" +source_backlog: "docs/method/backlog/asap/CORE_eliminate-full-graph-materialization.md" +--- + +# Eliminate full graph materialization anti-pattern + +Source backlog item: `docs/method/backlog/asap/CORE_eliminate-full-graph-materialization.md` +Legend: CORE + +## Sponsors + +- Human: James +- Agent: Claude + +## Hill + +Zero calls to `getNodes()` or `getEdges()` in Think. All reads +use the worldline query API. Capture works on repos of any size. + +## Playback Questions + +### Human + +- [ ] Can the codex mind (317MB) capture without buffer errors? + +### Agent + +- [ ] Does `grep -r 'getNodes\|getEdges' src/` return zero hits? +- [ ] Does migration use worldline queries instead of full scan? +- [ ] Does enrichment use worldline queries instead of full scan? +- [ ] Does annotation lookup use edge traversal instead of full scan? +- [ ] Do all existing tests pass? + +## All postures + +Not applicable — internal refactor for correctness. + +## Non-goals + +- Not changing the migration logic — just how it reads the graph + +## Design + +### Migration (migrations.js) + +Replace: +```js +const nodes = await graph.getNodes(); +const edges = await graph.getEdges(); +``` + +With targeted queries per node kind: +```js +const worldline = app.worldline(); +const captures = await worldline.query().match('entry:*').where({ kind: 'capture' }).run(); +const sessions = await worldline.query().match('reflect:*').run(); +const meta = await worldline.getNodeProps('meta:graph'); +``` + +Then check for missing edges using `.outgoing()` / `.incoming()` +traversal on specific nodes instead of building a full edge Set. + +### Enrichment (runner.js) + +Replace `getNodes()` + `getEdges()` with: +```js +const captures = await listEntriesByKind(read, 'capture'); +// Already uses query API internally +``` + +For existing receipt/edge checks, query specific patterns: +```js +const receipts = await worldline.query().match('artifact:*').where({ kind: 'auto_tags' }).run(); +``` + +### Annotation lookup (queries.js) + +Replace `read.view.getEdges()` with incoming edge traversal: +```js +const annotations = await worldline.query() + .match(entryId) + .incoming('annotates') + .run(); +``` + +### Files to modify + +- `src/store/migrations.js` — most critical (causes capture crash) +- `src/store/enrichment/runner.js` — 4 violations +- `src/store/queries.js` — 1 violation + +## Backlog Context + +Think calls `getNodes()`, `getEdges()`, and iterates full prop maps +in multiple places. This dumps the entire WARP graph into memory, +violating the git-warp GUIDE's prescribed read patterns (worldline +query API with match/where/outgoing/incoming). + +On a 317MB codex repo, this exceeds the 10MB buffer limit and +crashes capture. + +## Violations + +| File | Anti-pattern | +|------|-------------| +| `src/store/migrations.js:14-25` | `graph.getNodes()` + `graph.getEdges()` + full props scan into Map | +| `src/store/enrichment/runner.js:20` | `read.view.getEdges()` | +| `src/store/enrichment/runner.js:24` | `read.view.getNodes()` | +| `src/store/enrichment/runner.js:62` | `read.view.getNodes()` again | +| `src/store/enrichment/runner.js:214-215` | `read.view.getEdges()` + `read.view.getNodes()` | +| `src/store/queries.js:293` | `read.view.getEdges()` for annotation lookup | + +## Correct pattern (from git-warp GUIDE) + +```js +const worldline = app.worldline(); +const results = await worldline.query() + .match('entry:*') + .where({ kind: 'capture' }) + .run(); +``` + +Use `.outgoing()` / `.incoming()` for edge traversal instead of +loading all edges and filtering in JS. + +## Priority + +Critical — blocks capture on large repos. The codex mind (317MB) +cannot capture because migration triggers full materialization. diff --git a/docs/design/enrichment-pipeline.md b/docs/design/enrichment-pipeline.md new file mode 100644 index 0000000..770af73 --- /dev/null +++ b/docs/design/enrichment-pipeline.md @@ -0,0 +1,742 @@ +# Enrichment Pipeline Design + +A graph-native enrichment system for Think that models the pipeline +itself, its inputs, and its outputs as WARP graph nodes and edges. + +## Principles + +1. **Raw captures are immutable.** Enrichment never mutates a capture + node. All enrichment produces new derived nodes linked to the + original via named edges. +2. **The pipeline is in the graph.** Pipeline runs, stage results, + and scheduling decisions are themselves WARP nodes — inspectable, + versionable, and auditable. +3. **Semantic objects are graph nodes.** Topics, classifications, + and entity types are first-class nodes, not properties buried + in JSON. Finding "thoughts about X" is a graph traversal, not + a table scan. +4. **Provenance is explicit.** Every enrichment artifact records what + produced it, what version of the enrichment logic ran, and what + inputs it consumed. +5. **No LLM is required.** Lightweight enrichment (topics, semantic + parse, linking) works without an LLM. LLM-assisted enrichment is + opt-in, clearly marked, and separable. + +--- + +## Graph Extension: Semantic Object Nodes + +Semantic objects are first-class graph nodes. This means "find all +thoughts about performance" is `traverse incoming edges of +topic:performance` — not a scan of every artifact's JSON properties. + +### Thought-level vs entry-level enrichment + +Think has two identity layers: + +- **Entry** (`entry:`) — the capture *event*. Unique per capture. + Two identical texts produce two entries. +- **Thought** (`thought:`) — the canonical *content*. + Two identical texts produce one thought. + +Semantic enrichment operates on **thoughts** (content-level): +`thought --about--> topic`, `thought --classified_as--> classification`. +This means deduplication is automatic — identical captures don't +produce duplicate topic edges. + +User-authored enrichment operates on **entries** (event-level): +`annotation --annotates--> entry`, `evolution --evolves--> entry`. +This means you can annotate one specific capture without affecting +other captures of the same text. + +The bridge is the existing `expresses` edge: +`entry --expresses--> thought --about--> topic`. + +### `topic:` + +A topic that thoughts can be about. Topics have a **promotion +threshold** — a topic candidate only becomes a graph node after it +appears across N thoughts (default: 2). Below the threshold, topic +candidates live as properties on the `auto_tags` receipt artifact. + +``` +Properties: + kind = 'topic' + name = 'performance' (display name) + normalizedName = 'performance' (lowercase, deduplication key) + createdAt = ISO timestamp + source = 'auto_tags' | 'user' (who created it) + mentionCount = number (how many thoughts reference this) + +Edge: + thought --about--> topic:performance +``` + +Identity: `topic:`. Deterministic — same topic name +always resolves to the same node. + +**Promotion**: the `auto_tags` stage tracks candidate counts. When a +candidate crosses the threshold, the stage creates the topic node +and backfills `about` edges for all prior thoughts that mentioned it. + +**Merging / aliases**: topics can be merged. A `topic --alias_of--> +topic` edge redirects queries. `think --merge-topics perf performance` +moves all `about` edges from `topic:perf` to `topic:performance` and +adds the alias edge. + +Finding all thoughts about a topic: + +``` +graph.query() + .match('topic:performance') + .traverse({ direction: 'incoming', label: 'about' }) + .run() +``` + +### `classification:` + +A semantic type that thoughts can be classified as. Finite set, +created at graph model v4 migration. A thought can have **multiple** +`classified_as` edges (e.g., both a question and an action item). + +``` +Nodes: + classification:question + classification:decision + classification:observation + classification:action_item + classification:idea + classification:reference + classification:unclassified + +Properties: + kind = 'classification' + name = 'question' + +Edge: + thought --classified_as--> classification:question + thought --classified_as--> classification:action_item (multi-class) +``` + +All thoughts get at least one `classified_as` edge. Thoughts that +don't match any pattern get `classification:unclassified` so they're +still reachable in the graph. + +Finding all questions: + +``` +graph.query() + .match('classification:question') + .traverse({ direction: 'incoming', label: 'classified_as' }) + .run() +``` + +### `entity::` + +Named entities extracted from thought text — people, projects, +tools, concepts. + +**This is a separate opt-in stage**, not bundled with +`semantic_parse`. Entity extraction on short informal text is +unreliable without an LLM. Classifying "is this a question?" is +cheap pattern matching; extracting "this mentions git-warp" is NER +and requires more confidence. + +``` +Examples: + entity:project:git-warp + entity:tool:bijou + entity:person:james + entity:concept:capture-latency + +Properties: + kind = 'entity' + entityType = 'project' | 'tool' | 'person' | 'concept' + name = 'git-warp' + normalizedName = 'git-warp' + createdAt = ISO timestamp + +Edge: + thought --mentions--> entity:project:git-warp +``` + +Finding all thoughts that mention git-warp: + +``` +graph.query() + .match('entity:project:git-warp') + .traverse({ direction: 'incoming', label: 'mentions' }) + .run() +``` + +### Cross-semantic traversal + +Because topics, classifications, and entities are all nodes with +edges, you can compose queries: + +- "All questions about performance": + `topic:performance <--about-- thought --classified_as--> classification:question` + +- "All thoughts mentioning git-warp in the last week": + `entity:project:git-warp <--mentions-- thought` filtered by + `createdAt` + +- "Topics I haven't thought about in 30 days": + `topic:* <--about-- thought` where latest thought's `createdAt` + is older than 30 days + +--- + +## Graph Extension: Enrichment Artifact Nodes + +### `artifact:` (new kinds) + +Enrichment artifacts use the existing `artifact:` prefix with new +`kind` values. Identity follows the established pattern: + +``` +artifactId = artifact: +``` + +| Kind | Purpose | Primary Input | Discriminator | +|------|---------|---------------|---------------| +| `auto_tags` | Tag extraction run receipt | `thought:` | — | +| `semantic_parse` | Classification run receipt | `thought:` | — | +| `auto_annotation` | One-line gist/summary of a thought | `thought:` | — | +| `auto_link` | Detected similarity to another thought | `thought:` | `relatedThoughtId` | +| `revisit_score` | Priority score for revisit scheduling | `entry:` | — | +| `summary` | Aggregated digest of multiple entries | `pipeline_run:` | — | + +Note: `auto_tags` and `semantic_parse` artifacts are *receipts* of +the enrichment run. The actual semantic data lives on the topic and +classification nodes and their edges. The artifact records what was +extracted, when, and by what version — so re-runs can detect drift. + +### `annotation:` + +User-authored annotations on existing captures. + +``` +Properties: + kind = 'annotation' + source = 'annotation' + channel = 'cli' | 'mcp' | 'tui' + writerId = + createdAt = ISO timestamp + sortKey = + targetEntryId = entry: (what this annotates) + text = + +Edge: + annotation --annotates--> entry: +``` + +### `link:` + +Explicit user-created relationship between two thoughts. + +``` +Properties: + kind = 'link' + source = 'link' + writerId = + createdAt = ISO timestamp + fromEntryId = entry: + toEntryId = entry: + linkType = 'relates_to' | 'contradicts' | 'extends' | 'replaces' | 'inspired_by' + description = optional text + +Edges: + link --links_from--> entry: + link --links_to--> entry: +``` + +### `evolution:` + +A new thought with explicit lineage to an older one. + +``` +Properties: + kind = 'evolution' + source = 'evolution' + writerId = + createdAt = ISO timestamp + sortKey = + seedEntryId = entry: (what this evolves from) + text = + +Edges: + evolution --evolves--> entry: + evolution --expresses--> thought: (same canonical pattern) +``` + +### `pipeline_run:` + +A single node per enrichment execution. Stage results are properties +on the run node, not separate nodes — this avoids 2N metadata nodes +for N thoughts. + +``` +Properties: + kind = 'pipeline_run' + source = 'enrichment' + writerId = + createdAt = ISO timestamp + completedAt = ISO timestamp | null + status = 'running' | 'completed' | 'failed' + trigger = 'capture_followthrough' | 'scheduled' | 'manual' + stagesJson = JSON { stageName: { status, durationMs, artifactCount, error } } + targetEntryCount = number + errorMessage = null | string + +Edges: + pipeline_run --enriches--> entry: (one per target entry) +``` + +Stage-level detail lives in `stagesJson` instead of as separate +nodes. This keeps the graph focused on content relationships. +Pipeline runs are audit records — queryable but not the primary +navigation surface. + +--- + +## Graph Extension: New Edge Labels + +### Semantic edges (traversable for queries) + +| Edge | From | To | Meaning | +|------|------|----|---------| +| `about` | thought | topic | This thought is about this topic | +| `classified_as` | thought | classification | This thought is this type | +| `mentions` | thought | entity | This thought mentions this entity | + +### Enrichment edges + +| Edge | From | To | Meaning | +|------|------|----|---------| +| `annotates` | annotation | entry | This annotation comments on this capture | +| `links_from` | link | entry | Source end of an explicit link | +| `links_to` | link | entry | Target end of an explicit link | +| `evolves` | evolution | entry | This thought evolved from that one | +| `similar_to` | artifact (auto_link) | thought | Detected similarity | +| `summarizes` | artifact (summary) | entry | This summary covers this entry | +| `covers` | artifact (summary) | topic | This summary covers this topic | + +### Pipeline and topic management edges + +| Edge | From | To | Meaning | +|------|------|----|---------| +| `enriches` | pipeline_run | entry | This pipeline run processed this entry | +| `alias_of` | topic | topic | This topic is a synonym of that topic | + +Existing edges (`derived_from`, `contextualizes`, `expresses`) are +reused where applicable. + +### Query patterns enabled by semantic nodes + +``` +# All thoughts about a topic +topic:performance <--about-- thought:* + +# All questions +classification:question <--classified_as-- thought:* + +# All thoughts mentioning a project +entity:project:git-warp <--mentions-- thought:* + +# Questions about performance +topic:performance <--about-- thought --classified_as--> classification:question + +# Topics covered by a summary +summary --covers--> topic:* + +# Dormant topics (no recent thoughts) +topic:* <--about-- thought (where latest createdAt > 30d ago) + +# Related thoughts via shared topics +thought:A --about--> topic:X <--about-- thought:B + +# Evolution chain +entry:newest --evolves--> entry:older --evolves--> entry:oldest +``` + +--- + +## Pipeline Architecture + +### Trigger Modes + +1. **Capture follow-through** — runs inline after raw save, like + `seed_quality` and `session_attribution` today. Only lightweight + stages: `auto_tags`, `semantic_parse`. + +2. **Manual** — `think --enrich` or `think --enrich=`. + Runs all stages on specified entries or un-enriched captures. + +3. **Scheduled** — cron or idle-tick. Processes the backlog of + un-enriched captures. Generates summaries at configurable + intervals. + +### Stage Dependency Graph + +``` +capture + ↓ (follow-through, no LLM) + ├── auto_tags → creates topic nodes + about edges + ├── semantic_parse → creates classified_as edges + ↓ (background, no LLM) + ├── auto_annotation (needs: auto_tags) + ├── auto_link (needs: auto_tags, corpus) + ├── revisit_score (needs: auto_tags, semantic_parse, age) + ↓ (opt-in, needs LLM) + ├── entity_extraction → creates entity nodes + mentions edges + ↓ (scheduled) + └── summary (needs: multiple entries, auto_tags) +``` + +Entity extraction is a separate opt-in stage, not bundled with +semantic_parse. Classifying "is this a question?" is cheap pattern +matching. Extracting "this mentions git-warp" is NER on informal +text and needs higher confidence (LLM or curated dictionary). + +### Stage Contracts + +Each stage is a function: + +```js +async function stageAutoTags(entry, context) { + // Returns: { artifacts: [{ kind, properties, edges }], skipped: bool } +} +``` + +The pipeline runner: +1. Creates a `pipeline_run` node +2. For each stage in order: + a. Creates a `pipeline_stage` node + b. Calls the stage function + c. Writes produced artifacts to the graph + d. Updates stage status +3. Updates `pipeline_run` status + +### Idempotency + +Artifact IDs are deterministic (hash of kind + input + version). +Re-running a stage for the same input and version produces the same +artifact ID. The graph treats `addNode` on an existing ID as a +no-op, so re-runs are safe. + +When the `deriverVersion` changes, a new artifact ID is generated +and both old and new coexist. Consumers read the latest version. + +--- + +## Enrichment Stages + +### 1. `auto_tags` (follow-through, no LLM) + +Extract topic keywords and create/link topic graph nodes. + +**Graph mutations:** +1. For each extracted topic, ensure `topic:` node exists +2. Add `about` edge from `thought:` to each `topic:` +3. Create `auto_tags` artifact as a receipt of the extraction + +``` +Receipt artifact kind: 'auto_tags' +Properties: + topicsExtracted = JSON array of topic names + method = 'tf-idf' | 'noun-phrase' | 'keyword-extraction' + topicNodesCreated = number (new topics added to graph) + +Edges: + thought --about--> topic: (one per extracted topic) + artifact --derived_from--> thought: (receipt provenance) +``` + +Algorithm: TF-IDF against the existing corpus. Top N keywords +above a threshold. Falls back to simple noun-phrase extraction +if corpus is too small. + +### 2. `semantic_parse` (follow-through, no LLM) + +Classify the structural type and link to classification nodes. +A thought can receive **multiple** `classified_as` edges. + +**Graph mutations:** +1. Add `classified_as` edge(s) from `thought:` to matching + `classification:` node(s) +2. If no pattern matches, add `classified_as` edge to + `classification:unclassified` +3. Create `semantic_parse` artifact as a receipt + +``` +Receipt artifact kind: 'semantic_parse' +Properties: + classifications = JSON array of matched types + confidence = JSON object { type: score } + markers = JSON array of matched patterns + +Edges: + thought --classified_as--> classification: (one or more) + artifact --derived_from--> thought: +``` + +Algorithm: Pattern matching on linguistic markers. "How do I..." → +question. "I decided to..." → decision. "Need to..." → action +item. Similar to existing `REFLECT_MARKERS` but broader. Multiple +patterns can match the same thought. + +### 2b. `entity_extraction` (opt-in, needs LLM or dictionary) + +Extract named entities and create entity graph nodes. **Separate +stage** from `semantic_parse` — requires higher confidence. + +**Graph mutations:** +1. For each extracted entity, ensure `entity::` node + exists +2. Add `mentions` edge from `thought:` to each entity + +``` +Receipt artifact kind: 'entity_extraction' +Properties: + entitiesExtracted = JSON array of { type, name, entityId } + method = 'llm' | 'dictionary' | 'pattern' + llmModel = null | string + +Edges: + thought --mentions--> entity:: + artifact --derived_from--> thought: +``` + +Approaches (in order of reliability): +- **Dictionary**: curated list of known projects, tools, people. + High precision, no coverage for new entities. +- **Pattern**: regex for common formats (GitHub URLs → project, + `@mentions` → person). Medium precision. +- **LLM**: full NER with explicit model provenance. Best coverage, + requires opt-in. + +### 3. `auto_annotation` (background, optional LLM) + +Generate a one-line gist of a thought. + +``` +Artifact kind: 'auto_annotation' +Properties: + gist = string (one sentence) + method = 'extractive' | 'llm' + llmModel = null | string (if LLM-generated) + +Edge: + artifact --derived_from--> thought: +``` + +Without LLM: first sentence or first N words. +With LLM: one-sentence summary with explicit model provenance. + +### 4. `auto_link` (background, no LLM) + +Detect similar thoughts in the archive. + +``` +Artifact kind: 'auto_link' +Properties: + relatedThoughtId = thought: + similarity = number (0-1) + sharedTags = JSON array of common tags + method = 'tag-overlap' | 'embedding-cosine' + +Edges: + artifact --derived_from--> thought: + artifact --similar_to--> thought: +``` + +Algorithm: tag overlap from `auto_tags`. If embeddings are +available (opt-in), cosine similarity. Threshold for link +creation configurable. + +### 5. `revisit_score` (background, no LLM) + +Score a capture for revisit priority. + +``` +Artifact kind: 'revisit_score' +Properties: + score = number (0-100) + factors = JSON object { age, seedQuality, topicActivity, annotationCount } + +Edge: + artifact --contextualizes--> entry: +``` + +Factors: +- Age: older thoughts score higher (exponential decay) +- Seed quality: `likely_reflectable` scores higher than `weak_note` +- Topic activity: thoughts in active topics score lower (already + being worked on) +- Annotation count: un-annotated thoughts score higher + +### 6. `summary` (scheduled, optional LLM) + +Aggregate digest of a time period or topic. + +``` +Artifact kind: 'summary' +Properties: + scope = 'session' | 'daily' | 'weekly' | 'monthly' | 'topic' + scopeKey = session: | '2026-04-11' | 'architecture' + text = + entryCount = number + method = 'deterministic' | 'llm' + +Edges: + artifact --summarizes--> entry: (one per source entry) + artifact --produced_by--> pipeline_run: +``` + +Without LLM: bullet list of entries grouped by topic/session, +with auto-annotation gists. +With LLM: narrative synthesis with explicit model provenance. + +--- + +## CLI Surface + +```bash +# Manual enrichment +think --enrich # enrich all un-enriched captures +think --enrich= # enrich a specific capture +think --enrich --stage=auto_tags # run only one stage + +# Semantic queries (graph traversal) +think --topics # list all topics with thought counts +think --topic=performance # list thoughts about performance +think --questions # list all thoughts classified as questions +think --about=architecture # alias for --topic +think --mentions=git-warp # list thoughts mentioning an entity + +# Topic management +think --merge-topics perf performance # merge perf into performance + +# Annotations +think --annotate= "my note" + +# Links +think --link "because..." +think --link --type=extends + +# Evolution +think --evolve= "refined version of this idea" + +# Revisit +think --revisit # surface a thought to revisit +think --revisit --since=30d + +# Summaries +think --summarize --since=1d +think --summarize --session= +think --summarize --topic=architecture + +# Inspection +think --inspect= # now shows enrichment artifacts +``` + +### `--json` Output + +All enrichment commands emit JSONL events following existing +conventions: + +```json +{"event":"enrich.start","entryId":"entry:...","stages":["auto_tags","semantic_parse"]} +{"event":"enrich.stage.done","stage":"auto_tags","artifactId":"artifact:...","tags":["perf","warp"]} +{"event":"enrich.done","entryId":"entry:...","artifactsCreated":2} +``` + +--- + +## MCP Surface + +New tools: + +| Tool | Input | Output | +|------|-------|--------| +| `enrich` | `entryId?`, `stages?` | Pipeline run result | +| `annotate` | `entryId`, `text` | Annotation entry | +| `link` | `fromEntryId`, `toEntryId`, `type?`, `description?` | Link node | +| `evolve` | `entryId`, `text` | Evolution entry | +| `revisit` | `since?` | Entry to revisit | +| `summarize` | `scope`, `scopeKey?`, `since?` | Summary artifact | + +--- + +## Browse TUI Surface + +New panels and keys: + +| Key | Action | +|-----|--------| +| `a` | Annotate current thought | +| `e` | Show enrichment panel (tags, parse, links, score) | +| `t` | Filter by tag | + +The enrichment panel shows: +- Auto-tags as a tag bar +- Semantic classification badge +- Linked thoughts with descriptions +- Revisit score +- Annotations (chronological) +- Evolution chain (if any) + +--- + +## Migration + +### Graph Model Version 4 + +New edge labels, node kinds, and semantic object nodes require a +graph model version bump. Migration from v3 → v4: + +1. Add new prefixes to the match lens: + - `topic:*` + - `classification:*` + - `entity:*` + - `annotation:*` + - `link:*` + - `evolution:*` + - `pipeline_run:*` +2. Create the 7 standing `classification:*` nodes (question, + decision, observation, action_item, idea, reference, + unclassified) +3. No existing data changes — enrichment is purely additive +4. Set `graphModelVersion = 4` on `meta:graph` + +### Backfill + +On first `--enrich` or scheduled run, the pipeline processes all +existing captures that lack enrichment artifacts. This is a one-time +catch-up, not a migration. + +--- + +## Implementation Sequence + +1. **Graph v4 migration**: new constants, edge labels, node kinds, + match lens, standing classification nodes +2. **Annotate**: simplest enrichment — proves the derived-node + pattern for user-authored content +3. **auto_tags**: topic node creation with promotion threshold, + `about` edges, corpus-relative extraction +4. **semantic_parse**: classification edges, multi-class support, + pattern-based +5. **Pipeline runner**: `pipeline_run` nodes, stage orchestration, + idempotent re-runs +6. **auto_link + auto_annotation**: background stages +7. **revisit_score + --revisit**: scheduling and re-encounter +8. **summary**: aggregation stage with `covers` edges to topics +9. **Link + Evolve**: explicit user-authored relationship types +10. **Topic management**: merge, alias, prune dormant topics +11. **entity_extraction**: opt-in NER with dictionary/LLM backends +12. **Browse enrichment panel**: TUI integration (tags, class, + links, annotations, evolution chain) +13. **LLM opt-in**: entity extraction, richer annotations and + summaries with model provenance diff --git a/docs/method/backlog/asap/CORE_repair-v17-git-warp-minds.md b/docs/method/backlog/asap/CORE_repair-v17-git-warp-minds.md new file mode 100644 index 0000000..4cecaea --- /dev/null +++ b/docs/method/backlog/asap/CORE_repair-v17-git-warp-minds.md @@ -0,0 +1,188 @@ +--- +id: CORE_repair-v17-git-warp-minds +blocks: + - CORE_validate-daily-capture-habit + - SURFACE_track-reentry-friction +blocked_by: [] +--- + +# CORE - Repair Think minds after git-warp v17 + +Legend: CORE + +## Idea + +Add a documented, repeatable repair path for local Think minds after +upgrading their backing `git-warp` dependency to v17. A broken mind +currently presents as a schema error during re-entry: + +```sh +claude-think --remember --json +``` + +Expected failure shape: + +```text +Checkpoint is schema:4. Only schema:5 checkpoints are supported. +``` + +The operator should not need hand-rolled Node snippets, manual Git tree +surgery, or guesswork. Think should provide a first-class repair flow that +backs up the checkpoint ref, runs the git-warp schema upgrade, preserves +legacy content anchors, writes a fresh checkpoint, and verifies the mind. + +## Why + +Think minds are durable local memory. A dependency upgrade must not leave +`~/.think/codex`, `~/.think/claude`, or future agent minds unable to +remember, capture, or run doctor checks. + +The observed v17 break has two layers: + +1. Existing minds can have an old checkpoint schema. +2. Some legacy minds store content properties as raw Git blob object IDs. + During checkpoint creation, those legacy `_content_` anchors must + remain `100644 blob` entries. Treating them as `040000 tree` entries + makes Git reject the checkpoint tree. + +The repair path must normalize that legacy edge case during migration or +checkpoint creation without moving version-specific compatibility sludge +into the normal Think runtime. + +## Scope + +Provide a pullable repair cycle for one command or scripted flow, probably +one of: + +- `think repair-mind --mind --after-git-warp-v17` +- `think doctor --repair --mind ` +- a focused script under `scripts/repair-v17-mind.mjs` + +The implementation should resolve named minds to `~/.think/`, default +the graph name to `think`, and operate only by creating backup refs and +advancing checkpoint refs. It must not rewrite graph history. + +## Operator Runbook + +Use this as the manual truth path until the command exists. + +1. Confirm the mind is broken: + + ```sh + -think --remember --json + ``` + +2. Dry-run the git-warp upgrade from a git-warp v17 checkout: + + ```sh + npm run upgrade -- --repo ~/.think/ --graph think --dry-run --json + ``` + + Expected states are `would-upgrade` for an old mind or + `already-current` after repair. + +3. Create a backup checkpoint ref before changing anything: + + ```sh + repo="$HOME/.think/" + stamp="$(date +%Y%m%d-%H%M%S)" + head="$(git --git-dir "$repo/.git" rev-parse refs/warp/think/checkpoints/head)" + git --git-dir "$repo/.git" update-ref \ + "refs/warp/think/checkpoints/pre-v17-upgrade-$stamp" \ + "$head" + ``` + +4. Run the upgrade: + + ```sh + npm run upgrade -- --repo ~/.think/ --graph think --json + ``` + +5. If Git rejects the checkpoint tree with `Git command failed with code + 128`, inspect for legacy `_content_` anchors that point to blobs. + The repair must preserve those anchors as blob entries in the new + checkpoint tree. + +6. Force a fresh materialize-and-checkpoint pass through the repaired path. + The result should be a fresh schema-compatible checkpoint with zero + patches since checkpoint. + +7. Verify: + + ```sh + npm run upgrade -- --repo ~/.think/ --graph think --dry-run --json + -think --remember --json + -think --doctor --json + git warp check --repo ~/.think/ --graph think --json + git warp doctor --repo ~/.think/ --graph think --json + ``` + +## Acceptance Criteria + +- The repair command creates a dated backup under + `refs/warp/think/checkpoints/pre-v17-upgrade-*` before mutating refs. +- A schema-outdated mind becomes readable by `-think --remember --json`. +- `git-warp` upgrade dry-run reports `already-current` after repair. +- A fresh checkpoint is written after materialization. +- `git warp check` reports `patchesSinceCheckpoint: 0`. +- `git warp doctor` reports zero failures. Warnings such as + `COVERAGE_NO_REF` or `HOOKS_MISSING` may remain visible, but must be + documented as post-repair hygiene rather than schema repair failure. +- The repair code includes a regression fixture for an old checkpoint whose + `_content_` anchor points at a Git blob. +- The normal capture and remember runtime does not gain version-specific + compatibility branches. + +## Test Plan + +- Golden: fixture mind with old checkpoint schema and legacy blob content + anchors upgrades successfully. +- Golden: already-current mind exits cleanly and does not move checkpoint + refs. +- Known fail before fix: stock checkpoint creation rejects a tree entry when + `_content_` points to a blob object. +- Stress: large mind with thousands of thoughts and content anchors repairs + without loading unrelated graph history into memory. +- Jitter: run repair twice; the second run must be idempotent and leave the + mind readable. +- Doctor: repaired fixture has zero doctor failures and a fresh checkpoint. + +## Evidence From 2026-05-05 + +The `~/.think/claude` mind failed re-entry after the git-warp v17 upgrade: + +```sh +claude-think --remember --json +``` + +It reported a schema 4 checkpoint while v17 accepted only schema 5. A dry +run reported `would-upgrade`. A backup ref was created: + +```text +refs/warp/think/checkpoints/pre-v17-upgrade-20260505-102848 + -> fe47a53d65e0bbdf98cd4a7546679c08f5ad074b +``` + +The stock migration failed because legacy content anchors were blob object +IDs. A controlled repair preserved those anchors as blob entries and wrote: + +```text +91f65c5e2e3c75d1c503df778aa64aeb42b002bc +``` + +A later materialize-and-checkpoint pass created: + +```text +7b05cfe4e9bdad5e25000b18a5b90400b727e440 +``` + +That repaired checkpoint contained schema 5 metadata, 3,464 nodes, 3,683 +edges, 33,590 properties, and zero patches since checkpoint. The final +doctor had zero failures, with only coverage-ref and hook-install warnings +remaining. + +## Priority + +Critical after any git-warp v17 dependency upgrade. Broken minds block the +core promise of Think: cheap capture, reliable re-entry, and durable agent +memory. diff --git a/docs/method/backlog/asap/CORE_think-echo-contract-proof.md b/docs/method/backlog/asap/CORE_think-echo-contract-proof.md new file mode 100644 index 0000000..454ced4 --- /dev/null +++ b/docs/method/backlog/asap/CORE_think-echo-contract-proof.md @@ -0,0 +1,81 @@ +--- +id: CORE_think-echo-contract-proof +blocks: + - CORE_think-echo-phase-0-direction-charter + - CORE_think-echo-phase-1-app-contract + - CORE_think-echo-phase-2-runtime-roundtrip + - CORE_think-echo-phase-3-experimental-product-path + - CORE_think-echo-phase-4-read-observers + - CORE_think-echo-phase-5-migration-and-sibling-exchange +blocked_by: [] +--- + +# CORE - Think on Echo contract proof + +Legend: CORE + +## Idea + +Turn the Think-on-Echo north star into an executable backlog lane. + +Think remains the product boundary for capture, remember, browse, inspect, +minds, sessions, and user workflows. Echo becomes the first active runtime path +for the new proof. Continuum provides the shared causal-history boundary and +runtime family vocabulary. Wesley provides generated helpers, codecs, +registries, and witnesses. + +The first proof must make one sentence true: + +```text +Think can capture and inspect a thought as a Continuum application on Echo. +``` + +## Why + +The current Think architecture still orbits a local Git/`git-warp` substrate. +That path has real value and must protect existing minds, but it is also where +the current pressure is concentrated: large repos, checkpoint repair, +dependency-version ambiguity, graph bottlenecks, and repo-directory-based mind +identity. + +This lane keeps the existing product safe while proving a cleaner runtime seam +outside the hot CLI path. + +## Phase Map + +1. **Phase 0 - Direction charter** + - Record the proof boundary and non-goals in Think. + - Decide which repo owns each noun before code moves. +2. **Phase 1 - App contract** + - Add a small Think-authored GraphQL family for raw capture and exact + inspect. +3. **Phase 2 - Runtime round trip** + - Use generated or minimally generated helpers to dispatch one capture + through Echo and inspect it back with a complete reading. +4. **Phase 3 - Experimental product path** + - Decide how the proof enters Think surfaces without replacing the current + store prematurely. +5. **Phase 4 - Read observers** + - Extend the proof toward recent, remember, browse, and first-class mind + identity. +6. **Phase 5 - Migration and sibling exchange** + - Move old minds and future `git-warp` participation into explicit replay, + import/export, or witnessed suffix exchange. + +## Acceptance Criteria + +- Each phase has its own backlog card. +- Phase 1 and Phase 2 are small enough to pull into a single METHOD cycle. +- No card requires changing the production capture path before the round-trip + proof exists. +- Existing `git-warp` repair work remains framed as data rescue and continuity, + not the long-term architecture. +- The first executable witness is raw capture plus exact inspect, not remember, + browse, migration, or cross-runtime sync. + +## Non-Goals + +- Do not remove `git-warp` from Think in this lane. +- Do not migrate existing `~/.think/*` minds before the new proof works. +- Do not put Think domain nouns into Echo or Continuum shared schemas. +- Do not make the CLI use Echo by default until the proof has its own evidence. diff --git a/docs/method/backlog/bad-code/CORE_acceptance-tests-cold-spawn.md b/docs/method/backlog/bad-code/CORE_acceptance-tests-cold-spawn.md new file mode 100644 index 0000000..f7b2e00 --- /dev/null +++ b/docs/method/backlog/bad-code/CORE_acceptance-tests-cold-spawn.md @@ -0,0 +1,15 @@ +--- +id: CORE_acceptance-tests-cold-spawn +blocks: [] +blocked_by: [] +--- + +# Acceptance tests spawn a cold Node process per assertion + +Every `runThink()` call in the acceptance suite spawns a new Node +process via `spawnSync`. With 125+ acceptance tests, this means +125+ cold ESM module loads (~2s each). The suite takes ~3 minutes. + +A warm-process test harness (import `main()` directly, mock +stdin/stdout) could cut this to seconds while keeping the same +assertion semantics. diff --git a/docs/method/backlog/bad-code/CORE_audit-capture-path-sync-git.md b/docs/method/backlog/bad-code/CORE_audit-capture-path-sync-git.md deleted file mode 100644 index 6812a26..0000000 --- a/docs/method/backlog/bad-code/CORE_audit-capture-path-sync-git.md +++ /dev/null @@ -1,5 +0,0 @@ -# Capture path still shells out to `git` synchronously - -`saveRawCapture()` calls `getAmbientProjectContext(process.cwd())`, and that helper runs three `spawnSync('git', ...)` probes. - -The capture path is supposed to be sacred. This host work belongs behind a bounded adapter or cache, not inline in persistence. diff --git a/docs/method/backlog/bad-code/CORE_audit-git-binary-path-trust.md b/docs/method/backlog/bad-code/CORE_audit-git-binary-path-trust.md deleted file mode 100644 index 06994e8..0000000 --- a/docs/method/backlog/bad-code/CORE_audit-git-binary-path-trust.md +++ /dev/null @@ -1,5 +0,0 @@ -# Git execution still trusts ambient PATH lookup - -Think invokes `git` by bare command name from `src/project-context.js` and `src/git.js`. - -That is acceptable for a local developer tool until it is not. The repo should resolve and trust one Git binary intentionally instead of inheriting whatever PATH happens to provide. diff --git a/docs/method/backlog/bad-code/CORE_audit-no-dependency-freshness-cadence.md b/docs/method/backlog/bad-code/CORE_audit-no-dependency-freshness-cadence.md index f2883c5..5818111 100644 --- a/docs/method/backlog/bad-code/CORE_audit-no-dependency-freshness-cadence.md +++ b/docs/method/backlog/bad-code/CORE_audit-no-dependency-freshness-cadence.md @@ -1,3 +1,9 @@ +--- +id: CORE_audit-no-dependency-freshness-cadence +blocks: [] +blocked_by: [] +--- + # Dependency health is checked ad hoc instead of by policy The current install tree is clean under `npm audit`, but the repo does not appear to have a dedicated cadence or CI guard for dependency freshness and compatibility on its critical machine-facing libraries. diff --git a/docs/method/backlog/bad-code/CORE_audit-no-error-taxonomy.md b/docs/method/backlog/bad-code/CORE_audit-no-error-taxonomy.md deleted file mode 100644 index 2a35439..0000000 --- a/docs/method/backlog/bad-code/CORE_audit-no-error-taxonomy.md +++ /dev/null @@ -1,5 +0,0 @@ -# Cross-surface failures still lack a typed error taxonomy - -CLI, MCP, and store paths still throw or translate many failures as raw `Error` objects or generic strings. - -Think needs a smaller set of owned failure types so human and machine surfaces can report the same truth consistently. diff --git a/docs/method/backlog/bad-code/CORE_audit-no-latency-regression-gate.md b/docs/method/backlog/bad-code/CORE_audit-no-latency-regression-gate.md index 683e33e..ba5b2d5 100644 --- a/docs/method/backlog/bad-code/CORE_audit-no-latency-regression-gate.md +++ b/docs/method/backlog/bad-code/CORE_audit-no-latency-regression-gate.md @@ -1,3 +1,9 @@ +--- +id: CORE_audit-no-latency-regression-gate +blocks: [] +blocked_by: [] +--- + # Capture latency has no enforced regression gate The repo has capture benchmarks and the bearing doc explicitly calls capture latency out as a concern, but CI does not enforce a stable latency budget. diff --git a/docs/method/backlog/bad-code/CORE_audit-plain-object-model.md b/docs/method/backlog/bad-code/CORE_audit-plain-object-model.md deleted file mode 100644 index 5c6b732..0000000 --- a/docs/method/backlog/bad-code/CORE_audit-plain-object-model.md +++ /dev/null @@ -1,5 +0,0 @@ -# Core entry and session concepts are still plain objects - -`src/store/model.js` returns raw objects for entries and reflect sessions even though these are identity-bearing, meaning-heavy domain concepts. - -That is direct SSJR debt. Construction does not establish much trust beyond "shape happened to be present." diff --git a/docs/method/backlog/bad-code/CORE_audit-prompt-metrics-raw-parse.md b/docs/method/backlog/bad-code/CORE_audit-prompt-metrics-raw-parse.md deleted file mode 100644 index ac4b35d..0000000 --- a/docs/method/backlog/bad-code/CORE_audit-prompt-metrics-raw-parse.md +++ /dev/null @@ -1,5 +0,0 @@ -# Prompt metrics parsing is still a raw JSONL pipeline - -`src/store/prompt-metrics.js` reads the whole file, parses line-by-line into anonymous objects, and lets downstream aggregation assume shape. - -The failure mode is lenient, but the core contract stays soft and memory behavior will only get worse as the metrics file grows. diff --git a/docs/method/backlog/bad-code/CORE_audit-provenance-url-schemes.md b/docs/method/backlog/bad-code/CORE_audit-provenance-url-schemes.md deleted file mode 100644 index e224100..0000000 --- a/docs/method/backlog/bad-code/CORE_audit-provenance-url-schemes.md +++ /dev/null @@ -1,5 +0,0 @@ -# Provenance URLs accept any scheme - -Capture provenance currently accepts any syntactically valid URL, including schemes that should not be treated like ordinary safe links. - -Think should narrow provenance URL acceptance to explicit safe schemes before more surfaces start rendering or exporting those fields. diff --git a/docs/method/backlog/bad-code/CORE_audit-query-reshape-pipeline.md b/docs/method/backlog/bad-code/CORE_audit-query-reshape-pipeline.md deleted file mode 100644 index 2ccf358..0000000 --- a/docs/method/backlog/bad-code/CORE_audit-query-reshape-pipeline.md +++ /dev/null @@ -1,5 +0,0 @@ -# Query layer repeatedly re-shapes the same entry data - -`src/store/queries.js` keeps remapping entries into new anonymous shapes for recent, remember, browse, inspect, and stats callers. - -That increases coupling and makes it harder to trust that all surfaces are talking about the same domain object. diff --git a/docs/method/backlog/bad-code/CORE_audit-undocumented-ambient-context-and-recall.md b/docs/method/backlog/bad-code/CORE_audit-undocumented-ambient-context-and-recall.md deleted file mode 100644 index 59713d7..0000000 --- a/docs/method/backlog/bad-code/CORE_audit-undocumented-ambient-context-and-recall.md +++ /dev/null @@ -1,5 +0,0 @@ -# Ambient context and recall behavior are underdocumented - -The behavior that powers ambient capture context, remember scoring, and provenance flow is spread across `src/project-context.js`, `src/store/capture.js`, `src/store/queries.js`, and `src/capture-provenance.js`. - -There is no single contributor-facing doc that explains what gets collected, when it gets normalized, and how it affects recall. That makes the behavior harder to change safely. diff --git a/docs/method/backlog/bad-code/CORE_audit-unvalidated-read-models.md b/docs/method/backlog/bad-code/CORE_audit-unvalidated-read-models.md deleted file mode 100644 index b1f7a1c..0000000 --- a/docs/method/backlog/bad-code/CORE_audit-unvalidated-read-models.md +++ /dev/null @@ -1,5 +0,0 @@ -# Store runtime reconstructs trusted entries from raw graph props - -`src/store/runtime.js` turns raw graph node properties directly into store entry objects without a schema or runtime-backed constructor boundary. - -This is a core correctness risk because every read surface downstream inherits whatever that raw graph shape happens to be. diff --git a/docs/method/backlog/bad-code/CORE_git-warp-dependency-truth.md b/docs/method/backlog/bad-code/CORE_git-warp-dependency-truth.md new file mode 100644 index 0000000..0532c13 --- /dev/null +++ b/docs/method/backlog/bad-code/CORE_git-warp-dependency-truth.md @@ -0,0 +1,28 @@ +--- +id: CORE_git-warp-dependency-truth +blocks: [] +blocked_by: + - CORE_repair-v17-git-warp-minds +--- + +# git-warp dependency truth is split between package metadata and local v17 links + +Think currently declares `@git-stunts/git-warp` as `15.0.0`, while local +development can resolve to a linked `17.0.0` checkout. That makes +`npm ls @git-stunts/git-warp` fail with `ELSPROBLEMS` and leaves runtime +compatibility depending on local workspace state rather than package truth. + +The checkpoint read path now includes a public-reader compatibility bridge for +`createStateReader` vs `createStateReaderV5`. That bridge is acceptable as a +short-term guard, but it should not become permanent dependency sludge. + +## Acceptance Criteria + +- `npm ls @git-stunts/git-warp` exits cleanly in a normal checkout. +- `package.json` and `package-lock.json` match the intended git-warp version. +- The intended version is published or resolved through an explicit, + documented local/workspace dependency path. +- Checkpoint read tests pass from a clean install, not only from a local + linked git-warp checkout. +- The state-reader compatibility bridge is either documented as intentional + version support or removed after the dependency cutover. diff --git a/docs/method/backlog/bad-code/CORE_ssjr-src-capture-provenance-js.md b/docs/method/backlog/bad-code/CORE_ssjr-src-capture-provenance-js.md deleted file mode 100644 index 92d3ad6..0000000 --- a/docs/method/backlog/bad-code/CORE_ssjr-src-capture-provenance-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/capture-provenance.js` - -Current SSJR sanity check: `Hex B`, `P1 B`, `P3 B`, `P6 B`. - -The boundary normalization is disciplined, but provenance is still just a plain object. Introduce a small runtime-backed provenance form so the invariant lives on the value instead of in helper conventions spread across callers. diff --git a/docs/method/backlog/bad-code/CORE_ssjr-src-git-js.md b/docs/method/backlog/bad-code/CORE_ssjr-src-git-js.md deleted file mode 100644 index 432a32f..0000000 --- a/docs/method/backlog/bad-code/CORE_ssjr-src-git-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/git.js` - -Current SSJR sanity check: `Hex A`, `P1 B`, `P2 B`, `P3 B`, `P4 B`, `P6 B`, `P7 B`. - -This adapter is in the right layer, but push/init outcomes and retry semantics are still mostly plain-object conventions. Introduce a few explicit runtime-backed outcomes or error types so callers stop interpreting raw shell results directly. diff --git a/docs/method/backlog/bad-code/CORE_ssjr-src-project-context-js.md b/docs/method/backlog/bad-code/CORE_ssjr-src-project-context-js.md deleted file mode 100644 index ca253e2..0000000 --- a/docs/method/backlog/bad-code/CORE_ssjr-src-project-context-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/project-context.js` - -Current SSJR sanity check: `Hex A`, `P1 C`, `P2 B`, `P3 B`, `P4 C`, `P6 B`. - -Ambient project context is useful, but it is still represented as a raw bag of strings and token arrays. Give the context a firmer runtime-backed shape so project-name, token, and query-term invariants do not live only in helper conventions. diff --git a/docs/method/backlog/bad-code/CORE_ssjr-src-store-capture-js.md b/docs/method/backlog/bad-code/CORE_ssjr-src-store-capture-js.md deleted file mode 100644 index 0b95abf..0000000 --- a/docs/method/backlog/bad-code/CORE_ssjr-src-store-capture-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/store/capture.js` - -Current SSJR sanity check: `Hex C`, `P1 D`, `P2 C`, `P3 C`, `P4 C`, `P5 B`, `P6 B`, `P7 C`. - -Core capture persistence still operates on raw entry objects plus `kind`-based assumptions. Introduce real runtime-backed entry and provenance forms so construction, persistence, and follow-through stop depending on ambient shape trust. diff --git a/docs/method/backlog/bad-code/CORE_ssjr-src-store-js.md b/docs/method/backlog/bad-code/CORE_ssjr-src-store-js.md deleted file mode 100644 index 375a5d8..0000000 --- a/docs/method/backlog/bad-code/CORE_ssjr-src-store-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/store.js` - -Current SSJR sanity check: `Hex B`, `P1 B`, `P2 B`, `P3 B`, `P4 B`. - -The barrel is convenient, but it is also a soft-contract choke point. Keep the export surface intentional and derived from the owning modules so the store API does not drift into an undifferentiated namespace. diff --git a/docs/method/backlog/bad-code/CORE_ssjr-src-store-migrations-js.md b/docs/method/backlog/bad-code/CORE_ssjr-src-store-migrations-js.md deleted file mode 100644 index 4ec2b3b..0000000 --- a/docs/method/backlog/bad-code/CORE_ssjr-src-store-migrations-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/store/migrations.js` - -Current SSJR sanity check: `Hex B`, `P1 D`, `P2 C`, `P3 C`, `P4 C`, `P6 B`, `P7 D`. - -The migration engine is graph-correct, but it reasons about node meaning almost entirely through raw props and `kind` checks. Introduce typed migration facts or per-kind migration helpers so the updater stops being a large conditional over graph shapes. diff --git a/docs/method/backlog/bad-code/CORE_ssjr-src-store-model-js.md b/docs/method/backlog/bad-code/CORE_ssjr-src-store-model-js.md deleted file mode 100644 index 44a62b0..0000000 --- a/docs/method/backlog/bad-code/CORE_ssjr-src-store-model-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/store/model.js` - -Current SSJR sanity check: `Hex D`, `P1 F`, `P2 C`, `P3 C`, `P4 C`, `P5 B`, `P6 B`, `P7 C`. - -This is the worst core modeling hotspot. Meaning-heavy concepts like entries and sessions are still emitted as plain objects with loose `kind` fields. Start by introducing real domain types for entries, sessions, and related identifiers so construction establishes trust instead of downstream code patching shape assumptions together. diff --git a/docs/method/backlog/bad-code/CORE_ssjr-src-store-prompt-metrics-js.md b/docs/method/backlog/bad-code/CORE_ssjr-src-store-prompt-metrics-js.md deleted file mode 100644 index edee658..0000000 --- a/docs/method/backlog/bad-code/CORE_ssjr-src-store-prompt-metrics-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/store/prompt-metrics.js` - -Current SSJR sanity check: `Hex B`, `P1 C`, `P2 C`, `P3 B`, `P4 C`, `P6 B`, `P7 B`. - -Prompt metrics are handled as tolerant raw records, which is useful at the boundary but too loose in the core summarization path. Introduce an explicit parsed metric record form so invalid lines are rejected once and downstream aggregation deals in trusted values. diff --git a/docs/method/backlog/bad-code/CORE_ssjr-src-store-queries-js.md b/docs/method/backlog/bad-code/CORE_ssjr-src-store-queries-js.md deleted file mode 100644 index 3332898..0000000 --- a/docs/method/backlog/bad-code/CORE_ssjr-src-store-queries-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/store/queries.js` - -Current SSJR sanity check: `Hex C`, `P1 D`, `P2 C`, `P3 C`, `P4 C`, `P5 B`, `P6 B`, `P7 C`. - -The query layer reconstructs many domain records by hand and then re-shapes them again for callers. Move toward runtime-backed read models so query code returns trusted objects instead of repeatedly rebuilding loosely related plain-object views. diff --git a/docs/method/backlog/bad-code/CORE_ssjr-src-store-remember-js.md b/docs/method/backlog/bad-code/CORE_ssjr-src-store-remember-js.md deleted file mode 100644 index c28c14a..0000000 --- a/docs/method/backlog/bad-code/CORE_ssjr-src-store-remember-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/store/remember.js` - -Current SSJR sanity check: `Hex B`, `P1 C`, `P2 B`, `P3 B`, `P4 B`, `P6 B`, `P7 B`. - -Remember matching is coherent, but scopes and matches are still plain objects with implied invariants. Introduce explicit runtime-backed scope and match forms so ranking and recall receipts are less dependent on convention. diff --git a/docs/method/backlog/bad-code/CORE_ssjr-src-store-runtime-js.md b/docs/method/backlog/bad-code/CORE_ssjr-src-store-runtime-js.md deleted file mode 100644 index 28db715..0000000 --- a/docs/method/backlog/bad-code/CORE_ssjr-src-store-runtime-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/store/runtime.js` - -Current SSJR sanity check: `Hex C`, `P1 D`, `P2 D`, `P3 C`, `P4 D`, `P5 B`, `P6 B`, `P7 D`. - -This file is the core/runtime seam with the most architectural strain. It mixes graph access, host-specific opening, raw prop normalization, and `kind`-driven reconstruction of domain meaning. Break it up and introduce typed read models so the runtime seam stops leaking host details and shape soup into the store core. diff --git a/docs/method/backlog/bad-code/CORE_think-echo-toolchain-capability-probe.md b/docs/method/backlog/bad-code/CORE_think-echo-toolchain-capability-probe.md new file mode 100644 index 0000000..64739e3 --- /dev/null +++ b/docs/method/backlog/bad-code/CORE_think-echo-toolchain-capability-probe.md @@ -0,0 +1,39 @@ +--- +id: CORE_think-echo-toolchain-capability-probe +blocks: + - CORE_think-echo-phase-2-runtime-roundtrip +blocked_by: + - CORE_think-echo-phase-1-app-contract +--- + +# Think lacks a local Echo/Wesley capability probe + +The Think-on-Echo round-trip proof depends on practical toolchain facts: + +- can Wesley compile the Think app contract shape we need? +- can the generated or minimally generated helpers pack the capture intent? +- can Echo host the generic dispatch/observe path for that contract? +- what sibling repo, binary, or generated artifact assumptions are required? + +Right now those answers live in cross-repo memory and manual inspection rather +than a local Think probe. + +## Why + +Phase 2 should fail for product reasons, not because the first engineer has to +rediscover the current Wesley/Echo integration shape. A small capability probe +keeps the round-trip proof honest and prevents hidden sibling checkout +assumptions from becoming folklore. + +## Acceptance Criteria + +- Think has a command, script, or test helper that reports the available + Wesley/Echo contract-hosting capability in JSON. +- The probe distinguishes "generator unavailable", "Echo runtime unavailable", + "generated target unsupported", and "ready enough for Phase 2". +- The probe records exact paths or versions for any sibling checkout or local + binary it uses. +- Phase 2 can invoke the probe or documents why it replaced the probe with a + stronger witness. +- Missing capabilities become explicit follow-on backlog items, not inline + TODO comments in the round-trip proof. diff --git a/docs/method/backlog/bad-code/DX-018-explicit-mind-management.md b/docs/method/backlog/bad-code/DX-018-explicit-mind-management.md index 208ec3d..989236b 100644 --- a/docs/method/backlog/bad-code/DX-018-explicit-mind-management.md +++ b/docs/method/backlog/bad-code/DX-018-explicit-mind-management.md @@ -1,3 +1,11 @@ +--- +id: DX-018-explicit-mind-management +blocks: + - SURFACE_mind-switch-loop-in-command + - SURFACE_scripted-browse-no-mind-switch +blocked_by: [] +--- + # DX-018 — Explicit Mind Management Legend: [DX — Developer Experience](../../legends/DX-developer-experience.md) diff --git a/docs/method/backlog/bad-code/PROCESS_backlog-dependency-integrity-check.md b/docs/method/backlog/bad-code/PROCESS_backlog-dependency-integrity-check.md new file mode 100644 index 0000000..b8d8750 --- /dev/null +++ b/docs/method/backlog/bad-code/PROCESS_backlog-dependency-integrity-check.md @@ -0,0 +1,29 @@ +--- +id: PROCESS_backlog-dependency-integrity-check +blocks: [] +blocked_by: [] +--- + +# Backlog dependency references are not mechanically checked + +Think backlog cards use `id`, `blocks`, and `blocked_by` front matter, but the +repo does not currently appear to enforce that those references are valid. + +The Think-on-Echo phase map now relies on that graph being readable. A typo, +duplicate id, missing file, stale blocker, or self-reference would make the +planning map quietly misleading. + +## Why + +The backlog is now doing real coordination work, not just collecting loose +ideas. If agents and humans are going to use `blocks` / `blocked_by` to choose +the next METHOD cycle, those edges need a cheap guard. + +## Acceptance Criteria + +- A docs consistency test scans backlog front matter. +- Every `id` is unique across backlog lanes. +- Every `blocks` and `blocked_by` entry points at an existing backlog id. +- The test rejects self-blocking edges. +- The test output names the broken file and missing/stale id. +- Existing backlog files pass without requiring a taxonomy rewrite. diff --git a/docs/method/backlog/bad-code/RE-025-deferred-derivation-pipeline.md b/docs/method/backlog/bad-code/RE-025-deferred-derivation-pipeline.md index 3acc670..f932fb0 100644 --- a/docs/method/backlog/bad-code/RE-025-deferred-derivation-pipeline.md +++ b/docs/method/backlog/bad-code/RE-025-deferred-derivation-pipeline.md @@ -1,3 +1,12 @@ +--- +id: RE-025-deferred-derivation-pipeline +blocks: + - REFLECT_ssjr-src-store-derivation +blocked_by: + - CORE_audit-capture-path-sync-git + - CORE_ssjr-src-store-capture +--- + # RE-025 — Deferred Derivation Pipeline Legend: [RE — Runtime Engine](../../legends/RE-runtime-engine.md) diff --git a/docs/method/backlog/bad-code/REFLECT_ssjr-src-cli-commands-reflect-js.md b/docs/method/backlog/bad-code/REFLECT_ssjr-src-cli-commands-reflect-js.md deleted file mode 100644 index d7c0679..0000000 --- a/docs/method/backlog/bad-code/REFLECT_ssjr-src-cli-commands-reflect-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/cli/commands/reflect.js` - -Current SSJR sanity check: `Hex C`, `P1 C`, `P2 B`, `P3 C`, `P4 B`, `P5 B`, `P6 C`, `P7 C`. - -Reflect orchestration is still built from raw result bags and conditional branching. Introduce firmer runtime-backed session/result forms so the command layer stops reinterpreting plain objects from the store. diff --git a/docs/method/backlog/bad-code/REFLECT_ssjr-src-store-derivation-js.md b/docs/method/backlog/bad-code/REFLECT_ssjr-src-store-derivation-js.md deleted file mode 100644 index 9afe3c2..0000000 --- a/docs/method/backlog/bad-code/REFLECT_ssjr-src-store-derivation-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/store/derivation.js` - -Current SSJR sanity check: `Hex B`, `P1 D`, `P2 C`, `P3 C`, `P4 C`, `P5 B`, `P6 B`, `P7 D`. - -Derived artifacts and receipts are currently raw objects with a lot of `kind`-driven branching. Pull seed quality, session attribution, and derived receipt concepts into runtime-backed forms so reflect derivation is less dependent on tag-switching. diff --git a/docs/method/backlog/bad-code/REFLECT_ssjr-src-store-reflect-js.md b/docs/method/backlog/bad-code/REFLECT_ssjr-src-store-reflect-js.md deleted file mode 100644 index f6ce7d2..0000000 --- a/docs/method/backlog/bad-code/REFLECT_ssjr-src-store-reflect-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/store/reflect.js` - -Current SSJR sanity check: `Hex B`, `P1 D`, `P2 C`, `P3 C`, `P4 C`, `P5 B`, `P6 B`, `P7 D`. - -Reflect sessions and entries are still modeled as mutable-looking raw objects plus `kind` checks. Introduce runtime-backed session, prompt-plan, and reflect-entry forms so reflect behavior lives on owned types instead of being spread across patch logic and conditionals. diff --git a/docs/method/backlog/bad-code/SURFACE_audit-cli-dispatch-chain.md b/docs/method/backlog/bad-code/SURFACE_audit-cli-dispatch-chain.md deleted file mode 100644 index 87629be..0000000 --- a/docs/method/backlog/bad-code/SURFACE_audit-cli-dispatch-chain.md +++ /dev/null @@ -1,5 +0,0 @@ -# CLI dispatch is still a stringly `if/else` ladder - -The top-level CLI command path in `src/cli.js` is still an `if/else` dispatch chain keyed by strings. - -It works, but it keeps command behavior, help identity, and reporting identity softer than they should be. diff --git a/docs/method/backlog/bad-code/SURFACE_audit-cli-generic-errors.md b/docs/method/backlog/bad-code/SURFACE_audit-cli-generic-errors.md deleted file mode 100644 index b0615b1..0000000 --- a/docs/method/backlog/bad-code/SURFACE_audit-cli-generic-errors.md +++ /dev/null @@ -1,5 +0,0 @@ -# CLI still hides too much behind a generic top-level error - -`src/cli.js` catches unexpected failures and tells the default human path only `Something went wrong`. - -That keeps output terse, but it also weakens self-serve recovery and makes production debugging slower than necessary. diff --git a/docs/method/backlog/bad-code/SURFACE_audit-cli-options-bag.md b/docs/method/backlog/bad-code/SURFACE_audit-cli-options-bag.md deleted file mode 100644 index 3cfc42b..0000000 --- a/docs/method/backlog/bad-code/SURFACE_audit-cli-options-bag.md +++ /dev/null @@ -1,5 +0,0 @@ -# CLI parsing still depends on one large options bag - -`src/cli/options.js` builds a large procedural options object and validates it later through command-specific conditionals. - -The result is serviceable but structurally mushy. Parsing and validation should return a smaller, more explicit runtime-backed parsed-command form. diff --git a/docs/method/backlog/bad-code/SURFACE_audit-mcp-contract-holes.md b/docs/method/backlog/bad-code/SURFACE_audit-mcp-contract-holes.md deleted file mode 100644 index 4bb6c7f..0000000 --- a/docs/method/backlog/bad-code/SURFACE_audit-mcp-contract-holes.md +++ /dev/null @@ -1,5 +0,0 @@ -# MCP contracts still have `z.any()` holes - -`src/mcp/server.js` still uses `z.any()` for important outputs like migration results, remember matches and scope, browse session context, and inspect entry payloads. - -That weakens integration trust exactly where Think claims MCP parity with the CLI core. diff --git a/docs/method/backlog/bad-code/SURFACE_audit-mcp-service-shape-soup.md b/docs/method/backlog/bad-code/SURFACE_audit-mcp-service-shape-soup.md deleted file mode 100644 index 811a073..0000000 --- a/docs/method/backlog/bad-code/SURFACE_audit-mcp-service-shape-soup.md +++ /dev/null @@ -1,5 +0,0 @@ -# MCP service layer still shuffles raw objects - -`src/mcp/service.js` is already called out in `docs/BEARING.md` as shape-soup debt, and the audit agrees. It mostly normalizes inputs, calls store functions, and returns anonymous result bags. - -That is acceptable for a tiny adapter, but this one is now large enough to deserve explicit request and result forms. diff --git a/docs/method/backlog/bad-code/SURFACE_audit-missing-code-of-conduct.md b/docs/method/backlog/bad-code/SURFACE_audit-missing-code-of-conduct.md deleted file mode 100644 index d28c89e..0000000 --- a/docs/method/backlog/bad-code/SURFACE_audit-missing-code-of-conduct.md +++ /dev/null @@ -1,5 +0,0 @@ -# Repo is missing a `CODE_OF_CONDUCT.md` - -Think already has `CONTRIBUTING.md`, `CHANGELOG.md`, and `SECURITY.md`, but it still lacks the normal conduct policy file contributors expect in a public repository. - -That is process debt, not product debt, but it still degrades the repo's outer quality. diff --git a/docs/method/backlog/bad-code/SURFACE_browse-fade-in-single-color.md b/docs/method/backlog/bad-code/SURFACE_browse-fade-in-single-color.md new file mode 100644 index 0000000..f325805 --- /dev/null +++ b/docs/method/backlog/bad-code/SURFACE_browse-fade-in-single-color.md @@ -0,0 +1,15 @@ +--- +id: SURFACE_browse-fade-in-single-color +blocks: [] +blocked_by: [] +--- + +# Browse fade-in uses single color for all text + +The splash-to-browse fade-in lerps all text from BG toward cream. +Section headers (amber), accents (teal), and dim text (mauve) all +appear as cream during the fade, then snap to their real colors when +bijou takes over. A proper fade would lerp each text element from BG +toward its actual target color. + +Discovered during cycle 0004 transition work. diff --git a/docs/method/backlog/bad-code/SURFACE_fadeInBrowse-throwaway-model.md b/docs/method/backlog/bad-code/SURFACE_fadeInBrowse-throwaway-model.md new file mode 100644 index 0000000..834126e --- /dev/null +++ b/docs/method/backlog/bad-code/SURFACE_fadeInBrowse-throwaway-model.md @@ -0,0 +1,16 @@ +--- +id: SURFACE_fadeInBrowse-throwaway-model +blocks: [] +blocked_by: [] +--- + +# fadeInBrowse creates a throwaway model to render + +`fadeInBrowse()` in `src/browse-tui/app.js` constructs a full +`createWindowedBrowseModel` just to call `renderBrowseModel` for +the fade-in frames, then discards it. The real model is built +separately by `createBrowsePage`. This couples the fade to the +model shape and does redundant work. + +The fade should accept pre-rendered content or share the model +with the page initialization. diff --git a/docs/method/backlog/bad-code/SURFACE_mind-switch-loop-in-command.md b/docs/method/backlog/bad-code/SURFACE_mind-switch-loop-in-command.md new file mode 100644 index 0000000..1ffa16e --- /dev/null +++ b/docs/method/backlog/bad-code/SURFACE_mind-switch-loop-in-command.md @@ -0,0 +1,17 @@ +--- +id: SURFACE_mind-switch-loop-in-command +blocks: [] +blocked_by: + - DX-018-explicit-mind-management +--- + +# Mind-switch loop embedded in command layer + +The mind-switching orchestration (re-bootstrap, re-open graph store, +re-create loaders) is hardcoded inside `runInteractiveBrowseShell()`. +If mind switching is needed in other contexts (API, non-interactive), +the entire loop structure would need duplication. + +Neither `runBrowseTui()` nor the caller owns the switching cleanly. + +File: `src/cli/commands/read.js` (lines 493-575) diff --git a/docs/method/backlog/bad-code/SURFACE_modelref-side-effect-mutation.md b/docs/method/backlog/bad-code/SURFACE_modelref-side-effect-mutation.md new file mode 100644 index 0000000..dc9ec0e --- /dev/null +++ b/docs/method/backlog/bad-code/SURFACE_modelref-side-effect-mutation.md @@ -0,0 +1,16 @@ +--- +id: SURFACE_modelref-side-effect-mutation +blocks: [] +blocked_by: [] +--- + +# modelRef side-effect mutation in browse page + +The browse page updates model state in two places: the immutable +return value from `update()` AND via side-effect mutation of +`modelRef.current`. This implicit contract means any code path that +forgets to sync the ref leaves the parent observing stale state. + +Action-at-a-distance: page.js mutates state owned by app.js. + +Files: `src/browse-tui/page.js`, `src/browse-tui/app.js` diff --git a/docs/method/backlog/bad-code/SURFACE_scripted-browse-no-mind-switch.md b/docs/method/backlog/bad-code/SURFACE_scripted-browse-no-mind-switch.md new file mode 100644 index 0000000..3571ca4 --- /dev/null +++ b/docs/method/backlog/bad-code/SURFACE_scripted-browse-no-mind-switch.md @@ -0,0 +1,14 @@ +--- +id: SURFACE_scripted-browse-no-mind-switch +blocks: [] +blocked_by: + - DX-018-explicit-mind-management +--- + +# Scripted browse path does not support switch_mind action + +The scripted browse test runner (`src/browse-tui/script.js`) does not +handle a `switch_mind` action type. Mind switching can only be tested +manually, not through the acceptance test harness. + +Noted in cycle 0004 retro as new debt. diff --git a/docs/method/backlog/bad-code/SURFACE_showSplash-reads-process-globals.md b/docs/method/backlog/bad-code/SURFACE_showSplash-reads-process-globals.md new file mode 100644 index 0000000..811d167 --- /dev/null +++ b/docs/method/backlog/bad-code/SURFACE_showSplash-reads-process-globals.md @@ -0,0 +1,14 @@ +--- +id: SURFACE_showSplash-reads-process-globals +blocks: [] +blocked_by: [] +--- + +# showSplash reads process.stdout directly + +`showSplash()` reads `process.stdout.columns`, `process.stdout.rows`, +and manages `process.stdin.setRawMode` directly. Same boundary +violation we fixed in the store layer — terminal I/O should be +injected, not hardcoded. + +This makes the splash untestable without a real terminal. diff --git a/docs/method/backlog/bad-code/SURFACE_splash-monolith.md b/docs/method/backlog/bad-code/SURFACE_splash-monolith.md new file mode 100644 index 0000000..d95c0c0 --- /dev/null +++ b/docs/method/backlog/bad-code/SURFACE_splash-monolith.md @@ -0,0 +1,18 @@ +--- +id: SURFACE_splash-monolith +blocks: [] +blocked_by: [] +--- + +# showSplash is a 126-line monolith mixing animation and I/O + +`showSplash()` directly manages process.stdout, raw mode, frame +rendering, input handling, mind cycling, and transition state all in +one function with nested closures. The animation/state logic is +untestable because it's buried inside side-effectful I/O. + +Extract a pure splash state machine that takes (state, elapsed, input) +and returns (nextState, frameData). Let showSplash just orchestrate +I/O around it. + +File: `src/browse-tui/app.js` diff --git a/docs/method/backlog/bad-code/SURFACE_ssjr-bin-think-js.md b/docs/method/backlog/bad-code/SURFACE_ssjr-bin-think-js.md deleted file mode 100644 index 395e963..0000000 --- a/docs/method/backlog/bad-code/SURFACE_ssjr-bin-think-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `bin/think.js` - -Current SSJR sanity check: `Hex A`, `P1 B`, `P2 B`, `P3 B`, `P4 B`, `P6 B`. - -The CLI entrypoint is structurally correct, but it still depends on convention-heavy wiring. Keep the file narrowly host-facing and make sure command and error contracts remain derived from the owning runtime modules instead of being repeated here. diff --git a/docs/method/backlog/bad-code/SURFACE_ssjr-bin-think-mcp-js.md b/docs/method/backlog/bad-code/SURFACE_ssjr-bin-think-mcp-js.md deleted file mode 100644 index ebc8459..0000000 --- a/docs/method/backlog/bad-code/SURFACE_ssjr-bin-think-mcp-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `bin/think-mcp.js` - -Current SSJR sanity check: `Hex B`, `P1 B`, `P2 B`, `P3 B`, `P4 B`, `P6 B`. - -This entrypoint is thin, but it still carries soft-contract glue. Keep it as a pure adapter shell, avoid re-declaring runtime contracts here, and make sure command/result shaping stays owned by the MCP modules beneath it. diff --git a/docs/method/backlog/bad-code/SURFACE_ssjr-src-browse-benchmark-js.md b/docs/method/backlog/bad-code/SURFACE_ssjr-src-browse-benchmark-js.md deleted file mode 100644 index 0dcf712..0000000 --- a/docs/method/backlog/bad-code/SURFACE_ssjr-src-browse-benchmark-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/browse-benchmark.js` - -Current SSJR sanity check: `Hex B`, `P1 C`, `P2 C`, `P3 C`, `P4 C`, `P5 B`, `P6 C`, `P7 D`. - -The benchmark harness leans on raw item shapes and tag-driven branching. Pull the benchmark-facing concepts into small runtime-backed helper forms so benchmark logic stops switching on loose `type` values and duplicated structure. diff --git a/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-commands-capture-js.md b/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-commands-capture-js.md deleted file mode 100644 index b68e563..0000000 --- a/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-commands-capture-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/cli/commands/capture.js` - -Current SSJR sanity check: `Hex B`, `P1 B`, `P2 B`, `P3 C`, `P4 B`, `P6 B`, `P7 B`. - -Capture orchestration is solid, but it still returns and reports mostly raw outcome shapes. Introduce explicit capture result forms so persistence, migration follow-through, and backup reporting stop leaning on ambient object conventions. diff --git a/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-commands-read-js.md b/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-commands-read-js.md deleted file mode 100644 index d7febcf..0000000 --- a/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-commands-read-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/cli/commands/read.js` - -Current SSJR sanity check: `Hex C`, `P1 C`, `P2 B`, `P3 D`, `P4 B`, `P5 B`, `P6 C`, `P7 D`. - -This command surface is doing too much with too many raw result shapes. Split command-specific presentation into smaller owned modules and replace command/result switching with behavior that lives on the types or handlers that own it. diff --git a/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-environment-js.md b/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-environment-js.md deleted file mode 100644 index 4d05bcb..0000000 --- a/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-environment-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/cli/environment.js` - -Current SSJR sanity check: `Hex A`, `P1 B`, `P2 B`, `P3 B`, `P4 B`, `P6 B`. - -This file is small and well-placed, but it still exposes ambient booleans and raw environment reads as loose helpers. A tiny runtime-backed environment capability object would make these decisions less ad hoc. diff --git a/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-graph-gate-js.md b/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-graph-gate-js.md deleted file mode 100644 index 2fda3f1..0000000 --- a/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-graph-gate-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/cli/graph-gate.js` - -Current SSJR sanity check: `Hex B`, `P1 B`, `P2 B`, `P3 B`, `P4 B`, `P6 B`, `P7 B`. - -The graph gate has the right responsibility, but its migration decisions and outcomes are still plain-object and string-driven. Move toward a named gate/policy result that owns the branching semantics. diff --git a/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-help-js.md b/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-help-js.md deleted file mode 100644 index d7933e4..0000000 --- a/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-help-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/cli/help.js` - -Current SSJR sanity check: `Hex B`, `P1 B`, `P2 C`, `P3 B`, `P4 C`, `P6 B`. - -Help is still mostly a string registry with conventions around topics and commands. Tighten the boundary between command definition and help rendering so the text surface derives from one runtime-backed source of truth. diff --git a/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-interactive-js.md b/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-interactive-js.md deleted file mode 100644 index f76186c..0000000 --- a/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-interactive-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/cli/interactive.js` - -Current SSJR sanity check: `Hex A`, `P1 B`, `P2 B`, `P3 B`, `P4 B`, `P6 B`, `P7 B`. - -The interactive shell helpers are structurally fine, but they still pass around a lot of loose prompt/render state. Keep the host concerns here, while moving reusable interaction semantics into runtime-backed forms where they matter. diff --git a/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-js.md b/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-js.md deleted file mode 100644 index 748c07b..0000000 --- a/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/cli.js` - -Current SSJR sanity check: `Hex B`, `P1 B`, `P2 B`, `P3 C`, `P4 B`, `P6 B`, `P7 C`. - -The top-level dispatcher still routes through command strings and a long conditional chain. Move toward command objects or a command registry that owns behavior so the CLI shell becomes thinner and less tag-oriented. diff --git a/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-options-js.md b/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-options-js.md deleted file mode 100644 index fe4b213..0000000 --- a/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-options-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/cli/options.js` - -Current SSJR sanity check: `Hex B`, `P1 C`, `P2 B`, `P3 C`, `P4 B`, `P6 C`, `P7 C`. - -The parser currently produces a large mutable-feeling options bag and command resolution depends on stringly post-processing. Introduce explicit parsed-command and parsed-option forms so validation and dispatch stop depending on shape soup. diff --git a/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-output-js.md b/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-output-js.md deleted file mode 100644 index e03a339..0000000 --- a/docs/method/backlog/bad-code/SURFACE_ssjr-src-cli-output-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/cli/output.js` - -Current SSJR sanity check: `Hex A`, `P1 B`, `P2 B`, `P3 B`, `P4 B`, `P5 B`, `P6 B`. - -Output is centralized well, but the event and message contracts are still largely implied. Make the reporting/result forms more explicit so streams and callers share one runtime-backed output model. diff --git a/docs/method/backlog/bad-code/SURFACE_ssjr-src-mcp-result-js.md b/docs/method/backlog/bad-code/SURFACE_ssjr-src-mcp-result-js.md deleted file mode 100644 index 253bf6e..0000000 --- a/docs/method/backlog/bad-code/SURFACE_ssjr-src-mcp-result-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/mcp/result.js` - -Current SSJR sanity check: `Hex A`, `P1 B`, `P2 B`, `P4 B`. - -This helper is tiny, but it still duplicates the text-plus-structured MCP result contract procedurally. Consider a dedicated result form so the contract lives in one runtime-backed place instead of in shape-building glue. diff --git a/docs/method/backlog/bad-code/SURFACE_ssjr-src-mcp-server-js.md b/docs/method/backlog/bad-code/SURFACE_ssjr-src-mcp-server-js.md deleted file mode 100644 index 7d7e399..0000000 --- a/docs/method/backlog/bad-code/SURFACE_ssjr-src-mcp-server-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/mcp/server.js` - -Current SSJR sanity check: `Hex A`, `P1 C`, `P3 B`, `P6 C`, `P7 B`. - -Boundary schemas are strong here, but command definitions are still spread across repeated schema/result wiring. Consolidate the MCP tool registry so names, schemas, and execution contracts derive from one runtime-backed command definition. diff --git a/docs/method/backlog/bad-code/SURFACE_ssjr-src-mcp-service-js.md b/docs/method/backlog/bad-code/SURFACE_ssjr-src-mcp-service-js.md deleted file mode 100644 index 12a2b34..0000000 --- a/docs/method/backlog/bad-code/SURFACE_ssjr-src-mcp-service-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/mcp/service.js` - -Current SSJR sanity check: `Hex B`, `P1 C`, `P2 B`, `P3 B`, `P4 B`, `P5 B`, `P6 B`, `P7 B`. - -This is the exact shape-soup debt already called out in BEARING. The service layer mostly shuffles plain objects between boundaries and store calls; introduce runtime-backed request and result forms so the MCP surface owns fewer soft contracts. diff --git a/docs/method/backlog/bad-code/SURFACE_ssjr-src-verbose-js.md b/docs/method/backlog/bad-code/SURFACE_ssjr-src-verbose-js.md deleted file mode 100644 index 809cf53..0000000 --- a/docs/method/backlog/bad-code/SURFACE_ssjr-src-verbose-js.md +++ /dev/null @@ -1,5 +0,0 @@ -# Raise SSJR grades for `src/verbose.js` - -Current SSJR sanity check: `Hex A`, `P1 B`, `P2 B`, `P3 B`, `P4 B`, `P6 B`. - -The reporter is small and stable, but event payloads are still just shaped objects. Tighten the reporting contract so event names and payload structure derive from one runtime-backed source of truth instead of ambient convention. diff --git a/docs/method/backlog/cool-ideas/CI-002-auto-mind-discovery-in-tui.md b/docs/method/backlog/cool-ideas/CI-002-auto-mind-discovery-in-tui.md deleted file mode 100644 index 53d2e2c..0000000 --- a/docs/method/backlog/cool-ideas/CI-002-auto-mind-discovery-in-tui.md +++ /dev/null @@ -1,19 +0,0 @@ -# CI-002 — Auto-Mind Discovery in TUI - -Legend: [CORE — Core Bedrock](../../legends/CORE.md) - -## Idea - -Think supports "Multiple Minds" by switching repositories, but discovery is currently manual. - -Enhance the TUI splash screen and CLI to automatically scan the `~/.think/` directory for any subdirectory containing a valid Git repository. Provide a "Mind Switcher" overlay (summoned by `m`) that lists these minds and allows instantaneous context-switching within the same TUI session. - -## Why - -1. **Ergonomics**: Makes the multi-mind architecture accessible without needing to restart the application or set environment variables. -2. **Organization**: Encourages users to isolate different cognitive domains (e.g., `work`, `side-project`, `agentic-exploration`) while maintaining a single primary entry point. -3. **Productivity**: High-speed switching between archives is essential for multi-project developers. - -## Effort - -Small-Medium — requires a directory-walking utility and a TUI overlay component. diff --git a/docs/method/backlog/cool-ideas/CORE_agent-owned-minds.md b/docs/method/backlog/cool-ideas/CORE_agent-owned-minds.md new file mode 100644 index 0000000..1005f13 --- /dev/null +++ b/docs/method/backlog/cool-ideas/CORE_agent-owned-minds.md @@ -0,0 +1,13 @@ +# Agent-owned minds + +Agents may need their own thought repo rather than writing into the +operator's personal mind. Preserves separate provenance and avoids +polluting a human's private archive. + +Each agent (Claude, Codex, Gemini) gets a named mind under +`~/.think//`. The agent captures into its own mind via +`--mind=` or `THINK_REPO_DIR`. The human can browse any +agent's mind through the TUI mind switcher. + +Dropped from the original CORE_multiple-minds backlog item when +cycle 0004 scoped down to browse-only mind switching. diff --git a/docs/method/backlog/cool-ideas/CORE_annotate-thoughts.md b/docs/method/backlog/cool-ideas/CORE_annotate-thoughts.md new file mode 100644 index 0000000..8c8538c --- /dev/null +++ b/docs/method/backlog/cool-ideas/CORE_annotate-thoughts.md @@ -0,0 +1,16 @@ +# Annotate existing thoughts + +Attach a note to an existing thought without mutating the original. + +``` +think --annotate= "this turned out to be wrong" +``` + +A new derived entry with `kind: 'annotation'` linked via an +`annotates` edge. Browse TUI shows annotations below the thought. +Press `a` in browse to annotate inline. + +Same graph primitives as reflect — new node, new edge, original +stays immutable. + +**Superseded by:** `docs/design/enrichment-pipeline.md` diff --git a/docs/method/backlog/cool-ideas/CORE_capture-time-auto-tagging.md b/docs/method/backlog/cool-ideas/CORE_capture-time-auto-tagging.md new file mode 100644 index 0000000..251a903 --- /dev/null +++ b/docs/method/backlog/cool-ideas/CORE_capture-time-auto-tagging.md @@ -0,0 +1,21 @@ +# Capture-time auto-tagging + +Attempt to tag or categorize a thought at capture time without +user input. Runs as part of the derivation follow-through (after +raw save, like seed quality scoring). + +Approach: +- Extract topic keywords via lightweight NLP (no LLM required): + TF-IDF against the existing corpus, or simple noun-phrase + extraction +- Assign tags as a new derived artifact: `kind: 'auto_tags'` with + `tags: ['performance', 'architecture', 'git-warp']` +- Tags are suggestions, not ground truth — the user can override + or dismiss +- Keep the capture path sacred: tagging happens in follow-through, + never blocking the raw save + +The tag vocabulary grows organically from the corpus. No predefined +taxonomy — Think discovers your topics, it doesn't impose them. + +**Superseded by:** `docs/design/enrichment-pipeline.md` diff --git a/docs/method/backlog/cool-ideas/CORE_doctor-prompt-metrics-check.md b/docs/method/backlog/cool-ideas/CORE_doctor-prompt-metrics-check.md new file mode 100644 index 0000000..c3d6821 --- /dev/null +++ b/docs/method/backlog/cool-ideas/CORE_doctor-prompt-metrics-check.md @@ -0,0 +1,8 @@ +# Doctor: prompt metrics file check + +Add a check to `runDiagnostics` that reports whether the prompt +metrics file exists and is readable. Currently doctor checks think +dir, repo, graph model, entry count, and upstream — but not the +macOS telemetry surface. + +Noted in cycle 0007 retro as cool idea. diff --git a/docs/method/backlog/cool-ideas/CORE_evolve-thoughts.md b/docs/method/backlog/cool-ideas/CORE_evolve-thoughts.md new file mode 100644 index 0000000..84fd4e1 --- /dev/null +++ b/docs/method/backlog/cool-ideas/CORE_evolve-thoughts.md @@ -0,0 +1,16 @@ +# Evolve a thought with explicit lineage + +Start a new thought seeded from an old one. + +``` +think --evolve= "actually, the real insight is..." +``` + +Like capture but with a `seeded_by` edge back to the original. +Browse shows the evolution chain. A raw seed becomes a developed +idea through multiple passes — each pass is a new immutable entry. + +Different from reflect (which pressure-tests) and annotate (which +comments). Evolve says "this thought grew into this new thought." + +**Superseded by:** `docs/design/enrichment-pipeline.md` diff --git a/docs/method/backlog/cool-ideas/CORE_holding-area-and-mind-routing.md b/docs/method/backlog/cool-ideas/CORE_holding-area-and-mind-routing.md new file mode 100644 index 0000000..581c9db --- /dev/null +++ b/docs/method/backlog/cool-ideas/CORE_holding-area-and-mind-routing.md @@ -0,0 +1,11 @@ +# Holding area and mind routing + +Raw ingress may need a neutral local holding area before derivation +or routing assigns a thought to a specific mind. + +A thought captured without a `--mind` flag could land in a staging +area, then be routed to the appropriate mind by user action or by +an automated triage step. + +Dropped from the original CORE_multiple-minds backlog item when +cycle 0004 scoped down to browse-only mind switching. diff --git a/docs/method/backlog/cool-ideas/CORE_integrity-verification-command.md b/docs/method/backlog/cool-ideas/CORE_integrity-verification-command.md new file mode 100644 index 0000000..172b576 --- /dev/null +++ b/docs/method/backlog/cool-ideas/CORE_integrity-verification-command.md @@ -0,0 +1,8 @@ +# Integrity verification command + +Add a command to verify that all `thought:` entries +match their `attachContent` reality. Like `git fsck` for the +thought graph — detects corruption, missing attachments, or +fingerprint mismatches. + +Source: ship-readiness audit 2026-04-11 §2.3 (Gap 2). diff --git a/docs/method/backlog/cool-ideas/CORE_link-thoughts.md b/docs/method/backlog/cool-ideas/CORE_link-thoughts.md new file mode 100644 index 0000000..86a4cff --- /dev/null +++ b/docs/method/backlog/cool-ideas/CORE_link-thoughts.md @@ -0,0 +1,16 @@ +# Link thoughts explicitly + +Connect two thoughts with an explicit relationship. + +``` +think --link "related because..." +``` + +A `relates_to` edge with an optional description stored as an edge +property. Browse shows linked thoughts in a "Connections" panel. +Builds the graph from a flat chronology into a navigable web. + +Could also support typed links: "contradicts", "extends", "replaces", +"inspired_by". + +**Superseded by:** `docs/design/enrichment-pipeline.md` diff --git a/docs/method/backlog/cool-ideas/CORE_mind-capture-flag.md b/docs/method/backlog/cool-ideas/CORE_mind-capture-flag.md new file mode 100644 index 0000000..e717a9a --- /dev/null +++ b/docs/method/backlog/cool-ideas/CORE_mind-capture-flag.md @@ -0,0 +1,9 @@ +# Capture into a specific mind + +`think --mind=work "thought"` would capture directly into the `work` +mind without needing THINK_REPO_DIR or wrapper scripts. Auto-bootstrap +the mind repo if it doesn't exist. + +Currently Think only supports mind switching in browse, not capture. + +Noted in cycle 0004 retro as cool idea. diff --git a/docs/method/backlog/cool-ideas/CORE_post-capture-automated-enrichment.md b/docs/method/backlog/cool-ideas/CORE_post-capture-automated-enrichment.md new file mode 100644 index 0000000..fa06caa --- /dev/null +++ b/docs/method/backlog/cool-ideas/CORE_post-capture-automated-enrichment.md @@ -0,0 +1,60 @@ +# Post-capture automated enrichment pipeline + +Background enrichment that runs after capture (or on a schedule) +to annotate, link, categorize, and schedule thoughts for revisit. + +## Pipeline stages + +### 1. Topic extraction +Identify topics and themes from thought text. Store as derived +`auto_topics` artifact. Use corpus-relative frequency (TF-IDF) or +LLM extraction. + +### 2. Semantic parsing +Parse for actionable structure: +- Questions ("how do I...") → mark as open question +- Decisions ("I decided to...") → mark as decision +- Tasks ("need to...") → mark as action item +- Observations ("I noticed...") → mark as observation +Store as `kind: 'semantic_parse'` artifact. + +### 3. Auto-linking +Find similar thoughts in the archive by topic/embedding overlap. +Create `relates_to` edges between thoughts that share themes but +weren't captured in the same session. "You said something similar +3 weeks ago." + +### 4. Auto-annotation +Generate a one-line summary or "gist" of each thought. Store as +`kind: 'auto_annotation'`. Useful for browse list views and search +results. + +### 5. Revisit scheduling +Score thoughts for revisit priority based on: +- Age (older = more likely forgotten) +- Seed quality (reflectable thoughts worth revisiting) +- Topic activity (thoughts in active topics vs dormant ones) +- No prior annotations (un-enriched thoughts) +Store as `kind: 'revisit_score'` artifact. + +## Architecture + +All enrichment produces derived artifacts linked to the original +capture. Nothing mutates raw entries. Enrichment can re-run +idempotently — artifacts are keyed by (source_entry, enrichment_type, +version). + +Two modes: +- **Inline**: runs during capture follow-through (lightweight only: + topic extraction, semantic parse) +- **Background**: runs on a schedule or `think --enrich` command + (heavier: auto-linking, LLM annotation, revisit scoring) + +## LLM boundary + +Lightweight enrichment (topics, semantic parse) should work without +an LLM — pattern matching and corpus statistics. LLM enrichment +(annotation, linking rationale) is opt-in and clearly marked in +provenance as LLM-derived. + +**Superseded by:** `docs/design/enrichment-pipeline.md` diff --git a/docs/method/backlog/cool-ideas/CORE_shared-minds-and-collective-ownership.md b/docs/method/backlog/cool-ideas/CORE_shared-minds-and-collective-ownership.md new file mode 100644 index 0000000..8d12d93 --- /dev/null +++ b/docs/method/backlog/cool-ideas/CORE_shared-minds-and-collective-ownership.md @@ -0,0 +1,14 @@ +# Shared minds and collective ownership + +Jointly produced and jointly owned provenance rather than one subject +per repo. A shared mind could hold thoughts from multiple actors +(human + agents, or multiple humans) with explicit per-entry +authorship. + +Group-held keys or threshold access for sensitive shared traces. + +Requires graph-level mind identity (see feedback item: +2026-04-11-multi-mind-in-one-repo-needs-first-class-graph-identity). + +Dropped from the original CORE_multiple-minds backlog item when +cycle 0004 scoped down to browse-only mind switching. diff --git a/docs/method/backlog/cool-ideas/CORE_think-as-git-hook.md b/docs/method/backlog/cool-ideas/CORE_think-as-git-hook.md new file mode 100644 index 0000000..9379cc1 --- /dev/null +++ b/docs/method/backlog/cool-ideas/CORE_think-as-git-hook.md @@ -0,0 +1,9 @@ +# Think as a git hook + +Capture thoughts on commit. A `prepare-commit-msg` or `post-commit` +hook that prompts "What were you thinking?" and captures the answer +into the active mind. + +Could also auto-extract intent from commit messages and capture +them as linked thoughts — connecting the cognitive stream to the +code stream. diff --git a/docs/method/backlog/cool-ideas/CORE_think-echo-phase-4-read-observers.md b/docs/method/backlog/cool-ideas/CORE_think-echo-phase-4-read-observers.md new file mode 100644 index 0000000..150b70d --- /dev/null +++ b/docs/method/backlog/cool-ideas/CORE_think-echo-phase-4-read-observers.md @@ -0,0 +1,54 @@ +--- +id: CORE_think-echo-phase-4-read-observers +blocks: + - CORE_think-echo-phase-5-migration-and-sibling-exchange +blocked_by: + - CORE_think-echo-phase-2-runtime-roundtrip +--- + +# CORE - Phase 4 - Echo-backed read observers + +Legend: CORE + +## Idea + +Extend the Think-on-Echo proof from exact inspect to real read surfaces: + +- recent chronology +- remember observers +- browse observers +- first-class mind identity + +This phase should be pulled only after raw capture plus exact inspect already +works through Echo. + +## Why + +Think's read surfaces are where the product becomes useful after capture. They +are also where the current graph-backed model leaks implementation details: +global chronology, repo-directory mind selection, and read handles that do not +publish explicit reading posture. + +Echo-backed observers should make the read question explicit: + +```text +basis + aperture + observer plan -> ReadingEnvelope + Think payload +``` + +## Candidate Slices + +- `RecentThoughts` observer over one mind and time window. +- `RememberThoughts` observer with deterministic lexical or project-aware + matching before any embedding/ranking work. +- `BrowseWindow` observer over a bounded chronology window. +- `mindId` and `actorId` scoping in all read requests. + +## Acceptance Criteria + +- Each read surface is expressed as a Think-owned query or observer contract. +- Every read result carries explicit completeness, residual, or obstruction + posture from the runtime boundary. +- Reads can be scoped by `mindId`. +- No observer silently presents a narrowed reading as canonical full history. +- Existing TUI/remember behavior is not replaced until the new observers have + proof coverage. diff --git a/docs/method/backlog/cool-ideas/CORE_think-echo-phase-5-migration-and-sibling-exchange.md b/docs/method/backlog/cool-ideas/CORE_think-echo-phase-5-migration-and-sibling-exchange.md new file mode 100644 index 0000000..3df9510 --- /dev/null +++ b/docs/method/backlog/cool-ideas/CORE_think-echo-phase-5-migration-and-sibling-exchange.md @@ -0,0 +1,54 @@ +--- +id: CORE_think-echo-phase-5-migration-and-sibling-exchange +blocks: [] +blocked_by: + - CORE_think-echo-phase-2-runtime-roundtrip + - CORE_think-echo-phase-4-read-observers +--- + +# CORE - Phase 5 - Mind migration and sibling runtime exchange + +Legend: CORE + +## Idea + +Define the path from existing `git-warp`-backed Think minds into the +Think-on-Echo world, and later define how `git-warp` participates as a sibling +runtime through Continuum exchange. + +This is deliberately later than the raw capture/read proof. + +## Why + +Existing `~/.think/*` minds are durable user data. They must not be abandoned. +But migration should follow a working destination, not precede it. + +The future `git-warp` role should be explicit: + +- data rescue and continuity for old minds +- export/replay/import of Think entries into the new contract path +- witnessed suffix exchange when both runtimes participate in one shared + causal history + +It should not be an implicit storage swap. + +## Candidate Slices + +1. Export old Think entries into a portable replay format. +2. Replay exported entries through the Think contract path into Echo. +3. Verify entry ids, content digests, timestamps, ingress, mind identity, and + provenance. +4. Define duplicate and idempotency behavior. +5. Add `git-warp` suffix export/import only when the Continuum runtime-boundary + family can carry the evidence honestly. + +## Acceptance Criteria + +- No migration runs without a backup or dry-run path. +- Migration preserves raw capture text and provenance. +- Migration makes `mindId` explicit for legacy repo-directory minds. +- Duplicate import is idempotent or visibly obstructed. +- `git-warp` participation uses witnessed suffix exchange or an explicitly + documented interim export path. +- The current v17 repair work remains scoped to keeping existing minds usable, + not defining the new runtime architecture. diff --git a/docs/method/backlog/cool-ideas/CORE_think-echo-shadow-write-comparison.md b/docs/method/backlog/cool-ideas/CORE_think-echo-shadow-write-comparison.md new file mode 100644 index 0000000..1ecc64f --- /dev/null +++ b/docs/method/backlog/cool-ideas/CORE_think-echo-shadow-write-comparison.md @@ -0,0 +1,22 @@ +# Echo shadow-write comparison mode + +After the Think-on-Echo round-trip proof works, consider an opt-in shadow-write +mode that writes a capture to the current Think store and to the Echo-backed +contract path, then compares receipts, latency, ids, provenance, and readback. + +This is a possible implementation shape for Phase 3, not a requirement. + +## Why + +Shadow-write gives the new runtime path exposure to real capture shapes without +making Echo the source of product truth too early. It also gives a concrete +way to compare capture latency and evidence posture against the existing +`git-warp`-backed path. + +## Guardrails + +- The existing store remains authoritative. +- Echo failure must not make local capture fail. +- The mode must be explicitly opt in. +- The comparison output should be inspectable and quiet by default. +- No migration semantics should be inferred from shadow-write success. diff --git a/docs/method/backlog/cool-ideas/CORE_think-echo-warpspace-constellation.md b/docs/method/backlog/cool-ideas/CORE_think-echo-warpspace-constellation.md new file mode 100644 index 0000000..1a050a4 --- /dev/null +++ b/docs/method/backlog/cool-ideas/CORE_think-echo-warpspace-constellation.md @@ -0,0 +1,30 @@ +# Think Echo WARPspace constellation + +Create a local WARPspace or constellation manifest for the Think-on-Echo proof +once Phase 0 has chosen the exact proof boundary. + +Candidate shape: + +```text +think-echo-dev + pins Think + pins Echo + pins Wesley + pins Continuum + optionally pins warp-ttd + pins git-warp only when sibling exchange is actually exercised +``` + +## Why + +The proof will cross repository boundaries even if the app contract lives in +Think. A small constellation keeps those coordinates explicit and avoids +"whatever is checked out next door" becoming the hidden build system. + +## Acceptance Criteria + +- The manifest names exact repo coordinates or local override posture. +- It explains which repo owns each generated or consumed artifact. +- It does not require `git-warp` for the first capture/inspect proof. +- It can be verified or synced by the current Continuum `warp` tooling, or it + records the missing tooling gap as follow-on work. diff --git a/docs/method/backlog/cool-ideas/REFLECT_automated-summaries.md b/docs/method/backlog/cool-ideas/REFLECT_automated-summaries.md new file mode 100644 index 0000000..80865fc --- /dev/null +++ b/docs/method/backlog/cool-ideas/REFLECT_automated-summaries.md @@ -0,0 +1,50 @@ +# Automated thought summaries + +Summarize sessions, days, weeks, months, or topics automatically. + +``` +think --summarize --session= # what happened in this session +think --summarize --since=1d # today's thinking +think --summarize --since=7d # this week +think --summarize --since=30d # this month +think --summarize --topic=architecture # all thoughts on a topic +``` + +## Output + +A new derived entry with `kind: 'summary'` linked to the source +entries via `summarizes` edges. The summary is itself a thought in +the archive — browsable, annotatable, evolvable. + +## Levels + +### Session summary +"In this 20-minute session you explored X, questioned Y, and +decided Z." Derived from the session's capture sequence. + +### Daily/weekly/monthly digest +Aggregate across sessions. Identify recurring themes, open +questions that haven't been resolved, decisions made, and shifts +in thinking over time. + +### Topic summary +Cross-temporal synthesis on a single topic. "Your thinking about +performance has evolved from concern about ESM load time (March) +to graph-level optimization (April)." + +## Architecture + +Summaries are derived entries — immutable, linked, inspectable. +Each summary records its source entries, time window, and generation +method in provenance. + +Two modes: +- **Deterministic**: session summaries from capture sequence and + semantic parse artifacts (no LLM needed) +- **LLM-assisted**: richer narrative summaries with explicit + LLM provenance + +Could run on a schedule ("generate daily digest at midnight") or +on demand. + +**Superseded by:** `docs/design/enrichment-pipeline.md` diff --git a/docs/method/backlog/cool-ideas/REFLECT_deterministic-analysis.md b/docs/method/backlog/cool-ideas/REFLECT_deterministic-analysis.md index 7c8ea1e..ceef4e1 100644 --- a/docs/method/backlog/cool-ideas/REFLECT_deterministic-analysis.md +++ b/docs/method/backlog/cool-ideas/REFLECT_deterministic-analysis.md @@ -31,3 +31,5 @@ Whenever later modes expose a cluster or link, show why it exists: shared unusua - Raw adjacent-entry Levenshtein drift as the main evolution metric. - Lexical-only clustering presented as "understanding." - Silent classification leaking into capture or recent. + +**Related:** `docs/design/enrichment-pipeline.md` — auto_tags, semantic_parse, and auto_link stages implement several of these ideas. diff --git a/docs/method/backlog/cool-ideas/REFLECT_externalize-prompt-families.md b/docs/method/backlog/cool-ideas/REFLECT_externalize-prompt-families.md new file mode 100644 index 0000000..ec4b8f1 --- /dev/null +++ b/docs/method/backlog/cool-ideas/REFLECT_externalize-prompt-families.md @@ -0,0 +1,7 @@ +# Externalize reflect prompt families + +Move hardcoded reflect prompt families out of `reflect.js` into +JSON templates under `~/.think/prompts/`. Users could define custom +pressure-testing logic without modifying source code. + +Source: code-quality audit 2026-04-11 §2.2. diff --git a/docs/method/backlog/cool-ideas/REFLECT_future-self-and-abandoned-ideas.md b/docs/method/backlog/cool-ideas/REFLECT_future-self-and-abandoned-ideas.md index ca0170f..b6a0347 100644 --- a/docs/method/backlog/cool-ideas/REFLECT_future-self-and-abandoned-ideas.md +++ b/docs/method/backlog/cool-ideas/REFLECT_future-self-and-abandoned-ideas.md @@ -7,3 +7,5 @@ Resurface older thoughts and ask whether they still feel true. Example: "You sai ## Abandoned idea detector Find ideas revisited several times without clear resolution and invite deliberate re-entry. Possible prompt: "You keep circling this. Want to push it?" + +**Superseded by:** `docs/design/enrichment-pipeline.md` — revisit_score stage + --revisit command. diff --git a/docs/method/backlog/cool-ideas/REFLECT_llm-chorus-triage.md b/docs/method/backlog/cool-ideas/REFLECT_llm-chorus-triage.md new file mode 100644 index 0000000..5246dd7 --- /dev/null +++ b/docs/method/backlog/cool-ideas/REFLECT_llm-chorus-triage.md @@ -0,0 +1,24 @@ +# LLM chorus triage + +Human captures a raw idea. Multiple agents react to it from their +own perspectives — challenging, extending, constraining, connecting +to prior thoughts. The human sees a chorus of responses, not one +monolithic LLM answer. + +Flow: +1. Human captures a thought into their mind +2. Think fans the thought out to N agent minds (or N prompt families) +3. Each agent produces a derived response in its own mind +4. Human browses the chorus: a view that shows the seed thought + plus all agent responses side by side +5. Human picks what's useful, discards the rest + +This is different from reflect (which is one deterministic prompt +family) and spitball (which is one LLM session). Chorus is multiple +independent perspectives on the same seed. + +Requires: agent-owned minds, shared mind browsing, explicit +derivation provenance linking responses to their seed. + +Related: REFLECT_llm-spitball, CORE_agent-owned-minds, +CORE_shared-minds-and-collective-ownership. diff --git a/docs/method/backlog/cool-ideas/SURFACE_ambient-clipboard-capture.md b/docs/method/backlog/cool-ideas/SURFACE_ambient-clipboard-capture.md new file mode 100644 index 0000000..e680c73 --- /dev/null +++ b/docs/method/backlog/cool-ideas/SURFACE_ambient-clipboard-capture.md @@ -0,0 +1,8 @@ +# Ambient clipboard capture + +macOS watcher that captures clipboard changes as thoughts. Opt-in, +not default. When you copy a snippet that looks like an idea or +note, it gets captured with provenance showing the source app. + +Pairs with the existing macOS menu bar app. Could be a toggle +in the menu bar — "Watch clipboard." diff --git a/docs/method/backlog/cool-ideas/SURFACE_document-window-based-read-model.md b/docs/method/backlog/cool-ideas/SURFACE_document-window-based-read-model.md new file mode 100644 index 0000000..64ca6c8 --- /dev/null +++ b/docs/method/backlog/cool-ideas/SURFACE_document-window-based-read-model.md @@ -0,0 +1,8 @@ +# Document window-based read model + +Add a "Window-Based Navigation" section to ADVANCED_GUIDE.md +explaining how git-warp read handles prevent whole-graph +materialization, how the browse TUI loads neighbors lazily, and +how checkpoint-backed reuse keeps navigation fast. + +Source: documentation-quality audit 2026-04-11 §2.3. diff --git a/docs/method/backlog/cool-ideas/SURFACE_echo-reading-envelope-inspector.md b/docs/method/backlog/cool-ideas/SURFACE_echo-reading-envelope-inspector.md new file mode 100644 index 0000000..97d34b3 --- /dev/null +++ b/docs/method/backlog/cool-ideas/SURFACE_echo-reading-envelope-inspector.md @@ -0,0 +1,27 @@ +# Echo reading envelope inspector + +Add a developer-facing inspection surface for Echo-backed Think proof outputs. + +The first useful shape could be a script or CLI subcommand that renders: + +- observer plan id +- lane or mind id +- coordinate/frame +- reading posture +- witness or receipt refs +- payload digest +- decoded Think payload summary + +## Why + +The Think-on-Echo path will introduce evidence-bearing reads. If those reads +are only visible as raw JSON fixtures, developers will either ignore the +evidence posture or build ad hoc inspection snippets during every debugging +session. + +## Guardrails + +- This is not a user-facing browse replacement. +- It should not imply the reading is canonical full history. +- It should stay tied to the proof harness until Echo-backed read observers + are real. diff --git a/docs/method/backlog/cool-ideas/SURFACE_export-portable-format.md b/docs/method/backlog/cool-ideas/SURFACE_export-portable-format.md new file mode 100644 index 0000000..744418c --- /dev/null +++ b/docs/method/backlog/cool-ideas/SURFACE_export-portable-format.md @@ -0,0 +1,7 @@ +# Export to portable format + +Add a tool to export the cognitive worldline to standard markdown +or PDF for offline archival. No dependency on git-warp to read +the export. + +Source: ship-readiness audit 2026-04-11 §2.3 (Gap 3). diff --git a/docs/method/backlog/cool-ideas/SURFACE_mind-feed-webhook.md b/docs/method/backlog/cool-ideas/SURFACE_mind-feed-webhook.md new file mode 100644 index 0000000..83b9ea8 --- /dev/null +++ b/docs/method/backlog/cool-ideas/SURFACE_mind-feed-webhook.md @@ -0,0 +1,10 @@ +# Mind feed / webhook + +Subscribe to a mind's capture stream for real-time notifications. +When a thought is captured, emit it to a webhook URL, RSS feed, +or local unix socket. + +Use cases: +- Agent monitors human's mind for new ideas to react to (chorus) +- Dashboard shows live capture activity across minds +- Integration with Slack/Discord for team capture streams diff --git a/docs/method/backlog/cool-ideas/SURFACE_per-mind-color-themes.md b/docs/method/backlog/cool-ideas/SURFACE_per-mind-color-themes.md new file mode 100644 index 0000000..ff232ee --- /dev/null +++ b/docs/method/backlog/cool-ideas/SURFACE_per-mind-color-themes.md @@ -0,0 +1,11 @@ +# Per-mind color themes + +Each mind gets a deterministic shader (shipped in v0.7.0), but they +all share the same plum/cream/teal palette. Distinct palettes per +mind would make the visual identity stronger — you'd know which mind +you're in by the colors, not just the name. + +Could derive hue shifts from the mind name hash, or let users +configure per-mind themes. + +Noted in cycle 0004 retro as cool idea. diff --git a/docs/method/backlog/cool-ideas/SURFACE_remember-global-flag.md b/docs/method/backlog/cool-ideas/SURFACE_remember-global-flag.md new file mode 100644 index 0000000..55bc861 --- /dev/null +++ b/docs/method/backlog/cool-ideas/SURFACE_remember-global-flag.md @@ -0,0 +1,7 @@ +# --remember --global flag + +`think --remember` defaults to ambient project recall, which +surprises users expecting global search. Add a `--global` flag +to force cross-project search across all minds. + +Source: code-quality audit 2026-04-11 §1.2. diff --git a/docs/method/backlog/cool-ideas/SURFACE_reusable-fade-in.md b/docs/method/backlog/cool-ideas/SURFACE_reusable-fade-in.md new file mode 100644 index 0000000..44043cc --- /dev/null +++ b/docs/method/backlog/cool-ideas/SURFACE_reusable-fade-in.md @@ -0,0 +1,7 @@ +# Reusable terminal fade-in utility + +`fadeInBrowse()` rebuilds the browse model just to render it, then +discards it. The color-lerp-and-write logic is tightly coupled to +one call site. Extract a generic `fadeInContent(lines, palette, opts)` +that can fade any ANSI content from bg to visible. Useful for future +transitions (mind switch, page change). diff --git a/docs/method/backlog/cool-ideas/SURFACE_revisit-prompts.md b/docs/method/backlog/cool-ideas/SURFACE_revisit-prompts.md new file mode 100644 index 0000000..7ffd81e --- /dev/null +++ b/docs/method/backlog/cool-ideas/SURFACE_revisit-prompts.md @@ -0,0 +1,17 @@ +# Revisit prompts + +Think nudges you to revisit old thoughts. + +``` +think --revisit # random thought from the past +think --revisit --since=30d # older thoughts worth revisiting +``` + +Surfaces a thought and asks "What do you think now?" Your response +becomes an annotation or evolution. Closes the capture loop: +capture → time passes → re-encounter → enrich. + +Selection heuristics: prefer thoughts with no annotations, high +seed quality, or from topics you haven't revisited recently. + +**Superseded by:** `docs/design/enrichment-pipeline.md` diff --git a/docs/method/backlog/cool-ideas/SURFACE_session-replay.md b/docs/method/backlog/cool-ideas/SURFACE_session-replay.md new file mode 100644 index 0000000..49b84bb --- /dev/null +++ b/docs/method/backlog/cool-ideas/SURFACE_session-replay.md @@ -0,0 +1,8 @@ +# Session replay + +Play back a capture session chronologically, like watching yourself +think. Show each thought appearing in sequence with real timing +gaps (or accelerated). Useful for reviewing a thinking session's +arc — what started it, where it went, what emerged. + +Could be a TUI mode or a CLI command that streams to stdout. diff --git a/docs/method/backlog/cool-ideas/SURFACE_splash-state-machine.md b/docs/method/backlog/cool-ideas/SURFACE_splash-state-machine.md new file mode 100644 index 0000000..7bfd525 --- /dev/null +++ b/docs/method/backlog/cool-ideas/SURFACE_splash-state-machine.md @@ -0,0 +1,7 @@ +# Pure splash state machine + +Extract the splash animation logic into a pure function: +`nextSplashState(state, elapsed, input) → { nextState, frame }`. +This makes the shader transitions, mind cycling, and fade +logic testable without terminal I/O. Could enable splash +rendering in non-terminal contexts (web, recording). diff --git a/docs/method/backlog/cool-ideas/SURFACE_think-diff.md b/docs/method/backlog/cool-ideas/SURFACE_think-diff.md new file mode 100644 index 0000000..36b2f71 --- /dev/null +++ b/docs/method/backlog/cool-ideas/SURFACE_think-diff.md @@ -0,0 +1,10 @@ +# Think diff + +Compare two minds or two time periods. Show what changed in your +thinking — new themes, abandoned threads, evolving concerns. + +`think --diff --mind=work --since=7d` — what's new in the work +mind this week. + +`think --diff --mind=claude --mind=default` — how does the agent's +thinking diverge from yours? diff --git a/docs/method/backlog/cool-ideas/SURFACE_thought-graph-visualization.md b/docs/method/backlog/cool-ideas/SURFACE_thought-graph-visualization.md new file mode 100644 index 0000000..5d2ea9e --- /dev/null +++ b/docs/method/backlog/cool-ideas/SURFACE_thought-graph-visualization.md @@ -0,0 +1,8 @@ +# Thought graph visualization + +Render the worldline as a DAG in the terminal. Show capture sessions +as clusters, reflect derivations as edges, temporal flow as layout. + +Could use bijou's braille chart infrastructure for sub-character +resolution. The browse TUI could summon this as a panel alongside +the reader-first view. diff --git a/docs/method/backlog/up-next/CORE_think-echo-phase-0-direction-charter.md b/docs/method/backlog/up-next/CORE_think-echo-phase-0-direction-charter.md new file mode 100644 index 0000000..9e19271 --- /dev/null +++ b/docs/method/backlog/up-next/CORE_think-echo-phase-0-direction-charter.md @@ -0,0 +1,50 @@ +--- +id: CORE_think-echo-phase-0-direction-charter +blocks: + - CORE_think-echo-phase-1-app-contract +blocked_by: + - CORE_think-echo-contract-proof +--- + +# CORE - Phase 0 - Think on Echo direction charter + +Legend: CORE + +## Idea + +Create a short Think-owned design packet that records the first proof boundary, +ownership split, and non-goals for the Think-on-Echo lane. + +The packet should live in `docs/design//` when the cycle is pulled. It +should anchor to repo truth in Think, Echo, Continuum, and Wesley, but it +should not copy those worlds into Think. + +## Why + +The direction changes the center of gravity for Think. Without a local charter, +future work can drift into one of three wrong shapes: + +- another broad `git-warp` repair project +- Echo learning Think-specific nouns +- Continuum becoming the home for Think's app schema + +The charter should make the smallest executable hill obvious before any code +or schema is added. + +## Scope + +- State that Think owns app/domain nouns. +- State that Echo owns generic runtime dispatch and observation. +- State that Continuum owns shared runtime-boundary families and WARPspace + coordination. +- State that Wesley owns generated helpers, codecs, registries, and witnesses. +- Name the first proof as `CaptureThought` plus `InspectThought`. +- Record what is intentionally out of scope for the first proof. + +## Acceptance Criteria + +- A design packet exists for the Think-on-Echo proof. +- The packet names the first witness command or test shape. +- The packet explicitly excludes remember, browse, migration, multi-mind UX, + and `git-warp` exchange from the first proof. +- The packet references this backlog phase map. diff --git a/docs/method/backlog/up-next/CORE_think-echo-phase-1-app-contract.md b/docs/method/backlog/up-next/CORE_think-echo-phase-1-app-contract.md new file mode 100644 index 0000000..04dad15 --- /dev/null +++ b/docs/method/backlog/up-next/CORE_think-echo-phase-1-app-contract.md @@ -0,0 +1,61 @@ +--- +id: CORE_think-echo-phase-1-app-contract +blocks: + - CORE_think-echo-phase-2-runtime-roundtrip +blocked_by: + - CORE_think-echo-phase-0-direction-charter +--- + +# CORE - Phase 1 - Think memory app contract + +Legend: CORE + +## Idea + +Author the smallest Think-owned contract family needed for a raw capture and +exact inspect round trip. + +The likely first file is: + +```text +contracts/think-memory.graphql +``` + +The family should define only the nouns needed for the first proof: + +```text +mutation CaptureThought(input: CaptureThoughtInput): CaptureThoughtResult +query InspectThought(entryId: ID!): ThoughtEntry +``` + +Equivalent names are acceptable if the design packet chooses them. + +## Why + +Think's domain model is currently spread across JS store code, CLI/MCP shapes, +and read surfaces. A Think-authored app contract gives the Echo proof a typed +boundary without pushing application nouns into Echo or Continuum. + +## Model Constraints + +The first model should stay boring: + +- `entryId` +- content digest or `thoughtId` +- raw text +- captured timestamp +- ingress/provenance fields already known by Think +- `mindId`, defaulting to `default` +- optional `actorId` or writer identity if the proof needs it + +Do not add tags, embeddings, summaries, ranking fields, browse windows, or +reflection outputs to the first contract. + +## Acceptance Criteria + +- A Think-owned GraphQL contract file exists. +- The contract supports one capture mutation and one exact inspect query. +- The contract names `mindId` explicitly, even if only `default` is used. +- Generated-artifact locations are decided but generated output is not treated + as semantic source truth. +- No Echo or Continuum schema is modified to add Think domain nouns. diff --git a/docs/method/backlog/up-next/CORE_think-echo-phase-2-runtime-roundtrip.md b/docs/method/backlog/up-next/CORE_think-echo-phase-2-runtime-roundtrip.md new file mode 100644 index 0000000..03a7abd --- /dev/null +++ b/docs/method/backlog/up-next/CORE_think-echo-phase-2-runtime-roundtrip.md @@ -0,0 +1,61 @@ +--- +id: CORE_think-echo-phase-2-runtime-roundtrip +blocks: + - CORE_think-echo-phase-3-experimental-product-path + - CORE_think-echo-phase-4-read-observers + - CORE_think-echo-phase-5-migration-and-sibling-exchange +blocked_by: + - CORE_think-echo-phase-1-app-contract + - CORE_think-echo-toolchain-capability-probe +--- + +# CORE - Phase 2 - Echo runtime round-trip proof + +Legend: CORE + +## Idea + +Build the first executable witness that Think can capture and inspect one +thought through Echo. + +This should be a test, example, or proof harness that runs outside the current +production capture path. + +## Witness Shape + +The proof should: + +1. Build a `CaptureThought` input through generated or minimally generated + contract helpers. +2. Dispatch the canonical intent through Echo. +3. Receive admission evidence for the capture. +4. Build an exact `InspectThought` observation by entry id or coordinate. +5. Receive a `ReadingEnvelope` or equivalent Echo observation artifact. +6. Verify the reading posture is complete. +7. Decode the payload into a Think-owned `ThoughtEntry`. +8. Assert that raw text and capture metadata survived the round trip. + +## Why + +This is the first point where the north star becomes engineering fact. Until +this passes, the Echo direction is still architecture discussion rather than a +usable migration path. + +## Constraints + +- Do not switch the CLI, MCP server, macOS app, or default store to Echo. +- Do not require existing `~/.think/*` minds to migrate. +- Do not depend on `git-warp` in the hot proof path. +- Do not hand-roll runtime bytes if the current Wesley/Echo toolchain can + generate the needed helper surface. +- If generation is not ready, write the smallest temporary adapter and log the + missing generated cut as follow-on debt. + +## Acceptance Criteria + +- One reproducible command proves raw capture plus exact inspect through Echo. +- The proof asserts decoded Think payload fields, not only runtime success. +- The proof records admission/read evidence in a way that can be inspected. +- The production Think capture path remains unchanged. +- Follow-on gaps for Wesley/Echo generation are recorded if any temporary + adapter is used. diff --git a/docs/method/backlog/up-next/CORE_think-echo-phase-3-experimental-product-path.md b/docs/method/backlog/up-next/CORE_think-echo-phase-3-experimental-product-path.md new file mode 100644 index 0000000..a0e73ee --- /dev/null +++ b/docs/method/backlog/up-next/CORE_think-echo-phase-3-experimental-product-path.md @@ -0,0 +1,50 @@ +--- +id: CORE_think-echo-phase-3-experimental-product-path +blocks: + - CORE_think-echo-phase-4-read-observers +blocked_by: + - CORE_think-echo-phase-2-runtime-roundtrip +--- + +# CORE - Phase 3 - Experimental Think on Echo product path + +Legend: CORE + +## Idea + +After the round-trip proof works, expose it through an explicit experimental +Think surface without replacing the current store by accident. + +Possible shapes: + +- an internal proof command +- an opt-in CLI flag +- a separate dev-only command +- a shadow-write mode that writes to Echo while the existing store remains the + source of product truth + +The cycle that pulls this card should choose one. + +## Why + +The proof needs a product-adjacent path before it can teach us about capture +latency, operational ergonomics, and real data shapes. But switching the +default capture path too early risks user data and hides migration work. + +## Acceptance Criteria + +- The Echo-backed path is explicitly opt in. +- The default Think capture behavior remains unchanged. +- The path reports enough evidence to compare current store capture with Echo + capture. +- The path has a clear failure posture that does not threaten the existing + local capture. +- The implementation defines whether it is proof-only, shadow-write, or a + candidate replacement path. + +## Non-Goals + +- No default store replacement. +- No automatic migration. +- No cross-runtime exchange. +- No UI polish beyond what is needed to operate and inspect the proof. diff --git a/docs/method/release-runbook.md b/docs/method/release-runbook.md new file mode 100644 index 0000000..1aa5a6e --- /dev/null +++ b/docs/method/release-runbook.md @@ -0,0 +1,21 @@ +# Release Readiness Runbook + +Sequential pre-flight checks before tagging a release. + +## Pre-flight + +1. `npm run lint` — zero errors, zero warnings +2. `npm run test:ports` — all port tests pass +3. `npm run test:m1` — all acceptance tests pass +4. `npm run test:m2` — macOS Swift tests pass (Darwin only) +5. `node bin/think.js --doctor` — all checks ok/skip +6. `node bin/think.js --stats` — verify capture count is sane +7. Verify MCP tools list: `node -e "import('./src/mcp/server.js').then(m => m.createThinkMcpServer()).then(s => s.listTools()).then(t => console.log(t.tools.map(x=>x.name)))"` + +## Release + +1. Bump version in `package.json` +2. Date the CHANGELOG section +3. Commit: `chore: bump version to X.Y.Z` +4. Tag: `git tag -a vX.Y.Z -m "vX.Y.Z — description"` +5. Push: `git push origin main --tags` diff --git a/docs/method/retro/0009-clarify-reflect-mcp-status/clarify-reflect-mcp-status.md b/docs/method/retro/0009-clarify-reflect-mcp-status/clarify-reflect-mcp-status.md new file mode 100644 index 0000000..03763e9 --- /dev/null +++ b/docs/method/retro/0009-clarify-reflect-mcp-status/clarify-reflect-mcp-status.md @@ -0,0 +1,36 @@ +--- +title: "Clarify Reflect MCP status" +cycle: "0009-clarify-reflect-mcp-status" +design_doc: "docs/design/0009-clarify-reflect-mcp-status/clarify-reflect-mcp-status.md" +outcome: hill-met +drift_check: yes +--- + +# Clarify Reflect MCP status Retro + +## Summary + +Two-line fix in GUIDE.md: noted reflect is CLI-only, and updated agent +isolation advice to mention `~/.think/` alongside THINK_REPO_DIR. + +## Playback Witness + +- [verification.md](witness/verification.md) — 173 pass, 0 fail. + +## Drift + +- None. + +## New Debt + +- None. + +## Cool Ideas + +- None. + +## Backlog Maintenance + +- [x] Inbox clear +- [x] Priorities reviewed +- [x] Dead work buried or merged diff --git a/docs/method/retro/0009-clarify-reflect-mcp-status/witness/verification.md b/docs/method/retro/0009-clarify-reflect-mcp-status/witness/verification.md new file mode 100644 index 0000000..2dfb591 --- /dev/null +++ b/docs/method/retro/0009-clarify-reflect-mcp-status/witness/verification.md @@ -0,0 +1,237 @@ +--- +title: "Verification Witness for Cycle 9" +--- + +# Verification Witness for Cycle 9 + +This witness proves that `Clarify Reflect MCP status` now carries the required +behavior and adheres to the repo invariants. + +## Test Results + +```text + +> think@0.7.0 test +> npm run test:ports && npm run test:m1 + + +> think@0.7.0 test:ports +> node --test test/ports/*.test.js + +✔ BG_TOKEN is exported from style.js alongside the palette (0.82125ms) +✔ windowed browse initializes with no drawer open (18.657166ms) +✔ saveRawCapture writes cwd receipts first and defers git enrichment to followthrough (1134.805125ms) +✔ capture provenance exports the canonical ingress set (14.419167ms) +✔ capture provenance trims source strings while preserving valid ingress and URL (0.172334ms) +✔ capture provenance trims ingress strings before validation (0.076792ms) +✔ capture provenance reads and normalizes environment input (0.075ms) +✔ METHOD docs use one consistent cycle-only release and README closeout policy (3.431958ms) +✔ cycle 0006 retrospective restarts ordered numbering for the human playback section (0.827917ms) +✔ runDiagnostics reports ok for a healthy repo with entries (31.532167ms) +✔ runDiagnostics reports fail when think directory does not exist (0.184125ms) +✔ runDiagnostics reports fail when local repo has no git init (0.748125ms) +✔ runDiagnostics reports ok for upstream when reachable (20.31675ms) +✔ runDiagnostics reports warn for upstream when unreachable (19.926833ms) +✔ runDiagnostics reports skip for upstream when not configured (16.738292ms) +✔ runDiagnostics reports ok for upstream when configured (16.435625ms) +✔ runDiagnostics includes all expected check names (16.792666ms) +✔ runDiagnostics reports graph model version when available (17.099916ms) +✔ runDiagnostics warns when graph model needs migration (16.438167ms) +✔ runDiagnostics reports entry count when available (19.67325ms) +✔ runDiagnostics warns when entry count is zero (16.724208ms) +✔ runDiagnostics skips graph and entry checks when no repo exists (0.179417ms) +✔ shared JSON helper canonicalizes object keys deterministically on parse and stringify (1.59925ms) +✔ discoverMinds finds all valid repos under the think directory (75.410167ms) +✔ discoverMinds ignores directories without git repos (16.88275ms) +✔ discoverMinds labels ~/.think/repo as "default" (16.4605ms) +✔ discoverMinds sorts with default first, then alphabetical (53.926958ms) +✔ discoverMinds returns empty array when think directory does not exist (0.172708ms) +✔ discoverMinds includes repoDir for each mind (17.582ms) +✔ shaderForMind returns a deterministic index for a given name (0.194375ms) +✔ shaderForMind returns different indices for different names (0.075041ms) +✔ shaderForMind stays within the shader count range (0.082291ms) +✔ shaderForMind handles single-character names (0.099208ms) +✔ selectLogo picks large mind logo when terminal is wide and tall enough (0.946833ms) +✔ selectLogo picks medium mind logo when terminal fits medium but not large (0.1055ms) +✔ selectLogo picks text logo when terminal is too small for mind (0.057166ms) +✔ selectLogo always returns something even for tiny terminals (0.053958ms) +✔ renderSplash contains the logo (0.1415ms) +✔ renderSplash contains the Enter prompt (0.06025ms) +✔ renderSplash output fits within the given dimensions (0.066917ms) +✔ splash.js does not export renderSplashView (dead code from RE-015 workaround) (0.046042ms) +✔ renderSplash centers the prompt horizontally (0.168584ms) +✔ windowed browse model initializes in windowed mode (0.221333ms) +✔ formatStats includes a sparkline when buckets are present (2.009625ms) +✔ formatStats omits sparkline when no buckets are present (0.139084ms) +✔ formatStats handles a single bucket without crashing (0.129125ms) +✔ formatStats handles empty bucket array without sparkline (0.075416ms) +✔ formatStats sparkline is oldest-to-newest (left-to-right) (0.100875ms) +ℹ tests 48 +ℹ suites 0 +ℹ pass 48 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 0 +ℹ duration_ms 1405.736 + +> think@0.7.0 test:m1 +> node --test test/acceptance/*.test.js + +✔ think --doctor reports health of a repo with captures (3704.672792ms) +✔ think --doctor succeeds before the first capture (365.52525ms) +✔ think --json --doctor emits a structured health report (3919.272791ms) +✔ think --doctor rejects an unexpected thought argument (308.295875ms) +✔ new capture writes graph-native relationship edges while preserving compatibility properties (2272.514792ms) +✔ think --migrate-graph upgrades a version-1 property-linked repo additively (4139.726292ms) +✔ think --migrate-graph is idempotent and safe to rerun (4152.485875ms) +✔ capture on a version-1 repo still succeeds and only migrates after the raw local save (5498.443542ms) +✔ graph-native commands fail clearly on an outdated repo outside interactive use (4206.9575ms) +✔ interactive inspect on an outdated repo shows visible upgrade progress before continuing (3028.668ms) +✔ interactive browse on an outdated repo shows visible upgrade progress before continuing (3034.69175ms) +✔ think --json emits explicit graph migration required errors for outdated graph-native commands (2068.483166ms) +✔ think --migrate-graph upgrades a version-2 repo to graph model version 3 with browse and reflect read edges (6562.929ms) +✔ think --json --inspect exposes direct reflect receipts that exist only through graph-native v3 edges (2342.763875ms) +✔ think --help prints top-level usage without bootstrapping local state (444.847791ms) +✔ think -h is accepted as a short alias for top-level help (343.892708ms) +✔ think --recent --help prints recent help instead of running the command (279.940958ms) +✔ think --recent -h prints recent help instead of running the command (343.467375ms) +✔ think recent --help fails and points callers to the explicit flag form (320.343333ms) +✔ think --inspect --help bypasses required entry validation (369.755375ms) +✔ think --json --help emits structured JSONL help output (400.186917ms) +✔ think recent --json --help fails machine-readably instead of acting as shorthand help (334.588875ms) +✔ think -- -h captures the literal text after option parsing is terminated (3160.851208ms) +✔ think --ingest reads stdin explicitly and captures it into the normal raw-capture core (3313.22225ms) +✔ think with stdin but without --ingest does not accidentally capture piped input (352.562542ms) +✔ think --ingest rejects mixed positional capture text and stdin capture text (371.140667ms) +✔ think --json --ingest preserves machine-readable capture semantics for agents (3356.302875ms) +✔ think --ingest rejects empty stdin payloads (393.319083ms) +✔ think --json capture emits JSONL on stdout and keeps stderr quiet when there are no warnings (1902.405667ms) +✔ think --json --recent emits entry events instead of plain text (7408.645416ms) +✔ think --json --stats emits totals and bucket rows as JSONL (4510.389375ms) +✔ think --json validation failures emit JSONL on stderr instead of stdout (278.428708ms) +✔ think --json reports backup pending as a structured warning on stderr (1396.285209ms) +✔ think --json emits deterministically sorted keys in JSONL output (1952.135208ms) +✔ think MCP server lists the core Think tools (547.728791ms) +✔ think MCP capture, recent, browse, and inspect route through the existing Think runtime (5103.489083ms) +✔ think MCP capture preserves additive provenance separately from the raw text (3974.279833ms) +✔ think MCP capture trims additive provenance strings before persistence (2271.587958ms) +✔ think MCP remember, stats, and prompt_metrics expose structured read results (5466.618666ms) +✔ think MCP doctor tool returns structured health checks (2224.180792ms) +✔ CLI raw capture bootstraps the local repo and preserves exact text (3184.824167ms) +✔ think "recent" is captured as a thought rather than triggering the list (3325.770708ms) +✔ think --recent does not bootstrap local state before the first capture (311.720958ms) +✔ think --recent rejects an unexpected thought argument (354.227166ms) +✔ capture does not require retrieval-before-write or conceptual confirmation (4582.194541ms) +✔ THINK_REPO_DIR overrides the default local repo path (2234.401458ms) +✔ reachable upstream reports local save first and backup second (1448.077709ms) +✔ unreachable upstream keeps capture successful and reports backup pending (1302.932292ms) +✔ recent stays plain and chronological (5980.535792ms) +✔ capture is append-only across later capture activity (3595.393459ms) +✔ duplicate thoughts produce distinct captures rather than deduping (3602.2705ms) +✔ empty input is rejected (256.273125ms) +✔ whitespace-only input is rejected (251.19ms) +✔ capture preserves formatting neutrality for spacing, casing, and punctuation (1788.994958ms) +✔ default user language avoids Git terminology (1101.493583ms) +✔ verbose capture emits JSONL trace updates on stderr (1105.606416ms) +✔ raw entries remain immutable after later derived entries exist (0.180834ms) # TODO +✔ stored raw entry bytes remain unchanged in the local store after later writes (0.035458ms) # TODO +✔ entry kind separation remains explicit once the first derived-entry write path exists (0.023458ms) # TODO +✔ think --prompt-metrics prints factual prompt telemetry totals and medians (467.142708ms) +✔ think --prompt-metrics does not bootstrap local state before the first capture (317.780666ms) +✔ think --prompt-metrics supports --since filtering over prompt sessions (285.4475ms) +✔ think --prompt-metrics supports --bucket=day (353.251583ms) +✔ think --json --prompt-metrics emits explicit summary, timing, and bucket rows (307.6105ms) +✔ think --prompt-metrics rejects an unexpected thought argument (354.539583ms) +✔ think --prompt-metrics rejects invalid filter values (722.9255ms) +✔ think --recent --count limits output to the newest N raw captures (10057.551625ms) +✔ think --recent --query filters raw captures by case-insensitive text match (6791.675125ms) +✔ removed recent alias flags fail clearly and point to the scoped forms (1690.173584ms) +✔ think --json --recent applies count and query filters while remaining JSONL-only (5763.641125ms) +✔ think --remember uses the current project context to recall relevant prior thoughts (3998.883833ms) +✔ think --remember with an explicit phrase recalls matching thoughts without turning into generic recent listing (6140.188333ms) +✔ think --json --remember emits explicit ambient scope and match receipts for agents (3722.297084ms) +✔ think --remember falls back honestly to textual project-token matching for entries without ambient project receipts (3584.468875ms) +✔ think --remember --limit returns only the top N matching thoughts in deterministic order (7369.246709ms) +✔ think --remember --brief returns a triage-friendly snippet instead of the full multiline thought (3337.311375ms) +✔ think --json --remember --brief --limit preserves bounded explicit recall receipts for agents (5158.356542ms) +✔ think --remember rejects invalid --limit values (1404.643708ms) +✔ think --browse shows one raw thought with its immediate newer and older neighbors (5238.92175ms) +✔ think --browse without an entry id fails clearly outside interactive TTY use and remains read-only (234.954958ms) +✔ think --json --browse without an entry id stays machine-readable and does not try to open the shell (230.496375ms) +✔ think --json --browse emits JSONL rows for the current raw thought and its neighbors (5228.740292ms) +✔ think --browse opens a reader-first browse TUI with metadata and no permanent recent rail (5840.087833ms) +✔ think --browse can reveal a chronology drawer on demand instead of showing the full log by default (5083.78225ms) +✔ think --browse can jump to another thought through a fuzzy jump surface (5288.017042ms) +✔ think --browse can reveal inspect receipts inside the scripted browse TUI (3453.193709ms) +✔ think --browse can hand the selected thought into reflect from the scripted browse TUI (3498.8365ms) +✔ think --browse surfaces session identity for the current thought without replacing the reader-first view (7583.744625ms) +✔ think --browse uses a short visible entry id in the reader-first metadata while inspect keeps the full exact id (6052.923166ms) +✔ think --browse can reveal a summon-only session drawer that excludes out-of-session thoughts (7004.340875ms) +✔ think --browse reveals a structured session drawer with a visible start label and current-thought marker (6988.800833ms) +✔ think --json --browse emits explicit session context and session-nearby rows without mislabeling out-of-session thoughts (7412.554125ms) +✔ think --browse can move to the previous thought within the current session without leaving reader-first browse (5185.329916ms) +✔ think --browse keeps the current thought in place when there is no next thought in the current session (5175.026583ms) +✔ think --json --browse exposes explicit session traversal semantics without conflating them with chronology neighbors (5235.178833ms) +✔ think --inspect exposes exact raw entry metadata without narration (1725.803375ms) +✔ think --json --inspect emits JSONL for the exact raw entry metadata (1738.224167ms) +✔ think --inspect exposes additive capture provenance separately from the raw text (1726.065709ms) +✔ think --json --inspect includes additive capture provenance in the inspected entry payload (1793.256125ms) +✔ think --inspect exposes canonical content identity and direct derived receipts when they exist (3459.143916ms) +✔ think --json --inspect emits canonical content identity and direct derived receipt rows (3467.726541ms) +✔ think --inspect exposes the first derived bundle as explicit raw, canonical, derived, and context sections (5344.292167ms) +✔ think --json --inspect emits canonical identity plus seed-quality and session-attribution receipts with provenance (5403.377458ms) +✔ think --json --inspect keeps duplicate raw captures distinct while linking them to the same canonical thought (4370.775209ms) +✔ think --reflect starts an explicit seeded reflect session with a deterministic seed-first challenge prompt (7508.24625ms) +✔ removed brainstorm aliases fail clearly and point to reflect (2532.06625ms) +✔ think --reflect can use an explicit sharpen prompt family (2510.059666ms) +✔ think --reflect-session stores a separate derived entry with preserved seed-first lineage (6370.612375ms) +✔ think --reflect validates explicit session entry and stays read-only on invalid start (2372.906917ms) +✔ think --reflect fails clearly when the seed entry does not exist (263.180416ms) +✔ think --reflect refuses status-like seeds that are not pressure-testable ideas (6905.7875ms) +✔ think --json --reflect refuses ineligible seeds with structured machine-readable errors (6739.994583ms) +✔ think --json --reflect emits only JSONL with seed-first session and prompt data (3620.580959ms) +✔ think --json --reflect-session emits only JSONL and preserves stored seed-first lineage (2643.172667ms) +✔ think --json reflect validation failures stay fully machine-readable (236.832042ms) +✔ think --stats prints total thoughts (5904.593542ms) +✔ think --stats does not bootstrap local state before the first capture (288.072333ms) +✔ think "stats" is captured as a thought rather than triggering the command (2971.903292ms) +✔ think --stats rejects an unexpected thought argument (277.146417ms) +✔ think stats supports --since filter (3974.135416ms) +✔ think --stats rejects an invalid --since value (260.8305ms) +✔ think stats supports --from and --to filters (5767.738292ms) +✔ think --stats rejects invalid absolute date filters (262.550584ms) +✔ think stats supports --bucket=day (5830.65575ms) +✔ think --stats --bucket=day includes a sparkline in text output (5655.1085ms) +✔ think --stats --bucket=day --json includes sparkline in stats.total event (5267.173ms) +✔ think --stats without --bucket omits sparkline (1597.042708ms) +✔ think --stats rejects an invalid bucket value (237.542542ms) +ℹ tests 128 +ℹ suites 0 +ℹ pass 125 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 3 +ℹ duration_ms 175020.852542 + +``` + +## Drift Results + +```text +Playback-question drift found. +Scanned 1 active cycle, 2 playback questions, 0 test descriptions. +Search basis: exact normalized match in tests/**/*.test.* and tests/**/*.spec.* descriptions. + +docs/design/0009-clarify-reflect-mcp-status/clarify-reflect-mcp-status.md +- Human: Does GUIDE.md clarify that reflect is CLI-only? + No exact normalized test description match found. +- Agent: Does agent isolation advice mention the multi-mind pattern? + No exact normalized test description match found. + +``` + +## Manual Verification + +- [x] Automated capture completed successfully. diff --git a/docs/method/retro/0010-document-mind-orchestration/document-mind-orchestration.md b/docs/method/retro/0010-document-mind-orchestration/document-mind-orchestration.md new file mode 100644 index 0000000..3927da8 --- /dev/null +++ b/docs/method/retro/0010-document-mind-orchestration/document-mind-orchestration.md @@ -0,0 +1,38 @@ +--- +title: "Document mind orchestration" +cycle: "0010-document-mind-orchestration" +design_doc: "docs/design/0010-document-mind-orchestration/document-mind-orchestration.md" +outcome: hill-met +drift_check: yes +--- + +# Document mind orchestration Retro + +## Summary + +Created `docs/MIND_ORCHESTRATION.md` covering mind creation, discovery, +splash/browse switching, capture routing, THINK_REPO_DIR interaction, +agent isolation, and limitations. Linked from GUIDE.md. Port test +asserts the doc exists and is linked. + +## Playback Witness + +- [verification.md](witness/verification.md) — 49 port tests pass. + +## Drift + +- None. + +## New Debt + +- None. + +## Cool Ideas + +- None. + +## Backlog Maintenance + +- [x] Inbox clear +- [x] Priorities reviewed +- [x] Dead work buried or merged diff --git a/docs/method/retro/0010-document-mind-orchestration/witness/verification.md b/docs/method/retro/0010-document-mind-orchestration/witness/verification.md new file mode 100644 index 0000000..0d3e832 --- /dev/null +++ b/docs/method/retro/0010-document-mind-orchestration/witness/verification.md @@ -0,0 +1,246 @@ +--- +title: "Verification Witness for Cycle 10" +--- + +# Verification Witness for Cycle 10 + +This witness proves that `Document mind orchestration` now carries the required +behavior and adheres to the repo invariants. + +## Test Results + +```text + +> think@0.7.0 test +> npm run test:ports && npm run test:m1 + + +> think@0.7.0 test:ports +> node --test test/ports/*.test.js + +✔ BG_TOKEN is exported from style.js alongside the palette (0.81025ms) +✔ windowed browse initializes with no drawer open (18.197542ms) +✔ saveRawCapture writes cwd receipts first and defers git enrichment to followthrough (1040.900042ms) +✔ capture provenance exports the canonical ingress set (1.524125ms) +✔ capture provenance trims source strings while preserving valid ingress and URL (0.134583ms) +✔ capture provenance trims ingress strings before validation (0.069834ms) +✔ capture provenance reads and normalizes environment input (0.072542ms) +✔ METHOD docs use one consistent cycle-only release and README closeout policy (1.666833ms) +✔ MIND_ORCHESTRATION.md exists and is linked from GUIDE.md (0.62075ms) +✔ cycle 0006 retrospective restarts ordered numbering for the human playback section (0.33825ms) +✔ runDiagnostics reports ok for a healthy repo with entries (24.209291ms) +✔ runDiagnostics reports fail when think directory does not exist (0.221167ms) +✔ runDiagnostics reports fail when local repo has no git init (1.124292ms) +✔ runDiagnostics reports ok for upstream when reachable (18.441417ms) +✔ runDiagnostics reports warn for upstream when unreachable (17.722208ms) +✔ runDiagnostics reports skip for upstream when not configured (16.806541ms) +✔ runDiagnostics reports ok for upstream when configured (17.02625ms) +✔ runDiagnostics includes all expected check names (16.187625ms) +✔ runDiagnostics reports graph model version when available (17.027667ms) +✔ runDiagnostics warns when graph model needs migration (16.843375ms) +✔ runDiagnostics reports entry count when available (16.477583ms) +✔ runDiagnostics warns when entry count is zero (14.868125ms) +✔ runDiagnostics skips graph and entry checks when no repo exists (0.157583ms) +✔ shared JSON helper canonicalizes object keys deterministically on parse and stringify (1.646417ms) +✔ discoverMinds finds all valid repos under the think directory (71.908042ms) +✔ discoverMinds ignores directories without git repos (16.714417ms) +✔ discoverMinds labels ~/.think/repo as "default" (16.308375ms) +✔ discoverMinds sorts with default first, then alphabetical (48.504625ms) +✔ discoverMinds returns empty array when think directory does not exist (0.163417ms) +✔ discoverMinds includes repoDir for each mind (16.755ms) +✔ shaderForMind returns a deterministic index for a given name (0.207625ms) +✔ shaderForMind returns different indices for different names (0.076084ms) +✔ shaderForMind stays within the shader count range (0.076875ms) +✔ shaderForMind handles single-character names (0.101167ms) +✔ selectLogo picks large mind logo when terminal is wide and tall enough (0.920709ms) +✔ selectLogo picks medium mind logo when terminal fits medium but not large (0.096833ms) +✔ selectLogo picks text logo when terminal is too small for mind (0.058667ms) +✔ selectLogo always returns something even for tiny terminals (0.053125ms) +✔ renderSplash contains the logo (0.140709ms) +✔ renderSplash contains the Enter prompt (0.061709ms) +✔ renderSplash output fits within the given dimensions (0.067584ms) +✔ splash.js does not export renderSplashView (dead code from RE-015 workaround) (0.045584ms) +✔ renderSplash centers the prompt horizontally (0.153208ms) +✔ windowed browse model initializes in windowed mode (0.215875ms) +✔ formatStats includes a sparkline when buckets are present (1.640625ms) +✔ formatStats omits sparkline when no buckets are present (0.086ms) +✔ formatStats handles a single bucket without crashing (0.092542ms) +✔ formatStats handles empty bucket array without sparkline (0.065917ms) +✔ formatStats sparkline is oldest-to-newest (left-to-right) (0.082584ms) +ℹ tests 49 +ℹ suites 0 +ℹ pass 49 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 0 +ℹ duration_ms 1277.217834 + +> think@0.7.0 test:m1 +> node --test test/acceptance/*.test.js + +✔ think --doctor reports health of a repo with captures (2721.957708ms) +✔ think --doctor succeeds before the first capture (302.673834ms) +✔ think --json --doctor emits a structured health report (2516.202583ms) +✔ think --doctor rejects an unexpected thought argument (269.826583ms) +✔ new capture writes graph-native relationship edges while preserving compatibility properties (1811.589208ms) +✔ think --migrate-graph upgrades a version-1 property-linked repo additively (2927.598666ms) +✔ think --migrate-graph is idempotent and safe to rerun (2660.484208ms) +✔ capture on a version-1 repo still succeeds and only migrates after the raw local save (4482.656791ms) +✔ graph-native commands fail clearly on an outdated repo outside interactive use (3826.6075ms) +✔ interactive inspect on an outdated repo shows visible upgrade progress before continuing (2808.0755ms) +✔ interactive browse on an outdated repo shows visible upgrade progress before continuing (2819.269083ms) +✔ think --json emits explicit graph migration required errors for outdated graph-native commands (1933.378458ms) +✔ think --migrate-graph upgrades a version-2 repo to graph model version 3 with browse and reflect read edges (6052.391125ms) +✔ think --json --inspect exposes direct reflect receipts that exist only through graph-native v3 edges (2171.741583ms) +✔ think --help prints top-level usage without bootstrapping local state (371.798333ms) +✔ think -h is accepted as a short alias for top-level help (284.693459ms) +✔ think --recent --help prints recent help instead of running the command (281.550583ms) +✔ think --recent -h prints recent help instead of running the command (275.428542ms) +✔ think recent --help fails and points callers to the explicit flag form (277.926458ms) +✔ think --inspect --help bypasses required entry validation (315.878334ms) +✔ think --json --help emits structured JSONL help output (301.226875ms) +✔ think recent --json --help fails machine-readably instead of acting as shorthand help (300.804ms) +✔ think -- -h captures the literal text after option parsing is terminated (2230.941417ms) +✔ think --ingest reads stdin explicitly and captures it into the normal raw-capture core (2468.337833ms) +✔ think with stdin but without --ingest does not accidentally capture piped input (305.254375ms) +✔ think --ingest rejects mixed positional capture text and stdin capture text (312.728791ms) +✔ think --json --ingest preserves machine-readable capture semantics for agents (2191.334125ms) +✔ think --ingest rejects empty stdin payloads (306.733167ms) +✔ think --json capture emits JSONL on stdout and keeps stderr quiet when there are no warnings (1521.869875ms) +✔ think --json --recent emits entry events instead of plain text (4568.054875ms) +✔ think --json --stats emits totals and bucket rows as JSONL (3814.783375ms) +✔ think --json validation failures emit JSONL on stderr instead of stdout (263.8965ms) +✔ think --json reports backup pending as a structured warning on stderr (1151.914167ms) +✔ think --json emits deterministically sorted keys in JSONL output (1517.587583ms) +✔ think MCP server lists the core Think tools (430.016625ms) +✔ think MCP capture, recent, browse, and inspect route through the existing Think runtime (3671.827875ms) +✔ think MCP capture preserves additive provenance separately from the raw text (2132.501292ms) +✔ think MCP capture trims additive provenance strings before persistence (1906.900792ms) +✔ think MCP remember, stats, and prompt_metrics expose structured read results (4610.182167ms) +✔ think MCP doctor tool returns structured health checks (2035.6105ms) +✔ CLI raw capture bootstraps the local repo and preserves exact text (2415.188458ms) +✔ think "recent" is captured as a thought rather than triggering the list (2244.652292ms) +✔ think --recent does not bootstrap local state before the first capture (290.928667ms) +✔ think --recent rejects an unexpected thought argument (289.878292ms) +✔ capture does not require retrieval-before-write or conceptual confirmation (2964.543583ms) +✔ THINK_REPO_DIR overrides the default local repo path (1875.105375ms) +✔ reachable upstream reports local save first and backup second (1225.381458ms) +✔ unreachable upstream keeps capture successful and reports backup pending (1128.583667ms) +✔ recent stays plain and chronological (5504.962167ms) +✔ capture is append-only across later capture activity (3350.699334ms) +✔ duplicate thoughts produce distinct captures rather than deduping (3326.489541ms) +✔ empty input is rejected (253.614667ms) +✔ whitespace-only input is rejected (253.639041ms) +✔ capture preserves formatting neutrality for spacing, casing, and punctuation (1615.88775ms) +✔ default user language avoids Git terminology (1036.514792ms) +✔ verbose capture emits JSONL trace updates on stderr (1022.838792ms) +✔ raw entries remain immutable after later derived entries exist (0.101375ms) # TODO +✔ stored raw entry bytes remain unchanged in the local store after later writes (0.029333ms) # TODO +✔ entry kind separation remains explicit once the first derived-entry write path exists (0.047875ms) # TODO +✔ think --prompt-metrics prints factual prompt telemetry totals and medians (374.854833ms) +✔ think --prompt-metrics does not bootstrap local state before the first capture (299.978833ms) +✔ think --prompt-metrics supports --since filtering over prompt sessions (296.753542ms) +✔ think --prompt-metrics supports --bucket=day (284.038291ms) +✔ think --json --prompt-metrics emits explicit summary, timing, and bucket rows (303.1205ms) +✔ think --prompt-metrics rejects an unexpected thought argument (346.953084ms) +✔ think --prompt-metrics rejects invalid filter values (580.5055ms) +✔ think --recent --count limits output to the newest N raw captures (6843.85575ms) +✔ think --recent --query filters raw captures by case-insensitive text match (5835.055208ms) +✔ removed recent alias flags fail clearly and point to the scoped forms (1591.36075ms) +✔ think --json --recent applies count and query filters while remaining JSONL-only (5372.569209ms) +✔ think --remember uses the current project context to recall relevant prior thoughts (3732.963375ms) +✔ think --remember with an explicit phrase recalls matching thoughts without turning into generic recent listing (5639.135334ms) +✔ think --json --remember emits explicit ambient scope and match receipts for agents (3516.629083ms) +✔ think --remember falls back honestly to textual project-token matching for entries without ambient project receipts (3394.144708ms) +✔ think --remember --limit returns only the top N matching thoughts in deterministic order (7086.896584ms) +✔ think --remember --brief returns a triage-friendly snippet instead of the full multiline thought (3128.112834ms) +✔ think --json --remember --brief --limit preserves bounded explicit recall receipts for agents (4972.953667ms) +✔ think --remember rejects invalid --limit values (1347.983459ms) +✔ think --browse shows one raw thought with its immediate newer and older neighbors (4975.108333ms) +✔ think --browse without an entry id fails clearly outside interactive TTY use and remains read-only (225.742959ms) +✔ think --json --browse without an entry id stays machine-readable and does not try to open the shell (230.624833ms) +✔ think --json --browse emits JSONL rows for the current raw thought and its neighbors (4974.603458ms) +✔ think --browse opens a reader-first browse TUI with metadata and no permanent recent rail (5557.950541ms) +✔ think --browse can reveal a chronology drawer on demand instead of showing the full log by default (4783.984667ms) +✔ think --browse can jump to another thought through a fuzzy jump surface (4722.180666ms) +✔ think --browse can reveal inspect receipts inside the scripted browse TUI (3068.112833ms) +✔ think --browse can hand the selected thought into reflect from the scripted browse TUI (3326.133292ms) +✔ think --browse surfaces session identity for the current thought without replacing the reader-first view (6956.515792ms) +✔ think --browse uses a short visible entry id in the reader-first metadata while inspect keeps the full exact id (6104.553417ms) +✔ think --browse can reveal a summon-only session drawer that excludes out-of-session thoughts (6953.830334ms) +✔ think --browse reveals a structured session drawer with a visible start label and current-thought marker (6859.134708ms) +✔ think --json --browse emits explicit session context and session-nearby rows without mislabeling out-of-session thoughts (7041.197541ms) +✔ think --browse can move to the previous thought within the current session without leaving reader-first browse (4978.646833ms) +✔ think --browse keeps the current thought in place when there is no next thought in the current session (4820.170042ms) +✔ think --json --browse exposes explicit session traversal semantics without conflating them with chronology neighbors (5028.915625ms) +✔ think --inspect exposes exact raw entry metadata without narration (2515.659084ms) +✔ think --json --inspect emits JSONL for the exact raw entry metadata (1762.690417ms) +✔ think --inspect exposes additive capture provenance separately from the raw text (1610.931542ms) +✔ think --json --inspect includes additive capture provenance in the inspected entry payload (1657.947583ms) +✔ think --inspect exposes canonical content identity and direct derived receipts when they exist (5149.579584ms) +✔ think --json --inspect emits canonical content identity and direct derived receipt rows (4988.753875ms) +✔ think --inspect exposes the first derived bundle as explicit raw, canonical, derived, and context sections (5563.545417ms) +✔ think --json --inspect emits canonical identity plus seed-quality and session-attribution receipts with provenance (5601.621458ms) +✔ think --json --inspect keeps duplicate raw captures distinct while linking them to the same canonical thought (4851.713042ms) +✔ think --reflect starts an explicit seeded reflect session with a deterministic seed-first challenge prompt (4770.078625ms) +✔ removed brainstorm aliases fail clearly and point to reflect (2226.700875ms) +✔ think --reflect can use an explicit sharpen prompt family (2126.031ms) +✔ think --reflect-session stores a separate derived entry with preserved seed-first lineage (5763.356167ms) +✔ think --reflect validates explicit session entry and stays read-only on invalid start (2273.888833ms) +✔ think --reflect fails clearly when the seed entry does not exist (248.706667ms) +✔ think --reflect refuses status-like seeds that are not pressure-testable ideas (6386.750833ms) +✔ think --json --reflect refuses ineligible seeds with structured machine-readable errors (6243.611209ms) +✔ think --json --reflect emits only JSONL with seed-first session and prompt data (3439.972042ms) +✔ think --json --reflect-session emits only JSONL and preserves stored seed-first lineage (2562.731ms) +✔ think --json reflect validation failures stay fully machine-readable (232.179791ms) +✔ think --stats prints total thoughts (3967.019708ms) +✔ think --stats does not bootstrap local state before the first capture (267.024916ms) +✔ think "stats" is captured as a thought rather than triggering the command (2478.384208ms) +✔ think --stats rejects an unexpected thought argument (259.770167ms) +✔ think stats supports --since filter (3558.133042ms) +✔ think --stats rejects an invalid --since value (251.322292ms) +✔ think stats supports --from and --to filters (5326.068333ms) +✔ think --stats rejects invalid absolute date filters (251.043625ms) +✔ think stats supports --bucket=day (5383.441708ms) +✔ think --stats --bucket=day includes a sparkline in text output (5258.039917ms) +✔ think --stats --bucket=day --json includes sparkline in stats.total event (4964.8885ms) +✔ think --stats without --bucket omits sparkline (1543.003958ms) +✔ think --stats rejects an invalid bucket value (239.657583ms) +ℹ tests 128 +ℹ suites 0 +ℹ pass 125 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 3 +ℹ duration_ms 168515.395333 + +``` + +## Drift Results + +```text +Playback-question drift found. +Scanned 1 active cycle, 6 playback questions, 0 test descriptions. +Search basis: exact normalized match in tests/**/*.test.* and tests/**/*.spec.* descriptions. + +docs/design/0010-document-mind-orchestration/document-mind-orchestration.md +- Human: Does the doc explain how to create a mind? + No exact normalized test description match found. +- Human: Does it explain how discovery works? + No exact normalized test description match found. +- Human: Does it explain human/agent separation? + No exact normalized test description match found. +- Agent: Does the doc explain the TUI mind switcher? + No exact normalized test description match found. +- Agent: Does it explain THINK_REPO_DIR interaction? + No exact normalized test description match found. +- Agent: Is the doc linked from README and GUIDE? + No exact normalized test description match found. + +``` + +## Manual Verification + +- [x] Automated capture completed successfully. diff --git a/docs/method/retro/0011-doctor-inconsistent-skip-logic/doctor-inconsistent-skip-logic.md b/docs/method/retro/0011-doctor-inconsistent-skip-logic/doctor-inconsistent-skip-logic.md new file mode 100644 index 0000000..0add570 --- /dev/null +++ b/docs/method/retro/0011-doctor-inconsistent-skip-logic/doctor-inconsistent-skip-logic.md @@ -0,0 +1,18 @@ +--- +title: "Doctor inconsistent skip logic" +cycle: "0011-doctor-inconsistent-skip-logic" +outcome: hill-met +drift_check: yes +--- + +# Doctor inconsistent skip logic Retro + +## Summary + +Upstream check now reports 'skip' when URL is set but no checker +callback is provided. One line changed, one test added, one test +updated. 14 doctor tests pass. + +## Drift + +- None. diff --git a/docs/method/retro/0011-shaderForMind-no-input-validation/shaderForMind-no-input-validation.md b/docs/method/retro/0011-shaderForMind-no-input-validation/shaderForMind-no-input-validation.md new file mode 100644 index 0000000..9ec68ed --- /dev/null +++ b/docs/method/retro/0011-shaderForMind-no-input-validation/shaderForMind-no-input-validation.md @@ -0,0 +1,16 @@ +--- +title: "shaderForMind input validation" +cycle: "0011-shaderForMind-no-input-validation" +outcome: hill-met +drift_check: yes +--- + +# shaderForMind input validation Retro + +## Summary + +Added guard for shaderCount <= 0. Two new tests. One line of code. + +## Drift + +- None. diff --git a/docs/method/retro/0011-unused-browseStartMs-field/unused-browseStartMs-field.md b/docs/method/retro/0011-unused-browseStartMs-field/unused-browseStartMs-field.md new file mode 100644 index 0000000..d461d33 --- /dev/null +++ b/docs/method/retro/0011-unused-browseStartMs-field/unused-browseStartMs-field.md @@ -0,0 +1,22 @@ +--- +title: "Unused browseStartMs field" +cycle: "0011-unused-browseStartMs-field" +design_doc: "docs/design/0011-unused-browseStartMs-field/unused-browseStartMs-field.md" +outcome: hill-met +drift_check: yes +--- + +# Unused browseStartMs field Retro + +## Summary + +Removed dead `browseStartMs` field from `createWindowedBrowseModel`. +One line deleted. 49 port tests pass. + +## Drift + +- None. + +## Backlog Maintenance + +- [x] Done diff --git a/docs/method/retro/0012-audit-plain-object-model/audit-plain-object-model.md b/docs/method/retro/0012-audit-plain-object-model/audit-plain-object-model.md new file mode 100644 index 0000000..b564d10 --- /dev/null +++ b/docs/method/retro/0012-audit-plain-object-model/audit-plain-object-model.md @@ -0,0 +1,32 @@ +--- +title: "Promote Entry and ReflectSession to classes" +cycle: "0012-audit-plain-object-model" +outcome: hill-met +drift_check: yes +--- + +# Promote Entry and ReflectSession to classes Retro + +## Summary + +Converted createEntry and createReflectSession from plain-object +factories to frozen class constructors. Entry accepts optional +reflect fields at construction, eliminating post-creation mutation +in reflect.js. 5 new port tests, 182 total pass. + +## Drift + +- Reflect path was mutating Entry post-creation (lines 115-118 in + reflect.js). Fixed by accepting reflect fields in constructor. + +## New Debt + +- None. + +## Cool Ideas + +- None. + +## Backlog Maintenance + +- [x] Done diff --git a/docs/method/retro/0012-buildStatsSparkline-duplication/buildStatsSparkline-duplication.md b/docs/method/retro/0012-buildStatsSparkline-duplication/buildStatsSparkline-duplication.md new file mode 100644 index 0000000..5eb8160 --- /dev/null +++ b/docs/method/retro/0012-buildStatsSparkline-duplication/buildStatsSparkline-duplication.md @@ -0,0 +1,17 @@ +--- +title: "DRY sparkline duplication" +cycle: "0012-buildStatsSparkline-duplication" +outcome: hill-met +drift_check: yes +--- + +# DRY sparkline duplication Retro + +## Summary + +formatStats now calls buildStatsSparkline instead of duplicating the +buckets→sparkline transformation. One call site changed. + +## Drift + +- None. diff --git a/docs/method/retro/0013-ssjr-src-store-model-js/ssjr-src-store-model-js.md b/docs/method/retro/0013-ssjr-src-store-model-js/ssjr-src-store-model-js.md new file mode 100644 index 0000000..77f887d --- /dev/null +++ b/docs/method/retro/0013-ssjr-src-store-model-js/ssjr-src-store-model-js.md @@ -0,0 +1,27 @@ +--- +title: "SSJR for src/store/model.js" +cycle: "0013-ssjr-src-store-model-js" +outcome: hill-met +drift_check: yes +--- + +# SSJR for src/store/model.js Retro + +## Summary + +Added ENTRY_KINDS and BUCKET_PERIODS frozen constants. storesTextContent +validates against the constant instead of magic strings. formatBucketKey +validates bucket period and throws on invalid input. 3 new port tests. + +Remaining SSJR items for model.js (deferred): +- getCurrentTime reads process.env directly (clock injection) +- createWriterId reads os.hostname directly (IO injection) +- Comparators are standalone functions, not methods + +## Drift + +- None. + +## Backlog Maintenance + +- [x] Done diff --git a/docs/method/retro/0014-audit-provenance-url-schemes/audit-provenance-url-schemes.md b/docs/method/retro/0014-audit-provenance-url-schemes/audit-provenance-url-schemes.md new file mode 100644 index 0000000..8f6ca8e --- /dev/null +++ b/docs/method/retro/0014-audit-provenance-url-schemes/audit-provenance-url-schemes.md @@ -0,0 +1,18 @@ +--- +title: "Restrict provenance URL schemes" +cycle: "0014-audit-provenance-url-schemes" +outcome: hill-met +drift_check: yes +--- + +# Restrict provenance URL schemes Retro + +## Summary + +Added SAFE_URL_SCHEMES allowlist (http:, https:) to normalizeUrl. +Dangerous schemes (data:, file:, ftp:, javascript:) are now rejected. +Two new port tests. + +## Drift + +- None. diff --git a/docs/method/retro/0015-audit-capture-path-sync-git/audit-capture-path-sync-git.md b/docs/method/retro/0015-audit-capture-path-sync-git/audit-capture-path-sync-git.md new file mode 100644 index 0000000..139c15d --- /dev/null +++ b/docs/method/retro/0015-audit-capture-path-sync-git/audit-capture-path-sync-git.md @@ -0,0 +1,19 @@ +--- +title: "Move ambient context out of store" +cycle: "0015-audit-capture-path-sync-git" +outcome: hill-met +drift_check: yes +--- + +# Move ambient context out of store Retro + +## Summary + +Removed process.cwd() and getAmbientProjectContext fallback from +saveRawCapture and finalizeCapturedThought. CLI and MCP callers now +resolve ambient context at the boundary. Port test updated to pass +ambientContext directly. 187 tests pass. + +## Drift + +- None. diff --git a/docs/method/retro/0016-ssjr-src-capture-provenance-js/ssjr-src-capture-provenance-js.md b/docs/method/retro/0016-ssjr-src-capture-provenance-js/ssjr-src-capture-provenance-js.md new file mode 100644 index 0000000..bc98244 --- /dev/null +++ b/docs/method/retro/0016-ssjr-src-capture-provenance-js/ssjr-src-capture-provenance-js.md @@ -0,0 +1,17 @@ +--- +title: "CaptureProvenance class" +cycle: "0016-ssjr-src-capture-provenance-js" +outcome: hill-met +drift_check: yes +--- + +# CaptureProvenance class Retro + +## Summary + +normalizeCaptureProvenance returns a frozen CaptureProvenance class +instead of a plain object. One new test, existing tests updated. + +## Drift + +- None. diff --git a/docs/method/retro/0017-audit-undocumented-ambient-context-and-recall/audit-undocumented-ambient-context-and-recall.md b/docs/method/retro/0017-audit-undocumented-ambient-context-and-recall/audit-undocumented-ambient-context-and-recall.md new file mode 100644 index 0000000..594a87c --- /dev/null +++ b/docs/method/retro/0017-audit-undocumented-ambient-context-and-recall/audit-undocumented-ambient-context-and-recall.md @@ -0,0 +1,18 @@ +--- +title: "Document ambient context and recall" +cycle: "0017-audit-undocumented-ambient-context-and-recall" +outcome: hill-met +drift_check: yes +--- + +# Document ambient context and recall Retro + +## Summary + +Created docs/AMBIENT_CONTEXT.md covering the full pipeline: two-phase +collection, WARP persistence, recall scoring (ambient + explicit), and +provenance normalization. Single contributor-facing reference. + +## Drift + +- None. diff --git a/docs/method/retro/0018-ssjr-src-store-capture-js/ssjr-src-store-capture-js.md b/docs/method/retro/0018-ssjr-src-store-capture-js/ssjr-src-store-capture-js.md new file mode 100644 index 0000000..97ce2e5 --- /dev/null +++ b/docs/method/retro/0018-ssjr-src-store-capture-js/ssjr-src-store-capture-js.md @@ -0,0 +1,23 @@ +--- +title: "SSJR for src/store/capture.js" +cycle: "0018-ssjr-src-store-capture-js" +outcome: hill-met +drift_check: yes +--- + +# SSJR for src/store/capture.js Retro + +## Summary + +Removed pointless captureAmbientContext alias. File is now clean: +Entry class for construction, CaptureProvenance class for normalization, +ambient context passed from boundary. 63 port tests pass. + +## Drift + +- Discovered stale blocked_by dependency on project-context — removed. + +## New Debt + +- getGraphModelStatus is misplaced in capture.js (belongs in runtime + or its own module). Not worth moving this cycle due to barrel export. diff --git a/docs/method/retro/0019-audit-unvalidated-read-models/audit-unvalidated-read-models.md b/docs/method/retro/0019-audit-unvalidated-read-models/audit-unvalidated-read-models.md new file mode 100644 index 0000000..8ed05dc --- /dev/null +++ b/docs/method/retro/0019-audit-unvalidated-read-models/audit-unvalidated-read-models.md @@ -0,0 +1,17 @@ +--- +title: "Validated read models" +cycle: "0019-audit-unvalidated-read-models" +outcome: hill-met +drift_check: yes +--- + +# Validated read models Retro + +## Summary + +getStoredEntry now returns a frozen StoredEntry class. All 187 +acceptance tests pass unchanged — property access is identical. + +## Drift + +- None. diff --git a/docs/method/retro/0020-audit-no-error-taxonomy/audit-no-error-taxonomy.md b/docs/method/retro/0020-audit-no-error-taxonomy/audit-no-error-taxonomy.md new file mode 100644 index 0000000..27af945 --- /dev/null +++ b/docs/method/retro/0020-audit-no-error-taxonomy/audit-no-error-taxonomy.md @@ -0,0 +1,23 @@ +--- +title: "Error taxonomy" +cycle: "0020-audit-no-error-taxonomy" +outcome: hill-met +drift_check: yes +--- + +# Error taxonomy Retro + +## Summary + +Introduced ThinkError hierarchy (ValidationError, NotFoundError, +GraphError, CaptureError). Migrated MCP service (7 throws) and +Entry constructor (2 throws) to typed errors. CLI surface unchanged +for now — can migrate in follow-up cycles. + +## Drift + +- None. + +## New Debt + +- CLI and store paths still use raw Error in some places. diff --git a/docs/method/retro/0021-audit-git-binary-path-trust/audit-git-binary-path-trust.md b/docs/method/retro/0021-audit-git-binary-path-trust/audit-git-binary-path-trust.md new file mode 100644 index 0000000..9540d75 --- /dev/null +++ b/docs/method/retro/0021-audit-git-binary-path-trust/audit-git-binary-path-trust.md @@ -0,0 +1,17 @@ +--- +title: "Git binary path trust" +cycle: "0021-audit-git-binary-path-trust" +outcome: hill-met +drift_check: yes +--- + +# Git binary path trust Retro + +## Summary + +GIT_BINARY resolved once via 'which git', exported from git.js, +used by both git.js (3 sites) and project-context.js (1 site). + +## Drift + +- None. diff --git a/docs/method/retro/0022-audit-query-reshape-pipeline/audit-query-reshape-pipeline.md b/docs/method/retro/0022-audit-query-reshape-pipeline/audit-query-reshape-pipeline.md new file mode 100644 index 0000000..906be60 --- /dev/null +++ b/docs/method/retro/0022-audit-query-reshape-pipeline/audit-query-reshape-pipeline.md @@ -0,0 +1,23 @@ +--- +title: "Query reshape pipeline" +cycle: "0022-audit-query-reshape-pipeline" +outcome: hill-met +drift_check: yes +--- + +# Query reshape pipeline Retro + +## Summary + +Froze inspect and browse window result objects. Anonymous shapes +remain but are now immutable — downstream can't accidentally mutate +query results. + +## Drift + +- None. + +## New Debt + +- Named result classes (InspectResult, BrowseWindow) deferred. + Freezing is sufficient for correctness. diff --git a/docs/method/retro/0023-audit-warp-handle-reuse/audit-warp-handle-reuse.md b/docs/method/retro/0023-audit-warp-handle-reuse/audit-warp-handle-reuse.md new file mode 100644 index 0000000..ae872bb --- /dev/null +++ b/docs/method/retro/0023-audit-warp-handle-reuse/audit-warp-handle-reuse.md @@ -0,0 +1,17 @@ +--- +title: "Warp handle reuse" +cycle: "0023-audit-warp-handle-reuse" +outcome: hill-met +drift_check: yes +--- + +# Warp handle reuse Retro + +## Summary + +Added a Map cache in runtime.js keyed by repoDir. openWarpApp returns +the cached instance on subsequent calls. 7 call sites benefit. + +## Drift + +- None. diff --git a/docs/method/retro/0024-ssjr-src-store-runtime-js/ssjr-src-store-runtime-js.md b/docs/method/retro/0024-ssjr-src-store-runtime-js/ssjr-src-store-runtime-js.md new file mode 100644 index 0000000..3aa6c42 --- /dev/null +++ b/docs/method/retro/0024-ssjr-src-store-runtime-js/ssjr-src-store-runtime-js.md @@ -0,0 +1,17 @@ +--- +title: "SSJR for src/store/runtime.js" +cycle: "0024-ssjr-src-store-runtime-js" +outcome: hill-met +drift_check: yes +--- + +# SSJR for src/store/runtime.js Retro + +## Summary + +Added SESSION_KINDS constant. getReflectSession uses it. The heavy +lifting (StoredEntry class, warp cache) was done in prior cycles. + +## Drift + +- None. diff --git a/docs/method/retro/0025-ssjr-src-verbose-js/ssjr-src-verbose-js.md b/docs/method/retro/0025-ssjr-src-verbose-js/ssjr-src-verbose-js.md new file mode 100644 index 0000000..9012f82 --- /dev/null +++ b/docs/method/retro/0025-ssjr-src-verbose-js/ssjr-src-verbose-js.md @@ -0,0 +1,16 @@ +--- +title: "SSJR for src/verbose.js" +cycle: "0025-ssjr-src-verbose-js" +outcome: hill-met +drift_check: yes +--- + +# SSJR for src/verbose.js Retro + +## Summary + +Promoted to VerboseReporter class. Factory preserved. + +## Drift + +- None. diff --git a/docs/method/retro/0026-audit-mcp-service-shape-soup/audit-mcp-service-shape-soup.md b/docs/method/retro/0026-audit-mcp-service-shape-soup/audit-mcp-service-shape-soup.md new file mode 100644 index 0000000..2193fe1 --- /dev/null +++ b/docs/method/retro/0026-audit-mcp-service-shape-soup/audit-mcp-service-shape-soup.md @@ -0,0 +1,22 @@ +--- +title: "MCP service shape soup" +cycle: "0026-audit-mcp-service-shape-soup" +outcome: hill-met +drift_check: yes +--- + +# MCP service shape soup Retro + +## Summary + +Merged duplicate toBrowseEntry/toRecentEntry into frozen toMcpEntry. +Same entry shape for browse and recent MCP results. + +## Drift + +- None. + +## New Debt + +- Other service returns still unfrozen. Low priority — they're + consumed by the MCP server immediately. diff --git a/docs/method/retro/0027-audit-mcp-contract-holes/audit-mcp-contract-holes.md b/docs/method/retro/0027-audit-mcp-contract-holes/audit-mcp-contract-holes.md new file mode 100644 index 0000000..0fdac4f --- /dev/null +++ b/docs/method/retro/0027-audit-mcp-contract-holes/audit-mcp-contract-holes.md @@ -0,0 +1,17 @@ +--- +title: "MCP contract holes" +cycle: "0027-audit-mcp-contract-holes" +outcome: hill-met +drift_check: yes +--- + +# MCP contract holes Retro + +## Summary + +Replaced 5 z.any() holes with typed Zod schemas. Merged duplicate +entry schemas. MCP output contracts now match the data shapes. + +## Drift + +- None. diff --git a/docs/method/retro/0028-audit-cli-options-bag/audit-cli-options-bag.md b/docs/method/retro/0028-audit-cli-options-bag/audit-cli-options-bag.md new file mode 100644 index 0000000..65ef00d --- /dev/null +++ b/docs/method/retro/0028-audit-cli-options-bag/audit-cli-options-bag.md @@ -0,0 +1,18 @@ +--- +title: "CLI options bag" +cycle: "0028-audit-cli-options-bag" +outcome: hill-met +drift_check: yes +--- + +# CLI options bag Retro + +## Summary + +Froze the parsed options object and positionals array. The bag is +still one large object, but it's now immutable. Named command forms +deferred — the current shape is serviceable. + +## Drift + +- None. diff --git a/docs/method/retro/0029-ssjr-src-cli-options-js/ssjr-src-cli-options-js.md b/docs/method/retro/0029-ssjr-src-cli-options-js/ssjr-src-cli-options-js.md new file mode 100644 index 0000000..b25c8c6 --- /dev/null +++ b/docs/method/retro/0029-ssjr-src-cli-options-js/ssjr-src-cli-options-js.md @@ -0,0 +1,17 @@ +--- +title: "SSJR for src/cli/options.js" +cycle: "0029-ssjr-src-cli-options-js" +outcome: hill-met +drift_check: yes +--- + +# SSJR for src/cli/options.js Retro + +## Summary + +Added COMMANDS constant object. resolveCommand uses it. Combined +with prior freeze, options.js is now immutable with named commands. + +## Drift + +- None. diff --git a/docs/method/retro/0030-ssjr-src-store-queries-js/ssjr-src-store-queries-js.md b/docs/method/retro/0030-ssjr-src-store-queries-js/ssjr-src-store-queries-js.md new file mode 100644 index 0000000..148fbc0 --- /dev/null +++ b/docs/method/retro/0030-ssjr-src-store-queries-js/ssjr-src-store-queries-js.md @@ -0,0 +1,17 @@ +--- +title: "SSJR for src/store/queries.js" +cycle: "0030-ssjr-src-store-queries-js" +outcome: hill-met +drift_check: yes +--- + +# SSJR for src/store/queries.js Retro + +## Summary + +Froze remember and stats return paths. Combined with cycle 0022 +(inspect/browse), all query results are now immutable. + +## Drift + +- None. diff --git a/docs/method/retro/0031-ssjr-src-mcp-service-js/ssjr-src-mcp-service-js.md b/docs/method/retro/0031-ssjr-src-mcp-service-js/ssjr-src-mcp-service-js.md new file mode 100644 index 0000000..edf6bc6 --- /dev/null +++ b/docs/method/retro/0031-ssjr-src-mcp-service-js/ssjr-src-mcp-service-js.md @@ -0,0 +1,19 @@ +--- +title: "SSJR for src/mcp/service.js" +cycle: "0031-ssjr-src-mcp-service-js" +outcome: hill-met +drift_check: yes +--- + +# SSJR for src/mcp/service.js Retro + +## Summary + +Bulk of SSJR work landed in prior cycles: typed errors (0020), +DRY toMcpEntry with freeze (0026), z.any() holes filled (0027). +Remaining service returns are consumed by toToolResult immediately +and serialized to JSON — further freezing is low-impact. + +## Drift + +- None. diff --git a/docs/method/retro/0032-HT-007-remediation-payloads-in-json-errors/HT-007-remediation-payloads-in-json-errors.md b/docs/method/retro/0032-HT-007-remediation-payloads-in-json-errors/HT-007-remediation-payloads-in-json-errors.md new file mode 100644 index 0000000..2ae9e4b --- /dev/null +++ b/docs/method/retro/0032-HT-007-remediation-payloads-in-json-errors/HT-007-remediation-payloads-in-json-errors.md @@ -0,0 +1,18 @@ +--- +title: "Remediation payloads in JSON errors" +cycle: "0032-HT-007-remediation-payloads-in-json-errors" +outcome: hill-met +drift_check: yes +--- + +# Remediation payloads Retro + +## Summary + +Added remediation field to graph.migration_required error in both +CLI (graph-gate.js) and MCP (service.js). Agents can parse the +exact command to run for recovery. + +## Drift + +- None. diff --git a/docs/method/retro/0032-ssjr-src-store-migrations-js/ssjr-src-store-migrations-js.md b/docs/method/retro/0032-ssjr-src-store-migrations-js/ssjr-src-store-migrations-js.md new file mode 100644 index 0000000..176f79b --- /dev/null +++ b/docs/method/retro/0032-ssjr-src-store-migrations-js/ssjr-src-store-migrations-js.md @@ -0,0 +1,16 @@ +--- +title: "SSJR for src/store/migrations.js" +cycle: "0032-ssjr-src-store-migrations-js" +outcome: hill-met +drift_check: yes +--- + +# SSJR for src/store/migrations.js Retro + +## Summary + +Froze both return paths in migrateGraphModel. + +## Drift + +- None. diff --git a/docs/method/retro/0033-audit-cli-dispatch-chain/audit-cli-dispatch-chain.md b/docs/method/retro/0033-audit-cli-dispatch-chain/audit-cli-dispatch-chain.md new file mode 100644 index 0000000..f1b9b8b --- /dev/null +++ b/docs/method/retro/0033-audit-cli-dispatch-chain/audit-cli-dispatch-chain.md @@ -0,0 +1,16 @@ +--- +title: "CLI dispatch chain" +cycle: "0033-audit-cli-dispatch-chain" +outcome: hill-met +drift_check: yes +--- + +# CLI dispatch chain Retro + +## Summary + +Replaced if/else ladder with dispatch map keyed by COMMANDS constants. + +## Drift + +- None. diff --git a/docs/method/retro/0034-ssjr-src-cli-output-js/ssjr-src-cli-output-js.md b/docs/method/retro/0034-ssjr-src-cli-output-js/ssjr-src-cli-output-js.md new file mode 100644 index 0000000..742d653 --- /dev/null +++ b/docs/method/retro/0034-ssjr-src-cli-output-js/ssjr-src-cli-output-js.md @@ -0,0 +1,16 @@ +--- +title: "SSJR for src/cli/output.js" +cycle: "0034-ssjr-src-cli-output-js" +outcome: hill-met +drift_check: yes +--- + +# SSJR for src/cli/output.js Retro + +## Summary + +CliOutput class + STDERR_EVENTS frozen constant. + +## Drift + +- None. diff --git a/docs/method/retro/0035-ssjr-src-mcp-result-js/ssjr-src-mcp-result-js.md b/docs/method/retro/0035-ssjr-src-mcp-result-js/ssjr-src-mcp-result-js.md new file mode 100644 index 0000000..4abd1c4 --- /dev/null +++ b/docs/method/retro/0035-ssjr-src-mcp-result-js/ssjr-src-mcp-result-js.md @@ -0,0 +1,16 @@ +--- +title: "SSJR for src/mcp/result.js" +cycle: "0035-ssjr-src-mcp-result-js" +outcome: hill-met +drift_check: yes +--- + +# SSJR for src/mcp/result.js Retro + +## Summary + +Froze toToolResult return and content array. + +## Drift + +- None. diff --git a/docs/method/retro/0036-ssjr-src-git-js/ssjr-src-git-js.md b/docs/method/retro/0036-ssjr-src-git-js/ssjr-src-git-js.md new file mode 100644 index 0000000..ff125d7 --- /dev/null +++ b/docs/method/retro/0036-ssjr-src-git-js/ssjr-src-git-js.md @@ -0,0 +1,17 @@ +--- +title: "SSJR for src/git.js" +cycle: "0036-ssjr-src-git-js" +outcome: hill-met +drift_check: yes +--- + +# SSJR for src/git.js Retro + +## Summary + +Git command failures use ThinkError with GIT_COMMAND_FAILED code. +Combined with GIT_BINARY from cycle 0021, git.js SSJR is addressed. + +## Drift + +- None. diff --git a/docs/method/retro/0037-audit-prompt-metrics-raw-parse/audit-prompt-metrics-raw-parse.md b/docs/method/retro/0037-audit-prompt-metrics-raw-parse/audit-prompt-metrics-raw-parse.md new file mode 100644 index 0000000..07d04fe --- /dev/null +++ b/docs/method/retro/0037-audit-prompt-metrics-raw-parse/audit-prompt-metrics-raw-parse.md @@ -0,0 +1,17 @@ +--- +title: "Prompt metrics raw parse" +cycle: "0037-audit-prompt-metrics-raw-parse" +outcome: hill-met +drift_check: yes +--- + +# Prompt metrics raw parse Retro + +## Summary + +Added normalizeMetricRecord that validates and freezes each parsed +JSONL record. Required sessionId, typed timing fields. + +## Drift + +- None. diff --git a/docs/method/retro/0038-ssjr-src-cli-js/ssjr-src-cli-js.md b/docs/method/retro/0038-ssjr-src-cli-js/ssjr-src-cli-js.md new file mode 100644 index 0000000..b6fb30c --- /dev/null +++ b/docs/method/retro/0038-ssjr-src-cli-js/ssjr-src-cli-js.md @@ -0,0 +1,17 @@ +--- +title: "SSJR for src/cli.js" +cycle: "0038-ssjr-src-cli-js" +outcome: hill-met +drift_check: yes +--- + +# SSJR for src/cli.js Retro + +## Summary + +File was already clean from prior cycles (dispatch map, COMMANDS, +CliOutput). No further changes needed. + +## Drift + +- None. diff --git a/docs/method/retro/0039-ssjr-src-project-context-js/ssjr-src-project-context-js.md b/docs/method/retro/0039-ssjr-src-project-context-js/ssjr-src-project-context-js.md new file mode 100644 index 0000000..c5ab442 --- /dev/null +++ b/docs/method/retro/0039-ssjr-src-project-context-js/ssjr-src-project-context-js.md @@ -0,0 +1,18 @@ +--- +title: "SSJR for src/project-context.js" +cycle: "0039-ssjr-src-project-context-js" +outcome: hill-met +drift_check: yes +--- + +# SSJR for src/project-context.js Retro + +## Summary + +Froze both getAmbientProjectContext and getCaptureAmbientContext +return objects plus their projectTokens arrays. Uses GIT_BINARY +from prior cycle. + +## Drift + +- None. diff --git a/docs/method/retro/0040-audit-prompt-metrics-io-port/audit-prompt-metrics-io-port.md b/docs/method/retro/0040-audit-prompt-metrics-io-port/audit-prompt-metrics-io-port.md new file mode 100644 index 0000000..dff54a4 --- /dev/null +++ b/docs/method/retro/0040-audit-prompt-metrics-io-port/audit-prompt-metrics-io-port.md @@ -0,0 +1,17 @@ +--- +title: "Prompt metrics IOPort" +cycle: "0040-audit-prompt-metrics-io-port" +outcome: hill-met +drift_check: yes +--- + +# Prompt metrics IOPort Retro + +## Summary + +readPromptMetricsRecords accepts an optional reader function, +defaulting to readFile. Tests can inject in-memory readers. + +## Drift + +- None. diff --git a/docs/method/retro/0041-ssjr-src-mcp-server-js/ssjr-src-mcp-server-js.md b/docs/method/retro/0041-ssjr-src-mcp-server-js/ssjr-src-mcp-server-js.md new file mode 100644 index 0000000..2b4517b --- /dev/null +++ b/docs/method/retro/0041-ssjr-src-mcp-server-js/ssjr-src-mcp-server-js.md @@ -0,0 +1,17 @@ +--- +title: "SSJR for src/mcp/server.js" +cycle: "0041-ssjr-src-mcp-server-js" +outcome: hill-met +drift_check: yes +--- + +# SSJR for src/mcp/server.js Retro + +## Summary + +Heavy lifting done in prior cycles: z.any() holes (0027), service +shape soup (0026), result freeze (0035). No further changes needed. + +## Drift + +- None. diff --git a/docs/method/retro/0042-ssjr-src-cli-graph-gate-js/ssjr-src-cli-graph-gate-js.md b/docs/method/retro/0042-ssjr-src-cli-graph-gate-js/ssjr-src-cli-graph-gate-js.md new file mode 100644 index 0000000..392c7c6 --- /dev/null +++ b/docs/method/retro/0042-ssjr-src-cli-graph-gate-js/ssjr-src-cli-graph-gate-js.md @@ -0,0 +1,16 @@ +--- +title: "SSJR for src/cli/graph-gate.js" +cycle: "0042-ssjr-src-cli-graph-gate-js" +outcome: hill-met +drift_check: yes +--- + +# SSJR for src/cli/graph-gate.js Retro + +## Summary + +File already clean from remediation payloads (0032). 61 lines. + +## Drift + +- None. diff --git a/docs/method/retro/0043-ssjr-bin-think-mcp-js/ssjr-bin-think-mcp-js.md b/docs/method/retro/0043-ssjr-bin-think-mcp-js/ssjr-bin-think-mcp-js.md new file mode 100644 index 0000000..9c0bd9a --- /dev/null +++ b/docs/method/retro/0043-ssjr-bin-think-mcp-js/ssjr-bin-think-mcp-js.md @@ -0,0 +1,10 @@ +--- +title: "SSJR for bin/think-mcp.js" +cycle: "0043-ssjr-bin-think-mcp-js" +outcome: hill-met +drift_check: yes +--- + +# Retro + +9 lines. Clean entry point. No changes needed. diff --git a/docs/method/retro/0044-ssjr-src-cli-environment-js/ssjr-src-cli-environment-js.md b/docs/method/retro/0044-ssjr-src-cli-environment-js/ssjr-src-cli-environment-js.md new file mode 100644 index 0000000..a079e6c --- /dev/null +++ b/docs/method/retro/0044-ssjr-src-cli-environment-js/ssjr-src-cli-environment-js.md @@ -0,0 +1,11 @@ +--- +title: "SSJR for src/cli/environment.js" +cycle: "0044-ssjr-src-cli-environment-js" +outcome: hill-met +drift_check: yes +--- + +# Retro + +Extracted isInteractiveShellAvailable shared helper. 5 functions +now delegate to it instead of duplicating the same check. diff --git a/docs/method/retro/0045-ssjr-src-cli-help-js/ssjr-src-cli-help-js.md b/docs/method/retro/0045-ssjr-src-cli-help-js/ssjr-src-cli-help-js.md new file mode 100644 index 0000000..ee1ced2 --- /dev/null +++ b/docs/method/retro/0045-ssjr-src-cli-help-js/ssjr-src-cli-help-js.md @@ -0,0 +1,10 @@ +--- +title: "SSJR for src/cli/help.js" +cycle: "0045-ssjr-src-cli-help-js" +outcome: hill-met +drift_check: yes +--- + +# Retro + +Froze HELP_TEXT constant and renderHelp return object. diff --git a/docs/method/retro/0046-ssjr-src-cli-commands-capture-js/ssjr-src-cli-commands-capture-js.md b/docs/method/retro/0046-ssjr-src-cli-commands-capture-js/ssjr-src-cli-commands-capture-js.md new file mode 100644 index 0000000..80572ce --- /dev/null +++ b/docs/method/retro/0046-ssjr-src-cli-commands-capture-js/ssjr-src-cli-commands-capture-js.md @@ -0,0 +1,10 @@ +--- +title: "SSJR for src/cli/commands/capture.js" +cycle: "0046-ssjr-src-cli-commands-capture-js" +outcome: hill-met +drift_check: yes +--- + +# Retro + +File clean from prior cycles. 169 lines, well-structured. diff --git a/docs/method/retro/0047-ssjr-src-cli-commands-read-js/ssjr-src-cli-commands-read-js.md b/docs/method/retro/0047-ssjr-src-cli-commands-read-js/ssjr-src-cli-commands-read-js.md new file mode 100644 index 0000000..b1db6e2 --- /dev/null +++ b/docs/method/retro/0047-ssjr-src-cli-commands-read-js/ssjr-src-cli-commands-read-js.md @@ -0,0 +1,12 @@ +--- +title: "SSJR for src/cli/commands/read.js" +cycle: "0047-ssjr-src-cli-commands-read-js" +outcome: hill-met +drift_check: yes +--- + +# Retro + +Large file (703 lines) but clean from extensive prior work across +8+ cycles. Each function is self-contained. process.env reads are +at the appropriate CLI boundary. diff --git a/docs/method/retro/0048-ssjr-bin-think-js/ssjr-bin-think-js.md b/docs/method/retro/0048-ssjr-bin-think-js/ssjr-bin-think-js.md new file mode 100644 index 0000000..eb1b959 --- /dev/null +++ b/docs/method/retro/0048-ssjr-bin-think-js/ssjr-bin-think-js.md @@ -0,0 +1,37 @@ +--- +title: "Raise SSJR grades for `bin/think.js`" +cycle: "0048-ssjr-bin-think-js" +design_doc: "docs/design/0048-ssjr-bin-think-js/ssjr-bin-think-js.md" +outcome: hill-met +drift_check: yes +--- + +# Raise SSJR grades for `bin/think.js` Retro + +## Summary + +File is 19 lines and already clean from prior work. Structurally +correct CLI entrypoint: delegates to `main()`, passes process +streams, handles top-level rejection. No code changes needed. + +## Playback Witness + +Add artifacts under `docs/method/retro/0048-ssjr-bin-think-js/witness` and link them here. + +## Drift + +- None recorded. + +## New Debt + +- None recorded. + +## Cool Ideas + +- None recorded. + +## Backlog Maintenance + +- [ ] Inbox processed +- [ ] Priorities reviewed +- [ ] Dead work buried or merged diff --git a/docs/method/retro/0048-ssjr-bin-think-js/witness/verification.md b/docs/method/retro/0048-ssjr-bin-think-js/witness/verification.md new file mode 100644 index 0000000..42314e7 --- /dev/null +++ b/docs/method/retro/0048-ssjr-bin-think-js/witness/verification.md @@ -0,0 +1,252 @@ +--- +title: "Verification Witness for Cycle 48" +--- + +# Verification Witness for Cycle 48 + +This witness proves that `Raise SSJR grades for `bin/think.js`` now carries the required +behavior and adheres to the repo invariants. + +## Test Results + +```text + +> think@0.7.0 test +> npm run test:ports && npm run test:m1 + + +> think@0.7.0 test:ports +> node --test test/ports/*.test.js + +✔ BG_TOKEN is exported from style.js alongside the palette (0.716292ms) +✔ windowed browse initializes with no drawer open (52.950875ms) +✔ saveRawCapture writes cwd receipts first and defers git enrichment to followthrough (1160.884333ms) +✔ capture provenance exports the canonical ingress set (2.241792ms) +✔ capture provenance trims source strings while preserving valid ingress and URL (0.167375ms) +✔ capture provenance trims ingress strings before validation (0.072916ms) +✔ capture provenance rejects dangerous URL schemes (0.078375ms) +✔ capture provenance accepts safe URL schemes (0.107541ms) +✔ normalizeCaptureProvenance returns a frozen CaptureProvenance instance (0.057ms) +✔ capture provenance reads and normalizes environment input (0.093583ms) +✔ METHOD docs use one consistent cycle-only release and README closeout policy (1.925083ms) +✔ MIND_ORCHESTRATION.md exists and is linked from GUIDE.md (0.69725ms) +✔ cycle 0006 retrospective restarts ordered numbering for the human playback section (0.422166ms) +✔ runDiagnostics reports ok for a healthy repo with entries (69.205625ms) +✔ runDiagnostics reports fail when think directory does not exist (0.190292ms) +✔ runDiagnostics reports fail when local repo has no git init (1.555208ms) +✔ runDiagnostics reports ok for upstream when reachable (18.877458ms) +✔ runDiagnostics reports warn for upstream when unreachable (23.602917ms) +✔ runDiagnostics reports skip for upstream when URL is set but no checker provided (18.390791ms) +✔ runDiagnostics reports skip for upstream when not configured (17.902166ms) +✔ runDiagnostics reports skip for upstream when configured without checker (24.534ms) +✔ runDiagnostics includes all expected check names (18.186416ms) +✔ runDiagnostics reports graph model version when available (17.51325ms) +✔ runDiagnostics warns when graph model needs migration (17.3685ms) +✔ runDiagnostics reports entry count when available (16.202208ms) +✔ runDiagnostics warns when entry count is zero (16.507833ms) +✔ runDiagnostics skips graph and entry checks when no repo exists (0.398083ms) +✔ shared JSON helper canonicalizes object keys deterministically on parse and stringify (1.692542ms) +✔ discoverMinds finds all valid repos under the think directory (122.875167ms) +✔ discoverMinds ignores directories without git repos (23.871292ms) +✔ discoverMinds labels ~/.think/repo as "default" (16.564ms) +✔ discoverMinds sorts with default first, then alphabetical (51.713ms) +✔ discoverMinds returns empty array when think directory does not exist (0.1425ms) +✔ discoverMinds includes repoDir for each mind (16.625667ms) +✔ shaderForMind returns a deterministic index for a given name (0.179458ms) +✔ shaderForMind returns different indices for different names (0.084584ms) +✔ shaderForMind stays within the shader count range (0.0905ms) +✔ shaderForMind throws when shaderCount is zero (0.307833ms) +✔ shaderForMind throws when shaderCount is negative (0.07375ms) +✔ shaderForMind handles single-character names (0.065125ms) +✔ createEntry returns an Entry instance (3.444625ms) +✔ Entry is frozen (0.156542ms) +✔ createEntry validates required fields (1.996958ms) +✔ createReflectSession returns a ReflectSession instance (0.15675ms) +✔ ReflectSession is frozen (0.083333ms) +✔ ENTRY_KINDS is a frozen array of valid kind strings (0.059417ms) +✔ BUCKET_PERIODS is a frozen array of valid bucket strings (0.052584ms) +✔ storesTextContent validates against ENTRY_KINDS (0.064667ms) +✔ selectLogo picks large mind logo when terminal is wide and tall enough (0.974667ms) +✔ selectLogo picks medium mind logo when terminal fits medium but not large (0.099791ms) +✔ selectLogo picks text logo when terminal is too small for mind (0.061084ms) +✔ selectLogo always returns something even for tiny terminals (0.053792ms) +✔ renderSplash contains the logo (0.136667ms) +✔ renderSplash contains the Enter prompt (0.061625ms) +✔ renderSplash output fits within the given dimensions (0.066625ms) +✔ splash.js does not export renderSplashView (dead code from RE-015 workaround) (0.045041ms) +✔ renderSplash centers the prompt horizontally (0.143875ms) +✔ windowed browse model initializes in windowed mode (0.186708ms) +✔ formatStats includes a sparkline when buckets are present (1.695209ms) +✔ formatStats omits sparkline when no buckets are present (0.0865ms) +✔ formatStats handles a single bucket without crashing (0.092666ms) +✔ formatStats handles empty bucket array without sparkline (0.067459ms) +✔ formatStats sparkline is oldest-to-newest (left-to-right) (0.084125ms) +ℹ tests 63 +ℹ suites 0 +ℹ pass 63 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 0 +ℹ duration_ms 1513.442958 + +> think@0.7.0 test:m1 +> node --test test/acceptance/*.test.js + +✔ think --doctor reports health of a repo with captures (3302.753083ms) +✔ think --doctor succeeds before the first capture (294.261416ms) +✔ think --json --doctor emits a structured health report (2974.290333ms) +✔ think --doctor rejects an unexpected thought argument (287.941542ms) +✔ new capture writes graph-native relationship edges while preserving compatibility properties (2285.031708ms) +✔ think --migrate-graph upgrades a version-1 property-linked repo additively (3505.488667ms) +✔ think --migrate-graph is idempotent and safe to rerun (3395.946667ms) +✔ capture on a version-1 repo still succeeds and only migrates after the raw local save (5168.66025ms) +✔ graph-native commands fail clearly on an outdated repo outside interactive use (4573.414666ms) +✔ interactive inspect on an outdated repo shows visible upgrade progress before continuing (3309.45775ms) +✔ interactive browse on an outdated repo shows visible upgrade progress before continuing (2911.837583ms) +✔ think --json emits explicit graph migration required errors for outdated graph-native commands (2184.471042ms) +✔ think --migrate-graph upgrades a version-2 repo to graph model version 3 with browse and reflect read edges (6893.349583ms) +✔ think --json --inspect exposes direct reflect receipts that exist only through graph-native v3 edges (2330.214708ms) +✔ think --help prints top-level usage without bootstrapping local state (488.498666ms) +✔ think -h is accepted as a short alias for top-level help (316.711334ms) +✔ think --recent --help prints recent help instead of running the command (310.5935ms) +✔ think --recent -h prints recent help instead of running the command (292.162791ms) +✔ think recent --help fails and points callers to the explicit flag form (287.610166ms) +✔ think --inspect --help bypasses required entry validation (315.738041ms) +✔ think --json --help emits structured JSONL help output (321.241333ms) +✔ think recent --json --help fails machine-readably instead of acting as shorthand help (296.447417ms) +✔ think -- -h captures the literal text after option parsing is terminated (2646.939209ms) +✔ think --ingest reads stdin explicitly and captures it into the normal raw-capture core (2945.128625ms) +✔ think with stdin but without --ingest does not accidentally capture piped input (321.025084ms) +✔ think --ingest rejects mixed positional capture text and stdin capture text (322.096458ms) +✔ think --json --ingest preserves machine-readable capture semantics for agents (2656.372375ms) +✔ think --ingest rejects empty stdin payloads (357.348875ms) +✔ think --json capture emits JSONL on stdout and keeps stderr quiet when there are no warnings (1916.247583ms) +✔ think --json --recent emits entry events instead of plain text (5643.687042ms) +✔ think --json --stats emits totals and bucket rows as JSONL (4894.378459ms) +✔ think --json validation failures emit JSONL on stderr instead of stdout (279.270792ms) +✔ think --json reports backup pending as a structured warning on stderr (1366.283708ms) +✔ think --json emits deterministically sorted keys in JSONL output (1890.792542ms) +✔ think MCP server lists the core Think tools (504.091875ms) +✔ think MCP capture, recent, browse, and inspect route through the existing Think runtime (3675.084709ms) +✔ think MCP capture preserves additive provenance separately from the raw text (2362.590667ms) +✔ think MCP capture trims additive provenance strings before persistence (2193.63525ms) +✔ think MCP remember, stats, and prompt_metrics expose structured read results (5506.600792ms) +✔ think MCP doctor tool returns structured health checks (2383.664ms) +✔ CLI raw capture bootstraps the local repo and preserves exact text (2885.143958ms) +✔ think "recent" is captured as a thought rather than triggering the list (2685.399584ms) +✔ think --recent does not bootstrap local state before the first capture (287.814791ms) +✔ think --recent rejects an unexpected thought argument (275.418875ms) +✔ capture does not require retrieval-before-write or conceptual confirmation (3936.844292ms) +✔ THINK_REPO_DIR overrides the default local repo path (2248.403583ms) +✔ reachable upstream reports local save first and backup second (1492.015292ms) +✔ unreachable upstream keeps capture successful and reports backup pending (1318.300291ms) +✔ recent stays plain and chronological (6657.990958ms) +✔ capture is append-only across later capture activity (3734.49025ms) +✔ duplicate thoughts produce distinct captures rather than deduping (3747.162667ms) +✔ empty input is rejected (257.476458ms) +✔ whitespace-only input is rejected (252.550625ms) +✔ capture preserves formatting neutrality for spacing, casing, and punctuation (1850.169459ms) +✔ default user language avoids Git terminology (1190.286417ms) +✔ verbose capture emits JSONL trace updates on stderr (1137.863292ms) +✔ raw entries remain immutable after later derived entries exist (0.100209ms) # TODO +✔ stored raw entry bytes remain unchanged in the local store after later writes (0.02475ms) # TODO +✔ entry kind separation remains explicit once the first derived-entry write path exists (0.019792ms) # TODO +✔ think --prompt-metrics prints factual prompt telemetry totals and medians (478.241875ms) +✔ think --prompt-metrics does not bootstrap local state before the first capture (329.773333ms) +✔ think --prompt-metrics supports --since filtering over prompt sessions (315.942875ms) +✔ think --prompt-metrics supports --bucket=day (305.384042ms) +✔ think --json --prompt-metrics emits explicit summary, timing, and bucket rows (305.156041ms) +✔ think --prompt-metrics rejects an unexpected thought argument (307.509708ms) +✔ think --prompt-metrics rejects invalid filter values (591.56075ms) +✔ think --recent --count limits output to the newest N raw captures (8604.82ms) +✔ think --recent --query filters raw captures by case-insensitive text match (6978.760334ms) +✔ removed recent alias flags fail clearly and point to the scoped forms (1739.056166ms) +✔ think --json --recent applies count and query filters while remaining JSONL-only (6381.610167ms) +✔ think --remember uses the current project context to recall relevant prior thoughts (4162.422459ms) +✔ think --remember with an explicit phrase recalls matching thoughts without turning into generic recent listing (6307.0655ms) +✔ think --json --remember emits explicit ambient scope and match receipts for agents (3789.237166ms) +✔ think --remember falls back honestly to textual project-token matching for entries without ambient project receipts (3642.041666ms) +✔ think --remember --limit returns only the top N matching thoughts in deterministic order (7647.153167ms) +✔ think --remember --brief returns a triage-friendly snippet instead of the full multiline thought (3362.373083ms) +✔ think --json --remember --brief --limit preserves bounded explicit recall receipts for agents (5484.807542ms) +✔ think --remember rejects invalid --limit values (1444.698792ms) +✔ think --browse shows one raw thought with its immediate newer and older neighbors (5613.494875ms) +✔ think --browse without an entry id fails clearly outside interactive TTY use and remains read-only (233.688792ms) +✔ think --json --browse without an entry id stays machine-readable and does not try to open the shell (230.407ms) +✔ think --json --browse emits JSONL rows for the current raw thought and its neighbors (5382.533542ms) +✔ think --browse opens a reader-first browse TUI with metadata and no permanent recent rail (5996.294166ms) +✔ think --browse can reveal a chronology drawer on demand instead of showing the full log by default (5136.837042ms) +✔ think --browse can jump to another thought through a fuzzy jump surface (5237.053208ms) +✔ think --browse can reveal inspect receipts inside the scripted browse TUI (3269.050292ms) +✔ think --browse can hand the selected thought into reflect from the scripted browse TUI (3310.888875ms) +✔ think --browse surfaces session identity for the current thought without replacing the reader-first view (7461.667166ms) +✔ think --browse uses a short visible entry id in the reader-first metadata while inspect keeps the full exact id (6492.260041ms) +✔ think --browse can reveal a summon-only session drawer that excludes out-of-session thoughts (7465.313667ms) +✔ think --browse reveals a structured session drawer with a visible start label and current-thought marker (7320.143208ms) +✔ think --json --browse emits explicit session context and session-nearby rows without mislabeling out-of-session thoughts (7375.380209ms) +✔ think --browse can move to the previous thought within the current session without leaving reader-first browse (5060.547666ms) +✔ think --browse keeps the current thought in place when there is no next thought in the current session (5008.838167ms) +✔ think --json --browse exposes explicit session traversal semantics without conflating them with chronology neighbors (5344.501125ms) +✔ think --inspect exposes exact raw entry metadata without narration (1831.617583ms) +✔ think --json --inspect emits JSONL for the exact raw entry metadata (1779.0815ms) +✔ think --inspect exposes additive capture provenance separately from the raw text (1730.523125ms) +✔ think --json --inspect includes additive capture provenance in the inspected entry payload (1729.794ms) +✔ think --inspect exposes canonical content identity and direct derived receipts when they exist (3449.059333ms) +✔ think --json --inspect emits canonical content identity and direct derived receipt rows (3610.483125ms) +✔ think --inspect exposes the first derived bundle as explicit raw, canonical, derived, and context sections (5360.01575ms) +✔ think --json --inspect emits canonical identity plus seed-quality and session-attribution receipts with provenance (5275.303334ms) +✔ think --json --inspect keeps duplicate raw captures distinct while linking them to the same canonical thought (4355.653958ms) +✔ think --reflect starts an explicit seeded reflect session with a deterministic seed-first challenge prompt (5928.292333ms) +✔ removed brainstorm aliases fail clearly and point to reflect (2675.815542ms) +✔ think --reflect can use an explicit sharpen prompt family (2563.206333ms) +✔ think --reflect-session stores a separate derived entry with preserved seed-first lineage (6992.63ms) +✔ think --reflect validates explicit session entry and stays read-only on invalid start (2428.903083ms) +✔ think --reflect fails clearly when the seed entry does not exist (261.850792ms) +✔ think --reflect refuses status-like seeds that are not pressure-testable ideas (6888.410041ms) +✔ think --json --reflect refuses ineligible seeds with structured machine-readable errors (6622.374042ms) +✔ think --json --reflect emits only JSONL with seed-first session and prompt data (3697.997334ms) +✔ think --json --reflect-session emits only JSONL and preserves stored seed-first lineage (2731.085375ms) +✔ think --json reflect validation failures stay fully machine-readable (246.89675ms) +✔ think --stats prints total thoughts (5088.671792ms) +✔ think --stats does not bootstrap local state before the first capture (295.807334ms) +✔ think "stats" is captured as a thought rather than triggering the command (3058.70625ms) +✔ think --stats rejects an unexpected thought argument (286.305583ms) +✔ think stats supports --since filter (4207.706333ms) +✔ think --stats rejects an invalid --since value (272.35925ms) +✔ think stats supports --from and --to filters (6369.175375ms) +✔ think --stats rejects invalid absolute date filters (256.512625ms) +✔ think stats supports --bucket=day (6126.525959ms) +✔ think --stats --bucket=day includes a sparkline in text output (5930.839042ms) +✔ think --stats --bucket=day --json includes sparkline in stats.total event (5359.88475ms) +✔ think --stats without --bucket omits sparkline (1636.741292ms) +✔ think --stats rejects an invalid bucket value (240.70275ms) +ℹ tests 128 +ℹ suites 0 +ℹ pass 125 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 3 +ℹ duration_ms 176726.638 + +``` + +## Drift Results + +```text +Playback-question drift found. +Scanned 1 active cycle, 2 playback questions, 0 test descriptions. +Search basis: exact normalized match in tests/**/*.test.* and tests/**/*.spec.* descriptions. + +docs/design/0048-ssjr-bin-think-js/ssjr-bin-think-js.md +- Human: TBD + No exact normalized test description match found. +- Agent: TBD + No exact normalized test description match found. + +``` + +## Manual Verification + +- [x] Automated capture completed successfully. diff --git a/docs/method/retro/0049-ssjr-src-browse-benchmark-js/ssjr-src-browse-benchmark-js.md b/docs/method/retro/0049-ssjr-src-browse-benchmark-js/ssjr-src-browse-benchmark-js.md new file mode 100644 index 0000000..4df2d94 --- /dev/null +++ b/docs/method/retro/0049-ssjr-src-browse-benchmark-js/ssjr-src-browse-benchmark-js.md @@ -0,0 +1,39 @@ +--- +title: "Raise SSJR grades for `src/browse-benchmark.js`" +cycle: "0049-ssjr-src-browse-benchmark-js" +design_doc: "docs/design/0049-ssjr-src-browse-benchmark-js/ssjr-src-browse-benchmark-js.md" +outcome: hill-met +drift_check: yes +--- + +# Raise SSJR grades for `src/browse-benchmark.js` Retro + +## Summary + +Replaced raw Error with ValidationError for input validation. +Replaced magic strings with constants (GRAPH_META_ID, SESSION_PREFIX, +ENTRY_PREFIX, GRAPH_MODEL_VERSION, SCHEMA_VERSION, TEXT_MIME). Froze +return objects. Pre-existing test failure in browse-bootstrap.test.js +(sessionContext undefined) confirmed not caused by these changes. + +## Playback Witness + +Add artifacts under `docs/method/retro/0049-ssjr-src-browse-benchmark-js/witness` and link them here. + +## Drift + +- None recorded. + +## New Debt + +- None recorded. + +## Cool Ideas + +- None recorded. + +## Backlog Maintenance + +- [ ] Inbox processed +- [ ] Priorities reviewed +- [ ] Dead work buried or merged diff --git a/docs/method/retro/0049-ssjr-src-browse-benchmark-js/witness/verification.md b/docs/method/retro/0049-ssjr-src-browse-benchmark-js/witness/verification.md new file mode 100644 index 0000000..50e3152 --- /dev/null +++ b/docs/method/retro/0049-ssjr-src-browse-benchmark-js/witness/verification.md @@ -0,0 +1,252 @@ +--- +title: "Verification Witness for Cycle 49" +--- + +# Verification Witness for Cycle 49 + +This witness proves that `Raise SSJR grades for `src/browse-benchmark.js`` now carries the required +behavior and adheres to the repo invariants. + +## Test Results + +```text + +> think@0.7.0 test +> npm run test:ports && npm run test:m1 + + +> think@0.7.0 test:ports +> node --test test/ports/*.test.js + +✔ BG_TOKEN is exported from style.js alongside the palette (0.850167ms) +✔ windowed browse initializes with no drawer open (20.595583ms) +✔ saveRawCapture writes cwd receipts first and defers git enrichment to followthrough (1150.370292ms) +✔ capture provenance exports the canonical ingress set (1.584541ms) +✔ capture provenance trims source strings while preserving valid ingress and URL (0.150667ms) +✔ capture provenance trims ingress strings before validation (0.068041ms) +✔ capture provenance rejects dangerous URL schemes (0.076292ms) +✔ capture provenance accepts safe URL schemes (0.100917ms) +✔ normalizeCaptureProvenance returns a frozen CaptureProvenance instance (0.056583ms) +✔ capture provenance reads and normalizes environment input (0.072875ms) +✔ METHOD docs use one consistent cycle-only release and README closeout policy (2.96825ms) +✔ MIND_ORCHESTRATION.md exists and is linked from GUIDE.md (3.484167ms) +✔ cycle 0006 retrospective restarts ordered numbering for the human playback section (0.549042ms) +✔ runDiagnostics reports ok for a healthy repo with entries (32.616417ms) +✔ runDiagnostics reports fail when think directory does not exist (0.60425ms) +✔ runDiagnostics reports fail when local repo has no git init (4.249125ms) +✔ runDiagnostics reports ok for upstream when reachable (23.356375ms) +✔ runDiagnostics reports warn for upstream when unreachable (23.82425ms) +✔ runDiagnostics reports skip for upstream when URL is set but no checker provided (17.527083ms) +✔ runDiagnostics reports skip for upstream when not configured (22.188167ms) +✔ runDiagnostics reports skip for upstream when configured without checker (21.6425ms) +✔ runDiagnostics includes all expected check names (23.635458ms) +✔ runDiagnostics reports graph model version when available (18.26975ms) +✔ runDiagnostics warns when graph model needs migration (17.888083ms) +✔ runDiagnostics reports entry count when available (15.788875ms) +✔ runDiagnostics warns when entry count is zero (16.184875ms) +✔ runDiagnostics skips graph and entry checks when no repo exists (0.171959ms) +✔ shared JSON helper canonicalizes object keys deterministically on parse and stringify (4.402166ms) +✔ discoverMinds finds all valid repos under the think directory (89.027792ms) +✔ discoverMinds ignores directories without git repos (18.39375ms) +✔ discoverMinds labels ~/.think/repo as "default" (19.057625ms) +✔ discoverMinds sorts with default first, then alphabetical (63.764167ms) +✔ discoverMinds returns empty array when think directory does not exist (0.150958ms) +✔ discoverMinds includes repoDir for each mind (17.311375ms) +✔ shaderForMind returns a deterministic index for a given name (0.164917ms) +✔ shaderForMind returns different indices for different names (0.088417ms) +✔ shaderForMind stays within the shader count range (0.070041ms) +✔ shaderForMind throws when shaderCount is zero (0.289875ms) +✔ shaderForMind throws when shaderCount is negative (0.068916ms) +✔ shaderForMind handles single-character names (0.058833ms) +✔ createEntry returns an Entry instance (7.050625ms) +✔ Entry is frozen (0.113542ms) +✔ createEntry validates required fields (0.766708ms) +✔ createReflectSession returns a ReflectSession instance (0.125167ms) +✔ ReflectSession is frozen (0.077708ms) +✔ ENTRY_KINDS is a frozen array of valid kind strings (0.060417ms) +✔ BUCKET_PERIODS is a frozen array of valid bucket strings (0.053583ms) +✔ storesTextContent validates against ENTRY_KINDS (0.060125ms) +✔ selectLogo picks large mind logo when terminal is wide and tall enough (0.945375ms) +✔ selectLogo picks medium mind logo when terminal fits medium but not large (0.098708ms) +✔ selectLogo picks text logo when terminal is too small for mind (0.057375ms) +✔ selectLogo always returns something even for tiny terminals (0.052667ms) +✔ renderSplash contains the logo (0.140958ms) +✔ renderSplash contains the Enter prompt (0.058792ms) +✔ renderSplash output fits within the given dimensions (0.0665ms) +✔ splash.js does not export renderSplashView (dead code from RE-015 workaround) (0.045292ms) +✔ renderSplash centers the prompt horizontally (0.152334ms) +✔ windowed browse model initializes in windowed mode (0.193916ms) +✔ formatStats includes a sparkline when buckets are present (1.633833ms) +✔ formatStats omits sparkline when no buckets are present (0.082291ms) +✔ formatStats handles a single bucket without crashing (0.087625ms) +✔ formatStats handles empty bucket array without sparkline (0.062ms) +✔ formatStats sparkline is oldest-to-newest (left-to-right) (0.078041ms) +ℹ tests 63 +ℹ suites 0 +ℹ pass 63 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 0 +ℹ duration_ms 1450.184833 + +> think@0.7.0 test:m1 +> node --test test/acceptance/*.test.js + +✔ think --doctor reports health of a repo with captures (3619.672917ms) +✔ think --doctor succeeds before the first capture (326.35275ms) +✔ think --json --doctor emits a structured health report (3232.322958ms) +✔ think --doctor rejects an unexpected thought argument (297.535291ms) +✔ new capture writes graph-native relationship edges while preserving compatibility properties (2451.399084ms) +✔ think --migrate-graph upgrades a version-1 property-linked repo additively (3788.34925ms) +✔ think --migrate-graph is idempotent and safe to rerun (3368.195291ms) +✔ capture on a version-1 repo still succeeds and only migrates after the raw local save (6110.14ms) +✔ graph-native commands fail clearly on an outdated repo outside interactive use (4655.426875ms) +✔ interactive inspect on an outdated repo shows visible upgrade progress before continuing (3138.821084ms) +✔ interactive browse on an outdated repo shows visible upgrade progress before continuing (3545.748041ms) +✔ think --json emits explicit graph migration required errors for outdated graph-native commands (2442.188708ms) +✔ think --migrate-graph upgrades a version-2 repo to graph model version 3 with browse and reflect read edges (7554.924625ms) +✔ think --json --inspect exposes direct reflect receipts that exist only through graph-native v3 edges (2858.593125ms) +✔ think --help prints top-level usage without bootstrapping local state (554.510875ms) +✔ think -h is accepted as a short alias for top-level help (333.425834ms) +✔ think --recent --help prints recent help instead of running the command (335.001792ms) +✔ think --recent -h prints recent help instead of running the command (299.130625ms) +✔ think recent --help fails and points callers to the explicit flag form (292.264ms) +✔ think --inspect --help bypasses required entry validation (313.5395ms) +✔ think --json --help emits structured JSONL help output (385.453208ms) +✔ think recent --json --help fails machine-readably instead of acting as shorthand help (319.811ms) +✔ think -- -h captures the literal text after option parsing is terminated (2907.61225ms) +✔ think --ingest reads stdin explicitly and captures it into the normal raw-capture core (3290.683667ms) +✔ think with stdin but without --ingest does not accidentally capture piped input (313.318166ms) +✔ think --ingest rejects mixed positional capture text and stdin capture text (349.252709ms) +✔ think --json --ingest preserves machine-readable capture semantics for agents (2855.954625ms) +✔ think --ingest rejects empty stdin payloads (343.937875ms) +✔ think --json capture emits JSONL on stdout and keeps stderr quiet when there are no warnings (2132.482458ms) +✔ think --json --recent emits entry events instead of plain text (5904.099708ms) +✔ think --json --stats emits totals and bucket rows as JSONL (5137.619083ms) +✔ think --json validation failures emit JSONL on stderr instead of stdout (360.051709ms) +✔ think --json reports backup pending as a structured warning on stderr (1774.222334ms) +✔ think --json emits deterministically sorted keys in JSONL output (2113.061541ms) +✔ think MCP server lists the core Think tools (543.850709ms) +✔ think MCP capture, recent, browse, and inspect route through the existing Think runtime (3776.441666ms) +✔ think MCP capture preserves additive provenance separately from the raw text (2675.300208ms) +✔ think MCP capture trims additive provenance strings before persistence (2220.511291ms) +✔ think MCP remember, stats, and prompt_metrics expose structured read results (6486.158708ms) +✔ think MCP doctor tool returns structured health checks (2402.642333ms) +✔ CLI raw capture bootstraps the local repo and preserves exact text (3286.315334ms) +✔ think "recent" is captured as a thought rather than triggering the list (2784.955458ms) +✔ think --recent does not bootstrap local state before the first capture (312.763875ms) +✔ think --recent rejects an unexpected thought argument (320.350958ms) +✔ capture does not require retrieval-before-write or conceptual confirmation (3832.410292ms) +✔ THINK_REPO_DIR overrides the default local repo path (2545.018583ms) +✔ reachable upstream reports local save first and backup second (1958.628458ms) +✔ unreachable upstream keeps capture successful and reports backup pending (1538.252958ms) +✔ recent stays plain and chronological (6583.2965ms) +✔ capture is append-only across later capture activity (4527.0005ms) +✔ duplicate thoughts produce distinct captures rather than deduping (4174.964041ms) +✔ empty input is rejected (262.89675ms) +✔ whitespace-only input is rejected (260.195375ms) +✔ capture preserves formatting neutrality for spacing, casing, and punctuation (1986.786583ms) +✔ default user language avoids Git terminology (1289.488583ms) +✔ verbose capture emits JSONL trace updates on stderr (1275.264625ms) +✔ raw entries remain immutable after later derived entries exist (0.103625ms) # TODO +✔ stored raw entry bytes remain unchanged in the local store after later writes (0.02425ms) # TODO +✔ entry kind separation remains explicit once the first derived-entry write path exists (0.019958ms) # TODO +✔ think --prompt-metrics prints factual prompt telemetry totals and medians (508.788792ms) +✔ think --prompt-metrics does not bootstrap local state before the first capture (357.036916ms) +✔ think --prompt-metrics supports --since filtering over prompt sessions (338.770125ms) +✔ think --prompt-metrics supports --bucket=day (317.932ms) +✔ think --json --prompt-metrics emits explicit summary, timing, and bucket rows (307.717667ms) +✔ think --prompt-metrics rejects an unexpected thought argument (320.708583ms) +✔ think --prompt-metrics rejects invalid filter values (705.742ms) +✔ think --recent --count limits output to the newest N raw captures (8839.024792ms) +✔ think --recent --query filters raw captures by case-insensitive text match (8130.335958ms) +✔ removed recent alias flags fail clearly and point to the scoped forms (1791.964583ms) +✔ think --json --recent applies count and query filters while remaining JSONL-only (7116.994291ms) +✔ think --remember uses the current project context to recall relevant prior thoughts (4538.872917ms) +✔ think --remember with an explicit phrase recalls matching thoughts without turning into generic recent listing (7205.961541ms) +✔ think --json --remember emits explicit ambient scope and match receipts for agents (4102.292459ms) +✔ think --remember falls back honestly to textual project-token matching for entries without ambient project receipts (3999.732417ms) +✔ think --remember --limit returns only the top N matching thoughts in deterministic order (8544.292458ms) +✔ think --remember --brief returns a triage-friendly snippet instead of the full multiline thought (4180.401166ms) +✔ think --json --remember --brief --limit preserves bounded explicit recall receipts for agents (11637.093792ms) +✔ think --remember rejects invalid --limit values (2763.234333ms) +✔ think --browse shows one raw thought with its immediate newer and older neighbors (10560.643708ms) +✔ think --browse without an entry id fails clearly outside interactive TTY use and remains read-only (245.874458ms) +✔ think --json --browse without an entry id stays machine-readable and does not try to open the shell (237.452459ms) +✔ think --json --browse emits JSONL rows for the current raw thought and its neighbors (5697.890834ms) +✔ think --browse opens a reader-first browse TUI with metadata and no permanent recent rail (6520.381041ms) +✔ think --browse can reveal a chronology drawer on demand instead of showing the full log by default (5453.301459ms) +✔ think --browse can jump to another thought through a fuzzy jump surface (5712.065125ms) +✔ think --browse can reveal inspect receipts inside the scripted browse TUI (4905.778208ms) +✔ think --browse can hand the selected thought into reflect from the scripted browse TUI (6943.899459ms) +✔ think --browse surfaces session identity for the current thought without replacing the reader-first view (13975.783083ms) +✔ think --browse uses a short visible entry id in the reader-first metadata while inspect keeps the full exact id (6688.945375ms) +✔ think --browse can reveal a summon-only session drawer that excludes out-of-session thoughts (7781.691583ms) +✔ think --browse reveals a structured session drawer with a visible start label and current-thought marker (9793.574417ms) +✔ think --json --browse emits explicit session context and session-nearby rows without mislabeling out-of-session thoughts (8404.500417ms) +✔ think --browse can move to the previous thought within the current session without leaving reader-first browse (6034.510125ms) +✔ think --browse keeps the current thought in place when there is no next thought in the current session (6675.024291ms) +✔ think --json --browse exposes explicit session traversal semantics without conflating them with chronology neighbors (6359.336084ms) +✔ think --inspect exposes exact raw entry metadata without narration (2161.734834ms) +✔ think --json --inspect emits JSONL for the exact raw entry metadata (2001.712125ms) +✔ think --inspect exposes additive capture provenance separately from the raw text (2105.099584ms) +✔ think --json --inspect includes additive capture provenance in the inspected entry payload (2088.714792ms) +✔ think --inspect exposes canonical content identity and direct derived receipts when they exist (3668.255208ms) +✔ think --json --inspect emits canonical content identity and direct derived receipt rows (3544.707875ms) +✔ think --inspect exposes the first derived bundle as explicit raw, canonical, derived, and context sections (5497.819417ms) +✔ think --json --inspect emits canonical identity plus seed-quality and session-attribution receipts with provenance (5549.951208ms) +✔ think --json --inspect keeps duplicate raw captures distinct while linking them to the same canonical thought (4684.121833ms) +✔ think --reflect starts an explicit seeded reflect session with a deterministic seed-first challenge prompt (6200.054875ms) +✔ removed brainstorm aliases fail clearly and point to reflect (2703.622042ms) +✔ think --reflect can use an explicit sharpen prompt family (3313.681833ms) +✔ think --reflect-session stores a separate derived entry with preserved seed-first lineage (7036.409333ms) +✔ think --reflect validates explicit session entry and stays read-only on invalid start (2631.812416ms) +✔ think --reflect fails clearly when the seed entry does not exist (272.691042ms) +✔ think --reflect refuses status-like seeds that are not pressure-testable ideas (8004.6325ms) +✔ think --json --reflect refuses ineligible seeds with structured machine-readable errors (7650.849709ms) +✔ think --json --reflect emits only JSONL with seed-first session and prompt data (4020.399875ms) +✔ think --json --reflect-session emits only JSONL and preserves stored seed-first lineage (2995.73775ms) +✔ think --json reflect validation failures stay fully machine-readable (258.143292ms) +✔ think --stats prints total thoughts (5178.1855ms) +✔ think --stats does not bootstrap local state before the first capture (310.879666ms) +✔ think "stats" is captured as a thought rather than triggering the command (3886.912458ms) +✔ think --stats rejects an unexpected thought argument (300.940792ms) +✔ think stats supports --since filter (4200.245833ms) +✔ think --stats rejects an invalid --since value (274.047792ms) +✔ think stats supports --from and --to filters (6936.454291ms) +✔ think --stats rejects invalid absolute date filters (286.101458ms) +✔ think stats supports --bucket=day (6734.242166ms) +✔ think --stats --bucket=day includes a sparkline in text output (6793.897667ms) +✔ think --stats --bucket=day --json includes sparkline in stats.total event (5935.098625ms) +✔ think --stats without --bucket omits sparkline (1824.877708ms) +✔ think --stats rejects an invalid bucket value (262.063541ms) +ℹ tests 128 +ℹ suites 0 +ℹ pass 125 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 3 +ℹ duration_ms 218550.711417 + +``` + +## Drift Results + +```text +Playback-question drift found. +Scanned 1 active cycle, 2 playback questions, 0 test descriptions. +Search basis: exact normalized match in tests/**/*.test.* and tests/**/*.spec.* descriptions. + +docs/design/0049-ssjr-src-browse-benchmark-js/ssjr-src-browse-benchmark-js.md +- Human: TBD + No exact normalized test description match found. +- Agent: TBD + No exact normalized test description match found. + +``` + +## Manual Verification + +- [x] Automated capture completed successfully. diff --git a/docs/method/retro/0050-ssjr-src-cli-interactive-js/ssjr-src-cli-interactive-js.md b/docs/method/retro/0050-ssjr-src-cli-interactive-js/ssjr-src-cli-interactive-js.md new file mode 100644 index 0000000..9b73788 --- /dev/null +++ b/docs/method/retro/0050-ssjr-src-cli-interactive-js/ssjr-src-cli-interactive-js.md @@ -0,0 +1,37 @@ +--- +title: "Raise SSJR grades for `src/cli/interactive.js`" +cycle: "0050-ssjr-src-cli-interactive-js" +design_doc: "docs/design/0050-ssjr-src-cli-interactive-js/ssjr-src-cli-interactive-js.md" +outcome: hill-met +drift_check: yes +--- + +# Raise SSJR grades for `src/cli/interactive.js` Retro + +## Summary + +DRYed duplicate `capitalize` function. Exported from interactive.js +so reflect.js command imports instead of redefining. File was otherwise +structurally clean at 163 lines. + +## Playback Witness + +Add artifacts under `docs/method/retro/0050-ssjr-src-cli-interactive-js/witness` and link them here. + +## Drift + +- None recorded. + +## New Debt + +- None recorded. + +## Cool Ideas + +- None recorded. + +## Backlog Maintenance + +- [ ] Inbox processed +- [ ] Priorities reviewed +- [ ] Dead work buried or merged diff --git a/docs/method/retro/0050-ssjr-src-cli-interactive-js/witness/verification.md b/docs/method/retro/0050-ssjr-src-cli-interactive-js/witness/verification.md new file mode 100644 index 0000000..77b5eca --- /dev/null +++ b/docs/method/retro/0050-ssjr-src-cli-interactive-js/witness/verification.md @@ -0,0 +1,252 @@ +--- +title: "Verification Witness for Cycle 50" +--- + +# Verification Witness for Cycle 50 + +This witness proves that `Raise SSJR grades for `src/cli/interactive.js`` now carries the required +behavior and adheres to the repo invariants. + +## Test Results + +```text + +> think@0.7.0 test +> npm run test:ports && npm run test:m1 + + +> think@0.7.0 test:ports +> node --test test/ports/*.test.js + +✔ BG_TOKEN is exported from style.js alongside the palette (0.981583ms) +✔ windowed browse initializes with no drawer open (19.264667ms) +✔ saveRawCapture writes cwd receipts first and defers git enrichment to followthrough (1097.017709ms) +✔ capture provenance exports the canonical ingress set (1.5935ms) +✔ capture provenance trims source strings while preserving valid ingress and URL (0.158292ms) +✔ capture provenance trims ingress strings before validation (0.068958ms) +✔ capture provenance rejects dangerous URL schemes (0.088209ms) +✔ capture provenance accepts safe URL schemes (0.10825ms) +✔ normalizeCaptureProvenance returns a frozen CaptureProvenance instance (0.0605ms) +✔ capture provenance reads and normalizes environment input (0.822667ms) +✔ METHOD docs use one consistent cycle-only release and README closeout policy (2.47175ms) +✔ MIND_ORCHESTRATION.md exists and is linked from GUIDE.md (1.459083ms) +✔ cycle 0006 retrospective restarts ordered numbering for the human playback section (1.973709ms) +✔ runDiagnostics reports ok for a healthy repo with entries (28.113959ms) +✔ runDiagnostics reports fail when think directory does not exist (0.555958ms) +✔ runDiagnostics reports fail when local repo has no git init (1.681375ms) +✔ runDiagnostics reports ok for upstream when reachable (20.160375ms) +✔ runDiagnostics reports warn for upstream when unreachable (27.44675ms) +✔ runDiagnostics reports skip for upstream when URL is set but no checker provided (21.331166ms) +✔ runDiagnostics reports skip for upstream when not configured (18.510541ms) +✔ runDiagnostics reports skip for upstream when configured without checker (20.476875ms) +✔ runDiagnostics includes all expected check names (18.535375ms) +✔ runDiagnostics reports graph model version when available (18.523459ms) +✔ runDiagnostics warns when graph model needs migration (17.829708ms) +✔ runDiagnostics reports entry count when available (18.45575ms) +✔ runDiagnostics warns when entry count is zero (18.215041ms) +✔ runDiagnostics skips graph and entry checks when no repo exists (0.146167ms) +✔ shared JSON helper canonicalizes object keys deterministically on parse and stringify (1.692459ms) +✔ discoverMinds finds all valid repos under the think directory (73.727708ms) +✔ discoverMinds ignores directories without git repos (21.841084ms) +✔ discoverMinds labels ~/.think/repo as "default" (19.730708ms) +✔ discoverMinds sorts with default first, then alphabetical (54.147542ms) +✔ discoverMinds returns empty array when think directory does not exist (0.201667ms) +✔ discoverMinds includes repoDir for each mind (18.528583ms) +✔ shaderForMind returns a deterministic index for a given name (0.1765ms) +✔ shaderForMind returns different indices for different names (0.079ms) +✔ shaderForMind stays within the shader count range (0.082708ms) +✔ shaderForMind throws when shaderCount is zero (0.285625ms) +✔ shaderForMind throws when shaderCount is negative (0.0715ms) +✔ shaderForMind handles single-character names (0.063042ms) +✔ createEntry returns an Entry instance (5.425084ms) +✔ Entry is frozen (0.106333ms) +✔ createEntry validates required fields (0.751791ms) +✔ createReflectSession returns a ReflectSession instance (0.118ms) +✔ ReflectSession is frozen (0.072333ms) +✔ ENTRY_KINDS is a frozen array of valid kind strings (0.061291ms) +✔ BUCKET_PERIODS is a frozen array of valid bucket strings (0.062334ms) +✔ storesTextContent validates against ENTRY_KINDS (0.0965ms) +✔ selectLogo picks large mind logo when terminal is wide and tall enough (1.148125ms) +✔ selectLogo picks medium mind logo when terminal fits medium but not large (0.139208ms) +✔ selectLogo picks text logo when terminal is too small for mind (0.069708ms) +✔ selectLogo always returns something even for tiny terminals (0.059875ms) +✔ renderSplash contains the logo (0.164709ms) +✔ renderSplash contains the Enter prompt (0.0625ms) +✔ renderSplash output fits within the given dimensions (0.069209ms) +✔ splash.js does not export renderSplashView (dead code from RE-015 workaround) (0.129416ms) +✔ renderSplash centers the prompt horizontally (0.340292ms) +✔ windowed browse model initializes in windowed mode (0.34825ms) +✔ formatStats includes a sparkline when buckets are present (1.832792ms) +✔ formatStats omits sparkline when no buckets are present (0.099917ms) +✔ formatStats handles a single bucket without crashing (0.101125ms) +✔ formatStats handles empty bucket array without sparkline (0.06625ms) +✔ formatStats sparkline is oldest-to-newest (left-to-right) (0.08225ms) +ℹ tests 63 +ℹ suites 0 +ℹ pass 63 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 0 +ℹ duration_ms 1510.511708 + +> think@0.7.0 test:m1 +> node --test test/acceptance/*.test.js + +✔ think --doctor reports health of a repo with captures (3398.832292ms) +✔ think --doctor succeeds before the first capture (298.5995ms) +✔ think --json --doctor emits a structured health report (3012.740542ms) +✔ think --doctor rejects an unexpected thought argument (287.73325ms) +✔ new capture writes graph-native relationship edges while preserving compatibility properties (2261.923958ms) +✔ think --migrate-graph upgrades a version-1 property-linked repo additively (3515.005625ms) +✔ think --migrate-graph is idempotent and safe to rerun (3105.018ms) +✔ capture on a version-1 repo still succeeds and only migrates after the raw local save (5070.180542ms) +✔ graph-native commands fail clearly on an outdated repo outside interactive use (4559.247125ms) +✔ interactive inspect on an outdated repo shows visible upgrade progress before continuing (3081.324459ms) +✔ interactive browse on an outdated repo shows visible upgrade progress before continuing (3020.43625ms) +✔ think --json emits explicit graph migration required errors for outdated graph-native commands (2298.112417ms) +✔ think --migrate-graph upgrades a version-2 repo to graph model version 3 with browse and reflect read edges (7513.047708ms) +✔ think --json --inspect exposes direct reflect receipts that exist only through graph-native v3 edges (2438.836583ms) +✔ think --help prints top-level usage without bootstrapping local state (436.396584ms) +✔ think -h is accepted as a short alias for top-level help (332.218125ms) +✔ think --recent --help prints recent help instead of running the command (321.852042ms) +✔ think --recent -h prints recent help instead of running the command (297.550084ms) +✔ think recent --help fails and points callers to the explicit flag form (293.450708ms) +✔ think --inspect --help bypasses required entry validation (327.495042ms) +✔ think --json --help emits structured JSONL help output (378.974958ms) +✔ think recent --json --help fails machine-readably instead of acting as shorthand help (313.016917ms) +✔ think -- -h captures the literal text after option parsing is terminated (2677.125459ms) +✔ think --ingest reads stdin explicitly and captures it into the normal raw-capture core (2973.168459ms) +✔ think with stdin but without --ingest does not accidentally capture piped input (312.473125ms) +✔ think --ingest rejects mixed positional capture text and stdin capture text (314.556458ms) +✔ think --json --ingest preserves machine-readable capture semantics for agents (2677.942125ms) +✔ think --ingest rejects empty stdin payloads (325.1465ms) +✔ think --json capture emits JSONL on stdout and keeps stderr quiet when there are no warnings (1874.877875ms) +✔ think --json --recent emits entry events instead of plain text (5521.850333ms) +✔ think --json --stats emits totals and bucket rows as JSONL (4536.423792ms) +✔ think --json validation failures emit JSONL on stderr instead of stdout (284.25875ms) +✔ think --json reports backup pending as a structured warning on stderr (1377.617292ms) +✔ think --json emits deterministically sorted keys in JSONL output (1894.845542ms) +✔ think MCP server lists the core Think tools (498.444416ms) +✔ think MCP capture, recent, browse, and inspect route through the existing Think runtime (3665.674917ms) +✔ think MCP capture preserves additive provenance separately from the raw text (2401.396667ms) +✔ think MCP capture trims additive provenance strings before persistence (1982.484667ms) +✔ think MCP remember, stats, and prompt_metrics expose structured read results (5329.704417ms) +✔ think MCP doctor tool returns structured health checks (2371.790125ms) +✔ CLI raw capture bootstraps the local repo and preserves exact text (2985.43675ms) +✔ think "recent" is captured as a thought rather than triggering the list (2630.840958ms) +✔ think --recent does not bootstrap local state before the first capture (288.753834ms) +✔ think --recent rejects an unexpected thought argument (293.562083ms) +✔ capture does not require retrieval-before-write or conceptual confirmation (3465.908916ms) +✔ THINK_REPO_DIR overrides the default local repo path (2192.258584ms) +✔ reachable upstream reports local save first and backup second (1527.9225ms) +✔ unreachable upstream keeps capture successful and reports backup pending (1282.315625ms) +✔ recent stays plain and chronological (6562.861917ms) +✔ capture is append-only across later capture activity (3962.695625ms) +✔ duplicate thoughts produce distinct captures rather than deduping (4068.034334ms) +✔ empty input is rejected (261.493542ms) +✔ whitespace-only input is rejected (264.825167ms) +✔ capture preserves formatting neutrality for spacing, casing, and punctuation (1996.651625ms) +✔ default user language avoids Git terminology (1247.799ms) +✔ verbose capture emits JSONL trace updates on stderr (1250.113542ms) +✔ raw entries remain immutable after later derived entries exist (0.093709ms) # TODO +✔ stored raw entry bytes remain unchanged in the local store after later writes (0.024583ms) # TODO +✔ entry kind separation remains explicit once the first derived-entry write path exists (0.020208ms) # TODO +✔ think --prompt-metrics prints factual prompt telemetry totals and medians (409.086583ms) +✔ think --prompt-metrics does not bootstrap local state before the first capture (340.396333ms) +✔ think --prompt-metrics supports --since filtering over prompt sessions (324.43175ms) +✔ think --prompt-metrics supports --bucket=day (313.824083ms) +✔ think --json --prompt-metrics emits explicit summary, timing, and bucket rows (297.792875ms) +✔ think --prompt-metrics rejects an unexpected thought argument (361.776ms) +✔ think --prompt-metrics rejects invalid filter values (669.802667ms) +✔ think --recent --count limits output to the newest N raw captures (8297.429167ms) +✔ think --recent --query filters raw captures by case-insensitive text match (6999.698708ms) +✔ removed recent alias flags fail clearly and point to the scoped forms (1740.511416ms) +✔ think --json --recent applies count and query filters while remaining JSONL-only (6315.817709ms) +✔ think --remember uses the current project context to recall relevant prior thoughts (4474.784833ms) +✔ think --remember with an explicit phrase recalls matching thoughts without turning into generic recent listing (6715.896375ms) +✔ think --json --remember emits explicit ambient scope and match receipts for agents (3844.81575ms) +✔ think --remember falls back honestly to textual project-token matching for entries without ambient project receipts (3856.126958ms) +✔ think --remember --limit returns only the top N matching thoughts in deterministic order (8209.658958ms) +✔ think --remember --brief returns a triage-friendly snippet instead of the full multiline thought (3615.156417ms) +✔ think --json --remember --brief --limit preserves bounded explicit recall receipts for agents (5572.701208ms) +✔ think --remember rejects invalid --limit values (1454.481042ms) +✔ think --browse shows one raw thought with its immediate newer and older neighbors (5515.678084ms) +✔ think --browse without an entry id fails clearly outside interactive TTY use and remains read-only (262.404208ms) +✔ think --json --browse without an entry id stays machine-readable and does not try to open the shell (250.444458ms) +✔ think --json --browse emits JSONL rows for the current raw thought and its neighbors (5676.380042ms) +✔ think --browse opens a reader-first browse TUI with metadata and no permanent recent rail (6479.408791ms) +✔ think --browse can reveal a chronology drawer on demand instead of showing the full log by default (8031.748083ms) +✔ think --browse can jump to another thought through a fuzzy jump surface (6022.852083ms) +✔ think --browse can reveal inspect receipts inside the scripted browse TUI (3842.039375ms) +✔ think --browse can hand the selected thought into reflect from the scripted browse TUI (3809.911042ms) +✔ think --browse surfaces session identity for the current thought without replacing the reader-first view (13282.349042ms) +✔ think --browse uses a short visible entry id in the reader-first metadata while inspect keeps the full exact id (17776.599625ms) +✔ think --browse can reveal a summon-only session drawer that excludes out-of-session thoughts (16959.296667ms) +✔ think --browse reveals a structured session drawer with a visible start label and current-thought marker (7868.513792ms) +✔ think --json --browse emits explicit session context and session-nearby rows without mislabeling out-of-session thoughts (8581.473125ms) +✔ think --browse can move to the previous thought within the current session without leaving reader-first browse (6398.36575ms) +✔ think --browse keeps the current thought in place when there is no next thought in the current session (6544.721458ms) +✔ think --json --browse exposes explicit session traversal semantics without conflating them with chronology neighbors (6729.890083ms) +✔ think --inspect exposes exact raw entry metadata without narration (1935.577208ms) +✔ think --json --inspect emits JSONL for the exact raw entry metadata (1879.606959ms) +✔ think --inspect exposes additive capture provenance separately from the raw text (2012.78475ms) +✔ think --json --inspect includes additive capture provenance in the inspected entry payload (2147.238ms) +✔ think --inspect exposes canonical content identity and direct derived receipts when they exist (4677.033167ms) +✔ think --json --inspect emits canonical content identity and direct derived receipt rows (6348.185292ms) +✔ think --inspect exposes the first derived bundle as explicit raw, canonical, derived, and context sections (6697.9885ms) +✔ think --json --inspect emits canonical identity plus seed-quality and session-attribution receipts with provenance (7654.420958ms) +✔ think --json --inspect keeps duplicate raw captures distinct while linking them to the same canonical thought (10759.661916ms) +✔ think --reflect starts an explicit seeded reflect session with a deterministic seed-first challenge prompt (5749.585041ms) +✔ removed brainstorm aliases fail clearly and point to reflect (2537.466584ms) +✔ think --reflect can use an explicit sharpen prompt family (2576.16425ms) +✔ think --reflect-session stores a separate derived entry with preserved seed-first lineage (6801.562625ms) +✔ think --reflect validates explicit session entry and stays read-only on invalid start (2538.006459ms) +✔ think --reflect fails clearly when the seed entry does not exist (262.728666ms) +✔ think --reflect refuses status-like seeds that are not pressure-testable ideas (7413.143375ms) +✔ think --json --reflect refuses ineligible seeds with structured machine-readable errors (6977.354583ms) +✔ think --json --reflect emits only JSONL with seed-first session and prompt data (3817.842042ms) +✔ think --json --reflect-session emits only JSONL and preserves stored seed-first lineage (2844.29425ms) +✔ think --json reflect validation failures stay fully machine-readable (252.7095ms) +✔ think --stats prints total thoughts (4714.675916ms) +✔ think --stats does not bootstrap local state before the first capture (285.57ms) +✔ think "stats" is captured as a thought rather than triggering the command (3007.878625ms) +✔ think --stats rejects an unexpected thought argument (274.142083ms) +✔ think stats supports --since filter (4166.815709ms) +✔ think --stats rejects an invalid --since value (270.797834ms) +✔ think stats supports --from and --to filters (6362.55775ms) +✔ think --stats rejects invalid absolute date filters (264.036541ms) +✔ think stats supports --bucket=day (6459.249792ms) +✔ think --stats --bucket=day includes a sparkline in text output (6148.509458ms) +✔ think --stats --bucket=day --json includes sparkline in stats.total event (5601.618875ms) +✔ think --stats without --bucket omits sparkline (1730.442959ms) +✔ think --stats rejects an invalid bucket value (241.186375ms) +ℹ tests 128 +ℹ suites 0 +ℹ pass 125 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 3 +ℹ duration_ms 231396.262084 + +``` + +## Drift Results + +```text +Playback-question drift found. +Scanned 1 active cycle, 2 playback questions, 0 test descriptions. +Search basis: exact normalized match in tests/**/*.test.* and tests/**/*.spec.* descriptions. + +docs/design/0050-ssjr-src-cli-interactive-js/ssjr-src-cli-interactive-js.md +- Human: TBD + No exact normalized test description match found. +- Agent: TBD + No exact normalized test description match found. + +``` + +## Manual Verification + +- [x] Automated capture completed successfully. diff --git a/docs/method/retro/0051-ssjr-src-store-js/ssjr-src-store-js.md b/docs/method/retro/0051-ssjr-src-store-js/ssjr-src-store-js.md new file mode 100644 index 0000000..db8e559 --- /dev/null +++ b/docs/method/retro/0051-ssjr-src-store-js/ssjr-src-store-js.md @@ -0,0 +1,37 @@ +--- +title: "Raise SSJR grades for `src/store.js`" +cycle: "0051-ssjr-src-store-js" +design_doc: "docs/design/0051-ssjr-src-store-js/ssjr-src-store-js.md" +outcome: hill-met +drift_check: yes +--- + +# Raise SSJR grades for `src/store.js` Retro + +## Summary + +Barrel export file (41 lines). Already clean: grouped re-exports +from internal modules, no logic, no magic strings. No code changes +needed. + +## Playback Witness + +Add artifacts under `docs/method/retro/0051-ssjr-src-store-js/witness` and link them here. + +## Drift + +- None recorded. + +## New Debt + +- None recorded. + +## Cool Ideas + +- None recorded. + +## Backlog Maintenance + +- [ ] Inbox processed +- [ ] Priorities reviewed +- [ ] Dead work buried or merged diff --git a/docs/method/retro/0051-ssjr-src-store-js/witness/verification.md b/docs/method/retro/0051-ssjr-src-store-js/witness/verification.md new file mode 100644 index 0000000..0640a75 --- /dev/null +++ b/docs/method/retro/0051-ssjr-src-store-js/witness/verification.md @@ -0,0 +1,252 @@ +--- +title: "Verification Witness for Cycle 51" +--- + +# Verification Witness for Cycle 51 + +This witness proves that `Raise SSJR grades for `src/store.js`` now carries the required +behavior and adheres to the repo invariants. + +## Test Results + +```text + +> think@0.7.0 test +> npm run test:ports && npm run test:m1 + + +> think@0.7.0 test:ports +> node --test test/ports/*.test.js + +✔ BG_TOKEN is exported from style.js alongside the palette (0.8175ms) +✔ windowed browse initializes with no drawer open (17.7565ms) +✔ saveRawCapture writes cwd receipts first and defers git enrichment to followthrough (1120.726583ms) +✔ capture provenance exports the canonical ingress set (2.122333ms) +✔ capture provenance trims source strings while preserving valid ingress and URL (0.165166ms) +✔ capture provenance trims ingress strings before validation (0.071292ms) +✔ capture provenance rejects dangerous URL schemes (0.079791ms) +✔ capture provenance accepts safe URL schemes (0.104583ms) +✔ normalizeCaptureProvenance returns a frozen CaptureProvenance instance (0.058792ms) +✔ capture provenance reads and normalizes environment input (0.092625ms) +✔ METHOD docs use one consistent cycle-only release and README closeout policy (1.849708ms) +✔ MIND_ORCHESTRATION.md exists and is linked from GUIDE.md (0.692166ms) +✔ cycle 0006 retrospective restarts ordered numbering for the human playback section (0.634042ms) +✔ runDiagnostics reports ok for a healthy repo with entries (26.059292ms) +✔ runDiagnostics reports fail when think directory does not exist (0.19925ms) +✔ runDiagnostics reports fail when local repo has no git init (1.340333ms) +✔ runDiagnostics reports ok for upstream when reachable (20.436125ms) +✔ runDiagnostics reports warn for upstream when unreachable (19.534875ms) +✔ runDiagnostics reports skip for upstream when URL is set but no checker provided (20.325292ms) +✔ runDiagnostics reports skip for upstream when not configured (19.1215ms) +✔ runDiagnostics reports skip for upstream when configured without checker (17.2695ms) +✔ runDiagnostics includes all expected check names (18.042834ms) +✔ runDiagnostics reports graph model version when available (17.322417ms) +✔ runDiagnostics warns when graph model needs migration (17.33825ms) +✔ runDiagnostics reports entry count when available (19.428333ms) +✔ runDiagnostics warns when entry count is zero (16.549291ms) +✔ runDiagnostics skips graph and entry checks when no repo exists (0.172917ms) +✔ shared JSON helper canonicalizes object keys deterministically on parse and stringify (1.604166ms) +✔ discoverMinds finds all valid repos under the think directory (75.607708ms) +✔ discoverMinds ignores directories without git repos (21.642583ms) +✔ discoverMinds labels ~/.think/repo as "default" (17.999375ms) +✔ discoverMinds sorts with default first, then alphabetical (53.193666ms) +✔ discoverMinds returns empty array when think directory does not exist (0.1445ms) +✔ discoverMinds includes repoDir for each mind (17.330125ms) +✔ shaderForMind returns a deterministic index for a given name (0.167542ms) +✔ shaderForMind returns different indices for different names (0.085541ms) +✔ shaderForMind stays within the shader count range (0.07725ms) +✔ shaderForMind throws when shaderCount is zero (0.294292ms) +✔ shaderForMind throws when shaderCount is negative (0.073666ms) +✔ shaderForMind handles single-character names (0.060708ms) +✔ createEntry returns an Entry instance (3.632792ms) +✔ Entry is frozen (0.213541ms) +✔ createEntry validates required fields (0.803958ms) +✔ createReflectSession returns a ReflectSession instance (0.129667ms) +✔ ReflectSession is frozen (0.078125ms) +✔ ENTRY_KINDS is a frozen array of valid kind strings (0.061708ms) +✔ BUCKET_PERIODS is a frozen array of valid bucket strings (0.052291ms) +✔ storesTextContent validates against ENTRY_KINDS (0.060541ms) +✔ selectLogo picks large mind logo when terminal is wide and tall enough (0.969ms) +✔ selectLogo picks medium mind logo when terminal fits medium but not large (0.138583ms) +✔ selectLogo picks text logo when terminal is too small for mind (0.076125ms) +✔ selectLogo always returns something even for tiny terminals (0.064708ms) +✔ renderSplash contains the logo (0.227041ms) +✔ renderSplash contains the Enter prompt (0.107291ms) +✔ renderSplash output fits within the given dimensions (0.088583ms) +✔ splash.js does not export renderSplashView (dead code from RE-015 workaround) (0.06175ms) +✔ renderSplash centers the prompt horizontally (0.177208ms) +✔ windowed browse model initializes in windowed mode (0.222584ms) +✔ formatStats includes a sparkline when buckets are present (1.610583ms) +✔ formatStats omits sparkline when no buckets are present (0.085042ms) +✔ formatStats handles a single bucket without crashing (0.087042ms) +✔ formatStats handles empty bucket array without sparkline (0.0655ms) +✔ formatStats sparkline is oldest-to-newest (left-to-right) (0.081916ms) +ℹ tests 63 +ℹ suites 0 +ℹ pass 63 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 0 +ℹ duration_ms 1469.967834 + +> think@0.7.0 test:m1 +> node --test test/acceptance/*.test.js + +✔ think --doctor reports health of a repo with captures (3279.431791ms) +✔ think --doctor succeeds before the first capture (312.402833ms) +✔ think --json --doctor emits a structured health report (2907.391917ms) +✔ think --doctor rejects an unexpected thought argument (299.092584ms) +✔ new capture writes graph-native relationship edges while preserving compatibility properties (2138.930334ms) +✔ think --migrate-graph upgrades a version-1 property-linked repo additively (3354.035583ms) +✔ think --migrate-graph is idempotent and safe to rerun (3181.872083ms) +✔ capture on a version-1 repo still succeeds and only migrates after the raw local save (5209.531375ms) +✔ graph-native commands fail clearly on an outdated repo outside interactive use (4858.901ms) +✔ interactive inspect on an outdated repo shows visible upgrade progress before continuing (3096.047625ms) +✔ interactive browse on an outdated repo shows visible upgrade progress before continuing (3187.206916ms) +✔ think --json emits explicit graph migration required errors for outdated graph-native commands (2523.402417ms) +✔ think --migrate-graph upgrades a version-2 repo to graph model version 3 with browse and reflect read edges (7452.56325ms) +✔ think --json --inspect exposes direct reflect receipts that exist only through graph-native v3 edges (2587.533375ms) +✔ think --help prints top-level usage without bootstrapping local state (414.541917ms) +✔ think -h is accepted as a short alias for top-level help (309.079125ms) +✔ think --recent --help prints recent help instead of running the command (314.901375ms) +✔ think --recent -h prints recent help instead of running the command (289.963916ms) +✔ think recent --help fails and points callers to the explicit flag form (290.266792ms) +✔ think --inspect --help bypasses required entry validation (296.696583ms) +✔ think --json --help emits structured JSONL help output (370.97475ms) +✔ think recent --json --help fails machine-readably instead of acting as shorthand help (305.351375ms) +✔ think -- -h captures the literal text after option parsing is terminated (2554.134833ms) +✔ think --ingest reads stdin explicitly and captures it into the normal raw-capture core (2864.624125ms) +✔ think with stdin but without --ingest does not accidentally capture piped input (305.512125ms) +✔ think --ingest rejects mixed positional capture text and stdin capture text (307.727125ms) +✔ think --json --ingest preserves machine-readable capture semantics for agents (2545.503042ms) +✔ think --ingest rejects empty stdin payloads (351.046ms) +✔ think --json capture emits JSONL on stdout and keeps stderr quiet when there are no warnings (1843.671ms) +✔ think --json --recent emits entry events instead of plain text (5385.69625ms) +✔ think --json --stats emits totals and bucket rows as JSONL (4633.402ms) +✔ think --json validation failures emit JSONL on stderr instead of stdout (288.647333ms) +✔ think --json reports backup pending as a structured warning on stderr (1463.658042ms) +✔ think --json emits deterministically sorted keys in JSONL output (1786.601125ms) +✔ think MCP server lists the core Think tools (488.462875ms) +✔ think MCP capture, recent, browse, and inspect route through the existing Think runtime (3438.694292ms) +✔ think MCP capture preserves additive provenance separately from the raw text (2348.672791ms) +✔ think MCP capture trims additive provenance strings before persistence (1992.238417ms) +✔ think MCP remember, stats, and prompt_metrics expose structured read results (5515.961667ms) +✔ think MCP doctor tool returns structured health checks (2452.457833ms) +✔ CLI raw capture bootstraps the local repo and preserves exact text (2931.819542ms) +✔ think "recent" is captured as a thought rather than triggering the list (2606.758625ms) +✔ think --recent does not bootstrap local state before the first capture (293.704292ms) +✔ think --recent rejects an unexpected thought argument (332.5285ms) +✔ capture does not require retrieval-before-write or conceptual confirmation (3531.05425ms) +✔ THINK_REPO_DIR overrides the default local repo path (2374.052834ms) +✔ reachable upstream reports local save first and backup second (1493.60375ms) +✔ unreachable upstream keeps capture successful and reports backup pending (1338.158125ms) +✔ recent stays plain and chronological (6802.405916ms) +✔ capture is append-only across later capture activity (4101.263208ms) +✔ duplicate thoughts produce distinct captures rather than deduping (4223.477791ms) +✔ empty input is rejected (265.878416ms) +✔ whitespace-only input is rejected (267.344209ms) +✔ capture preserves formatting neutrality for spacing, casing, and punctuation (1965.607542ms) +✔ default user language avoids Git terminology (1235.039375ms) +✔ verbose capture emits JSONL trace updates on stderr (1241.087458ms) +✔ raw entries remain immutable after later derived entries exist (0.143041ms) # TODO +✔ stored raw entry bytes remain unchanged in the local store after later writes (0.028917ms) # TODO +✔ entry kind separation remains explicit once the first derived-entry write path exists (0.022541ms) # TODO +✔ think --prompt-metrics prints factual prompt telemetry totals and medians (427.2765ms) +✔ think --prompt-metrics does not bootstrap local state before the first capture (311.284208ms) +✔ think --prompt-metrics supports --since filtering over prompt sessions (306.155583ms) +✔ think --prompt-metrics supports --bucket=day (304.402417ms) +✔ think --json --prompt-metrics emits explicit summary, timing, and bucket rows (301.677833ms) +✔ think --prompt-metrics rejects an unexpected thought argument (307.157833ms) +✔ think --prompt-metrics rejects invalid filter values (637.040125ms) +✔ think --recent --count limits output to the newest N raw captures (8129.581833ms) +✔ think --recent --query filters raw captures by case-insensitive text match (7296.687542ms) +✔ removed recent alias flags fail clearly and point to the scoped forms (1880.100834ms) +✔ think --json --recent applies count and query filters while remaining JSONL-only (6613.14725ms) +✔ think --remember uses the current project context to recall relevant prior thoughts (4651.500084ms) +✔ think --remember with an explicit phrase recalls matching thoughts without turning into generic recent listing (6818.414292ms) +✔ think --json --remember emits explicit ambient scope and match receipts for agents (3955.812458ms) +✔ think --remember falls back honestly to textual project-token matching for entries without ambient project receipts (5244.268542ms) +✔ think --remember --limit returns only the top N matching thoughts in deterministic order (8688.144709ms) +✔ think --remember --brief returns a triage-friendly snippet instead of the full multiline thought (4308.112833ms) +✔ think --json --remember --brief --limit preserves bounded explicit recall receipts for agents (6242.028917ms) +✔ think --remember rejects invalid --limit values (1597.004833ms) +✔ think --browse shows one raw thought with its immediate newer and older neighbors (5659.754042ms) +✔ think --browse without an entry id fails clearly outside interactive TTY use and remains read-only (240.996959ms) +✔ think --json --browse without an entry id stays machine-readable and does not try to open the shell (241.279291ms) +✔ think --json --browse emits JSONL rows for the current raw thought and its neighbors (6237.058833ms) +✔ think --browse opens a reader-first browse TUI with metadata and no permanent recent rail (7407.069541ms) +✔ think --browse can reveal a chronology drawer on demand instead of showing the full log by default (5524.905875ms) +✔ think --browse can jump to another thought through a fuzzy jump surface (5499.044209ms) +✔ think --browse can reveal inspect receipts inside the scripted browse TUI (3459.719667ms) +✔ think --browse can hand the selected thought into reflect from the scripted browse TUI (3375.284333ms) +✔ think --browse surfaces session identity for the current thought without replacing the reader-first view (7811.562292ms) +✔ think --browse uses a short visible entry id in the reader-first metadata while inspect keeps the full exact id (6748.198167ms) +✔ think --browse can reveal a summon-only session drawer that excludes out-of-session thoughts (7762.720791ms) +✔ think --browse reveals a structured session drawer with a visible start label and current-thought marker (7838.082ms) +✔ think --json --browse emits explicit session context and session-nearby rows without mislabeling out-of-session thoughts (8217.474833ms) +✔ think --browse can move to the previous thought within the current session without leaving reader-first browse (5348.260167ms) +✔ think --browse keeps the current thought in place when there is no next thought in the current session (5309.423416ms) +✔ think --json --browse exposes explicit session traversal semantics without conflating them with chronology neighbors (5535.556167ms) +✔ think --inspect exposes exact raw entry metadata without narration (1783.079292ms) +✔ think --json --inspect emits JSONL for the exact raw entry metadata (1770.8715ms) +✔ think --inspect exposes additive capture provenance separately from the raw text (1863.675958ms) +✔ think --json --inspect includes additive capture provenance in the inspected entry payload (1866.494125ms) +✔ think --inspect exposes canonical content identity and direct derived receipts when they exist (3575.140417ms) +✔ think --json --inspect emits canonical content identity and direct derived receipt rows (3583.037875ms) +✔ think --inspect exposes the first derived bundle as explicit raw, canonical, derived, and context sections (5804.055834ms) +✔ think --json --inspect emits canonical identity plus seed-quality and session-attribution receipts with provenance (6158.05475ms) +✔ think --json --inspect keeps duplicate raw captures distinct while linking them to the same canonical thought (5074.606541ms) +✔ think --reflect starts an explicit seeded reflect session with a deterministic seed-first challenge prompt (5502.805959ms) +✔ removed brainstorm aliases fail clearly and point to reflect (2521.494333ms) +✔ think --reflect can use an explicit sharpen prompt family (2573.572459ms) +✔ think --reflect-session stores a separate derived entry with preserved seed-first lineage (7058.775709ms) +✔ think --reflect validates explicit session entry and stays read-only on invalid start (2654.422209ms) +✔ think --reflect fails clearly when the seed entry does not exist (288.178084ms) +✔ think --reflect refuses status-like seeds that are not pressure-testable ideas (7645.956667ms) +✔ think --json --reflect refuses ineligible seeds with structured machine-readable errors (7194.538834ms) +✔ think --json --reflect emits only JSONL with seed-first session and prompt data (3963.789042ms) +✔ think --json --reflect-session emits only JSONL and preserves stored seed-first lineage (4138.1305ms) +✔ think --json reflect validation failures stay fully machine-readable (276.3055ms) +✔ think --stats prints total thoughts (4717.101959ms) +✔ think --stats does not bootstrap local state before the first capture (294.065917ms) +✔ think "stats" is captured as a thought rather than triggering the command (3114.005875ms) +✔ think --stats rejects an unexpected thought argument (296.041542ms) +✔ think stats supports --since filter (4346.326208ms) +✔ think --stats rejects an invalid --since value (285.638208ms) +✔ think stats supports --from and --to filters (6719.297083ms) +✔ think --stats rejects invalid absolute date filters (281.133125ms) +✔ think stats supports --bucket=day (6758.431791ms) +✔ think --stats --bucket=day includes a sparkline in text output (6371.168583ms) +✔ think --stats --bucket=day --json includes sparkline in stats.total event (6554.776791ms) +✔ think --stats without --bucket omits sparkline (2327.119125ms) +✔ think --stats rejects an invalid bucket value (264.119834ms) +ℹ tests 128 +ℹ suites 0 +ℹ pass 125 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 3 +ℹ duration_ms 191146.34075 + +``` + +## Drift Results + +```text +Playback-question drift found. +Scanned 1 active cycle, 2 playback questions, 0 test descriptions. +Search basis: exact normalized match in tests/**/*.test.* and tests/**/*.spec.* descriptions. + +docs/design/0051-ssjr-src-store-js/ssjr-src-store-js.md +- Human: TBD + No exact normalized test description match found. +- Agent: TBD + No exact normalized test description match found. + +``` + +## Manual Verification + +- [x] Automated capture completed successfully. diff --git a/docs/method/retro/0052-ssjr-src-store-prompt-metrics-js/ssjr-src-store-prompt-metrics-js.md b/docs/method/retro/0052-ssjr-src-store-prompt-metrics-js/ssjr-src-store-prompt-metrics-js.md new file mode 100644 index 0000000..dfa4f1b --- /dev/null +++ b/docs/method/retro/0052-ssjr-src-store-prompt-metrics-js/ssjr-src-store-prompt-metrics-js.md @@ -0,0 +1,37 @@ +--- +title: "Raise SSJR grades for `src/store/prompt-metrics.js`" +cycle: "0052-ssjr-src-store-prompt-metrics-js" +design_doc: "docs/design/0052-ssjr-src-store-prompt-metrics-js/ssjr-src-store-prompt-metrics-js.md" +outcome: hill-met +drift_check: yes +--- + +# Raise SSJR grades for `src/store/prompt-metrics.js` Retro + +## Summary + +Froze return objects from all three summarizer functions. The +normalizeMetricRecord function already froze individual records. +No other structural issues in this 154-line file. + +## Playback Witness + +Add artifacts under `docs/method/retro/0052-ssjr-src-store-prompt-metrics-js/witness` and link them here. + +## Drift + +- None recorded. + +## New Debt + +- None recorded. + +## Cool Ideas + +- None recorded. + +## Backlog Maintenance + +- [ ] Inbox processed +- [ ] Priorities reviewed +- [ ] Dead work buried or merged diff --git a/docs/method/retro/0052-ssjr-src-store-prompt-metrics-js/witness/verification.md b/docs/method/retro/0052-ssjr-src-store-prompt-metrics-js/witness/verification.md new file mode 100644 index 0000000..2b9eba4 --- /dev/null +++ b/docs/method/retro/0052-ssjr-src-store-prompt-metrics-js/witness/verification.md @@ -0,0 +1,252 @@ +--- +title: "Verification Witness for Cycle 52" +--- + +# Verification Witness for Cycle 52 + +This witness proves that `Raise SSJR grades for `src/store/prompt-metrics.js`` now carries the required +behavior and adheres to the repo invariants. + +## Test Results + +```text + +> think@0.7.0 test +> npm run test:ports && npm run test:m1 + + +> think@0.7.0 test:ports +> node --test test/ports/*.test.js + +✔ BG_TOKEN is exported from style.js alongside the palette (0.786583ms) +✔ windowed browse initializes with no drawer open (17.738167ms) +✔ saveRawCapture writes cwd receipts first and defers git enrichment to followthrough (1100.043667ms) +✔ capture provenance exports the canonical ingress set (2.071667ms) +✔ capture provenance trims source strings while preserving valid ingress and URL (0.188292ms) +✔ capture provenance trims ingress strings before validation (0.073833ms) +✔ capture provenance rejects dangerous URL schemes (0.083791ms) +✔ capture provenance accepts safe URL schemes (0.109625ms) +✔ normalizeCaptureProvenance returns a frozen CaptureProvenance instance (0.05825ms) +✔ capture provenance reads and normalizes environment input (0.099042ms) +✔ METHOD docs use one consistent cycle-only release and README closeout policy (2.92375ms) +✔ MIND_ORCHESTRATION.md exists and is linked from GUIDE.md (1.57375ms) +✔ cycle 0006 retrospective restarts ordered numbering for the human playback section (2.120417ms) +✔ runDiagnostics reports ok for a healthy repo with entries (25.978083ms) +✔ runDiagnostics reports fail when think directory does not exist (0.206583ms) +✔ runDiagnostics reports fail when local repo has no git init (1.806958ms) +✔ runDiagnostics reports ok for upstream when reachable (19.63925ms) +✔ runDiagnostics reports warn for upstream when unreachable (21.061417ms) +✔ runDiagnostics reports skip for upstream when URL is set but no checker provided (19.82725ms) +✔ runDiagnostics reports skip for upstream when not configured (17.786875ms) +✔ runDiagnostics reports skip for upstream when configured without checker (17.424208ms) +✔ runDiagnostics includes all expected check names (17.744375ms) +✔ runDiagnostics reports graph model version when available (18.628167ms) +✔ runDiagnostics warns when graph model needs migration (15.76875ms) +✔ runDiagnostics reports entry count when available (18.67525ms) +✔ runDiagnostics warns when entry count is zero (16.686667ms) +✔ runDiagnostics skips graph and entry checks when no repo exists (0.16625ms) +✔ shared JSON helper canonicalizes object keys deterministically on parse and stringify (1.629209ms) +✔ discoverMinds finds all valid repos under the think directory (77.442375ms) +✔ discoverMinds ignores directories without git repos (19.12475ms) +✔ discoverMinds labels ~/.think/repo as "default" (18.081ms) +✔ discoverMinds sorts with default first, then alphabetical (59.427625ms) +✔ discoverMinds returns empty array when think directory does not exist (0.194ms) +✔ discoverMinds includes repoDir for each mind (18.535583ms) +✔ shaderForMind returns a deterministic index for a given name (0.19775ms) +✔ shaderForMind returns different indices for different names (0.151041ms) +✔ shaderForMind stays within the shader count range (0.083125ms) +✔ shaderForMind throws when shaderCount is zero (0.295042ms) +✔ shaderForMind throws when shaderCount is negative (0.073791ms) +✔ shaderForMind handles single-character names (0.063833ms) +✔ createEntry returns an Entry instance (3.865333ms) +✔ Entry is frozen (0.129459ms) +✔ createEntry validates required fields (0.8815ms) +✔ createReflectSession returns a ReflectSession instance (0.37225ms) +✔ ReflectSession is frozen (0.08975ms) +✔ ENTRY_KINDS is a frozen array of valid kind strings (0.064709ms) +✔ BUCKET_PERIODS is a frozen array of valid bucket strings (0.057917ms) +✔ storesTextContent validates against ENTRY_KINDS (0.076166ms) +✔ selectLogo picks large mind logo when terminal is wide and tall enough (0.955166ms) +✔ selectLogo picks medium mind logo when terminal fits medium but not large (0.100292ms) +✔ selectLogo picks text logo when terminal is too small for mind (0.06125ms) +✔ selectLogo always returns something even for tiny terminals (0.056ms) +✔ renderSplash contains the logo (0.150458ms) +✔ renderSplash contains the Enter prompt (0.064208ms) +✔ renderSplash output fits within the given dimensions (0.069417ms) +✔ splash.js does not export renderSplashView (dead code from RE-015 workaround) (0.046583ms) +✔ renderSplash centers the prompt horizontally (0.156792ms) +✔ windowed browse model initializes in windowed mode (0.190708ms) +✔ formatStats includes a sparkline when buckets are present (1.765708ms) +✔ formatStats omits sparkline when no buckets are present (0.084542ms) +✔ formatStats handles a single bucket without crashing (0.091916ms) +✔ formatStats handles empty bucket array without sparkline (0.066792ms) +✔ formatStats sparkline is oldest-to-newest (left-to-right) (0.078666ms) +ℹ tests 63 +ℹ suites 0 +ℹ pass 63 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 0 +ℹ duration_ms 1490.267584 + +> think@0.7.0 test:m1 +> node --test test/acceptance/*.test.js + +✔ think --doctor reports health of a repo with captures (4582.488084ms) +✔ think --doctor succeeds before the first capture (344.655916ms) +✔ think --json --doctor emits a structured health report (3686.069959ms) +✔ think --doctor rejects an unexpected thought argument (306.757208ms) +✔ new capture writes graph-native relationship edges while preserving compatibility properties (3152.595208ms) +✔ think --migrate-graph upgrades a version-1 property-linked repo additively (4569.062333ms) +✔ think --migrate-graph is idempotent and safe to rerun (3137.418667ms) +✔ capture on a version-1 repo still succeeds and only migrates after the raw local save (5071.125166ms) +✔ graph-native commands fail clearly on an outdated repo outside interactive use (6225.938458ms) +✔ interactive inspect on an outdated repo shows visible upgrade progress before continuing (2997.289542ms) +✔ interactive browse on an outdated repo shows visible upgrade progress before continuing (3007.055875ms) +✔ think --json emits explicit graph migration required errors for outdated graph-native commands (3743.458417ms) +✔ think --migrate-graph upgrades a version-2 repo to graph model version 3 with browse and reflect read edges (7394.993584ms) +✔ think --json --inspect exposes direct reflect receipts that exist only through graph-native v3 edges (2490.921292ms) +✔ think --help prints top-level usage without bootstrapping local state (617.521542ms) +✔ think -h is accepted as a short alias for top-level help (410.751334ms) +✔ think --recent --help prints recent help instead of running the command (432.890875ms) +✔ think --recent -h prints recent help instead of running the command (355.214125ms) +✔ think recent --help fails and points callers to the explicit flag form (342.824375ms) +✔ think --inspect --help bypasses required entry validation (343.162416ms) +✔ think --json --help emits structured JSONL help output (415.285166ms) +✔ think recent --json --help fails machine-readably instead of acting as shorthand help (393.968125ms) +✔ think -- -h captures the literal text after option parsing is terminated (3395.075459ms) +✔ think --ingest reads stdin explicitly and captures it into the normal raw-capture core (4171.241625ms) +✔ think with stdin but without --ingest does not accidentally capture piped input (365.184209ms) +✔ think --ingest rejects mixed positional capture text and stdin capture text (385.883084ms) +✔ think --json --ingest preserves machine-readable capture semantics for agents (3450.124125ms) +✔ think --ingest rejects empty stdin payloads (316.856958ms) +✔ think --json capture emits JSONL on stdout and keeps stderr quiet when there are no warnings (2699.085166ms) +✔ think --json --recent emits entry events instead of plain text (6836.181958ms) +✔ think --json --stats emits totals and bucket rows as JSONL (4621.999875ms) +✔ think --json validation failures emit JSONL on stderr instead of stdout (280.433917ms) +✔ think --json reports backup pending as a structured warning on stderr (1418.231334ms) +✔ think --json emits deterministically sorted keys in JSONL output (2745.181959ms) +✔ think MCP server lists the core Think tools (656.924083ms) +✔ think MCP capture, recent, browse, and inspect route through the existing Think runtime (4949.84325ms) +✔ think MCP capture preserves additive provenance separately from the raw text (2840.714125ms) +✔ think MCP capture trims additive provenance strings before persistence (2049.410292ms) +✔ think MCP remember, stats, and prompt_metrics expose structured read results (5237.509208ms) +✔ think MCP doctor tool returns structured health checks (4097.861875ms) +✔ CLI raw capture bootstraps the local repo and preserves exact text (4133.778792ms) +✔ think "recent" is captured as a thought rather than triggering the list (2958.645667ms) +✔ think --recent does not bootstrap local state before the first capture (337.151458ms) +✔ think --recent rejects an unexpected thought argument (306.750958ms) +✔ capture does not require retrieval-before-write or conceptual confirmation (3807.283625ms) +✔ THINK_REPO_DIR overrides the default local repo path (2242.083125ms) +✔ reachable upstream reports local save first and backup second (1450.518209ms) +✔ unreachable upstream keeps capture successful and reports backup pending (1399.826667ms) +✔ recent stays plain and chronological (8120.739ms) +✔ capture is append-only across later capture activity (3925.255875ms) +✔ duplicate thoughts produce distinct captures rather than deduping (5482.369583ms) +✔ empty input is rejected (294.306417ms) +✔ whitespace-only input is rejected (266.835375ms) +✔ capture preserves formatting neutrality for spacing, casing, and punctuation (1949.905417ms) +✔ default user language avoids Git terminology (1215.616042ms) +✔ verbose capture emits JSONL trace updates on stderr (1226.871958ms) +✔ raw entries remain immutable after later derived entries exist (0.099583ms) # TODO +✔ stored raw entry bytes remain unchanged in the local store after later writes (0.08275ms) # TODO +✔ entry kind separation remains explicit once the first derived-entry write path exists (0.043917ms) # TODO +✔ think --prompt-metrics prints factual prompt telemetry totals and medians (568.464458ms) +✔ think --prompt-metrics does not bootstrap local state before the first capture (445.160125ms) +✔ think --prompt-metrics supports --since filtering over prompt sessions (440.905583ms) +✔ think --prompt-metrics supports --bucket=day (372.750542ms) +✔ think --json --prompt-metrics emits explicit summary, timing, and bucket rows (363.675ms) +✔ think --prompt-metrics rejects an unexpected thought argument (364.41125ms) +✔ think --prompt-metrics rejects invalid filter values (754.208667ms) +✔ think --recent --count limits output to the newest N raw captures (9507.936125ms) +✔ think --recent --query filters raw captures by case-insensitive text match (8666.366ms) +✔ removed recent alias flags fail clearly and point to the scoped forms (1734.265125ms) +✔ think --json --recent applies count and query filters while remaining JSONL-only (6242.627166ms) +✔ think --remember uses the current project context to recall relevant prior thoughts (5934.054625ms) +✔ think --remember with an explicit phrase recalls matching thoughts without turning into generic recent listing (6721.944542ms) +✔ think --json --remember emits explicit ambient scope and match receipts for agents (6043.393125ms) +✔ think --remember falls back honestly to textual project-token matching for entries without ambient project receipts (3902.270208ms) +✔ think --remember --limit returns only the top N matching thoughts in deterministic order (8079.104625ms) +✔ think --remember --brief returns a triage-friendly snippet instead of the full multiline thought (3728.992209ms) +✔ think --json --remember --brief --limit preserves bounded explicit recall receipts for agents (5777.691208ms) +✔ think --remember rejects invalid --limit values (1523.957709ms) +✔ think --browse shows one raw thought with its immediate newer and older neighbors (5827.215417ms) +✔ think --browse without an entry id fails clearly outside interactive TTY use and remains read-only (236.194125ms) +✔ think --json --browse without an entry id stays machine-readable and does not try to open the shell (236.002334ms) +✔ think --json --browse emits JSONL rows for the current raw thought and its neighbors (5853.188583ms) +✔ think --browse opens a reader-first browse TUI with metadata and no permanent recent rail (7288.658541ms) +✔ think --browse can reveal a chronology drawer on demand instead of showing the full log by default (7803.225041ms) +✔ think --browse can jump to another thought through a fuzzy jump surface (6451.520083ms) +✔ think --browse can reveal inspect receipts inside the scripted browse TUI (4940.062666ms) +✔ think --browse can hand the selected thought into reflect from the scripted browse TUI (7108.618125ms) +✔ think --browse surfaces session identity for the current thought without replacing the reader-first view (15960.900792ms) +✔ think --browse uses a short visible entry id in the reader-first metadata while inspect keeps the full exact id (6815.672583ms) +✔ think --browse can reveal a summon-only session drawer that excludes out-of-session thoughts (8495.124917ms) +✔ think --browse reveals a structured session drawer with a visible start label and current-thought marker (8612.685125ms) +✔ think --json --browse emits explicit session context and session-nearby rows without mislabeling out-of-session thoughts (7860.544334ms) +✔ think --browse can move to the previous thought within the current session without leaving reader-first browse (5688.013041ms) +✔ think --browse keeps the current thought in place when there is no next thought in the current session (6060.133458ms) +✔ think --json --browse exposes explicit session traversal semantics without conflating them with chronology neighbors (6023.113458ms) +✔ think --inspect exposes exact raw entry metadata without narration (1906.592375ms) +✔ think --json --inspect emits JSONL for the exact raw entry metadata (1895.4645ms) +✔ think --inspect exposes additive capture provenance separately from the raw text (1867.824292ms) +✔ think --json --inspect includes additive capture provenance in the inspected entry payload (1939.609875ms) +✔ think --inspect exposes canonical content identity and direct derived receipts when they exist (4009.391292ms) +✔ think --json --inspect emits canonical content identity and direct derived receipt rows (3944.273208ms) +✔ think --inspect exposes the first derived bundle as explicit raw, canonical, derived, and context sections (6270.992584ms) +✔ think --json --inspect emits canonical identity plus seed-quality and session-attribution receipts with provenance (7918.559958ms) +✔ think --json --inspect keeps duplicate raw captures distinct while linking them to the same canonical thought (9971.589583ms) +✔ think --reflect starts an explicit seeded reflect session with a deterministic seed-first challenge prompt (7039.709292ms) +✔ removed brainstorm aliases fail clearly and point to reflect (2534.448166ms) +✔ think --reflect can use an explicit sharpen prompt family (2519.901ms) +✔ think --reflect-session stores a separate derived entry with preserved seed-first lineage (8386.256375ms) +✔ think --reflect validates explicit session entry and stays read-only on invalid start (2507.722916ms) +✔ think --reflect fails clearly when the seed entry does not exist (263.118875ms) +✔ think --reflect refuses status-like seeds that are not pressure-testable ideas (8868.667708ms) +✔ think --json --reflect refuses ineligible seeds with structured machine-readable errors (7078.624125ms) +✔ think --json --reflect emits only JSONL with seed-first session and prompt data (5981.579333ms) +✔ think --json --reflect-session emits only JSONL and preserves stored seed-first lineage (2886.455042ms) +✔ think --json reflect validation failures stay fully machine-readable (257.822792ms) +✔ think --stats prints total thoughts (5210.787083ms) +✔ think --stats does not bootstrap local state before the first capture (276.577458ms) +✔ think "stats" is captured as a thought rather than triggering the command (2977.8825ms) +✔ think --stats rejects an unexpected thought argument (268.3605ms) +✔ think stats supports --since filter (5898.350958ms) +✔ think --stats rejects an invalid --since value (261.803375ms) +✔ think stats supports --from and --to filters (6244.788875ms) +✔ think --stats rejects invalid absolute date filters (270.0055ms) +✔ think stats supports --bucket=day (8049.732833ms) +✔ think --stats --bucket=day includes a sparkline in text output (6256.0335ms) +✔ think --stats --bucket=day --json includes sparkline in stats.total event (7817.698042ms) +✔ think --stats without --bucket omits sparkline (1709.9565ms) +✔ think --stats rejects an invalid bucket value (243.723208ms) +ℹ tests 128 +ℹ suites 0 +ℹ pass 125 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 3 +ℹ duration_ms 221909.347959 + +``` + +## Drift Results + +```text +Playback-question drift found. +Scanned 1 active cycle, 2 playback questions, 0 test descriptions. +Search basis: exact normalized match in tests/**/*.test.* and tests/**/*.spec.* descriptions. + +docs/design/0052-ssjr-src-store-prompt-metrics-js/ssjr-src-store-prompt-metrics-js.md +- Human: TBD + No exact normalized test description match found. +- Agent: TBD + No exact normalized test description match found. + +``` + +## Manual Verification + +- [x] Automated capture completed successfully. diff --git a/docs/method/retro/0053-ssjr-src-store-remember-js/ssjr-src-store-remember-js.md b/docs/method/retro/0053-ssjr-src-store-remember-js/ssjr-src-store-remember-js.md new file mode 100644 index 0000000..a218315 --- /dev/null +++ b/docs/method/retro/0053-ssjr-src-store-remember-js/ssjr-src-store-remember-js.md @@ -0,0 +1,36 @@ +--- +title: "Raise SSJR grades for `src/store/remember.js`" +cycle: "0053-ssjr-src-store-remember-js" +design_doc: "docs/design/0053-ssjr-src-store-remember-js/ssjr-src-store-remember-js.md" +outcome: hill-met +drift_check: yes +--- + +# Raise SSJR grades for `src/store/remember.js` Retro + +## Summary + +Froze all scope and match return objects (4 functions). Arrays +inside frozen objects also frozen to prevent deep mutation. + +## Playback Witness + +Add artifacts under `docs/method/retro/0053-ssjr-src-store-remember-js/witness` and link them here. + +## Drift + +- None recorded. + +## New Debt + +- None recorded. + +## Cool Ideas + +- None recorded. + +## Backlog Maintenance + +- [ ] Inbox processed +- [ ] Priorities reviewed +- [ ] Dead work buried or merged diff --git a/docs/method/retro/0053-ssjr-src-store-remember-js/witness/verification.md b/docs/method/retro/0053-ssjr-src-store-remember-js/witness/verification.md new file mode 100644 index 0000000..261526a --- /dev/null +++ b/docs/method/retro/0053-ssjr-src-store-remember-js/witness/verification.md @@ -0,0 +1,252 @@ +--- +title: "Verification Witness for Cycle 53" +--- + +# Verification Witness for Cycle 53 + +This witness proves that `Raise SSJR grades for `src/store/remember.js`` now carries the required +behavior and adheres to the repo invariants. + +## Test Results + +```text + +> think@0.7.0 test +> npm run test:ports && npm run test:m1 + + +> think@0.7.0 test:ports +> node --test test/ports/*.test.js + +✔ BG_TOKEN is exported from style.js alongside the palette (0.786958ms) +✔ windowed browse initializes with no drawer open (18.833583ms) +✔ saveRawCapture writes cwd receipts first and defers git enrichment to followthrough (1176.255084ms) +✔ capture provenance exports the canonical ingress set (1.562125ms) +✔ capture provenance trims source strings while preserving valid ingress and URL (0.156ms) +✔ capture provenance trims ingress strings before validation (0.066792ms) +✔ capture provenance rejects dangerous URL schemes (0.078208ms) +✔ capture provenance accepts safe URL schemes (0.105583ms) +✔ normalizeCaptureProvenance returns a frozen CaptureProvenance instance (0.0575ms) +✔ capture provenance reads and normalizes environment input (0.097792ms) +✔ METHOD docs use one consistent cycle-only release and README closeout policy (2.39675ms) +✔ MIND_ORCHESTRATION.md exists and is linked from GUIDE.md (0.682291ms) +✔ cycle 0006 retrospective restarts ordered numbering for the human playback section (0.545ms) +✔ runDiagnostics reports ok for a healthy repo with entries (22.28475ms) +✔ runDiagnostics reports fail when think directory does not exist (0.192ms) +✔ runDiagnostics reports fail when local repo has no git init (1.421583ms) +✔ runDiagnostics reports ok for upstream when reachable (20.697792ms) +✔ runDiagnostics reports warn for upstream when unreachable (18.458833ms) +✔ runDiagnostics reports skip for upstream when URL is set but no checker provided (21.195334ms) +✔ runDiagnostics reports skip for upstream when not configured (19.679875ms) +✔ runDiagnostics reports skip for upstream when configured without checker (23.924708ms) +✔ runDiagnostics includes all expected check names (20.131666ms) +✔ runDiagnostics reports graph model version when available (20.109ms) +✔ runDiagnostics warns when graph model needs migration (18.402125ms) +✔ runDiagnostics reports entry count when available (18.70625ms) +✔ runDiagnostics warns when entry count is zero (19.740584ms) +✔ runDiagnostics skips graph and entry checks when no repo exists (0.255417ms) +✔ shared JSON helper canonicalizes object keys deterministically on parse and stringify (1.635708ms) +✔ discoverMinds finds all valid repos under the think directory (70.783208ms) +✔ discoverMinds ignores directories without git repos (23.201958ms) +✔ discoverMinds labels ~/.think/repo as "default" (19.260625ms) +✔ discoverMinds sorts with default first, then alphabetical (64.863792ms) +✔ discoverMinds returns empty array when think directory does not exist (0.192542ms) +✔ discoverMinds includes repoDir for each mind (18.451834ms) +✔ shaderForMind returns a deterministic index for a given name (0.177958ms) +✔ shaderForMind returns different indices for different names (0.085042ms) +✔ shaderForMind stays within the shader count range (0.178583ms) +✔ shaderForMind throws when shaderCount is zero (0.354333ms) +✔ shaderForMind throws when shaderCount is negative (0.086375ms) +✔ shaderForMind handles single-character names (0.068291ms) +✔ createEntry returns an Entry instance (2.717875ms) +✔ Entry is frozen (0.139292ms) +✔ createEntry validates required fields (0.793083ms) +✔ createReflectSession returns a ReflectSession instance (0.12825ms) +✔ ReflectSession is frozen (0.0815ms) +✔ ENTRY_KINDS is a frozen array of valid kind strings (0.058625ms) +✔ BUCKET_PERIODS is a frozen array of valid bucket strings (0.057625ms) +✔ storesTextContent validates against ENTRY_KINDS (0.071042ms) +✔ selectLogo picks large mind logo when terminal is wide and tall enough (0.927125ms) +✔ selectLogo picks medium mind logo when terminal fits medium but not large (0.1045ms) +✔ selectLogo picks text logo when terminal is too small for mind (0.062458ms) +✔ selectLogo always returns something even for tiny terminals (0.05675ms) +✔ renderSplash contains the logo (0.145625ms) +✔ renderSplash contains the Enter prompt (0.066625ms) +✔ renderSplash output fits within the given dimensions (0.070334ms) +✔ splash.js does not export renderSplashView (dead code from RE-015 workaround) (0.044334ms) +✔ renderSplash centers the prompt horizontally (0.152583ms) +✔ windowed browse model initializes in windowed mode (0.192875ms) +✔ formatStats includes a sparkline when buckets are present (1.702708ms) +✔ formatStats omits sparkline when no buckets are present (0.084958ms) +✔ formatStats handles a single bucket without crashing (0.098709ms) +✔ formatStats handles empty bucket array without sparkline (0.06975ms) +✔ formatStats sparkline is oldest-to-newest (left-to-right) (0.078875ms) +ℹ tests 63 +ℹ suites 0 +ℹ pass 63 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 0 +ℹ duration_ms 1566.522791 + +> think@0.7.0 test:m1 +> node --test test/acceptance/*.test.js + +✔ think --doctor reports health of a repo with captures (3582.9635ms) +✔ think --doctor succeeds before the first capture (333.108875ms) +✔ think --json --doctor emits a structured health report (3107.363625ms) +✔ think --doctor rejects an unexpected thought argument (323.166333ms) +✔ new capture writes graph-native relationship edges while preserving compatibility properties (2510.036917ms) +✔ think --migrate-graph upgrades a version-1 property-linked repo additively (3747.352083ms) +✔ think --migrate-graph is idempotent and safe to rerun (3390.06375ms) +✔ capture on a version-1 repo still succeeds and only migrates after the raw local save (5536.262167ms) +✔ graph-native commands fail clearly on an outdated repo outside interactive use (4738.807584ms) +✔ interactive inspect on an outdated repo shows visible upgrade progress before continuing (3081.897917ms) +✔ interactive browse on an outdated repo shows visible upgrade progress before continuing (3171.304ms) +✔ think --json emits explicit graph migration required errors for outdated graph-native commands (2370.950417ms) +✔ think --migrate-graph upgrades a version-2 repo to graph model version 3 with browse and reflect read edges (7328.51325ms) +✔ think --json --inspect exposes direct reflect receipts that exist only through graph-native v3 edges (2560.145ms) +✔ think --help prints top-level usage without bootstrapping local state (613.175916ms) +✔ think -h is accepted as a short alias for top-level help (376.168708ms) +✔ think --recent --help prints recent help instead of running the command (324.450042ms) +✔ think --recent -h prints recent help instead of running the command (318.941541ms) +✔ think recent --help fails and points callers to the explicit flag form (338.192208ms) +✔ think --inspect --help bypasses required entry validation (326.34975ms) +✔ think --json --help emits structured JSONL help output (371.342458ms) +✔ think recent --json --help fails machine-readably instead of acting as shorthand help (292.992833ms) +✔ think -- -h captures the literal text after option parsing is terminated (2715.458875ms) +✔ think --ingest reads stdin explicitly and captures it into the normal raw-capture core (3331.077834ms) +✔ think with stdin but without --ingest does not accidentally capture piped input (391.788791ms) +✔ think --ingest rejects mixed positional capture text and stdin capture text (337.662291ms) +✔ think --json --ingest preserves machine-readable capture semantics for agents (2757.798625ms) +✔ think --ingest rejects empty stdin payloads (372.014584ms) +✔ think --json capture emits JSONL on stdout and keeps stderr quiet when there are no warnings (2190.517917ms) +✔ think --json --recent emits entry events instead of plain text (5959.868584ms) +✔ think --json --stats emits totals and bucket rows as JSONL (4658.880459ms) +✔ think --json validation failures emit JSONL on stderr instead of stdout (292.378084ms) +✔ think --json reports backup pending as a structured warning on stderr (1524.60175ms) +✔ think --json emits deterministically sorted keys in JSONL output (2245.094625ms) +✔ think MCP server lists the core Think tools (613.498208ms) +✔ think MCP capture, recent, browse, and inspect route through the existing Think runtime (3728.8535ms) +✔ think MCP capture preserves additive provenance separately from the raw text (2831.957375ms) +✔ think MCP capture trims additive provenance strings before persistence (2017.422834ms) +✔ think MCP remember, stats, and prompt_metrics expose structured read results (5845.3265ms) +✔ think MCP doctor tool returns structured health checks (2577.856875ms) +✔ CLI raw capture bootstraps the local repo and preserves exact text (3273.533917ms) +✔ think "recent" is captured as a thought rather than triggering the list (2783.904541ms) +✔ think --recent does not bootstrap local state before the first capture (322.991583ms) +✔ think --recent rejects an unexpected thought argument (294.737875ms) +✔ capture does not require retrieval-before-write or conceptual confirmation (3787.80175ms) +✔ THINK_REPO_DIR overrides the default local repo path (2313.021125ms) +✔ reachable upstream reports local save first and backup second (1637.727458ms) +✔ unreachable upstream keeps capture successful and reports backup pending (1677.638459ms) +✔ recent stays plain and chronological (6558.379166ms) +✔ capture is append-only across later capture activity (4065.495708ms) +✔ duplicate thoughts produce distinct captures rather than deduping (4005.669083ms) +✔ empty input is rejected (281.539542ms) +✔ whitespace-only input is rejected (262.4225ms) +✔ capture preserves formatting neutrality for spacing, casing, and punctuation (1955.644709ms) +✔ default user language avoids Git terminology (1255.802083ms) +✔ verbose capture emits JSONL trace updates on stderr (1254.358458ms) +✔ raw entries remain immutable after later derived entries exist (0.112625ms) # TODO +✔ stored raw entry bytes remain unchanged in the local store after later writes (0.03325ms) # TODO +✔ entry kind separation remains explicit once the first derived-entry write path exists (0.027584ms) # TODO +✔ think --prompt-metrics prints factual prompt telemetry totals and medians (608.826542ms) +✔ think --prompt-metrics does not bootstrap local state before the first capture (377.701375ms) +✔ think --prompt-metrics supports --since filtering over prompt sessions (326.161292ms) +✔ think --prompt-metrics supports --bucket=day (318.237958ms) +✔ think --json --prompt-metrics emits explicit summary, timing, and bucket rows (341.231459ms) +✔ think --prompt-metrics rejects an unexpected thought argument (317.76325ms) +✔ think --prompt-metrics rejects invalid filter values (672.570625ms) +✔ think --recent --count limits output to the newest N raw captures (8623.748708ms) +✔ think --recent --query filters raw captures by case-insensitive text match (7730.328084ms) +✔ removed recent alias flags fail clearly and point to the scoped forms (1723.510125ms) +✔ think --json --recent applies count and query filters while remaining JSONL-only (6580.273917ms) +✔ think --remember uses the current project context to recall relevant prior thoughts (4478.987958ms) +✔ think --remember with an explicit phrase recalls matching thoughts without turning into generic recent listing (6733.967375ms) +✔ think --json --remember emits explicit ambient scope and match receipts for agents (4016.743334ms) +✔ think --remember falls back honestly to textual project-token matching for entries without ambient project receipts (4247.382125ms) +✔ think --remember --limit returns only the top N matching thoughts in deterministic order (8452.643375ms) +✔ think --remember --brief returns a triage-friendly snippet instead of the full multiline thought (3557.349042ms) +✔ think --json --remember --brief --limit preserves bounded explicit recall receipts for agents (5375.937125ms) +✔ think --remember rejects invalid --limit values (1425.241125ms) +✔ think --browse shows one raw thought with its immediate newer and older neighbors (5481.659458ms) +✔ think --browse without an entry id fails clearly outside interactive TTY use and remains read-only (240.137ms) +✔ think --json --browse without an entry id stays machine-readable and does not try to open the shell (230.901625ms) +✔ think --json --browse emits JSONL rows for the current raw thought and its neighbors (5563.097333ms) +✔ think --browse opens a reader-first browse TUI with metadata and no permanent recent rail (6144.552125ms) +✔ think --browse can reveal a chronology drawer on demand instead of showing the full log by default (5345.048125ms) +✔ think --browse can jump to another thought through a fuzzy jump surface (5245.08225ms) +✔ think --browse can reveal inspect receipts inside the scripted browse TUI (3350.500792ms) +✔ think --browse can hand the selected thought into reflect from the scripted browse TUI (3319.631042ms) +✔ think --browse surfaces session identity for the current thought without replacing the reader-first view (7802.349875ms) +✔ think --browse uses a short visible entry id in the reader-first metadata while inspect keeps the full exact id (6569.20325ms) +✔ think --browse can reveal a summon-only session drawer that excludes out-of-session thoughts (7525.397292ms) +✔ think --browse reveals a structured session drawer with a visible start label and current-thought marker (7646.955833ms) +✔ think --json --browse emits explicit session context and session-nearby rows without mislabeling out-of-session thoughts (7678.173459ms) +✔ think --browse can move to the previous thought within the current session without leaving reader-first browse (5261.061542ms) +✔ think --browse keeps the current thought in place when there is no next thought in the current session (5382.249208ms) +✔ think --json --browse exposes explicit session traversal semantics without conflating them with chronology neighbors (5801.065792ms) +✔ think --inspect exposes exact raw entry metadata without narration (1813.542667ms) +✔ think --json --inspect emits JSONL for the exact raw entry metadata (1754.28725ms) +✔ think --inspect exposes additive capture provenance separately from the raw text (1808.160291ms) +✔ think --json --inspect includes additive capture provenance in the inspected entry payload (1744.224667ms) +✔ think --inspect exposes canonical content identity and direct derived receipts when they exist (3534.255042ms) +✔ think --json --inspect emits canonical content identity and direct derived receipt rows (3487.493625ms) +✔ think --inspect exposes the first derived bundle as explicit raw, canonical, derived, and context sections (5440.001708ms) +✔ think --json --inspect emits canonical identity plus seed-quality and session-attribution receipts with provenance (5414.49625ms) +✔ think --json --inspect keeps duplicate raw captures distinct while linking them to the same canonical thought (4443.855916ms) +✔ think --reflect starts an explicit seeded reflect session with a deterministic seed-first challenge prompt (6145.098167ms) +✔ removed brainstorm aliases fail clearly and point to reflect (2519.48675ms) +✔ think --reflect can use an explicit sharpen prompt family (2889.7075ms) +✔ think --reflect-session stores a separate derived entry with preserved seed-first lineage (7194.349625ms) +✔ think --reflect validates explicit session entry and stays read-only on invalid start (2592.261292ms) +✔ think --reflect fails clearly when the seed entry does not exist (268.229375ms) +✔ think --reflect refuses status-like seeds that are not pressure-testable ideas (7431.357959ms) +✔ think --json --reflect refuses ineligible seeds with structured machine-readable errors (7178.357834ms) +✔ think --json --reflect emits only JSONL with seed-first session and prompt data (4022.81425ms) +✔ think --json --reflect-session emits only JSONL and preserves stored seed-first lineage (3176.537958ms) +✔ think --json reflect validation failures stay fully machine-readable (260.390708ms) +✔ think --stats prints total thoughts (5034.983333ms) +✔ think --stats does not bootstrap local state before the first capture (278.29975ms) +✔ think "stats" is captured as a thought rather than triggering the command (3254.877417ms) +✔ think --stats rejects an unexpected thought argument (284.731125ms) +✔ think stats supports --since filter (4484.092167ms) +✔ think --stats rejects an invalid --since value (263.367167ms) +✔ think stats supports --from and --to filters (6533.030084ms) +✔ think --stats rejects invalid absolute date filters (280.692792ms) +✔ think stats supports --bucket=day (6539.913083ms) +✔ think --stats --bucket=day includes a sparkline in text output (6336.468666ms) +✔ think --stats --bucket=day --json includes sparkline in stats.total event (6134.257292ms) +✔ think --stats without --bucket omits sparkline (1817.014708ms) +✔ think --stats rejects an invalid bucket value (248.064042ms) +ℹ tests 128 +ℹ suites 0 +ℹ pass 125 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 3 +ℹ duration_ms 183487.726875 + +``` + +## Drift Results + +```text +Playback-question drift found. +Scanned 1 active cycle, 2 playback questions, 0 test descriptions. +Search basis: exact normalized match in tests/**/*.test.* and tests/**/*.spec.* descriptions. + +docs/design/0053-ssjr-src-store-remember-js/ssjr-src-store-remember-js.md +- Human: TBD + No exact normalized test description match found. +- Agent: TBD + No exact normalized test description match found. + +``` + +## Manual Verification + +- [x] Automated capture completed successfully. diff --git a/docs/method/retro/0054-ssjr-src-cli-commands-reflect-js/ssjr-src-cli-commands-reflect-js.md b/docs/method/retro/0054-ssjr-src-cli-commands-reflect-js/ssjr-src-cli-commands-reflect-js.md new file mode 100644 index 0000000..c4db16c --- /dev/null +++ b/docs/method/retro/0054-ssjr-src-cli-commands-reflect-js/ssjr-src-cli-commands-reflect-js.md @@ -0,0 +1,37 @@ +--- +title: "Raise SSJR grades for `src/cli/commands/reflect.js`" +cycle: "0054-ssjr-src-cli-commands-reflect-js" +design_doc: "docs/design/0054-ssjr-src-cli-commands-reflect-js/ssjr-src-cli-commands-reflect-js.md" +outcome: hill-met +drift_check: yes +--- + +# Raise SSJR grades for `src/cli/commands/reflect.js` Retro + +## Summary + +Already cleaned by cycle 0050 (DRY capitalize). File is 243 lines, +structurally correct orchestration code. Remaining backlog concerns +(raw result bags) require deeper refactoring beyond quick fixes. + +## Playback Witness + +Add artifacts under `docs/method/retro/0054-ssjr-src-cli-commands-reflect-js/witness` and link them here. + +## Drift + +- None recorded. + +## New Debt + +- None recorded. + +## Cool Ideas + +- None recorded. + +## Backlog Maintenance + +- [ ] Inbox processed +- [ ] Priorities reviewed +- [ ] Dead work buried or merged diff --git a/docs/method/retro/0054-ssjr-src-cli-commands-reflect-js/witness/verification.md b/docs/method/retro/0054-ssjr-src-cli-commands-reflect-js/witness/verification.md new file mode 100644 index 0000000..2fc0594 --- /dev/null +++ b/docs/method/retro/0054-ssjr-src-cli-commands-reflect-js/witness/verification.md @@ -0,0 +1,252 @@ +--- +title: "Verification Witness for Cycle 54" +--- + +# Verification Witness for Cycle 54 + +This witness proves that `Raise SSJR grades for `src/cli/commands/reflect.js`` now carries the required +behavior and adheres to the repo invariants. + +## Test Results + +```text + +> think@0.7.0 test +> npm run test:ports && npm run test:m1 + + +> think@0.7.0 test:ports +> node --test test/ports/*.test.js + +✔ BG_TOKEN is exported from style.js alongside the palette (0.82725ms) +✔ windowed browse initializes with no drawer open (19.909084ms) +✔ saveRawCapture writes cwd receipts first and defers git enrichment to followthrough (1128.788583ms) +✔ capture provenance exports the canonical ingress set (1.558334ms) +✔ capture provenance trims source strings while preserving valid ingress and URL (0.16425ms) +✔ capture provenance trims ingress strings before validation (0.072083ms) +✔ capture provenance rejects dangerous URL schemes (0.075208ms) +✔ capture provenance accepts safe URL schemes (0.106916ms) +✔ normalizeCaptureProvenance returns a frozen CaptureProvenance instance (0.058292ms) +✔ capture provenance reads and normalizes environment input (0.09025ms) +✔ METHOD docs use one consistent cycle-only release and README closeout policy (1.943916ms) +✔ MIND_ORCHESTRATION.md exists and is linked from GUIDE.md (0.647375ms) +✔ cycle 0006 retrospective restarts ordered numbering for the human playback section (0.3875ms) +✔ runDiagnostics reports ok for a healthy repo with entries (33.456ms) +✔ runDiagnostics reports fail when think directory does not exist (0.222083ms) +✔ runDiagnostics reports fail when local repo has no git init (1.406916ms) +✔ runDiagnostics reports ok for upstream when reachable (25.890041ms) +✔ runDiagnostics reports warn for upstream when unreachable (31.480166ms) +✔ runDiagnostics reports skip for upstream when URL is set but no checker provided (18.77225ms) +✔ runDiagnostics reports skip for upstream when not configured (18.723709ms) +✔ runDiagnostics reports skip for upstream when configured without checker (17.940958ms) +✔ runDiagnostics includes all expected check names (17.6635ms) +✔ runDiagnostics reports graph model version when available (16.693583ms) +✔ runDiagnostics warns when graph model needs migration (17.745417ms) +✔ runDiagnostics reports entry count when available (20.323292ms) +✔ runDiagnostics warns when entry count is zero (16.557208ms) +✔ runDiagnostics skips graph and entry checks when no repo exists (0.164ms) +✔ shared JSON helper canonicalizes object keys deterministically on parse and stringify (1.6515ms) +✔ discoverMinds finds all valid repos under the think directory (97.329292ms) +✔ discoverMinds ignores directories without git repos (21.514209ms) +✔ discoverMinds labels ~/.think/repo as "default" (18.411208ms) +✔ discoverMinds sorts with default first, then alphabetical (53.932083ms) +✔ discoverMinds returns empty array when think directory does not exist (0.14625ms) +✔ discoverMinds includes repoDir for each mind (21.20875ms) +✔ shaderForMind returns a deterministic index for a given name (0.160459ms) +✔ shaderForMind returns different indices for different names (0.081958ms) +✔ shaderForMind stays within the shader count range (0.069708ms) +✔ shaderForMind throws when shaderCount is zero (0.297333ms) +✔ shaderForMind throws when shaderCount is negative (0.090417ms) +✔ shaderForMind handles single-character names (0.06775ms) +✔ createEntry returns an Entry instance (2.02575ms) +✔ Entry is frozen (0.4065ms) +✔ createEntry validates required fields (1.52475ms) +✔ createReflectSession returns a ReflectSession instance (0.239333ms) +✔ ReflectSession is frozen (0.362542ms) +✔ ENTRY_KINDS is a frozen array of valid kind strings (0.101ms) +✔ BUCKET_PERIODS is a frozen array of valid bucket strings (0.068333ms) +✔ storesTextContent validates against ENTRY_KINDS (0.07325ms) +✔ selectLogo picks large mind logo when terminal is wide and tall enough (0.997375ms) +✔ selectLogo picks medium mind logo when terminal fits medium but not large (0.135625ms) +✔ selectLogo picks text logo when terminal is too small for mind (0.066125ms) +✔ selectLogo always returns something even for tiny terminals (0.058458ms) +✔ renderSplash contains the logo (0.160583ms) +✔ renderSplash contains the Enter prompt (0.061583ms) +✔ renderSplash output fits within the given dimensions (0.067875ms) +✔ splash.js does not export renderSplashView (dead code from RE-015 workaround) (0.045709ms) +✔ renderSplash centers the prompt horizontally (0.199875ms) +✔ windowed browse model initializes in windowed mode (0.220792ms) +✔ formatStats includes a sparkline when buckets are present (1.977ms) +✔ formatStats omits sparkline when no buckets are present (0.101875ms) +✔ formatStats handles a single bucket without crashing (0.100375ms) +✔ formatStats handles empty bucket array without sparkline (0.076959ms) +✔ formatStats sparkline is oldest-to-newest (left-to-right) (0.084375ms) +ℹ tests 63 +ℹ suites 0 +ℹ pass 63 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 0 +ℹ duration_ms 1507.663917 + +> think@0.7.0 test:m1 +> node --test test/acceptance/*.test.js + +✔ think --doctor reports health of a repo with captures (3587.663833ms) +✔ think --doctor succeeds before the first capture (329.790416ms) +✔ think --json --doctor emits a structured health report (3102.059333ms) +✔ think --doctor rejects an unexpected thought argument (311.458709ms) +✔ new capture writes graph-native relationship edges while preserving compatibility properties (2359.152042ms) +✔ think --migrate-graph upgrades a version-1 property-linked repo additively (3711.15475ms) +✔ think --migrate-graph is idempotent and safe to rerun (3265.288625ms) +✔ capture on a version-1 repo still succeeds and only migrates after the raw local save (5411.490542ms) +✔ graph-native commands fail clearly on an outdated repo outside interactive use (4553.0555ms) +✔ interactive inspect on an outdated repo shows visible upgrade progress before continuing (2961.735709ms) +✔ interactive browse on an outdated repo shows visible upgrade progress before continuing (2887.775ms) +✔ think --json emits explicit graph migration required errors for outdated graph-native commands (2349.803458ms) +✔ think --migrate-graph upgrades a version-2 repo to graph model version 3 with browse and reflect read edges (7414.523125ms) +✔ think --json --inspect exposes direct reflect receipts that exist only through graph-native v3 edges (2585.412708ms) +✔ think --help prints top-level usage without bootstrapping local state (516.433375ms) +✔ think -h is accepted as a short alias for top-level help (377.699542ms) +✔ think --recent --help prints recent help instead of running the command (345.907166ms) +✔ think --recent -h prints recent help instead of running the command (320.332ms) +✔ think recent --help fails and points callers to the explicit flag form (312.992875ms) +✔ think --inspect --help bypasses required entry validation (314.686292ms) +✔ think --json --help emits structured JSONL help output (398.134834ms) +✔ think recent --json --help fails machine-readably instead of acting as shorthand help (355.33425ms) +✔ think -- -h captures the literal text after option parsing is terminated (2747.766792ms) +✔ think --ingest reads stdin explicitly and captures it into the normal raw-capture core (3368.600625ms) +✔ think with stdin but without --ingest does not accidentally capture piped input (304.695084ms) +✔ think --ingest rejects mixed positional capture text and stdin capture text (388.934958ms) +✔ think --json --ingest preserves machine-readable capture semantics for agents (2911.97125ms) +✔ think --ingest rejects empty stdin payloads (347.648917ms) +✔ think --json capture emits JSONL on stdout and keeps stderr quiet when there are no warnings (2087.343333ms) +✔ think --json --recent emits entry events instead of plain text (5779.189ms) +✔ think --json --stats emits totals and bucket rows as JSONL (4718.911708ms) +✔ think --json validation failures emit JSONL on stderr instead of stdout (340.565583ms) +✔ think --json reports backup pending as a structured warning on stderr (1474.006334ms) +✔ think --json emits deterministically sorted keys in JSONL output (2097.681917ms) +✔ think MCP server lists the core Think tools (489.649042ms) +✔ think MCP capture, recent, browse, and inspect route through the existing Think runtime (3786.497666ms) +✔ think MCP capture preserves additive provenance separately from the raw text (2588.757ms) +✔ think MCP capture trims additive provenance strings before persistence (2096.787625ms) +✔ think MCP remember, stats, and prompt_metrics expose structured read results (5674.713625ms) +✔ think MCP doctor tool returns structured health checks (2393.95975ms) +✔ CLI raw capture bootstraps the local repo and preserves exact text (3283.982667ms) +✔ think "recent" is captured as a thought rather than triggering the list (2799.350209ms) +✔ think --recent does not bootstrap local state before the first capture (301.635042ms) +✔ think --recent rejects an unexpected thought argument (293.230917ms) +✔ capture does not require retrieval-before-write or conceptual confirmation (3677.006333ms) +✔ THINK_REPO_DIR overrides the default local repo path (2422.539375ms) +✔ reachable upstream reports local save first and backup second (1579.607166ms) +✔ unreachable upstream keeps capture successful and reports backup pending (1323.088833ms) +✔ recent stays plain and chronological (6371.517709ms) +✔ capture is append-only across later capture activity (3760.758166ms) +✔ duplicate thoughts produce distinct captures rather than deduping (3906.840667ms) +✔ empty input is rejected (264.203291ms) +✔ whitespace-only input is rejected (260.094084ms) +✔ capture preserves formatting neutrality for spacing, casing, and punctuation (1864.868208ms) +✔ default user language avoids Git terminology (1361.053958ms) +✔ verbose capture emits JSONL trace updates on stderr (1377.984792ms) +✔ raw entries remain immutable after later derived entries exist (0.213333ms) # TODO +✔ stored raw entry bytes remain unchanged in the local store after later writes (0.033042ms) # TODO +✔ entry kind separation remains explicit once the first derived-entry write path exists (0.0385ms) # TODO +✔ think --prompt-metrics prints factual prompt telemetry totals and medians (479.955334ms) +✔ think --prompt-metrics does not bootstrap local state before the first capture (370.464792ms) +✔ think --prompt-metrics supports --since filtering over prompt sessions (340.773959ms) +✔ think --prompt-metrics supports --bucket=day (334.483625ms) +✔ think --json --prompt-metrics emits explicit summary, timing, and bucket rows (345.939875ms) +✔ think --prompt-metrics rejects an unexpected thought argument (318.378125ms) +✔ think --prompt-metrics rejects invalid filter values (688.626958ms) +✔ think --recent --count limits output to the newest N raw captures (8681.433209ms) +✔ think --recent --query filters raw captures by case-insensitive text match (7335.738542ms) +✔ removed recent alias flags fail clearly and point to the scoped forms (1664.77325ms) +✔ think --json --recent applies count and query filters while remaining JSONL-only (6134.779917ms) +✔ think --remember uses the current project context to recall relevant prior thoughts (4242.71725ms) +✔ think --remember with an explicit phrase recalls matching thoughts without turning into generic recent listing (7071.701959ms) +✔ think --json --remember emits explicit ambient scope and match receipts for agents (3880.860416ms) +✔ think --remember falls back honestly to textual project-token matching for entries without ambient project receipts (3831.259792ms) +✔ think --remember --limit returns only the top N matching thoughts in deterministic order (8241.783459ms) +✔ think --remember --brief returns a triage-friendly snippet instead of the full multiline thought (3689.91725ms) +✔ think --json --remember --brief --limit preserves bounded explicit recall receipts for agents (5723.133583ms) +✔ think --remember rejects invalid --limit values (1503.155834ms) +✔ think --browse shows one raw thought with its immediate newer and older neighbors (5797.415ms) +✔ think --browse without an entry id fails clearly outside interactive TTY use and remains read-only (252.585625ms) +✔ think --json --browse without an entry id stays machine-readable and does not try to open the shell (246.967416ms) +✔ think --json --browse emits JSONL rows for the current raw thought and its neighbors (5758.370959ms) +✔ think --browse opens a reader-first browse TUI with metadata and no permanent recent rail (6517.196834ms) +✔ think --browse can reveal a chronology drawer on demand instead of showing the full log by default (5610.4525ms) +✔ think --browse can jump to another thought through a fuzzy jump surface (5562.1265ms) +✔ think --browse can reveal inspect receipts inside the scripted browse TUI (3531.668959ms) +✔ think --browse can hand the selected thought into reflect from the scripted browse TUI (3469.868875ms) +✔ think --browse surfaces session identity for the current thought without replacing the reader-first view (8011.736083ms) +✔ think --browse uses a short visible entry id in the reader-first metadata while inspect keeps the full exact id (6955.22825ms) +✔ think --browse can reveal a summon-only session drawer that excludes out-of-session thoughts (8279.200958ms) +✔ think --browse reveals a structured session drawer with a visible start label and current-thought marker (8061.005209ms) +✔ think --json --browse emits explicit session context and session-nearby rows without mislabeling out-of-session thoughts (8325.869375ms) +✔ think --browse can move to the previous thought within the current session without leaving reader-first browse (5554.059584ms) +✔ think --browse keeps the current thought in place when there is no next thought in the current session (5816.590333ms) +✔ think --json --browse exposes explicit session traversal semantics without conflating them with chronology neighbors (5837.657042ms) +✔ think --inspect exposes exact raw entry metadata without narration (1917.699875ms) +✔ think --json --inspect emits JSONL for the exact raw entry metadata (1915.336375ms) +✔ think --inspect exposes additive capture provenance separately from the raw text (1906.563416ms) +✔ think --json --inspect includes additive capture provenance in the inspected entry payload (2007.08725ms) +✔ think --inspect exposes canonical content identity and direct derived receipts when they exist (4056.693958ms) +✔ think --json --inspect emits canonical content identity and direct derived receipt rows (3941.2225ms) +✔ think --inspect exposes the first derived bundle as explicit raw, canonical, derived, and context sections (6092.272583ms) +✔ think --json --inspect emits canonical identity plus seed-quality and session-attribution receipts with provenance (6100.071ms) +✔ think --json --inspect keeps duplicate raw captures distinct while linking them to the same canonical thought (4672.486292ms) +✔ think --reflect starts an explicit seeded reflect session with a deterministic seed-first challenge prompt (6122.19925ms) +✔ removed brainstorm aliases fail clearly and point to reflect (2597.554916ms) +✔ think --reflect can use an explicit sharpen prompt family (2735.415125ms) +✔ think --reflect-session stores a separate derived entry with preserved seed-first lineage (6593.909ms) +✔ think --reflect validates explicit session entry and stays read-only on invalid start (2507.418291ms) +✔ think --reflect fails clearly when the seed entry does not exist (261.869875ms) +✔ think --reflect refuses status-like seeds that are not pressure-testable ideas (6987.369458ms) +✔ think --json --reflect refuses ineligible seeds with structured machine-readable errors (7462.323375ms) +✔ think --json --reflect emits only JSONL with seed-first session and prompt data (3769.895042ms) +✔ think --json --reflect-session emits only JSONL and preserves stored seed-first lineage (2842.658666ms) +✔ think --json reflect validation failures stay fully machine-readable (257.564625ms) +✔ think --stats prints total thoughts (5022.016417ms) +✔ think --stats does not bootstrap local state before the first capture (283.26775ms) +✔ think "stats" is captured as a thought rather than triggering the command (3238.001958ms) +✔ think --stats rejects an unexpected thought argument (307.189084ms) +✔ think stats supports --since filter (4137.2405ms) +✔ think --stats rejects an invalid --since value (267.95225ms) +✔ think stats supports --from and --to filters (6108.22ms) +✔ think --stats rejects invalid absolute date filters (255.543125ms) +✔ think stats supports --bucket=day (6153.632125ms) +✔ think --stats --bucket=day includes a sparkline in text output (6637.313959ms) +✔ think --stats --bucket=day --json includes sparkline in stats.total event (5548.544416ms) +✔ think --stats without --bucket omits sparkline (1775.32425ms) +✔ think --stats rejects an invalid bucket value (244.611ms) +ℹ tests 128 +ℹ suites 0 +ℹ pass 125 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 3 +ℹ duration_ms 190564.491583 + +``` + +## Drift Results + +```text +Playback-question drift found. +Scanned 1 active cycle, 2 playback questions, 0 test descriptions. +Search basis: exact normalized match in tests/**/*.test.* and tests/**/*.spec.* descriptions. + +docs/design/0054-ssjr-src-cli-commands-reflect-js/ssjr-src-cli-commands-reflect-js.md +- Human: TBD + No exact normalized test description match found. +- Agent: TBD + No exact normalized test description match found. + +``` + +## Manual Verification + +- [x] Automated capture completed successfully. diff --git a/docs/method/retro/0055-ssjr-src-store-derivation-js/ssjr-src-store-derivation-js.md b/docs/method/retro/0055-ssjr-src-store-derivation-js/ssjr-src-store-derivation-js.md new file mode 100644 index 0000000..fa7d66f --- /dev/null +++ b/docs/method/retro/0055-ssjr-src-store-derivation-js/ssjr-src-store-derivation-js.md @@ -0,0 +1,38 @@ +--- +title: "Raise SSJR grades for `src/store/derivation.js`" +cycle: "0055-ssjr-src-store-derivation-js" +design_doc: "docs/design/0055-ssjr-src-store-derivation-js/ssjr-src-store-derivation-js.md" +outcome: hill-met +drift_check: yes +--- + +# Raise SSJR grades for `src/store/derivation.js` Retro + +## Summary + +Froze all public return objects (7 functions) including nested +arrays. Largest file in the batch at 451 lines. Deeper structural +concerns (kind-driven branching) remain but are beyond quick-fix +scope. + +## Playback Witness + +Add artifacts under `docs/method/retro/0055-ssjr-src-store-derivation-js/witness` and link them here. + +## Drift + +- None recorded. + +## New Debt + +- None recorded. + +## Cool Ideas + +- None recorded. + +## Backlog Maintenance + +- [ ] Inbox processed +- [ ] Priorities reviewed +- [ ] Dead work buried or merged diff --git a/docs/method/retro/0055-ssjr-src-store-derivation-js/witness/verification.md b/docs/method/retro/0055-ssjr-src-store-derivation-js/witness/verification.md new file mode 100644 index 0000000..cec3b15 --- /dev/null +++ b/docs/method/retro/0055-ssjr-src-store-derivation-js/witness/verification.md @@ -0,0 +1,252 @@ +--- +title: "Verification Witness for Cycle 55" +--- + +# Verification Witness for Cycle 55 + +This witness proves that `Raise SSJR grades for `src/store/derivation.js`` now carries the required +behavior and adheres to the repo invariants. + +## Test Results + +```text + +> think@0.7.0 test +> npm run test:ports && npm run test:m1 + + +> think@0.7.0 test:ports +> node --test test/ports/*.test.js + +✔ BG_TOKEN is exported from style.js alongside the palette (0.835792ms) +✔ windowed browse initializes with no drawer open (36.029208ms) +✔ saveRawCapture writes cwd receipts first and defers git enrichment to followthrough (1193.053292ms) +✔ capture provenance exports the canonical ingress set (2.17325ms) +✔ capture provenance trims source strings while preserving valid ingress and URL (0.165333ms) +✔ capture provenance trims ingress strings before validation (0.070167ms) +✔ capture provenance rejects dangerous URL schemes (0.079875ms) +✔ capture provenance accepts safe URL schemes (0.099042ms) +✔ normalizeCaptureProvenance returns a frozen CaptureProvenance instance (0.056125ms) +✔ capture provenance reads and normalizes environment input (0.087625ms) +✔ METHOD docs use one consistent cycle-only release and README closeout policy (1.976333ms) +✔ MIND_ORCHESTRATION.md exists and is linked from GUIDE.md (0.981208ms) +✔ cycle 0006 retrospective restarts ordered numbering for the human playback section (0.522292ms) +✔ runDiagnostics reports ok for a healthy repo with entries (31.662416ms) +✔ runDiagnostics reports fail when think directory does not exist (0.495375ms) +✔ runDiagnostics reports fail when local repo has no git init (1.605833ms) +✔ runDiagnostics reports ok for upstream when reachable (30.324792ms) +✔ runDiagnostics reports warn for upstream when unreachable (24.024125ms) +✔ runDiagnostics reports skip for upstream when URL is set but no checker provided (22.450792ms) +✔ runDiagnostics reports skip for upstream when not configured (20.559125ms) +✔ runDiagnostics reports skip for upstream when configured without checker (27.494375ms) +✔ runDiagnostics includes all expected check names (21.875042ms) +✔ runDiagnostics reports graph model version when available (19.128041ms) +✔ runDiagnostics warns when graph model needs migration (18.682458ms) +✔ runDiagnostics reports entry count when available (19.891417ms) +✔ runDiagnostics warns when entry count is zero (17.01275ms) +✔ runDiagnostics skips graph and entry checks when no repo exists (0.158958ms) +✔ shared JSON helper canonicalizes object keys deterministically on parse and stringify (1.634542ms) +✔ discoverMinds finds all valid repos under the think directory (84.188083ms) +✔ discoverMinds ignores directories without git repos (24.149708ms) +✔ discoverMinds labels ~/.think/repo as "default" (20.968416ms) +✔ discoverMinds sorts with default first, then alphabetical (71.267209ms) +✔ discoverMinds returns empty array when think directory does not exist (0.153ms) +✔ discoverMinds includes repoDir for each mind (18.1055ms) +✔ shaderForMind returns a deterministic index for a given name (0.32175ms) +✔ shaderForMind returns different indices for different names (0.152666ms) +✔ shaderForMind stays within the shader count range (0.091584ms) +✔ shaderForMind throws when shaderCount is zero (0.312458ms) +✔ shaderForMind throws when shaderCount is negative (0.07575ms) +✔ shaderForMind handles single-character names (0.064875ms) +✔ createEntry returns an Entry instance (7.858584ms) +✔ Entry is frozen (0.1275ms) +✔ createEntry validates required fields (2.329667ms) +✔ createReflectSession returns a ReflectSession instance (0.166125ms) +✔ ReflectSession is frozen (0.086917ms) +✔ ENTRY_KINDS is a frozen array of valid kind strings (0.061708ms) +✔ BUCKET_PERIODS is a frozen array of valid bucket strings (0.055ms) +✔ storesTextContent validates against ENTRY_KINDS (0.065583ms) +✔ selectLogo picks large mind logo when terminal is wide and tall enough (0.93125ms) +✔ selectLogo picks medium mind logo when terminal fits medium but not large (0.112625ms) +✔ selectLogo picks text logo when terminal is too small for mind (0.060625ms) +✔ selectLogo always returns something even for tiny terminals (0.057125ms) +✔ renderSplash contains the logo (0.151708ms) +✔ renderSplash contains the Enter prompt (0.06725ms) +✔ renderSplash output fits within the given dimensions (0.0675ms) +✔ splash.js does not export renderSplashView (dead code from RE-015 workaround) (0.046208ms) +✔ renderSplash centers the prompt horizontally (0.165208ms) +✔ windowed browse model initializes in windowed mode (0.202417ms) +✔ formatStats includes a sparkline when buckets are present (1.921917ms) +✔ formatStats omits sparkline when no buckets are present (0.099583ms) +✔ formatStats handles a single bucket without crashing (0.10625ms) +✔ formatStats handles empty bucket array without sparkline (0.17925ms) +✔ formatStats sparkline is oldest-to-newest (left-to-right) (0.191375ms) +ℹ tests 63 +ℹ suites 0 +ℹ pass 63 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 0 +ℹ duration_ms 1620.576417 + +> think@0.7.0 test:m1 +> node --test test/acceptance/*.test.js + +✔ think --doctor reports health of a repo with captures (4386.85125ms) +✔ think --doctor succeeds before the first capture (411.360333ms) +✔ think --json --doctor emits a structured health report (3305.270291ms) +✔ think --doctor rejects an unexpected thought argument (298.852792ms) +✔ new capture writes graph-native relationship edges while preserving compatibility properties (2968.727042ms) +✔ think --migrate-graph upgrades a version-1 property-linked repo additively (4297.062458ms) +✔ think --migrate-graph is idempotent and safe to rerun (3162.271958ms) +✔ capture on a version-1 repo still succeeds and only migrates after the raw local save (5002.621958ms) +✔ graph-native commands fail clearly on an outdated repo outside interactive use (4793.698834ms) +✔ interactive inspect on an outdated repo shows visible upgrade progress before continuing (3319.160666ms) +✔ interactive browse on an outdated repo shows visible upgrade progress before continuing (3016.619166ms) +✔ think --json emits explicit graph migration required errors for outdated graph-native commands (2314.975334ms) +✔ think --migrate-graph upgrades a version-2 repo to graph model version 3 with browse and reflect read edges (7053.458166ms) +✔ think --json --inspect exposes direct reflect receipts that exist only through graph-native v3 edges (2392.484042ms) +✔ think --help prints top-level usage without bootstrapping local state (664.30475ms) +✔ think -h is accepted as a short alias for top-level help (473.392958ms) +✔ think --recent --help prints recent help instead of running the command (356.29675ms) +✔ think --recent -h prints recent help instead of running the command (374.9325ms) +✔ think recent --help fails and points callers to the explicit flag form (348.514959ms) +✔ think --inspect --help bypasses required entry validation (307.038292ms) +✔ think --json --help emits structured JSONL help output (464.031333ms) +✔ think recent --json --help fails machine-readably instead of acting as shorthand help (382.758042ms) +✔ think -- -h captures the literal text after option parsing is terminated (3357.538667ms) +✔ think --ingest reads stdin explicitly and captures it into the normal raw-capture core (3973.196791ms) +✔ think with stdin but without --ingest does not accidentally capture piped input (361.567334ms) +✔ think --ingest rejects mixed positional capture text and stdin capture text (466.326083ms) +✔ think --json --ingest preserves machine-readable capture semantics for agents (2950.949875ms) +✔ think --ingest rejects empty stdin payloads (388.4575ms) +✔ think --json capture emits JSONL on stdout and keeps stderr quiet when there are no warnings (2567.325459ms) +✔ think --json --recent emits entry events instead of plain text (6392.134958ms) +✔ think --json --stats emits totals and bucket rows as JSONL (4609.918208ms) +✔ think --json validation failures emit JSONL on stderr instead of stdout (289.633542ms) +✔ think --json reports backup pending as a structured warning on stderr (1326.414791ms) +✔ think --json emits deterministically sorted keys in JSONL output (2566.372709ms) +✔ think MCP server lists the core Think tools (824.538667ms) +✔ think MCP capture, recent, browse, and inspect route through the existing Think runtime (4643.009375ms) +✔ think MCP capture preserves additive provenance separately from the raw text (2633.153833ms) +✔ think MCP capture trims additive provenance strings before persistence (1875.26275ms) +✔ think MCP remember, stats, and prompt_metrics expose structured read results (5328.842791ms) +✔ think MCP doctor tool returns structured health checks (2472.74775ms) +✔ CLI raw capture bootstraps the local repo and preserves exact text (3943.013333ms) +✔ think "recent" is captured as a thought rather than triggering the list (3304.971583ms) +✔ think --recent does not bootstrap local state before the first capture (297.371209ms) +✔ think --recent rejects an unexpected thought argument (358.406375ms) +✔ capture does not require retrieval-before-write or conceptual confirmation (3502.227709ms) +✔ THINK_REPO_DIR overrides the default local repo path (2288.736375ms) +✔ reachable upstream reports local save first and backup second (1441.41025ms) +✔ unreachable upstream keeps capture successful and reports backup pending (1292.263292ms) +✔ recent stays plain and chronological (7013.2335ms) +✔ capture is append-only across later capture activity (3979.166083ms) +✔ duplicate thoughts produce distinct captures rather than deduping (3916.060541ms) +✔ empty input is rejected (257.546458ms) +✔ whitespace-only input is rejected (260.678917ms) +✔ capture preserves formatting neutrality for spacing, casing, and punctuation (1958.792166ms) +✔ default user language avoids Git terminology (1144.732833ms) +✔ verbose capture emits JSONL trace updates on stderr (1143.906166ms) +✔ raw entries remain immutable after later derived entries exist (0.104042ms) # TODO +✔ stored raw entry bytes remain unchanged in the local store after later writes (0.023709ms) # TODO +✔ entry kind separation remains explicit once the first derived-entry write path exists (0.025333ms) # TODO +✔ think --prompt-metrics prints factual prompt telemetry totals and medians (800.736708ms) +✔ think --prompt-metrics does not bootstrap local state before the first capture (400.78925ms) +✔ think --prompt-metrics supports --since filtering over prompt sessions (424.813041ms) +✔ think --prompt-metrics supports --bucket=day (408.21375ms) +✔ think --json --prompt-metrics emits explicit summary, timing, and bucket rows (335.833834ms) +✔ think --prompt-metrics rejects an unexpected thought argument (380.292708ms) +✔ think --prompt-metrics rejects invalid filter values (806.598125ms) +✔ think --recent --count limits output to the newest N raw captures (9182.332125ms) +✔ think --recent --query filters raw captures by case-insensitive text match (7216.9435ms) +✔ removed recent alias flags fail clearly and point to the scoped forms (1868.2115ms) +✔ think --json --recent applies count and query filters while remaining JSONL-only (6585.585125ms) +✔ think --remember uses the current project context to recall relevant prior thoughts (4336.224542ms) +✔ think --remember with an explicit phrase recalls matching thoughts without turning into generic recent listing (6417.283ms) +✔ think --json --remember emits explicit ambient scope and match receipts for agents (3804.36275ms) +✔ think --remember falls back honestly to textual project-token matching for entries without ambient project receipts (5340.141041ms) +✔ think --remember --limit returns only the top N matching thoughts in deterministic order (8169.523417ms) +✔ think --remember --brief returns a triage-friendly snippet instead of the full multiline thought (4931.967417ms) +✔ think --json --remember --brief --limit preserves bounded explicit recall receipts for agents (5832.12ms) +✔ think --remember rejects invalid --limit values (1591.851916ms) +✔ think --browse shows one raw thought with its immediate newer and older neighbors (5704.701209ms) +✔ think --browse without an entry id fails clearly outside interactive TTY use and remains read-only (236.44675ms) +✔ think --json --browse without an entry id stays machine-readable and does not try to open the shell (238.447833ms) +✔ think --json --browse emits JSONL rows for the current raw thought and its neighbors (5680.066417ms) +✔ think --browse opens a reader-first browse TUI with metadata and no permanent recent rail (6514.070875ms) +✔ think --browse can reveal a chronology drawer on demand instead of showing the full log by default (5429.225167ms) +✔ think --browse can jump to another thought through a fuzzy jump surface (5491.14825ms) +✔ think --browse can reveal inspect receipts inside the scripted browse TUI (3417.3385ms) +✔ think --browse can hand the selected thought into reflect from the scripted browse TUI (3402.692708ms) +✔ think --browse surfaces session identity for the current thought without replacing the reader-first view (7860.414083ms) +✔ think --browse uses a short visible entry id in the reader-first metadata while inspect keeps the full exact id (6712.403125ms) +✔ think --browse can reveal a summon-only session drawer that excludes out-of-session thoughts (7893.545959ms) +✔ think --browse reveals a structured session drawer with a visible start label and current-thought marker (7852.994958ms) +✔ think --json --browse emits explicit session context and session-nearby rows without mislabeling out-of-session thoughts (8102.317292ms) +✔ think --browse can move to the previous thought within the current session without leaving reader-first browse (5504.302916ms) +✔ think --browse keeps the current thought in place when there is no next thought in the current session (5501.073667ms) +✔ think --json --browse exposes explicit session traversal semantics without conflating them with chronology neighbors (5681.338083ms) +✔ think --inspect exposes exact raw entry metadata without narration (1865.648ms) +✔ think --json --inspect emits JSONL for the exact raw entry metadata (1983.070583ms) +✔ think --inspect exposes additive capture provenance separately from the raw text (1910.644791ms) +✔ think --json --inspect includes additive capture provenance in the inspected entry payload (1862.069ms) +✔ think --inspect exposes canonical content identity and direct derived receipts when they exist (3616.223917ms) +✔ think --json --inspect emits canonical content identity and direct derived receipt rows (3640.741625ms) +✔ think --inspect exposes the first derived bundle as explicit raw, canonical, derived, and context sections (5685.63975ms) +✔ think --json --inspect emits canonical identity plus seed-quality and session-attribution receipts with provenance (6195.293292ms) +✔ think --json --inspect keeps duplicate raw captures distinct while linking them to the same canonical thought (6096.241792ms) +✔ think --reflect starts an explicit seeded reflect session with a deterministic seed-first challenge prompt (6445.073958ms) +✔ removed brainstorm aliases fail clearly and point to reflect (2572.930334ms) +✔ think --reflect can use an explicit sharpen prompt family (2543.589875ms) +✔ think --reflect-session stores a separate derived entry with preserved seed-first lineage (7244.240083ms) +✔ think --reflect validates explicit session entry and stays read-only on invalid start (2613.586625ms) +✔ think --reflect fails clearly when the seed entry does not exist (268.675833ms) +✔ think --reflect refuses status-like seeds that are not pressure-testable ideas (7171.3185ms) +✔ think --json --reflect refuses ineligible seeds with structured machine-readable errors (6703.525292ms) +✔ think --json --reflect emits only JSONL with seed-first session and prompt data (3812.306916ms) +✔ think --json --reflect-session emits only JSONL and preserves stored seed-first lineage (4319.535833ms) +✔ think --json reflect validation failures stay fully machine-readable (256.185334ms) +✔ think --stats prints total thoughts (4856.76775ms) +✔ think --stats does not bootstrap local state before the first capture (308.134375ms) +✔ think "stats" is captured as a thought rather than triggering the command (3001.591833ms) +✔ think --stats rejects an unexpected thought argument (284.94875ms) +✔ think stats supports --since filter (4354.796333ms) +✔ think --stats rejects an invalid --since value (284.775208ms) +✔ think stats supports --from and --to filters (6721.381667ms) +✔ think --stats rejects invalid absolute date filters (271.867166ms) +✔ think stats supports --bucket=day (6330.324291ms) +✔ think --stats --bucket=day includes a sparkline in text output (5879.416583ms) +✔ think --stats --bucket=day --json includes sparkline in stats.total event (6206.527875ms) +✔ think --stats without --bucket omits sparkline (2570.383958ms) +✔ think --stats rejects an invalid bucket value (243.6175ms) +ℹ tests 128 +ℹ suites 0 +ℹ pass 125 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 3 +ℹ duration_ms 192209.271209 + +``` + +## Drift Results + +```text +Playback-question drift found. +Scanned 1 active cycle, 2 playback questions, 0 test descriptions. +Search basis: exact normalized match in tests/**/*.test.* and tests/**/*.spec.* descriptions. + +docs/design/0055-ssjr-src-store-derivation-js/ssjr-src-store-derivation-js.md +- Human: TBD + No exact normalized test description match found. +- Agent: TBD + No exact normalized test description match found. + +``` + +## Manual Verification + +- [x] Automated capture completed successfully. diff --git a/docs/method/retro/0056-ssjr-src-store-reflect-js/ssjr-src-store-reflect-js.md b/docs/method/retro/0056-ssjr-src-store-reflect-js/ssjr-src-store-reflect-js.md new file mode 100644 index 0000000..19f87de --- /dev/null +++ b/docs/method/retro/0056-ssjr-src-store-reflect-js/ssjr-src-store-reflect-js.md @@ -0,0 +1,37 @@ +--- +title: "Raise SSJR grades for `src/store/reflect.js`" +cycle: "0056-ssjr-src-store-reflect-js" +design_doc: "docs/design/0056-ssjr-src-store-reflect-js/ssjr-src-store-reflect-js.md" +outcome: hill-met +drift_check: yes +--- + +# Raise SSJR grades for `src/store/reflect.js` Retro + +## Summary + +Froze all return objects from startReflect, previewReflect, +selectReflectPrompt (5 branches), and planReflect (3 branches). +Nested selectionReason objects also frozen. + +## Playback Witness + +Add artifacts under `docs/method/retro/0056-ssjr-src-store-reflect-js/witness` and link them here. + +## Drift + +- None recorded. + +## New Debt + +- None recorded. + +## Cool Ideas + +- None recorded. + +## Backlog Maintenance + +- [ ] Inbox processed +- [ ] Priorities reviewed +- [ ] Dead work buried or merged diff --git a/docs/method/retro/0056-ssjr-src-store-reflect-js/witness/verification.md b/docs/method/retro/0056-ssjr-src-store-reflect-js/witness/verification.md new file mode 100644 index 0000000..a836c4c --- /dev/null +++ b/docs/method/retro/0056-ssjr-src-store-reflect-js/witness/verification.md @@ -0,0 +1,252 @@ +--- +title: "Verification Witness for Cycle 56" +--- + +# Verification Witness for Cycle 56 + +This witness proves that `Raise SSJR grades for `src/store/reflect.js`` now carries the required +behavior and adheres to the repo invariants. + +## Test Results + +```text + +> think@0.7.0 test +> npm run test:ports && npm run test:m1 + + +> think@0.7.0 test:ports +> node --test test/ports/*.test.js + +✔ BG_TOKEN is exported from style.js alongside the palette (0.783ms) +✔ windowed browse initializes with no drawer open (17.880833ms) +✔ saveRawCapture writes cwd receipts first and defers git enrichment to followthrough (1041.104ms) +✔ capture provenance exports the canonical ingress set (1.555875ms) +✔ capture provenance trims source strings while preserving valid ingress and URL (0.153042ms) +✔ capture provenance trims ingress strings before validation (0.069125ms) +✔ capture provenance rejects dangerous URL schemes (0.076708ms) +✔ capture provenance accepts safe URL schemes (0.101834ms) +✔ normalizeCaptureProvenance returns a frozen CaptureProvenance instance (0.08ms) +✔ capture provenance reads and normalizes environment input (0.072083ms) +✔ METHOD docs use one consistent cycle-only release and README closeout policy (2.171334ms) +✔ MIND_ORCHESTRATION.md exists and is linked from GUIDE.md (0.771625ms) +✔ cycle 0006 retrospective restarts ordered numbering for the human playback section (0.485ms) +✔ runDiagnostics reports ok for a healthy repo with entries (23.100416ms) +✔ runDiagnostics reports fail when think directory does not exist (0.195417ms) +✔ runDiagnostics reports fail when local repo has no git init (1.368041ms) +✔ runDiagnostics reports ok for upstream when reachable (19.188084ms) +✔ runDiagnostics reports warn for upstream when unreachable (20.662375ms) +✔ runDiagnostics reports skip for upstream when URL is set but no checker provided (18.769958ms) +✔ runDiagnostics reports skip for upstream when not configured (17.939416ms) +✔ runDiagnostics reports skip for upstream when configured without checker (21.897708ms) +✔ runDiagnostics includes all expected check names (18.305708ms) +✔ runDiagnostics reports graph model version when available (17.356667ms) +✔ runDiagnostics warns when graph model needs migration (16.000875ms) +✔ runDiagnostics reports entry count when available (17.080792ms) +✔ runDiagnostics warns when entry count is zero (17.157125ms) +✔ runDiagnostics skips graph and entry checks when no repo exists (0.487125ms) +✔ shared JSON helper canonicalizes object keys deterministically on parse and stringify (1.6445ms) +✔ discoverMinds finds all valid repos under the think directory (67.8365ms) +✔ discoverMinds ignores directories without git repos (19.35275ms) +✔ discoverMinds labels ~/.think/repo as "default" (17.305833ms) +✔ discoverMinds sorts with default first, then alphabetical (53.135458ms) +✔ discoverMinds returns empty array when think directory does not exist (0.161875ms) +✔ discoverMinds includes repoDir for each mind (17.197459ms) +✔ shaderForMind returns a deterministic index for a given name (0.165ms) +✔ shaderForMind returns different indices for different names (0.07925ms) +✔ shaderForMind stays within the shader count range (0.069542ms) +✔ shaderForMind throws when shaderCount is zero (0.28175ms) +✔ shaderForMind throws when shaderCount is negative (0.067375ms) +✔ shaderForMind handles single-character names (0.06125ms) +✔ createEntry returns an Entry instance (2.149458ms) +✔ Entry is frozen (0.114542ms) +✔ createEntry validates required fields (0.90825ms) +✔ createReflectSession returns a ReflectSession instance (0.150083ms) +✔ ReflectSession is frozen (0.084833ms) +✔ ENTRY_KINDS is a frozen array of valid kind strings (0.063625ms) +✔ BUCKET_PERIODS is a frozen array of valid bucket strings (0.055292ms) +✔ storesTextContent validates against ENTRY_KINDS (0.067917ms) +✔ selectLogo picks large mind logo when terminal is wide and tall enough (0.922ms) +✔ selectLogo picks medium mind logo when terminal fits medium but not large (0.093833ms) +✔ selectLogo picks text logo when terminal is too small for mind (0.059375ms) +✔ selectLogo always returns something even for tiny terminals (0.055375ms) +✔ renderSplash contains the logo (0.140709ms) +✔ renderSplash contains the Enter prompt (0.062583ms) +✔ renderSplash output fits within the given dimensions (0.0725ms) +✔ splash.js does not export renderSplashView (dead code from RE-015 workaround) (0.050667ms) +✔ renderSplash centers the prompt horizontally (0.158042ms) +✔ windowed browse model initializes in windowed mode (0.209417ms) +✔ formatStats includes a sparkline when buckets are present (1.65575ms) +✔ formatStats omits sparkline when no buckets are present (0.084459ms) +✔ formatStats handles a single bucket without crashing (0.088542ms) +✔ formatStats handles empty bucket array without sparkline (0.065292ms) +✔ formatStats sparkline is oldest-to-newest (left-to-right) (0.087292ms) +ℹ tests 63 +ℹ suites 0 +ℹ pass 63 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 0 +ℹ duration_ms 1361.565959 + +> think@0.7.0 test:m1 +> node --test test/acceptance/*.test.js + +✔ think --doctor reports health of a repo with captures (4368.642333ms) +✔ think --doctor succeeds before the first capture (335.779042ms) +✔ think --json --doctor emits a structured health report (2914.110375ms) +✔ think --doctor rejects an unexpected thought argument (275.099375ms) +✔ new capture writes graph-native relationship edges while preserving compatibility properties (2694.586166ms) +✔ think --migrate-graph upgrades a version-1 property-linked repo additively (4069.467959ms) +✔ think --migrate-graph is idempotent and safe to rerun (2952.715709ms) +✔ capture on a version-1 repo still succeeds and only migrates after the raw local save (4665.951167ms) +✔ graph-native commands fail clearly on an outdated repo outside interactive use (4328.190333ms) +✔ interactive inspect on an outdated repo shows visible upgrade progress before continuing (2851.259167ms) +✔ interactive browse on an outdated repo shows visible upgrade progress before continuing (2911.670458ms) +✔ think --json emits explicit graph migration required errors for outdated graph-native commands (2132.215208ms) +✔ think --migrate-graph upgrades a version-2 repo to graph model version 3 with browse and reflect read edges (6851.021041ms) +✔ think --json --inspect exposes direct reflect receipts that exist only through graph-native v3 edges (2376.945041ms) +✔ think --help prints top-level usage without bootstrapping local state (471.160875ms) +✔ think -h is accepted as a short alias for top-level help (321.085ms) +✔ think --recent --help prints recent help instead of running the command (299.287125ms) +✔ think --recent -h prints recent help instead of running the command (310.950208ms) +✔ think recent --help fails and points callers to the explicit flag form (288.470959ms) +✔ think --inspect --help bypasses required entry validation (541.283958ms) +✔ think --json --help emits structured JSONL help output (437.271417ms) +✔ think recent --json --help fails machine-readably instead of acting as shorthand help (509.657ms) +✔ think -- -h captures the literal text after option parsing is terminated (3199.600125ms) +✔ think --ingest reads stdin explicitly and captures it into the normal raw-capture core (3911.304584ms) +✔ think with stdin but without --ingest does not accidentally capture piped input (416.334166ms) +✔ think --ingest rejects mixed positional capture text and stdin capture text (387.864ms) +✔ think --json --ingest preserves machine-readable capture semantics for agents (2528.636958ms) +✔ think --ingest rejects empty stdin payloads (342.009458ms) +✔ think --json capture emits JSONL on stdout and keeps stderr quiet when there are no warnings (1792.840042ms) +✔ think --json --recent emits entry events instead of plain text (6520.081208ms) +✔ think --json --stats emits totals and bucket rows as JSONL (4312.32525ms) +✔ think --json validation failures emit JSONL on stderr instead of stdout (278.914958ms) +✔ think --json reports backup pending as a structured warning on stderr (1284.708709ms) +✔ think --json emits deterministically sorted keys in JSONL output (1877.370167ms) +✔ think MCP server lists the core Think tools (515.678417ms) +✔ think MCP capture, recent, browse, and inspect route through the existing Think runtime (4814.964208ms) +✔ think MCP capture preserves additive provenance separately from the raw text (2231.547166ms) +✔ think MCP capture trims additive provenance strings before persistence (1902.322833ms) +✔ think MCP remember, stats, and prompt_metrics expose structured read results (4899.237875ms) +✔ think MCP doctor tool returns structured health checks (2159.247916ms) +✔ CLI raw capture bootstraps the local repo and preserves exact text (3922.696459ms) +✔ think "recent" is captured as a thought rather than triggering the list (2784.988792ms) +✔ think --recent does not bootstrap local state before the first capture (280.133667ms) +✔ think --recent rejects an unexpected thought argument (278.593708ms) +✔ capture does not require retrieval-before-write or conceptual confirmation (3412.514959ms) +✔ THINK_REPO_DIR overrides the default local repo path (2058.1035ms) +✔ reachable upstream reports local save first and backup second (1396.314833ms) +✔ unreachable upstream keeps capture successful and reports backup pending (1217.616459ms) +✔ recent stays plain and chronological (6134.726833ms) +✔ capture is append-only across later capture activity (3845.184375ms) +✔ duplicate thoughts produce distinct captures rather than deduping (3692.89275ms) +✔ empty input is rejected (261.967791ms) +✔ whitespace-only input is rejected (263.167667ms) +✔ capture preserves formatting neutrality for spacing, casing, and punctuation (1799.905ms) +✔ default user language avoids Git terminology (1138.034167ms) +✔ verbose capture emits JSONL trace updates on stderr (1156.178791ms) +✔ raw entries remain immutable after later derived entries exist (0.089667ms) # TODO +✔ stored raw entry bytes remain unchanged in the local store after later writes (0.024ms) # TODO +✔ entry kind separation remains explicit once the first derived-entry write path exists (0.019417ms) # TODO +✔ think --prompt-metrics prints factual prompt telemetry totals and medians (492.31875ms) +✔ think --prompt-metrics does not bootstrap local state before the first capture (320.771667ms) +✔ think --prompt-metrics supports --since filtering over prompt sessions (321.234833ms) +✔ think --prompt-metrics supports --bucket=day (331.389583ms) +✔ think --json --prompt-metrics emits explicit summary, timing, and bucket rows (324.789958ms) +✔ think --prompt-metrics rejects an unexpected thought argument (441.6125ms) +✔ think --prompt-metrics rejects invalid filter values (991.812416ms) +✔ think --recent --count limits output to the newest N raw captures (8957.976042ms) +✔ think --recent --query filters raw captures by case-insensitive text match (6454.996209ms) +✔ removed recent alias flags fail clearly and point to the scoped forms (1699.721208ms) +✔ think --json --recent applies count and query filters while remaining JSONL-only (6107.6915ms) +✔ think --remember uses the current project context to recall relevant prior thoughts (4088.791917ms) +✔ think --remember with an explicit phrase recalls matching thoughts without turning into generic recent listing (6281.211583ms) +✔ think --json --remember emits explicit ambient scope and match receipts for agents (3746.9855ms) +✔ think --remember falls back honestly to textual project-token matching for entries without ambient project receipts (4305.518ms) +✔ think --remember --limit returns only the top N matching thoughts in deterministic order (7504.948875ms) +✔ think --remember --brief returns a triage-friendly snippet instead of the full multiline thought (3387.481875ms) +✔ think --json --remember --brief --limit preserves bounded explicit recall receipts for agents (5391.143709ms) +✔ think --remember rejects invalid --limit values (1435.333333ms) +✔ think --browse shows one raw thought with its immediate newer and older neighbors (5521.227833ms) +✔ think --browse without an entry id fails clearly outside interactive TTY use and remains read-only (236.985625ms) +✔ think --json --browse without an entry id stays machine-readable and does not try to open the shell (230.656208ms) +✔ think --json --browse emits JSONL rows for the current raw thought and its neighbors (5303.589334ms) +✔ think --browse opens a reader-first browse TUI with metadata and no permanent recent rail (6318.588125ms) +✔ think --browse can reveal a chronology drawer on demand instead of showing the full log by default (5506.802584ms) +✔ think --browse can jump to another thought through a fuzzy jump surface (5213.695417ms) +✔ think --browse can reveal inspect receipts inside the scripted browse TUI (3261.360917ms) +✔ think --browse can hand the selected thought into reflect from the scripted browse TUI (3201.617417ms) +✔ think --browse surfaces session identity for the current thought without replacing the reader-first view (7595.455875ms) +✔ think --browse uses a short visible entry id in the reader-first metadata while inspect keeps the full exact id (6356.756542ms) +✔ think --browse can reveal a summon-only session drawer that excludes out-of-session thoughts (7436.038708ms) +✔ think --browse reveals a structured session drawer with a visible start label and current-thought marker (7351.486292ms) +✔ think --json --browse emits explicit session context and session-nearby rows without mislabeling out-of-session thoughts (7424.706292ms) +✔ think --browse can move to the previous thought within the current session without leaving reader-first browse (5104.607417ms) +✔ think --browse keeps the current thought in place when there is no next thought in the current session (5094.312333ms) +✔ think --json --browse exposes explicit session traversal semantics without conflating them with chronology neighbors (5231.73175ms) +✔ think --inspect exposes exact raw entry metadata without narration (1684.456083ms) +✔ think --json --inspect emits JSONL for the exact raw entry metadata (1706.073458ms) +✔ think --inspect exposes additive capture provenance separately from the raw text (1726.049458ms) +✔ think --json --inspect includes additive capture provenance in the inspected entry payload (1689.164375ms) +✔ think --inspect exposes canonical content identity and direct derived receipts when they exist (3392.697958ms) +✔ think --json --inspect emits canonical content identity and direct derived receipt rows (3395.93925ms) +✔ think --inspect exposes the first derived bundle as explicit raw, canonical, derived, and context sections (5235.523708ms) +✔ think --json --inspect emits canonical identity plus seed-quality and session-attribution receipts with provenance (5272.425541ms) +✔ think --json --inspect keeps duplicate raw captures distinct while linking them to the same canonical thought (4292.966917ms) +✔ think --reflect starts an explicit seeded reflect session with a deterministic seed-first challenge prompt (5966.245333ms) +✔ removed brainstorm aliases fail clearly and point to reflect (2402.169417ms) +✔ think --reflect can use an explicit sharpen prompt family (2354.835125ms) +✔ think --reflect-session stores a separate derived entry with preserved seed-first lineage (6303.515708ms) +✔ think --reflect validates explicit session entry and stays read-only on invalid start (2520.471834ms) +✔ think --reflect fails clearly when the seed entry does not exist (258.165625ms) +✔ think --reflect refuses status-like seeds that are not pressure-testable ideas (6738.171166ms) +✔ think --json --reflect refuses ineligible seeds with structured machine-readable errors (6662.736416ms) +✔ think --json --reflect emits only JSONL with seed-first session and prompt data (3745.084209ms) +✔ think --json --reflect-session emits only JSONL and preserves stored seed-first lineage (3297.6245ms) +✔ think --json reflect validation failures stay fully machine-readable (246.64675ms) +✔ think --stats prints total thoughts (4601.846625ms) +✔ think --stats does not bootstrap local state before the first capture (282.400792ms) +✔ think "stats" is captured as a thought rather than triggering the command (2813.670083ms) +✔ think --stats rejects an unexpected thought argument (265.926041ms) +✔ think stats supports --since filter (3804.783167ms) +✔ think --stats rejects an invalid --since value (301.979666ms) +✔ think stats supports --from and --to filters (6030.739708ms) +✔ think --stats rejects invalid absolute date filters (256.979083ms) +✔ think stats supports --bucket=day (5971.729958ms) +✔ think --stats --bucket=day includes a sparkline in text output (5850.85125ms) +✔ think --stats --bucket=day --json includes sparkline in stats.total event (6047.698875ms) +✔ think --stats without --bucket omits sparkline (1667.340708ms) +✔ think --stats rejects an invalid bucket value (237.750291ms) +ℹ tests 128 +ℹ suites 0 +ℹ pass 125 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 3 +ℹ duration_ms 176307.711833 + +``` + +## Drift Results + +```text +Playback-question drift found. +Scanned 1 active cycle, 2 playback questions, 0 test descriptions. +Search basis: exact normalized match in tests/**/*.test.* and tests/**/*.spec.* descriptions. + +docs/design/0056-ssjr-src-store-reflect-js/ssjr-src-store-reflect-js.md +- Human: TBD + No exact normalized test description match found. +- Agent: TBD + No exact normalized test description match found. + +``` + +## Manual Verification + +- [x] Automated capture completed successfully. diff --git a/docs/method/retro/0057-audit-cli-generic-errors/audit-cli-generic-errors.md b/docs/method/retro/0057-audit-cli-generic-errors/audit-cli-generic-errors.md new file mode 100644 index 0000000..d1aaeee --- /dev/null +++ b/docs/method/retro/0057-audit-cli-generic-errors/audit-cli-generic-errors.md @@ -0,0 +1,38 @@ +--- +title: "CLI still hides too much behind a generic top-level error" +cycle: "0057-audit-cli-generic-errors" +design_doc: "docs/design/0057-audit-cli-generic-errors/audit-cli-generic-errors.md" +outcome: hill-met +drift_check: yes +--- + +# CLI still hides too much behind a generic top-level error Retro + +## Summary + +Replaced generic "Something went wrong" catch-all with typed error +handling. ThinkError subclasses surface their message directly. +Unknown errors append the message to the generic prefix for +actionable self-serve debugging context. + +## Playback Witness + +Add artifacts under `docs/method/retro/0057-audit-cli-generic-errors/witness` and link them here. + +## Drift + +- None recorded. + +## New Debt + +- None recorded. + +## Cool Ideas + +- None recorded. + +## Backlog Maintenance + +- [ ] Inbox processed +- [ ] Priorities reviewed +- [ ] Dead work buried or merged diff --git a/docs/method/retro/0057-audit-cli-generic-errors/witness/verification.md b/docs/method/retro/0057-audit-cli-generic-errors/witness/verification.md new file mode 100644 index 0000000..1a08a18 --- /dev/null +++ b/docs/method/retro/0057-audit-cli-generic-errors/witness/verification.md @@ -0,0 +1,252 @@ +--- +title: "Verification Witness for Cycle 57" +--- + +# Verification Witness for Cycle 57 + +This witness proves that `CLI still hides too much behind a generic top-level error` now carries the required +behavior and adheres to the repo invariants. + +## Test Results + +```text + +> think@0.7.0 test +> npm run test:ports && npm run test:m1 + + +> think@0.7.0 test:ports +> node --test test/ports/*.test.js + +✔ BG_TOKEN is exported from style.js alongside the palette (3.098084ms) +✔ windowed browse initializes with no drawer open (35.678542ms) +✔ saveRawCapture writes cwd receipts first and defers git enrichment to followthrough (1676.128042ms) +✔ capture provenance exports the canonical ingress set (1.851541ms) +✔ capture provenance trims source strings while preserving valid ingress and URL (0.284084ms) +✔ capture provenance trims ingress strings before validation (0.078625ms) +✔ capture provenance rejects dangerous URL schemes (0.08325ms) +✔ capture provenance accepts safe URL schemes (0.559833ms) +✔ normalizeCaptureProvenance returns a frozen CaptureProvenance instance (0.118625ms) +✔ capture provenance reads and normalizes environment input (0.143375ms) +✔ METHOD docs use one consistent cycle-only release and README closeout policy (1.694792ms) +✔ MIND_ORCHESTRATION.md exists and is linked from GUIDE.md (0.700417ms) +✔ cycle 0006 retrospective restarts ordered numbering for the human playback section (0.486291ms) +✔ runDiagnostics reports ok for a healthy repo with entries (51.738083ms) +✔ runDiagnostics reports fail when think directory does not exist (0.2685ms) +✔ runDiagnostics reports fail when local repo has no git init (1.74ms) +✔ runDiagnostics reports ok for upstream when reachable (25.995875ms) +✔ runDiagnostics reports warn for upstream when unreachable (31.732ms) +✔ runDiagnostics reports skip for upstream when URL is set but no checker provided (32.448875ms) +✔ runDiagnostics reports skip for upstream when not configured (23.878959ms) +✔ runDiagnostics reports skip for upstream when configured without checker (21.55ms) +✔ runDiagnostics includes all expected check names (25.162708ms) +✔ runDiagnostics reports graph model version when available (18.265375ms) +✔ runDiagnostics warns when graph model needs migration (18.408042ms) +✔ runDiagnostics reports entry count when available (18.730667ms) +✔ runDiagnostics warns when entry count is zero (18.889125ms) +✔ runDiagnostics skips graph and entry checks when no repo exists (0.185375ms) +✔ shared JSON helper canonicalizes object keys deterministically on parse and stringify (1.745417ms) +✔ discoverMinds finds all valid repos under the think directory (93.544334ms) +✔ discoverMinds ignores directories without git repos (27.760416ms) +✔ discoverMinds labels ~/.think/repo as "default" (38.933958ms) +✔ discoverMinds sorts with default first, then alphabetical (64.462292ms) +✔ discoverMinds returns empty array when think directory does not exist (0.143083ms) +✔ discoverMinds includes repoDir for each mind (20.605625ms) +✔ shaderForMind returns a deterministic index for a given name (1.0275ms) +✔ shaderForMind returns different indices for different names (0.504833ms) +✔ shaderForMind stays within the shader count range (0.170791ms) +✔ shaderForMind throws when shaderCount is zero (0.426792ms) +✔ shaderForMind throws when shaderCount is negative (0.1675ms) +✔ shaderForMind handles single-character names (0.094458ms) +✔ createEntry returns an Entry instance (3.323709ms) +✔ Entry is frozen (0.224292ms) +✔ createEntry validates required fields (0.899458ms) +✔ createReflectSession returns a ReflectSession instance (0.142166ms) +✔ ReflectSession is frozen (0.115083ms) +✔ ENTRY_KINDS is a frozen array of valid kind strings (0.065583ms) +✔ BUCKET_PERIODS is a frozen array of valid bucket strings (0.058167ms) +✔ storesTextContent validates against ENTRY_KINDS (0.066541ms) +✔ selectLogo picks large mind logo when terminal is wide and tall enough (0.912167ms) +✔ selectLogo picks medium mind logo when terminal fits medium but not large (0.104084ms) +✔ selectLogo picks text logo when terminal is too small for mind (0.058625ms) +✔ selectLogo always returns something even for tiny terminals (0.058875ms) +✔ renderSplash contains the logo (0.147959ms) +✔ renderSplash contains the Enter prompt (0.05825ms) +✔ renderSplash output fits within the given dimensions (0.065458ms) +✔ splash.js does not export renderSplashView (dead code from RE-015 workaround) (0.043958ms) +✔ renderSplash centers the prompt horizontally (0.155792ms) +✔ windowed browse model initializes in windowed mode (0.196708ms) +✔ formatStats includes a sparkline when buckets are present (1.794333ms) +✔ formatStats omits sparkline when no buckets are present (0.173625ms) +✔ formatStats handles a single bucket without crashing (0.123833ms) +✔ formatStats handles empty bucket array without sparkline (0.075625ms) +✔ formatStats sparkline is oldest-to-newest (left-to-right) (0.098875ms) +ℹ tests 63 +ℹ suites 0 +ℹ pass 63 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 0 +ℹ duration_ms 2084.890125 + +> think@0.7.0 test:m1 +> node --test test/acceptance/*.test.js + +✔ think --doctor reports health of a repo with captures (4965.736541ms) +✔ think --doctor succeeds before the first capture (371.642792ms) +✔ think --json --doctor emits a structured health report (3951.255166ms) +✔ think --doctor rejects an unexpected thought argument (366.78175ms) +✔ new capture writes graph-native relationship edges while preserving compatibility properties (3475.9465ms) +✔ think --migrate-graph upgrades a version-1 property-linked repo additively (4923.740291ms) +✔ think --migrate-graph is idempotent and safe to rerun (4256.355417ms) +✔ capture on a version-1 repo still succeeds and only migrates after the raw local save (7408.258917ms) +✔ graph-native commands fail clearly on an outdated repo outside interactive use (6055.713ms) +✔ interactive inspect on an outdated repo shows visible upgrade progress before continuing (4248.853083ms) +✔ interactive browse on an outdated repo shows visible upgrade progress before continuing (3949.73375ms) +✔ think --json emits explicit graph migration required errors for outdated graph-native commands (2810.500959ms) +✔ think --migrate-graph upgrades a version-2 repo to graph model version 3 with browse and reflect read edges (8450.507625ms) +✔ think --json --inspect exposes direct reflect receipts that exist only through graph-native v3 edges (2588.298084ms) +✔ think --help prints top-level usage without bootstrapping local state (636.959875ms) +✔ think -h is accepted as a short alias for top-level help (353.992584ms) +✔ think --recent --help prints recent help instead of running the command (401.852792ms) +✔ think --recent -h prints recent help instead of running the command (346.518625ms) +✔ think recent --help fails and points callers to the explicit flag form (337.4035ms) +✔ think --inspect --help bypasses required entry validation (334.837291ms) +✔ think --json --help emits structured JSONL help output (333.633875ms) +✔ think recent --json --help fails machine-readably instead of acting as shorthand help (375.991958ms) +✔ think -- -h captures the literal text after option parsing is terminated (3873.100583ms) +✔ think --ingest reads stdin explicitly and captures it into the normal raw-capture core (4569.592791ms) +✔ think with stdin but without --ingest does not accidentally capture piped input (356.163625ms) +✔ think --ingest rejects mixed positional capture text and stdin capture text (355.3075ms) +✔ think --json --ingest preserves machine-readable capture semantics for agents (3669.603292ms) +✔ think --ingest rejects empty stdin payloads (388.585625ms) +✔ think --json capture emits JSONL on stdout and keeps stderr quiet when there are no warnings (2956.244ms) +✔ think --json --recent emits entry events instead of plain text (7581.666708ms) +✔ think --json --stats emits totals and bucket rows as JSONL (6710.804709ms) +✔ think --json validation failures emit JSONL on stderr instead of stdout (314.843625ms) +✔ think --json reports backup pending as a structured warning on stderr (1743.696834ms) +✔ think --json emits deterministically sorted keys in JSONL output (3092.827292ms) +✔ think MCP server lists the core Think tools (640.052166ms) +✔ think MCP capture, recent, browse, and inspect route through the existing Think runtime (5284.552208ms) +✔ think MCP capture preserves additive provenance separately from the raw text (3136.124833ms) +✔ think MCP capture trims additive provenance strings before persistence (2889.555958ms) +✔ think MCP remember, stats, and prompt_metrics expose structured read results (7676.578375ms) +✔ think MCP doctor tool returns structured health checks (3229.616417ms) +✔ CLI raw capture bootstraps the local repo and preserves exact text (4452.231917ms) +✔ think "recent" is captured as a thought rather than triggering the list (3572.147208ms) +✔ think --recent does not bootstrap local state before the first capture (310.73475ms) +✔ think --recent rejects an unexpected thought argument (364.631542ms) +✔ capture does not require retrieval-before-write or conceptual confirmation (5081.462666ms) +✔ THINK_REPO_DIR overrides the default local repo path (3068.825167ms) +✔ reachable upstream reports local save first and backup second (2065.901375ms) +✔ unreachable upstream keeps capture successful and reports backup pending (1897.766667ms) +✔ recent stays plain and chronological (8928.934875ms) +✔ capture is append-only across later capture activity (5187.718541ms) +✔ duplicate thoughts produce distinct captures rather than deduping (4677.647458ms) +✔ empty input is rejected (271.585417ms) +✔ whitespace-only input is rejected (273.502334ms) +✔ capture preserves formatting neutrality for spacing, casing, and punctuation (2222.172292ms) +✔ default user language avoids Git terminology (1418.408708ms) +✔ verbose capture emits JSONL trace updates on stderr (1384.015542ms) +✔ raw entries remain immutable after later derived entries exist (0.109292ms) # TODO +✔ stored raw entry bytes remain unchanged in the local store after later writes (0.024542ms) # TODO +✔ entry kind separation remains explicit once the first derived-entry write path exists (0.027042ms) # TODO +✔ think --prompt-metrics prints factual prompt telemetry totals and medians (665.799875ms) +✔ think --prompt-metrics does not bootstrap local state before the first capture (361.350584ms) +✔ think --prompt-metrics supports --since filtering over prompt sessions (402.112042ms) +✔ think --prompt-metrics supports --bucket=day (341.434083ms) +✔ think --json --prompt-metrics emits explicit summary, timing, and bucket rows (384.501625ms) +✔ think --prompt-metrics rejects an unexpected thought argument (334.516375ms) +✔ think --prompt-metrics rejects invalid filter values (800.963625ms) +✔ think --recent --count limits output to the newest N raw captures (11729.903292ms) +✔ think --recent --query filters raw captures by case-insensitive text match (9710.210083ms) +✔ removed recent alias flags fail clearly and point to the scoped forms (2226.37675ms) +✔ think --json --recent applies count and query filters while remaining JSONL-only (8440.995083ms) +✔ think --remember uses the current project context to recall relevant prior thoughts (5126.679333ms) +✔ think --remember with an explicit phrase recalls matching thoughts without turning into generic recent listing (7593.334375ms) +✔ think --json --remember emits explicit ambient scope and match receipts for agents (3903.816583ms) +✔ think --remember falls back honestly to textual project-token matching for entries without ambient project receipts (3820.505625ms) +✔ think --remember --limit returns only the top N matching thoughts in deterministic order (7882.008792ms) +✔ think --remember --brief returns a triage-friendly snippet instead of the full multiline thought (3499.180042ms) +✔ think --json --remember --brief --limit preserves bounded explicit recall receipts for agents (5509.60425ms) +✔ think --remember rejects invalid --limit values (1442.463417ms) +✔ think --browse shows one raw thought with its immediate newer and older neighbors (5523.483459ms) +✔ think --browse without an entry id fails clearly outside interactive TTY use and remains read-only (238.017583ms) +✔ think --json --browse without an entry id stays machine-readable and does not try to open the shell (242.985667ms) +✔ think --json --browse emits JSONL rows for the current raw thought and its neighbors (5564.444625ms) +✔ think --browse opens a reader-first browse TUI with metadata and no permanent recent rail (6300.286667ms) +✔ think --browse can reveal a chronology drawer on demand instead of showing the full log by default (5298.274834ms) +✔ think --browse can jump to another thought through a fuzzy jump surface (5376.77775ms) +✔ think --browse can reveal inspect receipts inside the scripted browse TUI (3494.612083ms) +✔ think --browse can hand the selected thought into reflect from the scripted browse TUI (3353.930542ms) +✔ think --browse surfaces session identity for the current thought without replacing the reader-first view (7709.751375ms) +✔ think --browse uses a short visible entry id in the reader-first metadata while inspect keeps the full exact id (6661.660875ms) +✔ think --browse can reveal a summon-only session drawer that excludes out-of-session thoughts (7821.806125ms) +✔ think --browse reveals a structured session drawer with a visible start label and current-thought marker (8043.396542ms) +✔ think --json --browse emits explicit session context and session-nearby rows without mislabeling out-of-session thoughts (8087.636167ms) +✔ think --browse can move to the previous thought within the current session without leaving reader-first browse (5345.340917ms) +✔ think --browse keeps the current thought in place when there is no next thought in the current session (5452.041917ms) +✔ think --json --browse exposes explicit session traversal semantics without conflating them with chronology neighbors (6165.661584ms) +✔ think --inspect exposes exact raw entry metadata without narration (1938.564ms) +✔ think --json --inspect emits JSONL for the exact raw entry metadata (1864.00475ms) +✔ think --inspect exposes additive capture provenance separately from the raw text (1840.230958ms) +✔ think --json --inspect includes additive capture provenance in the inspected entry payload (1928.670792ms) +✔ think --inspect exposes canonical content identity and direct derived receipts when they exist (3638.785ms) +✔ think --json --inspect emits canonical content identity and direct derived receipt rows (3893.080542ms) +✔ think --inspect exposes the first derived bundle as explicit raw, canonical, derived, and context sections (5896.402125ms) +✔ think --json --inspect emits canonical identity plus seed-quality and session-attribution receipts with provenance (6999.458083ms) +✔ think --json --inspect keeps duplicate raw captures distinct while linking them to the same canonical thought (5707.430625ms) +✔ think --reflect starts an explicit seeded reflect session with a deterministic seed-first challenge prompt (8043.869209ms) +✔ removed brainstorm aliases fail clearly and point to reflect (3305.198375ms) +✔ think --reflect can use an explicit sharpen prompt family (3631.732833ms) +✔ think --reflect-session stores a separate derived entry with preserved seed-first lineage (9333.411541ms) +✔ think --reflect validates explicit session entry and stays read-only on invalid start (3154.975375ms) +✔ think --reflect fails clearly when the seed entry does not exist (316.89575ms) +✔ think --reflect refuses status-like seeds that are not pressure-testable ideas (8831.23125ms) +✔ think --json --reflect refuses ineligible seeds with structured machine-readable errors (8087.35875ms) +✔ think --json --reflect emits only JSONL with seed-first session and prompt data (3940.238833ms) +✔ think --json --reflect-session emits only JSONL and preserves stored seed-first lineage (2865.203792ms) +✔ think --json reflect validation failures stay fully machine-readable (254.224834ms) +✔ think --stats prints total thoughts (6924.949541ms) +✔ think --stats does not bootstrap local state before the first capture (377.194209ms) +✔ think "stats" is captured as a thought rather than triggering the command (4121.758125ms) +✔ think --stats rejects an unexpected thought argument (313.942625ms) +✔ think stats supports --since filter (5723.757166ms) +✔ think --stats rejects an invalid --since value (313.419917ms) +✔ think stats supports --from and --to filters (8765.821708ms) +✔ think --stats rejects invalid absolute date filters (303.897042ms) +✔ think stats supports --bucket=day (7480.604958ms) +✔ think --stats --bucket=day includes a sparkline in text output (6965.857083ms) +✔ think --stats --bucket=day --json includes sparkline in stats.total event (5649.333042ms) +✔ think --stats without --bucket omits sparkline (1707.413333ms) +✔ think --stats rejects an invalid bucket value (247.581458ms) +ℹ tests 128 +ℹ suites 0 +ℹ pass 125 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 3 +ℹ duration_ms 198671.698958 + +``` + +## Drift Results + +```text +Playback-question drift found. +Scanned 1 active cycle, 2 playback questions, 0 test descriptions. +Search basis: exact normalized match in tests/**/*.test.* and tests/**/*.spec.* descriptions. + +docs/design/0057-audit-cli-generic-errors/audit-cli-generic-errors.md +- Human: TBD + No exact normalized test description match found. +- Agent: TBD + No exact normalized test description match found. + +``` + +## Manual Verification + +- [x] Automated capture completed successfully. diff --git a/docs/method/retro/0058-SURFACE_audit-missing-pr-issue-templates/SURFACE_audit-missing-pr-issue-templates.md b/docs/method/retro/0058-SURFACE_audit-missing-pr-issue-templates/SURFACE_audit-missing-pr-issue-templates.md new file mode 100644 index 0000000..54af763 --- /dev/null +++ b/docs/method/retro/0058-SURFACE_audit-missing-pr-issue-templates/SURFACE_audit-missing-pr-issue-templates.md @@ -0,0 +1,7 @@ +--- +title: PR and issue templates +cycle: 0058 +outcome: hill-met +--- + +Added .github/PULL_REQUEST_TEMPLATE.md and .github/ISSUE_TEMPLATE/bug_report.md. diff --git a/docs/method/retro/0059-batch-audit-fixes/batch-audit-fixes.md b/docs/method/retro/0059-batch-audit-fixes/batch-audit-fixes.md new file mode 100644 index 0000000..844e9be --- /dev/null +++ b/docs/method/retro/0059-batch-audit-fixes/batch-audit-fixes.md @@ -0,0 +1,15 @@ +--- +title: "Batch audit fixes" +cycle: "0059-batch-audit-fixes" +outcome: hill-met +drift_check: yes +--- + +# Batch audit fixes + +- README: added Git requirement +- test:local: already documented in CLAUDE.md as Darwin-only +- Release runbook: created docs/method/release-runbook.md +- stdin POLA: added hint when piped input detected without --ingest +- Surface capability docs: deferred (placeholder not worth creating) +- Agent bootstrap: addressed by MIND_ORCHESTRATION.md and AGENTS.md diff --git a/docs/method/retro/0060-graph-v4-enrichment-schema/graph-v4-enrichment-schema.md b/docs/method/retro/0060-graph-v4-enrichment-schema/graph-v4-enrichment-schema.md new file mode 100644 index 0000000..b975e11 --- /dev/null +++ b/docs/method/retro/0060-graph-v4-enrichment-schema/graph-v4-enrichment-schema.md @@ -0,0 +1,26 @@ +--- +title: "Graph v4 enrichment schema" +cycle: "0060-graph-v4-enrichment-schema" +outcome: hill-met +drift_check: yes +--- + +# Graph v4 enrichment schema Retro + +## Summary + +Extended the WARP graph schema for the enrichment pipeline: +- 7 new node prefixes in constants.js +- CLASSIFICATIONS frozen array with 7 types +- PRODUCT_READ_LENS includes all new prefixes +- Migration creates standing classification nodes +- GRAPH_MODEL_VERSION = 4 +- 3 new port tests, 191 total pass + +## Drift + +- Acceptance tests had hardcoded version 3 — updated to 4. + +## New Debt + +- None. diff --git a/docs/method/retro/0061-annotate-command/annotate-command.md b/docs/method/retro/0061-annotate-command/annotate-command.md new file mode 100644 index 0000000..c765cfa --- /dev/null +++ b/docs/method/retro/0061-annotate-command/annotate-command.md @@ -0,0 +1,26 @@ +--- +title: "think --annotate" +cycle: "0061-annotate-command" +outcome: hill-met +drift_check: yes +--- + +# think --annotate Retro + +## Summary + +First enrichment surface. Users can annotate existing captures via +--annotate= "text". Annotations are graph nodes linked by +annotates edges. Visible in --inspect. 4 new acceptance tests. + +Found that ENTRY_KINDS didn't cover text-bearing enrichment nodes. +Added TEXT_CONTENT_KINDS constant for kinds that store content. + +## Drift + +- None. + +## New Debt + +- MCP annotate tool (follow-up) +- Browse TUI 'a' key (follow-up) diff --git a/docs/method/retro/0062-auto-tags-stage/auto-tags-stage.md b/docs/method/retro/0062-auto-tags-stage/auto-tags-stage.md new file mode 100644 index 0000000..405ba6e --- /dev/null +++ b/docs/method/retro/0062-auto-tags-stage/auto-tags-stage.md @@ -0,0 +1,46 @@ +--- +title: "auto_tags enrichment stage" +cycle: "0062-auto-tags-stage" +design_doc: "docs/design/0062-auto-tags-stage/auto-tags-stage.md" +outcome: hill-met +drift_check: yes +--- + +# auto_tags enrichment stage Retro + +## Summary + +First automated enrichment stage. extractTopics() does keyword +extraction (stopwords, dedup, normalize). runEnrichmentPipeline() +creates topic nodes when keywords cross promotion threshold (2+ +thoughts), adds about edges. CLI: --enrich and --topics. + +7 port tests (extractTopics), 2 acceptance tests (--topics after +enrichment, --json --topics). 204 total tests pass. + +## Playback Witness + +- [verification.md](witness/verification.md) — automated test run. +- Acceptance tests prove topics are promoted after two captures + share a keyword, and --json --topics emits topic events with + name and thoughtCount. + +## Drift + +- Accidentally ran --enrich against live archive during playback. + New policy: never touch live data — use test fixtures only. + +## New Debt + +- Stopword list needs tuning (common words like "even", "basically" + slip through) +- No --enrich= for single-entry enrichment yet +- No MCP enrich/topics tools yet + +## Cool Ideas + +- None. + +## Backlog Maintenance + +- [x] Done diff --git a/docs/method/retro/0062-auto-tags-stage/witness/verification.md b/docs/method/retro/0062-auto-tags-stage/witness/verification.md new file mode 100644 index 0000000..59b66e9 --- /dev/null +++ b/docs/method/retro/0062-auto-tags-stage/witness/verification.md @@ -0,0 +1,280 @@ +--- +title: "Verification Witness for Cycle 62" +--- + +# Verification Witness for Cycle 62 + +This witness proves that `auto_tags enrichment stage` now carries the required +behavior and adheres to the repo invariants. + +## Test Results + +```text + +> think@0.7.0 test +> npm run test:ports && npm run test:m1 + + +> think@0.7.0 test:ports +> node --test test/ports/*.test.js + +✔ extractTopics returns meaningful keywords from thought text (0.963959ms) +✔ extractTopics filters out stopwords (0.086667ms) +✔ extractTopics filters out short tokens (0.071416ms) +✔ extractTopics normalizes to lowercase (0.099959ms) +✔ extractTopics returns empty array for empty text (0.762625ms) +✔ extractTopics deduplicates repeated words (0.078708ms) +✔ extractTopics handles hyphenated terms (0.071958ms) +✔ BG_TOKEN is exported from style.js alongside the palette (0.808208ms) +✔ windowed browse initializes with no drawer open (20.15525ms) +✔ saveRawCapture writes cwd receipts first and defers git enrichment to followthrough (1019.704792ms) +✔ capture provenance exports the canonical ingress set (1.551291ms) +✔ capture provenance trims source strings while preserving valid ingress and URL (0.15175ms) +✔ capture provenance trims ingress strings before validation (0.067375ms) +✔ capture provenance rejects dangerous URL schemes (0.078959ms) +✔ capture provenance accepts safe URL schemes (0.102333ms) +✔ normalizeCaptureProvenance returns a frozen CaptureProvenance instance (0.0725ms) +✔ capture provenance reads and normalizes environment input (0.078667ms) +✔ METHOD docs use one consistent cycle-only release and README closeout policy (2.681041ms) +✔ MIND_ORCHESTRATION.md exists and is linked from GUIDE.md (0.725709ms) +✔ cycle 0006 retrospective restarts ordered numbering for the human playback section (0.510625ms) +✔ runDiagnostics reports ok for a healthy repo with entries (25.396667ms) +✔ runDiagnostics reports fail when think directory does not exist (0.711209ms) +✔ runDiagnostics reports fail when local repo has no git init (1.470458ms) +✔ runDiagnostics reports ok for upstream when reachable (18.696833ms) +✔ runDiagnostics reports warn for upstream when unreachable (20.646209ms) +✔ runDiagnostics reports skip for upstream when URL is set but no checker provided (29.958167ms) +✔ runDiagnostics reports skip for upstream when not configured (19.763708ms) +✔ runDiagnostics reports skip for upstream when configured without checker (20.960958ms) +✔ runDiagnostics includes all expected check names (17.628125ms) +✔ runDiagnostics reports graph model version when available (17.678917ms) +✔ runDiagnostics warns when graph model needs migration (16.588959ms) +✔ runDiagnostics reports entry count when available (17.958916ms) +✔ runDiagnostics warns when entry count is zero (15.683916ms) +✔ runDiagnostics skips graph and entry checks when no repo exists (0.300125ms) +✔ GRAPH_MODEL_VERSION is 4 (1.115292ms) +✔ CLASSIFICATIONS has 7 entries including unclassified (0.126833ms) +✔ PRODUCT_READ_LENS includes enrichment prefixes (0.080583ms) +✔ shared JSON helper canonicalizes object keys deterministically on parse and stringify (1.579416ms) +✔ discoverMinds finds all valid repos under the think directory (72.283875ms) +✔ discoverMinds ignores directories without git repos (26.676042ms) +✔ discoverMinds labels ~/.think/repo as "default" (19.649584ms) +✔ discoverMinds sorts with default first, then alphabetical (56.535333ms) +✔ discoverMinds returns empty array when think directory does not exist (0.143875ms) +✔ discoverMinds includes repoDir for each mind (15.896541ms) +✔ shaderForMind returns a deterministic index for a given name (0.173584ms) +✔ shaderForMind returns different indices for different names (0.085208ms) +✔ shaderForMind stays within the shader count range (0.073667ms) +✔ shaderForMind throws when shaderCount is zero (0.309708ms) +✔ shaderForMind throws when shaderCount is negative (0.071375ms) +✔ shaderForMind handles single-character names (0.091666ms) +✔ createEntry returns an Entry instance (2.878542ms) +✔ Entry is frozen (0.153417ms) +✔ createEntry validates required fields (1.028583ms) +✔ createReflectSession returns a ReflectSession instance (0.16525ms) +✔ ReflectSession is frozen (0.092625ms) +✔ ENTRY_KINDS is a frozen array of valid kind strings (0.066917ms) +✔ BUCKET_PERIODS is a frozen array of valid bucket strings (0.061542ms) +✔ storesTextContent validates against ENTRY_KINDS (0.301292ms) +✔ selectLogo picks large mind logo when terminal is wide and tall enough (0.923833ms) +✔ selectLogo picks medium mind logo when terminal fits medium but not large (0.105ms) +✔ selectLogo picks text logo when terminal is too small for mind (0.056458ms) +✔ selectLogo always returns something even for tiny terminals (0.053708ms) +✔ renderSplash contains the logo (0.141208ms) +✔ renderSplash contains the Enter prompt (0.064375ms) +✔ renderSplash output fits within the given dimensions (0.06925ms) +✔ splash.js does not export renderSplashView (dead code from RE-015 workaround) (0.048625ms) +✔ renderSplash centers the prompt horizontally (0.164917ms) +✔ windowed browse model initializes in windowed mode (0.206125ms) +✔ formatStats includes a sparkline when buckets are present (1.673ms) +✔ formatStats omits sparkline when no buckets are present (0.081042ms) +✔ formatStats handles a single bucket without crashing (0.097666ms) +✔ formatStats handles empty bucket array without sparkline (0.067083ms) +✔ formatStats sparkline is oldest-to-newest (left-to-right) (0.078167ms) +ℹ tests 73 +ℹ suites 0 +ℹ pass 73 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 0 +ℹ duration_ms 1411.789625 + +> think@0.7.0 test:m1 +> node --test test/acceptance/*.test.js + +✔ think --annotate attaches a note to an existing capture (4023.776583ms) +✔ think --json --annotate emits structured annotation result (9317.66175ms) +✔ think --annotate rejects empty annotation text (2190.508083ms) +✔ think --annotate shows annotation in --inspect output (5027.723917ms) +✔ think --topics lists promoted topics after multiple captures share a keyword (12006.263291ms) +✔ think --json --topics emits JSONL topic list (7275.305208ms) +✔ think --doctor reports health of a repo with captures (3070.20575ms) +✔ think --doctor succeeds before the first capture (285.141958ms) +✔ think --json --doctor emits a structured health report (7984.867667ms) +✔ think --doctor rejects an unexpected thought argument (799.976625ms) +✔ new capture writes graph-native relationship edges while preserving compatibility properties (2044.037167ms) +✔ think --migrate-graph upgrades a version-1 property-linked repo additively (7722.295292ms) +✔ think --migrate-graph is idempotent and safe to rerun (5285.570959ms) +✔ capture on a version-1 repo still succeeds and only migrates after the raw local save (5807.079667ms) +✔ graph-native commands fail clearly on an outdated repo outside interactive use (4614.512375ms) +✔ interactive inspect on an outdated repo shows visible upgrade progress before continuing (3090.802958ms) +✔ interactive browse on an outdated repo shows visible upgrade progress before continuing (3039.753583ms) +✔ think --json emits explicit graph migration required errors for outdated graph-native commands (2209.771416ms) +✔ think --migrate-graph upgrades a version-2 repo to graph model version 4 with browse, reflect, and enrichment nodes (7463.829542ms) +✔ think --json --inspect exposes direct reflect receipts that exist only through graph-native v3 edges (2493.805292ms) +✔ think --help prints top-level usage without bootstrapping local state (451.254833ms) +✔ think -h is accepted as a short alias for top-level help (293.025041ms) +✔ think --recent --help prints recent help instead of running the command (273.017167ms) +✔ think --recent -h prints recent help instead of running the command (282.397583ms) +✔ think recent --help fails and points callers to the explicit flag form (280.780417ms) +✔ think --inspect --help bypasses required entry validation (297.51575ms) +✔ think --json --help emits structured JSONL help output (317.752375ms) +✔ think recent --json --help fails machine-readably instead of acting as shorthand help (281.867291ms) +✔ think -- -h captures the literal text after option parsing is terminated (6219.618875ms) +✔ think --ingest reads stdin explicitly and captures it into the normal raw-capture core (2804.434083ms) +✔ think with stdin but without --ingest does not accidentally capture piped input (304.243708ms) +✔ think --ingest rejects mixed positional capture text and stdin capture text (298.445959ms) +✔ think --json --ingest preserves machine-readable capture semantics for agents (7624.344917ms) +✔ think --ingest rejects empty stdin payloads (852.279792ms) +✔ think --json capture emits JSONL on stdout and keeps stderr quiet when there are no warnings (1774.137375ms) +✔ think --json --recent emits entry events instead of plain text (11061.2205ms) +✔ think --json --stats emits totals and bucket rows as JSONL (5896.340708ms) +✔ think --json validation failures emit JSONL on stderr instead of stdout (290.123834ms) +✔ think --json reports backup pending as a structured warning on stderr (1533.406292ms) +✔ think --json emits deterministically sorted keys in JSONL output (1791.878958ms) +✔ think MCP server lists the core Think tools (486.174375ms) +✔ think MCP capture, recent, browse, and inspect route through the existing Think runtime (3984.42825ms) +✔ think MCP capture preserves additive provenance separately from the raw text (6616.569958ms) +✔ think MCP capture trims additive provenance strings before persistence (3619.88725ms) +✔ think MCP remember, stats, and prompt_metrics expose structured read results (6062.451416ms) +✔ think MCP doctor tool returns structured health checks (2317.588583ms) +✔ CLI raw capture bootstraps the local repo and preserves exact text (4888.622208ms) +✔ think "recent" is captured as a thought rather than triggering the list (5982.783417ms) +✔ think --recent does not bootstrap local state before the first capture (497.552583ms) +✔ think --recent rejects an unexpected thought argument (360.640334ms) +✔ capture does not require retrieval-before-write or conceptual confirmation (4336.237042ms) +✔ THINK_REPO_DIR overrides the default local repo path (2454.815125ms) +✔ reachable upstream reports local save first and backup second (1303.795125ms) +✔ unreachable upstream keeps capture successful and reports backup pending (1363.258417ms) +✔ recent stays plain and chronological (6570.653625ms) +✔ capture is append-only across later capture activity (3932.455083ms) +✔ duplicate thoughts produce distinct captures rather than deduping (3957.55525ms) +✔ empty input is rejected (263.292042ms) +✔ whitespace-only input is rejected (251.489834ms) +✔ capture preserves formatting neutrality for spacing, casing, and punctuation (2048.75525ms) +✔ default user language avoids Git terminology (1246.277333ms) +✔ verbose capture emits JSONL trace updates on stderr (1165.463875ms) +✔ raw entries remain immutable after later derived entries exist (0.110375ms) # TODO +✔ stored raw entry bytes remain unchanged in the local store after later writes (0.052125ms) # TODO +✔ entry kind separation remains explicit once the first derived-entry write path exists (0.022458ms) # TODO +✔ think --prompt-metrics prints factual prompt telemetry totals and medians (922.296833ms) +✔ think --prompt-metrics does not bootstrap local state before the first capture (923.920167ms) +✔ think --prompt-metrics supports --since filtering over prompt sessions (940.343ms) +✔ think --prompt-metrics supports --bucket=day (877.723083ms) +✔ think --json --prompt-metrics emits explicit summary, timing, and bucket rows (516.106959ms) +✔ think --prompt-metrics rejects an unexpected thought argument (394.158667ms) +✔ think --prompt-metrics rejects invalid filter values (668.295417ms) +✔ think --recent --count limits output to the newest N raw captures (9172.769833ms) +✔ think --recent --query filters raw captures by case-insensitive text match (6606.829625ms) +✔ removed recent alias flags fail clearly and point to the scoped forms (1728.326167ms) +✔ think --json --recent applies count and query filters while remaining JSONL-only (6335.261583ms) +✔ think --remember uses the current project context to recall relevant prior thoughts (4496.919834ms) +✔ think --remember with an explicit phrase recalls matching thoughts without turning into generic recent listing (6481.381708ms) +✔ think --json --remember emits explicit ambient scope and match receipts for agents (3843.467125ms) +✔ think --remember falls back honestly to textual project-token matching for entries without ambient project receipts (3764.809833ms) +✔ think --remember --limit returns only the top N matching thoughts in deterministic order (7259.345917ms) +✔ think --remember --brief returns a triage-friendly snippet instead of the full multiline thought (3215.614583ms) +✔ think --json --remember --brief --limit preserves bounded explicit recall receipts for agents (5079.602625ms) +✔ think --remember rejects invalid --limit values (1369.847208ms) +✔ think --browse shows one raw thought with its immediate newer and older neighbors (5027.893333ms) +✔ think --browse without an entry id fails clearly outside interactive TTY use and remains read-only (230.885208ms) +✔ think --json --browse without an entry id stays machine-readable and does not try to open the shell (227.942667ms) +✔ think --json --browse emits JSONL rows for the current raw thought and its neighbors (5074.683125ms) +✔ think --browse opens a reader-first browse TUI with metadata and no permanent recent rail (5781.033667ms) +✔ think --browse can reveal a chronology drawer on demand instead of showing the full log by default (5072.739292ms) +✔ think --browse can jump to another thought through a fuzzy jump surface (5096.659167ms) +✔ think --browse can reveal inspect receipts inside the scripted browse TUI (3239.946292ms) +✔ think --browse can hand the selected thought into reflect from the scripted browse TUI (3286.107042ms) +✔ think --browse surfaces session identity for the current thought without replacing the reader-first view (7194.863042ms) +✔ think --browse uses a short visible entry id in the reader-first metadata while inspect keeps the full exact id (6549.237042ms) +✔ think --browse can reveal a summon-only session drawer that excludes out-of-session thoughts (7540.498792ms) +✔ think --browse reveals a structured session drawer with a visible start label and current-thought marker (7325.136084ms) +✔ think --json --browse emits explicit session context and session-nearby rows without mislabeling out-of-session thoughts (7640.469167ms) +✔ think --browse can move to the previous thought within the current session without leaving reader-first browse (5098.867625ms) +✔ think --browse keeps the current thought in place when there is no next thought in the current session (5013.36725ms) +✔ think --json --browse exposes explicit session traversal semantics without conflating them with chronology neighbors (5184.727791ms) +✔ think --inspect exposes exact raw entry metadata without narration (1756.683416ms) +✔ think --json --inspect emits JSONL for the exact raw entry metadata (1792.5215ms) +✔ think --inspect exposes additive capture provenance separately from the raw text (1752.892625ms) +✔ think --json --inspect includes additive capture provenance in the inspected entry payload (1701.171125ms) +✔ think --inspect exposes canonical content identity and direct derived receipts when they exist (3374.385541ms) +✔ think --json --inspect emits canonical content identity and direct derived receipt rows (3407.908625ms) +✔ think --inspect exposes the first derived bundle as explicit raw, canonical, derived, and context sections (5079.960458ms) +✔ think --json --inspect emits canonical identity plus seed-quality and session-attribution receipts with provenance (5100.93825ms) +✔ think --json --inspect keeps duplicate raw captures distinct while linking them to the same canonical thought (4232.899125ms) +✔ think --reflect starts an explicit seeded reflect session with a deterministic seed-first challenge prompt (6701.450125ms) +✔ removed brainstorm aliases fail clearly and point to reflect (2609.291166ms) +✔ think --reflect can use an explicit sharpen prompt family (2464.751708ms) +✔ think --reflect-session stores a separate derived entry with preserved seed-first lineage (6660.776417ms) +✔ think --reflect validates explicit session entry and stays read-only on invalid start (2533.493625ms) +✔ think --reflect fails clearly when the seed entry does not exist (260.316ms) +✔ think --reflect refuses status-like seeds that are not pressure-testable ideas (7363.383917ms) +✔ think --json --reflect refuses ineligible seeds with structured machine-readable errors (6661.00875ms) +✔ think --json --reflect emits only JSONL with seed-first session and prompt data (3815.944375ms) +✔ think --json --reflect-session emits only JSONL and preserves stored seed-first lineage (2818.459834ms) +✔ think --json reflect validation failures stay fully machine-readable (242.915292ms) +✔ think --stats prints total thoughts (5473.517542ms) +✔ think --stats does not bootstrap local state before the first capture (273.133166ms) +✔ think "stats" is captured as a thought rather than triggering the command (2859.886583ms) +✔ think --stats rejects an unexpected thought argument (269.052833ms) +✔ think stats supports --since filter (4065.725042ms) +✔ think --stats rejects an invalid --since value (263.203042ms) +✔ think stats supports --from and --to filters (6330.114208ms) +✔ think --stats rejects invalid absolute date filters (259.923625ms) +✔ think stats supports --bucket=day (6593.111167ms) +✔ think --stats --bucket=day includes a sparkline in text output (6044.610917ms) +✔ think --stats --bucket=day --json includes sparkline in stats.total event (5601.589041ms) +✔ think --stats without --bucket omits sparkline (1716.418209ms) +✔ think --stats rejects an invalid bucket value (241.585875ms) +ℹ tests 134 +ℹ suites 0 +ℹ pass 131 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 3 +ℹ duration_ms 184281.294125 + +``` + +## Drift Results + +```text +Playback-question drift found. +Scanned 1 active cycle, 8 playback questions, 0 test descriptions. +Search basis: exact normalized match in tests/**/*.test.* and tests/**/*.spec.* descriptions. + +docs/design/0062-auto-tags-stage/auto-tags-stage.md +- Human: After capturing a thought about "performance optimization", can I find it by querying topic:performance? + No exact normalized test description match found. +- Human: Do topics only become graph nodes after appearing in multiple thoughts (promotion threshold)? + No exact normalized test description match found. +- Agent: Does `extractTopics(text, corpus)` return relevant keywords without an LLM? + No exact normalized test description match found. +- Agent: Does the auto_tags stage create `about` edges from thoughts to topic nodes? + No exact normalized test description match found. +- Agent: Does a receipt artifact track what was extracted and when? + No exact normalized test description match found. +- Agent: Are candidate topics below the threshold stored on the receipt (not as graph nodes)? + No exact normalized test description match found. +- Agent: Does re-running the stage on the same thought produce the same result (idempotent)? + No exact normalized test description match found. +- Agent: Does a new CLI command (`--topics`) list all promoted topics? + No exact normalized test description match found. + +``` + +## Manual Verification + +- [x] Automated capture completed successfully. diff --git a/docs/method/retro/0063-semantic-parse-stage/semantic-parse-stage.md b/docs/method/retro/0063-semantic-parse-stage/semantic-parse-stage.md new file mode 100644 index 0000000..26aa541 --- /dev/null +++ b/docs/method/retro/0063-semantic-parse-stage/semantic-parse-stage.md @@ -0,0 +1,41 @@ +--- +title: "semantic_parse enrichment stage" +cycle: "0063-semantic-parse-stage" +design_doc: "docs/design/0063-semantic-parse-stage/semantic-parse-stage.md" +outcome: hill-met +drift_check: yes +--- + +# semantic_parse enrichment stage Retro + +## Summary + +Pattern-based multi-class thought classification. classifyThought() +matches 6 types (question, decision, observation, action_item, idea, +reference) + unclassified fallback. Enrichment pipeline creates +classified_as edges to standing classification nodes with receipt +artifacts. 10 new port tests. + +## Playback Witness + +- [verification.md](witness/verification.md) — automated test run. +- Port tests prove all 7 classification types, multi-class, markers. +- Evidence from test output only (no live data per policy). + +## Drift + +- None. + +## New Debt + +- No acceptance test for classified_as edges yet (acceptance tests + only cover auto_tags topics so far) +- No --questions / --decisions CLI query yet + +## Cool Ideas + +- None. + +## Backlog Maintenance + +- [x] Done diff --git a/docs/method/retro/0063-semantic-parse-stage/witness/verification.md b/docs/method/retro/0063-semantic-parse-stage/witness/verification.md new file mode 100644 index 0000000..2855b3f --- /dev/null +++ b/docs/method/retro/0063-semantic-parse-stage/witness/verification.md @@ -0,0 +1,288 @@ +--- +title: "Verification Witness for Cycle 63" +--- + +# Verification Witness for Cycle 63 + +This witness proves that `semantic_parse enrichment stage` now carries the required +behavior and adheres to the repo invariants. + +## Test Results + +```text + +> think@0.7.0 test +> npm run test:ports && npm run test:m1 + + +> think@0.7.0 test:ports +> node --test test/ports/*.test.js + +✔ extractTopics returns meaningful keywords from thought text (0.9725ms) +✔ extractTopics filters out stopwords (0.09025ms) +✔ extractTopics filters out short tokens (0.074042ms) +✔ extractTopics normalizes to lowercase (0.083666ms) +✔ extractTopics returns empty array for empty text (0.772417ms) +✔ extractTopics deduplicates repeated words (0.089334ms) +✔ extractTopics handles hyphenated terms (0.064708ms) +✔ BG_TOKEN is exported from style.js alongside the palette (0.789ms) +✔ windowed browse initializes with no drawer open (19.116542ms) +✔ saveRawCapture writes cwd receipts first and defers git enrichment to followthrough (1033.505375ms) +✔ capture provenance exports the canonical ingress set (1.555125ms) +✔ capture provenance trims source strings while preserving valid ingress and URL (0.169ms) +✔ capture provenance trims ingress strings before validation (0.080583ms) +✔ capture provenance rejects dangerous URL schemes (0.0805ms) +✔ capture provenance accepts safe URL schemes (0.1205ms) +✔ normalizeCaptureProvenance returns a frozen CaptureProvenance instance (0.0605ms) +✔ capture provenance reads and normalizes environment input (0.087125ms) +✔ METHOD docs use one consistent cycle-only release and README closeout policy (3.626458ms) +✔ MIND_ORCHESTRATION.md exists and is linked from GUIDE.md (3.106625ms) +✔ cycle 0006 retrospective restarts ordered numbering for the human playback section (0.968208ms) +✔ runDiagnostics reports ok for a healthy repo with entries (26.903917ms) +✔ runDiagnostics reports fail when think directory does not exist (0.79275ms) +✔ runDiagnostics reports fail when local repo has no git init (1.599292ms) +✔ runDiagnostics reports ok for upstream when reachable (22.909875ms) +✔ runDiagnostics reports warn for upstream when unreachable (28.322416ms) +✔ runDiagnostics reports skip for upstream when URL is set but no checker provided (27.803583ms) +✔ runDiagnostics reports skip for upstream when not configured (21.853667ms) +✔ runDiagnostics reports skip for upstream when configured without checker (20.496959ms) +✔ runDiagnostics includes all expected check names (20.559042ms) +✔ runDiagnostics reports graph model version when available (18.689917ms) +✔ runDiagnostics warns when graph model needs migration (17.769459ms) +✔ runDiagnostics reports entry count when available (17.099667ms) +✔ runDiagnostics warns when entry count is zero (17.153541ms) +✔ runDiagnostics skips graph and entry checks when no repo exists (0.160583ms) +✔ GRAPH_MODEL_VERSION is 4 (0.822959ms) +✔ CLASSIFICATIONS has 7 entries including unclassified (0.107042ms) +✔ PRODUCT_READ_LENS includes enrichment prefixes (0.077125ms) +✔ shared JSON helper canonicalizes object keys deterministically on parse and stringify (2.059042ms) +✔ discoverMinds finds all valid repos under the think directory (83.06075ms) +✔ discoverMinds ignores directories without git repos (26.573083ms) +✔ discoverMinds labels ~/.think/repo as "default" (22.84825ms) +✔ discoverMinds sorts with default first, then alphabetical (60.810458ms) +✔ discoverMinds returns empty array when think directory does not exist (0.539209ms) +✔ discoverMinds includes repoDir for each mind (16.979333ms) +✔ shaderForMind returns a deterministic index for a given name (0.155ms) +✔ shaderForMind returns different indices for different names (0.088791ms) +✔ shaderForMind stays within the shader count range (0.081583ms) +✔ shaderForMind throws when shaderCount is zero (0.328083ms) +✔ shaderForMind throws when shaderCount is negative (0.078167ms) +✔ shaderForMind handles single-character names (0.063709ms) +✔ createEntry returns an Entry instance (3.5225ms) +✔ Entry is frozen (0.20525ms) +✔ createEntry validates required fields (0.938834ms) +✔ createReflectSession returns a ReflectSession instance (0.153166ms) +✔ ReflectSession is frozen (0.082583ms) +✔ ENTRY_KINDS is a frozen array of valid kind strings (0.0575ms) +✔ BUCKET_PERIODS is a frozen array of valid bucket strings (0.210416ms) +✔ storesTextContent validates against ENTRY_KINDS (0.125375ms) +✔ classifyThought detects questions (1.98175ms) +✔ classifyThought detects decisions (0.79175ms) +✔ classifyThought detects observations (0.166167ms) +✔ classifyThought detects action items (0.096709ms) +✔ classifyThought detects ideas (0.068709ms) +✔ classifyThought detects references (0.074042ms) +✔ classifyThought returns unclassified when no pattern matches (1.065959ms) +✔ classifyThought supports multi-class (0.095792ms) +✔ classifyThought returns markers for each match (0.089334ms) +✔ classifyThought handles empty text (0.114334ms) +✔ selectLogo picks large mind logo when terminal is wide and tall enough (0.903125ms) +✔ selectLogo picks medium mind logo when terminal fits medium but not large (0.095167ms) +✔ selectLogo picks text logo when terminal is too small for mind (0.0605ms) +✔ selectLogo always returns something even for tiny terminals (0.056958ms) +✔ renderSplash contains the logo (0.1425ms) +✔ renderSplash contains the Enter prompt (0.064917ms) +✔ renderSplash output fits within the given dimensions (0.072792ms) +✔ splash.js does not export renderSplashView (dead code from RE-015 workaround) (0.049583ms) +✔ renderSplash centers the prompt horizontally (0.174792ms) +✔ windowed browse model initializes in windowed mode (0.196417ms) +✔ formatStats includes a sparkline when buckets are present (1.725458ms) +✔ formatStats omits sparkline when no buckets are present (0.0845ms) +✔ formatStats handles a single bucket without crashing (0.09775ms) +✔ formatStats handles empty bucket array without sparkline (0.06975ms) +✔ formatStats sparkline is oldest-to-newest (left-to-right) (0.087333ms) +ℹ tests 83 +ℹ suites 0 +ℹ pass 83 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 0 +ℹ duration_ms 1447.747208 + +> think@0.7.0 test:m1 +> node --test test/acceptance/*.test.js + +✔ think --annotate attaches a note to an existing capture (4185.642791ms) +✔ think --json --annotate emits structured annotation result (4042.209875ms) +✔ think --annotate rejects empty annotation text (1918.977083ms) +✔ think --annotate shows annotation in --inspect output (4957.120292ms) +✔ think --topics lists promoted topics after multiple captures share a keyword (7173.79625ms) +✔ think --json --topics emits JSONL topic list (6513.652417ms) +✔ think --doctor reports health of a repo with captures (3291.822583ms) +✔ think --doctor succeeds before the first capture (314.2885ms) +✔ think --json --doctor emits a structured health report (3301.153084ms) +✔ think --doctor rejects an unexpected thought argument (343.643583ms) +✔ new capture writes graph-native relationship edges while preserving compatibility properties (2202.863583ms) +✔ think --migrate-graph upgrades a version-1 property-linked repo additively (3776.962791ms) +✔ think --migrate-graph is idempotent and safe to rerun (3599.274125ms) +✔ capture on a version-1 repo still succeeds and only migrates after the raw local save (5788.056083ms) +✔ graph-native commands fail clearly on an outdated repo outside interactive use (4663.459792ms) +✔ interactive inspect on an outdated repo shows visible upgrade progress before continuing (3304.77325ms) +✔ interactive browse on an outdated repo shows visible upgrade progress before continuing (3143.288ms) +✔ think --json emits explicit graph migration required errors for outdated graph-native commands (2349.26225ms) +✔ think --migrate-graph upgrades a version-2 repo to graph model version 4 with browse, reflect, and enrichment nodes (7752.524917ms) +✔ think --json --inspect exposes direct reflect receipts that exist only through graph-native v3 edges (2712.072417ms) +✔ think --help prints top-level usage without bootstrapping local state (486.106916ms) +✔ think -h is accepted as a short alias for top-level help (314.657ms) +✔ think --recent --help prints recent help instead of running the command (309.994416ms) +✔ think --recent -h prints recent help instead of running the command (286.40175ms) +✔ think recent --help fails and points callers to the explicit flag form (289.334292ms) +✔ think --inspect --help bypasses required entry validation (321.735875ms) +✔ think --json --help emits structured JSONL help output (340.927292ms) +✔ think recent --json --help fails machine-readably instead of acting as shorthand help (300.280875ms) +✔ think -- -h captures the literal text after option parsing is terminated (3055.358375ms) +✔ think --ingest reads stdin explicitly and captures it into the normal raw-capture core (2950.374291ms) +✔ think with stdin but without --ingest does not accidentally capture piped input (333.490375ms) +✔ think --ingest rejects mixed positional capture text and stdin capture text (337.775292ms) +✔ think --json --ingest preserves machine-readable capture semantics for agents (3077.813333ms) +✔ think --ingest rejects empty stdin payloads (333.411333ms) +✔ think --json capture emits JSONL on stdout and keeps stderr quiet when there are no warnings (1934.375917ms) +✔ think --json --recent emits entry events instead of plain text (5962.064917ms) +✔ think --json --stats emits totals and bucket rows as JSONL (5371.526042ms) +✔ think --json validation failures emit JSONL on stderr instead of stdout (311.135334ms) +✔ think --json reports backup pending as a structured warning on stderr (1570.988166ms) +✔ think --json emits deterministically sorted keys in JSONL output (1907.96075ms) +✔ think MCP server lists the core Think tools (520.708375ms) +✔ think MCP capture, recent, browse, and inspect route through the existing Think runtime (3623.898209ms) +✔ think MCP capture preserves additive provenance separately from the raw text (2641.224667ms) +✔ think MCP capture trims additive provenance strings before persistence (2516.70775ms) +✔ think MCP remember, stats, and prompt_metrics expose structured read results (6161.19225ms) +✔ think MCP doctor tool returns structured health checks (2366.998458ms) +✔ CLI raw capture bootstraps the local repo and preserves exact text (2862.830584ms) +✔ think "recent" is captured as a thought rather than triggering the list (2812.354667ms) +✔ think --recent does not bootstrap local state before the first capture (308.009041ms) +✔ think --recent rejects an unexpected thought argument (318.023625ms) +✔ capture does not require retrieval-before-write or conceptual confirmation (4128.623083ms) +✔ THINK_REPO_DIR overrides the default local repo path (2507.923917ms) +✔ reachable upstream reports local save first and backup second (1351.114583ms) +✔ unreachable upstream keeps capture successful and reports backup pending (1344.450459ms) +✔ recent stays plain and chronological (6787.696458ms) +✔ capture is append-only across later capture activity (4088.971ms) +✔ duplicate thoughts produce distinct captures rather than deduping (4150.873667ms) +✔ empty input is rejected (268.466709ms) +✔ whitespace-only input is rejected (293.37575ms) +✔ capture preserves formatting neutrality for spacing, casing, and punctuation (2199.025833ms) +✔ default user language avoids Git terminology (1276.299125ms) +✔ verbose capture emits JSONL trace updates on stderr (1367.724584ms) +✔ raw entries remain immutable after later derived entries exist (0.140625ms) # TODO +✔ stored raw entry bytes remain unchanged in the local store after later writes (0.031459ms) # TODO +✔ entry kind separation remains explicit once the first derived-entry write path exists (0.0285ms) # TODO +✔ think --prompt-metrics prints factual prompt telemetry totals and medians (305.257167ms) +✔ think --prompt-metrics does not bootstrap local state before the first capture (317.773916ms) +✔ think --prompt-metrics supports --since filtering over prompt sessions (320.819459ms) +✔ think --prompt-metrics supports --bucket=day (352.576708ms) +✔ think --json --prompt-metrics emits explicit summary, timing, and bucket rows (356.757208ms) +✔ think --prompt-metrics rejects an unexpected thought argument (316.014875ms) +✔ think --prompt-metrics rejects invalid filter values (611.441333ms) +✔ think --recent --count limits output to the newest N raw captures (8552.80225ms) +✔ think --recent --query filters raw captures by case-insensitive text match (6825.704708ms) +✔ removed recent alias flags fail clearly and point to the scoped forms (1811.3435ms) +✔ think --json --recent applies count and query filters while remaining JSONL-only (6666.71375ms) +✔ think --remember uses the current project context to recall relevant prior thoughts (4767.642ms) +✔ think --remember with an explicit phrase recalls matching thoughts without turning into generic recent listing (6871.508125ms) +✔ think --json --remember emits explicit ambient scope and match receipts for agents (4050.245667ms) +✔ think --remember falls back honestly to textual project-token matching for entries without ambient project receipts (3966.350584ms) +✔ think --remember --limit returns only the top N matching thoughts in deterministic order (7552.181167ms) +✔ think --remember --brief returns a triage-friendly snippet instead of the full multiline thought (3745.377083ms) +✔ think --json --remember --brief --limit preserves bounded explicit recall receipts for agents (5638.788916ms) +✔ think --remember rejects invalid --limit values (1503.946417ms) +✔ think --browse shows one raw thought with its immediate newer and older neighbors (5848.356875ms) +✔ think --browse without an entry id fails clearly outside interactive TTY use and remains read-only (231.876334ms) +✔ think --json --browse without an entry id stays machine-readable and does not try to open the shell (232.682792ms) +✔ think --json --browse emits JSONL rows for the current raw thought and its neighbors (5987.628792ms) +✔ think --browse opens a reader-first browse TUI with metadata and no permanent recent rail (6496.599042ms) +✔ think --browse can reveal a chronology drawer on demand instead of showing the full log by default (5582.753375ms) +✔ think --browse can jump to another thought through a fuzzy jump surface (6207.076125ms) +✔ think --browse can reveal inspect receipts inside the scripted browse TUI (3686.530875ms) +✔ think --browse can hand the selected thought into reflect from the scripted browse TUI (3649.513208ms) +✔ think --browse surfaces session identity for the current thought without replacing the reader-first view (8354.971083ms) +✔ think --browse uses a short visible entry id in the reader-first metadata while inspect keeps the full exact id (7314.805041ms) +✔ think --browse can reveal a summon-only session drawer that excludes out-of-session thoughts (8778.462708ms) +✔ think --browse reveals a structured session drawer with a visible start label and current-thought marker (9068.90125ms) +✔ think --json --browse emits explicit session context and session-nearby rows without mislabeling out-of-session thoughts (8570.644667ms) +✔ think --browse can move to the previous thought within the current session without leaving reader-first browse (5485.240583ms) +✔ think --browse keeps the current thought in place when there is no next thought in the current session (5656.936084ms) +✔ think --json --browse exposes explicit session traversal semantics without conflating them with chronology neighbors (5687.6895ms) +✔ think --inspect exposes exact raw entry metadata without narration (1920.91225ms) +✔ think --json --inspect emits JSONL for the exact raw entry metadata (1874.84925ms) +✔ think --inspect exposes additive capture provenance separately from the raw text (1867.070958ms) +✔ think --json --inspect includes additive capture provenance in the inspected entry payload (2194.994416ms) +✔ think --inspect exposes canonical content identity and direct derived receipts when they exist (3787.036625ms) +✔ think --json --inspect emits canonical content identity and direct derived receipt rows (4034.116041ms) +✔ think --inspect exposes the first derived bundle as explicit raw, canonical, derived, and context sections (6239.512584ms) +✔ think --json --inspect emits canonical identity plus seed-quality and session-attribution receipts with provenance (5849.097458ms) +✔ think --json --inspect keeps duplicate raw captures distinct while linking them to the same canonical thought (4796.615459ms) +✔ think --reflect starts an explicit seeded reflect session with a deterministic seed-first challenge prompt (6152.412167ms) +✔ removed brainstorm aliases fail clearly and point to reflect (2609.4715ms) +✔ think --reflect can use an explicit sharpen prompt family (2409.147584ms) +✔ think --reflect-session stores a separate derived entry with preserved seed-first lineage (6878.675208ms) +✔ think --reflect validates explicit session entry and stays read-only on invalid start (2630.358667ms) +✔ think --reflect fails clearly when the seed entry does not exist (260.019417ms) +✔ think --reflect refuses status-like seeds that are not pressure-testable ideas (7704.979792ms) +✔ think --json --reflect refuses ineligible seeds with structured machine-readable errors (7200.004583ms) +✔ think --json --reflect emits only JSONL with seed-first session and prompt data (4093.666625ms) +✔ think --json --reflect-session emits only JSONL and preserves stored seed-first lineage (3027.721083ms) +✔ think --json reflect validation failures stay fully machine-readable (241.904375ms) +✔ think --stats prints total thoughts (5292.255042ms) +✔ think --stats does not bootstrap local state before the first capture (299.656209ms) +✔ think "stats" is captured as a thought rather than triggering the command (2965.983708ms) +✔ think --stats rejects an unexpected thought argument (279.244292ms) +✔ think stats supports --since filter (4170.246084ms) +✔ think --stats rejects an invalid --since value (256.720708ms) +✔ think stats supports --from and --to filters (6685.858125ms) +✔ think --stats rejects invalid absolute date filters (280.013542ms) +✔ think stats supports --bucket=day (6858.174792ms) +✔ think --stats --bucket=day includes a sparkline in text output (6522.862875ms) +✔ think --stats --bucket=day --json includes sparkline in stats.total event (5947.034417ms) +✔ think --stats without --bucket omits sparkline (1800.083583ms) +✔ think --stats rejects an invalid bucket value (252.057125ms) +ℹ tests 134 +ℹ suites 0 +ℹ pass 131 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 3 +ℹ duration_ms 198616.914292 + +``` + +## Drift Results + +```text +Playback-question drift found. +Scanned 1 active cycle, 7 playback questions, 0 test descriptions. +Search basis: exact normalized match in tests/**/*.test.* and tests/**/*.spec.* descriptions. + +docs/design/0063-semantic-parse-stage/semantic-parse-stage.md +- Human: After enriching, can I find all my questions? + No exact normalized test description match found. +- Human: Does a thought get multiple classifications when it matches multiple patterns? + No exact normalized test description match found. +- Agent: Does `classifyThought(text)` return correct types for questions, decisions, observations, action items, and ideas? + No exact normalized test description match found. +- Agent: Does a thought that matches no pattern get `unclassified`? + No exact normalized test description match found. +- Agent: Does the enrichment pipeline create `classified_as` edges? + No exact normalized test description match found. +- Agent: Does a receipt artifact track the classification result? + No exact normalized test description match found. +- Agent: Is the stage idempotent (re-run doesn't duplicate edges)? + No exact normalized test description match found. + +``` + +## Manual Verification + +- [x] Automated capture completed successfully. diff --git a/docs/method/retro/0065-eliminate-full-graph-materialization/eliminate-full-graph-materialization.md b/docs/method/retro/0065-eliminate-full-graph-materialization/eliminate-full-graph-materialization.md new file mode 100644 index 0000000..e85035c --- /dev/null +++ b/docs/method/retro/0065-eliminate-full-graph-materialization/eliminate-full-graph-materialization.md @@ -0,0 +1,35 @@ +--- +title: "Eliminate full graph materialization anti-pattern" +cycle: "0065-eliminate-full-graph-materialization" +design_doc: "docs/design/0065-eliminate-full-graph-materialization/eliminate-full-graph-materialization.md" +outcome: partial +drift_check: yes +--- + +# Eliminate full graph materialization anti-pattern Retro + +## Summary + +Rewrote migrations.js, enrichment/runner.js, and queries.js to use +worldline query API instead of getNodes()/getEdges(). Static analysis +test enforces zero full-materialization calls in src/. + +**Partial** because the v2→v4 migration test fails. The worldline +query API doesn't find edges correctly on repos that were downgraded +by the test fixture. Needs further investigation into how worldline +queries interact with edge-stripped repos. + +## Drift + +- v2 migration test broken — filed as remaining work. + +## New Debt + +- Fix v2→v4 migration test (worldline query on downgraded repos) +- The --recent default limit change was started but interrupted — + listRecent now returns { entries, total } but needs a follow-up + cycle to complete + +## Backlog Maintenance + +- [x] Done diff --git a/docs/method/retro/0065-eliminate-full-graph-materialization/witness/verification.md b/docs/method/retro/0065-eliminate-full-graph-materialization/witness/verification.md new file mode 100644 index 0000000..0e97caa --- /dev/null +++ b/docs/method/retro/0065-eliminate-full-graph-materialization/witness/verification.md @@ -0,0 +1,308 @@ +--- +title: "Verification Witness for Cycle 65" +--- + +# Verification Witness for Cycle 65 + +This witness proves that `Eliminate full graph materialization anti-pattern` now carries the required +behavior and adheres to the repo invariants. + +## Test Results + +```text + +> think@0.7.0 test +> npm run test:ports && npm run test:m1 + + +> think@0.7.0 test:ports +> node --test test/ports/*.test.js + +✔ extractTopics returns meaningful keywords from thought text (1.041ms) +✔ extractTopics filters out stopwords (0.100625ms) +✔ extractTopics filters out short tokens (0.077083ms) +✔ extractTopics normalizes to lowercase (0.091708ms) +✔ extractTopics returns empty array for empty text (0.922875ms) +✔ extractTopics deduplicates repeated words (0.087667ms) +✔ extractTopics handles hyphenated terms (0.069541ms) +✔ BG_TOKEN is exported from style.js alongside the palette (0.8175ms) +✔ windowed browse initializes with no drawer open (24.067875ms) +✔ saveRawCapture writes cwd receipts first and defers git enrichment to followthrough (1110.422958ms) +✔ capture provenance exports the canonical ingress set (1.607958ms) +✔ capture provenance trims source strings while preserving valid ingress and URL (0.165958ms) +✔ capture provenance trims ingress strings before validation (0.073042ms) +✔ capture provenance rejects dangerous URL schemes (0.079375ms) +✔ capture provenance accepts safe URL schemes (0.111542ms) +✔ normalizeCaptureProvenance returns a frozen CaptureProvenance instance (0.084625ms) +✔ capture provenance reads and normalizes environment input (0.081292ms) +✔ METHOD docs use one consistent cycle-only release and README closeout policy (3.656084ms) +✔ MIND_ORCHESTRATION.md exists and is linked from GUIDE.md (2.452334ms) +✔ cycle 0006 retrospective restarts ordered numbering for the human playback section (0.504917ms) +✔ runDiagnostics reports ok for a healthy repo with entries (36.707416ms) +✔ runDiagnostics reports fail when think directory does not exist (0.201041ms) +✔ runDiagnostics reports fail when local repo has no git init (3.523083ms) +✔ runDiagnostics reports ok for upstream when reachable (30.670875ms) +✔ runDiagnostics reports warn for upstream when unreachable (32.236167ms) +✔ runDiagnostics reports skip for upstream when URL is set but no checker provided (25.975166ms) +✔ runDiagnostics reports skip for upstream when not configured (24.891ms) +✔ runDiagnostics reports skip for upstream when configured without checker (20.194375ms) +✔ runDiagnostics includes all expected check names (21.235333ms) +✔ runDiagnostics reports graph model version when available (16.226459ms) +✔ runDiagnostics warns when graph model needs migration (17.412292ms) +✔ runDiagnostics reports entry count when available (18.030791ms) +✔ runDiagnostics warns when entry count is zero (16.85875ms) +✔ runDiagnostics skips graph and entry checks when no repo exists (0.205583ms) +✔ GRAPH_MODEL_VERSION is 4 (0.814875ms) +✔ CLASSIFICATIONS has 7 entries including unclassified (0.10475ms) +✔ PRODUCT_READ_LENS includes enrichment prefixes (0.069583ms) +✔ shared JSON helper canonicalizes object keys deterministically on parse and stringify (2.080583ms) +✔ discoverMinds finds all valid repos under the think directory (98.669208ms) +✔ discoverMinds ignores directories without git repos (27.496375ms) +✔ discoverMinds labels ~/.think/repo as "default" (24.522791ms) +✔ discoverMinds sorts with default first, then alphabetical (58.652667ms) +✔ discoverMinds returns empty array when think directory does not exist (0.14725ms) +✔ discoverMinds includes repoDir for each mind (16.311459ms) +✔ shaderForMind returns a deterministic index for a given name (0.1865ms) +✔ shaderForMind returns different indices for different names (0.158666ms) +✔ shaderForMind stays within the shader count range (0.083833ms) +✔ shaderForMind throws when shaderCount is zero (0.306542ms) +✔ shaderForMind throws when shaderCount is negative (0.092917ms) +✔ shaderForMind handles single-character names (0.064041ms) +✔ createEntry returns an Entry instance (5.906ms) +✔ Entry is frozen (0.123084ms) +✔ createEntry validates required fields (1.051834ms) +✔ createReflectSession returns a ReflectSession instance (0.680917ms) +✔ ReflectSession is frozen (0.129041ms) +✔ ENTRY_KINDS is a frozen array of valid kind strings (0.06775ms) +✔ BUCKET_PERIODS is a frozen array of valid bucket strings (0.059667ms) +✔ storesTextContent validates against ENTRY_KINDS (0.069958ms) +✔ no source file calls getNodes() or getEdges() for full graph materialization (24.072959ms) +✔ classifyThought detects questions (1.415875ms) +✔ classifyThought detects decisions (0.484541ms) +✔ classifyThought detects observations (0.231333ms) +✔ classifyThought detects action items (0.08125ms) +✔ classifyThought detects ideas (0.066458ms) +✔ classifyThought detects references (0.05525ms) +✔ classifyThought returns unclassified when no pattern matches (0.967709ms) +✔ classifyThought supports multi-class (0.09875ms) +✔ classifyThought returns markers for each match (0.094334ms) +✔ classifyThought handles empty text (0.117333ms) +✔ selectLogo picks large mind logo when terminal is wide and tall enough (0.903042ms) +✔ selectLogo picks medium mind logo when terminal fits medium but not large (0.099959ms) +✔ selectLogo picks text logo when terminal is too small for mind (0.060375ms) +✔ selectLogo always returns something even for tiny terminals (0.055583ms) +✔ renderSplash contains the logo (0.140458ms) +✔ renderSplash contains the Enter prompt (0.060958ms) +✔ renderSplash output fits within the given dimensions (0.069542ms) +✔ splash.js does not export renderSplashView (dead code from RE-015 workaround) (0.046291ms) +✔ renderSplash centers the prompt horizontally (0.154625ms) +✔ windowed browse model initializes in windowed mode (0.19225ms) +✔ formatStats includes a sparkline when buckets are present (1.667417ms) +✔ formatStats omits sparkline when no buckets are present (0.089792ms) +✔ formatStats handles a single bucket without crashing (0.091458ms) +✔ formatStats handles empty bucket array without sparkline (0.065292ms) +✔ formatStats sparkline is oldest-to-newest (left-to-right) (0.079334ms) +ℹ tests 84 +ℹ suites 0 +ℹ pass 84 +ℹ fail 0 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 0 +ℹ duration_ms 1509.722541 + +> think@0.7.0 test:m1 +> node --test test/acceptance/*.test.js + +✔ think --annotate attaches a note to an existing capture (4418.36525ms) +✔ think --json --annotate emits structured annotation result (4004.308417ms) +✔ think --annotate rejects empty annotation text (2033.3855ms) +✔ think --annotate shows annotation in --inspect output (5592.78625ms) +✔ think --topics lists promoted topics after multiple captures share a keyword (7287.982917ms) +✔ think --json --topics emits JSONL topic list (6957.6935ms) +✔ think --doctor reports health of a repo with captures (3561.8115ms) +✔ think --doctor succeeds before the first capture (319.993875ms) +✔ think --json --doctor emits a structured health report (3018.738917ms) +✔ think --doctor rejects an unexpected thought argument (310.651834ms) +✔ new capture writes graph-native relationship edges while preserving compatibility properties (2330.739333ms) +✔ think --migrate-graph upgrades a version-1 property-linked repo additively (3817.30325ms) +✔ think --migrate-graph is idempotent and safe to rerun (4095.006166ms) +✔ capture on a version-1 repo still succeeds and only migrates after the raw local save (6990.43925ms) +✔ graph-native commands fail clearly on an outdated repo outside interactive use (5155.457708ms) +✔ interactive inspect on an outdated repo shows visible upgrade progress before continuing (4326.677792ms) +✔ interactive browse on an outdated repo shows visible upgrade progress before continuing (5708.067083ms) +✔ think --json emits explicit graph migration required errors for outdated graph-native commands (3257.740625ms) +✖ think --migrate-graph upgrades a version-2 repo to graph model version 4 with browse, reflect, and enrichment nodes (9420.255792ms) +✔ think --json --inspect exposes direct reflect receipts that exist only through graph-native v3 edges (2878.938083ms) +✔ think --help prints top-level usage without bootstrapping local state (536.670666ms) +✔ think -h is accepted as a short alias for top-level help (332.463166ms) +✔ think --recent --help prints recent help instead of running the command (297.877375ms) +✔ think --recent -h prints recent help instead of running the command (322.006ms) +✔ think recent --help fails and points callers to the explicit flag form (339.566709ms) +✔ think --inspect --help bypasses required entry validation (282.974958ms) +✔ think --json --help emits structured JSONL help output (355.300667ms) +✔ think recent --json --help fails machine-readably instead of acting as shorthand help (301.496709ms) +✔ think -- -h captures the literal text after option parsing is terminated (2870.93825ms) +✔ think --ingest reads stdin explicitly and captures it into the normal raw-capture core (3228.771666ms) +✔ think with stdin but without --ingest does not accidentally capture piped input (342.467541ms) +✔ think --ingest rejects mixed positional capture text and stdin capture text (326.893833ms) +✔ think --json --ingest preserves machine-readable capture semantics for agents (2797.290291ms) +✔ think --ingest rejects empty stdin payloads (335.29175ms) +✔ think --json capture emits JSONL on stdout and keeps stderr quiet when there are no warnings (2104.384625ms) +✔ think --json --recent emits entry events instead of plain text (5779.579458ms) +✔ think --json --stats emits totals and bucket rows as JSONL (5953.092667ms) +✔ think --json validation failures emit JSONL on stderr instead of stdout (296.605125ms) +✔ think --json reports backup pending as a structured warning on stderr (1798.861875ms) +✔ think --json emits deterministically sorted keys in JSONL output (2067.145417ms) +✔ think MCP server lists the core Think tools (540.413583ms) +✔ think MCP capture, recent, browse, and inspect route through the existing Think runtime (3785.790917ms) +✔ think MCP capture preserves additive provenance separately from the raw text (2522.297417ms) +✔ think MCP capture trims additive provenance strings before persistence (2542.049792ms) +✔ think MCP remember, stats, and prompt_metrics expose structured read results (6861.669833ms) +✔ think MCP doctor tool returns structured health checks (2635.856709ms) +✔ CLI raw capture bootstraps the local repo and preserves exact text (2847.294916ms) +✔ think "recent" is captured as a thought rather than triggering the list (2821.456625ms) +✔ think --recent does not bootstrap local state before the first capture (304.9595ms) +✔ think --recent rejects an unexpected thought argument (318.27825ms) +✔ capture does not require retrieval-before-write or conceptual confirmation (4367.278916ms) +✔ THINK_REPO_DIR overrides the default local repo path (2826.130875ms) +✔ reachable upstream reports local save first and backup second (1583.876292ms) +✔ unreachable upstream keeps capture successful and reports backup pending (1484.882875ms) +✔ recent stays plain and chronological (7356.748125ms) +✔ capture is append-only across later capture activity (6559.978875ms) +✔ duplicate thoughts produce distinct captures rather than deduping (5844.55425ms) +✔ empty input is rejected (337.262625ms) +✔ whitespace-only input is rejected (305.115333ms) +✔ capture preserves formatting neutrality for spacing, casing, and punctuation (2300.662833ms) +✔ default user language avoids Git terminology (1391.914875ms) +✔ verbose capture emits JSONL trace updates on stderr (1530.892875ms) +✔ raw entries remain immutable after later derived entries exist (0.112208ms) # TODO +✔ stored raw entry bytes remain unchanged in the local store after later writes (0.025625ms) # TODO +✔ entry kind separation remains explicit once the first derived-entry write path exists (0.026958ms) # TODO +✔ think --prompt-metrics prints factual prompt telemetry totals and medians (324.737ms) +✔ think --prompt-metrics does not bootstrap local state before the first capture (292.2525ms) +✔ think --prompt-metrics supports --since filtering over prompt sessions (309.039709ms) +✔ think --prompt-metrics supports --bucket=day (314.759916ms) +✔ think --json --prompt-metrics emits explicit summary, timing, and bucket rows (399.059917ms) +✔ think --prompt-metrics rejects an unexpected thought argument (336.387958ms) +✔ think --prompt-metrics rejects invalid filter values (593.596625ms) +✔ think --recent --count limits output to the newest N raw captures (9422.235208ms) +✔ think --recent --query filters raw captures by case-insensitive text match (7555.279875ms) +✔ removed recent alias flags fail clearly and point to the scoped forms (1924.517167ms) +✔ think --json --recent applies count and query filters while remaining JSONL-only (10221.092334ms) +✔ think --remember uses the current project context to recall relevant prior thoughts (5720.037875ms) +✔ think --remember with an explicit phrase recalls matching thoughts without turning into generic recent listing (8172.20525ms) +✔ think --json --remember emits explicit ambient scope and match receipts for agents (4535.905ms) +✔ think --remember falls back honestly to textual project-token matching for entries without ambient project receipts (4077.079709ms) +✔ think --remember --limit returns only the top N matching thoughts in deterministic order (9452.162334ms) +✔ think --remember --brief returns a triage-friendly snippet instead of the full multiline thought (3593.013791ms) +✔ think --json --remember --brief --limit preserves bounded explicit recall receipts for agents (5515.763667ms) +✔ think --remember rejects invalid --limit values (1454.735959ms) +✔ think --browse shows one raw thought with its immediate newer and older neighbors (5500.4725ms) +✔ think --browse without an entry id fails clearly outside interactive TTY use and remains read-only (237.152917ms) +✔ think --json --browse without an entry id stays machine-readable and does not try to open the shell (234.77925ms) +✔ think --json --browse emits JSONL rows for the current raw thought and its neighbors (5454.234084ms) +✔ think --browse opens a reader-first browse TUI with metadata and no permanent recent rail (6314.63025ms) +✔ think --browse can reveal a chronology drawer on demand instead of showing the full log by default (5480.92625ms) +✔ think --browse can jump to another thought through a fuzzy jump surface (5318.957ms) +✔ think --browse can reveal inspect receipts inside the scripted browse TUI (3378.939ms) +✔ think --browse can hand the selected thought into reflect from the scripted browse TUI (3355.147041ms) +✔ think --browse surfaces session identity for the current thought without replacing the reader-first view (7462.126209ms) +✔ think --browse uses a short visible entry id in the reader-first metadata while inspect keeps the full exact id (6581.31375ms) +✔ think --browse can reveal a summon-only session drawer that excludes out-of-session thoughts (8406.636542ms) +✔ think --browse reveals a structured session drawer with a visible start label and current-thought marker (8130.426083ms) +✔ think --json --browse emits explicit session context and session-nearby rows without mislabeling out-of-session thoughts (7714.796417ms) +✔ think --browse can move to the previous thought within the current session without leaving reader-first browse (5486.972042ms) +✔ think --browse keeps the current thought in place when there is no next thought in the current session (5349.722959ms) +✔ think --json --browse exposes explicit session traversal semantics without conflating them with chronology neighbors (5494.200625ms) +✔ think --inspect exposes exact raw entry metadata without narration (1864.789917ms) +✔ think --json --inspect emits JSONL for the exact raw entry metadata (1893.963625ms) +✔ think --inspect exposes additive capture provenance separately from the raw text (1912.18625ms) +✔ think --json --inspect includes additive capture provenance in the inspected entry payload (1825.584708ms) +✔ think --inspect exposes canonical content identity and direct derived receipts when they exist (3554.914458ms) +✔ think --json --inspect emits canonical content identity and direct derived receipt rows (3528.252792ms) +✔ think --inspect exposes the first derived bundle as explicit raw, canonical, derived, and context sections (5553.89875ms) +✔ think --json --inspect emits canonical identity plus seed-quality and session-attribution receipts with provenance (5525.133917ms) +✔ think --json --inspect keeps duplicate raw captures distinct while linking them to the same canonical thought (4585.400416ms) +✔ think --recent defaults to a bounded window with total count (16425.604791ms) +✔ think --json --recent includes total count in done event (9572.825041ms) +✔ think --recent text output shows trailer when results are truncated (16045.298709ms) +✔ think --reflect starts an explicit seeded reflect session with a deterministic seed-first challenge prompt (6456.929125ms) +✔ removed brainstorm aliases fail clearly and point to reflect (2734.56175ms) +✔ think --reflect can use an explicit sharpen prompt family (2643.007166ms) +✔ think --reflect-session stores a separate derived entry with preserved seed-first lineage (8483.2995ms) +✔ think --reflect validates explicit session entry and stays read-only on invalid start (3707.198791ms) +✔ think --reflect fails clearly when the seed entry does not exist (302.288166ms) +✔ think --reflect refuses status-like seeds that are not pressure-testable ideas (9903.59575ms) +✔ think --json --reflect refuses ineligible seeds with structured machine-readable errors (8873.7515ms) +✔ think --json --reflect emits only JSONL with seed-first session and prompt data (4168.352208ms) +✔ think --json --reflect-session emits only JSONL and preserves stored seed-first lineage (3080.611959ms) +✔ think --json reflect validation failures stay fully machine-readable (271.463291ms) +✔ think --stats prints total thoughts (5036.939708ms) +✔ think --stats does not bootstrap local state before the first capture (268.4585ms) +✔ think "stats" is captured as a thought rather than triggering the command (3046.328042ms) +✔ think --stats rejects an unexpected thought argument (273.412916ms) +✔ think stats supports --since filter (5025.692833ms) +✔ think --stats rejects an invalid --since value (541.543833ms) +✔ think stats supports --from and --to filters (9792.137458ms) +✔ think --stats rejects invalid absolute date filters (292.5865ms) +✔ think stats supports --bucket=day (7672.639041ms) +✔ think --stats --bucket=day includes a sparkline in text output (7491.916084ms) +✔ think --stats --bucket=day --json includes sparkline in stats.total event (6101.384209ms) +✔ think --stats without --bucket omits sparkline (2325.401ms) +✔ think --stats rejects an invalid bucket value (295.255708ms) +ℹ tests 137 +ℹ suites 0 +ℹ pass 133 +ℹ fail 1 +ℹ cancelled 0 +ℹ skipped 0 +ℹ todo 3 +ℹ duration_ms 199079.763708 + +✖ failing tests: + +test at test/acceptance/graph-migration.test.js:354:1 +✖ think --migrate-graph upgrades a version-2 repo to graph model version 4 with browse, reflect, and enrichment nodes (9420.255792ms) + AssertionError [ERR_ASSERTION]: Expected graph model version 4 migration to add an explicit produced_in edge from reflect entry to its session. + at assertEdge (file://./test/acceptance/graph-migration.test.js:663:10) + at TestContext. (file://./test/acceptance/graph-migration.test.js:407:3) + at process.processTicksAndRejections (node:internal/process/task_queues:104:5) + at async Test.run (node:internal/test_runner/test:1208:7) + at async Test.processPendingSubtests (node:internal/test_runner/test:831:7) { + generatedMessage: false, + code: 'ERR_ASSERTION', + actual: false, + expected: true, + operator: '==', + diff: 'simple' + } + +``` + +## Drift Results + +```text +Playback-question drift found. +Scanned 1 active cycle, 6 playback questions, 0 test descriptions. +Search basis: exact normalized match in tests/**/*.test.* and tests/**/*.spec.* descriptions. + +docs/design/0065-eliminate-full-graph-materialization/eliminate-full-graph-materialization.md +- Human: Can the codex mind (317MB) capture without buffer errors? + No exact normalized test description match found. +- Agent: Does `grep -r 'getNodes\|getEdges' src/` return zero hits? + No exact normalized test description match found. +- Agent: Does migration use worldline queries instead of full scan? + No exact normalized test description match found. +- Agent: Does enrichment use worldline queries instead of full scan? + No exact normalized test description match found. +- Agent: Does annotation lookup use edge traversal instead of full scan? + No exact normalized test description match found. +- Agent: Do all existing tests pass? + No exact normalized test description match found. + +``` + +## Manual Verification + +- [x] Automated capture completed successfully. diff --git a/src/browse-benchmark.js b/src/browse-benchmark.js index 4def205..5615f78 100644 --- a/src/browse-benchmark.js +++ b/src/browse-benchmark.js @@ -1,27 +1,40 @@ import Plumbing from '@git-stunts/plumbing'; import WarpApp, { GitGraphAdapter } from '@git-stunts/git-warp'; +import { ValidationError } from './errors.js'; import { ensureGitRepo, hasGitRepo } from './git.js'; -import { GRAPH_NAME, loadBrowseChronologyEntries, prepareBrowseBootstrap as loadBrowseBootstrap } from './store.js'; +import { + GRAPH_NAME, + loadBrowseChronologyEntries, + prepareBrowseBootstrap as loadBrowseBootstrap, +} from './store.js'; +import { + ENTRY_PREFIX, + GRAPH_META_ID, + GRAPH_MODEL_VERSION, + SCHEMA_VERSION, + SESSION_PREFIX, + TEXT_MIME, +} from './store/constants.js'; +import { encodeTextContent } from './store/content.js'; const DEFAULT_START_TIME_MS = Date.parse('2026-03-20T16:00:00.000Z'); const WITHIN_SESSION_GAP_MS = 30 * 1000; const BETWEEN_SESSION_GAP_MS = 10 * 60 * 1000; -const TEXT_MIME = 'text/plain; charset=utf-8'; // eslint-disable-next-line require-await -- wraps store call that returns a promise (git-warp) export async function prepareBrowseBootstrap(repoDir) { if (!hasGitRepo(repoDir)) { - return { + return Object.freeze({ ok: false, reason: 'repo_missing', current: null, newer: null, older: null, sessionContext: null, - sessionEntries: [], - sessionSteps: [], - }; + sessionEntries: Object.freeze([]), + sessionSteps: Object.freeze([]), + }); } return loadBrowseBootstrap(repoDir); @@ -42,16 +55,16 @@ export async function createSyntheticBrowseFixture({ sessionCount = 10, } = {}) { if (!repoDir) { - throw new Error('repoDir is required'); + throw new ValidationError('repoDir is required'); } if (!Number.isInteger(captureCount) || captureCount <= 0) { - throw new Error('captureCount must be a positive integer'); + throw new ValidationError('captureCount must be a positive integer'); } if (!Number.isInteger(sessionCount) || sessionCount <= 0) { - throw new Error('sessionCount must be a positive integer'); + throw new ValidationError('sessionCount must be a positive integer'); } if (sessionCount > captureCount) { - throw new Error('sessionCount cannot exceed captureCount'); + throw new ValidationError('sessionCount cannot exceed captureCount'); } await ensureGitRepo(repoDir); @@ -76,14 +89,14 @@ export async function createSyntheticBrowseFixture({ timestampMs: currentMs, }); sessionStartSortKey ??= entry.sortKey; - entry.sessionId = `session:${sessionStartSortKey}`; + entry.sessionId = `${SESSION_PREFIX}${sessionStartSortKey}`; entries.push(entry); createdCaptures += 1; currentMs += WITHIN_SESSION_GAP_MS; } if (countInSession > 0) { - const sessionId = `session:${sessionStartSortKey}`; + const sessionId = `${SESSION_PREFIX}${sessionStartSortKey}`; entries.push({ type: 'session', id: sessionId, @@ -99,11 +112,11 @@ export async function createSyntheticBrowseFixture({ await graph.patch(async (patch) => { patch - .addNode('meta:graph') - .setProperty('meta:graph', 'kind', 'graph_meta') - .setProperty('meta:graph', 'createdAt', new Date(DEFAULT_START_TIME_MS).toISOString()) - .setProperty('meta:graph', 'updatedAt', new Date(currentMs).toISOString()) - .setProperty('meta:graph', 'graphModelVersion', 3); + .addNode(GRAPH_META_ID) + .setProperty(GRAPH_META_ID, 'kind', 'graph_meta') + .setProperty(GRAPH_META_ID, 'createdAt', new Date(DEFAULT_START_TIME_MS).toISOString()) + .setProperty(GRAPH_META_ID, 'updatedAt', new Date(currentMs).toISOString()) + .setProperty(GRAPH_META_ID, 'graphModelVersion', GRAPH_MODEL_VERSION); for (const item of entries) { if (item.type === 'session') { @@ -112,7 +125,7 @@ export async function createSyntheticBrowseFixture({ .setProperty(item.id, 'kind', 'session') .setProperty(item.id, 'createdAt', item.createdAt) .setProperty(item.id, 'sortKey', item.sortKey) - .setProperty(item.id, 'schemaVersion', '1'); + .setProperty(item.id, 'schemaVersion', SCHEMA_VERSION); continue; } @@ -129,7 +142,7 @@ export async function createSyntheticBrowseFixture({ patch.addEdge(item.id, item.sessionId, 'captured_in'); // eslint-disable-next-line no-await-in-loop -- sequential graph writes within a patch transaction - await patch.attachContent(item.id, item.text, { mime: TEXT_MIME }); + await patch.attachContent(item.id, encodeTextContent(item.text), { mime: TEXT_MIME }); } const captures = entries @@ -137,7 +150,7 @@ export async function createSyntheticBrowseFixture({ .sort((left, right) => right.sortKey.localeCompare(left.sortKey)); if (captures[0]) { - patch.addEdge('meta:graph', captures[0].id, 'latest_capture'); + patch.addEdge(GRAPH_META_ID, captures[0].id, 'latest_capture'); } for (let index = 0; index + 1 < captures.length; index += 1) { @@ -145,14 +158,14 @@ export async function createSyntheticBrowseFixture({ } }); - return { + return Object.freeze({ captureCount, sessionCount, startTimeMs: DEFAULT_START_TIME_MS, endTimeMs: currentMs, withinSessionGapMs: WITHIN_SESSION_GAP_MS, betweenSessionGapMs: BETWEEN_SESSION_GAP_MS, - }; + }); } function distributeCaptures(captureCount, sessionCount) { @@ -177,7 +190,7 @@ function createSyntheticEntry({ thoughtNumber, sessionNumber, captureNumberInSes return { type: 'capture', - id: `entry:${sortKey}`, + id: `${ENTRY_PREFIX}${sortKey}`, createdAt, sortKey, text: createSyntheticThought({ diff --git a/src/browse-tui/actions.js b/src/browse-tui/actions.js index cbffd6e..2a26333 100644 --- a/src/browse-tui/actions.js +++ b/src/browse-tui/actions.js @@ -284,6 +284,26 @@ export function applyBrowseAction(model, action) { effect: { type: 'switch_mind', mind: targetMind }, }; } + case 'toggle_perf_stream': { + const enabled = !model.perfStreamEnabled; + return { + model: { + ...model, + perfStreamEnabled: enabled, + notice: enabled ? 'Perf stream started' : 'Perf stream ended', + }, + effect: null, + }; + } + case 'theme_changed': { + return { + model: { + ...model, + notice: `Theme set to ${action.name}`, + }, + effect: null, + }; + } default: return { model, diff --git a/src/browse-tui/app.js b/src/browse-tui/app.js index 1963db8..722261a 100644 --- a/src/browse-tui/app.js +++ b/src/browse-tui/app.js @@ -1,7 +1,7 @@ import { createBijou } from '@flyingrobots/bijou'; import { nodeRuntime, nodeIO, chalkStyle } from '@flyingrobots/bijou-node'; import { createFramedApp, run } from '@flyingrobots/bijou-tui'; -import { thinkTheme } from './theme.js'; +import { thinkShellThemes, thinkTheme } from './theme.js'; import { selectLogo } from '../splash.js'; import { shaderFrame, compositeAndRender, buildLogoMask, buildInteriorMask, buildDistanceFromOutline, getShaderCount, getShaderName, BG } from '../splash-shader.js'; import { shaderForMind } from '../minds.js'; @@ -17,6 +17,7 @@ export async function runBrowseTui({ minds = [], activeMind = null, skipSplash = false, + handoffFromSplash = false, loadBrowseWindow = null, loadChronologyEntries = null, loadInspectEntry = null, @@ -33,6 +34,7 @@ export async function runBrowseTui({ // caller can re-bootstrap with the correct mind's data. const selectedMind = splashResult.mind; if (selectedMind && activeMind && selectedMind.repoDir !== activeMind.repoDir) { + restoreTerminalScreen(); return { type: 'switch_mind', mind: selectedMind }; } } @@ -63,6 +65,7 @@ export async function runBrowseTui({ const app = createFramedApp({ ctx, pages: [browsePage], + shellThemes: thinkShellThemes, keyPriority: 'page-first', bodyTopRows: 1, bodyBottomRows: 1, @@ -74,6 +77,11 @@ export async function runBrowseTui({ return resolveHelpLine(pageModel); }, overlayFactory: (overlayCtx) => buildBrowseOverlays(overlayCtx.pageModel, overlayCtx.screenRect, ctx), + onShellThemeChange: ({ shellTheme }) => { + if (modelRef.current) { + modelRef.current.notice = `Theme set to ${shellTheme.label}`; + } + }, observeKey: (msg, route) => { // Let the frame handle its own bindings (help, quit confirm, etc.) if (route === 'frame' || route === 'help' || route === 'palette') { @@ -87,12 +95,20 @@ export async function runBrowseTui({ // When splash ran, fade the browse content in from plum before // handing off to bijou. This avoids both the screen-clear flash // and the content pop. - const splashRan = !skipSplash; - if (splashRan) { - await fadeInBrowse(bootstrap, minds, activeMind); - } + const screenHeldFromSplash = !skipSplash || handoffFromSplash; + try { + if (screenHeldFromSplash) { + await fadeInBrowse(bootstrap, minds, activeMind); + } - await run(app, { ctx }); + await run(app, screenHeldFromSplash + ? { ctx, altScreen: false, hideCursor: false } + : { ctx }); + } finally { + if (screenHeldFromSplash) { + restoreTerminalScreen(); + } + } if (modelRef.current?.switchTarget) { return { type: 'switch_mind', mind: modelRef.current.switchTarget }; @@ -100,6 +116,12 @@ export async function runBrowseTui({ return { type: 'quit' }; } +function restoreTerminalScreen() { + process.stdout.write('\x1b[?25h'); + process.stdout.write('\x1b[?7h'); + process.stdout.write('\x1b[?1049l'); +} + const FADE_IN_DURATION_MS = 800; const FADE_IN_FRAME_MS = 50; @@ -158,7 +180,7 @@ function fadeInBrowse(bootstrap, minds, activeMind) { }); } -export function showSplash({ minds = [] } = {}) { +export function showSplash({ minds = [], closeOnEnter = false } = {}) { let cols = process.stdout.columns || 80; let rows = process.stdout.rows || 24; const startTime = Date.now(); @@ -176,6 +198,7 @@ export function showSplash({ minds = [] } = {}) { process.stdout.write('\x1b[?1049h'); // enter alt screen process.stdout.write('\x1b[?25l'); // hide cursor + process.stdout.write('\x1b[?7l'); // disable wrap process.stdout.write(`\x1b[48;2;${BG[0]};${BG[1]};${BG[2]}m`); process.stdout.write('\x1b[2J'); // clear screen @@ -250,13 +273,15 @@ export function showSplash({ minds = [] } = {}) { if (transition && transition.progress >= 1.0) { clearInterval(checkDone); cleanup(); + if (closeOnEnter) { + restoreTerminalScreen(); + } resolve({ action: 'enter', mind: minds[mindIndex] ?? null }); } }, 50); } else if (key === 113 || (key === 27 && data.length === 1)) { // q / Escape (not arrow seq) cleanup(); - process.stdout.write('\x1b[?25h'); - process.stdout.write('\x1b[?1049l'); + restoreTerminalScreen(); resolve({ action: 'quit' }); } else if (key === 9) { // Tab — next mind (or shader if single) if (multiMind) { diff --git a/src/browse-tui/keymap.js b/src/browse-tui/keymap.js index 89a6cd6..d25f562 100644 --- a/src/browse-tui/keymap.js +++ b/src/browse-tui/keymap.js @@ -21,5 +21,6 @@ export const browseKeymap = createKeyMap() .group('Actions', (group) => group .bind('r', 'Reflect', { type: 'reflect' }) .bind('m', 'Mind', { type: 'open_mind_switcher' }) + .bind('f10', 'Perf stream', { type: 'toggle_perf_stream' }) .bind('q', 'Quit', { type: 'quit' }) .bind('escape', 'Close/Quit', { type: 'escape' })); diff --git a/src/browse-tui/model.js b/src/browse-tui/model.js index 115a520..df452fd 100644 --- a/src/browse-tui/model.js +++ b/src/browse-tui/model.js @@ -25,11 +25,11 @@ export function createWindowedBrowseModel({ minds, activeMind, switchTarget: null, - browseStartMs: Date.now(), columns: process.stdout.columns ?? DEFAULT_COLUMNS, rows: process.stdout.rows ?? DEFAULT_ROWS, contentScrollY: 0, panelMode: 'none', + perfStreamEnabled: false, jumpPalette: createJumpPalette([]), previousPanelMode: 'none', notice: null, @@ -54,6 +54,7 @@ export function createBrowseModel({ entries, inspectCache, initialEntryId }) { columns: process.stdout.columns ?? DEFAULT_COLUMNS, rows: process.stdout.rows ?? DEFAULT_ROWS, contentScrollY: 0, + perfStreamEnabled: false, jumpPalette: createJumpPalette(entries), previousPanelMode: 'none', notice: null, diff --git a/src/browse-tui/overlays.js b/src/browse-tui/overlays.js index 6c0cb8c..50ed21e 100644 --- a/src/browse-tui/overlays.js +++ b/src/browse-tui/overlays.js @@ -1,4 +1,5 @@ -import { drawer, modal } from '@flyingrobots/bijou-tui'; +import { drawer, modal, toast } from '@flyingrobots/bijou-tui'; +import { perfOverlaySurface, surfaceToString } from '@flyingrobots/bijou'; import { BG_TOKEN } from './style.js'; import { resolveLayout, @@ -76,16 +77,36 @@ export function buildBrowseOverlays(model, screenRect, ctx) { })); } - // Transient notice overlay (session boundary, etc.) + // Transient notice overlay (session boundary, theme change, perf toggle, etc.) if (model.notice) { - const pad = 2; - const text = ` ${model.notice} `; - const noticeWidth = text.length + pad * 2; - const col = Math.max(0, Math.floor((screenRect.width - noticeWidth) / 2)); + overlays.push(toast({ + message: model.notice, + variant: 'info', + anchor: 'bottom-right', + screenWidth: screenRect.width, + screenHeight: screenRect.height, + ctx, + })); + } + + // Performance stream overlay + if (model.perfStreamEnabled) { + const stats = ctx?.perf?.getStats?.() ?? { + fps: 60, + frameTimeMs: 16.6, + frameTimeHistory: [], + width: screenRect.width, + height: screenRect.height, + }; + const surface = perfOverlaySurface(stats, { + ctx, + title: 'Perf Stream', + }); overlays.push({ - content: `╭${'─'.repeat(text.length)}╮\n│${text}│\n╰${'─'.repeat(text.length)}╯`, - row: 1, - col, + surface, + content: surfaceToString(surface, ctx.style), + row: 0, + col: screenRect.width - surface.width, }); } diff --git a/src/browse-tui/theme.js b/src/browse-tui/theme.js index 940adde..11da792 100644 --- a/src/browse-tui/theme.js +++ b/src/browse-tui/theme.js @@ -10,52 +10,63 @@ import { tv } from '@flyingrobots/bijou'; export const thinkTheme = { name: 'think', + label: 'Think Warm', status: { success: tv('#41b797'), - error: tv('#ed555d'), warning: tv('#eda126'), - info: tv('#fffcc9'), + error: tv('#ed555d'), + info: tv('#7b5770'), pending: tv('#7b5770', ['dim']), active: tv('#41b797'), - muted: tv('#7b5770', ['dim']), + muted: tv('#7b5770', ['dim', 'strikethrough']), }, semantic: { success: tv('#41b797'), - error: tv('#ed555d'), warning: tv('#eda126'), - info: tv('#fffcc9'), + error: tv('#ed555d'), + info: tv('#41b797'), accent: tv('#41b797'), - muted: tv('#7b5770'), + header: tv('#eda126'), + dim: tv('#7b5770'), + highlight: tv('#ed555d'), + muted: tv('#7b5770', ['dim']), primary: tv('#fffcc9', ['bold']), + text: tv('#fffcc9'), + bg: tv('#2d1922'), }, border: { - primary: tv('#7b5770'), - secondary: tv('#41b797'), + primary: tv('#41b797'), + secondary: tv('#eda126'), success: tv('#41b797'), warning: tv('#eda126'), error: tv('#ed555d'), - muted: tv('#5a3d4f'), - }, - - surface: { - primary: { hex: '#fffcc9', bg: '#2d1922' }, - secondary: { hex: '#fffcc9', bg: '#3a2230' }, - elevated: { hex: '#fffcc9', bg: '#46293a' }, - overlay: { hex: '#fffcc9', bg: '#2d1922' }, - muted: { hex: '#7b5770', bg: '#1e1018' }, + muted: tv('#7b5770'), }, ui: { + border: tv('#7b5770'), + selection: tv('#41b797'), + focus: tv('#eda126'), cursor: tv('#41b797'), - scrollThumb: tv('#7b5770'), - scrollTrack: tv('#3a2230'), + scrollThumb: tv('#41b797'), + scrollTrack: tv('#7b5770'), sectionHeader: tv('#eda126', ['bold']), - logo: tv('#41b797'), + logo: tv('#fffcc9'), tableHeader: tv('#fffcc9'), - trackEmpty: tv('#3a2230'), + trackEmpty: tv('#24141c'), + }, + + surface: { + primary: { hex: '#fffcc9', bg: '#2d1922' }, + secondary: { hex: '#fffcc9', bg: '#24141c' }, + elevated: { hex: '#fffcc9', bg: '#3a212b' }, + overlay: { hex: '#fffcc9', bg: '#3a212b' }, + muted: { hex: '#7b5770', bg: '#24141c' }, + panel: tv('#3a212b'), + well: tv('#24141c'), }, gradient: { @@ -70,3 +81,135 @@ export const thinkTheme = { ], }, }; + +export const matrixTheme = { + name: 'matrix', + label: 'Matrix', + + status: { + success: tv('#00ff41'), + warning: tv('#d1ffbd'), + error: tv('#ff4141'), + info: tv('#008f11'), + pending: tv('#003b00', ['dim']), + active: tv('#00ff41'), + muted: tv('#003b00', ['dim', 'strikethrough']), + }, + + semantic: { + success: tv('#00ff41'), + warning: tv('#d1ffbd'), + error: tv('#ff4141'), + info: tv('#008f11'), + accent: tv('#00ff41'), + header: tv('#008f11'), + dim: tv('#003b00'), + highlight: tv('#d1ffbd'), + muted: tv('#003b00', ['dim']), + primary: tv('#00ff41', ['bold']), + text: tv('#00ff41'), + bg: tv('#000000'), + }, + + border: { + primary: tv('#00ff41'), + secondary: tv('#008f11'), + success: tv('#00ff41'), + warning: tv('#d1ffbd'), + error: tv('#ff4141'), + muted: tv('#003b00'), + }, + + ui: { + border: tv('#008f11'), + selection: tv('#00ff41'), + focus: tv('#d1ffbd'), + cursor: tv('#00ff41'), + scrollThumb: tv('#00ff41'), + scrollTrack: tv('#003b00'), + sectionHeader: tv('#008f11', ['bold']), + logo: tv('#00ff41'), + tableHeader: tv('#00ff41'), + trackEmpty: tv('#000500'), + }, + + surface: { + primary: { hex: '#00ff41', bg: '#000000' }, + secondary: { hex: '#00ff41', bg: '#000500' }, + elevated: { hex: '#00ff41', bg: '#001100' }, + overlay: { hex: '#00ff41', bg: '#001100' }, + muted: { hex: '#003b00', bg: '#000500' }, + panel: tv('#001100'), + well: tv('#000500'), + }, +}; + +export const cyberTheme = { + name: 'cyber', + label: 'Cyberpunk', + + status: { + success: tv('#00ff9f'), + warning: tv('#fcee0a'), + error: tv('#ff003c'), + info: tv('#3d1a5d'), + pending: tv('#3d1a5d', ['dim']), + active: tv('#fcee0a'), + muted: tv('#3d1a5d', ['dim', 'strikethrough']), + }, + + semantic: { + success: tv('#00ff9f'), + warning: tv('#fcee0a'), + error: tv('#ff003c'), + info: tv('#00ff9f'), + accent: tv('#fcee0a'), + header: tv('#00ff9f'), + dim: tv('#3d1a5d'), + highlight: tv('#ff003c'), + muted: tv('#3d1a5d', ['dim']), + primary: tv('#00ff9f', ['bold']), + text: tv('#00ff9f'), + bg: tv('#050a0e'), + }, + + border: { + primary: tv('#00ff9f'), + secondary: tv('#fcee0a'), + success: tv('#00ff9f'), + warning: tv('#fcee0a'), + error: tv('#ff003c'), + muted: tv('#3d1a5d'), + }, + + ui: { + border: tv('#3d1a5d'), + selection: tv('#fcee0a'), + focus: tv('#ff003c'), + cursor: tv('#fcee0a'), + scrollThumb: tv('#fcee0a'), + scrollTrack: tv('#3d1a5d'), + sectionHeader: tv('#00ff9f', ['bold']), + logo: tv('#00ff9f'), + tableHeader: tv('#00ff9f'), + trackEmpty: tv('#0f0f1b'), + }, + + surface: { + primary: { hex: '#00ff9f', bg: '#050a0e' }, + secondary: { hex: '#00ff9f', bg: '#0f0f1b' }, + elevated: { hex: '#00ff9f', bg: '#1a1a2e' }, + overlay: { hex: '#00ff9f', bg: '#1a1a2e' }, + muted: { hex: '#3d1a5d', bg: '#0f0f1b' }, + panel: tv('#1a1a2e'), + well: tv('#0f0f1b'), + }, +}; + +export const thinkThemes = [thinkTheme, matrixTheme, cyberTheme]; + +export const thinkShellThemes = thinkThemes.map((theme) => Object.freeze({ + id: theme.name, + label: theme.label, + theme, +})); diff --git a/src/capture-provenance.js b/src/capture-provenance.js index 9b48ca8..dbc75c1 100644 --- a/src/capture-provenance.js +++ b/src/capture-provenance.js @@ -13,22 +13,29 @@ export function captureProvenanceFromEnvironment(environment = process.env) { }); } +export class CaptureProvenance { + constructor(ingress, sourceApp, sourceURL) { + this.ingress = ingress; + this.sourceApp = sourceApp; + this.sourceURL = sourceURL; + Object.freeze(this); + } +} + export function normalizeCaptureProvenance(provenance) { if (!provenance || typeof provenance !== 'object') { return null; } - const normalized = { - ingress: normalizeIngress(provenance.ingress), - sourceApp: normalizeString(provenance.sourceApp), - sourceURL: normalizeUrl(provenance.sourceURL), - }; + const ingress = normalizeIngress(provenance.ingress); + const sourceApp = normalizeString(provenance.sourceApp); + const sourceURL = normalizeUrl(provenance.sourceURL); - if (!normalized.ingress && !normalized.sourceApp && !normalized.sourceURL) { + if (!ingress && !sourceApp && !sourceURL) { return null; } - return normalized; + return new CaptureProvenance(ingress, sourceApp, sourceURL); } function normalizeIngress(value) { @@ -49,13 +56,19 @@ function normalizeString(value) { return trimmed === '' ? null : trimmed; } +const SAFE_URL_SCHEMES = new Set(['http:', 'https:']); + function normalizeUrl(value) { if (typeof value !== 'string' || value.trim() === '') { return null; } try { - return new URL(value).toString(); + const parsed = new URL(value); + if (!SAFE_URL_SCHEMES.has(parsed.protocol)) { + return null; + } + return parsed.toString(); } catch { return null; } diff --git a/src/cli.js b/src/cli.js index d211cc3..fc14302 100644 --- a/src/cli.js +++ b/src/cli.js @@ -1,7 +1,9 @@ +import { ThinkError } from './errors.js'; import { createVerboseReporter } from './verbose.js'; import { stringifyJson } from './json.js'; import { renderHelp } from './cli/help.js'; import { + COMMANDS, parseArgs, resolveCommand, resolveHelpTopic, @@ -10,13 +12,16 @@ import { import { createOutput, resolveJsonStream } from './cli/output.js'; import { runCapture, runIngest, runMigrateGraph } from './cli/commands/capture.js'; import { + runAnnotate, runBrowse, runDoctor, + runEnrich, runInspect, runPromptMetrics, runRecent, runRemember, runStats, + runTopics, } from './cli/commands/read.js'; import { runReflectReply, runReflectStart } from './cli/commands/reflect.js'; @@ -57,52 +62,55 @@ export async function main(argv, { stdout, stderr, stdin }) { return 0; } - let exitCode = 0; - if (command === 'recent') { - exitCode = await runRecent(output, reporter, options); - } else if (command === 'remember') { - exitCode = await runRemember(output, reporter, options); - } else if (command === 'browse') { - exitCode = await runBrowse(options.browse, output, reporter); - } else if (command === 'inspect') { - exitCode = await runInspect(options.inspect, output, reporter); - } else if (command === 'doctor') { - exitCode = await runDoctor(output, reporter); - } else if (command === 'migrate_graph') { - exitCode = await runMigrateGraph(output, reporter); - } else if (command === 'ingest') { - exitCode = await runIngest(stdin, output, reporter); - } else if (command === 'stats') { - exitCode = await runStats(output, reporter, options); - } else if (command === 'prompt_metrics') { - exitCode = await runPromptMetrics(output, reporter, options); - } else if (command === 'reflect_start') { - exitCode = await runReflectStart(options.reflect, output, reporter, { + const dispatch = { + [COMMANDS.ANNOTATE]: () => runAnnotate(options.annotate, options.positionals.join(' '), output, reporter), + [COMMANDS.ENRICH]: () => runEnrich(output, reporter), + [COMMANDS.TOPICS]: () => runTopics(output, reporter), + [COMMANDS.RECENT]: () => runRecent(output, reporter, options), + [COMMANDS.REMEMBER]: () => runRemember(output, reporter, options), + [COMMANDS.BROWSE]: () => runBrowse(options.browse, output, reporter), + [COMMANDS.INSPECT]: () => runInspect(options.inspect, output, reporter), + [COMMANDS.DOCTOR]: () => runDoctor(output, reporter), + [COMMANDS.MIGRATE_GRAPH]: () => runMigrateGraph(output, reporter), + [COMMANDS.INGEST]: () => runIngest(stdin, output, reporter), + [COMMANDS.STATS]: () => runStats(output, reporter, options), + [COMMANDS.PROMPT_METRICS]: () => runPromptMetrics(output, reporter, options), + [COMMANDS.REFLECT_START]: () => runReflectStart(options.reflect, output, reporter, { reflectMode: options.reflectMode, - }); - } else if (command === 'reflect_reply') { - exitCode = await runReflectReply( + }), + [COMMANDS.REFLECT_REPLY]: () => runReflectReply( options.reflectSession, options.positionals.join(' '), output, reporter - ); - } else { - const thought = options.positionals.length <= 1 - ? (options.positionals[0] ?? '') - : options.positionals.join(' '); - exitCode = await runCapture(thought, output, reporter); - } + ), + [COMMANDS.CAPTURE]: () => { + const thought = options.positionals.length <= 1 + ? (options.positionals[0] ?? '') + : options.positionals.join(' '); + if (!thought && stdin && !stdin.isTTY) { + stderr.write('Hint: piped input detected. Use --ingest to capture stdin.\n'); + } + return runCapture(thought, output, reporter); + }, + }; + + const handler = dispatch[command] ?? dispatch[COMMANDS.CAPTURE]; + const exitCode = await handler(); reporter.event(exitCode === 0 ? 'cli.success' : 'cli.failure', { command, exitCode }); return exitCode; } catch (error) { - reporter.event('cli.error', { - command, - message: error instanceof Error ? error.message : String(error), - }); - if (!options.json) { - output.error('Something went wrong'); + const message = error instanceof Error ? error.message : String(error); + const code = error instanceof ThinkError ? error.code : 'UNEXPECTED_ERROR'; + reporter.event('cli.error', { command, message, code }); + + if (error instanceof ThinkError) { + output.error(message, `cli.${code.toLowerCase()}`, { command }); + } else if (options.json) { + output.error(message, 'cli.unexpected_error', { command }); + } else { + output.error(`Something went wrong: ${message}`); } return 1; } diff --git a/src/cli/commands/capture.js b/src/cli/commands/capture.js index 77b2009..1c378d2 100644 --- a/src/cli/commands/capture.js +++ b/src/cli/commands/capture.js @@ -1,5 +1,6 @@ import { ensureGitRepo, hasGitRepo, pushWarpRefs } from '../../git.js'; import { captureProvenanceFromEnvironment } from '../../capture-provenance.js'; +import { getCaptureAmbientContext, getAmbientProjectContext } from '../../project-context.js'; import { getLocalRepoDir, getUpstreamUrl } from '../../paths.js'; import { finalizeCapturedThought, @@ -9,6 +10,9 @@ import { saveRawCapture, } from '../../store.js'; +const CAPTURE_FOLLOWTHROUGH_TIMEOUT_MS = 3_000; +const CAPTURE_FOLLOWTHROUGH_DEFERRED = Object.freeze({ status: 'deferred' }); + export async function runCapture(thought, output, reporter) { if (thought.trim() === '') { if (output.json) { @@ -34,9 +38,10 @@ export async function runCapture(thought, output, reporter) { migrationRequired: false, }; const provenance = captureProvenanceFromEnvironment(process.env); + const ambientContext = getCaptureAmbientContext(process.cwd()); reporter.event('capture.local_save.start'); - const entry = await saveRawCapture(repoDir, thought, { provenance }); + const entry = await saveRawCapture(repoDir, thought, { provenance, ambientContext }); reporter.event('capture.local_save.done', { entryId: entry.id }); output.out('Saved locally', 'capture.status', { @@ -55,9 +60,23 @@ export async function runCapture(thought, output, reporter) { }); } - const followthrough = await finalizeCapturedThought(repoDir, entry.id, { + const followthroughPromise = finalizeCapturedThought(repoDir, entry.id, { migrateIfNeeded: graphStatus.migrationRequired, + ambientContext: getAmbientProjectContext(process.cwd()), }); + const followthrough = graphStatus.migrationRequired + ? await followthroughPromise + : await waitForCaptureFollowthrough(followthroughPromise); + + if (followthrough === CAPTURE_FOLLOWTHROUGH_DEFERRED) { + reporter.event('capture.followthrough.deferred', { + command: 'capture', + trigger: 'post_capture', + entryId: entry.id, + timeoutMs: CAPTURE_FOLLOWTHROUGH_TIMEOUT_MS, + }); + return await runBackup(repoDir, output, reporter); + } if (graphStatus.migrationRequired) { reporter.event('graph.migration.done', { @@ -83,6 +102,10 @@ export async function runCapture(thought, output, reporter) { }); } + return await runBackup(repoDir, output, reporter); +} + +async function runBackup(repoDir, output, reporter) { const upstreamUrl = getUpstreamUrl(); if (!upstreamUrl) { reporter.event('backup.skipped'); @@ -98,6 +121,20 @@ export async function runCapture(thought, output, reporter) { return 0; } +async function waitForCaptureFollowthrough(followthroughPromise) { + let timeoutId = null; + const timeout = new Promise((resolve) => { + timeoutId = setTimeout(() => resolve(CAPTURE_FOLLOWTHROUGH_DEFERRED), CAPTURE_FOLLOWTHROUGH_TIMEOUT_MS); + timeoutId.unref?.(); + }); + + try { + return await Promise.race([followthroughPromise, timeout]); + } finally { + clearTimeout(timeoutId); + } +} + export async function runIngest(stdin, output, reporter) { const thought = await readStdinText(stdin); diff --git a/src/cli/commands/read.js b/src/cli/commands/read.js index 341d37f..2eb0fa8 100644 --- a/src/cli/commands/read.js +++ b/src/cli/commands/read.js @@ -1,5 +1,5 @@ import { parseJson } from '../../json.js'; -import { runBrowseTui } from '../../browse-tui/app.js'; +import { runBrowseTui, showSplash } from '../../browse-tui/app.js'; import { runBrowseTuiScript } from '../../browse-tui/script.js'; import { hasGitRepo, lsRemote } from '../../git.js'; import { discoverMinds } from '../../minds.js'; @@ -21,7 +21,9 @@ import { rememberThoughts, saveReflectResponse, startReflect, + saveAnnotation, } from '../../store.js'; +import { runEnrichmentPipeline, listTopics } from '../../store/enrichment/runner.js'; import { buildStatsSparkline } from '../../mcp/format.js'; import { shouldUseInteractiveBrowseShell } from '../environment.js'; import { runDiagnostics } from '../../doctor.js'; @@ -63,6 +65,81 @@ export async function runDoctor(output, reporter) { return 0; } +export async function runAnnotate(entryId, text, output, reporter) { + const repoDir = getLocalRepoDir(); + + if (!hasGitRepo(repoDir)) { + output.error('No local thought repo found', 'annotate.repo_not_found'); + return 1; + } + + reporter.event('annotate.start', { targetEntryId: entryId }); + + const result = await saveAnnotation(repoDir, entryId, text); + + output.out('Annotated', 'annotate.done', { + annotationId: result.annotationId, + targetEntryId: result.targetEntryId, + }); + + reporter.event('annotate.done', result); + return 0; +} + +export async function runEnrich(output, reporter) { + const repoDir = getLocalRepoDir(); + + if (!hasGitRepo(repoDir)) { + output.error('No local thought repo found', 'enrich.repo_not_found'); + return 1; + } + + reporter.event('enrich.start'); + const result = await runEnrichmentPipeline(repoDir); + + if (output.json) { + output.data('enrich.result', result); + } else { + const lines = [ + `Enriched ${result.capturesProcessed} captures`, + `Topics promoted: ${result.promotedTopics.length}`, + `Topic nodes created: ${result.topicNodesCreated}`, + `About edges added: ${result.aboutEdgesAdded}`, + ]; + output.out(lines.join('\n')); + } + + reporter.event('enrich.done', result); + return 0; +} + +export async function runTopics(output, reporter) { + const repoDir = getLocalRepoDir(); + + if (!hasGitRepo(repoDir)) { + output.error('No local thought repo found', 'topics.repo_not_found'); + return 1; + } + + reporter.event('topics.start'); + const topics = await listTopics(repoDir); + reporter.event('topics.done', { count: topics.length }); + + if (output.json) { + for (const topic of topics) { + output.data('topics.topic', topic); + } + } else if (topics.length === 0) { + output.out('No promoted topics yet. Capture more thoughts and run --enrich.'); + } else { + for (const topic of topics) { + output.out(`${topic.name} (${topic.thoughtCount} thoughts)`); + } + } + + return 0; +} + export async function runStats(output, reporter, options) { const repoDir = getLocalRepoDir(); @@ -180,11 +257,12 @@ export async function runRecent(output, reporter, options) { return 0; } - const entries = await listRecent(repoDir, { + const result = await listRecent(repoDir, { count: options.recentCount === null || options.recentCount === undefined ? null : Number(options.recentCount), query: options.recentQuery, }); - reporter.event('recent.done', { count: entries.length }); + const { entries, total } = result; + reporter.event('recent.done', { count: entries.length, total }); if (entries.length > 0) { if (output.json) { for (const [index, entry] of entries.entries()) { @@ -197,6 +275,9 @@ export async function runRecent(output, reporter, options) { } } else { output.out(entries.map(entry => entry.text).join('\n')); + if (entries.length < total) { + output.out(`(showing ${entries.length} of ${total} captures)`); + } } } @@ -489,28 +570,92 @@ async function runInteractiveBrowseShell(output, reporter) { let activeMind = minds[0]; let skipSplash = false; + let splashShown = false; + const browseLoads = new Map(); + const preloadMind = (mind) => { + const key = mind.repoDir; + if (!browseLoads.has(key)) { + browseLoads.set(key, beginInteractiveBrowseLoad(mind)); + } + return browseLoads.get(key); + }; + let pendingBrowseLoad = preloadMind(activeMind); while (true) { + const showInitialSplash = !splashShown; + if (showInitialSplash) { + // eslint-disable-next-line no-await-in-loop -- interactive splash selects the initial mind + const splashResult = await showSplash({ minds }); + if (splashResult.action === 'quit') { + return 0; + } + if (splashResult.mind && splashResult.mind.repoDir !== activeMind.repoDir) { + activeMind = splashResult.mind; + pendingBrowseLoad = preloadMind(activeMind); + } + splashShown = true; + skipSplash = true; + } + const { repoDir } = activeMind; + const screenHeldFromSplash = showInitialSplash; - if (!hasGitRepo(repoDir)) { + if (screenHeldFromSplash && !pendingBrowseLoad.isSettled()) { + renderBrowseOpeningFrame(activeMind); + } + // eslint-disable-next-line no-await-in-loop -- sequential mind-switch loop + const loaded = await pendingBrowseLoad.promise; + if (!loaded.ok && loaded.reason === 'repo_missing') { + if (screenHeldFromSplash) { + restoreInteractiveBrowseScreen(); + } output.error(`Mind "${activeMind.name}" has no thought repo`, 'browse.entry_not_found'); return 1; } + if (!loaded.ok) { + if (screenHeldFromSplash) { + restoreInteractiveBrowseScreen(); + } + throw loaded.error; + } + let { read, graphStatus, bootstrap } = loaded; + const migrationBreaksHandoff = graphStatus.migrationRequired; + const screenStillHeldFromSplash = screenHeldFromSplash && !migrationBreaksHandoff; - // eslint-disable-next-line no-await-in-loop -- sequential mind-switch loop - const read = await openProductReadHandle(repoDir); - // eslint-disable-next-line no-await-in-loop -- sequential mind-switch loop - const graphStatus = await getGraphModelStatusForRead(read); + if (migrationBreaksHandoff && screenHeldFromSplash) { + restoreInteractiveBrowseScreen(); + } // eslint-disable-next-line no-await-in-loop -- sequential mind-switch loop if (!await ensureGraphModelReadyFromStatus(repoDir, 'browse', graphStatus, output, reporter)) { return 1; } - // eslint-disable-next-line no-await-in-loop -- sequential mind-switch loop - const bootstrap = await prepareBrowseBootstrapForRead(read); + if (migrationBreaksHandoff) { + browseLoads.delete(repoDir); + pendingBrowseLoad = preloadMind(activeMind); + // eslint-disable-next-line no-await-in-loop -- graph migration changes the read basis + const reloaded = await pendingBrowseLoad.promise; + if (!reloaded.ok) { + if (screenStillHeldFromSplash) { + restoreInteractiveBrowseScreen(); + } + throw reloaded.error; + } + ({ read, bootstrap } = reloaded); + } + + if (!bootstrap) { + if (screenStillHeldFromSplash) { + restoreInteractiveBrowseScreen(); + } + output.error('Mind could not be prepared for browse', 'browse.entry_not_found'); + return 1; + } if (!bootstrap.ok) { + if (screenStillHeldFromSplash) { + restoreInteractiveBrowseScreen(); + } output.error(`Mind "${activeMind.name}" has no raw captures to browse`, 'browse.entry_not_found'); return 1; } @@ -524,6 +669,7 @@ async function runInteractiveBrowseShell(output, reporter) { minds, activeMind, skipSplash, + handoffFromSplash: screenStillHeldFromSplash, loadBrowseWindow: (thoughtEntryId) => getBrowseWindowForRead(read, thoughtEntryId), loadChronologyEntries: () => loadBrowseChronologyEntriesForRead(read), loadInspectEntry: (thoughtEntryId) => inspectRawEntryForRead(read, thoughtEntryId), @@ -568,6 +714,7 @@ async function runInteractiveBrowseShell(output, reporter) { if (tuiResult.type === 'switch_mind') { activeMind = tuiResult.mind; skipSplash = true; + pendingBrowseLoad = preloadMind(activeMind); continue; } @@ -577,6 +724,78 @@ async function runInteractiveBrowseShell(output, reporter) { return 0; } +function beginInteractiveBrowseLoad(mind) { + let settled = false; + const promise = loadInteractiveBrowseMind(mind).then( + (result) => { + settled = true; + return result; + }, + (error) => { + settled = true; + return { ok: false, reason: 'error', error }; + }, + ); + + return { + promise, + isSettled: () => settled, + }; +} + +async function loadInteractiveBrowseMind(mind) { + const { repoDir } = mind; + if (!hasGitRepo(repoDir)) { + return { ok: false, reason: 'repo_missing' }; + } + + const read = await openProductReadHandle(repoDir); + const graphStatus = await getGraphModelStatusForRead(read); + const bootstrap = graphStatus.migrationRequired + ? null + : await prepareBrowseBootstrapForRead(read); + + return { + ok: true, + read, + graphStatus, + bootstrap, + }; +} + +function renderBrowseOpeningFrame(mind) { + const cols = process.stdout.columns || 80; + const rows = process.stdout.rows || 24; + const bg = '\x1b[48;2;45;25;34m'; + const titleFg = '\x1b[38;2;255;252;201m'; + const dimFg = '\x1b[38;2;140;138;110m'; + const accentFg = '\x1b[38;2;65;183;151m'; + const title = `Opening mind "${mind.name}"`; + const subtitle = 'Preparing read view'; + const barWidth = Math.max(4, Math.min(32, cols - 4)); + const fillWidth = Math.floor(barWidth * 0.65); + const bar = `[${'='.repeat(fillWidth)}${' '.repeat(barWidth - fillWidth)}]`; + const centerY = Math.max(0, Math.floor(rows / 2) - 2); + + process.stdout.write(`${bg}\x1b[2J`); + writeCenteredLine(centerY, title, titleFg, cols); + writeCenteredLine(centerY + 2, bar, accentFg, cols); + writeCenteredLine(centerY + 4, subtitle, dimFg, cols); + process.stdout.write('\x1b[0m'); +} + +function restoreInteractiveBrowseScreen() { + process.stdout.write('\x1b[?25h'); + process.stdout.write('\x1b[?7h'); + process.stdout.write('\x1b[?1049l'); +} + +function writeCenteredLine(row, text, fg, cols) { + const fittedText = text.length > cols ? text.slice(0, cols) : text; + const column = Math.max(0, Math.floor((cols - fittedText.length) / 2)); + process.stdout.write(`\x1b[${row + 1};${column + 1}H${fg}${fittedText}`); +} + export async function runInspect(entryId, output, reporter) { const repoDir = getLocalRepoDir(); @@ -619,6 +838,11 @@ export async function runInspect(entryId, output, reporter) { if (entry.sessionAttribution) { output.data('inspect.receipt', entry.sessionAttribution); } + if (entry.annotations && entry.annotations.length > 0) { + for (const annotation of entry.annotations) { + output.data('inspect.annotation', annotation); + } + } return 0; } @@ -675,6 +899,13 @@ export async function runInspect(entryId, output, reporter) { lines.push('Why: Session attribution has not been materialized yet.'); } + if (entry.annotations && entry.annotations.length > 0) { + lines.push('Annotations'); + for (const annotation of entry.annotations) { + lines.push(`${annotation.createdAt}: ${annotation.text}`); + } + } + output.out(lines.join('\n')); return 0; } diff --git a/src/cli/commands/reflect.js b/src/cli/commands/reflect.js index 3f265ee..77b7700 100644 --- a/src/cli/commands/reflect.js +++ b/src/cli/commands/reflect.js @@ -10,6 +10,7 @@ import { } from '../../store.js'; import { ensureGraphModelReady } from '../graph-gate.js'; import { + capitalize, formatIneligibleSeedMessage, normalizeForPicker, pickReflectMode, @@ -239,12 +240,3 @@ async function suggestAlternativeReflectSeeds(repoDir, excludedSeedEntryId) { text: normalizeForPicker(entry.text), })); } - -function capitalize(value) { - const text = String(value || ''); - if (text.length === 0) { - return text; - } - - return text.charAt(0).toUpperCase() + text.slice(1); -} diff --git a/src/cli/environment.js b/src/cli/environment.js index f32d9a5..3722211 100644 --- a/src/cli/environment.js +++ b/src/cli/environment.js @@ -6,22 +6,26 @@ export function isInteractiveReflectAvailable() { return process.stdin.isTTY === true && process.stdout.isTTY === true; } +function isInteractiveShellAvailable(outputOrOptions) { + return !outputOrOptions.json && isInteractiveReflectAvailable(); +} + export function shouldUseInteractiveReflectShell(output) { - return !output.json && isInteractiveReflectAvailable(); + return isInteractiveShellAvailable(output); } export function shouldUseInteractiveBrowseShell(output) { - return !output.json && isInteractiveReflectAvailable(); + return isInteractiveShellAvailable(output); } export function canInteractivelyOpenBrowseShell(options) { - return !options.json && isInteractiveReflectAvailable(); + return isInteractiveShellAvailable(options); } export function canInteractivelyPickReflectSeed(options) { - return !options.json && isInteractiveReflectAvailable(); + return isInteractiveShellAvailable(options); } export function canInteractivelyOfferGraphMigration(output) { - return !output.json && isInteractiveReflectAvailable(); + return isInteractiveShellAvailable(output); } diff --git a/src/cli/graph-gate.js b/src/cli/graph-gate.js index fff36d7..881ee8c 100644 --- a/src/cli/graph-gate.js +++ b/src/cli/graph-gate.js @@ -54,6 +54,7 @@ export async function ensureGraphModelReadyFromStatus(repoDir, command, status, output.error('Graph migration required. Run think --migrate-graph.', 'graph.migration_required', { command, + remediation: 'think --migrate-graph', ...status, }); return false; diff --git a/src/cli/help.js b/src/cli/help.js index f610b22..6e85549 100644 --- a/src/cli/help.js +++ b/src/cli/help.js @@ -1,4 +1,4 @@ -const HELP_TEXT = { +const HELP_TEXT = Object.freeze({ general: [ 'Usage: think "raw thought"', ' think --ingest', @@ -8,6 +8,9 @@ const HELP_TEXT = { ' think --prompt-metrics [--since=DURATION] [--from=DATE] [--to=DATE] [--bucket=hour|day|week]', ' think --browse[=]', ' think --inspect=', + ' think --annotate= ', + ' think --enrich', + ' think --topics', ' think --reflect[=] [--mode=challenge|constraint|sharpen]', ' think --reflect-session= ', ' think --migrate-graph', @@ -21,6 +24,9 @@ const HELP_TEXT = { 'Command help:', ' think --recent --help', ' think --inspect -h', + ' think --annotate --help', + ' think --enrich --help', + ' think --topics -h', '', 'To capture text that starts with a dash, end option parsing first:', ' think -- "-h"', @@ -82,6 +88,21 @@ const HELP_TEXT = { '', 'Show exact metadata and derived receipts for a single capture.', ].join('\n'), + annotate: [ + 'Usage: think --annotate= ', + '', + 'Attach a note to an existing capture.', + ].join('\n'), + enrich: [ + 'Usage: think --enrich', + '', + 'Extract keyword, topic, and classification receipts from captured thoughts.', + ].join('\n'), + topics: [ + 'Usage: think --topics', + '', + 'List promoted topics produced by enrichment.', + ].join('\n'), reflect: [ 'Usage: think --reflect', ' think --reflect= [--mode=challenge|constraint|sharpen]', @@ -108,12 +129,12 @@ const HELP_TEXT = { 'Reports think directory, local repo, graph model version,', 'entry count, and upstream reachability.', ].join('\n'), -}; +}); export function renderHelp(topic) { const resolvedTopic = HELP_TEXT[topic] ? topic : 'general'; - return { + return Object.freeze({ topic: resolvedTopic, message: HELP_TEXT[resolvedTopic], - }; + }); } diff --git a/src/cli/interactive.js b/src/cli/interactive.js index 23c941b..cd3325a 100644 --- a/src/cli/interactive.js +++ b/src/cli/interactive.js @@ -153,7 +153,7 @@ function renderProgressBar(progress) { return `[${'#'.repeat(filled)}${'-'.repeat(width - filled)}]`; } -function capitalize(value) { +export function capitalize(value) { const text = String(value || ''); if (text.length === 0) { return text; diff --git a/src/cli/options.js b/src/cli/options.js index a647665..c90b3cb 100644 --- a/src/cli/options.js +++ b/src/cli/options.js @@ -1,4 +1,22 @@ import { REFLECT_PROMPT_TYPES } from '../store.js'; + +export const COMMANDS = Object.freeze({ + ANNOTATE: 'annotate', + CAPTURE: 'capture', + ENRICH: 'enrich', + RECENT: 'recent', + REMEMBER: 'remember', + STATS: 'stats', + TOPICS: 'topics', + PROMPT_METRICS: 'prompt_metrics', + BROWSE: 'browse', + INSPECT: 'inspect', + DOCTOR: 'doctor', + MIGRATE_GRAPH: 'migrate_graph', + INGEST: 'ingest', + REFLECT_START: 'reflect_start', + REFLECT_REPLY: 'reflect_reply', +}); import { canInteractivelyOpenBrowseShell, canInteractivelyPickReflectSeed, @@ -15,6 +33,10 @@ export function parseArgs(args) { recent: false, remember: false, ingest: false, + annotateFlag: false, + annotate: null, + enrich: false, + topics: false, reflectFlag: false, reflect: null, reflectMode: null, @@ -82,9 +104,19 @@ export function parseArgs(args) { } else if (arg === '--inspect') { options.inspectFlag = true; options.inspect = ''; + } else if (arg === '--annotate') { + options.annotateFlag = true; + options.annotate = ''; } else if (arg.startsWith('--inspect=')) { options.inspectFlag = true; options.inspect = arg.slice('--inspect='.length); + } else if (arg.startsWith('--annotate=')) { + options.annotateFlag = true; + options.annotate = arg.slice('--annotate='.length); + } else if (arg === '--enrich') { + options.enrich = true; + } else if (arg === '--topics') { + options.topics = true; } else if (arg === '--migrate-graph') { options.migrateGraph = true; } else if (arg === '--doctor') { @@ -137,47 +169,28 @@ export function parseArgs(args) { positionals.push(arg); } - return { + return Object.freeze({ ...options, - positionals, - }; + positionals: Object.freeze(positionals), + }); } export function resolveCommand(options) { - if (options.reflectSessionFlag) { - return 'reflect_reply'; - } - if (options.reflectFlag) { - return 'reflect_start'; - } - if (options.browseFlag) { - return 'browse'; - } - if (options.inspectFlag) { - return 'inspect'; - } - if (options.doctor) { - return 'doctor'; - } - if (options.migrateGraph) { - return 'migrate_graph'; - } - if (options.ingest) { - return 'ingest'; - } - if (options.remember) { - return 'remember'; - } - if (options.stats) { - return 'stats'; - } - if (options.promptMetrics) { - return 'prompt_metrics'; - } - if (options.recent) { - return 'recent'; - } - return 'capture'; + if (options.reflectSessionFlag) { return COMMANDS.REFLECT_REPLY; } + if (options.reflectFlag) { return COMMANDS.REFLECT_START; } + if (options.browseFlag) { return COMMANDS.BROWSE; } + if (options.inspectFlag) { return COMMANDS.INSPECT; } + if (options.annotateFlag) { return COMMANDS.ANNOTATE; } + if (options.enrich) { return COMMANDS.ENRICH; } + if (options.topics) { return COMMANDS.TOPICS; } + if (options.doctor) { return COMMANDS.DOCTOR; } + if (options.migrateGraph) { return COMMANDS.MIGRATE_GRAPH; } + if (options.ingest) { return COMMANDS.INGEST; } + if (options.remember) { return COMMANDS.REMEMBER; } + if (options.stats) { return COMMANDS.STATS; } + if (options.promptMetrics) { return COMMANDS.PROMPT_METRICS; } + if (options.recent) { return COMMANDS.RECENT; } + return COMMANDS.CAPTURE; } export function validateOptions(options, command) { @@ -279,6 +292,23 @@ export function validateOptions(options, command) { return '--prompt-metrics does not take a thought'; } + if (command === 'annotate') { + if (!options.annotate) { + return '--annotate requires an entry id'; + } + if (options.positionals.length === 0) { + return '--annotate requires annotation text'; + } + } + + if (command === 'enrich' && options.positionals.length > 0) { + return '--enrich does not take a thought'; + } + + if (command === 'topics' && options.positionals.length > 0) { + return '--topics does not take a thought'; + } + if (command === 'reflect_start') { if (options.reflectMode && !REFLECT_PROMPT_TYPES.includes(options.reflectMode)) { return 'Invalid --mode value'; @@ -350,6 +380,9 @@ export function countExplicitCommands(options) { options.stats, options.reflectFlag, options.reflectSessionFlag, + options.annotateFlag, + options.enrich, + options.topics, ].filter(Boolean).length; } diff --git a/src/cli/output.js b/src/cli/output.js index fd44abb..ca7f42d 100644 --- a/src/cli/output.js +++ b/src/cli/output.js @@ -1,36 +1,46 @@ -export function createOutput({ stdout, stderr, reporter, json }) { - return { - json, - out(message, eventName, data = {}) { - if (json) { - reporter.event(eventName ?? 'cli.output', { - ...data, - message, - }); - return; - } +export class CliOutput { + constructor({ stdout, stderr, reporter, json }) { + this.json = json; + this._stdout = stdout; + this._stderr = stderr; + this._reporter = reporter; + } + + out(message, eventName, data = {}) { + if (this.json) { + this._reporter.event(eventName ?? 'cli.output', { + ...data, + message, + }); + return; + } + + this._stdout.write(message.endsWith('\n') ? message : `${message}\n`); + } + + error(message, eventName, data = {}) { + if (this.json) { + this._reporter.event(eventName ?? 'cli.error_output', { + ...data, + message, + }); + return; + } - stdout.write(message.endsWith('\n') ? message : `${message}\n`); - }, - error(message, eventName, data = {}) { - if (json) { - reporter.event(eventName ?? 'cli.error_output', { - ...data, - message, - }); - return; - } + this._stderr.write(message.endsWith('\n') ? message : `${message}\n`); + } - stderr.write(message.endsWith('\n') ? message : `${message}\n`); - }, - data(eventName, data = {}) { - if (!json) { - return; - } + data(eventName, data = {}) { + if (!this.json) { + return; + } - reporter.event(eventName, data); - }, - }; + this._reporter.event(eventName, data); + } +} + +export function createOutput(options) { + return new CliOutput(options); } export function writeShellBlock(content, output) { @@ -41,32 +51,32 @@ export function writeShellBlock(content, output) { output.out(content); } +const STDERR_EVENTS = Object.freeze([ + 'cli.validation_failed', + 'cli.failure', + 'cli.error', + 'capture.validation_failed', + 'backup.pending', + 'backup.failure', + 'backup.timeout', + 'backup.retry', + 'reflect.validation_failed', + 'reflect.seed_not_found', + 'reflect.seed_ineligible', + 'reflect.session_not_found', + 'graph.migration_required', + 'graph.migration_cancelled', + 'graph.migration.failed', + 'browse.entry_not_found', + 'inspect.entry_not_found', +]); + export function resolveJsonStream(payload) { if (payload.event === 'backup.status' && payload.status === 'pending') { return 'stderr'; } - if ( - [ - 'cli.validation_failed', - 'cli.failure', - 'cli.error', - 'capture.validation_failed', - 'backup.pending', - 'backup.failure', - 'backup.timeout', - 'backup.retry', - 'reflect.validation_failed', - 'reflect.seed_not_found', - 'reflect.seed_ineligible', - 'reflect.session_not_found', - 'graph.migration_required', - 'graph.migration_cancelled', - 'graph.migration.failed', - 'browse.entry_not_found', - 'inspect.entry_not_found', - ].includes(payload.event) - ) { + if (STDERR_EVENTS.includes(payload.event)) { return 'stderr'; } diff --git a/src/doctor.js b/src/doctor.js index 8a1de71..63f7239 100644 --- a/src/doctor.js +++ b/src/doctor.js @@ -92,7 +92,7 @@ async function checkUpstream(upstreamUrl, checkUpstreamReachable) { } if (!checkUpstreamReachable) { - return { name: 'upstream', status: 'ok', message: `Upstream configured (${upstreamUrl})` }; + return { name: 'upstream', status: 'skip', message: `Upstream configured but not verified (${upstreamUrl})` }; } try { diff --git a/src/errors.js b/src/errors.js new file mode 100644 index 0000000..ecd5904 --- /dev/null +++ b/src/errors.js @@ -0,0 +1,42 @@ +/** + * Think error taxonomy. + * + * Named error classes for cross-surface failures so CLI, MCP, and + * store paths report the same truth consistently. + */ + +export class ThinkError extends Error { + constructor(message, code) { + super(message); + this.name = 'ThinkError'; + this.code = code; + } +} + +export class ValidationError extends ThinkError { + constructor(message) { + super(message, 'VALIDATION_ERROR'); + this.name = 'ValidationError'; + } +} + +export class NotFoundError extends ThinkError { + constructor(message) { + super(message, 'NOT_FOUND'); + this.name = 'NotFoundError'; + } +} + +export class GraphError extends ThinkError { + constructor(message) { + super(message, 'GRAPH_ERROR'); + this.name = 'GraphError'; + } +} + +export class CaptureError extends ThinkError { + constructor(message) { + super(message, 'CAPTURE_ERROR'); + this.name = 'CaptureError'; + } +} diff --git a/src/git.js b/src/git.js index 3e8ff7c..62332fe 100644 --- a/src/git.js +++ b/src/git.js @@ -1,12 +1,23 @@ import { existsSync } from 'node:fs'; import { mkdir } from 'node:fs/promises'; import path from 'node:path'; -import { spawn, spawnSync } from 'node:child_process'; +import { execSync, spawn, spawnSync } from 'node:child_process'; import { TimeoutError } from '@git-stunts/alfred'; +import { ThinkError } from './errors.js'; import { createPushPolicy } from './policies.js'; +function resolveGitBinary() { + try { + return execSync('which git', { encoding: 'utf8', stdio: ['ignore', 'pipe', 'ignore'] }).trim() || 'git'; + } catch { + return 'git'; + } +} + +export const GIT_BINARY = resolveGitBinary(); + const DEFAULT_GIT_ENV = { GIT_AUTHOR_NAME: 'think', GIT_AUTHOR_EMAIL: 'think@local.invalid', @@ -74,7 +85,7 @@ export function hasGitRepo(repoDir) { const LS_REMOTE_TIMEOUT_MS = 5000; export function lsRemote(upstreamUrl) { - const result = spawnSync('git', ['ls-remote', '--exit-code', upstreamUrl], { + const result = spawnSync(GIT_BINARY, ['ls-remote', '--exit-code', upstreamUrl], { env: { ...process.env, ...NON_INTERACTIVE_PUSH_ENV, @@ -88,7 +99,7 @@ export function lsRemote(upstreamUrl) { function runGitPush(repoDir, upstreamUrl, graphName, signal) { return new Promise((resolve, reject) => { const child = spawn( - 'git', + GIT_BINARY, ['-C', repoDir, 'push', '--porcelain', upstreamUrl, `refs/warp/${graphName}/*:refs/warp/${graphName}/*`], { env: { @@ -197,7 +208,7 @@ function buildPushError(message, details = {}) { } function runGit(args, options = {}) { - const result = spawnSync('git', args, { + const result = spawnSync(GIT_BINARY, args, { encoding: 'utf8', env: { ...process.env, @@ -207,7 +218,7 @@ function runGit(args, options = {}) { }); if (result.status !== 0) { - const error = new Error(`git command failed: git ${args.join(' ')}`); + const error = new ThinkError(`git command failed: git ${args.join(' ')}`, 'GIT_COMMAND_FAILED'); error.result = result; throw error; } diff --git a/src/mcp/format.js b/src/mcp/format.js index 5c9db4f..80c6baa 100644 --- a/src/mcp/format.js +++ b/src/mcp/format.js @@ -117,9 +117,11 @@ export function formatStats(statsResult) { ctx, }))); - const values = statsResult.buckets.map((b) => b.count).reverse(); - lines.push(''); - lines.push(`Capture frequency: ${sparkline(values)}`); + const spark = buildStatsSparkline(statsResult.buckets); + if (spark) { + lines.push(''); + lines.push(`Capture frequency: ${spark}`); + } } return lines.join('\n'); diff --git a/src/mcp/result.js b/src/mcp/result.js index c856409..be4d731 100644 --- a/src/mcp/result.js +++ b/src/mcp/result.js @@ -1,16 +1,89 @@ import { stringifyJson } from '../json.js'; -export function toToolResult(structuredContent, richText = null) { - const content = []; +/** + * Base class for all MCP tool outcomes. + * Adheres to Infrastructure Doctrine P3. + */ +export class McpOutcome { + /** + * @param {Object} structuredContent + * @param {string|null} richText + */ + constructor(structuredContent, richText = null) { + this.structuredContent = structuredContent; + this.richText = richText; + Object.freeze(this); + } + + /** + * Format the outcome for the Model Context Protocol. + */ + toToolResult() { + const content = []; + + if (this.richText) { + content.push({ type: 'text', text: this.richText }); + } + + content.push({ type: 'text', text: stringifyJson(this.structuredContent) }); + + return Object.freeze({ + content: Object.freeze(content), + structuredContent: this.structuredContent, + }); + } +} + +export class CaptureOutcome extends McpOutcome { + constructor(data) { + const richText = `Thought captured: ${data.entryId}`; + super(data, richText); + } +} + +export class BrowseOutcome extends McpOutcome {} + +export class RecentThoughtsOutcome extends McpOutcome { + constructor(data) { + const richText = `Showing ${data.entries.length} recent thoughts (Total: ${data.total}).`; + super(data, richText); + } +} - if (richText) { - content.push({ type: 'text', text: richText }); +export class RememberOutcome extends McpOutcome { + constructor(data) { + const richText = `Found ${data.matches.length} matching thoughts for query: "${data.scope.queryText || 'ambient context'}".`; + super(data, richText); } +} - content.push({ type: 'text', text: stringifyJson(structuredContent) }); +export class StatsOutcome extends McpOutcome {} - return { - content, - structuredContent, - }; +export class PromptMetricsOutcome extends McpOutcome { + constructor(data) { + const richText = `Read prompt UX telemetry for ${data.summary.sessions} sessions.`; + super(data, richText); + } +} + +export class HealthOutcome extends McpOutcome { + constructor(data) { + const status = data.ok ? 'Healthy' : 'Issues found'; + super(data, `Think Health: ${status}`); + } +} + +export class MigrationOutcome extends McpOutcome { + constructor(data) { + const richText = `Graph migrated to version ${data.graphModelVersion}. Edges added: ${data.edgesAdded}.`; + super(data, richText); + } +} + +/** + * Legacy wrapper for plain object results. + * @deprecated Use specialized Outcome classes instead. + */ +export function toToolResult(structuredContent, richText = null) { + return new McpOutcome(structuredContent, richText).toToolResult(); } diff --git a/src/mcp/server.js b/src/mcp/server.js index 642a1e0..74a5bc7 100644 --- a/src/mcp/server.js +++ b/src/mcp/server.js @@ -4,27 +4,19 @@ import * as z from 'zod/v4'; import pkg from '../../package.json' with { type: 'json' }; import { VALID_CAPTURE_INGRESSES } from '../capture-provenance.js'; -import { toToolResult } from './result.js'; -import { - formatBrowseWindow, - formatInspectEntry, - formatPromptMetrics, - formatRecentEntries, - formatStats, -} from './format.js'; import { browseThought, captureThought, getPromptMetricsForMcp, getThoughtStats, - checkThinkHealth, + checkThinkHealthForMcp, inspectThought, listRecentThoughts, migrateThoughtGraph, rememberThoughtsForMcp, } from './service.js'; -const recentEntrySchema = z.object({ +const mcpEntrySchema = z.object({ createdAt: z.string(), entryId: z.string(), sessionId: z.string().nullable(), @@ -32,14 +24,47 @@ const recentEntrySchema = z.object({ text: z.string(), }); -const browseEntrySchema = z.object({ - createdAt: z.string(), +const migrationSchema = z.object({ + changed: z.boolean(), + edgesAdded: z.number().int().nonnegative(), + edgesRemoved: z.number().int().nonnegative(), + graphModelVersion: z.number().int().positive(), + metadataUpdated: z.boolean(), +}).nullable(); + +const matchSchema = z.object({ entryId: z.string(), - sessionId: z.string().nullable(), - sortKey: z.string(), text: z.string(), + createdAt: z.string(), + sortKey: z.string(), + score: z.number(), + tier: z.number(), + matchKinds: z.array(z.string()), + reasonText: z.string(), }); +const scopeSchema = z.object({ + scopeKind: z.string(), +}).passthrough(); + +const sessionContextSchema = z.object({ + entryId: z.string(), + sessionId: z.string(), + reasonKind: z.string(), + reasonText: z.string(), + sessionPosition: z.number().int(), + sessionCount: z.number().int(), +}).nullable(); + +const inspectEntrySchema = z.object({ + entryId: z.string(), + thoughtId: z.string(), + kind: z.string(), + text: z.string(), + sortKey: z.string(), + createdAt: z.string(), +}).passthrough(); + const bucketSchema = z.object({ count: z.number().int().nonnegative(), key: z.string(), @@ -88,14 +113,14 @@ export function createThinkMcpServer() { outputSchema: { backupStatus: z.enum(['backed_up', 'pending', 'skipped']), entryId: z.string(), - migration: z.any().nullable(), + migration: migrationSchema, repoBootstrapped: z.boolean(), status: z.literal('saved_locally'), warnings: z.array(z.string()), }, - }, async ({ ingress, sourceApp, sourceURL, text }) => toToolResult(await captureThought(text, { + }, async ({ ingress, sourceApp, sourceURL, text }) => (await captureThought(text, { provenance: { ingress, sourceApp, sourceURL }, - }))); + })).toToolResult()); server.registerTool('recent', { description: 'List recent raw captures from Think.', @@ -104,12 +129,13 @@ export function createThinkMcpServer() { query: z.string().optional().describe('Optional case-insensitive text filter.'), }, outputSchema: { - entries: z.array(recentEntrySchema), + entries: z.array(mcpEntrySchema), repoPresent: z.boolean(), + total: z.number().int().nonnegative(), }, }, async ({ count, query }) => { const result = await listRecentThoughts({ count: count ?? null, query: query ?? null }); - return toToolResult(result, formatRecentEntries(result.entries)); + return result.toToolResult(); }); server.registerTool('remember', { @@ -120,15 +146,15 @@ export function createThinkMcpServer() { query: z.string().optional().describe('Optional explicit recall query. When omitted, uses ambient project context.'), }, outputSchema: { - matches: z.array(z.any()), + matches: z.array(matchSchema), repoPresent: z.boolean(), - scope: z.any(), + scope: scopeSchema, }, - }, async ({ brief, limit, query }) => toToolResult(await rememberThoughtsForMcp({ + }, async ({ brief, limit, query }) => (await rememberThoughtsForMcp({ brief: brief ?? false, limit: limit ?? null, query: query ?? null, - }))); + })).toToolResult()); server.registerTool('browse', { description: 'Return a browse window for one thought, including chronology and session neighbors. If entryId is omitted, starts from the latest capture.', @@ -136,11 +162,11 @@ export function createThinkMcpServer() { entryId: z.string().optional().describe('Optional capture entry id. When omitted, uses the latest capture.'), }, outputSchema: { - current: browseEntrySchema, - newer: browseEntrySchema.nullable(), - older: browseEntrySchema.nullable(), - sessionContext: z.any().nullable(), - sessionEntries: z.array(browseEntrySchema), + current: mcpEntrySchema, + newer: mcpEntrySchema.nullable(), + older: mcpEntrySchema.nullable(), + sessionContext: sessionContextSchema, + sessionEntries: z.array(mcpEntrySchema), sessionSteps: z.array(z.object({ createdAt: z.string(), direction: z.enum(['next', 'previous']), @@ -153,7 +179,7 @@ export function createThinkMcpServer() { }, }, async ({ entryId }) => { const result = await browseThought({ entryId: entryId ?? null }); - return toToolResult(result, formatBrowseWindow(result)); + return result.toToolResult(); }); server.registerTool('inspect', { @@ -162,11 +188,11 @@ export function createThinkMcpServer() { entryId: z.string().describe('The raw capture entry id to inspect.'), }, outputSchema: { - entry: z.any(), + entry: inspectEntrySchema, }, }, async ({ entryId }) => { const result = await inspectThought(entryId); - return toToolResult(result, formatInspectEntry(result)); + return result.toToolResult(); }); server.registerTool('stats', { @@ -184,7 +210,7 @@ export function createThinkMcpServer() { }, }, async ({ bucket, from, since, to }) => { const result = await getThoughtStats({ bucket: bucket ?? null, from: from ?? null, since: since ?? null, to: to ?? null }); - return toToolResult(result, formatStats(result)); + return result.toToolResult(); }); server.registerTool('prompt_metrics', { @@ -202,7 +228,7 @@ export function createThinkMcpServer() { }, }, async ({ bucket, from, since, to }) => { const result = await getPromptMetricsForMcp({ bucket: bucket ?? null, from: from ?? null, since: since ?? null, to: to ?? null }); - return toToolResult(result, formatPromptMetrics(result)); + return result.toToolResult(); }); const checkSchema = z.object({ @@ -216,7 +242,7 @@ export function createThinkMcpServer() { outputSchema: { checks: z.array(checkSchema), }, - }, async () => toToolResult(await checkThinkHealth())); + }, async () => (await checkThinkHealthForMcp()).toToolResult()); server.registerTool('migrate_graph', { description: 'Upgrade the local Think graph model in place.', @@ -227,7 +253,7 @@ export function createThinkMcpServer() { graphModelVersion: z.number().int().positive(), metadataUpdated: z.boolean(), }, - }, async () => toToolResult(await migrateThoughtGraph())); + }, async () => (await migrateThoughtGraph()).toToolResult()); return server; } diff --git a/src/mcp/service.js b/src/mcp/service.js index a00f083..19790fd 100644 --- a/src/mcp/service.js +++ b/src/mcp/service.js @@ -1,8 +1,10 @@ import { runDiagnostics } from '../doctor.js'; +import { ValidationError, NotFoundError, GraphError } from '../errors.js'; import { ensureGitRepo, hasGitRepo, lsRemote, pushWarpRefs } from '../git.js'; import { getLocalRepoDir, getThinkDir, getUpstreamUrl } from '../paths.js'; import { capturePolicy } from '../policies.js'; import { normalizeCaptureProvenance } from '../capture-provenance.js'; +import { getCaptureAmbientContext, getAmbientProjectContext } from '../project-context.js'; import { finalizeCapturedThought, getBrowseWindow, @@ -21,11 +23,22 @@ import { buildAmbientRememberScope, buildExplicitRememberScope, } from '../store/remember.js'; +import { + BrowseOutcome, + CaptureOutcome, + HealthOutcome, + McpOutcome, + MigrationOutcome, + PromptMetricsOutcome, + RecentThoughtsOutcome, + RememberOutcome, + StatsOutcome, +} from './result.js'; export async function captureThought(text, { provenance = null } = {}) { const thought = String(text ?? ''); if (thought.trim() === '') { - throw new Error('Thought cannot be empty'); + throw new ValidationError('Thought cannot be empty'); } const captureProvenance = normalizeCaptureProvenance(provenance); @@ -45,6 +58,7 @@ export async function captureThought(text, { provenance = null } = {}) { const { entry, migration, warnings } = await capturePolicy.execute(async () => { const saved = await saveRawCapture(repoDir, thought, { provenance: captureProvenance, + ambientContext: getCaptureAmbientContext(process.cwd()), }); let mig = null; const warns = []; @@ -52,6 +66,7 @@ export async function captureThought(text, { provenance = null } = {}) { try { const followthrough = await finalizeCapturedThought(repoDir, saved.id, { migrateIfNeeded: graphStatus.migrationRequired, + ambientContext: getAmbientProjectContext(process.cwd()), }); mig = followthrough.migration ?? null; } catch (error) { @@ -68,30 +83,32 @@ export async function captureThought(text, { provenance = null } = {}) { backupStatus = backedUp ? 'backed_up' : 'pending'; } - return { + return new CaptureOutcome({ backupStatus, entryId: entry.id, migration, repoBootstrapped: !repoAlreadyExists, status: 'saved_locally', warnings, - }; + }); } export async function listRecentThoughts({ count = null, query = null } = {}) { const repoDir = getLocalRepoDir(); if (!hasGitRepo(repoDir)) { - return { + return new RecentThoughtsOutcome({ entries: [], repoPresent: false, - }; + total: 0, + }); } - const entries = await listRecent(repoDir, { count, query }); - return { - entries: entries.map(toRecentEntry), + const result = await listRecent(repoDir, { count, query }); + return new RecentThoughtsOutcome({ + entries: result.entries.map(toMcpEntry), repoPresent: true, - }; + total: result.total, + }); } export async function rememberThoughtsForMcp({ @@ -102,11 +119,11 @@ export async function rememberThoughtsForMcp({ } = {}) { const repoDir = getLocalRepoDir(); if (!hasGitRepo(repoDir)) { - return { + return new RememberOutcome({ matches: [], repoPresent: false, scope: buildRememberScope({ cwd, query, limit, brief }), - }; + }); } await assertGraphReady('remember'); @@ -118,17 +135,17 @@ export async function rememberThoughtsForMcp({ brief, }); - return { + return new RememberOutcome({ matches: remember.matches, repoPresent: true, scope: remember.scope, - }; + }); } export async function browseThought({ entryId = null } = {}) { const repoDir = getLocalRepoDir(); if (!hasGitRepo(repoDir)) { - throw new Error('No raw captures available to browse'); + throw new NotFoundError('No raw captures available to browse'); } await assertGraphReady('browse'); @@ -137,22 +154,22 @@ export async function browseThought({ entryId = null } = {}) { if (entryId) { window = await getBrowseWindow(repoDir, entryId); if (!window) { - throw new Error('Browse entry not found'); + throw new NotFoundError('Browse entry not found'); } } else { const bootstrap = await prepareBrowseBootstrap(repoDir); if (!bootstrap.ok) { - throw new Error('No raw captures available to browse'); + throw new NotFoundError('No raw captures available to browse'); } window = bootstrap; } - return { - current: toBrowseEntry(window.current), - newer: toBrowseEntry(window.newer), - older: toBrowseEntry(window.older), + return new BrowseOutcome({ + current: toMcpEntry(window.current), + newer: toMcpEntry(window.newer), + older: toMcpEntry(window.older), sessionContext: window.sessionContext, - sessionEntries: window.sessionEntries.map(toBrowseEntry), + sessionEntries: window.sessionEntries.map(toMcpEntry), sessionSteps: window.sessionSteps.map((step) => ({ createdAt: step.createdAt, direction: step.direction, @@ -162,56 +179,56 @@ export async function browseThought({ entryId = null } = {}) { sortKey: step.sortKey, text: step.text, })), - }; + }); } export async function inspectThought(entryId) { const repoDir = getLocalRepoDir(); if (!hasGitRepo(repoDir)) { - throw new Error('Inspect entry not found'); + throw new NotFoundError('Inspect entry not found'); } await assertGraphReady('inspect'); const entry = await inspectRawEntry(repoDir, entryId); if (!entry) { - throw new Error('Inspect entry not found'); + throw new NotFoundError('Inspect entry not found'); } - return { entry }; + return new McpOutcome({ entry }); } export async function getThoughtStats({ from = null, to = null, since = null, bucket = null } = {}) { const repoDir = getLocalRepoDir(); if (!hasGitRepo(repoDir)) { - return { + return new StatsOutcome({ buckets: null, repoPresent: false, total: 0, - }; + }); } const stats = await getStats(repoDir, { from, to, since, bucket }); - return { + return new StatsOutcome({ buckets: stats.buckets ?? null, repoPresent: true, total: stats.total, - }; + }); } export async function getPromptMetricsForMcp({ from = null, to = null, since = null, bucket = null } = {}) { const promptMetrics = await getPromptMetrics({ from, to, since, bucket }); - return { + return new PromptMetricsOutcome({ buckets: promptMetrics.buckets ?? null, summary: promptMetrics.summary, timings: promptMetrics.timings, - }; + }); } -export function checkThinkHealth() { +export async function checkThinkHealthForMcp() { const repoDir = getLocalRepoDir(); const upstreamUrl = getUpstreamUrl(); - return runDiagnostics({ + const diagnostics = await runDiagnostics({ thinkDir: getThinkDir(), repoDir, upstreamUrl, @@ -223,16 +240,18 @@ export function checkThinkHealth() { : null, checkUpstreamReachable: upstreamUrl ? () => lsRemote(upstreamUrl) : null, }); + + return new HealthOutcome(diagnostics); } -// eslint-disable-next-line require-await -- wraps store call that returns a promise (git-warp) export async function migrateThoughtGraph() { const repoDir = getLocalRepoDir(); if (!hasGitRepo(repoDir)) { - throw new Error('No local thought repo found to migrate'); + throw new GraphError('No local thought repo found to migrate'); } - return migrateGraphModel(repoDir); + const result = await migrateGraphModel(repoDir); + return new MigrationOutcome(result); } async function assertGraphReady(command) { @@ -242,9 +261,10 @@ async function assertGraphReady(command) { return; } - const error = new Error('Graph migration required. Run think --migrate-graph.'); + const error = new GraphError('Graph migration required. Run think --migrate-graph.'); error.code = 'graph_migration_required'; error.command = command; + error.remediation = 'think --migrate-graph'; error.status = status; throw error; } @@ -261,26 +281,21 @@ function buildRememberScope({ cwd, query, limit, brief }) { }; } -function toBrowseEntry(entry) { +function toMcpEntry(entry) { if (!entry) { return null; } - return { + return Object.freeze({ createdAt: entry.createdAt, entryId: entry.id, sessionId: entry.sessionId ?? null, sortKey: entry.sortKey, text: entry.text, - }; + }); } -function toRecentEntry(entry) { - return { - createdAt: entry.createdAt, - entryId: entry.id, - sessionId: entry.sessionId ?? null, - sortKey: entry.sortKey, - text: entry.text, - }; +/** @deprecated Use checkThinkHealthForMcp instead */ +export function checkThinkHealth() { + return checkThinkHealthForMcp(); } diff --git a/src/minds.js b/src/minds.js index ee67185..dbca609 100644 --- a/src/minds.js +++ b/src/minds.js @@ -50,6 +50,9 @@ export function discoverMinds(thinkDir = getThinkDir()) { * Uses djb2 hash to map name → shader index. */ export function shaderForMind(name, shaderCount) { + if (shaderCount <= 0) { + throw new Error(`shaderForMind: shaderCount must be > 0 (got ${shaderCount})`); + } let hash = 5381; for (const ch of name) { hash = ((hash << 5) + hash + ch.charCodeAt(0)) | 0; diff --git a/src/project-context.js b/src/project-context.js index 30a31dc..2a0bac1 100644 --- a/src/project-context.js +++ b/src/project-context.js @@ -1,6 +1,8 @@ import path from 'node:path'; import { spawnSync } from 'node:child_process'; +import { GIT_BINARY } from './git.js'; + export function getAmbientProjectContext(cwd = process.cwd()) { const baseContext = getCaptureAmbientContext(cwd); const gitRoot = runGitString(['-C', baseContext.cwd, 'rev-parse', '--show-toplevel']); @@ -12,19 +14,19 @@ export function getAmbientProjectContext(cwd = process.cwd()) { gitRemote, }); - return { + return Object.freeze({ cwd: baseContext.cwd, gitRoot, gitRemote, gitBranch, projectName, - projectTokens: buildProjectTokens({ + projectTokens: Object.freeze(buildProjectTokens({ cwd: baseContext.cwd, gitRoot, gitRemote, projectName, - }), - }; + })), + }); } export function getCaptureAmbientContext(cwd = process.cwd()) { @@ -35,19 +37,19 @@ export function getCaptureAmbientContext(cwd = process.cwd()) { gitRemote: null, }); - return { + return Object.freeze({ cwd: resolvedCwd, gitRoot: null, gitRemote: null, gitBranch: null, projectName, - projectTokens: buildProjectTokens({ + projectTokens: Object.freeze(buildProjectTokens({ cwd: resolvedCwd, gitRoot: null, gitRemote: null, projectName, - }), - }; + })), + }); } export function buildQueryTerms(query) { @@ -112,7 +114,7 @@ function unique(values) { } function runGitString(args) { - const result = spawnSync('git', args, { + const result = spawnSync(GIT_BINARY, args, { encoding: 'utf8', stdio: ['ignore', 'pipe', 'pipe'], }); diff --git a/src/store.js b/src/store.js index 026a0cd..3305ad4 100644 --- a/src/store.js +++ b/src/store.js @@ -39,3 +39,5 @@ export { } from './store/runtime.js'; export { assessReflectability } from './store/derivation.js'; + +export { saveAnnotation } from './store/annotate.js'; diff --git a/src/store/annotate.js b/src/store/annotate.js new file mode 100644 index 0000000..44c5490 --- /dev/null +++ b/src/store/annotate.js @@ -0,0 +1,54 @@ +import { randomUUID } from 'node:crypto'; + +import { ValidationError, NotFoundError } from '../errors.js'; +import { ANNOTATION_PREFIX, TEXT_MIME } from './constants.js'; +import { encodeTextContent } from './content.js'; +import { getCurrentTime } from './model.js'; +import { + createProductReadHandle, + getStoredEntry, + openWarpApp, + patchWarpApp, +} from './runtime.js'; + +export async function saveAnnotation(repoDir, targetEntryId, text, { writerId = null } = {}) { + if (!text || typeof text !== 'string' || text.trim() === '') { + throw new ValidationError('Annotation text cannot be empty'); + } + + const app = await openWarpApp(repoDir); + const read = await createProductReadHandle(app, repoDir); + const targetEntry = await getStoredEntry(read, targetEntryId); + + if (!targetEntry) { + throw new NotFoundError(`Entry not found: ${targetEntryId}`); + } + + const timestamp = getCurrentTime(); + const unique = randomUUID(); + const createdAt = timestamp.toISOString(); + const sortKey = `${String(timestamp.getTime()).padStart(13, '0')}-${unique}`; + const annotationId = `${ANNOTATION_PREFIX}${sortKey}`; + const resolvedWriterId = writerId ?? app.writerId; + + await patchWarpApp(repoDir, async (patch) => { + patch + .addNode(annotationId) + .setProperty(annotationId, 'kind', 'annotation') + .setProperty(annotationId, 'source', 'annotation') + .setProperty(annotationId, 'channel', 'cli') + .setProperty(annotationId, 'writerId', resolvedWriterId) + .setProperty(annotationId, 'createdAt', createdAt) + .setProperty(annotationId, 'sortKey', sortKey) + .setProperty(annotationId, 'targetEntryId', targetEntryId) + .addEdge(annotationId, targetEntryId, 'annotates'); + + await patch.attachContent(annotationId, encodeTextContent(text.trim()), { mime: TEXT_MIME }); + }); + + return Object.freeze({ + annotationId, + targetEntryId, + createdAt, + }); +} diff --git a/src/store/capture.js b/src/store/capture.js index 2ec34b9..25c65d9 100644 --- a/src/store/capture.js +++ b/src/store/capture.js @@ -1,9 +1,6 @@ -import { - getAmbientProjectContext, - getCaptureAmbientContext, -} from '../project-context.js'; import { normalizeCaptureProvenance } from '../capture-provenance.js'; import { TEXT_MIME } from './constants.js'; +import { encodeTextContent } from './content.js'; import { createEntry } from './model.js'; import { createProductReadHandle, @@ -11,23 +8,31 @@ import { getStoredEntry, openProductReadHandle, openWarpApp, + patchWarpApp, } from './runtime.js'; import { ensureCaptureReadEdges, ensureFirstDerivedArtifacts } from './derivation.js'; import { migrateGraphModel } from './migrations.js'; +import { getCheckpointGraphModelStatus } from './checkpoint-read.js'; export async function saveRawCapture(repoDir, thought, { provenance = null, - cwd = process.cwd(), ambientContext = null, } = {}) { + return await writeRawCapture(repoDir, thought, { + provenance, + ambientContext, + }); +} + +async function writeRawCapture(repoDir, thought, { + provenance, + ambientContext, +}) { const app = await openWarpApp(repoDir); const entry = createEntry(thought, app.writerId, { kind: 'capture', source: 'capture' }); - const captureAmbientContext = ambientContext ?? getCaptureAmbientContext(cwd); - // Keep the store boundary defensive because direct callers can bypass the - // CLI and MCP normalization helpers before reaching persistence. const captureProvenance = normalizeCaptureProvenance(provenance); - await app.patch(async patch => { + const patcher = async (patch) => { patch .addNode(entry.id) .setProperty(entry.id, 'kind', entry.kind) @@ -37,7 +42,7 @@ export async function saveRawCapture(repoDir, thought, { .setProperty(entry.id, 'createdAt', entry.createdAt) .setProperty(entry.id, 'sortKey', entry.sortKey); - applyAmbientContextPatch(patch, entry.id, captureAmbientContext); + applyAmbientContextPatch(patch, entry.id, ambientContext); if (captureProvenance?.ingress) { patch.setProperty(entry.id, 'captureIngress', captureProvenance.ingress); } @@ -48,19 +53,26 @@ export async function saveRawCapture(repoDir, thought, { patch.setProperty(entry.id, 'captureSourceURL', captureProvenance.sourceURL); } - await patch.attachContent(entry.id, thought, { mime: TEXT_MIME }); - }); + await patch.attachContent(entry.id, encodeTextContent(thought), { mime: TEXT_MIME }); + }; + + await patchWarpApp(repoDir, patcher, { genesisOnNoState: true }); return entry; } export async function finalizeCapturedThought(repoDir, entryId, { migrateIfNeeded = false, - cwd = process.cwd(), ambientContext = null, } = {}) { - const app = await openWarpApp(repoDir); - let read = await createProductReadHandle(app); + let app = await openWarpApp(repoDir); + + if (ambientContext) { + await patchAmbientContext(repoDir, entryId, ambientContext); + app = await openWarpApp(repoDir); + } + + let read = await createProductReadHandle(app, repoDir); let entry = await getStoredEntry(read, entryId); if (!entry || entry.kind !== 'capture') { @@ -70,13 +82,12 @@ export async function finalizeCapturedThought(repoDir, entryId, { }; } - const resolvedAmbientContext = ambientContext ?? getAmbientProjectContext(cwd); - await patchAmbientContext(app, entryId, resolvedAmbientContext); - read = await createProductReadHandle(app); - entry = await getStoredEntry(read, entryId); - - await ensureFirstDerivedArtifacts(app, read, entry); - await ensureCaptureReadEdges(app, read, entryId); + await ensureFirstDerivedArtifacts(repoDir, read, entry); + app = await openWarpApp(repoDir); + read = await createProductReadHandle(app, repoDir); + await ensureCaptureReadEdges(repoDir, read, entryId); + app = await openWarpApp(repoDir); + read = await createProductReadHandle(app, repoDir); entry = await getStoredEntry(read, entryId); return { @@ -86,6 +97,10 @@ export async function finalizeCapturedThought(repoDir, entryId, { } export async function getGraphModelStatus(repoDir) { + const checkpointStatus = await getCheckpointGraphModelStatus(repoDir); + if (checkpointStatus !== null) { + return checkpointStatus; + } const read = await openProductReadHandle(repoDir); return getGraphModelStatusForRead(read); } @@ -109,8 +124,10 @@ function applyAmbientContextPatch(patch, entryId, ambientContext) { } } -async function patchAmbientContext(app, entryId, ambientContext) { - await app.patch(patch => { +async function patchAmbientContext(repoDir, entryId, ambientContext) { + const patcher = (patch) => { applyAmbientContextPatch(patch, entryId, ambientContext); - }); + }; + + await patchWarpApp(repoDir, patcher, { genesisOnNoState: true }); } diff --git a/src/store/checkpoint-product-read.js b/src/store/checkpoint-product-read.js new file mode 100644 index 0000000..42eefe0 --- /dev/null +++ b/src/store/checkpoint-product-read.js @@ -0,0 +1,316 @@ +import { openCheckpointStateRead } from './checkpoint-state.js'; + +const DEFAULT_PATTERN = '*'; +const DEFAULT_MAX_DEPTH = 1000; + +class CheckpointProductQuery { + constructor({ reader, stateHash }) { + this._reader = reader; + this._stateHash = stateHash; + this._pattern = DEFAULT_PATTERN; + this._operations = []; + } + + match(pattern) { + this._pattern = pattern; + return this; + } + + where(criteria) { + this._operations.push({ type: 'where', criteria }); + return this; + } + + incoming(label) { + this._operations.push({ type: 'incoming', label }); + return this; + } + + outgoing(label) { + this._operations.push({ type: 'outgoing', label }); + return this; + } + + async run() { + let strand = this._matchingNodeIds(this._pattern); + for (const operation of this._operations) { + if (operation.type === 'where') { + strand = this._applyWhere(strand, operation.criteria); + continue; + } + strand = this._applyNeighborHop(strand, operation.type, operation.label); + } + + return Object.freeze({ + stateHash: this._stateHash, + nodes: Object.freeze(await Promise.all(strand.map(async (id) => Object.freeze({ + id, + props: Object.freeze(await this._reader.getNodeProps(id) ?? {}), + })))), + }); + } + + _matchingNodeIds(pattern) { + if (isSingleExactPattern(pattern)) { + return this._reader.hasNode(pattern) ? [pattern] : []; + } + return this._reader.project().nodes + .filter((nodeId) => matchesPattern(pattern, nodeId)) + .sort(compareStrings); + } + + _applyWhere(strand, criteria) { + if (typeof criteria === 'function') { + return this._applyPredicateWhere(strand, criteria); + } + if (!isPlainWhereObject(criteria)) { + throw new TypeError('checkpoint product query where() expects an object or predicate'); + } + + const filtered = []; + for (const nodeId of strand) { + const props = this._reader.getNodeProps(nodeId); + if (propsMatch(props ?? {}, criteria)) { + filtered.push(nodeId); + } + } + return filtered.sort(compareStrings); + } + + _applyPredicateWhere(strand, predicate) { + const filtered = []; + for (const nodeId of strand) { + const snapshot = this._nodeSnapshot(nodeId); + if (predicate(snapshot)) { + filtered.push(nodeId); + } + } + return filtered.sort(compareStrings); + } + + _nodeSnapshot(nodeId) { + const [props, edgesOut, edgesIn] = [ + this._reader.getNodeProps(nodeId), + this._neighborEdges(nodeId, 'outgoing'), + this._neighborEdges(nodeId, 'incoming'), + ]; + return Object.freeze({ + id: nodeId, + props: Object.freeze(props ?? {}), + edgesOut, + edgesIn, + }); + } + + _neighborEdges(nodeId, direction) { + return Object.freeze(this._reader.neighbors(nodeId, direction).map((entry) => Object.freeze( + direction === 'outgoing' + ? { label: entry.label, to: entry.nodeId } + : { label: entry.label, from: entry.nodeId }, + ))); + } + + _applyNeighborHop(strand, direction, label) { + const next = new Set(); + for (const nodeId of strand) { + for (const neighbor of this._reader.neighbors(nodeId, direction, label)) { + next.add(neighbor.nodeId); + } + } + return [...next].sort(compareStrings); + } +} + +class CheckpointProductTraversal { + constructor(reader) { + this._reader = reader; + Object.freeze(this); + } + + bfs(start, options = {}) { + if (!this._reader.hasNode(start)) { + throw new Error(`Start node not found: ${start}`); + } + + const direction = normalizeTraversalDirection(options.dir); + const labels = normalizeLabelFilter(options.labelFilter); + const maxDepth = options.maxDepth ?? DEFAULT_MAX_DEPTH; + const visited = new Set(); + let currentLevel = [{ nodeId: start, depth: 0 }]; + const result = []; + + while (currentLevel.length > 0) { + currentLevel.sort((left, right) => compareStrings(left.nodeId, right.nodeId)); + const nextLevel = []; + const queued = new Set(); + + for (const { nodeId, depth } of currentLevel) { + if (visited.has(nodeId) || depth > maxDepth) { + continue; + } + + visited.add(nodeId); + result.push(nodeId); + + if (depth >= maxDepth) { + continue; + } + + for (const neighbor of this._neighbors(nodeId, direction, labels)) { + if (!visited.has(neighbor.nodeId) && !queued.has(neighbor.nodeId)) { + queued.add(neighbor.nodeId); + nextLevel.push({ nodeId: neighbor.nodeId, depth: depth + 1 }); + } + } + } + + currentLevel = nextLevel; + } + + return result; + } + + _neighbors(nodeId, direction, labels) { + if (direction === 'both') { + return sortNeighbors(dedupeNeighbors([ + ...this._reader.neighbors(nodeId, 'outgoing'), + ...this._reader.neighbors(nodeId, 'incoming'), + ])).filter((neighbor) => labelMatches(neighbor.label, labels)); + } + return sortNeighbors(this._reader.neighbors(nodeId, direction)) + .filter((neighbor) => labelMatches(neighbor.label, labels)); + } +} + +class CheckpointProductView { + constructor({ reader, stateHash }) { + this._reader = reader; + this._stateHash = stateHash; + this.traverse = new CheckpointProductTraversal(reader); + Object.freeze(this); + } + + hasNode(nodeId) { + return this._reader.hasNode(nodeId); + } + + getNodeProps(nodeId) { + return this._reader.getNodeProps(nodeId); + } + + getNodeContentMeta(nodeId) { + return this._reader.getNodeContentMeta(nodeId); + } + + query() { + return new CheckpointProductQuery({ + reader: this._reader, + stateHash: this._stateHash, + }); + } +} + +export async function openCheckpointProductRead(repoDir, app = null) { + const checkpoint = await openCheckpointStateRead(repoDir, app); + if (checkpoint === null) { + return null; + } + + return Object.freeze({ + blobStorage: checkpoint.blobStorage, + readContent: checkpoint.readContent, + view: new CheckpointProductView({ + reader: checkpoint.reader, + stateHash: checkpoint.checkpointSha, + }), + }); +} + +function isSingleExactPattern(pattern) { + return typeof pattern === 'string' && !pattern.includes('*'); +} + +function matchesPattern(pattern, nodeId) { + if (typeof pattern === 'string') { + return matchGlob(pattern, nodeId); + } + return pattern.some((entry) => matchGlob(entry, nodeId)); +} + +function matchGlob(pattern, value) { + return globToRegExp(pattern).test(value); +} + +function globToRegExp(pattern) { + return new RegExp(`^${String(pattern).split('*').map(escapeRegExp).join('.*')}$`); +} + +function escapeRegExp(value) { + return value.replace(/[\\^$+?.()|[\]{}]/g, '\\$&'); +} + +function isPlainWhereObject(value) { + return value !== null && typeof value === 'object' && !Array.isArray(value); +} + +function propsMatch(props, criteria) { + for (const [key, value] of Object.entries(criteria)) { + if (props[key] !== value) { + return false; + } + } + return true; +} + +function compareStrings(left, right) { + if (left < right) { return -1; } + if (left > right) { return 1; } + return 0; +} + +function normalizeTraversalDirection(direction = 'out') { + if (direction === 'out' || direction === 'outgoing') { + return 'outgoing'; + } + if (direction === 'in' || direction === 'incoming') { + return 'incoming'; + } + if (direction === 'both') { + return 'both'; + } + throw new Error(`Unsupported traversal direction: ${direction}`); +} + +function normalizeLabelFilter(labelFilter) { + if (labelFilter === undefined || labelFilter === null) { + return null; + } + return new Set(Array.isArray(labelFilter) ? labelFilter : [labelFilter]); +} + +function labelMatches(label, labels) { + return labels === null || labels.has(label); +} + +function sortNeighbors(neighbors) { + return [...neighbors].sort((left, right) => { + const nodeComparison = compareStrings(left.nodeId, right.nodeId); + if (nodeComparison !== 0) { + return nodeComparison; + } + return compareStrings(left.label, right.label); + }); +} + +function dedupeNeighbors(neighbors) { + const seen = new Set(); + const deduped = []; + for (const neighbor of neighbors) { + const key = `${neighbor.nodeId}\0${neighbor.label}`; + if (!seen.has(key)) { + seen.add(key); + deduped.push(neighbor); + } + } + return deduped; +} diff --git a/src/store/checkpoint-read.js b/src/store/checkpoint-read.js new file mode 100644 index 0000000..d9daac8 --- /dev/null +++ b/src/store/checkpoint-read.js @@ -0,0 +1,118 @@ +import { + ENTRY_PREFIX, + GRAPH_META_ID, + GRAPH_MODEL_VERSION, +} from './constants.js'; +import { storesTextContent } from './model.js'; +import { BaseEntry } from './runtime.js'; +import { openCheckpointStateRead } from './checkpoint-state.js'; + +class CheckpointReadModel { + constructor({ blobStorage, readContent, reader }) { + this._blobStorage = blobStorage; + this._readContent = readContent; + this._reader = reader; + Object.freeze(this); + } + + static async open(repoDir, app = null) { + const checkpoint = await openCheckpointStateRead(repoDir, app); + if (checkpoint === null) { + return null; + } + + return new CheckpointReadModel({ + blobStorage: checkpoint.blobStorage, + readContent: checkpoint.readContent, + reader: checkpoint.reader, + }); + } + + graphModelStatus() { + if (this._latestCaptureId() === null) { + return { + currentGraphModelVersion: 1, + requiredGraphModelVersion: GRAPH_MODEL_VERSION, + migrationRequired: true, + }; + } + + const props = this._reader.getNodeProps(GRAPH_META_ID); + const currentGraphModelVersion = Number(props?.graphModelVersion ?? 1); + return { + currentGraphModelVersion, + requiredGraphModelVersion: GRAPH_MODEL_VERSION, + migrationRequired: currentGraphModelVersion < GRAPH_MODEL_VERSION, + }; + } + + async listEntriesByKind(kind) { + if (kind !== 'capture') { + return null; + } + + const entryNodes = this._entryNodeIds() + .map((nodeId) => this._entryCandidate(nodeId, kind)) + .filter(Boolean); + return await Promise.all( + entryNodes.map(({ nodeId, props }) => this._storedEntry(nodeId, props)), + ); + } + + _entryNodeIds() { + return this._reader.project().nodes.filter((nodeId) => nodeId.startsWith(ENTRY_PREFIX)); + } + + _latestCaptureId() { + return this._singleOutgoingNodeId(GRAPH_META_ID, 'latest_capture'); + } + + _singleOutgoingNodeId(nodeId, label) { + const neighbors = this._reader.neighbors(nodeId, 'outgoing', label); + return neighbors[0]?.nodeId ?? null; + } + + _entryCandidate(nodeId, kind) { + const props = this._reader.getNodeProps(nodeId); + if (props?.kind !== kind) { + return null; + } + return { nodeId, props }; + } + + async _storedEntry(nodeId, props) { + const text = storesTextContent(props.kind) + ? await this._readNodeText(nodeId) + : ''; + return BaseEntry.from(nodeId, props, text); + } + + async _readNodeText(nodeId) { + const oid = this._reader.getNodeContentMeta(nodeId)?.oid; + if (this._blobStorage && typeof oid === 'string' && oid.length > 0) { + return new TextDecoder().decode(await this._blobStorage.retrieve(oid)); + } + + const content = await this._readContent(nodeId); + if (!content) { + return ''; + } + return new TextDecoder().decode(content); + } +} + +export async function getCheckpointGraphModelStatus(repoDir, app = null) { + const readModel = await CheckpointReadModel.open(repoDir, app); + if (readModel === null) { + return null; + } + return readModel.graphModelStatus(); +} + +export async function listCheckpointEntriesByKind(repoDir, kind, app = null) { + const readModel = await CheckpointReadModel.open(repoDir, app); + if (readModel === null) { + return null; + } + return await readModel.listEntriesByKind(kind); +} diff --git a/src/store/checkpoint-state.js b/src/store/checkpoint-state.js new file mode 100644 index 0000000..bbbdcf4 --- /dev/null +++ b/src/store/checkpoint-state.js @@ -0,0 +1,55 @@ +import Plumbing from '@git-stunts/plumbing'; +import WarpApp, * as GitWarp from '@git-stunts/git-warp'; +import { createAppContentReader } from './content-reader.js'; +import { CHECKPOINT_POLICY, GRAPH_NAME } from './constants.js'; +import { createWriterId } from './model.js'; + +const CHECKPOINT_REF = `refs/warp/${GRAPH_NAME}/checkpoints/head`; + +export async function openCheckpointStateRead(repoDir, app = null) { + const persistence = new GitWarp.GitGraphAdapter({ + plumbing: Plumbing.createDefault({ cwd: repoDir }), + }); + const checkpointSha = await persistence.readRef(CHECKPOINT_REF); + if (checkpointSha === null) { + return null; + } + + const resolvedApp = await resolveApp({ + app, + persistence, + }); + const state = await resolvedApp.core().materialize(); + + return Object.freeze({ + blobStorage: await createRuntimeBlobStorage(persistence), + checkpointSha, + readContent: createAppContentReader(resolvedApp), + reader: createCheckpointStateReader(state), + }); +} + +function createRuntimeBlobStorage(persistence) { + const createStorage = persistence.createRuntimeBlobStorage; + if (typeof createStorage !== 'function') { + return null; + } + return createStorage.call(persistence); +} + +async function resolveApp({ app, persistence }) { + return app ?? await WarpApp.open({ + persistence, + graphName: GRAPH_NAME, + writerId: createWriterId(), + checkpointPolicy: CHECKPOINT_POLICY, + }); +} + +function createCheckpointStateReader(state) { + const createReader = GitWarp.createStateReader ?? GitWarp.createStateReaderV5; + if (typeof createReader !== 'function') { + throw new Error('Installed @git-stunts/git-warp does not expose a public state reader factory'); + } + return createReader(state); +} diff --git a/src/store/constants.js b/src/store/constants.js index 922a96a..da9e3aa 100644 --- a/src/store/constants.js +++ b/src/store/constants.js @@ -1,19 +1,46 @@ +export const ENTRY_KINDS = Object.freeze(['capture', 'reflect', 'thought']); +export const TEXT_CONTENT_KINDS = Object.freeze(['capture', 'reflect', 'thought', 'annotation', 'evolution']); +export const SESSION_KINDS = Object.freeze(['reflect_session', 'brainstorm_session']); +export const BUCKET_PERIODS = Object.freeze(['hour', 'day', 'week']); + export const GRAPH_NAME = 'think'; export const REFLECT_PROMPT_TYPES = ['challenge', 'constraint', 'sharpen']; + +// Node prefixes export const ENTRY_PREFIX = 'entry:'; export const THOUGHT_PREFIX = 'thought:'; export const SESSION_PREFIX = 'session:'; export const ARTIFACT_PREFIX = 'artifact:'; export const REFLECT_SESSION_PREFIX = 'reflect:'; export const LEGACY_BRAINSTORM_SESSION_PREFIX = 'brainstorm:'; +export const TOPIC_PREFIX = 'topic:'; +export const KEYWORD_PREFIX = 'keyword:'; +export const CLASSIFICATION_PREFIX = 'classification:'; +export const ENTITY_PREFIX = 'entity:'; +export const ANNOTATION_PREFIX = 'annotation:'; +export const LINK_PREFIX = 'link:'; +export const EVOLUTION_PREFIX = 'evolution:'; +export const PIPELINE_RUN_PREFIX = 'pipeline_run:'; export const GRAPH_META_ID = 'meta:graph'; + +// Standing classification node IDs +export const CLASSIFICATIONS = Object.freeze([ + 'question', + 'decision', + 'observation', + 'action_item', + 'idea', + 'reference', + 'unclassified', +]); + export const TEXT_MIME = 'text/plain; charset=utf-8'; export const MAX_REFLECT_STEPS = 3; export const SESSION_IDLE_GAP_MS = 5 * 60 * 1000; export const DERIVER_NAME = 'think'; export const DERIVER_VERSION = '1'; export const SCHEMA_VERSION = '1'; -export const GRAPH_MODEL_VERSION = 3; +export const GRAPH_MODEL_VERSION = 4; export const CHECKPOINT_POLICY = { every: 20 }; export const PRODUCT_READ_LENS = { match: [ @@ -24,6 +51,14 @@ export const PRODUCT_READ_LENS = { `${ARTIFACT_PREFIX}*`, `${REFLECT_SESSION_PREFIX}*`, `${LEGACY_BRAINSTORM_SESSION_PREFIX}*`, + `${TOPIC_PREFIX}*`, + `${KEYWORD_PREFIX}*`, + `${CLASSIFICATION_PREFIX}*`, + `${ENTITY_PREFIX}*`, + `${ANNOTATION_PREFIX}*`, + `${LINK_PREFIX}*`, + `${EVOLUTION_PREFIX}*`, + `${PIPELINE_RUN_PREFIX}*`, ], }; export const CHALLENGE_PROMPTS = [ diff --git a/src/store/content-reader.js b/src/store/content-reader.js new file mode 100644 index 0000000..1727ccc --- /dev/null +++ b/src/store/content-reader.js @@ -0,0 +1,12 @@ +export function createAppContentReader(app) { + if (typeof app?.getContent === 'function') { + return async (nodeId) => await app.getContent(nodeId); + } + + const core = typeof app?.core === 'function' ? app.core() : null; + if (typeof core?.getContent === 'function') { + return async (nodeId) => await core.getContent(nodeId); + } + + throw new Error('Installed @git-stunts/git-warp does not expose a public content reader'); +} diff --git a/src/store/content.js b/src/store/content.js new file mode 100644 index 0000000..5b12b1d --- /dev/null +++ b/src/store/content.js @@ -0,0 +1,3 @@ +export function encodeTextContent(text) { + return Buffer.from(text, 'utf8'); +} diff --git a/src/store/derivation.js b/src/store/derivation.js index 5300492..69ee518 100644 --- a/src/store/derivation.js +++ b/src/store/derivation.js @@ -10,42 +10,43 @@ import { SESSION_IDLE_GAP_MS, SESSION_PREFIX, } from './constants.js'; +import { encodeTextContent } from './content.js'; import { compareEntriesNewestFirst, - compareEntriesOldestFirst, createArtifactId, createThoughtId, getCurrentTime, normalizeSeed, } from './model.js'; import { - getLatestCaptureId, + getLatestStoredEntry, getProducedInSessionId, getStoredEntry, hasNode, listEntriesByKind, + patchWarpApp, } from './runtime.js'; export function assessReflectability(text) { const seedQuality = deriveSeedQuality(createThoughtId(text), text); if (seedQuality.verdict === 'likely_reflectable') { - return { + return Object.freeze({ eligible: true, kind: 'pressure_testable', text: 'This entry looks like a candidate idea, question, or decision that can be pressure-tested.', - }; + }); } - return { + return Object.freeze({ eligible: false, kind: 'not_pressure_testable', text: 'This entry looks more like a note than a pressure-testable idea.', suggestion: 'Pick a different seed or capture a sharper claim first.', - }; + }); } -export async function ensureFirstDerivedArtifacts(app, read, entry) { +export async function ensureFirstDerivedArtifacts(repoDir, read, entry) { if (!entry || entry.kind !== 'capture') { return null; } @@ -90,7 +91,7 @@ export async function ensureFirstDerivedArtifacts(app, read, entry) { }; } - await app.patch(async (patch) => { + await patchWarpApp(repoDir, async (patch) => { ensureGraphMetadataNode(patch, graphMetaProps); if (!thoughtNodeExists) { @@ -101,7 +102,7 @@ export async function ensureFirstDerivedArtifacts(app, read, entry) { .setProperty(thoughtId, 'createdAt', entry.createdAt) .setProperty(thoughtId, 'schemaVersion', SCHEMA_VERSION); - await patch.attachContent(thoughtId, entry.text, { mime: 'text/plain; charset=utf-8' }); + await patch.attachContent(thoughtId, encodeTextContent(entry.text), { mime: 'text/plain; charset=utf-8' }); } if (needsCaptureThoughtLink) { @@ -139,18 +140,17 @@ export async function ensureFirstDerivedArtifacts(app, read, entry) { }; } -export async function ensureCaptureReadEdges(app, read, entryId) { +export async function ensureCaptureReadEdges(repoDir, read, entryId) { const entry = await getStoredEntry(read, entryId); if (!entry || entry.kind !== 'capture') { return; } - const latestCaptureId = await getLatestCaptureId(read); - if (latestCaptureId === entry.id) { + const latestEntry = await getLatestStoredEntry(read); + if (latestEntry && latestEntry.id === entry.id) { return; } - const latestEntry = latestCaptureId ? await getStoredEntry(read, latestCaptureId) : null; if (latestEntry && compareEntriesNewestFirst(entry, latestEntry) >= 0) { return; } @@ -160,7 +160,7 @@ export async function ensureCaptureReadEdges(app, read, entryId) { .outgoing('latest_capture') .run(); - await app.patch((patch) => { + await patchWarpApp(repoDir, (patch) => { for (const node of latestCaptureNodes.nodes ?? []) { patch.removeEdge(GRAPH_META_ID, node.id, 'latest_capture'); } @@ -168,6 +168,7 @@ export async function ensureCaptureReadEdges(app, read, entryId) { patch.addEdge(GRAPH_META_ID, entry.id, 'latest_capture'); if (latestEntry) { patch.addEdge(entry.id, latestEntry.id, 'older'); + patch.addEdge(latestEntry.id, entry.id, 'newer'); } }); } @@ -226,7 +227,7 @@ export function deriveSeedQuality(thoughtId, text) { const normalized = normalizeSeed(text); const eligible = REFLECT_MARKERS.some((pattern) => pattern.test(normalized)); - return { + return Object.freeze({ artifactId: createArtifactId('seed_quality', thoughtId), kind: 'seed_quality', primaryInputKind: 'thought', @@ -236,87 +237,70 @@ export function deriveSeedQuality(thoughtId, text) { reasonText: eligible ? 'Contains explicit proposal, uncertainty, or decision language that can be pressure-tested.' : 'Reads more like a status, narrative, or observational note than a pressure-testable idea.', - promptFamilies: eligible ? [...REFLECT_PROMPT_TYPES] : [], + promptFamilies: Object.freeze(eligible ? [...REFLECT_PROMPT_TYPES] : []), deriver: DERIVER_NAME, deriverVersion: DERIVER_VERSION, schemaVersion: SCHEMA_VERSION, createdAt: getCurrentTime().toISOString(), - }; + }); } export async function deriveSessionAttribution(read, entry) { - const captures = await listEntriesByKind(read, 'capture'); - const ordered = captures - .filter((candidate) => candidate.id !== entry.id) - .concat([{ ...entry }]) - .sort(compareEntriesOldestFirst); - - let sessionStart = ordered[0]; - let previous = null; - - for (const capture of ordered) { - if (previous) { - const gapMs = Date.parse(capture.createdAt) - Date.parse(previous.createdAt); - if (gapMs > SESSION_IDLE_GAP_MS) { - sessionStart = capture; - } - } + const latestEntry = await getLatestStoredEntry(read); - if (capture.id === entry.id) { - const withinBucket = previous - && (Date.parse(capture.createdAt) - Date.parse(previous.createdAt)) <= SESSION_IDLE_GAP_MS; - const sessionId = `${SESSION_PREFIX}${sessionStart.sortKey}`; + if (latestEntry && latestEntry.id !== entry.id) { + const gapMs = Date.parse(entry.createdAt) - Date.parse(latestEntry.createdAt); + if (gapMs <= SESSION_IDLE_GAP_MS) { + const activeSessionId = latestEntry.sessionId || `${SESSION_PREFIX}${latestEntry.sortKey}`; + const sessionCreatedAt = latestEntry.sessionCreatedAt || latestEntry.createdAt; + const sessionStartSortKey = latestEntry.sessionStartSortKey || latestEntry.sortKey; - return { - artifactId: createArtifactId('session_attribution', entry.id, sessionId), + return Object.freeze({ + artifactId: createArtifactId('session_attribution', entry.id, activeSessionId), kind: 'session_attribution', primaryInputKind: 'capture', primaryInputId: entry.id, - sessionId, - sessionCreatedAt: sessionStart.createdAt, - sessionStartSortKey: sessionStart.sortKey, - reasonKind: withinBucket ? 'temporal_proximity' : 'new_session_bucket', - reasonText: withinBucket - ? 'Captured within 5 minutes of neighboring entries in the same session bucket.' - : 'Started a new session bucket because no neighboring capture fell within the 5 minute idle-gap threshold.', + sessionId: activeSessionId, + sessionCreatedAt, + sessionStartSortKey, + reasonKind: 'temporal_proximity', + reasonText: 'Captured within 5 minutes of the most recent entry.', deriver: DERIVER_NAME, deriverVersion: DERIVER_VERSION, schemaVersion: SCHEMA_VERSION, createdAt: getCurrentTime().toISOString(), - }; + }); } - - previous = capture; } - const fallbackSessionId = `${SESSION_PREFIX}${entry.sortKey}`; - return { - artifactId: createArtifactId('session_attribution', entry.id, fallbackSessionId), + const sessionId = `${SESSION_PREFIX}${entry.sortKey}`; + return Object.freeze({ + artifactId: createArtifactId('session_attribution', entry.id, sessionId), kind: 'session_attribution', primaryInputKind: 'capture', primaryInputId: entry.id, - sessionId: fallbackSessionId, + sessionId, sessionCreatedAt: entry.createdAt, sessionStartSortKey: entry.sortKey, reasonKind: 'new_session_bucket', - reasonText: 'Started a new session bucket because no neighboring capture fell within the 5 minute idle-gap threshold.', + reasonText: 'Started a new session bucket because no recent capture fell within the 5 minute idle-gap threshold.', deriver: DERIVER_NAME, deriverVersion: DERIVER_VERSION, schemaVersion: SCHEMA_VERSION, createdAt: getCurrentTime().toISOString(), - }; + }); } export async function getCanonicalThought(read, entry) { const thoughtId = entry.thoughtId ?? createThoughtId(entry.text); const thoughtProps = await read.view.getNodeProps(thoughtId); - return { + return Object.freeze({ entryId: entry.id, thoughtId, relation: 'expresses', stored: Boolean(thoughtProps), - }; + }); } export async function getSeedQualityReceipt(read, entry) { @@ -327,7 +311,7 @@ export async function getSeedQualityReceipt(read, entry) { return null; } - return { + return Object.freeze({ artifactId, kind: 'seed_quality', primaryInputKind: props.primaryInputKind, @@ -335,12 +319,12 @@ export async function getSeedQualityReceipt(read, entry) { verdict: props.verdict, reasonKind: props.reasonKind, reasonText: props.reasonText, - promptFamilies: parseJsonArray(props.promptFamiliesJson), + promptFamilies: Object.freeze(parseJsonArray(props.promptFamiliesJson)), deriver: props.deriver, deriverVersion: props.deriverVersion, schemaVersion: props.schemaVersion, createdAt: props.createdAt, - }; + }); } export async function getSessionAttributionReceipt(read, entry) { @@ -353,7 +337,7 @@ export async function getSessionAttributionReceipt(read, entry) { return null; } - return { + return Object.freeze({ artifactId, kind: 'session_attribution', primaryInputKind: props.primaryInputKind, @@ -365,7 +349,7 @@ export async function getSessionAttributionReceipt(read, entry) { deriverVersion: props.deriverVersion, schemaVersion: props.schemaVersion, createdAt: props.createdAt, - }; + }); } export async function getSessionAttributionReceiptIfPresent(read, entry) { @@ -379,7 +363,7 @@ export async function getSessionAttributionReceiptIfPresent(read, entry) { return null; } - return { + return Object.freeze({ artifactId, kind: 'session_attribution', primaryInputKind: props.primaryInputKind, @@ -391,7 +375,7 @@ export async function getSessionAttributionReceiptIfPresent(read, entry) { deriverVersion: props.deriverVersion, schemaVersion: props.schemaVersion, createdAt: props.createdAt, - }; + }); } function addArtifactNode(patch, artifact) { diff --git a/src/store/enrichment/auto-tags.js b/src/store/enrichment/auto-tags.js new file mode 100644 index 0000000..93523b5 --- /dev/null +++ b/src/store/enrichment/auto-tags.js @@ -0,0 +1,48 @@ +import { STOPWORDS } from './stopwords.js'; + +const MIN_TOKEN_LENGTH = 3; + +/** + * Extract topic keywords from thought text. + * Pure function — no graph, no LLM. + * + * Algorithm (v1: simple keyword extraction): + * 1. Lowercase the text + * 2. Split on whitespace and punctuation (preserving hyphens within words) + * 3. Remove stopwords + * 4. Remove tokens < 3 characters + * 5. Deduplicate + * 6. Return in order of first appearance + */ +export function extractTopics(text) { + if (!text || typeof text !== 'string' || text.trim() === '') { + return []; + } + + const lower = text.toLowerCase(); + + // Split on whitespace and punctuation, but preserve hyphens within words + const tokens = lower + .split(/[\s,.:;!?()[\]{}"'`]+/) + .map((token) => token.replace(/^-+|-+$/g, '')) + .filter(Boolean); + + const seen = new Set(); + const result = []; + + for (const token of tokens) { + if (token.length < MIN_TOKEN_LENGTH) { + continue; + } + if (STOPWORDS.has(token)) { + continue; + } + if (seen.has(token)) { + continue; + } + seen.add(token); + result.push(token); + } + + return result; +} diff --git a/src/store/enrichment/runner.js b/src/store/enrichment/runner.js new file mode 100644 index 0000000..13c5fdc --- /dev/null +++ b/src/store/enrichment/runner.js @@ -0,0 +1,351 @@ +import { CLASSIFICATION_PREFIX, TOPIC_PREFIX, KEYWORD_PREFIX, GRAPH_META_ID } from '../constants.js'; +import { createArtifactId, getCurrentTime } from '../model.js'; +import { + createProductReadHandle, + getStoredEntry, + listEntriesByKind, + openWarpApp, + patchWarpApp, +} from '../runtime.js'; +import { invalidateSearchIndex } from '../queries.js'; +import { extractTopics } from './auto-tags.js'; +import { classifyThought } from './semantic-parse.js'; + +const TOPIC_PROMOTION_THRESHOLD = 2; + +/** + * Run the enrichment pipeline on all un-enriched captures in a repo. + * Uses worldline query API — no full graph materialization. + */ +export async function runEnrichmentPipeline(repoDir) { + const app = await openWarpApp(repoDir); + const read = await createProductReadHandle(app, repoDir); + const { view } = read; + + // 1. Determine the starting point (high-water mark cursor) + const metaProps = await view.getNodeProps(GRAPH_META_ID); + const cursorId = metaProps?.lastEnrichedCaptureId; + + let captures = []; + if (cursorId && await view.hasNode(cursorId)) { + // Incremental path: Traverse 'newer' edges from the cursor + const forwardIds = await view.traverse.bfs(cursorId, { + dir: 'out', + labelFilter: 'newer', + }); + + for (const id of forwardIds) { + if (id === cursorId) { continue; } + // eslint-disable-next-line no-await-in-loop -- sequential retrieval of new captures + const entry = await getStoredEntry(read, id); + if (entry && entry.kind === 'capture') { + captures.push(entry); + } + } + } else { + // Bootstrap path: O(N) scan (only happens once or if cursor is lost) + captures = await listEntriesByKind(read, 'capture'); + } + + if (captures.length === 0) { + return Object.freeze({ + capturesProcessed: 0, + topicNodesCreated: 0, + keywordNodesCreated: 0, + aboutEdgesAdded: 0, + mentionsEdgesAdded: 0, + classifiedEdgesAdded: 0, + receiptsCreated: 0, + promotedTopics: [], + }); + } + + // Find existing auto_tags receipts via query + const existingReceipts = new Set(); + const tagReceiptResult = await view.query().match('artifact:*').where({ kind: 'auto_tags' }).run(); + for (const node of tagReceiptResult.nodes ?? []) { + if (node.props.primaryInputId) { + existingReceipts.add(node.props.primaryInputId); + } + } + + // Find existing semantic_parse receipts via query + const existingParseReceipts = new Set(); + const parseReceiptResult = await view.query().match('artifact:*').where({ kind: 'semantic_parse' }).run(); + for (const node of parseReceiptResult.nodes ?? []) { + if (node.props.primaryInputId) { + existingParseReceipts.add(node.props.primaryInputId); + } + } + + // Find existing topic nodes via query + const existingTopicNodes = new Set(); + const topicResult = await view.query().match(`${TOPIC_PREFIX}*`).run(); + for (const node of topicResult.nodes ?? []) { + existingTopicNodes.add(node.id); + } + + // Find existing keyword nodes via query + const existingKeywordNodes = new Set(); + const keywordResult = await view.query().match(`${KEYWORD_PREFIX}*`).run(); + for (const node of keywordResult.nodes ?? []) { + existingKeywordNodes.add(node.id); + } + + // Track candidate topic counts and classifications across all captures + const topicCounts = new Map(); + const thoughtTopics = new Map(); + const thoughtClassifications = new Map(); + + for (const capture of captures) { + const { thoughtId } = capture; + if (!thoughtId) { continue; } + + if (!thoughtTopics.has(thoughtId)) { + const topics = extractTopics(capture.text); + thoughtTopics.set(thoughtId, topics); + for (const topic of topics) { + topicCounts.set(topic, (topicCounts.get(topic) || 0) + 1); + } + } + + if (!thoughtClassifications.has(thoughtId)) { + thoughtClassifications.set(thoughtId, classifyThought(capture.text)); + } + } + + // Determine promoted topics + const promotedTopics = new Set(); + for (const [topic, count] of topicCounts) { + if (count >= TOPIC_PROMOTION_THRESHOLD) { + promotedTopics.add(topic); + } + } + + // Check existing about edges per thought via traversal + const existingAboutEdges = new Set(); + for (const [thoughtId] of thoughtTopics) { + // eslint-disable-next-line no-await-in-loop -- per-thought traversal + const traversal = await view.query().match(thoughtId).outgoing('about').run(); + for (const node of traversal.nodes ?? []) { + existingAboutEdges.add(`${thoughtId}\0${node.id}`); + } + } + + // Check existing mentions edges per thought via traversal (inverted index) + const existingMentionsEdges = new Set(); + for (const [thoughtId] of thoughtTopics) { + // eslint-disable-next-line no-await-in-loop -- per-thought traversal + const traversal = await view.query().match(thoughtId).outgoing('mentions').run(); + for (const node of traversal.nodes ?? []) { + existingMentionsEdges.add(`${thoughtId}\0${node.id}`); + } + } + + // Check existing classified_as edges per thought via traversal + const existingClassifiedEdges = new Set(); + for (const [thoughtId] of thoughtClassifications) { + // eslint-disable-next-line no-await-in-loop -- per-thought traversal + const traversal = await view.query().match(thoughtId).outgoing('classified_as').run(); + for (const node of traversal.nodes ?? []) { + existingClassifiedEdges.add(`${thoughtId}\0${node.id}`); + } + } + + const timestamp = getCurrentTime().toISOString(); + const keywordNodesToCreate = []; + const mentionsEdgesToAdd = []; + const topicNodesToCreate = []; + const aboutEdgesToAdd = []; + const autoTagReceiptsToCreate = []; + const classifiedEdgesToAdd = []; + const semanticParseReceiptsToCreate = []; + + for (const [thoughtId, topics] of thoughtTopics) { + for (const keyword of topics) { + const keywordNodeId = `${KEYWORD_PREFIX}${keyword}`; + if (!existingKeywordNodes.has(keywordNodeId)) { + keywordNodesToCreate.push({ keywordNodeId, keyword }); + existingKeywordNodes.add(keywordNodeId); + } + + const edgeKey = `${thoughtId}\0${keywordNodeId}`; + if (!existingMentionsEdges.has(edgeKey)) { + mentionsEdgesToAdd.push({ thoughtId, keywordNodeId }); + existingMentionsEdges.add(edgeKey); + } + } + } + + for (const topic of promotedTopics) { + const nodeId = `${TOPIC_PREFIX}${topic}`; + if (!existingTopicNodes.has(nodeId)) { + topicNodesToCreate.push({ nodeId, topic }); + existingTopicNodes.add(nodeId); + } + } + + for (const [thoughtId, topics] of thoughtTopics) { + for (const topic of topics) { + if (!promotedTopics.has(topic)) { continue; } + const topicNodeId = `${TOPIC_PREFIX}${topic}`; + const edgeKey = `${thoughtId}\0${topicNodeId}`; + if (!existingAboutEdges.has(edgeKey)) { + aboutEdgesToAdd.push({ thoughtId, topicNodeId }); + existingAboutEdges.add(edgeKey); + } + } + } + + for (const capture of captures) { + const { thoughtId } = capture; + if (!thoughtId || existingReceipts.has(thoughtId)) { continue; } + + autoTagReceiptsToCreate.push({ + artifactId: createArtifactId('auto_tags', thoughtId), + thoughtId, + topics: thoughtTopics.get(thoughtId) || [], + }); + existingReceipts.add(thoughtId); + } + + for (const [thoughtId, result] of thoughtClassifications) { + for (const classification of result.classifications) { + const classNodeId = `${CLASSIFICATION_PREFIX}${classification}`; + const edgeKey = `${thoughtId}\0${classNodeId}`; + if (!existingClassifiedEdges.has(edgeKey)) { + classifiedEdgesToAdd.push({ thoughtId, classNodeId }); + existingClassifiedEdges.add(edgeKey); + } + } + } + + for (const capture of captures) { + const { thoughtId } = capture; + if (!thoughtId || existingParseReceipts.has(thoughtId)) { continue; } + + const result = thoughtClassifications.get(thoughtId); + if (!result) { continue; } + + semanticParseReceiptsToCreate.push({ + artifactId: createArtifactId('semantic_parse', thoughtId), + thoughtId, + result, + }); + existingParseReceipts.add(thoughtId); + } + + await patchWarpApp(repoDir, (patch) => { + // Create keyword nodes and mentions edges (The Inverted Index) + for (const { keywordNodeId, keyword } of keywordNodesToCreate) { + patch + .addNode(keywordNodeId) + .setProperty(keywordNodeId, 'kind', 'keyword') + .setProperty(keywordNodeId, 'name', keyword) + .setProperty(keywordNodeId, 'createdAt', timestamp); + } + + for (const { thoughtId, keywordNodeId } of mentionsEdgesToAdd) { + patch.addEdge(thoughtId, keywordNodeId, 'mentions'); + } + + // Create promoted topic nodes + for (const { nodeId, topic } of topicNodesToCreate) { + patch + .addNode(nodeId) + .setProperty(nodeId, 'kind', 'topic') + .setProperty(nodeId, 'name', topic) + .setProperty(nodeId, 'normalizedName', topic) + .setProperty(nodeId, 'createdAt', timestamp) + .setProperty(nodeId, 'source', 'auto_tags'); + } + + // Add about edges for promoted topics + for (const { thoughtId, topicNodeId } of aboutEdgesToAdd) { + patch.addEdge(thoughtId, topicNodeId, 'about'); + } + + // Create auto_tags receipt artifacts + for (const { artifactId, thoughtId, topics } of autoTagReceiptsToCreate) { + patch + .addNode(artifactId) + .setProperty(artifactId, 'kind', 'auto_tags') + .setProperty(artifactId, 'primaryInputKind', 'thought') + .setProperty(artifactId, 'primaryInputId', thoughtId) + .setProperty(artifactId, 'topicsExtracted', JSON.stringify(topics)) + .setProperty(artifactId, 'method', 'keyword-extraction') + .setProperty(artifactId, 'topicNodesCreated', 0) + .setProperty(artifactId, 'deriver', 'think') + .setProperty(artifactId, 'deriverVersion', '1') + .setProperty(artifactId, 'schemaVersion', '1') + .setProperty(artifactId, 'createdAt', timestamp) + .addEdge(artifactId, thoughtId, 'derived_from'); + } + + // Add classified_as edges + for (const { thoughtId, classNodeId } of classifiedEdgesToAdd) { + patch.addEdge(thoughtId, classNodeId, 'classified_as'); + } + + // Create semantic_parse receipt artifacts + for (const { artifactId, thoughtId, result } of semanticParseReceiptsToCreate) { + patch + .addNode(artifactId) + .setProperty(artifactId, 'kind', 'semantic_parse') + .setProperty(artifactId, 'primaryInputKind', 'thought') + .setProperty(artifactId, 'primaryInputId', thoughtId) + .setProperty(artifactId, 'classifications', JSON.stringify(result.classifications)) + .setProperty(artifactId, 'markers', JSON.stringify(result.markers)) + .setProperty(artifactId, 'deriver', 'think') + .setProperty(artifactId, 'deriverVersion', '1') + .setProperty(artifactId, 'schemaVersion', '1') + .setProperty(artifactId, 'createdAt', timestamp) + .addEdge(artifactId, thoughtId, 'derived_from'); + } + + // Update the high-water mark cursor to the latest capture processed + const latestProcessed = [...captures].sort((a, b) => b.createdAt.localeCompare(a.createdAt))[0]; + if (latestProcessed) { + patch.setProperty(GRAPH_META_ID, 'lastEnrichedCaptureId', latestProcessed.id); + } + }); + + invalidateSearchIndex(repoDir); + + return Object.freeze({ + capturesProcessed: captures.length, + topicNodesCreated: topicNodesToCreate.length, + keywordNodesCreated: keywordNodesToCreate.length, + aboutEdgesAdded: aboutEdgesToAdd.length, + mentionsEdgesAdded: mentionsEdgesToAdd.length, + classifiedEdgesAdded: classifiedEdgesToAdd.length, + receiptsCreated: autoTagReceiptsToCreate.length + semanticParseReceiptsToCreate.length, + promotedTopics: [...promotedTopics].sort(), + }); +} + +/** + * List all promoted topics in the graph with thought counts. + * Uses worldline query API — no full graph materialization. + */ +export async function listTopics(repoDir) { + const app = await openWarpApp(repoDir); + const read = await createProductReadHandle(app, repoDir); + + const topicResult = await read.view.query().match(`${TOPIC_PREFIX}*`).where({ kind: 'topic' }).run(); + const topics = []; + + for (const node of topicResult.nodes ?? []) { + // eslint-disable-next-line no-await-in-loop -- per-topic traversal for count + const incoming = await read.view.query().match(node.id).incoming('about').run(); + const thoughtCount = (incoming.nodes ?? []).length; + + topics.push(Object.freeze({ + name: node.props.name, + thoughtCount, + createdAt: node.props.createdAt, + })); + } + + return topics.sort((a, b) => b.thoughtCount - a.thoughtCount); +} diff --git a/src/store/enrichment/semantic-parse.js b/src/store/enrichment/semantic-parse.js new file mode 100644 index 0000000..44e0a53 --- /dev/null +++ b/src/store/enrichment/semantic-parse.js @@ -0,0 +1,77 @@ +/** + * Classify a thought by structural type using pattern matching. + * Returns multiple classifications when multiple patterns match. + * No LLM — deterministic pattern-based. + */ + +const PATTERNS = [ + { + type: 'question', + patterns: [ + /\?/, + /^(how|what|why|when|where|who|can|should|could|would|is there|do we)\b/i, + ], + }, + { + type: 'decision', + patterns: [ + /\b(i decided|we decided|decision:|going with|chose to|picking)\b/i, + ], + }, + { + type: 'observation', + patterns: [ + /\b(i noticed|i observed|it seems|turns out|interesting that|realized)\b/i, + ], + }, + { + type: 'action_item', + patterns: [ + /\b(need to|todo|must\b|should do|action:|next step|follow up)\b/i, + ], + }, + { + type: 'idea', + patterns: [ + /\b(what if|idea:|concept:|maybe we could|imagine|proposal)\b/i, + ], + }, + { + type: 'reference', + patterns: [ + /https?:\/\//, + /\b(see:|ref:|link:|source:)\b/i, + ], + }, +]; + +export function classifyThought(text) { + if (!text || typeof text !== 'string' || text.trim() === '') { + return Object.freeze({ classifications: ['unclassified'], markers: [] }); + } + + const classifications = []; + const markers = []; + + for (const { type, patterns } of PATTERNS) { + for (const pattern of patterns) { + const match = text.match(pattern); + if (match) { + if (!classifications.includes(type)) { + classifications.push(type); + } + markers.push(`${type}:${match[0]}`); + break; + } + } + } + + if (classifications.length === 0) { + return Object.freeze({ classifications: ['unclassified'], markers: [] }); + } + + return Object.freeze({ + classifications: Object.freeze(classifications), + markers: Object.freeze(markers), + }); +} diff --git a/src/store/enrichment/stopwords.js b/src/store/enrichment/stopwords.js new file mode 100644 index 0000000..2755e34 --- /dev/null +++ b/src/store/enrichment/stopwords.js @@ -0,0 +1,19 @@ +// Common English stopwords. Kept minimal — domain terms should survive. +export const STOPWORDS = Object.freeze(new Set([ + 'a', 'an', 'the', 'and', 'or', 'but', 'if', 'in', 'on', 'at', + 'to', 'for', 'of', 'with', 'by', 'from', 'as', 'is', 'was', + 'are', 'were', 'be', 'been', 'being', 'have', 'has', 'had', + 'do', 'does', 'did', 'will', 'would', 'could', 'should', 'may', + 'might', 'shall', 'can', 'need', 'must', 'not', 'no', 'nor', + 'so', 'than', 'too', 'very', 'just', 'about', 'above', 'after', + 'again', 'all', 'also', 'am', 'any', 'because', 'before', + 'between', 'both', 'each', 'few', 'get', 'got', 'her', 'here', + 'him', 'his', 'how', 'into', 'its', 'let', 'like', 'made', + 'make', 'many', 'more', 'most', 'much', 'my', 'now', 'off', + 'only', 'other', 'our', 'out', 'own', 'over', 'same', 'she', + 'some', 'still', 'such', 'take', 'that', 'their', 'them', + 'then', 'there', 'these', 'they', 'this', 'those', 'through', + 'under', 'until', 'up', 'us', 'use', 'used', 'using', 'want', + 'way', 'we', 'well', 'what', 'when', 'where', 'which', 'while', + 'who', 'why', 'you', 'your', +])); diff --git a/src/store/migrations.js b/src/store/migrations.js index e152e61..9053664 100644 --- a/src/store/migrations.js +++ b/src/store/migrations.js @@ -1,147 +1,258 @@ import { ARTIFACT_PREFIX, + CLASSIFICATION_PREFIX, + CLASSIFICATIONS, GRAPH_META_ID, GRAPH_MODEL_VERSION, } from './constants.js'; import { compareEntriesNewestFirst, getCurrentTime } from './model.js'; -import { openWarpApp } from './runtime.js'; +import { openWarpApp, patchWarpApp, patchWarpAppWithWriter } from './runtime.js'; export async function migrateGraphModel(repoDir) { const app = await openWarpApp(repoDir); - const graph = app.core(); - const nodes = await graph.getNodes(); - const edges = await graph.getEdges(); - const edgeKeys = new Set(edges.map(edge => `${edge.from}\0${edge.to}\0${edge.label}`)); - const propsById = new Map(); - - for (const nodeId of nodes) { - // eslint-disable-next-line no-await-in-loop -- sequential graph node reads during migration - const props = await graph.getNodeProps(nodeId); - if (props) { - propsById.set(nodeId, props); - } - } + const worldline = app.worldline(); + + // Check current graph model version + const graphMeta = await worldline.getNodeProps(GRAPH_META_ID); + const needsMetadataNode = !graphMeta; + const needsGraphVersionUpdate = !graphMeta || graphMeta.graphModelVersion !== GRAPH_MODEL_VERSION; + + // Query each node kind separately — no full materialization + const captureResult = await worldline.query().match('entry:*').where({ kind: 'capture' }).run(); + const reflectEntryResult = await worldline.query().match('entry:*').where({ kind: 'reflect' }).run(); + const brainstormEntryResult = await worldline.query().match('entry:*').where({ kind: 'brainstorm' }).run(); + const reflectKindNodes = [ + ...(reflectEntryResult.nodes ?? []), + ...(brainstormEntryResult.nodes ?? []), + ]; + const reflectSessionResult = await worldline.query().match('reflect:*').run(); + const brainstormSessionResult = await worldline.query().match('brainstorm:*').run(); + const sessionNodes = [ + ...(reflectSessionResult.nodes ?? []), + ...(brainstormSessionResult.nodes ?? []), + ]; + const sessionNodeIds = new Set(sessionNodes.map((node) => node.id)); + const artifactResult = await worldline.query().match(`${ARTIFACT_PREFIX}*`).run(); const missingEdges = []; const removableEdges = []; - for (const [nodeId, props] of propsById) { - if (props.kind === 'capture') { - if (typeof props.thoughtId === 'string' && props.thoughtId !== '' && propsById.has(props.thoughtId)) { - pushMissingEdge(missingEdges, edgeKeys, nodeId, props.thoughtId, 'expresses'); - } - if (typeof props.sessionId === 'string' && props.sessionId !== '' && propsById.has(props.sessionId)) { - pushMissingEdge(missingEdges, edgeKeys, nodeId, props.sessionId, 'captured_in'); - } - } + const reflectNodes = new Map(); + addReflectNodes(reflectNodes, reflectKindNodes); - if (props.kind === 'reflect_session' || props.kind === 'brainstorm_session') { - if (typeof props.seedEntryId === 'string' && props.seedEntryId !== '' && propsById.has(props.seedEntryId)) { - pushMissingEdge(missingEdges, edgeKeys, nodeId, props.seedEntryId, 'seeded_by'); - } + for (const node of sessionNodes) { + // eslint-disable-next-line no-await-in-loop -- sequential migration fallback query per reflect session + const linkedReflectEntryResult = await worldline.query() + .match('entry:*') + .where({ sessionId: node.id }) + .run(); + addReflectNodes(reflectNodes, linkedReflectEntryResult.nodes); + } + + // Check capture edges — sequential per-node edge traversal + for (const node of captureResult.nodes ?? []) { + const { id, props } = node; + /* eslint-disable no-await-in-loop -- sequential migration edge checks */ + if (props.thoughtId) { + await pushMissingEdgeIfAbsent(worldline, missingEdges, id, props.thoughtId, 'expresses'); } + if (props.sessionId) { + await pushMissingEdgeIfAbsent(worldline, missingEdges, id, props.sessionId, 'captured_in'); + } + /* eslint-enable no-await-in-loop */ + } - if (props.kind === 'reflect') { - if (typeof props.sessionId === 'string' && props.sessionId !== '' && propsById.has(props.sessionId)) { - pushMissingEdge(missingEdges, edgeKeys, nodeId, props.sessionId, 'produced_in'); - } - if (typeof props.seedEntryId === 'string' && props.seedEntryId !== '' && propsById.has(props.seedEntryId)) { - pushMissingEdge(missingEdges, edgeKeys, nodeId, props.seedEntryId, 'responds_to'); - } + // Check reflect session edges + const seedEntryIdBySessionId = new Map(); + for (const node of sessionNodes) { + const { id, props } = node; + if (props.seedEntryId) { + seedEntryIdBySessionId.set(id, props.seedEntryId); + // eslint-disable-next-line no-await-in-loop -- sequential migration + await pushMissingEdgeIfAbsent(worldline, missingEdges, id, props.seedEntryId, 'seeded_by'); } + } - if (String(nodeId).startsWith(ARTIFACT_PREFIX)) { - if ( - props.primaryInputKind === 'thought' - && typeof props.primaryInputId === 'string' - && propsById.has(props.primaryInputId) - ) { - pushMissingEdge(missingEdges, edgeKeys, nodeId, props.primaryInputId, 'derived_from'); - } + // Check reflect entry edges + for (const node of reflectNodes.values()) { + const { id, props } = node; + const sessionId = props.sessionId ?? inferReflectSessionId(props, sessionNodes); + const seedEntryId = props.seedEntryId ?? seedEntryIdBySessionId.get(sessionId); + /* eslint-disable no-await-in-loop -- sequential migration edge checks */ + if (sessionId) { + await pushMissingEdgeIfAbsent(worldline, missingEdges, id, sessionId, 'produced_in', { + knownTargetNodeIds: sessionNodeIds, + }); + } + if (seedEntryId) { + await pushMissingEdgeIfAbsent(worldline, missingEdges, id, seedEntryId, 'responds_to'); + } + /* eslint-enable no-await-in-loop */ + } - if ( - props.primaryInputKind === 'capture' - && typeof props.primaryInputId === 'string' - && propsById.has(props.primaryInputId) - ) { - pushMissingEdge(missingEdges, edgeKeys, nodeId, props.primaryInputId, 'contextualizes'); - } + // Check artifact edges + for (const node of artifactResult.nodes ?? []) { + const { id, props } = node; + /* eslint-disable no-await-in-loop -- sequential migration edge checks */ + if (props.primaryInputKind === 'thought' && props.primaryInputId) { + await pushMissingEdgeIfAbsent(worldline, missingEdges, id, props.primaryInputId, 'derived_from'); } + if (props.primaryInputKind === 'capture' && props.primaryInputId) { + await pushMissingEdgeIfAbsent(worldline, missingEdges, id, props.primaryInputId, 'contextualizes'); + } + /* eslint-enable no-await-in-loop */ } - const captures = [...propsById.entries()] - .filter(([, props]) => props.kind === 'capture') - .map(([nodeId, props]) => ({ - id: nodeId, - sortKey: String(props.sortKey || ''), - })) + // Build chronology chain + const captures = (captureResult.nodes ?? []) + .map((node) => ({ id: node.id, sortKey: String(node.props.sortKey || '') })) .sort(compareEntriesNewestFirst); - const latestCaptureEdges = edges.filter((edge) => edge.from === GRAPH_META_ID && edge.label === 'latest_capture'); + // Check latest_capture edge const latestCaptureId = captures[0]?.id ?? null; - for (const edge of latestCaptureEdges) { - if (edge.to !== latestCaptureId) { - removableEdges.push(edge); - edgeKeys.delete(`${edge.from}\0${edge.to}\0${edge.label}`); + const latestCaptureTraversal = await worldline.query() + .match(GRAPH_META_ID) + .outgoing('latest_capture') + .run(); + const currentLatestEdges = latestCaptureTraversal.nodes ?? []; + + for (const node of currentLatestEdges) { + if (node.id !== latestCaptureId) { + removableEdges.push({ from: GRAPH_META_ID, to: node.id, label: 'latest_capture' }); } } if (latestCaptureId) { - pushMissingEdge(missingEdges, edgeKeys, GRAPH_META_ID, latestCaptureId, 'latest_capture'); + const hasLatest = currentLatestEdges.some((n) => n.id === latestCaptureId); + if (!hasLatest) { + missingEdges.push({ from: GRAPH_META_ID, to: latestCaptureId, label: 'latest_capture' }); + } } + // Check older chain for (let index = 0; index + 1 < captures.length; index += 1) { - pushMissingEdge(missingEdges, edgeKeys, captures[index].id, captures[index + 1].id, 'older'); + // eslint-disable-next-line no-await-in-loop -- sequential chain check + await pushMissingEdgeIfAbsent(worldline, missingEdges, captures[index].id, captures[index + 1].id, 'older'); } - const graphMeta = propsById.get(GRAPH_META_ID) ?? null; - const needsMetadataNode = !graphMeta; - const needsGraphVersionUpdate = graphMeta?.graphModelVersion !== GRAPH_MODEL_VERSION; + // Check classification nodes + const classificationNodesToCreate = []; + for (const name of CLASSIFICATIONS) { + const nodeId = `${CLASSIFICATION_PREFIX}${name}`; + // eslint-disable-next-line no-await-in-loop -- checking 7 standing nodes + const existing = await worldline.getNodeProps(nodeId); + if (!existing) { + classificationNodesToCreate.push({ nodeId, name }); + } + } - if (missingEdges.length === 0 && removableEdges.length === 0 && !needsMetadataNode && !needsGraphVersionUpdate) { - return { + if ( + missingEdges.length === 0 + && removableEdges.length === 0 + && classificationNodesToCreate.length === 0 + && !needsMetadataNode + && !needsGraphVersionUpdate + ) { + return Object.freeze({ changed: false, graphModelVersion: GRAPH_MODEL_VERSION, edgesAdded: 0, edgesRemoved: 0, metadataUpdated: false, - }; + }); } const timestamp = getCurrentTime().toISOString(); - await graph.patch((patch) => { - if (needsMetadataNode) { + const needsStandardPatch = removableEdges.length > 0 + || classificationNodesToCreate.length > 0 + || needsMetadataNode + || needsGraphVersionUpdate; + + if (needsStandardPatch) { + await patchWarpApp(repoDir, (patch) => { + if (needsMetadataNode) { + patch + .addNode(GRAPH_META_ID) + .setProperty(GRAPH_META_ID, 'kind', 'graph_meta') + .setProperty(GRAPH_META_ID, 'createdAt', timestamp); + } + patch - .addNode(GRAPH_META_ID) - .setProperty(GRAPH_META_ID, 'kind', 'graph_meta') - .setProperty(GRAPH_META_ID, 'createdAt', timestamp); - } + .setProperty(GRAPH_META_ID, 'graphModelVersion', GRAPH_MODEL_VERSION) + .setProperty(GRAPH_META_ID, 'updatedAt', timestamp); - patch - .setProperty(GRAPH_META_ID, 'graphModelVersion', GRAPH_MODEL_VERSION) - .setProperty(GRAPH_META_ID, 'updatedAt', timestamp); + for (const edge of removableEdges) { + patch.removeEdge(edge.from, edge.to, edge.label); + } - for (const edge of removableEdges) { - patch.removeEdge(edge.from, edge.to, edge.label); - } - for (const edge of missingEdges) { - patch.addEdge(edge.from, edge.to, edge.label); - } - }); + for (const { nodeId, name } of classificationNodesToCreate) { + patch + .addNode(nodeId) + .setProperty(nodeId, 'kind', 'classification') + .setProperty(nodeId, 'name', name) + .setProperty(nodeId, 'createdAt', timestamp); + } + }); + } - return { + if (missingEdges.length > 0) { + const migrationWriterId = `${app.writerId}.migration`; + await patchWarpAppWithWriter(repoDir, migrationWriterId, (patch) => { + for (const edge of missingEdges) { + patch.addEdge(edge.from, edge.to, edge.label); + } + }); + } + + return Object.freeze({ changed: true, graphModelVersion: GRAPH_MODEL_VERSION, edgesAdded: missingEdges.length, edgesRemoved: removableEdges.length, metadataUpdated: needsMetadataNode || needsGraphVersionUpdate, - }; + }); } -function pushMissingEdge(target, existingEdgeKeys, from, to, label) { - const key = `${from}\0${to}\0${label}`; - if (existingEdgeKeys.has(key)) { - return; +async function pushMissingEdgeIfAbsent(worldline, target, from, to, label, { knownTargetNodeIds = null } = {}) { + // Verify target node exists before checking edge + if (!knownTargetNodeIds?.has(to)) { + const targetProps = await worldline.getNodeProps(to); + if (!targetProps) { return; } } - existingEdgeKeys.add(key); - target.push({ from, to, label }); + const traversal = await worldline.query().match(from).outgoing(label).run(); + const hasEdge = (traversal.nodes ?? []).some((n) => n.id === to); + if (!hasEdge) { + target.push({ from, to, label }); + } +} + +function addReflectNodes(target, nodes = []) { + for (const node of nodes ?? []) { + if (!node?.id || !isReflectEntryProps(node.props ?? {})) { + continue; + } + target.set(node.id, node); + } +} + +function isReflectEntryProps(props) { + return props.kind === 'reflect' + || props.kind === 'brainstorm' + || props.source === 'reflect' + || props.source === 'brainstorm' + || typeof props.seedEntryId === 'string' + || typeof props.promptType === 'string'; +} + +function inferReflectSessionId(reflectProps, sessionNodes) { + const candidates = sessionNodes.filter(({ props }) => { + if (reflectProps.seedEntryId && props.seedEntryId !== reflectProps.seedEntryId) { + return false; + } + if (reflectProps.promptType && props.promptType && props.promptType !== reflectProps.promptType) { + return false; + } + return true; + }); + + return candidates.length === 1 ? candidates[0].id : null; } diff --git a/src/store/model.js b/src/store/model.js index ada974c..9c87f1a 100644 --- a/src/store/model.js +++ b/src/store/model.js @@ -1,22 +1,35 @@ -import { createHash, randomUUID } from 'node:crypto'; +import { createHash } from 'node:crypto'; import os from 'node:os'; +import { ValidationError } from '../errors.js'; + import { parseJson } from '../json.js'; import { ARTIFACT_PREFIX, + BUCKET_PERIODS, DERIVER_VERSION, ENTRY_PREFIX, MAX_REFLECT_STEPS, REFLECT_SESSION_PREFIX, SCHEMA_VERSION, + TEXT_CONTENT_KINDS, THOUGHT_PREFIX, } from './constants.js'; +/** + * Ports for deterministic execution. + */ +export const DEFAULT_PORTS = Object.freeze({ + clock: { now: () => new Date() }, + random: { uuid: () => crypto.randomUUID() }, + host: { hostname: () => os.hostname() }, +}); + export function storesTextContent(kind) { - return kind === 'capture' || kind === 'reflect' || kind === 'thought'; + return TEXT_CONTENT_KINDS.includes(kind); } -export function getCurrentTime() { +export function getCurrentTime(ports = DEFAULT_PORTS) { if (process.env.THINK_TEST_NOW) { const ms = parseInt(process.env.THINK_TEST_NOW, 10); if (!Number.isNaN(ms)) { @@ -24,7 +37,7 @@ export function getCurrentTime() { } } - return new Date(); + return ports.clock.now(); } export function parseSince(since, now) { @@ -44,9 +57,12 @@ export function parseSince(since, now) { } export function formatBucketKey(date, bucket) { + if (!BUCKET_PERIODS.includes(bucket)) { + throw new Error(`formatBucketKey: invalid bucket "${bucket}" (expected ${BUCKET_PERIODS.join(', ')})`); + } + const iso = date.toISOString(); - if (bucket === 'hour') {return `${iso.substring(0, 13) }:00`;} - if (bucket === 'day') {return iso.substring(0, 10);} + if (bucket === 'hour') { return `${iso.substring(0, 13)}:00`; } if (bucket === 'week') { const day = new Date(date); day.setUTCHours(0, 0, 0, 0); @@ -97,57 +113,87 @@ export function createArtifactId(kind, primaryInputId, discriminator = '') { return `${ARTIFACT_PREFIX}${fingerprint}`; } -export function createWriterId() { - const hostname = os.hostname().toLowerCase().replace(/[^a-z0-9._-]+/g, '-'); +export function createWriterId(ports = DEFAULT_PORTS) { + const hostname = ports.host.hostname().toLowerCase().replace(/[^a-z0-9._-]+/g, '-'); const safeHostname = hostname || 'unknown-host'; return `local.${safeHostname}.cli`; } -export function createEntry(text, writerId, { kind, source }) { - const timestamp = getCurrentTime(); - const unique = randomUUID(); - const createdAt = timestamp.toISOString(); - const sortKey = `${String(timestamp.getTime()).padStart(13, '0')}-${unique}`; - - return { - id: `${ENTRY_PREFIX}${sortKey}`, +export class Entry { + constructor(text, writerId, { kind, source, - channel: 'cli', - writerId, - createdAt, - sortKey, - text, - }; -} - -export function createReflectSession(writerId, { - seedEntryId, - contrastEntryId, - promptType, - question, - selectionReason, -}) { - const timestamp = getCurrentTime(); - const createdAt = timestamp.toISOString(); - const unique = randomUUID(); - const sortKey = `${String(timestamp.getTime()).padStart(13, '0')}-${unique}`; - - return { - id: `${REFLECT_SESSION_PREFIX}${unique}`, - kind: 'reflect_session', - source: 'reflect', - channel: 'cli', - writerId, - createdAt, - sortKey, + seedEntryId = null, + contrastEntryId = null, + sessionId = null, + promptType = null, + }, ports = DEFAULT_PORTS) { + if (!text || typeof text !== 'string') { + throw new ValidationError('Entry: text is required and must be a non-empty string'); + } + if (!writerId || typeof writerId !== 'string') { + throw new ValidationError('Entry: writerId is required and must be a non-empty string'); + } + + const timestamp = getCurrentTime(ports); + const unique = ports.random.uuid(); + const createdAt = timestamp.toISOString(); + const sortKey = `${String(timestamp.getTime()).padStart(13, '0')}-${unique}`; + + this.id = `${ENTRY_PREFIX}${sortKey}`; + this.kind = kind; + this.source = source; + this.channel = 'cli'; + this.writerId = writerId; + this.createdAt = createdAt; + this.sortKey = sortKey; + this.text = text; + this.seedEntryId = seedEntryId; + this.contrastEntryId = contrastEntryId; + this.sessionId = sessionId; + this.promptType = promptType; + + Object.freeze(this); + } +} + +export function createEntry(text, writerId, options, ports = DEFAULT_PORTS) { + return new Entry(text, writerId, options, ports); +} + +export class ReflectSession { + constructor(writerId, { seedEntryId, contrastEntryId, promptType, question, selectionReason, - maxSteps: MAX_REFLECT_STEPS, - }; + }, ports = DEFAULT_PORTS) { + const timestamp = getCurrentTime(ports); + const createdAt = timestamp.toISOString(); + const unique = ports.random.uuid(); + const sortKey = `${String(timestamp.getTime()).padStart(13, '0')}-${unique}`; + + this.id = `${REFLECT_SESSION_PREFIX}${unique}`; + this.kind = 'reflect_session'; + this.source = 'reflect'; + this.channel = 'cli'; + this.writerId = writerId; + this.createdAt = createdAt; + this.sortKey = sortKey; + this.seedEntryId = seedEntryId; + this.contrastEntryId = contrastEntryId; + this.promptType = promptType; + this.question = question; + this.selectionReason = selectionReason; + this.maxSteps = MAX_REFLECT_STEPS; + + Object.freeze(this); + } +} + +export function createReflectSession(writerId, options, ports = DEFAULT_PORTS) { + return new ReflectSession(writerId, options, ports); } export function compareEntriesNewestFirst(left, right) { diff --git a/src/store/ports.js b/src/store/ports.js new file mode 100644 index 0000000..e5e2a28 --- /dev/null +++ b/src/store/ports.js @@ -0,0 +1,62 @@ +/** + * ClockPort interface for deterministic time. + * Adheres to Infrastructure Doctrine P7. + */ +export class ClockPort { + /** @returns {Date} */ + now() { throw new Error('now() not implemented'); } +} + +/** + * HostPort interface for host-specific metadata. + * Adheres to Infrastructure Doctrine Hexagonal Architecture rule. + */ +export class HostPort { + /** @returns {string} */ + hostname() { throw new Error('hostname() not implemented'); } +} + +/** + * RandomPort interface for deterministic randomness. + * Adheres to Infrastructure Doctrine P7. + */ +export class RandomPort { + /** @returns {string} */ + uuid() { throw new Error('uuid() not implemented'); } +} + +class SystemClock extends ClockPort { + now() { + if (process.env.THINK_TEST_NOW) { + const ms = parseInt(process.env.THINK_TEST_NOW, 10); + if (!Number.isNaN(ms)) { + return new Date(ms); + } + } + return new Date(); + } +} + +class SystemHost extends HostPort { + hostname() { + // This needs careful implementation for browser-first doctrine. + return 'unknown-host'; + } +} + +class SystemRandom extends RandomPort { + uuid() { + return crypto.randomUUID(); + } +} + +/** + * Standard System implementation of ports for production CLI use. + */ +export class SystemPorts { + constructor() { + this.clock = new SystemClock(); + this.host = new SystemHost(); + this.random = new SystemRandom(); + } +} diff --git a/src/store/prompt-metrics.js b/src/store/prompt-metrics.js index 9d4572a..fdf556c 100644 --- a/src/store/prompt-metrics.js +++ b/src/store/prompt-metrics.js @@ -2,16 +2,17 @@ import { readFile } from 'node:fs/promises'; import { parseJson } from '../json.js'; -export async function readPromptMetricsRecords(filePath) { +export async function readPromptMetricsRecords(filePath, { reader = null } = {}) { try { - const contents = await readFile(filePath, 'utf8'); + const read = reader ?? ((p) => readFile(p, 'utf8')); + const contents = await read(filePath); return String(contents) .split('\n') .map((line) => line.trim()) .filter(Boolean) .map((line) => { try { - return parseJson(line); + return normalizeMetricRecord(parseJson(line)); } catch { return null; } @@ -25,25 +26,44 @@ export async function readPromptMetricsRecords(filePath) { } } +function normalizeMetricRecord(raw) { + if (!raw || typeof raw !== 'object' || !raw.sessionId) { + return null; + } + + return Object.freeze({ + sessionId: String(raw.sessionId), + ts: raw.ts ?? null, + dismissalOutcome: raw.dismissalOutcome ?? null, + trigger: raw.trigger ?? null, + triggerToVisibleMs: typeof raw.triggerToVisibleMs === 'number' ? raw.triggerToVisibleMs : null, + typingDurationMs: typeof raw.typingDurationMs === 'number' ? raw.typingDurationMs : null, + submitToHideMs: typeof raw.submitToHideMs === 'number' ? raw.submitToHideMs : null, + submitToLocalCaptureMs: typeof raw.submitToLocalCaptureMs === 'number' ? raw.submitToLocalCaptureMs : null, + captureOutcome: raw.captureOutcome ?? null, + backupState: raw.backupState ?? null, + }); +} + export function summarizePromptMetrics(records) { - return records.reduce((summary, record) => { - summary.sessions += 1; + const summary = records.reduce((acc, record) => { + acc.sessions += 1; if (record.dismissalOutcome === 'submitted') { - summary.submitted += 1; + acc.submitted += 1; } else if (record.dismissalOutcome === 'abandoned_empty') { - summary.abandonedEmpty += 1; + acc.abandonedEmpty += 1; } else if (record.dismissalOutcome === 'abandoned_started') { - summary.abandonedStarted += 1; + acc.abandonedStarted += 1; } if (record.trigger === 'hotkey') { - summary.hotkey += 1; + acc.hotkey += 1; } else if (record.trigger === 'menu') { - summary.menu += 1; + acc.menu += 1; } - return summary; + return acc; }, { sessions: 0, submitted: 0, @@ -52,6 +72,8 @@ export function summarizePromptMetrics(records) { hotkey: 0, menu: 0, }); + + return Object.freeze(summary); } export function summarizePromptMetricTimings(records) { @@ -68,14 +90,14 @@ export function summarizePromptMetricTimings(records) { .filter((value) => Number.isFinite(value)) .sort((left, right) => left - right); - return { + return Object.freeze({ metric, sampleCount: samples.length, medianMs: samples.length > 0 ? median(samples) : null, meanMs: samples.length > 0 ? mean(samples) : null, minMs: samples.length > 0 ? samples[0] : null, maxMs: samples.length > 0 ? samples[samples.length - 1] : null, - }; + }); }); } @@ -109,7 +131,9 @@ export function summarizePromptMetricBuckets(records, bucket, formatBucketKey) { } } - return Object.values(buckets).sort((left, right) => right.key.localeCompare(left.key)); + return Object.values(buckets) + .map((b) => Object.freeze(b)) + .sort((left, right) => right.key.localeCompare(left.key)); } function median(values) { diff --git a/src/store/queries.js b/src/store/queries.js index cbeaf3b..997e255 100644 --- a/src/store/queries.js +++ b/src/store/queries.js @@ -1,4 +1,7 @@ import { getPromptMetricsFile } from '../paths.js'; +import { + KEYWORD_PREFIX, +} from './constants.js'; import { compareEntriesNewestFirst, formatBucketKey, @@ -27,10 +30,12 @@ import { getStoredEntry, listChronologyEntries, listEntriesByKind, + listRecentStoredEntries, openProductReadHandle, resolveGraphSessionTraversal, toBrowseEntry, } from './runtime.js'; +import { listCheckpointEntriesByKind } from './checkpoint-read.js'; import { assessReflectability, ensureFirstDerivedArtifacts, @@ -40,6 +45,52 @@ import { getSessionAttributionReceiptIfPresent, listDirectDerivedReceipts, } from './derivation.js'; +import { KeywordTrie } from './trie.js'; + +const DEFAULT_RECENT_LIMIT = 50; +const searchIndexCache = new Map(); +const searchIndexLoadingPromises = new Map(); + +export function invalidateSearchIndex(repoDir) { + searchIndexCache.delete(repoDir); + searchIndexLoadingPromises.delete(repoDir); +} + +/** + * Bootstrap the in-memory search index (Trie) from keyword nodes in the graph. + * Uses a loading promise to prevent race conditions during concurrent requests. + */ +export function loadSearchIndex(repoDir) { + const cached = searchIndexCache.get(repoDir); + if (cached) { + return Promise.resolve(cached); + } + + const loading = searchIndexLoadingPromises.get(repoDir); + if (loading) { + return loading; + } + + const loadingPromise = (async () => { + const read = await openProductReadHandle(repoDir); + const trie = new KeywordTrie(); + + const keywordResult = await read.view.query().match(`${KEYWORD_PREFIX}*`).where({ kind: 'keyword' }).run(); + for (const node of keywordResult.nodes ?? []) { + if (node.props.name) { + trie.insert(node.props.name); + } + } + + searchIndexCache.set(repoDir, trie); + return trie; + })().finally(() => { + searchIndexLoadingPromises.delete(repoDir); + }); + + searchIndexLoadingPromises.set(repoDir, loadingPromise); + return loadingPromise; +} export async function rememberThoughts( repoDir, @@ -51,54 +102,126 @@ export async function rememberThoughts( } = {} ) { const read = await openProductReadHandle(repoDir); - const captures = (await listEntriesByKind(read, 'capture')) - .map((entry) => ({ - id: entry.id, - text: entry.text, - sortKey: entry.sortKey, - createdAt: entry.createdAt, - ambientCwd: entry.ambientCwd ?? null, - ambientGitRoot: entry.ambientGitRoot ?? null, - ambientGitRemote: entry.ambientGitRemote ?? null, - ambientGitBranch: entry.ambientGitBranch ?? null, - })) - .sort(compareEntriesNewestFirst); + const limitValue = limit ?? DEFAULT_RECENT_LIMIT; + // 1. If there's an explicit query, try the graph-native inverted index first (O(1)) if (query && String(query).trim() !== '') { const explicitScope = buildExplicitRememberScope(query); - const explicitMatches = captures - .map((entry) => buildExplicitRememberMatch(entry, explicitScope)) + const queryTerms = query.toLowerCase().split(/\s+/).filter(Boolean); + const indexMatches = new Map(); + + // Use Trie for prefix matching on query terms + const trie = await loadSearchIndex(repoDir); + const expandedKeywords = new Map(); // keyword -> distance + + for (const term of queryTerms) { + const prefixMatches = trie.search(term); + for (const m of prefixMatches) { + expandedKeywords.set(m, 0); // Exact or prefix match has distance 0 + } + + // If we don't have many matches, try fuzzy (edit distance) + if (prefixMatches.length < 10) { + const fuzzyMatches = trie.searchFuzzy(term, term.length > 4 ? 2 : 1); + for (const { keyword, distance } of fuzzyMatches) { + if (!expandedKeywords.has(keyword) || distance < expandedKeywords.get(keyword)) { + expandedKeywords.set(keyword, distance); + } + } + } + } + + for (const [keyword, distance] of expandedKeywords) { + const keywordNodeId = `${KEYWORD_PREFIX}${keyword}`; + // eslint-disable-next-line no-await-in-loop -- sequential keyword index lookup + const traversal = await read.view.query().match(keywordNodeId).incoming('mentions').run(); + + for (const node of traversal.nodes ?? []) { + if (!indexMatches.has(node.id)) { + // eslint-disable-next-line no-await-in-loop -- sequential retrieval of indexed thoughts + const entry = await getStoredEntry(read, node.id); + if (entry) { + const match = buildExplicitRememberMatch({ + ...entry, + ambientCwd: entry.ambientCwd ?? null, + ambientGitRoot: entry.ambientGitRoot ?? null, + ambientGitRemote: entry.ambientGitRemote ?? null, + ambientGitBranch: entry.ambientGitBranch ?? null, + }, explicitScope); + + if (match) { + // Adjust score based on fuzzy distance + const fuzzyAdjustedMatch = { + ...match, + score: match.score - (distance * 0.1), // Typos rank slightly lower + }; + indexMatches.set(node.id, fuzzyAdjustedMatch); + } + } + } + } + } + + if (indexMatches.size > 0) { + const sortedMatches = Array.from(indexMatches.values()).sort(compareRememberMatches); + return Object.freeze({ + scope: Object.freeze({ ...explicitScope, brief, limit: limitValue }), + matches: finalizeRememberMatches(sortedMatches, { brief, limit: limitValue }), + }); + } + + // Fallback: If index is empty (e.g. not enriched yet or partial word), use windowed scan + const chronologyList = await listRecentStoredEntries(read, { limit: 2000 }); + const fallbackMatches = chronologyList + .map((entry) => buildExplicitRememberMatch({ + ...entry, + ambientCwd: entry.ambientCwd ?? null, + ambientGitRoot: entry.ambientGitRoot ?? null, + ambientGitRemote: entry.ambientGitRemote ?? null, + ambientGitBranch: entry.ambientGitBranch ?? null, + }, explicitScope)) .filter(Boolean) .sort(compareRememberMatches); - return { - scope: { - ...explicitScope, - brief, - limit, - }, - matches: finalizeRememberMatches(explicitMatches, { brief, limit }), - }; + + return Object.freeze({ + scope: Object.freeze({ ...explicitScope, brief, limit: limitValue }), + matches: finalizeRememberMatches(fallbackMatches, { brief, limit: limitValue }), + }); } - const scope = buildAmbientRememberScope(cwd); - const matches = captures - .map((entry) => buildAmbientRememberMatch(entry, scope)) + // 2. Ambient remember (cwd-based) + const ambientScope = buildAmbientRememberScope(cwd); + const fullChronology = await listRecentStoredEntries(read, { limit: 2000 }); + const ambientMatches = fullChronology + .map((entry) => buildAmbientRememberMatch({ + ...entry, + ambientCwd: entry.ambientCwd ?? null, + ambientGitRoot: entry.ambientGitRoot ?? null, + ambientGitRemote: entry.ambientGitRemote ?? null, + ambientGitBranch: entry.ambientGitBranch ?? null, + }, ambientScope)) .filter(Boolean) .sort(compareRememberMatches); - return { - scope: { - ...scope, - brief, - limit, - }, - matches: finalizeRememberMatches(matches, { brief, limit }), - }; + + return Object.freeze({ + scope: Object.freeze({ ...ambientScope, brief, limit: limitValue }), + matches: finalizeRememberMatches(ambientMatches, { brief, limit: limitValue }), + }); } export async function getStats(repoDir, { from, to, since, bucket } = {}) { + const checkpointCaptures = await listCheckpointEntriesByKind(repoDir, 'capture'); + if (checkpointCaptures !== null) { + return statsFromCaptures(checkpointCaptures, { from, to, since, bucket }); + } + const read = await openProductReadHandle(repoDir); - const entries = []; + const captures = await listEntriesByKind(read, 'capture'); + return statsFromCaptures(captures, { from, to, since, bucket }); +} +function statsFromCaptures(captures, { from, to, since, bucket } = {}) { + const entries = []; const now = getCurrentTime(); const sinceDate = since ? parseSince(since, now) : null; const fromDate = from ? new Date(from) : null; @@ -108,7 +231,7 @@ export async function getStats(repoDir, { from, to, since, bucket } = {}) { toDate.setUTCHours(23, 59, 59, 999); } - for (const entry of await listEntriesByKind(read, 'capture')) { + for (const entry of captures) { const createdAt = new Date(entry.createdAt); if (sinceDate && createdAt < sinceDate) {continue;} @@ -119,7 +242,7 @@ export async function getStats(repoDir, { from, to, since, bucket } = {}) { } if (!bucket) { - return { total: entries.length }; + return Object.freeze({ total: entries.length }); } const buckets = {}; @@ -128,12 +251,14 @@ export async function getStats(repoDir, { from, to, since, bucket } = {}) { buckets[key] = (buckets[key] || 0) + 1; } - return { + return Object.freeze({ total: entries.length, - buckets: Object.entries(buckets) - .sort((a, b) => b[0].localeCompare(a[0])) - .map(([key, count]) => ({ key, count })), - }; + buckets: Object.freeze( + Object.entries(buckets) + .sort((a, b) => b[0].localeCompare(a[0])) + .map(([key, count]) => Object.freeze({ key, count })) + ), + }); } export async function getPromptMetrics({ from, to, since, bucket } = {}) { @@ -166,9 +291,24 @@ export async function getPromptMetrics({ from, to, since, bucket } = {}) { } export async function listRecent(repoDir, { count = null, query = null } = {}) { + const limit = count ?? DEFAULT_RECENT_LIMIT; const read = await openProductReadHandle(repoDir); - const captures = await listEntriesByKind(read, 'capture'); + // Recent output reports the total capture count, so use the authoritative + // capture set instead of a potentially stale latest_capture chain. + if (!query) { + const unfilteredRecent = (await listEntriesByKind(read, 'capture')) + .map(toBrowseEntry) + .sort(compareEntriesNewestFirst); + return Object.freeze({ + entries: unfilteredRecent.slice(0, limit), + total: unfilteredRecent.length, + }); + } + + // If there is a query, we still need to filter. + // Future optimization: windowed search traversal. + const captures = await listEntriesByKind(read, 'capture'); const recent = captures .map(entry => ({ id: entry.id, @@ -179,20 +319,16 @@ export async function listRecent(repoDir, { count = null, query = null } = {}) { })) .sort(compareEntriesNewestFirst); - const filtered = query - ? recent.filter((entry) => matchesRecentQuery(entry.text, query)) - : recent; - - if (count === null) { - return filtered; - } + const filtered = recent.filter((entry) => matchesRecentQuery(entry.text, query)); + const total = filtered.length; + const entries = filtered.slice(0, limit); - return filtered.slice(0, count); + return Object.freeze({ entries, total }); } export async function listReflectableRecent(repoDir) { - const recent = await listRecent(repoDir); - return recent.filter((entry) => assessReflectability(entry.text).eligible); + const { entries } = await listRecent(repoDir); + return entries.filter((entry) => assessReflectability(entry.text).eligible); } export async function loadBrowseChronologyEntries(repoDir) { @@ -267,7 +403,7 @@ export async function inspectRawEntryForRead(read, entryId) { return null; } - await ensureFirstDerivedArtifacts(read.app, read, entry); + await ensureFirstDerivedArtifacts(read.repoDir, read, entry); entry = await getStoredEntry(read, entryId); const canonicalThought = await getCanonicalThought(read, entry); @@ -275,7 +411,9 @@ export async function inspectRawEntryForRead(read, entryId) { const sessionAttribution = await getSessionAttributionReceipt(read, entry); const derivedReceipts = await listDirectDerivedReceipts(read, entryId); - return { + const annotations = await listAnnotationsForEntry(read, entryId); + + return Object.freeze({ entryId: entry.id, thoughtId: canonicalThought?.thoughtId ?? createThoughtId(entry.text), kind: 'raw_capture', @@ -287,7 +425,27 @@ export async function inspectRawEntryForRead(read, entryId) { seedQuality, sessionAttribution, derivedReceipts, - }; + annotations, + }); +} + +async function listAnnotationsForEntry(read, entryId) { + const traversal = await read.view.query().match(entryId).incoming('annotates').run(); + const annotations = []; + + for (const node of traversal.nodes ?? []) { + // eslint-disable-next-line no-await-in-loop -- sequential annotation reads + const entry = await getStoredEntry(read, node.id); + if (entry) { + annotations.push(Object.freeze({ + annotationId: entry.id, + text: entry.text, + createdAt: entry.createdAt, + })); + } + } + + return annotations.sort((a, b) => a.createdAt.localeCompare(b.createdAt)); } async function buildBrowseWindow(read, entryId) { @@ -305,7 +463,7 @@ async function buildBrowseWindow(read, entryId) { const sessionAttribution = await getSessionAttributionReceiptIfPresent(read, currentEntry); const sessionTraversal = await resolveGraphSessionTraversal(read, current); - return { + return Object.freeze({ current, newer, older, @@ -340,5 +498,5 @@ async function buildBrowseWindow(read, entryId) { : []), ] : [], - }; + }); } diff --git a/src/store/reflect.js b/src/store/reflect.js index 9e4155b..2d5daaf 100644 --- a/src/store/reflect.js +++ b/src/store/reflect.js @@ -5,6 +5,7 @@ import { SHARPEN_PROMPTS, TEXT_MIME, } from './constants.js'; +import { encodeTextContent } from './content.js'; import { createEntry, createReflectSession, @@ -16,12 +17,13 @@ import { getReflectSession, getStoredEntry, openWarpApp, + patchWarpApp, } from './runtime.js'; import { assessReflectability } from './derivation.js'; export async function startReflect(repoDir, seedEntryId, { promptType = null } = {}) { const app = await openWarpApp(repoDir); - const read = await createProductReadHandle(app); + const read = await createProductReadHandle(app, repoDir); const planned = await planReflect(read, seedEntryId, { promptType }); if (!planned.ok) { @@ -38,7 +40,7 @@ export async function startReflect(repoDir, seedEntryId, { promptType = null } = }); // eslint-disable-next-line require-await -- git-warp patch callback must be async for the library API - await app.patch(async patch => { + await patchWarpApp(repoDir, async patch => { patch .addNode(session.id) .setProperty(session.id, 'kind', session.kind) @@ -62,7 +64,7 @@ export async function startReflect(repoDir, seedEntryId, { promptType = null } = } }); - return { + return Object.freeze({ ok: true, sessionId: session.id, seedEntryId: session.seedEntryId, @@ -73,19 +75,19 @@ export async function startReflect(repoDir, seedEntryId, { promptType = null } = selectionReason: session.selectionReason, seedEntry: planned.seedEntry, contrastEntry: null, - }; + }); } export async function previewReflect(repoDir, seedEntryId, { promptType = null } = {}) { const app = await openWarpApp(repoDir); - const read = await createProductReadHandle(app); + const read = await createProductReadHandle(app, repoDir); const planned = await planReflect(read, seedEntryId, { promptType }); if (!planned.ok) { return planned; } - return { + return Object.freeze({ ok: true, seedEntryId, contrastEntryId: null, @@ -95,12 +97,12 @@ export async function previewReflect(repoDir, seedEntryId, { promptType = null } selectionReason: planned.promptPlan.selectionReason, seedEntry: planned.seedEntry, contrastEntry: null, - }; + }); } export async function saveReflectResponse(repoDir, sessionId, response) { const app = await openWarpApp(repoDir); - const read = await createProductReadHandle(app); + const read = await createProductReadHandle(app, repoDir); const session = await getReflectSession(read, sessionId); if (!session) { @@ -110,14 +112,13 @@ export async function saveReflectResponse(repoDir, sessionId, response) { const entry = createEntry(response, app.writerId, { kind: 'reflect', source: 'reflect', + seedEntryId: session.seedEntryId, + contrastEntryId: session.contrastEntryId, + sessionId: session.id, + promptType: session.promptType, }); - entry.seedEntryId = session.seedEntryId; - entry.contrastEntryId = session.contrastEntryId; - entry.sessionId = session.id; - entry.promptType = session.promptType; - - await app.patch(async patch => { + await patchWarpApp(repoDir, async patch => { patch .addNode(entry.id) .setProperty(entry.id, 'kind', entry.kind) @@ -142,7 +143,7 @@ export async function saveReflectResponse(repoDir, sessionId, response) { .setProperty(session.id, 'stepCount', session.stepCount + 1) .setProperty(session.id, 'updatedAt', entry.createdAt); - await patch.attachContent(entry.id, response, { mime: TEXT_MIME }); + await patch.attachContent(entry.id, encodeTextContent(response), { mime: TEXT_MIME }); }); return entry; @@ -152,87 +153,87 @@ function selectReflectPrompt(seedEntry, requestedPromptType = null) { const normalized = normalizeSeed(seedEntry.text); if (requestedPromptType === 'challenge') { - return { + return Object.freeze({ promptType: 'challenge', - selectionReason: { + selectionReason: Object.freeze({ kind: 'requested_challenge', text: 'Used the requested challenge prompt family for this reflect session.', - }, + }), question: pickDeterministicPrompt(CHALLENGE_PROMPTS, normalized), - }; + }); } if (requestedPromptType === 'constraint') { - return { + return Object.freeze({ promptType: 'constraint', - selectionReason: { + selectionReason: Object.freeze({ kind: 'requested_constraint', text: 'Used the requested constraint prompt family for this reflect session.', - }, + }), question: pickDeterministicPrompt(CONSTRAINT_PROMPTS, normalized), - }; + }); } if (requestedPromptType === 'sharpen') { - return { + return Object.freeze({ promptType: 'sharpen', - selectionReason: { + selectionReason: Object.freeze({ kind: 'requested_sharpen', text: 'Used the requested sharpen prompt family for this reflect session.', - }, + }), question: pickDeterministicPrompt(SHARPEN_PROMPTS, normalized), - }; + }); } const familyIndex = stableHash(normalized) % 2; if (familyIndex === 0) { - return { + return Object.freeze({ promptType: 'challenge', - selectionReason: { + selectionReason: Object.freeze({ kind: 'seed_only_challenge', text: 'Used a deterministic challenge prompt from the seed thought alone.', - }, + }), question: pickDeterministicPrompt(CHALLENGE_PROMPTS, normalized), - }; + }); } - return { + return Object.freeze({ promptType: 'constraint', - selectionReason: { + selectionReason: Object.freeze({ kind: 'seed_only_constraint', text: 'Used a deterministic constraint prompt from the seed thought alone.', - }, + }), question: pickDeterministicPrompt(CONSTRAINT_PROMPTS, normalized), - }; + }); } async function planReflect(read, seedEntryId, { promptType = null } = {}) { const seedEntry = await getStoredEntry(read, seedEntryId); if (!seedEntry || seedEntry.kind !== 'capture') { - return { + return Object.freeze({ ok: false, code: 'seed_not_found', - }; + }); } const eligibility = assessReflectability(seedEntry.text); if (!eligibility.eligible) { - return { + return Object.freeze({ ok: false, code: 'seed_ineligible', seedEntryId, seedEntry, eligibility, - }; + }); } - return { + return Object.freeze({ ok: true, seedEntry, promptPlan: selectReflectPrompt(seedEntry, promptType), - }; + }); } function pickDeterministicPrompt(prompts, normalizedSeed) { diff --git a/src/store/remember.js b/src/store/remember.js index 988f1da..57a029c 100644 --- a/src/store/remember.js +++ b/src/store/remember.js @@ -4,7 +4,7 @@ import { normalizeSeed } from './model.js'; export function buildAmbientRememberScope(cwd) { const context = getAmbientProjectContext(cwd); - return { + return Object.freeze({ scopeKind: 'ambient_project', cwd: context.cwd, gitRoot: context.gitRoot, @@ -12,15 +12,15 @@ export function buildAmbientRememberScope(cwd) { gitBranch: context.gitBranch, projectName: context.projectName, projectTokens: context.projectTokens, - }; + }); } export function buildExplicitRememberScope(query) { - return { + return Object.freeze({ scopeKind: 'query', queryText: String(query).trim(), - queryTerms: buildQueryTerms(query), - }; + queryTerms: Object.freeze(buildQueryTerms(query)), + }); } export function buildAmbientRememberMatch(entry, scope) { @@ -86,16 +86,16 @@ export function buildAmbientRememberMatch(entry, scope) { return null; } - return { + return Object.freeze({ entryId: entry.id, text: entry.text, sortKey: entry.sortKey, createdAt: entry.createdAt, score, tier, - matchKinds, + matchKinds: Object.freeze(matchKinds), reasonText, - }; + }); } export function buildExplicitRememberMatch(entry, scope) { @@ -114,16 +114,16 @@ export function buildExplicitRememberMatch(entry, scope) { ? `matched query phrase "${scope.queryText}"` : `matched query terms "${matchedTerms.join('", "')}"`; - return { + return Object.freeze({ entryId: entry.id, text: entry.text, sortKey: entry.sortKey, createdAt: entry.createdAt, score: matchedTerms.length || 1, tier: 1, - matchKinds, + matchKinds: Object.freeze(matchKinds), reasonText, - }; + }); } export function compareRememberMatches(left, right) { diff --git a/src/store/runtime.js b/src/store/runtime.js index c46c4e5..cf754cc 100644 --- a/src/store/runtime.js +++ b/src/store/runtime.js @@ -1,6 +1,8 @@ import Plumbing from '@git-stunts/plumbing'; import WarpApp, { GitGraphAdapter } from '@git-stunts/git-warp'; +import { createAppContentReader } from './content-reader.js'; +import { openCheckpointProductRead } from './checkpoint-product-read.js'; import { ARTIFACT_PREFIX, CHECKPOINT_POLICY, @@ -11,6 +13,7 @@ import { LEGACY_BRAINSTORM_SESSION_PREFIX, PRODUCT_READ_LENS, REFLECT_SESSION_PREFIX, + SESSION_KINDS, SESSION_PREFIX, THOUGHT_PREFIX, } from './constants.js'; @@ -21,35 +24,272 @@ import { storesTextContent, } from './model.js'; -// eslint-disable-next-line require-await -- wraps git-warp WarpApp.open which returns a promise +export class GenericEntry { + constructor(nodeId, resolvedProps, text) { + this.id = nodeId; + this.kind = resolvedProps.kind; + this.writerId = resolvedProps.writerId; + this.createdAt = resolvedProps.createdAt; + this.sortKey = String(resolvedProps.sortKey || ''); + this.text = text; + Object.freeze(this); + } +} + +export class CaptureEntry { + constructor(nodeId, resolvedProps, text) { + this.id = nodeId; + this.kind = resolvedProps.kind; + this.writerId = resolvedProps.writerId; + this.createdAt = resolvedProps.createdAt; + this.sortKey = String(resolvedProps.sortKey || ''); + this.text = text; + this.source = resolvedProps.source; + this.channel = resolvedProps.channel; + this.thoughtId = resolvedProps.thoughtId ?? null; + this.sessionId = resolvedProps.sessionId ?? null; + this.ambientCwd = resolvedProps.ambientCwd ?? null; + this.ambientGitRoot = resolvedProps.ambientGitRoot ?? null; + this.ambientGitRemote = resolvedProps.ambientGitRemote ?? null; + this.ambientGitBranch = resolvedProps.ambientGitBranch ?? null; + this.captureProvenance = resolvedProps.captureIngress || resolvedProps.captureSourceApp || resolvedProps.captureSourceURL + ? Object.freeze({ + ingress: resolvedProps.captureIngress ?? null, + sourceApp: resolvedProps.captureSourceApp ?? null, + sourceURL: resolvedProps.captureSourceURL ?? null, + }) + : null; + Object.freeze(this); + } +} + +export class ReflectEntry { + constructor(nodeId, resolvedProps, text) { + this.id = nodeId; + this.kind = resolvedProps.kind; + this.writerId = resolvedProps.writerId; + this.createdAt = resolvedProps.createdAt; + this.sortKey = String(resolvedProps.sortKey || ''); + this.text = text; + this.seedEntryId = resolvedProps.seedEntryId ?? null; + this.contrastEntryId = resolvedProps.contrastEntryId ?? null; + this.promptType = resolvedProps.promptType ?? null; + this.question = resolvedProps.question ?? null; + this.selectionReason = resolvedProps.selectionReasonKind + ? Object.freeze({ + kind: resolvedProps.selectionReasonKind, + text: resolvedProps.selectionReasonText ?? '', + }) + : null; + this.stepCount = Number(resolvedProps.stepCount ?? 0); + this.maxSteps = Number(resolvedProps.maxSteps ?? 0); + Object.freeze(this); + } +} + +export class AnnotationEntry { + constructor(nodeId, resolvedProps, text) { + this.id = nodeId; + this.kind = resolvedProps.kind; + this.writerId = resolvedProps.writerId; + this.createdAt = resolvedProps.createdAt; + this.sortKey = String(resolvedProps.sortKey || ''); + this.text = text; + Object.freeze(this); + } +} + +export class BaseEntry { + static from(nodeId, resolvedProps, text) { + if (resolvedProps.kind === 'capture') { return new CaptureEntry(nodeId, resolvedProps, text); } + if (resolvedProps.kind === 'reflect' || SESSION_KINDS.includes(resolvedProps.kind)) { + return new ReflectEntry(nodeId, resolvedProps, text); + } + if (resolvedProps.kind === 'annotation') { return new AnnotationEntry(nodeId, resolvedProps, text); } + return new GenericEntry(nodeId, resolvedProps, text); + } +} + +const WRITER_CAS_CONFLICT_TEXT = 'writer ref was updated by another process'; +const DEFAULT_PATCH_MAX_ATTEMPTS = 3; +const warpAppCache = new Map(); +const runtimeBlobStorageCache = new Map(); + export async function openWarpApp(repoDir) { + const cached = warpAppCache.get(repoDir); + if (cached) { + return cached; + } + const plumbing = Plumbing.createDefault({ cwd: repoDir }); const persistence = new GitGraphAdapter({ plumbing }); - return WarpApp.open({ + const app = await WarpApp.open({ persistence, graphName: GRAPH_NAME, writerId: createWriterId(), checkpointPolicy: CHECKPOINT_POLICY, }); + + warpAppCache.set(repoDir, app); + return app; +} + +export function clearWarpAppCache(repoDir) { + warpAppCache.delete(repoDir); +} + +export async function patchWarpApp(repoDir, patcher, { + genesisOnNoState = false, + maxAttempts = DEFAULT_PATCH_MAX_ATTEMPTS, + syncAfterPatch = true, +} = {}) { + let attempt = 1; + + /* eslint-disable no-await-in-loop -- retry attempts must run sequentially against a refreshed cached app */ + while (true) { + const app = await openWarpApp(repoDir); + + try { + try { + await app.patch(patcher); + } catch (error) { + if (!genesisOnNoState || error?.code !== 'E_NO_STATE') { + throw error; + } + await app.patch(patcher, { genesis: true }); + } + + if (syncAfterPatch) { + await app.syncWith(app.core()); + } + + return app; + } catch (error) { + if (!isWriterCasConflict(error) || attempt >= maxAttempts) { + throw error; + } + + clearWarpAppCache(repoDir); + attempt += 1; + } + } + /* eslint-enable no-await-in-loop */ +} + +export async function patchWarpAppWithWriter(repoDir, writerId, patcher, { + genesisOnNoState = false, + maxAttempts = DEFAULT_PATCH_MAX_ATTEMPTS, + syncAfterPatch = true, +} = {}) { + let attempt = 1; + + /* eslint-disable no-await-in-loop -- retry attempts must run sequentially against a refreshed app */ + while (true) { + const app = await openWarpAppUncached(repoDir, writerId); + + try { + try { + await app.patch(patcher); + } catch (error) { + if (!genesisOnNoState || error?.code !== 'E_NO_STATE') { + throw error; + } + await app.patch(patcher, { genesis: true }); + } + + if (syncAfterPatch) { + await app.syncWith(app.core()); + } + + return app; + } catch (error) { + if (!isWriterCasConflict(error) || attempt >= maxAttempts) { + throw error; + } + + attempt += 1; + } + } + /* eslint-enable no-await-in-loop */ +} + +async function openWarpAppUncached(repoDir, writerId) { + const plumbing = Plumbing.createDefault({ cwd: repoDir }); + const persistence = new GitGraphAdapter({ plumbing }); + + return await WarpApp.open({ + persistence, + graphName: GRAPH_NAME, + writerId, + checkpointPolicy: CHECKPOINT_POLICY, + }); +} + +export function isWriterCasConflict(error) { + return error instanceof Error && error.message.includes(WRITER_CAS_CONFLICT_TEXT); } -export async function createProductReadHandle(app) { +export async function createProductReadHandle(app, repoDir = null) { const worldline = app.worldline(); const view = await worldline.observer('think-product', PRODUCT_READ_LENS); return { app, + repoDir, worldline, view, contentCore: app.core(), + blobStorage: repoDir ? await getRuntimeBlobStorage(repoDir) : null, + readContent: createAppContentReader(app), writerId: app.writerId, }; } export async function openProductReadHandle(repoDir) { const app = await openWarpApp(repoDir); - return createProductReadHandle(app); + const checkpointRead = await tryOpenCheckpointProductRead(repoDir, app); + const worldline = app.worldline(); + const view = checkpointRead?.view ?? await worldline.observer('think-product', PRODUCT_READ_LENS); + + return { + app, + repoDir, + worldline, + view, + contentCore: app.core(), + blobStorage: checkpointRead?.blobStorage ?? await getRuntimeBlobStorage(repoDir), + readContent: checkpointRead?.readContent ?? createAppContentReader(app), + writerId: app.writerId, + }; +} + +async function tryOpenCheckpointProductRead(repoDir, app = null) { + try { + return await openCheckpointProductRead(repoDir, app); + } catch { + return null; + } +} + +async function getRuntimeBlobStorage(repoDir) { + if (runtimeBlobStorageCache.has(repoDir)) { + return await runtimeBlobStorageCache.get(repoDir); + } + + const plumbing = Plumbing.createDefault({ cwd: repoDir }); + const persistence = new GitGraphAdapter({ plumbing }); + const blobStorage = createRuntimeBlobStorage(persistence); + runtimeBlobStorageCache.set(repoDir, blobStorage); + return await blobStorage; +} + +function createRuntimeBlobStorage(persistence) { + const createStorage = persistence.createRuntimeBlobStorage; + if (typeof createStorage !== 'function') { + return null; + } + return createStorage.call(persistence); } export async function getGraphModelStatusForRead(read) { @@ -78,43 +318,11 @@ export async function getStoredEntry(read, nodeId, props = null) { return null; } - const {kind} = resolvedProps; + const text = storesTextContent(resolvedProps.kind) + ? await readNodeText(read, nodeId, resolvedProps) + : ''; - return { - id: nodeId, - kind, - source: resolvedProps.source, - channel: resolvedProps.channel, - writerId: resolvedProps.writerId, - createdAt: resolvedProps.createdAt, - sortKey: String(resolvedProps.sortKey || ''), - thoughtId: resolvedProps.thoughtId ?? null, - seedEntryId: resolvedProps.seedEntryId ?? null, - contrastEntryId: resolvedProps.contrastEntryId ?? null, - sessionId: resolvedProps.sessionId ?? null, - promptType: resolvedProps.promptType ?? null, - question: resolvedProps.question ?? null, - ambientCwd: resolvedProps.ambientCwd ?? null, - ambientGitRoot: resolvedProps.ambientGitRoot ?? null, - ambientGitRemote: resolvedProps.ambientGitRemote ?? null, - ambientGitBranch: resolvedProps.ambientGitBranch ?? null, - captureProvenance: resolvedProps.captureIngress || resolvedProps.captureSourceApp || resolvedProps.captureSourceURL - ? { - ingress: resolvedProps.captureIngress ?? null, - sourceApp: resolvedProps.captureSourceApp ?? null, - sourceURL: resolvedProps.captureSourceURL ?? null, - } - : null, - selectionReason: resolvedProps.selectionReasonKind - ? { - kind: resolvedProps.selectionReasonKind, - text: resolvedProps.selectionReasonText ?? '', - } - : null, - stepCount: Number(resolvedProps.stepCount ?? 0), - maxSteps: Number(resolvedProps.maxSteps ?? 0), - text: storesTextContent(kind) ? await readNodeText(read, nodeId) : '', - }; + return BaseEntry.from(nodeId, resolvedProps, text); } export function toBrowseEntry(entry) { @@ -133,7 +341,7 @@ export function toBrowseEntry(entry) { export async function getReflectSession(read, sessionId) { const session = await getStoredEntry(read, sessionId); - if (!session || (session.kind !== 'reflect_session' && session.kind !== 'brainstorm_session')) { + if (!session || !SESSION_KINDS.includes(session.kind)) { return null; } @@ -221,11 +429,74 @@ export async function getSingleNeighborId(read, nodeId, direction, label) { return result.nodes?.[0]?.id ?? null; } -export async function readNodeText(read, nodeId) { - const content = await read.contentCore.getContent(nodeId); +export async function getLatestStoredEntry(read, kind = 'capture') { + const latestId = await getLatestIdByKind(read, kind); + return latestId ? await getStoredEntry(read, latestId) : null; +} + +export async function listRecentStoredEntries(read, { kind = 'capture', limit = 50 } = {}) { + const latestId = await getLatestIdByKind(read, kind); + if (!latestId) { + const fallbackEntries = await listEntriesByKind(read, kind); + return fallbackEntries + .sort(compareEntriesNewestFirst) + .slice(0, limit); + } + + const ids = await read.view.traverse.bfs(latestId, { + dir: 'out', + labelFilter: 'older', + }); + + const entries = []; + for (const id of ids) { + if (entries.length >= limit) { break; } + // eslint-disable-next-line no-await-in-loop -- sequential retrieval of windowed entries + const entry = await getStoredEntry(read, id); + if (entry && entry.kind === kind) { + entries.push(entry); + } + } + + return entries; +} + +async function getLatestIdByKind(read, kind) { + if (kind !== 'capture') { + // For now, only capture has a latest pointer. + // Future: generic latest_by_kind metadata. + return null; + } + + return await getLatestCaptureId(read); +} + +export async function readNodeText(read, nodeId, props = null) { + const resolvedProps = props ?? await read.view.getNodeProps(nodeId); + const contentOid = typeof resolvedProps?._content === 'string' + ? resolvedProps._content + : await readNodeContentOid(read, nodeId); + const content = contentOid && read.blobStorage + ? await read.blobStorage.retrieve(contentOid) + : await readContent(read, nodeId); return content ? new TextDecoder().decode(content) : ''; } +async function readContent(read, nodeId) { + if (typeof read.readContent === 'function') { + return await read.readContent(nodeId); + } + return await read.contentCore.getContent(nodeId); +} + +async function readNodeContentOid(read, nodeId) { + if (typeof read.view.getNodeContentMeta !== 'function') { + return null; + } + const contentMeta = await read.view.getNodeContentMeta(nodeId); + return typeof contentMeta?.oid === 'string' ? contentMeta.oid : null; +} + export async function getLatestCaptureId(read) { const result = await read.view.query() .match(GRAPH_META_ID) diff --git a/src/store/trie.js b/src/store/trie.js new file mode 100644 index 0000000..78870d9 --- /dev/null +++ b/src/store/trie.js @@ -0,0 +1,101 @@ +/** + * A lightweight in-memory Trie for fast prefix matching of keywords. + * This is used to provide instant search-as-you-type in the TUI + * without bloating the permanent Git/WARP graph with fragment nodes. + */ +export class KeywordTrie { + constructor() { + this.root = { children: {}, keyword: null }; + } + + /** + * Insert a keyword from the graph into the in-memory Trie. + */ + insert(keyword) { + let current = this.root; + for (const char of keyword.toLowerCase()) { + if (!current.children[char]) { + current.children[char] = { children: {}, keyword: null }; + } + current = current.children[char]; + } + current.keyword = keyword; + } + + /** + * Find all keywords that match the given prefix. + */ + search(prefix) { + let current = this.root; + for (const char of prefix.toLowerCase()) { + if (!current.children[char]) { + return []; + } + current = current.children[char]; + } + + const results = []; + this._collect(current, results); + return results; + } + + /** + * Recursive helper to collect all keywords under a given node. + */ + _collect(node, results) { + if (node.keyword) { + results.push(node.keyword); + } + for (const char of Object.keys(node.children)) { + this._collect(node.children[char], results); + } + } + + /** + * Find all keywords within a certain edit distance of the query. + */ + searchFuzzy(query, maxDistance = 2) { + const results = []; + const lowerQuery = query.toLowerCase(); + + // Small optimization: collect all keywords and filter by distance. + // For a more advanced approach, we'd use a recursive search on the trie branches. + const allKeywords = []; + this._collect(this.root, allKeywords); + + for (const keyword of allKeywords) { + const distance = levenshteinDistance(lowerQuery, keyword.toLowerCase()); + if (distance <= maxDistance) { + results.push({ keyword, distance }); + } + } + + return results.sort((a, b) => a.distance - b.distance); + } +} + +/** + * Calculate the Levenshtein distance between two strings. + * Used for fuzzy matching and ranking. + */ +export function levenshteinDistance(s1, s2) { + const m = s1.length; + const n = s2.length; + const dp = Array.from({ length: m + 1 }, () => new Array(n + 1).fill(0)); + + for (let i = 0; i <= m; i++) { dp[i][0] = i; } + for (let j = 0; j <= n; j++) { dp[0][j] = j; } + + for (let i = 1; i <= m; i++) { + for (let j = 1; j <= n; j++) { + const cost = s1[i - 1] === s2[j - 1] ? 0 : 1; + dp[i][j] = Math.min( + dp[i - 1][j] + 1, // deletion + dp[i][j - 1] + 1, // insertion + dp[i - 1][j - 1] + cost // substitution + ); + } + } + + return dp[m][n]; +} diff --git a/src/verbose.js b/src/verbose.js index 912c6f8..c77516c 100644 --- a/src/verbose.js +++ b/src/verbose.js @@ -1,25 +1,31 @@ import { stringifyJson } from './json.js'; -export function createVerboseReporter(stream, enabled) { - return { - enabled, - event(name, data = {}) { - if (!enabled) { - return; - } +export class VerboseReporter { + constructor(stream, enabled) { + this.enabled = enabled; + this._stream = stream; + } + + event(name, data = {}) { + if (!this.enabled) { + return; + } - const payload = { - ts: new Date().toISOString(), - event: name, - ...data, - }; + const payload = { + ts: new Date().toISOString(), + event: name, + ...data, + }; - if (typeof stream === 'function') { - stream(payload); - return; - } + if (typeof this._stream === 'function') { + this._stream(payload); + return; + } - stream.write(`${stringifyJson(payload)}\n`); - }, - }; + this._stream.write(`${stringifyJson(payload)}\n`); + } +} + +export function createVerboseReporter(stream, enabled) { + return new VerboseReporter(stream, enabled); } diff --git a/test/acceptance/annotate.test.js b/test/acceptance/annotate.test.js new file mode 100644 index 0000000..097ef7c --- /dev/null +++ b/test/acceptance/annotate.test.js @@ -0,0 +1,75 @@ +import assert from 'node:assert/strict'; +import test from 'node:test'; + +import { + runThink, + createThinkContext, +} from '../fixtures/think.js'; + +import { + assertSuccess, + assertFailure, + assertContains, + parseJsonLines, +} from '../support/assertions.js'; + +test('think --annotate attaches a note to an existing capture', async () => { + const context = await createThinkContext(); + + assertSuccess(runThink(context, ['original thought']), 'Expected capture to succeed.'); + + const recent = runThink(context, ['--json', '--recent']); + const events = parseJsonLines(recent.stdout); + const { entryId } = events.find((e) => e.event === 'recent.entry'); + + const annotate = runThink(context, [`--annotate=${entryId}`, 'this was wrong']); + assertSuccess(annotate, 'Expected annotation to succeed.'); + assertContains(annotate, 'Annotated', 'Expected success message.'); +}); + +test('think --json --annotate emits structured annotation result', async () => { + const context = await createThinkContext(); + + assertSuccess(runThink(context, ['a thought']), 'Expected capture to succeed.'); + + const recent = runThink(context, ['--json', '--recent']); + const events = parseJsonLines(recent.stdout); + const { entryId } = events.find((e) => e.event === 'recent.entry'); + + const annotate = runThink(context, ['--json', `--annotate=${entryId}`, 'my note']); + assertSuccess(annotate, 'Expected JSON annotation to succeed.'); + + const result = parseJsonLines(annotate.stdout); + const annotateEvent = result.find((e) => e.event === 'annotate.done'); + assert.ok(annotateEvent, 'Expected annotate.done event.'); + assert.ok(annotateEvent.annotationId, 'Expected annotationId in result.'); + assert.equal(annotateEvent.targetEntryId, entryId, 'Expected targetEntryId to match.'); +}); + +test('think --annotate rejects empty annotation text', async () => { + const context = await createThinkContext(); + + assertSuccess(runThink(context, ['thought']), 'Expected capture to succeed.'); + + const annotate = runThink(context, ['--annotate=entry:fake', '']); + assertFailure(annotate, 'Expected empty annotation to fail.'); +}); + +test('think --annotate shows annotation in --inspect output', async () => { + const context = await createThinkContext(); + + assertSuccess(runThink(context, ['inspectable thought']), 'Expected capture to succeed.'); + + const recent = runThink(context, ['--json', '--recent']); + const events = parseJsonLines(recent.stdout); + const { entryId } = events.find((e) => e.event === 'recent.entry'); + + assertSuccess( + runThink(context, [`--annotate=${entryId}`, 'later reflection']), + 'Expected annotation to succeed.' + ); + + const inspect = runThink(context, [`--inspect=${entryId}`]); + assertSuccess(inspect, 'Expected inspect to succeed.'); + assertContains(inspect, 'later reflection', 'Expected annotation text in inspect output.'); +}); diff --git a/test/acceptance/auto-tags.test.js b/test/acceptance/auto-tags.test.js new file mode 100644 index 0000000..aec4f3d --- /dev/null +++ b/test/acceptance/auto-tags.test.js @@ -0,0 +1,50 @@ +import assert from 'node:assert/strict'; +import test from 'node:test'; + +import { + runThink, + createThinkContext, +} from '../fixtures/think.js'; + +import { + assertSuccess, + assertContains, + parseJsonLines, +} from '../support/assertions.js'; + +test('think --topics lists promoted topics after multiple captures share a keyword', async () => { + const context = await createThinkContext(); + + // Capture two thoughts that share "performance" + assertSuccess(runThink(context, ['capture latency and performance optimization'])); + assertSuccess(runThink(context, ['performance benchmarks show improvement'])); + + // Run enrichment to extract tags + const enrich = runThink(context, ['--enrich']); + assertSuccess(enrich, 'Expected enrichment to succeed.'); + + // Check topics + const topics = runThink(context, ['--topics']); + assertSuccess(topics, 'Expected --topics to succeed.'); + assertContains(topics, 'performance', 'Expected "performance" to be a promoted topic.'); +}); + +test('think --json --topics emits JSONL topic list', async () => { + const context = await createThinkContext(); + + assertSuccess(runThink(context, ['architecture decisions for the store layer'])); + assertSuccess(runThink(context, ['architecture review completed'])); + assertSuccess(runThink(context, ['--enrich'])); + + const topics = runThink(context, ['--json', '--topics']); + assertSuccess(topics, 'Expected JSON topics to succeed.'); + + const events = parseJsonLines(topics.stdout); + const topicEvents = events.filter((e) => e.event === 'topics.topic'); + assert.ok(topicEvents.length > 0, 'Expected at least one topic event.'); + + for (const event of topicEvents) { + assert.ok(event.name, 'Expected topic to have a name.'); + assert.ok(typeof event.thoughtCount === 'number', 'Expected topic to have a thoughtCount.'); + } +}); diff --git a/test/acceptance/graph-migration.test.js b/test/acceptance/graph-migration.test.js index 8cd545e..ea290fc 100644 --- a/test/acceptance/graph-migration.test.js +++ b/test/acceptance/graph-migration.test.js @@ -82,7 +82,7 @@ test('think --migrate-graph upgrades a version-1 property-linked repo additively const migrate = runThink(context, ['--migrate-graph']); assertSuccess(migrate, `Expected graph migration to succeed.\n${formatResult(migrate)}`); assertContains(migrate, 'Graph migration complete', 'Expected migration to report explicit success.'); - assertContains(migrate, 'graph model version 3', 'Expected migration to report the upgraded graph model generation.'); + assertContains(migrate, 'graph model version 4', 'Expected migration to report the upgraded graph model generation.'); const migratedGraph = await openThinkGraph(context.localRepoDir); const afterEdges = await migratedGraph.getEdges(); @@ -107,7 +107,7 @@ test('think --migrate-graph upgrades a version-1 property-linked repo additively const metadata = await migratedGraph.getNodeProps('meta:graph'); assert.ok(metadata, 'Expected migration to materialize graph metadata.'); - assert.equal(metadata.graphModelVersion, 3, 'Expected migration to upgrade the repo graph model generation to 3.'); + assert.equal(metadata.graphModelVersion, 4, 'Expected migration to upgrade the repo graph model generation to 3.'); }); test('think --migrate-graph is idempotent and safe to rerun', async () => { @@ -188,7 +188,7 @@ test('capture on a version-1 repo still succeeds and only migrates after the raw graph = await openThinkGraph(context.localRepoDir); const metadata = await graph.getNodeProps('meta:graph'); assert.ok(metadata, 'Expected post-capture migration to leave graph metadata materialized.'); - assert.equal(metadata.graphModelVersion, 3, 'Expected post-capture migration to upgrade the repo back to graph model version 3.'); + assert.equal(metadata.graphModelVersion, 4, 'Expected post-capture migration to upgrade the repo back to graph model version 4.'); const edges = await graph.getEdges(); assertEdge( @@ -336,7 +336,7 @@ test('think --json emits explicit graph migration required errors for outdated g assert.equal(migrationRequired.command, 'inspect', 'Expected migration-required payload to name the blocked command.'); assert.equal(migrationRequired.currentGraphModelVersion, 1, 'Expected migration-required payload to report the current graph model generation.'); - assert.equal(migrationRequired.requiredGraphModelVersion, 3, 'Expected migration-required payload to report the required graph model generation.'); + assert.equal(migrationRequired.requiredGraphModelVersion, 4, 'Expected migration-required payload to report the required graph model generation.'); assert.equal( migrationRequired.message, 'Graph migration required. Run think --migrate-graph.', @@ -351,7 +351,7 @@ test('think --json emits explicit graph migration required errors for outdated g assert.equal(failure.command, 'inspect', 'Expected CLI failure payload to preserve the blocked command identity.'); }); -test('think --migrate-graph upgrades a version-2 repo to graph model version 3 with browse and reflect read edges', async () => { +test('think --migrate-graph upgrades a version-2 repo to graph model version 4 with browse, reflect, and enrichment nodes', async () => { const context = await createThinkContext(); const { entryId: olderEntryId } = captureWithEntryId( context, @@ -375,12 +375,12 @@ test('think --migrate-graph upgrades a version-2 repo to graph model version 3 w const migrate = runThink(context, ['--migrate-graph']); assertSuccess(migrate, `Expected graph migration to succeed for a version-2 repo.\n${formatResult(migrate)}`); - assertContains(migrate, 'Graph migration complete', 'Expected migration to report explicit success when upgrading to graph model version 3.'); - assertContains(migrate, 'graph model version 3', 'Expected migration to report the new graph model generation.'); + assertContains(migrate, 'Graph migration complete', 'Expected migration to report explicit success when upgrading to graph model version 4.'); + assertContains(migrate, 'graph model version 4', 'Expected migration to report the new graph model generation.'); const migratedGraph = await openThinkGraph(context.localRepoDir); const afterMetadata = await migratedGraph.getNodeProps('meta:graph'); - assert.equal(afterMetadata?.graphModelVersion, 3, 'Expected migration to upgrade the repo graph model generation to 3.'); + assert.equal(afterMetadata?.graphModelVersion, 4, 'Expected migration to upgrade the repo graph model generation to 3.'); const edges = await migratedGraph.getEdges(); assertEdge( @@ -388,35 +388,35 @@ test('think --migrate-graph upgrades a version-2 repo to graph model version 3 w 'meta:graph', newerEntryId, 'latest_capture', - 'Expected graph model version 3 migration to add a latest_capture anchor for browse bootstrap.' + 'Expected graph model version 4 migration to add a latest_capture anchor for browse bootstrap.' ); assertEdge( edges, newerEntryId, olderEntryId, 'older', - 'Expected graph model version 3 migration to add explicit chronology edges between captures.' + 'Expected graph model version 4 migration to add explicit chronology edges between captures.' ); assertEdge( edges, reflect.sessionId, olderEntryId, 'seeded_by', - 'Expected graph model version 3 migration to add an explicit seeded_by edge from reflect session to seed capture.' + 'Expected graph model version 4 migration to add an explicit seeded_by edge from reflect session to seed capture.' ); assertEdge( edges, reflect.reflectEntryId, reflect.sessionId, 'produced_in', - 'Expected graph model version 3 migration to add an explicit produced_in edge from reflect entry to its session.' + 'Expected graph model version 4 migration to add an explicit produced_in edge from reflect entry to its session.' ); assertEdge( edges, reflect.reflectEntryId, olderEntryId, 'responds_to', - 'Expected graph model version 3 migration to add an explicit responds_to edge from reflect entry to its seed capture.' + 'Expected graph model version 4 migration to add an explicit responds_to edge from reflect entry to its seed capture.' ); }); @@ -460,7 +460,7 @@ test('think --json --inspect exposes direct reflect receipts that exist only thr await patch.attachContent( reflectEntryId, - 'Inspect should still find this reflect receipt through explicit graph edges.', + Buffer.from('Inspect should still find this reflect receipt through explicit graph edges.', 'utf8'), { mime: 'text/plain; charset=utf-8' } ); }); diff --git a/test/acceptance/help.test.js b/test/acceptance/help.test.js index 5dd6cb4..b88036f 100644 --- a/test/acceptance/help.test.js +++ b/test/acceptance/help.test.js @@ -23,6 +23,9 @@ test('think --help prints top-level usage without bootstrapping local state', as assertSuccess(result, 'Expected --help to exit successfully.'); assertContains(result, 'Usage: think', 'Expected top-level help to include a usage line.'); assertContains(result, '--recent', 'Expected top-level help to enumerate explicit command surfaces.'); + assertContains(result, '--annotate=', 'Expected top-level help to enumerate annotation help.'); + assertContains(result, '--enrich', 'Expected top-level help to enumerate enrichment help.'); + assertContains(result, '--topics', 'Expected top-level help to enumerate topic help.'); assert.ok( !existsSync(context.localRepoDir), `Expected --help to stay read-only and avoid creating ${context.localRepoDir}.` @@ -38,6 +41,59 @@ test('think -h is accepted as a short alias for top-level help', async () => { assertContains(result, 'Usage: think', 'Expected -h to print the same top-level usage banner.'); }); +test('think --enrich --help prints enrich help instead of running the command', async () => { + const context = await createThinkContext(); + + const result = runThink(context, ['--enrich', '--help']); + + assertSuccess(result, 'Expected enrich help to exit successfully.'); + assertContains(result, 'Usage: think --enrich', 'Expected enrich help to render an enrich-specific usage line.'); + assert.ok( + !existsSync(context.localRepoDir), + `Expected enrich help to remain read-only and avoid creating ${context.localRepoDir}.` + ); +}); + +test('think --topics -h prints topics help instead of running the command', async () => { + const context = await createThinkContext(); + + const result = runThink(context, ['--topics', '-h']); + + assertSuccess(result, 'Expected topics help to exit successfully.'); + assertContains(result, 'Usage: think --topics', 'Expected topics help to render a topics-specific usage line.'); + assert.ok( + !existsSync(context.localRepoDir), + `Expected topics help to remain read-only and avoid creating ${context.localRepoDir}.` + ); +}); + +test('think --annotate --help bypasses required entry and text validation', async () => { + const context = await createThinkContext(); + + const result = runThink(context, ['--annotate', '--help']); + + assertSuccess(result, 'Expected annotate help to succeed without an entry id or annotation text.'); + assertContains(result, 'Usage: think --annotate=', 'Expected annotate help to document the entry id usage.'); + assertNotContains( + result, + '--annotate requires an entry id', + 'Expected help to bypass the normal annotate validation path.' + ); +}); + +test('think --enrich and --topics reject stray positional text', async () => { + const context = await createThinkContext(); + + const enrich = runThink(context, ['--enrich', 'stray text']); + const topics = runThink(context, ['--topics', 'stray text']); + + assertFailure(enrich, 'Expected enrich with stray positional text to fail.'); + assertContains(enrich, '--enrich does not take a thought', 'Expected enrich validation to reject positionals.'); + + assertFailure(topics, 'Expected topics with stray positional text to fail.'); + assertContains(topics, '--topics does not take a thought', 'Expected topics validation to reject positionals.'); +}); + test('think --recent --help prints recent help instead of running the command', async () => { const context = await createThinkContext(); diff --git a/test/acceptance/recent-limit.test.js b/test/acceptance/recent-limit.test.js new file mode 100644 index 0000000..cfe74b7 --- /dev/null +++ b/test/acceptance/recent-limit.test.js @@ -0,0 +1,58 @@ +import assert from 'node:assert/strict'; +import test from 'node:test'; + +import { + runThink, + createThinkContext, +} from '../fixtures/think.js'; + +import { + assertSuccess, + assertContains, + parseJsonLines, +} from '../support/assertions.js'; + +test('think --recent defaults to a bounded window with total count', async () => { + const context = await createThinkContext(); + + // Capture 5 thoughts + for (let i = 0; i < 5; i++) { + assertSuccess(runThink(context, [`thought number ${i}`])); + } + + const recent = runThink(context, ['--recent']); + assertSuccess(recent, 'Expected --recent to succeed.'); + // Should show all 5 since they're under the default limit + assertContains(recent, 'thought number', 'Expected thoughts in output.'); +}); + +test('think --json --recent includes total count in done event', async () => { + const context = await createThinkContext(); + + for (let i = 0; i < 3; i++) { + assertSuccess(runThink(context, [`json thought ${i}`])); + } + + const recent = runThink(context, ['--json', '--recent']); + assertSuccess(recent, 'Expected --json --recent to succeed.'); + + const events = parseJsonLines(recent.stdout); + const doneEvent = events.find((e) => e.event === 'recent.done'); + assert.ok(doneEvent, 'Expected recent.done event.'); + assert.equal(typeof doneEvent.total, 'number', 'Expected total field in done event.'); + assert.equal(doneEvent.total, 3, 'Expected total to reflect all captures.'); +}); + +test('think --recent text output shows trailer when results are truncated', async () => { + const context = await createThinkContext(); + + // We can't easily create 51 captures in a test, so we test that + // the trailer appears when --count is used and there are more entries + for (let i = 0; i < 5; i++) { + assertSuccess(runThink(context, [`trailer thought ${i}`])); + } + + const recent = runThink(context, ['--recent', '--count=2']); + assertSuccess(recent, 'Expected --recent --count=2 to succeed.'); + assertContains(recent, 'of 5', 'Expected trailer showing total when results are truncated.'); +}); diff --git a/test/ports/auto-tags.test.js b/test/ports/auto-tags.test.js new file mode 100644 index 0000000..a5546cc --- /dev/null +++ b/test/ports/auto-tags.test.js @@ -0,0 +1,63 @@ +import assert from 'node:assert/strict'; +import test from 'node:test'; + +import { extractTopics } from '../../src/store/enrichment/auto-tags.js'; + +// --------------------------------------------------------------------------- +// Topic extraction (pure function, no graph) +// --------------------------------------------------------------------------- + +test('extractTopics returns meaningful keywords from thought text', () => { + const topics = extractTopics('git-warp performance optimization is critical for capture latency'); + + assert.ok(topics.includes('performance'), 'Expected "performance" as a topic.'); + assert.ok(topics.includes('optimization'), 'Expected "optimization" as a topic.'); + assert.ok(topics.includes('capture'), 'Expected "capture" as a topic.'); + assert.ok(topics.includes('latency'), 'Expected "latency" as a topic.'); +}); + +test('extractTopics filters out stopwords', () => { + const topics = extractTopics('the quick brown fox jumps over the lazy dog'); + + assert.ok(!topics.includes('the'), 'Expected "the" to be filtered.'); + assert.ok(!topics.includes('over'), 'Expected "over" to be filtered.'); + assert.ok(topics.includes('quick'), 'Expected "quick" to survive.'); + assert.ok(topics.includes('fox'), 'Expected "fox" to survive.'); +}); + +test('extractTopics filters out short tokens', () => { + const topics = extractTopics('I am on it ok go do'); + + for (const topic of topics) { + assert.ok(topic.length >= 3, `Expected topic "${topic}" to be >= 3 chars.`); + } +}); + +test('extractTopics normalizes to lowercase', () => { + const topics = extractTopics('Think PERFORMANCE Optimization'); + + for (const topic of topics) { + assert.equal(topic, topic.toLowerCase(), `Expected topic "${topic}" to be lowercase.`); + } +}); + +test('extractTopics returns empty array for empty text', () => { + assert.deepEqual(extractTopics(''), []); + assert.deepEqual(extractTopics(' '), []); +}); + +test('extractTopics deduplicates repeated words', () => { + const topics = extractTopics('performance performance performance latency'); + + const perfCount = topics.filter((t) => t === 'performance').length; + assert.equal(perfCount, 1, 'Expected no duplicate topics.'); +}); + +test('extractTopics handles hyphenated terms', () => { + const topics = extractTopics('git-warp is a graph database'); + + assert.ok( + topics.includes('git-warp') || (topics.includes('git') && topics.includes('warp')), + 'Expected hyphenated terms to be handled.' + ); +}); diff --git a/test/ports/browse-tui.test.js b/test/ports/browse-tui.test.js index 0eeb8e5..26f9d3d 100644 --- a/test/ports/browse-tui.test.js +++ b/test/ports/browse-tui.test.js @@ -1,7 +1,9 @@ import assert from 'node:assert/strict'; import test from 'node:test'; +import { createResolved } from '@flyingrobots/bijou'; import { createWindowedBrowseModel } from '../../src/browse-tui/model.js'; +import { thinkShellThemes, thinkThemes } from '../../src/browse-tui/theme.js'; import { renderBrowseModel } from '../../src/browse-tui/view.js'; import * as styleExports from '../../src/browse-tui/style.js'; @@ -48,3 +50,37 @@ test('windowed browse initializes with no drawer open', () => { 'Expected the initial live browse frame not to render any drawer border before the user opens a panel.' ); }); + +test('all browse shell themes resolve with status tokens', () => { + const requiredTokens = [ + 'surface.primary', + 'surface.secondary', + 'surface.elevated', + 'surface.overlay', + 'surface.muted', + 'semantic.primary', + 'semantic.muted', + 'border.primary', + 'border.muted', + 'ui.cursor', + 'ui.scrollThumb', + 'ui.scrollTrack', + ]; + + for (const shellTheme of thinkShellThemes) { + const { theme } = shellTheme; + assert.equal(shellTheme.id, theme.name, 'Expected shell theme id to match the theme name'); + const resolved = createResolved(theme, false); + assert.doesNotThrow( + () => { + for (const token of requiredTokens) { + resolved.tokenGraph.get(token, resolved.colorScheme); + } + }, + `Expected shell theme ${theme.name} to resolve` + ); + assert.ok(theme.status.active, `Expected shell theme ${theme.name} to define active status`); + assert.ok(theme.status.muted, `Expected shell theme ${theme.name} to define muted status`); + } + assert.equal(thinkShellThemes.length, thinkThemes.length); +}); diff --git a/test/ports/capture-context.test.js b/test/ports/capture-context.test.js index 881469a..a28734e 100644 --- a/test/ports/capture-context.test.js +++ b/test/ports/capture-context.test.js @@ -1,12 +1,21 @@ import assert from 'node:assert/strict'; import test from 'node:test'; +import Plumbing from '@git-stunts/plumbing'; +import WarpApp, { GitGraphAdapter } from '@git-stunts/git-warp'; import { ensureGitRepo } from '../../src/git.js'; +import { getCaptureAmbientContext, getAmbientProjectContext } from '../../src/project-context.js'; import { finalizeCapturedThought, + GRAPH_NAME, + inspectRawEntry, openProductReadHandle, + saveAnnotation, saveRawCapture, + saveReflectResponse, + startReflect, } from '../../src/store.js'; +import { createWriterId } from '../../src/store/model.js'; import { createGitRepo, runGit } from '../fixtures/git.js'; import { createTempDir } from '../fixtures/tmp.js'; import { formatResult } from '../fixtures/runtime.js'; @@ -38,7 +47,7 @@ test('saveRawCapture writes cwd receipts first and defers git enrichment to foll ); const entry = await saveRawCapture(localRepoDir, 'capture should stay cheap', { - cwd: projectRepoDir, + ambientContext: getCaptureAmbientContext(projectRepoDir), }); const readBeforeFollowthrough = await openProductReadHandle(localRepoDir); const savedBeforeFollowthrough = await readBeforeFollowthrough.view.getNodeProps(entry.id); @@ -50,7 +59,7 @@ test('saveRawCapture writes cwd receipts first and defers git enrichment to foll assert.equal(savedBeforeFollowthrough.ambientGitBranch ?? null, null, 'Expected git branch enrichment to be deferred until followthrough.'); const followthrough = await finalizeCapturedThought(localRepoDir, entry.id, { - cwd: projectRepoDir, + ambientContext: getAmbientProjectContext(projectRepoDir), }); const readAfterFollowthrough = await openProductReadHandle(localRepoDir); const savedAfterFollowthrough = await readAfterFollowthrough.view.getNodeProps(entry.id); @@ -70,3 +79,90 @@ test('saveRawCapture writes cwd receipts first and defers git enrichment to foll 'Expected followthrough to backfill the current git branch receipt.' ); }); + +test('saveRawCapture retries after the cached writer ref is advanced externally', async () => { + const localRepoDir = await createTempDir('think-capture-retry-'); + await ensureGitRepo(localRepoDir); + + await saveRawCapture(localRepoDir, 'seed capture before external writer advance'); + const externalApp = await openExternalWarpApp(localRepoDir); + await externalApp.patch((patch) => { + patch + .addNode('external:writer-advance') + .setProperty('external:writer-advance', 'kind', 'external_fixture'); + }); + + const entry = await saveRawCapture(localRepoDir, 'capture should retry after writer ref conflict'); + const read = await openProductReadHandle(localRepoDir); + const saved = await read.view.getNodeProps(entry.id); + + assert.ok(saved, 'Expected retrying raw capture to be committed after the writer ref advanced.'); + assert.equal(saved.kind, 'capture', 'Expected retried write to preserve capture semantics.'); +}); + +test('saveAnnotation retries after the cached writer ref is advanced externally', async () => { + const localRepoDir = await createTempDir('think-annotation-retry-'); + await ensureGitRepo(localRepoDir); + + const entry = await saveRawCapture(localRepoDir, 'annotation retry seed capture'); + const externalApp = await openExternalWarpApp(localRepoDir); + await externalApp.patch((patch) => { + patch + .addNode('external:annotation-writer-advance') + .setProperty('external:annotation-writer-advance', 'kind', 'external_fixture'); + }); + + const result = await saveAnnotation(localRepoDir, entry.id, 'annotation should retry after writer ref conflict'); + const inspected = await inspectRawEntry(localRepoDir, entry.id); + + assert.ok(result.annotationId, 'Expected annotation save to return the created annotation id.'); + assert.ok( + inspected.annotations.some((annotation) => annotation.annotationId === result.annotationId), + 'Expected the retried annotation write to be visible on inspect.' + ); +}); + +test('reflect writes retry after the cached writer ref is advanced externally', async () => { + const localRepoDir = await createTempDir('think-reflect-retry-'); + await ensureGitRepo(localRepoDir); + + const entry = await saveRawCapture( + localRepoDir, + 'We should redesign browse startup because checkpoint reads can hide transition latency.' + ); + const firstExternalApp = await openExternalWarpApp(localRepoDir); + await firstExternalApp.patch((patch) => { + patch + .addNode('external:reflect-start-writer-advance') + .setProperty('external:reflect-start-writer-advance', 'kind', 'external_fixture'); + }); + + const started = await startReflect(localRepoDir, entry.id, { promptType: 'challenge' }); + assert.equal(started.ok, true, 'Expected reflect start to retry and create a session after writer ref conflict.'); + + const secondExternalApp = await openExternalWarpApp(localRepoDir); + await secondExternalApp.patch((patch) => { + patch + .addNode('external:reflect-reply-writer-advance') + .setProperty('external:reflect-reply-writer-advance', 'kind', 'external_fixture'); + }); + + const saved = await saveReflectResponse( + localRepoDir, + started.sessionId, + 'The transition should keep the alternate screen stable while loading the next view.' + ); + + assert.ok(saved, 'Expected reflect reply to retry and save after writer ref conflict.'); + assert.equal(saved.sessionId, started.sessionId, 'Expected retried reflect reply to preserve session lineage.'); +}); + +async function openExternalWarpApp(repoDir) { + return await WarpApp.open({ + persistence: new GitGraphAdapter({ + plumbing: Plumbing.createDefault({ cwd: repoDir }), + }), + graphName: GRAPH_NAME, + writerId: createWriterId(), + }); +} diff --git a/test/ports/capture-provenance.test.js b/test/ports/capture-provenance.test.js index 34d4d85..8415b24 100644 --- a/test/ports/capture-provenance.test.js +++ b/test/ports/capture-provenance.test.js @@ -2,6 +2,7 @@ import assert from 'node:assert/strict'; import test from 'node:test'; import { + CaptureProvenance, VALID_CAPTURE_INGRESSES, captureProvenanceFromEnvironment, normalizeCaptureProvenance, @@ -16,49 +17,75 @@ test('capture provenance exports the canonical ingress set', () => { }); test('capture provenance trims source strings while preserving valid ingress and URL', () => { - assert.deepEqual( - normalizeCaptureProvenance({ - ingress: 'share', - sourceApp: ' Safari ', - sourceURL: 'https://example.com/article', - }), - { - ingress: 'share', - sourceApp: 'Safari', - sourceURL: 'https://example.com/article', - }, - 'Expected provenance normalization to trim additive string fields.' - ); + const result = normalizeCaptureProvenance({ + ingress: 'share', + sourceApp: ' Safari ', + sourceURL: 'https://example.com/article', + }); + assert.equal(result.ingress, 'share'); + assert.equal(result.sourceApp, 'Safari'); + assert.equal(result.sourceURL, 'https://example.com/article'); }); test('capture provenance trims ingress strings before validation', () => { - assert.deepEqual( - normalizeCaptureProvenance({ - ingress: ' url ', - sourceApp: ' Safari ', - sourceURL: 'https://example.com/article', - }), - { + const result = normalizeCaptureProvenance({ + ingress: ' url ', + sourceApp: ' Safari ', + sourceURL: 'https://example.com/article', + }); + assert.equal(result.ingress, 'url'); + assert.equal(result.sourceApp, 'Safari'); + assert.equal(result.sourceURL, 'https://example.com/article'); +}); + +test('capture provenance rejects dangerous URL schemes', () => { + for (const dangerous of ['data:text/html,

x

', 'file:///etc/passwd', 'ftp://evil.example.com/payload']) { + const result = normalizeCaptureProvenance({ ingress: 'url', - sourceApp: 'Safari', - sourceURL: 'https://example.com/article', - }, - 'Expected ingress normalization to accept valid values with surrounding whitespace.' - ); + sourceApp: 'Test', + sourceURL: dangerous, + }); + assert.equal( + result.sourceURL, + null, + `Expected "${dangerous}" to be rejected as a provenance URL.` + ); + } +}); + +test('capture provenance accepts safe URL schemes', () => { + for (const safe of ['https://example.com', 'http://localhost:3000']) { + const result = normalizeCaptureProvenance({ + ingress: 'url', + sourceApp: 'Test', + sourceURL: safe, + }); + assert.ok( + result.sourceURL !== null, + `Expected "${safe}" to be accepted as a provenance URL.` + ); + } +}); + +test('normalizeCaptureProvenance returns a frozen CaptureProvenance instance', () => { + const result = normalizeCaptureProvenance({ + ingress: 'url', + sourceApp: 'Safari', + sourceURL: 'https://example.com', + }); + + assert.ok(result instanceof CaptureProvenance, 'Expected CaptureProvenance instance.'); + assert.ok(Object.isFrozen(result), 'Expected frozen.'); }); test('capture provenance reads and normalizes environment input', () => { - assert.deepEqual( - captureProvenanceFromEnvironment({ - THINK_CAPTURE_INGRESS: 'selected_text', - THINK_CAPTURE_SOURCE_APP: ' Mail ', - THINK_CAPTURE_SOURCE_URL: 'https://example.com/share', - }), - { - ingress: 'selected_text', - sourceApp: 'Mail', - sourceURL: 'https://example.com/share', - }, - 'Expected environment-derived provenance to be normalized like other capture surfaces.' - ); + const result = captureProvenanceFromEnvironment({ + THINK_CAPTURE_INGRESS: 'selected_text', + THINK_CAPTURE_SOURCE_APP: ' Mail ', + THINK_CAPTURE_SOURCE_URL: 'https://example.com/share', + }); + assert.ok(result instanceof CaptureProvenance, 'Expected CaptureProvenance from environment.'); + assert.equal(result.ingress, 'selected_text'); + assert.equal(result.sourceApp, 'Mail'); + assert.equal(result.sourceURL, 'https://example.com/share'); }); diff --git a/test/ports/checkpoint-read.test.js b/test/ports/checkpoint-read.test.js new file mode 100644 index 0000000..159498d --- /dev/null +++ b/test/ports/checkpoint-read.test.js @@ -0,0 +1,86 @@ +import assert from 'node:assert/strict'; +import test from 'node:test'; + +import { ensureGitRepo } from '../../src/git.js'; +import { + listRecent, + saveRawCapture, +} from '../../src/store.js'; +import { listCheckpointEntriesByKind } from '../../src/store/checkpoint-read.js'; +import { + listEntriesByKind, + openProductReadHandle, + openWarpApp, +} from '../../src/store/runtime.js'; +import { runGit } from '../fixtures/git.js'; +import { createTempDir } from '../fixtures/tmp.js'; +import { formatResult } from '../fixtures/runtime.js'; + +test('checkpoint reads include CAS-backed raw tail captures', async () => { + const repoDir = await createTempDir('think-checkpoint-read-'); + await ensureGitRepo(repoDir); + const previousTestNow = process.env.THINK_TEST_NOW; + + try { + for (let i = 0; i < 20; i += 1) { + process.env.THINK_TEST_NOW = String(1_900_000_000_000 + i); + // eslint-disable-next-line no-await-in-loop -- fixture needs ordered writer patches + await saveRawCapture(repoDir, `checkpoint-backed raw capture ${i}`); + } + const app = await openWarpApp(repoDir); + await app.core().materialize(); + await app.core().createCheckpoint(); + // eslint-disable-next-line require-atomic-updates -- test fixture restores THINK_TEST_NOW in finally + process.env.THINK_TEST_NOW = String(1_900_000_000_020); + await saveRawCapture(repoDir, 'checkpoint-backed raw capture 20'); + } finally { + restoreTestNow(previousTestNow); + } + + const checkpointRef = runGit( + ['rev-parse', '--verify', '--quiet', 'refs/warp/think/checkpoints/head'], + { cwd: repoDir }, + ); + assert.equal( + checkpointRef.status, + 0, + `Expected fixture writes to create an indexed checkpoint.\n${formatResult(checkpointRef)}` + ); + + const checkpointCaptures = await listCheckpointEntriesByKind(repoDir, 'capture'); + assert.ok(checkpointCaptures, 'Expected checkpoint-backed capture listing to be reachable.'); + assert.equal(checkpointCaptures.length, 21, 'Expected checkpoint read model to include raw tail captures.'); + + const productRead = await openProductReadHandle(repoDir); + assert.equal( + typeof productRead.view.getNodeContentMeta, + 'function', + 'Expected product reads to use the checkpoint-backed view when a checkpoint is available.' + ); + const productCaptures = await listEntriesByKind(productRead, 'capture'); + assert.equal(productCaptures.length, 21, 'Expected checkpoint-backed product reads to include raw tail captures.'); + assert.ok( + productCaptures.some((entry) => entry.text === 'checkpoint-backed raw capture 20'), + 'Expected checkpoint-backed product reads to decode CAS-backed tail content.', + ); + + const recent = await listRecent(repoDir, { count: 2 }); + + assert.equal(recent.total, 21, 'Expected public recent reads to use the checkpoint-backed capture set.'); + assert.deepEqual( + recent.entries.map((entry) => entry.text), + [ + 'checkpoint-backed raw capture 20', + 'checkpoint-backed raw capture 19', + ], + 'Expected checkpoint reads to decode CAS-backed capture text from the live tail.', + ); +}); + +function restoreTestNow(previousTestNow) { + if (previousTestNow === undefined) { + delete process.env.THINK_TEST_NOW; + return; + } + process.env.THINK_TEST_NOW = previousTestNow; +} diff --git a/test/ports/docs-consistency.test.js b/test/ports/docs-consistency.test.js index 5a7dc9d..272852d 100644 --- a/test/ports/docs-consistency.test.js +++ b/test/ports/docs-consistency.test.js @@ -47,6 +47,19 @@ test('METHOD docs use one consistent cycle-only release and README closeout poli ); }); +test('MIND_ORCHESTRATION.md exists and is linked from GUIDE.md', () => { + const mindDoc = readRepoFile('docs/MIND_ORCHESTRATION.md'); + assert.ok(mindDoc.length > 0, 'Expected docs/MIND_ORCHESTRATION.md to exist and have content.'); + assert.match(mindDoc, /mind/i, 'Expected the doc to mention minds.'); + + const guide = readRepoFile('GUIDE.md'); + assert.match( + guide, + /MIND_ORCHESTRATION/, + 'Expected GUIDE.md to link to MIND_ORCHESTRATION.md.' + ); +}); + test('cycle 0006 retrospective restarts ordered numbering for the human playback section', () => { const retro = readRepoFile('docs/method/retro/0006/refresh-contributing.md'); const humanPerspective = retro.match( diff --git a/test/ports/doctor.test.js b/test/ports/doctor.test.js index ae8d9c7..c81805a 100644 --- a/test/ports/doctor.test.js +++ b/test/ports/doctor.test.js @@ -67,6 +67,19 @@ test('runDiagnostics reports warn for upstream when unreachable', async () => { assert.equal(upstreamCheck.status, 'warn', 'Expected upstream to warn when unreachable.'); }); +test('runDiagnostics reports skip for upstream when URL is set but no checker provided', async () => { + const context = await createDoctorContext({ withRepo: true }); + const result = await runDiagnostics({ + thinkDir: context.thinkDir, + repoDir: context.repoDir, + upstreamUrl: 'git@github.com:example/backup.git', + // no checkUpstreamReachable provided + }); + + const upstreamCheck = findCheck(result, 'upstream'); + assert.equal(upstreamCheck.status, 'skip', 'Expected upstream to skip when checker is not provided, even if URL is set.'); +}); + test('runDiagnostics reports skip for upstream when not configured', async () => { const context = await createDoctorContext({ withRepo: true }); const result = await runDiagnostics({ @@ -79,7 +92,7 @@ test('runDiagnostics reports skip for upstream when not configured', async () => assert.equal(upstreamCheck.status, 'skip', 'Expected upstream check to be skipped when not configured.'); }); -test('runDiagnostics reports ok for upstream when configured', async () => { +test('runDiagnostics reports skip for upstream when configured without checker', async () => { const context = await createDoctorContext({ withRepo: true }); const result = await runDiagnostics({ thinkDir: context.thinkDir, @@ -88,7 +101,7 @@ test('runDiagnostics reports ok for upstream when configured', async () => { }); const upstreamCheck = findCheck(result, 'upstream'); - assert.equal(upstreamCheck.status, 'ok', 'Expected upstream check to report ok when URL is set.'); + assert.equal(upstreamCheck.status, 'skip', 'Expected upstream to skip when configured but no checker provided.'); }); test('runDiagnostics includes all expected check names', async () => { diff --git a/test/ports/enrichment-cache.test.js b/test/ports/enrichment-cache.test.js new file mode 100644 index 0000000..537a553 --- /dev/null +++ b/test/ports/enrichment-cache.test.js @@ -0,0 +1,90 @@ +import assert from 'node:assert/strict'; +import test from 'node:test'; + +import { ensureGitRepo } from '../../src/git.js'; +import { + finalizeCapturedThought, + saveRawCapture, +} from '../../src/store.js'; +import { + invalidateSearchIndex, + loadSearchIndex, +} from '../../src/store/queries.js'; +import { runEnrichmentPipeline } from '../../src/store/enrichment/runner.js'; +import { createTempDir } from '../fixtures/tmp.js'; + +test('enrichment invalidates the per-repo search index after creating keyword nodes', async () => { + const repoDir = await createTempDir('think-enrichment-index-'); + await ensureGitRepo(repoDir); + + const entry = await saveRawCapture( + repoDir, + 'git-warp performance optimization should make browse startup faster' + ); + await finalizeCapturedThought(repoDir, entry.id); + + const before = await loadSearchIndex(repoDir); + assert.deepEqual( + before.search('performance'), + [], + 'Expected the pre-enrichment search index to start empty.' + ); + + const result = await runEnrichmentPipeline(repoDir); + assert.equal( + result.receiptsCreated, + 2, + 'Expected enrichment to count both auto_tags and semantic_parse receipts.' + ); + + const after = await loadSearchIndex(repoDir); + assert.ok( + after.search('performance').includes('performance'), + 'Expected loadSearchIndex to reload keywords after enrichment invalidates the stale trie.' + ); +}); + +test('search indexes are cached independently per repo', async () => { + const performanceRepoDir = await createEnrichedRepo( + 'think-enrichment-performance-', + 'performance optimization keeps browse startup fast' + ); + const latencyRepoDir = await createEnrichedRepo( + 'think-enrichment-latency-', + 'latency budget work should protect capture responsiveness' + ); + + const performanceTrie = await loadSearchIndex(performanceRepoDir); + const latencyTrie = await loadSearchIndex(latencyRepoDir); + + assert.ok( + performanceTrie.search('performance').includes('performance'), + 'Expected the first repo index to include its own keyword.' + ); + assert.deepEqual( + performanceTrie.search('latency'), + [], + 'Expected the first repo index not to leak keywords from the second repo.' + ); + assert.ok( + latencyTrie.search('latency').includes('latency'), + 'Expected the second repo index to include its own keyword.' + ); + assert.deepEqual( + latencyTrie.search('performance'), + [], + 'Expected the second repo index not to reuse the first repo trie.' + ); +}); + +async function createEnrichedRepo(prefix, thought) { + const repoDir = await createTempDir(prefix); + await ensureGitRepo(repoDir); + invalidateSearchIndex(repoDir); + + const entry = await saveRawCapture(repoDir, thought); + await finalizeCapturedThought(repoDir, entry.id); + await runEnrichmentPipeline(repoDir); + + return repoDir; +} diff --git a/test/ports/graph-v4.test.js b/test/ports/graph-v4.test.js new file mode 100644 index 0000000..89c6075 --- /dev/null +++ b/test/ports/graph-v4.test.js @@ -0,0 +1,38 @@ +import assert from 'node:assert/strict'; +import test from 'node:test'; + +import { + CLASSIFICATIONS, + CLASSIFICATION_PREFIX, + GRAPH_MODEL_VERSION, + PRODUCT_READ_LENS, + TOPIC_PREFIX, + ENTITY_PREFIX, + ANNOTATION_PREFIX, + PIPELINE_RUN_PREFIX, +} from '../../src/store/constants.js'; + +test('GRAPH_MODEL_VERSION is 4', () => { + assert.equal(GRAPH_MODEL_VERSION, 4); +}); + +test('CLASSIFICATIONS has 7 entries including unclassified', () => { + assert.equal(CLASSIFICATIONS.length, 7); + assert.ok(CLASSIFICATIONS.includes('question')); + assert.ok(CLASSIFICATIONS.includes('decision')); + assert.ok(CLASSIFICATIONS.includes('observation')); + assert.ok(CLASSIFICATIONS.includes('action_item')); + assert.ok(CLASSIFICATIONS.includes('idea')); + assert.ok(CLASSIFICATIONS.includes('reference')); + assert.ok(CLASSIFICATIONS.includes('unclassified')); + assert.ok(Object.isFrozen(CLASSIFICATIONS)); +}); + +test('PRODUCT_READ_LENS includes enrichment prefixes', () => { + const patterns = PRODUCT_READ_LENS.match; + assert.ok(patterns.includes(`${TOPIC_PREFIX}*`), 'Missing topic prefix'); + assert.ok(patterns.includes(`${CLASSIFICATION_PREFIX}*`), 'Missing classification prefix'); + assert.ok(patterns.includes(`${ENTITY_PREFIX}*`), 'Missing entity prefix'); + assert.ok(patterns.includes(`${ANNOTATION_PREFIX}*`), 'Missing annotation prefix'); + assert.ok(patterns.includes(`${PIPELINE_RUN_PREFIX}*`), 'Missing pipeline_run prefix'); +}); diff --git a/test/ports/minds.test.js b/test/ports/minds.test.js index 8a9e8db..e742597 100644 --- a/test/ports/minds.test.js +++ b/test/ports/minds.test.js @@ -131,6 +131,22 @@ test('shaderForMind stays within the shader count range', () => { } }); +test('shaderForMind throws when shaderCount is zero', () => { + assert.throws( + () => shaderForMind('test', 0), + { message: /shaderCount/ }, + 'Expected shaderForMind to reject zero shaderCount.' + ); +}); + +test('shaderForMind throws when shaderCount is negative', () => { + assert.throws( + () => shaderForMind('test', -1), + { message: /shaderCount/ }, + 'Expected shaderForMind to reject negative shaderCount.' + ); +}); + test('shaderForMind handles single-character names', () => { const index = shaderForMind('x', 5); assert.ok(index >= 0 && index < 5); diff --git a/test/ports/model.test.js b/test/ports/model.test.js new file mode 100644 index 0000000..15b2699 --- /dev/null +++ b/test/ports/model.test.js @@ -0,0 +1,85 @@ +import assert from 'node:assert/strict'; +import test from 'node:test'; + +import { Entry, ReflectSession, createEntry, createReflectSession, storesTextContent } from '../../src/store/model.js'; +import { ENTRY_KINDS, BUCKET_PERIODS } from '../../src/store/constants.js'; + +test('createEntry returns an Entry instance', () => { + const entry = createEntry('test thought', 'local.test.cli', { kind: 'capture', source: 'capture' }); + + assert.ok(entry instanceof Entry, 'Expected entry to be an Entry instance.'); + assert.equal(entry.text, 'test thought'); + assert.equal(entry.kind, 'capture'); + assert.equal(entry.writerId, 'local.test.cli'); + assert.ok(entry.id.startsWith('entry:'), 'Expected id to have entry prefix.'); + assert.ok(entry.createdAt, 'Expected createdAt to be set.'); + assert.ok(entry.sortKey, 'Expected sortKey to be set.'); +}); + +test('Entry is frozen', () => { + const entry = createEntry('frozen', 'local.test.cli', { kind: 'capture', source: 'capture' }); + + assert.ok(Object.isFrozen(entry), 'Expected entry to be frozen.'); +}); + +test('createEntry validates required fields', () => { + assert.throws( + () => createEntry('', 'local.test.cli', { kind: 'capture', source: 'capture' }), + { message: /text/ }, + 'Expected empty text to throw.' + ); + + assert.throws( + () => createEntry('thought', '', { kind: 'capture', source: 'capture' }), + { message: /writerId/ }, + 'Expected empty writerId to throw.' + ); +}); + +test('createReflectSession returns a ReflectSession instance', () => { + const session = createReflectSession('local.test.cli', { + seedEntryId: 'entry:123', + contrastEntryId: null, + promptType: 'challenge', + question: 'Why?', + selectionReason: 'test', + }); + + assert.ok(session instanceof ReflectSession, 'Expected session to be a ReflectSession instance.'); + assert.equal(session.seedEntryId, 'entry:123'); + assert.equal(session.promptType, 'challenge'); + assert.ok(session.id.startsWith('reflect:'), 'Expected id to have session prefix.'); +}); + +test('ReflectSession is frozen', () => { + const session = createReflectSession('local.test.cli', { + seedEntryId: 'entry:123', + contrastEntryId: null, + promptType: 'challenge', + question: 'Why?', + selectionReason: 'test', + }); + + assert.ok(Object.isFrozen(session), 'Expected session to be frozen.'); +}); + +test('ENTRY_KINDS is a frozen array of valid kind strings', () => { + assert.ok(Object.isFrozen(ENTRY_KINDS), 'Expected ENTRY_KINDS to be frozen.'); + assert.ok(ENTRY_KINDS.includes('capture'), 'Expected capture in ENTRY_KINDS.'); + assert.ok(ENTRY_KINDS.includes('reflect'), 'Expected reflect in ENTRY_KINDS.'); + assert.ok(ENTRY_KINDS.includes('thought'), 'Expected thought in ENTRY_KINDS.'); +}); + +test('BUCKET_PERIODS is a frozen array of valid bucket strings', () => { + assert.ok(Object.isFrozen(BUCKET_PERIODS), 'Expected BUCKET_PERIODS to be frozen.'); + assert.ok(BUCKET_PERIODS.includes('hour'), 'Expected hour in BUCKET_PERIODS.'); + assert.ok(BUCKET_PERIODS.includes('day'), 'Expected day in BUCKET_PERIODS.'); + assert.ok(BUCKET_PERIODS.includes('week'), 'Expected week in BUCKET_PERIODS.'); +}); + +test('storesTextContent validates against ENTRY_KINDS', () => { + for (const kind of ENTRY_KINDS) { + assert.equal(typeof storesTextContent(kind), 'boolean', `Expected boolean for kind "${kind}".`); + } + assert.equal(storesTextContent('invalid_kind'), false, 'Expected false for invalid kind.'); +}); diff --git a/test/ports/no-full-materialization.test.js b/test/ports/no-full-materialization.test.js new file mode 100644 index 0000000..3ca2016 --- /dev/null +++ b/test/ports/no-full-materialization.test.js @@ -0,0 +1,47 @@ +import assert from 'node:assert/strict'; +import { readFileSync, readdirSync, statSync } from 'node:fs'; +import path from 'node:path'; +import test from 'node:test'; + +function collectJsFiles(dir) { + const files = []; + for (const entry of readdirSync(dir)) { + const full = path.join(dir, entry); + if (statSync(full).isDirectory()) { + files.push(...collectJsFiles(full)); + } else if (entry.endsWith('.js')) { + files.push(full); + } + } + return files; +} + +test('no source file calls getNodes() or getEdges() for full graph materialization', () => { + const srcDir = new URL('../../src/', import.meta.url).pathname; + const files = collectJsFiles(srcDir); + const violations = []; + + for (const file of files) { + const content = readFileSync(file, 'utf8'); + const relPath = path.relative(path.join(srcDir, '..'), file); + + // Match .getNodes() or .getEdges() but not in comments + const lines = content.split('\n'); + for (let i = 0; i < lines.length; i++) { + const line = lines[i].trim(); + if (line.startsWith('//') || line.startsWith('*')) { continue; } + if (/\.getNodes\(\)/.test(line)) { + violations.push(`${relPath}:${i + 1}: ${line.trim()}`); + } + if (/\.getEdges\(\)/.test(line)) { + violations.push(`${relPath}:${i + 1}: ${line.trim()}`); + } + } + } + + assert.deepEqual( + violations, + [], + `Found ${violations.length} full-materialization anti-pattern(s):\n${violations.join('\n')}` + ); +}); diff --git a/test/ports/private-imports.test.js b/test/ports/private-imports.test.js new file mode 100644 index 0000000..429f44f --- /dev/null +++ b/test/ports/private-imports.test.js @@ -0,0 +1,53 @@ +import assert from 'node:assert/strict'; +import { readFile } from 'node:fs/promises'; +import { join } from 'node:path'; +import test from 'node:test'; + +import { repoRoot } from '../fixtures/runtime.js'; + +const CHECKPOINT_SOURCE_FILES = Object.freeze([ + 'src/store/checkpoint-read.js', + 'src/store/checkpoint-state.js', + 'src/store/checkpoint-product-read.js', +]); + +const RUNTIME_READ_SOURCE_FILES = Object.freeze([ + 'src/store/runtime.js', + 'src/store/checkpoint-state.js', +]); + +test('checkpoint read fast paths do not import git-warp internals from node_modules', async () => { + const offenders = []; + + for (const relativePath of CHECKPOINT_SOURCE_FILES) { + // eslint-disable-next-line no-await-in-loop -- this guard reports deterministic file-level evidence + const source = await readFile(join(repoRoot, relativePath), 'utf8'); + if (source.includes('node_modules/@git-stunts/git-warp/src')) { + offenders.push(relativePath); + } + } + + assert.deepEqual( + offenders, + [], + 'Expected production checkpoint code to use only public git-warp package exports.' + ); +}); + +test('runtime read paths do not call GitGraphAdapter.createRuntimeBlobStorage without feature detection', async () => { + const offenders = []; + + for (const relativePath of RUNTIME_READ_SOURCE_FILES) { + // eslint-disable-next-line no-await-in-loop -- this guard reports deterministic file-level evidence + const source = await readFile(join(repoRoot, relativePath), 'utf8'); + if (source.includes('.createRuntimeBlobStorage()')) { + offenders.push(relativePath); + } + } + + assert.deepEqual( + offenders, + [], + 'Expected runtime reads to feature-detect the optional git-warp blob-storage helper before using it.' + ); +}); diff --git a/test/ports/semantic-parse.test.js b/test/ports/semantic-parse.test.js new file mode 100644 index 0000000..6430ebc --- /dev/null +++ b/test/ports/semantic-parse.test.js @@ -0,0 +1,61 @@ +import assert from 'node:assert/strict'; +import test from 'node:test'; + +import { classifyThought } from '../../src/store/enrichment/semantic-parse.js'; + +// --------------------------------------------------------------------------- +// Classification (pure function, no graph) +// --------------------------------------------------------------------------- + +test('classifyThought detects questions', () => { + const result = classifyThought('How do I improve capture latency?'); + assert.ok(result.classifications.includes('question')); +}); + +test('classifyThought detects decisions', () => { + const result = classifyThought('I decided to use git-warp for storage'); + assert.ok(result.classifications.includes('decision')); +}); + +test('classifyThought detects observations', () => { + const result = classifyThought('I noticed the splash shader is slow on large terminals'); + assert.ok(result.classifications.includes('observation')); +}); + +test('classifyThought detects action items', () => { + const result = classifyThought('Need to fix the browse fade-in single color issue'); + assert.ok(result.classifications.includes('action_item')); +}); + +test('classifyThought detects ideas', () => { + const result = classifyThought('What if we added a thought graph visualization?'); + assert.ok(result.classifications.includes('idea')); +}); + +test('classifyThought detects references', () => { + const result = classifyThought('See https://github.com/flyingrobots/think for details'); + assert.ok(result.classifications.includes('reference')); +}); + +test('classifyThought returns unclassified when no pattern matches', () => { + const result = classifyThought('turkey is good in burritos'); + assert.deepEqual(result.classifications, ['unclassified']); +}); + +test('classifyThought supports multi-class', () => { + const result = classifyThought('Should we refactor this? Need to fix the tests too.'); + assert.ok(result.classifications.includes('question'), 'Expected question from "Should".'); + assert.ok(result.classifications.includes('action_item'), 'Expected action_item from "Need to".'); + assert.ok(result.classifications.length >= 2, 'Expected at least 2 classifications.'); +}); + +test('classifyThought returns markers for each match', () => { + const result = classifyThought('How do I fix this?'); + assert.ok(Array.isArray(result.markers), 'Expected markers array.'); + assert.ok(result.markers.length > 0, 'Expected at least one marker.'); +}); + +test('classifyThought handles empty text', () => { + const result = classifyThought(''); + assert.deepEqual(result.classifications, ['unclassified']); +});