Visor supports defining dependencies between checks using `depends_on`. This enables:
- Sequential execution: dependents wait for prerequisites to finish
- Parallel optimization: independent checks run simultaneously
- Smart scheduling: automatic topological ordering
```yaml
version: "1.0"
steps:
  security:
    type: ai
    group: code-review
    schema: code-review
    prompt: "Comprehensive security analysis..."
    tags:
      - security
      - critical
      - comprehensive
    on:
      - pr_opened
      - pr_updated
    # No dependencies - runs first

  performance:
    type: ai
    group: code-review
    schema: code-review
    prompt: "Performance analysis..."
    tags:
      - performance
      - fast
      - local
      - remote
    on:
      - pr_opened
      - pr_updated
    # No dependencies - runs parallel with security

  style:
    type: ai
    group: code-review
    schema: code-review
    prompt: "Style analysis based on security findings..."
    tags:
      - style
      - fast
      - local
    on:
      - pr_opened
    depends_on:
      - security # Waits for security to complete

  architecture:
    type: ai
    group: code-review
    schema: code-review
    prompt: "Architecture analysis building on previous checks..."
    on:
      - pr_opened
      - pr_updated
    depends_on:
      - security
      - performance
```

Execution proceeds in levels:
- Level 0: `security` and `performance` run in parallel
- Level 1: `style` runs after `security`
- Level 2: `architecture` runs after both
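The level numbers fall out of topological ordering over `depends_on`. As a rough sketch of the idea (a hypothetical helper, not Visor's actual scheduler), each step's level is one more than the deepest level among its dependencies:

```javascript
// Hypothetical sketch of level assignment from depends_on.
// Assumes an acyclic graph; Visor additionally detects cycles.
function executionLevels(steps) {
  const levels = {};
  const level = (name) => {
    if (name in levels) return levels[name];
    const deps = steps[name].depends_on || [];
    // No dependencies -> level 0; otherwise one past the deepest dependency.
    levels[name] = deps.length === 0 ? 0 : 1 + Math.max(...deps.map(level));
    return levels[name];
  };
  Object.keys(steps).forEach(level);
  return levels;
}

const plan = executionLevels({
  a: {},
  b: {},
  c: { depends_on: ['a'] },
  d: { depends_on: ['c', 'b'] },
});
// plan: { a: 0, b: 0, c: 1, d: 2 }
```

Steps that share a level have no dependency path between them, so they are free to run concurrently.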
```yaml
steps:
  foundation: { type: ai, group: base, schema: code-review, prompt: "Base analysis" }
  branch_a: { type: ai, group: code-review, schema: code-review, depends_on: [foundation] }
  branch_b: { type: ai, group: code-review, schema: code-review, depends_on: [foundation] }
  final: { type: ai, group: summary, schema: markdown, depends_on: [branch_a, branch_b] }
```

```yaml
steps:
  security_basic: { type: ai, group: security, schema: code-review }
  security_advanced: { type: ai, group: security, schema: code-review, depends_on: [security_basic] }
  performance_basic: { type: ai, group: performance, schema: code-review }
  performance_advanced: { type: ai, group: performance, schema: code-review, depends_on: [performance_basic] }
  integration: { type: ai, group: summary, schema: markdown, depends_on: [security_advanced, performance_advanced] }
```

Sometimes a check can proceed when any one of several upstream steps has completed successfully. Visor supports this with pipe-separated tokens inside `depends_on`.
`continue_on_failure` controls whether dependents may run after a failure; it is a gating knob, not the definition of criticality. Classify steps by criticality (external | internal | policy | info) and derive defaults:
- Critical: `continue_on_failure: false`, require assume/guarantee, tighter loop budgets, retries only for transient faults.
- Non-critical: may allow `continue_on_failure: true` to keep non-critical branches moving.
Example — non-critical branch that can proceed after a soft failure:

```yaml
steps:
  summarize:
    type: ai
    tags:
      - info
    continue_on_failure: true
    fail_if: "(output.errors || []).length > 0"
```

An ANY-OF dependency uses a pipe inside a single `depends_on` entry:

```yaml
steps:
  parse-issue: { type: noop }
  parse-comment: { type: noop }
  triage: { type: noop, depends_on: ["parse-issue|parse-comment"] }
```

Rules:
- Each string containing `|` denotes an ANY-OF group. In the example above, either `parse-issue` or `parse-comment` satisfies the dependency for `triage`.
- You may combine ALL-OF and ANY-OF: `depends_on: ["a|b", "c"]` means "(a or b) and c".
- Event gating still applies: a dependency only counts if it is applicable to the current event (has a compatible `on` or no `on`).
- Failure/skip semantics: a member that is skipped or fails fatally does not satisfy the group; at least one member must complete without a fatal error for the group to be satisfied.
- Session reuse: if `reuse_ai_session: true` and `depends_on` contains a pipe group, the session parent is selected from the first satisfied member at runtime.
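The combination rule reduces to a single predicate. A minimal sketch (a hypothetical helper, not Visor internals; it ignores the event-gating and fatal-failure refinements above):

```javascript
// Hypothetical sketch of depends_on satisfaction:
// ALL-OF across array entries, ANY-OF within a pipe-separated entry.
function dependsSatisfied(dependsOn, completedOk) {
  // completedOk: Set of step names that completed without a fatal error
  return dependsOn.every(token =>
    token.split('|').some(name => completedOk.has(name))
  );
}

const done = new Set(['b', 'c']);
dependsSatisfied(['a|b', 'c'], done); // → true: (a or b) and c
dependsSatisfied(['a|b', 'd'], done); // → false: d has not completed
```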
Tip: When targeting a leaf in ad-hoc runs (e.g., `visor --check final`), include one member of each pipe group explicitly (e.g., `--check a --check final`) to make intent unambiguous. In normal runs Visor computes the plan automatically from your config.
For AI checks that depend on other AI checks, you can reuse the parent's conversation session to maintain context:
```yaml
steps:
  initial-analysis:
    type: ai
    prompt: "Analyze this code for issues..."
  follow-up:
    type: ai
    depends_on: [initial-analysis]
    reuse_ai_session: true # Reuses session from first dependency
    prompt: "Based on your analysis, suggest fixes..."
```

Options:
- `reuse_ai_session: true` - Reuse session from first dependency
- `reuse_ai_session: "step-name"` - Reuse session from a specific step
- `session_mode: 'clone'` - Copy conversation history (default)
- `session_mode: 'append'` - Share conversation history (modifications visible to both)
When using ANY-OF dependencies (`depends_on: ["a|b"]`), the session is taken from whichever dependency completes first.
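For example, a check with a pipe group can still reuse a session; the parent is whichever member finished first. A hedged sketch (step names and prompts are illustrative):

```yaml
steps:
  parse-issue: { type: ai, prompt: "Parse the issue..." }
  parse-comment: { type: ai, prompt: "Parse the comment..." }
  triage:
    type: ai
    depends_on: ["parse-issue|parse-comment"]
    reuse_ai_session: true # Session comes from whichever member completed first
    prompt: "Triage based on the parsed context..."
```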
When a step is triggered via routing (`on_success.run`, `on_fail.run`) from a `forEach` scope, you can control how it schedules:
```yaml
steps:
  process-items:
    type: command
    forEach: true
    exec: echo '["a","b","c"]'
  validate-item:
    depends_on: [process-items]
    fanout: map # Run once per forEach item (fan-out)
  aggregate-results:
    depends_on: [process-items]
    fanout: reduce # Run once at parent scope (aggregation)
    # Alias: reduce: true
```

- `fanout: 'map'` - Schedule once per forEach item (fan-out behavior)
- `fanout: 'reduce'` - Schedule a single run at parent scope (aggregation)
- `reduce: true` - Alias for `fanout: 'reduce'`
- Cycle detection and missing dependency validation
- Failed checks don't block independent branches
- Dependency results are available to dependents via `outputs`
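For instance, a dependent check can reference its prerequisite's result through `outputs` in a template. A minimal sketch (step names are illustrative; field access depends on your schema):

```yaml
steps:
  security:
    type: ai
    schema: code-review
    prompt: "Security analysis..."
  style:
    type: ai
    depends_on: [security]
    # The security result is exposed to this check via outputs
    prompt: |
      Review style, taking these security findings into account:
      {{ outputs.security }}
```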
When a check has `forEach: true`, it outputs an array and all its dependent checks run once per array item. After all dependents complete all iterations, the `on_finish` hook on the forEach check triggers to aggregate results and optionally route to a different check.
```yaml
steps:
  extract-items:
    type: ai
    forEach: true
    # Outputs: [item1, item2, item3]
  process-item:
    depends_on: [extract-items]
    # Runs 3 times (once per item)
```

Execution order:
1. `extract-items` runs once → outputs `[item1, item2, item3]`
2. `process-item` runs 3 times (once for each item)
3. All 3 iterations complete
4. Downstream checks that depend on `process-item` can now run
The `on_finish` hook runs once after all dependent checks complete all their iterations, making it perfect for aggregating results and making routing decisions:
```yaml
steps:
  extract-facts:
    type: ai
    forEach: true
    # Outputs: [fact1, fact2, fact3]
    on_finish:
      # Run aggregation check
      run: [aggregate-validations]
      # Then decide whether to retry
      goto_js: |
        const allValid = memory.get('all_valid', 'validation');
        return allValid ? null : 'retry-assistant';

  validate-fact:
    depends_on: [extract-facts]
    # Runs 3 times (once per fact)

  aggregate-validations:
    type: script
    content: |
      // Access ALL validation results
      const results = outputs.history['validate-fact'];
      const allValid = results.every(r => r.is_valid);
      memory.set('all_valid', allValid, 'validation');
      return { total: results.length, valid: allValid };
```

Execution order:
1. `extract-facts` runs once → outputs array of facts
2. `validate-fact` runs N times (once per fact)
3. `on_finish` triggers:
   - First: `aggregate-validations` runs
   - Then: `goto_js` evaluates
   - If goto returns a check name, jump to that ancestor
4. Downstream checks continue
The `on_finish` hook triggers:
- Only on checks with `forEach: true`
- After ALL dependent checks complete ALL iterations
- Does not trigger if the forEach array is empty
- Before any downstream checks that don't depend on the forEach check
Inside on_finish hooks, you have access to all iteration results. The context provides these variables:
```javascript
// In on_finish.goto_js or on_finish.run_js
// Available variables:
outputs['extract-facts']         // The forEach array (latest value)
outputs['validate-fact']         // Latest result from validate-fact
outputs.history['validate-fact'] // ALL results from ALL iterations (array)
outputs_history['validate-fact'] // Alias for outputs.history
outputs_raw['extract-facts']     // Aggregate value (full array)

// forEach metadata
forEach.total      // Total forEach items
forEach.successful // Number of successful iterations
forEach.failed     // Number of failed iterations
forEach.items      // The forEach items array

// Memory access
memory.get('key', 'namespace')
memory.set('key', value, 'namespace')
memory.increment('key', amount, 'namespace')
```

Note: `outputs.history` and `outputs_history` are aliases; both provide access to the full history array for each check.
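For instance, an `on_finish.goto_js` body might combine the forEach metadata with a memory counter to retry a mostly-failed batch. The sketch below stubs `forEach` and `memory` so it runs outside Visor (inside `goto_js` the runtime provides them); `retry-extraction` is an illustrative ancestor check name:

```javascript
// Stub context standing in for what the Visor runtime provides.
const forEach = { total: 4, successful: 1, failed: 3 };
const store = new Map();
const memory = {
  get: (key, ns) => store.get(ns + ':' + key),
  increment: (key, amount, ns) =>
    store.set(ns + ':' + key, (store.get(ns + ':' + key) || 0) + amount),
};

// Body of a hypothetical on_finish.goto_js: retry (up to twice) when more
// than half the iterations failed, otherwise proceed downstream.
function decideGoto() {
  const failureRate = forEach.total > 0 ? forEach.failed / forEach.total : 0;
  if (failureRate > 0.5 && (memory.get('retries', 'loop') || 0) < 2) {
    memory.increment('retries', 1, 'loop');
    return 'retry-extraction'; // illustrative ancestor check name
  }
  return null;
}

decideGoto(); // → 'retry-extraction' (first attempt)
decideGoto(); // → 'retry-extraction' (second attempt)
decideGoto(); // → null (retry budget exhausted)
```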
The real power of `on_finish` is aggregating results from multiple dependent checks:
```yaml
steps:
  # Step 1: Extract claims from AI response
  extract-claims:
    type: ai
    forEach: true
    prompt: "Extract all factual claims from: {{ outputs.ai-response }}"
    transform_js: JSON.parse(output).claims
    depends_on: [ai-response]
    # Step 4: After ALL validations complete
    on_finish:
      run: [aggregate-all-validations]
      goto_js: |
        const securityOk = memory.get('security_valid', 'validation');
        const technicalOk = memory.get('technical_valid', 'validation');
        const attempt = memory.get('attempt', 'validation') || 0;
        if (securityOk && technicalOk) {
          return null; // All good, proceed
        }
        if (attempt >= 2) {
          return null; // Max attempts, give up
        }
        memory.increment('attempt', 1, 'validation');
        return 'ai-response'; // Retry with validation context

  # Step 2: Validate security aspects (runs N times)
  validate-security:
    type: ai
    depends_on: [extract-claims]
    prompt: |
      Validate security implications of: {{ outputs['extract-claims'].claim }}

  # Step 3: Validate technical accuracy (runs N times)
  validate-technical:
    type: ai
    depends_on: [extract-claims]
    prompt: |
      Validate technical accuracy of: {{ outputs['extract-claims'].claim }}

  # Step 4a: Aggregate ALL results
  aggregate-all-validations:
    type: script
    content: |
      // Get results from BOTH dependent checks
      const securityResults = outputs.history['validate-security'];
      const technicalResults = outputs.history['validate-technical'];
      const securityValid = securityResults.every(r => r.is_valid);
      const technicalValid = technicalResults.every(r => r.is_valid);
      memory.set('security_valid', securityValid, 'validation');
      memory.set('technical_valid', technicalValid, 'validation');

      // Store issues for retry context
      if (!securityValid || !technicalValid) {
        const issues = [
          ...securityResults.filter(r => !r.is_valid),
          ...technicalResults.filter(r => !r.is_valid)
        ];
        memory.set('validation_issues', issues, 'validation');
      }

      return {
        security: { total: securityResults.length, valid: securityValid },
        technical: { total: technicalResults.length, valid: technicalValid }
      };

  # Step 5: Post if validation passed
  post-response:
    type: github
    depends_on: [extract-claims]
    if: "memory.get('security_valid', 'validation') && memory.get('technical_valid', 'validation')"
    op: comment.create
    value: "{{ outputs['ai-response'] }}"
```

This is the only way to aggregate across multiple dependent checks in a forEach scenario. Without `on_finish`, there would be no single point where all results are available together.
- Use `outputs.history`: Access all forEach iteration results with `outputs.history['check-name']`
- Store in memory: Use memory to pass aggregated state to `goto_js` and downstream checks
- Handle empty arrays: Check `forEach.total` or array length before processing
- Limit loops: Use attempt counters in memory to prevent infinite retry loops
- Multiple dependents: `on_finish` is perfect when you have multiple checks depending on the same forEach check
- Event preservation: Use `goto_event` when jumping back to maintain correct event context
| Approach | When It Runs | Access to Results | Use Case |
|---|---|---|---|
| Regular dependent check | After forEach parent completes | Only parent's array items | Process individual items |
| `on_finish` hook | After all dependents complete all iterations | All iteration results via `outputs.history` | Aggregate, validate, route |
Example showing the difference:

```yaml
steps:
  extract-items:
    type: command
    forEach: true
    exec: echo '[1, 2, 3]'
    on_finish:
      run: [summarize-all]

  process-item:
    depends_on: [extract-items]
    # Runs 3 times, once per item
    # Has access to: outputs['extract-items'] (current item)

  summarize-all:
    type: script
    # Runs ONCE after all 3 process-item iterations
    # Has access to: outputs.history['process-item'] (all 3 results)
    content: |
      const allResults = outputs.history['process-item'];
      return { processed: allResults.length };
```

See also:
- Failure Routing - Complete `on_finish` reference
- forEach Dependency Propagation - Detailed forEach mechanics
- Output History - Accessing historical outputs