This document answers common questions about Visor, the AI-powered workflow orchestration tool for code review, automation, and CI/CD pipelines.
- General Questions
- Configuration Questions
- GitHub Actions Questions
- Provider Questions
- Troubleshooting
- Advanced Topics
Visor is an AI-powered workflow orchestration tool that can perform intelligent code review, automate CI/CD tasks, and integrate with various services. It supports multiple AI providers (Google Gemini, Anthropic Claude, OpenAI GPT, AWS Bedrock) and can run as both a GitHub Action and a CLI tool.
Key capabilities:
- Automated code review for pull requests
- Security, performance, and style analysis
- Custom workflow automation with 15+ provider types
- MCP (Model Context Protocol) tool integration
- Slack and HTTP webhook integrations
Unlike traditional linters that rely on static rules, Visor uses AI to understand context and provide nuanced feedback. Key differentiators:
- AI-powered analysis: Uses LLMs to understand code intent and provide contextual suggestions
- Workflow orchestration: Not just code review - supports complex multi-step workflows with routing, retries, and state management
- Pluggable architecture: 15+ provider types (AI, command, MCP, HTTP, memory, etc.) that can be combined
- Configuration-driven: Define workflows in YAML without writing code
- Multiple transports: Works as GitHub Action, CLI tool, or Slack bot
Yes. If no AI API key is configured, Visor falls back to fast, heuristic-based checks using simple pattern matching for basic style and performance issues.
To use AI-powered features, set one of these environment variables:
- `GOOGLE_API_KEY` for Google Gemini
- `ANTHROPIC_API_KEY` for Anthropic Claude
- `OPENAI_API_KEY` for OpenAI GPT
- AWS credentials for AWS Bedrock
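For local CLI runs, exporting one of these variables before invoking Visor is enough; a minimal sketch (the key value is a placeholder, not a real credential):

```shell
# Placeholder key; substitute your real Google API key
export GOOGLE_API_KEY="your-api-key"

# The other providers work the same way via
# ANTHROPIC_API_KEY, OPENAI_API_KEY, or AWS credentials
npx -y @probelabs/visor@latest --check all
```

In CI, the same variables are typically supplied from repository secrets rather than exported inline.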
| Provider | Environment Variable | Example Models |
|---|---|---|
| Google Gemini | `GOOGLE_API_KEY` | `gemini-2.0-flash-exp`, `gemini-1.5-pro` |
| Anthropic Claude | `ANTHROPIC_API_KEY` | `claude-3-5-sonnet-latest`, `claude-3-opus-latest` |
| OpenAI GPT | `OPENAI_API_KEY` | `gpt-4o`, `gpt-4-turbo` |
| AWS Bedrock | AWS credentials | `anthropic.claude-sonnet-4-20250514-v1:0` |
See AI Configuration for complete setup instructions.
Quick start (no installation required):

```bash
npx -y @probelabs/visor@latest --help
```

Global installation:

```bash
npm install -g @probelabs/visor
```

Project dependency:

```bash
npm install --save-dev @probelabs/visor
```

See NPM Usage for detailed installation options.
Visor looks for configuration in this order:
1. CLI `--config` parameter
2. `.visor.yaml` in the project root (note the leading dot)
3. Default configuration
Example:
```bash
# Use default location (.visor.yaml)
visor --check all

# Use custom config file
visor --config path/to/my-config.yaml
```

Use the validate command to check for errors before running:

```bash
# Validate default config
visor validate

# Validate specific file
visor validate --config .visor.yaml
```

The validator checks for:
- Missing required fields
- Invalid check types
- Incorrect event triggers
- Schema compliance
See Configuration for details.
You can set a global default and override per-check:
```yaml
# Global default
ai_provider: anthropic
ai_model: claude-3-5-sonnet-latest

steps:
  # This uses the global default (Anthropic)
  security-review:
    type: ai
    prompt: "Analyze security vulnerabilities"

  # This overrides to use Google
  performance-review:
    type: ai
    ai_provider: google
    ai_model: gemini-2.0-flash-exp
    prompt: "Analyze performance issues"

  # Alternative syntax using nested 'ai' block
  style-review:
    type: ai
    ai:
      provider: openai
      model: gpt-4o
    prompt: "Review code style"
```

Use the `on` field to control when checks run:
```yaml
steps:
  # Runs on PR open and update
  security-check:
    type: ai
    on: [pr_opened, pr_updated]
    prompt: "Check for security issues"

  # Disable a check by setting on to empty
  disabled-check:
    type: ai
    on: []  # Never runs
```

You can also use tags and the CLI to filter checks:

```bash
# Run only checks tagged 'security'
visor --tags security

# Exclude checks tagged 'experimental'
visor --exclude-tags experimental
```

See Tag Filtering for more options.
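For those flags to have any effect, checks must be tagged in the config; a minimal sketch (the step name and tag are illustrative, and the `tags` field is described in Tag Filtering):

```yaml
steps:
  auth-review:
    type: ai
    tags: [security]
    prompt: "Review authentication and authorization code"
```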
Define tools in the tools section and reference them in checks:
```yaml
tools:
  my-lint-tool:
    name: my-lint-tool
    description: Run custom linter
    inputSchema:
      type: object
      properties:
        files:
          type: array
          items:
            type: string
      required: [files]
    exec: 'eslint {{ args.files | join: " " }}'

steps:
  run-linter:
    type: mcp
    transport: custom
    method: my-lint-tool
    methodArgs:
      files: ["src/**/*.ts"]
```

See Custom Tools for complete documentation.
Use the extends field to inherit from base configurations:
```yaml
# .visor.yaml
extends:
  - ./team-standards.yaml  # Local file
  - default                # Built-in defaults

steps:
  my-custom-check:
    type: ai
    prompt: "Project-specific analysis"
```

You can also extend remote configurations:

```bash
visor --allowed-remote-patterns "https://github.com/myorg/"
```

See Configuration Inheritance.
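For illustration, a remote source could then be referenced directly in `extends`; the URL below is hypothetical, and the exact URL formats Visor accepts are covered in Configuration Inheritance:

```yaml
# Hypothetical remote config; the URL must match an --allowed-remote-patterns entry
extends:
  - https://github.com/myorg/visor-configs/base.yaml
```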
Create .github/workflows/visor.yml:
```yaml
name: Visor Code Review

on:
  pull_request:
    types: [opened, synchronize]

permissions:
  contents: read
  pull-requests: write
  issues: write
  checks: write

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: buger/visor@main
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
        env:
          GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
```

See Action Reference for all available inputs and outputs.
| Event | Trigger | Use Case |
|---|---|---|
| `pull_request` (opened) | `pr_opened` | New PR review |
| `pull_request` (synchronize) | `pr_updated` | Updated PR review |
| `pull_request` (closed) | `pr_closed` | PR close handling |
| `issues` (opened) | `issue_opened` | Issue assistants |
| `issue_comment` | `issue_comment` | Comment commands |
| `schedule` | `schedule` | Cron jobs |
| `workflow_dispatch` | `schedule` | Manual triggers |
See Event Triggers for complete documentation.
Use the `on` field and `if` conditions:

```yaml
steps:
  # Only review TypeScript files
  ts-review:
    type: ai
    on: [pr_opened, pr_updated]
    if: "files.some(f => f.filename.endsWith('.ts'))"
    prompt: "Review TypeScript code"

  # Only review on main branch PRs
  main-review:
    type: ai
    on: [pr_opened]
    if: "pr.base === 'main'"
    prompt: "Review changes to main"
```

For large PRs, consider:
- Increase the timeout:

  ```yaml
  steps:
    review:
      type: ai
      timeout: 300000  # 5 minutes
  ```

- Run checks in parallel:

  ```yaml
  max_parallelism: 5
  ```

- Split into focused checks:

  ```yaml
  steps:
    security-review:
      type: ai
      prompt: "Focus only on security"
    style-review:
      type: ai
      prompt: "Focus only on style"
  ```

- Filter by file type:

  ```yaml
  steps:
    js-review:
      type: ai
      if: "files.some(f => f.filename.endsWith('.js'))"
  ```
Fork PRs have restricted permissions by default. Solutions:
- Accept comment-only mode: Visor falls back to PR comments automatically
- Use `pull_request_target` for full check run support (requires careful security review)
See GitHub Checks - Fork PR Support.
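For the `pull_request_target` route, a minimal sketch is below; treat it as a starting point only, since `pull_request_target` runs with elevated permissions against untrusted fork code. Note it checks out the base repository, not the fork's changes:

```yaml
name: Visor Fork Review

on:
  pull_request_target:
    types: [opened, synchronize]

permissions:
  pull-requests: write
  checks: write

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      # Checks out the base branch, not the fork's code,
      # which avoids executing untrusted workflow-adjacent files
      - uses: actions/checkout@v4
      - uses: buger/visor@main
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
```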
| Provider | Best For | Notes |
|---|---|---|
| Anthropic Claude | Complex code analysis, security review | Strong reasoning, good context handling |
| Google Gemini | Fast analysis, cost-effective | Good for high-volume reviews |
| OpenAI GPT-4 | General-purpose analysis | Wide model availability |
| AWS Bedrock | Enterprise environments | IAM integration, private endpoints |
For most use cases, start with whichever provider you already have API access to.
Several provider types support custom logic:
Command provider (shell commands):

```yaml
steps:
  custom-lint:
    type: command
    exec: "npm run lint"
```

Script provider (JavaScript):

```yaml
steps:
  custom-analysis:
    type: script
    content: |
      const largeFiles = pr.files.filter(f => f.additions > 100);
      return {
        hasLargeChanges: largeFiles.length > 0,
        files: largeFiles.map(f => f.filename)
      };
```

AI provider (custom prompts):

```yaml
steps:
  domain-review:
    type: ai
    prompt: |
      You are an expert in our domain. Review this code for:
      - Business logic correctness
      - Domain model violations
      - API contract adherence
```

The MCP provider supports direct tool execution via multiple transports:
stdio transport (local command):
```yaml
steps:
  probe-search:
    type: mcp
    transport: stdio
    command: npx
    command_args: ["-y", "@probelabs/probe@latest", "mcp"]
    method: search_code
    methodArgs:
      query: "TODO"
```

HTTP transport (remote server):

```yaml
steps:
  remote-tool:
    type: mcp
    transport: http
    url: https://mcp-server.example.com/mcp
    method: analyze
    methodArgs:
      data: "{{ pr.title }}"
```

Custom transport (YAML-defined tools):

```yaml
tools:
  grep-tool:
    exec: 'grep -rn "{{ args.pattern }}" src/'

steps:
  search:
    type: mcp
    transport: custom
    method: grep-tool
    methodArgs:
      pattern: "FIXME"
```

See MCP Provider for complete documentation.
| Feature | `command` | `script` |
|---|---|---|
| Execution | Shell commands | JavaScript sandbox |
| Use case | External tools, shell scripts | Logic, data processing |
| Access | File system, external commands | PR context, memory, outputs |
| Security | Runs with process permissions | Sandboxed environment |
Use `command` for external tools and shell scripts:

```yaml
steps:
  run-tests:
    type: command
    exec: "npm test -- --json"
```

Use `script` for logic and data processing:

```yaml
steps:
  process-results:
    type: script
    depends_on: [run-tests]
    content: |
      const results = outputs['run-tests'];
      return {
        passed: results.tests.filter(t => t.passed).length,
        failed: results.tests.filter(t => !t.passed).length
      };
```

Common causes:
- Event filter mismatch: check whether the `on` field matches the current event

  ```yaml
  steps:
    my-check:
      on: [pr_opened]  # Won't run on pr_updated
  ```

- Condition evaluated to false: check your `if` expression

  ```yaml
  steps:
    my-check:
      if: "files.length > 0"  # Won't run if no files changed
  ```

- Tag filter exclusion: check if tags are filtering out the check

  ```bash
  visor --tags github  # Only runs checks tagged 'github'
  ```

- Missing dependencies: ensure `depends_on` targets exist

  ```yaml
  steps:
    my-check:
      depends_on: [nonexistent-check]  # Will fail
  ```

Debug with:

```bash
visor --check all --debug
```

Common issues with goto, retry, and run:
- `goto` must target ancestors only: you can only jump back to previously executed checks

  ```yaml
  steps:
    step-a:
      type: command
    step-b:
      depends_on: [step-a]
      on_fail:
        goto: step-a  # Valid (ancestor)
        # goto: step-c  # Invalid (not an ancestor)
  ```

- Loop limit reached: check the `max_loops` setting

  ```yaml
  routing:
    max_loops: 10  # Increase if needed
  ```

- JS expression errors: use `log()` to debug

  ```yaml
  on_fail:
    goto_js: |
      log("Current outputs:", outputs);
      log("History:", outputs.history);
      return null;
  ```

See Failure Routing for complete documentation.
Enable debug mode:

```bash
visor --check all --debug
```

Use the logger check type:

```yaml
steps:
  debug-flow:
    type: logger
    depends_on: [previous-check]
    message: |
      Outputs: {{ outputs | json }}
      PR: {{ pr | json }}
```

Use `log()` in JavaScript expressions:

```yaml
steps:
  my-check:
    type: command
    if: |
      log("Files:", filesChanged);
      log("Event:", event);
      return filesChanged.length > 0;
```

Enable tracing with OpenTelemetry:

```bash
VISOR_TELEMETRY_ENABLED=true \
VISOR_TELEMETRY_SINK=otlp \
visor --check all
```

See Debugging Guide for comprehensive techniques.
| Error | Meaning | Solution |
|---|---|---|
| `Configuration not found` | No `.visor.yaml` found | Create a config or use `--config` |
| `Invalid check type` | Unknown provider type | Use a valid type: `ai`, `command`, `script`, etc. |
| `outputs is undefined` | Missing `depends_on` | Add a dependency to access outputs |
| `Rate limit exceeded` | API quota reached | Reduce parallelism or add delays |
| `Command execution failed` | Shell command error | Check command syntax and permissions |
| `Transform error` | Invalid Liquid/JS | Debug with the `log()` function |
See Troubleshooting for more error resolutions.
Possible causes:
- Timeout too short: increase the step timeout

  ```yaml
  steps:
    analysis:
      type: ai
      timeout: 120000  # 2 minutes
  ```

- Model token limits: switch to a model with a larger context window

  ```yaml
  steps:
    analysis:
      type: ai
      ai_model: gpt-4-turbo  # 128k context
  ```

- Prompt too complex: split into smaller, focused prompts
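As a sketch of that last point, one broad review prompt could be split into narrower steps that run independently (the step names and prompts here are illustrative):

```yaml
steps:
  logic-review:
    type: ai
    prompt: "Review only business logic correctness"
  error-handling-review:
    type: ai
    prompt: "Review only error handling and edge cases"
```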
Use on_fail.retry with optional backoff:
```yaml
steps:
  api-call:
    type: http_client
    url: https://api.example.com/data
    on_fail:
      retry:
        max: 3
        backoff:
          mode: exponential
          delay_ms: 1000  # 1s, 2s, 4s
```

You can also configure retries at the AI provider level:

```yaml
steps:
  analysis:
    type: ai
    ai:
      retry:
        maxRetries: 3
        initialDelay: 1000
        backoffFactor: 2
```

See Failure Routing for complete retry options.
Use the memory provider for persistent key-value storage:
```yaml
steps:
  store-value:
    type: memory
    operation: set
    key: my-key
    value: "{{ outputs['previous-check'].result }}"
    namespace: my-workflow

  read-value:
    type: script
    content: |
      const value = memory.get('my-key', 'my-workflow');
      return { retrieved: value };
```

In `script` and routing expressions, use the `memory` object:

```javascript
// Read
const value = memory.get('key', 'namespace');

// Write
memory.set('key', 'value', 'namespace');

// Increment
memory.increment('counter', 1, 'namespace');
```

See Memory Provider for complete documentation.
Use `if` conditions and routing:

Simple conditions:

```yaml
steps:
  security-scan:
    type: ai
    if: "files.some(f => f.filename.includes('security'))"
```

Branch by output:

```yaml
steps:
  check-type:
    type: script
    content: |
      return { type: pr.title.startsWith('fix:') ? 'bugfix' : 'feature' };

  bugfix-review:
    type: ai
    depends_on: [check-type]
    if: "outputs['check-type'].type === 'bugfix'"
    prompt: "Review this bug fix"

  feature-review:
    type: ai
    depends_on: [check-type]
    if: "outputs['check-type'].type === 'feature'"
    prompt: "Review this feature"
```

Declarative routing with transitions:

```yaml
steps:
  validate:
    type: ai
    on_success:
      transitions:
        - when: "outputs['validate'].score >= 90"
          to: publish
        - when: "outputs['validate'].score >= 70"
          to: review
        - when: "true"
          to: reject
```

See Router Patterns for best practices.
Use the built-in test framework with YAML test files:
```yaml
# visor.tests.yaml
version: "1.0"
extends: ".visor.yaml"

tests:
  defaults:
    strict: true
    ai_provider: mock
  cases:
    - name: security-check-runs
      event: pr_opened
      fixture: gh.pr_open.minimal
      mocks:
        security-review:
          text: "No security issues found"
      expect:
        calls:
          - step: security-review
            exactly: 1
```

Run tests:

```bash
# Run all tests
visor test

# Run specific test case
visor test --only security-check-runs

# Validate test file only
visor test --validate
```

See Testing Guide for complete documentation.
Define reusable workflows in separate files:
```yaml
# workflows/security-scan.yaml
id: security-scan
name: Security Scanner

inputs:
  - name: severity_threshold
    schema:
      type: string
      enum: [low, medium, high]
    default: medium

steps:
  scan:
    type: ai
    prompt: |
      Scan for security issues with threshold: {{ inputs.severity_threshold }}

outputs:
  - name: vulnerabilities
    value_js: steps.scan.output.issues
```

Import and use in your main config:

```yaml
# .visor.yaml
imports:
  - ./workflows/security-scan.yaml

steps:
  run-security:
    type: workflow
    workflow: security-scan
    args:
      severity_threshold: high
```

See Reusable Workflows for complete documentation.
- Configuration Reference - Complete configuration options
- Provider Documentation - All 15+ provider types
- Debugging Guide - Troubleshooting techniques
- Recipes - Copy-paste workflow examples
- Workflow Style Guide - Best practices