Build the CLI tools before starting: `make cli` (builds `pred` and `pred-sym`). All commands below assume `pred` and `pred-sym` are available via `cargo run -p problemreductions-cli --bin pred --` and `cargo run -p problemreductions-cli --bin pred-sym --` respectively.
## CRITICAL: Output Visibility
Bash tool results are hidden from the user in the Claude Code UI. **After every `pred`/`pred-sym` command, you MUST copy-paste the full stdout/stderr into your response as text.** The pattern for every command is:
1. Announce the command and why: "Let me run `pred to MIS --hops 3` to discover all problems that can reduce to MIS:"
2. Run the command via the Bash tool
3. Copy-paste the complete output into your response as a fenced code block

Never skip step 1 or 3.
2. **For each discovered problem**, run:
- `pred path <source> <model>` — get the cheapest witness-capable reduction path
- **IMPORTANT:** Use the exact variant-qualified name from `pred to` output (e.g., `SpinGlass/SimpleGraph/f64`, not bare `SpinGlass`). Bare names resolve to the default variant, which may differ from the reachable variant and cause false "no path" errors.
- `pred show <source>` — get best-known brute-force complexity
3. **Compute effective complexity** for each source problem:
- Take the user's solver complexity expression (e.g., `O(1.1996^num_vertices)`)
- Substitute the overhead expressions from the reduction path into the solver's variables
- Example: if MVC→MIS has overhead `num_vertices = num_vertices`, then solving MVC via MIS costs `O(1.1996^num_vertices)` — same as MIS
- Example: if overhead is `num_vertices = num_clauses * 3`, then effective complexity is `O(1.1996^(3 * num_clauses))`
- **Use `pred-sym` to verify:** after manual substitution, run `pred-sym big-o "<effective_expr>"` to normalize the expression. Use `pred-sym eval --vars <bindings> "<expr>"` at a concrete size (e.g., n=20) to numerically verify the simplification.
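The substitute-then-verify loop in step 3 can be sketched in plain Python (a hypothetical stand-in for what `pred-sym eval` checks, using the example numbers above — solver base 1.1996 and overhead `num_vertices = 3 * num_clauses`):

```python
# Solver complexity O(1.1996^num_vertices)
def solver_cost(num_vertices):
    return 1.1996 ** num_vertices

# Effective complexity after substituting the reduction overhead
def effective_cost(num_clauses):
    return solver_cost(3 * num_clauses)

# Manually simplified form: 1.1996^(3m) = (1.1996^3)^m, roughly 1.7263^m
def simplified_cost(num_clauses):
    return (1.1996 ** 3) ** num_clauses

# Numeric spot-check at a concrete size, as step 3 suggests (e.g., m = 20)
m = 20
assert abs(effective_cost(m) - simplified_cost(m)) / effective_cost(m) < 1e-9
```

The point of the concrete evaluation is to catch algebra slips in the manual simplification before the comparison in step 4.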
4. **Compare to best-known**: for each source, compare effective complexity to the source's own best-known complexity from `pred show`. Classify as:
- **Better** — effective complexity has a smaller base or exponent than best-known
- **Similar** — comparable asymptotic behavior
- **Worse** — effective complexity exceeds best-known (reduction overhead makes it impractical)
- **When effective and best-known use different variables** (e.g., `O(1.5^num_subsets)` vs `O(2^universe_size)`): this happens when a problem has multiple independent size fields and the best-known algorithm's dominant variable differs from the reduction overhead's. In this case, use `pred-sym eval` at representative concrete values to determine the comparison. State the result conditionally: "Better when num_subsets ≤ c·universe_size" with the crossover ratio.
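The cross-variable comparison above can be made concrete with a quick computation (hypothetical numbers: `O(1.5^num_subsets)` via the reduction vs `O(2^universe_size)` best-known):

```python
import math

# Crossover: 1.5^s = 2^u  =>  s * ln(1.5) = u * ln(2)  =>  s/u = ln(2)/ln(1.5)
crossover_ratio = math.log(2) / math.log(1.5)   # roughly 1.71

# The reduction wins whenever num_subsets < crossover_ratio * universe_size.
s, u = 30, 20                       # representative concrete values
via_reduction = 1.5 ** s
best_known = 2.0 ** u
assert (s < crossover_ratio * u) == (via_reduction < best_known)
```

Here the conditional statement would be: "Better when num_subsets ≤ 1.71 · universe_size."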
5. **Web search** only the **Better** and **Similar** candidates for real-world applications (not the Worse ones). Use `WebSearch` tool with query "<problem name> real-world applications".
**If `--hops 3` returns more than 15 results:** present only the top 10 by effective complexity and mention the rest are available if the user wants to see them.
**Goal:** Show all discovered problems ranked by practical usefulness.
Present a ranked table (most practical first). **Mark a recommendation** — highlight the "Better" entries as the most valuable discoveries:
| # | Problem | Hops | Overhead | Effective Complexity | vs Best-Known | Applications |
|---|---------|------|----------|----------------------|---------------|--------------|
Ask using `AskUserQuestion`: "Which problems would you like included in the solution doc? Pick numbers, or 'all practical' for only the Better/Similar ones."
Ask the user to confirm the filename before writing.
**Before writing the doc**, run `pred create <Source> --help` for each selected problem to verify the correct CLI flag names. Use the flags exactly as shown in the help output.
1. Show the user the generated filename and a brief summary of what's in it.
2. **If a built-in solver covers the model** (brute-force or ILP), offer to run a live demo with one of the "Better" problems: "Want me to run an example end-to-end so you can see it in action?"
3. Ask if they want to make any changes before finishing.
---
## Key Behaviors
- **One question at a time.** Never ask multiple questions in one message. Use `AskUserQuestion` for every decision point.
- **Web search only Better/Similar candidates.** In Step 2, web search only the problems classified as Better or Similar for real-world use cases. Skip Worse ones unless the user asks for all. Never guess applications from internal knowledge alone.
- **Show full output.** After every Bash tool call, copy-paste the COMPLETE output into your text response as a fenced code block. Bash tool results are hidden in the UI.
- **Announce every command.** Before running, say what command you're using and why.
- **Always use variant-qualified names in `pred path`.** When `pred to` returns names like `SpinGlass/SimpleGraph/f64`, use that exact string in subsequent `pred path` calls. Bare names (e.g., `SpinGlass`) resolve to the default variant, which may differ from the reachable variant and cause false "no path" errors.
- **Recommend, don't just list.** When presenting the ranked table in Step 3, bold the "Better" entries as the most valuable discoveries. The user can still pick freely.
- **Compact formatting.** Write explanations as plain paragraphs. Do not use blockquote `>` syntax for explanations. Keep tight: command announcement, code block output, 1-3 sentence explanation.
- **Conversational tone.** Guided consultation, not a lecture.
- **Live execution.** Every `pred` command runs for real. No fake output.
---

## .claude/skills/find-solver/SKILL.md
| 2 | ... | ... | ... |
| 3 | ... | ... | ... |
**Include a recommendation:** Bold or mark the option you think is the best fit, with a brief reason why.
4. For each candidate, run `pred show <model>` and show the output — fields, complexity, available reductions. This helps the user see what data they would need to provide.
5. **Check optimization vs decision mismatch.** If the user's goal is "minimize X" or "maximize X" but the matched model is a decision/feasibility problem (Value = `Or`, fields include a `deadline`/`bound`), explain the gap:
- "This model checks feasibility ('can it be done within bound D?'), not optimization directly."
- "To find the optimum, we'll binary search on the bound parameter."
- This is common for scheduling problems (deadline), knapsack (bound), etc.
6. **Ask the user to pick one** using `AskUserQuestion`. If none fit, ask the user for more detail and re-run the web search with refined keywords.
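The binary-search-on-the-bound idea from step 5 can be sketched as follows (a hypothetical driver: in practice each `feasible(mid)` call would be a `pred solve` run reporting Or(true)/Or(false)):

```python
def minimize_via_feasibility(feasible, lo, hi):
    """Smallest integer bound D in [lo, hi] with feasible(D) True.

    Assumes monotone feasibility: once True, stays True for larger D.
    """
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):        # Or(true): optimum is at mid or below
            hi = mid
        else:                    # Or(false): optimum is strictly above mid
            lo = mid + 1
    return lo

# Toy oracle standing in for a feasibility solve: doable iff bound >= 17
assert minimize_via_feasibility(lambda d: d >= 17, 0, 100) == 17
```

This turns a decision model into an optimizer at the cost of O(log(hi − lo)) solver calls.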
**Proceed to Step 3 with the chosen model.**
2. **For each reachable problem**, gather info:
- Run `pred path <model> <target>` to get the cheapest witness-capable reduction path and composed overhead
- **IMPORTANT:** Use the exact variant-qualified name from `pred from` output (e.g., `SpinGlass/SimpleGraph/f64`, not bare `SpinGlass`). Bare names resolve to the default variant, which may differ from the reachable variant and cause false "no path" errors.
- Run `pred show <target>` to get its best-known complexity
- Check if it's a solver-ready target (ILP, QUBO, SAT) or has a path to one via `pred path <target> ILP`
3. **Present a ranked table** (most practical paths first — fewest hops, lowest overhead). **Mark a recommendation** for the most practical path:
| 3 | MaxSetPacking | 1 | num_sets = n | O(2^num_sets) | Yes (ILP in 2 steps) |
When overhead grows significantly between options (e.g., linear vs quadratic), note the practical implication: "QUBO adds quadratic variable blowup — prefer this only if targeting quantum/annealing hardware."
4. **Ask the user** using `AskUserQuestion`: "Which reduction path would you like to use? Pick a number."
**If `pred from --hops 3` returns more than 15 results:** present only the top 10 by overhead and mention the rest are available.
```
# Try midpoint, narrow based on Or(true)/Or(false)
```
## Solution Extraction
```
pred evaluate input.json --config <solution_vector>
```
**After writing the doc:**
1. Show the user the generated filename and a brief summary of what's in it.
2. **If a built-in solver covers the chosen path** (brute-force or ILP), offer to run a live demo with the example instance: "Want me to run the example end-to-end so you can see it in action?"
3. Ask if they want to make any changes before finishing.
---
- **Web search before recommendations.** In Step 2 (model matching) and Step 4 (solver recommendation), always web search first. Never rely on internal knowledge alone.
- **Show full output.** After every Bash tool call, copy-paste the COMPLETE output into your text response as a fenced code block. Bash tool results are hidden in the UI.
- **Announce every command.** Before running, say what command you're using and why.
- **Always use variant-qualified names in `pred path`.** When `pred from` returns names like `SpinGlass/SimpleGraph/f64`, use that exact string in subsequent `pred path` calls. Bare names (e.g., `SpinGlass`) resolve to the default variant, which may differ from the reachable variant and cause false "no path" errors.
- **Recommend, don't just list.** When presenting options (models in Step 2, paths in Step 3, solvers in Step 4), always bold or mark your recommended choice with a brief reason. The user can still pick freely.
- **Compact formatting.** Write explanations as plain paragraphs. Do not use blockquote `>` syntax for explanations. Keep tight: command announcement, code block output, 1-3 sentence explanation.
- **Conversational tone.** Guided consultation, not a lecture.
- **Live execution.** Every `pred` command runs for real. No fake output.
- **Graceful fallbacks.** If a path doesn't exist or a command fails, explain what happened and suggest alternatives (try another model, use brute-force, backtrack).
- **Adapt to user level.** If the user gives a formal problem name, skip clarification. If they describe a fuzzy real-world problem, ask follow-ups one at a time.
- **Use `--timeout 30`** with `pred solve` in any live demos during the session.
- **Doc template sections are conditional.** "Finding the Optimum" only applies to decision models. "External Solver Alternatives" only applies when external solvers were chosen. "Solution Extraction" can be folded into "Solving" when the bundle workflow handles it automatically.
---

## README.md

```
make cli # builds target/release/pred
```
See the [Getting Started](https://codingthrust.github.io/problem-reductions/getting-started.html) guide for usage examples, the reduction workflow, and [CLI usage](https://codingthrust.github.io/problem-reductions/cli.html).
**Have a problem and looking for a solver?** Run `/find-solver` — it matches your real-world problem to a library model, explores reduction paths, and recommends solvers.
**Have a solver and wondering what it can solve?** Run `/find-problem` — given a solver for a specific model, it discovers all other problems reachable via incoming reductions, ranked by effective complexity.