
TLDR
- A prompt add-on that forces your AI to surface the single highest-leverage question it wishes you’d asked.
- Includes a Think Harder pass, assumption audits, evidence checks, premortems, and decision gates.
- Works by pasting it after any base prompt; no edits to the base prompt are required.
- Comes with a Comprehensive version for high-stakes work and a Concise variant for speed.
- Adds guardrails: privacy redactions, compliance notes, and an explicit evaluation rubric.
Introduction
AI often produces answers without exposing the real decision-driving question you should have asked. The Socratic Sidecar Pro v2.1 add-on fixes this. It attaches to any base prompt, forcing the model to propose and sharpen the single best question, audit assumptions, define evidence, and bind the next step to tests and decision gates.
By using this add-on, you cut down on wasted loops and improve both the clarity and the safety of your outputs.
The Custom Prompt
Master Add-On — Socratic Sidecar Pro v2.1
<<< Socratic Sidecar Pro v2.1 — Meta-Question + Risk-Bound Action >>>
Role: You are an expert assistant optimizing the user’s outcome while strictly following the base prompt above and any applicable safety policies. Do NOT reveal hidden chain-of-thought; provide concise, decision-useful outputs only.
Objective: Surface the single highest-leverage question you would ask if you were the user right now, sharpen it (“think harder”), and convert it into an evidence-backed, risk-bounded, testable next step.
Variables (edit as needed):
- {num_alternatives: 3} # number of alternative questions
- {domain: auto} # e.g., marketing, data, code, product, legal
- {tone: consultative} # consultative | direct | creative | technical
- {lens_focus: auto} # risks | ROI | feasibility | user_experience | ethics | auto
- {mode: sprint} # sprint | deep_dive
- {token_budget: 1200} # guidance for brevity
- {time_bias: speed} # speed | quality
- {self_answer: false} # if true, include a 3–4 bullet mini-answer
- {max_words_per_section: 40}
- {require_sources: false} # if true, include citations or data-acquisition paths
- {guardrails: standard} # standard | regulated(strict): avoid sensitive/private data; propose safe substitutes
- {privacy_redaction: request} # request | silent | off → request the user to redact PII if present
- {stop_when: none} # none | after_decision_gate | after_tests
Method:
1) Infer goal, constraints, success criteria, and risks from the base prompt (do not restate it).
2) Generate diverse candidate questions (outcome, constraints, risks, evaluation, assumptions, {lens_focus}).
3) Select Top-1; perform a **Think Harder Pass** to make it specific, decision-driving, and testable (include variables/ranges/examples).
4) Run Boosters: Assumption Audit, Evidence & Falsification, Premortem + Red-Team, Calibration, Decision Gate, Output Contract, Acceptance Tests.
5) Fit depth to {mode}, obey {token_budget}, and output only the template below.
Template (follow exactly):
- **Top Question:** <one sentence, concrete variables/ranges>
- **Think Harder Version:** <sharper, more targeted and testable>
- **Why This Matters (≤{max_words_per_section}):** <impact on quality/risk>
- **How It Improves the Answer (≤{max_words_per_section}):** <what changes>
- **Assumptions Detected:** • … • … • … (3–6)
- **Assumption Audit (table):**
Assumption | Importance | Confidence | 48h Test
- | - | - | -
<A> | <H/M/L> | <0–100%> | <how>
- **Evidence Wanted:** • <best evidence item + where to get it> (1–3)
- **Fastest Falsifier:** <single observation that would change course>
- **Premortem (3 bullets):** • It failed because… • … • …
- **Red-Team Counter:** <safest viable alternative path in one line>
- **Output Contract:** <fields/types/required/optional/length + one short example>
- **Acceptance Criteria (3–5):** • … • … • …
- **Minimal Tests (2):** • Test name, steps, pass condition • …
- **Alternative Questions ({num_alternatives}):** • … • … • …
- **Confidence (0–100%):** <number> — **Would change with:** <one datapoint + direction>
- **Decision Gate:** <next concrete action if ready; else fastest info to collect next (≤2 steps)>
- **What to Provide Next:** • … • … • … (3–6 precise items)
- **If {self_answer}=true → Mini-Answer Draft:** • … • … • …
Evaluation Rubric (self-check; do not print scores):
- Leverage, Specificity, Non-redundancy, Actionability, Risk Awareness → score 1–5 each. If total <20, improve once, then finalize.
Acceptance Criteria:
- Questions are specific (contain variables, ranges, or examples) and decision-relevant.
- “Think Harder” is materially sharper (not a rephrase).
- Boosters included, concise, and template-true; no fluff; no restating the base prompt.
- Respect {mode}, {token_budget}, {guardrails}, and {require_sources}. If {require_sources}=true, include citations or data paths.
- Do not reveal chain-of-thought; provide only the fields above.
Privacy & Safety:
- If {privacy_redaction}=request and PII/sensitive data is implied, add a one-line redaction reminder in “What to Provide Next”.
- In regulated contexts ({guardrails}=regulated), avoid requesting restricted data; suggest safe proxies.
Stop Conditions:
- If {stop_when}=after_decision_gate or after_tests, stop output after that section.
<<< END Socratic Sidecar Pro >>>
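Mechanically, attaching the sidecar is just string concatenation, with optional overrides applied to the `{name: value}` variables in the block. A minimal Python sketch (the function name and the trimmed-down sidecar text are hypothetical; the real block is the full template above):

```python
import re

def attach_sidecar(base_prompt, sidecar, overrides=None):
    """Append the Sidecar block to a base prompt, optionally rewriting
    {name: value} variables in place via regex substitution."""
    block = sidecar
    for name, value in (overrides or {}).items():
        # Replace e.g. "{mode: sprint}" with "{mode: deep_dive}".
        pattern = r"\{%s:\s*[^}]*\}" % re.escape(name)
        block = re.sub(pattern, "{%s: %s}" % (name, value), block)
    return base_prompt.rstrip() + "\n\n" + block

# Abbreviated sidecar text for illustration only.
sidecar = ("<<< Socratic Sidecar Pro v2.1 >>>\n"
           "- {mode: sprint}\n"
           "- {self_answer: false}\n"
           "<<< END Socratic Sidecar Pro >>>")

prompt = attach_sidecar("Draft a go-to-market plan.", sidecar,
                        {"mode": "deep_dive", "self_answer": "true"})
```

The resulting `prompt` string is what you send to the model: base prompt first, sidecar last, with `{mode: deep_dive}` and `{self_answer: true}` substituted in.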
What This Prompt Does
The Socratic Sidecar Pro v2.1 ensures that every AI response is paired with the most important question you didn’t ask. It transforms that question into a sharper, testable version, audits assumptions, and creates a clear Decision Gate. This reduces guesswork, strengthens risk visibility, and accelerates decision-making.
Example: You paste a market research prompt. Sidecar adds: “Top Question: What is the minimum number of paying customers at $25/mo that validates the offer in 14 days?” It then proposes tests, kill criteria, and red-team counters.
Step by Step Usage
- Write your base prompt for the task at hand.
- Immediately paste the Socratic Sidecar Pro v2.1 block below it.
- Optionally adjust variables such as lens_focus, self_answer, or stop_when.
- Run the prompt. The output will follow the strict template: top question, sharper version, assumptions, evidence, premortem, counter, and decision gate.
Quality and Safety Checks
- Enforces privacy guardrails with optional redaction requests.
- Provides risk lenses: evidence, premortem, red-team, acceptance tests.
- Explicit evaluation rubric ensures each output meets a quality bar.
- Prevents chain-of-thought leakage.
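The "template-true" requirement can itself be checked programmatically. A sketch of a validator (field names come from the template above; the function name is hypothetical):

```python
# Required fields, as they appear in the Sidecar output template.
REQUIRED_FIELDS = [
    "Top Question", "Think Harder Version", "Assumptions Detected",
    "Fastest Falsifier", "Confidence", "Decision Gate",
]

def template_true(output):
    """Return the required template fields missing from a Sidecar output."""
    return [f for f in REQUIRED_FIELDS if "**" + f not in output]

sample = ("- **Top Question:** ...\n- **Think Harder Version:** ...\n"
          "- **Assumptions Detected:** ...\n- **Fastest Falsifier:** ...\n"
          "- **Confidence (0-100%):** 70\n- **Decision Gate:** ...")
missing = template_true(sample)  # → []
```

If `missing` is non-empty, re-run the prompt or tighten `{token_budget}`; a truncated output usually drops the later fields first.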
FAQ
Q1: Do I need to modify my base prompt?
No. Just paste your base prompt first, then paste the Sidecar block.
Q2: What if I want speed?
Use the Concise variant (same structure, shorter outputs).
Q3: How does it reduce re-prompting loops?
By surfacing the next best question, sharpening it, and binding it to evidence and tests.
Q4: Is it safe for regulated contexts?
Yes. Set {guardrails: regulated(strict)} and the add-on avoids requesting restricted data, suggesting safe proxies instead.
Q5: Can it provide draft answers too?
Yes. Set {self_answer: true} and you’ll get a 3–4 bullet mini-answer draft along with the questioning.
Conclusion
The Socratic Sidecar Pro v2.1 add-on transforms any AI prompt into a decision aid with built-in questioning, risk checks, and decision gates. It shortens feedback loops, exposes blind spots, and ensures each step is grounded in evidence. For high-stakes work, it is an essential meta-layer for better, safer outcomes.
Field Drill Walkthrough
Scenario: Evaluating a New Product Launch Prompt
- Base Prompt: “Generate a go-to-market plan for a productivity app.”
- Sidecar Add-On: Surfaces the question: “What is the smallest evidence threshold (number of pilot signups in 2 weeks) that validates demand?”
- Outputs: assumption audit, evidence to collect, premortem risks, red-team counter, acceptance tests, and a clear decision gate.
Top Benefits:
- Forces clarity and testable thresholds.
- Reduces looping and rework.
- Creates a clear next step tied to evidence.
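If you feed Sidecar outputs into downstream tooling, the Decision Gate line can be extracted directly, since the template fixes its formatting. A sketch assuming the template's bold-field convention (function name hypothetical):

```python
import re

def decision_gate(output):
    """Extract the Decision Gate field from a template-true Sidecar output."""
    m = re.search(r"\*\*Decision Gate:\*\*\s*(.+)", output)
    return m.group(1).strip() if m else None

out = ("- **Confidence (0-100%):** 70\n"
       "- **Decision Gate:** Run a 14-day pilot with 25 signups.")
decision_gate(out)  # → "Run a 14-day pilot with 25 signups."
```

Returning `None` when the field is absent gives callers a clean signal to re-prompt rather than act on an incomplete output.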
References and Links
- HBR: Why Organizations Don’t Learn
- Stanford d.school: Design Thinking for Risk
- NIST: Risk Management Framework
