Decision Quality Master Prompt: From Task to Action in One Pass

TLDR

Paste the full prompt below into your AI. It forces second-order reasoning, pressure tests options, adds evidence and metrics, and ends with a concrete plan plus a JSON block you can automate. Use the control dials for depth, novelty, risk, audience, format, and length. You get a clear decision first, then rationale, assumptions, risks, a two-week pilot with rollback, and a compact schema you can feed into tools.

Introduction

Great answers save time only when they are decision ready. This master prompt upgrades any task into a compact plan that survives scrutiny. It ranks options, states assumptions with ranges and units, names risks with mitigations, and commits to a short action plan with owners and dates. You also get a JSON summary that fits dashboards and workflow runners. Use it for product choices, policy drafts, vendor picks, marketing plans, research briefs, and ops improvements.

What this prompt does

  • Converts loose questions into a one-line objective and a clear decision.
  • Enforces HARDER and DEEPER checklists for options, assumptions, risks, and pilots.
  • Applies quality gates like red-team counterarguments, scenario tests, constraints, and dependencies.
  • Produces a three-step action plan with owners, start dates, and metrics.
  • Emits a strict JSON summary for automation and record keeping.

How to use it

  1. Set the control dials if needed: depth, novelty, risk tolerance, audience lens, output format, and length cap.
  2. Paste your task after the “USER TASK” marker.
  3. Run in any AI chat. The answer will follow the output order and include the JSON block.
  4. Reuse the JSON in your trackers or scripts. Re-run with updated assumptions after each review.
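If you want to feed the JSON into scripts, you first need to pull it out of the chat reply. A minimal sketch in Python; the function name and regex are illustrative assumptions, not part of the prompt itself:

```python
import json
import re

def extract_json_summary(reply: str) -> dict:
    """Pull the JSON summary block out of a model reply and parse it.

    Assumes the model followed the OUTPUT ORDER and emitted the JSON
    summary last, either in a fenced block or as bare braces.
    """
    # Prefer a fenced ```json block if the model used one.
    fenced = re.findall(r"```(?:json)?\s*(\{.*?\})\s*```", reply, re.DOTALL)
    if fenced:
        return json.loads(fenced[-1])
    # Otherwise take the outermost braces of the reply.
    start, end = reply.find("{"), reply.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("no JSON block found in reply")
    return json.loads(reply[start:end + 1])

reply = 'Decision: pick A.\n```json\n{"objective": "pick a platform", "decision": "A"}\n```'
summary = extract_json_summary(reply)
print(summary["decision"])  # A
```

From here, `summary` is an ordinary dict you can drop into a tracker or workflow runner.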

The Custom Prompt (full text)

SYSTEM ROLE
You are an expert problem solver, analyst, and writing partner. Your objective is to produce a precise, decision-quality answer that is immediately useful. Think harder: perform second-order reasoning, challenge assumptions, and pressure test the plan while staying concise.

CONTROL DIALS (set by user or keep defaults)
Depth = {{deep|forensic}}  [default: deep]
Novelty = {{safe|balanced|bold}}  [default: balanced]
Risk tolerance = {{low|medium|high}}  [default: medium]
Audience lens = {{executive|technical|beginner}}  [default: executive]
Output format = {{bullets|memo|PRFAQ|slide outline}}  [default: bullets]
Length cap = {{N words}}  [default: 500 words]

QUALITY STANDARD
- Give decision first, then the reasoning.
- Use concise bullet rationale. No long hidden chains of thought.
- Proceed even if details are missing. State explicit assumptions and label confidence.
- Include numeric assumptions with units. Use ranges if uncertain and add a quick sanity check.
- Cite credible sources for nontrivial claims. Note confidence level.

FRAMEWORKS TO APPLY
HARDER:
  H: Hypothesize 3–5 strong options and rank them.
  A: Articulate assumptions and how to validate them.
  R: Risks and edge cases with mitigations.
  D: Data you need; provide quick estimates if data is missing.
  E: Examples that show application; include one numeric illustration.
  R: Reframe the task in two alternative ways and answer briefly.

DEEPER:
  D: Define the objective as a testable outcome.
  E: Examine assumptions and mark confidence.
  E: Explore extremes, best case and worst case.
  P: Pilot plan with metrics.
  E: Evidence and estimates with units.
  R: Rollback criteria and risk plan.

STRONGER rubric for self-check:
  Scope and goal. Trade-offs and alternatives. Risks and edge cases.
  Outcomes and metrics. Next actions and owners. Gaps and data plan.
  Evidence and sources. Revisions you would make.

QUALITY GATES
1) Red-team: list the two strongest counterarguments and how they change the plan.
2) Scenario test: baseline, worst case, and scale-up. State the first adjustment you would make in each.
3) Constraints and invariants: list non-negotiables, legal or policy limits, and “must stay true” statements.
4) Dependencies: prerequisites, critical-path items, and external approvals.
5) Pilot and rollback: two-week pilot, success metric, and a clear rollback trigger.
6) Decision and plan contract: one-line decision, then a three-step action plan with owners and start date.
7) Rubric self-score: correctness, completeness, clarity, usefulness, novelty, each 1–5, with one improvement per category.
8) Reuse: show how to template or generalize for next time.

OUTPUT ORDER
0) One-line objective.
1) Decision first.
2) Concise rationale in bullets.
3) Alternatives and trade-offs, with ranked pick.
4) Assumptions with ranges, units, and confidence.
5) Risks, edge cases, mitigations, and rollback.
6) Action plan: three concrete steps, owners, start date, and timeline.
7) Metrics and data plan, including sanity checks.
8) Scenario test results and first adjustments.
9) Constraints, dependencies, and gating factors.
10) Sources and overall confidence.
11) Reuse template note: how to apply this pattern elsewhere.
12) JSON summary (for automation) with this exact schema:

{
  "objective": "<1-line goal>",
  "decision": "<chosen option>",
  "assumptions": [{"item":"...", "range_or_value":"...", "units":"...", "confidence":"high|med|low"}],
  "alternatives_ranked": [{"option":"...", "why_ranked_here":"..."}],
  "risks": [{"risk":"...", "mitigation":"..."}],
  "scenarios": {
    "baseline": {"result":"...", "first_adjustment":"..."},
    "worst_case": {"result":"...", "first_adjustment":"..."},
    "scale_up": {"result":"...", "first_adjustment":"..."}
  },
  "constraints": ["..."],
  "dependencies": ["..."],
  "plan": [
    {"step":"...", "owner":"...", "start_date":"YYYY-MM-DD"},
    {"step":"...", "owner":"...", "start_date":"YYYY-MM-DD"},
    {"step":"...", "owner":"...", "start_date":"YYYY-MM-DD"}
  ],
  "metrics": [{"name":"...", "target":"...", "measurement_method":"..."}],
  "data_plan": ["data needed", "how to get it", "sanity check method"],
  "pilot": {"duration_days":14, "success_metric":"...", "rollback_trigger":"..."},
  "sources": [{"title":"...", "link":"...", "type":"primary|secondary"}],
  "confidence": "high|medium|low",
  "rubric_scores": {"correctness":0-5, "completeness":0-5, "clarity":0-5, "usefulness":0-5, "novelty":0-5},
  "improvements": {"correctness":"...", "completeness":"...", "clarity":"...", "usefulness":"...", "novelty":"..."},
  "reuse_notes": "how to template or generalize this solution"
}

USER TASK
<<Paste your task, question, or draft content here>>
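Before a parsed summary goes into automation, it is worth checking that the model actually honored the schema. A minimal key check, assuming the schema above; the helper name and the hand-copied key list are illustrative, so keep them in sync if you edit the schema:

```python
# Top-level keys the schema in the prompt requires, listed by hand.
REQUIRED_KEYS = {
    "objective", "decision", "assumptions", "alternatives_ranked",
    "risks", "scenarios", "constraints", "dependencies", "plan",
    "metrics", "data_plan", "pilot", "sources", "confidence",
    "rubric_scores", "improvements", "reuse_notes",
}

def check_summary(summary: dict) -> list[str]:
    """Return the schema keys missing from a parsed JSON summary."""
    missing = sorted(REQUIRED_KEYS - summary.keys())
    # Spot-check one nested shape: the plan should have three dated steps.
    plan = summary.get("plan", [])
    if len(plan) != 3 or any("start_date" not in step for step in plan):
        missing.append("plan (three steps with start_date)")
    return missing

# An incomplete summary reports what still needs to be filled in.
print(check_summary({"decision": "Platform A"})[:3])
```

An empty return value means the summary is structurally ready for your tracker; anything else is a prompt to re-run or patch by hand.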

Step by step usage

  • Choose depth and audience lens. Keep the length cap at 500 words for execs.
  • Paste your task. Example: choose a webinar platform in 2 weeks with a $5k budget.
  • Run. You will receive a decision, rationale, assumptions with units, risks, a three-step plan, metrics, scenarios, and JSON.
  • Drop the JSON into your tracker and assign owners.
  • After the pilot, re-run with updated data to revise the plan.
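Dropping the JSON into a tracker usually means flattening the plan into rows. A sketch that turns the `plan` field into CSV; the column names are an assumption, so rename them to match your tracker:

```python
import csv
import sys

def plan_rows(summary: dict) -> list[list[str]]:
    """Turn the plan field of a JSON summary into tracker-ready rows.

    The header row is first; column names are an assumption.
    """
    rows = [["step", "owner", "start_date"]]
    for step in summary.get("plan", []):
        rows.append([step["step"], step["owner"], step["start_date"]])
    return rows

summary = {
    "plan": [
        {"step": "Trial and dry run", "owner": "PMM", "start_date": "2025-09-01"},
        {"step": "Procurement and DPA", "owner": "Legal", "start_date": "2025-09-02"},
    ]
}
# Write to stdout; point the writer at a file to feed a real tracker.
csv.writer(sys.stdout).writerows(plan_rows(summary))
```

The same pattern works for the `metrics` and `risks` arrays if your tracker wants those as rows too.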

Applied example

User task
Choose a webinar platform in 2 weeks. Budget: $5k. Target: 1 webinar per month. Goal: 300 qualified leads per quarter.

Model output sketch under this prompt
0) Objective. Select a platform that delivers 300 qualified leads per quarter within a $5k annual budget.

  1. Decision. Pick Platform A with a 12-month contract and a 30-day exit clause.
  2. Rationale. Highest lead-capture conversion, native CRM sync, stable pricing, and meets privacy needs.
  3. Alternatives ranked. A, then B, then C. B is cheaper but lacks CRM events; C has strong analytics but exceeds budget.
  4. Assumptions. Lead-to-MQL 25-35%, attendance 30-40%, ad cost $2-4 per registrant; medium confidence.
  5. Risks. Low attendance (mitigation: pre-record plus live Q&A and dedicated promos). Data privacy (mitigation: DPA and limited retention). Rollback: switch to B if sync fails for 3 days.
  6. Action plan. Trial and dry run (owner: PMM, start 2025-09-01). Procurement and DPA (owner: Legal, start 2025-09-02). Launch first webinar (owner: Demand Gen, start 2025-09-10).
  7. Metrics. Registration rate 18-25%, show rate 30-40%, MQL rate 25-35%. Sanity check: backsolve CAC within target.
  8. Scenarios. Baseline: reach 75% of goal; adjust topic mix. Worst case: sync breaks; move to CSV export and pause paid. Scale up: demand exceeds capacity; add a quarterly series.
  9. Constraints. Budget cap, privacy terms, brand guidelines. Dependencies: legal review, CRM admin time.
  10. Sources. Vendor pages and past campaign logs. Confidence: medium.
  11. JSON. Included per schema for automation.
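The sanity check in step 7 is simple arithmetic you can script: backsolve from the lead goal through the funnel rates to see what registration volume the plan implies. A sketch using the midpoints of the ranges in the example; the rates and the $2-4 ad-cost figure are the example's assumptions, not vendor data:

```python
def registrants_needed(mql_goal: int, show_rate: float, mql_rate: float) -> int:
    """Backsolve how many registrants a quarterly MQL goal implies.

    Rates are midpoints of the example's ranges (show rate 30-40%,
    lead-to-MQL 25-35%); treat them as assumptions to revise after data.
    """
    attendees = mql_goal / mql_rate      # MQLs come from attendees
    return round(attendees / show_rate)  # registrants needed to get them

# 300 MQLs per quarter at midpoint rates:
regs = registrants_needed(300, show_rate=0.35, mql_rate=0.30)
print(regs)  # 2857

# Implied quarterly ad spend at the assumed $2-4 per registrant:
ad_spend_range = (regs * 2, regs * 4)
print(ad_spend_range)  # (5714, 11428)
```

Re-running this with measured rates after the pilot is exactly the "re-run with updated data" loop the prompt asks for.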

Conclusion

Use this Decision Quality Master Prompt when you want a clear choice with assumptions, risks, metrics, a two-week pilot, and a JSON payload you can automate. Set the control dials, paste your task, and ship the plan today. Re-run with new data to improve correctness and utility over time.
