The Research Pipeline
One-liner: Build a complete research synthesis pipeline — from question to evidence-graded conclusions — using structured AI queries and your own critical judgment.
🔧 Jump in (Tinkerers start here)
Pick a question you genuinely need answered for your work. Not a trivia question — something where the answer shapes a real decision.
Phase 1 — Define the research question. Send:
I need to research this question: [your question]
Help me refine it into a research-ready question by:
- Breaking it into 3-4 sub-questions that, if answered, would fully address the main question
- For each sub-question, identifying what type of evidence would count as a strong answer (data, expert consensus, case studies, logical argument, etc.)
- Flagging any assumptions embedded in the main question that I should test
Phase 2 — Structured evidence gathering. For each sub-question, run a separate AI query:
Research sub-question: [sub-question]
For this query, I want structured evidence:
- Strong evidence: Claims supported by widely documented data, peer-reviewed research, or established expert consensus
- Moderate evidence: Claims supported by credible case studies, industry reports, or respected analysis
- Weak evidence: Claims based on anecdotes, single examples, logical inference without data, or common assertions that may not hold up
Classify every claim you make. If you’re not sure about the evidence quality, say so. I’d rather have honest uncertainty than false confidence.
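If you want to track graded claims across queries rather than in prose, a small record structure helps. This is a minimal sketch, not part of the exercise prompts; the class and function names are illustrative:

```python
from dataclasses import dataclass
from enum import Enum

class Grade(Enum):
    """The three evidence tiers from the Phase 2 prompt."""
    STRONG = 3    # widely documented data, peer review, expert consensus
    MODERATE = 2  # credible case studies, industry reports
    WEAK = 1      # anecdotes, single examples, inference without data

@dataclass
class Claim:
    text: str
    grade: Grade
    sub_question: str

def strongest_first(claims):
    """Sort claims so the best-supported evidence leads your brief."""
    return sorted(claims, key=lambda c: c.grade.value, reverse=True)

claims = [
    Claim("Anecdote from one team", Grade.WEAK, "SQ1"),
    Claim("Meta-analysis result", Grade.STRONG, "SQ1"),
    Claim("Industry survey finding", Grade.MODERATE, "SQ2"),
]
ordered = strongest_first(claims)
print([c.grade.name for c in ordered])  # ['STRONG', 'MODERATE', 'WEAK']
```

Keeping the grade attached to each claim makes Phase 4 easier: the three strongest pieces of evidence are just the top of the sorted list.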
Phase 3 — Contradiction analysis. After running all sub-queries, send this to a fresh session:
Here are the findings from my research on [main question]:
Sub-question 1 findings: [paste summary]
Sub-question 2 findings: [paste summary]
Sub-question 3 findings: [paste summary]
Analyze the contradictions:
- Where do the findings from different sub-questions conflict?
- Which conflicts can be resolved by looking at the evidence quality?
- Which conflicts are genuine unresolved tensions?
- What additional evidence would resolve the remaining tensions?
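The resolution rule in this prompt — settle a conflict only when one side clearly out-grades the other — can be made explicit. A sketch under that assumption, with illustrative names:

```python
from dataclasses import dataclass

GRADE_RANK = {"strong": 3, "moderate": 2, "weak": 1}

@dataclass
class Finding:
    sub_question: str
    claim: str
    grade: str  # "strong" | "moderate" | "weak"

def resolve_conflict(a: Finding, b: Finding):
    """Resolve a conflict by evidence quality alone; equal grades
    stay unresolved -- a genuine tension needing more evidence."""
    ra, rb = GRADE_RANK[a.grade], GRADE_RANK[b.grade]
    if ra > rb:
        return ("resolved", a)
    if rb > ra:
        return ("resolved", b)
    return ("unresolved", None)

status, winner = resolve_conflict(
    Finding("SQ1", "Adoption improves retention", "strong"),
    Finding("SQ2", "Adoption hurts retention", "weak"),
)
print(status, winner.claim)  # resolved Adoption improves retention
```

Conflicts that come back "unresolved" are exactly the ones to carry into your Key uncertainty section in Phase 4.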
Phase 4 — Your synthesis. Write a 500-word research brief yourself (not AI-generated) that answers your original question. Structure it as:
- Bottom line: Your answer in 1-2 sentences
- Key evidence: The 3 strongest pieces of evidence supporting your answer, with evidence grades
- Key uncertainty: What you’re least confident about and why
- What would change your mind: 1-2 pieces of evidence that, if found, would reverse your conclusion
📋 Plan first (Planners start here)
Here’s what you’re about to do:
- Formulate a research question — Choose something decision-relevant. Use AI to decompose it into sub-questions with defined evidence standards.
- Gather evidence by sub-question — Run separate queries for each sub-question, requiring the AI to grade its own evidence quality (strong/moderate/weak).
- Analyze contradictions — Feed all findings into a fresh session and ask for conflict analysis. Identify which conflicts are real vs. caused by weak evidence.
- Write your own synthesis — Produce a 500-word brief that answers the question, cites evidence with quality grades, and states what would change your mind.
- Assess the pipeline — Evaluate whether this process produced a meaningfully better answer than a single AI query would have.
“Done” looks like: A research brief that clearly distinguishes strong from weak evidence, acknowledges uncertainty, and provides a decision-ready answer with stated confidence.
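The query structure above can be sketched as a small orchestrator. The model call is stubbed out here (swap in whatever client you use); every function name is illustrative, not a real API:

```python
def query_model(prompt: str) -> str:
    """Placeholder for your AI call -- replace with a real client."""
    return f"[model response to: {prompt[:40]}...]"

def research_pipeline(question: str, sub_questions: list[str]) -> dict:
    """Run each sub-question as a separate query, then hand only the
    summaries to a fresh contradiction-analysis pass."""
    findings = {
        sq: query_model(
            f"Research sub-question: {sq}\n"
            "Classify every claim as strong/moderate/weak evidence."
        )
        for sq in sub_questions
    }
    # Phase 3 runs in a "fresh session": pass summaries, not the chats.
    summaries = "\n".join(f"{sq} findings: {text}" for sq, text in findings.items())
    contradictions = query_model(
        f"Here are the findings on {question}:\n{summaries}\n"
        "Analyze the contradictions."
    )
    return {"findings": findings, "contradictions": contradictions}

result = research_pipeline("Should we adopt X?", ["SQ1", "SQ2"])
```

Note that Phase 4, the synthesis, is deliberately absent: that part is yours to write.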
🧭 Why this matters (Strategists start here)
This exercise combines the skills from IS-Basic-01 (extracting signal from noise) and IS-Intermediate-01 (triangulating across perspectives) into a complete research methodology. The evidence grading system prevents the common failure mode of treating all AI output as equally reliable. The contradiction analysis surfaces genuinely open questions rather than papering over them. This pipeline is directly applicable to due diligence, competitive intelligence, policy analysis, and any context where the cost of being wrong is high and the question is too complex for a single query.
Reflection
- Did the evidence grading change which findings you trusted? Were you surprised by what was classified as “weak”?
- How did the contradiction analysis change your initial view?
- Was the 500-word synthesis harder or easier than expected? What was the hardest part?
- What surprised you about the output?
- What did you have to fix or override?
- How would you explain what you just did to a colleague?
- 💬 Discuss: Try explaining your result to someone who hasn’t used AI for this task. What questions do they ask? (Social Learners)
⬆️ Level up
You’ve reached the advanced level for Insight Synthesis. From here, consider:
- Using this pipeline for a real decision and tracking whether your evidence-graded conclusion held up
- Combining this with AC-Advanced-01 to delegate different research phases to different agent roles
- Teaching this method to a colleague and seeing how they adapt it