
---
title: Adversarial Review
description: Forced reasoning technique that prevents lazy "looks good" reviews
---

Force deeper analysis by requiring problems to be found.

## What is Adversarial Review?

A review technique where the reviewer must find issues. No "looks good" allowed. The reviewer adopts a cynical stance - assume problems exist and find them.

This isn't about being negative. It's about forcing genuine analysis instead of a cursory glance that rubber-stamps whatever was submitted.

The core rule: you must find issues. Zero findings triggers a halt - re-analyze the artifact or explain why no issues exist.
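The halt rule can be sketched as a simple guard. This is an illustrative sketch only; the function name and exception are hypothetical, not part of any BMAD API:

```python
# Hypothetical sketch of the zero-findings halt rule.
def enforce_findings(findings: list[str]) -> list[str]:
    """Reject an empty review: re-analyze or justify the absence of issues."""
    if not findings:
        # A review with zero findings is treated as a failed review,
        # not an approval.
        raise RuntimeError(
            "Zero findings: halt. Re-analyze or explain why no issues exist."
        )
    return findings
```

The point of the guard is that "no findings" is never a silent success path - the reviewer must do something explicit to get past it.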

## Why It Works

Normal reviews suffer from confirmation bias. You skim the work, nothing jumps out, you approve it. The "find problems" mandate breaks this pattern:

- **Forces thoroughness** - Can't approve until you've looked hard enough to find issues
- **Catches missing things** - "What's not here?" becomes a natural question
- **Improves signal quality** - Findings are specific and actionable, not vague concerns
- **Information asymmetry** - Run reviews with fresh context (no access to the original reasoning) so you evaluate the artifact, not the intent

## Where It's Used

Adversarial review appears throughout BMAD workflows - code review, implementation readiness checks, spec validation, and others. Sometimes it's a required step, sometimes optional (like advanced elicitation or party mode). The pattern adapts to whatever artifact needs scrutiny.

## Human Filtering Required

Because the AI is instructed to find problems, it will find problems - even when they don't exist. Expect false positives: nitpicks dressed as issues, misunderstandings of intent, or outright hallucinated concerns.

You decide what's real. Review each finding, dismiss the noise, fix what matters.

## Example

Instead of:

> "The authentication implementation looks reasonable. Approved."

An adversarial review produces:

  1. HIGH - login.ts:47 - No rate limiting on failed attempts
  2. HIGH - Session token stored in localStorage (XSS vulnerable)
  3. MEDIUM - Password validation happens client-side only
  4. MEDIUM - No audit logging for failed login attempts
  5. LOW - Magic number 3600 should be SESSION_TIMEOUT_SECONDS
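Findings like these carry a consistent shape - severity, location, summary - which could be modeled as structured records. The types below are a hypothetical illustration, not a BMAD data format:

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    HIGH = "HIGH"
    MEDIUM = "MEDIUM"
    LOW = "LOW"


@dataclass
class Finding:
    severity: Severity
    location: str  # e.g. a file:line reference; empty when the issue is cross-cutting
    summary: str


findings = [
    Finding(Severity.HIGH, "login.ts:47", "No rate limiting on failed attempts"),
    Finding(Severity.HIGH, "", "Session token stored in localStorage (XSS vulnerable)"),
]
```

Keeping findings structured is what makes the human filtering step practical - each record can be individually accepted or dismissed.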

The first review misses the security vulnerabilities entirely. The second catches four.

## Iteration and Diminishing Returns

After addressing findings, consider running the review again. A second pass usually catches more, and a third can still be worthwhile. But each pass takes time, and eventually you hit diminishing returns - only nitpicks and false positives remain.

:::tip[Better Reviews]
Assume problems exist. Look for what's missing, not just what's wrong.
:::