How Counsel turns multi-model deliberation into better decisions. Technical deep-dives on steerability, committee design, and decision intelligence.

What if your devil's advocate is making things worse?
We added a permanent contrarian to our multi-agent debates. Quality went down. The fix: inject adversarial roles dynamically, only when the committee starts agreeing too fast.
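A minimal sketch of that trigger, assuming a crude token-overlap proxy for agreement; the threshold, round window, and function names here are illustrative, not the production implementation:

```python
# Illustrative sketch: inject a devil's-advocate role only when the
# committee converges too quickly, instead of keeping one around permanently.
from itertools import combinations

CONTRARIAN_PROMPT = (
    "You are a temporary devil's advocate. Identify the strongest argument "
    "against the position the other agents are converging on."
)

def _token_overlap(a: str, b: str) -> float:
    """Crude agreement proxy: Jaccard overlap of lowercased word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def agreement_score(responses: list[str]) -> float:
    """Mean pairwise overlap across all committee responses in one round."""
    pairs = list(combinations(responses, 2))
    if not pairs:
        return 0.0
    return sum(_token_overlap(a, b) for a, b in pairs) / len(pairs)

def maybe_inject_contrarian(round_idx: int, responses: list[str],
                            threshold: float = 0.6,
                            early_rounds: int = 2) -> str | None:
    """Return a contrarian system prompt only when consensus forms too fast.

    A permanent contrarian (injecting every round) is what hurt quality;
    the threshold and early-round window are assumed values for illustration.
    """
    if round_idx < early_rounds and agreement_score(responses) >= threshold:
        return CONTRARIAN_PROMPT
    return None
```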
When one LLM plays both sides of a debate, 74% of the analytical vocabulary is shared between opposing roles. The real diversity premium isn't in quality -- it's in consistency.
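One plausible way to measure that kind of overlap (not necessarily the methodology behind the 74% figure) is a shared-vocabulary fraction over content words in each role's transcript:

```python
# Illustrative measurement sketch: how much analytical vocabulary two
# opposing roles share when a single model plays both sides. The stopword
# list and overlap metric are assumptions, not the post's methodology.
STOPWORDS = {"the", "a", "an", "and", "or", "but", "if", "of", "to", "in",
             "is", "are", "that", "this", "it", "for", "on", "as", "with"}

def analytical_vocab(transcript: str) -> set[str]:
    """Content words used by one role, ignoring common function words."""
    words = (w.strip(".,;:!?\"'()").lower() for w in transcript.split())
    return {w for w in words if w and w not in STOPWORDS}

def shared_vocab_fraction(side_a: str, side_b: str) -> float:
    """Fraction of the combined vocabulary that both roles draw on."""
    va, vb = analytical_vocab(side_a), analytical_vocab(side_b)
    union = va | vb
    return len(va & vb) / len(union) if union else 0.0
```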
More guidance makes multi-agent deliberation better -- until it doesn't. We found a sharp cliff, not a gradual tradeoff, between context that informs the committee and context that predetermines its conclusion.
We expected more customization to always improve output. Across 120 debates, quality peaked at exactly 3 configuration overrides, then degraded. The cause: parameter conflicts users can't predict.
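For illustration only: a hypothetical conflict check with an invented override vocabulary, showing the kind of interaction a user can't see from the settings they chose:

```python
# Hypothetical sketch of why stacked overrides backfire: some pairs pull
# the deliberation in opposite directions. The override names and the
# conflict table below are invented for illustration.
KNOWN_CONFLICTS = {
    frozenset({"force_consensus", "require_dissent"}),
    frozenset({"terse_output", "exhaustive_citations"}),
    frozenset({"single_round", "iterative_refinement"}),
}

SOFT_CAP = 3  # quality peaked at three overrides in the 120-debate sample

def check_overrides(overrides: dict[str, object]) -> list[str]:
    """Return human-readable warnings for risky override combinations."""
    warnings = []
    names = set(overrides)
    if len(names) > SOFT_CAP:
        warnings.append(f"{len(names)} overrides set; quality degraded past {SOFT_CAP}.")
    for pair in KNOWN_CONFLICTS:
        if pair <= names:
            warnings.append(f"Conflicting overrides: {', '.join(sorted(pair))}")
    return warnings
```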
Mismatched decision templates score 9.4% below unstructured deliberation. The committee reasons rigorously about the wrong things.
10 curated documents outperformed 200 unfocused ones by 2.5x. The retriever can't distinguish relevant from irrelevant when the corpus is noisy, and no retrieval trick fixes that.
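A sketch of the takeaway, curate before you index; the overlap scorer here is a cheap stand-in for the human curation the 2.5x result actually came from:

```python
# Minimal curation sketch: keep only the few documents most plausibly
# relevant to the decision question, rather than indexing everything and
# hoping retrieval sorts it out. The relevance heuristic is a placeholder.
def relevance(question: str, doc: str) -> float:
    """Cheap proxy for topical relevance: fraction of question terms in the doc."""
    q_terms = set(question.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / len(q_terms) if q_terms else 0.0

def curate_corpus(question: str, docs: list[str], keep: int = 10) -> list[str]:
    """Rank candidate documents and keep only the top few before indexing."""
    ranked = sorted(docs, key=lambda d: relevance(question, d), reverse=True)
    return ranked[:keep]
```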