Fusion

Multi-model analysis with a judge model

Fusion turns any OpenRouter request into a small multi-model deliberation: a configurable panel of expert models analyzes the prompt in parallel with web search and web fetch enabled, then a judge model produces a structured analysis (consensus, contradictions, partial coverage, unique insights, blind spots). The calling model uses that analysis to write the final answer.

The Fusion plugin is the configuration surface for this pipeline. It’s a thin sugar layer on top of the openrouter:fusion server tool and the openrouter/fusion model alias. Pick whichever entry point fits your workflow.

When to use Fusion

Reach for Fusion when a single model isn’t enough — research, expert critique, or tasks that benefit from multiple perspectives. Fusion is overkill for short tactical prompts; use it when the cost of being wrong is higher than the cost of a few extra completions.

How it works

  1. The plugin injects the openrouter:fusion server tool into your request and (if you sent model: "openrouter/fusion") swaps the alias for the configured judge / fusion model.
  2. The judge model runs your prompt and decides whether to invoke the fusion tool.
  3. When invoked, the tool dispatches your prompt to every analysis model in parallel with openrouter:web_search and openrouter:web_fetch enabled.
  4. The same judge model then receives a synthesis prompt with every panel response and returns structured analysis JSON.
  5. The outer judge model receives that analysis and writes the final user-facing answer.

The final synthesis call is not given web tools — by that point all the freshness lives in the panel responses, and turning off web tools keeps the answer grounded in the deliberation.
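The fan-out and judge steps above can be sketched as a small TypeScript program. This is an illustrative mock, not the real implementation: the actual pipeline runs server-side inside OpenRouter, and `callModel`, `runPanel`, and `judgePanel` are hypothetical names introduced here.

```typescript
// Illustrative, self-contained sketch of steps 2–5 above, with model calls
// mocked. These function names are hypothetical; the real pipeline runs
// server-side inside OpenRouter.

type PanelResponse = { model: string; answer: string };

// Stand-in for one completion call (the real panel calls also get web tools).
async function callModel(model: string, prompt: string): Promise<string> {
  return `[${model}] analysis of: ${prompt}`;
}

// Step 3: fan the prompt out to every analysis model in parallel.
async function runPanel(models: string[], prompt: string): Promise<PanelResponse[]> {
  return Promise.all(
    models.map(async (model) => ({ model, answer: await callModel(model, prompt) })),
  );
}

// Step 4: the judge receives every panel response in one synthesis prompt.
async function judgePanel(judgeModel: string, panel: PanelResponse[]): Promise<string> {
  const synthesis = panel.map((p) => `${p.model}:\n${p.answer}`).join('\n\n');
  return callModel(judgeModel, `Summarize consensus and contradictions:\n${synthesis}`);
}

async function demo() {
  const panel = await runPanel(
    ['~anthropic/claude-opus-latest', '~openai/gpt-latest'],
    'What are the strongest arguments for and against carbon taxes?',
  );
  console.log(await judgePanel('~anthropic/claude-opus-latest', panel));
}

demo();
```

The key structural point is the `Promise.all` in step 3: the panel models run concurrently, so latency is roughly that of the slowest panelist plus one judge call, not the sum of all calls.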

Configuration

```json
{
  "model": "openrouter/fusion",
  "plugins": [
    {
      "id": "fusion",
      "analysis_models": [
        "~anthropic/claude-opus-latest",
        "~openai/gpt-latest"
      ],
      "model": "~anthropic/claude-opus-latest"
    }
  ]
}
```
| Field | Default | Description |
| --- | --- | --- |
| `analysis_models` | Quality preset (`~anthropic/claude-opus-latest`, `~openai/gpt-latest`) | Slugs of the parallel analysis panel. Each receives the prompt with web search + web fetch enabled. |
| `model` | First analysis model | Slug of the judge / fusion model used to summarize the panel and write the final answer. Only applied when the request uses `openrouter/fusion` as the model. |
| `enabled` | `true` | Set to `false` to bypass the plugin for a single request. |

When you pass model: "openrouter/fusion" without a plugin config, the defaults are equivalent to the Quality preset on the Fusion lab.
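As a sketch of the `enabled` field from the table above, a request can carry the plugin config purely to switch the deliberation off for that one call (the message content here is illustrative, and this assumes the rest of the request behaves like a normal completion):

```json
{
  "model": "openrouter/fusion",
  "plugins": [
    { "id": "fusion", "enabled": false }
  ],
  "messages": [
    { "role": "user", "content": "Quick sanity check: what is 2 + 2?" }
  ]
}
```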

Two entry points, one pipeline

openrouter/fusion is exactly equivalent to enabling the openrouter:fusion server tool on the configured judge model. The request below behaves identically either way:

```json
{
  "model": "openrouter/fusion",
  "messages": [
    { "role": "user", "content": "What are the strongest arguments for and against carbon taxes?" }
  ]
}
```

The model decides when to call openrouter:fusion. For tasks that don’t need deliberation, it can answer directly — including invoking any other tools you’ve defined.
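For instance, a request can pair the fusion model with your own tool definitions, and the judge is free to call either. The `get_weather` tool below is a hypothetical placeholder using the standard function-calling schema:

```json
{
  "model": "openrouter/fusion",
  "messages": [
    { "role": "user", "content": "Is rooftop solar worth it in Oslo? Check current conditions." }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
          "type": "object",
          "properties": { "city": { "type": "string" } },
          "required": ["city"]
        }
      }
    }
  ]
}
```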

Complete example

```typescript
const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer {{API_KEY_REF}}',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'openrouter/fusion',
    messages: [
      {
        role: 'user',
        content: 'Compare ridge, lasso, and elastic-net regression. Where does each shine?',
      },
    ],
    plugins: [
      {
        id: 'fusion',
        analysis_models: [
          '~anthropic/claude-opus-latest',
          '~openai/gpt-latest',
        ],
      },
    ],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```

Recursion protection

Fusion attaches an x-openrouter-fusion-depth header to every inner call (analysis + judge). If an analysis model tries to recursively invoke openrouter:fusion or openrouter/fusion, the plugin refuses to inject the tool a second time and the call returns an error instead of fanning out into unbounded extra inference.
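The guard can be pictured as a simple depth check on the incoming headers. Only the header name comes from this document; the function and its behavior are an illustrative assumption about how such a check might look, not OpenRouter's actual server code:

```typescript
// Hypothetical sketch of the recursion guard. Only the header name
// 'x-openrouter-fusion-depth' is from the docs; the rest is illustrative.

const FUSION_DEPTH_HEADER = 'x-openrouter-fusion-depth';

// Decide whether the fusion tool may be injected into a request.
// Inner panel/judge calls arrive with a depth header of 1 or more.
function mayInjectFusion(headers: Record<string, string>): boolean {
  const depth = Number(headers[FUSION_DEPTH_HEADER] ?? '0');
  return depth === 0; // refuse any nested call rather than fan out unbounded
}

mayInjectFusion({});                             // outer request: tool injected
mayInjectFusion({ [FUSION_DEPTH_HEADER]: '1' }); // inner call: injection refused
```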