A skill for handling long-context tasks through recursive decomposition, splitting large tasks into manageable chunks.
Handle long-context tasks with Claude Code through recursive decomposition
What It Does • Installation • Usage • How It Works • Benchmarks • Acknowledgments
When analyzing large codebases, processing many documents, or aggregating information across dozens of files, Claude's context window becomes a bottleneck; as context grows, "context rot" degrades performance.
This skill implements Recursive Language Model (RLM) strategies from Zhang, Kraska, and Khattab's 2025 research, enabling Claude Code to handle inputs up to 2 orders of magnitude beyond normal context limits.
Instead of cramming everything into context, Claude learns to decompose the task, filter aggressively with cheap tools, delegate batches to parallel sub-agents, and synthesize the results:
| Task Type | Without Skill | With Skill |
|---|---|---|
| Analyze 100+ files | Context overflow / degraded results | Systematic coverage via decomposition |
| Multi-document QA | Missed information | Comprehensive extraction |
| Codebase-wide search | Manual iteration | Parallel sub-agent analysis |
| Information aggregation | Incomplete synthesis | Map-reduce pattern |
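As a rough sketch of the underlying RLM idea (not the skill's actual implementation), the pattern is: split an oversized input into pieces that fit a single call, answer each piece, and recursively synthesize the partial answers. The `ask_model` helper and `MAX_CHARS` budget below are hypothetical placeholders.

```python
# Minimal, illustrative sketch of recursive decomposition (not the skill's code).
MAX_CHARS = 20_000  # assumed per-call context budget for this sketch

def ask_model(prompt: str) -> str:
    # Stand-in for a single bounded-context model call; replace with a real API call.
    return f"[answer derived from a {len(prompt)}-char prompt]"

def recursive_answer(question: str, text: str) -> str:
    # Base case: the text fits in one call, so answer it directly.
    if len(text) <= MAX_CHARS:
        return ask_model(f"{question}\n\n---\n{text}")

    # Recursive case: split into chunks, answer each, then synthesize the partials.
    chunks = [text[i:i + MAX_CHARS] for i in range(0, len(text), MAX_CHARS)]
    partials = [recursive_answer(question, chunk) for chunk in chunks]
    return recursive_answer(f"Synthesize these partial answers to: {question}",
                            "\n\n".join(partials))
```

The recursion bottoms out as soon as a piece fits the per-call budget, which is what lets the total input exceed any single context window.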
We tested on the Anthropic Cookbook (196 files, 356MB):
Task: "Find all Anthropic API calling patterns across the codebase"
Results:
├── Files scanned: 142
├── Files with API calls: 18
├── Patterns identified: 8 distinct patterns
├── Anti-patterns detected: 4
└── Output: Comprehensive report with file:line references
# Add the marketplace
claude plugin marketplace add massimodeluisa/recursive-decomposition-skill
# Install the plugin
claude plugin install recursive-decomposition@recursive-decomposition
# Clone the repository
git clone https://github.com/massimodeluisa/recursive-decomposition-skill.git ~/recursive-decomposition-skill
# Add as local marketplace
claude plugin marketplace add ~/recursive-decomposition-skill
# Install the plugin
claude plugin install recursive-decomposition
# Copy skill directly to Claude's skills directory
cp -r plugins/recursive-decomposition/skills/recursive-decomposition ~/.claude/skills/
After installation, restart Claude Code for the skill to take effect.
# Update marketplace index
claude plugin marketplace update
# Update the plugin
claude plugin update recursive-decomposition@recursive-decomposition
The skill activates automatically when you describe tasks involving:
"analyze all files in...")"aggregate information from...")"find all occurrences across...")"summarize these 50 documents...")"Analyze error handling patterns across this entire codebase"
"Find all TODO comments in the project and categorize by priority"
"What API endpoints are defined across all route files?"
"Summarize the key decisions from all meeting notes in /docs"
"Find security vulnerabilities across all Python files"
The skill recognizes these patterns:
"analyze all files""process this large document""aggregate information from""search across the codebase"The skill is designed for complex, long-context tasks. Use it when:
When NOT to use:
1000 files → Glob filter → 100 files
100 files → Grep filter → 20 files
20 files → Deep analysis
Result: 50x reduction before expensive processing
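A minimal Python sketch of the same funnel (in Claude Code the Glob and Grep stages are tool calls rather than local scripts; the `*.py` pattern and `anthropic` keyword are hypothetical examples):

```python
# Illustrative progressive filtering: cheap filters first, expensive analysis last.
from pathlib import Path

# Stage 1 (Glob-style filter): narrow thousands of files by name or extension.
candidates = list(Path(".").rglob("*.py"))

# Stage 2 (Grep-style filter): keep only files that mention the target keyword.
keyword = "anthropic"  # hypothetical search term
matching = [p for p in candidates
            if keyword in p.read_text(errors="ignore").lower()]

# Stage 3 (deep analysis): only the small surviving set is read and reasoned about.
print(f"{len(candidates)} candidates -> {len(matching)} files for deep analysis")
```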
Main Agent
├── Sub-Agent 1 (Batch A) ─┐
├── Sub-Agent 2 (Batch B) ─┼── Parallel
├── Sub-Agent 3 (Batch C) ─┘
└── Synthesize results
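A small sketch of that map-reduce shape, with Python threads standing in for parallel sub-agents; `analyze_batch` and the batch size are illustrative assumptions, not the skill's internals:

```python
# Illustrative map-reduce over file batches; threads stand in for parallel sub-agents.
from concurrent.futures import ThreadPoolExecutor

def analyze_batch(batch: list[str]) -> dict:
    # Hypothetical "map" step: each worker analyzes only its own batch of files.
    return {"files": len(batch), "findings": [f"checked {name}" for name in batch]}

def map_reduce(files: list[str], batch_size: int = 10) -> dict:
    batches = [files[i:i + batch_size] for i in range(0, len(files), batch_size)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        partials = list(pool.map(analyze_batch, batches))
    # "Reduce" / synthesize step: merge the per-batch results into one report.
    return {
        "files_scanned": sum(p["files"] for p in partials),
        "findings": [f for p in partials for f in p["findings"]],
    }
```

Batching keeps each worker's context small, and the synthesis step sees only compact per-batch results rather than the raw files.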
Re-check synthesized answers against focused evidence to catch context rot errors.
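One hypothetical way to picture that verification pass is to re-open every file and line cited in the synthesized report and confirm the quoted evidence is actually there:

```python
# Illustrative verification pass: confirm each synthesized claim against its source.
from pathlib import Path

def verify_claim(path: str, line_no: int, expected_snippet: str) -> bool:
    # Re-open the cited file and check that the cited line still contains the evidence.
    lines = Path(path).read_text(errors="ignore").splitlines()
    return 0 < line_no <= len(lines) and expected_snippet in lines[line_no - 1]

def failed_claims(claims: list[tuple[str, int, str]]) -> list[tuple[str, int, str]]:
    # Claims that fail verification should be re-examined before the final report.
    return [claim for claim in claims if not verify_claim(*claim)]
```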
From the RLM paper:
| Task | Direct Model | With RLM | Improvement |
|---|---|---|---|
| Multi-hop QA (6-11M tokens) | 70% | 91% | +21% |
| Linear aggregation | Baseline | +28-33% | Significant |
| Quadratic reasoning | <0.1% | 58% | Massive |
| Context scaling | 2^14 tokens | 2^18 tokens | 16x |
Cost: RLM approaches are ~3x cheaper than summarization baselines while achieving superior quality.
recursive-decomposition-skill/
├── .claude-plugin/
│ └── marketplace.json # Marketplace manifest
├── plugins/
│ └── recursive-decomposition/
│ ├── .claude-plugin/
│ │ └── plugin.json # Plugin manifest
│ ├── README.md # Plugin documentation
│ └── skills/
│ └── recursive-decomposition/
│ ├── SKILL.md # Core skill instructions
│ └── references/
│ ├── rlm-strategies.md
│ ├── cost-analysis.md
│ ├── codebase-analysis.md
│ └── document-aggregation.md
├── assets/
│ └── logo.png # Project logo
├── AGENTS.md # Agent-facing docs
├── CONTRIBUTING.md # Contribution guidelines
├── LICENSE
└── README.md
| File | Purpose |
|---|---|
| SKILL.md | Core decomposition strategies and patterns |
| references/rlm-strategies.md | Detailed techniques from the RLM paper |
| references/cost-analysis.md | When to use recursive vs. direct approaches |
| references/codebase-analysis.md | Full walkthrough: multi-file error handling analysis |
| references/document-aggregation.md | Full walkthrough: multi-document feature extraction |
This skill is based on the Recursive Language Models research paper. Huge thanks to the authors for their groundbreaking work:
- Alex L. Zhang (@a1zhang), MIT CSAIL
- Tim Kraska (@tim_kraska), MIT Professor
- Omar Khattab (@lateinteraction), MIT CSAIL, creator of DSPy
Recursive Language Models
Alex L. Zhang, Tim Kraska, Omar Khattab
arXiv:2512.24601 • December 2025
We propose Recursive Language Models (RLMs), an inference technique enabling LLMs to handle prompts up to two orders of magnitude beyond model context windows through programmatic decomposition and recursive self-invocation over prompt segments.
Contributions welcome! Please see CONTRIBUTING.md for guidelines.
Massimo De Luisa — @massimodeluisa
MIT License — see LICENSE for details.