Compare commits
6 Commits
task-maste
...
docs/auto-
| SHA1 |
|---|
| 5781d61d9c |
| 25a00dca67 |
| f263d4b2e0 |
| f12a16d096 |
| aaf903ff2f |
| 2a910a40ba |
.changeset/auto-update-changelog-highlights.md (new file, 7 lines)
@@ -0,0 +1,7 @@
---
"task-master-ai": minor
---

Add changelog highlights to auto-update notifications

When the CLI auto-updates to a new version, it now displays a "What's New" section.
.changeset/nice-ways-hope.md (new file, 17 lines)
@@ -0,0 +1,17 @@
---
"task-master-ai": minor
---

Add RPG (Repository Planning Graph) method template for structured PRD creation. The new `example_prd_rpg.txt` template teaches AI agents and developers the RPG methodology through embedded instructions, inline good/bad examples, and XML-style tags for structure. This template enables creation of dependency-aware PRDs that automatically generate topologically-ordered task graphs when parsed with Task Master.

Key features:
- Method-as-template: teaches RPG principles (dual-semantics, explicit dependencies, topological order) while being used
- Inline instructions at decision points guide AI through each section
- Good/bad examples for immediate pattern matching
- Flexible plain-text format with XML-style tags for parseability
- Critical dependency-graph section ensures correct task ordering
- Automatic inclusion during `task-master init`
- Comprehensive documentation at [docs.task-master.dev/capabilities/rpg-method](https://docs.task-master.dev/capabilities/rpg-method)
- Tool recommendations for code-context-aware PRD creation (Claude Code, Cursor, Gemini CLI, Codex/Grok)

The RPG template complements the existing `example_prd.txt` and provides a more structured approach for complex projects requiring clear module boundaries and dependency chains.
.changeset/plain-falcons-serve.md (new file, 7 lines)
@@ -0,0 +1,7 @@
---
"task-master-ai": patch
---

Fix cross-level task dependencies not being saved

Fixes an issue where adding dependencies between subtasks and top-level tasks (e.g., `task-master add-dependency --id=2.2 --depends-on=11`) would report success but fail to persist the changes. Dependencies can now be created in both directions between any task levels.
.taskmaster/templates/example_prd_rpg.txt (new file, 511 lines)
@@ -0,0 +1,511 @@
|
||||
<rpg-method>
|
||||
# Repository Planning Graph (RPG) Method - PRD Template
|
||||
|
||||
This template teaches you (AI or human) how to create structured, dependency-aware PRDs using the RPG methodology from Microsoft Research. The key insight: separate WHAT (functional) from HOW (structural), then connect them with explicit dependencies.
|
||||
|
||||
## Core Principles
|
||||
|
||||
1. **Dual-Semantics**: Think functional (capabilities) AND structural (code organization) separately, then map them
|
||||
2. **Explicit Dependencies**: Never assume - always state what depends on what
|
||||
3. **Topological Order**: Build foundation first, then layers on top
|
||||
4. **Progressive Refinement**: Start broad, refine iteratively
|
||||
|
||||
## How to Use This Template
|
||||
|
||||
- Follow the instructions in each `<instruction>` block
|
||||
- Look at `<example>` blocks to see good vs bad patterns
|
||||
- Fill in the content sections with your project details
|
||||
- The AI reading this will learn the RPG method by following along
|
||||
- Task Master will parse the resulting PRD into dependency-aware tasks
|
||||
|
||||
## Recommended Tools for Creating PRDs
|
||||
|
||||
When using this template to **create** a PRD (not parse it), use **code-context-aware AI assistants** for best results:
|
||||
|
||||
**Why?** The AI needs to understand your existing codebase to make good architectural decisions about modules, dependencies, and integration points.
|
||||
|
||||
**Recommended tools:**
|
||||
- **Claude Code** (claude-code CLI) - Best for structured reasoning and large contexts
|
||||
- **Cursor/Windsurf** - IDE integration with full codebase context
|
||||
- **Gemini CLI** (gemini-cli) - Massive context window for large codebases
|
||||
- **Codex/Grok CLI** - Strong code generation with context awareness
|
||||
|
||||
**Note:** Once your PRD is created, `task-master parse-prd` works with any configured AI model - it just needs to read the PRD text itself, not your codebase.
|
||||
</rpg-method>
|
||||
|
||||
---
|
||||
|
||||
<overview>
|
||||
<instruction>
|
||||
Start with the problem, not the solution. Be specific about:
|
||||
- What pain point exists?
|
||||
- Who experiences it?
|
||||
- Why existing solutions don't work?
|
||||
- What success looks like (measurable outcomes)?
|
||||
|
||||
Keep this section focused - don't jump into implementation details yet.
|
||||
</instruction>
|
||||
|
||||
## Problem Statement
|
||||
[Describe the core problem. Be concrete about user pain points.]
|
||||
|
||||
## Target Users
|
||||
[Define personas, their workflows, and what they're trying to achieve.]
|
||||
|
||||
## Success Metrics
|
||||
[Quantifiable outcomes. Examples: "80% task completion via autopilot", "< 5% manual intervention rate"]
|
||||
|
||||
</overview>
|
||||
|
||||
---
|
||||
|
||||
<functional-decomposition>
|
||||
<instruction>
|
||||
Now think about CAPABILITIES (what the system DOES), not code structure yet.
|
||||
|
||||
Step 1: Identify high-level capability domains
|
||||
- Think: "What major things does this system do?"
|
||||
- Examples: Data Management, Core Processing, Presentation Layer
|
||||
|
||||
Step 2: For each capability, enumerate specific features
|
||||
- Use explore-exploit strategy:
|
||||
* Exploit: What features are REQUIRED for core value?
|
||||
* Explore: What features make this domain COMPLETE?
|
||||
|
||||
Step 3: For each feature, define:
|
||||
- Description: What it does in one sentence
|
||||
- Inputs: What data/context it needs
|
||||
- Outputs: What it produces/returns
|
||||
- Behavior: Key logic or transformations
|
||||
|
||||
<example type="good">
|
||||
Capability: Data Validation
|
||||
Feature: Schema validation
|
||||
- Description: Validate JSON payloads against defined schemas
|
||||
- Inputs: JSON object, schema definition
|
||||
- Outputs: Validation result (pass/fail) + error details
|
||||
- Behavior: Iterate fields, check types, enforce constraints
|
||||
|
||||
Feature: Business rule validation
|
||||
- Description: Apply domain-specific validation rules
|
||||
- Inputs: Validated data object, rule set
|
||||
- Outputs: Boolean + list of violated rules
|
||||
- Behavior: Execute rules sequentially, short-circuit on failure
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
Capability: validation.js
|
||||
(Problem: This is a FILE, not a CAPABILITY. Mixing structure into functional thinking.)
|
||||
|
||||
Capability: Validation
|
||||
Feature: Make sure data is good
|
||||
(Problem: Too vague. No inputs/outputs. Not actionable.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Capability Tree
|
||||
|
||||
### Capability: [Name]
|
||||
[Brief description of what this capability domain covers]
|
||||
|
||||
#### Feature: [Name]
|
||||
- **Description**: [One sentence]
|
||||
- **Inputs**: [What it needs]
|
||||
- **Outputs**: [What it produces]
|
||||
- **Behavior**: [Key logic]
|
||||
|
||||
#### Feature: [Name]
|
||||
- **Description**:
|
||||
- **Inputs**:
|
||||
- **Outputs**:
|
||||
- **Behavior**:
|
||||
|
||||
### Capability: [Name]
|
||||
...
|
||||
|
||||
</functional-decomposition>
|
||||
|
||||
---
|
||||
|
||||
<structural-decomposition>
|
||||
<instruction>
|
||||
NOW think about code organization. Map capabilities to actual file/folder structure.
|
||||
|
||||
Rules:
|
||||
1. Each capability maps to a module (folder or file)
|
||||
2. Features within a capability map to functions/classes
|
||||
3. Use clear module boundaries - each module has ONE responsibility
|
||||
4. Define what each module exports (public interface)
|
||||
|
||||
The goal: Create a clear mapping between "what it does" (functional) and "where it lives" (structural).
|
||||
|
||||
<example type="good">
|
||||
Capability: Data Validation
|
||||
→ Maps to: src/validation/
|
||||
├── schema-validator.js (Schema validation feature)
|
||||
├── rule-validator.js (Business rule validation feature)
|
||||
└── index.js (Public exports)
|
||||
|
||||
Exports:
|
||||
- validateSchema(data, schema)
|
||||
- validateRules(data, rules)
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
Capability: Data Validation
|
||||
→ Maps to: src/utils.js
|
||||
(Problem: "utils" is not a clear module boundary. Where do I find validation logic?)
|
||||
|
||||
Capability: Data Validation
|
||||
→ Maps to: src/validation/everything.js
|
||||
(Problem: One giant file. Features should map to separate files for maintainability.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Repository Structure
|
||||
|
||||
```
|
||||
project-root/
|
||||
├── src/
|
||||
│ ├── [module-name]/ # Maps to: [Capability Name]
|
||||
│ │ ├── [file].js # Maps to: [Feature Name]
|
||||
│ │ └── index.js # Public exports
|
||||
│ └── [module-name]/
|
||||
├── tests/
|
||||
└── docs/
|
||||
```
|
||||
|
||||
## Module Definitions
|
||||
|
||||
### Module: [Name]
|
||||
- **Maps to capability**: [Capability from functional decomposition]
|
||||
- **Responsibility**: [Single clear purpose]
|
||||
- **File structure**:
|
||||
```
|
||||
module-name/
|
||||
├── feature1.js
|
||||
├── feature2.js
|
||||
└── index.js
|
||||
```
|
||||
- **Exports**:
|
||||
- `functionName()` - [what it does]
|
||||
- `ClassName` - [what it does]
|
||||
|
||||
</structural-decomposition>
|
||||
|
||||
---
|
||||
|
||||
<dependency-graph>
|
||||
<instruction>
|
||||
This is THE CRITICAL SECTION for Task Master parsing.
|
||||
|
||||
Define explicit dependencies between modules. This creates the topological order for task execution.
|
||||
|
||||
Rules:
|
||||
1. List modules in dependency order (foundation first)
|
||||
2. For each module, state what it depends on
|
||||
3. Foundation modules should have NO dependencies
|
||||
4. Every non-foundation module should depend on at least one other module
|
||||
5. Think: "What must EXIST before I can build this module?"
|
||||
|
||||
<example type="good">
|
||||
Foundation Layer (no dependencies):
|
||||
- error-handling: No dependencies
|
||||
- config-manager: No dependencies
|
||||
- base-types: No dependencies
|
||||
|
||||
Data Layer:
|
||||
- schema-validator: Depends on [base-types, error-handling]
|
||||
- data-ingestion: Depends on [schema-validator, config-manager]
|
||||
|
||||
Core Layer:
|
||||
- algorithm-engine: Depends on [base-types, error-handling]
|
||||
- pipeline-orchestrator: Depends on [algorithm-engine, data-ingestion]
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
- validation: Depends on API
|
||||
- API: Depends on validation
|
||||
(Problem: Circular dependency. This will cause build/runtime issues.)
|
||||
|
||||
- user-auth: Depends on everything
|
||||
(Problem: Too many dependencies. Should be more focused.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Dependency Chain
|
||||
|
||||
### Foundation Layer (Phase 0)
|
||||
No dependencies - these are built first.
|
||||
|
||||
- **[Module Name]**: [What it provides]
|
||||
- **[Module Name]**: [What it provides]
|
||||
|
||||
### [Layer Name] (Phase 1)
|
||||
- **[Module Name]**: Depends on [[module-from-phase-0], [module-from-phase-0]]
|
||||
- **[Module Name]**: Depends on [[module-from-phase-0]]
|
||||
|
||||
### [Layer Name] (Phase 2)
|
||||
- **[Module Name]**: Depends on [[module-from-phase-1], [module-from-foundation]]
|
||||
|
||||
[Continue building up layers...]
|
||||
|
||||
</dependency-graph>
|
||||
|
||||
---
|
||||
|
||||
<implementation-roadmap>
|
||||
<instruction>
|
||||
Turn the dependency graph into concrete development phases.
|
||||
|
||||
Each phase should:
|
||||
1. Have clear entry criteria (what must exist before starting)
|
||||
2. Contain tasks that can be parallelized (no inter-dependencies within phase)
|
||||
3. Have clear exit criteria (how do we know phase is complete?)
|
||||
4. Build toward something USABLE (not just infrastructure)
|
||||
|
||||
Phase ordering follows topological sort of dependency graph.
|
||||
|
||||
<example type="good">
|
||||
Phase 0: Foundation
|
||||
Entry: Clean repository
|
||||
Tasks:
|
||||
- Implement error handling utilities
|
||||
- Create base type definitions
|
||||
- Setup configuration system
|
||||
Exit: Other modules can import foundation without errors
|
||||
|
||||
Phase 1: Data Layer
|
||||
Entry: Phase 0 complete
|
||||
Tasks:
|
||||
- Implement schema validator (uses: base types, error handling)
|
||||
- Build data ingestion pipeline (uses: validator, config)
|
||||
Exit: End-to-end data flow from input to validated output
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
Phase 1: Build Everything
|
||||
Tasks:
|
||||
- API
|
||||
- Database
|
||||
- UI
|
||||
- Tests
|
||||
(Problem: No clear focus. Too broad. Dependencies not considered.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Development Phases
|
||||
|
||||
### Phase 0: [Foundation Name]
|
||||
**Goal**: [What foundational capability this establishes]
|
||||
|
||||
**Entry Criteria**: [What must be true before starting]
|
||||
|
||||
**Tasks**:
|
||||
- [ ] [Task name] (depends on: [none or list])
|
||||
- Acceptance criteria: [How we know it's done]
|
||||
- Test strategy: [What tests prove it works]
|
||||
|
||||
- [ ] [Task name] (depends on: [none or list])
|
||||
|
||||
**Exit Criteria**: [Observable outcome that proves phase complete]
|
||||
|
||||
**Delivers**: [What can users/developers do after this phase?]
|
||||
|
||||
---
|
||||
|
||||
### Phase 1: [Layer Name]
|
||||
**Goal**:
|
||||
|
||||
**Entry Criteria**: Phase 0 complete
|
||||
|
||||
**Tasks**:
|
||||
- [ ] [Task name] (depends on: [[tasks-from-phase-0]])
|
||||
- [ ] [Task name] (depends on: [[tasks-from-phase-0]])
|
||||
|
||||
**Exit Criteria**:
|
||||
|
||||
**Delivers**:
|
||||
|
||||
---
|
||||
|
||||
[Continue with more phases...]
|
||||
|
||||
</implementation-roadmap>
|
||||
|
||||
---
|
||||
|
||||
<test-strategy>
|
||||
<instruction>
|
||||
Define how testing will be integrated throughout development (TDD approach).
|
||||
|
||||
Specify:
|
||||
1. Test pyramid ratios (unit vs integration vs e2e)
|
||||
2. Coverage requirements
|
||||
3. Critical test scenarios
|
||||
4. Test generation guidelines for Surgical Test Generator
|
||||
|
||||
This section guides the AI when generating tests during the RED phase of TDD.
|
||||
|
||||
<example type="good">
|
||||
Critical Test Scenarios for Data Validation module:
|
||||
- Happy path: Valid data passes all checks
|
||||
- Edge cases: Empty strings, null values, boundary numbers
|
||||
- Error cases: Invalid types, missing required fields
|
||||
- Integration: Validator works with ingestion pipeline
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Test Pyramid
|
||||
|
||||
```
|
||||
/\
|
||||
/E2E\ ← [X]% (End-to-end, slow, comprehensive)
|
||||
/------\
|
||||
/Integration\ ← [Y]% (Module interactions)
|
||||
/------------\
|
||||
/ Unit Tests \ ← [Z]% (Fast, isolated, deterministic)
|
||||
/----------------\
|
||||
```
|
||||
|
||||
## Coverage Requirements
|
||||
- Line coverage: [X]% minimum
|
||||
- Branch coverage: [X]% minimum
|
||||
- Function coverage: [X]% minimum
|
||||
- Statement coverage: [X]% minimum
|
||||
|
||||
## Critical Test Scenarios
|
||||
|
||||
### [Module/Feature Name]
|
||||
**Happy path**:
|
||||
- [Scenario description]
|
||||
- Expected: [What should happen]
|
||||
|
||||
**Edge cases**:
|
||||
- [Scenario description]
|
||||
- Expected: [What should happen]
|
||||
|
||||
**Error cases**:
|
||||
- [Scenario description]
|
||||
- Expected: [How system handles failure]
|
||||
|
||||
**Integration points**:
|
||||
- [What interactions to test]
|
||||
- Expected: [End-to-end behavior]
|
||||
|
||||
## Test Generation Guidelines
|
||||
[Specific instructions for Surgical Test Generator about what to focus on, what patterns to follow, project-specific test conventions]
|
||||
|
||||
</test-strategy>
|
||||
|
||||
---
|
||||
|
||||
<architecture>
|
||||
<instruction>
|
||||
Describe technical architecture, data models, and key design decisions.
|
||||
|
||||
Keep this section AFTER functional/structural decomposition - implementation details come after understanding structure.
|
||||
</instruction>
|
||||
|
||||
## System Components
|
||||
[Major architectural pieces and their responsibilities]
|
||||
|
||||
## Data Models
|
||||
[Core data structures, schemas, database design]
|
||||
|
||||
## Technology Stack
|
||||
[Languages, frameworks, key libraries]
|
||||
|
||||
**Decision: [Technology/Pattern]**
|
||||
- **Rationale**: [Why chosen]
|
||||
- **Trade-offs**: [What we're giving up]
|
||||
- **Alternatives considered**: [What else we looked at]
|
||||
|
||||
</architecture>
|
||||
|
||||
---
|
||||
|
||||
<risks>
|
||||
<instruction>
|
||||
Identify risks that could derail development and how to mitigate them.
|
||||
|
||||
Categories:
|
||||
- Technical risks (complexity, unknowns)
|
||||
- Dependency risks (blocking issues)
|
||||
- Scope risks (creep, underestimation)
|
||||
</instruction>
|
||||
|
||||
## Technical Risks
|
||||
**Risk**: [Description]
|
||||
- **Impact**: [High/Medium/Low - effect on project]
|
||||
- **Likelihood**: [High/Medium/Low]
|
||||
- **Mitigation**: [How to address]
|
||||
- **Fallback**: [Plan B if mitigation fails]
|
||||
|
||||
## Dependency Risks
|
||||
[External dependencies, blocking issues]
|
||||
|
||||
## Scope Risks
|
||||
[Scope creep, underestimation, unclear requirements]
|
||||
|
||||
</risks>
|
||||
|
||||
---
|
||||
|
||||
<appendix>
|
||||
## References
|
||||
[Papers, documentation, similar systems]
|
||||
|
||||
## Glossary
|
||||
[Domain-specific terms]
|
||||
|
||||
## Open Questions
|
||||
[Things to resolve during development]
|
||||
</appendix>
|
||||
|
||||
---
|
||||
|
||||
<task-master-integration>
|
||||
# How Task Master Uses This PRD
|
||||
|
||||
When you run `task-master parse-prd <file>.txt`, the parser:
|
||||
|
||||
1. **Extracts capabilities** → Main tasks
|
||||
- Each `### Capability:` becomes a top-level task
|
||||
|
||||
2. **Extracts features** → Subtasks
|
||||
- Each `#### Feature:` becomes a subtask under its capability
|
||||
|
||||
3. **Parses dependencies** → Task dependencies
|
||||
- `Depends on: [X, Y]` sets task.dependencies = ["X", "Y"]
|
||||
|
||||
4. **Orders by phases** → Task priorities
|
||||
- Phase 0 tasks = highest priority
|
||||
- Phase N tasks = lower priority, properly sequenced
|
||||
|
||||
5. **Uses test strategy** → Test generation context
|
||||
- Feeds test scenarios to Surgical Test Generator during implementation
|
||||
|
||||
**Result**: A dependency-aware task graph that can be executed in topological order.
|
||||
|
||||
## Why RPG Structure Matters
|
||||
|
||||
Traditional flat PRDs lead to:
|
||||
- ❌ Unclear task dependencies
|
||||
- ❌ Arbitrary task ordering
|
||||
- ❌ Circular dependencies discovered late
|
||||
- ❌ Poorly scoped tasks
|
||||
|
||||
RPG-structured PRDs provide:
|
||||
- ✅ Explicit dependency chains
|
||||
- ✅ Topological execution order
|
||||
- ✅ Clear module boundaries
|
||||
- ✅ Validated task graph before implementation
|
||||
|
||||
## Tips for Best Results
|
||||
|
||||
1. **Spend time on dependency graph** - This is the most valuable section for Task Master
|
||||
2. **Keep features atomic** - Each feature should be independently testable
|
||||
3. **Progressive refinement** - Start broad, use `task-master expand` to break down complex tasks
|
||||
4. **Use research mode** - `task-master parse-prd --research` leverages AI for better task generation
|
||||
</task-master-integration>
|
||||
@@ -12,6 +12,7 @@ export interface UpdateInfo {
	currentVersion: string;
	latestVersion: string;
	needsUpdate: boolean;
	highlights?: string[];
}

/**
@@ -59,6 +60,116 @@ export function compareVersions(v1: string, v2: string): number {
	return a.pre < b.pre ? -1 : 1; // basic prerelease tie-break
}

/**
 * Fetch CHANGELOG.md from GitHub and extract highlights for a specific version
 */
async function fetchChangelogHighlights(version: string): Promise<string[]> {
	return new Promise((resolve) => {
		const options = {
			hostname: 'raw.githubusercontent.com',
			path: '/eyaltoledano/claude-task-master/main/CHANGELOG.md',
			method: 'GET',
			headers: {
				'User-Agent': `task-master-ai/${version}`
			}
		};

		const req = https.request(options, (res) => {
			let data = '';

			res.on('data', (chunk) => {
				data += chunk;
			});

			res.on('end', () => {
				try {
					if (res.statusCode !== 200) {
						resolve([]);
						return;
					}

					const highlights = parseChangelogHighlights(data, version);
					resolve(highlights);
				} catch (error) {
					resolve([]);
				}
			});
		});

		req.on('error', () => {
			resolve([]);
		});

		req.setTimeout(3000, () => {
			req.destroy();
			resolve([]);
		});

		req.end();
	});
}

/**
 * Parse changelog markdown to extract Minor Changes for a specific version
 * @internal - Exported for testing purposes only
 */
export function parseChangelogHighlights(
	changelog: string,
	version: string
): string[] {
	try {
		// Validate version format (basic semver pattern) to prevent ReDoS
		if (!/^\d+\.\d+\.\d+(-[a-zA-Z0-9.-]+)?$/.test(version)) {
			return [];
		}

		// Find the version section
		const versionRegex = new RegExp(
			`## ${version.replace(/\./g, '\\.')}\\s*\\n`,
			'i'
		);
		const versionMatch = changelog.match(versionRegex);

		if (!versionMatch) {
			return [];
		}

		// Extract content from this version to the next version heading
		const startIdx = versionMatch.index! + versionMatch[0].length;
		const nextVersionIdx = changelog.indexOf('\n## ', startIdx);
		const versionContent =
			nextVersionIdx > 0
				? changelog.slice(startIdx, nextVersionIdx)
				: changelog.slice(startIdx);

		// Find Minor Changes section
		const minorChangesMatch = versionContent.match(
			/### Minor Changes\s*\n([\s\S]*?)(?=\n###|\n##|$)/i
		);

		if (!minorChangesMatch) {
			return [];
		}

		const minorChangesContent = minorChangesMatch[1];
		const highlights: string[] = [];

		// Extract all bullet points (lines starting with -)
		// Format: - [#PR](...) Thanks [@author]! - Description
		const bulletRegex = /^-\s+\[#\d+\][^\n]*?!\s+-\s+(.+?)$/gm;
		let match;

		while ((match = bulletRegex.exec(minorChangesContent)) !== null) {
			const desc = match[1].trim();
			highlights.push(desc);
		}

		return highlights;
	} catch (error) {
		return [];
	}
}
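
// Hypothetical usage sketch (illustration only, not part of this diff), assuming
// CHANGELOG.md follows the usual changesets layout; the PR number and author
// below are invented for the example:
//
//   const sample = [
//     '## 0.28.0',
//     '',
//     '### Minor Changes',
//     '',
//     '- [#1234](https://github.com/eyaltoledano/claude-task-master/pull/1234) Thanks [@someone]! - Add changelog highlights to auto-update notifications'
//   ].join('\n');
//
//   parseChangelogHighlights(sample, '0.28.0');
//   // → ['Add changelog highlights to auto-update notifications']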

/**
 * Check for newer version of task-master-ai
 */
@@ -85,7 +196,7 @@ export async function checkForUpdate(
			data += chunk;
		});

		res.on('end', () => {
		res.on('end', async () => {
			try {
				if (res.statusCode !== 200)
					throw new Error(`npm registry status ${res.statusCode}`);
@@ -95,10 +206,17 @@ export async function checkForUpdate(
				const needsUpdate =
					compareVersions(currentVersion, latestVersion) < 0;

				// Fetch highlights if update is needed
				let highlights: string[] | undefined;
				if (needsUpdate) {
					highlights = await fetchChangelogHighlights(latestVersion);
				}

				resolve({
					currentVersion,
					latestVersion,
					needsUpdate
					needsUpdate,
					highlights
				});
			} catch (error) {
				resolve({
@@ -136,18 +254,29 @@ export async function checkForUpdate(
 */
export function displayUpgradeNotification(
	currentVersion: string,
	latestVersion: string
	latestVersion: string,
	highlights?: string[]
) {
	const message = boxen(
		`${chalk.blue.bold('Update Available!')} ${chalk.dim(currentVersion)} → ${chalk.green(latestVersion)}\n\n` +
			`Auto-updating to the latest version with new features and bug fixes...`,
		{
	let content = `${chalk.blue.bold('Update Available!')} ${chalk.dim(currentVersion)} → ${chalk.green(latestVersion)}`;

	if (highlights && highlights.length > 0) {
		content += '\n\n' + chalk.bold("What's New:");
		for (const highlight of highlights) {
			content += '\n' + chalk.cyan('• ') + highlight;
		}
		content += '\n\n' + 'Auto-updating to the latest version...';
	} else {
		content +=
			'\n\n' +
			'Auto-updating to the latest version with new features and bug fixes...';
	}

	const message = boxen(content, {
		padding: 1,
		margin: { top: 1, bottom: 1 },
		borderColor: 'yellow',
		borderStyle: 'round'
	}
	);
	});

	console.log(message);
}
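
// Hypothetical wiring sketch (illustration only, not part of this diff): how the
// CLI might consume the new highlights flow, assuming checkForUpdate(currentVersion)
// resolves to the UpdateInfo shape declared above.
//
//   async function maybeNotifyUpdate(currentVersion: string) {
//     const info = await checkForUpdate(currentVersion);
//     if (info.needsUpdate) {
//       // highlights is optional; the notification falls back to the generic message
//       displayUpgradeNotification(info.currentVersion, info.latestVersion, info.highlights);
//     }
//   }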
|
||||
|
||||
@@ -6,10 +6,28 @@ description: "Learn how to set up and use Task Master with Cursor AI"
|
||||
## Setting up Cursor AI Integration
|
||||
|
||||
<Check>
|
||||
Task Master is designed to work seamlessly with [Cursor AI](https://www.cursor.so/), providing a structured workflow for AI-driven development.
|
||||
Task Master is designed to work seamlessly with [Cursor AI](https://www.cursor.so/), providing a structured workflow for AI-driven development. As of version 0.28.0, Task Master automatically sets up custom slash commands in Cursor IDE.
|
||||
</Check>
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="Cursor Custom Slash Commands (New in 0.28.0)" icon="terminal">
|
||||
Task Master now automatically configures custom slash commands in Cursor IDE when adding a profile. These commands provide quick access to Task Master functionality:
|
||||
|
||||
### Available Slash Commands
|
||||
- `/tm-list` - View all tasks with their status
|
||||
- `/tm-next` - Get the next available task to work on
|
||||
- `/tm-show` - Show detailed task information
|
||||
- `/tm-add` - Add a new task with AI assistance
|
||||
- `/tm-status` - Set task status (pending, in-progress, done, etc.)
|
||||
- `/tm-expand` - Expand a task into subtasks
|
||||
- `/tm-complexity` - Analyze task complexity
|
||||
|
||||
### Automatic Setup
|
||||
When you run `task-master profiles add cursor`, the slash commands are automatically copied to `.cursor/commands/`. If you remove the profile with `task-master profiles remove cursor`, the commands are cleaned up automatically.
|
||||
|
||||
### Manual Setup
|
||||
If you need to manually set up the commands, they're available in the `assets/claude/commands/` directory of your Task Master installation.
|
||||
</Accordion>
|
||||
<Accordion title="Using Cursor with MCP (Recommended)" icon="sparkles">
|
||||
If you've already set up Task Master with MCP in Cursor, the integration is automatic. You can simply use natural language to interact with Task Master:
|
||||
|
||||
|
||||
@@ -66,3 +66,36 @@ The MCP tools can be categorized in the same way as the core functionalities:
|
||||
- **`use_tag`**: Switches to a different tag.
|
||||
- **`rename_tag`**: Renames a tag.
|
||||
- **`copy_tag`**: Copies a tag.
|
||||
|
||||
## Configuration and Performance
|
||||
|
||||
### Timeout Configuration
|
||||
|
||||
As of version 0.28.0, Task Master automatically configures appropriate timeouts for MCP operations to handle long-running AI tasks. The Roo Code profile now includes a 300-second timeout (increased from the default 60 seconds) to accommodate complex operations like:
|
||||
|
||||
- `parse_prd` - PRD parsing and task generation
|
||||
- `expand_all` - Expanding multiple tasks into subtasks
|
||||
- `analyze_project_complexity` - Project-wide complexity analysis
|
||||
- Research-enabled operations with the `--research` flag
|
||||
|
||||
### MCP Server Configuration
|
||||
|
||||
The recommended MCP server configuration automatically includes these timeout settings:
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"timeout": 300000,
|
||||
"env": {
|
||||
"ANTHROPIC_API_KEY": "your-key-here",
|
||||
"PERPLEXITY_API_KEY": "your-key-here"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
This configuration is automatically generated when using Task Master profiles with Roo Code and other AI coding assistants.
|
||||
apps/docs/capabilities/rpg-method.mdx (new file, 326 lines)
@@ -0,0 +1,326 @@
|
||||
---
|
||||
title: RPG Method for PRD Creation
|
||||
sidebarTitle: "RPG Method"
|
||||
---
|
||||
|
||||
# Repository Planning Graph (RPG) Method
|
||||
|
||||
The RPG (Repository Planning Graph) method is an advanced approach to creating Product Requirements Documents that generate highly-structured, dependency-aware task graphs. It's based on Microsoft Research's methodology for scalable codebase generation.
|
||||
|
||||
## When to Use RPG
|
||||
|
||||
Use the RPG template (`example_prd_rpg.txt`) for:
|
||||
|
||||
- **Complex multi-module systems** with intricate dependencies
|
||||
- **Large-scale codebases** being built from scratch
|
||||
- **Projects requiring explicit architecture** and clear module boundaries
|
||||
- **Teams needing dependency visibility** for parallel development
|
||||
|
||||
For simpler features or smaller projects, the standard `example_prd.txt` template may be more appropriate.
|
||||
|
||||
---
|
||||
|
||||
## Core Principles
|
||||
|
||||
### 1. Dual-Semantics
|
||||
|
||||
Separate **functional** thinking (WHAT) from **structural** thinking (HOW):
|
||||
|
||||
```
|
||||
Functional: "Data Validation capability with schema checking and rule enforcement"
|
||||
↓
|
||||
Structural: "src/validation/ with schema-validator.js and rule-validator.js"
|
||||
```
|
||||
|
||||
This separation prevents mixing concerns and creates clearer module boundaries.
|
||||
|
||||
### 2. Explicit Dependencies
|
||||
|
||||
Never assume dependencies - always state them explicitly:
|
||||
|
||||
```
|
||||
Good:
|
||||
Module: data-ingestion
|
||||
Depends on: [schema-validator, config-manager]
|
||||
|
||||
Bad:
|
||||
Module: data-ingestion
|
||||
(Assumes schema-validator exists somewhere)
|
||||
```
|
||||
|
||||
Explicit dependencies enable:
|
||||
- Topological ordering of implementation
|
||||
- Parallel development of independent modules
|
||||
- Clear build/test order
|
||||
- Early detection of circular dependencies (see the sketch below)
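
Given declarations like the ones above, a tool can derive the build order mechanically. A minimal sketch (a generic depth-first topological sort, not Task Master's actual implementation):

```typescript
type DepGraph = Record<string, string[]>; // module -> modules it depends on

function topoOrder(graph: DepGraph): string[] {
	const order: string[] = [];
	const state = new Map<string, 'visiting' | 'done'>();

	const visit = (mod: string) => {
		if (state.get(mod) === 'done') return;
		if (state.get(mod) === 'visiting') throw new Error(`Circular dependency at ${mod}`);
		state.set(mod, 'visiting');
		for (const dep of graph[mod] ?? []) visit(dep); // build dependencies first
		state.set(mod, 'done');
		order.push(mod);
	};

	Object.keys(graph).forEach(visit);
	return order; // foundation modules (no dependencies) come first
}

// topoOrder({ 'base-types': [], 'schema-validator': ['base-types'], 'data-ingestion': ['schema-validator'] })
// → ['base-types', 'schema-validator', 'data-ingestion']
```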
|
||||
|
||||
### 3. Topological Order
|
||||
|
||||
Build foundation layers before higher layers:
|
||||
|
||||
```
|
||||
Phase 0 (Foundation): error-handling, base-types, config
|
||||
↓
|
||||
Phase 1 (Data): validation, ingestion (depend on Phase 0)
|
||||
↓
|
||||
Phase 2 (Core): algorithms, pipelines (depend on Phase 1)
|
||||
↓
|
||||
Phase 3 (API): routes, handlers (depend on Phase 2)
|
||||
```
|
||||
|
||||
Task Master automatically orders tasks based on this dependency chain.
|
||||
|
||||
### 4. Progressive Refinement
|
||||
|
||||
Start broad, refine iteratively:
|
||||
|
||||
1. High-level capabilities → Main tasks
|
||||
2. Features per capability → Subtasks
|
||||
3. Implementation details → Expanded subtasks
|
||||
|
||||
---
|
||||
|
||||
## Template Structure
|
||||
|
||||
The RPG template guides you through 7 key sections:
|
||||
|
||||
### 1. Overview
|
||||
- Problem statement
|
||||
- Target users
|
||||
- Success metrics
|
||||
|
||||
### 2. Functional Decomposition (WHAT)
|
||||
- High-level capability domains
|
||||
- Features per capability
|
||||
- Inputs/outputs/behavior for each feature
|
||||
|
||||
**Example:**
|
||||
```
|
||||
Capability: Data Management
|
||||
Feature: Schema validation
|
||||
Description: Validate JSON against defined schemas
|
||||
Inputs: JSON object, schema definition
|
||||
Outputs: Validation result + error details
|
||||
Behavior: Iterate fields, check types, enforce constraints
|
||||
```
|
||||
|
||||
### 3. Structural Decomposition (HOW)
|
||||
- Repository folder structure
|
||||
- Module-to-capability mapping
|
||||
- File organization
|
||||
- Public interfaces/exports
|
||||
|
||||
**Example:**
|
||||
```
|
||||
Capability: Data Management
|
||||
→ Maps to: src/data/
|
||||
├── schema-validator.js (Schema validation feature)
|
||||
├── rule-validator.js (Rule validation feature)
|
||||
└── index.js (Exports)
|
||||
```
|
||||
|
||||
### 4. Dependency Graph (CRITICAL)
|
||||
- Foundation layer (no dependencies)
|
||||
- Each subsequent layer's dependencies
|
||||
- Explicit "depends on" declarations
|
||||
|
||||
**Example:**
|
||||
```
|
||||
Foundation Layer (Phase 0):
|
||||
- error-handling: No dependencies
|
||||
- base-types: No dependencies
|
||||
|
||||
Data Layer (Phase 1):
|
||||
- schema-validator: Depends on [base-types, error-handling]
|
||||
- data-ingestion: Depends on [schema-validator]
|
||||
```
|
||||
|
||||
### 5. Implementation Roadmap
|
||||
- Phases with entry/exit criteria
|
||||
- Tasks grouped by phase
|
||||
- Clear deliverables per phase
|
||||
|
||||
### 6. Test Strategy
|
||||
- Test pyramid ratios
|
||||
- Coverage requirements
|
||||
- Critical test scenarios per module
|
||||
- Guidelines for test generation
|
||||
|
||||
### 7. Architecture & Risks
|
||||
- Technical architecture
|
||||
- Data models
|
||||
- Technology decisions
|
||||
- Risk mitigation strategies
|
||||
|
||||
---
|
||||
|
||||
## Using RPG with Task Master
|
||||
|
||||
### Step 1: Create PRD with RPG Template
|
||||
|
||||
Use a code-context-aware tool to fill out the template:
|
||||
|
||||
```bash
|
||||
# In Claude Code, Cursor, or similar
|
||||
"Create a PRD using @.taskmaster/templates/example_prd_rpg.txt for [your project]"
|
||||
```
|
||||
|
||||
**Why code context matters:** The AI needs to understand your existing codebase to make informed decisions about:
|
||||
- Module boundaries
|
||||
- Dependency relationships
|
||||
- Integration points
|
||||
- Naming conventions
|
||||
|
||||
**Recommended tools:**
|
||||
- Claude Code (claude-code CLI)
|
||||
- Cursor/Windsurf
|
||||
- Gemini CLI (large contexts)
|
||||
- Codex/Grok CLI
|
||||
|
||||
### Step 2: Parse PRD into Tasks
|
||||
|
||||
```bash
|
||||
task-master parse-prd .taskmaster/docs/your-prd.txt --research
|
||||
```
|
||||
|
||||
Task Master will:
|
||||
1. Extract capabilities → Main tasks
|
||||
2. Extract features → Subtasks
|
||||
3. Parse dependencies → Task dependencies
|
||||
4. Order by phases → Task priorities
|
||||
|
||||
**Result:** A dependency-aware task graph ready for topological execution.
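
As a rough illustration of that output, the parsed tasks conceptually look like the sketch below (field names are invented here for illustration, not Task Master's exact schema): `dependencies` mirrors the PRD's `Depends on` declarations and `priority` follows the phase ordering.

```typescript
// Conceptual sketch only - not the actual tasks.json schema.
interface ParsedTask {
	id: number;
	title: string; // from "### Capability:" / "#### Feature:" headings
	dependencies: number[]; // from "Depends on: [...]" declarations
	priority: 'high' | 'medium' | 'low'; // from Phase 0, 1, 2, ... ordering
}

const sketch: ParsedTask[] = [
	{ id: 1, title: 'Error handling', dependencies: [], priority: 'high' }, // Phase 0
	{ id: 2, title: 'Schema validator', dependencies: [1], priority: 'medium' }, // Phase 1
	{ id: 3, title: 'Data ingestion', dependencies: [2], priority: 'medium' } // Phase 1
];
```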
|
||||
|
||||
### Step 3: Analyze Complexity
|
||||
|
||||
```bash
|
||||
task-master analyze-complexity --research
|
||||
```
|
||||
|
||||
Review the complexity report to identify tasks that need expansion.
|
||||
|
||||
### Step 4: Expand Tasks
|
||||
|
||||
```bash
|
||||
task-master expand --all --research
|
||||
```
|
||||
|
||||
Break down complex tasks into manageable subtasks while preserving dependency chains.
|
||||
|
||||
---
|
||||
|
||||
## RPG Benefits
|
||||
|
||||
### For Solo Developers
|
||||
- Clear roadmap for implementing complex features
|
||||
- Prevents architectural mistakes early
|
||||
- Explicit dependency tracking avoids integration issues
|
||||
- Enables resuming work after interruptions
|
||||
|
||||
### For Teams
|
||||
- Parallel development of independent modules
|
||||
- Clear contracts between modules (explicit dependencies)
|
||||
- Reduced merge conflicts (proper module boundaries)
|
||||
- Onboarding aid (architectural overview in PRD)
|
||||
|
||||
### For AI Agents
|
||||
- Structured context for code generation
|
||||
- Clear scope boundaries per task
|
||||
- Dependency awareness prevents incomplete implementations
|
||||
- Test strategy guidance for TDD workflows
|
||||
|
||||
---
|
||||
|
||||
## RPG vs Standard Template
|
||||
|
||||
| Aspect | Standard Template | RPG Template |
|
||||
|--------|------------------|--------------|
|
||||
| **Best for** | Simple features | Complex systems |
|
||||
| **Dependency handling** | Implicit | Explicit graph |
|
||||
| **Structure guidance** | Minimal | Step-by-step |
|
||||
| **Examples** | Few | Inline good/bad examples |
|
||||
| **Module boundaries** | Vague | Precise mapping |
|
||||
| **Task ordering** | Manual | Automatic (topological) |
|
||||
| **Learning curve** | Low | Medium |
|
||||
| **Resulting task quality** | Good | Excellent |
|
||||
|
||||
---
|
||||
|
||||
## Tips for Best Results
|
||||
|
||||
### 1. Spend Time on Dependencies
|
||||
The dependency graph section is the most valuable. List all dependencies explicitly, even if they seem obvious.
|
||||
|
||||
### 2. Keep Features Atomic
|
||||
Each feature should be independently testable. If a feature description is vague ("handle data"), break it into specific features.
|
||||
|
||||
### 3. Progressive Refinement
|
||||
Don't try to get everything perfect on the first pass:
|
||||
1. Fill out high-level sections
|
||||
2. Review and refine
|
||||
3. Add detail where needed
|
||||
4. Let `task-master expand` break down complex tasks further
|
||||
|
||||
### 4. Use Research Mode
|
||||
```bash
|
||||
task-master parse-prd --research
|
||||
```
|
||||
The `--research` flag leverages AI to enhance task generation with domain knowledge.
|
||||
|
||||
### 5. Validate Early
|
||||
```bash
|
||||
task-master validate-dependencies
|
||||
```
|
||||
Check for circular dependencies or orphaned modules before starting implementation.
|
||||
|
||||
---
|
||||
|
||||
## Common Pitfalls
|
||||
|
||||
### ❌ Mixing Functional and Structural
|
||||
```
|
||||
Bad: "Capability: validation.js"
|
||||
Good: "Capability: Data Validation" → maps to "src/validation/"
|
||||
```
|
||||
|
||||
### ❌ Vague Module Boundaries
|
||||
```
|
||||
Bad: "Module: utils"
|
||||
Good: "Module: string-utilities" with clear exports
|
||||
```
|
||||
|
||||
### ❌ Implicit Dependencies
|
||||
```
|
||||
Bad: "Module: API handlers (needs validation)"
|
||||
Good: "Module: API handlers, Depends on: [validation, error-handling]"
|
||||
```
|
||||
|
||||
### ❌ Skipping Test Strategy
|
||||
Without test strategy, the AI won't know what to test during implementation.
|
||||
|
||||
---
|
||||
|
||||
## Example Workflow
|
||||
|
||||
1. **Discuss idea with AI**: Explain your project concept
|
||||
2. **Reference RPG template**: Show AI the `example_prd_rpg.txt`
|
||||
3. **Co-create PRD**: Work through each section with AI guidance
|
||||
4. **Save to docs**: Place in `.taskmaster/docs/your-project.txt`
|
||||
5. **Parse PRD**: `task-master parse-prd .taskmaster/docs/your-project.txt --research`
|
||||
6. **Analyze**: `task-master analyze-complexity --research`
|
||||
7. **Expand**: `task-master expand --all --research`
|
||||
8. **Start work**: `task-master next`
|
||||
|
||||
---
|
||||
|
||||
## Further Reading
|
||||
|
||||
- [PRD Creation and Parsing Guide](/getting-started/quick-start/prd-quick)
|
||||
- [Task Structure Documentation](/capabilities/task-structure)
|
||||
- [Microsoft Research RPG Paper](https://arxiv.org/abs/2410.21376) (Original methodology)
|
||||
|
||||
---
|
||||
|
||||
<Tip>
|
||||
The RPG template includes inline `<instruction>` and `<example>` blocks that teach the method as you use it. Read these sections carefully - they provide valuable guidance at each decision point.
|
||||
</Tip>
|
||||
@@ -42,6 +42,20 @@ PERPLEXITY_API_KEY="pplx-your-key-here"
|
||||
OPENAI_API_KEY="sk-proj-your-key-here"
|
||||
```
|
||||
|
||||
### OPENAI_CODEX_API_KEY (New)
|
||||
- **Provider**: Codex CLI (GPT-5 and GPT-5-Codex models)
|
||||
- **Format**: Various formats
|
||||
- **Required**: ❌ **No** (OAuth-first authentication via `codex login`)
|
||||
- **Models**: GPT-5, GPT-5-Codex (272K input / 128K output context)
|
||||
- **Authentication**: Primary authentication is OAuth via `codex login` command
|
||||
- **Features**: Codebase analysis capabilities automatically enabled
|
||||
- **Get Access**: Follow Codex CLI setup instructions
|
||||
|
||||
```bash
|
||||
# Optional - OAuth via 'codex login' is preferred
|
||||
OPENAI_CODEX_API_KEY="your-codex-api-key-here"
|
||||
```
|
||||
|
||||
### GOOGLE_API_KEY
|
||||
- **Provider**: Google Gemini models
|
||||
- **Format**: Various formats
|
||||
@@ -197,16 +211,20 @@ For Claude Code integration, configure keys in `.mcp.json`:
|
||||
"task-master-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"timeout": 300000,
|
||||
"env": {
|
||||
"ANTHROPIC_API_KEY": "your-key-here",
|
||||
"PERPLEXITY_API_KEY": "your-key-here",
|
||||
"OPENAI_API_KEY": "your-key-here"
|
||||
"OPENAI_API_KEY": "your-key-here",
|
||||
"OPENAI_CODEX_API_KEY": "your-codex-key-here"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Note**: The `timeout: 300000` (300 seconds) setting is recommended for long-running AI operations like PRD parsing and complexity analysis.
|
||||
|
||||
## Key Requirements
|
||||
|
||||
### Minimum Requirements
|
||||
|
||||
@@ -32,7 +32,11 @@ The more context you give the model, the better the breakdown and results.
|
||||
|
||||
## Writing a PRD for Task Master
|
||||
|
||||
<Note>An example PRD can be found in .taskmaster/templates/example_prd.txt</Note>
|
||||
<Note>
|
||||
Two example PRD templates are available in `.taskmaster/templates/`:
|
||||
- `example_prd.txt` - Simple template for straightforward projects
|
||||
- `example_prd_rpg.txt` - Advanced RPG (Repository Planning Graph) template for complex projects with dependencies
|
||||
</Note>
|
||||
|
||||
|
||||
You can co-write your PRD with an LLM model using the following workflow:
|
||||
@@ -43,6 +47,29 @@ You can co-write your PRD with an LLM model using the following workflow:
|
||||
|
||||
This approach works great in Cursor, or anywhere you use a chat-based LLM.
|
||||
|
||||
### Choosing Between Templates
|
||||
|
||||
**Use `example_prd.txt` when:**
|
||||
- Building straightforward features
|
||||
- Working on smaller projects
|
||||
- Dependencies are simple and obvious
|
||||
|
||||
**Use `example_prd_rpg.txt` when:**
|
||||
- Building complex systems with multiple modules
|
||||
- Need explicit dependency management
|
||||
- Want structured guidance on architecture decisions
|
||||
- Planning a large codebase from scratch
|
||||
|
||||
The RPG template teaches you to think about:
|
||||
1. **Functional decomposition** (WHAT the system does)
|
||||
2. **Structural decomposition** (HOW it's organized in code)
|
||||
3. **Explicit dependencies** (WHAT depends on WHAT)
|
||||
4. **Topological ordering** (build foundation first, then layers)
|
||||
|
||||
<Tip>
|
||||
For complex projects, using the RPG template with a code-context-aware AI agent produces the best results because the AI can understand your existing codebase structure. [Learn more about the RPG method →](/capabilities/rpg-method)
|
||||
</Tip>
|
||||
|
||||
---
|
||||
|
||||
## Where to Save Your PRD
|
||||
|
||||
@@ -3,4 +3,58 @@ title: "What's New"
|
||||
sidebarTitle: "What's New"
|
||||
---
|
||||
|
||||
An easy way to see the latest releases
|
||||
# What's New in Task Master
|
||||
|
||||
## Version 0.28.0 - Latest Release
|
||||
|
||||
### 🚀 New Features
|
||||
|
||||
#### Codex CLI Provider Support
|
||||
- Added support for GPT-5 and GPT-5-Codex models via Codex CLI
|
||||
- OAuth-first authentication - no API key required (just run `codex login`)
|
||||
- 272K input / 128K output context windows
|
||||
- Codebase analysis capabilities automatically enabled
|
||||
- Optional `OPENAI_CODEX_API_KEY` support for API-based usage
|
||||
|
||||
#### Cursor IDE Slash Commands
|
||||
- Automatically installs custom slash commands when adding Cursor profile
|
||||
- Quick access to Task Master functionality directly in Cursor IDE
|
||||
- Commands include `/tm-list`, `/tm-next`, `/tm-show`, `/tm-add`, and more
|
||||
- Automatic cleanup when removing Cursor profile
|
||||
|
||||
#### Enhanced MCP Configuration
|
||||
- Improved timeout configuration for long-running AI operations
|
||||
- 300-second timeout for complex tasks like PRD parsing and task expansion
|
||||
- Programmatic MCP configuration generation for better reliability
|
||||
- Enhanced support for Roo Code and other AI coding assistants
|
||||
|
||||
### 🔧 Technical Improvements
|
||||
|
||||
#### AI SDK v5 Migration
|
||||
- Migrated to AI SDK v5 for better compatibility with modern AI providers
|
||||
- Improved support for Claude Code and Gemini CLI integration
|
||||
- Better structured output generation with `generateObject`
|
||||
- Enhanced JSON mode support for supported providers
|
||||
|
||||
#### Structured Data Generation
|
||||
- All AI services now use `generateObject` for more reliable responses
|
||||
- Integrated Zod schemas for automatic validation
|
||||
- Reduced parsing errors and improved data consistency
|
||||
- Better subtask ID generation (fixes numbering inconsistencies)
|
||||
|
||||
### 🐛 Bug Fixes
|
||||
- Fixed MCP connection errors from deprecated function calls
|
||||
- Fixed MCP server errors when file parameters not provided
|
||||
- Corrected complexity scores display in `task-master show` and `list`
|
||||
- Fixed Perplexity Sonar Deep Research model naming
|
||||
- Improved parent task status handling when all subtasks are pending
|
||||
|
||||
### 📖 Documentation Updates
|
||||
- New comprehensive API keys configuration guide
|
||||
- Enhanced Cursor IDE integration documentation
|
||||
- Improved MCP timeout configuration documentation
|
||||
- Updated examples and configuration snippets
|
||||
|
||||
---
|
||||
|
||||
For detailed release notes and full changelog, visit our [GitHub releases page](https://github.com/eyaltoledano/claude-task-master/releases).
|
||||
assets/example_prd_rpg.txt (new file, 511 lines)
@@ -0,0 +1,511 @@
|
||||
<rpg-method>
|
||||
# Repository Planning Graph (RPG) Method - PRD Template
|
||||
|
||||
This template teaches you (AI or human) how to create structured, dependency-aware PRDs using the RPG methodology from Microsoft Research. The key insight: separate WHAT (functional) from HOW (structural), then connect them with explicit dependencies.
|
||||
|
||||
## Core Principles
|
||||
|
||||
1. **Dual-Semantics**: Think functional (capabilities) AND structural (code organization) separately, then map them
|
||||
2. **Explicit Dependencies**: Never assume - always state what depends on what
|
||||
3. **Topological Order**: Build foundation first, then layers on top
|
||||
4. **Progressive Refinement**: Start broad, refine iteratively
|
||||
|
||||
## How to Use This Template
|
||||
|
||||
- Follow the instructions in each `<instruction>` block
|
||||
- Look at `<example>` blocks to see good vs bad patterns
|
||||
- Fill in the content sections with your project details
|
||||
- The AI reading this will learn the RPG method by following along
|
||||
- Task Master will parse the resulting PRD into dependency-aware tasks
|
||||
|
||||
## Recommended Tools for Creating PRDs
|
||||
|
||||
When using this template to **create** a PRD (not parse it), use **code-context-aware AI assistants** for best results:
|
||||
|
||||
**Why?** The AI needs to understand your existing codebase to make good architectural decisions about modules, dependencies, and integration points.
|
||||
|
||||
**Recommended tools:**
|
||||
- **Claude Code** (claude-code CLI) - Best for structured reasoning and large contexts
|
||||
- **Cursor/Windsurf** - IDE integration with full codebase context
|
||||
- **Gemini CLI** (gemini-cli) - Massive context window for large codebases
|
||||
- **Codex/Grok CLI** - Strong code generation with context awareness
|
||||
|
||||
**Note:** Once your PRD is created, `task-master parse-prd` works with any configured AI model - it just needs to read the PRD text itself, not your codebase.
|
||||
</rpg-method>
|
||||
|
||||
---
|
||||
|
||||
<overview>
|
||||
<instruction>
|
||||
Start with the problem, not the solution. Be specific about:
|
||||
- What pain point exists?
|
||||
- Who experiences it?
|
||||
- Why existing solutions don't work?
|
||||
- What success looks like (measurable outcomes)?
|
||||
|
||||
Keep this section focused - don't jump into implementation details yet.
|
||||
</instruction>
|
||||
|
||||
## Problem Statement
|
||||
[Describe the core problem. Be concrete about user pain points.]
|
||||
|
||||
## Target Users
|
||||
[Define personas, their workflows, and what they're trying to achieve.]
|
||||
|
||||
## Success Metrics
|
||||
[Quantifiable outcomes. Examples: "80% task completion via autopilot", "< 5% manual intervention rate"]
|
||||
|
||||
</overview>
|
||||
|
||||
---
|
||||
|
||||
<functional-decomposition>
|
||||
<instruction>
|
||||
Now think about CAPABILITIES (what the system DOES), not code structure yet.
|
||||
|
||||
Step 1: Identify high-level capability domains
|
||||
- Think: "What major things does this system do?"
|
||||
- Examples: Data Management, Core Processing, Presentation Layer
|
||||
|
||||
Step 2: For each capability, enumerate specific features
|
||||
- Use explore-exploit strategy:
|
||||
* Exploit: What features are REQUIRED for core value?
|
||||
* Explore: What features make this domain COMPLETE?
|
||||
|
||||
Step 3: For each feature, define:
|
||||
- Description: What it does in one sentence
|
||||
- Inputs: What data/context it needs
|
||||
- Outputs: What it produces/returns
|
||||
- Behavior: Key logic or transformations
|
||||
|
||||
<example type="good">
|
||||
Capability: Data Validation
|
||||
Feature: Schema validation
|
||||
- Description: Validate JSON payloads against defined schemas
|
||||
- Inputs: JSON object, schema definition
|
||||
- Outputs: Validation result (pass/fail) + error details
|
||||
- Behavior: Iterate fields, check types, enforce constraints
|
||||
|
||||
Feature: Business rule validation
|
||||
- Description: Apply domain-specific validation rules
|
||||
- Inputs: Validated data object, rule set
|
||||
- Outputs: Boolean + list of violated rules
|
||||
- Behavior: Execute rules sequentially, short-circuit on failure
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
Capability: validation.js
|
||||
(Problem: This is a FILE, not a CAPABILITY. Mixing structure into functional thinking.)
|
||||
|
||||
Capability: Validation
|
||||
Feature: Make sure data is good
|
||||
(Problem: Too vague. No inputs/outputs. Not actionable.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Capability Tree
|
||||
|
||||
### Capability: [Name]
|
||||
[Brief description of what this capability domain covers]
|
||||
|
||||
#### Feature: [Name]
|
||||
- **Description**: [One sentence]
|
||||
- **Inputs**: [What it needs]
|
||||
- **Outputs**: [What it produces]
|
||||
- **Behavior**: [Key logic]
|
||||
|
||||
#### Feature: [Name]
|
||||
- **Description**:
|
||||
- **Inputs**:
|
||||
- **Outputs**:
|
||||
- **Behavior**:
|
||||
|
||||
### Capability: [Name]
|
||||
...
|
||||
|
||||
</functional-decomposition>
|
||||
|
||||
---
|
||||
|
||||
<structural-decomposition>
|
||||
<instruction>
|
||||
NOW think about code organization. Map capabilities to actual file/folder structure.
|
||||
|
||||
Rules:
|
||||
1. Each capability maps to a module (folder or file)
|
||||
2. Features within a capability map to functions/classes
|
||||
3. Use clear module boundaries - each module has ONE responsibility
|
||||
4. Define what each module exports (public interface)
|
||||
|
||||
The goal: Create a clear mapping between "what it does" (functional) and "where it lives" (structural).
|
||||
|
||||
<example type="good">
|
||||
Capability: Data Validation
|
||||
→ Maps to: src/validation/
|
||||
├── schema-validator.js (Schema validation feature)
|
||||
├── rule-validator.js (Business rule validation feature)
|
||||
└── index.js (Public exports)
|
||||
|
||||
Exports:
|
||||
- validateSchema(data, schema)
|
||||
- validateRules(data, rules)
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
Capability: Data Validation
|
||||
→ Maps to: src/utils.js
|
||||
(Problem: "utils" is not a clear module boundary. Where do I find validation logic?)
|
||||
|
||||
Capability: Data Validation
|
||||
→ Maps to: src/validation/everything.js
|
||||
(Problem: One giant file. Features should map to separate files for maintainability.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Repository Structure
|
||||
|
||||
```
|
||||
project-root/
|
||||
├── src/
|
||||
│ ├── [module-name]/ # Maps to: [Capability Name]
|
||||
│ │ ├── [file].js # Maps to: [Feature Name]
|
||||
│ │ └── index.js # Public exports
|
||||
│ └── [module-name]/
|
||||
├── tests/
|
||||
└── docs/
|
||||
```
|
||||
|
||||
## Module Definitions
|
||||
|
||||
### Module: [Name]
|
||||
- **Maps to capability**: [Capability from functional decomposition]
|
||||
- **Responsibility**: [Single clear purpose]
|
||||
- **File structure**:
|
||||
```
|
||||
module-name/
|
||||
├── feature1.js
|
||||
├── feature2.js
|
||||
└── index.js
|
||||
```
|
||||
- **Exports**:
|
||||
- `functionName()` - [what it does]
|
||||
- `ClassName` - [what it does]
|
||||
|
||||
</structural-decomposition>
|
||||
|
||||
---
|
||||
|
||||
<dependency-graph>
|
||||
<instruction>
|
||||
This is THE CRITICAL SECTION for Task Master parsing.
|
||||
|
||||
Define explicit dependencies between modules. This creates the topological order for task execution.
|
||||
|
||||
Rules:
|
||||
1. List modules in dependency order (foundation first)
|
||||
2. For each module, state what it depends on
|
||||
3. Foundation modules should have NO dependencies
|
||||
4. Every non-foundation module should depend on at least one other module
|
||||
5. Think: "What must EXIST before I can build this module?"
|
||||
|
||||
<example type="good">
|
||||
Foundation Layer (no dependencies):
|
||||
- error-handling: No dependencies
|
||||
- config-manager: No dependencies
|
||||
- base-types: No dependencies
|
||||
|
||||
Data Layer:
|
||||
- schema-validator: Depends on [base-types, error-handling]
|
||||
- data-ingestion: Depends on [schema-validator, config-manager]
|
||||
|
||||
Core Layer:
|
||||
- algorithm-engine: Depends on [base-types, error-handling]
|
||||
- pipeline-orchestrator: Depends on [algorithm-engine, data-ingestion]
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
- validation: Depends on API
|
||||
- API: Depends on validation
|
||||
(Problem: Circular dependency. This will cause build/runtime issues.)
|
||||
|
||||
- user-auth: Depends on everything
|
||||
(Problem: Too many dependencies. Should be more focused.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Dependency Chain
|
||||
|
||||
### Foundation Layer (Phase 0)
|
||||
No dependencies - these are built first.
|
||||
|
||||
- **[Module Name]**: [What it provides]
|
||||
- **[Module Name]**: [What it provides]
|
||||
|
||||
### [Layer Name] (Phase 1)
|
||||
- **[Module Name]**: Depends on [[module-from-phase-0], [module-from-phase-0]]
|
||||
- **[Module Name]**: Depends on [[module-from-phase-0]]
|
||||
|
||||
### [Layer Name] (Phase 2)
|
||||
- **[Module Name]**: Depends on [[module-from-phase-1], [module-from-foundation]]
|
||||
|
||||
[Continue building up layers...]
|
||||
|
||||
</dependency-graph>

---

<implementation-roadmap>
<instruction>
Turn the dependency graph into concrete development phases.

Each phase should:
1. Have clear entry criteria (what must exist before starting)
2. Contain tasks that can be parallelized (no inter-dependencies within phase)
3. Have clear exit criteria (how do we know phase is complete?)
4. Build toward something USABLE (not just infrastructure)

Phase ordering follows the topological sort of the dependency graph.

<example type="good">
Phase 0: Foundation
Entry: Clean repository
Tasks:
- Implement error handling utilities
- Create base type definitions
- Set up configuration system
Exit: Other modules can import foundation without errors

Phase 1: Data Layer
Entry: Phase 0 complete
Tasks:
- Implement schema validator (uses: base types, error handling)
- Build data ingestion pipeline (uses: validator, config)
Exit: End-to-end data flow from input to validated output
</example>

<example type="bad">
Phase 1: Build Everything
Tasks:
- API
- Database
- UI
- Tests
(Problem: No clear focus. Too broad. Dependencies not considered.)
</example>
</instruction>

## Development Phases

### Phase 0: [Foundation Name]
**Goal**: [What foundational capability this establishes]

**Entry Criteria**: [What must be true before starting]

**Tasks**:
- [ ] [Task name] (depends on: [none or list])
  - Acceptance criteria: [How we know it's done]
  - Test strategy: [What tests prove it works]

- [ ] [Task name] (depends on: [none or list])

**Exit Criteria**: [Observable outcome that proves phase complete]

**Delivers**: [What can users/developers do after this phase?]

---

### Phase 1: [Layer Name]
**Goal**:

**Entry Criteria**: Phase 0 complete

**Tasks**:
- [ ] [Task name] (depends on: [[tasks-from-phase-0]])
- [ ] [Task name] (depends on: [[tasks-from-phase-0]])

**Exit Criteria**:

**Delivers**:

---

[Continue with more phases...]

</implementation-roadmap>

---

<test-strategy>
<instruction>
Define how testing will be integrated throughout development (TDD approach).

Specify:
1. Test pyramid ratios (unit vs integration vs e2e)
2. Coverage requirements
3. Critical test scenarios
4. Test generation guidelines for Surgical Test Generator

This section guides the AI when generating tests during the RED phase of TDD.

<example type="good">
Critical Test Scenarios for Data Validation module:
- Happy path: Valid data passes all checks
- Edge cases: Empty strings, null values, boundary numbers
- Error cases: Invalid types, missing required fields
- Integration: Validator works with ingestion pipeline
</example>
</instruction>

## Test Pyramid

```
        /\
       /E2E\           ← [X]% (End-to-end, slow, comprehensive)
      /------\
    /Integration\      ← [Y]% (Module interactions)
   /------------\
  / Unit Tests \       ← [Z]% (Fast, isolated, deterministic)
 /----------------\
```

## Coverage Requirements
- Line coverage: [X]% minimum
- Branch coverage: [X]% minimum
- Function coverage: [X]% minimum
- Statement coverage: [X]% minimum
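
If these minimums are enforced automatically, they can be encoded in the test runner configuration. A minimal sketch for Jest (assuming Jest is the runner; the numbers are placeholders to replace with the values above):

```js
// jest.config.js (excerpt) - fail the run when coverage drops below the agreed minimums
export default {
	collectCoverage: true,
	coverageThreshold: {
		global: {
			lines: 80,
			branches: 70,
			functions: 80,
			statements: 80
		}
	}
};
```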

## Critical Test Scenarios

### [Module/Feature Name]
**Happy path**:
- [Scenario description]
- Expected: [What should happen]

**Edge cases**:
- [Scenario description]
- Expected: [What should happen]

**Error cases**:
- [Scenario description]
- Expected: [How system handles failure]

**Integration points**:
- [What interactions to test]
- Expected: [End-to-end behavior]

## Test Generation Guidelines
[Specific instructions for Surgical Test Generator about what to focus on, what patterns to follow, project-specific test conventions]

</test-strategy>

---

<architecture>
<instruction>
Describe technical architecture, data models, and key design decisions.

Keep this section AFTER functional/structural decomposition - implementation details come after understanding structure.
</instruction>

## System Components
[Major architectural pieces and their responsibilities]

## Data Models
[Core data structures, schemas, database design]

## Technology Stack
[Languages, frameworks, key libraries]

**Decision: [Technology/Pattern]**
- **Rationale**: [Why chosen]
- **Trade-offs**: [What we're giving up]
- **Alternatives considered**: [What else we looked at]

</architecture>

---

<risks>
<instruction>
Identify risks that could derail development and how to mitigate them.

Categories:
- Technical risks (complexity, unknowns)
- Dependency risks (blocking issues)
- Scope risks (creep, underestimation)
</instruction>

## Technical Risks
**Risk**: [Description]
- **Impact**: [High/Medium/Low - effect on project]
- **Likelihood**: [High/Medium/Low]
- **Mitigation**: [How to address]
- **Fallback**: [Plan B if mitigation fails]

## Dependency Risks
[External dependencies, blocking issues]

## Scope Risks
[Scope creep, underestimation, unclear requirements]

</risks>

---

<appendix>
## References
[Papers, documentation, similar systems]

## Glossary
[Domain-specific terms]

## Open Questions
[Things to resolve during development]
</appendix>

---

<task-master-integration>
# How Task Master Uses This PRD

When you run `task-master parse-prd <file>.txt`, the parser:

1. **Extracts capabilities** → Main tasks
   - Each `### Capability:` becomes a top-level task

2. **Extracts features** → Subtasks
   - Each `#### Feature:` becomes a subtask under its capability

3. **Parses dependencies** → Task dependencies
   - `Depends on: [X, Y]` sets task.dependencies = ["X", "Y"]

4. **Orders by phases** → Task priorities
   - Phase 0 tasks = highest priority
   - Phase N tasks = lower priority, properly sequenced

5. **Uses test strategy** → Test generation context
   - Feeds test scenarios to Surgical Test Generator during implementation

**Result**: A dependency-aware task graph that can be executed in topological order.
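
For example, a "Data Validation" capability with two features and one dependency might come back from `parse-prd` roughly like this (a simplified sketch of the task structure; the exact fields Task Master writes may differ):

```json
{
	"id": 2,
	"title": "Data Validation",
	"status": "pending",
	"priority": "high",
	"dependencies": [1],
	"subtasks": [
		{ "id": 1, "title": "Schema validator", "status": "pending", "dependencies": [] },
		{ "id": 2, "title": "Data ingestion pipeline", "status": "pending", "dependencies": [1] }
	]
}
```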

## Why RPG Structure Matters

Traditional flat PRDs lead to:
- ❌ Unclear task dependencies
- ❌ Arbitrary task ordering
- ❌ Circular dependencies discovered late
- ❌ Poorly scoped tasks

RPG-structured PRDs provide:
- ✅ Explicit dependency chains
- ✅ Topological execution order
- ✅ Clear module boundaries
- ✅ Validated task graph before implementation

## Tips for Best Results

1. **Spend time on the dependency graph** - This is the most valuable section for Task Master
2. **Keep features atomic** - Each feature should be independently testable
3. **Progressive refinement** - Start broad, use `task-master expand` to break down complex tasks
4. **Use research mode** - `task-master parse-prd --research` leverages AI for better task generation
</task-master-integration>
60	output.txt	Normal file
File diff suppressed because one or more lines are too long
@@ -118,7 +118,13 @@
 	"bugs": {
 		"url": "https://github.com/eyaltoledano/claude-task-master/issues"
 	},
-	"files": ["dist/**", "README-task-master.md", "README.md", "LICENSE"],
+	"files": [
+		"dist/**",
+		"README-task-master.md",
+		"README.md",
+		"LICENSE",
+		"CHANGELOG.md"
+	],
 	"overrides": {
 		"node-fetch": "^2.6.12",
 		"whatwg-url": "^11.0.0"

@@ -628,6 +628,12 @@ function createProjectStructure(
 	// Copy example_prd.txt to NEW location
 	copyTemplateFile('example_prd.txt', path.join(targetDir, EXAMPLE_PRD_FILE));
 
+	// Copy example_prd_rpg.txt to templates directory
+	copyTemplateFile(
+		'example_prd_rpg.txt',
+		path.join(targetDir, TASKMASTER_TEMPLATES_DIR, 'example_prd_rpg.txt')
+	);
+
 	// Initialize git repository if git is available
 	try {
 		if (initGit === false) {
@@ -856,10 +862,10 @@ function createProjectStructure(
 )}\n${chalk.white(' ├─ ')}${chalk.dim('Models: Use `task-master models` commands')}\n${chalk.white(' └─ ')}${chalk.dim(
 	'Keys: Add provider API keys to .env (or inside the MCP config file i.e. .cursor/mcp.json)'
 )}\n${chalk.white('2. ')}${chalk.yellow(
-	'Discuss your idea with AI and ask for a PRD using example_prd.txt, and save it to scripts/PRD.txt'
-)}\n${chalk.white('3. ')}${chalk.yellow(
+	'Discuss your idea with AI and ask for a PRD, and save it to .taskmaster/docs/prd.txt'
+)}\n${chalk.white(' ├─ ')}${chalk.dim('Simple projects: Use ')}${chalk.cyan('example_prd.txt')}${chalk.dim(' template')}\n${chalk.white(' └─ ')}${chalk.dim('Complex systems: Use ')}${chalk.cyan('example_prd_rpg.txt')}${chalk.dim(' template (for dependency-aware task graphs)')}\n${chalk.white('3. ')}${chalk.yellow(
 	'Ask Cursor Agent (or run CLI) to parse your PRD and generate initial tasks:'
-)}\n${chalk.white(' └─ ')}${chalk.dim('MCP Tool: ')}${chalk.cyan('parse_prd')}${chalk.dim(' | CLI: ')}${chalk.cyan('task-master parse-prd scripts/prd.txt')}\n${chalk.white('4. ')}${chalk.yellow(
+)}\n${chalk.white(' └─ ')}${chalk.dim('MCP Tool: ')}${chalk.cyan('parse_prd')}${chalk.dim(' | CLI: ')}${chalk.cyan('task-master parse-prd .taskmaster/docs/prd.txt')}\n${chalk.white('4. ')}${chalk.yellow(
 	'Ask Cursor to analyze the complexity of the tasks in your PRD using research'
 )}\n${chalk.white(' └─ ')}${chalk.dim('MCP Tool: ')}${chalk.cyan('analyze_project_complexity')}${chalk.dim(' | CLI: ')}${chalk.cyan('task-master analyze-complexity')}\n${chalk.white('5. ')}${chalk.yellow(
 	'Ask Cursor to expand all of your tasks using the complexity analysis'

@@ -5111,7 +5111,8 @@ async function runCLI(argv = process.argv) {
 			// Display the upgrade notification first
 			displayUpgradeNotification(
 				updateInfo.currentVersion,
-				updateInfo.latestVersion
+				updateInfo.latestVersion,
+				updateInfo.highlights
 			);
 
 			// Then automatically perform the update

@@ -312,18 +312,23 @@ async function removeDependency(tasksPath, taskId, dependencyId, context = {}) {

	// Check if the dependency exists by comparing string representations
	const dependencyIndex = targetTask.dependencies.findIndex((dep) => {
		// Convert both to strings for comparison
		let depStr = String(dep);

		// Special handling for numeric IDs that might be subtask references
		if (typeof dep === 'number' && dep < 100 && isSubtask) {
			// It's likely a reference to another subtask in the same parent task
			// Convert to full format for comparison (e.g., 2 -> "1.2" for a subtask in task 1)
			const [parentId] = formattedTaskId.split('.');
			depStr = `${parentId}.${dep}`;
		// Direct string comparison (handles both numeric IDs and dot notation)
		const depStr = String(dep);
		if (depStr === normalizedDependencyId) {
			return true;
		}

		return depStr === normalizedDependencyId;
		// For subtasks: handle numeric dependencies that might be references to other subtasks
		// in the same parent (e.g., subtask 1.2 depending on subtask 1.1 stored as just "1")
		if (typeof dep === 'number' && dep < 100 && isSubtask) {
			const [parentId] = formattedTaskId.split('.');
			const fullSubtaskRef = `${parentId}.${dep}`;
			if (fullSubtaskRef === normalizedDependencyId) {
				return true;
			}
		}

		return false;
	});

	if (dependencyIndex === -1) {
@@ -396,8 +401,9 @@ function isCircularDependency(tasks, taskId, chain = []) {
 			task = parentTask.subtasks.find((st) => st.id === subtaskId);
 		}
 	} else {
-		// Regular task
-		task = tasks.find((t) => String(t.id) === taskIdStr);
+		// Regular task - handle both string and numeric task IDs
+		const taskIdNum = parseInt(taskIdStr, 10);
+		task = tasks.find((t) => t.id === taskIdNum || String(t.id) === taskIdStr);
 	}
 
 	if (!task) {

@@ -1,14 +0,0 @@
-{
-	"tasks": [
-		{
-			"id": 1,
-			"dependencies": [],
-			"subtasks": [
-				{
-					"id": 1,
-					"dependencies": []
-				}
-			]
-		}
-	]
-}
35	tests/fixtures/sample-tasks.js	vendored
@@ -88,3 +88,38 @@ export const emptySampleTasks = {
	},
	tasks: []
};

export const crossLevelDependencyTasks = {
	tasks: [
		{
			id: 2,
			title: 'Task 2 with subtasks',
			description: 'Parent task',
			status: 'pending',
			dependencies: [],
			subtasks: [
				{
					id: 1,
					title: 'Subtask 2.1',
					description: 'First subtask',
					status: 'pending',
					dependencies: []
				},
				{
					id: 2,
					title: 'Subtask 2.2',
					description: 'Second subtask that should depend on Task 11',
					status: 'pending',
					dependencies: []
				}
			]
		},
		{
			id: 11,
			title: 'Task 11',
			description: 'Top-level task that 2.2 should depend on',
			status: 'done',
			dependencies: []
		}
	]
};

@@ -279,12 +279,14 @@ describe('Version comparison utility', () => {
 
 describe('Update check functionality', () => {
 	let displayUpgradeNotification;
+	let parseChangelogHighlights;
 	let consoleLogSpy;
 
 	beforeAll(async () => {
 		// Import from @tm/cli instead of commands.js
 		const cliModule = await import('../../apps/cli/src/utils/auto-update.js');
 		displayUpgradeNotification = cliModule.displayUpgradeNotification;
+		parseChangelogHighlights = cliModule.parseChangelogHighlights;
 	});
 
 	beforeEach(() => {
@@ -302,6 +304,61 @@ describe('Update check functionality', () => {
		expect(consoleLogSpy.mock.calls[0][0]).toContain('1.0.0');
		expect(consoleLogSpy.mock.calls[0][0]).toContain('1.1.0');
	});

	test('displays upgrade notification with highlights when provided', () => {
		const highlights = [
			'Add Codex CLI provider with OAuth authentication',
			'Cursor IDE custom slash command support',
			'Move to AI SDK v5'
		];
		displayUpgradeNotification('1.0.0', '1.1.0', highlights);
		expect(consoleLogSpy).toHaveBeenCalled();
		const output = consoleLogSpy.mock.calls[0][0];
		expect(output).toContain('Update Available!');
		expect(output).toContain('1.0.0');
		expect(output).toContain('1.1.0');
		expect(output).toContain("What's New:");
		expect(output).toContain(
			'Add Codex CLI provider with OAuth authentication'
		);
		expect(output).toContain('Cursor IDE custom slash command support');
		expect(output).toContain('Move to AI SDK v5');
	});

	test('displays upgrade notification without highlights section when empty array', () => {
		displayUpgradeNotification('1.0.0', '1.1.0', []);
		expect(consoleLogSpy).toHaveBeenCalled();
		const output = consoleLogSpy.mock.calls[0][0];
		expect(output).toContain('Update Available!');
		expect(output).not.toContain("What's New:");
		expect(output).toContain(
			'Auto-updating to the latest version with new features and bug fixes'
		);
	});

	test('parseChangelogHighlights validates version format to prevent ReDoS', () => {
		const mockChangelog = `
## 1.0.0

### Minor Changes

- [#123](https://example.com) Thanks [@user](https://example.com)! - Test feature
`;

		// Valid versions should work
		expect(parseChangelogHighlights(mockChangelog, '1.0.0')).toEqual([
			'Test feature'
		]);
		expect(parseChangelogHighlights(mockChangelog, '1.0.0-rc.1')).toEqual([]);

		// Invalid versions should return empty array (ReDoS protection)
		expect(parseChangelogHighlights(mockChangelog, 'invalid')).toEqual([]);
		expect(parseChangelogHighlights(mockChangelog, '1.0')).toEqual([]);
		expect(parseChangelogHighlights(mockChangelog, 'a.b.c')).toEqual([]);
		expect(
			parseChangelogHighlights(mockChangelog, '((((((((((((((((((((((((((((((a')
		).toEqual([]);
	});
});
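
For context, the contract these tests pin down could be satisfied by something like the following sketch (illustrative only, not the actual implementation in apps/cli/src/utils/auto-update.js): reject anything that is not a plain x.y.z version before it is ever interpolated into a regular expression, then pull the bullet text out of that version's changelog section.

```js
// Hypothetical changelog-highlights parser consistent with the tests above.
function parseChangelogHighlights(changelog, version) {
	// ReDoS guard: only a plain x.y.z version may be used to build a RegExp.
	if (!/^\d+\.\d+\.\d+$/.test(version)) return [];

	const escaped = version.replace(/\./g, '\\.');
	const section = changelog.match(
		new RegExp(`## ${escaped}\\n([\\s\\S]*?)(?=\\n## |$)`)
	);
	if (!section) return [];

	// Changeset bullets end with "! - <highlight>"; keep only the highlight text.
	return [...section[1].matchAll(/^- .*! - (.+)$/gm)].map((m) => m[1].trim());
}
```

The important part for these tests is the early validation step: the version string is treated as data and checked before it is ever used to build a pattern.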

// -----------------------------------------------------------------------------

@@ -4,18 +4,47 @@

import { jest } from '@jest/globals';
import {
	validateTaskDependencies,
	isCircularDependency,
	removeDuplicateDependencies,
	cleanupSubtaskDependencies,
	ensureAtLeastOneIndependentSubtask,
	validateAndFixDependencies,
	canMoveWithDependencies
} from '../../scripts/modules/dependency-manager.js';
import * as utils from '../../scripts/modules/utils.js';
import { sampleTasks } from '../fixtures/sample-tasks.js';
	sampleTasks,
	crossLevelDependencyTasks
} from '../fixtures/sample-tasks.js';

// Create mock functions that we can control in tests
const mockTaskExists = jest.fn();
const mockFormatTaskId = jest.fn();
const mockFindCycles = jest.fn();
const mockLog = jest.fn();
const mockReadJSON = jest.fn();
const mockWriteJSON = jest.fn();

// Mock the utils module using the same pattern as move-task-cross-tag.test.js
jest.mock('../../scripts/modules/utils.js', () => ({
	log: mockLog,
	readJSON: mockReadJSON,
	writeJSON: mockWriteJSON,
	taskExists: mockTaskExists,
	formatTaskId: mockFormatTaskId,
	findCycles: mockFindCycles,
	traverseDependencies: jest.fn(() => []),
	isSilentMode: jest.fn(() => true),
	findProjectRoot: jest.fn(() => '/test'),
	resolveEnvVariable: jest.fn(() => undefined),
	isEmpty: jest.fn((v) =>
		v == null
			? true
			: Array.isArray(v)
				? v.length === 0
				: typeof v === 'object'
					? Object.keys(v).length === 0
					: false
	),
	// Common extras
	enableSilentMode: jest.fn(),
	disableSilentMode: jest.fn(),
	getTaskManager: jest.fn(async () => ({})),
	getTagAwareFilePath: jest.fn((basePath, _tag, projectRoot = '.') => basePath),
	readComplexityReport: jest.fn(() => null)
}));

// Mock dependencies
jest.mock('path');
jest.mock('chalk', () => ({
	green: jest.fn((text) => `<green>${text}</green>`),
@@ -27,22 +56,16 @@ jest.mock('chalk', () => ({

jest.mock('boxen', () => jest.fn((text) => `[boxed: ${text}]`));

// Mock utils module
const mockTaskExists = jest.fn();
const mockFormatTaskId = jest.fn();
const mockFindCycles = jest.fn();
const mockLog = jest.fn();
const mockReadJSON = jest.fn();
const mockWriteJSON = jest.fn();

jest.mock('../../scripts/modules/utils.js', () => ({
	log: mockLog,
	readJSON: mockReadJSON,
	writeJSON: mockWriteJSON,
	taskExists: mockTaskExists,
	formatTaskId: mockFormatTaskId,
	findCycles: mockFindCycles
}));
// Now import SUT after mocks are in place
import {
	validateTaskDependencies,
	isCircularDependency,
	removeDuplicateDependencies,
	cleanupSubtaskDependencies,
	ensureAtLeastOneIndependentSubtask,
	validateAndFixDependencies,
	canMoveWithDependencies
} from '../../scripts/modules/dependency-manager.js';

jest.mock('../../scripts/modules/ui.js', () => ({
	displayBanner: jest.fn()
@@ -52,8 +75,8 @@ jest.mock('../../scripts/modules/task-manager.js', () => ({
	generateTaskFiles: jest.fn()
}));

-// Create a path for test files
-const TEST_TASKS_PATH = 'tests/fixture/test-tasks.json';
+// Use a temporary path for test files - Jest will clean up the temp directory
+const TEST_TASKS_PATH = '/tmp/jest-test-tasks.json';

describe('Dependency Manager Module', () => {
	beforeEach(() => {
@@ -684,6 +707,8 @@ describe('Dependency Manager Module', () => {
			// IMPORTANT: Verify no calls to writeJSON with actual tasks.json
			expect(mockWriteJSON).not.toHaveBeenCalledWith(
				'tasks/tasks.json',
				expect.anything(),
				expect.anything(),
				expect.anything()
			);
		});
@@ -737,6 +762,8 @@ describe('Dependency Manager Module', () => {
			// IMPORTANT: Verify no calls to writeJSON with actual tasks.json
			expect(mockWriteJSON).not.toHaveBeenCalledWith(
				'tasks/tasks.json',
				expect.anything(),
				expect.anything(),
				expect.anything()
			);
		});
@@ -750,6 +777,8 @@ describe('Dependency Manager Module', () => {
			// IMPORTANT: Verify no calls to writeJSON with actual tasks.json
			expect(mockWriteJSON).not.toHaveBeenCalledWith(
				'tasks/tasks.json',
				expect.anything(),
				expect.anything(),
				expect.anything()
			);
		});
@@ -803,6 +832,8 @@ describe('Dependency Manager Module', () => {
			// IMPORTANT: Verify no calls to writeJSON with actual tasks.json
			expect(mockWriteJSON).not.toHaveBeenCalledWith(
				'tasks/tasks.json',
				expect.anything(),
				expect.anything(),
				expect.anything()
			);
		});
@@ -916,4 +947,297 @@ describe('Dependency Manager Module', () => {
			expect(result.conflicts).toEqual([]);
		});
	});

	describe('Cross-level dependency tests (Issue #542)', () => {
		let originalExit;

		beforeEach(async () => {
			// Ensure a fresh module instance so ESM mocks apply to dynamic imports
			jest.resetModules();
			originalExit = process.exit;
			process.exit = jest.fn();

			// For ESM dynamic imports, use the same pattern
			await jest.unstable_mockModule('../../scripts/modules/utils.js', () => ({
				log: mockLog,
				readJSON: mockReadJSON,
				writeJSON: mockWriteJSON,
				taskExists: mockTaskExists,
				formatTaskId: mockFormatTaskId,
				findCycles: mockFindCycles,
				traverseDependencies: jest.fn(() => []),
				isSilentMode: jest.fn(() => true),
				findProjectRoot: jest.fn(() => '/test'),
				resolveEnvVariable: jest.fn(() => undefined),
				isEmpty: jest.fn((v) =>
					v == null
						? true
						: Array.isArray(v)
							? v.length === 0
							: typeof v === 'object'
								? Object.keys(v).length === 0
								: false
				),
				enableSilentMode: jest.fn(),
				disableSilentMode: jest.fn(),
				getTaskManager: jest.fn(async () => ({})),
				getTagAwareFilePath: jest.fn(
					(basePath, _tag, projectRoot = '.') => basePath
				),
				readComplexityReport: jest.fn(() => null)
			}));

			// Also mock transitive imports to keep dependency surface minimal
			await jest.unstable_mockModule('../../scripts/modules/ui.js', () => ({
				displayBanner: jest.fn()
			}));
			await jest.unstable_mockModule(
				'../../scripts/modules/task-manager/generate-task-files.js',
				() => ({ default: jest.fn() })
			);
			// Set up test data that matches the issue report
			// Clone fixture data before each test to prevent mutation issues
			mockReadJSON.mockImplementation(() =>
				structuredClone(crossLevelDependencyTasks)
			);

			// Configure mockTaskExists to properly validate cross-level dependencies
			mockTaskExists.mockImplementation((tasks, taskId) => {
				if (typeof taskId === 'string' && taskId.includes('.')) {
					const [parentId, subtaskId] = taskId.split('.').map(Number);
					const task = tasks.find((t) => t.id === parentId);
					return (
						task &&
						task.subtasks &&
						task.subtasks.some((st) => st.id === subtaskId)
					);
				}

				const numericId =
					typeof taskId === 'string' ? parseInt(taskId, 10) : taskId;
				return tasks.some((task) => task.id === numericId);
			});

			mockFormatTaskId.mockImplementation((id) => {
				if (typeof id === 'string' && id.includes('.')) return id; // keep dot notation
				return parseInt(id, 10); // normalize top-level task IDs to number
			});
		});

		afterEach(() => {
			process.exit = originalExit;
		});

		test('should allow subtask to depend on top-level task', async () => {
			const { addDependency } = await import(
				'../../scripts/modules/dependency-manager.js'
			);

			// Test the specific scenario from Issue #542: subtask 2.2 depending on task 11
			await addDependency(TEST_TASKS_PATH, '2.2', 11, { projectRoot: '/test' });

			// Verify we wrote to the test path (and not the real tasks.json)
			expect(mockWriteJSON).toHaveBeenCalledWith(
				TEST_TASKS_PATH,
				expect.anything(),
				'/test',
				undefined
			);
			expect(mockWriteJSON).not.toHaveBeenCalledWith(
				'tasks/tasks.json',
				expect.anything(),
				expect.anything(),
				expect.anything()
			);
			// Get the specific write call for TEST_TASKS_PATH
			const writeCall = mockWriteJSON.mock.calls.find(
				([p]) => p === TEST_TASKS_PATH
			);
			expect(writeCall).toBeDefined();
			const savedData = writeCall[1];
			const parent2 = savedData.tasks.find((t) => t.id === 2);
			const subtask22 = parent2.subtasks.find((st) => st.id === 2);

			// Verify the dependency was actually added to subtask 2.2
			expect(subtask22.dependencies).toContain(11);
			// Also verify a success log was emitted
			const successCall = mockLog.mock.calls.find(
				([level]) => level === 'success'
			);
			expect(successCall).toBeDefined();
			expect(successCall[1]).toContain('2.2');
			expect(successCall[1]).toContain('11');
		});

		test('should allow top-level task to depend on subtask', async () => {
			const { addDependency } = await import(
				'../../scripts/modules/dependency-manager.js'
			);

			// Test reverse scenario: task 11 depending on subtask 2.1
			await addDependency(TEST_TASKS_PATH, 11, '2.1', { projectRoot: '/test' });

			// Stronger assertions for writeJSON call and locating the correct task
			expect(mockWriteJSON).toHaveBeenCalledWith(
				TEST_TASKS_PATH,
				expect.anything(),
				'/test',
				undefined
			);
			expect(mockWriteJSON).not.toHaveBeenCalledWith(
				'tasks/tasks.json',
				expect.anything(),
				expect.anything(),
				expect.anything()
			);
			const writeCall = mockWriteJSON.mock.calls.find(
				([p]) => p === TEST_TASKS_PATH
			);
			expect(writeCall).toBeDefined();
			const savedData = writeCall[1];
			const task11 = savedData.tasks.find((t) => t.id === 11);

			// Verify the dependency was actually added to task 11
			expect(task11.dependencies).toContain('2.1');
			// Verify a success log was emitted mentioning both task 11 and subtask 2.1
			const successCall = mockLog.mock.calls.find(
				([level]) => level === 'success'
			);
			expect(successCall).toBeDefined();
			expect(successCall[1]).toContain('11');
			expect(successCall[1]).toContain('2.1');
		});

		test('should properly validate cross-level dependencies exist', async () => {
			// Test that validation correctly identifies when a cross-level dependency target doesn't exist
			mockTaskExists.mockImplementation((tasks, taskId) => {
				// Simulate task 99 not existing
				if (taskId === '99' || taskId === 99) {
					return false;
				}

				if (typeof taskId === 'string' && taskId.includes('.')) {
					const [parentId, subtaskId] = taskId.split('.').map(Number);
					const task = tasks.find((t) => t.id === parentId);
					return (
						task &&
						task.subtasks &&
						task.subtasks.some((st) => st.id === subtaskId)
					);
				}

				const numericId =
					typeof taskId === 'string' ? parseInt(taskId, 10) : taskId;
				return tasks.some((task) => task.id === numericId);
			});

			const { addDependency } = await import(
				'../../scripts/modules/dependency-manager.js'
			);

			const exitError = new Error('process.exit invoked');
			process.exit.mockImplementation(() => {
				throw exitError;
			});

			await expect(
				addDependency(TEST_TASKS_PATH, '2.2', 99, { projectRoot: '/test' })
			).rejects.toBe(exitError);

			expect(process.exit).toHaveBeenCalledWith(1);
			expect(mockWriteJSON).not.toHaveBeenCalled();
			// Verify that an error was reported to the user
			expect(mockLog).toHaveBeenCalled();
		});

		test('should remove top-level task dependency from a subtask', async () => {
			const { addDependency, removeDependency } = await import(
				'../../scripts/modules/dependency-manager.js'
			);

			// Start with cloned data and add 11 to 2.2
			await addDependency(TEST_TASKS_PATH, '2.2', 11, { projectRoot: '/test' });

			// Get the saved data from the add operation
			const addWriteCall = mockWriteJSON.mock.calls.find(
				([p]) => p === TEST_TASKS_PATH
			);
			expect(addWriteCall).toBeDefined();
			const dataWithDep = addWriteCall[1];

			// Verify the dependency was added
			const subtask22AfterAdd = dataWithDep.tasks
				.find((t) => t.id === 2)
				.subtasks.find((st) => st.id === 2);
			expect(subtask22AfterAdd.dependencies).toContain(11);

			// Clear mocks and re-setup mockReadJSON with the modified data
			jest.clearAllMocks();
			mockReadJSON.mockImplementation(() => structuredClone(dataWithDep));

			await removeDependency(TEST_TASKS_PATH, '2.2', 11, {
				projectRoot: '/test'
			});

			const writeCall = mockWriteJSON.mock.calls.find(
				([p]) => p === TEST_TASKS_PATH
			);
			expect(writeCall).toBeDefined();
			const saved = writeCall[1];
			const subtask22 = saved.tasks
				.find((t) => t.id === 2)
				.subtasks.find((st) => st.id === 2);
			expect(subtask22.dependencies).not.toContain(11);
			// Verify success log was emitted
			const successCall = mockLog.mock.calls.find(
				([level]) => level === 'success'
			);
			expect(successCall).toBeDefined();
			expect(successCall[1]).toContain('2.2');
			expect(successCall[1]).toContain('11');
		});

		test('should remove subtask dependency from a top-level task', async () => {
			const { addDependency, removeDependency } = await import(
				'../../scripts/modules/dependency-manager.js'
			);

			// Add subtask dependency to task 11
			await addDependency(TEST_TASKS_PATH, 11, '2.1', { projectRoot: '/test' });

			// Get the saved data from the add operation
			const addWriteCall = mockWriteJSON.mock.calls.find(
				([p]) => p === TEST_TASKS_PATH
			);
			expect(addWriteCall).toBeDefined();
			const dataWithDep = addWriteCall[1];

			// Verify the dependency was added
			const task11AfterAdd = dataWithDep.tasks.find((t) => t.id === 11);
			expect(task11AfterAdd.dependencies).toContain('2.1');

			// Clear mocks and re-setup mockReadJSON with the modified data
			jest.clearAllMocks();
			mockReadJSON.mockImplementation(() => structuredClone(dataWithDep));

			await removeDependency(TEST_TASKS_PATH, 11, '2.1', {
				projectRoot: '/test'
			});

			const writeCall = mockWriteJSON.mock.calls.find(
				([p]) => p === TEST_TASKS_PATH
			);
			expect(writeCall).toBeDefined();
			const saved = writeCall[1];
			const task11 = saved.tasks.find((t) => t.id === 11);
			expect(task11.dependencies).not.toContain('2.1');
			// Verify success log was emitted
			const successCall = mockLog.mock.calls.find(
				([level]) => level === 'success'
			);
			expect(successCall).toBeDefined();
			expect(successCall[1]).toContain('11');
			expect(successCall[1]).toContain('2.1');
		});
	});
});