Compare commits
8 Commits
docs/auto-
...
docs/auto-
| Author | SHA1 | Date |
|---|---|---|
| | 8a894969b5 | |
| | 2a910a40ba | |
| | 5cb7ed557a | |
| | b9e644c556 | |
| | 7265a6cf53 | |
| | db6f405f23 | |
| | 7b5a7c4495 | |
| | caee040907 | |
5  .changeset/fix-mcp-connection-errors.md  Normal file
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Fix MCP connection errors caused by deprecated generateTaskFiles calls. Resolves "Cannot read properties of null (reading 'toString')" errors when using MCP tools for task management operations.
5  .changeset/fix-mcp-default-tasks-path.md  Normal file
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Fix MCP server error when file parameter not provided - now properly constructs default tasks.json path instead of failing with 'tasksJsonPath is required' error.
17  .changeset/nice-ways-hope.md  Normal file
@@ -0,0 +1,17 @@
---
"task-master-ai": minor
---

Add RPG (Repository Planning Graph) method template for structured PRD creation. The new `example_prd_rpg.txt` template teaches AI agents and developers the RPG methodology through embedded instructions, inline good/bad examples, and XML-style tags for structure. This template enables creation of dependency-aware PRDs that automatically generate topologically-ordered task graphs when parsed with Task Master.

Key features:
- Method-as-template: teaches RPG principles (dual-semantics, explicit dependencies, topological order) while being used
- Inline instructions at decision points guide AI through each section
- Good/bad examples for immediate pattern matching
- Flexible plain-text format with XML-style tags for parseability
- Critical dependency-graph section ensures correct task ordering
- Automatic inclusion during `task-master init`
- Comprehensive documentation at [docs.task-master.dev/capabilities/rpg-method](https://docs.task-master.dev/capabilities/rpg-method)
- Tool recommendations for code-context-aware PRD creation (Claude Code, Cursor, Gemini CLI, Codex/Grok)

The RPG template complements the existing `example_prd.txt` and provides a more structured approach for complex projects requiring clear module boundaries and dependency chains.
@@ -1,5 +1,5 @@
{
  "mode": "pre",
  "mode": "exit",
  "tag": "rc",
  "initialVersions": {
    "task-master-ai": "0.27.3",
@@ -7,10 +7,13 @@
    "extension": "0.25.4"
  },
  "changesets": [
    "brave-lions-sing",
    "chore-fix-docs",
    "cursor-slash-commands",
    "curvy-weeks-flow",
    "easy-spiders-wave",
    "fix-mcp-connection-errors",
    "fix-mcp-default-tasks-path",
    "flat-cities-say",
    "forty-tables-invite",
    "gentle-cats-dance",
511
.taskmaster/templates/example_prd_rpg.txt
Normal file
511
.taskmaster/templates/example_prd_rpg.txt
Normal file
@@ -0,0 +1,511 @@
|
||||
<rpg-method>
|
||||
# Repository Planning Graph (RPG) Method - PRD Template
|
||||
|
||||
This template teaches you (AI or human) how to create structured, dependency-aware PRDs using the RPG methodology from Microsoft Research. The key insight: separate WHAT (functional) from HOW (structural), then connect them with explicit dependencies.
|
||||
|
||||
## Core Principles
|
||||
|
||||
1. **Dual-Semantics**: Think functional (capabilities) AND structural (code organization) separately, then map them
|
||||
2. **Explicit Dependencies**: Never assume - always state what depends on what
|
||||
3. **Topological Order**: Build foundation first, then layers on top
|
||||
4. **Progressive Refinement**: Start broad, refine iteratively
|
||||
|
||||
## How to Use This Template
|
||||
|
||||
- Follow the instructions in each `<instruction>` block
|
||||
- Look at `<example>` blocks to see good vs bad patterns
|
||||
- Fill in the content sections with your project details
|
||||
- The AI reading this will learn the RPG method by following along
|
||||
- Task Master will parse the resulting PRD into dependency-aware tasks
|
||||
|
||||
## Recommended Tools for Creating PRDs
|
||||
|
||||
When using this template to **create** a PRD (not parse it), use **code-context-aware AI assistants** for best results:
|
||||
|
||||
**Why?** The AI needs to understand your existing codebase to make good architectural decisions about modules, dependencies, and integration points.
|
||||
|
||||
**Recommended tools:**
|
||||
- **Claude Code** (claude-code CLI) - Best for structured reasoning and large contexts
|
||||
- **Cursor/Windsurf** - IDE integration with full codebase context
|
||||
- **Gemini CLI** (gemini-cli) - Massive context window for large codebases
|
||||
- **Codex/Grok CLI** - Strong code generation with context awareness
|
||||
|
||||
**Note:** Once your PRD is created, `task-master parse-prd` works with any configured AI model - it just needs to read the PRD text itself, not your codebase.
|
||||
</rpg-method>
|
||||
|
||||
---
|
||||
|
||||
<overview>
|
||||
<instruction>
|
||||
Start with the problem, not the solution. Be specific about:
|
||||
- What pain point exists?
|
||||
- Who experiences it?
|
||||
- Why don't existing solutions work?
|
||||
- What does success look like (measurable outcomes)?
|
||||
|
||||
Keep this section focused - don't jump into implementation details yet.
|
||||
</instruction>
|
||||
|
||||
## Problem Statement
|
||||
[Describe the core problem. Be concrete about user pain points.]
|
||||
|
||||
## Target Users
|
||||
[Define personas, their workflows, and what they're trying to achieve.]
|
||||
|
||||
## Success Metrics
|
||||
[Quantifiable outcomes. Examples: "80% task completion via autopilot", "< 5% manual intervention rate"]
|
||||
|
||||
</overview>
|
||||
|
||||
---
|
||||
|
||||
<functional-decomposition>
|
||||
<instruction>
|
||||
Now think about CAPABILITIES (what the system DOES), not code structure yet.
|
||||
|
||||
Step 1: Identify high-level capability domains
|
||||
- Think: "What major things does this system do?"
|
||||
- Examples: Data Management, Core Processing, Presentation Layer
|
||||
|
||||
Step 2: For each capability, enumerate specific features
|
||||
- Use explore-exploit strategy:
|
||||
* Exploit: What features are REQUIRED for core value?
|
||||
* Explore: What features make this domain COMPLETE?
|
||||
|
||||
Step 3: For each feature, define:
|
||||
- Description: What it does in one sentence
|
||||
- Inputs: What data/context it needs
|
||||
- Outputs: What it produces/returns
|
||||
- Behavior: Key logic or transformations
|
||||
|
||||
<example type="good">
|
||||
Capability: Data Validation
|
||||
Feature: Schema validation
|
||||
- Description: Validate JSON payloads against defined schemas
|
||||
- Inputs: JSON object, schema definition
|
||||
- Outputs: Validation result (pass/fail) + error details
|
||||
- Behavior: Iterate fields, check types, enforce constraints
|
||||
|
||||
Feature: Business rule validation
|
||||
- Description: Apply domain-specific validation rules
|
||||
- Inputs: Validated data object, rule set
|
||||
- Outputs: Boolean + list of violated rules
|
||||
- Behavior: Execute rules sequentially, short-circuit on failure
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
Capability: validation.js
|
||||
(Problem: This is a FILE, not a CAPABILITY. Mixing structure into functional thinking.)
|
||||
|
||||
Capability: Validation
|
||||
Feature: Make sure data is good
|
||||
(Problem: Too vague. No inputs/outputs. Not actionable.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Capability Tree
|
||||
|
||||
### Capability: [Name]
|
||||
[Brief description of what this capability domain covers]
|
||||
|
||||
#### Feature: [Name]
|
||||
- **Description**: [One sentence]
|
||||
- **Inputs**: [What it needs]
|
||||
- **Outputs**: [What it produces]
|
||||
- **Behavior**: [Key logic]
|
||||
|
||||
#### Feature: [Name]
|
||||
- **Description**:
|
||||
- **Inputs**:
|
||||
- **Outputs**:
|
||||
- **Behavior**:
|
||||
|
||||
### Capability: [Name]
|
||||
...
|
||||
|
||||
</functional-decomposition>
|
||||
|
||||
---
|
||||
|
||||
<structural-decomposition>
|
||||
<instruction>
|
||||
NOW think about code organization. Map capabilities to actual file/folder structure.
|
||||
|
||||
Rules:
|
||||
1. Each capability maps to a module (folder or file)
|
||||
2. Features within a capability map to functions/classes
|
||||
3. Use clear module boundaries - each module has ONE responsibility
|
||||
4. Define what each module exports (public interface)
|
||||
|
||||
The goal: Create a clear mapping between "what it does" (functional) and "where it lives" (structural).
|
||||
|
||||
<example type="good">
|
||||
Capability: Data Validation
|
||||
→ Maps to: src/validation/
|
||||
├── schema-validator.js (Schema validation feature)
|
||||
├── rule-validator.js (Business rule validation feature)
|
||||
└── index.js (Public exports)
|
||||
|
||||
Exports:
|
||||
- validateSchema(data, schema)
|
||||
- validateRules(data, rules)
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
Capability: Data Validation
|
||||
→ Maps to: src/utils.js
|
||||
(Problem: "utils" is not a clear module boundary. Where do I find validation logic?)
|
||||
|
||||
Capability: Data Validation
|
||||
→ Maps to: src/validation/everything.js
|
||||
(Problem: One giant file. Features should map to separate files for maintainability.)
|
||||
</example>
|
||||
</instruction>
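
For instance, the public interface from the good example above could be written as a small barrel file. This is purely illustrative (the template itself is language-agnostic), using plain ESM/TypeScript syntax:

```
// src/validation/index.js — hypothetical barrel file mirroring the good example above
export { validateSchema } from './schema-validator.js';
export { validateRules } from './rule-validator.js';
```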
|
||||
|
||||
## Repository Structure
|
||||
|
||||
```
|
||||
project-root/
|
||||
├── src/
|
||||
│ ├── [module-name]/ # Maps to: [Capability Name]
|
||||
│ │ ├── [file].js # Maps to: [Feature Name]
|
||||
│ │ └── index.js # Public exports
|
||||
│ └── [module-name]/
|
||||
├── tests/
|
||||
└── docs/
|
||||
```
|
||||
|
||||
## Module Definitions
|
||||
|
||||
### Module: [Name]
|
||||
- **Maps to capability**: [Capability from functional decomposition]
|
||||
- **Responsibility**: [Single clear purpose]
|
||||
- **File structure**:
|
||||
```
|
||||
module-name/
|
||||
├── feature1.js
|
||||
├── feature2.js
|
||||
└── index.js
|
||||
```
|
||||
- **Exports**:
|
||||
- `functionName()` - [what it does]
|
||||
- `ClassName` - [what it does]
|
||||
|
||||
</structural-decomposition>
|
||||
|
||||
---
|
||||
|
||||
<dependency-graph>
|
||||
<instruction>
|
||||
This is THE CRITICAL SECTION for Task Master parsing.
|
||||
|
||||
Define explicit dependencies between modules. This creates the topological order for task execution.
|
||||
|
||||
Rules:
|
||||
1. List modules in dependency order (foundation first)
|
||||
2. For each module, state what it depends on
|
||||
3. Foundation modules should have NO dependencies
|
||||
4. Every non-foundation module should depend on at least one other module
|
||||
5. Think: "What must EXIST before I can build this module?"
|
||||
|
||||
<example type="good">
|
||||
Foundation Layer (no dependencies):
|
||||
- error-handling: No dependencies
|
||||
- config-manager: No dependencies
|
||||
- base-types: No dependencies
|
||||
|
||||
Data Layer:
|
||||
- schema-validator: Depends on [base-types, error-handling]
|
||||
- data-ingestion: Depends on [schema-validator, config-manager]
|
||||
|
||||
Core Layer:
|
||||
- algorithm-engine: Depends on [base-types, error-handling]
|
||||
- pipeline-orchestrator: Depends on [algorithm-engine, data-ingestion]
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
- validation: Depends on API
|
||||
- API: Depends on validation
|
||||
(Problem: Circular dependency. This will cause build/runtime issues.)
|
||||
|
||||
- user-auth: Depends on everything
|
||||
(Problem: Too many dependencies. Should be more focused.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Dependency Chain
|
||||
|
||||
### Foundation Layer (Phase 0)
|
||||
No dependencies - these are built first.
|
||||
|
||||
- **[Module Name]**: [What it provides]
|
||||
- **[Module Name]**: [What it provides]
|
||||
|
||||
### [Layer Name] (Phase 1)
|
||||
- **[Module Name]**: Depends on [[module-from-phase-0], [module-from-phase-0]]
|
||||
- **[Module Name]**: Depends on [[module-from-phase-0]]
|
||||
|
||||
### [Layer Name] (Phase 2)
|
||||
- **[Module Name]**: Depends on [[module-from-phase-1], [module-from-foundation]]
|
||||
|
||||
[Continue building up layers...]
|
||||
|
||||
</dependency-graph>
|
||||
|
||||
---
|
||||
|
||||
<implementation-roadmap>
|
||||
<instruction>
|
||||
Turn the dependency graph into concrete development phases.
|
||||
|
||||
Each phase should:
|
||||
1. Have clear entry criteria (what must exist before starting)
|
||||
2. Contain tasks that can be parallelized (no inter-dependencies within phase)
|
||||
3. Have clear exit criteria (how do we know phase is complete?)
|
||||
4. Build toward something USABLE (not just infrastructure)
|
||||
|
||||
Phase ordering follows topological sort of dependency graph.
|
||||
|
||||
<example type="good">
|
||||
Phase 0: Foundation
|
||||
Entry: Clean repository
|
||||
Tasks:
|
||||
- Implement error handling utilities
|
||||
- Create base type definitions
|
||||
- Setup configuration system
|
||||
Exit: Other modules can import foundation without errors
|
||||
|
||||
Phase 1: Data Layer
|
||||
Entry: Phase 0 complete
|
||||
Tasks:
|
||||
- Implement schema validator (uses: base types, error handling)
|
||||
- Build data ingestion pipeline (uses: validator, config)
|
||||
Exit: End-to-end data flow from input to validated output
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
Phase 1: Build Everything
|
||||
Tasks:
|
||||
- API
|
||||
- Database
|
||||
- UI
|
||||
- Tests
|
||||
(Problem: No clear focus. Too broad. Dependencies not considered.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Development Phases
|
||||
|
||||
### Phase 0: [Foundation Name]
|
||||
**Goal**: [What foundational capability this establishes]
|
||||
|
||||
**Entry Criteria**: [What must be true before starting]
|
||||
|
||||
**Tasks**:
|
||||
- [ ] [Task name] (depends on: [none or list])
|
||||
- Acceptance criteria: [How we know it's done]
|
||||
- Test strategy: [What tests prove it works]
|
||||
|
||||
- [ ] [Task name] (depends on: [none or list])
|
||||
|
||||
**Exit Criteria**: [Observable outcome that proves phase complete]
|
||||
|
||||
**Delivers**: [What can users/developers do after this phase?]
|
||||
|
||||
---
|
||||
|
||||
### Phase 1: [Layer Name]
|
||||
**Goal**:
|
||||
|
||||
**Entry Criteria**: Phase 0 complete
|
||||
|
||||
**Tasks**:
|
||||
- [ ] [Task name] (depends on: [[tasks-from-phase-0]])
|
||||
- [ ] [Task name] (depends on: [[tasks-from-phase-0]])
|
||||
|
||||
**Exit Criteria**:
|
||||
|
||||
**Delivers**:
|
||||
|
||||
---
|
||||
|
||||
[Continue with more phases...]
|
||||
|
||||
</implementation-roadmap>
|
||||
|
||||
---
|
||||
|
||||
<test-strategy>
|
||||
<instruction>
|
||||
Define how testing will be integrated throughout development (TDD approach).
|
||||
|
||||
Specify:
|
||||
1. Test pyramid ratios (unit vs integration vs e2e)
|
||||
2. Coverage requirements
|
||||
3. Critical test scenarios
|
||||
4. Test generation guidelines for Surgical Test Generator
|
||||
|
||||
This section guides the AI when generating tests during the RED phase of TDD.
|
||||
|
||||
<example type="good">
|
||||
Critical Test Scenarios for Data Validation module:
|
||||
- Happy path: Valid data passes all checks
|
||||
- Edge cases: Empty strings, null values, boundary numbers
|
||||
- Error cases: Invalid types, missing required fields
|
||||
- Integration: Validator works with ingestion pipeline
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Test Pyramid
|
||||
|
||||
```
          /\
         /E2E\           ← [X]% (End-to-end, slow, comprehensive)
        /------\
      /Integration\      ← [Y]% (Module interactions)
     /------------\
    /  Unit Tests  \     ← [Z]% (Fast, isolated, deterministic)
   /----------------\
```
|
||||
|
||||
## Coverage Requirements
|
||||
- Line coverage: [X]% minimum
|
||||
- Branch coverage: [X]% minimum
|
||||
- Function coverage: [X]% minimum
|
||||
- Statement coverage: [X]% minimum
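
If the project happens to use Jest (an assumption — any test runner with coverage gates works), these minimums map directly onto a `coverageThreshold` block. The numbers below are placeholders standing in for the [X]% values above:

```
// jest.config.ts — hypothetical sketch; replace the placeholder percentages
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: { lines: 80, branches: 70, functions: 80, statements: 80 }
  }
};

export default config;
```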
|
||||
|
||||
## Critical Test Scenarios
|
||||
|
||||
### [Module/Feature Name]
|
||||
**Happy path**:
|
||||
- [Scenario description]
|
||||
- Expected: [What should happen]
|
||||
|
||||
**Edge cases**:
|
||||
- [Scenario description]
|
||||
- Expected: [What should happen]
|
||||
|
||||
**Error cases**:
|
||||
- [Scenario description]
|
||||
- Expected: [How system handles failure]
|
||||
|
||||
**Integration points**:
|
||||
- [What interactions to test]
|
||||
- Expected: [End-to-end behavior]
|
||||
|
||||
## Test Generation Guidelines
|
||||
[Specific instructions for Surgical Test Generator about what to focus on, what patterns to follow, project-specific test conventions]
|
||||
|
||||
</test-strategy>
|
||||
|
||||
---
|
||||
|
||||
<architecture>
|
||||
<instruction>
|
||||
Describe technical architecture, data models, and key design decisions.
|
||||
|
||||
Keep this section AFTER functional/structural decomposition - implementation details come after understanding structure.
|
||||
</instruction>
|
||||
|
||||
## System Components
|
||||
[Major architectural pieces and their responsibilities]
|
||||
|
||||
## Data Models
|
||||
[Core data structures, schemas, database design]
|
||||
|
||||
## Technology Stack
|
||||
[Languages, frameworks, key libraries]
|
||||
|
||||
**Decision: [Technology/Pattern]**
|
||||
- **Rationale**: [Why chosen]
|
||||
- **Trade-offs**: [What we're giving up]
|
||||
- **Alternatives considered**: [What else we looked at]
|
||||
|
||||
</architecture>
|
||||
|
||||
---
|
||||
|
||||
<risks>
|
||||
<instruction>
|
||||
Identify risks that could derail development and how to mitigate them.
|
||||
|
||||
Categories:
|
||||
- Technical risks (complexity, unknowns)
|
||||
- Dependency risks (blocking issues)
|
||||
- Scope risks (creep, underestimation)
|
||||
</instruction>
|
||||
|
||||
## Technical Risks
|
||||
**Risk**: [Description]
|
||||
- **Impact**: [High/Medium/Low - effect on project]
|
||||
- **Likelihood**: [High/Medium/Low]
|
||||
- **Mitigation**: [How to address]
|
||||
- **Fallback**: [Plan B if mitigation fails]
|
||||
|
||||
## Dependency Risks
|
||||
[External dependencies, blocking issues]
|
||||
|
||||
## Scope Risks
|
||||
[Scope creep, underestimation, unclear requirements]
|
||||
|
||||
</risks>
|
||||
|
||||
---
|
||||
|
||||
<appendix>
|
||||
## References
|
||||
[Papers, documentation, similar systems]
|
||||
|
||||
## Glossary
|
||||
[Domain-specific terms]
|
||||
|
||||
## Open Questions
|
||||
[Things to resolve during development]
|
||||
</appendix>
|
||||
|
||||
---
|
||||
|
||||
<task-master-integration>
|
||||
# How Task Master Uses This PRD
|
||||
|
||||
When you run `task-master parse-prd <file>.txt`, the parser:
|
||||
|
||||
1. **Extracts capabilities** → Main tasks
|
||||
- Each `### Capability:` becomes a top-level task
|
||||
|
||||
2. **Extracts features** → Subtasks
|
||||
- Each `#### Feature:` becomes a subtask under its capability
|
||||
|
||||
3. **Parses dependencies** → Task dependencies
|
||||
- `Depends on: [X, Y]` sets task.dependencies = ["X", "Y"]
|
||||
|
||||
4. **Orders by phases** → Task priorities
|
||||
- Phase 0 tasks = highest priority
|
||||
- Phase N tasks = lower priority, properly sequenced
|
||||
|
||||
5. **Uses test strategy** → Test generation context
|
||||
- Feeds test scenarios to Surgical Test Generator during implementation
|
||||
|
||||
**Result**: A dependency-aware task graph that can be executed in topological order.
|
||||
|
||||
## Why RPG Structure Matters
|
||||
|
||||
Traditional flat PRDs lead to:
|
||||
- ❌ Unclear task dependencies
|
||||
- ❌ Arbitrary task ordering
|
||||
- ❌ Circular dependencies discovered late
|
||||
- ❌ Poorly scoped tasks
|
||||
|
||||
RPG-structured PRDs provide:
|
||||
- ✅ Explicit dependency chains
|
||||
- ✅ Topological execution order
|
||||
- ✅ Clear module boundaries
|
||||
- ✅ Validated task graph before implementation
|
||||
|
||||
## Tips for Best Results
|
||||
|
||||
1. **Spend time on dependency graph** - This is the most valuable section for Task Master
|
||||
2. **Keep features atomic** - Each feature should be independently testable
|
||||
3. **Progressive refinement** - Start broad, use `task-master expand` to break down complex tasks
|
||||
4. **Use research mode** - `task-master parse-prd --research` leverages AI for better task generation
|
||||
</task-master-integration>
|
||||
17  CHANGELOG.md
@@ -1,5 +1,22 @@
# task-master-ai

## 0.28.0-rc.2

### Minor Changes

- [#1273](https://github.com/eyaltoledano/claude-task-master/pull/1273) [`b43b7ce`](https://github.com/eyaltoledano/claude-task-master/commit/b43b7ce201625eee956fb2f8cd332f238bb78c21) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Add Codex CLI provider with OAuth authentication
  - Added codex-cli provider for GPT-5 and GPT-5-Codex models (272K input / 128K output)
  - OAuth-first authentication via `codex login` - no API key required
  - Optional OPENAI_CODEX_API_KEY support
  - Codebase analysis capabilities automatically enabled
  - Command-specific settings and approval/sandbox modes

### Patch Changes

- [#1277](https://github.com/eyaltoledano/claude-task-master/pull/1277) [`7b5a7c4`](https://github.com/eyaltoledano/claude-task-master/commit/7b5a7c4495a68b782f7407fc5d0e0d3ae81f42f5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP connection errors caused by deprecated generateTaskFiles calls. Resolves "Cannot read properties of null (reading 'toString')" errors when using MCP tools for task management operations.

- [#1276](https://github.com/eyaltoledano/claude-task-master/pull/1276) [`caee040`](https://github.com/eyaltoledano/claude-task-master/commit/caee040907f856d31a660171c9e6d966f23c632e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP server error when file parameter not provided - now properly constructs default tasks.json path instead of failing with 'tasksJsonPath is required' error.

## 0.28.0-rc.1

### Patch Changes
255
apps/cli/src/command-registry.ts
Normal file
255
apps/cli/src/command-registry.ts
Normal file
@@ -0,0 +1,255 @@
|
||||
/**
|
||||
* @fileoverview Centralized Command Registry
|
||||
* Provides a single location for registering all CLI commands
|
||||
*/
|
||||
|
||||
import { Command } from 'commander';
|
||||
|
||||
// Import all commands
|
||||
import { ListTasksCommand } from './commands/list.command.js';
|
||||
import { ShowCommand } from './commands/show.command.js';
|
||||
import { AuthCommand } from './commands/auth.command.js';
|
||||
import { ContextCommand } from './commands/context.command.js';
|
||||
import { StartCommand } from './commands/start.command.js';
|
||||
import { SetStatusCommand } from './commands/set-status.command.js';
|
||||
import { ExportCommand } from './commands/export.command.js';
|
||||
|
||||
/**
|
||||
* Command metadata for registration
|
||||
*/
|
||||
export interface CommandMetadata {
|
||||
name: string;
|
||||
description: string;
|
||||
commandClass: typeof Command;
|
||||
category?: 'task' | 'auth' | 'utility' | 'development';
|
||||
}
|
||||
|
||||
/**
|
||||
* Registry of all available commands
|
||||
*/
|
||||
export class CommandRegistry {
|
||||
/**
|
||||
* All available commands with their metadata
|
||||
*/
|
||||
private static commands: CommandMetadata[] = [
|
||||
// Task Management Commands
|
||||
{
|
||||
name: 'list',
|
||||
description: 'List all tasks with filtering and status overview',
|
||||
commandClass: ListTasksCommand as any,
|
||||
category: 'task'
|
||||
},
|
||||
{
|
||||
name: 'show',
|
||||
description: 'Display detailed information about a specific task',
|
||||
commandClass: ShowCommand as any,
|
||||
category: 'task'
|
||||
},
|
||||
{
|
||||
name: 'start',
|
||||
description: 'Start working on a task with claude-code',
|
||||
commandClass: StartCommand as any,
|
||||
category: 'task'
|
||||
},
|
||||
{
|
||||
name: 'set-status',
|
||||
description: 'Update the status of one or more tasks',
|
||||
commandClass: SetStatusCommand as any,
|
||||
category: 'task'
|
||||
},
|
||||
{
|
||||
name: 'export',
|
||||
description: 'Export tasks to external systems',
|
||||
commandClass: ExportCommand as any,
|
||||
category: 'task'
|
||||
},
|
||||
|
||||
// Authentication & Context Commands
|
||||
{
|
||||
name: 'auth',
|
||||
description: 'Manage authentication with tryhamster.com',
|
||||
commandClass: AuthCommand as any,
|
||||
category: 'auth'
|
||||
},
|
||||
{
|
||||
name: 'context',
|
||||
description: 'Manage workspace context (organization/brief)',
|
||||
commandClass: ContextCommand as any,
|
||||
category: 'auth'
|
||||
}
|
||||
];
|
||||
|
||||
/**
|
||||
* Register all commands on a program instance
|
||||
* @param program - Commander program to register commands on
|
||||
*/
|
||||
static registerAll(program: Command): void {
|
||||
for (const cmd of this.commands) {
|
||||
this.registerCommand(program, cmd);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Register specific commands by category
|
||||
* @param program - Commander program to register commands on
|
||||
* @param category - Category of commands to register
|
||||
*/
|
||||
static registerByCategory(
|
||||
program: Command,
|
||||
category: 'task' | 'auth' | 'utility' | 'development'
|
||||
): void {
|
||||
const categoryCommands = this.commands.filter(
|
||||
(cmd) => cmd.category === category
|
||||
);
|
||||
|
||||
for (const cmd of categoryCommands) {
|
||||
this.registerCommand(program, cmd);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Register a single command by name
|
||||
* @param program - Commander program to register the command on
|
||||
* @param name - Name of the command to register
|
||||
*/
|
||||
static registerByName(program: Command, name: string): void {
|
||||
const cmd = this.commands.find((c) => c.name === name);
|
||||
if (cmd) {
|
||||
this.registerCommand(program, cmd);
|
||||
} else {
|
||||
throw new Error(`Command '${name}' not found in registry`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Register a single command
|
||||
* @param program - Commander program to register the command on
|
||||
* @param metadata - Command metadata
|
||||
*/
|
||||
private static registerCommand(
|
||||
program: Command,
|
||||
metadata: CommandMetadata
|
||||
): void {
|
||||
const CommandClass = metadata.commandClass as any;
|
||||
|
||||
// Use the static registration method that all commands have
|
||||
if (CommandClass.registerOn) {
|
||||
CommandClass.registerOn(program);
|
||||
} else if (CommandClass.register) {
|
||||
CommandClass.register(program);
|
||||
} else {
|
||||
// Fallback to creating instance and adding
|
||||
const instance = new CommandClass();
|
||||
program.addCommand(instance);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get all registered command names
|
||||
*/
|
||||
static getCommandNames(): string[] {
|
||||
return this.commands.map((cmd) => cmd.name);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get commands by category
|
||||
*/
|
||||
static getCommandsByCategory(
|
||||
category: 'task' | 'auth' | 'utility' | 'development'
|
||||
): CommandMetadata[] {
|
||||
return this.commands.filter((cmd) => cmd.category === category);
|
||||
}
|
||||
|
||||
/**
|
||||
* Add a new command to the registry
|
||||
* @param metadata - Command metadata to add
|
||||
*/
|
||||
static addCommand(metadata: CommandMetadata): void {
|
||||
// Check if command already exists
|
||||
if (this.commands.some((cmd) => cmd.name === metadata.name)) {
|
||||
throw new Error(`Command '${metadata.name}' already exists in registry`);
|
||||
}
|
||||
|
||||
this.commands.push(metadata);
|
||||
}
|
||||
|
||||
/**
|
||||
* Remove a command from the registry
|
||||
* @param name - Name of the command to remove
|
||||
*/
|
||||
static removeCommand(name: string): boolean {
|
||||
const index = this.commands.findIndex((cmd) => cmd.name === name);
|
||||
if (index >= 0) {
|
||||
this.commands.splice(index, 1);
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get command metadata by name
|
||||
* @param name - Name of the command
|
||||
*/
|
||||
static getCommand(name: string): CommandMetadata | undefined {
|
||||
return this.commands.find((cmd) => cmd.name === name);
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if a command exists
|
||||
* @param name - Name of the command
|
||||
*/
|
||||
static hasCommand(name: string): boolean {
|
||||
return this.commands.some((cmd) => cmd.name === name);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get a formatted list of all commands for display
|
||||
*/
|
||||
static getFormattedCommandList(): string {
|
||||
const categories = {
|
||||
task: 'Task Management',
|
||||
auth: 'Authentication & Context',
|
||||
utility: 'Utilities',
|
||||
development: 'Development'
|
||||
};
|
||||
|
||||
let output = '';
|
||||
|
||||
for (const [category, title] of Object.entries(categories)) {
|
||||
const cmds = this.getCommandsByCategory(
|
||||
category as keyof typeof categories
|
||||
);
|
||||
if (cmds.length > 0) {
|
||||
output += `\n${title}:\n`;
|
||||
for (const cmd of cmds) {
|
||||
output += ` ${cmd.name.padEnd(20)} ${cmd.description}\n`;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return output;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Convenience function to register all CLI commands
|
||||
* @param program - Commander program instance
|
||||
*/
|
||||
export function registerAllCommands(program: Command): void {
|
||||
CommandRegistry.registerAll(program);
|
||||
}
|
||||
|
||||
/**
|
||||
* Convenience function to register commands by category
|
||||
* @param program - Commander program instance
|
||||
* @param category - Category to register
|
||||
*/
|
||||
export function registerCommandsByCategory(
|
||||
program: Command,
|
||||
category: 'task' | 'auth' | 'utility' | 'development'
|
||||
): void {
|
||||
CommandRegistry.registerByCategory(program, category);
|
||||
}
|
||||
|
||||
// Export the registry for direct access if needed
|
||||
export default CommandRegistry;
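
A minimal usage sketch of the registry above. The entry-point file name and program metadata are assumptions made for illustration; this is not part of the change itself:

```typescript
// apps/cli/src/cli.ts — hypothetical entry point
import { Command } from 'commander';
import { registerAllCommands } from './command-registry.js';

const program = new Command().name('task-master').description('Task Master CLI');

// Registers list, show, start, set-status, export, auth and context in one call
registerAllCommands(program);

program.parse(process.argv);
```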
|
||||
@@ -493,18 +493,7 @@ export class AuthCommand extends Command {
|
||||
}
|
||||
|
||||
/**
|
||||
* Static method to register this command on an existing program
|
||||
* This is for gradual migration - allows commands.js to use this
|
||||
*/
|
||||
static registerOn(program: Command): Command {
|
||||
const authCommand = new AuthCommand();
|
||||
program.addCommand(authCommand);
|
||||
return authCommand;
|
||||
}
|
||||
|
||||
/**
|
||||
* Alternative registration that returns the command for chaining
|
||||
* Can also configure the command name if needed
|
||||
* Register this command on an existing program
|
||||
*/
|
||||
static register(program: Command, name?: string): AuthCommand {
|
||||
const authCommand = new AuthCommand(name);
|
||||
|
||||
@@ -694,16 +694,7 @@ export class ContextCommand extends Command {
|
||||
}
|
||||
|
||||
/**
|
||||
* Static method to register this command on an existing program
|
||||
*/
|
||||
static registerOn(program: Command): Command {
|
||||
const contextCommand = new ContextCommand();
|
||||
program.addCommand(contextCommand);
|
||||
return contextCommand;
|
||||
}
|
||||
|
||||
/**
|
||||
* Alternative registration that returns the command for chaining
|
||||
* Register this command on an existing program
|
||||
*/
|
||||
static register(program: Command, name?: string): ContextCommand {
|
||||
const contextCommand = new ContextCommand(name);
|
||||
|
||||
379
apps/cli/src/commands/export.command.ts
Normal file
379
apps/cli/src/commands/export.command.ts
Normal file
@@ -0,0 +1,379 @@
|
||||
/**
|
||||
* @fileoverview Export command for exporting tasks to external systems
|
||||
* Provides functionality to export tasks to Hamster briefs
|
||||
*/
|
||||
|
||||
import { Command } from 'commander';
|
||||
import chalk from 'chalk';
|
||||
import inquirer from 'inquirer';
|
||||
import ora, { Ora } from 'ora';
|
||||
import {
|
||||
AuthManager,
|
||||
AuthenticationError,
|
||||
type UserContext
|
||||
} from '@tm/core/auth';
|
||||
import { TaskMasterCore, type ExportResult } from '@tm/core';
|
||||
import * as ui from '../utils/ui.js';
|
||||
|
||||
/**
|
||||
* Result type from export command
|
||||
*/
|
||||
export interface ExportCommandResult {
|
||||
success: boolean;
|
||||
action: 'export' | 'validate' | 'cancelled';
|
||||
result?: ExportResult;
|
||||
message?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* ExportCommand extending Commander's Command class
|
||||
* Handles task export to external systems
|
||||
*/
|
||||
export class ExportCommand extends Command {
|
||||
private authManager: AuthManager;
|
||||
private taskMasterCore?: TaskMasterCore;
|
||||
private lastResult?: ExportCommandResult;
|
||||
|
||||
constructor(name?: string) {
|
||||
super(name || 'export');
|
||||
|
||||
// Initialize auth manager
|
||||
this.authManager = AuthManager.getInstance();
|
||||
|
||||
// Configure the command
|
||||
this.description('Export tasks to external systems (e.g., Hamster briefs)');
|
||||
|
||||
// Add options
|
||||
this.option('--org <id>', 'Organization ID to export to');
|
||||
this.option('--brief <id>', 'Brief ID to export tasks to');
|
||||
this.option('--tag <tag>', 'Export tasks from a specific tag');
|
||||
this.option(
|
||||
'--status <status>',
|
||||
'Filter tasks by status (pending, in-progress, done, etc.)'
|
||||
);
|
||||
this.option('--exclude-subtasks', 'Exclude subtasks from export');
|
||||
this.option('-y, --yes', 'Skip confirmation prompt');
|
||||
|
||||
// Accept optional positional argument for brief ID or Hamster URL
|
||||
this.argument('[briefOrUrl]', 'Brief ID or Hamster brief URL');
|
||||
|
||||
// Default action
|
||||
this.action(async (briefOrUrl?: string, options?: any) => {
|
||||
await this.executeExport(briefOrUrl, options);
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize the TaskMasterCore
|
||||
*/
|
||||
private async initializeServices(): Promise<void> {
|
||||
if (this.taskMasterCore) {
|
||||
return;
|
||||
}
|
||||
|
||||
try {
|
||||
// Initialize TaskMasterCore
|
||||
this.taskMasterCore = await TaskMasterCore.create({
|
||||
projectPath: process.cwd()
|
||||
});
|
||||
} catch (error) {
|
||||
throw new Error(
|
||||
`Failed to initialize services: ${(error as Error).message}`
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute the export command
|
||||
*/
|
||||
private async executeExport(
|
||||
briefOrUrl?: string,
|
||||
options?: any
|
||||
): Promise<void> {
|
||||
let spinner: Ora | undefined;
|
||||
|
||||
try {
|
||||
// Check authentication
|
||||
if (!this.authManager.isAuthenticated()) {
|
||||
ui.displayError('Not authenticated. Run "tm auth login" first.');
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Initialize services
|
||||
await this.initializeServices();
|
||||
|
||||
// Get current context
|
||||
const context = this.authManager.getContext();
|
||||
|
||||
// Determine org and brief IDs
|
||||
let orgId = options?.org || context?.orgId;
|
||||
let briefId = options?.brief || briefOrUrl || context?.briefId;
|
||||
|
||||
// If a URL/ID was provided as argument, resolve it
|
||||
if (briefOrUrl && !options?.brief) {
|
||||
spinner = ora('Resolving brief...').start();
|
||||
const resolvedBrief = await this.resolveBriefInput(briefOrUrl);
|
||||
if (resolvedBrief) {
|
||||
briefId = resolvedBrief.briefId;
|
||||
orgId = resolvedBrief.orgId;
|
||||
spinner.succeed('Brief resolved');
|
||||
} else {
|
||||
spinner.fail('Could not resolve brief');
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
// Validate we have necessary IDs
|
||||
if (!orgId) {
|
||||
ui.displayError(
|
||||
'No organization selected. Run "tm context org" or use --org flag.'
|
||||
);
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
if (!briefId) {
|
||||
ui.displayError(
|
||||
'No brief specified. Run "tm context brief", provide a brief ID/URL, or use --brief flag.'
|
||||
);
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Confirm export if not auto-confirmed
|
||||
if (!options?.yes) {
|
||||
const confirmed = await this.confirmExport(orgId, briefId, context);
|
||||
if (!confirmed) {
|
||||
ui.displayWarning('Export cancelled');
|
||||
this.lastResult = {
|
||||
success: false,
|
||||
action: 'cancelled',
|
||||
message: 'User cancelled export'
|
||||
};
|
||||
process.exit(0);
|
||||
}
|
||||
}
|
||||
|
||||
// Perform export
|
||||
spinner = ora('Exporting tasks...').start();
|
||||
|
||||
const exportResult = await this.taskMasterCore!.exportTasks({
|
||||
orgId,
|
||||
briefId,
|
||||
tag: options?.tag,
|
||||
status: options?.status,
|
||||
excludeSubtasks: options?.excludeSubtasks || false
|
||||
});
|
||||
|
||||
if (exportResult.success) {
|
||||
spinner.succeed(
|
||||
`Successfully exported ${exportResult.taskCount} task(s) to brief`
|
||||
);
|
||||
|
||||
// Display summary
|
||||
console.log(chalk.cyan('\n📤 Export Summary\n'));
|
||||
console.log(chalk.white(` Organization: ${orgId}`));
|
||||
console.log(chalk.white(` Brief: ${briefId}`));
|
||||
console.log(chalk.white(` Tasks exported: ${exportResult.taskCount}`));
|
||||
if (options?.tag) {
|
||||
console.log(chalk.gray(` Tag: ${options.tag}`));
|
||||
}
|
||||
if (options?.status) {
|
||||
console.log(chalk.gray(` Status filter: ${options.status}`));
|
||||
}
|
||||
|
||||
if (exportResult.message) {
|
||||
console.log(chalk.gray(`\n ${exportResult.message}`));
|
||||
}
|
||||
} else {
|
||||
spinner.fail('Export failed');
|
||||
if (exportResult.error) {
|
||||
console.error(chalk.red(`\n✗ ${exportResult.error.message}`));
|
||||
}
|
||||
}
|
||||
|
||||
this.lastResult = {
|
||||
success: exportResult.success,
|
||||
action: 'export',
|
||||
result: exportResult
|
||||
};
|
||||
} catch (error: any) {
|
||||
if (spinner?.isSpinning) spinner.fail('Export failed');
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Resolve brief input to get brief and org IDs
|
||||
*/
|
||||
private async resolveBriefInput(
|
||||
briefOrUrl: string
|
||||
): Promise<{ briefId: string; orgId: string } | null> {
|
||||
try {
|
||||
// Extract brief ID from input
|
||||
const briefId = this.extractBriefId(briefOrUrl);
|
||||
if (!briefId) {
|
||||
return null;
|
||||
}
|
||||
|
||||
// Fetch brief to get organization
|
||||
const brief = await this.authManager.getBrief(briefId);
|
||||
if (!brief) {
|
||||
ui.displayError('Brief not found or you do not have access');
|
||||
return null;
|
||||
}
|
||||
|
||||
return {
|
||||
briefId: brief.id,
|
||||
orgId: brief.accountId
|
||||
};
|
||||
} catch (error) {
|
||||
console.error(chalk.red(`Failed to resolve brief: ${error}`));
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract a brief ID from raw input (ID or URL)
|
||||
*/
|
||||
private extractBriefId(input: string): string | null {
|
||||
const raw = input?.trim() ?? '';
|
||||
if (!raw) return null;
|
||||
|
||||
const parseUrl = (s: string): URL | null => {
|
||||
try {
|
||||
return new URL(s);
|
||||
} catch {}
|
||||
try {
|
||||
return new URL(`https://${s}`);
|
||||
} catch {}
|
||||
return null;
|
||||
};
|
||||
|
||||
const fromParts = (path: string): string | null => {
|
||||
const parts = path.split('/').filter(Boolean);
|
||||
const briefsIdx = parts.lastIndexOf('briefs');
|
||||
const candidate =
|
||||
briefsIdx >= 0 && parts.length > briefsIdx + 1
|
||||
? parts[briefsIdx + 1]
|
||||
: parts[parts.length - 1];
|
||||
return candidate?.trim() || null;
|
||||
};
|
||||
|
||||
// Try URL parsing
|
||||
const url = parseUrl(raw);
|
||||
if (url) {
|
||||
const qId = url.searchParams.get('id') || url.searchParams.get('briefId');
|
||||
const candidate = (qId || fromParts(url.pathname)) ?? null;
|
||||
if (candidate) {
|
||||
if (this.isLikelyId(candidate) || candidate.length >= 8) {
|
||||
return candidate;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Check if it looks like a path
|
||||
if (raw.includes('/')) {
|
||||
const candidate = fromParts(raw);
|
||||
if (candidate && (this.isLikelyId(candidate) || candidate.length >= 8)) {
|
||||
return candidate;
|
||||
}
|
||||
}
|
||||
|
||||
// Return raw if it looks like an ID
|
||||
return raw;
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if a string looks like a brief ID
|
||||
*/
|
||||
private isLikelyId(value: string): boolean {
|
||||
const uuidRegex =
|
||||
/^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$/;
|
||||
const ulidRegex = /^[0-9A-HJKMNP-TV-Z]{26}$/i;
|
||||
const slugRegex = /^[A-Za-z0-9_-]{16,}$/;
|
||||
return (
|
||||
uuidRegex.test(value) || ulidRegex.test(value) || slugRegex.test(value)
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Confirm export with the user
|
||||
*/
|
||||
private async confirmExport(
|
||||
orgId: string,
|
||||
briefId: string,
|
||||
context: UserContext | null
|
||||
): Promise<boolean> {
|
||||
console.log(chalk.cyan('\n📤 Export Tasks\n'));
|
||||
|
||||
// Show org name if available
|
||||
if (context?.orgName) {
|
||||
console.log(chalk.white(` Organization: ${context.orgName}`));
|
||||
console.log(chalk.gray(` ID: ${orgId}`));
|
||||
} else {
|
||||
console.log(chalk.white(` Organization ID: ${orgId}`));
|
||||
}
|
||||
|
||||
// Show brief info
|
||||
if (context?.briefName) {
|
||||
console.log(chalk.white(`\n Brief: ${context.briefName}`));
|
||||
console.log(chalk.gray(` ID: ${briefId}`));
|
||||
} else {
|
||||
console.log(chalk.white(`\n Brief ID: ${briefId}`));
|
||||
}
|
||||
|
||||
const { confirmed } = await inquirer.prompt([
|
||||
{
|
||||
type: 'confirm',
|
||||
name: 'confirmed',
|
||||
message: 'Do you want to proceed with export?',
|
||||
default: true
|
||||
}
|
||||
]);
|
||||
|
||||
return confirmed;
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle errors
|
||||
*/
|
||||
private handleError(error: any): void {
|
||||
if (error instanceof AuthenticationError) {
|
||||
console.error(chalk.red(`\n✗ ${error.message}`));
|
||||
|
||||
if (error.code === 'NOT_AUTHENTICATED') {
|
||||
ui.displayWarning('Please authenticate first: tm auth login');
|
||||
}
|
||||
} else {
|
||||
const msg = error?.message ?? String(error);
|
||||
console.error(chalk.red(`Error: ${msg}`));
|
||||
|
||||
if (error.stack && process.env.DEBUG) {
|
||||
console.error(chalk.gray(error.stack));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the last export result (useful for testing)
|
||||
*/
|
||||
public getLastResult(): ExportCommandResult | undefined {
|
||||
return this.lastResult;
|
||||
}
|
||||
|
||||
/**
|
||||
* Clean up resources
|
||||
*/
|
||||
async cleanup(): Promise<void> {
|
||||
// No resources to clean up
|
||||
}
|
||||
|
||||
/**
|
||||
* Register this command on an existing program
|
||||
*/
|
||||
static register(program: Command, name?: string): ExportCommand {
|
||||
const exportCommand = new ExportCommand(name);
|
||||
program.addCommand(exportCommand);
|
||||
return exportCommand;
|
||||
}
|
||||
}
|
||||
@@ -246,7 +246,7 @@ export class ListTasksCommand extends Command {
|
||||
task.subtasks.forEach((subtask) => {
|
||||
const subIcon = STATUS_ICONS[subtask.status];
|
||||
console.log(
|
||||
` ${chalk.gray(`${task.id}.${subtask.id}`)} ${subIcon} ${chalk.gray(subtask.title)}`
|
||||
` ${chalk.gray(String(subtask.id))} ${subIcon} ${chalk.gray(subtask.title)}`
|
||||
);
|
||||
});
|
||||
}
|
||||
@@ -297,7 +297,7 @@ export class ListTasksCommand extends Command {
|
||||
nextTask
|
||||
);
|
||||
|
||||
// Task table - no title, just show the table directly
|
||||
// Task table
|
||||
console.log(
|
||||
ui.createTaskTable(tasks, {
|
||||
showSubtasks: withSubtasks,
|
||||
@@ -474,18 +474,7 @@ export class ListTasksCommand extends Command {
|
||||
}
|
||||
|
||||
/**
|
||||
* Static method to register this command on an existing program
|
||||
* This is for gradual migration - allows commands.js to use this
|
||||
*/
|
||||
static registerOn(program: Command): Command {
|
||||
const listCommand = new ListTasksCommand();
|
||||
program.addCommand(listCommand);
|
||||
return listCommand;
|
||||
}
|
||||
|
||||
/**
|
||||
* Alternative registration that returns the command for chaining
|
||||
* Can also configure the command name if needed
|
||||
* Register this command on an existing program
|
||||
*/
|
||||
static register(program: Command, name?: string): ListTasksCommand {
|
||||
const listCommand = new ListTasksCommand(name);
|
||||
|
||||
@@ -258,9 +258,6 @@ export class SetStatusCommand extends Command {
|
||||
)
|
||||
);
|
||||
}
|
||||
|
||||
// Show storage info
|
||||
console.log(chalk.gray(`\nUsing ${result.storageType} storage`));
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -290,18 +287,7 @@ export class SetStatusCommand extends Command {
|
||||
}
|
||||
|
||||
/**
|
||||
* Static method to register this command on an existing program
|
||||
* This is for gradual migration - allows commands.js to use this
|
||||
*/
|
||||
static registerOn(program: Command): Command {
|
||||
const setStatusCommand = new SetStatusCommand();
|
||||
program.addCommand(setStatusCommand);
|
||||
return setStatusCommand;
|
||||
}
|
||||
|
||||
/**
|
||||
* Alternative registration that returns the command for chaining
|
||||
* Can also configure the command name if needed
|
||||
* Register this command on an existing program
|
||||
*/
|
||||
static register(program: Command, name?: string): SetStatusCommand {
|
||||
const setStatusCommand = new SetStatusCommand(name);
|
||||
|
||||
@@ -322,18 +322,7 @@ export class ShowCommand extends Command {
|
||||
}
|
||||
|
||||
/**
|
||||
* Static method to register this command on an existing program
|
||||
* This is for gradual migration - allows commands.js to use this
|
||||
*/
|
||||
static registerOn(program: Command): Command {
|
||||
const showCommand = new ShowCommand();
|
||||
program.addCommand(showCommand);
|
||||
return showCommand;
|
||||
}
|
||||
|
||||
/**
|
||||
* Alternative registration that returns the command for chaining
|
||||
* Can also configure the command name if needed
|
||||
* Register this command on an existing program
|
||||
*/
|
||||
static register(program: Command, name?: string): ShowCommand {
|
||||
const showCommand = new ShowCommand(name);
|
||||
|
||||
@@ -493,16 +493,7 @@ export class StartCommand extends Command {
|
||||
}
|
||||
|
||||
/**
|
||||
* Static method to register this command on an existing program
|
||||
*/
|
||||
static registerOn(program: Command): Command {
|
||||
const startCommand = new StartCommand();
|
||||
program.addCommand(startCommand);
|
||||
return startCommand;
|
||||
}
|
||||
|
||||
/**
|
||||
* Alternative registration that returns the command for chaining
|
||||
* Register this command on an existing program
|
||||
*/
|
||||
static register(program: Command, name?: string): StartCommand {
|
||||
const startCommand = new StartCommand(name);
|
||||
|
||||
@@ -10,6 +10,15 @@ export { AuthCommand } from './commands/auth.command.js';
|
||||
export { ContextCommand } from './commands/context.command.js';
|
||||
export { StartCommand } from './commands/start.command.js';
|
||||
export { SetStatusCommand } from './commands/set-status.command.js';
|
||||
export { ExportCommand } from './commands/export.command.js';
|
||||
|
||||
// Command Registry
|
||||
export {
|
||||
CommandRegistry,
|
||||
registerAllCommands,
|
||||
registerCommandsByCategory,
|
||||
type CommandMetadata
|
||||
} from './command-registry.js';
|
||||
|
||||
// UI utilities (for other commands to use)
|
||||
export * as ui from './utils/ui.js';
|
||||
|
||||
@@ -192,8 +192,7 @@ export function displaySubtasks(
|
||||
status: any;
|
||||
description?: string;
|
||||
dependencies?: string[];
|
||||
}>,
|
||||
parentId: string | number
|
||||
}>
|
||||
): void {
|
||||
const terminalWidth = process.stdout.columns * 0.95 || 100;
|
||||
// Display subtasks header
|
||||
@@ -228,7 +227,7 @@ export function displaySubtasks(
|
||||
});
|
||||
|
||||
subtasks.forEach((subtask) => {
|
||||
const subtaskId = `${parentId}.${subtask.id}`;
|
||||
const subtaskId = String(subtask.id);
|
||||
|
||||
// Format dependencies
|
||||
const deps =
|
||||
@@ -329,7 +328,7 @@ export function displayTaskDetails(
|
||||
console.log(chalk.gray(` No subtasks with status '${statusFilter}'`));
|
||||
} else if (filteredSubtasks.length > 0) {
|
||||
console.log(); // Empty line for spacing
|
||||
displaySubtasks(filteredSubtasks, task.id);
|
||||
displaySubtasks(filteredSubtasks);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -286,12 +286,12 @@ export function createTaskTable(
|
||||
// Adjust column widths to better match the original layout
|
||||
const baseColWidths = showComplexity
|
||||
? [
|
||||
Math.floor(terminalWidth * 0.06),
|
||||
Math.floor(terminalWidth * 0.1),
|
||||
Math.floor(terminalWidth * 0.4),
|
||||
Math.floor(terminalWidth * 0.15),
|
||||
Math.floor(terminalWidth * 0.12),
|
||||
Math.floor(terminalWidth * 0.1),
|
||||
Math.floor(terminalWidth * 0.2),
|
||||
Math.floor(terminalWidth * 0.12)
|
||||
Math.floor(terminalWidth * 0.1)
|
||||
] // ID, Title, Status, Priority, Dependencies, Complexity
|
||||
: [
|
||||
Math.floor(terminalWidth * 0.08),
|
||||
@@ -377,7 +377,11 @@ export function createTaskTable(
|
||||
}
|
||||
|
||||
if (showComplexity) {
|
||||
subRow.push(chalk.gray('--'));
|
||||
const complexityDisplay =
|
||||
typeof subtask.complexity === 'number'
|
||||
? getComplexityWithColor(subtask.complexity)
|
||||
: '--';
|
||||
subRow.push(chalk.gray(complexityDisplay));
|
||||
}
|
||||
|
||||
table.push(subRow);
|
||||
|
||||
326
apps/docs/capabilities/rpg-method.mdx
Normal file
326
apps/docs/capabilities/rpg-method.mdx
Normal file
@@ -0,0 +1,326 @@
|
||||
---
|
||||
title: RPG Method for PRD Creation
|
||||
sidebarTitle: "RPG Method"
|
||||
---
|
||||
|
||||
# Repository Planning Graph (RPG) Method
|
||||
|
||||
The RPG (Repository Planning Graph) method is an advanced approach to creating Product Requirements Documents that generate highly structured, dependency-aware task graphs. It's based on Microsoft Research's methodology for scalable codebase generation.
|
||||
|
||||
## When to Use RPG
|
||||
|
||||
Use the RPG template (`example_prd_rpg.txt`) for:
|
||||
|
||||
- **Complex multi-module systems** with intricate dependencies
|
||||
- **Large-scale codebases** being built from scratch
|
||||
- **Projects requiring explicit architecture** and clear module boundaries
|
||||
- **Teams needing dependency visibility** for parallel development
|
||||
|
||||
For simpler features or smaller projects, the standard `example_prd.txt` template may be more appropriate.
|
||||
|
||||
---
|
||||
|
||||
## Core Principles
|
||||
|
||||
### 1. Dual-Semantics
|
||||
|
||||
Separate **functional** thinking (WHAT) from **structural** thinking (HOW):
|
||||
|
||||
```
|
||||
Functional: "Data Validation capability with schema checking and rule enforcement"
|
||||
↓
|
||||
Structural: "src/validation/ with schema-validator.js and rule-validator.js"
|
||||
```
|
||||
|
||||
This separation prevents mixing concerns and creates clearer module boundaries.
|
||||
|
||||
### 2. Explicit Dependencies
|
||||
|
||||
Never assume dependencies - always state them explicitly:
|
||||
|
||||
```
|
||||
Good:
|
||||
Module: data-ingestion
|
||||
Depends on: [schema-validator, config-manager]
|
||||
|
||||
Bad:
|
||||
Module: data-ingestion
|
||||
(Assumes schema-validator exists somewhere)
|
||||
```
|
||||
|
||||
Explicit dependencies enable:
|
||||
- Topological ordering of implementation
|
||||
- Parallel development of independent modules
|
||||
- Clear build/test order
|
||||
- Early detection of circular dependencies
|
||||
|
||||
### 3. Topological Order
|
||||
|
||||
Build foundation layers before higher layers:
|
||||
|
||||
```
|
||||
Phase 0 (Foundation): error-handling, base-types, config
|
||||
↓
|
||||
Phase 1 (Data): validation, ingestion (depend on Phase 0)
|
||||
↓
|
||||
Phase 2 (Core): algorithms, pipelines (depend on Phase 1)
|
||||
↓
|
||||
Phase 3 (API): routes, handlers (depend on Phase 2)
|
||||
```
|
||||
|
||||
Task Master automatically orders tasks based on this dependency chain.
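
As a rough illustration of what that ordering means in code — a minimal sketch, not Task Master's actual implementation, and the `Map`-based graph shape is an assumption:

```typescript
// Kahn's algorithm over a "module -> its dependencies" map: foundation modules come out first.
type DependencyGraph = Map<string, string[]>;

function topologicalOrder(graph: DependencyGraph): string[] {
  const remaining = new Map<string, number>();    // unresolved dependency count per module
  const dependents = new Map<string, string[]>(); // reverse edges: dependency -> its dependents

  for (const [mod, deps] of graph) {
    remaining.set(mod, deps.length);
    for (const dep of deps) {
      dependents.set(dep, [...(dependents.get(dep) ?? []), mod]);
    }
  }

  // Foundation layer: modules with no dependencies.
  const queue = [...graph.keys()].filter((m) => remaining.get(m) === 0);
  const order: string[] = [];

  while (queue.length > 0) {
    const mod = queue.shift()!;
    order.push(mod);
    for (const dependent of dependents.get(mod) ?? []) {
      const left = (remaining.get(dependent) ?? 0) - 1;
      remaining.set(dependent, left);
      if (left === 0) queue.push(dependent);
    }
  }

  if (order.length !== graph.size) {
    throw new Error('Circular dependency detected'); // leftover modules form a cycle
  }
  return order;
}

topologicalOrder(
  new Map([
    ['error-handling', []],
    ['base-types', []],
    ['schema-validator', ['base-types', 'error-handling']],
    ['data-ingestion', ['schema-validator']]
  ])
);
// -> ['error-handling', 'base-types', 'schema-validator', 'data-ingestion']
```

Modules with no dependencies are emitted first, so the resulting order always respects the declared `Depends on` lists.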
|
||||
|
||||
### 4. Progressive Refinement
|
||||
|
||||
Start broad, refine iteratively:
|
||||
|
||||
1. High-level capabilities → Main tasks
|
||||
2. Features per capability → Subtasks
|
||||
3. Implementation details → Expanded subtasks
|
||||
|
||||
---
|
||||
|
||||
## Template Structure
|
||||
|
||||
The RPG template guides you through 7 key sections:
|
||||
|
||||
### 1. Overview
|
||||
- Problem statement
|
||||
- Target users
|
||||
- Success metrics
|
||||
|
||||
### 2. Functional Decomposition (WHAT)
|
||||
- High-level capability domains
|
||||
- Features per capability
|
||||
- Inputs/outputs/behavior for each feature
|
||||
|
||||
**Example:**
|
||||
```
|
||||
Capability: Data Management
|
||||
Feature: Schema validation
|
||||
Description: Validate JSON against defined schemas
|
||||
Inputs: JSON object, schema definition
|
||||
Outputs: Validation result + error details
|
||||
Behavior: Iterate fields, check types, enforce constraints
|
||||
```
|
||||
|
||||
### 3. Structural Decomposition (HOW)
|
||||
- Repository folder structure
|
||||
- Module-to-capability mapping
|
||||
- File organization
|
||||
- Public interfaces/exports
|
||||
|
||||
**Example:**
|
||||
```
|
||||
Capability: Data Management
|
||||
→ Maps to: src/data/
|
||||
├── schema-validator.js (Schema validation feature)
|
||||
├── rule-validator.js (Rule validation feature)
|
||||
└── index.js (Exports)
|
||||
```
|
||||
|
||||
### 4. Dependency Graph (CRITICAL)
|
||||
- Foundation layer (no dependencies)
|
||||
- Each subsequent layer's dependencies
|
||||
- Explicit "depends on" declarations
|
||||
|
||||
**Example:**
|
||||
```
|
||||
Foundation Layer (Phase 0):
|
||||
- error-handling: No dependencies
|
||||
- base-types: No dependencies
|
||||
|
||||
Data Layer (Phase 1):
|
||||
- schema-validator: Depends on [base-types, error-handling]
|
||||
- data-ingestion: Depends on [schema-validator]
|
||||
```
|
||||
|
||||
### 5. Implementation Roadmap
|
||||
- Phases with entry/exit criteria
|
||||
- Tasks grouped by phase
|
||||
- Clear deliverables per phase
|
||||
|
||||
### 6. Test Strategy
|
||||
- Test pyramid ratios
|
||||
- Coverage requirements
|
||||
- Critical test scenarios per module
|
||||
- Guidelines for test generation
|
||||
|
||||
### 7. Architecture & Risks
|
||||
- Technical architecture
|
||||
- Data models
|
||||
- Technology decisions
|
||||
- Risk mitigation strategies
|
||||
|
||||
---
|
||||
|
||||
## Using RPG with Task Master
|
||||
|
||||
### Step 1: Create PRD with RPG Template
|
||||
|
||||
Use a code-context-aware tool to fill out the template:
|
||||
|
||||
```bash
|
||||
# In Claude Code, Cursor, or similar
|
||||
"Create a PRD using @.taskmaster/templates/example_prd_rpg.txt for [your project]"
|
||||
```
|
||||
|
||||
**Why code context matters:** The AI needs to understand your existing codebase to make informed decisions about:
|
||||
- Module boundaries
|
||||
- Dependency relationships
|
||||
- Integration points
|
||||
- Naming conventions
|
||||
|
||||
**Recommended tools:**
|
||||
- Claude Code (claude-code CLI)
|
||||
- Cursor/Windsurf
|
||||
- Gemini CLI (large contexts)
|
||||
- Codex/Grok CLI
|
||||
|
||||
### Step 2: Parse PRD into Tasks
|
||||
|
||||
```bash
|
||||
task-master parse-prd .taskmaster/docs/your-prd.txt --research
|
||||
```
|
||||
|
||||
Task Master will:
|
||||
1. Extract capabilities → Main tasks
|
||||
2. Extract features → Subtasks
|
||||
3. Parse dependencies → Task dependencies
|
||||
4. Order by phases → Task priorities
|
||||
|
||||
**Result:** A dependency-aware task graph ready for topological execution.
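
Conceptually, each capability ends up as a task shaped roughly like the sketch below. Field names are simplified and may not match the exact tasks.json schema:

```typescript
// Illustrative only — a simplified view of one parsed task
interface ParsedTask {
  id: number;
  title: string;                             // from a "### Capability:" heading
  dependencies: number[];                    // from "Depends on: [...]" declarations
  priority: 'high' | 'medium' | 'low';       // earlier phases get higher priority
  subtasks: { id: number; title: string }[]; // from "#### Feature:" headings
}

const example: ParsedTask = {
  id: 3,
  title: 'Data Validation',
  dependencies: [1, 2], // e.g. the base-types and error-handling tasks
  priority: 'high',
  subtasks: [
    { id: 1, title: 'Schema validation' },
    { id: 2, title: 'Business rule validation' }
  ]
};
```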

### Step 3: Analyze Complexity

```bash
task-master analyze-complexity --research
```

Review the complexity report to identify tasks that need expansion.

### Step 4: Expand Tasks

```bash
task-master expand --all --research
```

Break down complex tasks into manageable subtasks while preserving dependency chains.

---

## RPG Benefits

### For Solo Developers
- Clear roadmap for implementing complex features
- Prevents architectural mistakes early
- Explicit dependency tracking avoids integration issues
- Enables resuming work after interruptions

### For Teams
- Parallel development of independent modules
- Clear contracts between modules (explicit dependencies)
- Reduced merge conflicts (proper module boundaries)
- Onboarding aid (architectural overview in PRD)

### For AI Agents
- Structured context for code generation
- Clear scope boundaries per task
- Dependency awareness prevents incomplete implementations
- Test strategy guidance for TDD workflows

---

## RPG vs Standard Template

| Aspect | Standard Template | RPG Template |
|--------|------------------|--------------|
| **Best for** | Simple features | Complex systems |
| **Dependency handling** | Implicit | Explicit graph |
| **Structure guidance** | Minimal | Step-by-step |
| **Examples** | Few | Inline good/bad examples |
| **Module boundaries** | Vague | Precise mapping |
| **Task ordering** | Manual | Automatic (topological) |
| **Learning curve** | Low | Medium |
| **Resulting task quality** | Good | Excellent |

---

## Tips for Best Results

### 1. Spend Time on Dependencies
The dependency graph section is the most valuable. List all dependencies explicitly, even if they seem obvious.

### 2. Keep Features Atomic
Each feature should be independently testable. If a feature description is vague ("handle data"), break it into specific features.

### 3. Progressive Refinement
Don't try to get everything perfect on the first pass:
1. Fill out high-level sections
2. Review and refine
3. Add detail where needed
4. Let `task-master expand` break down complex tasks further

### 4. Use Research Mode
```bash
task-master parse-prd --research
```
The `--research` flag leverages AI to enhance task generation with domain knowledge.

### 5. Validate Early
```bash
task-master validate-dependencies
```
Check for circular dependencies or orphaned modules before starting implementation.

---

## Common Pitfalls

### ❌ Mixing Functional and Structural
```
Bad: "Capability: validation.js"
Good: "Capability: Data Validation" → maps to "src/validation/"
```

### ❌ Vague Module Boundaries
```
Bad: "Module: utils"
Good: "Module: string-utilities" with clear exports
```

### ❌ Implicit Dependencies
```
Bad: "Module: API handlers (needs validation)"
Good: "Module: API handlers, Depends on: [validation, error-handling]"
```

### ❌ Skipping Test Strategy
Without test strategy, the AI won't know what to test during implementation.

---

## Example Workflow

1. **Discuss idea with AI**: Explain your project concept
2. **Reference RPG template**: Show AI the `example_prd_rpg.txt`
3. **Co-create PRD**: Work through each section with AI guidance
4. **Save to docs**: Place in `.taskmaster/docs/your-project.txt`
5. **Parse PRD**: `task-master parse-prd .taskmaster/docs/your-project.txt --research`
6. **Analyze**: `task-master analyze-complexity --research`
7. **Expand**: `task-master expand --all --research`
8. **Start work**: `task-master next`

---

## Further Reading

- [PRD Creation and Parsing Guide](/getting-started/quick-start/prd-quick)
- [Task Structure Documentation](/capabilities/task-structure)
- [Microsoft Research RPG Paper](https://arxiv.org/abs/2410.21376) (Original methodology)

---

<Tip>
The RPG template includes inline `<instruction>` and `<example>` blocks that teach the method as you use it. Read these sections carefully - they provide valuable guidance at each decision point.
</Tip>

@@ -50,7 +50,8 @@
"pages": [
"capabilities/mcp",
"capabilities/cli-root-commands",
"capabilities/task-structure"
"capabilities/task-structure",
"capabilities/rpg-method"
]
}
]

@@ -32,7 +32,11 @@ The more context you give the model, the better the breakdown and results.

## Writing a PRD for Task Master

<Note>An example PRD can be found in .taskmaster/templates/example_prd.txt</Note>
<Note>
Two example PRD templates are available in `.taskmaster/templates/`:
- `example_prd.txt` - Simple template for straightforward projects
- `example_prd_rpg.txt` - Advanced RPG (Repository Planning Graph) template for complex projects with dependencies
</Note>


You can co-write your PRD with an LLM model using the following workflow:
@@ -43,6 +47,29 @@ You can co-write your PRD with an LLM model using the following workflow:

This approach works great in Cursor, or anywhere you use a chat-based LLM.

### Choosing Between Templates

**Use `example_prd.txt` when:**
- Building straightforward features
- Working on smaller projects
- Dependencies are simple and obvious

**Use `example_prd_rpg.txt` when:**
- Building complex systems with multiple modules
- Need explicit dependency management
- Want structured guidance on architecture decisions
- Planning a large codebase from scratch

The RPG template teaches you to think about:
1. **Functional decomposition** (WHAT the system does)
2. **Structural decomposition** (HOW it's organized in code)
3. **Explicit dependencies** (WHAT depends on WHAT)
4. **Topological ordering** (build foundation first, then layers)

<Tip>
For complex projects, using the RPG template with a code-context-aware AI agent produces the best results because the AI can understand your existing codebase structure. [Learn more about the RPG method →](/capabilities/rpg-method)
</Tip>

---

## Where to Save Your PRD

511 assets/example_prd_rpg.txt Normal file
@@ -0,0 +1,511 @@
|
||||
<rpg-method>
|
||||
# Repository Planning Graph (RPG) Method - PRD Template
|
||||
|
||||
This template teaches you (AI or human) how to create structured, dependency-aware PRDs using the RPG methodology from Microsoft Research. The key insight: separate WHAT (functional) from HOW (structural), then connect them with explicit dependencies.
|
||||
|
||||
## Core Principles
|
||||
|
||||
1. **Dual-Semantics**: Think functional (capabilities) AND structural (code organization) separately, then map them
|
||||
2. **Explicit Dependencies**: Never assume - always state what depends on what
|
||||
3. **Topological Order**: Build foundation first, then layers on top
|
||||
4. **Progressive Refinement**: Start broad, refine iteratively
|
||||
|
||||
## How to Use This Template
|
||||
|
||||
- Follow the instructions in each `<instruction>` block
|
||||
- Look at `<example>` blocks to see good vs bad patterns
|
||||
- Fill in the content sections with your project details
|
||||
- The AI reading this will learn the RPG method by following along
|
||||
- Task Master will parse the resulting PRD into dependency-aware tasks
|
||||
|
||||
## Recommended Tools for Creating PRDs
|
||||
|
||||
When using this template to **create** a PRD (not parse it), use **code-context-aware AI assistants** for best results:
|
||||
|
||||
**Why?** The AI needs to understand your existing codebase to make good architectural decisions about modules, dependencies, and integration points.
|
||||
|
||||
**Recommended tools:**
|
||||
- **Claude Code** (claude-code CLI) - Best for structured reasoning and large contexts
|
||||
- **Cursor/Windsurf** - IDE integration with full codebase context
|
||||
- **Gemini CLI** (gemini-cli) - Massive context window for large codebases
|
||||
- **Codex/Grok CLI** - Strong code generation with context awareness
|
||||
|
||||
**Note:** Once your PRD is created, `task-master parse-prd` works with any configured AI model - it just needs to read the PRD text itself, not your codebase.
|
||||
</rpg-method>
|
||||
|
||||
---
|
||||
|
||||
<overview>
|
||||
<instruction>
|
||||
Start with the problem, not the solution. Be specific about:
|
||||
- What pain point exists?
|
||||
- Who experiences it?
|
||||
- Why existing solutions don't work?
|
||||
- What success looks like (measurable outcomes)?
|
||||
|
||||
Keep this section focused - don't jump into implementation details yet.
|
||||
</instruction>
|
||||
|
||||
## Problem Statement
|
||||
[Describe the core problem. Be concrete about user pain points.]
|
||||
|
||||
## Target Users
|
||||
[Define personas, their workflows, and what they're trying to achieve.]
|
||||
|
||||
## Success Metrics
|
||||
[Quantifiable outcomes. Examples: "80% task completion via autopilot", "< 5% manual intervention rate"]
|
||||
|
||||
</overview>
|
||||
|
||||
---
|
||||
|
||||
<functional-decomposition>
|
||||
<instruction>
|
||||
Now think about CAPABILITIES (what the system DOES), not code structure yet.
|
||||
|
||||
Step 1: Identify high-level capability domains
|
||||
- Think: "What major things does this system do?"
|
||||
- Examples: Data Management, Core Processing, Presentation Layer
|
||||
|
||||
Step 2: For each capability, enumerate specific features
|
||||
- Use explore-exploit strategy:
|
||||
* Exploit: What features are REQUIRED for core value?
|
||||
* Explore: What features make this domain COMPLETE?
|
||||
|
||||
Step 3: For each feature, define:
|
||||
- Description: What it does in one sentence
|
||||
- Inputs: What data/context it needs
|
||||
- Outputs: What it produces/returns
|
||||
- Behavior: Key logic or transformations
|
||||
|
||||
<example type="good">
|
||||
Capability: Data Validation
|
||||
Feature: Schema validation
|
||||
- Description: Validate JSON payloads against defined schemas
|
||||
- Inputs: JSON object, schema definition
|
||||
- Outputs: Validation result (pass/fail) + error details
|
||||
- Behavior: Iterate fields, check types, enforce constraints
|
||||
|
||||
Feature: Business rule validation
|
||||
- Description: Apply domain-specific validation rules
|
||||
- Inputs: Validated data object, rule set
|
||||
- Outputs: Boolean + list of violated rules
|
||||
- Behavior: Execute rules sequentially, short-circuit on failure
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
Capability: validation.js
|
||||
(Problem: This is a FILE, not a CAPABILITY. Mixing structure into functional thinking.)
|
||||
|
||||
Capability: Validation
|
||||
Feature: Make sure data is good
|
||||
(Problem: Too vague. No inputs/outputs. Not actionable.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Capability Tree
|
||||
|
||||
### Capability: [Name]
|
||||
[Brief description of what this capability domain covers]
|
||||
|
||||
#### Feature: [Name]
|
||||
- **Description**: [One sentence]
|
||||
- **Inputs**: [What it needs]
|
||||
- **Outputs**: [What it produces]
|
||||
- **Behavior**: [Key logic]
|
||||
|
||||
#### Feature: [Name]
|
||||
- **Description**:
|
||||
- **Inputs**:
|
||||
- **Outputs**:
|
||||
- **Behavior**:
|
||||
|
||||
### Capability: [Name]
|
||||
...
|
||||
|
||||
</functional-decomposition>
|
||||
|
||||
---
|
||||
|
||||
<structural-decomposition>
|
||||
<instruction>
|
||||
NOW think about code organization. Map capabilities to actual file/folder structure.
|
||||
|
||||
Rules:
|
||||
1. Each capability maps to a module (folder or file)
|
||||
2. Features within a capability map to functions/classes
|
||||
3. Use clear module boundaries - each module has ONE responsibility
|
||||
4. Define what each module exports (public interface)
|
||||
|
||||
The goal: Create a clear mapping between "what it does" (functional) and "where it lives" (structural).
|
||||
|
||||
<example type="good">
|
||||
Capability: Data Validation
|
||||
→ Maps to: src/validation/
|
||||
├── schema-validator.js (Schema validation feature)
|
||||
├── rule-validator.js (Business rule validation feature)
|
||||
└── index.js (Public exports)
|
||||
|
||||
Exports:
|
||||
- validateSchema(data, schema)
|
||||
- validateRules(data, rules)
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
Capability: Data Validation
|
||||
→ Maps to: src/utils.js
|
||||
(Problem: "utils" is not a clear module boundary. Where do I find validation logic?)
|
||||
|
||||
Capability: Data Validation
|
||||
→ Maps to: src/validation/everything.js
|
||||
(Problem: One giant file. Features should map to separate files for maintainability.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Repository Structure
|
||||
|
||||
```
|
||||
project-root/
|
||||
├── src/
|
||||
│ ├── [module-name]/ # Maps to: [Capability Name]
|
||||
│ │ ├── [file].js # Maps to: [Feature Name]
|
||||
│ │ └── index.js # Public exports
|
||||
│ └── [module-name]/
|
||||
├── tests/
|
||||
└── docs/
|
||||
```
|
||||
|
||||
## Module Definitions
|
||||
|
||||
### Module: [Name]
|
||||
- **Maps to capability**: [Capability from functional decomposition]
|
||||
- **Responsibility**: [Single clear purpose]
|
||||
- **File structure**:
|
||||
```
|
||||
module-name/
|
||||
├── feature1.js
|
||||
├── feature2.js
|
||||
└── index.js
|
||||
```
|
||||
- **Exports**:
|
||||
- `functionName()` - [what it does]
|
||||
- `ClassName` - [what it does]
|
||||
|
||||
</structural-decomposition>
|
||||
|
||||
---
|
||||
|
||||
<dependency-graph>
|
||||
<instruction>
|
||||
This is THE CRITICAL SECTION for Task Master parsing.
|
||||
|
||||
Define explicit dependencies between modules. This creates the topological order for task execution.
|
||||
|
||||
Rules:
|
||||
1. List modules in dependency order (foundation first)
|
||||
2. For each module, state what it depends on
|
||||
3. Foundation modules should have NO dependencies
|
||||
4. Every non-foundation module should depend on at least one other module
|
||||
5. Think: "What must EXIST before I can build this module?"
|
||||
|
||||
<example type="good">
|
||||
Foundation Layer (no dependencies):
|
||||
- error-handling: No dependencies
|
||||
- config-manager: No dependencies
|
||||
- base-types: No dependencies
|
||||
|
||||
Data Layer:
|
||||
- schema-validator: Depends on [base-types, error-handling]
|
||||
- data-ingestion: Depends on [schema-validator, config-manager]
|
||||
|
||||
Core Layer:
|
||||
- algorithm-engine: Depends on [base-types, error-handling]
|
||||
- pipeline-orchestrator: Depends on [algorithm-engine, data-ingestion]
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
- validation: Depends on API
|
||||
- API: Depends on validation
|
||||
(Problem: Circular dependency. This will cause build/runtime issues.)
|
||||
|
||||
- user-auth: Depends on everything
|
||||
(Problem: Too many dependencies. Should be more focused.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Dependency Chain
|
||||
|
||||
### Foundation Layer (Phase 0)
|
||||
No dependencies - these are built first.
|
||||
|
||||
- **[Module Name]**: [What it provides]
|
||||
- **[Module Name]**: [What it provides]
|
||||
|
||||
### [Layer Name] (Phase 1)
|
||||
- **[Module Name]**: Depends on [[module-from-phase-0], [module-from-phase-0]]
|
||||
- **[Module Name]**: Depends on [[module-from-phase-0]]
|
||||
|
||||
### [Layer Name] (Phase 2)
|
||||
- **[Module Name]**: Depends on [[module-from-phase-1], [module-from-foundation]]
|
||||
|
||||
[Continue building up layers...]
|
||||
|
||||
</dependency-graph>
|
||||
|
||||
---
|
||||
|
||||
<implementation-roadmap>
|
||||
<instruction>
|
||||
Turn the dependency graph into concrete development phases.
|
||||
|
||||
Each phase should:
|
||||
1. Have clear entry criteria (what must exist before starting)
|
||||
2. Contain tasks that can be parallelized (no inter-dependencies within phase)
|
||||
3. Have clear exit criteria (how do we know phase is complete?)
|
||||
4. Build toward something USABLE (not just infrastructure)
|
||||
|
||||
Phase ordering follows topological sort of dependency graph.
|
||||
|
||||
<example type="good">
|
||||
Phase 0: Foundation
|
||||
Entry: Clean repository
|
||||
Tasks:
|
||||
- Implement error handling utilities
|
||||
- Create base type definitions
|
||||
- Setup configuration system
|
||||
Exit: Other modules can import foundation without errors
|
||||
|
||||
Phase 1: Data Layer
|
||||
Entry: Phase 0 complete
|
||||
Tasks:
|
||||
- Implement schema validator (uses: base types, error handling)
|
||||
- Build data ingestion pipeline (uses: validator, config)
|
||||
Exit: End-to-end data flow from input to validated output
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
Phase 1: Build Everything
|
||||
Tasks:
|
||||
- API
|
||||
- Database
|
||||
- UI
|
||||
- Tests
|
||||
(Problem: No clear focus. Too broad. Dependencies not considered.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Development Phases
|
||||
|
||||
### Phase 0: [Foundation Name]
|
||||
**Goal**: [What foundational capability this establishes]
|
||||
|
||||
**Entry Criteria**: [What must be true before starting]
|
||||
|
||||
**Tasks**:
|
||||
- [ ] [Task name] (depends on: [none or list])
|
||||
- Acceptance criteria: [How we know it's done]
|
||||
- Test strategy: [What tests prove it works]
|
||||
|
||||
- [ ] [Task name] (depends on: [none or list])
|
||||
|
||||
**Exit Criteria**: [Observable outcome that proves phase complete]
|
||||
|
||||
**Delivers**: [What can users/developers do after this phase?]
|
||||
|
||||
---
|
||||
|
||||
### Phase 1: [Layer Name]
|
||||
**Goal**:
|
||||
|
||||
**Entry Criteria**: Phase 0 complete
|
||||
|
||||
**Tasks**:
|
||||
- [ ] [Task name] (depends on: [[tasks-from-phase-0]])
|
||||
- [ ] [Task name] (depends on: [[tasks-from-phase-0]])
|
||||
|
||||
**Exit Criteria**:
|
||||
|
||||
**Delivers**:
|
||||
|
||||
---
|
||||
|
||||
[Continue with more phases...]
|
||||
|
||||
</implementation-roadmap>
|
||||
|
||||
---
|
||||
|
||||
<test-strategy>
|
||||
<instruction>
|
||||
Define how testing will be integrated throughout development (TDD approach).
|
||||
|
||||
Specify:
|
||||
1. Test pyramid ratios (unit vs integration vs e2e)
|
||||
2. Coverage requirements
|
||||
3. Critical test scenarios
|
||||
4. Test generation guidelines for Surgical Test Generator
|
||||
|
||||
This section guides the AI when generating tests during the RED phase of TDD.
|
||||
|
||||
<example type="good">
|
||||
Critical Test Scenarios for Data Validation module:
|
||||
- Happy path: Valid data passes all checks
|
||||
- Edge cases: Empty strings, null values, boundary numbers
|
||||
- Error cases: Invalid types, missing required fields
|
||||
- Integration: Validator works with ingestion pipeline
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Test Pyramid
|
||||
|
||||
```
|
||||
/\
|
||||
/E2E\ ← [X]% (End-to-end, slow, comprehensive)
|
||||
/------\
|
||||
/Integration\ ← [Y]% (Module interactions)
|
||||
/------------\
|
||||
/ Unit Tests \ ← [Z]% (Fast, isolated, deterministic)
|
||||
/----------------\
|
||||
```
|
||||
|
||||
## Coverage Requirements
|
||||
- Line coverage: [X]% minimum
|
||||
- Branch coverage: [X]% minimum
|
||||
- Function coverage: [X]% minimum
|
||||
- Statement coverage: [X]% minimum
|
||||
|
||||
## Critical Test Scenarios
|
||||
|
||||
### [Module/Feature Name]
|
||||
**Happy path**:
|
||||
- [Scenario description]
|
||||
- Expected: [What should happen]
|
||||
|
||||
**Edge cases**:
|
||||
- [Scenario description]
|
||||
- Expected: [What should happen]
|
||||
|
||||
**Error cases**:
|
||||
- [Scenario description]
|
||||
- Expected: [How system handles failure]
|
||||
|
||||
**Integration points**:
|
||||
- [What interactions to test]
|
||||
- Expected: [End-to-end behavior]
|
||||
|
||||
## Test Generation Guidelines
|
||||
[Specific instructions for Surgical Test Generator about what to focus on, what patterns to follow, project-specific test conventions]
|
||||
|
||||
</test-strategy>
|
||||
|
||||
---
|
||||
|
||||
<architecture>
|
||||
<instruction>
|
||||
Describe technical architecture, data models, and key design decisions.
|
||||
|
||||
Keep this section AFTER functional/structural decomposition - implementation details come after understanding structure.
|
||||
</instruction>
|
||||
|
||||
## System Components
|
||||
[Major architectural pieces and their responsibilities]
|
||||
|
||||
## Data Models
|
||||
[Core data structures, schemas, database design]
|
||||
|
||||
## Technology Stack
|
||||
[Languages, frameworks, key libraries]
|
||||
|
||||
**Decision: [Technology/Pattern]**
|
||||
- **Rationale**: [Why chosen]
|
||||
- **Trade-offs**: [What we're giving up]
|
||||
- **Alternatives considered**: [What else we looked at]
|
||||
|
||||
</architecture>
|
||||
|
||||
---
|
||||
|
||||
<risks>
|
||||
<instruction>
|
||||
Identify risks that could derail development and how to mitigate them.
|
||||
|
||||
Categories:
|
||||
- Technical risks (complexity, unknowns)
|
||||
- Dependency risks (blocking issues)
|
||||
- Scope risks (creep, underestimation)
|
||||
</instruction>
|
||||
|
||||
## Technical Risks
|
||||
**Risk**: [Description]
|
||||
- **Impact**: [High/Medium/Low - effect on project]
|
||||
- **Likelihood**: [High/Medium/Low]
|
||||
- **Mitigation**: [How to address]
|
||||
- **Fallback**: [Plan B if mitigation fails]
|
||||
|
||||
## Dependency Risks
|
||||
[External dependencies, blocking issues]
|
||||
|
||||
## Scope Risks
|
||||
[Scope creep, underestimation, unclear requirements]
|
||||
|
||||
</risks>
|
||||
|
||||
---
|
||||
|
||||
<appendix>
|
||||
## References
|
||||
[Papers, documentation, similar systems]
|
||||
|
||||
## Glossary
|
||||
[Domain-specific terms]
|
||||
|
||||
## Open Questions
|
||||
[Things to resolve during development]
|
||||
</appendix>
|
||||
|
||||
---
|
||||
|
||||
<task-master-integration>
|
||||
# How Task Master Uses This PRD
|
||||
|
||||
When you run `task-master parse-prd <file>.txt`, the parser:
|
||||
|
||||
1. **Extracts capabilities** → Main tasks
|
||||
- Each `### Capability:` becomes a top-level task
|
||||
|
||||
2. **Extracts features** → Subtasks
|
||||
- Each `#### Feature:` becomes a subtask under its capability
|
||||
|
||||
3. **Parses dependencies** → Task dependencies
|
||||
- `Depends on: [X, Y]` sets task.dependencies = ["X", "Y"]
|
||||
|
||||
4. **Orders by phases** → Task priorities
|
||||
- Phase 0 tasks = highest priority
|
||||
- Phase N tasks = lower priority, properly sequenced
|
||||
|
||||
5. **Uses test strategy** → Test generation context
|
||||
- Feeds test scenarios to Surgical Test Generator during implementation
|
||||
|
||||
**Result**: A dependency-aware task graph that can be executed in topological order.
|
||||
|
||||
## Why RPG Structure Matters
|
||||
|
||||
Traditional flat PRDs lead to:
|
||||
- ❌ Unclear task dependencies
|
||||
- ❌ Arbitrary task ordering
|
||||
- ❌ Circular dependencies discovered late
|
||||
- ❌ Poorly scoped tasks
|
||||
|
||||
RPG-structured PRDs provide:
|
||||
- ✅ Explicit dependency chains
|
||||
- ✅ Topological execution order
|
||||
- ✅ Clear module boundaries
|
||||
- ✅ Validated task graph before implementation
|
||||
|
||||
## Tips for Best Results
|
||||
|
||||
1. **Spend time on dependency graph** - This is the most valuable section for Task Master
|
||||
2. **Keep features atomic** - Each feature should be independently testable
|
||||
3. **Progressive refinement** - Start broad, use `task-master expand` to break down complex tasks
|
||||
4. **Use research mode** - `task-master parse-prd --research` leverages AI for better task generation
|
||||
</task-master-integration>
|
||||
@@ -69,11 +69,29 @@ export function resolveTasksPath(args, log = silentLogger) {

// Use core findTasksPath with explicit path and normalized projectRoot context
if (projectRoot) {
return coreFindTasksPath(explicitPath, { projectRoot }, log);
const foundPath = coreFindTasksPath(explicitPath, { projectRoot }, log);
// If core function returns null and no explicit path was provided,
// construct the expected default path as documented
if (foundPath === null && !explicitPath) {
const defaultPath = path.join(
projectRoot,
'.taskmaster',
'tasks',
'tasks.json'
);
log?.info?.(
`Core findTasksPath returned null, using default path: ${defaultPath}`
);
return defaultPath;
}
return foundPath;
}

// Fallback to core function without projectRoot context
return coreFindTasksPath(explicitPath, null, log);
const foundPath = coreFindTasksPath(explicitPath, null, log);
// Note: When no projectRoot is available, we can't construct a default path
// so we return null and let the calling code handle the error
return foundPath;
}

/**

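In effect, a call that omits the explicit file path now falls back to the documented default under the project root. A minimal sketch of the new behavior, assuming a project at /repo where the core lookup finds no tasks.json:

```typescript
// Hypothetical illustration; resolveTasksPath is the helper changed above.
const tasksPath = resolveTasksPath({ projectRoot: '/repo' });
// -> '/repo/.taskmaster/tasks/tasks.json' when the core lookup returns null
//    and no explicit file path was provided (previously this returned null).

// Without a projectRoot context, the helper still returns the core result,
// which may be null; the caller is expected to handle that error.
```
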
34 output.txt Normal file
File diff suppressed because one or more lines are too long
41 package-lock.json generated
@@ -1,12 +1,12 @@
|
||||
{
|
||||
"name": "task-master-ai",
|
||||
"version": "0.27.3",
|
||||
"version": "0.28.0-rc.1",
|
||||
"lockfileVersion": 3,
|
||||
"requires": true,
|
||||
"packages": {
|
||||
"": {
|
||||
"name": "task-master-ai",
|
||||
"version": "0.27.3",
|
||||
"version": "0.28.0-rc.1",
|
||||
"license": "MIT WITH Commons-Clause",
|
||||
"workspaces": [
|
||||
"apps/*",
|
||||
@@ -131,7 +131,7 @@
|
||||
}
|
||||
},
|
||||
"apps/extension": {
|
||||
"version": "0.25.4",
|
||||
"version": "0.25.5-rc.0",
|
||||
"dependencies": {
|
||||
"task-master-ai": "*"
|
||||
},
|
||||
@@ -635,7 +635,6 @@
|
||||
"apps/extension/node_modules/zod": {
|
||||
"version": "3.25.76",
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"funding": {
|
||||
"url": "https://github.com/sponsors/colinhacks"
|
||||
}
|
||||
@@ -1830,7 +1829,6 @@
|
||||
"version": "7.28.4",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"dependencies": {
|
||||
"@babel/code-frame": "^7.27.1",
|
||||
"@babel/generator": "^7.28.3",
|
||||
@@ -2663,7 +2661,6 @@
|
||||
"version": "6.3.1",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"dependencies": {
|
||||
"@dnd-kit/accessibility": "^3.1.1",
|
||||
"@dnd-kit/utilities": "^3.2.2",
|
||||
@@ -4583,6 +4580,7 @@
|
||||
"version": "0.23.2",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"dependencies": {
|
||||
"loose-envify": "^1.1.0"
|
||||
}
|
||||
@@ -5172,6 +5170,7 @@
|
||||
"version": "0.23.2",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"dependencies": {
|
||||
"loose-envify": "^1.1.0"
|
||||
}
|
||||
@@ -5180,7 +5179,6 @@
|
||||
"version": "3.25.76",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"funding": {
|
||||
"url": "https://github.com/sponsors/colinhacks"
|
||||
}
|
||||
@@ -5471,7 +5469,6 @@
|
||||
"node_modules/@modelcontextprotocol/sdk/node_modules/zod": {
|
||||
"version": "3.25.76",
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"funding": {
|
||||
"url": "https://github.com/sponsors/colinhacks"
|
||||
}
|
||||
@@ -5572,7 +5569,6 @@
|
||||
"node_modules/@opentelemetry/api": {
|
||||
"version": "1.9.0",
|
||||
"license": "Apache-2.0",
|
||||
"peer": true,
|
||||
"engines": {
|
||||
"node": ">=8.0.0"
|
||||
}
|
||||
@@ -8592,7 +8588,6 @@
|
||||
"version": "19.1.8",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"dependencies": {
|
||||
"csstype": "^3.0.2"
|
||||
}
|
||||
@@ -8601,7 +8596,6 @@
|
||||
"version": "19.1.6",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"peerDependencies": {
|
||||
"@types/react": "^19.0.0"
|
||||
}
|
||||
@@ -9047,7 +9041,6 @@
|
||||
"node_modules/acorn": {
|
||||
"version": "8.15.0",
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"bin": {
|
||||
"acorn": "bin/acorn"
|
||||
},
|
||||
@@ -9113,7 +9106,6 @@
|
||||
"node_modules/ai": {
|
||||
"version": "5.0.57",
|
||||
"license": "Apache-2.0",
|
||||
"peer": true,
|
||||
"dependencies": {
|
||||
"@ai-sdk/gateway": "1.0.30",
|
||||
"@ai-sdk/provider": "2.0.0",
|
||||
@@ -9333,7 +9325,6 @@
|
||||
"node_modules/ajv": {
|
||||
"version": "8.17.1",
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"dependencies": {
|
||||
"fast-deep-equal": "^3.1.3",
|
||||
"fast-uri": "^3.0.1",
|
||||
@@ -10339,7 +10330,6 @@
|
||||
}
|
||||
],
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"dependencies": {
|
||||
"baseline-browser-mapping": "^2.8.3",
|
||||
"caniuse-lite": "^1.0.30001741",
|
||||
@@ -12203,8 +12193,7 @@
|
||||
"node_modules/devtools-protocol": {
|
||||
"version": "0.0.1312386",
|
||||
"dev": true,
|
||||
"license": "BSD-3-Clause",
|
||||
"peer": true
|
||||
"license": "BSD-3-Clause"
|
||||
},
|
||||
"node_modules/dezalgo": {
|
||||
"version": "1.0.4",
|
||||
@@ -12798,7 +12787,6 @@
|
||||
"version": "0.25.10",
|
||||
"hasInstallScript": true,
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"bin": {
|
||||
"esbuild": "bin/esbuild"
|
||||
},
|
||||
@@ -13111,7 +13099,6 @@
|
||||
"node_modules/express": {
|
||||
"version": "4.21.2",
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"dependencies": {
|
||||
"accepts": "~1.3.8",
|
||||
"array-flatten": "1.1.1",
|
||||
@@ -15465,7 +15452,6 @@
|
||||
"version": "6.3.1",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"dependencies": {
|
||||
"@alcalzone/ansi-tokenize": "^0.2.0",
|
||||
"ansi-escapes": "^7.0.0",
|
||||
@@ -16423,7 +16409,6 @@
|
||||
"version": "29.7.0",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"dependencies": {
|
||||
"@jest/core": "^29.7.0",
|
||||
"@jest/types": "^29.6.3",
|
||||
@@ -18041,7 +18026,6 @@
|
||||
"version": "1.4.0",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"engines": {
|
||||
"node": ">= 10.16.0"
|
||||
}
|
||||
@@ -18367,6 +18351,7 @@
|
||||
"os": [
|
||||
"darwin"
|
||||
],
|
||||
"peer": true,
|
||||
"engines": {
|
||||
"node": ">= 12.0.0"
|
||||
},
|
||||
@@ -18591,6 +18576,7 @@
|
||||
"version": "1.4.0",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"dependencies": {
|
||||
"js-tokens": "^3.0.0 || ^4.0.0"
|
||||
},
|
||||
@@ -18721,7 +18707,6 @@
|
||||
"node_modules/marked": {
|
||||
"version": "15.0.12",
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"bin": {
|
||||
"marked": "bin/marked.js"
|
||||
},
|
||||
@@ -21444,7 +21429,6 @@
|
||||
}
|
||||
],
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"dependencies": {
|
||||
"nanoid": "^3.3.11",
|
||||
"picocolors": "^1.1.1",
|
||||
@@ -22827,7 +22811,6 @@
|
||||
"integrity": "sha512-U+NPR0Bkg3wm61dteD2L4nAM1U9dtaqVrpDXwC36IKRHpEO/Ubpid4Nijpa2imPchcVNHfxVFwSSMJdwdGFUbg==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"dependencies": {
|
||||
"@oxc-project/types": "=0.93.0",
|
||||
"@rolldown/pluginutils": "1.0.0-beta.41",
|
||||
@@ -25256,7 +25239,6 @@
|
||||
"version": "5.9.2",
|
||||
"devOptional": true,
|
||||
"license": "Apache-2.0",
|
||||
"peer": true,
|
||||
"bin": {
|
||||
"tsc": "bin/tsc",
|
||||
"tsserver": "bin/tsserver"
|
||||
@@ -25373,7 +25355,6 @@
|
||||
"version": "11.0.5",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"dependencies": {
|
||||
"@types/unist": "^3.0.0",
|
||||
"bail": "^2.0.0",
|
||||
@@ -25816,7 +25797,6 @@
|
||||
"version": "5.4.20",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"dependencies": {
|
||||
"esbuild": "^0.21.3",
|
||||
"postcss": "^8.4.43",
|
||||
@@ -25929,6 +25909,7 @@
|
||||
"os": [
|
||||
"darwin"
|
||||
],
|
||||
"peer": true,
|
||||
"engines": {
|
||||
"node": ">=12"
|
||||
}
|
||||
@@ -26512,7 +26493,7 @@
|
||||
},
|
||||
"node_modules/yaml": {
|
||||
"version": "1.10.2",
|
||||
"dev": true,
|
||||
"devOptional": true,
|
||||
"license": "ISC",
|
||||
"engines": {
|
||||
"node": ">= 6"
|
||||
@@ -26655,7 +26636,6 @@
|
||||
"node_modules/zod": {
|
||||
"version": "4.1.11",
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"funding": {
|
||||
"url": "https://github.com/sponsors/colinhacks"
|
||||
}
|
||||
@@ -27397,7 +27377,6 @@
|
||||
"version": "3.2.4",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"peer": true,
|
||||
"dependencies": {
|
||||
"@types/chai": "^5.2.2",
|
||||
"@vitest/expect": "3.2.4",
|
||||
|
||||
@@ -1,6 +1,6 @@
{
"name": "task-master-ai",
"version": "0.28.0-rc.1",
"version": "0.28.0-rc.2",
"description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
"main": "index.js",
"type": "module",

@@ -53,7 +53,8 @@ export class TaskEntity implements Task {
// Normalize subtask IDs to strings
this.subtasks = (data.subtasks || []).map((subtask) => ({
...subtask,
id: Number(subtask.id), // Keep subtask IDs as numbers per interface
id: String(subtask.id),
parentId: String(subtask.parentId)
}));

@@ -51,7 +51,8 @@ export const ERROR_CODES = {
INTERNAL_ERROR: 'INTERNAL_ERROR',
INVALID_INPUT: 'INVALID_INPUT',
NOT_IMPLEMENTED: 'NOT_IMPLEMENTED',
UNKNOWN_ERROR: 'UNKNOWN_ERROR'
UNKNOWN_ERROR: 'UNKNOWN_ERROR',
NOT_FOUND: 'NOT_FOUND'
} as const;

export type ErrorCode = (typeof ERROR_CODES)[keyof typeof ERROR_CODES];

@@ -11,7 +11,9 @@ export {
type ListTasksResult,
type StartTaskOptions,
type StartTaskResult,
type ConflictCheckResult
type ConflictCheckResult,
type ExportTasksOptions,
type ExportResult
} from './task-master-core.js';

// Re-export types

@@ -5,6 +5,16 @@

import type { Task, TaskMetadata, TaskStatus } from '../types/index.js';

/**
* Options for loading tasks from storage
*/
export interface LoadTasksOptions {
/** Filter tasks by status */
status?: TaskStatus;
/** Exclude subtasks from loaded tasks (default: false) */
excludeSubtasks?: boolean;
}

/**
* Result type for updateTaskStatus operations
*/
@@ -21,11 +31,12 @@ export interface UpdateStatusResult {
*/
export interface IStorage {
/**
* Load all tasks from storage, optionally filtered by tag
* Load all tasks from storage, optionally filtered by tag and other criteria
* @param tag - Optional tag to filter tasks by
* @param options - Optional filtering options (status, excludeSubtasks)
* @returns Promise that resolves to an array of tasks
*/
loadTasks(tag?: string): Promise<Task[]>;
loadTasks(tag?: string, options?: LoadTasksOptions): Promise<Task[]>;

/**
* Load a single task by ID
@@ -205,7 +216,7 @@ export abstract class BaseStorage implements IStorage {
}

// Abstract methods that must be implemented by concrete classes
abstract loadTasks(tag?: string): Promise<Task[]>;
abstract loadTasks(tag?: string, options?: LoadTasksOptions): Promise<Task[]>;
abstract loadTask(taskId: string, tag?: string): Promise<Task | null>;
abstract saveTasks(tasks: Task[], tag?: string): Promise<void>;
abstract appendTasks(tasks: Task[], tag?: string): Promise<void>;

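A brief usage sketch of the extended signature (the storage instance and tag name here are hypothetical; any IStorage implementation applies):

```typescript
// Filter at the storage layer instead of in the caller.
const pendingTasks = await storage.loadTasks('master', { status: 'pending' });

// Load only parent tasks, with subtasks excluded by the storage backend.
const parentsOnly = await storage.loadTasks('master', { excludeSubtasks: true });
```
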
148 packages/tm-core/src/mappers/TaskMapper.test.ts Normal file
@@ -0,0 +1,148 @@
|
||||
import { describe, it, expect, vi } from 'vitest';
|
||||
import { TaskMapper } from './TaskMapper.js';
|
||||
import type { Tables } from '../types/database.types.js';
|
||||
|
||||
type TaskRow = Tables<'tasks'>;
|
||||
|
||||
describe('TaskMapper', () => {
|
||||
describe('extractMetadataField', () => {
|
||||
it('should extract string field from metadata', () => {
|
||||
const taskRow: TaskRow = {
|
||||
id: '123',
|
||||
display_id: '1',
|
||||
title: 'Test Task',
|
||||
description: 'Test description',
|
||||
status: 'todo',
|
||||
priority: 'medium',
|
||||
parent_task_id: null,
|
||||
subtask_position: 0,
|
||||
created_at: new Date().toISOString(),
|
||||
updated_at: new Date().toISOString(),
|
||||
metadata: {
|
||||
details: 'Some details',
|
||||
testStrategy: 'Test with unit tests'
|
||||
},
|
||||
complexity: null,
|
||||
assignee_id: null,
|
||||
estimated_hours: null,
|
||||
actual_hours: null,
|
||||
due_date: null,
|
||||
completed_at: null
|
||||
};
|
||||
|
||||
const task = TaskMapper.mapDatabaseTaskToTask(taskRow, [], new Map());
|
||||
|
||||
expect(task.details).toBe('Some details');
|
||||
expect(task.testStrategy).toBe('Test with unit tests');
|
||||
});
|
||||
|
||||
it('should use default value when metadata field is missing', () => {
|
||||
const taskRow: TaskRow = {
|
||||
id: '123',
|
||||
display_id: '1',
|
||||
title: 'Test Task',
|
||||
description: 'Test description',
|
||||
status: 'todo',
|
||||
priority: 'medium',
|
||||
parent_task_id: null,
|
||||
subtask_position: 0,
|
||||
created_at: new Date().toISOString(),
|
||||
updated_at: new Date().toISOString(),
|
||||
metadata: {},
|
||||
complexity: null,
|
||||
assignee_id: null,
|
||||
estimated_hours: null,
|
||||
actual_hours: null,
|
||||
due_date: null,
|
||||
completed_at: null
|
||||
};
|
||||
|
||||
const task = TaskMapper.mapDatabaseTaskToTask(taskRow, [], new Map());
|
||||
|
||||
expect(task.details).toBe('');
|
||||
expect(task.testStrategy).toBe('');
|
||||
});
|
||||
|
||||
it('should use default value when metadata is null', () => {
|
||||
const taskRow: TaskRow = {
|
||||
id: '123',
|
||||
display_id: '1',
|
||||
title: 'Test Task',
|
||||
description: 'Test description',
|
||||
status: 'todo',
|
||||
priority: 'medium',
|
||||
parent_task_id: null,
|
||||
subtask_position: 0,
|
||||
created_at: new Date().toISOString(),
|
||||
updated_at: new Date().toISOString(),
|
||||
metadata: null,
|
||||
complexity: null,
|
||||
assignee_id: null,
|
||||
estimated_hours: null,
|
||||
actual_hours: null,
|
||||
due_date: null,
|
||||
completed_at: null
|
||||
};
|
||||
|
||||
const task = TaskMapper.mapDatabaseTaskToTask(taskRow, [], new Map());
|
||||
|
||||
expect(task.details).toBe('');
|
||||
expect(task.testStrategy).toBe('');
|
||||
});
|
||||
|
||||
it('should use default value and warn when metadata field has wrong type', () => {
|
||||
const consoleWarnSpy = vi
|
||||
.spyOn(console, 'warn')
|
||||
.mockImplementation(() => {});
|
||||
|
||||
const taskRow: TaskRow = {
|
||||
id: '123',
|
||||
display_id: '1',
|
||||
title: 'Test Task',
|
||||
description: 'Test description',
|
||||
status: 'todo',
|
||||
priority: 'medium',
|
||||
parent_task_id: null,
|
||||
subtask_position: 0,
|
||||
created_at: new Date().toISOString(),
|
||||
updated_at: new Date().toISOString(),
|
||||
metadata: {
|
||||
details: 12345, // Wrong type: number instead of string
|
||||
testStrategy: ['test1', 'test2'] // Wrong type: array instead of string
|
||||
},
|
||||
complexity: null,
|
||||
assignee_id: null,
|
||||
estimated_hours: null,
|
||||
actual_hours: null,
|
||||
due_date: null,
|
||||
completed_at: null
|
||||
};
|
||||
|
||||
const task = TaskMapper.mapDatabaseTaskToTask(taskRow, [], new Map());
|
||||
|
||||
// Should use empty string defaults when type doesn't match
|
||||
expect(task.details).toBe('');
|
||||
expect(task.testStrategy).toBe('');
|
||||
|
||||
// Should have logged warnings
|
||||
expect(consoleWarnSpy).toHaveBeenCalledWith(
|
||||
expect.stringContaining('Type mismatch in metadata field "details"')
|
||||
);
|
||||
expect(consoleWarnSpy).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
'Type mismatch in metadata field "testStrategy"'
|
||||
)
|
||||
);
|
||||
|
||||
consoleWarnSpy.mockRestore();
|
||||
});
|
||||
});
|
||||
|
||||
describe('mapStatus', () => {
|
||||
it('should map database status to internal status', () => {
|
||||
expect(TaskMapper.mapStatus('todo')).toBe('pending');
|
||||
expect(TaskMapper.mapStatus('in_progress')).toBe('in-progress');
|
||||
expect(TaskMapper.mapStatus('done')).toBe('done');
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -2,22 +2,32 @@ import { Task, Subtask } from '../types/index.js';
|
||||
import { Database, Tables } from '../types/database.types.js';
|
||||
|
||||
type TaskRow = Tables<'tasks'>;
|
||||
type DependencyRow = Tables<'task_dependencies'>;
|
||||
|
||||
// Legacy type for backward compatibility
|
||||
type DependencyRow = Tables<'task_dependencies'> & {
|
||||
depends_on_task?: { display_id: string } | null;
|
||||
depends_on_task_id?: string;
|
||||
};
|
||||
|
||||
export class TaskMapper {
|
||||
/**
|
||||
* Maps database tasks to internal Task format
|
||||
* @param dbTasks - Array of tasks from database
|
||||
* @param dependencies - Either a Map of task_id to display_ids or legacy array format
|
||||
*/
|
||||
static mapDatabaseTasksToTasks(
|
||||
dbTasks: TaskRow[],
|
||||
dbDependencies: DependencyRow[]
|
||||
dependencies: Map<string, string[]> | DependencyRow[]
|
||||
): Task[] {
|
||||
if (!dbTasks || dbTasks.length === 0) {
|
||||
return [];
|
||||
}
|
||||
|
||||
// Group dependencies by task_id
|
||||
const dependenciesByTaskId = this.groupDependenciesByTaskId(dbDependencies);
|
||||
// Handle both Map and array formats for backward compatibility
|
||||
const dependenciesByTaskId =
|
||||
dependencies instanceof Map
|
||||
? dependencies
|
||||
: this.groupDependenciesByTaskId(dependencies);
|
||||
|
||||
// Separate parent tasks and subtasks
|
||||
const parentTasks = dbTasks.filter((t) => !t.parent_task_id);
|
||||
@@ -43,21 +53,23 @@ export class TaskMapper {
|
||||
): Task {
|
||||
// Map subtasks
|
||||
const subtasks: Subtask[] = dbSubtasks.map((subtask, index) => ({
|
||||
id: index + 1, // Use numeric ID for subtasks
|
||||
id: subtask.display_id || String(index + 1), // Use display_id if available (API storage), fallback to numeric (file storage)
|
||||
parentId: dbTask.id,
|
||||
title: subtask.title,
|
||||
description: subtask.description || '',
|
||||
status: this.mapStatus(subtask.status),
|
||||
priority: this.mapPriority(subtask.priority),
|
||||
dependencies: dependenciesByTaskId.get(subtask.id) || [],
|
||||
details: (subtask.metadata as any)?.details || '',
|
||||
testStrategy: (subtask.metadata as any)?.testStrategy || '',
|
||||
details: this.extractMetadataField(subtask.metadata, 'details', ''),
|
||||
testStrategy: this.extractMetadataField(
|
||||
subtask.metadata,
|
||||
'testStrategy',
|
||||
''
|
||||
),
|
||||
createdAt: subtask.created_at,
|
||||
updatedAt: subtask.updated_at,
|
||||
assignee: subtask.assignee_id || undefined,
|
||||
complexity: subtask.complexity
|
||||
? this.mapComplexityToInternal(subtask.complexity)
|
||||
: undefined
|
||||
complexity: subtask.complexity ?? undefined
|
||||
}));
|
||||
|
||||
return {
|
||||
@@ -67,22 +79,25 @@ export class TaskMapper {
|
||||
status: this.mapStatus(dbTask.status),
|
||||
priority: this.mapPriority(dbTask.priority),
|
||||
dependencies: dependenciesByTaskId.get(dbTask.id) || [],
|
||||
details: (dbTask.metadata as any)?.details || '',
|
||||
testStrategy: (dbTask.metadata as any)?.testStrategy || '',
|
||||
details: this.extractMetadataField(dbTask.metadata, 'details', ''),
|
||||
testStrategy: this.extractMetadataField(
|
||||
dbTask.metadata,
|
||||
'testStrategy',
|
||||
''
|
||||
),
|
||||
subtasks,
|
||||
createdAt: dbTask.created_at,
|
||||
updatedAt: dbTask.updated_at,
|
||||
assignee: dbTask.assignee_id || undefined,
|
||||
complexity: dbTask.complexity
|
||||
? this.mapComplexityToInternal(dbTask.complexity)
|
||||
: undefined,
|
||||
complexity: dbTask.complexity ?? undefined,
|
||||
effort: dbTask.estimated_hours || undefined,
|
||||
actualEffort: dbTask.actual_hours || undefined
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Groups dependencies by task ID
|
||||
* Groups dependencies by task ID (legacy method for backward compatibility)
|
||||
* @deprecated Use DependencyFetcher.fetchDependenciesWithDisplayIds instead
|
||||
*/
|
||||
private static groupDependenciesByTaskId(
|
||||
dependencies: DependencyRow[]
|
||||
@@ -92,7 +107,14 @@ export class TaskMapper {
|
||||
if (dependencies) {
|
||||
for (const dep of dependencies) {
|
||||
const deps = dependenciesByTaskId.get(dep.task_id) || [];
|
||||
deps.push(dep.depends_on_task_id);
|
||||
// Handle both old format (UUID string) and new format (object with display_id)
|
||||
const dependencyId =
|
||||
typeof dep.depends_on_task === 'object'
|
||||
? dep.depends_on_task?.display_id
|
||||
: dep.depends_on_task_id;
|
||||
if (dependencyId) {
|
||||
deps.push(dependencyId);
|
||||
}
|
||||
dependenciesByTaskId.set(dep.task_id, deps);
|
||||
}
|
||||
}
|
||||
@@ -157,14 +179,38 @@ export class TaskMapper {
|
||||
}
|
||||
|
||||
/**
|
||||
* Maps numeric complexity to descriptive complexity
|
||||
* Safely extracts a field from metadata JSON with runtime type validation
|
||||
* @param metadata The metadata object (could be null or any type)
|
||||
* @param field The field to extract
|
||||
* @param defaultValue Default value if field doesn't exist
|
||||
* @returns The extracted value if it matches the expected type, otherwise defaultValue
|
||||
*/
|
||||
private static mapComplexityToInternal(
|
||||
complexity: number
|
||||
): Task['complexity'] {
|
||||
if (complexity <= 2) return 'simple';
|
||||
if (complexity <= 5) return 'moderate';
|
||||
if (complexity <= 8) return 'complex';
|
||||
return 'very-complex';
|
||||
private static extractMetadataField<T>(
|
||||
metadata: unknown,
|
||||
field: string,
|
||||
defaultValue: T
|
||||
): T {
|
||||
if (!metadata || typeof metadata !== 'object') {
|
||||
return defaultValue;
|
||||
}
|
||||
|
||||
const value = (metadata as Record<string, unknown>)[field];
|
||||
|
||||
if (value === undefined) {
|
||||
return defaultValue;
|
||||
}
|
||||
|
||||
// Runtime type validation: ensure value matches the type of defaultValue
|
||||
const expectedType = typeof defaultValue;
|
||||
const actualType = typeof value;
|
||||
|
||||
if (expectedType !== actualType) {
|
||||
console.warn(
|
||||
`Type mismatch in metadata field "${field}": expected ${expectedType}, got ${actualType}. Using default value.`
|
||||
);
|
||||
return defaultValue;
|
||||
}
|
||||
|
||||
return value as T;
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1,224 +0,0 @@
|
||||
import { SupabaseClient } from '@supabase/supabase-js';
|
||||
import { Task } from '../types/index.js';
|
||||
import { Database } from '../types/database.types.js';
|
||||
import { TaskMapper } from '../mappers/TaskMapper.js';
|
||||
import { AuthManager } from '../auth/auth-manager.js';
|
||||
import { z } from 'zod';
|
||||
|
||||
// Zod schema for task status validation
|
||||
const TaskStatusSchema = z.enum([
|
||||
'pending',
|
||||
'in-progress',
|
||||
'done',
|
||||
'review',
|
||||
'deferred',
|
||||
'cancelled',
|
||||
'blocked'
|
||||
]);
|
||||
|
||||
// Zod schema for task updates
|
||||
const TaskUpdateSchema = z
|
||||
.object({
|
||||
title: z.string().min(1).optional(),
|
||||
description: z.string().optional(),
|
||||
status: TaskStatusSchema.optional(),
|
||||
priority: z.enum(['low', 'medium', 'high', 'critical']).optional(),
|
||||
details: z.string().optional(),
|
||||
testStrategy: z.string().optional()
|
||||
})
|
||||
.partial();
|
||||
|
||||
export class SupabaseTaskRepository {
|
||||
constructor(private supabase: SupabaseClient<Database>) {}
|
||||
|
||||
async getTasks(_projectId?: string): Promise<Task[]> {
|
||||
// Get the current context to determine briefId
|
||||
const authManager = AuthManager.getInstance();
|
||||
const context = authManager.getContext();
|
||||
|
||||
if (!context || !context.briefId) {
|
||||
throw new Error(
|
||||
'No brief selected. Please select a brief first using: tm context brief'
|
||||
);
|
||||
}
|
||||
|
||||
// Get all tasks for the brief using the exact query structure
|
||||
const { data: tasks, error } = await this.supabase
|
||||
.from('tasks')
|
||||
.select(`
|
||||
*,
|
||||
document:document_id (
|
||||
id,
|
||||
document_name,
|
||||
title,
|
||||
description
|
||||
)
|
||||
`)
|
||||
.eq('brief_id', context.briefId)
|
||||
.order('position', { ascending: true })
|
||||
.order('subtask_position', { ascending: true })
|
||||
.order('created_at', { ascending: true });
|
||||
|
||||
if (error) {
|
||||
throw new Error(`Failed to fetch tasks: ${error.message}`);
|
||||
}
|
||||
|
||||
if (!tasks || tasks.length === 0) {
|
||||
return [];
|
||||
}
|
||||
|
||||
// Get all dependencies for these tasks
|
||||
const taskIds = tasks.map((t: any) => t.id);
|
||||
const { data: depsData, error: depsError } = await this.supabase
|
||||
.from('task_dependencies')
|
||||
.select('*')
|
||||
.in('task_id', taskIds);
|
||||
|
||||
if (depsError) {
|
||||
throw new Error(
|
||||
`Failed to fetch task dependencies: ${depsError.message}`
|
||||
);
|
||||
}
|
||||
|
||||
// Use mapper to convert to internal format
|
||||
return TaskMapper.mapDatabaseTasksToTasks(tasks, depsData || []);
|
||||
}
|
||||
|
||||
async getTask(_projectId: string, taskId: string): Promise<Task | null> {
|
||||
// Get the current context to determine briefId (projectId not used in Supabase context)
|
||||
const authManager = AuthManager.getInstance();
|
||||
const context = authManager.getContext();
|
||||
|
||||
if (!context || !context.briefId) {
|
||||
throw new Error(
|
||||
'No brief selected. Please select a brief first using: tm context brief'
|
||||
);
|
||||
}
|
||||
|
||||
const { data, error } = await this.supabase
|
||||
.from('tasks')
|
||||
.select('*')
|
||||
.eq('brief_id', context.briefId)
|
||||
.eq('display_id', taskId.toUpperCase())
|
||||
.single();
|
||||
|
||||
if (error) {
|
||||
if (error.code === 'PGRST116') {
|
||||
return null; // Not found
|
||||
}
|
||||
throw new Error(`Failed to fetch task: ${error.message}`);
|
||||
}
|
||||
|
||||
// Get dependencies for this task
|
||||
const { data: depsData } = await this.supabase
|
||||
.from('task_dependencies')
|
||||
.select('*')
|
||||
.eq('task_id', taskId);
|
||||
|
||||
// Get subtasks if this is a parent task
|
||||
const { data: subtasksData } = await this.supabase
|
||||
.from('tasks')
|
||||
.select('*')
|
||||
.eq('parent_task_id', taskId)
|
||||
.order('subtask_position', { ascending: true });
|
||||
|
||||
// Create dependency map
|
||||
const dependenciesByTaskId = new Map<string, string[]>();
|
||||
if (depsData) {
|
||||
dependenciesByTaskId.set(
|
||||
taskId,
|
||||
depsData.map(
|
||||
(d: Database['public']['Tables']['task_dependencies']['Row']) =>
|
||||
d.depends_on_task_id
|
||||
)
|
||||
);
|
||||
}
|
||||
|
||||
// Use mapper to convert single task
|
||||
return TaskMapper.mapDatabaseTaskToTask(
|
||||
data,
|
||||
subtasksData || [],
|
||||
dependenciesByTaskId
|
||||
);
|
||||
}
|
||||
|
||||
async updateTask(
|
||||
projectId: string,
|
||||
taskId: string,
|
||||
updates: Partial<Task>
|
||||
): Promise<Task> {
|
||||
// Get the current context to determine briefId
|
||||
const authManager = AuthManager.getInstance();
|
||||
const context = authManager.getContext();
|
||||
|
||||
if (!context || !context.briefId) {
|
||||
throw new Error(
|
||||
'No brief selected. Please select a brief first using: tm context brief'
|
||||
);
|
||||
}
|
||||
|
||||
// Validate updates using Zod schema
|
||||
try {
|
||||
TaskUpdateSchema.parse(updates);
|
||||
} catch (error) {
|
||||
if (error instanceof z.ZodError) {
|
||||
const errorMessages = error.issues
|
||||
.map((err) => `${err.path.join('.')}: ${err.message}`)
|
||||
.join(', ');
|
||||
throw new Error(`Invalid task update data: ${errorMessages}`);
|
||||
}
|
||||
throw error;
|
||||
}
|
||||
|
||||
// Convert Task fields to database fields - only include fields that actually exist in the database
|
||||
const dbUpdates: any = {};
|
||||
|
||||
if (updates.title !== undefined) dbUpdates.title = updates.title;
|
||||
if (updates.description !== undefined)
|
||||
dbUpdates.description = updates.description;
|
||||
if (updates.status !== undefined)
|
||||
dbUpdates.status = this.mapStatusToDatabase(updates.status);
|
||||
if (updates.priority !== undefined) dbUpdates.priority = updates.priority;
|
||||
// Skip fields that don't exist in database schema: details, testStrategy, etc.
|
||||
|
||||
// Update the task
|
||||
const { error } = await this.supabase
|
||||
.from('tasks')
|
||||
.update(dbUpdates)
|
||||
.eq('brief_id', context.briefId)
|
||||
.eq('display_id', taskId.toUpperCase());
|
||||
|
||||
if (error) {
|
||||
throw new Error(`Failed to update task: ${error.message}`);
|
||||
}
|
||||
|
||||
// Return the updated task by fetching it
|
||||
const updatedTask = await this.getTask(projectId, taskId);
|
||||
if (!updatedTask) {
|
||||
throw new Error(`Failed to retrieve updated task ${taskId}`);
|
||||
}
|
||||
|
||||
return updatedTask;
|
||||
}
|
||||
|
||||
/**
|
||||
* Maps internal status to database status
|
||||
*/
|
||||
private mapStatusToDatabase(
|
||||
status: string
|
||||
): Database['public']['Enums']['task_status'] {
|
||||
switch (status) {
|
||||
case 'pending':
|
||||
return 'todo';
|
||||
case 'in-progress':
|
||||
case 'in_progress': // Accept both formats
|
||||
return 'in_progress';
|
||||
case 'done':
|
||||
return 'done';
|
||||
default:
|
||||
throw new Error(
|
||||
`Invalid task status: ${status}. Valid statuses are: pending, in-progress, done`
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,68 @@
import { SupabaseClient } from '@supabase/supabase-js';
import { Database } from '../../types/database.types.js';
import { DependencyWithDisplayId } from '../../types/repository-types.js';

/**
* Handles fetching and processing of task dependencies with display_ids
*/
export class DependencyFetcher {
constructor(private supabase: SupabaseClient<Database>) {}

/**
* Fetches dependencies for given task IDs with display_ids joined
* @param taskIds Array of task IDs to fetch dependencies for
* @returns Map of task ID to array of dependency display_ids
*/
async fetchDependenciesWithDisplayIds(
taskIds: string[]
): Promise<Map<string, string[]>> {
if (!taskIds || taskIds.length === 0) {
return new Map();
}

const { data, error } = await this.supabase
.from('task_dependencies')
.select(`
task_id,
depends_on_task:tasks!task_dependencies_depends_on_task_id_fkey (
display_id
)
`)
.in('task_id', taskIds);

if (error) {
throw new Error(`Failed to fetch task dependencies: ${error.message}`);
}

return this.processDependencyData(data as DependencyWithDisplayId[]);
}

/**
* Processes raw dependency data into a map structure
*/
private processDependencyData(
dependencies: DependencyWithDisplayId[]
): Map<string, string[]> {
const dependenciesByTaskId = new Map<string, string[]>();

if (!dependencies) {
return dependenciesByTaskId;
}

for (const dep of dependencies) {
if (!dep.task_id) continue;

const currentDeps = dependenciesByTaskId.get(dep.task_id) || [];

// Extract display_id from the joined object
const displayId = dep.depends_on_task?.display_id;
if (displayId) {
currentDeps.push(displayId);
}

dependenciesByTaskId.set(dep.task_id, currentDeps);
}

return dependenciesByTaskId;
}
}

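For orientation, a minimal usage sketch of the new fetcher (the client, UUIDs, and display_ids are hypothetical):

```typescript
// `supabase` is an initialized SupabaseClient<Database>.
const fetcher = new DependencyFetcher(supabase);
const depsByTaskId = await fetcher.fetchDependenciesWithDisplayIds([
  'task-uuid-1',
  'task-uuid-2'
]);
// depsByTaskId.get('task-uuid-1') might return ['TAS-3', 'TAS-7'],
// i.e. the display_ids of the tasks it depends on.
```
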
5 packages/tm-core/src/repositories/supabase/index.ts Normal file
@@ -0,0 +1,5 @@
/**
* Supabase repository implementations
*/
export { SupabaseTaskRepository } from './supabase-task-repository.js';
export { DependencyFetcher } from './dependency-fetcher.js';

@@ -0,0 +1,275 @@
|
||||
import { SupabaseClient } from '@supabase/supabase-js';
|
||||
import { Task } from '../../types/index.js';
|
||||
import { Database, Json } from '../../types/database.types.js';
|
||||
import { TaskMapper } from '../../mappers/TaskMapper.js';
|
||||
import { AuthManager } from '../../auth/auth-manager.js';
|
||||
import { DependencyFetcher } from './dependency-fetcher.js';
|
||||
import {
|
||||
TaskWithRelations,
|
||||
TaskDatabaseUpdate
|
||||
} from '../../types/repository-types.js';
|
||||
import { LoadTasksOptions } from '../../interfaces/storage.interface.js';
|
||||
import { z } from 'zod';
|
||||
|
||||
// Zod schema for task status validation
|
||||
const TaskStatusSchema = z.enum([
|
||||
'pending',
|
||||
'in-progress',
|
||||
'done',
|
||||
'review',
|
||||
'deferred',
|
||||
'cancelled',
|
||||
'blocked'
|
||||
]);
|
||||
|
||||
// Zod schema for task updates
|
||||
const TaskUpdateSchema = z
|
||||
.object({
|
||||
title: z.string().min(1).optional(),
|
||||
description: z.string().optional(),
|
||||
status: TaskStatusSchema.optional(),
|
||||
priority: z.enum(['low', 'medium', 'high', 'critical']).optional(),
|
||||
details: z.string().optional(),
|
||||
testStrategy: z.string().optional()
|
||||
})
|
||||
.partial();
|
||||
|
||||
export class SupabaseTaskRepository {
|
||||
private dependencyFetcher: DependencyFetcher;
|
||||
private authManager: AuthManager;
|
||||
|
||||
constructor(private supabase: SupabaseClient<Database>) {
|
||||
this.dependencyFetcher = new DependencyFetcher(supabase);
|
||||
this.authManager = AuthManager.getInstance();
|
||||
}
|
||||
|
||||
/**
|
||||
* Gets the current brief ID from auth context
|
||||
* @throws {Error} If no brief is selected
|
||||
*/
|
||||
private getBriefIdOrThrow(): string {
|
||||
const context = this.authManager.getContext();
|
||||
if (!context?.briefId) {
|
||||
throw new Error(
|
||||
'No brief selected. Please select a brief first using: tm context brief'
|
||||
);
|
||||
}
|
||||
return context.briefId;
|
||||
}
|
||||
|
||||
async getTasks(
|
||||
_projectId?: string,
|
||||
options?: LoadTasksOptions
|
||||
): Promise<Task[]> {
|
||||
const briefId = this.getBriefIdOrThrow();
|
||||
|
||||
// Build query with filters
|
||||
let query = this.supabase
|
||||
.from('tasks')
|
||||
.select(`
|
||||
*,
|
||||
document:document_id (
|
||||
id,
|
||||
document_name,
|
||||
title,
|
||||
description
|
||||
)
|
||||
`)
|
||||
.eq('brief_id', briefId);
|
||||
|
||||
// Apply status filter at database level if specified
|
||||
if (options?.status) {
|
||||
const dbStatus = this.mapStatusToDatabase(options.status);
|
||||
query = query.eq('status', dbStatus);
|
||||
}
|
||||
|
||||
// Apply subtask exclusion at database level if specified
|
||||
if (options?.excludeSubtasks) {
|
||||
// Only fetch parent tasks (where parent_task_id is null)
|
||||
query = query.is('parent_task_id', null);
|
||||
}
|
||||
|
||||
// Execute query with ordering
|
||||
const { data: tasks, error } = await query
|
||||
.order('position', { ascending: true })
|
||||
.order('subtask_position', { ascending: true })
|
||||
.order('created_at', { ascending: true });
|
||||
|
||||
if (error) {
|
||||
throw new Error(`Failed to fetch tasks: ${error.message}`);
|
||||
}
|
||||
|
||||
if (!tasks || tasks.length === 0) {
|
||||
return [];
|
||||
}
|
||||
|
||||
// Type-safe task ID extraction
|
||||
const typedTasks = tasks as TaskWithRelations[];
|
||||
const taskIds = typedTasks.map((t) => t.id);
|
||||
const dependenciesMap =
|
||||
await this.dependencyFetcher.fetchDependenciesWithDisplayIds(taskIds);
|
||||
|
||||
// Use mapper to convert to internal format
|
||||
return TaskMapper.mapDatabaseTasksToTasks(tasks, dependenciesMap);
|
||||
}
|
||||
|
||||
async getTask(_projectId: string, taskId: string): Promise<Task | null> {
|
||||
const briefId = this.getBriefIdOrThrow();
|
||||
|
||||
const { data, error } = await this.supabase
|
||||
.from('tasks')
|
||||
.select('*')
|
||||
.eq('brief_id', briefId)
|
||||
.eq('display_id', taskId.toUpperCase())
|
||||
.single();
|
||||
|
||||
if (error) {
|
||||
if (error.code === 'PGRST116') {
|
||||
return null; // Not found
|
||||
}
|
||||
throw new Error(`Failed to fetch task: ${error.message}`);
|
||||
}
|
||||
|
||||
// Get subtasks if this is a parent task
|
||||
const { data: subtasksData } = await this.supabase
|
||||
.from('tasks')
|
||||
.select('*')
|
||||
.eq('parent_task_id', data.id)
|
||||
.order('subtask_position', { ascending: true });
|
||||
|
||||
// Get all task IDs (parent + subtasks) to fetch dependencies
|
||||
const allTaskIds = [data.id, ...(subtasksData?.map((st) => st.id) || [])];
|
||||
|
||||
// Fetch dependencies using the dedicated fetcher
|
||||
const dependenciesByTaskId =
|
||||
await this.dependencyFetcher.fetchDependenciesWithDisplayIds(allTaskIds);
|
||||
|
||||
// Use mapper to convert single task
|
||||
return TaskMapper.mapDatabaseTaskToTask(
|
||||
data,
|
||||
subtasksData || [],
|
||||
dependenciesByTaskId
|
||||
);
|
||||
}
|
||||
|
||||
async updateTask(
|
||||
projectId: string,
|
||||
taskId: string,
|
||||
updates: Partial<Task>
|
||||
): Promise<Task> {
|
||||
const briefId = this.getBriefIdOrThrow();
|
||||
|
||||
// Validate updates using Zod schema
|
||||
try {
|
||||
TaskUpdateSchema.parse(updates);
|
||||
} catch (error) {
|
||||
if (error instanceof z.ZodError) {
|
||||
const errorMessages = error.issues
|
||||
.map((err) => `${err.path.join('.')}: ${err.message}`)
|
||||
.join(', ');
|
||||
throw new Error(`Invalid task update data: ${errorMessages}`);
|
||||
}
|
||||
throw error;
|
||||
}
|
||||
|
||||
// Convert Task fields to database fields with proper typing
|
||||
const dbUpdates: TaskDatabaseUpdate = {};
|
||||
|
||||
if (updates.title !== undefined) dbUpdates.title = updates.title;
|
||||
if (updates.description !== undefined)
|
||||
dbUpdates.description = updates.description;
|
||||
if (updates.status !== undefined)
|
||||
dbUpdates.status = this.mapStatusToDatabase(updates.status);
|
||||
if (updates.priority !== undefined)
|
||||
dbUpdates.priority = this.mapPriorityToDatabase(updates.priority);
|
||||
|
||||
// Handle metadata fields (details, testStrategy, etc.)
|
||||
// Load existing metadata to preserve fields not being updated
|
||||
const { data: existingMetadataRow, error: existingMetadataError } =
|
||||
await this.supabase
|
||||
.from('tasks')
|
||||
.select('metadata')
|
||||
.eq('brief_id', briefId)
|
||||
.eq('display_id', taskId.toUpperCase())
|
||||
.single();
|
||||
|
||||
if (existingMetadataError) {
|
||||
throw new Error(
|
||||
`Failed to load existing task metadata: ${existingMetadataError.message}`
|
||||
);
|
||||
}
|
||||
|
||||
const metadata: Record<string, unknown> = {
|
||||
...((existingMetadataRow?.metadata as Record<string, unknown>) ?? {})
|
||||
};
|
||||
|
||||
if (updates.details !== undefined) metadata.details = updates.details;
|
||||
if (updates.testStrategy !== undefined)
|
||||
metadata.testStrategy = updates.testStrategy;
|
||||
|
||||
if (Object.keys(metadata).length > 0) {
|
||||
dbUpdates.metadata = metadata as Json;
|
||||
}
|
||||
|
||||
// Update the task
|
||||
const { error } = await this.supabase
|
||||
.from('tasks')
|
||||
.update(dbUpdates)
|
||||
.eq('brief_id', briefId)
|
||||
.eq('display_id', taskId.toUpperCase());
|
||||
|
||||
if (error) {
|
||||
throw new Error(`Failed to update task: ${error.message}`);
|
||||
}
|
||||
|
||||
// Return the updated task by fetching it
|
||||
const updatedTask = await this.getTask(projectId, taskId);
|
||||
if (!updatedTask) {
|
||||
throw new Error(`Failed to retrieve updated task ${taskId}`);
|
||||
}
|
||||
|
||||
return updatedTask;
|
||||
}
|
||||
|
||||
/**
|
||||
* Maps internal status to database status
|
||||
*/
|
||||
private mapStatusToDatabase(
|
||||
status: string
|
||||
): Database['public']['Enums']['task_status'] {
|
||||
switch (status) {
|
||||
case 'pending':
|
||||
return 'todo';
|
||||
case 'in-progress':
|
||||
case 'in_progress': // Accept both formats
|
||||
return 'in_progress';
|
||||
case 'done':
|
||||
return 'done';
|
||||
default:
|
||||
throw new Error(
|
||||
`Invalid task status: ${status}. Valid statuses are: pending, in-progress, done`
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Maps internal priority to database priority
|
||||
* Task Master uses 'critical', database uses 'urgent'
|
||||
*/
|
||||
private mapPriorityToDatabase(
|
||||
priority: string
|
||||
): Database['public']['Enums']['task_priority'] {
|
||||
switch (priority) {
|
||||
case 'critical':
|
||||
return 'urgent';
|
||||
case 'low':
|
||||
case 'medium':
|
||||
case 'high':
|
||||
return priority as Database['public']['Enums']['task_priority'];
|
||||
default:
|
||||
throw new Error(
|
||||
`Invalid task priority: ${priority}. Valid priorities are: low, medium, high, critical`
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -1,8 +1,9 @@
import { Task, TaskTag } from '../types/index.js';
+import { LoadTasksOptions } from '../interfaces/storage.interface.js';

export interface TaskRepository {
	// Task operations
-	getTasks(projectId: string): Promise<Task[]>;
+	getTasks(projectId: string, options?: LoadTasksOptions): Promise<Task[]>;
	getTask(projectId: string, taskId: string): Promise<Task | null>;
	createTask(projectId: string, task: Omit<Task, 'id'>): Promise<Task>;
	updateTask(
496
packages/tm-core/src/services/export.service.ts
Normal file
@@ -0,0 +1,496 @@
|
||||
/**
|
||||
* @fileoverview Export Service
|
||||
* Core service for exporting tasks to external systems (e.g., Hamster briefs)
|
||||
*/
|
||||
|
||||
import type { Task, TaskStatus } from '../types/index.js';
|
||||
import type { UserContext } from '../auth/types.js';
|
||||
import { ConfigManager } from '../config/config-manager.js';
|
||||
import { AuthManager } from '../auth/auth-manager.js';
|
||||
import { ERROR_CODES, TaskMasterError } from '../errors/task-master-error.js';
|
||||
import { FileStorage } from '../storage/file-storage/index.js';
|
||||
|
||||
// Type definitions for the bulk API response
|
||||
interface TaskImportResult {
|
||||
externalId?: string;
|
||||
index: number;
|
||||
success: boolean;
|
||||
taskId?: string;
|
||||
error?: string;
|
||||
validationErrors?: string[];
|
||||
}
|
||||
|
||||
interface BulkTasksResponse {
|
||||
dryRun: boolean;
|
||||
totalTasks: number;
|
||||
successCount: number;
|
||||
failedCount: number;
|
||||
skippedCount: number;
|
||||
results: TaskImportResult[];
|
||||
summary: {
|
||||
message: string;
|
||||
duration: number;
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Options for exporting tasks
|
||||
*/
|
||||
export interface ExportTasksOptions {
|
||||
/** Optional tag to export tasks from (uses active tag if not provided) */
|
||||
tag?: string;
|
||||
/** Brief ID to export to */
|
||||
briefId?: string;
|
||||
/** Organization ID (required if briefId is provided) */
|
||||
orgId?: string;
|
||||
/** Filter by task status */
|
||||
status?: TaskStatus;
|
||||
/** Exclude subtasks from export (default: false, subtasks included by default) */
|
||||
excludeSubtasks?: boolean;
|
||||
}
|
||||
|
||||
/**
|
||||
* Result of the export operation
|
||||
*/
|
||||
export interface ExportResult {
|
||||
/** Whether the export was successful */
|
||||
success: boolean;
|
||||
/** Number of tasks exported */
|
||||
taskCount: number;
|
||||
/** The brief ID tasks were exported to */
|
||||
briefId: string;
|
||||
/** The organization ID */
|
||||
orgId: string;
|
||||
/** Optional message */
|
||||
message?: string;
|
||||
/** Error details if export failed */
|
||||
error?: {
|
||||
code: string;
|
||||
message: string;
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Brief information from API
|
||||
*/
|
||||
export interface Brief {
|
||||
id: string;
|
||||
accountId: string;
|
||||
createdAt: string;
|
||||
name?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* ExportService handles task export to external systems
|
||||
*/
|
||||
export class ExportService {
|
||||
private configManager: ConfigManager;
|
||||
private authManager: AuthManager;
|
||||
|
||||
constructor(configManager: ConfigManager, authManager: AuthManager) {
|
||||
this.configManager = configManager;
|
||||
this.authManager = authManager;
|
||||
}
|
||||
|
||||
/**
|
||||
* Export tasks to a brief
|
||||
*/
|
||||
async exportTasks(options: ExportTasksOptions): Promise<ExportResult> {
|
||||
// Validate authentication
|
||||
if (!this.authManager.isAuthenticated()) {
|
||||
throw new TaskMasterError(
|
||||
'Authentication required for export',
|
||||
ERROR_CODES.AUTHENTICATION_ERROR
|
||||
);
|
||||
}
|
||||
|
||||
// Get current context
|
||||
const context = this.authManager.getContext();
|
||||
|
||||
// Determine org and brief IDs
|
||||
let orgId = options.orgId || context?.orgId;
|
||||
let briefId = options.briefId || context?.briefId;
|
||||
|
||||
// Validate we have necessary IDs
|
||||
if (!orgId) {
|
||||
throw new TaskMasterError(
|
||||
'Organization ID is required for export. Use "tm context org" to select one.',
|
||||
ERROR_CODES.MISSING_CONFIGURATION
|
||||
);
|
||||
}
|
||||
|
||||
if (!briefId) {
|
||||
throw new TaskMasterError(
|
||||
'Brief ID is required for export. Use "tm context brief" or provide --brief flag.',
|
||||
ERROR_CODES.MISSING_CONFIGURATION
|
||||
);
|
||||
}
|
||||
|
||||
// Get tasks from the specified or active tag
|
||||
const activeTag = this.configManager.getActiveTag();
|
||||
const tag = options.tag || activeTag;
|
||||
|
||||
// Always read tasks from local file storage for export
|
||||
// (we're exporting local tasks to a remote brief)
|
||||
const fileStorage = new FileStorage(this.configManager.getProjectRoot());
|
||||
await fileStorage.initialize();
|
||||
|
||||
// Load tasks with filters applied at storage layer
|
||||
const filteredTasks = await fileStorage.loadTasks(tag, {
|
||||
status: options.status,
|
||||
excludeSubtasks: options.excludeSubtasks
|
||||
});
|
||||
|
||||
// Get total count (without filters) for comparison
|
||||
const allTasks = await fileStorage.loadTasks(tag);
|
||||
|
||||
const taskListResult = {
|
||||
tasks: filteredTasks,
|
||||
total: allTasks.length,
|
||||
filtered: filteredTasks.length,
|
||||
tag,
|
||||
storageType: 'file' as const
|
||||
};
|
||||
|
||||
if (taskListResult.tasks.length === 0) {
|
||||
return {
|
||||
success: false,
|
||||
taskCount: 0,
|
||||
briefId,
|
||||
orgId,
|
||||
message: 'No tasks found to export',
|
||||
error: {
|
||||
code: 'NO_TASKS',
|
||||
message: 'No tasks match the specified criteria'
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
try {
|
||||
// Call the export API with the original tasks
|
||||
// performExport will handle the transformation based on the method used
|
||||
await this.performExport(orgId, briefId, taskListResult.tasks);
|
||||
|
||||
return {
|
||||
success: true,
|
||||
taskCount: taskListResult.tasks.length,
|
||||
briefId,
|
||||
orgId,
|
||||
message: `Successfully exported ${taskListResult.tasks.length} task(s) to brief`
|
||||
};
|
||||
} catch (error) {
|
||||
const errorMessage =
|
||||
error instanceof Error ? error.message : String(error);
|
||||
|
||||
return {
|
||||
success: false,
|
||||
taskCount: 0,
|
||||
briefId,
|
||||
orgId,
|
||||
error: {
|
||||
code: 'EXPORT_FAILED',
|
||||
message: errorMessage
|
||||
}
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Export tasks from a brief ID or URL
|
||||
*/
|
||||
async exportFromBriefInput(briefInput: string): Promise<ExportResult> {
|
||||
// Extract brief ID from input
|
||||
const briefId = this.extractBriefId(briefInput);
|
||||
if (!briefId) {
|
||||
throw new TaskMasterError(
|
||||
'Invalid brief ID or URL provided',
|
||||
ERROR_CODES.VALIDATION_ERROR
|
||||
);
|
||||
}
|
||||
|
||||
// Fetch brief to get organization
|
||||
const brief = await this.authManager.getBrief(briefId);
|
||||
if (!brief) {
|
||||
throw new TaskMasterError(
|
||||
'Brief not found or you do not have access',
|
||||
ERROR_CODES.NOT_FOUND
|
||||
);
|
||||
}
|
||||
|
||||
// Export with the resolved org and brief
|
||||
return this.exportTasks({
|
||||
orgId: brief.accountId,
|
||||
briefId: brief.id
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Validate export context before prompting
|
||||
*/
|
||||
async validateContext(): Promise<{
|
||||
hasOrg: boolean;
|
||||
hasBrief: boolean;
|
||||
context: UserContext | null;
|
||||
}> {
|
||||
const context = this.authManager.getContext();
|
||||
|
||||
return {
|
||||
hasOrg: !!context?.orgId,
|
||||
hasBrief: !!context?.briefId,
|
||||
context
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Transform tasks for API bulk import format (flat structure)
|
||||
*/
|
||||
private transformTasksForBulkImport(tasks: Task[]): any[] {
|
||||
const flatTasks: any[] = [];
|
||||
|
||||
// Process each task and its subtasks
|
||||
tasks.forEach((task) => {
|
||||
// Add parent task
|
||||
flatTasks.push({
|
||||
externalId: String(task.id),
|
||||
title: task.title,
|
||||
description: this.enrichDescription(task),
|
||||
status: this.mapStatusForAPI(task.status),
|
||||
priority: task.priority || 'medium',
|
||||
dependencies: task.dependencies?.map(String) || [],
|
||||
details: task.details,
|
||||
testStrategy: task.testStrategy,
|
||||
complexity: task.complexity,
|
||||
metadata: {
|
||||
complexity: task.complexity,
|
||||
originalId: task.id,
|
||||
originalDescription: task.description,
|
||||
originalDetails: task.details,
|
||||
originalTestStrategy: task.testStrategy
|
||||
}
|
||||
});
|
||||
|
||||
// Add subtasks if they exist
|
||||
if (task.subtasks && task.subtasks.length > 0) {
|
||||
task.subtasks.forEach((subtask) => {
|
||||
flatTasks.push({
|
||||
externalId: `${task.id}.${subtask.id}`,
|
||||
parentExternalId: String(task.id),
|
||||
title: subtask.title,
|
||||
description: this.enrichDescription(subtask),
|
||||
status: this.mapStatusForAPI(subtask.status),
|
||||
priority: subtask.priority || 'medium',
|
||||
dependencies:
|
||||
subtask.dependencies?.map((dep) => {
|
||||
// Convert subtask dependencies to full ID format
|
||||
if (String(dep).includes('.')) {
|
||||
return String(dep);
|
||||
}
|
||||
return `${task.id}.${dep}`;
|
||||
}) || [],
|
||||
details: subtask.details,
|
||||
testStrategy: subtask.testStrategy,
|
||||
complexity: subtask.complexity,
|
||||
metadata: {
|
||||
complexity: subtask.complexity,
|
||||
originalId: subtask.id,
|
||||
originalDescription: subtask.description,
|
||||
originalDetails: subtask.details,
|
||||
originalTestStrategy: subtask.testStrategy
|
||||
}
|
||||
});
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
return flatTasks;
|
||||
}
|
||||
|
||||
/**
|
||||
* Enrich task/subtask description with implementation details and test strategy
|
||||
* Creates a comprehensive markdown-formatted description
|
||||
*/
|
||||
private enrichDescription(taskOrSubtask: Task | any): string {
|
||||
const sections: string[] = [];
|
||||
|
||||
// Start with original description if it exists
|
||||
if (taskOrSubtask.description) {
|
||||
sections.push(taskOrSubtask.description);
|
||||
}
|
||||
|
||||
// Add implementation details section
|
||||
if (taskOrSubtask.details) {
|
||||
sections.push('## Implementation Details\n');
|
||||
sections.push(taskOrSubtask.details);
|
||||
}
|
||||
|
||||
// Add test strategy section
|
||||
if (taskOrSubtask.testStrategy) {
|
||||
sections.push('## Test Strategy\n');
|
||||
sections.push(taskOrSubtask.testStrategy);
|
||||
}
|
||||
|
||||
// Join sections with double newlines for better markdown formatting
|
||||
return sections.join('\n\n').trim() || 'No description provided';
|
||||
}
|
||||
|
||||
/**
|
||||
* Map internal status to API status format
|
||||
*/
|
||||
private mapStatusForAPI(status?: string): string {
|
||||
switch (status) {
|
||||
case 'pending':
|
||||
return 'todo';
|
||||
case 'in-progress':
|
||||
return 'in_progress';
|
||||
case 'done':
|
||||
return 'done';
|
||||
default:
|
||||
return 'todo';
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Perform the actual export API call
|
||||
*/
|
||||
private async performExport(
|
||||
orgId: string,
|
||||
briefId: string,
|
||||
tasks: any[]
|
||||
): Promise<void> {
|
||||
// Check if we should use the API endpoint or direct Supabase
|
||||
const useAPIEndpoint = process.env.TM_PUBLIC_BASE_DOMAIN;
|
||||
|
||||
if (useAPIEndpoint) {
|
||||
// Use the new bulk import API endpoint
|
||||
const apiUrl = `${process.env.TM_PUBLIC_BASE_DOMAIN}/ai/api/v1/briefs/${briefId}/tasks/bulk`;
|
||||
|
||||
// Transform tasks to flat structure for API
|
||||
const flatTasks = this.transformTasksForBulkImport(tasks);
|
||||
|
||||
// Prepare request body
|
||||
const requestBody = {
|
||||
source: 'task-master-cli',
|
||||
accountId: orgId,
|
||||
options: {
|
||||
dryRun: false,
|
||||
stopOnError: false
|
||||
},
|
||||
tasks: flatTasks
|
||||
};
|
||||
|
||||
// Get auth token
|
||||
const credentials = this.authManager.getCredentials();
|
||||
if (!credentials || !credentials.token) {
|
||||
throw new Error('Not authenticated');
|
||||
}
|
||||
|
||||
// Make API request
|
||||
const response = await fetch(apiUrl, {
|
||||
method: 'POST',
|
||||
headers: {
|
||||
'Content-Type': 'application/json',
|
||||
Authorization: `Bearer ${credentials.token}`
|
||||
},
|
||||
body: JSON.stringify(requestBody)
|
||||
});
|
||||
|
||||
if (!response.ok) {
|
||||
const errorText = await response.text();
|
||||
throw new Error(
|
||||
`API request failed: ${response.status} - ${errorText}`
|
||||
);
|
||||
}
|
||||
|
||||
const result = (await response.json()) as BulkTasksResponse;
|
||||
|
||||
if (result.failedCount > 0) {
|
||||
const failedTasks = result.results
|
||||
.filter((r) => !r.success)
|
||||
.map((r) => `${r.externalId}: ${r.error}`)
|
||||
.join(', ');
|
||||
console.warn(
|
||||
`Warning: ${result.failedCount} tasks failed to import: ${failedTasks}`
|
||||
);
|
||||
}
|
||||
|
||||
console.log(
|
||||
`Successfully exported ${result.successCount} of ${result.totalTasks} tasks to brief ${briefId}`
|
||||
);
|
||||
} else {
|
||||
// Direct Supabase approach is no longer supported
|
||||
// The extractTasks method has been removed from SupabaseTaskRepository
|
||||
// as we now exclusively use the API endpoint for exports
|
||||
throw new Error(
|
||||
'Export API endpoint not configured. Please set TM_PUBLIC_BASE_DOMAIN environment variable to enable task export.'
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract a brief ID from raw input (ID or URL)
|
||||
*/
|
||||
private extractBriefId(input: string): string | null {
|
||||
const raw = input?.trim() ?? '';
|
||||
if (!raw) return null;
|
||||
|
||||
const parseUrl = (s: string): URL | null => {
|
||||
try {
|
||||
return new URL(s);
|
||||
} catch {}
|
||||
try {
|
||||
return new URL(`https://${s}`);
|
||||
} catch {}
|
||||
return null;
|
||||
};
|
||||
|
||||
const fromParts = (path: string): string | null => {
|
||||
const parts = path.split('/').filter(Boolean);
|
||||
const briefsIdx = parts.lastIndexOf('briefs');
|
||||
const candidate =
|
||||
briefsIdx >= 0 && parts.length > briefsIdx + 1
|
||||
? parts[briefsIdx + 1]
|
||||
: parts[parts.length - 1];
|
||||
return candidate?.trim() || null;
|
||||
};
|
||||
|
||||
// Try to parse as URL
|
||||
const url = parseUrl(raw);
|
||||
if (url) {
|
||||
const qId = url.searchParams.get('id') || url.searchParams.get('briefId');
|
||||
const candidate = (qId || fromParts(url.pathname)) ?? null;
|
||||
if (candidate) {
|
||||
if (this.isLikelyId(candidate) || candidate.length >= 8) {
|
||||
return candidate;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Check if it looks like a path without scheme
|
||||
if (raw.includes('/')) {
|
||||
const candidate = fromParts(raw);
|
||||
if (candidate && (this.isLikelyId(candidate) || candidate.length >= 8)) {
|
||||
return candidate;
|
||||
}
|
||||
}
|
||||
|
||||
// Return as-is if it looks like an ID
|
||||
if (this.isLikelyId(raw) || raw.length >= 8) {
|
||||
return raw;
|
||||
}
|
||||
|
||||
return null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if a string looks like a brief ID (UUID-like)
|
||||
*/
|
||||
private isLikelyId(value: string): boolean {
|
||||
const uuidRegex =
|
||||
/^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$/;
|
||||
const ulidRegex = /^[0-9A-HJKMNP-TV-Z]{26}$/i;
|
||||
const slugRegex = /^[A-Za-z0-9_-]{16,}$/;
|
||||
return (
|
||||
uuidRegex.test(value) || ulidRegex.test(value) || slugRegex.test(value)
|
||||
);
|
||||
}
|
||||
}
|
||||
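A rough usage sketch for the new service (assumes an already-constructed ExportService and an authenticated session; the brief URL is invented): exportFromBriefInput accepts either a bare brief ID or a full URL, resolves the owning organization via AuthManager.getBrief, and delegates to exportTasks.

import type { ExportService } from './export.service.js';

declare const exportService: ExportService; // wired up elsewhere (see TaskMasterCore below)

const result = await exportService.exportFromBriefInput(
	// URL form; a raw ID or a ULID such as '01J9ZK3V5W8XQ2RT6YB7N4DMEF' also works
	'https://app.example.com/briefs/01J9ZK3V5W8XQ2RT6YB7N4DMEF'
);

if (result.success) {
	console.log(`Exported ${result.taskCount} task(s) to brief ${result.briefId}`);
} else {
	console.error(`Export failed (${result.error?.code}): ${result.error?.message}`);
}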
@@ -5,4 +5,9 @@

export { TaskService } from './task-service.js';
export { OrganizationService } from './organization.service.js';
+export { ExportService } from './export.service.js';
export type { Organization, Brief } from './organization.service.js';
+export type {
+	ExportTasksOptions,
+	ExportResult
+} from './export.service.js';
@@ -14,6 +14,7 @@ import { ConfigManager } from '../config/config-manager.js';
|
||||
import { StorageFactory } from '../storage/storage-factory.js';
|
||||
import { TaskEntity } from '../entities/task.entity.js';
|
||||
import { ERROR_CODES, TaskMasterError } from '../errors/task-master-error.js';
|
||||
import { getLogger } from '../logger/factory.js';
|
||||
|
||||
/**
|
||||
* Result returned by getTaskList
|
||||
@@ -51,6 +52,7 @@ export class TaskService {
|
||||
private configManager: ConfigManager;
|
||||
private storage: IStorage;
|
||||
private initialized = false;
|
||||
private logger = getLogger('TaskService');
|
||||
|
||||
constructor(configManager: ConfigManager) {
|
||||
this.configManager = configManager;
|
||||
@@ -90,37 +92,76 @@ export class TaskService {
|
||||
const tag = options.tag || activeTag;
|
||||
|
||||
try {
|
||||
// Load raw tasks from storage - storage only knows about tags
|
||||
const rawTasks = await this.storage.loadTasks(tag);
|
||||
// Determine if we can push filters to storage layer
|
||||
const canPushStatusFilter =
|
||||
options.filter?.status &&
|
||||
!options.filter.priority &&
|
||||
!options.filter.tags &&
|
||||
!options.filter.assignee &&
|
||||
!options.filter.search &&
|
||||
options.filter.hasSubtasks === undefined;
|
||||
|
||||
// Build storage-level options
|
||||
const storageOptions: any = {};
|
||||
|
||||
// Push status filter to storage if it's the only filter
|
||||
if (canPushStatusFilter) {
|
||||
const statuses = Array.isArray(options.filter!.status)
|
||||
? options.filter!.status
|
||||
: [options.filter!.status];
|
||||
// Only push single status to storage (multiple statuses need in-memory filtering)
|
||||
if (statuses.length === 1) {
|
||||
storageOptions.status = statuses[0];
|
||||
}
|
||||
}
|
||||
|
||||
// Push subtask exclusion to storage
|
||||
if (options.includeSubtasks === false) {
|
||||
storageOptions.excludeSubtasks = true;
|
||||
}
|
||||
|
||||
// Load tasks from storage with pushed-down filters
|
||||
const rawTasks = await this.storage.loadTasks(tag, storageOptions);
|
||||
|
||||
// Get total count without status filters, but preserve subtask exclusion
|
||||
const baseOptions: any = {};
|
||||
if (options.includeSubtasks === false) {
|
||||
baseOptions.excludeSubtasks = true;
|
||||
}
|
||||
|
||||
const allTasks =
|
||||
storageOptions.status !== undefined
|
||||
? await this.storage.loadTasks(tag, baseOptions)
|
||||
: rawTasks;
|
||||
|
||||
// Convert to TaskEntity for business logic operations
|
||||
const taskEntities = TaskEntity.fromArray(rawTasks);
|
||||
|
||||
// Apply filters if provided
|
||||
// Apply remaining filters in-memory if needed
|
||||
let filteredEntities = taskEntities;
|
||||
if (options.filter) {
|
||||
if (options.filter && !canPushStatusFilter) {
|
||||
filteredEntities = this.applyFilters(taskEntities, options.filter);
|
||||
} else if (
|
||||
options.filter?.status &&
|
||||
Array.isArray(options.filter.status) &&
|
||||
options.filter.status.length > 1
|
||||
) {
|
||||
// Multiple statuses - filter in-memory
|
||||
filteredEntities = this.applyFilters(taskEntities, options.filter);
|
||||
}
|
||||
|
||||
// Convert back to plain objects
|
||||
let tasks = filteredEntities.map((entity) => entity.toJSON());
|
||||
|
||||
// Handle subtasks option
|
||||
if (options.includeSubtasks === false) {
|
||||
tasks = tasks.map((task) => ({
|
||||
...task,
|
||||
subtasks: []
|
||||
}));
|
||||
}
|
||||
const tasks = filteredEntities.map((entity) => entity.toJSON());
|
||||
|
||||
return {
|
||||
tasks,
|
||||
total: rawTasks.length,
|
||||
total: allTasks.length,
|
||||
filtered: filteredEntities.length,
|
||||
tag: tag, // Return the actual tag being used (either explicitly provided or active tag)
|
||||
storageType: this.getStorageType()
|
||||
};
|
||||
} catch (error) {
|
||||
this.logger.error('Failed to get task list', error);
|
||||
throw new TaskMasterError(
|
||||
'Failed to get task list',
|
||||
ERROR_CODES.INTERNAL_ERROR,
|
||||
|
||||
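To make the filter push-down above concrete, here is a sketch of the two paths (getTaskList and its option names are taken from this hunk; the service instance is assumed): a single status with no other filters is handed to the storage layer, while anything else falls back to applyFilters in memory.

import type { TaskService } from './task-service.js';

declare const taskService: TaskService; // assumed initialized with a ConfigManager

// Single status, no other filters: pushed down as { status: 'pending' },
// so storage only loads matching tasks.
const pending = await taskService.getTaskList({
	filter: { status: 'pending' },
	includeSubtasks: false // also pushed down as excludeSubtasks: true
});

// Multiple statuses (or extra filters) cannot be pushed down; all tasks for
// the tag are loaded and then filtered in memory.
const active = await taskService.getTaskList({
	filter: { status: ['pending', 'in-progress'] }
});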
@@ -6,7 +6,8 @@
|
||||
import type {
|
||||
IStorage,
|
||||
StorageStats,
|
||||
UpdateStatusResult
|
||||
UpdateStatusResult,
|
||||
LoadTasksOptions
|
||||
} from '../interfaces/storage.interface.js';
|
||||
import type {
|
||||
Task,
|
||||
@@ -16,7 +17,7 @@ import type {
|
||||
} from '../types/index.js';
|
||||
import { ERROR_CODES, TaskMasterError } from '../errors/task-master-error.js';
|
||||
import { TaskRepository } from '../repositories/task-repository.interface.js';
|
||||
import { SupabaseTaskRepository } from '../repositories/supabase-task-repository.js';
|
||||
import { SupabaseTaskRepository } from '../repositories/supabase/index.js';
|
||||
import { SupabaseClient } from '@supabase/supabase-js';
|
||||
import { AuthManager } from '../auth/auth-manager.js';
|
||||
|
||||
@@ -146,7 +147,7 @@ export class ApiStorage implements IStorage {
|
||||
* Load tasks from API
|
||||
* In our system, the tag parameter represents a brief ID
|
||||
*/
|
||||
async loadTasks(tag?: string): Promise<Task[]> {
|
||||
async loadTasks(tag?: string, options?: LoadTasksOptions): Promise<Task[]> {
|
||||
await this.ensureInitialized();
|
||||
|
||||
try {
|
||||
@@ -160,9 +161,9 @@ export class ApiStorage implements IStorage {
|
||||
);
|
||||
}
|
||||
|
||||
// Load tasks from the current brief context
|
||||
// Load tasks from the current brief context with filters pushed to repository
|
||||
const tasks = await this.retryOperation(() =>
|
||||
this.repository.getTasks(this.projectId)
|
||||
this.repository.getTasks(this.projectId, options)
|
||||
);
|
||||
|
||||
// Update the tag cache with the loaded task IDs
|
||||
|
||||
@@ -6,7 +6,8 @@ import type { Task, TaskMetadata, TaskStatus } from '../../types/index.js';
|
||||
import type {
|
||||
IStorage,
|
||||
StorageStats,
|
||||
UpdateStatusResult
|
||||
UpdateStatusResult,
|
||||
LoadTasksOptions
|
||||
} from '../../interfaces/storage.interface.js';
|
||||
import { FormatHandler } from './format-handler.js';
|
||||
import { FileOperations } from './file-operations.js';
|
||||
@@ -92,15 +93,30 @@ export class FileStorage implements IStorage {
|
||||
* Load tasks from the single tasks.json file for a specific tag
|
||||
* Enriches tasks with complexity data from the complexity report
|
||||
*/
|
||||
async loadTasks(tag?: string): Promise<Task[]> {
|
||||
async loadTasks(tag?: string, options?: LoadTasksOptions): Promise<Task[]> {
|
||||
const filePath = this.pathResolver.getTasksPath();
|
||||
const resolvedTag = tag || 'master';
|
||||
|
||||
try {
|
||||
const rawData = await this.fileOps.readJson(filePath);
|
||||
const tasks = this.formatHandler.extractTasks(rawData, resolvedTag);
|
||||
let tasks = this.formatHandler.extractTasks(rawData, resolvedTag);
|
||||
|
||||
// Apply filters if provided
|
||||
if (options) {
|
||||
// Filter by status if specified
|
||||
if (options.status) {
|
||||
tasks = tasks.filter((task) => task.status === options.status);
|
||||
}
|
||||
|
||||
// Exclude subtasks if specified
|
||||
if (options.excludeSubtasks) {
|
||||
tasks = tasks.map((task) => ({
|
||||
...task,
|
||||
subtasks: []
|
||||
}));
|
||||
}
|
||||
}
|
||||
|
||||
// Enrich tasks with complexity data
|
||||
return await this.enrichTasksWithComplexity(tasks, resolvedTag);
|
||||
} catch (error: any) {
|
||||
if (error.code === 'ENOENT') {
|
||||
|
||||
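A small sketch of the same options at the file-storage level (the project root path is a placeholder; the import path follows the one used in export.service.ts): the status filter is applied before complexity enrichment, and excludeSubtasks empties each task's subtasks array rather than dropping parent tasks.

import { FileStorage } from '../storage/file-storage/index.js';

const storage = new FileStorage('/path/to/project'); // placeholder project root
await storage.initialize();

// Only 'done' tasks from the 'master' tag, with subtasks stripped to [].
const doneTasks = await storage.loadTasks('master', {
	status: 'done',
	excludeSubtasks: true
});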
@@ -14,7 +14,14 @@ import {
|
||||
type StartTaskResult,
|
||||
type ConflictCheckResult
|
||||
} from './services/task-execution-service.js';
|
||||
import {
|
||||
ExportService,
|
||||
type ExportTasksOptions,
|
||||
type ExportResult
|
||||
} from './services/export.service.js';
|
||||
import { AuthManager } from './auth/auth-manager.js';
|
||||
import { ERROR_CODES, TaskMasterError } from './errors/task-master-error.js';
|
||||
import type { UserContext } from './auth/types.js';
|
||||
import type { IConfiguration } from './interfaces/configuration.interface.js';
|
||||
import type {
|
||||
Task,
|
||||
@@ -47,6 +54,10 @@ export type {
|
||||
StartTaskResult,
|
||||
ConflictCheckResult
|
||||
} from './services/task-execution-service.js';
|
||||
export type {
|
||||
ExportTasksOptions,
|
||||
ExportResult
|
||||
} from './services/export.service.js';
|
||||
|
||||
/**
|
||||
* TaskMasterCore facade class
|
||||
@@ -56,6 +67,7 @@ export class TaskMasterCore {
|
||||
private configManager: ConfigManager;
|
||||
private taskService: TaskService;
|
||||
private taskExecutionService: TaskExecutionService;
|
||||
private exportService: ExportService;
|
||||
private executorService: ExecutorService | null = null;
|
||||
|
||||
/**
|
||||
@@ -80,6 +92,7 @@ export class TaskMasterCore {
|
||||
this.configManager = null as any;
|
||||
this.taskService = null as any;
|
||||
this.taskExecutionService = null as any;
|
||||
this.exportService = null as any;
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -109,6 +122,10 @@ export class TaskMasterCore {
|
||||
|
||||
// Create task execution service
|
||||
this.taskExecutionService = new TaskExecutionService(this.taskService);
|
||||
|
||||
// Create export service
|
||||
const authManager = AuthManager.getInstance();
|
||||
this.exportService = new ExportService(this.configManager, authManager);
|
||||
} catch (error) {
|
||||
throw new TaskMasterError(
|
||||
'Failed to initialize TaskMasterCore',
|
||||
@@ -242,6 +259,33 @@ export class TaskMasterCore {
|
||||
return this.taskExecutionService.getNextAvailableTask();
|
||||
}
|
||||
|
||||
// ==================== Export Service Methods ====================
|
||||
|
||||
/**
|
||||
* Export tasks to an external system (e.g., Hamster brief)
|
||||
*/
|
||||
async exportTasks(options: ExportTasksOptions): Promise<ExportResult> {
|
||||
return this.exportService.exportTasks(options);
|
||||
}
|
||||
|
||||
/**
|
||||
* Export tasks from a brief ID or URL
|
||||
*/
|
||||
async exportFromBriefInput(briefInput: string): Promise<ExportResult> {
|
||||
return this.exportService.exportFromBriefInput(briefInput);
|
||||
}
|
||||
|
||||
/**
|
||||
* Validate export context before prompting
|
||||
*/
|
||||
async validateExportContext(): Promise<{
|
||||
hasOrg: boolean;
|
||||
hasBrief: boolean;
|
||||
context: UserContext | null;
|
||||
}> {
|
||||
return this.exportService.validateContext();
|
||||
}
|
||||
|
||||
// ==================== Executor Service Methods ====================
|
||||
|
||||
/**
|
||||
|
||||
@@ -82,10 +82,11 @@
}

/**
- * Subtask interface extending Task with numeric ID
+ * Subtask interface extending Task
+ * ID can be number (file storage) or string (API storage with display_id)
 */
export interface Subtask extends Omit<Task, 'id' | 'subtasks'> {
-	id: number;
+	id: number | string;
	parentId: string;
	subtasks?: never; // Subtasks cannot have their own subtasks
}
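In practice the widened id type means the same Subtask shape covers both backends; a minimal illustration (values and the display_id format are invented):

import type { Subtask } from '../types/index.js';

// File storage keeps numeric, positional subtask IDs (1, 2, 3 ... within the parent)...
const fileSubtaskId: Subtask['id'] = 2;

// ...while API storage round-trips display_id strings from the database.
const apiSubtaskId: Subtask['id'] = 'HAM-42.2';

// parentId stays a string in both cases.
const parent: Subtask['parentId'] = '42';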
83
packages/tm-core/src/types/repository-types.ts
Normal file
@@ -0,0 +1,83 @@
/**
 * Type definitions for repository operations
 */
import { Database, Tables } from './database.types.js';

/**
 * Task row from database with optional joined relations
 */
export interface TaskWithRelations extends Tables<'tasks'> {
	document?: {
		id: string;
		document_name: string;
		title: string;
		description: string | null;
	} | null;
}

/**
 * Dependency row with joined display_id
 */
export interface DependencyWithDisplayId {
	task_id: string;
	depends_on_task: {
		display_id: string;
	} | null;
}

/**
 * Task metadata structure
 */
export interface TaskMetadata {
	details?: string;
	testStrategy?: string;
	[key: string]: unknown; // Allow additional fields but be explicit
}

/**
 * Database update payload for tasks
 */
export type TaskDatabaseUpdate =
	Database['public']['Tables']['tasks']['Update'];
/**
 * Configuration for task queries
 */
export interface TaskQueryConfig {
	briefId: string;
	includeSubtasks?: boolean;
	includeDependencies?: boolean;
	includeDocument?: boolean;
}

/**
 * Result of a task fetch operation
 */
export interface TaskFetchResult {
	task: Tables<'tasks'>;
	subtasks: Tables<'tasks'>[];
	dependencies: Map<string, string[]>;
}

/**
 * Task validation errors
 */
export class TaskValidationError extends Error {
	constructor(
		message: string,
		public readonly field: string,
		public readonly value: unknown
	) {
		super(message);
		this.name = 'TaskValidationError';
	}
}

/**
 * Context validation errors
 */
export class ContextValidationError extends Error {
	constructor(message: string) {
		super(message);
		this.name = 'ContextValidationError';
	}
}
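These types are consumed by the Supabase repository shown earlier; a brief sketch of the two most common uses (field values invented):

import { TaskValidationError } from './repository-types.js';
import type { TaskDatabaseUpdate } from './repository-types.js';

// Building a typed update payload against the generated Update type:
const dbUpdates: TaskDatabaseUpdate = {};
dbUpdates.title = 'Rename this task';
dbUpdates.status = 'in_progress'; // database enum value, not the CLI's 'in-progress'

// Raising a field-level validation error:
if (!dbUpdates.title?.trim()) {
	throw new TaskValidationError('title must not be empty', 'title', dbUpdates.title);
}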
@@ -628,6 +628,12 @@ function createProjectStructure(
	// Copy example_prd.txt to NEW location
	copyTemplateFile('example_prd.txt', path.join(targetDir, EXAMPLE_PRD_FILE));

+	// Copy example_prd_rpg.txt to templates directory
+	copyTemplateFile(
+		'example_prd_rpg.txt',
+		path.join(targetDir, TASKMASTER_TEMPLATES_DIR, 'example_prd_rpg.txt')
+	);
+
	// Initialize git repository if git is available
	try {
		if (initGit === false) {
@@ -856,10 +862,10 @@ function createProjectStructure(
|
||||
)}\n${chalk.white(' ├─ ')}${chalk.dim('Models: Use `task-master models` commands')}\n${chalk.white(' └─ ')}${chalk.dim(
|
||||
'Keys: Add provider API keys to .env (or inside the MCP config file i.e. .cursor/mcp.json)'
|
||||
)}\n${chalk.white('2. ')}${chalk.yellow(
|
||||
'Discuss your idea with AI and ask for a PRD using example_prd.txt, and save it to scripts/PRD.txt'
|
||||
)}\n${chalk.white('3. ')}${chalk.yellow(
|
||||
'Discuss your idea with AI and ask for a PRD, and save it to .taskmaster/docs/prd.txt'
|
||||
)}\n${chalk.white(' ├─ ')}${chalk.dim('Simple projects: Use ')}${chalk.cyan('example_prd.txt')}${chalk.dim(' template')}\n${chalk.white(' └─ ')}${chalk.dim('Complex systems: Use ')}${chalk.cyan('example_prd_rpg.txt')}${chalk.dim(' template (for dependency-aware task graphs)')}\n${chalk.white('3. ')}${chalk.yellow(
|
||||
'Ask Cursor Agent (or run CLI) to parse your PRD and generate initial tasks:'
|
||||
)}\n${chalk.white(' └─ ')}${chalk.dim('MCP Tool: ')}${chalk.cyan('parse_prd')}${chalk.dim(' | CLI: ')}${chalk.cyan('task-master parse-prd scripts/prd.txt')}\n${chalk.white('4. ')}${chalk.yellow(
|
||||
)}\n${chalk.white(' └─ ')}${chalk.dim('MCP Tool: ')}${chalk.cyan('parse_prd')}${chalk.dim(' | CLI: ')}${chalk.cyan('task-master parse-prd .taskmaster/docs/prd.txt')}\n${chalk.white('4. ')}${chalk.yellow(
|
||||
'Ask Cursor to analyze the complexity of the tasks in your PRD using research'
|
||||
)}\n${chalk.white(' └─ ')}${chalk.dim('MCP Tool: ')}${chalk.cyan('analyze_project_complexity')}${chalk.dim(' | CLI: ')}${chalk.cyan('task-master analyze-complexity')}\n${chalk.white('5. ')}${chalk.yellow(
|
||||
'Ask Cursor to expand all of your tasks using the complexity analysis'
|
||||
|
||||
@@ -12,17 +12,11 @@ import https from 'https';
|
||||
import http from 'http';
|
||||
import inquirer from 'inquirer';
|
||||
import search from '@inquirer/search';
|
||||
import ora from 'ora'; // Import ora
|
||||
|
||||
import { log, readJSON } from './utils.js';
|
||||
// Import new commands from @tm/cli
|
||||
// Import command registry and utilities from @tm/cli
|
||||
import {
|
||||
ListTasksCommand,
|
||||
ShowCommand,
|
||||
AuthCommand,
|
||||
ContextCommand,
|
||||
StartCommand,
|
||||
SetStatusCommand,
|
||||
registerAllCommands,
|
||||
checkForUpdate,
|
||||
performAutoUpdate,
|
||||
displayUpgradeNotification
|
||||
@@ -32,7 +26,6 @@ import {
|
||||
parsePRD,
|
||||
updateTasks,
|
||||
generateTaskFiles,
|
||||
listTasks,
|
||||
expandTask,
|
||||
expandAllTasks,
|
||||
clearSubtasks,
|
||||
@@ -53,11 +46,7 @@ import {
|
||||
validateStrength
|
||||
} from './task-manager.js';
|
||||
|
||||
import {
|
||||
moveTasksBetweenTags,
|
||||
MoveTaskError,
|
||||
MOVE_ERROR_CODES
|
||||
} from './task-manager/move-task.js';
|
||||
import { moveTasksBetweenTags } from './task-manager/move-task.js';
|
||||
|
||||
import {
|
||||
createTag,
|
||||
@@ -72,9 +61,7 @@ import {
|
||||
addDependency,
|
||||
removeDependency,
|
||||
validateDependenciesCommand,
|
||||
fixDependenciesCommand,
|
||||
DependencyError,
|
||||
DEPENDENCY_ERROR_CODES
|
||||
fixDependenciesCommand
|
||||
} from './dependency-manager.js';
|
||||
|
||||
import {
|
||||
@@ -103,7 +90,6 @@ import {
|
||||
displayBanner,
|
||||
displayHelp,
|
||||
displayNextTask,
|
||||
displayTaskById,
|
||||
displayComplexityReport,
|
||||
getStatusWithColor,
|
||||
confirmTaskOverwrite,
|
||||
@@ -112,8 +98,6 @@ import {
|
||||
displayModelConfiguration,
|
||||
displayAvailableModels,
|
||||
displayApiKeyStatus,
|
||||
displayAiUsageSummary,
|
||||
displayMultipleTasksSummary,
|
||||
displayTaggedTasksFYI,
|
||||
displayCurrentTagIndicator,
|
||||
displayCrossTagDependencyError,
|
||||
@@ -137,10 +121,6 @@ import {
|
||||
setModel,
|
||||
getApiKeyStatusReport
|
||||
} from './task-manager/models.js';
|
||||
import {
|
||||
isValidTaskStatus,
|
||||
TASK_STATUS_OPTIONS
|
||||
} from '../../src/constants/task-status.js';
|
||||
import {
|
||||
isValidRulesAction,
|
||||
RULES_ACTIONS,
|
||||
@@ -1687,29 +1667,12 @@ function registerCommands(programInstance) {
|
||||
});
|
||||
});
|
||||
|
||||
// Register the set-status command from @tm/cli
|
||||
// Handles task status updates with proper error handling and validation
|
||||
SetStatusCommand.registerOn(programInstance);
|
||||
|
||||
// NEW: Register the new list command from @tm/cli
|
||||
// This command handles all its own configuration and logic
|
||||
ListTasksCommand.registerOn(programInstance);
|
||||
|
||||
// Register the auth command from @tm/cli
|
||||
// Handles authentication with tryhamster.com
|
||||
AuthCommand.registerOn(programInstance);
|
||||
|
||||
// Register the context command from @tm/cli
|
||||
// Manages workspace context (org/brief selection)
|
||||
ContextCommand.registerOn(programInstance);
|
||||
|
||||
// Register the show command from @tm/cli
|
||||
// Displays detailed information about tasks
|
||||
ShowCommand.registerOn(programInstance);
|
||||
|
||||
// Register the start command from @tm/cli
|
||||
// Starts working on a task by launching claude-code with a standardized prompt
|
||||
StartCommand.registerOn(programInstance);
|
||||
// ========================================
|
||||
// Register All Commands from @tm/cli
|
||||
// ========================================
|
||||
// Use the centralized command registry to register all CLI commands
|
||||
// This replaces individual command registrations and reduces duplication
|
||||
registerAllCommands(programInstance);
|
||||
|
||||
// expand command
|
||||
programInstance
|
||||
|
||||
@@ -1,8 +1,5 @@
|
||||
import path from 'path';
|
||||
|
||||
import { log, readJSON, writeJSON, getCurrentTag } from '../utils.js';
|
||||
import { isTaskDependentOn } from '../task-manager.js';
|
||||
import generateTaskFiles from './generate-task-files.js';
|
||||
|
||||
/**
|
||||
* Add a subtask to a parent task
|
||||
@@ -142,11 +139,7 @@ async function addSubtask(
|
||||
// Write the updated tasks back to the file with proper context
|
||||
writeJSON(tasksPath, data, projectRoot, tag);
|
||||
|
||||
// Generate task files if requested
|
||||
if (generateFiles) {
|
||||
log('info', 'Regenerating task files...');
|
||||
await generateTaskFiles(tasksPath, path.dirname(tasksPath), context);
|
||||
}
|
||||
// Note: Task file generation is no longer supported and has been removed
|
||||
|
||||
return newSubtask;
|
||||
} catch (error) {
|
||||
|
||||
@@ -6,7 +6,6 @@ import {
|
||||
setTasksForTag,
|
||||
traverseDependencies
|
||||
} from '../utils.js';
|
||||
import generateTaskFiles from './generate-task-files.js';
|
||||
import {
|
||||
findCrossTagDependencies,
|
||||
getDependentTaskIds,
|
||||
@@ -142,13 +141,7 @@ async function moveTask(
|
||||
results.push(result);
|
||||
}
|
||||
|
||||
// Generate files once at the end if requested
|
||||
if (generateFiles) {
|
||||
await generateTaskFiles(tasksPath, path.dirname(tasksPath), {
|
||||
tag: tag,
|
||||
projectRoot: projectRoot
|
||||
});
|
||||
}
|
||||
// Note: Task file generation is no longer supported and has been removed
|
||||
|
||||
return {
|
||||
message: `Successfully moved ${sourceIds.length} tasks/subtasks`,
|
||||
@@ -209,12 +202,7 @@ async function moveTask(
|
||||
// The writeJSON function will filter out _rawTaggedData automatically
|
||||
writeJSON(tasksPath, rawData, options.projectRoot, tag);
|
||||
|
||||
if (generateFiles) {
|
||||
await generateTaskFiles(tasksPath, path.dirname(tasksPath), {
|
||||
tag: tag,
|
||||
projectRoot: projectRoot
|
||||
});
|
||||
}
|
||||
// Note: Task file generation is no longer supported and has been removed
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
@@ -1,6 +1,4 @@
|
||||
import path from 'path';
|
||||
import { log, readJSON, writeJSON } from '../utils.js';
|
||||
import generateTaskFiles from './generate-task-files.js';
|
||||
|
||||
/**
|
||||
* Remove a subtask from its parent task
|
||||
@@ -108,11 +106,7 @@ async function removeSubtask(
|
||||
// Write the updated tasks back to the file with proper context
|
||||
writeJSON(tasksPath, data, projectRoot, tag);
|
||||
|
||||
// Generate task files if requested
|
||||
if (generateFiles) {
|
||||
log('info', 'Regenerating task files...');
|
||||
await generateTaskFiles(tasksPath, path.dirname(tasksPath), context);
|
||||
}
|
||||
// Note: Task file generation is no longer supported and has been removed
|
||||
|
||||
return convertedTask;
|
||||
} catch (error) {
|
||||
|
||||
@@ -94,7 +94,6 @@ describe('addSubtask function', () => {
|
||||
const parentTask = writeCallArgs.tasks.find((t) => t.id === 1);
|
||||
expect(parentTask.subtasks).toHaveLength(1);
|
||||
expect(parentTask.subtasks[0].title).toBe('New Subtask');
|
||||
expect(mockGenerateTaskFiles).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test('should convert an existing task to a subtask', async () => {
|
||||
|
||||
@@ -88,11 +88,6 @@ describe('moveTask (unit)', () => {
|
||||
).rejects.toThrow(/Number of source IDs/);
|
||||
});
|
||||
|
||||
test('batch move calls generateTaskFiles once when flag true', async () => {
|
||||
await moveTask('tasks.json', '1,2', '3,4', true, { tag: 'master' });
|
||||
expect(generateTaskFiles).toHaveBeenCalledTimes(1);
|
||||
});
|
||||
|
||||
test('error when tag invalid', async () => {
|
||||
await expect(
|
||||
moveTask('tasks.json', '1', '2', false, { tag: 'ghost' })
|
||||
|
||||
@@ -1,6 +1,10 @@
import { defineConfig } from 'tsdown';
import { baseConfig, mergeConfig } from '@tm/build-config';
-import 'dotenv/config';
+import { config } from 'dotenv';
+import { resolve } from 'path';

+// Load .env file explicitly with absolute path
+config({ path: resolve(process.cwd(), '.env') });
+
// Get all TM_PUBLIC_* env variables for build-time injection
const getBuildTimeEnvs = () => {