Revert "Release 0.13.0"

Ralph Khreish
2025-05-03 14:38:33 +02:00
committed by GitHub
parent 8dace2186c
commit 6f5ddabc96
177 changed files with 13894 additions and 26358 deletions

View File

@@ -0,0 +1,257 @@
# AI Client Utilities for MCP Tools
This document provides examples of how to use the new AI client utilities with AsyncOperationManager in MCP tools.
## Basic Usage with Direct Functions
```javascript
// In your direct function implementation:
import {
  getAnthropicClientForMCP,
  getModelConfig,
  handleClaudeError
} from '../utils/ai-client-utils.js';

export async function someAiOperationDirect(args, log, context) {
  try {
    // Initialize Anthropic client with session from context
    const client = getAnthropicClientForMCP(context.session, log);

    // Get model configuration with defaults or session overrides
    const modelConfig = getModelConfig(context.session);

    // Make API call with proper error handling
    try {
      const response = await client.messages.create({
        model: modelConfig.model,
        max_tokens: modelConfig.maxTokens,
        temperature: modelConfig.temperature,
        messages: [{ role: 'user', content: 'Your prompt here' }]
      });

      return {
        success: true,
        data: response
      };
    } catch (apiError) {
      // Use helper to get a user-friendly error message
      const friendlyMessage = handleClaudeError(apiError);
      return {
        success: false,
        error: {
          code: 'AI_API_ERROR',
          message: friendlyMessage
        }
      };
    }
  } catch (error) {
    // Handle client initialization errors
    return {
      success: false,
      error: {
        code: 'AI_CLIENT_ERROR',
        message: error.message
      }
    };
  }
}
```
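For intuition, the object `getModelConfig` returns can be pictured as a merge of defaults with overrides taken from the session. The sketch below is illustrative only — the env variable names and fallback values are assumptions, not the utility's confirmed internals:

```javascript
// Illustrative sketch only: getModelConfig's actual implementation may differ.
const FALLBACK_DEFAULTS = {
  model: 'claude-3-7-sonnet-20250219',
  maxTokens: 4000,
  temperature: 0.7
};

function getModelConfigSketch(session, defaults = FALLBACK_DEFAULTS) {
  const env = session?.env || {};
  return {
    // Session env values (if present) win over per-operation defaults
    model: env.MODEL || defaults.model,
    maxTokens: env.MAX_TOKENS ? Number(env.MAX_TOKENS) : defaults.maxTokens,
    temperature: env.TEMPERATURE ? Number(env.TEMPERATURE) : defaults.temperature
  };
}
```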
## Integration with AsyncOperationManager
```javascript
// In your MCP tool implementation:
import {
  AsyncOperationManager,
  StatusCodes
} from '../../utils/async-operation-manager.js';
import { someAiOperationDirect } from '../../core/direct-functions/some-ai-operation.js';

export async function someAiOperation(args, context) {
  const { session, mcpLog } = context;
  const log = mcpLog || console;

  try {
    // Create operation description
    const operationDescription = `AI operation: ${args.someParam}`;

    // Start async operation
    const operation = AsyncOperationManager.createOperation(
      operationDescription,
      async (reportProgress) => {
        try {
          // Initial progress report
          reportProgress({
            progress: 0,
            status: 'Starting AI operation...'
          });

          // Call direct function with session and progress reporting
          const result = await someAiOperationDirect(args, log, {
            reportProgress,
            mcpLog: log,
            session
          });

          // Final progress update
          reportProgress({
            progress: 100,
            status: result.success ? 'Operation completed' : 'Operation failed',
            result: result.data,
            error: result.error
          });

          return result;
        } catch (error) {
          // Handle errors in the operation
          reportProgress({
            progress: 100,
            status: 'Operation failed',
            error: {
              message: error.message,
              code: error.code || 'OPERATION_FAILED'
            }
          });
          throw error;
        }
      }
    );

    // Return immediate response with operation ID
    return {
      status: StatusCodes.ACCEPTED,
      body: {
        success: true,
        message: 'Operation started',
        operationId: operation.id
      }
    };
  } catch (error) {
    // Handle errors in the MCP tool
    log.error(`Error in someAiOperation: ${error.message}`);
    return {
      status: StatusCodes.INTERNAL_SERVER_ERROR,
      body: {
        success: false,
        error: {
          code: 'OPERATION_FAILED',
          message: error.message
        }
      }
    };
  }
}
```
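Because the tool returns immediately with an operation ID, clients typically need a way to check on the operation later. Below is a minimal status-lookup sketch; it assumes the manager exposes a lookup such as `AsyncOperationManager.getOperation(id)` and `StatusCodes.OK`/`StatusCodes.NOT_FOUND` constants — all assumptions for illustration, not a confirmed API:

```javascript
// Hedged sketch: assumes AsyncOperationManager.getOperation(id) exists.
import {
  AsyncOperationManager,
  StatusCodes
} from '../../utils/async-operation-manager.js';

export async function getAiOperationStatus(args, context) {
  const operation = AsyncOperationManager.getOperation(args.operationId); // assumed lookup API

  if (!operation) {
    return {
      status: StatusCodes.NOT_FOUND, // assumed constant
      body: {
        success: false,
        error: {
          code: 'OPERATION_NOT_FOUND',
          message: `No operation found with id ${args.operationId}`
        }
      }
    };
  }

  return {
    status: StatusCodes.OK, // assumed constant
    body: {
      success: true,
      progress: operation.progress,
      status: operation.status,
      result: operation.result
    }
  };
}
```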
## Using Research Capabilities with Perplexity
```javascript
// In your direct function:
import {
  getPerplexityClientForMCP,
  getBestAvailableAIModel
} from '../utils/ai-client-utils.js';

export async function researchOperationDirect(args, log, context) {
  try {
    // Get the best AI model for this operation based on needs
    const { type, client } = await getBestAvailableAIModel(
      context.session,
      { requiresResearch: true },
      log
    );

    // Report which model we're using
    if (context.reportProgress) {
      await context.reportProgress({
        progress: 10,
        status: `Using ${type} model for research...`
      });
    }

    // Make API call based on the model type
    if (type === 'perplexity') {
      // Call Perplexity
      const response = await client.chat.completions.create({
        model: context.session?.env?.PERPLEXITY_MODEL || 'sonar-medium-online',
        messages: [{ role: 'user', content: args.researchQuery }],
        temperature: 0.1
      });

      return {
        success: true,
        data: response.choices[0].message.content
      };
    } else {
      // Call Claude as fallback
      // (Implementation depends on specific needs)
      // ...
    }
  } catch (error) {
    // Handle errors
    return {
      success: false,
      error: {
        code: 'RESEARCH_ERROR',
        message: error.message
      }
    };
  }
}
```
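For intuition, the selection `getBestAvailableAIModel` performs can be approximated as: prefer Perplexity when `requiresResearch` is set and a key is available, otherwise fall back to Claude. The following is an assumed sketch of that behavior, not the utility's actual source:

```javascript
// Assumed behavior sketch — the real utility may weigh more factors.
import {
  getAnthropicClientForMCP,
  getPerplexityClientForMCP
} from '../utils/ai-client-utils.js';

async function getBestAvailableAIModelSketch(session, options = {}, log) {
  if (options.requiresResearch && session?.env?.PERPLEXITY_API_KEY) {
    return {
      type: 'perplexity',
      client: await getPerplexityClientForMCP(session, log)
    };
  }
  // Fall back to Claude for everything else
  return {
    type: 'claude',
    client: getAnthropicClientForMCP(session, log)
  };
}
```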
## Model Configuration Override Example
```javascript
// In your direct function:
import { getModelConfig } from '../utils/ai-client-utils.js';

// Using custom defaults for a specific operation
const operationDefaults = {
  model: 'claude-3-haiku-20240307', // Faster, smaller model
  maxTokens: 1000, // Lower token limit
  temperature: 0.2 // Lower temperature for more deterministic output
};

// Get model config with operation-specific defaults
const modelConfig = getModelConfig(context.session, operationDefaults);

// Now use modelConfig in your API calls
const response = await client.messages.create({
  model: modelConfig.model,
  max_tokens: modelConfig.maxTokens,
  temperature: modelConfig.temperature
  // Other parameters...
});
```
## Best Practices
1. **Error Handling**:
   - Always use try/catch blocks around both client initialization and API calls
   - Use `handleClaudeError` to provide user-friendly error messages
   - Return standardized error objects with a code and message (see the sketch after this list)
2. **Progress Reporting**:
   - Report progress at key points (starting, processing, completing)
   - Include meaningful status messages
   - Include error details in progress reports when failures occur
3. **Session Handling**:
   - Always pass the session from the context to the AI client getters
   - Use `getModelConfig` to respect user settings from the session
4. **Model Selection**:
   - Use `getBestAvailableAIModel` when you need to select between different models
   - Set `requiresResearch: true` when you need Perplexity capabilities
5. **AsyncOperationManager Integration**:
   - Create descriptive operation names
   - Handle all errors within the operation function
   - Return standardized results from direct functions
   - Return immediate responses with operation IDs
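As referenced in practice 1, a pair of tiny helpers can keep result shapes consistent across direct functions. These helper names are illustrative, not part of the shipped utilities:

```javascript
// Illustrative helpers for the standardized result shape used above.
function successResult(data) {
  return { success: true, data };
}

function errorResult(code, message) {
  return { success: false, error: { code, message } };
}

// Example usage inside a direct function's catch block:
// return errorResult('AI_API_ERROR', handleClaudeError(apiError));
```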

View File

@@ -52,9 +52,6 @@ task-master show 1.2
```bash
# Update tasks from a specific ID and provide context
task-master update --from=<id> --prompt="<prompt>"
-# Update tasks using research role
-task-master update --from=<id> --prompt="<prompt>" --research
```
## Update a Specific Task
@@ -63,7 +60,7 @@ task-master update --from=<id> --prompt="<prompt>" --research
# Update a single task by ID with new information
task-master update-task --id=<id> --prompt="<prompt>"
-# Use research-backed updates
+# Use research-backed updates with Perplexity AI
task-master update-task --id=<id> --prompt="<prompt>" --research
```
@@ -76,7 +73,7 @@ task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>"
# Example: Add details about API rate limiting to subtask 2 of task 5
task-master update-subtask --id=5.2 --prompt="Add rate limiting of 100 requests per minute"
-# Use research-backed updates
+# Use research-backed updates with Perplexity AI
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>" --research
```
@@ -190,12 +187,9 @@ task-master fix-dependencies
## Add a New Task
```bash
-# Add a new task using AI (main role)
+# Add a new task using AI
task-master add-task --prompt="Description of the new task"
-# Add a new task using AI (research role)
-task-master add-task --prompt="Description of the new task" --research
# Add a task with dependencies
task-master add-task --prompt="Description" --dependencies=1,2,3
@@ -209,30 +203,3 @@ task-master add-task --prompt="Description" --priority=high
# Initialize a new project with Task Master structure
task-master init
```
-## Configure AI Models
-```bash
-# View current AI model configuration and API key status
-task-master models
-# Set the primary model for generation/updates (provider inferred if known)
-task-master models --set-main=claude-3-opus-20240229
-# Set the research model
-task-master models --set-research=sonar-pro
-# Set the fallback model
-task-master models --set-fallback=claude-3-haiku-20240307
-# Set a custom Ollama model for the main role
-task-master models --set-main=my-local-llama --ollama
-# Set a custom OpenRouter model for the research role
-task-master models --set-research=google/gemini-pro --openrouter
-# Run interactive setup to configure models, including custom ones
-task-master models --setup
-```
-Configuration is stored in `.taskmasterconfig` in your project root. API keys are still managed via `.env` or MCP configuration. Use `task-master models` without flags to see available built-in models. Use `--setup` for a guided experience.

View File

@@ -1,89 +1,53 @@
# Configuration
-Taskmaster uses two primary methods for configuration:
+Task Master can be configured through environment variables in a `.env` file at the root of your project.
-1. **`.taskmasterconfig` File (Project Root - Recommended for most settings)**
+## Required Configuration
-   - This JSON file stores most configuration settings, including AI model selections, parameters, logging levels, and project defaults.
-   - **Location:** This file is created in the root directory of your project when you run the `task-master models --setup` interactive setup. You typically do this during the initialization sequence. Do not manually edit this file beyond adjusting Temperature and Max Tokens depending on your model.
-   - **Management:** Use the `task-master models --setup` command (or `models` MCP tool) to interactively create and manage this file. You can also set specific models directly using `task-master models --set-<role>=<model_id>`, adding `--ollama` or `--openrouter` flags for custom models. Manual editing is possible but not recommended unless you understand the structure.
-   - **Example Structure:**
-     ```json
-     {
-       "models": {
-         "main": {
-           "provider": "anthropic",
-           "modelId": "claude-3-7-sonnet-20250219",
-           "maxTokens": 64000,
-           "temperature": 0.2
-         },
-         "research": {
-           "provider": "perplexity",
-           "modelId": "sonar-pro",
-           "maxTokens": 8700,
-           "temperature": 0.1
-         },
-         "fallback": {
-           "provider": "anthropic",
-           "modelId": "claude-3-5-sonnet",
-           "maxTokens": 64000,
-           "temperature": 0.2
-         }
-       },
-       "global": {
-         "logLevel": "info",
-         "debug": false,
-         "defaultSubtasks": 5,
-         "defaultPriority": "medium",
-         "projectName": "Your Project Name",
-         "ollamaBaseUrl": "http://localhost:11434/api",
-         "azureOpenaiBaseUrl": "https://your-endpoint.openai.azure.com/"
-       }
-     }
-     ```
+- `ANTHROPIC_API_KEY`: Your Anthropic API key for Claude (Example: `ANTHROPIC_API_KEY=sk-ant-api03-...`)
-2. **Environment Variables (`.env` file or MCP `env` block - For API Keys Only)**
-   - Used **exclusively** for sensitive API keys and specific endpoint URLs.
-   - **Location:**
-     - For CLI usage: Create a `.env` file in your project root.
-     - For MCP/Cursor usage: Configure keys in the `env` section of your `.cursor/mcp.json` file.
-   - **Required API Keys (Depending on configured providers):**
-     - `ANTHROPIC_API_KEY`: Your Anthropic API key.
-     - `PERPLEXITY_API_KEY`: Your Perplexity API key.
-     - `OPENAI_API_KEY`: Your OpenAI API key.
-     - `GOOGLE_API_KEY`: Your Google API key.
-     - `MISTRAL_API_KEY`: Your Mistral API key.
-     - `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key (also requires `AZURE_OPENAI_ENDPOINT`).
-     - `OPENROUTER_API_KEY`: Your OpenRouter API key.
-     - `XAI_API_KEY`: Your X-AI API key.
-   - **Optional Endpoint Overrides (in .taskmasterconfig):**
-     - `AZURE_OPENAI_ENDPOINT`: Required if using Azure OpenAI key.
-     - `OLLAMA_BASE_URL`: Override the default Ollama API URL (Default: `http://localhost:11434/api`).
+## Optional Configuration
-**Important:** Settings like model ID selections (`main`, `research`, `fallback`), `maxTokens`, `temperature`, `logLevel`, `defaultSubtasks`, `defaultPriority`, and `projectName` are **managed in `.taskmasterconfig`**, not environment variables.
+- `MODEL` (Default: `"claude-3-7-sonnet-20250219"`): Claude model to use (Example: `MODEL=claude-3-opus-20240229`)
+- `MAX_TOKENS` (Default: `"4000"`): Maximum tokens for responses (Example: `MAX_TOKENS=8000`)
+- `TEMPERATURE` (Default: `"0.7"`): Temperature for model responses (Example: `TEMPERATURE=0.5`)
+- `DEBUG` (Default: `"false"`): Enable debug logging (Example: `DEBUG=true`)
+- `LOG_LEVEL` (Default: `"info"`): Console output level (Example: `LOG_LEVEL=debug`)
+- `DEFAULT_SUBTASKS` (Default: `"3"`): Default subtask count (Example: `DEFAULT_SUBTASKS=5`)
+- `DEFAULT_PRIORITY` (Default: `"medium"`): Default priority (Example: `DEFAULT_PRIORITY=high`)
+- `PROJECT_NAME` (Default: `"MCP SaaS MVP"`): Project name in metadata (Example: `PROJECT_NAME=My Awesome Project`)
+- `PROJECT_VERSION` (Default: `"1.0.0"`): Version in metadata (Example: `PROJECT_VERSION=2.1.0`)
+- `PERPLEXITY_API_KEY`: For research-backed features (Example: `PERPLEXITY_API_KEY=pplx-...`)
+- `PERPLEXITY_MODEL` (Default: `"sonar-medium-online"`): Perplexity model (Example: `PERPLEXITY_MODEL=sonar-large-online`)
-## Example `.env` File (for API Keys)
+## Example .env File
```
-# Required API keys for providers configured in .taskmasterconfig
-ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
-PERPLEXITY_API_KEY=pplx-your-key-here
-# OPENAI_API_KEY=sk-your-key-here
-# GOOGLE_API_KEY=AIzaSy...
-# etc.
+# Required
+ANTHROPIC_API_KEY=sk-ant-api03-your-api-key
-# Optional Endpoint Overrides
-# AZURE_OPENAI_ENDPOINT=https://your-azure-endpoint.openai.azure.com/
-# OLLAMA_BASE_URL=http://custom-ollama-host:11434/api
+# Optional - Claude Configuration
+MODEL=claude-3-7-sonnet-20250219
+MAX_TOKENS=4000
+TEMPERATURE=0.7
+# Optional - Perplexity API for Research
+PERPLEXITY_API_KEY=pplx-your-api-key
+PERPLEXITY_MODEL=sonar-medium-online
+# Optional - Project Info
+PROJECT_NAME=My Project
+PROJECT_VERSION=1.0.0
+# Optional - Application Configuration
+DEFAULT_SUBTASKS=3
+DEFAULT_PRIORITY=medium
+DEBUG=false
+LOG_LEVEL=info
```
## Troubleshooting
-### Configuration Errors
-- If Task Master reports errors about missing configuration or cannot find `.taskmasterconfig`, run `task-master models --setup` in your project root to create or repair the file.
-- Ensure API keys are correctly placed in your `.env` file (for CLI) or `.cursor/mcp.json` (for MCP) and are valid for the providers selected in `.taskmasterconfig`.
### If `task-master init` doesn't respond:
Try running it with Node directly:

View File

@@ -1,94 +0,0 @@
# Testing Roo Integration
This document provides instructions for testing the Roo integration in the Task Master package.
## Running Tests
To run the tests for the Roo integration:
```bash
# Run all tests
npm test
# Run only Roo integration tests
npm test -- -t "Roo"
# Run specific test file
npm test -- tests/integration/roo-files-inclusion.test.js
```
## Manual Testing
To manually verify that the Roo files are properly included in the package:
1. Create a test directory:
```bash
mkdir test-tm
cd test-tm
```
2. Create a package.json file:
```bash
npm init -y
```
3. Install the task-master-ai package locally:
```bash
# From the root of the claude-task-master repository
cd ..
npm pack
# This will create a file like task-master-ai-0.12.0.tgz
# Move back to the test directory
cd test-tm
npm install ../task-master-ai-0.12.0.tgz
```
4. Initialize a new Task Master project:
```bash
npx task-master init --yes
```
5. Verify that all Roo files and directories are created:
```bash
# Check that .roomodes file exists
ls -la | grep .roomodes
# Check that .roo directory exists and contains all mode directories
ls -la .roo
ls -la .roo/rules
ls -la .roo/rules-architect
ls -la .roo/rules-ask
ls -la .roo/rules-boomerang
ls -la .roo/rules-code
ls -la .roo/rules-debug
ls -la .roo/rules-test
```
## What to Look For
When running the tests or performing manual verification, ensure that:
1. The package includes `.roo/**` and `.roomodes` in the `files` array in package.json (see the sketch after this list)
2. The `prepare-package.js` script verifies the existence of all required Roo files
3. The `init.js` script creates all necessary .roo directories and copies .roomodes file
4. All source files for Roo integration exist in `assets/roocode/.roo` and `assets/roocode/.roomodes`
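A hedged sketch of what an automated check along these lines might look like — the real `tests/integration/roo-files-inclusion.test.js` may assert more or differently:

```javascript
// Illustrative Jest test — assumes it runs from the package root.
import fs from 'fs';
import path from 'path';

describe('Roo files inclusion', () => {
  it('lists .roo/** and .roomodes in the package files array', () => {
    const pkg = JSON.parse(fs.readFileSync(path.resolve('package.json'), 'utf8'));
    expect(pkg.files).toEqual(expect.arrayContaining(['.roo/**', '.roomodes']));
  });

  it('ships the Roo source assets', () => {
    expect(fs.existsSync(path.resolve('assets/roocode/.roomodes'))).toBe(true);
    expect(fs.existsSync(path.resolve('assets/roocode/.roo'))).toBe(true);
  });
});
```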
## Compatibility
Ensure that the Roo integration works alongside existing Cursor functionality:
1. Initialize a new project that uses both Cursor and Roo:
```bash
npx task-master init --yes
```
2. Verify that both `.cursor` and `.roo` directories are created
3. Verify that both `.windsurfrules` and `.roomodes` files are created
4. Confirm that existing functionality continues to work as expected

View File

@@ -51,33 +51,3 @@ Can you analyze the complexity of our tasks to help me understand which ones nee
```
Can you show me the complexity report in a more readable format?
```
### Breaking Down Complex Tasks
```
Task 5 seems complex. Can you break it down into subtasks?
```
(Agent runs: `task-master expand --id=5`)
```
Please break down task 5 using research-backed generation.
```
(Agent runs: `task-master expand --id=5 --research`)
### Updating Tasks with Research
```
We need to update task 15 based on the latest React Query v5 changes. Can you research this and update the task?
```
(Agent runs: `task-master update-task --id=15 --prompt="Update based on React Query v5 changes" --research`)
### Adding Tasks with Research
```
Please add a new task to implement user profile image uploads using Cloudinary, research the best approach.
```
(Agent runs: `task-master add-task --prompt="Implement user profile image uploads using Cloudinary" --research`)

docs/fastmcp-core.txt (new file, 1,179 lines)

File diff suppressed because it is too large

View File

@@ -10,13 +10,7 @@ There are two ways to set up Task Master: using MCP (recommended) or via npm ins
MCP (Model Context Protocol) provides the easiest way to get started with Task Master directly in your editor.
-1. **Install the package**
-```bash
-npm i -g task-master-ai
-```
-2. **Add the MCP config to your IDE/MCP Client** (Cursor is recommended, but it works with other clients):
+1. **Add the MCP config to your editor** (Cursor recommended, but it works with other text editors):
```json
{
@@ -27,28 +21,21 @@ npm i -g task-master-ai
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE"
"MODEL": "claude-3-7-sonnet-20250219",
"PERPLEXITY_MODEL": "sonar-pro",
"MAX_TOKENS": 64000,
"TEMPERATURE": 0.2,
"DEFAULT_SUBTASKS": 5,
"DEFAULT_PRIORITY": "medium"
}
}
}
}
```
-**IMPORTANT:** An API key is _required_ for each AI provider you plan on using. Run the `task-master models` command to see your selected models and the status of your API keys across .env and mcp.json
+2. **Enable the MCP** in your editor settings
-**To use AI commands in CLI** you MUST have API keys in the .env file
-**To use AI commands in MCP** you MUST have API keys in the .mcp.json file (or MCP config equivalent)
-We recommend having keys in both places and adding mcp.json to your gitignore so your API keys aren't checked into git.
-3. **Enable the MCP** in your editor settings
-4. **Prompt the AI** to initialize Task Master:
+3. **Prompt the AI** to initialize Task Master:
```
Can you please initialize taskmaster-ai into my project?
@@ -60,9 +47,9 @@ The AI will:
- Set up initial configuration files
- Guide you through the rest of the process
-5. Place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`)
+4. Place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`)
-6. **Use natural language commands** to interact with Task Master:
+5. **Use natural language commands** to interact with Task Master:
```
Can you parse my PRD at scripts/prd.txt?
@@ -89,7 +76,7 @@ Initialize a new project:
task-master init
# If installed locally
-npx task-master init
+npx task-master-init
```
This will prompt you for project details and set up a new project with the necessary files and structure.
@@ -254,16 +241,13 @@ If during implementation, you discover that:
Tell the agent:
```
-We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks (from ID 4) to reflect this change?
+We've changed our approach. We're now using Express instead of Fastify. Please update all future tasks to reflect this change.
```
The agent will execute:
```bash
-task-master update --from=4 --prompt="Now we are using MongoDB instead of PostgreSQL."
-# OR, if research is needed to find best practices for MongoDB:
-task-master update --from=4 --prompt="Update to use MongoDB, researching best practices" --research
+task-master update --from=4 --prompt="Now we are using Express instead of Fastify."
```
This will rewrite or re-scope subsequent tasks in tasks.json while preserving completed work.
@@ -306,7 +290,7 @@ The agent will execute:
task-master expand --all
```
-For research-backed subtask generation using the configured research model:
+For research-backed subtask generation using Perplexity AI:
```
Please break down task 5 using research-backed generation.