diff --git a/.github/workflows/update-models-md.yml b/.github/workflows/update-models-md.yml
new file mode 100644
index 00000000..aeea0b05
--- /dev/null
+++ b/.github/workflows/update-models-md.yml
@@ -0,0 +1,40 @@
+name: Update models.md from supported-models.json
+
+on:
+  push:
+    branches:
+      - main
+      - next
+    paths:
+      - 'scripts/modules/supported-models.json'
+      - 'docs/scripts/models-json-to-markdown.js'
+
+jobs:
+  update_markdown:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@v4
+
+      - name: Set up Node.js
+        uses: actions/setup-node@v4
+        with:
+          node-version: 20
+
+      - name: Run transformation script
+        run: node docs/scripts/models-json-to-markdown.js
+
+      - name: Format Markdown with Prettier
+        run: npx prettier --write docs/models.md
+
+      - name: Stage docs/models.md
+        run: git add docs/models.md
+
+      - name: Commit & Push docs/models.md
+        uses: actions-js/push@master
+        with:
+          github_token: ${{ secrets.GITHUB_TOKEN }}
+          branch: ${{ github.ref_name }}
+          message: 'docs: Auto-update and format models.md'
+          author_name: 'github-actions[bot]'
+          author_email: 'github-actions[bot]@users.noreply.github.com'
diff --git a/README.md b/README.md
index a4444795..4817f208 100644
--- a/README.md
+++ b/README.md
@@ -28,13 +28,22 @@ Using the research model is optional but highly recommended. You will need at le
 ## Quick Start
 
-### Option 1 | MCP (Recommended):
+### Option 1: MCP (Recommended)
 
-MCP (Model Control Protocol) provides the easiest way to get started with Task Master directly in your editor.
+MCP (Model Context Protocol) lets you run Task Master directly from your editor.
 
-1. **Add the MCP config to your editor** (Cursor recommended, but it works with other text editors):
+#### 1. Add your MCP config at the following path depending on your editor
 
-```json
+| Editor       | Scope   | Linux/macOS Path                      | Windows Path                                      | Key          |
+| ------------ | ------- | ------------------------------------- | ------------------------------------------------- | ------------ |
+| **Cursor**   | Global  | `~/.cursor/mcp.json`                  | `%USERPROFILE%\.cursor\mcp.json`                  | `mcpServers` |
+|              | Project | `/.cursor/mcp.json`                   | `\.cursor\mcp.json`                               | `mcpServers` |
+| **Windsurf** | Global  | `~/.codeium/windsurf/mcp_config.json` | `%USERPROFILE%\.codeium\windsurf\mcp_config.json` | `mcpServers` |
+| **VS Code**  | Project | `/.vscode/mcp.json`                   | `\.vscode\mcp.json`                               | `servers`    |
+
+##### Cursor & Windsurf (`mcpServers`)
+
+```jsonc
 {
   "mcpServers": {
     "taskmaster-ai": {
@@ -56,23 +65,75 @@ MCP (Model Control Protocol) provides the easiest way to get started with Task M
 }
 ```
 
-2. **Enable the MCP** in your editor
+> 🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use.
 
-3. **Prompt the AI** to initialize Task Master:
+##### VS Code (`servers` + `type`)
 
-```
-Can you please initialize taskmaster-ai into my project?
+```jsonc
+{
+  "servers": {
+    "taskmaster-ai": {
+      "command": "npx",
+      "args": ["-y", "--package=task-master-ai", "task-master-ai"],
+      "env": {
+        "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
+        "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
+        "OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
+        "GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
+        "MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
+        "OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
+        "XAI_API_KEY": "YOUR_XAI_KEY_HERE",
+        "AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE"
+      },
+      "type": "stdio"
+    }
+  }
+}
 ```
-4. **Use common commands** directly through your AI assistant:
+> 🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use.
+
+#### 2. (Cursor-only) Enable Taskmaster MCP
+
+Open Cursor Settings (Ctrl+Shift+J) ➡ click the MCP tab on the left ➡ enable task-master-ai with the toggle
+
+#### 3. (Optional) Configure the models you want to use
+
+In your editor's AI chat pane, say:
 
 ```txt
-Can you parse my PRD at scripts/prd.txt?
-What's the next task I should work on?
-Can you help me implement task 3?
-Can you help me expand task 4?
+Change the main, research and fallback models to <main model>, <research model> and <fallback model> respectively.
 ```
+
+[Table of available models](docs/models.md)
+
+#### 4. Initialize Task Master
+
+In your editor's AI chat pane, say:
+
+```txt
+Initialize taskmaster-ai in my project
+```
+
+#### 5. Make sure you have a PRD in `/scripts/prd.txt`
+
+An example PRD is located at `/scripts/example_prd.txt`.
+
+**Always start with a detailed PRD.**
+
+The more detailed your PRD, the better the generated tasks will be.
+
+#### 6. Common Commands
+
+Use your AI assistant to:
+
+- Parse requirements: `Can you parse my PRD at scripts/prd.txt?`
+- Plan next step: `What's the next task I should work on?`
+- Implement a task: `Can you help me implement task 3?`
+- Expand a task: `Can you help me expand task 4?`
+
+[More examples on how to use Task Master in chat](docs/examples.md)
+
 ### Option 2: Using Command Line
 
 #### Installation
diff --git a/docs/models.md b/docs/models.md
new file mode 100644
index 00000000..75a8f1c4
--- /dev/null
+++ b/docs/models.md
@@ -0,0 +1,120 @@
+# Available Models as of May 16, 2025
+
+## Main Models
+
+| Provider | Model Name | SWE Score | Input Cost | Output Cost |
+| --- | --- | --- | --- | --- |
+| anthropic | claude-3-7-sonnet-20250219 | 0.623 | 3 | 15 |
+| anthropic | claude-3-5-sonnet-20241022 | 0.49 | 3 | 15 |
+| openai | gpt-4o | 0.332 | 2.5 | 10 |
+| openai | o1 | 0.489 | 15 | 60 |
+| openai | o3 | 0.5 | 10 | 40 |
+| openai | o3-mini | 0.493 | 1.1 | 4.4 |
+| openai | o4-mini | 0.45 | 1.1 | 4.4 |
+| openai | o1-mini | 0.4 | 1.1 | 4.4 |
+| openai | o1-pro | — | 150 | 600 |
+| openai | gpt-4-5-preview | 0.38 | 75 | 150 |
+| openai | gpt-4-1-mini | — | 0.4 | 1.6 |
+| openai | gpt-4-1-nano | — | 0.1 | 0.4 |
+| openai | gpt-4o-mini | 0.3 | 0.15 | 0.6 |
+| google | gemini-2.5-pro-exp-03-25 | 0.638 | — | — |
+| google | gemini-2.5-flash-preview-04-17 | — | — | — |
+| google | gemini-2.0-flash | 0.754 | 0.15 | 0.6 |
+| google | gemini-2.0-flash-thinking-experimental | 0.754 | 0.15 | 0.6 |
+| google | gemini-2.0-pro | — | — | — |
+| perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
+| perplexity | sonar-reasoning | 0.211 | 1 | 5 |
+| xai | grok-3 | — | 3 | 15 |
+| xai | grok-3-fast | — | 5 | 25 |
+| ollama | gemma3:27b | — | 0 | 0 |
+| ollama | gemma3:12b | — | 0 | 0 |
+| ollama | qwq | — | 0 | 0 |
+| ollama | deepseek-r1 | — | 0 | 0 |
+| ollama | mistral-small3.1 | — | 0 | 0 |
+| ollama | llama3.3 | — | 0 | 0 |
+| ollama | phi4 | — | 0 | 0 |
+| openrouter | google/gemini-2.0-flash-001 | — | 0.1 | 0.4 |
+| openrouter | google/gemini-2.5-pro-exp-03-25 | — | 0 | 0 |
+| openrouter | deepseek/deepseek-chat-v3-0324:free | — | 0 | 0 |
+| openrouter | deepseek/deepseek-chat-v3-0324 | — | 0.27 | 1.1 |
+| openrouter | deepseek/deepseek-r1:free | — | 0 | 0 |
+| openrouter | microsoft/mai-ds-r1:free | — | 0 | 0 |
+| openrouter | google/gemini-2.5-pro-preview-03-25 | — | 1.25 | 10 |
+| openrouter | google/gemini-2.5-flash-preview | — | 0.15 | 0.6 |
+| openrouter | google/gemini-2.5-flash-preview:thinking | — | 0.15 | 3.5 |
+| openrouter | openai/o3 | — | 10 | 40 |
+| openrouter | openai/o4-mini | 0.45 | 1.1 | 4.4 |
+| openrouter | openai/o4-mini-high | — | 1.1 | 4.4 |
+| openrouter | openai/o1-pro | — | 150 | 600 |
+| openrouter | meta-llama/llama-3.3-70b-instruct | — | 120 | 600 |
+| openrouter | google/gemma-3-12b-it:free | — | 0 | 0 |
+| openrouter | google/gemma-3-12b-it | — | 50 | 100 |
+| openrouter | google/gemma-3-27b-it:free | — | 0 | 0 |
+| openrouter | google/gemma-3-27b-it | — | 100 | 200 |
+| openrouter | qwen/qwq-32b:free | — | 0 | 0 |
+| openrouter | qwen/qwq-32b | — | 150 | 200 |
+| openrouter | qwen/qwen-max | — | 1.6 | 6.4 |
+| openrouter | qwen/qwen-turbo | — | 0.05 | 0.2 |
+| openrouter | mistralai/mistral-small-3.1-24b-instruct:free | — | 0 | 0 |
+| openrouter | mistralai/mistral-small-3.1-24b-instruct | — | 0.1 | 0.3 |
+| openrouter | thudm/glm-4-32b:free | — | 0 | 0 |
+
+## Research Models
+
+| Provider | Model Name | SWE Score | Input Cost | Output Cost |
+| --- | --- | --- | --- | --- |
+| openai | gpt-4o-search-preview | 0.33 | 2.5 | 10 |
+| openai | gpt-4o-mini-search-preview | 0.3 | 0.15 | 0.6 |
+| perplexity | sonar-pro | — | 3 | 15 |
+| perplexity | sonar | — | 1 | 1 |
+| perplexity | deep-research | 0.211 | 2 | 8 |
+| xai | grok-3 | — | 3 | 15 |
+| xai | grok-3-fast | — | 5 | 25 |
+
+## Fallback Models
+
+| Provider | Model Name | SWE Score | Input Cost | Output Cost |
+| --- | --- | --- | --- | --- |
+| anthropic | claude-3-7-sonnet-20250219 | 0.623 | 3 | 15 |
+| anthropic | claude-3-5-sonnet-20241022 | 0.49 | 3 | 15 |
+| openai | gpt-4o | 0.332 | 2.5 | 10 |
+| openai | o3 | 0.5 | 10 | 40 |
+| openai | o4-mini | 0.45 | 1.1 | 4.4 |
+| google | gemini-2.5-pro-exp-03-25 | 0.638 | — | — |
+| google | gemini-2.5-flash-preview-04-17 | — | — | — |
+| google | gemini-2.0-flash | 0.754 | 0.15 | 0.6 |
+| google | gemini-2.0-flash-thinking-experimental | 0.754 | 0.15 | 0.6 |
+| google | gemini-2.0-pro | — | — | — |
+| perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
+| perplexity | sonar-reasoning | 0.211 | 1 | 5 |
+| xai | grok-3 | — | 3 | 15 |
+| xai | grok-3-fast | — | 5 | 25 |
+| ollama | gemma3:27b | — | 0 | 0 |
+| ollama | gemma3:12b | — | 0 | 0 |
+| ollama | qwq | — | 0 | 0 |
+| ollama | deepseek-r1 | — | 0 | 0 |
+| ollama | mistral-small3.1 | — | 0 | 0 |
+| ollama | llama3.3 | — | 0 | 0 |
+| ollama | phi4 | — | 0 | 0 |
+| openrouter | google/gemini-2.0-flash-001 | — | 0.1 | 0.4 |
+| openrouter | google/gemini-2.5-pro-exp-03-25 | — | 0 | 0 |
+| openrouter | deepseek/deepseek-chat-v3-0324:free | — | 0 | 0 |
+| openrouter | deepseek/deepseek-r1:free | — | 0 | 0 |
+| openrouter | microsoft/mai-ds-r1:free | — | 0 | 0 |
+| openrouter | google/gemini-2.5-pro-preview-03-25 | — | 1.25 | 10 |
+| openrouter | openai/o3 | — | 10 | 40 |
+| openrouter | openai/o4-mini | 0.45 | 1.1 | 4.4 |
+| openrouter | openai/o4-mini-high | — | 1.1 | 4.4 |
+| openrouter | openai/o1-pro | — | 150 | 600 |
+| openrouter | meta-llama/llama-3.3-70b-instruct | — | 120 | 600 |
+| openrouter | google/gemma-3-12b-it:free | — | 0 | 0 |
+| openrouter | google/gemma-3-12b-it | — | 50 | 100 |
+| openrouter | google/gemma-3-27b-it:free | — | 0 | 0 |
+| openrouter | google/gemma-3-27b-it | — | 100 | 200 |
+| openrouter | qwen/qwq-32b:free | — | 0 | 0 |
+| openrouter | qwen/qwq-32b | — | 150 | 200 |
+| openrouter | qwen/qwen-max | — | 1.6 | 6.4 |
+| openrouter | qwen/qwen-turbo | — | 0.05 | 0.2 |
+| openrouter | mistralai/mistral-small-3.1-24b-instruct:free | — | 0 | 0 |
+| openrouter | mistralai/mistral-small-3.1-24b-instruct | — | 0.1 | 0.3 |
+| openrouter | thudm/glm-4-32b:free | — | 0 | 0 |
diff --git a/docs/scripts/models-json-to-markdown.js b/docs/scripts/models-json-to-markdown.js
new file mode 100644
index 00000000..26ab8a1e
--- /dev/null
+++ b/docs/scripts/models-json-to-markdown.js
@@ -0,0 +1,117 @@
+import fs from 'fs';
+import path from 'path';
+import { fileURLToPath } from 'url';
+
+const __filename = fileURLToPath(import.meta.url);
+const __dirname = path.dirname(__filename);
+
+// The JSON source of truth lives in scripts/modules; the generated
+// Markdown is written to docs/models.md (one directory above this script).
+const supportedModelsPath = path.join(
+  __dirname,
+  '..',
+  '..',
+  'scripts',
+  'modules',
+  'supported-models.json'
+);
+const outputMarkdownPath = path.join(__dirname, '..', 'models.md');
+
+function formatCost(cost) {
+  if (cost === null || cost === undefined) {
+    return '—';
+  }
+  return cost;
+}
+
+function formatSweScore(score) {
+  if (score === null || score === undefined || score === 0) {
+    return '—';
+  }
+  return score.toString();
+}
+
+function generateMarkdownTable(title, models) {
+  if (!models || models.length === 0) {
+    return `## ${title}\n\nNo models in this category.\n\n`;
+  }
+  let table = `## ${title}\n\n`;
+  table += '| Provider | Model Name | SWE Score | Input Cost | Output Cost |\n';
+  table += '|---|---|---|---|---|\n';
+  models.forEach((model) => {
+    table += `| ${model.provider} | ${model.modelName} | ${formatSweScore(model.sweScore)} | ${formatCost(model.inputCost)} | ${formatCost(model.outputCost)} |\n`;
+  });
+  table += '\n';
+  return table;
+}
+
+function main() {
+  try {
+    const supportedModelsContent = fs.readFileSync(supportedModelsPath, 'utf8');
+    const supportedModels = JSON.parse(supportedModelsContent);
+
+    const mainModels = [];
+    const researchModels = [];
+    const fallbackModels = [];
+
+    for (const provider in supportedModels) {
+      if (Object.hasOwnProperty.call(supportedModels, provider)) {
+        const models = supportedModels[provider];
+        models.forEach((model) => {
+          const modelEntry = {
+            provider: provider,
+            modelName: model.id,
+            sweScore: model.swe_score,
+            inputCost: model.cost_per_1m_tokens
+              ? model.cost_per_1m_tokens.input
+              : null,
+            outputCost: model.cost_per_1m_tokens
+              ? model.cost_per_1m_tokens.output
+              : null
+          };
+
+          const roles = model.allowed_roles || [];
+          if (roles.includes('main')) {
+            mainModels.push(modelEntry);
+          }
+          if (roles.includes('research')) {
+            researchModels.push(modelEntry);
+          }
+          if (roles.includes('fallback')) {
+            fallbackModels.push(modelEntry);
+          }
+        });
+      }
+    }
+
+    const date = new Date();
+    const monthNames = [
+      'January',
+      'February',
+      'March',
+      'April',
+      'May',
+      'June',
+      'July',
+      'August',
+      'September',
+      'October',
+      'November',
+      'December'
+    ];
+    const formattedDate = `${monthNames[date.getMonth()]} ${date.getDate()}, ${date.getFullYear()}`;
+
+    let markdownContent = `# Available Models as of ${formattedDate}\n\n`;
+    markdownContent += generateMarkdownTable('Main Models', mainModels);
+    markdownContent += generateMarkdownTable('Research Models', researchModels);
+    markdownContent += generateMarkdownTable('Fallback Models', fallbackModels);
+
+    fs.writeFileSync(outputMarkdownPath, markdownContent, 'utf8');
+    console.log(`Successfully updated ${outputMarkdownPath}`);
+  } catch (error) {
+    console.error('Error transforming models.json to models.md:', error);
+    process.exit(1);
+  }
+}
+
+main();
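As a sanity check on the patch above, the table-generation helpers from `docs/scripts/models-json-to-markdown.js` can be exercised standalone. This is a minimal sketch with the helper logic inlined; the two sample rows below are hypothetical illustrations, not entries taken from `supported-models.json`:

```javascript
// Inlined copies of the helpers from models-json-to-markdown.js.
// Missing costs/scores render as an em dash placeholder in the table.
function formatCost(cost) {
  return cost === null || cost === undefined ? '—' : cost;
}

function formatSweScore(score) {
  return score === null || score === undefined || score === 0
    ? '—'
    : score.toString();
}

function generateMarkdownTable(title, models) {
  if (!models || models.length === 0) {
    return `## ${title}\n\nNo models in this category.\n\n`;
  }
  let table = `## ${title}\n\n`;
  table += '| Provider | Model Name | SWE Score | Input Cost | Output Cost |\n';
  table += '|---|---|---|---|---|\n';
  for (const m of models) {
    table += `| ${m.provider} | ${m.modelName} | ${formatSweScore(m.sweScore)} | ${formatCost(m.inputCost)} | ${formatCost(m.outputCost)} |\n`;
  }
  return table + '\n';
}

// Hypothetical sample input in the shape main() builds from supported-models.json.
const sampleModels = [
  {
    provider: 'anthropic',
    modelName: 'claude-3-7-sonnet-20250219',
    sweScore: 0.623,
    inputCost: 3,
    outputCost: 15
  },
  { provider: 'ollama', modelName: 'qwq', sweScore: null, inputCost: 0, outputCost: 0 }
];

const table = generateMarkdownTable('Main Models', sampleModels);
console.log(table);
```

Note that a `null` SWE score and a `0` input cost render differently ('—' vs '0'), which matches how the Ollama rows appear in the generated `docs/models.md`.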