revamping readme (#522)
**`.github/workflows/update-models-md.yml`** (new file, +40)

```yaml
name: Update models.md from supported-models.json

on:
  push:
    branches:
      - main
      - next
    paths:
      - 'scripts/modules/supported-models.json'
      - 'docs/scripts/models-json-to-markdown.js'

jobs:
  update_markdown:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Run transformation script
        run: node docs/scripts/models-json-to-markdown.js

      - name: Format Markdown with Prettier
        run: npx prettier --write docs/models.md

      - name: Stage docs/models.md
        run: git add docs/models.md

      - name: Commit & Push docs/models.md
        uses: actions-js/push@master
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          branch: ${{ github.ref_name }}
          message: 'docs: Auto-update and format models.md'
          author_name: 'github-actions[bot]'
          author_email: 'github-actions[bot]@users.noreply.github.com'
```
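
To reproduce the workflow's effect locally, you can run the same two steps by hand (a sketch, assuming a checkout of this repo with Node 20 on the PATH; the commands mirror the `run:` lines above):

```sh
# Regenerate docs/models.md from scripts/modules/supported-models.json
node docs/scripts/models-json-to-markdown.js

# Normalize the generated Markdown, as the workflow does before committing
npx prettier --write docs/models.md
```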

**`README.md`** (87 lines changed)

````diff
@@ -28,13 +28,22 @@ Using the research model is optional but highly recommended. You will need at le
 
 ## Quick Start
 
-### Option 1 | MCP (Recommended):
+### Option 1: MCP (Recommended)
 
-MCP (Model Control Protocol) provides the easiest way to get started with Task Master directly in your editor.
+MCP (Model Context Protocol) lets you run Task Master directly from your editor.
 
-1. **Add the MCP config to your editor** (Cursor recommended, but it works with other text editors):
+#### 1. Add your MCP config at the following path depending on your editor
 
+| Editor       | Scope   | Linux/macOS Path                      | Windows Path                                      | Key          |
+| ------------ | ------- | ------------------------------------- | ------------------------------------------------- | ------------ |
+| **Cursor**   | Global  | `~/.cursor/mcp.json`                  | `%USERPROFILE%\.cursor\mcp.json`                  | `mcpServers` |
+|              | Project | `<project_folder>/.cursor/mcp.json`   | `<project_folder>\.cursor\mcp.json`               | `mcpServers` |
+| **Windsurf** | Global  | `~/.codeium/windsurf/mcp_config.json` | `%USERPROFILE%\.codeium\windsurf\mcp_config.json` | `mcpServers` |
+| **VS Code**  | Project | `<project_folder>/.vscode/mcp.json`   | `<project_folder>\.vscode\mcp.json`               | `servers`    |
+
+##### Cursor & Windsurf (`mcpServers`)
+
-```json
+```jsonc
 {
   "mcpServers": {
     "taskmaster-ai": {
@@ -56,23 +65,75 @@ MCP (Model Control Protocol) provides the easiest way to get started with Task M
 }
 ```
 
-2. **Enable the MCP** in your editor
+> 🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use.
 
-3. **Prompt the AI** to initialize Task Master:
+##### VS Code (`servers` + `type`)
 
-```
-Can you please initialize taskmaster-ai into my project?
+```jsonc
+{
+  "servers": {
+    "taskmaster-ai": {
+      "command": "npx",
+      "args": ["-y", "--package=task-master-ai", "task-master-ai"],
+      "env": {
+        "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
+        "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
+        "OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
+        "GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
+        "MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
+        "OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
+        "XAI_API_KEY": "YOUR_XAI_KEY_HERE",
+        "AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE"
+      },
+      "type": "stdio"
+    }
+  }
+}
 ```
 
-4. **Use common commands** directly through your AI assistant:
+> 🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use.
+
+#### 2. (Cursor-only) Enable Taskmaster MCP
+
+Open Cursor Settings (Ctrl+Shift+J) ➡ Click on MCP tab on the left ➡ Enable task-master-ai with the toggle
+
+#### 3. (Optional) Configure the models you want to use
+
+In your editor’s AI chat pane, say:
 
 ```txt
-Can you parse my PRD at scripts/prd.txt?
-What's the next task I should work on?
-Can you help me implement task 3?
-Can you help me expand task 4?
+Change the main, research and fallback models to <model_name>, <model_name> and <model_name> respectively.
 ```
 
+[Table of available models](docs/models.md)
+
+#### 4. Initialize Task Master
+
+In your editor’s AI chat pane, say:
+
+```txt
+Initialize taskmaster-ai in my project
+```
+
+#### 5. Make sure you have a PRD in `<project_folder>/scripts/prd.txt`
+
+An example of a PRD is located in `<project_folder>/scripts/example_prd.txt`.
+
+**Always start with a detailed PRD.**
+
+The more detailed your PRD, the better the generated tasks will be.
+
+#### 6. Common Commands
+
+Use your AI assistant to:
+
+- Parse requirements: `Can you parse my PRD at scripts/prd.txt?`
+- Plan next step: `What’s the next task I should work on?`
+- Implement a task: `Can you help me implement task 3?`
+- Expand a task: `Can you help me expand task 4?`
+
+[More examples on how to use Task Master in chat](docs/examples.md)
+
 ### Option 2: Using Command Line
 
 #### Installation
````
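
Step 5 is the one step in the new README with no example of its own, so here is a deliberately hypothetical sketch of the level of detail a PRD benefits from (invented content and headings, purely illustrative; the repo's `scripts/example_prd.txt` is the authoritative template):

```txt
Overview: CLI tool that tracks daily habits and streak counts.
Users: developers who live in the terminal.
Features:
  1. Add/remove a habit with a name and schedule (daily, weekdays, custom days).
  2. Mark a habit done for today and update its streak.
  3. Print a weekly summary table.
Constraints: Node.js 20, no database, single JSON file for storage.
Out of scope for v1: sync, reminders, mobile.
```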

**`docs/models.md`** (new file, +120; all costs are per 1M tokens)

```markdown
# Available Models as of May 16, 2025

## Main Models

| Provider   | Model Name                                    | SWE Score | Input Cost | Output Cost |
| ---------- | --------------------------------------------- | --------- | ---------- | ----------- |
| anthropic  | claude-3-7-sonnet-20250219                    | 0.623     | 3          | 15          |
| anthropic  | claude-3-5-sonnet-20241022                    | 0.49      | 3          | 15          |
| openai     | gpt-4o                                        | 0.332     | 2.5        | 10          |
| openai     | o1                                            | 0.489     | 15         | 60          |
| openai     | o3                                            | 0.5       | 10         | 40          |
| openai     | o3-mini                                       | 0.493     | 1.1        | 4.4         |
| openai     | o4-mini                                       | 0.45      | 1.1        | 4.4         |
| openai     | o1-mini                                       | 0.4       | 1.1        | 4.4         |
| openai     | o1-pro                                        | —         | 150        | 600         |
| openai     | gpt-4-5-preview                               | 0.38      | 75         | 150         |
| openai     | gpt-4-1-mini                                  | —         | 0.4        | 1.6         |
| openai     | gpt-4-1-nano                                  | —         | 0.1        | 0.4         |
| openai     | gpt-4o-mini                                   | 0.3       | 0.15       | 0.6         |
| google     | gemini-2.5-pro-exp-03-25                      | 0.638     | —          | —           |
| google     | gemini-2.5-flash-preview-04-17                | —         | —          | —           |
| google     | gemini-2.0-flash                              | 0.754     | 0.15       | 0.6         |
| google     | gemini-2.0-flash-thinking-experimental        | 0.754     | 0.15       | 0.6         |
| google     | gemini-2.0-pro                                | —         | —          | —           |
| perplexity | sonar-reasoning-pro                           | 0.211     | 2          | 8           |
| perplexity | sonar-reasoning                               | 0.211     | 1          | 5           |
| xai        | grok-3                                        | —         | 3          | 15          |
| xai        | grok-3-fast                                   | —         | 5          | 25          |
| ollama     | gemma3:27b                                    | —         | 0          | 0           |
| ollama     | gemma3:12b                                    | —         | 0          | 0           |
| ollama     | qwq                                           | —         | 0          | 0           |
| ollama     | deepseek-r1                                   | —         | 0          | 0           |
| ollama     | mistral-small3.1                              | —         | 0          | 0           |
| ollama     | llama3.3                                      | —         | 0          | 0           |
| ollama     | phi4                                          | —         | 0          | 0           |
| openrouter | google/gemini-2.0-flash-001                   | —         | 0.1        | 0.4         |
| openrouter | google/gemini-2.5-pro-exp-03-25               | —         | 0          | 0           |
| openrouter | deepseek/deepseek-chat-v3-0324:free           | —         | 0          | 0           |
| openrouter | deepseek/deepseek-chat-v3-0324                | —         | 0.27       | 1.1         |
| openrouter | deepseek/deepseek-r1:free                     | —         | 0          | 0           |
| openrouter | microsoft/mai-ds-r1:free                      | —         | 0          | 0           |
| openrouter | google/gemini-2.5-pro-preview-03-25           | —         | 1.25       | 10          |
| openrouter | google/gemini-2.5-flash-preview               | —         | 0.15       | 0.6         |
| openrouter | google/gemini-2.5-flash-preview:thinking      | —         | 0.15       | 3.5         |
| openrouter | openai/o3                                     | —         | 10         | 40          |
| openrouter | openai/o4-mini                                | 0.45      | 1.1        | 4.4         |
| openrouter | openai/o4-mini-high                           | —         | 1.1        | 4.4         |
| openrouter | openai/o1-pro                                 | —         | 150        | 600         |
| openrouter | meta-llama/llama-3.3-70b-instruct             | —         | 120        | 600         |
| openrouter | google/gemma-3-12b-it:free                    | —         | 0          | 0           |
| openrouter | google/gemma-3-12b-it                         | —         | 50         | 100         |
| openrouter | google/gemma-3-27b-it:free                    | —         | 0          | 0           |
| openrouter | google/gemma-3-27b-it                         | —         | 100        | 200         |
| openrouter | qwen/qwq-32b:free                             | —         | 0          | 0           |
| openrouter | qwen/qwq-32b                                  | —         | 150        | 200         |
| openrouter | qwen/qwen-max                                 | —         | 1.6        | 6.4         |
| openrouter | qwen/qwen-turbo                               | —         | 0.05       | 0.2         |
| openrouter | mistralai/mistral-small-3.1-24b-instruct:free | —         | 0          | 0           |
| openrouter | mistralai/mistral-small-3.1-24b-instruct      | —         | 0.1        | 0.3         |
| openrouter | thudm/glm-4-32b:free                          | —         | 0          | 0           |

## Research Models

| Provider   | Model Name                 | SWE Score | Input Cost | Output Cost |
| ---------- | -------------------------- | --------- | ---------- | ----------- |
| openai     | gpt-4o-search-preview      | 0.33      | 2.5        | 10          |
| openai     | gpt-4o-mini-search-preview | 0.3       | 0.15       | 0.6         |
| perplexity | sonar-pro                  | —         | 3          | 15          |
| perplexity | sonar                      | —         | 1          | 1           |
| perplexity | deep-research              | 0.211     | 2          | 8           |
| xai        | grok-3                     | —         | 3          | 15          |
| xai        | grok-3-fast                | —         | 5          | 25          |

## Fallback Models

| Provider   | Model Name                                    | SWE Score | Input Cost | Output Cost |
| ---------- | --------------------------------------------- | --------- | ---------- | ----------- |
| anthropic  | claude-3-7-sonnet-20250219                    | 0.623     | 3          | 15          |
| anthropic  | claude-3-5-sonnet-20241022                    | 0.49      | 3          | 15          |
| openai     | gpt-4o                                        | 0.332     | 2.5        | 10          |
| openai     | o3                                            | 0.5       | 10         | 40          |
| openai     | o4-mini                                       | 0.45      | 1.1        | 4.4         |
| google     | gemini-2.5-pro-exp-03-25                      | 0.638     | —          | —           |
| google     | gemini-2.5-flash-preview-04-17                | —         | —          | —           |
| google     | gemini-2.0-flash                              | 0.754     | 0.15       | 0.6         |
| google     | gemini-2.0-flash-thinking-experimental        | 0.754     | 0.15       | 0.6         |
| google     | gemini-2.0-pro                                | —         | —          | —           |
| perplexity | sonar-reasoning-pro                           | 0.211     | 2          | 8           |
| perplexity | sonar-reasoning                               | 0.211     | 1          | 5           |
| xai        | grok-3                                        | —         | 3          | 15          |
| xai        | grok-3-fast                                   | —         | 5          | 25          |
| ollama     | gemma3:27b                                    | —         | 0          | 0           |
| ollama     | gemma3:12b                                    | —         | 0          | 0           |
| ollama     | qwq                                           | —         | 0          | 0           |
| ollama     | deepseek-r1                                   | —         | 0          | 0           |
| ollama     | mistral-small3.1                              | —         | 0          | 0           |
| ollama     | llama3.3                                      | —         | 0          | 0           |
| ollama     | phi4                                          | —         | 0          | 0           |
| openrouter | google/gemini-2.0-flash-001                   | —         | 0.1        | 0.4         |
| openrouter | google/gemini-2.5-pro-exp-03-25               | —         | 0          | 0           |
| openrouter | deepseek/deepseek-chat-v3-0324:free           | —         | 0          | 0           |
| openrouter | deepseek/deepseek-r1:free                     | —         | 0          | 0           |
| openrouter | microsoft/mai-ds-r1:free                      | —         | 0          | 0           |
| openrouter | google/gemini-2.5-pro-preview-03-25           | —         | 1.25       | 10          |
| openrouter | openai/o3                                     | —         | 10         | 40          |
| openrouter | openai/o4-mini                                | 0.45      | 1.1        | 4.4         |
| openrouter | openai/o4-mini-high                           | —         | 1.1        | 4.4         |
| openrouter | openai/o1-pro                                 | —         | 150        | 600         |
| openrouter | meta-llama/llama-3.3-70b-instruct             | —         | 120        | 600         |
| openrouter | google/gemma-3-12b-it:free                    | —         | 0          | 0           |
| openrouter | google/gemma-3-12b-it                         | —         | 50         | 100         |
| openrouter | google/gemma-3-27b-it:free                    | —         | 0          | 0           |
| openrouter | google/gemma-3-27b-it                         | —         | 100        | 200         |
| openrouter | qwen/qwq-32b:free                             | —         | 0          | 0           |
| openrouter | qwen/qwq-32b                                  | —         | 150        | 200         |
| openrouter | qwen/qwen-max                                 | —         | 1.6        | 6.4         |
| openrouter | qwen/qwen-turbo                               | —         | 0.05       | 0.2         |
| openrouter | mistralai/mistral-small-3.1-24b-instruct:free | —         | 0          | 0           |
| openrouter | mistralai/mistral-small-3.1-24b-instruct      | —         | 0.1        | 0.3         |
| openrouter | thudm/glm-4-32b:free                          | —         | 0          | 0           |
```

**`docs/scripts/models-json-to-markdown.js`** (new file, +131)

```js
// Reads scripts/modules/supported-models.json and regenerates docs/models.md.
import fs from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename); // <repo>/docs/scripts

// Input: the canonical model registry. Output: the generated Markdown page.
const supportedModelsPath = path.join(
  __dirname,
  '..',
  '..',
  'scripts',
  'modules',
  'supported-models.json'
);
const outputMarkdownPath = path.join(__dirname, '..', 'models.md');

// A null/undefined cost renders as an em dash in the tables.
function formatCost(cost) {
  if (cost === null || cost === undefined) {
    return '—';
  }
  return cost;
}

// A missing or zero SWE score also renders as an em dash.
function formatSweScore(score) {
  if (score === null || score === undefined || score === 0) {
    return '—';
  }
  return score.toString();
}

function generateMarkdownTable(title, models) {
  if (!models || models.length === 0) {
    return `## ${title}\n\nNo models in this category.\n\n`;
  }
  let table = `## ${title}\n\n`;
  table += '| Provider | Model Name | SWE Score | Input Cost | Output Cost |\n';
  table += '|---|---|---|---|---|\n';
  models.forEach((model) => {
    table += `| ${model.provider} | ${model.modelName} | ${formatSweScore(model.sweScore)} | ${formatCost(model.inputCost)} | ${formatCost(model.outputCost)} |\n`;
  });
  table += '\n';
  return table;
}

function main() {
  try {
    const supportedModelsContent = fs.readFileSync(supportedModelsPath, 'utf8');
    const supportedModels = JSON.parse(supportedModelsContent);

    const mainModels = [];
    const researchModels = [];
    const fallbackModels = [];

    // The JSON maps each provider name to an array of model definitions;
    // a model lands in every table that its allowed_roles lists.
    for (const provider in supportedModels) {
      if (Object.hasOwnProperty.call(supportedModels, provider)) {
        const models = supportedModels[provider];
        models.forEach((model) => {
          const modelEntry = {
            provider: provider,
            modelName: model.id,
            sweScore: model.swe_score,
            inputCost: model.cost_per_1m_tokens
              ? model.cost_per_1m_tokens.input
              : null,
            outputCost: model.cost_per_1m_tokens
              ? model.cost_per_1m_tokens.output
              : null
          };

          if (model.allowed_roles.includes('main')) {
            mainModels.push(modelEntry);
          }
          if (model.allowed_roles.includes('research')) {
            researchModels.push(modelEntry);
          }
          if (model.allowed_roles.includes('fallback')) {
            fallbackModels.push(modelEntry);
          }
        });
      }
    }

    // Date stamp for the page title, e.g. "May 16, 2025".
    const monthNames = [
      'January', 'February', 'March', 'April', 'May', 'June',
      'July', 'August', 'September', 'October', 'November', 'December'
    ];
    const date = new Date();
    const formattedDate = `${monthNames[date.getMonth()]} ${date.getDate()}, ${date.getFullYear()}`;

    let markdownContent = `# Available Models as of ${formattedDate}\n\n`;
    markdownContent += generateMarkdownTable('Main Models', mainModels);
    markdownContent += generateMarkdownTable('Research Models', researchModels);
    markdownContent += generateMarkdownTable('Fallback Models', fallbackModels);

    fs.writeFileSync(outputMarkdownPath, markdownContent, 'utf8');
    console.log(`Successfully updated ${outputMarkdownPath}`);
  } catch (error) {
    console.error('Error transforming models.json to models.md:', error);
    process.exit(1);
  }
}

main();
```
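
For reference, a minimal sketch of the `supported-models.json` shape this script consumes, inferred from the fields it reads (`id`, `swe_score`, `cost_per_1m_tokens`, `allowed_roles`); the entry below is illustrative, not a copy of the real file:

```jsonc
{
  "anthropic": [
    {
      "id": "claude-3-7-sonnet-20250219",
      "swe_score": 0.623, // 0 or null renders as "—" via formatSweScore()
      "cost_per_1m_tokens": { "input": 3, "output": 15 }, // null renders as "—"
      "allowed_roles": ["main", "fallback"] // which tables the model appears in
    }
  ]
}
```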