feat(ai): Add Google Gemini provider support and fix config loading
This commit is contained in:

.changeset/beige-rats-accept.md (new file, 5 lines)
@@ -0,0 +1,5 @@
+---
+'task-master-ai': patch
+---
+
+- Add support for Google Gemini models via Vercel AI SDK integration.
@@ -0,0 +1,58 @@
+---
+description: Guidelines for managing Task Master AI providers and models.
+globs:
+alwaysApply: false
+---
+
+# Task Master AI Provider Management
+
+This rule guides AI assistants on how to view, configure, and interact with the different AI providers and models supported by Task Master. For internal implementation details of the service layer, see [`ai_services.mdc`](mdc:.cursor/rules/ai_services.mdc).
+
+- **Primary Interaction:**
+  - Use the `models` MCP tool or the `task-master models` CLI command to manage AI configurations. See [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc) for detailed command/tool usage.
+
+- **Configuration Roles:**
+  - Task Master uses three roles for AI models:
+    - `main`: Primary model for general tasks (generation, updates).
+    - `research`: Model used when the `--research` flag or `research: true` parameter is used (typically models with web access or specialized knowledge).
+    - `fallback`: Model used if the primary (`main`) model fails.
+  - Each role is configured with a specific `provider:modelId` pair (e.g., `openai:gpt-4o`).
+
+- **Viewing Configuration & Available Models:**
+  - To see the current model assignments for each role and list all models available for assignment:
+    - **MCP Tool:** `models` (call with no arguments or `listAvailableModels: true`)
+    - **CLI Command:** `task-master models`
+  - The output shows the currently assigned models and a list of other available models, each prefixed with its provider (e.g., `google:gemini-2.5-pro-exp-03-25`).
+
+- **Setting Models for Roles:**
+  - To assign a model to a role:
+    - **MCP Tool:** `models` with `setMain`, `setResearch`, or `setFallback` parameters.
+    - **CLI Command:** `task-master models` with `--set-main`, `--set-research`, or `--set-fallback` flags.
+  - **Crucially:** When providing the model ID to *set*, **DO NOT include the `provider:` prefix**. Use only the model ID itself.
+    - ✅ **DO:** `models(setMain='gpt-4o')` or `task-master models --set-main=gpt-4o`
+    - ❌ **DON'T:** `models(setMain='openai:gpt-4o')` or `task-master models --set-main=openai:gpt-4o`
+  - The tool/command automatically determines the provider from the model ID.
+
+- **Supported Providers & Required API Keys:**
+  - Task Master integrates with various providers via the Vercel AI SDK.
+  - **API keys are essential** for most providers and must be configured correctly.
+  - **Key Locations** (see [`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc) - Configuration Management):
+    - **MCP/Cursor:** Set keys in the `env` section of `.cursor/mcp.json`.
+    - **CLI:** Set keys in a `.env` file in the project root.
+  - **Provider List & Keys:**
+    - **`anthropic`**: Requires `ANTHROPIC_API_KEY`.
+    - **`google`**: Requires `GOOGLE_API_KEY`.
+    - **`openai`**: Requires `OPENAI_API_KEY`.
+    - **`perplexity`**: Requires `PERPLEXITY_API_KEY`.
+    - **`xai`**: Requires `XAI_API_KEY`.
+    - **`mistral`**: Requires `MISTRAL_API_KEY`.
+    - **`azure`**: Requires `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_ENDPOINT`.
+    - **`openrouter`**: Requires `OPENROUTER_API_KEY`.
+    - **`ollama`**: Typically requires `OLLAMA_API_KEY` *and* `OLLAMA_BASE_URL` (default: `http://localhost:11434/api`). *Check specific setup.*
+
+- **Troubleshooting:**
+  - If AI commands fail (especially in MCP context):
+    1. **Verify API Key:** Ensure the correct API key for the *selected provider* (check `models` output) exists in the appropriate location (`.cursor/mcp.json` `env` or `.env`).
+    2. **Check Model ID:** Ensure the model ID set for the role is valid (use the `models` tool with `listAvailableModels: true`, or `task-master models`).
+    3. **Provider Status:** Check the status of the external AI provider's service.
+    4. **Restart MCP:** If changes were made to configuration or provider code, restart the MCP server.
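As a concrete illustration of the CLI key location above, a `.env` file at the project root might look like the following sketch (placeholder values; include only the providers you actually use):

```shell
# .env (project root): placeholder values, read by the Task Master CLI
GOOGLE_API_KEY=your-google-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
PERPLEXITY_API_KEY=your-perplexity-api-key
```

The MCP-side equivalent is the same key names placed under the `env` object of the `taskmaster-ai` server entry in `.cursor/mcp.json`.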
@@ -1,8 +1,8 @@
 {
   "models": {
     "main": {
-      "provider": "anthropic",
-      "modelId": "claude-3-7-sonnet-20250219",
+      "provider": "google",
+      "modelId": "gemini-2.5-pro-exp-03-25",
       "maxTokens": 120000,
       "temperature": 0.2
     },
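For reference, the `main` role block produced by this hunk reads as follows when shown whole (the enclosing file also defines `research` and `fallback` blocks not visible in the hunk):

```json
{
  "models": {
    "main": {
      "provider": "google",
      "modelId": "gemini-2.5-pro-exp-03-25",
      "maxTokens": 120000,
      "temperature": 0.2
    }
  }
}
```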
package-lock.json (generated, 16 lines changed)
@@ -11,14 +11,14 @@
   "dependencies": {
     "@ai-sdk/anthropic": "^1.2.10",
     "@ai-sdk/azure": "^1.3.17",
-    "@ai-sdk/google": "^1.2.12",
+    "@ai-sdk/google": "^1.2.13",
     "@ai-sdk/mistral": "^1.2.7",
     "@ai-sdk/openai": "^1.3.16",
     "@ai-sdk/perplexity": "^1.1.7",
     "@ai-sdk/xai": "^1.2.13",
     "@anthropic-ai/sdk": "^0.39.0",
     "@openrouter/ai-sdk-provider": "^0.4.5",
-    "ai": "^4.3.9",
+    "ai": "^4.3.10",
     "boxen": "^8.0.1",
     "chalk": "^4.1.2",
     "cli-table3": "^0.6.5",
@@ -91,9 +91,9 @@
     }
   },
   "node_modules/@ai-sdk/google": {
-    "version": "1.2.12",
-    "resolved": "https://registry.npmjs.org/@ai-sdk/google/-/google-1.2.12.tgz",
-    "integrity": "sha512-A8AYqCmBs9SJFiAOP6AX0YEDHWTDrCaUDiRY2cdMSKjJiEknvwnPrAAKf3idgVqYaM2kS0qWz5v9v4pBzXDx+w==",
+    "version": "1.2.13",
+    "resolved": "https://registry.npmjs.org/@ai-sdk/google/-/google-1.2.13.tgz",
+    "integrity": "sha512-nnHDzbX1Zst28AjP3718xSWsEqx++qmFuqmnDc2Htelc02HyO6WkWOXMH+YVK3W8zdIyZEKpHL9KKlql7pa10A==",
     "license": "Apache-2.0",
     "dependencies": {
       "@ai-sdk/provider": "1.1.3",
@@ -2704,9 +2704,9 @@
     }
   },
   "node_modules/ai": {
-    "version": "4.3.9",
-    "resolved": "https://registry.npmjs.org/ai/-/ai-4.3.9.tgz",
-    "integrity": "sha512-P2RpV65sWIPdUlA4f1pcJ11pB0N1YmqPVLEmC4j8WuBwKY0L3q9vGhYPh0Iv+spKHKyn0wUbMfas+7Z6nTfS0g==",
+    "version": "4.3.10",
+    "resolved": "https://registry.npmjs.org/ai/-/ai-4.3.10.tgz",
+    "integrity": "sha512-jw+ahNu+T4SHj9gtraIKtYhanJI6gj2IZ5BFcfEHgoyQVMln5a5beGjzl/nQSX6FxyLqJ/UBpClRa279EEKK/Q==",
     "license": "Apache-2.0",
     "dependencies": {
       "@ai-sdk/provider": "1.1.3",
@@ -40,14 +40,14 @@
   "dependencies": {
     "@ai-sdk/anthropic": "^1.2.10",
     "@ai-sdk/azure": "^1.3.17",
-    "@ai-sdk/google": "^1.2.12",
+    "@ai-sdk/google": "^1.2.13",
     "@ai-sdk/mistral": "^1.2.7",
    "@ai-sdk/openai": "^1.3.16",
     "@ai-sdk/perplexity": "^1.1.7",
     "@ai-sdk/xai": "^1.2.13",
     "@anthropic-ai/sdk": "^0.39.0",
     "@openrouter/ai-sdk-provider": "^0.4.5",
-    "ai": "^4.3.9",
+    "ai": "^4.3.10",
     "boxen": "^8.0.1",
     "chalk": "^4.1.2",
     "cli-table3": "^0.6.5",
@@ -24,6 +24,7 @@ import { log, resolveEnvVariable } from './utils.js';
 // Corrected path from scripts/ai-providers/... to ../../src/ai-providers/...
 import * as anthropic from '../../src/ai-providers/anthropic.js';
 import * as perplexity from '../../src/ai-providers/perplexity.js';
+import * as google from '../../src/ai-providers/google.js'; // Import Google provider
 // TODO: Import other provider modules when implemented (openai, ollama, etc.)

 // --- Provider Function Map ---
@@ -40,6 +41,12 @@ const PROVIDER_FUNCTIONS = {
     streamText: perplexity.streamPerplexityText,
     generateObject: perplexity.generatePerplexityObject
     // streamObject: perplexity.streamPerplexityObject, // Add when implemented
   },
+  google: {
+    // Add Google entry
+    generateText: google.generateGoogleText,
+    streamText: google.streamGoogleText,
+    generateObject: google.generateGoogleObject
+  }
   // TODO: Add entries for openai, ollama, etc. when implemented
 };
@@ -75,7 +82,7 @@ function _resolveApiKey(providerName, session) {
   const keyMap = {
     openai: 'OPENAI_API_KEY',
     anthropic: 'ANTHROPIC_API_KEY',
-    google: 'GOOGLE_API_KEY',
+    google: 'GOOGLE_API_KEY', // Add Google API Key
     perplexity: 'PERPLEXITY_API_KEY',
     mistral: 'MISTRAL_API_KEY',
     azure: 'AZURE_OPENAI_API_KEY',
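The provider-map pattern added above dispatches calls by provider name. A minimal, self-contained sketch of that dispatch follows; the stub functions stand in for the real provider modules and are not the actual implementations:

```javascript
// Minimal sketch of PROVIDER_FUNCTIONS-style dispatch; the stubs below are
// placeholders, not the real provider implementations.
const PROVIDER_FUNCTIONS = {
  google: {
    generateText: async ({ messages }) => `google:${messages[0].content}`
  },
  perplexity: {
    generateText: async ({ messages }) => `perplexity:${messages[0].content}`
  }
};

// Look up the provider entry and call the requested capability, failing
// loudly for unknown providers or unimplemented capabilities.
async function callProvider(providerName, fnName, params) {
  const fn = PROVIDER_FUNCTIONS[providerName]?.[fnName];
  if (!fn) {
    throw new Error(`Unsupported: ${providerName}.${fnName}`);
  }
  return fn(params);
}
```

With this shape, adding Gemini support is just one more entry in the map, which is exactly what the hunk above does.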
@@ -66,7 +66,7 @@ import {
   getAvailableModelsList,
   setModel
 } from './task-manager/models.js'; // Import new core functions
-import { findProjectRoot } from './utils.js';
+import { findProjectRoot } from './utils.js'; // Import findProjectRoot

 /**
  * Configure and register CLI commands
@@ -1597,15 +1597,37 @@ function registerCommands(programInstance) {
     .option('--setup', 'Run interactive setup to configure models')
     .action(async (options) => {
       try {
+        // ---> Explicitly find project root for CLI execution <---
+        const projectRoot = findProjectRoot();
+        if (!projectRoot && !options.setup) {
+          // Allow setup even if root isn't found immediately
+          console.error(
+            chalk.red(
+              "Error: Could not determine the project root. Ensure you're running this command within a Task Master project directory."
+            )
+          );
+          process.exit(1);
+        }
+        // ---> End find project root <---
+
         // --- Set Operations ---
         if (options.setMain || options.setResearch || options.setFallback) {
           let resultSet = null;
+          const coreOptions = { projectRoot }; // Pass root to setModel
           if (options.setMain) {
-            resultSet = await setModel('main', options.setMain);
+            resultSet = await setModel('main', options.setMain, coreOptions);
           } else if (options.setResearch) {
-            resultSet = await setModel('research', options.setResearch);
+            resultSet = await setModel(
+              'research',
+              options.setResearch,
+              coreOptions
+            );
           } else if (options.setFallback) {
-            resultSet = await setModel('fallback', options.setFallback);
+            resultSet = await setModel(
+              'fallback',
+              options.setFallback,
+              coreOptions
+            );
           }

           if (resultSet?.success) {
@@ -1619,7 +1641,7 @@ function registerCommands(programInstance) {
           if (resultSet?.error?.code === 'MODEL_NOT_FOUND') {
             console.log(
               chalk.yellow(
-                '\nRun `task-master models` to see available models.'
+                '\\nRun `task-master models` to see available models.'
               )
             );
           }
@@ -1630,8 +1652,10 @@ function registerCommands(programInstance) {

         // --- Interactive Setup ---
         if (options.setup) {
-          // Get available models for interactive setup
-          const availableModelsResult = await getAvailableModelsList();
+          // Get available models for interactive setup - pass projectRoot
+          const availableModelsResult = await getAvailableModelsList({
+            projectRoot
+          });
           if (!availableModelsResult.success) {
             console.error(
               chalk.red(
@@ -1642,7 +1666,10 @@ function registerCommands(programInstance) {
           }
           const availableModelsForSetup = availableModelsResult.data.models;

-          const currentConfigResult = await getModelConfiguration();
+          // Get current config - pass projectRoot
+          const currentConfigResult = await getModelConfiguration({
+            projectRoot
+          });
           if (!currentConfigResult.success) {
             console.error(
               chalk.red(
@@ -1657,24 +1684,12 @@ function registerCommands(programInstance) {
            fallback: {}
          };

-          console.log(chalk.cyan.bold('\nInteractive Model Setup:'));
+          console.log(chalk.cyan.bold('\\nInteractive Model Setup:'));

-          const getMainChoicesAndDefault = () => {
-            const mainChoices = allModelsForSetup.filter((modelChoice) =>
-              availableModelsForSetup
-                .find((m) => m.modelId === modelChoice.value.id)
-                ?.allowedRoles?.includes('main')
-            );
-            const defaultIndex = mainChoices.findIndex(
-              (m) => m.value.id === currentModels.main?.modelId
-            );
-            return { choices: mainChoices, default: defaultIndex };
-          };
-
-          // Get all available models, including active ones
+          // Find all available models for setup options
          const allModelsForSetup = availableModelsForSetup.map((model) => ({
            name: `${model.provider} / ${model.modelId}`,
-            value: { provider: model.provider, id: model.modelId } // Use id here for comparison
+            value: { provider: model.provider, id: model.modelId }
          }));

          if (allModelsForSetup.length === 0) {
@@ -1684,118 +1699,110 @@ function registerCommands(programInstance) {
            process.exit(1);
          }

-          // Function to find the index of the currently selected model ID
-          // Ensure it correctly searches the unfiltered selectableModels list
-          const findDefaultIndex = (roleModelId) => {
-            if (!roleModelId) return -1; // Handle cases where a role isn't set
-            return allModelsForSetup.findIndex(
-              (m) => m.value.id === roleModelId // Compare using the 'id' from the value object
-            );
-          };
-
-          // Helper to get research choices and default index
-          const getResearchChoicesAndDefault = () => {
-            const researchChoices = allModelsForSetup.filter((modelChoice) =>
+          // Helper to get choices and default index for a role
+          const getPromptData = (role, allowNone = false) => {
+            const roleChoices = allModelsForSetup.filter((modelChoice) =>
              availableModelsForSetup
                .find((m) => m.modelId === modelChoice.value.id)
-                ?.allowedRoles?.includes('research')
+                ?.allowedRoles?.includes(role)
            );
-            const defaultIndex = researchChoices.findIndex(
-              (m) => m.value.id === currentModels.research?.modelId
-            );
-            return { choices: researchChoices, default: defaultIndex };
-          };
-
-          // Helper to get fallback choices and default index
-          const getFallbackChoicesAndDefault = () => {
-            const choices = [
-              { name: 'None (disable fallback)', value: null },
+            let choices = [...roleChoices];
+            let defaultIndex = -1;
+            const currentModelId = currentModels[role]?.modelId;
+
+            if (allowNone) {
+              choices = [
+                { name: 'None (disable)', value: null },
                new inquirer.Separator(),
-              ...allModelsForSetup
+                ...roleChoices
              ];
-            const currentFallbackId = currentModels.fallback?.modelId;
-            let defaultIndex = 0; // Default to 'None'
-            if (currentFallbackId) {
-              const foundIndex = allModelsForSetup.findIndex(
-                (m) => m.value.id === currentFallbackId
-              );
-              if (foundIndex !== -1) {
-                defaultIndex = foundIndex + 2; // +2 because of 'None' and Separator
-              }
-            }
+              if (currentModelId) {
+                const foundIndex = roleChoices.findIndex(
+                  (m) => m.value.id === currentModelId
+                );
+                defaultIndex = foundIndex !== -1 ? foundIndex + 2 : 0; // +2 for None and Separator
+              } else {
+                defaultIndex = 0; // Default to 'None'
+              }
+            } else {
+              if (currentModelId) {
+                defaultIndex = roleChoices.findIndex(
+                  (m) => m.value.id === currentModelId
+                );
+              }
+            }

-            return { choices, default: defaultIndex };
-          };
-
-          const researchPromptData = getResearchChoicesAndDefault();
-          const fallbackPromptData = getFallbackChoicesAndDefault();
-          // Call the helper function for main model choices
-          const mainPromptData = getMainChoicesAndDefault();
-
-          // Add cancel option for all prompts
+            // Add Cancel option
            const cancelOption = {
              name: 'Cancel setup (q)',
              value: '__CANCEL__'
            };
+            choices = [cancelOption, new inquirer.Separator(), ...choices];
+            defaultIndex = defaultIndex !== -1 ? defaultIndex + 2 : 0; // +2 for Cancel and Separator

-          const mainModelChoices = [
-            cancelOption,
-            new inquirer.Separator(),
-            ...mainPromptData.choices
-          ];
-
-          const researchModelChoices = [
-            cancelOption,
-            new inquirer.Separator(),
-            ...researchPromptData.choices
-          ];
-
-          const fallbackModelChoices = [
-            cancelOption,
-            new inquirer.Separator(),
-            ...fallbackPromptData.choices
-          ];
+            return { choices, default: defaultIndex };
+          };

-          // Add key press handler for 'q' to cancel
-          process.stdin.on('keypress', (str, key) => {
-            if (key.name === 'q') {
-              process.stdin.pause();
-              console.log(chalk.yellow('\nSetup canceled. No changes made.'));
+          // Ensure stdin is available and resume it if needed
+          if (process.stdin.isTTY) {
+            process.stdin.setRawMode(true);
+            process.stdin.resume();
+            process.stdin.setEncoding('utf8');
+            process.stdin.on('data', (key) => {
+              if (key === 'q' || key === '\\u0003') {
+                // 'q' or Ctrl+C
+                console.log(
+                  chalk.yellow('\\nSetup canceled. No changes made.')
+                );
                process.exit(0);
              }
            });
+            console.log(
+              chalk.gray('Press "q" at any time to cancel the setup.')
+            );
+          }

-          console.log(chalk.gray('Press "q" at any time to cancel the setup.'));
+          // --- Generate choices using the helper ---
+          const mainPromptData = getPromptData('main');
+          const researchPromptData = getPromptData('research');
+          const fallbackPromptData = getPromptData('fallback', true); // Allow 'None' for fallback

          const answers = await inquirer.prompt([
            {
              type: 'list',
              name: 'mainModel',
              message: 'Select the main model for generation/updates:',
-              choices: mainModelChoices,
-              default: mainPromptData.default + 2 // +2 for cancel option and separator
+              choices: mainPromptData.choices,
+              default: mainPromptData.default
            },
            {
              type: 'list',
              name: 'researchModel',
              message: 'Select the research model:',
-              choices: researchModelChoices,
-              default: researchPromptData.default + 2, // +2 for cancel option and separator
-              when: (answers) => answers.mainModel !== '__CANCEL__'
+              choices: researchPromptData.choices,
+              default: researchPromptData.default,
+              when: (ans) => ans.mainModel !== '__CANCEL__'
            },
            {
              type: 'list',
              name: 'fallbackModel',
              message: 'Select the fallback model (optional):',
-              choices: fallbackModelChoices,
-              default: fallbackPromptData.default + 2, // +2 for cancel option and separator
-              when: (answers) =>
-                answers.mainModel !== '__CANCEL__' &&
-                answers.researchModel !== '__CANCEL__'
+              choices: fallbackPromptData.choices,
+              default: fallbackPromptData.default,
+              when: (ans) =>
+                ans.mainModel !== '__CANCEL__' &&
+                ans.researchModel !== '__CANCEL__'
            }
          ]);

-          // Clean up the keypress handler
-          process.stdin.removeAllListeners('keypress');
+          if (process.stdin.isTTY) {
+            process.stdin.pause();
+            process.stdin.removeAllListeners('data');
+            process.stdin.setRawMode(false);
+          }

          // Check if user canceled at any point
          if (
@@ -1803,19 +1810,25 @@ function registerCommands(programInstance) {
            answers.researchModel === '__CANCEL__' ||
            answers.fallbackModel === '__CANCEL__'
          ) {
-            console.log(chalk.yellow('\nSetup canceled. No changes made.'));
+            console.log(chalk.yellow('\\nSetup canceled. No changes made.'));
            return;
          }

          // Apply changes using setModel
          let setupSuccess = true;
          let setupConfigModified = false;
+          const coreOptionsSetup = { projectRoot }; // Pass root for setup actions

          if (
-            answers.mainModel &&
+            answers.mainModel?.id &&
            answers.mainModel.id !== currentModels.main?.modelId
          ) {
-            const result = await setModel('main', answers.mainModel.id);
+            const result = await setModel(
+              'main',
+              answers.mainModel.id,
+              coreOptionsSetup
+            );
            if (result.success) {
              console.log(
                chalk.blue(
@@ -1835,9 +1848,14 @@ function registerCommands(programInstance) {

          if (
-            answers.researchModel &&
+            answers.researchModel?.id &&
            answers.researchModel.id !== currentModels.research?.modelId
          ) {
-            const result = await setModel('research', answers.researchModel.id);
+            const result = await setModel(
+              'research',
+              answers.researchModel.id,
+              coreOptionsSetup
+            );
            if (result.success) {
              console.log(
                chalk.blue(
@@ -1857,12 +1875,18 @@ function registerCommands(programInstance) {

          // Set Fallback Model - Handle 'None' selection
          const currentFallbackId = currentModels.fallback?.modelId;
-          const selectedFallbackId = answers.fallbackModel?.id; // Will be null if 'None' selected
+          const selectedFallbackValue = answers.fallbackModel; // Could be null or model object
+          const selectedFallbackId = selectedFallbackValue?.id; // Undefined if null

          if (selectedFallbackId !== currentFallbackId) {
+            // Compare IDs
            if (selectedFallbackId) {
              // User selected a specific fallback model
-              const result = await setModel('fallback', selectedFallbackId);
+              const result = await setModel(
+                'fallback',
+                selectedFallbackId,
+                coreOptionsSetup
+              );
              if (result.success) {
                console.log(
                  chalk.blue(
@@ -1881,35 +1905,43 @@ function registerCommands(programInstance) {
            } else if (currentFallbackId) {
              // User selected 'None' but a fallback was previously set
              // Need to explicitly clear it in the config file
-              const currentCfg = getConfig();
+              const currentCfg = getConfig(projectRoot); // Pass root
              if (currentCfg?.models?.fallback) {
                // Check if fallback exists before clearing
                currentCfg.models.fallback = {
                  ...currentCfg.models.fallback,
                  provider: undefined,
                  modelId: undefined
                };
-                if (writeConfig(currentCfg)) {
+                if (writeConfig(currentCfg, projectRoot)) {
+                  // Pass root
                  console.log(chalk.blue('Fallback model disabled.'));
                  setupConfigModified = true;
                } else {
                  console.error(
-                    chalk.red('Failed to disable fallback model in config file.')
+                    chalk.red(
+                      'Failed to disable fallback model in config file.'
+                    )
                  );
                  setupSuccess = false;
                }
              } else {
                console.log(chalk.blue('Fallback model was already disabled.'));
              }
            }
            // No action needed if fallback was already null/undefined and user selected None
          }

          if (setupSuccess && setupConfigModified) {
-            console.log(chalk.green.bold('\nModel setup complete!'));
+            console.log(chalk.green.bold('\\nModel setup complete!'));
          } else if (setupSuccess && !setupConfigModified) {
            console.log(
-              chalk.yellow('\nNo changes made to model configuration.')
+              chalk.yellow('\\nNo changes made to model configuration.')
            );
          } else if (!setupSuccess) {
            console.error(
              chalk.red(
-                '\nErrors occurred during model selection. Please review and try again.'
+                '\\nErrors occurred during model selection. Please review and try again.'
              )
            );
          }
@@ -1917,9 +1949,8 @@ function registerCommands(programInstance) {
        }

        // --- Default: Display Current Configuration ---
-        // No longer need to check configModified here, as the set/setup logic returns early
-        // Fetch configuration using the core function
-        const result = await getModelConfiguration();
+        // Fetch configuration using the core function - PASS projectRoot
+        const result = await getModelConfiguration({ projectRoot });

        if (!result.success) {
          // Handle specific CONFIG_MISSING error gracefully
@@ -79,15 +79,25 @@ class ConfigurationError extends Error {

 function _loadAndValidateConfig(explicitRoot = null) {
   const defaults = DEFAULTS; // Use the defined defaults
+  let rootToUse = explicitRoot;
   let configSource = explicitRoot
     ? `explicit root (${explicitRoot})`
     : 'defaults (no root provided yet)';

-  // If no explicit root is provided (e.g., during initial server load),
-  // return defaults immediately and silently.
-  if (!explicitRoot) {
+  // ---> If no explicit root, TRY to find it <---
+  if (!rootToUse) {
+    rootToUse = findProjectRoot();
+    if (rootToUse) {
+      configSource = `found root (${rootToUse})`;
+    } else {
+      // No root found, return defaults immediately
       return defaults;
+    }
   }
+  // ---> End find project root logic <---

-  // --- Proceed with loading from the provided explicitRoot ---
-  const configPath = path.join(explicitRoot, CONFIG_FILE_NAME);
+  // --- Proceed with loading from the determined rootToUse ---
+  const configPath = path.join(rootToUse, CONFIG_FILE_NAME);
   let config = { ...defaults }; // Start with a deep copy of defaults
   let configExists = false;
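The new `_loadAndValidateConfig` flow above boils down to: an explicit root wins, otherwise try to detect one, otherwise fall back to defaults. A standalone sketch of just that resolution step (`findRoot` is a stand-in for the real `findProjectRoot` utility):

```javascript
// Sketch of the root-resolution order used by _loadAndValidateConfig:
// 1) explicit root, 2) detected root, 3) null (caller returns defaults).
function resolveRoot(explicitRoot, findRoot) {
  if (explicitRoot) {
    return { root: explicitRoot, source: `explicit root (${explicitRoot})` };
  }
  const found = findRoot();
  if (found) {
    return { root: found, source: `found root (${found})` };
  }
  return { root: null, source: 'defaults (no root found)' };
}
```

Tracking `source` alongside the root mirrors the `configSource` bookkeeping in the diff, which makes the later "file not found at …" warnings self-explanatory.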
@@ -113,9 +123,10 @@ function _loadAndValidateConfig(explicitRoot = null) {
       },
       global: { ...defaults.global, ...parsedConfig?.global }
     };
+    configSource = `file (${configPath})`; // Update source info

     // --- Validation (Warn if file content is invalid) ---
-    // Only use console.warn here, as this part runs only when an explicitRoot *is* provided
+    // Use log.warn for consistency
     if (!validateProvider(config.models.main.provider)) {
       console.warn(
         chalk.yellow(
@@ -152,17 +163,27 @@ function _loadAndValidateConfig(explicitRoot = null) {
         )
       );
       config = { ...defaults }; // Reset to defaults on parse error
+      configSource = `defaults (parse error at ${configPath})`;
     }
   } else {
-    // Config file doesn't exist at the provided explicitRoot.
-    // Use console.warn because an explicit root *was* given.
+    // Config file doesn't exist at the determined rootToUse.
+    if (explicitRoot) {
+      // Only warn if an explicit root was *expected*.
       console.warn(
         chalk.yellow(
           `Warning: ${CONFIG_FILE_NAME} not found at provided project root (${explicitRoot}). Using default configuration. Run 'task-master models --setup' to configure.`
         )
       );
+    } else {
+      console.warn(
+        chalk.yellow(
+          `Warning: ${CONFIG_FILE_NAME} not found at derived root (${rootToUse}). Using defaults.`
+        )
+      );
+    }
     // Keep config as defaults
     config = { ...defaults };
+    configSource = `defaults (file not found at ${configPath})`;
   }

   return config;
@@ -392,10 +413,11 @@ function isApiKeySet(providerName, session = null) {
  * Checks the API key status within .cursor/mcp.json for a given provider.
  * Reads the mcp.json file, finds the taskmaster-ai server config, and checks the relevant env var.
  * @param {string} providerName The name of the provider.
+ * @param {string|null} projectRoot - Optional explicit path to the project root.
  * @returns {boolean} True if the key exists and is not a placeholder, false otherwise.
  */
-function getMcpApiKeyStatus(providerName) {
-  const rootDir = findProjectRoot(); // Use existing root finding
+function getMcpApiKeyStatus(providerName, projectRoot = null) {
+  const rootDir = projectRoot || findProjectRoot(); // Use existing root finding
   if (!rootDir) {
     console.warn(
       chalk.yellow('Warning: Could not find project root to check mcp.json.')
src/ai-providers/google.js (new file, 167 lines)
@@ -0,0 +1,167 @@
+/**
+ * google.js
+ * AI provider implementation for Google AI models (e.g., Gemini) using Vercel AI SDK.
+ */
+
+// import { GoogleGenerativeAI } from '@ai-sdk/google'; // Incorrect import
+import { createGoogleGenerativeAI } from '@ai-sdk/google'; // Correct import for customization
+import { generateText, streamText, generateObject } from 'ai'; // Import from main 'ai' package
+import { log } from '../../scripts/modules/utils.js'; // Import logging utility
+
+// Consider making model configurable via config-manager.js later
+const DEFAULT_MODEL = 'gemini-2.0-pro'; // Or a suitable default
+const DEFAULT_TEMPERATURE = 0.2; // Or a suitable default
+
+/**
+ * Generates text using a Google AI model.
+ *
+ * @param {object} params - Parameters for the generation.
+ * @param {string} params.apiKey - Google API Key.
+ * @param {string} params.modelId - Specific model ID to use (overrides default).
+ * @param {number} params.temperature - Generation temperature.
+ * @param {Array<object>} params.messages - The conversation history (system/user prompts).
+ * @param {number} [params.maxTokens] - Optional max tokens.
+ * @returns {Promise<string>} The generated text content.
+ * @throws {Error} If API key is missing or API call fails.
+ */
+async function generateGoogleText({
+  apiKey,
+  modelId = DEFAULT_MODEL,
+  temperature = DEFAULT_TEMPERATURE,
+  messages,
+  maxTokens // Note: Vercel SDK might handle this differently, needs verification
+}) {
+  if (!apiKey) {
+    throw new Error('Google API key is required.');
+  }
+  log('info', `Generating text with Google model: ${modelId}`);
+
+  try {
+    // const google = new GoogleGenerativeAI({ apiKey }); // Incorrect instantiation
+    const googleProvider = createGoogleGenerativeAI({ apiKey }); // Correct instantiation
+    // const model = google.getGenerativeModel({ model: modelId }); // Incorrect model retrieval
+    const model = googleProvider(modelId); // Correct model retrieval
+
+    // Construct payload suitable for Vercel SDK's generateText
+    // Note: The exact structure might depend on how messages are passed
+    const result = await generateText({
+      model, // Pass the model instance
+      messages, // Pass the messages array directly
+      temperature,
+      maxOutputTokens: maxTokens // Map to correct Vercel SDK param if available
+    });
+
+    // Assuming result structure provides text directly or within a property
+    return result.text; // Adjust based on actual SDK response
+  } catch (error) {
+    log(
+      'error',
+      `Error generating text with Google (${modelId}): ${error.message}`
+    );
+    throw error; // Re-throw for unified service handler
+  }
+}
+
+/**
+ * Streams text using a Google AI model.
+ *
+ * @param {object} params - Parameters for the streaming.
+ * @param {string} params.apiKey - Google API Key.
+ * @param {string} params.modelId - Specific model ID to use (overrides default).
+ * @param {number} params.temperature - Generation temperature.
+ * @param {Array<object>} params.messages - The conversation history.
+ * @param {number} [params.maxTokens] - Optional max tokens.
+ * @returns {Promise<ReadableStream>} A readable stream of text deltas.
+ * @throws {Error} If API key is missing or API call fails.
+ */
+async function streamGoogleText({
+  apiKey,
+  modelId = DEFAULT_MODEL,
+  temperature = DEFAULT_TEMPERATURE,
+  messages,
+  maxTokens
+}) {
+  if (!apiKey) {
+    throw new Error('Google API key is required.');
+  }
+  log('info', `Streaming text with Google model: ${modelId}`);
+
+  try {
+    // const google = new GoogleGenerativeAI({ apiKey }); // Incorrect instantiation
+    const googleProvider = createGoogleGenerativeAI({ apiKey }); // Correct instantiation
+    // const model = google.getGenerativeModel({ model: modelId }); // Incorrect model retrieval
+    const model = googleProvider(modelId); // Correct model retrieval
+
+    const stream = await streamText({
+      model, // Pass the model instance
+      messages,
+      temperature,
+      maxOutputTokens: maxTokens
+    });
+
+    return stream; // Return the stream directly
+  } catch (error) {
+    log(
+      'error',
+      `Error streaming text with Google (${modelId}): ${error.message}`
+    );
+    throw error;
+  }
+}
+
+/**
+ * Generates a structured object using a Google AI model.
+ *
+ * @param {object} params - Parameters for the object generation.
+ * @param {string} params.apiKey - Google API Key.
+ * @param {string} params.modelId - Specific model ID to use (overrides default).
+ * @param {number} params.temperature - Generation temperature.
+ * @param {Array<object>} params.messages - The conversation history.
+ * @param {import('zod').ZodSchema} params.schema - Zod schema for the expected object.
+ * @param {string} params.objectName - Name for the object generation context.
+ * @param {number} [params.maxTokens] - Optional max tokens.
+ * @returns {Promise<object>} The generated object matching the schema.
+ * @throws {Error} If API key is missing or API call fails.
+ */
+async function generateGoogleObject({
|
||||
apiKey,
|
||||
modelId = DEFAULT_MODEL,
|
||||
temperature = DEFAULT_TEMPERATURE,
|
||||
messages,
|
||||
schema,
|
||||
objectName, // Note: Vercel SDK might use this differently or not at all
|
||||
maxTokens
|
||||
}) {
|
||||
if (!apiKey) {
|
||||
throw new Error('Google API key is required.');
|
||||
}
|
||||
log('info', `Generating object with Google model: ${modelId}`);
|
||||
|
||||
try {
|
||||
// const google = new GoogleGenerativeAI({ apiKey }); // Incorrect instantiation
|
||||
const googleProvider = createGoogleGenerativeAI({ apiKey }); // Correct instantiation
|
||||
// const model = google.getGenerativeModel({ model: modelId }); // Incorrect model retrieval
|
||||
const model = googleProvider(modelId); // Correct model retrieval
|
||||
|
||||
const { object } = await generateObject({
|
||||
model, // Pass the model instance
|
||||
schema,
|
||||
messages,
|
||||
temperature,
|
||||
maxOutputTokens: maxTokens
|
||||
// Note: 'objectName' or 'mode' might not be directly applicable here
|
||||
// depending on how `@ai-sdk/google` handles `generateObject`.
|
||||
// Check SDK docs if specific tool calling/JSON mode needs explicit setup.
|
||||
});
|
||||
|
||||
return object; // Return the parsed object
|
||||
} catch (error) {
|
||||
log(
|
||||
'error',
|
||||
`Error generating object with Google (${modelId}): ${error.message}`
|
||||
);
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
export { generateGoogleText, streamGoogleText, generateGoogleObject };
|
||||
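The three exported functions above share one parameter shape, which is what lets a unified service layer swap providers by configured role. A hypothetical sketch of that role-based dispatch with fallback — the registry shape and function names are assumptions for illustration, not the actual `ai-services-unified.js`:

```javascript
// Hypothetical unified-service sketch: providers is a map of provider name
// to a module exposing the standardized generateText signature; roles maps
// role names to 'provider:modelId' pairs as shown by `task-master models`.
function createUnifiedService(providers, roles) {
	function resolve(role) {
		const spec = roles[role];
		if (!spec) throw new Error(`No model configured for role: ${role}`);
		const [provider, modelId] = spec.split(':');
		const mod = providers[provider];
		if (!mod) throw new Error(`Unknown provider: ${provider}`);
		return { mod, modelId };
	}

	return {
		async generateText(params) {
			const { mod, modelId } = resolve('main');
			try {
				return await mod.generateText({ ...params, modelId });
			} catch (err) {
				// If the main provider fails, retry once with the fallback role.
				const fb = resolve('fallback');
				return await fb.mod.generateText({ ...params, modelId: fb.modelId });
			}
		}
	};
}
```

This is why the google.js functions above re-throw errors rather than swallowing them: the unified layer needs the rejection to trigger fallback handling.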
@@ -1,6 +1,6 @@
# Task ID: 37
# Title: Add Gemini Support for Main AI Services as Claude Alternative
-# Status: pending
+# Status: done
# Dependencies: None
# Priority: medium
# Description: Implement Google's Gemini API integration as an alternative to Claude for all main AI services, allowing users to switch between different LLM providers.

@@ -1431,6 +1431,91 @@ function checkProviderCapability(provider, capability) {
### Details:


<info added on 2025-04-27T00:00:46.675Z>
```javascript
// Implementation details for google.js provider module

// 1. Required imports
import { GoogleGenerativeAI } from "@ai-sdk/google";
import { streamText, generateText, generateObject } from "@ai-sdk/core";

// 2. Model configuration
const DEFAULT_MODEL = "gemini-1.5-pro"; // Default model, can be overridden
const TEMPERATURE_DEFAULT = 0.7;

// 3. Function implementations
export async function generateGoogleText({
  prompt,
  model = DEFAULT_MODEL,
  temperature = TEMPERATURE_DEFAULT,
  apiKey
}) {
  if (!apiKey) throw new Error("Google API key is required");

  const googleAI = new GoogleGenerativeAI(apiKey);
  const googleModel = googleAI.getGenerativeModel({ model });

  const result = await generateText({
    model: googleModel,
    prompt,
    temperature
  });

  return result;
}

export async function streamGoogleText({
  prompt,
  model = DEFAULT_MODEL,
  temperature = TEMPERATURE_DEFAULT,
  apiKey
}) {
  if (!apiKey) throw new Error("Google API key is required");

  const googleAI = new GoogleGenerativeAI(apiKey);
  const googleModel = googleAI.getGenerativeModel({ model });

  const stream = await streamText({
    model: googleModel,
    prompt,
    temperature
  });

  return stream;
}

export async function generateGoogleObject({
  prompt,
  schema,
  model = DEFAULT_MODEL,
  temperature = TEMPERATURE_DEFAULT,
  apiKey
}) {
  if (!apiKey) throw new Error("Google API key is required");

  const googleAI = new GoogleGenerativeAI(apiKey);
  const googleModel = googleAI.getGenerativeModel({ model });

  const result = await generateObject({
    model: googleModel,
    prompt,
    schema,
    temperature
  });

  return result;
}

// 4. Environment variable setup in .env.local
// GOOGLE_API_KEY=your_google_api_key_here

// 5. Error handling considerations
// - Implement proper error handling for API rate limits
// - Add retries for transient failures
// - Consider adding logging for debugging purposes
```
</info added on 2025-04-27T00:00:46.675Z>

## 25. Implement `ollama.js` Provider Module [pending]
### Dependencies: None
### Description: Create and implement the `ollama.js` module within `src/ai-providers/`. This module should contain functions to interact with local Ollama models using the **`ollama-ai-provider` library**, adhering to the standardized input/output format defined for `ai-services-unified.js`. Note the specific library used.

@@ -9,3 +9,35 @@ This task has two main components:\n\n1. Add `--json` flag to all relevant CLI c

# Test Strategy:
1. JSON output testing:
   - Unit tests for each command with the --json flag
   - Verify JSON schema consistency across commands
   - Validate that all necessary task data is included in the JSON output
   - Test piping output to other commands like jq

2. Keybindings command testing:
   - Test on different OSes (macOS, Windows, Linux)
   - Verify correct path detection for Cursor's keybindings.json
   - Test behavior when file doesn't exist
   - Test behavior when existing keybindings conflict
   - Validate the installed keybindings work as expected
   - Test uninstall/restore functionality

# Subtasks:
## 1. Implement Core JSON Output Logic for `next` and `show` Commands [pending]
### Dependencies: None
### Description: Modify the command handlers for `task-master next` and `task-master show <id>` to recognize and handle a `--json` flag. When the flag is present, output the raw data received from MCP tools directly as JSON.
### Details:
Use a CLI argument parsing library (e.g., argparse, click, commander) to add the `--json` boolean flag. In the command execution logic, check if the flag is set. If true, serialize the data object (before any human-readable formatting) into a JSON string and print it to stdout. If false, proceed with the existing formatting logic. Focus on these two commands first to establish the pattern.
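The flag handling described in these details can be sketched as a small helper; the function name and option shape here are illustrative, not Taskmaster's actual code:

```javascript
// Hypothetical output helper: emit raw JSON when --json is set, otherwise
// fall back to human-readable formatting (greatly simplified here).
function renderOutput(data, options = {}) {
	if (options.json) {
		// Serialize the data object before any human-readable formatting.
		return JSON.stringify(data, null, 2);
	}
	return `Task ${data.id}: ${data.title} [${data.status}]`;
}
```

With commander, the flag itself would be declared via `.option('--json', '...')` and the result printed with `console.log(renderOutput(data, opts))`.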

## 2. Extend JSON Output to All Relevant Commands and Ensure Schema Consistency [pending]
### Dependencies: 67.1
### Description: Apply the JSON output pattern established in subtask 1 to all other relevant Taskmaster CLI commands that display data (e.g., `list`, `status`, etc.). Ensure the JSON structure is consistent where applicable (e.g., task objects should have the same fields). Add help text mentioning the `--json` flag for each modified command.
### Details:
Identify all commands that output structured data. Refactor the JSON output logic into a reusable utility function if possible. Define a standard schema for common data types like tasks. Update the help documentation for each command to include the `--json` flag description. Ensure error outputs are also handled appropriately (e.g., potentially outputting JSON error objects).

## 3. Create `install-keybindings` Command Structure and OS Detection [pending]
### Dependencies: None
### Description: Set up the basic structure for the new `task-master install-keybindings` command. Implement logic to detect the user's operating system (Linux, macOS, Windows) and determine the default path to Cursor's `keybindings.json` file.
### Details:
Add a new command entry point using the CLI framework. Use standard library functions (e.g., `os.platform()` in Node, `platform.system()` in Python) to detect the OS. Define constants or a configuration map for the default `keybindings.json` paths for each supported OS. Handle cases where the path might vary (e.g., different installation methods for Cursor). Add basic help text for the new command.

## 4. Implement Keybinding File Handling and Backup Logic [pending]
### Dependencies: 67.3
### Description: Implement the core logic within the `install-keybindings` command to read the target `keybindings.json` file. If it exists, create a backup. If it doesn't exist, create a new file with an empty JSON array `[]`. Prepare the structure to add new keybindings.
### Details:
Use file system modules to check for file existence, read, write, and copy files. Implement a backup mechanism (e.g., copy `keybindings.json` to `keybindings.json.bak`). Handle potential file I/O errors gracefully (e.g., permissions issues). Parse the existing JSON content; if parsing fails, report an error and potentially abort. Ensure the file is created with `[]` if it's missing.
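A minimal Node.js sketch of the read-back-up-or-initialize flow described in this subtask (the function name is illustrative):

```javascript
import fs from 'node:fs';

// Hypothetical sketch of subtask 4: back up an existing keybindings.json
// before touching it, abort on unparseable content, and initialize a
// missing file with an empty array.
function loadKeybindings(filePath) {
	if (fs.existsSync(filePath)) {
		// Back up the user's file before any modification.
		fs.copyFileSync(filePath, `${filePath}.bak`);
		const raw = fs.readFileSync(filePath, 'utf8');
		try {
			const parsed = JSON.parse(raw);
			if (!Array.isArray(parsed)) throw new Error('expected a JSON array');
			return parsed;
		} catch (err) {
			// Abort rather than clobber a file we cannot parse.
			throw new Error(`Could not parse ${filePath}: ${err.message}`);
		}
	}
	// Missing file: create it with an empty array.
	fs.writeFileSync(filePath, '[]');
	return [];
}
```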

## 5. Add Taskmaster Keybindings, Prevent Duplicates, and Support Customization [pending]
### Dependencies: 67.4
### Description: Define the specific Taskmaster keybindings (e.g., next task to clipboard, status update, open agent chat) and implement the logic to merge them into the user's `keybindings.json` data. Prevent adding duplicate keybindings (based on command ID or key combination). Add support for custom key combinations via command flags.
### Details:
Define the desired keybindings as a list of JSON objects following Cursor's format. Before adding, iterate through the existing keybindings (parsed in subtask 4) to check if a Taskmaster keybinding with the same command or key combination already exists. If not, append the new keybinding to the list. Add command-line flags (e.g., `--next-key='ctrl+alt+n'`) to allow users to override default key combinations. Serialize the updated list back to JSON and write it to the `keybindings.json` file.
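The duplicate check described in this subtask might look like this; the default binding list and command IDs are placeholders, not Taskmaster's actual defaults:

```javascript
// Hypothetical defaults: command IDs and key combinations are illustrative.
const TASKMASTER_BINDINGS = [
	{ key: 'ctrl+alt+n', command: 'taskmaster.nextTaskToClipboard' },
	{ key: 'ctrl+alt+s', command: 'taskmaster.setTaskStatus' }
];

// Merge new bindings into the existing list, skipping any whose command ID
// or key combination is already taken.
function mergeKeybindings(existing, additions = TASKMASTER_BINDINGS) {
	const taken = new Set();
	for (const b of existing) {
		taken.add(`cmd:${b.command}`);
		taken.add(`key:${b.key}`);
	}
	const merged = [...existing];
	for (const b of additions) {
		if (taken.has(`cmd:${b.command}`) || taken.has(`key:${b.key}`)) continue;
		merged.push(b);
	}
	return merged;
}
```

Custom key flags would simply override the `key` fields of the defaults before the merge runs.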

11
tasks/task_068.txt
Normal file
@@ -0,0 +1,11 @@
# Task ID: 68
# Title: Ability to create tasks without parsing PRD
# Status: pending
# Dependencies: None
# Priority: medium
# Description: When we create a task, if there's no tasks.json, we should create it by calling the same function used by parse-prd. This lets Taskmaster be used without a PRD as a starting point.
# Details:


# Test Strategy:

59
tasks/task_069.txt
Normal file
@@ -0,0 +1,59 @@
# Task ID: 69
# Title: Enhance Analyze Complexity for Specific Task IDs
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Modify the analyze-complexity feature (CLI and MCP) to allow analyzing only specified task IDs and append/update results in the report.
# Details:

Implementation Plan:

1. **Core Logic (`scripts/modules/task-manager/analyze-task-complexity.js`):**
   * Modify the function signature to accept an optional `options.ids` parameter (string, comma-separated IDs).
   * If `options.ids` is present:
     * Parse the `ids` string into an array of target IDs.
     * Filter `tasksData.tasks` to *only* include tasks matching the target IDs. Use this filtered list for analysis.
     * Handle cases where provided IDs don't exist in `tasks.json`.
   * If `options.ids` is *not* present: Continue with existing logic (filtering by active status).
   * **Report Handling:**
     * Before generating the analysis, check if the `outputPath` report file exists.
     * If it exists, read the existing `complexityAnalysis` array.
     * Generate the new analysis *only* for the target tasks (filtered by ID or status).
     * Merge the results: Remove any entries from the *existing* array that match the IDs analyzed in the *current run*. Then, append the *new* analysis results to the array.
     * Update the `meta` section (`generatedAt`, `tasksAnalyzed` should reflect *this run*).
     * Write the *merged* `complexityAnalysis` array and updated `meta` back to the report file.
     * If the report file doesn't exist, create it as usual.
   * **Prompt Generation:** Ensure `generateInternalComplexityAnalysisPrompt` receives the correctly filtered list of tasks.

2. **CLI (`scripts/modules/commands.js`):**
   * Add a new option `--id <ids>` to the `analyze-complexity` command definition. Description: "Comma-separated list of specific task IDs to analyze".
   * In the `.action` handler:
     * Check if `options.id` is provided.
     * If yes, pass `options.id` (as the comma-separated string) to the `analyzeTaskComplexity` core function via the `options` object.
   * Update user feedback messages to indicate specific task analysis.

3. **MCP Tool (`mcp-server/src/tools/analyze.js`):**
   * Add a new optional parameter `ids: z.string().optional().describe("Comma-separated list of task IDs to analyze specifically")` to the Zod schema for the `analyze_project_complexity` tool.
   * In the `execute` method, pass `args.ids` to the `analyzeTaskComplexityDirect` function within its `args` object.

4. **Direct Function (`mcp-server/src/core/direct-functions/analyze-task-complexity.js`):**
   * Update the function to receive the `ids` string within the `args` object.
   * Pass the `ids` string along to the core `analyzeTaskComplexity` function within its `options` object.

5. **Documentation:** Update relevant rule files (`commands.mdc`, `taskmaster.mdc`) to reflect the new `--id` option/parameter.

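The merge described in the Report Handling steps above reduces to a pure function; the entry field name `taskId` is an assumption about the report shape:

```javascript
// Hypothetical sketch of the report merge: drop prior entries for any task
// re-analyzed in this run, then append the fresh results.
function mergeComplexityReport(existingAnalysis, newAnalysis) {
	const analyzedIds = new Set(newAnalysis.map((e) => e.taskId));
	const kept = existingAnalysis.filter((e) => !analyzedIds.has(e.taskId));
	return [...kept, ...newAnalysis];
}
```

Keeping the merge pure makes the append/update behavior easy to unit-test independently of file I/O.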
# Test Strategy:

1. **CLI:**
   * Run `task-master analyze-complexity --id=<id1>` (where report doesn't exist). Verify report created with only task id1.
   * Run `task-master analyze-complexity --id=<id2>` (where report exists). Verify report updated, containing analysis for both id1 and id2 (id2 replaces any previous id2 analysis).
   * Run `task-master analyze-complexity --id=<id1>,<id3>`. Verify report updated, containing id1, id2, id3.
   * Run `task-master analyze-complexity` (no id). Verify it analyzes *all* active tasks and updates the report accordingly, merging with previous specific analyses.
   * Test with invalid/non-existent IDs.
2. **MCP:**
   * Call `analyze_project_complexity` tool with `ids: "<id1>"`. Verify report creation/update.
   * Call `analyze_project_complexity` tool with `ids: "<id1>,<id2>"`. Verify report merging.
   * Call `analyze_project_complexity` tool without `ids`. Verify full analysis and merging.
3. Verify report `meta` section is updated correctly on each run.

@@ -2308,7 +2308,7 @@
      "id": 37,
      "title": "Add Gemini Support for Main AI Services as Claude Alternative",
      "description": "Implement Google's Gemini API integration as an alternative to Claude for all main AI services, allowing users to switch between different LLM providers.",
-      "status": "pending",
+      "status": "done",
      "dependencies": [],
      "priority": "medium",
      "details": "This task involves integrating Google's Gemini API across all main AI services that currently use Claude:\n\n1. Create a new GeminiService class that implements the same interface as the existing ClaudeService\n2. Implement authentication and API key management for Gemini API\n3. Map our internal prompt formats to Gemini's expected input format\n4. Handle Gemini-specific parameters (temperature, top_p, etc.) and response parsing\n5. Update the AI service factory/provider to support selecting Gemini as an alternative\n6. Add configuration options in settings to allow users to select Gemini as their preferred provider\n7. Implement proper error handling for Gemini-specific API errors\n8. Ensure streaming responses are properly supported if Gemini offers this capability\n9. Update documentation to reflect the new Gemini option\n10. Consider implementing model selection if Gemini offers multiple models (e.g., Gemini Pro, Gemini Ultra)\n11. Ensure all existing AI capabilities (summarization, code generation, etc.) maintain feature parity when using Gemini\n\nThe implementation should follow the same pattern as the recent Ollama integration (Task #36) to maintain consistency in how alternative AI providers are supported.",
@@ -3251,7 +3251,7 @@
      "id": 24,
      "title": "Implement `google.js` Provider Module using Vercel AI SDK",
      "description": "Create and implement the `google.js` module within `src/ai-providers/`. This module should contain functions to interact with Google AI models (e.g., Gemini) using the **Vercel AI SDK (`@ai-sdk/google`)**, adhering to the standardized input/output format defined for `ai-services-unified.js`.",
-      "details": "",
+      "details": "\n\n<info added on 2025-04-27T00:00:46.675Z>\n```javascript\n// Implementation details for google.js provider module\n\n// 1. Required imports\nimport { GoogleGenerativeAI } from \"@ai-sdk/google\";\nimport { streamText, generateText, generateObject } from \"@ai-sdk/core\";\n\n// 2. Model configuration\nconst DEFAULT_MODEL = \"gemini-1.5-pro\"; // Default model, can be overridden\nconst TEMPERATURE_DEFAULT = 0.7;\n\n// 3. Function implementations\nexport async function generateGoogleText({ \n prompt, \n model = DEFAULT_MODEL, \n temperature = TEMPERATURE_DEFAULT,\n apiKey \n}) {\n if (!apiKey) throw new Error(\"Google API key is required\");\n \n const googleAI = new GoogleGenerativeAI(apiKey);\n const googleModel = googleAI.getGenerativeModel({ model });\n \n const result = await generateText({\n model: googleModel,\n prompt,\n temperature\n });\n \n return result;\n}\n\nexport async function streamGoogleText({ \n prompt, \n model = DEFAULT_MODEL, \n temperature = TEMPERATURE_DEFAULT,\n apiKey \n}) {\n if (!apiKey) throw new Error(\"Google API key is required\");\n \n const googleAI = new GoogleGenerativeAI(apiKey);\n const googleModel = googleAI.getGenerativeModel({ model });\n \n const stream = await streamText({\n model: googleModel,\n prompt,\n temperature\n });\n \n return stream;\n}\n\nexport async function generateGoogleObject({ \n prompt, \n schema,\n model = DEFAULT_MODEL, \n temperature = TEMPERATURE_DEFAULT,\n apiKey \n}) {\n if (!apiKey) throw new Error(\"Google API key is required\");\n \n const googleAI = new GoogleGenerativeAI(apiKey);\n const googleModel = googleAI.getGenerativeModel({ model });\n \n const result = await generateObject({\n model: googleModel,\n prompt,\n schema,\n temperature\n });\n \n return result;\n}\n\n// 4. Environment variable setup in .env.local\n// GOOGLE_API_KEY=your_google_api_key_here\n\n// 5. Error handling considerations\n// - Implement proper error handling for API rate limits\n// - Add retries for transient failures\n// - Consider adding logging for debugging purposes\n```\n</info added on 2025-04-27T00:00:46.675Z>",
      "status": "pending",
      "dependencies": [],
      "parentTaskId": 61
@@ -3801,6 +3801,80 @@
      "status": "pending",
      "dependencies": [],
      "priority": "high",
      "subtasks": [
        {
          "id": 1,
          "title": "Implement Core JSON Output Logic for `next` and `show` Commands",
          "description": "Modify the command handlers for `task-master next` and `task-master show <id>` to recognize and handle a `--json` flag. When the flag is present, output the raw data received from MCP tools directly as JSON.",
          "dependencies": [],
          "details": "Use a CLI argument parsing library (e.g., argparse, click, commander) to add the `--json` boolean flag. In the command execution logic, check if the flag is set. If true, serialize the data object (before any human-readable formatting) into a JSON string and print it to stdout. If false, proceed with the existing formatting logic. Focus on these two commands first to establish the pattern.",
          "status": "pending",
          "testStrategy": "Run `task-master next --json` and `task-master show <some_id> --json`. Verify the output is valid JSON and contains the expected data fields. Compare with non-JSON output to ensure data consistency."
        },
        {
          "id": 2,
          "title": "Extend JSON Output to All Relevant Commands and Ensure Schema Consistency",
          "description": "Apply the JSON output pattern established in subtask 1 to all other relevant Taskmaster CLI commands that display data (e.g., `list`, `status`, etc.). Ensure the JSON structure is consistent where applicable (e.g., task objects should have the same fields). Add help text mentioning the `--json` flag for each modified command.",
          "dependencies": [
            1
          ],
          "details": "Identify all commands that output structured data. Refactor the JSON output logic into a reusable utility function if possible. Define a standard schema for common data types like tasks. Update the help documentation for each command to include the `--json` flag description. Ensure error outputs are also handled appropriately (e.g., potentially outputting JSON error objects).",
          "status": "pending",
          "testStrategy": "Test the `--json` flag on all modified commands with various inputs. Validate the output against the defined JSON schemas. Check help text using `--help` flag for each command."
        },
        {
          "id": 3,
          "title": "Create `install-keybindings` Command Structure and OS Detection",
          "description": "Set up the basic structure for the new `task-master install-keybindings` command. Implement logic to detect the user's operating system (Linux, macOS, Windows) and determine the default path to Cursor's `keybindings.json` file.",
          "dependencies": [],
          "details": "Add a new command entry point using the CLI framework. Use standard library functions (e.g., `os.platform()` in Node, `platform.system()` in Python) to detect the OS. Define constants or a configuration map for the default `keybindings.json` paths for each supported OS. Handle cases where the path might vary (e.g., different installation methods for Cursor). Add basic help text for the new command.",
          "status": "pending",
          "testStrategy": "Run the command stub on different OSes (or mock the OS detection) and verify it correctly identifies the expected default path. Test edge cases like unsupported OS."
        },
        {
          "id": 4,
          "title": "Implement Keybinding File Handling and Backup Logic",
          "description": "Implement the core logic within the `install-keybindings` command to read the target `keybindings.json` file. If it exists, create a backup. If it doesn't exist, create a new file with an empty JSON array `[]`. Prepare the structure to add new keybindings.",
          "dependencies": [
            3
          ],
          "details": "Use file system modules to check for file existence, read, write, and copy files. Implement a backup mechanism (e.g., copy `keybindings.json` to `keybindings.json.bak`). Handle potential file I/O errors gracefully (e.g., permissions issues). Parse the existing JSON content; if parsing fails, report an error and potentially abort. Ensure the file is created with `[]` if it's missing.",
          "status": "pending",
          "testStrategy": "Test file handling scenarios: file exists, file doesn't exist, file exists but is invalid JSON, file exists but has no write permissions (if possible to simulate). Verify backup file creation."
        },
        {
          "id": 5,
          "title": "Add Taskmaster Keybindings, Prevent Duplicates, and Support Customization",
          "description": "Define the specific Taskmaster keybindings (e.g., next task to clipboard, status update, open agent chat) and implement the logic to merge them into the user's `keybindings.json` data. Prevent adding duplicate keybindings (based on command ID or key combination). Add support for custom key combinations via command flags.",
          "dependencies": [
            4
          ],
          "details": "Define the desired keybindings as a list of JSON objects following Cursor's format. Before adding, iterate through the existing keybindings (parsed in subtask 4) to check if a Taskmaster keybinding with the same command or key combination already exists. If not, append the new keybinding to the list. Add command-line flags (e.g., `--next-key='ctrl+alt+n'`) to allow users to override default key combinations. Serialize the updated list back to JSON and write it to the `keybindings.json` file.",
          "status": "pending",
          "testStrategy": "Test adding keybindings to an empty file, a file with existing non-Taskmaster keybindings, and a file that already contains some Taskmaster keybindings (to test duplicate prevention). Test overriding default keys using flags. Manually inspect the resulting `keybindings.json` file and test the keybindings within Cursor if possible."
        }
      ]
    },
    {
      "id": 68,
      "title": "Ability to create tasks without parsing PRD",
      "description": "When we create a task, if there's no tasks.json, we should create it by calling the same function used by parse-prd. This lets Taskmaster be used without a PRD as a starting point.",
      "details": "",
      "testStrategy": "",
      "status": "pending",
      "dependencies": [],
      "priority": "medium",
      "subtasks": []
    },
    {
      "id": 69,
      "title": "Enhance Analyze Complexity for Specific Task IDs",
      "description": "Modify the analyze-complexity feature (CLI and MCP) to allow analyzing only specified task IDs and append/update results in the report.",
      "details": "\nImplementation Plan:\n\n1. **Core Logic (`scripts/modules/task-manager/analyze-task-complexity.js`):**\n * Modify the function signature to accept an optional `options.ids` parameter (string, comma-separated IDs).\n * If `options.ids` is present:\n * Parse the `ids` string into an array of target IDs.\n * Filter `tasksData.tasks` to *only* include tasks matching the target IDs. Use this filtered list for analysis.\n * Handle cases where provided IDs don't exist in `tasks.json`.\n * If `options.ids` is *not* present: Continue with existing logic (filtering by active status).\n * **Report Handling:**\n * Before generating the analysis, check if the `outputPath` report file exists.\n * If it exists, read the existing `complexityAnalysis` array.\n * Generate the new analysis *only* for the target tasks (filtered by ID or status).\n * Merge the results: Remove any entries from the *existing* array that match the IDs analyzed in the *current run*. Then, append the *new* analysis results to the array.\n * Update the `meta` section (`generatedAt`, `tasksAnalyzed` should reflect *this run*).\n * Write the *merged* `complexityAnalysis` array and updated `meta` back to the report file.\n * If the report file doesn't exist, create it as usual.\n * **Prompt Generation:** Ensure `generateInternalComplexityAnalysisPrompt` receives the correctly filtered list of tasks.\n\n2. **CLI (`scripts/modules/commands.js`):**\n * Add a new option `--id <ids>` to the `analyze-complexity` command definition. Description: \"Comma-separated list of specific task IDs to analyze\".\n * In the `.action` handler:\n * Check if `options.id` is provided.\n * If yes, pass `options.id` (as the comma-separated string) to the `analyzeTaskComplexity` core function via the `options` object.\n * Update user feedback messages to indicate specific task analysis.\n\n3. **MCP Tool (`mcp-server/src/tools/analyze.js`):**\n * Add a new optional parameter `ids: z.string().optional().describe(\"Comma-separated list of task IDs to analyze specifically\")` to the Zod schema for the `analyze_project_complexity` tool.\n * In the `execute` method, pass `args.ids` to the `analyzeTaskComplexityDirect` function within its `args` object.\n\n4. **Direct Function (`mcp-server/src/core/direct-functions/analyze-task-complexity.js`):**\n * Update the function to receive the `ids` string within the `args` object.\n * Pass the `ids` string along to the core `analyzeTaskComplexity` function within its `options` object.\n\n5. **Documentation:** Update relevant rule files (`commands.mdc`, `taskmaster.mdc`) to reflect the new `--id` option/parameter.\n",
      "testStrategy": "\n1. **CLI:**\n * Run `task-master analyze-complexity --id=<id1>` (where report doesn't exist). Verify report created with only task id1.\n * Run `task-master analyze-complexity --id=<id2>` (where report exists). Verify report updated, containing analysis for both id1 and id2 (id2 replaces any previous id2 analysis).\n * Run `task-master analyze-complexity --id=<id1>,<id3>`. Verify report updated, containing id1, id2, id3.\n * Run `task-master analyze-complexity` (no id). Verify it analyzes *all* active tasks and updates the report accordingly, merging with previous specific analyses.\n * Test with invalid/non-existent IDs.\n2. **MCP:**\n * Call `analyze_project_complexity` tool with `ids: \"<id1>\"`. Verify report creation/update.\n * Call `analyze_project_complexity` tool with `ids: \"<id1>,<id2>\"`. Verify report merging.\n * Call `analyze_project_complexity` tool without `ids`. Verify full analysis and merging.\n3. Verify report `meta` section is updated correctly on each run.\n",
      "status": "pending",
      "dependencies": [],
      "priority": "medium",
      "subtasks": []
    }
  ]
