Commit Graph

13 Commits

Author SHA1 Message Date
Eyal Toledano
fcd80623b6 linting 2025-05-17 18:43:15 -04:00
Eyal Toledano
026815353f fix(ai): Correctly imports generateText in openai.js, adds a specific cause and reason for OpenRouter failures in the openrouter.js catch block, runs complexity analysis on all tm tasks, and adds new tasks to further improve maxTokens handling so it takes both the input and output maximums into account. Also adjusts the default fallback max tokens so 3.5 does not fail. 2025-05-17 18:42:57 -04:00
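The maxTokens change above can be pictured as a small clamp that respects both the model's output ceiling and whatever context the prompt has already consumed. This is a hedged sketch with assumed names and limits, not the repository's actual code; only the 64000-token output cap for claude-3-7-sonnet-20250219 is taken from the log quoted in a later commit below.

```js
const MODEL_LIMITS = {
  // Illustrative numbers only; real limits would live in the project's model config.
  'claude-3-7-sonnet-20250219': { context: 200000, maxOutput: 64000 },
  'gpt-3.5-turbo': { context: 16385, maxOutput: 4096 }
};

// Clamp a requested completion budget to what the model actually allows for
// output tokens, leaving room for the tokens the prompt already uses.
function resolveMaxTokens(modelId, promptTokens, requestedMax) {
  const limits = MODEL_LIMITS[modelId];
  if (!limits) return requestedMax; // unknown model: trust the caller
  const roomLeft = limits.context - promptTokens; // input consumes part of the window
  return Math.max(1, Math.min(requestedMax, limits.maxOutput, roomLeft));
}

// resolveMaxTokens('claude-3-7-sonnet-20250219', 12000, 100000) -> 64000
```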
Eyal Toledano
59230c4d91 chore: task management and formatting. 2025-05-09 14:12:21 -04:00
Eyal Toledano
04b6a3cb21 feat(telemetry): Integrate AI usage telemetry into analyze-complexity
This commit applies the standard telemetry pattern to the analyze-task-complexity command and its corresponding MCP tool.

Key Changes:

1.  Core Logic (scripts/modules/task-manager/analyze-task-complexity.js):
    -   The call to generateTextService now includes commandName: 'analyze-complexity' and outputType.
    -   The full response { mainResult, telemetryData } is captured.
    -   mainResult (the AI-generated text) is used for parsing the complexity report JSON.
    -   If running in CLI mode (outputFormat === 'text'), displayAiUsageSummary is called with the telemetryData.
    -   The function now returns { report: ..., telemetryData: ... }.

2.  Direct Function (mcp-server/src/core/direct-functions/analyze-task-complexity.js):
    -   The call to the core analyzeTaskComplexity function now passes the necessary context for telemetry (commandName, outputType).
    -   The successful response object now correctly extracts coreResult.telemetryData and includes it in the data.telemetryData field returned to the MCP client.
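A minimal sketch of the telemetry pattern described in the key changes above; `generateTextService`, `displayAiUsageSummary`, `commandName`, and `outputType` come from the commit text, but the surrounding signature and the prompt handling are assumptions, not the actual implementation.

```js
// Hypothetical sketch of the core-logic side of the telemetry pattern.
// The service and display helpers are injected here to keep the sketch self-contained.
async function analyzeTaskComplexity(
  prompt,
  { generateTextService, displayAiUsageSummary, outputFormat = 'text', session } = {}
) {
  // The service call carries the command name and output type for telemetry.
  const { mainResult, telemetryData } = await generateTextService({
    prompt,
    commandName: 'analyze-complexity',
    outputType: outputFormat === 'text' ? 'cli' : 'mcp',
    session
  });

  // mainResult is the AI-generated text; the complexity report is parsed from it.
  const report = JSON.parse(mainResult);

  // In CLI mode the usage summary is shown to the user.
  if (outputFormat === 'text' && telemetryData) {
    displayAiUsageSummary(telemetryData);
  }

  return { report, telemetryData };
}
```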
2025-05-08 19:34:00 -04:00
Eyal Toledano
655c7c225a chore: prettier 2025-05-03 02:09:35 -04:00
Eyal Toledano
cd32fd9edf fix(add/remove-dependency): dependency mcp tools were failing due to hard-coded tasks path in generate task files. 2025-05-03 01:31:16 -04:00
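The fix above amounts to deriving the tasks path from the caller's project root instead of a hard-coded relative path. A hypothetical one-liner with assumed names; only the `tasks/tasks.json` layout is taken from elsewhere in this log.

```js
import path from 'node:path';

// Resolve tasks.json relative to the provided project root rather than assuming a
// fixed 'tasks/tasks.json' under the server's working directory.
function resolveTasksPath(projectRoot, tasksPath) {
  return tasksPath ?? path.join(projectRoot, 'tasks', 'tasks.json');
}
```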
Eyal Toledano
ae2d43de29 chore: prettier 2025-05-01 22:43:36 -04:00
Eyal Toledano
c7158d4910 fix(analyze-complexity): pass projectRoot through analyze-complexity flow
Modified analyze-task-complexity.js core function, direct function, and analyze.js tool to correctly pass projectRoot. Fixed import error in tools/index.js. Added debug logging to _resolveApiKey in ai-services-unified.js. This enables the .env API key fallback for analyze_project_complexity.
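A hedged sketch of what the projectRoot-aware .env fallback could look like; `_resolveApiKey` is named in the commit, but this body is an assumption, not the actual implementation in ai-services-unified.js.

```js
import path from 'node:path';
import fs from 'node:fs';

// Resolve an API key, preferring the MCP session environment, then a .env file at
// the project root, then the process environment.
function resolveApiKey(keyName, session, projectRoot) {
  const fromSession = session?.env?.[keyName];
  if (fromSession) return fromSession;

  const envPath = path.join(projectRoot ?? '.', '.env');
  if (fs.existsSync(envPath)) {
    const line = fs
      .readFileSync(envPath, 'utf8')
      .split('\n')
      .find((l) => l.startsWith(`${keyName}=`));
    if (line) return line.slice(keyName.length + 1).trim();
  }

  return process.env[keyName];
}
```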
2025-05-01 14:18:44 -04:00
Eyal Toledano
4cf7e8a74a Refactor: Improve MCP logging, update E2E & tests
Refactors MCP server logging and updates testing infrastructure.

- MCP Server:

  - Replaced manual logger wrappers with centralized `createLogWrapper` utility (see the sketch after this list).

  - Updated direct function calls to use `{ session, mcpLog }` context.

  - Removed deprecated `model` parameter from analyze, expand-all, expand-task tools.

  - Adjusted MCP tool import paths and parameter descriptions.

- Documentation:

  - Modified `docs/configuration.md`.

  - Modified `docs/tutorial.md`.

- Testing:

  - E2E Script (`run_e2e.sh`):

    - Removed `set -e`.

    - Added LLM analysis function (`analyze_log_with_llm`) & integration.

    - Adjusted test run directory creation timing.

    - Added debug echo statements.

  - Deleted Unit Tests: Removed `ai-client-factory.test.js`, `ai-client-utils.test.js`, `ai-services.test.js`.

  - Modified Fixtures: Updated `scripts/task-complexity-report.json`.

- Dev Scripts:

  - Modified `scripts/dev.js`.
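A rough sketch of what a centralized log wrapper of this kind might look like; the real `createLogWrapper` utility may differ in shape and location, and the usage lines are assumptions.

```js
// Adapt whatever logger an MCP tool hands us to the { info, warn, error, debug, success }
// shape that direct functions expect.
function createLogWrapper(log) {
  const forward =
    (level) =>
    (...args) =>
      typeof log?.[level] === 'function' ? log[level](...args) : log?.info?.(...args);

  return {
    info: forward('info'),
    warn: forward('warn'),
    error: forward('error'),
    debug: forward('debug'),
    success: forward('info') // many loggers lack a 'success' level; map it to info
  };
}

// Hypothetical usage inside an MCP tool:
// const mcpLog = createLogWrapper(log);
// await analyzeTaskComplexityDirect(args, { session, mcpLog });
```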
2025-04-28 14:38:01 -04:00
Eyal Toledano
70cc15bc87 refactor(analyze): Align complexity analysis with unified AI service
Refactored the complexity analysis feature and related components (CLI command, MCP tool, direct function) to integrate with the unified AI service layer.

Initially, the refactor was implemented to leverage structured output generation. However, this approach encountered persistent errors:
- Perplexity provider returned internal server errors.
- Anthropic provider failed with schema type and model errors.

Due to the unreliability of structured output generation for this specific use case, the core AI interaction in the complexity analysis flow was reverted to plain text generation. Basic manual JSON parsing and cleanup logic for the text response were reintroduced.

Key changes include:
- Removed direct AI client initialization (Anthropic, Perplexity).
- Removed direct fetching of AI model configuration parameters.
- Removed manual AI retry/fallback/streaming logic.
- Replaced direct AI calls with a single call to the unified text generation service.
- Updated the direct function wrapper to pass session context correctly.
- Updated the analyze MCP tool for correct path resolution and argument passing.
- Updated the analyze-complexity CLI command for correct path resolution.
- Preserved core functionality: task loading/filtering, report generation, CLI summary display.

Both the CLI command and the MCP tool have been verified to work correctly with this revised approach.

Captured CLI output:

[INFO] Initialized Perplexity client with OpenAI compatibility layer
Analyzing task complexity from: tasks/tasks.json
Output report will be saved to: scripts/task-complexity-report.json
Analyzing task complexity and generating expansion recommendations...
[INFO] Reading tasks from tasks/tasks.json...
[INFO] Found 62 total tasks in the task file.
[INFO] Skipping 31 tasks marked as done/cancelled/deferred. Analyzing 31 active tasks.
[INFO] Claude API attempt 1/2
[ERROR] Error in Claude API call: 400 {"type":"error","error":{"type":"invalid_request_error","message":"max_tokens: 100000 > 64000, which is the maximum allowed number of output tokens for claude-3-7-sonnet-20250219"}}
[ERROR] Error analyzing task complexity: 400 {"type":"error","error":{"type":"invalid_request_error","message":"max_tokens: 100000 > 64000, which is the maximum allowed number of output tokens for claude-3-7-sonnet-20250219"}}
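The reintroduced "manual JSON parsing and cleanup" can be illustrated with a small helper that trims an AI text response down to its JSON payload. This is a hypothetical sketch under that assumption, not the project's actual cleanup code.

```js
// Extract and parse the JSON payload from a plain-text AI response that may be
// wrapped in prose or markdown fences.
function parseComplexityReport(responseText) {
  const start = responseText.search(/[[{]/);
  const end = Math.max(responseText.lastIndexOf(']'), responseText.lastIndexOf('}'));
  if (start === -1 || end <= start) {
    throw new Error('No JSON payload found in AI response');
  }
  return JSON.parse(responseText.slice(start, end + 1)); // throws on invalid JSON
}
```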
2025-04-24 22:33:33 -04:00
Ralph Khreish
c02483bc41 chore: run npm run format 2025-04-09 00:30:05 +02:00
Eyal Toledano
be3fe9c55e Fixes an issue with the Perplexity model used by default (now sonar-pro in all cases). Fixes an issue preventing analyzeTaskComplexity from working as designed. Fixes an issue that prevented parse-prd from working. Stubs in the test for analyzeTaskComplexity, to be completed later. 2025-03-24 16:30:27 -04:00
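A hedged illustration of the default-model part of the fix: a single default that every research call falls back to unless overridden. The constant and environment variable names here are assumptions; only the sonar-pro default comes from the commit.

```js
// One place to define the Perplexity default so all code paths agree on it.
const DEFAULT_PERPLEXITY_MODEL = 'sonar-pro';

function getPerplexityModel(env = process.env) {
  return env.PERPLEXITY_MODEL || DEFAULT_PERPLEXITY_MODEL;
}
```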
Eyal Toledano
eadd13e798 fix: enhance task expansion with multiple improvements
This commit resolves several issues with the task expansion system to
ensure higher quality subtasks and better synchronization:

1. Task File Generation
- Add automatic regeneration of task files after expanding tasks
- Ensure individual task text files stay in sync with tasks.json
- Avoid manual regeneration steps after task expansion

2. Perplexity API Integration
- Fix 'researchPrompt is not defined' error in Perplexity integration
- Add specialized research-oriented prompt template
- Improve system message for better context and instruction
- Improve fallback to Claude when Perplexity is unavailable

3. Subtask Parsing Improvements
- Enhance regex pattern to handle more formatting variations
- Implement multiple parsing strategies for different response formats (see the sketch after this message):
  * Improved section detection with flexible headings
  * Added support for numbered and bulleted lists
  * Implemented heuristic-based title and description extraction
- Create more meaningful dummy subtasks with relevant titles and descriptions
  instead of generic placeholders
- Ensure minimal descriptions are always provided

4. Quality Verification and Retry System
- Add post-expansion verification to identify low-quality subtask sets
- Detect tasks with too many generic/placeholder subtasks
- Implement interactive retry mechanism with enhanced prompts
- Use adjusted settings for retries (research mode, subtask count)
- Clear existing subtasks before retry to prevent duplicates
- Provide detailed reporting of verification and retry process

These changes significantly improve the quality of generated subtasks
and reduce the need for manual intervention when subtask generation
produces suboptimal results.
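A hypothetical sketch of the multi-strategy parsing described in section 3: try stricter formats first, fall back to looser ones, and only then emit meaningful placeholders. The regexes, object shapes, and default count are assumptions, not the actual implementation.

```js
// Try several strategies to pull subtask titles out of an AI response.
function parseSubtasks(text, expectedCount = 3) {
  const strategies = [
    // 1. "Subtask N: Title" style section headings (flexible heading markers)
    (t) => [...t.matchAll(/^#*\s*Subtask\s+\d+[:.]\s*(.+)$/gim)].map((m) => m[1]),
    // 2. Numbered lists ("1. Title" / "2) Title")
    (t) => [...t.matchAll(/^\s*\d+[.)]\s+(.+)$/gm)].map((m) => m[1]),
    // 3. Bulleted lists ("- Title" / "* Title")
    (t) => [...t.matchAll(/^\s*[-*]\s+(.+)$/gm)].map((m) => m[1])
  ];

  for (const strategy of strategies) {
    const titles = strategy(text);
    if (titles.length > 0) {
      // Heuristic: reuse the extracted line as a minimal description.
      return titles.map((title) => ({ title: title.trim(), description: title.trim() }));
    }
  }

  // Last resort: meaningful placeholders rather than generic "Subtask N" entries.
  return Array.from({ length: expectedCount }, (_, i) => ({
    title: `Planned step ${i + 1}`,
    description: 'Placeholder pending regeneration'
  }));
}
```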
2025-03-21 16:25:12 -04:00