Compare commits
19 Commits: claude/iss... -> ralph/fix/...

Commits (SHA1):
7a5aad4178, c0682ac795, 01a7faea8f, 814265cd33, 9b7b2ca7b2, 949f091179, 32c2b03c23, 3bfd999d81, 9fa79eb026, 875134247a, c2fc61ddb3, aaacc3dae3, 46cd5dc186, 49a31be416, 2b69936ee7, b5fe723f8e, d67b81d25d, 66c05053c0, d7ab4609aa
.changeset/chore-fix-docs.md (new file, +5)

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Improve `analyze-complexity` cli docs and `--research` flag documentation

.changeset/curvy-weeks-flow.md (new file, +5)

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Change parent task back to "pending" when all subtasks are in "pending" state
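For context, the rule this changeset describes is implemented in `FileStorage` further down this diff. A minimal sketch of the derivation, with a hypothetical helper name (`deriveParentStatus` does not exist in the codebase; the branch logic mirrors the `allDone` / `anyInProgress` / `allPending` checks shown later):

```js
// Hypothetical helper mirroring the parent-status rules in this diff:
// all subtasks done -> 'done'; any done or in-progress -> 'in-progress';
// all pending -> parent reverts to 'pending'; otherwise leave it unchanged.
// The real isDoneLike is defined elsewhere; 'done'/'completed' is an assumption here.
function deriveParentStatus(subtaskStatuses) {
	const isDoneLike = (s) => s === 'done' || s === 'completed';
	if (subtaskStatuses.every(isDoneLike)) return 'done';
	if (subtaskStatuses.some((s) => s === 'in-progress' || isDoneLike(s)))
		return 'in-progress';
	if (subtaskStatuses.every((s) => s === 'pending')) return 'pending';
	return null; // mixed states (e.g., deferred): no change
}

console.log(deriveParentStatus(['pending', 'pending'])); // 'pending'
console.log(deriveParentStatus(['done', 'pending'])); // 'in-progress'
console.log(deriveParentStatus(['done', 'done'])); // 'done'
```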
Deleted changeset (-23):

@@ -1,23 +0,0 @@
---
"task-master-ai": minor
---

Add intelligent `scan` command for automated codebase analysis

Introduces a comprehensive project scanning feature that intelligently analyzes codebases using ast-grep and AI-powered analysis. The new `task-master scan` command provides:

- **Multi-phase Analysis**: Performs iterative scanning (project type identification → entry points → core structure → recursive deepening)
- **AST-grep Integration**: Uses ast-grep as an AI SDK tool for advanced code structure analysis
- **AI Enhancement**: Optional AI-powered analysis for intelligent project understanding
- **Structured Output**: Generates detailed JSON reports with file/directory summaries
- **Transparent Logging**: Clear progress indicators showing each analysis phase
- **Configurable Options**: Supports custom include/exclude patterns, scan depth, and output paths

This feature addresses the challenge of quickly understanding existing project structures when adopting Task Master, significantly streamlining initial setup and project onboarding.

Usage:
```bash
task-master scan --output=project_scan.json
task-master scan --include="*.js,*.ts" --exclude="*.test.*" --depth=3
task-master scan --no-ai  # Skip AI analysis for faster results
```
.changeset/mcp-timeout-configuration.md (new file, +13)

@@ -0,0 +1,13 @@
---
"task-master-ai": minor
---

Enhanced Roo Code profile with MCP timeout configuration for improved reliability during long-running AI operations. The Roo profile now automatically configures a 300-second timeout for MCP server operations, preventing timeouts during complex tasks like `parse-prd`, `expand-all`, `analyze-complexity`, and `research` operations. This change also replaces static MCP configuration files with programmatic generation for better maintainability.

**What's New:**
- 300-second timeout for MCP operations (up from the default 60 seconds)
- Programmatic MCP configuration generation (replaces static asset files)
- Enhanced reliability for AI-powered operations
- Consistent with other AI coding assistant profiles

**Migration:** No user action required - existing Roo Code installations will automatically receive the enhanced MCP configuration on next initialization.
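For reference, a sketch of the kind of MCP entry the Roo profile would generate. The `timeout: 300` field and the `mcpServers` shape match the configuration documented later in this diff; that the generated Roo file uses exactly this structure is an assumption:

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "--package=task-master-ai", "task-master-ai"],
      "timeout": 300,
      "env": { "ANTHROPIC_API_KEY": "your-anthropic-api-key" }
    }
  }
}
```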
.changeset/petite-ideas-grab.md (new file, +5)

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Fix Claude Code settings validation for pathToClaudeCodeExecutable

.changeset/silly-pandas-find.md (new file, +5)

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Fix sonar deep research model failing, should be called `sonar-deep-research`
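In practice this means the research model should be selected with the corrected ID. A hedged sketch of what the relevant `.taskmaster/config.json` fragment might look like; the exact key names are assumptions and are not confirmed by this diff:

```json
{
  "models": {
    "research": {
      "provider": "perplexity",
      "modelId": "sonar-deep-research"
    }
  }
}
```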
.github/workflows/ci.yml (vendored, 3 changes)

@@ -6,9 +6,6 @@ on:
```yaml
      - main
      - next
  pull_request:
    branches:
      - main
      - next
  workflow_dispatch:

concurrency:
```
@@ -1,12 +1,5 @@
# task-master-ai

## 0.27.3

### Patch Changes

- [#1254](https://github.com/eyaltoledano/claude-task-master/pull/1254) [`af53525`](https://github.com/eyaltoledano/claude-task-master/commit/af53525cbc660a595b67d4bb90d906911c71f45d) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fixed issue where `tm show` command could not find subtasks using dotted notation IDs (e.g., '8.1').
  The command now properly searches within parent task subtasks and returns the correct subtask information.

## 0.27.2

### Patch Changes
README.md (16 changes)

@@ -60,6 +60,19 @@ The following documentation is also available in the `docs` directory:

> **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.

#### Claude Code Quick Install

For Claude Code users:

```bash
claude mcp add taskmaster-ai -- npx -y task-master-ai
```

Don't forget to add your API keys to the configuration:
- in the root .env of your Project
- in the "env" section of your mcp config for taskmaster-ai
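For example, the `.env` entry might look like this; the variable name matches the Anthropic key used in the MCP examples elsewhere in this diff, and the value is a placeholder:

```bash
# <project root>/.env
ANTHROPIC_API_KEY=your-anthropic-api-key
```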
## Requirements

Taskmaster utilizes AI across several commands, and those require a separate API key. You can use a variety of models from different AI providers, provided you add your API keys. For example, if you want to use Claude 3.7, you'll need an Anthropic API key.

@@ -92,10 +105,11 @@ MCP (Model Context Protocol) lets you run Task Master directly from your editor.

| | Project | `<project_folder>/.cursor/mcp.json` | `<project_folder>\.cursor\mcp.json` | `mcpServers` |
| **Windsurf** | Global | `~/.codeium/windsurf/mcp_config.json` | `%USERPROFILE%\.codeium\windsurf\mcp_config.json` | `mcpServers` |
| **VS Code** | Project | `<project_folder>/.vscode/mcp.json` | `<project_folder>\.vscode\mcp.json` | `servers` |
| **Q CLI** | Global | `~/.aws/amazonq/mcp.json` | | `mcpServers` |

##### Manual Configuration

###### Cursor & Windsurf (`mcpServers`)
###### Cursor & Windsurf & Q Developer CLI (`mcpServers`)

```json
{
```
@@ -1,22 +1,24 @@
# Task Master Documentation

Welcome to the Task Master documentation. Use the links below to navigate to the information you need:
Welcome to the Task Master documentation. This documentation site provides comprehensive guides for getting started with Task Master.

## Getting Started

- [Configuration Guide](archive/configuration.md) - Set up environment variables and customize Task Master
- [Tutorial](archive/ctutorial.md) - Step-by-step guide to getting started with Task Master
- [Quick Start Guide](/getting-started/quick-start) - Complete setup and first-time usage guide
- [Requirements](/getting-started/quick-start/requirements) - What you need to get started
- [Installation](/getting-started/quick-start/installation) - How to install Task Master

## Reference
## Core Capabilities

- [Command Reference](archive/ccommand-reference.md) - Complete list of all available commands
- [Task Structure](archive/ctask-structure.md) - Understanding the task format and features
- [MCP Tools](/capabilities/mcp) - Model Context Protocol integration
- [CLI Commands](/capabilities/cli-root-commands) - Command line interface reference
- [Task Structure](/capabilities/task-structure) - Understanding tasks and subtasks

## Examples & Licensing
## Best Practices

- [Example Interactions](archive/cexamples.md) - Common Cursor AI interaction examples
- [Licensing Information](archive/clicensing.md) - Detailed information about the license
- [Advanced Configuration](/best-practices/configuration-advanced) - Detailed configuration options
- [Advanced Tasks](/best-practices/advanced-tasks) - Working with complex task structures

## Need More Help?

If you can't find what you're looking for in these docs, please check the [main README](../README.md) or visit our [GitHub repository](https://github.com/eyaltoledano/claude-task-master).
If you can't find what you're looking for in these docs, please check the root README.md or visit our [GitHub repository](https://github.com/eyaltoledano/claude-task-master).
@@ -156,7 +156,7 @@ sidebarTitle: "CLI Commands"
```bash
# Use an alternative tasks file
task-master analyze-complexity --file=custom-tasks.json

# Use Perplexity AI for research-backed complexity analysis
# Use your configured research model for research-backed complexity analysis
task-master analyze-complexity --research
```
</Accordion>
@@ -108,5 +108,5 @@ You don’t need to configure everything up front. Most settings can be left as
</Accordion>

<Note>
For advanced configuration options and detailed customization, see our [Advanced Configuration Guide](/docs/best-practices/configuration-advanced) page.
For advanced configuration options and detailed customization, see our [Advanced Configuration Guide](/best-practices/configuration-advanced) page.
</Note>
@@ -56,4 +56,4 @@ If you ran into problems and had to debug errors you can create new rules as you

By now you have all you need to get started executing code faster and smarter with Task Master.

If you have any questions please check out [Frequently Asked Questions](/docs/getting-started/faq)
If you have any questions please check out [Frequently Asked Questions](/getting-started/faq)
@@ -30,6 +30,19 @@ cursor://anysphere.cursor-deeplink/mcp/install?name=taskmaster-ai&config=eyJjb21
```

> **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.

### Claude Code Quick Install

For Claude Code users:

```bash
claude mcp add taskmaster-ai -- npx -y task-master-ai
```

Don't forget to add your API keys to the configuration:
- in the root .env of your Project
- in the "env" section of your mcp config for taskmaster-ai

</Accordion>
## Installation Options
@@ -6,13 +6,13 @@ sidebarTitle: "Quick Start"

This guide is for new users who want to start using Task Master with minimal setup time.

It covers:
- [Requirements](/docs/getting-started/quick-start/requirements): You will need Node.js and an AI model API Key.
- [Installation](/docs/getting-started/quick-start/installation): How to Install Task Master.
- [Configuration](/docs/getting-started/quick-start/configuration-quick): Setting up your API Key, MCP, and more.
- [PRD](/docs/getting-started/quick-start/prd-quick): Writing and parsing your first PRD.
- [Task Setup](/docs/getting-started/quick-start/tasks-quick): Preparing your tasks for execution.
- [Executing Tasks](/docs/getting-started/quick-start/execute-quick): Using Task Master to execute tasks.
- [Rules & Context](/docs/getting-started/quick-start/rules-quick): Learn how and why to build context in your project over time.
- [Requirements](/getting-started/quick-start/requirements): You will need Node.js and an AI model API Key.
- [Installation](/getting-started/quick-start/installation): How to Install Task Master.
- [Configuration](/getting-started/quick-start/configuration-quick): Setting up your API Key, MCP, and more.
- [PRD](/getting-started/quick-start/prd-quick): Writing and parsing your first PRD.
- [Task Setup](/getting-started/quick-start/tasks-quick): Preparing your tasks for execution.
- [Executing Tasks](/getting-started/quick-start/execute-quick): Using Task Master to execute tasks.
- [Rules & Context](/getting-started/quick-start/rules-quick): Learn how and why to build context in your project over time.

<Tip>
By the end of this guide, you'll have everything you need to begin working productively with Task Master.
@@ -61,9 +61,25 @@ Task Master can provide a complexity report which can be helpful to read before

```
Can you analyze the complexity of our tasks to help me understand which ones need to be broken down further?
```

The agent will use the `analyze_project_complexity` MCP tool, or you can run it directly with the CLI command:
```bash
task-master analyze-complexity
```

For more comprehensive analysis using your configured research model, you can use:
```bash
task-master analyze-complexity --research
```

<Tip>
The `--research` flag uses whatever research model you have configured in `.taskmaster/config.json` (configurable via `task-master models --setup`) for research-backed complexity analysis, providing more informed recommendations.
</Tip>
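If you have not configured a research model yet, the interactive setup mentioned in the tip above looks like this:

```bash
# Interactively select main, research, and fallback models
# (updates .taskmaster/config.json)
task-master models --setup
```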
You can view the report in a friendly table using:
```
Can you show me the complexity report in a more readable format?
```

<Check>Now you are ready to begin [executing tasks](/docs/getting-started/quick-start/execute-quick)</Check>
For more detailed CLI options, see the [Analyze Task Complexity](/capabilities/cli-root-commands#analyze-task-complexity) section.

<Check>Now you are ready to begin [executing tasks](/getting-started/quick-start/execute-quick)</Check>
@@ -4,7 +4,7 @@ Welcome to v1 of the Task Master Docs. Expect weekly updates as we expand and re

We've organized the docs into three sections depending on your experience level and goals:

### Getting Started - Jump in to [Quick Start](/docs/getting-started/quick-start)
### Getting Started - Jump in to [Quick Start](/getting-started/quick-start)
Designed for first-time users. Get set up, create your first PRD, and run your first task.

### Best Practices
@@ -1,12 +1,5 @@
# Change Log

## 0.25.4

### Patch Changes

- Updated dependencies [[`af53525`](https://github.com/eyaltoledano/claude-task-master/commit/af53525cbc660a595b67d4bb90d906911c71f45d)]:
  - task-master-ai@0.27.3

## 0.25.3

### Patch Changes
@@ -3,7 +3,7 @@
```json
  "private": true,
  "displayName": "TaskMaster",
  "description": "A visual Kanban board interface for TaskMaster projects in VS Code",
  "version": "0.25.4",
  "version": "0.25.3",
  "publisher": "Hamster",
  "icon": "assets/icon.png",
  "engines": {
```

@@ -240,7 +240,7 @@
```json
    "check-types": "tsc --noEmit"
  },
  "dependencies": {
    "task-master-ai": "0.27.3"
    "task-master-ai": "0.27.2"
  },
  "devDependencies": {
    "@dnd-kit/core": "^6.3.1",
```
@@ -235,6 +235,60 @@ node scripts/init.js

- "MCP provider requires session context" → Ensure running in MCP environment
- See the [MCP Provider Guide](./mcp-provider-guide.md) for detailed troubleshooting

### MCP Timeout Configuration

Long-running AI operations in taskmaster-ai can exceed the default 60-second MCP timeout. Operations like `parse_prd`, `expand_task`, `research`, and `analyze_project_complexity` may take 2-5 minutes to complete.

#### Adding Timeout Configuration

Add a `timeout` parameter to your MCP configuration to extend the timeout limit. The timeout configuration works identically across MCP clients, including Cursor, Windsurf, and RooCode:

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "--package=task-master-ai", "task-master-ai"],
      "timeout": 300,
      "env": {
        "ANTHROPIC_API_KEY": "your-anthropic-api-key"
      }
    }
  }
}
```

**Configuration Details:**
- **`timeout: 300`** - Sets the timeout to 300 seconds (5 minutes)
- **Value range**: 1-3600 seconds (1 second to 1 hour)
- **Recommended**: 300 seconds provides sufficient time for most AI operations
- **Format**: Integer value in seconds (not milliseconds)

#### Automatic Setup

When adding taskmaster rules for supported editors, the timeout configuration is included automatically:

```bash
# Automatically includes timeout configuration
task-master rules add cursor
task-master rules add roo
task-master rules add windsurf
task-master rules add vscode
```

#### Troubleshooting Timeouts

If you're still experiencing timeout errors:

1. **Verify configuration**: Check that `timeout: 300` is present in your MCP config
2. **Restart editor**: Restart your editor after making configuration changes
3. **Increase timeout**: For very complex operations, try `timeout: 600` (10 minutes)
4. **Check API keys**: Ensure required API keys are properly configured

**Expected behavior:**
- **Before fix**: Operations fail after 60 seconds with `MCP request timed out after 60000ms`
- **After fix**: Operations complete successfully within the configured timeout limit

### Google Vertex AI Configuration

Google Vertex AI is Google Cloud's enterprise AI platform and requires specific configuration:
@@ -1,4 +1,4 @@
# Available Models as of September 19, 2025
# Available Models as of September 23, 2025

## Main Models

@@ -119,7 +119,7 @@
| groq | deepseek-r1-distill-llama-70b | 0.52 | 0.75 | 0.99 |
| perplexity | sonar-pro | — | 3 | 15 |
| perplexity | sonar | — | 1 | 1 |
| perplexity | deep-research | 0.211 | 2 | 8 |
| perplexity | sonar-deep-research | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning | 0.211 | 1 | 5 |
| bedrock | us.anthropic.claude-3-opus-20240229-v1:0 | 0.725 | 15 | 75 |
package-lock.json (generated, 8 changes)

@@ -1,12 +1,12 @@
```json
{
  "name": "task-master-ai",
  "version": "0.27.3",
  "version": "0.27.2",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "task-master-ai",
      "version": "0.27.3",
      "version": "0.27.2",
      "license": "MIT WITH Commons-Clause",
      "workspaces": [
        "apps/*",
```

@@ -357,9 +357,9 @@
```json
    }
  },
  "apps/extension": {
    "version": "0.25.4",
    "version": "0.25.3",
    "dependencies": {
      "task-master-ai": "0.27.3"
      "task-master-ai": "0.27.2"
    },
    "devDependencies": {
      "@dnd-kit/core": "^6.3.1",
```
@@ -1,6 +1,6 @@
```json
{
  "name": "task-master-ai",
  "version": "0.27.3",
  "version": "0.27.2",
  "description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
  "main": "index.js",
  "type": "module",
```

@@ -53,7 +53,6 @@
```json
  "license": "MIT WITH Commons-Clause",
  "dependencies": {
    "@ai-sdk/amazon-bedrock": "^2.2.9",
    "@ast-grep/cli": "^0.29.0",
    "@ai-sdk/anthropic": "^1.2.10",
    "@ai-sdk/azure": "^1.3.17",
    "@ai-sdk/google": "^1.2.13",
```
@@ -135,28 +135,15 @@ export class TaskService {
```ts
	}

	/**
	 * Get a single task by ID - delegates to storage layer
	 * Get a single task by ID
	 */
	async getTask(taskId: string, tag?: string): Promise<Task | null> {
		// Use provided tag or get active tag
		const activeTag = tag || this.getActiveTag();
		const result = await this.getTaskList({
			tag,
			includeSubtasks: true
		});

		try {
			// Delegate to storage layer which handles the specific logic for tasks vs subtasks
			return await this.storage.loadTask(String(taskId), activeTag);
		} catch (error) {
			throw new TaskMasterError(
				`Failed to get task ${taskId}`,
				ERROR_CODES.STORAGE_ERROR,
				{
					operation: 'getTask',
					resource: 'task',
					taskId: String(taskId),
					tag: activeTag
				},
				error as Error
			);
		}
		return result.tasks.find((t) => t.id === taskId) || null;
	}

	/**
```
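The two bodies shown in this hunk are the storage-delegating variant (the `try`/`catch` around `storage.loadTask`) and the list-then-find variant (`getTaskList` plus `find`). A hedged usage sketch for the storage-delegating variant, which is the one that resolves dotted subtask IDs via `FileStorage.loadTask` below; the construction of `service` is not shown in this diff and is assumed:

```js
// Assuming an already-constructed TaskService instance named `service`:
const subtask = await service.getTask('8.1'); // dotted ID resolves to a subtask
const task = await service.getTask('8'); // plain ID resolves to a regular task
```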
@@ -105,65 +105,9 @@ export class FileStorage implements IStorage {
```ts

	/**
	 * Load a single task by ID from the tasks.json file
	 * Handles both regular tasks and subtasks (with dotted notation like "1.2")
	 */
	async loadTask(taskId: string, tag?: string): Promise<Task | null> {
		const tasks = await this.loadTasks(tag);

		// Check if this is a subtask (contains a dot)
		if (taskId.includes('.')) {
			const [parentId, subtaskId] = taskId.split('.');
			const parentTask = tasks.find((t) => String(t.id) === parentId);

			if (!parentTask || !parentTask.subtasks) {
				return null;
			}

			const subtask = parentTask.subtasks.find(
				(st) => String(st.id) === subtaskId
			);
			if (!subtask) {
				return null;
			}

			const toFullSubId = (maybeDotId: string | number): string => {
				const depId = String(maybeDotId);
				return depId.includes('.') ? depId : `${parentTask.id}.${depId}`;
			};
			const resolvedDependencies =
				subtask.dependencies?.map((dep) => toFullSubId(dep)) ?? [];

			// Return a Task-like object for the subtask with the full dotted ID
			// Following the same pattern as findTaskById in utils.js
			const subtaskResult = {
				...subtask,
				id: taskId, // Use the full dotted ID
				title: subtask.title || `Subtask ${subtaskId}`,
				description: subtask.description || '',
				status: subtask.status || 'pending',
				priority: subtask.priority || parentTask.priority || 'medium',
				dependencies: resolvedDependencies,
				details: subtask.details || '',
				testStrategy: subtask.testStrategy || '',
				subtasks: [],
				tags: parentTask.tags || [],
				assignee: subtask.assignee || parentTask.assignee,
				complexity: subtask.complexity || parentTask.complexity,
				createdAt: subtask.createdAt || parentTask.createdAt,
				updatedAt: subtask.updatedAt || parentTask.updatedAt,
				// Add reference to parent task for context (like utils.js does)
				parentTask: {
					id: parentTask.id,
					title: parentTask.title,
					status: parentTask.status
				},
				isSubtask: true
			};

			return subtaskResult;
		}

		// Handle regular task lookup
		return tasks.find((task) => String(task.id) === String(taskId)) || null;
	}
```
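One detail worth noting in the removed code above: `toFullSubId` qualifies bare subtask dependency IDs with the parent's ID so that dependencies always use the full dotted form. An illustration with hypothetical values:

```js
// With parentTask.id = 8:
// toFullSubId(2)     -> '8.2'  (bare local ID gets the parent prefix)
// toFullSubId('3.1') -> '3.1'  (already-dotted IDs pass through unchanged)
```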
@@ -465,8 +409,11 @@ export class FileStorage implements IStorage {
```ts
			const allDone = subs.every(isDoneLike);
			const anyInProgress = subs.some((s) => norm(s) === 'in-progress');
			const anyDone = subs.some(isDoneLike);
			const allPending = subs.every((s) => norm(s) === 'pending');

			if (allDone) parentNewStatus = 'done';
			else if (anyInProgress || anyDone) parentNewStatus = 'in-progress';
			else if (allPending) parentNewStatus = 'pending';
		}

		// Always bump updatedAt; update status only if changed
```
@@ -53,8 +53,6 @@ import {
```js
	validateStrength
} from './task-manager.js';

import { scanProject } from './task-manager/scan-project/index.js';

import {
	moveTasksBetweenTags,
	MoveTaskError,
```

@@ -1849,7 +1847,7 @@ function registerCommands(programInstance) {
```js
		)
		.option(
			'-r, --research',
			'Use Perplexity AI for research-backed complexity analysis'
			'Use configured research model for research-backed complexity analysis'
		)
		.option(
			'-i, --id <ids>',
```
@@ -5069,110 +5067,6 @@ Examples:
```js
			process.exit(1);
		});

	// scan command
	programInstance
		.command('scan')
		.description('Intelligently scan and analyze the project codebase structure')
		.option(
			'--output <file>',
			'Path to save scan results (JSON format)',
			'project_scan.json'
		)
		.option(
			'--include <patterns>',
			'Comma-separated list of file patterns to include (e.g., "*.js,*.ts")'
		)
		.option(
			'--exclude <patterns>',
			'Comma-separated list of file patterns to exclude (e.g., "*.log,tmp/*")'
		)
		.option('--depth <number>', 'Maximum directory depth to scan', '5')
		.option('--debug', 'Enable debug output')
		.option('--no-ai', 'Skip AI-powered analysis (faster but less detailed)')
		.action(async (options) => {
			try {
				// Initialize TaskMaster to get project root
				const taskMaster = initTaskMaster({});
				const projectRoot = taskMaster.getProjectRoot();

				if (!projectRoot) {
					console.error(chalk.red('Error: Could not determine project root.'));
					console.log(
						chalk.yellow('Make sure you are in a valid project directory.')
					);
					process.exit(1);
				}

				console.log(
					chalk.blue(`🔍 Starting intelligent scan of project: ${projectRoot}`)
				);
				console.log(chalk.gray(`Output will be saved to: ${options.output}`));

				// Parse options
				const scanOptions = {
					outputPath: path.isAbsolute(options.output)
						? options.output
						: path.join(projectRoot, options.output),
					includeFiles: options.include
						? options.include.split(',').map((s) => s.trim())
						: [],
					excludeFiles: options.exclude
						? options.exclude.split(',').map((s) => s.trim())
						: undefined,
					scanDepth: parseInt(options.depth, 10),
					debug: options.debug || false,
					reportProgress: true,
					// Commander maps `--no-ai` to options.ai === false; the original
					// read options.noAi, which Commander never sets
					skipAI: options.ai === false
				};

				// Perform the scan
				const spinner = ora('Scanning project structure...').start();

				try {
					const result = await scanProject(projectRoot, scanOptions);

					spinner.stop();

					if (result.success) {
						console.log(chalk.green('✅ Project scan completed successfully!'));
						console.log(chalk.cyan('\n📊 Scan Summary:'));
						console.log(chalk.white(`   Project Type: ${result.data.scanSummary.projectType}`));
						console.log(chalk.white(`   Total Files: ${result.data.stats.totalFiles}`));
						console.log(chalk.white(`   Languages: ${result.data.scanSummary.languages.join(', ')}`));
						console.log(chalk.white(`   Code Lines: ${result.data.scanSummary.codeMetrics.totalLines}`));
						console.log(chalk.white(`   Functions: ${result.data.scanSummary.codeMetrics.totalFunctions}`));
						console.log(chalk.white(`   Classes: ${result.data.scanSummary.codeMetrics.totalClasses}`));

						if (result.data.scanSummary.recommendations.length > 0) {
							console.log(chalk.yellow('\n💡 Recommendations:'));
							result.data.scanSummary.recommendations.forEach((rec) => {
								console.log(chalk.white(`   • ${rec}`));
							});
						}

						console.log(
							chalk.green(`\n📄 Detailed results saved to: ${scanOptions.outputPath}`)
						);
					} else {
						console.error(chalk.red('❌ Project scan failed:'));
						console.error(chalk.red(`   ${result.error.message}`));
						if (scanOptions.debug && result.error.stack) {
							console.error(chalk.gray(`   ${result.error.stack}`));
						}
						process.exit(1);
					}
				} catch (error) {
					spinner.stop();
					console.error(chalk.red(`❌ Scan failed: ${error.message}`));
					if (scanOptions.debug) {
						console.error(chalk.gray(error.stack));
					}
					process.exit(1);
				}
			} catch (error) {
				console.error(chalk.red(`Error initializing scan: ${error.message}`));
				process.exit(1);
			}
		})
		.on('error', function (err) {
			console.error(chalk.red(`Error: ${err.message}`));
			process.exit(1);
		});

	return programInstance;
}
```
@@ -310,6 +310,7 @@ function validateProviderModelCombination(providerName, modelId) {
```js
function validateClaudeCodeSettings(settings) {
	// Define the base settings schema without commandSpecific first
	const BaseSettingsSchema = z.object({
		pathToClaudeCodeExecutable: z.string().optional(),
		maxTurns: z.number().int().positive().optional(),
		customSystemPrompt: z.string().optional(),
		appendSystemPrompt: z.string().optional(),
```
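Tying this back to the `petite-ideas-grab` changeset: with `pathToClaudeCodeExecutable` declared as an optional string, a settings object like the following should now pass validation (values illustrative, limited to fields visible in the schema fragment above):

```js
const settings = {
	pathToClaudeCodeExecutable: '/usr/local/bin/claude', // optional string
	maxTurns: 5 // positive integer
};
```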
@@ -522,7 +522,7 @@
```json
        "supported": true
      },
      {
        "id": "deep-research",
        "id": "sonar-deep-research",
        "swe_score": 0.211,
        "cost_per_1m_tokens": {
          "input": 2,
```
@@ -1,328 +0,0 @@
```js
/**
 * AI-powered analysis for project scanning
 */
import { ScanLoggingConfig } from './scan-config.js';

// Dynamically import AI service with fallback
async function getAiService(options) {
	try {
		const { getAiService: aiService } = await import('../../ai-services-unified.js');
		return aiService(options);
	} catch (error) {
		throw new Error(`AI service not available: ${error.message}`);
	}
}

/**
 * Analyze project structure using AI
 * @param {Object} scanResults - Raw scan results
 * @param {Object} config - Scan configuration
 * @returns {Promise<Object>} AI-enhanced analysis
 */
export async function analyzeWithAI(scanResults, config) {
	const logger = new ScanLoggingConfig(config.mcpLog, config.reportProgress);
	logger.info('Starting AI-powered analysis...');

	try {
		// Step 1: Project Type Analysis
		const projectTypeAnalysis = await analyzeProjectType(scanResults, config, logger);

		// Step 2: Entry Points Analysis
		const entryPointsAnalysis = await analyzeEntryPoints(scanResults, projectTypeAnalysis, config, logger);

		// Step 3: Core Structure Analysis
		const coreStructureAnalysis = await analyzeCoreStructure(scanResults, entryPointsAnalysis, config, logger);

		// Step 4: Recursive Analysis (if needed)
		const detailedAnalysis = await performDetailedAnalysis(scanResults, coreStructureAnalysis, config, logger);

		// Combine all analyses
		const enhancedAnalysis = {
			projectType: projectTypeAnalysis,
			entryPoints: entryPointsAnalysis,
			coreStructure: coreStructureAnalysis,
			detailed: detailedAnalysis,
			summary: generateProjectSummary(scanResults, projectTypeAnalysis, coreStructureAnalysis)
		};

		logger.info('AI analysis completed successfully');
		return enhancedAnalysis;
	} catch (error) {
		logger.error(`AI analysis failed: ${error.message}`);
		throw error;
	}
}

/**
 * Step 1: Analyze project type using AI
 * @param {Object} scanResults - Raw scan results
 * @param {Object} config - Scan configuration
 * @param {ScanLoggingConfig} logger - Logger instance
 * @returns {Promise<Object>} Project type analysis
 */
async function analyzeProjectType(scanResults, config, logger) {
	logger.info('[Scan #1]: Analyzing project type and structure...');

	const prompt = `Given this root directory structure and files, identify the type of project and key characteristics:

Root files: ${JSON.stringify(scanResults.rootFiles, null, 2)}
Directory structure: ${JSON.stringify(scanResults.directories, null, 2)}

Please analyze:
1. Project type (e.g., Node.js, React, Laravel, Python, etc.)
2. Programming languages used
3. Frameworks and libraries
4. Build tools and configuration
5. Files or folders that should be excluded from further analysis (logs, binaries, etc.)

Respond with a JSON object containing your analysis.`;

	try {
		// getAiService is async; the original code omitted this await and would
		// have called generateStructuredOutput on a pending Promise
		const aiService = await getAiService({ projectRoot: config.projectRoot });
		const response = await aiService.generateStructuredOutput({
			prompt,
			schema: {
				type: 'object',
				properties: {
					projectType: { type: 'string' },
					languages: { type: 'array', items: { type: 'string' } },
					frameworks: { type: 'array', items: { type: 'string' } },
					buildTools: { type: 'array', items: { type: 'string' } },
					excludePatterns: { type: 'array', items: { type: 'string' } },
					confidence: { type: 'number' },
					reasoning: { type: 'string' }
				}
			}
		});

		logger.info(`[Scan #1]: Detected ${response.projectType} project`);
		return response;
	} catch (error) {
		logger.warn(`[Scan #1]: AI analysis failed, using fallback detection`);
		// Fallback to rule-based detection
		return scanResults.projectType;
	}
}

/**
 * Step 2: Analyze entry points using AI
 * @param {Object} scanResults - Raw scan results
 * @param {Object} projectTypeAnalysis - Project type analysis
 * @param {Object} config - Scan configuration
 * @param {ScanLoggingConfig} logger - Logger instance
 * @returns {Promise<Object>} Entry points analysis
 */
async function analyzeEntryPoints(scanResults, projectTypeAnalysis, config, logger) {
	logger.info('[Scan #2]: Identifying main entry points and core files...');

	const prompt = `Based on the project type "${projectTypeAnalysis.projectType}" and these files, identify the main entry points and core files:

Available files: ${JSON.stringify(scanResults.fileList.slice(0, 50), null, 2)}
Project type: ${projectTypeAnalysis.projectType}
Languages: ${JSON.stringify(projectTypeAnalysis.languages)}
Frameworks: ${JSON.stringify(projectTypeAnalysis.frameworks)}

Please identify:
1. Main entry points (files that start the application)
2. Configuration files
3. Core application files
4. Important directories to analyze further

Respond with a structured JSON object.`;

	try {
		const aiService = await getAiService({ projectRoot: config.projectRoot });
		const response = await aiService.generateStructuredOutput({
			prompt,
			schema: {
				type: 'object',
				properties: {
					entryPoints: {
						type: 'array',
						items: {
							type: 'object',
							properties: {
								path: { type: 'string' },
								type: { type: 'string' },
								description: { type: 'string' }
							}
						}
					},
					configFiles: { type: 'array', items: { type: 'string' } },
					coreFiles: { type: 'array', items: { type: 'string' } },
					importantDirectories: { type: 'array', items: { type: 'string' } }
				}
			}
		});

		logger.info(`[Scan #2]: Found ${response.entryPoints.length} entry points`);
		return response;
	} catch (error) {
		logger.warn(`[Scan #2]: AI analysis failed, using basic detection`);
		return {
			entryPoints: scanResults.projectType.entryPoints.map(ep => ({ path: ep, type: 'main', description: 'Main entry point' })),
			configFiles: [],
			coreFiles: [],
			importantDirectories: []
		};
	}
}

/**
 * Step 3: Analyze core structure using AI
 * @param {Object} scanResults - Raw scan results
 * @param {Object} entryPointsAnalysis - Entry points analysis
 * @param {Object} config - Scan configuration
 * @param {ScanLoggingConfig} logger - Logger instance
 * @returns {Promise<Object>} Core structure analysis
 */
async function analyzeCoreStructure(scanResults, entryPointsAnalysis, config, logger) {
	logger.info('[Scan #3]: Analyzing core structure and key directories...');

	const prompt = `Based on the entry points and project structure, analyze the core architecture:

Entry points: ${JSON.stringify(entryPointsAnalysis.entryPoints, null, 2)}
Important directories: ${JSON.stringify(entryPointsAnalysis.importantDirectories)}
File analysis: ${JSON.stringify(scanResults.detailedFiles.slice(0, 20), null, 2)}

Please analyze:
1. Directory-level summaries and purposes
2. File relationships and dependencies
3. Key architectural patterns
4. Data flow and component relationships

Respond with a structured analysis.`;

	try {
		const aiService = await getAiService({ projectRoot: config.projectRoot });
		const response = await aiService.generateStructuredOutput({
			prompt,
			schema: {
				type: 'object',
				properties: {
					directories: {
						type: 'object',
						additionalProperties: {
							type: 'object',
							properties: {
								purpose: { type: 'string' },
								importance: { type: 'string' },
								keyFiles: { type: 'array', items: { type: 'string' } },
								description: { type: 'string' }
							}
						}
					},
					architecture: {
						type: 'object',
						properties: {
							pattern: { type: 'string' },
							layers: { type: 'array', items: { type: 'string' } },
							dataFlow: { type: 'string' }
						}
					}
				}
			}
		});

		logger.info(`[Scan #3]: Analyzed ${Object.keys(response.directories || {}).length} directories`);
		return response;
	} catch (error) {
		logger.warn(`[Scan #3]: AI analysis failed, using basic structure`);
		return {
			directories: {},
			architecture: {
				pattern: 'unknown',
				layers: [],
				dataFlow: 'unknown'
			}
		};
	}
}

/**
 * Step 4: Perform detailed analysis on specific files/directories
 * @param {Object} scanResults - Raw scan results
 * @param {Object} coreStructureAnalysis - Core structure analysis
 * @param {Object} config - Scan configuration
 * @param {ScanLoggingConfig} logger - Logger instance
 * @returns {Promise<Object>} Detailed analysis
 */
async function performDetailedAnalysis(scanResults, coreStructureAnalysis, config, logger) {
	logger.info('[Scan #4+]: Performing detailed file-level analysis...');

	const importantFiles = scanResults.detailedFiles
		.filter(file => file.functions?.length > 0 || file.classes?.length > 0)
		.slice(0, 10); // Limit to most important files

	if (importantFiles.length === 0) {
		logger.info('No files requiring detailed analysis found');
		return { files: {} };
	}

	const prompt = `Analyze these key files in detail:

${importantFiles.map(file => `
File: ${file.path}
Functions: ${JSON.stringify(file.functions)}
Classes: ${JSON.stringify(file.classes)}
Imports: ${JSON.stringify(file.imports)}
Size: ${file.size} bytes, ${file.lines} lines
`).join('\n')}

For each file, provide:
1. Purpose and responsibility
2. Key functions and their roles
3. Dependencies and relationships
4. Importance to the overall architecture

Respond with detailed analysis for each file.`;

	try {
		const aiService = await getAiService({ projectRoot: config.projectRoot });
		const response = await aiService.generateStructuredOutput({
			prompt,
			schema: {
				type: 'object',
				properties: {
					files: {
						type: 'object',
						additionalProperties: {
							type: 'object',
							properties: {
								purpose: { type: 'string' },
								keyFunctions: { type: 'array', items: { type: 'string' } },
								dependencies: { type: 'array', items: { type: 'string' } },
								importance: { type: 'string' },
								description: { type: 'string' }
							}
						}
					}
				}
			}
		});

		logger.info(`[Scan #4+]: Detailed analysis completed for ${Object.keys(response.files || {}).length} files`);
		return response;
	} catch (error) {
		logger.warn(`[Scan #4+]: Detailed analysis failed`);
		return { files: {} };
	}
}

/**
 * Generate a comprehensive project summary
 * @param {Object} scanResults - Raw scan results
 * @param {Object} projectTypeAnalysis - Project type analysis
 * @param {Object} coreStructureAnalysis - Core structure analysis
 * @returns {Object} Project summary
 */
function generateProjectSummary(scanResults, projectTypeAnalysis, coreStructureAnalysis) {
	return {
		overview: `${projectTypeAnalysis.projectType} project with ${scanResults.stats.totalFiles} files across ${scanResults.stats.totalDirectories} directories`,
		languages: projectTypeAnalysis.languages,
		frameworks: projectTypeAnalysis.frameworks,
		architecture: coreStructureAnalysis.architecture?.pattern || 'standard',
		complexity: scanResults.stats.totalFiles > 100 ? 'high' : scanResults.stats.totalFiles > 50 ? 'medium' : 'low',
		keyComponents: Object.keys(coreStructureAnalysis.directories || {}).slice(0, 5)
	};
}
```
@@ -1,3 +0,0 @@
```js
// Main entry point for scan-project module
export { default } from './scan-project.js';
export { default as scanProject } from './scan-project.js';
```
@@ -1,61 +0,0 @@
```js
/**
 * Configuration classes for project scanning functionality
 */

/**
 * Configuration object for scan operations
 */
export class ScanConfig {
	constructor({
		projectRoot,
		outputPath = null,
		includeFiles = [],
		excludeFiles = ['node_modules', '.git', 'dist', 'build', '*.log'],
		scanDepth = 5,
		mcpLog = false,
		reportProgress = false,
		debug = false
	} = {}) {
		this.projectRoot = projectRoot;
		this.outputPath = outputPath;
		this.includeFiles = includeFiles;
		this.excludeFiles = excludeFiles;
		this.scanDepth = scanDepth;
		this.mcpLog = mcpLog;
		this.reportProgress = reportProgress;
		this.debug = debug;
	}
}

/**
 * Logging configuration for scan operations
 */
export class ScanLoggingConfig {
	constructor(mcpLog = false, reportProgress = false) {
		this.mcpLog = mcpLog;
		this.reportProgress = reportProgress;
	}

	report(message, level = 'info') {
		if (this.reportProgress || this.mcpLog) {
			const prefix = this.mcpLog ? '[MCP]' : '[SCAN]';
			console.log(`${prefix} ${level.toUpperCase()}: ${message}`);
		}
	}

	debug(message) {
		this.report(message, 'debug');
	}

	info(message) {
		this.report(message, 'info');
	}

	warn(message) {
		this.report(message, 'warn');
	}

	error(message) {
		this.report(message, 'error');
	}
}
```
@@ -1,422 +0,0 @@
|
||||
/**
|
||||
* Helper functions for project scanning
|
||||
*/
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import { spawn } from 'child_process';
|
||||
import { ScanLoggingConfig } from './scan-config.js';
|
||||
|
||||
/**
|
||||
* Execute ast-grep command to analyze files
|
||||
* @param {string} projectRoot - Project root directory
|
||||
* @param {string} pattern - AST pattern to search for
|
||||
* @param {Array} files - Files to analyze
|
||||
* @returns {Promise<Object>} AST analysis results
|
||||
*/
|
||||
export async function executeAstGrep(projectRoot, pattern, files = []) {
|
||||
return new Promise((resolve, reject) => {
|
||||
const astGrepPath = path.join(process.cwd(), 'node_modules/.bin/ast-grep');
|
||||
const args = ['run', '--json'];
|
||||
|
||||
if (pattern) {
|
||||
args.push('-p', pattern);
|
||||
}
|
||||
|
||||
if (files.length > 0) {
|
||||
args.push(...files);
|
||||
}
|
||||
|
||||
const child = spawn(astGrepPath, args, {
|
||||
cwd: projectRoot,
|
||||
stdio: ['pipe', 'pipe', 'pipe']
|
||||
});
|
||||
|
||||
let stdout = '';
|
||||
let stderr = '';
|
||||
|
||||
child.stdout.on('data', (data) => {
|
||||
stdout += data.toString();
|
||||
});
|
||||
|
||||
child.stderr.on('data', (data) => {
|
||||
stderr += data.toString();
|
||||
});
|
||||
|
||||
child.on('close', (code) => {
|
||||
if (code === 0) {
|
||||
try {
|
||||
const results = stdout ? JSON.parse(stdout) : [];
|
||||
resolve(results);
|
||||
} catch (error) {
|
||||
reject(new Error(`Failed to parse ast-grep output: ${error.message}`));
|
||||
}
|
||||
} else {
|
||||
reject(new Error(`ast-grep failed with code ${code}: ${stderr}`));
|
||||
}
|
||||
});
|
||||
|
||||
child.on('error', (error) => {
|
||||
reject(new Error(`Failed to execute ast-grep: ${error.message}`));
|
||||
});
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Detect project type based on files in root directory
|
||||
* @param {string} projectRoot - Project root directory
|
||||
* @returns {Object} Project type information
|
||||
*/
|
||||
export function detectProjectType(projectRoot) {
|
||||
const files = fs.readdirSync(projectRoot);
|
||||
const projectType = {
|
||||
type: 'unknown',
|
||||
frameworks: [],
|
||||
languages: [],
|
||||
buildTools: [],
|
||||
entryPoints: []
|
||||
};
|
||||
|
||||
// Check for common project indicators
|
||||
const indicators = {
|
||||
'package.json': () => {
|
||||
projectType.type = 'nodejs';
|
||||
projectType.languages.push('javascript');
|
||||
|
||||
try {
|
||||
const packageJson = JSON.parse(fs.readFileSync(path.join(projectRoot, 'package.json'), 'utf8'));
|
||||
|
||||
// Detect frameworks and libraries
|
||||
const deps = { ...packageJson.dependencies, ...packageJson.devDependencies };
|
||||
if (deps.react) projectType.frameworks.push('react');
|
||||
if (deps.next) projectType.frameworks.push('next.js');
|
||||
if (deps.express) projectType.frameworks.push('express');
|
||||
if (deps.typescript) projectType.languages.push('typescript');
|
||||
|
||||
// Find entry points
|
||||
if (packageJson.main) projectType.entryPoints.push(packageJson.main);
|
||||
if (packageJson.scripts?.start) {
|
||||
const startScript = packageJson.scripts.start;
|
||||
const match = startScript.match(/node\s+(\S+)/);
|
||||
if (match) projectType.entryPoints.push(match[1]);
|
||||
}
|
||||
} catch (error) {
|
||||
// Ignore package.json parsing errors
|
||||
}
|
||||
},
|
||||
'pom.xml': () => {
|
||||
projectType.type = 'java';
|
||||
projectType.languages.push('java');
|
||||
projectType.buildTools.push('maven');
|
||||
},
|
||||
'build.gradle': () => {
|
||||
projectType.type = 'java';
|
||||
projectType.languages.push('java');
|
||||
projectType.buildTools.push('gradle');
|
||||
},
|
||||
'requirements.txt': () => {
|
||||
projectType.type = 'python';
|
||||
projectType.languages.push('python');
|
||||
},
|
||||
'Pipfile': () => {
|
||||
projectType.type = 'python';
|
||||
projectType.languages.push('python');
|
||||
projectType.buildTools.push('pipenv');
|
||||
},
|
||||
'pyproject.toml': () => {
|
||||
projectType.type = 'python';
|
||||
projectType.languages.push('python');
|
||||
},
|
||||
'Cargo.toml': () => {
|
||||
projectType.type = 'rust';
|
||||
projectType.languages.push('rust');
|
||||
projectType.buildTools.push('cargo');
|
||||
},
|
||||
'go.mod': () => {
|
||||
projectType.type = 'go';
|
||||
projectType.languages.push('go');
|
||||
},
|
||||
'composer.json': () => {
|
||||
projectType.type = 'php';
|
||||
projectType.languages.push('php');
|
||||
},
|
||||
'Gemfile': () => {
|
||||
projectType.type = 'ruby';
|
||||
projectType.languages.push('ruby');
|
||||
}
|
||||
};
|
||||
|
||||
// Check for indicators
|
||||
for (const file of files) {
|
||||
if (indicators[file]) {
|
||||
indicators[file]();
|
||||
}
|
||||
}
|
||||
|
||||
return projectType;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get file list based on include/exclude patterns
|
||||
* @param {string} projectRoot - Project root directory
|
||||
* @param {Array} includePatterns - Patterns to include
|
||||
* @param {Array} excludePatterns - Patterns to exclude
|
||||
* @param {number} maxDepth - Maximum directory depth to scan
|
||||
* @returns {Array} List of files to analyze
|
||||
*/
|
||||
export function getFileList(projectRoot, includePatterns = [], excludePatterns = [], maxDepth = 5) {
|
||||
const files = [];
|
||||
|
||||
function scanDirectory(dirPath, depth = 0) {
|
||||
if (depth > maxDepth) return;
|
||||
|
||||
try {
|
||||
const items = fs.readdirSync(dirPath, { withFileTypes: true });
|
||||
|
||||
for (const item of items) {
|
||||
const fullPath = path.join(dirPath, item.name);
|
||||
const relativePath = path.relative(projectRoot, fullPath);
|
||||
|
||||
// Check exclude patterns
|
||||
if (shouldExclude(relativePath, excludePatterns)) {
|
||||
continue;
|
||||
}
|
||||
|
||||
if (item.isDirectory()) {
|
||||
scanDirectory(fullPath, depth + 1);
|
||||
} else if (item.isFile()) {
|
||||
// Check include patterns (if specified)
|
||||
if (includePatterns.length === 0 || shouldInclude(relativePath, includePatterns)) {
|
||||
files.push(relativePath);
|
||||
}
|
||||
}
|
||||
}
|
||||
} catch (error) {
|
||||
// Ignore permission errors and continue
|
||||
}
|
||||
}
|
||||
|
||||
scanDirectory(projectRoot);
|
||||
return files;
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if file should be excluded based on patterns
|
||||
* @param {string} filePath - File path to check
|
||||
* @param {Array} excludePatterns - Exclude patterns
|
||||
* @returns {boolean} True if should be excluded
|
||||
*/
|
||||
function shouldExclude(filePath, excludePatterns) {
|
||||
return excludePatterns.some(pattern => {
|
||||
if (pattern.includes('*')) {
|
||||
const regex = new RegExp(pattern.replace(/\*/g, '.*'));
|
||||
return regex.test(filePath);
|
||||
}
|
||||
return filePath.includes(pattern);
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if file should be included based on patterns
|
||||
* @param {string} filePath - File path to check
|
||||
* @param {Array} includePatterns - Include patterns
|
||||
* @returns {boolean} True if should be included
|
||||
*/
|
||||
function shouldInclude(filePath, includePatterns) {
|
||||
return includePatterns.some(pattern => {
|
||||
if (pattern.includes('*')) {
|
||||
const regex = new RegExp(pattern.replace(/\*/g, '.*'));
|
||||
return regex.test(filePath);
|
||||
}
|
||||
return filePath.includes(pattern);
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Analyze file content to extract key information
|
||||
* @param {string} filePath - Path to file
|
||||
* @param {string} projectRoot - Project root
|
||||
* @returns {Object} File analysis results
|
||||
*/
|
||||
export function analyzeFileContent(filePath, projectRoot) {
|
||||
try {
|
||||
const fullPath = path.join(projectRoot, filePath);
|
||||
const content = fs.readFileSync(fullPath, 'utf8');
|
||||
const ext = path.extname(filePath);
|
||||
|
||||
const analysis = {
|
||||
path: filePath,
|
||||
size: content.length,
|
||||
lines: content.split('\n').length,
|
||||
language: getLanguageFromExtension(ext),
|
||||
functions: [],
|
||||
classes: [],
|
||||
imports: [],
|
||||
exports: []
|
||||
};
|
||||
|
||||
// Basic pattern matching for common constructs
|
||||
switch (ext) {
|
||||
case '.js':
|
||||
case '.ts':
|
||||
case '.jsx':
|
||||
case '.tsx':
|
||||
analyzeJavaScriptFile(content, analysis);
|
||||
break;
|
||||
case '.py':
|
||||
analyzePythonFile(content, analysis);
|
||||
break;
|
||||
case '.java':
|
||||
analyzeJavaFile(content, analysis);
|
||||
break;
|
||||
case '.go':
|
||||
analyzeGoFile(content, analysis);
|
||||
break;
|
||||
}
|
||||
|
||||
return analysis;
|
||||
} catch (error) {
|
||||
return {
|
||||
path: filePath,
|
||||
error: error.message
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get programming language from file extension
|
||||
* @param {string} ext - File extension
|
||||
 * @returns {string} Programming language
 */
function getLanguageFromExtension(ext) {
	const langMap = {
		'.js': 'javascript',
		'.jsx': 'javascript',
		'.ts': 'typescript',
		'.tsx': 'typescript',
		'.py': 'python',
		'.java': 'java',
		'.go': 'go',
		'.rs': 'rust',
		'.php': 'php',
		'.rb': 'ruby',
		'.cpp': 'cpp',
		'.c': 'c',
		'.cs': 'csharp'
	};
	return langMap[ext] || 'unknown';
}

/**
 * Analyze JavaScript/TypeScript file content
 * @param {string} content - File content
 * @param {Object} analysis - Analysis object to populate
 */
function analyzeJavaScriptFile(content, analysis) {
	// Extract function declarations
	const functionRegex = /(?:function\s+(\w+)|const\s+(\w+)\s*=\s*(?:async\s+)?(?:function|\([^)]*\)\s*=>)|(\w+)\s*:\s*(?:async\s+)?(?:function|\([^)]*\)\s*=>))/g;
	let match;
	while ((match = functionRegex.exec(content)) !== null) {
		const functionName = match[1] || match[2] || match[3];
		if (functionName) {
			analysis.functions.push(functionName);
		}
	}

	// Extract class declarations
	const classRegex = /class\s+(\w+)/g;
	while ((match = classRegex.exec(content)) !== null) {
		analysis.classes.push(match[1]);
	}

	// Extract imports
	const importRegex = /import\s+(?:.*?\s+from\s+)?['"]([^'"]+)['"]/g;
	while ((match = importRegex.exec(content)) !== null) {
		analysis.imports.push(match[1]);
	}

	// Extract exports
	const exportRegex = /export\s+(?:default\s+)?(?:const\s+|function\s+|class\s+)?(\w+)/g;
	while ((match = exportRegex.exec(content)) !== null) {
		analysis.exports.push(match[1]);
	}
}

/**
 * Analyze Python file content
 * @param {string} content - File content
 * @param {Object} analysis - Analysis object to populate
 */
function analyzePythonFile(content, analysis) {
	// Extract function definitions
	const functionRegex = /def\s+(\w+)/g;
	let match;
	while ((match = functionRegex.exec(content)) !== null) {
		analysis.functions.push(match[1]);
	}

	// Extract class definitions
	const classRegex = /class\s+(\w+)/g;
	while ((match = classRegex.exec(content)) !== null) {
		analysis.classes.push(match[1]);
	}

	// Extract imports
	const importRegex = /(?:import\s+(\w+)|from\s+(\w+)\s+import)/g;
	while ((match = importRegex.exec(content)) !== null) {
		analysis.imports.push(match[1] || match[2]);
	}
}

/**
 * Analyze Java file content
 * @param {string} content - File content
 * @param {Object} analysis - Analysis object to populate
 */
function analyzeJavaFile(content, analysis) {
	// Extract method declarations
	const methodRegex = /(?:public|private|protected|static|\s)*\s+\w+\s+(\w+)\s*\(/g;
	let match;
	while ((match = methodRegex.exec(content)) !== null) {
		analysis.functions.push(match[1]);
	}

	// Extract class declarations
	const classRegex = /(?:public\s+)?class\s+(\w+)/g;
	while ((match = classRegex.exec(content)) !== null) {
		analysis.classes.push(match[1]);
	}

	// Extract imports
	const importRegex = /import\s+([^;]+);/g;
	while ((match = importRegex.exec(content)) !== null) {
		analysis.imports.push(match[1]);
	}
}

/**
 * Analyze Go file content
 * @param {string} content - File content
 * @param {Object} analysis - Analysis object to populate
 */
function analyzeGoFile(content, analysis) {
	// Extract function declarations
	const functionRegex = /func\s+(?:\([^)]*\)\s+)?(\w+)/g;
	let match;
	while ((match = functionRegex.exec(content)) !== null) {
		analysis.functions.push(match[1]);
	}

	// Extract type/struct declarations
	const typeRegex = /type\s+(\w+)\s+struct/g;
	while ((match = typeRegex.exec(content)) !== null) {
		analysis.classes.push(match[1]); // Treating structs as classes
	}

	// Extract imports
	const importRegex = /import\s+(?:\([^)]+\)|"([^"]+)")/g;
	while ((match = importRegex.exec(content)) !== null) {
		if (match[1]) {
			analysis.imports.push(match[1]);
		}
	}
}
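For orientation, a minimal sketch of how these helpers compose. The shape of the `analysis` object is inferred from the `push` calls above; the driver itself is hypothetical, since the helpers' exports are not shown in this hunk:

```js
import fs from 'fs';
import path from 'path';

// Hypothetical driver: pick an analyzer based on the detected language
const filePath = 'src/index.ts';
const analysis = { functions: [], classes: [], imports: [], exports: [] };
const language = getLanguageFromExtension(path.extname(filePath)); // 'typescript'

if (language === 'javascript' || language === 'typescript') {
	analyzeJavaScriptFile(fs.readFileSync(filePath, 'utf8'), analysis);
}
console.log(analysis.functions, analysis.imports);
```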
@@ -1,441 +0,0 @@
/**
 * Main scan-project functionality
 * Implements intelligent project scanning with AI-driven analysis and ast-grep integration
 */
import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import { ScanConfig, ScanLoggingConfig } from './scan-config.js';
import {
	detectProjectType,
	getFileList,
	analyzeFileContent,
	executeAstGrep
} from './scan-helpers.js';
import { analyzeWithAI } from './ai-analysis.js';

/**
 * Main scan project function
 * @param {string} projectRoot - Project root directory
 * @param {Object} options - Scan options
 * @returns {Promise<Object>} Scan results
 */
export default async function scanProject(projectRoot, options = {}) {
	const config = new ScanConfig({
		projectRoot,
		outputPath: options.outputPath,
		includeFiles: options.includeFiles || [],
		excludeFiles: options.excludeFiles || ['node_modules', '.git', 'dist', 'build', '*.log'],
		scanDepth: options.scanDepth || 5,
		mcpLog: options.mcpLog || false,
		reportProgress: options.reportProgress !== false, // Default to true
		debug: options.debug || false
	});

	const logger = new ScanLoggingConfig(config.mcpLog, config.reportProgress);
	logger.info('Starting intelligent project scan...');

	try {
		// Phase 1: Initial project discovery
		logger.info('Phase 1: Discovering project structure...');
		const initialScan = await performInitialScan(config, logger);

		// Phase 2: File-level analysis
		logger.info('Phase 2: Analyzing individual files...');
		const fileAnalysis = await performFileAnalysis(config, initialScan, logger);

		// Phase 3: AST-grep enhanced analysis
		logger.info('Phase 3: Performing AST analysis...');
		const astAnalysis = await performASTAnalysis(config, fileAnalysis, logger);

		// Phase 4: AI-powered analysis (optional)
		let aiAnalysis = null;
		if (!options.skipAI) {
			logger.info('Phase 4: Enhancing with AI analysis...');
			try {
				aiAnalysis = await analyzeWithAI({
					...initialScan,
					...fileAnalysis,
					...astAnalysis
				}, config);
			} catch (error) {
				logger.warn(`AI analysis failed, continuing without it: ${error.message}`);
				aiAnalysis = {
					projectType: { confidence: 0 },
					coreStructure: { architecture: { pattern: 'unknown' } },
					summary: { complexity: 'unknown' }
				};
			}
		} else {
			logger.info('Phase 4: Skipping AI analysis...');
			aiAnalysis = {
				projectType: { confidence: 0 },
				coreStructure: { architecture: { pattern: 'unknown' } },
				summary: { complexity: 'unknown' }
			};
		}

		// Phase 5: Generate final output
		const finalResults = {
			timestamp: new Date().toISOString(),
			projectRoot: config.projectRoot,
			scanConfig: {
				excludeFiles: config.excludeFiles,
				scanDepth: config.scanDepth
			},
			...initialScan,
			...fileAnalysis,
			...astAnalysis,
			aiAnalysis,
			scanSummary: generateScanSummary(initialScan, fileAnalysis, aiAnalysis)
		};

		// Save results if output path is specified
		if (config.outputPath) {
			await saveResults(finalResults, config.outputPath, logger);
		}

		logger.info('Project scan completed successfully');
		return {
			success: true,
			data: finalResults
		};

	} catch (error) {
		logger.error(`Scan failed: ${error.message}`);
		return {
			success: false,
			error: {
				message: error.message,
				stack: config.debug ? error.stack : undefined
			}
		};
	}
}

/**
 * Phase 1: Perform initial project discovery
 * @param {ScanConfig} config - Scan configuration
 * @param {ScanLoggingConfig} logger - Logger instance
 * @returns {Promise<Object>} Initial scan results
 */
async function performInitialScan(config, logger) {
	logger.info('[Initial Scan]: Discovering project type and structure...');

	// Detect project type
	const projectType = detectProjectType(config.projectRoot);
	logger.info(`[Initial Scan]: Detected ${projectType.type} project`);

	// Get root-level files
	const rootFiles = fs.readdirSync(config.projectRoot)
		.filter(item => {
			const fullPath = path.join(config.projectRoot, item);
			return fs.statSync(fullPath).isFile();
		});

	// Get directory structure (first level)
	const directories = fs.readdirSync(config.projectRoot)
		.filter(item => {
			const fullPath = path.join(config.projectRoot, item);
			return fs.statSync(fullPath).isDirectory() &&
				!config.excludeFiles.includes(item);
		})
		.map(dir => {
			const dirPath = path.join(config.projectRoot, dir);
			try {
				const files = fs.readdirSync(dirPath);
				return {
					name: dir,
					path: dirPath,
					fileCount: files.length,
					files: files.slice(0, 10) // Sample of files
				};
			} catch (error) {
				return {
					name: dir,
					path: dirPath,
					error: 'Access denied'
				};
			}
		});

	// Get complete file list for scanning
	const fileList = getFileList(
		config.projectRoot,
		config.includeFiles,
		config.excludeFiles,
		config.scanDepth
	);

	// Calculate basic statistics
	const stats = {
		totalFiles: fileList.length,
		totalDirectories: directories.length,
		rootFiles: rootFiles.length,
		languages: [...new Set(fileList.map(f => {
			const ext = path.extname(f);
			return ext ? ext.substring(1) : 'unknown';
		}))],
		largestFiles: fileList
			.map(f => {
				try {
					const fullPath = path.join(config.projectRoot, f);
					const stats = fs.statSync(fullPath);
					return { path: f, size: stats.size };
				} catch {
					return { path: f, size: 0 };
				}
			})
			.sort((a, b) => b.size - a.size)
			.slice(0, 10)
	};

	logger.info(`[Initial Scan]: Found ${stats.totalFiles} files in ${stats.totalDirectories} directories`);

	return {
		projectType,
		rootFiles,
		directories,
		fileList,
		stats
	};
}

/**
 * Phase 2: Perform detailed file analysis
 * @param {ScanConfig} config - Scan configuration
 * @param {Object} initialScan - Initial scan results
 * @param {ScanLoggingConfig} logger - Logger instance
 * @returns {Promise<Object>} File analysis results
 */
async function performFileAnalysis(config, initialScan, logger) {
	logger.info('[File Analysis]: Analyzing file contents...');

	const { fileList, projectType } = initialScan;

	// Filter files for detailed analysis (avoid binary files, focus on source code)
	const sourceExtensions = ['.js', '.ts', '.jsx', '.tsx', '.py', '.java', '.go', '.rs', '.php', '.rb', '.cpp', '.c', '.cs'];
	const sourceFiles = fileList.filter(file => {
		const ext = path.extname(file);
		return sourceExtensions.includes(ext) || projectType.entryPoints.includes(file);
	}).slice(0, 100); // Limit to prevent excessive processing

	logger.info(`[File Analysis]: Analyzing ${sourceFiles.length} source files...`);

	// Analyze files
	const detailedFiles = sourceFiles.map(file => {
		try {
			return analyzeFileContent(file, config.projectRoot);
		} catch (error) {
			logger.warn(`[File Analysis]: Failed to analyze ${file}: ${error.message}`);
			return { path: file, error: error.message };
		}
	}).filter(result => !result.error);

	// Group by language
	const byLanguage = detailedFiles.reduce((acc, file) => {
		const lang = file.language || 'unknown';
		if (!acc[lang]) acc[lang] = [];
		acc[lang].push(file);
		return acc;
	}, {});

	// Extract key statistics
	const codeStats = {
		totalLines: detailedFiles.reduce((sum, f) => sum + (f.lines || 0), 0),
		totalFunctions: detailedFiles.reduce((sum, f) => sum + (f.functions?.length || 0), 0),
		totalClasses: detailedFiles.reduce((sum, f) => sum + (f.classes?.length || 0), 0),
		languageBreakdown: Object.keys(byLanguage).map(lang => ({
			language: lang,
			files: byLanguage[lang].length,
			lines: byLanguage[lang].reduce((sum, f) => sum + (f.lines || 0), 0)
		}))
	};

	logger.info(`[File Analysis]: Analyzed ${detailedFiles.length} files, ${codeStats.totalLines} lines, ${codeStats.totalFunctions} functions`);

	return {
		detailedFiles,
		byLanguage,
		codeStats
	};
}

/**
 * Phase 3: Perform AST-grep enhanced analysis
 * @param {ScanConfig} config - Scan configuration
 * @param {Object} fileAnalysis - File analysis results
 * @param {ScanLoggingConfig} logger - Logger instance
 * @returns {Promise<Object>} AST analysis results
 */
async function performASTAnalysis(config, fileAnalysis, logger) {
	logger.info('[AST Analysis]: Performing syntax tree analysis...');

	const { detailedFiles } = fileAnalysis;

	// Select files for AST analysis (focus on main source files)
	const astTargetFiles = detailedFiles
		.filter(file => file.functions?.length > 0 || file.classes?.length > 0)
		.slice(0, 20) // Limit for performance
		.map(file => file.path);

	if (astTargetFiles.length === 0) {
		logger.info('[AST Analysis]: No suitable files found for AST analysis');
		return { astResults: {} };
	}

	logger.info(`[AST Analysis]: Analyzing ${astTargetFiles.length} files with ast-grep...`);

	const astResults = {};

	// Define common patterns to search for
	const patterns = {
		functions: {
			javascript: 'function $_($$$) { $$$ }',
			typescript: 'function $_($$$): $_ { $$$ }',
			python: 'def $_($$$): $$$',
			java: '$_ $_($$$ args) { $$$ }'
		},
		classes: {
			javascript: 'class $_ { $$$ }',
			typescript: 'class $_ { $$$ }',
			python: 'class $_: $$$',
			java: 'class $_ { $$$ }'
		},
		imports: {
			javascript: 'import $_ from $_',
			typescript: 'import $_ from $_',
			python: 'import $_',
			java: 'import $_;'
		}
	};

	// Run AST analysis for different languages
	for (const [language, files] of Object.entries(fileAnalysis.byLanguage || {})) {
		if (patterns.functions[language] && files.length > 0) {
			try {
				logger.debug(`[AST Analysis]: Analyzing ${language} files...`);

				const langFiles = files.map(f => f.path).filter(path => astTargetFiles.includes(path));
				if (langFiles.length > 0) {
					// Run ast-grep for functions
					const functionResults = await executeAstGrep(
						config.projectRoot,
						patterns.functions[language],
						langFiles
					);

					// Run ast-grep for classes
					const classResults = await executeAstGrep(
						config.projectRoot,
						patterns.classes[language],
						langFiles
					);

					astResults[language] = {
						functions: functionResults || [],
						classes: classResults || [],
						files: langFiles
					};
				}
			} catch (error) {
				logger.warn(`[AST Analysis]: AST analysis failed for ${language}: ${error.message}`);
				// Continue with other languages
			}
		}
	}

	const totalMatches = Object.values(astResults).reduce((sum, lang) =>
		sum + (lang.functions?.length || 0) + (lang.classes?.length || 0), 0);

	logger.info(`[AST Analysis]: Found ${totalMatches} AST matches across ${Object.keys(astResults).length} languages`);

	return { astResults };
}

/**
 * Generate scan summary
 * @param {Object} initialScan - Initial scan results
 * @param {Object} fileAnalysis - File analysis results
 * @param {Object} aiAnalysis - AI analysis results
 * @returns {Object} Scan summary
 */
function generateScanSummary(initialScan, fileAnalysis, aiAnalysis) {
	return {
		overview: `Scanned ${initialScan.stats.totalFiles} files across ${initialScan.stats.totalDirectories} directories`,
		projectType: initialScan.projectType.type,
		languages: initialScan.stats.languages,
		codeMetrics: {
			totalLines: fileAnalysis.codeStats?.totalLines || 0,
			totalFunctions: fileAnalysis.codeStats?.totalFunctions || 0,
			totalClasses: fileAnalysis.codeStats?.totalClasses || 0
		},
		aiInsights: {
			confidence: aiAnalysis.projectType?.confidence || 0,
			architecture: aiAnalysis.coreStructure?.architecture?.pattern || 'unknown',
			complexity: aiAnalysis.summary?.complexity || 'unknown'
		},
		recommendations: generateRecommendations(initialScan, fileAnalysis, aiAnalysis)
	};
}

/**
 * Generate recommendations based on scan results
 * @param {Object} initialScan - Initial scan results
 * @param {Object} fileAnalysis - File analysis results
 * @param {Object} aiAnalysis - AI analysis results
 * @returns {Array} List of recommendations
 */
function generateRecommendations(initialScan, fileAnalysis, aiAnalysis) {
	const recommendations = [];

	// Size-based recommendations
	if (initialScan.stats.totalFiles > 500) {
		recommendations.push('Consider using a monorepo management tool for large codebase');
	}

	// Language-specific recommendations
	const jsFiles = fileAnalysis.byLanguage?.javascript?.length || 0;
	const tsFiles = fileAnalysis.byLanguage?.typescript?.length || 0;

	if (jsFiles > tsFiles && jsFiles > 10) {
		recommendations.push('Consider migrating JavaScript files to TypeScript for better type safety');
	}

	// Documentation recommendations
	const readmeExists = initialScan.rootFiles.some(f => f.toLowerCase().includes('readme'));
	if (!readmeExists) {
		recommendations.push('Add a README.md file to document the project');
	}

	// Testing recommendations
	const hasTests = initialScan.fileList.some(f => f.includes('test') || f.includes('spec'));
	if (!hasTests) {
		recommendations.push('Consider adding unit tests to improve code quality');
	}

	return recommendations;
}

/**
 * Save scan results to file
 * @param {Object} results - Scan results
 * @param {string} outputPath - Output file path
 * @param {ScanLoggingConfig} logger - Logger instance
 */
async function saveResults(results, outputPath, logger) {
	try {
		// Ensure output directory exists
		const outputDir = path.dirname(outputPath);
		if (!fs.existsSync(outputDir)) {
			fs.mkdirSync(outputDir, { recursive: true });
		}

		// Write results to file
		fs.writeFileSync(outputPath, JSON.stringify(results, null, 2));
		logger.info(`Scan results saved to: ${outputPath}`);
	} catch (error) {
		logger.error(`Failed to save results: ${error.message}`);
		throw error;
	}
}
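For context on what this compare removes: the module above was invoked roughly as follows. The options mirror those read by `scanProject`; the call site itself is illustrative, not taken from the repo:

```js
import scanProject from './scan-project.js';

const result = await scanProject(process.cwd(), {
	outputPath: 'project_scan.json', // persist the JSON report (Phase 5)
	scanDepth: 3,                    // directory recursion limit
	skipAI: true                     // skip Phase 4 for faster, offline results
});

if (result.success) {
	console.log(result.data.scanSummary.overview);
}
```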
@@ -5,6 +5,40 @@ import { isSilentMode, log } from '../../scripts/modules/utils.js';
import { createProfile, COMMON_TOOL_MAPPINGS } from './base-profile.js';
import { ROO_MODES } from '../constants/profiles.js';

// Import the shared MCP configuration helper
import { formatJSONWithTabs } from '../utils/create-mcp-config.js';

// Roo-specific MCP configuration enhancements
function enhanceRooMCPConfiguration(mcpPath) {
	if (!fs.existsSync(mcpPath)) {
		log('warn', `[Roo] MCP configuration file not found at ${mcpPath}`);
		return;
	}

	try {
		// Read the existing configuration
		const mcpConfig = JSON.parse(fs.readFileSync(mcpPath, 'utf8'));

		if (mcpConfig.mcpServers && mcpConfig.mcpServers['task-master-ai']) {
			const server = mcpConfig.mcpServers['task-master-ai'];

			// Add Roo-specific timeout enhancement for long-running AI operations
			server.timeout = 300;

			// Write the enhanced configuration back
			fs.writeFileSync(mcpPath, formatJSONWithTabs(mcpConfig) + '\n');
			log(
				'debug',
				`[Roo] Enhanced MCP configuration with timeout at ${mcpPath}`
			);
		} else {
			log('warn', `[Roo] task-master-ai server not found in MCP configuration`);
		}
	} catch (error) {
		log('error', `[Roo] Failed to enhance MCP configuration: ${error.message}`);
	}
}
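The enhancement above only sets `timeout` on the existing `task-master-ai` entry. A sketch of the resulting `.roo/mcp.json`, where `command` and `args` are illustrative placeholders written earlier by the base profile's MCP setup:

```json
{
	"mcpServers": {
		"task-master-ai": {
			"command": "npx",
			"args": ["-y", "task-master-ai"],
			"timeout": 300
		}
	}
}
```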

// Lifecycle functions for Roo profile
function onAddRulesProfile(targetDir, assetsDir) {
	// Use the provided assets directory to find the roocode directory
@@ -32,6 +66,9 @@ function onAddRulesProfile(targetDir, assetsDir) {
		}
	}

	// Note: MCP configuration is now handled by the base profile system
	// The base profile will call setupMCPConfiguration, and we enhance it in onPostConvert

	for (const mode of ROO_MODES) {
		const src = path.join(rooModesDir, `rules-${mode}`, `${mode}-rules`);
		const dest = path.join(targetDir, '.roo', `rules-${mode}`, `${mode}-rules`);
@@ -78,6 +115,15 @@ function onRemoveRulesProfile(targetDir) {

	const rooDir = path.join(targetDir, '.roo');
	if (fs.existsSync(rooDir)) {
		// Remove MCP configuration
		const mcpPath = path.join(rooDir, 'mcp.json');
		try {
			fs.rmSync(mcpPath, { force: true });
			log('debug', `[Roo] Removed MCP configuration from ${mcpPath}`);
		} catch (err) {
			log('error', `[Roo] Failed to remove MCP configuration: ${err.message}`);
		}

		fs.readdirSync(rooDir).forEach((entry) => {
			if (entry.startsWith('rules-')) {
				const modeDir = path.join(rooDir, entry);
@@ -101,7 +147,13 @@ function onRemoveRulesProfile(targetDir) {
}

function onPostConvertRulesProfile(targetDir, assetsDir) {
	onAddRulesProfile(targetDir, assetsDir);
	// Enhance the MCP configuration with Roo-specific features after base setup
	const mcpPath = path.join(targetDir, '.roo', 'mcp.json');
	try {
		enhanceRooMCPConfiguration(mcpPath);
	} catch (err) {
		log('error', `[Roo] Failed to enhance MCP configuration: ${err.message}`);
	}
}

// Create and export roo profile using the base factory
@@ -111,6 +163,7 @@ export const rooProfile = createProfile({
	url: 'roocode.com',
	docsUrl: 'docs.roocode.com',
	toolMappings: COMMON_TOOL_MAPPINGS.ROO_STYLE,
	mcpConfig: true, // Enable MCP config - we enhance it with Roo-specific features
	onAdd: onAddRulesProfile,
	onRemove: onRemoveRulesProfile,
	onPostConvert: onPostConvertRulesProfile
@@ -262,3 +262,6 @@ export function removeTaskMasterMCPConfiguration(projectRoot, mcpConfigPath) {

	return result;
}

// Export the formatting function for use by other modules
export { formatJSONWithTabs };
@@ -26,7 +26,7 @@ describe('Roo Profile Initialization Functionality', () => {
		expect(rooProfile.displayName).toBe('Roo Code');
		expect(rooProfile.profileDir).toBe('.roo'); // default
		expect(rooProfile.rulesDir).toBe('.roo/rules'); // default
		expect(rooProfile.mcpConfig).toBe(true); // default
		expect(rooProfile.mcpConfig).toBe(true); // now uses standard MCP configuration with Roo enhancements
	});

	test('roo.js uses custom ROO_STYLE tool mappings', () => {
@@ -266,10 +266,10 @@ describe('MCP Configuration Validation', () => {
		expect(mcpEnabledProfiles).toContain('cursor');
		expect(mcpEnabledProfiles).toContain('gemini');
		expect(mcpEnabledProfiles).toContain('opencode');
		expect(mcpEnabledProfiles).toContain('roo');
		expect(mcpEnabledProfiles).toContain('vscode');
		expect(mcpEnabledProfiles).toContain('windsurf');
		expect(mcpEnabledProfiles).toContain('zed');
		expect(mcpEnabledProfiles).toContain('roo');
		expect(mcpEnabledProfiles).not.toContain('cline');
		expect(mcpEnabledProfiles).not.toContain('codex');
		expect(mcpEnabledProfiles).not.toContain('trae');
@@ -384,6 +384,7 @@ describe('MCP Configuration Validation', () => {
			'claude',
			'cursor',
			'gemini',
			'kiro',
			'opencode',
			'roo',
			'windsurf',