Compare commits — 1 commit
claude/iss ... claude/iss
Commit: 57ed3a37e4

.changeset/intelligent-scan-command.md (new file, 23 lines)
@@ -0,0 +1,23 @@
---
"task-master-ai": minor
---

Add intelligent `scan` command for automated codebase analysis

Introduces a comprehensive project scanning feature that intelligently analyzes codebases using ast-grep and AI-powered analysis. The new `task-master scan` command provides:

- **Multi-phase Analysis**: Performs iterative scanning (project type identification → entry points → core structure → recursive deepening)
- **AST-grep Integration**: Uses ast-grep as an AI SDK tool for advanced code structure analysis
- **AI Enhancement**: Optional AI-powered analysis for intelligent project understanding
- **Structured Output**: Generates detailed JSON reports with file/directory summaries
- **Transparent Logging**: Clear progress indicators showing each analysis phase
- **Configurable Options**: Supports custom include/exclude patterns, scan depth, and output paths

This feature addresses the challenge of quickly understanding existing project structures when adopting Task Master, significantly streamlining initial setup and project onboarding.

Usage:

```bash
task-master scan --output=project_scan.json
task-master scan --include="*.js,*.ts" --exclude="*.test.*" --depth=3
task-master scan --no-ai  # Skip AI analysis for faster results
```
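For reference, a minimal sketch of the kind of report the command writes. The field names follow the scan summary the CLI prints (project type, file counts, languages, code metrics, recommendations); the exact shape and the values shown are illustrative assumptions:

```json
{
  "scanSummary": {
    "projectType": "Node.js",
    "languages": ["JavaScript", "TypeScript"],
    "codeMetrics": {
      "totalLines": 12450,
      "totalFunctions": 310,
      "totalClasses": 42
    },
    "recommendations": ["Exclude the dist/ directory from future scans"]
  },
  "stats": {
    "totalFiles": 187
  }
}
```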
@@ -102,35 +102,6 @@ Task Master provides an MCP server that Claude Code can connect to. Configure in
|
||||
}
|
||||
```
|
||||
|
||||
For Windows users without WSL, use this configuration instead:
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"command": "cmd",
|
||||
"args": ["/c", "npx -y --package=task-master-ai task-master-ai"],
|
||||
"env": {
|
||||
"ANTHROPIC_API_KEY": "your_key_here",
|
||||
"PERPLEXITY_API_KEY": "your_key_here",
|
||||
"OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
|
||||
"GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
|
||||
"XAI_API_KEY": "XAI_API_KEY_HERE",
|
||||
"OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
|
||||
"MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
|
||||
"AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
|
||||
"OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Or install at the project level with Claude Code:
|
||||
```bash
|
||||
claude mcp add task-master-mcp -s project -- cmd /c "npx -y --package=task-master-ai task-master-ai"
|
||||
```
|
||||
|
||||
### Essential MCP Tools
|
||||
|
||||
```javascript
|
||||
|
||||
@@ -124,7 +124,6 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
|
||||
|
||||
> **Note**: If you see `0 tools enabled` in the MCP settings, restart your editor and check that your API keys are correctly configured.
|
||||
|
||||
|
||||
###### VS Code (`servers` + `type`)
|
||||
|
||||
```json
|
||||
|
||||
@@ -1,275 +0,0 @@
|
||||
---
|
||||
title: Claude Code Setup
|
||||
sidebarTitle: "Claude Code"
|
||||
---
|
||||
|
||||
<div className="flex items-center space-x-4 mb-6">
|
||||
<img src="/logo/claude-logo.svg" className="w-12 h-12" alt="Claude Code" />
|
||||
<div>
|
||||
<h1 className="text-2xl font-bold">Claude Code</h1>
|
||||
<p className="text-gray-600">Anthropic's official CLI for Claude</p>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
Claude Code offers the smoothest Task Master experience with **zero API key setup** and direct Claude integration.
|
||||
|
||||
## 🎯 Why Choose Claude Code?
|
||||
|
||||
<div className="grid grid-cols-1 md:grid-cols-2 gap-4 mb-6">
|
||||
<div className="bg-blue-50 dark:bg-blue-900/20 p-4 rounded-lg border border-blue-200 dark:border-blue-800">
|
||||
<h3 className="font-semibold text-blue-800 dark:text-blue-200 mb-2">🔓 No API Keys</h3>
|
||||
<p className="text-sm text-blue-700 dark:text-blue-300">Uses your existing Claude subscription - no separate API setup needed</p>
|
||||
</div>
|
||||
<div className="bg-green-50 dark:bg-green-900/20 p-4 rounded-lg border border-green-200 dark:border-green-800">
|
||||
<h3 className="font-semibold text-green-800 dark:text-green-200 mb-2">⚡ Native Integration</h3>
|
||||
<p className="text-sm text-green-700 dark:text-green-300">Built specifically for Claude - seamless Task Master experience</p>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
## 📦 Installation
|
||||
|
||||
### Step 1: Install Claude Code
|
||||
|
||||
Follow the [official Claude Code installation guide](https://docs.anthropic.com/en/docs/claude-code) or use our quick setup:
|
||||
|
||||
<Tabs>
|
||||
<Tab title="macOS">
|
||||
```bash
|
||||
# Install via Homebrew (recommended)
|
||||
brew install claude-code
|
||||
|
||||
# Or download from GitHub releases
|
||||
curl -L https://github.com/anthropics/claude-code/releases/latest/download/claude-code-macos.tar.gz | tar xz
|
||||
sudo mv claude-code /usr/local/bin/
|
||||
```
|
||||
</Tab>
|
||||
|
||||
<Tab title="Windows">
|
||||
```powershell
|
||||
# Download and install from GitHub releases
|
||||
# Visit: https://github.com/anthropics/claude-code/releases/latest
|
||||
# Download: claude-code-windows.exe
|
||||
|
||||
# Or use winget (if available)
|
||||
winget install Anthropic.ClaudeCode
|
||||
```
|
||||
</Tab>
|
||||
|
||||
<Tab title="Linux">
|
||||
```bash
|
||||
# Download from GitHub releases
|
||||
curl -L https://github.com/anthropics/claude-code/releases/latest/download/claude-code-linux.tar.gz | tar xz
|
||||
sudo mv claude-code /usr/local/bin/
|
||||
|
||||
# Or install via package manager (if available)
|
||||
sudo apt install claude-code # Ubuntu/Debian
|
||||
sudo yum install claude-code # RHEL/CentOS
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
### Step 2: Authenticate with Claude
|
||||
|
||||
```bash
|
||||
# Login to your Claude account
|
||||
claude auth login
|
||||
```
|
||||
|
||||
Follow the prompts to authenticate with your Anthropic account.
|
||||
|
||||
## 🔧 Task Master Integration
|
||||
|
||||
### Method 1: MCP Integration (Recommended)
|
||||
|
||||
Add Task Master to your Claude Code MCP configuration:
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Global Setup">
|
||||
```bash
|
||||
# Add Task Master MCP server globally
|
||||
claude mcp add task-master-ai -s global -- npx -y task-master-ai
|
||||
```
|
||||
</Tab>
|
||||
|
||||
<Tab title="Project Setup">
|
||||
```bash
|
||||
# Add for current project only
|
||||
claude mcp add task-master-ai -s project -- npx -y task-master-ai
|
||||
```
|
||||
</Tab>
|
||||
|
||||
<Tab title="Windows">
|
||||
```bash
|
||||
# Windows-specific command
|
||||
claude mcp add task-master-mcp -s project -- cmd /c "npx -y --package=task-master-ai task-master-ai"
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
### Method 2: Direct CLI Usage
|
||||
|
||||
You can also use Task Master commands directly alongside Claude Code:
|
||||
|
||||
```bash
|
||||
# Initialize Task Master in your project
|
||||
npx task-master-ai init
|
||||
|
||||
# Use Claude Code for AI interactions
|
||||
claude "Help me implement the next task"
|
||||
|
||||
# Use Task Master for task management
|
||||
npx task-master-ai next
|
||||
npx task-master-ai show 1.2
|
||||
```
|
||||
|
||||
## 🚀 Getting Started
|
||||
|
||||
### 1. Initialize Your Project
|
||||
|
||||
In your project directory:
|
||||
|
||||
```bash
|
||||
# Start Claude Code
|
||||
claude
|
||||
|
||||
# In the Claude Code chat, initialize Task Master
|
||||
Initialize taskmaster-ai in my project
|
||||
```
|
||||
|
||||
### 2. Configure Models (Optional)
|
||||
|
||||
Since Claude Code doesn't need API keys, you can use it as your main model:
|
||||
|
||||
```
|
||||
Change the main model to claude-code/sonnet
|
||||
```
|
||||
|
||||
Available Claude Code models:
|
||||
- `claude-code/sonnet` - Claude 3.5 Sonnet (recommended)
|
||||
- `claude-code/opus` - Claude 3 Opus (for complex tasks)
|
||||
|
||||
### 3. Create Your First Tasks
|
||||
|
||||
```
|
||||
Can you parse my PRD and create tasks for building a todo app?
|
||||
```
|
||||
|
||||
## 💡 Advanced Configuration
|
||||
|
||||
### Hybrid Setup
|
||||
|
||||
Use Claude Code as your main model with other APIs for research:
|
||||
|
||||
Create `.env` in your project:
|
||||
```bash
|
||||
# Optional: Add research capabilities
|
||||
PERPLEXITY_API_KEY=your_perplexity_key_here
|
||||
OPENAI_API_KEY=your_openai_key_here
|
||||
```
|
||||
|
||||
Then configure models:
|
||||
```
|
||||
Change the main model to claude-code/sonnet and research model to perplexity-llama-3.1-sonar-large-128k-online
|
||||
```
|
||||
|
||||
### Multi-Session Workflows
|
||||
|
||||
Claude Code excels at parallel development:
|
||||
|
||||
```bash
|
||||
# Terminal 1: Main development
|
||||
cd my-project && claude
|
||||
|
||||
# Terminal 2: Testing and validation
|
||||
cd my-project && claude
|
||||
|
||||
# Terminal 3: Documentation
|
||||
cd my-project && claude
|
||||
```
|
||||
|
||||
Each session maintains Task Master context while allowing focused work streams.
|
||||
|
||||
## 🔍 Troubleshooting
|
||||
|
||||
<Accordion title="Claude Code not found">
|
||||
- **Check installation**: Run `claude --version` to verify installation
|
||||
- **Update PATH**: Ensure Claude Code is in your system PATH
|
||||
- **Reinstall**: Try reinstalling Claude Code from scratch
|
||||
- **Permissions**: Check file permissions for the claude binary
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Authentication issues">
|
||||
- **Re-login**: Run `claude auth logout` then `claude auth login`
|
||||
- **Check account**: Verify your Anthropic account is active
|
||||
- **Network issues**: Check if you're behind a proxy or firewall
|
||||
- **Clear cache**: Delete `~/.claude` directory and re-authenticate
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="MCP server connection fails">
|
||||
- **Check Node.js**: Ensure Node.js 16+ is installed
|
||||
- **Test manually**: Run `npx task-master-ai` to test the server
|
||||
- **Clear MCP cache**: Remove and re-add the MCP server
|
||||
- **Check permissions**: Ensure npm can install packages
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Task Master commands not working">
|
||||
- **Verify MCP**: Run `claude mcp list` to see installed servers
|
||||
- **Re-add server**: Remove and re-add the task-master-ai MCP server
|
||||
- **Check initialization**: Ensure project is initialized with `Initialize taskmaster-ai`
|
||||
- **Review logs**: Check Claude Code logs for error messages
|
||||
</Accordion>
|
||||
|
||||
## 💡 Pro Tips
|
||||
|
||||
<Tip>
|
||||
**Use headless mode** for automation: `claude -p "What's the next task I should work on?"` gives quick answers without opening the full chat interface.
|
||||
</Tip>
|
||||
|
||||
<Tip>
|
||||
**Create custom commands** using Claude Code's command system for repeated Task Master workflows like "complete task and get next".
|
||||
</Tip>
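For instance, a minimal sketch of such a command, assuming Claude Code's convention of markdown files under `.claude/commands/` (the file name and prompt wording here are illustrative):

```bash
# Create a reusable slash command for Claude Code
mkdir -p .claude/commands
cat > .claude/commands/complete-and-next.md << 'EOF'
Mark the current Task Master task as done, then show me the next task to work on.
EOF
# In the Claude Code chat, invoke it as /complete-and-next
```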
|
||||
|
||||
<Tip>
|
||||
**Leverage context persistence** - Claude Code maintains conversation history, making it perfect for long-running development sessions.
|
||||
</Tip>
|
||||
|
||||
## 🎯 Best Practices
|
||||
|
||||
### Development Workflow
|
||||
|
||||
```bash
|
||||
# Morning routine
|
||||
claude "Show me today's tasks and priorities"
|
||||
|
||||
# During development
|
||||
claude "Help me implement task 2.1"
|
||||
claude "Update task 2.1 with implementation notes"
|
||||
|
||||
# End of day
|
||||
claude "Mark completed tasks as done and show tomorrow's priorities"
|
||||
```
|
||||
|
||||
### Team Collaboration
|
||||
|
||||
```bash
|
||||
# Share task status
|
||||
claude "Generate a progress report for the team"
|
||||
|
||||
# Review dependencies
|
||||
claude "Check which tasks are blocked and why"
|
||||
|
||||
# Planning sessions
|
||||
claude "Analyze complexity of remaining tasks"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
<div className="bg-purple-50 dark:bg-purple-900/20 p-4 rounded-lg border border-purple-200 dark:border-purple-800">
|
||||
<div className="flex items-center space-x-2 mb-2">
|
||||
<span className="text-purple-600 dark:text-purple-400 text-lg">🎉</span>
|
||||
<h3 className="font-semibold text-purple-800 dark:text-purple-200">You're all set with Claude Code!</h3>
|
||||
</div>
|
||||
<p className="text-purple-700 dark:text-purple-300">
|
||||
Claude Code offers the most seamless Task Master experience. Ready to create your first project? Check out our <a href="/getting-started/quick-start/prd-quick" className="underline">PRD guide</a>.
|
||||
</p>
|
||||
</div>
|
||||
@@ -1,373 +0,0 @@
|
||||
---
|
||||
title: Command Line Setup
|
||||
sidebarTitle: "CLI"
|
||||
---
|
||||
|
||||
<div className="flex items-center space-x-4 mb-6">
|
||||
<img src="/logo/terminal-logo.svg" className="w-12 h-12" alt="Terminal" />
|
||||
<div>
|
||||
<h1 className="text-2xl font-bold">Command Line Interface</h1>
|
||||
<p className="text-gray-600">Direct CLI usage without IDE integration</p>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
Use Task Master directly from the command line for maximum flexibility and control.
|
||||
|
||||
## 🎯 Why Choose CLI?
|
||||
|
||||
<div className="grid grid-cols-1 md:grid-cols-2 gap-4 mb-6">
|
||||
<div className="bg-blue-50 dark:bg-blue-900/20 p-4 rounded-lg border border-blue-200 dark:border-blue-800">
|
||||
<h3 className="font-semibold text-blue-800 dark:text-blue-200 mb-2">🚀 Maximum Performance</h3>
|
||||
<p className="text-sm text-blue-700 dark:text-blue-300">No IDE overhead - pure command line speed</p>
|
||||
</div>
|
||||
<div className="bg-green-50 dark:bg-green-900/20 p-4 rounded-lg border border-green-200 dark:border-green-800">
|
||||
<h3 className="font-semibold text-green-800 dark:text-green-200 mb-2">🔧 Full Control</h3>
|
||||
<p className="text-sm text-green-700 dark:text-green-300">Access to all Task Master features and configurations</p>
|
||||
</div>
|
||||
<div className="bg-purple-50 dark:bg-purple-900/20 p-4 rounded-lg border border-purple-200 dark:border-purple-800">
|
||||
<h3 className="font-semibold text-purple-800 dark:text-purple-200 mb-2">📜 Scriptable</h3>
|
||||
<p className="text-sm text-purple-700 dark:text-purple-300">Perfect for automation and CI/CD integration</p>
|
||||
</div>
|
||||
<div className="bg-orange-50 dark:bg-orange-900/20 p-4 rounded-lg border border-orange-200 dark:border-orange-800">
|
||||
<h3 className="font-semibold text-orange-800 dark:text-orange-200 mb-2">🌐 Universal</h3>
|
||||
<p className="text-sm text-orange-700 dark:text-orange-300">Works on any system with Node.js</p>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
## 📦 Installation
|
||||
|
||||
### Global Installation (Recommended)
|
||||
|
||||
```bash
|
||||
# Install Task Master globally
|
||||
npm install -g task-master-ai
|
||||
|
||||
# Verify installation
|
||||
task-master --version
|
||||
```
|
||||
|
||||
### Local Installation
|
||||
|
||||
```bash
|
||||
# Install in your project
|
||||
npm install task-master-ai
|
||||
|
||||
# Use with npx
|
||||
npx task-master-ai --version
|
||||
```
|
||||
|
||||
## 🔧 Configuration
|
||||
|
||||
### Step 1: Set Up API Keys
|
||||
|
||||
Create a `.env` file in your project root:
|
||||
|
||||
```bash
|
||||
# At least one of these is required
|
||||
ANTHROPIC_API_KEY=your_anthropic_key_here
|
||||
PERPLEXITY_API_KEY=your_perplexity_key_here # Recommended for research
|
||||
OPENAI_API_KEY=your_openai_key_here
|
||||
|
||||
# Optional additional providers
|
||||
GOOGLE_API_KEY=your_google_key_here
|
||||
MISTRAL_API_KEY=your_mistral_key_here
|
||||
OPENROUTER_API_KEY=your_openrouter_key_here
|
||||
XAI_API_KEY=your_xai_key_here
|
||||
```
|
||||
|
||||
### Step 2: Configure Models
|
||||
|
||||
```bash
|
||||
# Interactive model configuration
|
||||
task-master models --setup
|
||||
|
||||
# Or set specific models
|
||||
task-master models --set-main claude-3-5-sonnet-20241022
|
||||
task-master models --set-research perplexity-llama-3.1-sonar-large-128k-online
|
||||
task-master models --set-fallback gpt-4o-mini
|
||||
```
|
||||
|
||||
### Step 3: Initialize Your Project
|
||||
|
||||
```bash
|
||||
# Initialize Task Master in current directory
|
||||
task-master init
|
||||
|
||||
# Initialize with specific rules
|
||||
task-master init --rules cursor,windsurf,vscode
|
||||
```
|
||||
|
||||
## 🚀 Quick Start Guide
|
||||
|
||||
### 1. Create Your PRD
|
||||
|
||||
```bash
|
||||
# Create a Product Requirements Document
|
||||
touch .taskmaster/docs/prd.txt
|
||||
|
||||
# Edit with your favorite editor
|
||||
nano .taskmaster/docs/prd.txt
|
||||
# or
|
||||
code .taskmaster/docs/prd.txt
|
||||
```
|
||||
|
||||
### 2. Generate Tasks
|
||||
|
||||
```bash
|
||||
# Parse your PRD and create tasks
|
||||
task-master parse-prd .taskmaster/docs/prd.txt
|
||||
|
||||
# Analyze task complexity
|
||||
task-master analyze-complexity --research
|
||||
|
||||
# Expand tasks into subtasks
|
||||
task-master expand --all --research
|
||||
```
|
||||
|
||||
### 3. Start Working
|
||||
|
||||
```bash
|
||||
# See all tasks
|
||||
task-master list
|
||||
|
||||
# Get next task to work on
|
||||
task-master next
|
||||
|
||||
# Show specific task details
|
||||
task-master show 1.2
|
||||
|
||||
# Mark task as in-progress
|
||||
task-master set-status --id=1.2 --status=in-progress
|
||||
```
|
||||
|
||||
## 📋 Essential Commands
|
||||
|
||||
### Task Management
|
||||
|
||||
```bash
|
||||
# List all tasks
|
||||
task-master list
|
||||
|
||||
# Show specific tasks (comma-separated)
|
||||
task-master show 1,2,3
|
||||
|
||||
# Get next available task
|
||||
task-master next
|
||||
|
||||
# Add a new task
|
||||
task-master add-task --prompt="Implement user login" --research
|
||||
|
||||
# Update task with notes
|
||||
task-master update-task --id=1.2 --prompt="Added JWT authentication"
|
||||
|
||||
# Update subtask with implementation notes
|
||||
task-master update-subtask --id=1.2.1 --prompt="Used bcrypt for password hashing"
|
||||
```
|
||||
|
||||
### Task Status Management
|
||||
|
||||
```bash
|
||||
# Mark task as done
|
||||
task-master set-status --id=1.2 --status=done
|
||||
|
||||
# Mark as in-progress
|
||||
task-master set-status --id=1.2 --status=in-progress
|
||||
|
||||
# Mark as blocked
|
||||
task-master set-status --id=1.2 --status=blocked
|
||||
```
|
||||
|
||||
### Analysis and Planning
|
||||
|
||||
```bash
|
||||
# Research latest information
|
||||
task-master research "What are the latest React best practices?"
|
||||
|
||||
# Analyze project complexity
|
||||
task-master analyze-complexity --research
|
||||
|
||||
# View complexity report
|
||||
task-master complexity-report
|
||||
|
||||
# Expand task into subtasks
|
||||
task-master expand --id=1.2 --research --force
|
||||
```
|
||||
|
||||
### Dependencies and Organization
|
||||
|
||||
```bash
|
||||
# Add task dependency
|
||||
task-master add-dependency --id=2.1 --depends-on=1.2
|
||||
|
||||
# Move task to different position
|
||||
task-master move --from=3 --to=1
|
||||
|
||||
# Validate dependencies
|
||||
task-master validate-dependencies
|
||||
|
||||
# Generate markdown files
|
||||
task-master generate
|
||||
```
|
||||
|
||||
## 🎯 Advanced Usage
|
||||
|
||||
### Scripting and Automation
|
||||
|
||||
Create shell scripts for common workflows:
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# daily-standup.sh
|
||||
|
||||
echo "=== Today's Tasks ==="
|
||||
task-master next
|
||||
|
||||
echo -e "\n=== In Progress ==="
|
||||
task-master list | grep "in-progress"
|
||||
|
||||
echo -e "\n=== Blocked Tasks ==="
|
||||
task-master list | grep "blocked"
|
||||
|
||||
echo -e "\n=== Complexity Report ==="
|
||||
task-master complexity-report
|
||||
```
|
||||
|
||||
### CI/CD Integration
|
||||
|
||||
```yaml
|
||||
# .github/workflows/task-validation.yml
|
||||
name: Task Validation
|
||||
|
||||
on: [push, pull_request]
|
||||
|
||||
jobs:
|
||||
validate-tasks:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v2
|
||||
- uses: actions/setup-node@v2
|
||||
with:
|
||||
node-version: '18'
|
||||
|
||||
- name: Install Task Master
|
||||
run: npm install -g task-master-ai
|
||||
|
||||
- name: Validate task dependencies
|
||||
run: task-master validate-dependencies
|
||||
|
||||
- name: Generate task report
|
||||
run: task-master complexity-report > task-report.json
|
||||
```
|
||||
|
||||
### Custom Aliases
|
||||
|
||||
Add these to your `.bashrc` or `.zshrc`:
|
||||
|
||||
```bash
|
||||
# Task Master shortcuts
|
||||
alias tm="task-master"
|
||||
alias tmn="task-master next"
|
||||
alias tml="task-master list"
|
||||
alias tmr="task-master research"
|
||||
alias tms="task-master show"
|
||||
```
|
||||
|
||||
## 🔍 Troubleshooting
|
||||
|
||||
<Accordion title="Command not found: task-master">
|
||||
**Solutions:**
|
||||
- Verify Node.js installation: `node --version`
|
||||
- Reinstall globally: `npm install -g task-master-ai`
|
||||
- Check npm global path: `npm config get prefix`
|
||||
- Use npx if global install fails: `npx task-master-ai`
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="API key errors">
|
||||
**Solutions:**
|
||||
- Check `.env` file exists and has correct keys
|
||||
- Verify API key format and validity
|
||||
- Test with a single API key first
|
||||
- Check for typos in environment variable names
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Tasks not generating">
|
||||
**Solutions:**
|
||||
- Verify PRD file exists and has content
|
||||
- Check API keys are working: `task-master models`
|
||||
- Try with different model: `task-master models --set-main gpt-4o-mini`
|
||||
- Add `--research` flag for better results
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Permission errors">
|
||||
**Solutions:**
|
||||
- Use `sudo` for global installs (not recommended)
|
||||
- Configure npm to use different directory: `npm config set prefix ~/.local`
|
||||
- Use local installation: `npm install task-master-ai`
|
||||
- Check file permissions in `.taskmaster` directory
|
||||
</Accordion>
|
||||
|
||||
## 💡 Pro Tips
|
||||
|
||||
<Tip>
|
||||
**Use `--research` flag** with AI commands for more informed and up-to-date task suggestions based on current best practices.
|
||||
</Tip>
|
||||
|
||||
<Tip>
|
||||
**Create project templates** with pre-configured `.taskmaster` directories for different types of projects (web apps, APIs, mobile apps).
|
||||
</Tip>
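One possible sketch of that workflow (the paths are illustrative):

```bash
# Save a configured .taskmaster directory as a reusable template
cp -r my-web-app/.taskmaster ~/templates/taskmaster-web-app

# Start a new project from the template
mkdir new-api && cd new-api
cp -r ~/templates/taskmaster-web-app .taskmaster
task-master models   # verify the copied model configuration
```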
|
||||
|
||||
<Tip>
|
||||
**Combine with other tools** like `jq` for parsing JSON outputs: `task-master list --json | jq '.[] | select(.status=="pending")'`
|
||||
</Tip>
|
||||
|
||||
<Tip>
|
||||
**Use environment-specific configs** by creating different `.env` files (.env.development, .env.production) and symlinking as needed.
|
||||
</Tip>
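A minimal sketch of that approach (file names are illustrative):

```bash
# Keep one env file per environment and point .env at the active one
ln -sf .env.development .env   # local development
ln -sf .env.production .env    # switch before a production run
```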
|
||||
|
||||
## 📚 Integration Examples
|
||||
|
||||
### With Git Hooks
|
||||
|
||||
```bash
|
||||
#!/bin/sh
|
||||
# .git/hooks/pre-commit
|
||||
|
||||
# Check if any tasks are marked as done
|
||||
if task-master list | grep -q "done"; then
|
||||
echo "✅ Tasks completed in this commit:"
|
||||
task-master list | grep "done"
|
||||
fi
|
||||
```
|
||||
|
||||
### With Make
|
||||
|
||||
```makefile
|
||||
# Makefile
|
||||
|
||||
.PHONY: tasks next status
|
||||
|
||||
tasks:
|
||||
@task-master list
|
||||
|
||||
next:
|
||||
@task-master next
|
||||
|
||||
status:
|
||||
@echo "=== Task Status ==="
|
||||
@task-master list | grep -E "(in-progress|blocked)"
|
||||
@echo ""
|
||||
@echo "=== Next Task ==="
|
||||
@task-master next
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
<div className="bg-cyan-50 dark:bg-cyan-900/20 p-4 rounded-lg border border-cyan-200 dark:border-cyan-800">
|
||||
<div className="flex items-center space-x-2 mb-2">
|
||||
<span className="text-cyan-600 dark:text-cyan-400 text-lg">⚡</span>
|
||||
<h3 className="font-semibold text-cyan-800 dark:text-cyan-200">Ready for maximum productivity!</h3>
|
||||
</div>
|
||||
<p className="text-cyan-700 dark:text-cyan-300">
|
||||
You now have the full power of Task Master at your fingertips. Create your first project with our <a href="/getting-started/quick-start/prd-quick" className="underline">PRD guide</a>.
|
||||
</p>
|
||||
</div>
|
||||
@@ -1,247 +0,0 @@
|
||||
---
|
||||
title: Cursor Setup
|
||||
sidebarTitle: "Cursor"
|
||||
---
|
||||
|
||||
<div className="flex items-center space-x-4 mb-6">
|
||||
<img src="/logo/cursor-logo.svg" className="w-12 h-12" alt="Cursor" />
|
||||
<div>
|
||||
<h1 className="text-2xl font-bold">Cursor AI Editor</h1>
|
||||
<p className="text-gray-600">AI-powered VS Code fork with built-in MCP support</p>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
Cursor offers the smoothest Task Master experience with one-click installation and native MCP integration.
|
||||
|
||||
## 🚀 One-Click Install (Recommended)
|
||||
|
||||
<div className="bg-blue-50 dark:bg-blue-900/20 p-4 rounded-lg border border-blue-200 dark:border-blue-800 mb-6">
|
||||
<div className="flex items-center space-x-2 mb-3">
|
||||
<span className="text-blue-600 dark:text-blue-400 text-lg">⚡</span>
|
||||
<h3 className="font-semibold text-blue-800 dark:text-blue-200">Fastest Setup</h3>
|
||||
</div>
|
||||
|
||||
<a href="cursor://anysphere.cursor-deeplink/mcp/install?name=task-master-ai&config=eyJjb21tYW5kIjoibnB4IiwiYXJncyI6WyIteSIsIi0tcGFja2FnZT10YXNrLW1hc3Rlci1haSIsInRhc2stbWFzdGVyLWFpIl0sImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUJFX0FQSV9LRVkiOiJZT1VSX0FaVVJFX0tFWV9IRVJFIiwiT0xMQU1BX0FQSV9LRVkiOiJZT1VSX09MTEFNQV9BUElfS0VZX0hFUkUifX0%3D">
|
||||
<img
|
||||
className="block dark:hidden hover:opacity-80 transition-opacity cursor-pointer"
|
||||
src="https://cursor.com/deeplink/mcp-install-light.png"
|
||||
alt="Add Task Master MCP server to Cursor"
|
||||
noZoom
|
||||
/>
|
||||
<img
|
||||
className="hidden dark:block hover:opacity-80 transition-opacity cursor-pointer"
|
||||
src="https://cursor.com/deeplink/mcp-install-dark.png"
|
||||
alt="Add Task Master MCP server to Cursor"
|
||||
noZoom
|
||||
/>
|
||||
</a>
|
||||
</div>
|
||||
|
||||
<Warning>
|
||||
**After one-click install**: You still need to add your actual API keys! The installer uses placeholder keys that must be replaced.
|
||||
</Warning>
|
||||
|
||||
## 📋 Manual Setup
|
||||
|
||||
If you prefer manual configuration or the one-click install doesn't work:
|
||||
|
||||
### Step 1: Create MCP Configuration
|
||||
|
||||
Choose your configuration scope:
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Global Config">
|
||||
Create or edit `~/.cursor/mcp.json` (macOS/Linux) or `%USERPROFILE%\.cursor\mcp.json` (Windows):
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"env": {
|
||||
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
|
||||
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
|
||||
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
|
||||
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
|
||||
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
|
||||
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
|
||||
"XAI_API_KEY": "YOUR_XAI_KEY_HERE"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
</Tab>
|
||||
|
||||
<Tab title="Project Config">
|
||||
Create `.cursor/mcp.json` in your project root:
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"env": {
|
||||
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
|
||||
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
</Tab>
|
||||
|
||||
<Tab title="Windows Native">
|
||||
For Windows users without WSL:
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"command": "cmd",
|
||||
"args": ["/c", "npx -y --package=task-master-ai task-master-ai"],
|
||||
"env": {
|
||||
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
|
||||
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
|
||||
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
<Note>
|
||||
**Alternative Windows Setup**: Use Claude Code's project-level installation:
|
||||
```bash
|
||||
claude mcp add task-master-mcp -s project -- cmd /c "npx -y --package=task-master-ai task-master-ai"
|
||||
```
|
||||
</Note>
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
### Step 2: Enable MCP Server
|
||||
|
||||
1. Open Cursor Settings (`Ctrl+Shift+J` or `Cmd+Shift+J`)
|
||||
2. Click the **MCP** tab in the left sidebar
|
||||
3. Find `task-master-ai` and toggle it **ON**
|
||||
4. Restart Cursor if needed
|
||||
|
||||
### Step 3: Verify Installation
|
||||
|
||||
In Cursor's chat panel, type:
|
||||
```
|
||||
help
|
||||
```
|
||||
|
||||
You should see Task Master commands available. If you see "0 tools enabled", check your API keys and restart Cursor.
|
||||
|
||||
## 🔧 Configuration
|
||||
|
||||
### Add Your API Keys
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Required Keys">
|
||||
You need **at least one** of these:
|
||||
- `ANTHROPIC_API_KEY` - For Claude models (recommended)
|
||||
- `OPENAI_API_KEY` - For GPT models
|
||||
- `GOOGLE_API_KEY` - For Gemini models
|
||||
</Tab>
|
||||
|
||||
<Tab title="Recommended Keys">
|
||||
For the best experience, also add:
|
||||
- `PERPLEXITY_API_KEY` - Enables research features
|
||||
- `OPENAI_API_KEY` - Fallback option
|
||||
</Tab>
|
||||
|
||||
<Tab title="All Supported Keys">
|
||||
Full list of supported API keys:
|
||||
- `ANTHROPIC_API_KEY`
|
||||
- `PERPLEXITY_API_KEY`
|
||||
- `OPENAI_API_KEY`
|
||||
- `GOOGLE_API_KEY`
|
||||
- `MISTRAL_API_KEY`
|
||||
- `OPENROUTER_API_KEY`
|
||||
- `XAI_API_KEY`
|
||||
- `AZURE_OPENAI_API_KEY`
|
||||
- `OLLAMA_API_KEY`
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
### Configure Models (Optional)
|
||||
|
||||
In Cursor's chat, set your preferred models:
|
||||
|
||||
```
|
||||
Change the main, research and fallback models to claude-3-5-sonnet-20241022, perplexity-llama-3.1-sonar-large-128k-online and gpt-4o-mini respectively.
|
||||
```
|
||||
|
||||
## 🎯 Getting Started
|
||||
|
||||
### 1. Initialize Task Master
|
||||
|
||||
In Cursor's chat panel:
|
||||
```
|
||||
Initialize taskmaster-ai in my project
|
||||
```
|
||||
|
||||
### 2. Create Your First Task
|
||||
|
||||
```
|
||||
Can you help me implement user authentication for my web app?
|
||||
```
|
||||
|
||||
### 3. Start Working
|
||||
|
||||
```
|
||||
What's the next task I should work on?
|
||||
```
|
||||
|
||||
## 🔍 Troubleshooting
|
||||
|
||||
<Accordion title="0 tools enabled in MCP settings">
|
||||
- **Check API keys**: Ensure at least one API key is correctly set
|
||||
- **Restart Cursor**: Close and reopen Cursor completely
|
||||
- **Check file paths**: Verify your `mcp.json` is in the correct location
|
||||
- **Test manually**: Run `npx task-master-ai` in terminal to test
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="MCP server fails to start">
|
||||
- **Update Node.js**: Ensure you have Node.js 16+ installed
|
||||
- **Clear npm cache**: Run `npm cache clean --force`
|
||||
- **Try global install**: Run `npm install -g task-master-ai`
|
||||
- **Check permissions**: Ensure npm has permission to install packages
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Commands not working">
|
||||
- **Verify installation**: Type `help` in chat to see available commands
|
||||
- **Check project initialization**: Run `Initialize taskmaster-ai in my project`
|
||||
- **Review logs**: Check Cursor's developer console for error messages
|
||||
</Accordion>
|
||||
|
||||
## 💡 Pro Tips
|
||||
|
||||
<Tip>
|
||||
**Use project-specific configs** for different API keys per project, especially when working with team projects that have shared API limits.
|
||||
</Tip>
|
||||
|
||||
<Tip>
|
||||
**Enable research mode** with a Perplexity API key to get AI-powered task suggestions based on the latest best practices.
|
||||
</Tip>
|
||||
|
||||
<Tip>
|
||||
**Set up keyboard shortcuts** in Cursor for common Task Master commands like "What's next?" or "Show task status".
|
||||
</Tip>
|
||||
|
||||
---
|
||||
|
||||
<div className="bg-green-50 dark:bg-green-900/20 p-4 rounded-lg border border-green-200 dark:border-green-800">
|
||||
<div className="flex items-center space-x-2 mb-2">
|
||||
<span className="text-green-600 dark:text-green-400 text-lg">✅</span>
|
||||
<h3 className="font-semibold text-green-800 dark:text-green-200">Ready to go!</h3>
|
||||
</div>
|
||||
<p className="text-green-700 dark:text-green-300">
|
||||
You're all set with Cursor! Head over to our <a href="/getting-started/quick-start/prd-quick" className="underline">PRD guide</a> to create your first project.
|
||||
</p>
|
||||
</div>
|
||||
@@ -1,118 +0,0 @@
|
||||
---
|
||||
title: Choose Your AI Agent
|
||||
sidebarTitle: "AI Agents"
|
||||
---
|
||||
|
||||
Task Master works seamlessly with various AI agents. Choose your preferred agent to get customized setup instructions.
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card
|
||||
title="Cursor"
|
||||
icon="cursor"
|
||||
href="/getting-started/agents/cursor"
|
||||
>
|
||||
<div className="flex items-center space-x-3">
|
||||
<img src="/logo/cursor-logo.svg" className="w-8 h-8" alt="Cursor" />
|
||||
<span>AI-powered VS Code fork with built-in MCP support</span>
|
||||
</div>
|
||||
</Card>
|
||||
|
||||
<Card
|
||||
title="Claude Code"
|
||||
icon="claude"
|
||||
href="/getting-started/agents/claude-code"
|
||||
>
|
||||
<div className="flex items-center space-x-3">
|
||||
<img src="/logo/claude-logo.svg" className="w-8 h-8" alt="Claude Code" />
|
||||
<span>Anthropic's official CLI for Claude</span>
|
||||
</div>
|
||||
</Card>
|
||||
|
||||
<Card
|
||||
title="Windsurf"
|
||||
icon="windsurf"
|
||||
href="/getting-started/agents/windsurf"
|
||||
>
|
||||
<div className="flex items-center space-x-3">
|
||||
<img src="/logo/windsurf-logo.svg" className="w-8 h-8" alt="Windsurf" />
|
||||
<span>Codeium's AI-native IDE</span>
|
||||
</div>
|
||||
</Card>
|
||||
|
||||
<Card
|
||||
title="VS Code"
|
||||
icon="vscode"
|
||||
href="/getting-started/agents/vscode"
|
||||
>
|
||||
<div className="flex items-center space-x-3">
|
||||
<img src="/logo/vscode-logo.svg" className="w-8 h-8" alt="VS Code" />
|
||||
<span>Microsoft's editor with MCP extensions</span>
|
||||
</div>
|
||||
</Card>
|
||||
|
||||
<Card
|
||||
title="Command Line"
|
||||
icon="terminal"
|
||||
href="/getting-started/agents/cli"
|
||||
>
|
||||
<div className="flex items-center space-x-3">
|
||||
<img src="/logo/terminal-logo.svg" className="w-8 h-8" alt="Terminal" />
|
||||
<span>Direct CLI usage without IDE integration</span>
|
||||
</div>
|
||||
</Card>
|
||||
|
||||
<Card
|
||||
title="Other Agents"
|
||||
icon="question"
|
||||
href="/getting-started/agents/other"
|
||||
>
|
||||
<div className="flex items-center space-x-3">
|
||||
<img src="/logo/generic-logo.svg" className="w-8 h-8" alt="Other" />
|
||||
<span>Generic setup for other AI agents</span>
|
||||
</div>
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
## Quick Recommendations
|
||||
|
||||
<Accordion title="🎯 Which agent should I choose?">
|
||||
**For beginners**: Start with **Cursor** - it has the most seamless MCP integration and one-click install.
|
||||
|
||||
**For Claude users**: Use **Claude Code** - it's Anthropic's official CLI with no API key required.
|
||||
|
||||
**For existing VS Code users**: Stick with **VS Code** if you're already comfortable with your setup.
|
||||
|
||||
**For advanced users**: Try **Windsurf** for its AI-native features or use **Command Line** for maximum flexibility.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="💡 What's MCP and why does it matter?">
|
||||
MCP (Model Context Protocol) allows Task Master to run directly inside your AI agent, giving you:
|
||||
- 🔥 **Seamless integration** - No switching between tools
|
||||
- ⚡ **Real-time task management** - Tasks update as you work
|
||||
- 🧠 **Context awareness** - Your AI knows about your tasks
|
||||
- 🎯 **Smart suggestions** - AI can recommend next tasks
|
||||
</Accordion>
|
||||
|
||||
## Platform-Specific Notes
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Windows">
|
||||
**Important**: Windows users may need special configuration for some agents. We'll provide Windows-specific instructions for each agent.
|
||||
|
||||
Some agents work better with WSL (Windows Subsystem for Linux), while others have native Windows support.
|
||||
</Tab>
|
||||
|
||||
<Tab title="macOS">
|
||||
Most agents work seamlessly on macOS. Claude Code and Cursor have the best native macOS integration.
|
||||
</Tab>
|
||||
|
||||
<Tab title="Linux">
|
||||
All agents have excellent Linux support. Command Line interface works particularly well in Linux environments.
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
---
|
||||
|
||||
<Note>
|
||||
**Need help choosing?** Check our [comparison table](/getting-started/agent-comparison) or join our [Discord community](https://discord.gg/taskmasterai) for personalized recommendations.
|
||||
</Note>
|
||||
@@ -77,36 +77,6 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
|
||||
|
||||
> **Note**: If you see `0 tools enabled` in the MCP settings, restart your editor and check that your API keys are correctly configured.
|
||||
|
||||
### Windows-specific Configuration
|
||||
|
||||
For Windows users without WSL, you may need to use `cmd` to run the MCP server:
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"command": "cmd",
|
||||
"args": ["/c", "npx -y --package=task-master-ai task-master-ai"],
|
||||
"env": {
|
||||
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
|
||||
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
|
||||
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
|
||||
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
|
||||
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
|
||||
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
|
||||
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
|
||||
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Alternatively, you can install at the project level with Claude Code:
|
||||
```bash
|
||||
claude mcp add task-master-mcp -s project -- cmd /c "npx -y --package=task-master-ai task-master-ai"
|
||||
```
|
||||
|
||||
### VS Code (`servers` + `type`)
|
||||
|
||||
```json
|
||||
|
||||
@@ -5,27 +5,7 @@ sidebarTitle: "Quick Start"
|
||||
|
||||
This guide is for new users who want to start using Task Master with minimal setup time.
|
||||
|
||||
## 🎯 Choose Your AI Agent
|
||||
|
||||
First, pick your preferred AI development environment:
|
||||
|
||||
<CardGroup cols={3}>
|
||||
<Card title="Cursor" icon="cursor" href="/getting-started/agents/cursor">
|
||||
One-click install with native MCP support
|
||||
</Card>
|
||||
<Card title="Claude Code" icon="claude" href="/getting-started/agents/claude-code">
|
||||
No API keys needed - uses your Claude subscription
|
||||
</Card>
|
||||
<Card title="Command Line" icon="terminal" href="/getting-started/agents/cli">
|
||||
Maximum control and scriptability
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
[View all AI agents →](/getting-started/ai-agents)
|
||||
|
||||
## 📋 Quick Start Steps
|
||||
|
||||
After setting up your AI agent, this guide covers:
|
||||
- [Requirements](/docs/getting-started/quick-start/requirements): You will need Node.js and an AI model API Key.
|
||||
- [Installation](/docs/getting-started/quick-start/installation): How to Install Task Master.
|
||||
- [Configuration](/docs/getting-started/quick-start/configuration-quick): Setting up your API Key, MCP, and more.
|
||||
|
||||
@@ -53,6 +53,7 @@
|
||||
"license": "MIT WITH Commons-Clause",
|
||||
"dependencies": {
|
||||
"@ai-sdk/amazon-bedrock": "^2.2.9",
|
||||
"@ast-grep/cli": "^0.29.0",
|
||||
"@ai-sdk/anthropic": "^1.2.10",
|
||||
"@ai-sdk/azure": "^1.3.17",
|
||||
"@ai-sdk/google": "^1.2.13",
|
||||
|
||||
@@ -53,6 +53,8 @@ import {
|
||||
validateStrength
|
||||
} from './task-manager.js';
|
||||
|
||||
import { scanProject } from './task-manager/scan-project/index.js';
|
||||
|
||||
import {
|
||||
moveTasksBetweenTags,
|
||||
MoveTaskError,
|
||||
@@ -5067,6 +5069,110 @@ Examples:
|
||||
process.exit(1);
|
||||
});
|
||||
|
||||
// scan command
|
||||
programInstance
|
||||
.command('scan')
|
||||
.description('Intelligently scan and analyze the project codebase structure')
|
||||
.option(
|
||||
'--output <file>',
|
||||
'Path to save scan results (JSON format)',
|
||||
'project_scan.json'
|
||||
)
|
||||
.option(
|
||||
'--include <patterns>',
|
||||
'Comma-separated list of file patterns to include (e.g., "*.js,*.ts")'
|
||||
)
|
||||
.option(
|
||||
'--exclude <patterns>',
|
||||
'Comma-separated list of file patterns to exclude (e.g., "*.log,tmp/*")'
|
||||
)
|
||||
.option(
|
||||
'--depth <number>',
|
||||
'Maximum directory depth to scan',
|
||||
'5'
|
||||
)
|
||||
.option('--debug', 'Enable debug output')
|
||||
.option('--no-ai', 'Skip AI-powered analysis (faster but less detailed)')
|
||||
.action(async (options) => {
|
||||
try {
|
||||
// Initialize TaskMaster to get project root
|
||||
const taskMaster = initTaskMaster({});
|
||||
const projectRoot = taskMaster.getProjectRoot();
|
||||
|
||||
if (!projectRoot) {
|
||||
console.error(chalk.red('Error: Could not determine project root.'));
|
||||
console.log(chalk.yellow('Make sure you are in a valid project directory.'));
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
console.log(chalk.blue(`🔍 Starting intelligent scan of project: ${projectRoot}`));
|
||||
console.log(chalk.gray(`Output will be saved to: ${options.output}`));
|
||||
|
||||
// Parse options
|
||||
const scanOptions = {
|
||||
outputPath: path.isAbsolute(options.output)
|
||||
? options.output
|
||||
: path.join(projectRoot, options.output),
|
||||
includeFiles: options.include ? options.include.split(',').map(s => s.trim()) : [],
|
||||
excludeFiles: options.exclude ? options.exclude.split(',').map(s => s.trim()) : undefined,
|
||||
scanDepth: parseInt(options.depth, 10),
|
||||
debug: options.debug || false,
|
||||
reportProgress: true,
|
||||
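// Commander stores the negated --no-ai flag as options.ai (false when the flag is passed)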
skipAI: options.ai === false
|
||||
};
|
||||
|
||||
// Perform the scan
|
||||
const spinner = ora('Scanning project structure...').start();
|
||||
|
||||
try {
|
||||
const result = await scanProject(projectRoot, scanOptions);
|
||||
|
||||
spinner.stop();
|
||||
|
||||
if (result.success) {
|
||||
console.log(chalk.green('✅ Project scan completed successfully!'));
|
||||
console.log(chalk.cyan('\n📊 Scan Summary:'));
|
||||
console.log(chalk.white(` Project Type: ${result.data.scanSummary.projectType}`));
|
||||
console.log(chalk.white(` Total Files: ${result.data.stats.totalFiles}`));
|
||||
console.log(chalk.white(` Languages: ${result.data.scanSummary.languages.join(', ')}`));
|
||||
console.log(chalk.white(` Code Lines: ${result.data.scanSummary.codeMetrics.totalLines}`));
|
||||
console.log(chalk.white(` Functions: ${result.data.scanSummary.codeMetrics.totalFunctions}`));
|
||||
console.log(chalk.white(` Classes: ${result.data.scanSummary.codeMetrics.totalClasses}`));
|
||||
|
||||
if (result.data.scanSummary.recommendations.length > 0) {
|
||||
console.log(chalk.yellow('\n💡 Recommendations:'));
|
||||
result.data.scanSummary.recommendations.forEach(rec => {
|
||||
console.log(chalk.white(` • ${rec}`));
|
||||
});
|
||||
}
|
||||
|
||||
console.log(chalk.green(`\n📄 Detailed results saved to: ${scanOptions.outputPath}`));
|
||||
} else {
|
||||
console.error(chalk.red('❌ Project scan failed:'));
|
||||
console.error(chalk.red(` ${result.error.message}`));
|
||||
if (scanOptions.debug && result.error.stack) {
|
||||
console.error(chalk.gray(` ${result.error.stack}`));
|
||||
}
|
||||
process.exit(1);
|
||||
}
|
||||
} catch (error) {
|
||||
spinner.stop();
|
||||
console.error(chalk.red(`❌ Scan failed: ${error.message}`));
|
||||
if (scanOptions.debug) {
|
||||
console.error(chalk.gray(error.stack));
|
||||
}
|
||||
process.exit(1);
|
||||
}
|
||||
} catch (error) {
|
||||
console.error(chalk.red(`Error initializing scan: ${error.message}`));
|
||||
process.exit(1);
|
||||
}
|
||||
})
|
||||
.on('error', function (err) {
|
||||
console.error(chalk.red(`Error: ${err.message}`));
|
||||
process.exit(1);
|
||||
});
|
||||
|
||||
return programInstance;
|
||||
}
|
||||
|
||||
|
||||
scripts/modules/task-manager/scan-project/ai-analysis.js (new file, 328 lines)
@@ -0,0 +1,328 @@
|
||||
/**
|
||||
* AI-powered analysis for project scanning
|
||||
*/
|
||||
import { ScanLoggingConfig } from './scan-config.js';
|
||||
|
||||
// Dynamically import AI service with fallback
|
||||
async function getAiService(options) {
|
||||
try {
|
||||
const { getAiService: aiService } = await import('../../ai-services-unified.js');
|
||||
return aiService(options);
|
||||
} catch (error) {
|
||||
throw new Error(`AI service not available: ${error.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Analyze project structure using AI
|
||||
* @param {Object} scanResults - Raw scan results
|
||||
* @param {Object} config - Scan configuration
|
||||
* @returns {Promise<Object>} AI-enhanced analysis
|
||||
*/
|
||||
export async function analyzeWithAI(scanResults, config) {
|
||||
const logger = new ScanLoggingConfig(config.mcpLog, config.reportProgress);
|
||||
logger.info('Starting AI-powered analysis...');
|
||||
|
||||
try {
|
||||
// Step 1: Project Type Analysis
|
||||
const projectTypeAnalysis = await analyzeProjectType(scanResults, config, logger);
|
||||
|
||||
// Step 2: Entry Points Analysis
|
||||
const entryPointsAnalysis = await analyzeEntryPoints(scanResults, projectTypeAnalysis, config, logger);
|
||||
|
||||
// Step 3: Core Structure Analysis
|
||||
const coreStructureAnalysis = await analyzeCoreStructure(scanResults, entryPointsAnalysis, config, logger);
|
||||
|
||||
// Step 4: Recursive Analysis (if needed)
|
||||
const detailedAnalysis = await performDetailedAnalysis(scanResults, coreStructureAnalysis, config, logger);
|
||||
|
||||
// Combine all analyses
|
||||
const enhancedAnalysis = {
|
||||
projectType: projectTypeAnalysis,
|
||||
entryPoints: entryPointsAnalysis,
|
||||
coreStructure: coreStructureAnalysis,
|
||||
detailed: detailedAnalysis,
|
||||
summary: generateProjectSummary(scanResults, projectTypeAnalysis, coreStructureAnalysis)
|
||||
};
|
||||
|
||||
logger.info('AI analysis completed successfully');
|
||||
return enhancedAnalysis;
|
||||
} catch (error) {
|
||||
logger.error(`AI analysis failed: ${error.message}`);
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Step 1: Analyze project type using AI
|
||||
* @param {Object} scanResults - Raw scan results
|
||||
* @param {Object} config - Scan configuration
|
||||
* @param {ScanLoggingConfig} logger - Logger instance
|
||||
* @returns {Promise<Object>} Project type analysis
|
||||
*/
|
||||
async function analyzeProjectType(scanResults, config, logger) {
|
||||
logger.info('[Scan #1]: Analyzing project type and structure...');
|
||||
|
||||
const prompt = `Given this root directory structure and files, identify the type of project and key characteristics:
|
||||
|
||||
Root files: ${JSON.stringify(scanResults.rootFiles, null, 2)}
|
||||
Directory structure: ${JSON.stringify(scanResults.directories, null, 2)}
|
||||
|
||||
Please analyze:
|
||||
1. Project type (e.g., Node.js, React, Laravel, Python, etc.)
|
||||
2. Programming languages used
|
||||
3. Frameworks and libraries
|
||||
4. Build tools and configuration
|
||||
5. Files or folders that should be excluded from further analysis (logs, binaries, etc.)
|
||||
|
||||
Respond with a JSON object containing your analysis.`;
|
||||
|
||||
try {
|
||||
const aiService = await getAiService({ projectRoot: config.projectRoot });
|
||||
const response = await aiService.generateStructuredOutput({
|
||||
prompt,
|
||||
schema: {
|
||||
type: 'object',
|
||||
properties: {
|
||||
projectType: { type: 'string' },
|
||||
languages: { type: 'array', items: { type: 'string' } },
|
||||
frameworks: { type: 'array', items: { type: 'string' } },
|
||||
buildTools: { type: 'array', items: { type: 'string' } },
|
||||
excludePatterns: { type: 'array', items: { type: 'string' } },
|
||||
confidence: { type: 'number' },
|
||||
reasoning: { type: 'string' }
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
logger.info(`[Scan #1]: Detected ${response.projectType} project`);
|
||||
return response;
|
||||
} catch (error) {
|
||||
logger.warn(`[Scan #1]: AI analysis failed, using fallback detection`);
|
||||
// Fallback to rule-based detection
|
||||
return scanResults.projectType;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Step 2: Analyze entry points using AI
|
||||
* @param {Object} scanResults - Raw scan results
|
||||
* @param {Object} projectTypeAnalysis - Project type analysis
|
||||
* @param {Object} config - Scan configuration
|
||||
* @param {ScanLoggingConfig} logger - Logger instance
|
||||
* @returns {Promise<Object>} Entry points analysis
|
||||
*/
|
||||
async function analyzeEntryPoints(scanResults, projectTypeAnalysis, config, logger) {
|
||||
logger.info('[Scan #2]: Identifying main entry points and core files...');
|
||||
|
||||
const prompt = `Based on the project type "${projectTypeAnalysis.projectType}" and these files, identify the main entry points and core files:
|
||||
|
||||
Available files: ${JSON.stringify(scanResults.fileList.slice(0, 50), null, 2)}
|
||||
Project type: ${projectTypeAnalysis.projectType}
|
||||
Languages: ${JSON.stringify(projectTypeAnalysis.languages)}
|
||||
Frameworks: ${JSON.stringify(projectTypeAnalysis.frameworks)}
|
||||
|
||||
Please identify:
|
||||
1. Main entry points (files that start the application)
|
||||
2. Configuration files
|
||||
3. Core application files
|
||||
4. Important directories to analyze further
|
||||
|
||||
Respond with a structured JSON object.`;
|
||||
|
||||
try {
|
||||
const aiService = await getAiService({ projectRoot: config.projectRoot });
|
||||
const response = await aiService.generateStructuredOutput({
|
||||
prompt,
|
||||
schema: {
|
||||
type: 'object',
|
||||
properties: {
|
||||
entryPoints: {
|
||||
type: 'array',
|
||||
items: {
|
||||
type: 'object',
|
||||
properties: {
|
||||
path: { type: 'string' },
|
||||
type: { type: 'string' },
|
||||
description: { type: 'string' }
|
||||
}
|
||||
}
|
||||
},
|
||||
configFiles: { type: 'array', items: { type: 'string' } },
|
||||
coreFiles: { type: 'array', items: { type: 'string' } },
|
||||
importantDirectories: { type: 'array', items: { type: 'string' } }
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
logger.info(`[Scan #2]: Found ${response.entryPoints.length} entry points`);
|
||||
return response;
|
||||
} catch (error) {
|
||||
logger.warn(`[Scan #2]: AI analysis failed, using basic detection`);
|
||||
return {
|
||||
entryPoints: scanResults.projectType.entryPoints.map(ep => ({ path: ep, type: 'main', description: 'Main entry point' })),
|
||||
configFiles: [],
|
||||
coreFiles: [],
|
||||
importantDirectories: []
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Step 3: Analyze core structure using AI
|
||||
* @param {Object} scanResults - Raw scan results
|
||||
* @param {Object} entryPointsAnalysis - Entry points analysis
|
||||
* @param {Object} config - Scan configuration
|
||||
* @param {ScanLoggingConfig} logger - Logger instance
|
||||
* @returns {Promise<Object>} Core structure analysis
|
||||
*/
|
||||
async function analyzeCoreStructure(scanResults, entryPointsAnalysis, config, logger) {
|
||||
logger.info('[Scan #3]: Analyzing core structure and key directories...');
|
||||
|
||||
const prompt = `Based on the entry points and project structure, analyze the core architecture:
|
||||
|
||||
Entry points: ${JSON.stringify(entryPointsAnalysis.entryPoints, null, 2)}
|
||||
Important directories: ${JSON.stringify(entryPointsAnalysis.importantDirectories)}
|
||||
File analysis: ${JSON.stringify(scanResults.detailedFiles.slice(0, 20), null, 2)}
|
||||
|
||||
Please analyze:
|
||||
1. Directory-level summaries and purposes
|
||||
2. File relationships and dependencies
|
||||
3. Key architectural patterns
|
||||
4. Data flow and component relationships
|
||||
|
||||
Respond with a structured analysis.`;
|
||||
|
||||
try {
|
||||
const aiService = await getAiService({ projectRoot: config.projectRoot });
|
||||
const response = await aiService.generateStructuredOutput({
|
||||
prompt,
|
||||
schema: {
|
||||
type: 'object',
|
||||
properties: {
|
||||
directories: {
|
||||
type: 'object',
|
||||
additionalProperties: {
|
||||
type: 'object',
|
||||
properties: {
|
||||
purpose: { type: 'string' },
|
||||
importance: { type: 'string' },
|
||||
keyFiles: { type: 'array', items: { type: 'string' } },
|
||||
description: { type: 'string' }
|
||||
}
|
||||
}
|
||||
},
|
||||
architecture: {
|
||||
type: 'object',
|
||||
properties: {
|
||||
pattern: { type: 'string' },
|
||||
layers: { type: 'array', items: { type: 'string' } },
|
||||
dataFlow: { type: 'string' }
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
logger.info(`[Scan #3]: Analyzed ${Object.keys(response.directories || {}).length} directories`);
|
||||
return response;
|
||||
} catch (error) {
|
||||
logger.warn(`[Scan #3]: AI analysis failed, using basic structure`);
|
||||
return {
|
||||
directories: {},
|
||||
architecture: {
|
||||
pattern: 'unknown',
|
||||
layers: [],
|
||||
dataFlow: 'unknown'
|
||||
}
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Step 4: Perform detailed analysis on specific files/directories
|
||||
* @param {Object} scanResults - Raw scan results
|
||||
* @param {Object} coreStructureAnalysis - Core structure analysis
|
||||
* @param {Object} config - Scan configuration
|
||||
* @param {ScanLoggingConfig} logger - Logger instance
|
||||
* @returns {Promise<Object>} Detailed analysis
|
||||
*/
|
||||
async function performDetailedAnalysis(scanResults, coreStructureAnalysis, config, logger) {
|
||||
logger.info('[Scan #4+]: Performing detailed file-level analysis...');
|
||||
|
||||
const importantFiles = scanResults.detailedFiles
|
||||
.filter(file => file.functions?.length > 0 || file.classes?.length > 0)
|
||||
.slice(0, 10); // Limit to most important files
|
||||
|
||||
if (importantFiles.length === 0) {
|
||||
logger.info('No files requiring detailed analysis found');
|
||||
return { files: {} };
|
||||
}
|
||||
|
||||
const prompt = `Analyze these key files in detail:
|
||||
|
||||
${importantFiles.map(file => `
|
||||
File: ${file.path}
|
||||
Functions: ${JSON.stringify(file.functions)}
|
||||
Classes: ${JSON.stringify(file.classes)}
|
||||
Imports: ${JSON.stringify(file.imports)}
|
||||
Size: ${file.size} bytes, ${file.lines} lines
|
||||
`).join('\n')}
|
||||
|
||||
For each file, provide:
|
||||
1. Purpose and responsibility
|
||||
2. Key functions and their roles
|
||||
3. Dependencies and relationships
|
||||
4. Importance to the overall architecture
|
||||
|
||||
Respond with detailed analysis for each file.`;
|
||||
|
||||
try {
|
||||
const aiService = await getAiService({ projectRoot: config.projectRoot });
|
||||
const response = await aiService.generateStructuredOutput({
|
||||
prompt,
|
||||
schema: {
|
||||
type: 'object',
|
||||
properties: {
|
||||
files: {
|
||||
type: 'object',
|
||||
additionalProperties: {
|
||||
type: 'object',
|
||||
properties: {
|
||||
purpose: { type: 'string' },
|
||||
keyFunctions: { type: 'array', items: { type: 'string' } },
|
||||
dependencies: { type: 'array', items: { type: 'string' } },
|
||||
importance: { type: 'string' },
|
||||
description: { type: 'string' }
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
logger.info(`[Scan #4+]: Detailed analysis completed for ${Object.keys(response.files || {}).length} files`);
|
||||
return response;
|
||||
} catch (error) {
|
||||
logger.warn(`[Scan #4+]: Detailed analysis failed`);
|
||||
return { files: {} };
|
||||
}
|
||||
}
|
||||
|
||||
/**
 * Generate a comprehensive project summary
 * @param {Object} scanResults - Raw scan results
 * @param {Object} projectTypeAnalysis - Project type analysis
 * @param {Object} coreStructureAnalysis - Core structure analysis
 * @returns {Object} Project summary
 */
function generateProjectSummary(scanResults, projectTypeAnalysis, coreStructureAnalysis) {
  return {
    overview: `${projectTypeAnalysis.projectType} project with ${scanResults.stats.totalFiles} files across ${scanResults.stats.totalDirectories} directories`,
    languages: projectTypeAnalysis.languages,
    frameworks: projectTypeAnalysis.frameworks,
    architecture: coreStructureAnalysis.architecture?.pattern || 'standard',
    complexity: scanResults.stats.totalFiles > 100 ? 'high' : scanResults.stats.totalFiles > 50 ? 'medium' : 'low',
    keyComponents: Object.keys(coreStructureAnalysis.directories || {}).slice(0, 5)
  };
}
3
scripts/modules/task-manager/scan-project/index.js
Normal file
@@ -0,0 +1,3 @@
// Main entry point for scan-project module
export { default } from './scan-project.js';
export { default as scanProject } from './scan-project.js';
61
scripts/modules/task-manager/scan-project/scan-config.js
Normal file
@@ -0,0 +1,61 @@
/**
 * Configuration classes for project scanning functionality
 */

/**
 * Configuration object for scan operations
 */
export class ScanConfig {
  constructor({
    projectRoot,
    outputPath = null,
    includeFiles = [],
    excludeFiles = ['node_modules', '.git', 'dist', 'build', '*.log'],
    scanDepth = 5,
    mcpLog = false,
    reportProgress = false,
    debug = false
  } = {}) {
    this.projectRoot = projectRoot;
    this.outputPath = outputPath;
    this.includeFiles = includeFiles;
    this.excludeFiles = excludeFiles;
    this.scanDepth = scanDepth;
    this.mcpLog = mcpLog;
    this.reportProgress = reportProgress;
    this.debug = debug;
  }
}

/**
 * Logging configuration for scan operations
 */
export class ScanLoggingConfig {
  constructor(mcpLog = false, reportProgress = false) {
    this.mcpLog = mcpLog;
    this.reportProgress = reportProgress;
  }

  report(message, level = 'info') {
    if (this.reportProgress || this.mcpLog) {
      const prefix = this.mcpLog ? '[MCP]' : '[SCAN]';
      console.log(`${prefix} ${level.toUpperCase()}: ${message}`);
    }
  }

  debug(message) {
    this.report(message, 'debug');
  }

  info(message) {
    this.report(message, 'info');
  }

  warn(message) {
    this.report(message, 'warn');
  }

  error(message) {
    this.report(message, 'error');
  }
}
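For orientation, here is a minimal sketch of how these two classes compose. This is hypothetical standalone usage (in the diff they are only instantiated inside `scanProject`); the example path is a placeholder.

```javascript
// Hypothetical standalone usage of the classes added above.
import { ScanConfig, ScanLoggingConfig } from './scan-config.js';

const config = new ScanConfig({
  projectRoot: '/path/to/project',   // placeholder path
  excludeFiles: ['node_modules', '.git', 'dist'],
  scanDepth: 3
});

// ScanLoggingConfig only prints when reportProgress or mcpLog is enabled.
const logger = new ScanLoggingConfig(config.mcpLog, true);
logger.info(`Scanning ${config.projectRoot} to depth ${config.scanDepth}`);
```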
422
scripts/modules/task-manager/scan-project/scan-helpers.js
Normal file
@@ -0,0 +1,422 @@
/**
 * Helper functions for project scanning
 */
import fs from 'fs';
import path from 'path';
import { spawn } from 'child_process';
import { ScanLoggingConfig } from './scan-config.js';

/**
 * Execute ast-grep command to analyze files
 * @param {string} projectRoot - Project root directory
 * @param {string} pattern - AST pattern to search for
 * @param {Array} files - Files to analyze
 * @returns {Promise<Object>} AST analysis results
 */
export async function executeAstGrep(projectRoot, pattern, files = []) {
  return new Promise((resolve, reject) => {
    const astGrepPath = path.join(process.cwd(), 'node_modules/.bin/ast-grep');
    const args = ['run', '--json'];

    if (pattern) {
      args.push('-p', pattern);
    }

    if (files.length > 0) {
      args.push(...files);
    }

    const child = spawn(astGrepPath, args, {
      cwd: projectRoot,
      stdio: ['pipe', 'pipe', 'pipe']
    });

    let stdout = '';
    let stderr = '';

    child.stdout.on('data', (data) => {
      stdout += data.toString();
    });

    child.stderr.on('data', (data) => {
      stderr += data.toString();
    });

    child.on('close', (code) => {
      if (code === 0) {
        try {
          const results = stdout ? JSON.parse(stdout) : [];
          resolve(results);
        } catch (error) {
          reject(new Error(`Failed to parse ast-grep output: ${error.message}`));
        }
      } else {
        reject(new Error(`ast-grep failed with code ${code}: ${stderr}`));
      }
    });

    child.on('error', (error) => {
      reject(new Error(`Failed to execute ast-grep: ${error.message}`));
    });
  });
}

/**
 * Detect project type based on files in root directory
 * @param {string} projectRoot - Project root directory
 * @returns {Object} Project type information
 */
export function detectProjectType(projectRoot) {
  const files = fs.readdirSync(projectRoot);
  const projectType = {
    type: 'unknown',
    frameworks: [],
    languages: [],
    buildTools: [],
    entryPoints: []
  };

  // Check for common project indicators
  const indicators = {
    'package.json': () => {
      projectType.type = 'nodejs';
      projectType.languages.push('javascript');

      try {
        const packageJson = JSON.parse(fs.readFileSync(path.join(projectRoot, 'package.json'), 'utf8'));

        // Detect frameworks and libraries
        const deps = { ...packageJson.dependencies, ...packageJson.devDependencies };
        if (deps.react) projectType.frameworks.push('react');
        if (deps.next) projectType.frameworks.push('next.js');
        if (deps.express) projectType.frameworks.push('express');
        if (deps.typescript) projectType.languages.push('typescript');

        // Find entry points
        if (packageJson.main) projectType.entryPoints.push(packageJson.main);
        if (packageJson.scripts?.start) {
          const startScript = packageJson.scripts.start;
          const match = startScript.match(/node\s+(\S+)/);
          if (match) projectType.entryPoints.push(match[1]);
        }
      } catch (error) {
        // Ignore package.json parsing errors
      }
    },
    'pom.xml': () => {
      projectType.type = 'java';
      projectType.languages.push('java');
      projectType.buildTools.push('maven');
    },
    'build.gradle': () => {
      projectType.type = 'java';
      projectType.languages.push('java');
      projectType.buildTools.push('gradle');
    },
    'requirements.txt': () => {
      projectType.type = 'python';
      projectType.languages.push('python');
    },
    'Pipfile': () => {
      projectType.type = 'python';
      projectType.languages.push('python');
      projectType.buildTools.push('pipenv');
    },
    'pyproject.toml': () => {
      projectType.type = 'python';
      projectType.languages.push('python');
    },
    'Cargo.toml': () => {
      projectType.type = 'rust';
      projectType.languages.push('rust');
      projectType.buildTools.push('cargo');
    },
    'go.mod': () => {
      projectType.type = 'go';
      projectType.languages.push('go');
    },
    'composer.json': () => {
      projectType.type = 'php';
      projectType.languages.push('php');
    },
    'Gemfile': () => {
      projectType.type = 'ruby';
      projectType.languages.push('ruby');
    }
  };

  // Check for indicators
  for (const file of files) {
    if (indicators[file]) {
      indicators[file]();
    }
  }

  return projectType;
}

/**
 * Get file list based on include/exclude patterns
 * @param {string} projectRoot - Project root directory
 * @param {Array} includePatterns - Patterns to include
 * @param {Array} excludePatterns - Patterns to exclude
 * @param {number} maxDepth - Maximum directory depth to scan
 * @returns {Array} List of files to analyze
 */
export function getFileList(projectRoot, includePatterns = [], excludePatterns = [], maxDepth = 5) {
  const files = [];

  function scanDirectory(dirPath, depth = 0) {
    if (depth > maxDepth) return;

    try {
      const items = fs.readdirSync(dirPath, { withFileTypes: true });

      for (const item of items) {
        const fullPath = path.join(dirPath, item.name);
        const relativePath = path.relative(projectRoot, fullPath);

        // Check exclude patterns
        if (shouldExclude(relativePath, excludePatterns)) {
          continue;
        }

        if (item.isDirectory()) {
          scanDirectory(fullPath, depth + 1);
        } else if (item.isFile()) {
          // Check include patterns (if specified)
          if (includePatterns.length === 0 || shouldInclude(relativePath, includePatterns)) {
            files.push(relativePath);
          }
        }
      }
    } catch (error) {
      // Ignore permission errors and continue
    }
  }

  scanDirectory(projectRoot);
  return files;
}

/**
 * Check if file should be excluded based on patterns
 * @param {string} filePath - File path to check
 * @param {Array} excludePatterns - Exclude patterns
 * @returns {boolean} True if should be excluded
 */
function shouldExclude(filePath, excludePatterns) {
  return excludePatterns.some(pattern => {
    if (pattern.includes('*')) {
      const regex = new RegExp(pattern.replace(/\*/g, '.*'));
      return regex.test(filePath);
    }
    return filePath.includes(pattern);
  });
}

/**
 * Check if file should be included based on patterns
 * @param {string} filePath - File path to check
 * @param {Array} includePatterns - Include patterns
 * @returns {boolean} True if should be included
 */
function shouldInclude(filePath, includePatterns) {
  return includePatterns.some(pattern => {
    if (pattern.includes('*')) {
      const regex = new RegExp(pattern.replace(/\*/g, '.*'));
      return regex.test(filePath);
    }
    return filePath.includes(pattern);
  });
}

/**
 * Analyze file content to extract key information
 * @param {string} filePath - Path to file
 * @param {string} projectRoot - Project root
 * @returns {Object} File analysis results
 */
export function analyzeFileContent(filePath, projectRoot) {
  try {
    const fullPath = path.join(projectRoot, filePath);
    const content = fs.readFileSync(fullPath, 'utf8');
    const ext = path.extname(filePath);

    const analysis = {
      path: filePath,
      size: content.length,
      lines: content.split('\n').length,
      language: getLanguageFromExtension(ext),
      functions: [],
      classes: [],
      imports: [],
      exports: []
    };

    // Basic pattern matching for common constructs
    switch (ext) {
      case '.js':
      case '.ts':
      case '.jsx':
      case '.tsx':
        analyzeJavaScriptFile(content, analysis);
        break;
      case '.py':
        analyzePythonFile(content, analysis);
        break;
      case '.java':
        analyzeJavaFile(content, analysis);
        break;
      case '.go':
        analyzeGoFile(content, analysis);
        break;
    }

    return analysis;
  } catch (error) {
    return {
      path: filePath,
      error: error.message
    };
  }
}

/**
 * Get programming language from file extension
 * @param {string} ext - File extension
 * @returns {string} Programming language
 */
function getLanguageFromExtension(ext) {
  const langMap = {
    '.js': 'javascript',
    '.jsx': 'javascript',
    '.ts': 'typescript',
    '.tsx': 'typescript',
    '.py': 'python',
    '.java': 'java',
    '.go': 'go',
    '.rs': 'rust',
    '.php': 'php',
    '.rb': 'ruby',
    '.cpp': 'cpp',
    '.c': 'c',
    '.cs': 'csharp'
  };
  return langMap[ext] || 'unknown';
}

/**
 * Analyze JavaScript/TypeScript file content
 * @param {string} content - File content
 * @param {Object} analysis - Analysis object to populate
 */
function analyzeJavaScriptFile(content, analysis) {
  // Extract function declarations
  const functionRegex = /(?:function\s+(\w+)|const\s+(\w+)\s*=\s*(?:async\s+)?(?:function|\([^)]*\)\s*=>)|(\w+)\s*:\s*(?:async\s+)?(?:function|\([^)]*\)\s*=>))/g;
  let match;
  while ((match = functionRegex.exec(content)) !== null) {
    const functionName = match[1] || match[2] || match[3];
    if (functionName) {
      analysis.functions.push(functionName);
    }
  }

  // Extract class declarations
  const classRegex = /class\s+(\w+)/g;
  while ((match = classRegex.exec(content)) !== null) {
    analysis.classes.push(match[1]);
  }

  // Extract imports
  const importRegex = /import\s+(?:.*?\s+from\s+)?['"]([^'"]+)['"]/g;
  while ((match = importRegex.exec(content)) !== null) {
    analysis.imports.push(match[1]);
  }

  // Extract exports
  const exportRegex = /export\s+(?:default\s+)?(?:const\s+|function\s+|class\s+)?(\w+)/g;
  while ((match = exportRegex.exec(content)) !== null) {
    analysis.exports.push(match[1]);
  }
}

/**
 * Analyze Python file content
 * @param {string} content - File content
 * @param {Object} analysis - Analysis object to populate
 */
function analyzePythonFile(content, analysis) {
  // Extract function definitions
  const functionRegex = /def\s+(\w+)/g;
  let match;
  while ((match = functionRegex.exec(content)) !== null) {
    analysis.functions.push(match[1]);
  }

  // Extract class definitions
  const classRegex = /class\s+(\w+)/g;
  while ((match = classRegex.exec(content)) !== null) {
    analysis.classes.push(match[1]);
  }

  // Extract imports
  const importRegex = /(?:import\s+(\w+)|from\s+(\w+)\s+import)/g;
  while ((match = importRegex.exec(content)) !== null) {
    analysis.imports.push(match[1] || match[2]);
  }
}

/**
 * Analyze Java file content
 * @param {string} content - File content
 * @param {Object} analysis - Analysis object to populate
 */
function analyzeJavaFile(content, analysis) {
  // Extract method declarations
  const methodRegex = /(?:public|private|protected|static|\s)*\s+\w+\s+(\w+)\s*\(/g;
  let match;
  while ((match = methodRegex.exec(content)) !== null) {
    analysis.functions.push(match[1]);
  }

  // Extract class declarations
  const classRegex = /(?:public\s+)?class\s+(\w+)/g;
  while ((match = classRegex.exec(content)) !== null) {
    analysis.classes.push(match[1]);
  }

  // Extract imports
  const importRegex = /import\s+([^;]+);/g;
  while ((match = importRegex.exec(content)) !== null) {
    analysis.imports.push(match[1]);
  }
}

/**
 * Analyze Go file content
 * @param {string} content - File content
 * @param {Object} analysis - Analysis object to populate
 */
function analyzeGoFile(content, analysis) {
  // Extract function declarations
  const functionRegex = /func\s+(?:\([^)]*\)\s+)?(\w+)/g;
  let match;
  while ((match = functionRegex.exec(content)) !== null) {
    analysis.functions.push(match[1]);
  }

  // Extract type/struct declarations
  const typeRegex = /type\s+(\w+)\s+struct/g;
  while ((match = typeRegex.exec(content)) !== null) {
    analysis.classes.push(match[1]); // Treating structs as classes
  }

  // Extract imports
  const importRegex = /import\s+(?:\([^)]+\)|"([^"]+)")/g;
  while ((match = importRegex.exec(content)) !== null) {
    if (match[1]) {
      analysis.imports.push(match[1]);
    }
  }
}
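To illustrate how the helpers above fit together, here is a small sketch with placeholder paths. Note that `shouldExclude`/`shouldInclude` convert `*` globs into regular expressions, so a pattern like `*.test.*` matches any path containing `.test.`.

```javascript
// Hypothetical example combining the exported helpers above.
import { detectProjectType, getFileList, analyzeFileContent } from './scan-helpers.js';

const projectRoot = '/path/to/project'; // placeholder path

const projectType = detectProjectType(projectRoot); // e.g. { type: 'nodejs', languages: ['javascript'], ... }
const files = getFileList(projectRoot, ['*.js', '*.ts'], ['node_modules', '*.test.*'], 3);

// Per-file summaries: language, line count, functions, classes, imports, exports.
const summaries = files.map(f => analyzeFileContent(f, projectRoot));
console.log(`${files.length} source files matched; first summary:`, summaries[0]);
```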
441
scripts/modules/task-manager/scan-project/scan-project.js
Normal file
@@ -0,0 +1,441 @@
/**
 * Main scan-project functionality
 * Implements intelligent project scanning with AI-driven analysis and ast-grep integration
 */
import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import { ScanConfig, ScanLoggingConfig } from './scan-config.js';
import {
  detectProjectType,
  getFileList,
  analyzeFileContent,
  executeAstGrep
} from './scan-helpers.js';
import { analyzeWithAI } from './ai-analysis.js';

/**
 * Main scan project function
 * @param {string} projectRoot - Project root directory
 * @param {Object} options - Scan options
 * @returns {Promise<Object>} Scan results
 */
export default async function scanProject(projectRoot, options = {}) {
  const config = new ScanConfig({
    projectRoot,
    outputPath: options.outputPath,
    includeFiles: options.includeFiles || [],
    excludeFiles: options.excludeFiles || ['node_modules', '.git', 'dist', 'build', '*.log'],
    scanDepth: options.scanDepth || 5,
    mcpLog: options.mcpLog || false,
    reportProgress: options.reportProgress !== false, // Default to true
    debug: options.debug || false
  });

  const logger = new ScanLoggingConfig(config.mcpLog, config.reportProgress);
  logger.info('Starting intelligent project scan...');

  try {
    // Phase 1: Initial project discovery
    logger.info('Phase 1: Discovering project structure...');
    const initialScan = await performInitialScan(config, logger);

    // Phase 2: File-level analysis
    logger.info('Phase 2: Analyzing individual files...');
    const fileAnalysis = await performFileAnalysis(config, initialScan, logger);

    // Phase 3: AST-grep enhanced analysis
    logger.info('Phase 3: Performing AST analysis...');
    const astAnalysis = await performASTAnalysis(config, fileAnalysis, logger);

    // Phase 4: AI-powered analysis (optional)
    let aiAnalysis = null;
    if (!options.skipAI) {
      logger.info('Phase 4: Enhancing with AI analysis...');
      try {
        aiAnalysis = await analyzeWithAI({
          ...initialScan,
          ...fileAnalysis,
          ...astAnalysis
        }, config);
      } catch (error) {
        logger.warn(`AI analysis failed, continuing without it: ${error.message}`);
        aiAnalysis = {
          projectType: { confidence: 0 },
          coreStructure: { architecture: { pattern: 'unknown' } },
          summary: { complexity: 'unknown' }
        };
      }
    } else {
      logger.info('Phase 4: Skipping AI analysis...');
      aiAnalysis = {
        projectType: { confidence: 0 },
        coreStructure: { architecture: { pattern: 'unknown' } },
        summary: { complexity: 'unknown' }
      };
    }

    // Phase 5: Generate final output
    const finalResults = {
      timestamp: new Date().toISOString(),
      projectRoot: config.projectRoot,
      scanConfig: {
        excludeFiles: config.excludeFiles,
        scanDepth: config.scanDepth
      },
      ...initialScan,
      ...fileAnalysis,
      ...astAnalysis,
      aiAnalysis,
      scanSummary: generateScanSummary(initialScan, fileAnalysis, aiAnalysis)
    };

    // Save results if output path is specified
    if (config.outputPath) {
      await saveResults(finalResults, config.outputPath, logger);
    }

    logger.info('Project scan completed successfully');
    return {
      success: true,
      data: finalResults
    };

  } catch (error) {
    logger.error(`Scan failed: ${error.message}`);
    return {
      success: false,
      error: {
        message: error.message,
        stack: config.debug ? error.stack : undefined
      }
    };
  }
}

/**
 * Phase 1: Perform initial project discovery
 * @param {ScanConfig} config - Scan configuration
 * @param {ScanLoggingConfig} logger - Logger instance
 * @returns {Promise<Object>} Initial scan results
 */
async function performInitialScan(config, logger) {
  logger.info('[Initial Scan]: Discovering project type and structure...');

  // Detect project type
  const projectType = detectProjectType(config.projectRoot);
  logger.info(`[Initial Scan]: Detected ${projectType.type} project`);

  // Get root-level files
  const rootFiles = fs.readdirSync(config.projectRoot)
    .filter(item => {
      const fullPath = path.join(config.projectRoot, item);
      return fs.statSync(fullPath).isFile();
    });

  // Get directory structure (first level)
  const directories = fs.readdirSync(config.projectRoot)
    .filter(item => {
      const fullPath = path.join(config.projectRoot, item);
      return fs.statSync(fullPath).isDirectory() &&
        !config.excludeFiles.includes(item);
    })
    .map(dir => {
      const dirPath = path.join(config.projectRoot, dir);
      try {
        const files = fs.readdirSync(dirPath);
        return {
          name: dir,
          path: dirPath,
          fileCount: files.length,
          files: files.slice(0, 10) // Sample of files
        };
      } catch (error) {
        return {
          name: dir,
          path: dirPath,
          error: 'Access denied'
        };
      }
    });

  // Get complete file list for scanning
  const fileList = getFileList(
    config.projectRoot,
    config.includeFiles,
    config.excludeFiles,
    config.scanDepth
  );

  // Calculate basic statistics
  const stats = {
    totalFiles: fileList.length,
    totalDirectories: directories.length,
    rootFiles: rootFiles.length,
    languages: [...new Set(fileList.map(f => {
      const ext = path.extname(f);
      return ext ? ext.substring(1) : 'unknown';
    }))],
    largestFiles: fileList
      .map(f => {
        try {
          const fullPath = path.join(config.projectRoot, f);
          const stats = fs.statSync(fullPath);
          return { path: f, size: stats.size };
        } catch {
          return { path: f, size: 0 };
        }
      })
      .sort((a, b) => b.size - a.size)
      .slice(0, 10)
  };

  logger.info(`[Initial Scan]: Found ${stats.totalFiles} files in ${stats.totalDirectories} directories`);

  return {
    projectType,
    rootFiles,
    directories,
    fileList,
    stats
  };
}

/**
 * Phase 2: Perform detailed file analysis
 * @param {ScanConfig} config - Scan configuration
 * @param {Object} initialScan - Initial scan results
 * @param {ScanLoggingConfig} logger - Logger instance
 * @returns {Promise<Object>} File analysis results
 */
async function performFileAnalysis(config, initialScan, logger) {
  logger.info('[File Analysis]: Analyzing file contents...');

  const { fileList, projectType } = initialScan;

  // Filter files for detailed analysis (avoid binary files, focus on source code)
  const sourceExtensions = ['.js', '.ts', '.jsx', '.tsx', '.py', '.java', '.go', '.rs', '.php', '.rb', '.cpp', '.c', '.cs'];
  const sourceFiles = fileList.filter(file => {
    const ext = path.extname(file);
    return sourceExtensions.includes(ext) || projectType.entryPoints.includes(file);
  }).slice(0, 100); // Limit to prevent excessive processing

  logger.info(`[File Analysis]: Analyzing ${sourceFiles.length} source files...`);

  // Analyze files
  const detailedFiles = sourceFiles.map(file => {
    try {
      return analyzeFileContent(file, config.projectRoot);
    } catch (error) {
      logger.warn(`[File Analysis]: Failed to analyze ${file}: ${error.message}`);
      return { path: file, error: error.message };
    }
  }).filter(result => !result.error);

  // Group by language
  const byLanguage = detailedFiles.reduce((acc, file) => {
    const lang = file.language || 'unknown';
    if (!acc[lang]) acc[lang] = [];
    acc[lang].push(file);
    return acc;
  }, {});

  // Extract key statistics
  const codeStats = {
    totalLines: detailedFiles.reduce((sum, f) => sum + (f.lines || 0), 0),
    totalFunctions: detailedFiles.reduce((sum, f) => sum + (f.functions?.length || 0), 0),
    totalClasses: detailedFiles.reduce((sum, f) => sum + (f.classes?.length || 0), 0),
    languageBreakdown: Object.keys(byLanguage).map(lang => ({
      language: lang,
      files: byLanguage[lang].length,
      lines: byLanguage[lang].reduce((sum, f) => sum + (f.lines || 0), 0)
    }))
  };

  logger.info(`[File Analysis]: Analyzed ${detailedFiles.length} files, ${codeStats.totalLines} lines, ${codeStats.totalFunctions} functions`);

  return {
    detailedFiles,
    byLanguage,
    codeStats
  };
}

/**
 * Phase 3: Perform AST-grep enhanced analysis
 * @param {ScanConfig} config - Scan configuration
 * @param {Object} fileAnalysis - File analysis results
 * @param {ScanLoggingConfig} logger - Logger instance
 * @returns {Promise<Object>} AST analysis results
 */
async function performASTAnalysis(config, fileAnalysis, logger) {
  logger.info('[AST Analysis]: Performing syntax tree analysis...');

  const { detailedFiles } = fileAnalysis;

  // Select files for AST analysis (focus on main source files)
  const astTargetFiles = detailedFiles
    .filter(file => file.functions?.length > 0 || file.classes?.length > 0)
    .slice(0, 20) // Limit for performance
    .map(file => file.path);

  if (astTargetFiles.length === 0) {
    logger.info('[AST Analysis]: No suitable files found for AST analysis');
    return { astResults: {} };
  }

  logger.info(`[AST Analysis]: Analyzing ${astTargetFiles.length} files with ast-grep...`);

  const astResults = {};

  // Define common patterns to search for
  const patterns = {
    functions: {
      javascript: 'function $_($$$) { $$$ }',
      typescript: 'function $_($$$): $_ { $$$ }',
      python: 'def $_($$$): $$$',
      java: '$_ $_($$$ args) { $$$ }'
    },
    classes: {
      javascript: 'class $_ { $$$ }',
      typescript: 'class $_ { $$$ }',
      python: 'class $_: $$$',
      java: 'class $_ { $$$ }'
    },
    imports: {
      javascript: 'import $_ from $_',
      typescript: 'import $_ from $_',
      python: 'import $_',
      java: 'import $_;'
    }
  };

  // Run AST analysis for different languages
  for (const [language, files] of Object.entries(fileAnalysis.byLanguage || {})) {
    if (patterns.functions[language] && files.length > 0) {
      try {
        logger.debug(`[AST Analysis]: Analyzing ${language} files...`);

        const langFiles = files.map(f => f.path).filter(path => astTargetFiles.includes(path));
        if (langFiles.length > 0) {
          // Run ast-grep for functions
          const functionResults = await executeAstGrep(
            config.projectRoot,
            patterns.functions[language],
            langFiles
          );

          // Run ast-grep for classes
          const classResults = await executeAstGrep(
            config.projectRoot,
            patterns.classes[language],
            langFiles
          );

          astResults[language] = {
            functions: functionResults || [],
            classes: classResults || [],
            files: langFiles
          };
        }
      } catch (error) {
        logger.warn(`[AST Analysis]: AST analysis failed for ${language}: ${error.message}`);
        // Continue with other languages
      }
    }
  }

  const totalMatches = Object.values(astResults).reduce((sum, lang) =>
    sum + (lang.functions?.length || 0) + (lang.classes?.length || 0), 0);

  logger.info(`[AST Analysis]: Found ${totalMatches} AST matches across ${Object.keys(astResults).length} languages`);

  return { astResults };
}

/**
 * Generate scan summary
 * @param {Object} initialScan - Initial scan results
 * @param {Object} fileAnalysis - File analysis results
 * @param {Object} aiAnalysis - AI analysis results
 * @returns {Object} Scan summary
 */
function generateScanSummary(initialScan, fileAnalysis, aiAnalysis) {
  return {
    overview: `Scanned ${initialScan.stats.totalFiles} files across ${initialScan.stats.totalDirectories} directories`,
    projectType: initialScan.projectType.type,
    languages: initialScan.stats.languages,
    codeMetrics: {
      totalLines: fileAnalysis.codeStats?.totalLines || 0,
      totalFunctions: fileAnalysis.codeStats?.totalFunctions || 0,
      totalClasses: fileAnalysis.codeStats?.totalClasses || 0
    },
    aiInsights: {
      confidence: aiAnalysis.projectType?.confidence || 0,
      architecture: aiAnalysis.coreStructure?.architecture?.pattern || 'unknown',
      complexity: aiAnalysis.summary?.complexity || 'unknown'
    },
    recommendations: generateRecommendations(initialScan, fileAnalysis, aiAnalysis)
  };
}

/**
 * Generate recommendations based on scan results
 * @param {Object} initialScan - Initial scan results
 * @param {Object} fileAnalysis - File analysis results
 * @param {Object} aiAnalysis - AI analysis results
 * @returns {Array} List of recommendations
 */
function generateRecommendations(initialScan, fileAnalysis, aiAnalysis) {
  const recommendations = [];

  // Size-based recommendations
  if (initialScan.stats.totalFiles > 500) {
    recommendations.push('Consider using a monorepo management tool for large codebase');
  }

  // Language-specific recommendations
  const jsFiles = fileAnalysis.byLanguage?.javascript?.length || 0;
  const tsFiles = fileAnalysis.byLanguage?.typescript?.length || 0;

  if (jsFiles > tsFiles && jsFiles > 10) {
    recommendations.push('Consider migrating JavaScript files to TypeScript for better type safety');
  }

  // Documentation recommendations
  const readmeExists = initialScan.rootFiles.some(f => f.toLowerCase().includes('readme'));
  if (!readmeExists) {
    recommendations.push('Add a README.md file to document the project');
  }

  // Testing recommendations
  const hasTests = initialScan.fileList.some(f => f.includes('test') || f.includes('spec'));
  if (!hasTests) {
    recommendations.push('Consider adding unit tests to improve code quality');
  }

  return recommendations;
}

/**
 * Save scan results to file
 * @param {Object} results - Scan results
 * @param {string} outputPath - Output file path
 * @param {ScanLoggingConfig} logger - Logger instance
 */
async function saveResults(results, outputPath, logger) {
  try {
    // Ensure output directory exists
    const outputDir = path.dirname(outputPath);
    if (!fs.existsSync(outputDir)) {
      fs.mkdirSync(outputDir, { recursive: true });
    }

    // Write results to file
    fs.writeFileSync(outputPath, JSON.stringify(results, null, 2));
    logger.info(`Scan results saved to: ${outputPath}`);
  } catch (error) {
    logger.error(`Failed to save results: ${error.message}`);
    throw error;
  }
}
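Beyond the CLI, the default export can be invoked directly. Below is a hedged sketch of programmatic use; the import path and project path are placeholders, and the option names simply mirror the `ScanConfig` mapping at the top of `scanProject`.

```javascript
// Hypothetical programmatic invocation of the scanProject default export.
import scanProject from './scripts/modules/task-manager/scan-project/index.js';

const result = await scanProject('/path/to/project', {
  outputPath: 'project_scan.json',
  includeFiles: ['*.js', '*.ts'],
  excludeFiles: ['node_modules', '.git', '*.test.*'],
  scanDepth: 3,
  skipAI: true // analogous to the CLI's --no-ai: run phases 1-3 only
});

if (result.success) {
  console.log(result.data.scanSummary.overview);
} else {
  console.error(result.error.message);
}
```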