Mirror of https://github.com/leonvanzyl/autocoder.git (synced 2026-01-30 06:12:06 +00:00)

Compare commits: 49 commits, 910ca34eac...master
Commits compared:

3edb380b58, 3256072793, 9f67d7ffe4, 8ae6189c0f, a4a33e612e, cf62885e83,
0a46eda5e8, 06c0bf4fd3, 1d67fff9e0, 4cec4e63a4, 836bc8ae16, ce6da81a34,
77b91caa85, 51dc1bba5b, f6ddffa6e2, d47028d97a, a12e4aa3b8, 51d7d79695,
3b905cf3d7, 868a90ab03, 52331d126f, 7a6b7f8f9c, 5f64ae36f2, 5ae7f4cffa,
56f260cb79, f286c93ca3, 3588dc8df7, 3161c1260a, d68d70c800, 76475d1fb6,
f10ad59cf5, b2bfc4cb3b, 11cefec85b, 9e097b3fc8, 80c15a534d, 0072951221,
d652b18587, 03504b3c1a, d1233ad104, cd9f5b76cf, 2b07625ce4, 468e59f86c,
95b0dfac83, e756486515, dae16c3cca, 8e23fee094, 795bd5f92a, 45289ef0d2,
d48fb0a6fc
@@ -95,6 +95,27 @@ Ask the user about their involvement preference:

**For Detailed Mode users**, ask specific tech questions about frontend, backend, database, etc.

### Phase 3b: Database Requirements (MANDATORY)

**Always ask this question regardless of mode:**

> "One foundational question about data storage:
>
> **Does this application need to store user data persistently?**
>
> 1. **Yes, needs a database** - Users create, save, and retrieve data (most apps)
> 2. **No, stateless** - Pure frontend, no data storage needed (calculators, static sites)
> 3. **Not sure** - Let me describe what I need and you decide"

**Branching logic:**

- **If "Yes" or "Not sure"**: Continue normally. The spec will include database in tech stack and the initializer will create 5 mandatory Infrastructure features (indices 0-4) to verify database connectivity and persistence.

- **If "No, stateless"**: Note this in the spec. Skip database from tech stack. Infrastructure features will be simplified (no database persistence tests). Mark this clearly:

```xml
<database>none - stateless application</database>
```

## Phase 4: Features (THE MAIN PHASE)

This is where you spend most of your time. Ask questions in plain language that anyone can answer.
@@ -207,12 +228,23 @@ After gathering all features, **you** (the agent) should tally up the testable f

**Typical ranges for reference:**

- **Simple apps** (todo list, calculator, notes): ~25-55 features (includes 5 infrastructure)
- **Medium apps** (blog, task manager with auth): ~105 features (includes 5 infrastructure)
- **Advanced apps** (e-commerce, CRM, full SaaS): ~155-205 features (includes 5 infrastructure)

These are just reference points - your actual count should come from the requirements discussed.

**MANDATORY: Infrastructure Features**

If the app requires a database (Phase 3b answer was "Yes" or "Not sure"), you MUST include 5 Infrastructure features (indices 0-4):

1. Database connection established
2. Database schema applied correctly
3. Data persists across server restart
4. No mock data patterns in codebase
5. Backend API queries real database

These features ensure the coding agent implements a real database, not mock data or in-memory storage.

**How to count features:**

For each feature area discussed, estimate the number of discrete, testable behaviors:

@@ -225,17 +257,20 @@ For each feature area discussed, estimate the number of discrete, testable behav

> "Based on what we discussed, here's my feature breakdown:
>
> - **Infrastructure (required)**: 5 features (database setup, persistence verification)
> - [Category 1]: ~X features
> - [Category 2]: ~Y features
> - [Category 3]: ~Z features
> - ...
>
> **Total: ~N features** (including 5 infrastructure)
>
> Does this seem right, or should I adjust?"

Let the user confirm or adjust. This becomes your `feature_count` for the spec.

**Important:** The first 5 features (indices 0-4) created by the initializer MUST be the Infrastructure category with no dependencies. All other features depend on these.

## Phase 5: Technical Details (DERIVED OR DISCUSSED)

**For Quick Mode users:**
@@ -8,8 +8,47 @@ Pull request(s): $ARGUMENTS

- At least 1 PR is required.

## TASKS

1. **Retrieve PR Details**
   - Use the GH CLI tool to retrieve the details (descriptions, diffs, comments, feedback, reviews, etc)

2. **Assess PR Complexity**

   After retrieving PR details, assess complexity based on:

   - Number of files changed
   - Lines added/removed
   - Number of contributors/commits
   - Whether changes touch core/architectural files

### Complexity Tiers

**Simple** (no deep dive agents needed):

- ≤5 files changed AND ≤100 lines changed AND single author
- Review directly without spawning agents

**Medium** (1-2 deep dive agents):

- 6-15 files changed, OR 100-500 lines, OR 2 contributors
- Spawn 1 agent for focused areas, 2 if changes span multiple domains

**Complex** (up to 3 deep dive agents):

- >15 files, OR >500 lines, OR >2 contributors, OR touches core architecture
- Spawn up to 3 agents to analyze different aspects (e.g., security, performance, architecture)

3. **Analyze Codebase Impact**
   - Based on the complexity tier determined above, spawn the appropriate number of deep dive subagents
   - For Simple PRs: analyze directly without spawning agents
   - For Medium PRs: spawn 1-2 agents focusing on the most impacted areas
   - For Complex PRs: spawn up to 3 agents to cover security, performance, and architectural concerns

4. **Vision Alignment Check**
   - Read the project's README.md and CLAUDE.md to understand the application's core purpose
   - Assess whether this PR aligns with the application's intended functionality
   - If the changes deviate significantly from the core vision or add functionality that doesn't serve the application's purpose, note this in the review
   - This is not a blocker, but should be flagged for the reviewer's consideration

5. **Safety Assessment**
   - Provide a review on whether the PR is safe to merge as-is
   - Provide any feedback in terms of risk level

6. **Improvements**
   - Propose any improvements in terms of importance and complexity
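The retrieval step and tier cutoffs above can be sketched as a small helper. This is a sketch only: the `pr_tier` name is illustrative, the thresholds mirror the Complexity Tiers section, and the `gh` calls are shown as comments because they require an authenticated GitHub CLI.

```shell
# Sketch: map PR size facts to the complexity tiers defined above.
# Thresholds mirror the Simple/Medium/Complex cutoffs.
pr_tier() {
  files=$1; lines=$2; authors=$3
  if [ "$files" -le 5 ] && [ "$lines" -le 100 ] && [ "$authors" -le 1 ]; then
    echo "simple"
  elif [ "$files" -le 15 ] && [ "$lines" -le 500 ] && [ "$authors" -le 2 ]; then
    echo "medium"
  else
    echo "complex"
  fi
}

# Retrieval (run manually, needs an authenticated gh):
#   gh pr view 123 --json title,body,additions,deletions,changedFiles,reviews,comments
#   gh pr diff 123
```

For example, `pr_tier 10 200 1` falls in the medium tier (6-15 files).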
@@ -156,6 +156,9 @@ Use browser automation tools:

- [ ] Deleted the test data - verified it's gone everywhere
- [ ] NO unexplained data appeared (would indicate mock data)
- [ ] Dashboard/counts reflect real numbers after my changes
- [ ] **Ran extended mock data grep (STEP 5.6) - no hits in src/ (excluding tests)**
- [ ] **Verified no globalThis, devStore, or dev-store patterns**
- [ ] **Server restart test passed (STEP 5.7) - data persists across restart**

#### Navigation Verification
@@ -174,10 +177,92 @@ Use browser automation tools:

### STEP 5.6: MOCK DATA DETECTION (Before marking passing)

**Run ALL these grep checks. Any hits in src/ (excluding test files) require investigation:**

```bash
# Common exclusions for test files
EXCLUDE="--exclude=*.test.* --exclude=*.spec.* --exclude=*__test__* --exclude=*__mocks__*"

# 1. In-memory storage patterns (CRITICAL - catches dev-store)
grep -r "globalThis\." --include="*.ts" --include="*.tsx" --include="*.js" $EXCLUDE src/
grep -r "dev-store\|devStore\|DevStore\|mock-db\|mockDb" --include="*.ts" --include="*.tsx" --include="*.js" $EXCLUDE src/

# 2. Mock data variables
grep -r "mockData\|fakeData\|sampleData\|dummyData\|testData" --include="*.ts" --include="*.tsx" --include="*.js" $EXCLUDE src/

# 3. TODO/incomplete markers
grep -r "TODO.*real\|TODO.*database\|TODO.*API\|STUB\|MOCK" --include="*.ts" --include="*.tsx" --include="*.js" $EXCLUDE src/

# 4. Development-only conditionals
grep -r "isDevelopment\|isDev\|process\.env\.NODE_ENV.*development" --include="*.ts" --include="*.tsx" --include="*.js" $EXCLUDE src/

# 5. In-memory collections as data stores
# (in grep basic regex, parentheses are literal when unescaped)
grep -r "new Map()\|new Set()" --include="*.ts" --include="*.tsx" --include="*.js" $EXCLUDE src/ 2>/dev/null
```

**Rule:** If ANY grep returns results in production code → investigate → FIX before marking passing.

**Runtime verification:**

1. Create unique data (e.g., "TEST_12345") → verify in UI → delete → verify gone
2. Check database directly - all displayed data must come from real DB queries
3. If unexplained data appears, it's mock data - fix before marking passing.
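The grep checks above can be wrapped into a single pass/fail gate. A minimal sketch, assuming TypeScript/JS sources under the target directory; the `mock_data_gate` name and the trimmed pattern list are illustrative - extend the list with the full STEP 5.6 set:

```shell
# Sketch: one pass/fail gate over the mock-data greps above.
mock_data_gate() {
  dir="$1"
  fail=0
  for pattern in \
    'globalThis\.' \
    'dev-store\|devStore\|DevStore\|mock-db\|mockDb' \
    'mockData\|fakeData\|sampleData\|dummyData\|testData' \
    'isDevelopment\|isDev'
  do
    # grep -r exits 0 on a hit; any hit fails the gate
    if grep -r "$pattern" --include='*.ts' --include='*.tsx' --include='*.js' \
        --exclude='*.test.*' --exclude='*.spec.*' "$dir" > /dev/null 2>&1; then
      echo "mock-data gate: hit for pattern: $pattern" >&2
      fail=1
    fi
  done
  return $fail
}
```

A zero exit status means no prohibited pattern was found.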
### STEP 5.7: SERVER RESTART PERSISTENCE TEST (MANDATORY for data features)

**When required:** Any feature involving CRUD operations or data persistence.

**This test is NON-NEGOTIABLE. It catches in-memory storage implementations that pass all other tests.**

**Steps:**

1. Create unique test data via UI or API (e.g., item named "RESTART_TEST_12345")
2. Verify data appears in UI and API response
3. **STOP the server completely:**

   ```bash
   # Kill by port (safer - only kills the dev server, not VS Code/Claude Code/etc.)
   # Unix/macOS:
   lsof -ti :${PORT:-3000} | xargs kill -TERM 2>/dev/null || true
   sleep 3
   lsof -ti :${PORT:-3000} | xargs kill -9 2>/dev/null || true
   sleep 2

   # Windows alternative (use if lsof not available):
   # netstat -ano | findstr :${PORT:-3000} | findstr LISTENING
   # taskkill /F /PID <pid_from_above> 2>nul

   # Verify server is stopped
   if lsof -ti :${PORT:-3000} > /dev/null 2>&1; then
     echo "ERROR: Server still running on port ${PORT:-3000}!"
     exit 1
   fi
   ```

4. **RESTART the server:**

   ```bash
   ./init.sh &
   sleep 15 # Allow server to fully start

   # Verify server is responding
   if ! curl -f http://localhost:${PORT:-3000}/api/health && ! curl -f http://localhost:${PORT:-3000}; then
     echo "ERROR: Server failed to start after restart"
     exit 1
   fi
   ```

5. **Query for test data - it MUST still exist**
   - Via UI: Navigate to data location, verify data appears
   - Via API: `curl http://localhost:${PORT:-3000}/api/items` - verify data in response

6. **If data is GONE:** Implementation uses in-memory storage → CRITICAL FAIL
   - Run all grep commands from STEP 5.6 to identify the mock pattern
   - You MUST fix the in-memory storage implementation before proceeding
   - Replace in-memory storage with real database queries

7. **Clean up test data** after successful verification

**Why this test exists:** In-memory stores like `globalThis.devStore` pass all other tests because data persists during a single server run. Only a full server restart reveals this bug. Skipping this step WILL allow dev-store implementations to slip through.

**YOLO Mode Note:** Even in YOLO mode, this verification is MANDATORY for data features. Use curl instead of browser automation.

### STEP 6: UPDATE FEATURE STATUS (CAREFULLY!)
@@ -202,17 +287,23 @@ Use the feature_mark_passing tool with feature_id=42

### STEP 7: COMMIT YOUR PROGRESS

Make a descriptive git commit.

**Git Commit Rules:**

- ALWAYS use the simple `-m` flag for commit messages
- NEVER use heredocs (`cat <<EOF` or `<<'EOF'`) - they fail in sandbox mode with "can't create temp file for here document: operation not permitted"
- For multi-line messages, use multiple `-m` flags:

```bash
git add .
git commit -m "Implement [feature name] - verified end-to-end" -m "- Added [specific changes]" -m "- Tested with browser automation" -m "- Marked feature #X as passing"
```

Or use a single descriptive message:

```bash
git add .
git commit -m "feat: implement [feature name] with browser verification"
```

### STEP 8: UPDATE PROGRESS NOTES
@@ -36,9 +36,9 @@ Use the feature_create_bulk tool to add all features at once. You can create fea

- Feature count must match the `feature_count` specified in app_spec.txt
- Reference tiers for other projects:
  - **Simple apps**: ~165 tests (includes 5 infrastructure)
  - **Medium apps**: ~265 tests (includes 5 infrastructure)
  - **Advanced apps**: ~405+ tests (includes 5 infrastructure)
- Both "functional" and "style" categories
- Mix of narrow tests (2-5 steps) and comprehensive tests (10+ steps)
- At least 25 tests MUST have 10+ steps each (more for complex apps)
@@ -60,8 +60,9 @@ Dependencies enable **parallel execution** of independent features. When specifi

2. **Can only depend on EARLIER features** (index must be less than current position)
3. **No circular dependencies** allowed
4. **Maximum 20 dependencies** per feature
5. **Infrastructure features (indices 0-4)** have NO dependencies - they run FIRST
6. **ALL features after index 4** MUST depend on `[0, 1, 2, 3, 4]` (infrastructure)
7. **60% of features after index 10** should have additional dependencies beyond infrastructure
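Rule 6 can be machine-checked over a feature-list JSON array. A sketch, assuming `jq` is available and the features are an array ordered by index; the `check_infra_deps` name and file argument are illustrative:

```shell
# Sketch: verify that every feature after index 4 lists all of
# [0, 1, 2, 3, 4] in depends_on_indices. Requires jq.
check_infra_deps() {
  jq -e '[range(5; length) as $i
          | (.[$i].depends_on_indices // [])
          | contains([0, 1, 2, 3, 4])]
         | all' "$1" > /dev/null
}

# Usage: check_infra_deps features.json && echo "rule 6 holds"
```

`jq -e` exits non-zero when the result is `false`, so the function doubles as a shell condition.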
### Dependency Types
@@ -82,30 +83,113 @@ Create WIDE dependency graphs, not linear chains:

```json
[
  // INFRASTRUCTURE TIER (indices 0-4, no dependencies) - MUST run first
  { "name": "Database connection established", "category": "functional" },
  { "name": "Database schema applied correctly", "category": "functional" },
  { "name": "Data persists across server restart", "category": "functional" },
  { "name": "No mock data patterns in codebase", "category": "functional" },
  { "name": "Backend API queries real database", "category": "functional" },

  // FOUNDATION TIER (indices 5-7, depend on infrastructure)
  { "name": "App loads without errors", "category": "functional", "depends_on_indices": [0, 1, 2, 3, 4] },
  { "name": "Navigation bar displays", "category": "style", "depends_on_indices": [0, 1, 2, 3, 4] },
  { "name": "Homepage renders correctly", "category": "functional", "depends_on_indices": [0, 1, 2, 3, 4] },

  // AUTH TIER (indices 8-10, depend on foundation + infrastructure)
  { "name": "User can register", "depends_on_indices": [0, 1, 2, 3, 4, 5] },
  { "name": "User can login", "depends_on_indices": [0, 1, 2, 3, 4, 5, 8] },
  { "name": "User can logout", "depends_on_indices": [0, 1, 2, 3, 4, 9] },

  // CORE CRUD TIER (indices 11-14) - WIDE GRAPH: all 4 depend on login
  { "name": "User can create todo", "depends_on_indices": [0, 1, 2, 3, 4, 9] },
  { "name": "User can view todos", "depends_on_indices": [0, 1, 2, 3, 4, 9] },
  { "name": "User can edit todo", "depends_on_indices": [0, 1, 2, 3, 4, 9, 11] },
  { "name": "User can delete todo", "depends_on_indices": [0, 1, 2, 3, 4, 9, 11] },

  // ADVANCED TIER (indices 15-16) - both depend on view, not each other
  { "name": "User can filter todos", "depends_on_indices": [0, 1, 2, 3, 4, 12] },
  { "name": "User can search todos", "depends_on_indices": [0, 1, 2, 3, 4, 12] }
]
```

**Result:** With 3 parallel agents, this project completes efficiently with proper database validation first.

---

## MANDATORY INFRASTRUCTURE FEATURES (Indices 0-4)

**CRITICAL:** Create these FIRST, before any functional features. These features ensure the application uses a real database, not mock data or in-memory storage.

| Index | Name | Test Steps |
|-------|------|------------|
| 0 | Database connection established | Start server → check logs for DB connection → health endpoint returns DB status |
| 1 | Database schema applied correctly | Connect to DB directly → list tables → verify schema matches spec |
| 2 | Data persists across server restart | Create via API → STOP server completely → START server → query API → data still exists |
| 3 | No mock data patterns in codebase | Run grep for prohibited patterns → must return empty |
| 4 | Backend API queries real database | Check server logs → SQL/DB queries appear for API calls |

**ALL other features MUST depend on indices [0, 1, 2, 3, 4].**

### Infrastructure Feature Descriptions

**Feature 0 - Database connection established:**

```text
Steps:
1. Start the development server
2. Check server logs for a database connection message
3. Call the health endpoint (e.g., GET /api/health)
4. Verify the response includes database status: connected
```

**Feature 1 - Database schema applied correctly:**

```text
Steps:
1. Connect to the database directly (sqlite3, psql, etc.)
2. List all tables in the database
3. Verify tables match what's defined in app_spec.txt
4. Verify key columns exist on each table
```

**Feature 2 - Data persists across server restart (CRITICAL):**

```text
Steps:
1. Create unique test data via API (e.g., POST /api/items with name "RESTART_TEST_12345")
2. Verify data appears in the API response (GET /api/items)
3. STOP the server completely (kill by port to avoid killing unrelated Node processes):
   - Unix/macOS: lsof -ti :$PORT | xargs kill -9 2>/dev/null || true && sleep 5
   - Windows: FOR /F "tokens=5" %a IN ('netstat -aon ^| find ":$PORT"') DO taskkill /F /PID %a 2>nul
   - Note: Replace $PORT with the actual port (e.g., 3000)
4. Verify the server is stopped: lsof -ti :$PORT returns nothing (or netstat on Windows)
5. RESTART the server: ./init.sh & sleep 15
6. Query the API again: GET /api/items
7. Verify "RESTART_TEST_12345" still exists
8. If data is GONE → CRITICAL FAILURE (in-memory storage detected)
9. Clean up test data
```

**Feature 3 - No mock data patterns in codebase:**

```text
Steps:
1. Run: grep -r "globalThis\." --include="*.ts" --include="*.tsx" --include="*.js" src/
2. Run: grep -r "dev-store\|devStore\|DevStore\|mock-db\|mockDb" --include="*.ts" --include="*.tsx" --include="*.js" src/
3. Run: grep -r "mockData\|testData\|fakeData\|sampleData\|dummyData" --include="*.ts" --include="*.tsx" --include="*.js" src/
4. Run: grep -r "TODO.*real\|TODO.*database\|TODO.*API\|STUB\|MOCK" --include="*.ts" --include="*.tsx" --include="*.js" src/
5. Run: grep -r "isDevelopment\|isDev\|process\.env\.NODE_ENV.*development" --include="*.ts" --include="*.tsx" --include="*.js" src/
6. Run: grep -r "new Map()\|new Set()" --include="*.ts" --include="*.tsx" --include="*.js" src/ 2>/dev/null
7. Run: grep -E "json-server|miragejs|msw" package.json
8. ALL grep commands must return empty (exit code 1)
9. If any returns results → investigate and fix before passing
```

**Feature 4 - Backend API queries real database:**

```text
Steps:
1. Start the server with verbose logging
2. Make an API call (e.g., GET /api/items)
3. Check the server logs
4. Verify a SQL query appears (SELECT, INSERT, etc.) or an ORM query log
5. If no DB queries appear in the logs → the implementation is using mock data
```
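The Feature 0 health check can be scripted. A minimal sketch: the `/api/health` path and a JSON body containing `"database": "connected"` are conventions carried over from the steps above, not repo guarantees - adjust to the app's actual contract:

```shell
# Sketch: check the health endpoint's reported database status.
# Response shape and endpoint path are assumptions from Feature 0.
parse_db_status() {
  case "$1" in
    *'"database"'*connected*) echo "connected" ;;
    *) echo "not-connected" ;;
  esac
}

check_db_health() {
  url="${1:-http://localhost:3000/api/health}"
  body=$(curl -fsS "$url" 2>/dev/null) || { echo "unreachable"; return 1; }
  status=$(parse_db_status "$body")
  echo "$status"
  [ "$status" = "connected" ]
}
```

`check_db_health` returns non-zero when the endpoint is unreachable or the database is not reported connected.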
---
@@ -115,8 +199,9 @@ The feature_list.json **MUST** include tests from ALL 20 categories. Minimum cou

### Category Distribution by Complexity Tier

| Category                         | Simple  | Medium  | Advanced |
| -------------------------------- | ------- | ------- | -------- |
| **0. Infrastructure (REQUIRED)** | 5       | 5       | 5        |
| A. Security & Access Control     | 5       | 20      | 40       |
| B. Navigation Integrity          | 15      | 25      | 40       |
| C. Real Data Verification        | 20      | 30      | 50       |
@@ -137,12 +222,14 @@ The feature_list.json **MUST** include tests from ALL 20 categories. Minimum cou

| R. Concurrency & Race Conditions | 5       | 8       | 15       |
| S. Export/Import                 | 5       | 6       | 10       |
| T. Performance                   | 5       | 5       | 10       |
| **TOTAL**                        | **165** | **265** | **405+** |

---

### Category Descriptions

**0. Infrastructure (REQUIRED - Priority 0)** - Database connectivity, schema existence, data persistence across server restart, absence of mock patterns. These features MUST pass before any functional features can begin. All tiers require exactly 5 infrastructure features (indices 0-4).

**A. Security & Access Control** - Test unauthorized access blocking, permission enforcement, session management, role-based access, and data isolation between users.

**B. Navigation Integrity** - Test all buttons, links, menus, breadcrumbs, deep links, back button behavior, 404 handling, and post-login/logout redirects.
@@ -205,6 +292,16 @@ The feature_list.json must include tests that **actively verify real data** and

- `setTimeout` simulating API delays with static data
- Static returns instead of database queries

**Additional prohibited patterns (in-memory stores):**

- `globalThis.` (in-memory storage pattern)
- `dev-store`, `devStore`, `DevStore` (development stores)
- `json-server`, `mirage`, `msw` (mock backends)
- `Map()` or `Set()` used as primary data store
- Environment checks like `if (process.env.NODE_ENV === 'development')` for data routing

**Why this matters:** In-memory stores (like `globalThis.devStore`) will pass simple tests because data persists during a single server run. But data is LOST on server restart, which is unacceptable for production. The Infrastructure features (0-4) specifically test for this by requiring data to survive a full server restart.

---

**CRITICAL INSTRUCTION:**
@@ -15,6 +15,13 @@

# - false: Browser opens a visible window (useful for debugging)
# PLAYWRIGHT_HEADLESS=true

# Extra Read Paths (Optional)
# Comma-separated list of absolute paths for read-only access to external directories.
# The agent can read files from these paths but cannot write to them.
# Useful for referencing documentation, shared libraries, or other projects.
# Example: EXTRA_READ_PATHS=/Volumes/Data/dev,/Users/shared/libs
# EXTRA_READ_PATHS=
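One way to consume this comma-separated value - a sketch only; the variable format comes from the comments above, while the policy of rejecting relative entries and warning on missing directories is an assumption:

```shell
# Sketch: split EXTRA_READ_PATHS on commas and reject non-absolute entries.
validate_extra_read_paths() {
  paths="$1"
  rc=0
  old_ifs=$IFS; IFS=','
  for p in $paths; do
    case "$p" in
      /*) [ -d "$p" ] || echo "warning: not a directory: $p" >&2 ;;
      *)  echo "error: not an absolute path: $p" >&2; rc=1 ;;
    esac
  done
  IFS=$old_ifs
  return $rc
}
```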

# GLM/Alternative API Configuration (Optional)
# To use Zhipu AI's GLM models instead of Claude, uncomment and set these variables.
# This only affects AutoCoder - your global Claude Code settings remain unchanged.
CLAUDE.md
@@ -2,6 +2,12 @@

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Prerequisites

- Python 3.11+
- Node.js 20+ (for UI development)
- Claude Code CLI

## Project Overview

This is an autonomous coding agent system with a React-based UI. It uses the Claude Agent SDK to build complete applications over multiple sessions using a two-agent pattern:
@@ -86,6 +92,33 @@ npm run lint # Run ESLint
|
|||||||
|
|
||||||
**Note:** The `start_ui.bat` script serves the pre-built UI from `ui/dist/`. After making UI changes, run `npm run build` in the `ui/` directory.
|
**Note:** The `start_ui.bat` script serves the pre-built UI from `ui/dist/`. After making UI changes, run `npm run build` in the `ui/` directory.
|
||||||
|
|
||||||
|
## Testing
|
||||||
|
|
||||||
|
### Python
|
||||||
|
|
||||||
|
```bash
|
||||||
|
ruff check . # Lint
|
||||||
|
mypy . # Type check
|
||||||
|
python test_security.py # Security unit tests (163 tests)
|
||||||
|
python test_security_integration.py # Integration tests (9 tests)
|
||||||
|
```
|
||||||
|
|
||||||
|
### React UI
|
||||||
|
|
||||||
|
```bash
|
||||||
|
cd ui
|
||||||
|
npm run lint # ESLint
|
||||||
|
npm run build # Type check + build
|
||||||
|
npm run test:e2e # Playwright end-to-end tests
|
||||||
|
npm run test:e2e:ui # Playwright tests with UI
|
||||||
|
```
|
||||||
|
|
||||||
|
### Code Quality
|
||||||
|
|
||||||
|
Configuration in `pyproject.toml`:
|
||||||
|
- ruff: Line length 120, Python 3.11 target
|
||||||
|
- mypy: Strict return type checking, ignores missing imports
|
||||||
|
|
||||||
## Architecture
|
## Architecture
|
||||||
|
|
||||||
### Core Python Modules
|
### Core Python Modules
|
||||||
@@ -141,7 +174,7 @@ MCP tools available to the agent:
|
|||||||
|
|
||||||
### React UI (ui/)
|
### React UI (ui/)
|
||||||
|
|
||||||
- Tech stack: React 18, TypeScript, TanStack Query, Tailwind CSS v4, Radix UI, dagre (graph layout)
|
- Tech stack: React 19, TypeScript, TanStack Query, Tailwind CSS v4, Radix UI, dagre (graph layout)
|
||||||
- `src/App.tsx` - Main app with project selection, kanban board, agent controls
|
- `src/App.tsx` - Main app with project selection, kanban board, agent controls
|
||||||
- `src/hooks/useWebSocket.ts` - Real-time updates via WebSocket (progress, agent status, logs, agent updates)
|
- `src/hooks/useWebSocket.ts` - Real-time updates via WebSocket (progress, agent status, logs, agent updates)
|
||||||
- `src/hooks/useProjects.ts` - React Query hooks for API calls
|
- `src/hooks/useProjects.ts` - React Query hooks for API calls
|
||||||
@@ -178,6 +211,46 @@ Defense-in-depth approach configured in `client.py`:
|
|||||||
2. Filesystem restricted to project directory only
|
2. Filesystem restricted to project directory only
|
||||||
3. Bash commands validated using hierarchical allowlist system
|
3. Bash commands validated using hierarchical allowlist system
|
||||||
|
|
||||||
|
#### Extra Read Paths (Cross-Project File Access)
|
||||||
|
|
||||||
|
The agent can optionally read files from directories outside the project folder via the `EXTRA_READ_PATHS` environment variable. This enables referencing documentation, shared libraries, or other projects.
|
||||||
|
|
||||||
|
**Configuration:**
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Single path
|
||||||
|
EXTRA_READ_PATHS=/Users/me/docs
|
||||||
|
|
||||||
|
# Multiple paths (comma-separated)
|
||||||
|
EXTRA_READ_PATHS=/Users/me/docs,/opt/shared-libs,/Volumes/Data/reference
|
||||||
|
```
|
||||||
|
|
||||||
|
**Security Controls:**
|
||||||
|
|
||||||
|
All paths are validated before being granted read access:
|
||||||
|
- Must be absolute paths (not relative)
|
||||||
|
- Must exist and be directories
|
||||||
|
- Paths are canonicalized via `Path.resolve()` to prevent `..` traversal attacks
|
||||||
|
- Sensitive directories are blocked (see blocklist below)
|
||||||
|
- Only Read, Glob, and Grep operations are allowed (no Write/Edit)
|
||||||
|
|
||||||
|
**Blocked Sensitive Directories:**
|
||||||
|
|
||||||
|
The following directories (relative to home) are always blocked:
|
||||||
|
- `.ssh`, `.aws`, `.azure`, `.kube` - Cloud/SSH credentials
|
||||||
|
- `.gnupg`, `.gpg`, `.password-store` - Encryption keys
|
||||||
|
- `.docker`, `.config/gcloud` - Container/cloud configs
|
||||||
|
- `.npmrc`, `.pypirc`, `.netrc` - Package manager credentials
|
||||||
|
|
||||||
|
**Example Output:**
|
||||||
|
|
||||||
|
```
|
||||||
|
Created security settings at /path/to/project/.claude_settings.json
|
||||||
|
- Sandbox enabled (OS-level bash isolation)
|
||||||
|
- Filesystem restricted to: /path/to/project
|
||||||
|
- Extra read paths (validated): /Users/me/docs, /opt/shared-libs
|
||||||
|
```
|
||||||
|
|
||||||
#### Per-Project Allowed Commands
|
#### Per-Project Allowed Commands
|
||||||
|
|
||||||
The agent's bash command access is controlled through a hierarchical configuration system:
|
The agent's bash command access is controlled through a hierarchical configuration system:
|
||||||
@@ -237,15 +310,6 @@ blocked_commands:
|
|||||||
- Blocklisted commands (sudo, dd, shutdown, etc.) can NEVER be allowed
|
- Blocklisted commands (sudo, dd, shutdown, etc.) can NEVER be allowed
|
||||||
- Org-level blocked commands cannot be overridden by project configs
|
- Org-level blocked commands cannot be overridden by project configs
|
||||||
|
|
||||||
**Testing:**
|
|
||||||
```bash
|
|
||||||
# Unit tests (136 tests - fast)
|
|
||||||
python test_security.py
|
|
||||||
|
|
||||||
# Integration tests (9 tests - uses real hooks)
|
|
||||||
python test_security_integration.py
|
|
||||||
```
|
|
||||||
|
|
||||||
**Files:**
|
**Files:**
|
||||||
- `security.py` - Command validation logic and hardcoded blocklist
|
- `security.py` - Command validation logic and hardcoded blocklist
|
||||||
- `test_security.py` - Unit tests for security system (136 tests)
|
- `test_security.py` - Unit tests for security system (136 tests)
|
||||||
@@ -334,55 +398,7 @@ The orchestrator enforces strict bounds on concurrent processes:
|
|||||||
- `MAX_PARALLEL_AGENTS = 5` - Maximum concurrent coding agents
|
- `MAX_PARALLEL_AGENTS = 5` - Maximum concurrent coding agents
|
||||||
- `MAX_TOTAL_AGENTS = 10` - Hard limit on total agents (coding + testing)
|
- `MAX_TOTAL_AGENTS = 10` - Hard limit on total agents (coding + testing)
|
||||||
- Testing agents are capped at `max_concurrency` (same as coding agents)
|
- Testing agents are capped at `max_concurrency` (same as coding agents)
|
||||||
|
- Total process count never exceeds 11 Python processes (1 orchestrator + 5 coding + 5 testing)
|
||||||
**Expected process count during normal operation:**
|
|
||||||
- 1 orchestrator process
|
|
||||||
- Up to 5 coding agents
|
|
||||||
- Up to 5 testing agents
|
|
||||||
- Total: never exceeds 11 Python processes
|
|
||||||
|
|
||||||
**Stress Test Verification:**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Windows - verify process bounds
|
|
||||||
# 1. Note baseline count
|
|
||||||
tasklist | findstr python | find /c /v ""
|
|
||||||
|
|
||||||
# 2. Start parallel agent (max concurrency)
|
|
||||||
python autonomous_agent_demo.py --project-dir test --parallel --max-concurrency 5
|
|
||||||
|
|
||||||
# 3. During run - should NEVER exceed baseline + 11
|
|
||||||
tasklist | findstr python | find /c /v ""
|
|
||||||
|
|
||||||
# 4. After stop via UI - should return to baseline
|
|
||||||
tasklist | findstr python | find /c /v ""
|
|
||||||
```
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# macOS/Linux - verify process bounds
|
|
||||||
# 1. Note baseline count
|
|
||||||
pgrep -c python
|
|
||||||
|
|
||||||
# 2. Start parallel agent
|
|
||||||
python autonomous_agent_demo.py --project-dir test --parallel --max-concurrency 5
|
|
||||||
|
|
||||||
# 3. During run - should NEVER exceed baseline + 11
|
|
||||||
pgrep -c python
|
|
||||||
|
|
||||||
# 4. After stop - should return to baseline
|
|
||||||
pgrep -c python
|
|
||||||
```
|
|
||||||
|
|
||||||
**Log Verification:**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Check spawn vs completion balance
|
|
||||||
grep "Started testing agent" orchestrator_debug.log | wc -l
|
|
||||||
grep "Testing agent.*completed\|failed" orchestrator_debug.log | wc -l
|
|
||||||
|
|
||||||
# Watch for cap enforcement messages
|
|
||||||
grep "at max testing agents\|At max total agents" orchestrator_debug.log
|
|
||||||
```
|
|
||||||
|
|
||||||
### Design System
|
### Design System
|
||||||
|
|
||||||
|
|||||||
@@ -336,12 +336,20 @@ def create_database(project_dir: Path) -> tuple:
     """
     Create database and return engine + session maker.
 
+    Uses a cache to avoid creating new engines for each request, which improves
+    performance by reusing database connections.
+
     Args:
         project_dir: Directory containing the project
 
     Returns:
         Tuple of (engine, SessionLocal)
     """
+    cache_key = project_dir.as_posix()
+
+    if cache_key in _engine_cache:
+        return _engine_cache[cache_key]
+
     db_url = get_database_url(project_dir)
     engine = create_engine(db_url, connect_args={
         "check_same_thread": False,

@@ -369,12 +377,39 @@ def create_database(project_dir: Path) -> tuple:
     _migrate_add_schedules_tables(engine)
 
     SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
+
+    # Cache the engine and session maker
+    _engine_cache[cache_key] = (engine, SessionLocal)
+
     return engine, SessionLocal
 
 
+def dispose_engine(project_dir: Path) -> bool:
+    """Dispose of and remove the cached engine for a project.
+
+    This closes all database connections, releasing file locks on Windows.
+    Should be called before deleting the database file.
+
+    Returns:
+        True if an engine was disposed, False if no engine was cached.
+    """
+    cache_key = project_dir.as_posix()
+
+    if cache_key in _engine_cache:
+        engine, _ = _engine_cache.pop(cache_key)
+        engine.dispose()
+        return True
+
+    return False
+
+
 # Global session maker - will be set when server starts
 _session_maker: Optional[sessionmaker] = None
+
+# Engine cache to avoid creating new engines for each request
+# Key: project directory path (as posix string), Value: (engine, SessionLocal)
+_engine_cache: dict[str, tuple] = {}
 
 
 def set_session_maker(session_maker: sessionmaker) -> None:
     """Set the global session maker."""
@@ -300,15 +300,20 @@ def compute_scheduling_scores(features: list[dict]) -> dict[int, float]:
             parents[f["id"]].append(dep_id)
 
     # Calculate depths via BFS from roots
+    # Use visited set to prevent infinite loops from circular dependencies
     depths: dict[int, int] = {}
+    visited: set[int] = set()
     roots = [f["id"] for f in features if not parents[f["id"]]]
     queue = [(root, 0) for root in roots]
     while queue:
         node_id, depth = queue.pop(0)
-        if node_id not in depths or depth > depths[node_id]:
-            depths[node_id] = depth
+        if node_id in visited:
+            continue  # Skip already visited nodes (handles cycles)
+        visited.add(node_id)
+        depths[node_id] = depth
         for child_id in children[node_id]:
-            queue.append((child_id, depth + 1))
+            if child_id not in visited:
+                queue.append((child_id, depth + 1))
 
     # Handle orphaned nodes (shouldn't happen but be safe)
     for f in features:
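The visited-set change in this hunk is what makes the depth calculation terminate on circular dependencies. A self-contained sketch (function and argument names are illustrative, not the repository's API):

```python
from collections import deque


def compute_depths(children: dict[int, list[int]], roots: list[int]) -> dict[int, int]:
    """BFS depth computation that tolerates cycles via a visited set."""
    depths: dict[int, int] = {}
    visited: set[int] = set()
    queue = deque((root, 0) for root in roots)
    while queue:
        node_id, depth = queue.popleft()
        if node_id in visited:
            continue  # already assigned a depth; also breaks dependency cycles
        visited.add(node_id)
        depths[node_id] = depth
        for child_id in children.get(node_id, []):
            if child_id not in visited:
                queue.append((child_id, depth + 1))
    return depths


# A graph with a cycle (2 -> 3 -> 2) no longer loops forever:
print(compute_depths({1: [2], 2: [3], 3: [2]}, roots=[1]))  # {1: 0, 2: 1, 3: 2}
```

The diff keeps `queue.pop(0)` on a plain list; `collections.deque` gives the same left-pop in O(1), which is the choice made in this sketch.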
107  client.py

@@ -42,6 +42,28 @@ API_ENV_VARS = [
     "ANTHROPIC_DEFAULT_HAIKU_MODEL", # Model override for Haiku
 ]
 
+# Extra read paths for cross-project file access (read-only)
+# Set EXTRA_READ_PATHS environment variable with comma-separated absolute paths
+# Example: EXTRA_READ_PATHS=/Volumes/Data/dev,/Users/shared/libs
+EXTRA_READ_PATHS_VAR = "EXTRA_READ_PATHS"
+
+# Sensitive directories that should never be allowed via EXTRA_READ_PATHS
+# These contain credentials, keys, or system-critical files
+EXTRA_READ_PATHS_BLOCKLIST = {
+    ".ssh",
+    ".aws",
+    ".azure",
+    ".kube",
+    ".gnupg",
+    ".gpg",
+    ".password-store",
+    ".docker",
+    ".config/gcloud",
+    ".npmrc",
+    ".pypirc",
+    ".netrc",
+}
+
 
 def get_playwright_headless() -> bool:
     """

@@ -80,6 +102,79 @@ def get_playwright_browser() -> str:
     return value
 
 
+def get_extra_read_paths() -> list[Path]:
+    """
+    Get extra read-only paths from EXTRA_READ_PATHS environment variable.
+
+    Parses comma-separated absolute paths and validates each one:
+    - Must be an absolute path
+    - Must exist and be a directory
+    - Cannot be or contain sensitive directories (e.g., .ssh, .aws)
+
+    Returns:
+        List of validated, canonicalized Path objects.
+    """
+    raw_value = os.getenv(EXTRA_READ_PATHS_VAR, "").strip()
+    if not raw_value:
+        return []
+
+    validated_paths: list[Path] = []
+    home_dir = Path.home()
+
+    for path_str in raw_value.split(","):
+        path_str = path_str.strip()
+        if not path_str:
+            continue
+
+        # Parse and canonicalize the path
+        try:
+            path = Path(path_str).resolve()
+        except (OSError, ValueError) as e:
+            print(f" - Warning: Invalid EXTRA_READ_PATHS path '{path_str}': {e}")
+            continue
+
+        # Must be absolute (resolve() makes it absolute, but check original input)
+        if not Path(path_str).is_absolute():
+            print(f" - Warning: EXTRA_READ_PATHS requires absolute paths, skipping: {path_str}")
+            continue
+
+        # Must exist
+        if not path.exists():
+            print(f" - Warning: EXTRA_READ_PATHS path does not exist, skipping: {path_str}")
+            continue
+
+        # Must be a directory
+        if not path.is_dir():
+            print(f" - Warning: EXTRA_READ_PATHS path is not a directory, skipping: {path_str}")
+            continue
+
+        # Check against sensitive directory blocklist
+        is_blocked = False
+        for sensitive in EXTRA_READ_PATHS_BLOCKLIST:
+            sensitive_path = (home_dir / sensitive).resolve()
+            try:
+                # Block if path IS the sensitive dir or is INSIDE it
+                if path == sensitive_path or path.is_relative_to(sensitive_path):
+                    print(f" - Warning: EXTRA_READ_PATHS blocked sensitive path: {path_str}")
+                    is_blocked = True
+                    break
+                # Also block if sensitive dir is INSIDE the requested path
+                if sensitive_path.is_relative_to(path):
+                    print(f" - Warning: EXTRA_READ_PATHS path contains sensitive directory ({sensitive}): {path_str}")
+                    is_blocked = True
+                    break
+            except (OSError, ValueError):
+                # is_relative_to can raise on some edge cases
+                continue
+
+        if is_blocked:
+            continue
+
+        validated_paths.append(path)
+
+    return validated_paths
+
+
 # Feature MCP tools for feature/test management
 FEATURE_MCP_TOOLS = [
     # Core feature operations

@@ -202,6 +297,16 @@ def create_client(
         # Allow Feature MCP tools for feature management
         *FEATURE_MCP_TOOLS,
     ]
+
+    # Add extra read paths from environment variable (read-only access)
+    # Paths are validated, canonicalized, and checked against sensitive blocklist
+    extra_read_paths = get_extra_read_paths()
+    for path in extra_read_paths:
+        # Add read-only permissions for each validated path
+        permissions_list.append(f"Read({path}/**)")
+        permissions_list.append(f"Glob({path}/**)")
+        permissions_list.append(f"Grep({path}/**)")
+
     if not yolo_mode:
         # Allow Playwright MCP tools for browser automation (standard mode only)
         permissions_list.extend(PLAYWRIGHT_TOOLS)

@@ -228,6 +333,8 @@ def create_client(
     print(f"Created security settings at {settings_file}")
     print(" - Sandbox enabled (OS-level bash isolation)")
     print(f" - Filesystem restricted to: {project_dir.resolve()}")
+    if extra_read_paths:
+        print(f" - Extra read paths (validated): {', '.join(str(p) for p in extra_read_paths)}")
     print(" - Bash commands restricted to allowlist (see security.py)")
     if yolo_mode:
         print(" - MCP servers: features (database) - YOLO MODE (no Playwright)")
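The two-directional blocklist check in `get_extra_read_paths()` (reject a path that is inside a sensitive directory, and also a path that contains one) reduces to a small predicate. A sketch assuming only the standard library, with a trimmed-down blocklist and a hypothetical home directory:

```python
from pathlib import Path

# Trimmed-down stand-in for EXTRA_READ_PATHS_BLOCKLIST
BLOCKLIST = {".ssh", ".aws"}


def is_blocked(candidate: str, home: Path) -> bool:
    """True if candidate is, is inside, or contains a sensitive directory."""
    path = Path(candidate).resolve()
    for sensitive in BLOCKLIST:
        sensitive_path = (home / sensitive).resolve()
        try:
            if path == sensitive_path or path.is_relative_to(sensitive_path):
                return True  # candidate is the sensitive dir or nested in it
            if sensitive_path.is_relative_to(path):
                return True  # candidate contains the sensitive dir
        except (OSError, ValueError):
            continue
    return False


home = Path("/home/alice")
print(is_blocked("/home/alice/.ssh/keys", home))  # nested inside ~/.ssh
print(is_blocked("/home/alice", home))            # contains ~/.ssh
print(is_blocked("/opt/shared-libs", home))       # unrelated path
```

Note that `Path.is_relative_to` (Python 3.9+) is a pure path comparison, so granting the home directory itself is rejected because `~/.ssh` would fall under it.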
@@ -504,14 +504,20 @@ class ParallelOrchestrator:
             cmd.append("--yolo")
 
         try:
-            proc = subprocess.Popen(
-                cmd,
-                stdout=subprocess.PIPE,
-                stderr=subprocess.STDOUT,
-                text=True,
-                cwd=str(AUTOCODER_ROOT),
-                env={**os.environ, "PYTHONUNBUFFERED": "1"},
-            )
+            # CREATE_NO_WINDOW on Windows prevents console window pop-ups
+            # stdin=DEVNULL prevents blocking on stdin reads
+            popen_kwargs = {
+                "stdin": subprocess.DEVNULL,
+                "stdout": subprocess.PIPE,
+                "stderr": subprocess.STDOUT,
+                "text": True,
+                "cwd": str(AUTOCODER_ROOT), # Run from autocoder root for proper imports
+                "env": {**os.environ, "PYTHONUNBUFFERED": "1"},
+            }
+            if sys.platform == "win32":
+                popen_kwargs["creationflags"] = subprocess.CREATE_NO_WINDOW
+
+            proc = subprocess.Popen(cmd, **popen_kwargs)
         except Exception as e:
             # Reset in_progress on failure
             session = self.get_session()

@@ -587,14 +593,20 @@ class ParallelOrchestrator:
             cmd.extend(["--model", self.model])
 
         try:
-            proc = subprocess.Popen(
-                cmd,
-                stdout=subprocess.PIPE,
-                stderr=subprocess.STDOUT,
-                text=True,
-                cwd=str(AUTOCODER_ROOT),
-                env={**os.environ, "PYTHONUNBUFFERED": "1"},
-            )
+            # CREATE_NO_WINDOW on Windows prevents console window pop-ups
+            # stdin=DEVNULL prevents blocking on stdin reads
+            popen_kwargs = {
+                "stdin": subprocess.DEVNULL,
+                "stdout": subprocess.PIPE,
+                "stderr": subprocess.STDOUT,
+                "text": True,
+                "cwd": str(AUTOCODER_ROOT),
+                "env": {**os.environ, "PYTHONUNBUFFERED": "1"},
+            }
+            if sys.platform == "win32":
+                popen_kwargs["creationflags"] = subprocess.CREATE_NO_WINDOW
+
+            proc = subprocess.Popen(cmd, **popen_kwargs)
         except Exception as e:
             debug_log.log("TESTING", f"FAILED to spawn testing agent: {e}")
             return False, f"Failed to start testing agent: {e}"

@@ -638,14 +650,20 @@ class ParallelOrchestrator:
 
         print("Running initializer agent...", flush=True)
 
-        proc = subprocess.Popen(
-            cmd,
-            stdout=subprocess.PIPE,
-            stderr=subprocess.STDOUT,
-            text=True,
-            cwd=str(AUTOCODER_ROOT),
-            env={**os.environ, "PYTHONUNBUFFERED": "1"},
-        )
+        # CREATE_NO_WINDOW on Windows prevents console window pop-ups
+        # stdin=DEVNULL prevents blocking on stdin reads
+        popen_kwargs = {
+            "stdin": subprocess.DEVNULL,
+            "stdout": subprocess.PIPE,
+            "stderr": subprocess.STDOUT,
+            "text": True,
+            "cwd": str(AUTOCODER_ROOT),
+            "env": {**os.environ, "PYTHONUNBUFFERED": "1"},
+        }
+        if sys.platform == "win32":
+            popen_kwargs["creationflags"] = subprocess.CREATE_NO_WINDOW
+
+        proc = subprocess.Popen(cmd, **popen_kwargs)
 
         debug_log.log("INIT", "Initializer subprocess started", pid=proc.pid)
 

@@ -703,6 +721,12 @@ class ParallelOrchestrator:
                     print(f"[Feature #{feature_id}] {line}", flush=True)
             proc.wait()
         finally:
+            # CRITICAL: Kill the process tree to clean up any child processes (e.g., Claude CLI)
+            # This prevents zombie processes from accumulating
+            try:
+                kill_process_tree(proc, timeout=2.0)
+            except Exception as e:
+                debug_log.log("CLEANUP", f"Error killing process tree for {agent_type} agent", error=str(e))
             self._on_agent_complete(feature_id, proc.returncode, agent_type, proc)
 
     def _signal_agent_completed(self):
72  registry.py

@@ -16,7 +16,7 @@ from datetime import datetime
 from pathlib import Path
 from typing import Any
 
-from sqlalchemy import Column, DateTime, String, create_engine
+from sqlalchemy import Column, DateTime, Integer, String, create_engine, text
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import sessionmaker
 

@@ -85,6 +85,7 @@ class Project(Base):
     name = Column(String(50), primary_key=True, index=True)
     path = Column(String, nullable=False) # POSIX format for cross-platform
     created_at = Column(DateTime, nullable=False)
+    default_concurrency = Column(Integer, nullable=False, default=3)
 
 
 class Settings(Base):

@@ -146,12 +147,26 @@ def _get_engine():
         }
     )
     Base.metadata.create_all(bind=_engine)
+    _migrate_add_default_concurrency(_engine)
     _SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=_engine)
     logger.debug("Initialized registry database at: %s", db_path)
 
     return _engine, _SessionLocal
 
 
+def _migrate_add_default_concurrency(engine) -> None:
+    """Add default_concurrency column if missing (for existing databases)."""
+    with engine.connect() as conn:
+        result = conn.execute(text("PRAGMA table_info(projects)"))
+        columns = [row[1] for row in result.fetchall()]
+        if "default_concurrency" not in columns:
+            conn.execute(text(
+                "ALTER TABLE projects ADD COLUMN default_concurrency INTEGER DEFAULT 3"
+            ))
+            conn.commit()
+            logger.info("Migrated projects table: added default_concurrency column")
+
+
 @contextmanager
 def _get_session():
     """

@@ -307,7 +322,8 @@ def list_registered_projects() -> dict[str, dict[str, Any]]:
         return {
             p.name: {
                 "path": p.path,
-                "created_at": p.created_at.isoformat() if p.created_at else None
+                "created_at": p.created_at.isoformat() if p.created_at else None,
+                "default_concurrency": getattr(p, 'default_concurrency', 3) or 3
             }
             for p in projects
         }

@@ -333,7 +349,8 @@ def get_project_info(name: str) -> dict[str, Any] | None:
             return None
         return {
             "path": project.path,
-            "created_at": project.created_at.isoformat() if project.created_at else None
+            "created_at": project.created_at.isoformat() if project.created_at else None,
+            "default_concurrency": getattr(project, 'default_concurrency', 3) or 3
         }
     finally:
         session.close()

@@ -362,6 +379,55 @@ def update_project_path(name: str, new_path: Path) -> bool:
         return True
 
 
+def get_project_concurrency(name: str) -> int:
+    """
+    Get project's default concurrency (1-5).
+
+    Args:
+        name: The project name.
+
+    Returns:
+        The default concurrency value (defaults to 3 if not set or project not found).
+    """
+    _, SessionLocal = _get_engine()
+    session = SessionLocal()
+    try:
+        project = session.query(Project).filter(Project.name == name).first()
+        if project is None:
+            return 3
+        return getattr(project, 'default_concurrency', 3) or 3
+    finally:
+        session.close()
+
+
+def set_project_concurrency(name: str, concurrency: int) -> bool:
+    """
+    Set project's default concurrency (1-5).
+
+    Args:
+        name: The project name.
+        concurrency: The concurrency value (1-5).
+
+    Returns:
+        True if updated, False if project wasn't found.
+
+    Raises:
+        ValueError: If concurrency is not between 1 and 5.
+    """
+    if concurrency < 1 or concurrency > 5:
+        raise ValueError("concurrency must be between 1 and 5")
+
+    with _get_session() as session:
+        project = session.query(Project).filter(Project.name == name).first()
+        if not project:
+            return False
+
+        project.default_concurrency = concurrency
+
+        logger.info("Set project '%s' default_concurrency to %d", name, concurrency)
+        return True
+
+
 # =============================================================================
 # Validation Functions
 # =============================================================================
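The migration pattern above (inspect `PRAGMA table_info`, then `ALTER TABLE ... ADD COLUMN` only when the column is absent) can be shown directly against an in-memory SQLite database, without SQLAlchemy; table and column names mirror the diff:

```python
import sqlite3

# Pre-migration schema: no default_concurrency column yet
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE projects (name TEXT PRIMARY KEY, path TEXT NOT NULL)")


def migrate_add_default_concurrency(conn: sqlite3.Connection) -> bool:
    """Add the column only if it is missing; returns True when a migration ran."""
    columns = [row[1] for row in conn.execute("PRAGMA table_info(projects)")]
    if "default_concurrency" not in columns:
        conn.execute("ALTER TABLE projects ADD COLUMN default_concurrency INTEGER DEFAULT 3")
        conn.commit()
        return True
    return False


print(migrate_add_default_concurrency(conn))  # True: column added
print(migrate_add_default_concurrency(conn))  # False: idempotent second call
conn.execute("INSERT INTO projects (name, path) VALUES ('demo', '/tmp/demo')")
print(conn.execute("SELECT default_concurrency FROM projects").fetchone())  # (3,)
```

Because the check runs on every engine creation, existing registry databases are upgraded transparently on first use.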
|||||||
112
security.py
112
security.py
@@ -6,6 +6,7 @@ Pre-tool-use hooks that validate bash commands for security.
|
|||||||
Uses an allowlist approach - only explicitly permitted commands can run.
|
Uses an allowlist approach - only explicitly permitted commands can run.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
import logging
|
||||||
import os
|
import os
|
||||||
import re
|
import re
|
||||||
import shlex
|
import shlex
|
||||||
@@ -14,6 +15,9 @@ from typing import Optional
|
|||||||
|
|
||||||
import yaml
|
import yaml
|
||||||
|
|
||||||
|
# Logger for security-related events (fallback parsing, validation failures, etc.)
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
# Regex pattern for valid pkill process names (no regex metacharacters allowed)
|
# Regex pattern for valid pkill process names (no regex metacharacters allowed)
|
||||||
# Matches alphanumeric names with dots, underscores, and hyphens
|
# Matches alphanumeric names with dots, underscores, and hyphens
|
||||||
 VALID_PROCESS_NAME_PATTERN = re.compile(r"^[A-Za-z0-9._-]+$")
@@ -140,6 +144,45 @@ def split_command_segments(command_string: str) -> list[str]:
     return result
 
 
+def _extract_primary_command(segment: str) -> str | None:
+    """
+    Fallback command extraction when shlex fails.
+
+    Extracts the first word that looks like a command, handling cases
+    like complex docker exec commands with nested quotes.
+
+    Args:
+        segment: The command segment to parse
+
+    Returns:
+        The primary command name, or None if extraction fails
+    """
+    # Remove leading whitespace
+    segment = segment.lstrip()
+
+    if not segment:
+        return None
+
+    # Skip env var assignments at start (VAR=value cmd)
+    words = segment.split()
+    while words and "=" in words[0] and not words[0].startswith("="):
+        words = words[1:]
+
+    if not words:
+        return None
+
+    # Extract first token (the command)
+    first_word = words[0]
+
+    # Match valid command characters (alphanumeric, dots, underscores, hyphens, slashes)
+    match = re.match(r"^([a-zA-Z0-9_./-]+)", first_word)
+    if match:
+        cmd = match.group(1)
+        return os.path.basename(cmd)
+
+    return None
+
+
 def extract_commands(command_string: str) -> list[str]:
     """
     Extract command names from a shell command string.
@@ -156,7 +199,6 @@ def extract_commands(command_string: str) -> list[str]:
     commands = []
 
     # shlex doesn't treat ; as a separator, so we need to pre-process
-    import re
 
     # Split on semicolons that aren't inside quotes (simple heuristic)
     # This handles common cases like "echo hello; ls"
@@ -171,8 +213,21 @@ def extract_commands(command_string: str) -> list[str]:
             tokens = shlex.split(segment)
         except ValueError:
             # Malformed command (unclosed quotes, etc.)
-            # Return empty to trigger block (fail-safe)
-            return []
+            # Try fallback extraction instead of blocking entirely
+            fallback_cmd = _extract_primary_command(segment)
+            if fallback_cmd:
+                logger.debug(
+                    "shlex fallback used: segment=%r -> command=%r",
+                    segment,
+                    fallback_cmd,
+                )
+                commands.append(fallback_cmd)
+            else:
+                logger.debug(
+                    "shlex fallback failed: segment=%r (no command extracted)",
+                    segment,
+                )
+            continue
 
         if not tokens:
             continue
@@ -444,58 +499,74 @@ def load_org_config() -> Optional[dict]:
             config = yaml.safe_load(f)
 
         if not config:
+            logger.warning(f"Org config at {config_path} is empty")
             return None
 
         # Validate structure
         if not isinstance(config, dict):
+            logger.warning(f"Org config at {config_path} must be a YAML dictionary")
             return None
 
         if "version" not in config:
+            logger.warning(f"Org config at {config_path} missing required 'version' field")
            return None
 
         # Validate allowed_commands if present
         if "allowed_commands" in config:
             allowed = config["allowed_commands"]
             if not isinstance(allowed, list):
+                logger.warning(f"Org config at {config_path}: 'allowed_commands' must be a list")
                 return None
-            for cmd in allowed:
+            for i, cmd in enumerate(allowed):
                 if not isinstance(cmd, dict):
+                    logger.warning(f"Org config at {config_path}: allowed_commands[{i}] must be a dict")
                     return None
                 if "name" not in cmd:
+                    logger.warning(f"Org config at {config_path}: allowed_commands[{i}] missing 'name'")
                     return None
                 # Validate that name is a non-empty string
                 if not isinstance(cmd["name"], str) or cmd["name"].strip() == "":
+                    logger.warning(f"Org config at {config_path}: allowed_commands[{i}] has invalid 'name'")
                     return None
 
         # Validate blocked_commands if present
         if "blocked_commands" in config:
             blocked = config["blocked_commands"]
             if not isinstance(blocked, list):
+                logger.warning(f"Org config at {config_path}: 'blocked_commands' must be a list")
                 return None
-            for cmd in blocked:
+            for i, cmd in enumerate(blocked):
                 if not isinstance(cmd, str):
+                    logger.warning(f"Org config at {config_path}: blocked_commands[{i}] must be a string")
                     return None
 
         # Validate pkill_processes if present
         if "pkill_processes" in config:
             processes = config["pkill_processes"]
             if not isinstance(processes, list):
+                logger.warning(f"Org config at {config_path}: 'pkill_processes' must be a list")
                 return None
             # Normalize and validate each process name against safe pattern
             normalized = []
-            for proc in processes:
+            for i, proc in enumerate(processes):
                 if not isinstance(proc, str):
+                    logger.warning(f"Org config at {config_path}: pkill_processes[{i}] must be a string")
                     return None
                 proc = proc.strip()
                 # Block empty strings and regex metacharacters
                 if not proc or not VALID_PROCESS_NAME_PATTERN.fullmatch(proc):
+                    logger.warning(f"Org config at {config_path}: pkill_processes[{i}] has invalid value '{proc}'")
                     return None
                 normalized.append(proc)
             config["pkill_processes"] = normalized
 
         return config
 
-    except (yaml.YAMLError, IOError, OSError):
+    except yaml.YAMLError as e:
+        logger.warning(f"Failed to parse org config at {config_path}: {e}")
+        return None
+    except (IOError, OSError) as e:
+        logger.warning(f"Failed to read org config at {config_path}: {e}")
         return None
@@ -509,7 +580,7 @@ def load_project_commands(project_dir: Path) -> Optional[dict]:
     Returns:
         Dict with parsed YAML config, or None if file doesn't exist or is invalid
     """
-    config_path = project_dir / ".autocoder" / "allowed_commands.yaml"
+    config_path = project_dir.resolve() / ".autocoder" / "allowed_commands.yaml"
 
     if not config_path.exists():
         return None
@@ -519,53 +590,68 @@ def load_project_commands(project_dir: Path) -> Optional[dict]:
             config = yaml.safe_load(f)
 
         if not config:
+            logger.warning(f"Project config at {config_path} is empty")
             return None
 
         # Validate structure
         if not isinstance(config, dict):
+            logger.warning(f"Project config at {config_path} must be a YAML dictionary")
             return None
 
         if "version" not in config:
+            logger.warning(f"Project config at {config_path} missing required 'version' field")
             return None
 
         commands = config.get("commands", [])
         if not isinstance(commands, list):
+            logger.warning(f"Project config at {config_path}: 'commands' must be a list")
             return None
 
         # Enforce 100 command limit
         if len(commands) > 100:
+            logger.warning(f"Project config at {config_path} exceeds 100 command limit ({len(commands)} commands)")
             return None
 
         # Validate each command entry
-        for cmd in commands:
+        for i, cmd in enumerate(commands):
             if not isinstance(cmd, dict):
+                logger.warning(f"Project config at {config_path}: commands[{i}] must be a dict")
                 return None
             if "name" not in cmd:
+                logger.warning(f"Project config at {config_path}: commands[{i}] missing 'name'")
                 return None
-            # Validate name is a string
-            if not isinstance(cmd["name"], str):
+            # Validate name is a non-empty string
+            if not isinstance(cmd["name"], str) or cmd["name"].strip() == "":
+                logger.warning(f"Project config at {config_path}: commands[{i}] has invalid 'name'")
                 return None
 
         # Validate pkill_processes if present
         if "pkill_processes" in config:
             processes = config["pkill_processes"]
             if not isinstance(processes, list):
+                logger.warning(f"Project config at {config_path}: 'pkill_processes' must be a list")
                 return None
             # Normalize and validate each process name against safe pattern
             normalized = []
-            for proc in processes:
+            for i, proc in enumerate(processes):
                 if not isinstance(proc, str):
+                    logger.warning(f"Project config at {config_path}: pkill_processes[{i}] must be a string")
                     return None
                 proc = proc.strip()
                 # Block empty strings and regex metacharacters
                 if not proc or not VALID_PROCESS_NAME_PATTERN.fullmatch(proc):
+                    logger.warning(f"Project config at {config_path}: pkill_processes[{i}] has invalid value '{proc}'")
                     return None
                 normalized.append(proc)
             config["pkill_processes"] = normalized
 
         return config
 
-    except (yaml.YAMLError, IOError, OSError):
+    except yaml.YAMLError as e:
+        logger.warning(f"Failed to parse project config at {config_path}: {e}")
+        return None
+    except (IOError, OSError) as e:
+        logger.warning(f"Failed to read project config at {config_path}: {e}")
         return None
@@ -18,6 +18,7 @@ from ..schemas import (
     ProjectDetail,
     ProjectPrompts,
     ProjectPromptsUpdate,
+    ProjectSettingsUpdate,
     ProjectStats,
     ProjectSummary,
 )
@@ -63,13 +64,23 @@ def _get_registry_functions():
     sys.path.insert(0, str(root))
 
     from registry import (
+        get_project_concurrency,
         get_project_path,
         list_registered_projects,
         register_project,
+        set_project_concurrency,
         unregister_project,
         validate_project_path,
     )
-    return register_project, unregister_project, get_project_path, list_registered_projects, validate_project_path
+    return (
+        register_project,
+        unregister_project,
+        get_project_path,
+        list_registered_projects,
+        validate_project_path,
+        get_project_concurrency,
+        set_project_concurrency,
+    )
 
 
 router = APIRouter(prefix="/api/projects", tags=["projects"])
@@ -102,7 +113,8 @@ def get_project_stats(project_dir: Path) -> ProjectStats:
 async def list_projects():
     """List all registered projects."""
     _init_imports()
-    _, _, _, list_registered_projects, validate_project_path = _get_registry_functions()
+    (_, _, _, list_registered_projects, validate_project_path,
+     get_project_concurrency, _) = _get_registry_functions()
 
     projects = list_registered_projects()
     result = []
@@ -123,6 +135,7 @@ async def list_projects():
             path=info["path"],
             has_spec=has_spec,
             stats=stats,
+            default_concurrency=info.get("default_concurrency", 3),
         ))
 
     return result
@@ -132,7 +145,8 @@ async def list_projects():
 async def create_project(project: ProjectCreate):
     """Create a new project at the specified path."""
     _init_imports()
-    register_project, _, get_project_path, list_registered_projects, _ = _get_registry_functions()
+    (register_project, _, get_project_path, list_registered_projects,
+     _, _, _) = _get_registry_functions()
 
     name = validate_project_name(project.name)
     project_path = Path(project.path).resolve()
@@ -203,6 +217,7 @@ async def create_project(project: ProjectCreate):
         path=project_path.as_posix(),
         has_spec=False,  # Just created, no spec yet
         stats=ProjectStats(passing=0, total=0, percentage=0.0),
+        default_concurrency=3,
     )
 
 
@@ -210,7 +225,7 @@ async def create_project(project: ProjectCreate):
 async def get_project(name: str):
     """Get detailed information about a project."""
     _init_imports()
-    _, _, get_project_path, _, _ = _get_registry_functions()
+    (_, _, get_project_path, _, _, get_project_concurrency, _) = _get_registry_functions()
 
     name = validate_project_name(name)
     project_dir = get_project_path(name)
@@ -231,6 +246,7 @@ async def get_project(name: str):
         has_spec=has_spec,
         stats=stats,
         prompts_dir=str(prompts_dir),
+        default_concurrency=get_project_concurrency(name),
     )
 
 
@@ -244,7 +260,7 @@ async def delete_project(name: str, delete_files: bool = False):
         delete_files: If True, also delete the project directory and files
     """
     _init_imports()
-    _, unregister_project, get_project_path, _, _ = _get_registry_functions()
+    (_, unregister_project, get_project_path, _, _, _, _) = _get_registry_functions()
 
     name = validate_project_name(name)
     project_dir = get_project_path(name)
@@ -280,7 +296,7 @@ async def delete_project(name: str, delete_files: bool = False):
 async def get_project_prompts(name: str):
     """Get the content of project prompt files."""
     _init_imports()
-    _, _, get_project_path, _, _ = _get_registry_functions()
+    (_, _, get_project_path, _, _, _, _) = _get_registry_functions()
 
     name = validate_project_name(name)
     project_dir = get_project_path(name)
@@ -313,7 +329,7 @@ async def get_project_prompts(name: str):
 async def update_project_prompts(name: str, prompts: ProjectPromptsUpdate):
     """Update project prompt files."""
     _init_imports()
-    _, _, get_project_path, _, _ = _get_registry_functions()
+    (_, _, get_project_path, _, _, _, _) = _get_registry_functions()
 
     name = validate_project_name(name)
     project_dir = get_project_path(name)
@@ -343,7 +359,7 @@ async def update_project_prompts(name: str, prompts: ProjectPromptsUpdate):
 async def get_project_stats_endpoint(name: str):
     """Get current progress statistics for a project."""
     _init_imports()
-    _, _, get_project_path, _, _ = _get_registry_functions()
+    (_, _, get_project_path, _, _, _, _) = _get_registry_functions()
 
     name = validate_project_name(name)
     project_dir = get_project_path(name)
@@ -355,3 +371,121 @@ async def get_project_stats_endpoint(name: str):
         raise HTTPException(status_code=404, detail="Project directory not found")
 
     return get_project_stats(project_dir)
+
+
+@router.post("/{name}/reset")
+async def reset_project(name: str, full_reset: bool = False):
+    """
+    Reset a project to its initial state.
+
+    Args:
+        name: Project name to reset
+        full_reset: If True, also delete prompts/ directory (triggers setup wizard)
+
+    Returns:
+        Dictionary with list of deleted files and reset type
+    """
+    _init_imports()
+    (_, _, get_project_path, _, _, _, _) = _get_registry_functions()
+
+    name = validate_project_name(name)
+    project_dir = get_project_path(name)
+
+    if not project_dir:
+        raise HTTPException(status_code=404, detail=f"Project '{name}' not found")
+
+    if not project_dir.exists():
+        raise HTTPException(status_code=404, detail="Project directory not found")
+
+    # Check if agent is running
+    lock_file = project_dir / ".agent.lock"
+    if lock_file.exists():
+        raise HTTPException(
+            status_code=409,
+            detail="Cannot reset project while agent is running. Stop the agent first."
+        )
+
+    # Dispose of database engines to release file locks (required on Windows)
+    # Import here to avoid circular imports
+    from api.database import dispose_engine as dispose_features_engine
+    from server.services.assistant_database import dispose_engine as dispose_assistant_engine
+
+    dispose_features_engine(project_dir)
+    dispose_assistant_engine(project_dir)
+
+    deleted_files: list[str] = []
+
+    # Files to delete in quick reset
+    quick_reset_files = [
+        "features.db",
+        "features.db-wal",  # WAL mode journal file
+        "features.db-shm",  # WAL mode shared memory file
+        "assistant.db",
+        "assistant.db-wal",
+        "assistant.db-shm",
+        ".claude_settings.json",
+        ".claude_assistant_settings.json",
+    ]
+
+    for filename in quick_reset_files:
+        file_path = project_dir / filename
+        if file_path.exists():
+            try:
+                file_path.unlink()
+                deleted_files.append(filename)
+            except Exception as e:
+                raise HTTPException(status_code=500, detail=f"Failed to delete (unknown): {e}")
+
+    # Full reset: also delete prompts directory
+    if full_reset:
+        prompts_dir = project_dir / "prompts"
+        if prompts_dir.exists():
+            try:
+                shutil.rmtree(prompts_dir)
+                deleted_files.append("prompts/")
+            except Exception as e:
+                raise HTTPException(status_code=500, detail=f"Failed to delete prompts/: {e}")
+
+    return {
+        "success": True,
+        "reset_type": "full" if full_reset else "quick",
+        "deleted_files": deleted_files,
+        "message": f"Project '{name}' has been reset" + (" (full reset)" if full_reset else " (quick reset)")
+    }
+
+
+@router.patch("/{name}/settings", response_model=ProjectDetail)
+async def update_project_settings(name: str, settings: ProjectSettingsUpdate):
+    """Update project-level settings (concurrency, etc.)."""
+    _init_imports()
+    (_, _, get_project_path, _, _, get_project_concurrency,
+     set_project_concurrency) = _get_registry_functions()
+
+    name = validate_project_name(name)
+    project_dir = get_project_path(name)
+
+    if not project_dir:
+        raise HTTPException(status_code=404, detail=f"Project '{name}' not found")
+
+    if not project_dir.exists():
+        raise HTTPException(status_code=404, detail="Project directory not found")
+
+    # Update concurrency if provided
+    if settings.default_concurrency is not None:
+        success = set_project_concurrency(name, settings.default_concurrency)
+        if not success:
+            raise HTTPException(status_code=500, detail="Failed to update concurrency")
+
+    # Return updated project details
+    has_spec = _check_spec_exists(project_dir)
+    stats = get_project_stats(project_dir)
+    prompts_dir = _get_project_prompts_dir(project_dir)
+
+    return ProjectDetail(
+        name=name,
+        path=project_dir.as_posix(),
+        has_spec=has_spec,
+        stats=stats,
+        prompts_dir=str(prompts_dir),
+        default_concurrency=get_project_concurrency(name),
+    )
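The quick-reset loop in the new endpoint is simple to isolate. The sketch below reproduces it against a temporary directory (the function name and the shortened file list are local to this example):

```python
import tempfile
from pathlib import Path

# Shortened stand-in for the quick_reset_files list in the diff.
QUICK_RESET_FILES = ["features.db", "features.db-wal", "features.db-shm"]

def quick_reset(project_dir: Path) -> list[str]:
    # Mirrors the endpoint's loop: unlink each state file if present and
    # report what was actually deleted.
    deleted = []
    for filename in QUICK_RESET_FILES:
        path = project_dir / filename
        if path.exists():
            path.unlink()
            deleted.append(filename)
    return deleted

with tempfile.TemporaryDirectory() as tmp:
    project = Path(tmp)
    (project / "features.db").write_text("")
    (project / "features.db-wal").write_text("")
    print(quick_reset(project))  # ['features.db', 'features.db-wal']
```

Note the ordering in the real endpoint: the engines are disposed *before* this loop runs, because on Windows an open SQLite handle keeps the `.db` file locked and `unlink()` would fail.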
@@ -45,6 +45,7 @@ class ProjectSummary(BaseModel):
     path: str
     has_spec: bool
     stats: ProjectStats
+    default_concurrency: int = 3
 
 
 class ProjectDetail(BaseModel):
@@ -54,6 +55,7 @@ class ProjectDetail(BaseModel):
     has_spec: bool
     stats: ProjectStats
     prompts_dir: str
+    default_concurrency: int = 3
 
 
 class ProjectPrompts(BaseModel):
@@ -70,6 +72,18 @@ class ProjectPromptsUpdate(BaseModel):
     coding_prompt: str | None = None
 
 
+class ProjectSettingsUpdate(BaseModel):
+    """Request schema for updating project-level settings."""
+    default_concurrency: int | None = None
+
+    @field_validator('default_concurrency')
+    @classmethod
+    def validate_concurrency(cls, v: int | None) -> int | None:
+        if v is not None and (v < 1 or v > 5):
+            raise ValueError("default_concurrency must be between 1 and 5")
+        return v
+
+
 # ============================================================================
 # Feature Schemas
 # ============================================================================
@@ -79,6 +79,26 @@ def get_engine(project_dir: Path):
     return _engine_cache[cache_key]
 
 
+def dispose_engine(project_dir: Path) -> bool:
+    """Dispose of and remove the cached engine for a project.
+
+    This closes all database connections, releasing file locks on Windows.
+    Should be called before deleting the database file.
+
+    Returns:
+        True if an engine was disposed, False if no engine was cached.
+    """
+    cache_key = project_dir.as_posix()
+
+    if cache_key in _engine_cache:
+        engine = _engine_cache.pop(cache_key)
+        engine.dispose()
+        logger.debug(f"Disposed database engine for {cache_key}")
+        return True
+
+    return False
+
+
 def get_session(project_dir: Path):
     """Get a new database session for a project."""
     engine = get_engine(project_dir)
@@ -10,8 +10,8 @@ import asyncio
 import json
 import logging
 import os
-import re
 import shutil
+import sys
 import threading
 import uuid
 from datetime import datetime
@@ -38,6 +38,13 @@ API_ENV_VARS = [
     "ANTHROPIC_DEFAULT_HAIKU_MODEL",
 ]
 
+# Feature MCP tools needed for expand session
+EXPAND_FEATURE_TOOLS = [
+    "mcp__features__feature_create",
+    "mcp__features__feature_create_bulk",
+    "mcp__features__feature_get_stats",
+]
+
 
 async def _make_multimodal_message(content_blocks: list[dict]) -> AsyncGenerator[dict, None]:
     """
@@ -61,9 +68,8 @@ class ExpandChatSession:
 
     Unlike SpecChatSession which writes spec files, this session:
     1. Reads existing app_spec.txt for context
-    2. Parses feature definitions from Claude's output
-    3. Creates features via REST API
-    4. Tracks which features were created during the session
+    2. Chats with the user to define new features
+    3. Claude creates features via the feature_create_bulk MCP tool
     """
 
     def __init__(self, project_name: str, project_dir: Path):
@@ -145,10 +151,14 @@ class ExpandChatSession:
             return
 
         # Create temporary security settings file (unique per session to avoid conflicts)
+        # Note: permission_mode="bypassPermissions" is safe here because:
+        # 1. Only Read/Glob file tools are allowed (no Write/Edit)
+        # 2. MCP tools are restricted to feature creation only
+        # 3. No Bash access - cannot execute arbitrary commands
         security_settings = {
             "sandbox": {"enabled": True},
             "permissions": {
-                "defaultMode": "acceptEdits",
+                "defaultMode": "bypassPermissions",
                 "allow": [
                     "Read(./**)",
                     "Glob(./**)",
@@ -171,6 +181,18 @@ class ExpandChatSession:
        # This allows using alternative APIs (e.g., GLM via z.ai) that may not support Claude model names
         model = os.getenv("ANTHROPIC_DEFAULT_OPUS_MODEL", "claude-opus-4-5-20251101")
 
+        # Build MCP servers config for feature creation
+        mcp_servers = {
+            "features": {
+                "command": sys.executable,
+                "args": ["-m", "mcp_server.feature_mcp"],
+                "env": {
+                    "PROJECT_DIR": str(self.project_dir.resolve()),
+                    "PYTHONPATH": str(ROOT_DIR.resolve()),
+                },
+            },
+        }
+
         # Create Claude SDK client
         try:
             self.client = ClaudeSDKClient(
@@ -181,8 +203,10 @@ class ExpandChatSession:
                 allowed_tools=[
                     "Read",
                     "Glob",
+                    *EXPAND_FEATURE_TOOLS,
                 ],
-                permission_mode="acceptEdits",
+                mcp_servers=mcp_servers,
+                permission_mode="bypassPermissions",
                 max_turns=100,
                 cwd=str(self.project_dir.resolve()),
                 settings=str(settings_file.resolve()),
@@ -267,7 +291,8 @@ class ExpandChatSession:
         """
         Internal method to query Claude and stream responses.
 
-        Handles text responses and detects feature creation blocks.
+        Feature creation is handled by Claude calling the feature_create_bulk
+        MCP tool directly -- no text parsing needed.
         """
         if not self.client:
             return
@@ -291,9 +316,6 @@ class ExpandChatSession:
         else:
             await self.client.query(message)
 
-        # Accumulate full response to detect feature blocks
-        full_response = ""
-
         # Stream the response
         async for msg in self.client.receive_response():
             msg_type = type(msg).__name__
@@ -305,7 +327,6 @@ class ExpandChatSession:
                 if block_type == "TextBlock" and hasattr(block, "text"):
                     text = block.text
                     if text:
-                        full_response += text
                         yield {"type": "text", "content": text}
 
         self.messages.append({
@@ -314,123 +335,6 @@ class ExpandChatSession:
             "timestamp": datetime.now().isoformat()
         })
-
-        # Check for feature creation blocks in full response (handle multiple blocks)
-        features_matches = re.findall(
-            r'<features_to_create>\s*(\[[\s\S]*?\])\s*</features_to_create>',
-            full_response
-        )
-
-        if features_matches:
-            # Collect all features from all blocks, deduplicating by name
-            all_features: list[dict] = []
-            seen_names: set[str] = set()
-
-            for features_json in features_matches:
-                try:
-                    features_data = json.loads(features_json)
-
-                    if features_data and isinstance(features_data, list):
-                        for feature in features_data:
-                            name = feature.get("name", "")
-                            if name and name not in seen_names:
-                                seen_names.add(name)
-                                all_features.append(feature)
-                except json.JSONDecodeError as e:
-                    logger.error(f"Failed to parse features JSON block: {e}")
-                    # Continue processing other blocks
-
-            if all_features:
-                try:
-                    # Create all deduplicated features
-                    created = await self._create_features_bulk(all_features)
-
-                    if created:
-                        self.features_created += len(created)
-                        self.created_feature_ids.extend([f["id"] for f in created])
-
-                        yield {
-                            "type": "features_created",
-                            "count": len(created),
-                            "features": created
-                        }
-
-                        logger.info(f"Created {len(created)} features for {self.project_name}")
-                except Exception:
-                    logger.exception("Failed to create features")
-                    yield {
-                        "type": "error",
-                        "content": "Failed to create features"
-                    }
-
-    async def _create_features_bulk(self, features: list[dict]) -> list[dict]:
-        """
-        Create features directly in the database.
-
-        Args:
-            features: List of feature dictionaries with category, name, description, steps
-
-        Returns:
-            List of created feature dictionaries with IDs
-
-        Note:
-            Uses flush() to get IDs immediately without re-querying by priority range,
||||||
which could pick up rows from concurrent writers.
|
|
||||||
"""
|
|
||||||
# Import database classes
|
|
||||||
import sys
|
|
||||||
root = Path(__file__).parent.parent.parent
|
|
||||||
if str(root) not in sys.path:
|
|
||||||
sys.path.insert(0, str(root))
|
|
||||||
|
|
||||||
from api.database import Feature, create_database
|
|
||||||
|
|
||||||
# Get database session
|
|
||||||
_, SessionLocal = create_database(self.project_dir)
|
|
||||||
session = SessionLocal()
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Determine starting priority
|
|
||||||
max_priority_feature = session.query(Feature).order_by(Feature.priority.desc()).first()
|
|
||||||
current_priority = (max_priority_feature.priority + 1) if max_priority_feature else 1
|
|
||||||
|
|
||||||
created_rows: list = []
|
|
||||||
|
|
||||||
for f in features:
|
|
||||||
db_feature = Feature(
|
|
||||||
priority=current_priority,
|
|
||||||
category=f.get("category", "functional"),
|
|
||||||
name=f.get("name", "Unnamed feature"),
|
|
||||||
description=f.get("description", ""),
|
|
||||||
steps=f.get("steps", []),
|
|
||||||
passes=False,
|
|
||||||
in_progress=False,
|
|
||||||
)
|
|
||||||
session.add(db_feature)
|
|
||||||
created_rows.append(db_feature)
|
|
||||||
current_priority += 1
|
|
||||||
|
|
||||||
# Flush to get IDs without relying on priority range query
|
|
||||||
session.flush()
|
|
||||||
|
|
||||||
# Build result from the flushed objects (IDs are now populated)
|
|
||||||
created_features = [
|
|
||||||
{
|
|
||||||
"id": db_feature.id,
|
|
||||||
"name": db_feature.name,
|
|
||||||
"category": db_feature.category,
|
|
||||||
}
|
|
||||||
for db_feature in created_rows
|
|
||||||
]
|
|
||||||
|
|
||||||
session.commit()
|
|
||||||
return created_features
|
|
||||||
|
|
||||||
except Exception:
|
|
||||||
session.rollback()
|
|
||||||
raise
|
|
||||||
finally:
|
|
||||||
session.close()
|
|
||||||
|
|
||||||
def get_features_created(self) -> int:
|
def get_features_created(self) -> int:
|
||||||
"""Get the total number of features created in this session."""
|
"""Get the total number of features created in this session."""
|
||||||
return self.features_created
|
return self.features_created
|
||||||
|
|||||||
@@ -349,14 +349,20 @@ class AgentProcessManager:
        try:
            # Start subprocess with piped stdout/stderr
            # Use project_dir as cwd so Claude SDK sandbox allows access to project files
-            # IMPORTANT: Set PYTHONUNBUFFERED to ensure output isn't delayed
-            self.process = subprocess.Popen(
-                cmd,
-                stdout=subprocess.PIPE,
-                stderr=subprocess.STDOUT,
-                cwd=str(self.project_dir),
-                env={**os.environ, "PYTHONUNBUFFERED": "1"},
-            )
+            # stdin=DEVNULL prevents blocking if Claude CLI or child process tries to read stdin
+            # CREATE_NO_WINDOW on Windows prevents console window pop-ups
+            # PYTHONUNBUFFERED ensures output isn't delayed
+            popen_kwargs = {
+                "stdin": subprocess.DEVNULL,
+                "stdout": subprocess.PIPE,
+                "stderr": subprocess.STDOUT,
+                "cwd": str(self.project_dir),
+                "env": {**os.environ, "PYTHONUNBUFFERED": "1"},
+            }
+            if sys.platform == "win32":
+                popen_kwargs["creationflags"] = subprocess.CREATE_NO_WINDOW
+
+            self.process = subprocess.Popen(cmd, **popen_kwargs)

            # Atomic lock creation - if it fails, another process beat us
            if not self._create_lock():
@@ -39,5 +39,3 @@ pip install -r requirements.txt --quiet

REM Run the Python launcher
python "%~dp0start_ui.py" %*
-
-pause

19  start_ui.py
@@ -137,10 +137,25 @@ def check_node() -> bool:


def install_npm_deps() -> bool:
-    """Install npm dependencies if node_modules doesn't exist."""
+    """Install npm dependencies if node_modules doesn't exist or is stale."""
    node_modules = UI_DIR / "node_modules"
+    package_json = UI_DIR / "package.json"
+    package_lock = UI_DIR / "package-lock.json"
+
-    if node_modules.exists():
+    # Check if npm install is needed
+    needs_install = False
+
+    if not node_modules.exists():
+        needs_install = True
+    elif package_json.exists():
+        # If package.json or package-lock.json is newer than node_modules, reinstall
+        node_modules_mtime = node_modules.stat().st_mtime
+        if package_json.stat().st_mtime > node_modules_mtime:
+            needs_install = True
+        elif package_lock.exists() and package_lock.stat().st_mtime > node_modules_mtime:
+            needs_install = True
+
+    if not needs_install:
        print(" npm dependencies already installed")
        return True

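The staleness rule added to `install_npm_deps` above (reinstall when a manifest is newer than `node_modules`) is a general mtime comparison. As a sketch only — the helper name and signature below are illustrative and not part of this repository:

```python
from pathlib import Path


def is_stale(output: Path, *inputs: Path) -> bool:
    """Return True if `output` is missing or older than any existing input.

    Mirrors the diff's check: a missing node_modules, or a package.json /
    package-lock.json with a newer modification time, triggers a reinstall.
    """
    if not output.exists():
        return True
    out_mtime = output.stat().st_mtime
    return any(p.exists() and p.stat().st_mtime > out_mtime for p in inputs)
```

Note that mtime-based checks are heuristic: a `git checkout` or archive extraction can produce timestamps that defeat them, which is why the diff also keeps the fast path of skipping `npm install` when nothing looks newer.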
426  test_dependency_resolver.py  Normal file
@@ -0,0 +1,426 @@
+#!/usr/bin/env python3
+"""
+Dependency Resolver Tests
+=========================
+
+Tests for the dependency resolver functions including cycle detection.
+Run with: python test_dependency_resolver.py
+"""
+
+import sys
+import time
+from concurrent.futures import ThreadPoolExecutor
+from concurrent.futures import TimeoutError as FuturesTimeoutError
+
+from api.dependency_resolver import (
+    are_dependencies_satisfied,
+    compute_scheduling_scores,
+    get_blocked_features,
+    get_blocking_dependencies,
+    get_ready_features,
+    resolve_dependencies,
+    would_create_circular_dependency,
+)
+
+
+def test_compute_scheduling_scores_simple_chain():
+    """Test scheduling scores for a simple linear dependency chain."""
+    print("\nTesting compute_scheduling_scores with simple chain:")
+
+    features = [
+        {"id": 1, "priority": 1, "dependencies": []},
+        {"id": 2, "priority": 2, "dependencies": [1]},
+        {"id": 3, "priority": 3, "dependencies": [2]},
+    ]
+
+    scores = compute_scheduling_scores(features)
+
+    # All features should have scores
+    passed = True
+    for f in features:
+        if f["id"] not in scores:
+            print(f" FAIL: Feature {f['id']} missing from scores")
+            passed = False
+
+    if passed:
+        # Root feature (1) should have highest score (unblocks most)
+        if scores[1] > scores[2] > scores[3]:
+            print(" PASS: Root feature has highest score, leaf has lowest")
+        else:
+            print(f" FAIL: Expected scores[1] > scores[2] > scores[3], got {scores}")
+            passed = False
+
+    return passed
+
+
+def test_compute_scheduling_scores_with_cycle():
+    """Test that compute_scheduling_scores handles circular dependencies without hanging."""
+    print("\nTesting compute_scheduling_scores with circular dependencies:")
+
+    # Create a cycle: 1 -> 2 -> 3 -> 1
+    features = [
+        {"id": 1, "priority": 1, "dependencies": [3]},
+        {"id": 2, "priority": 2, "dependencies": [1]},
+        {"id": 3, "priority": 3, "dependencies": [2]},
+    ]
+
+    # Use timeout to detect infinite loop
+    def compute_with_timeout():
+        return compute_scheduling_scores(features)
+
+    start = time.time()
+    try:
+        with ThreadPoolExecutor(max_workers=1) as executor:
+            future = executor.submit(compute_with_timeout)
+            scores = future.result(timeout=5.0)  # 5 second timeout
+
+        elapsed = time.time() - start
+
+        # Should complete quickly (< 1 second for 3 features)
+        if elapsed > 1.0:
+            print(f" FAIL: Took {elapsed:.2f}s (expected < 1s)")
+            return False
+
+        # All features should have scores (even cyclic ones)
+        if len(scores) == 3:
+            print(f" PASS: Completed in {elapsed:.3f}s with {len(scores)} scores")
+            return True
+        else:
+            print(f" FAIL: Expected 3 scores, got {len(scores)}")
+            return False
+
+    except FuturesTimeoutError:
+        print(" FAIL: Infinite loop detected (timed out after 5s)")
+        return False
+
+
+def test_compute_scheduling_scores_self_reference():
+    """Test scheduling scores with self-referencing dependency."""
+    print("\nTesting compute_scheduling_scores with self-reference:")
+
+    features = [
+        {"id": 1, "priority": 1, "dependencies": [1]},  # Self-reference
+        {"id": 2, "priority": 2, "dependencies": []},
+    ]
+
+    start = time.time()
+    try:
+        with ThreadPoolExecutor(max_workers=1) as executor:
+            future = executor.submit(lambda: compute_scheduling_scores(features))
+            scores = future.result(timeout=5.0)
+
+        elapsed = time.time() - start
+
+        if elapsed > 1.0:
+            print(f" FAIL: Took {elapsed:.2f}s (expected < 1s)")
+            return False
+
+        if len(scores) == 2:
+            print(f" PASS: Completed in {elapsed:.3f}s with {len(scores)} scores")
+            return True
+        else:
+            print(f" FAIL: Expected 2 scores, got {len(scores)}")
+            return False
+
+    except FuturesTimeoutError:
+        print(" FAIL: Infinite loop detected (timed out after 5s)")
+        return False
+
+
+def test_compute_scheduling_scores_complex_cycle():
+    """Test scheduling scores with complex circular dependencies."""
+    print("\nTesting compute_scheduling_scores with complex cycle:")
+
+    # Features 1-3 form a cycle, feature 4 depends on 1
+    features = [
+        {"id": 1, "priority": 1, "dependencies": [3]},
+        {"id": 2, "priority": 2, "dependencies": [1]},
+        {"id": 3, "priority": 3, "dependencies": [2]},
+        {"id": 4, "priority": 4, "dependencies": [1]},  # Outside cycle
+    ]
+
+    start = time.time()
+    try:
+        with ThreadPoolExecutor(max_workers=1) as executor:
+            future = executor.submit(lambda: compute_scheduling_scores(features))
+            scores = future.result(timeout=5.0)
+
+        elapsed = time.time() - start
+
+        if elapsed > 1.0:
+            print(f" FAIL: Took {elapsed:.2f}s (expected < 1s)")
+            return False
+
+        if len(scores) == 4:
+            print(f" PASS: Completed in {elapsed:.3f}s with {len(scores)} scores")
+            return True
+        else:
+            print(f" FAIL: Expected 4 scores, got {len(scores)}")
+            return False
+
+    except FuturesTimeoutError:
+        print(" FAIL: Infinite loop detected (timed out after 5s)")
+        return False
+
+
+def test_compute_scheduling_scores_diamond():
+    """Test scheduling scores with diamond dependency pattern."""
+    print("\nTesting compute_scheduling_scores with diamond pattern:")
+
+    #     1
+    #    / \
+    #   2   3
+    #    \ /
+    #     4
+    features = [
+        {"id": 1, "priority": 1, "dependencies": []},
+        {"id": 2, "priority": 2, "dependencies": [1]},
+        {"id": 3, "priority": 3, "dependencies": [1]},
+        {"id": 4, "priority": 4, "dependencies": [2, 3]},
+    ]
+
+    scores = compute_scheduling_scores(features)
+
+    # Feature 1 should have highest score (unblocks 2, 3, and transitively 4)
+    if scores[1] > scores[2] and scores[1] > scores[3] and scores[1] > scores[4]:
+        # Feature 4 should have lowest score (leaf, unblocks nothing)
+        if scores[4] < scores[2] and scores[4] < scores[3]:
+            print(" PASS: Root has highest score, leaf has lowest")
+            return True
+        else:
+            print(f" FAIL: Leaf should have lowest score. Scores: {scores}")
+            return False
+    else:
+        print(f" FAIL: Root should have highest score. Scores: {scores}")
+        return False
+
+
+def test_compute_scheduling_scores_empty():
+    """Test scheduling scores with empty feature list."""
+    print("\nTesting compute_scheduling_scores with empty list:")
+
+    scores = compute_scheduling_scores([])
+
+    if scores == {}:
+        print(" PASS: Returns empty dict for empty input")
+        return True
+    else:
+        print(f" FAIL: Expected empty dict, got {scores}")
+        return False
+
+
+def test_would_create_circular_dependency():
+    """Test cycle detection for new dependencies."""
+    print("\nTesting would_create_circular_dependency:")
+
+    # Current dependencies: 2 depends on 1, 3 depends on 2
+    # Dependency chain: 3 -> 2 -> 1 (arrows mean "depends on")
+    features = [
+        {"id": 1, "priority": 1, "dependencies": []},
+        {"id": 2, "priority": 2, "dependencies": [1]},
+        {"id": 3, "priority": 3, "dependencies": [2]},
+    ]
+
+    passed = True
+
+    # source_id gains dependency on target_id
+    # Adding "1 depends on 3" would create cycle: 1 -> 3 -> 2 -> 1
+    if would_create_circular_dependency(features, 1, 3):
+        print(" PASS: Detected cycle when adding 1 depends on 3")
+    else:
+        print(" FAIL: Should detect cycle when adding 1 depends on 3")
+        passed = False
+
+    # Adding "3 depends on 1" would NOT create cycle (redundant but not circular)
+    if not would_create_circular_dependency(features, 3, 1):
+        print(" PASS: No false positive for 3 depends on 1")
+    else:
+        print(" FAIL: False positive for 3 depends on 1")
+        passed = False
+
+    # Self-reference should be detected
+    if would_create_circular_dependency(features, 1, 1):
+        print(" PASS: Detected self-reference")
+    else:
+        print(" FAIL: Should detect self-reference")
+        passed = False
+
+    return passed
+
+
+def test_resolve_dependencies_with_cycle():
+    """Test resolve_dependencies detects and reports cycles."""
+    print("\nTesting resolve_dependencies with cycle:")
+
+    # Create a cycle: 1 -> 2 -> 3 -> 1
+    features = [
+        {"id": 1, "priority": 1, "dependencies": [3]},
+        {"id": 2, "priority": 2, "dependencies": [1]},
+        {"id": 3, "priority": 3, "dependencies": [2]},
+    ]
+
+    result = resolve_dependencies(features)
+
+    # Should report circular dependencies
+    if result["circular_dependencies"]:
+        print(f" PASS: Detected cycle: {result['circular_dependencies']}")
+        return True
+    else:
+        print(" FAIL: Should report circular dependencies")
+        return False
+
+
+def test_are_dependencies_satisfied():
+    """Test dependency satisfaction checking."""
+    print("\nTesting are_dependencies_satisfied:")
+
+    features = [
+        {"id": 1, "priority": 1, "dependencies": [], "passes": True},
+        {"id": 2, "priority": 2, "dependencies": [1], "passes": False},
+        {"id": 3, "priority": 3, "dependencies": [2], "passes": False},
+    ]
+
+    passed = True
+
+    # Feature 1 has no deps, should be satisfied
+    if are_dependencies_satisfied(features[0], features):
+        print(" PASS: Feature 1 (no deps) is satisfied")
+    else:
+        print(" FAIL: Feature 1 should be satisfied")
+        passed = False
+
+    # Feature 2 depends on 1 which passes, should be satisfied
+    if are_dependencies_satisfied(features[1], features):
+        print(" PASS: Feature 2 (dep on passing) is satisfied")
+    else:
+        print(" FAIL: Feature 2 should be satisfied")
+        passed = False
+
+    # Feature 3 depends on 2 which doesn't pass, should NOT be satisfied
+    if not are_dependencies_satisfied(features[2], features):
+        print(" PASS: Feature 3 (dep on non-passing) is not satisfied")
+    else:
+        print(" FAIL: Feature 3 should not be satisfied")
+        passed = False
+
+    return passed
+
+
+def test_get_blocking_dependencies():
+    """Test getting blocking dependency IDs."""
+    print("\nTesting get_blocking_dependencies:")
+
+    features = [
+        {"id": 1, "priority": 1, "dependencies": [], "passes": True},
+        {"id": 2, "priority": 2, "dependencies": [], "passes": False},
+        {"id": 3, "priority": 3, "dependencies": [1, 2], "passes": False},
+    ]
+
+    blocking = get_blocking_dependencies(features[2], features)
+
+    # Only feature 2 should be blocking (1 passes)
+    if blocking == [2]:
+        print(" PASS: Correctly identified blocking dependency")
+        return True
+    else:
+        print(f" FAIL: Expected [2], got {blocking}")
+        return False
+
+
+def test_get_ready_features():
+    """Test getting ready features."""
+    print("\nTesting get_ready_features:")
+
+    features = [
+        {"id": 1, "priority": 1, "dependencies": [], "passes": True},
+        {"id": 2, "priority": 2, "dependencies": [], "passes": False, "in_progress": False},
+        {"id": 3, "priority": 3, "dependencies": [1], "passes": False, "in_progress": False},
+        {"id": 4, "priority": 4, "dependencies": [2], "passes": False, "in_progress": False},
+    ]
+
+    ready = get_ready_features(features)
+
+    # Features 2 and 3 should be ready
+    # Feature 1 passes, feature 4 blocked by 2
+    ready_ids = [f["id"] for f in ready]
+
+    if 2 in ready_ids and 3 in ready_ids:
+        if 1 not in ready_ids and 4 not in ready_ids:
+            print(f" PASS: Ready features: {ready_ids}")
+            return True
+        else:
+            print(f" FAIL: Should not include passing/blocked. Got: {ready_ids}")
+            return False
+    else:
+        print(f" FAIL: Should include 2 and 3. Got: {ready_ids}")
+        return False
+
+
+def test_get_blocked_features():
+    """Test getting blocked features."""
+    print("\nTesting get_blocked_features:")
+
+    features = [
+        {"id": 1, "priority": 1, "dependencies": [], "passes": False},
+        {"id": 2, "priority": 2, "dependencies": [1], "passes": False},
+    ]
+
+    blocked = get_blocked_features(features)
+
+    # Feature 2 should be blocked by 1
+    if len(blocked) == 1 and blocked[0]["id"] == 2:
+        if blocked[0]["blocked_by"] == [1]:
+            print(" PASS: Correctly identified blocked feature")
+            return True
+        else:
+            print(f" FAIL: Wrong blocked_by: {blocked[0]['blocked_by']}")
+            return False
+    else:
+        print(f" FAIL: Expected feature 2 blocked, got: {blocked}")
+        return False
+
+
+def run_all_tests():
+    """Run all tests and report results."""
+    print("=" * 60)
+    print("Dependency Resolver Tests")
+    print("=" * 60)
+
+    tests = [
+        test_compute_scheduling_scores_simple_chain,
+        test_compute_scheduling_scores_with_cycle,
+        test_compute_scheduling_scores_self_reference,
+        test_compute_scheduling_scores_complex_cycle,
+        test_compute_scheduling_scores_diamond,
+        test_compute_scheduling_scores_empty,
+        test_would_create_circular_dependency,
+        test_resolve_dependencies_with_cycle,
+        test_are_dependencies_satisfied,
+        test_get_blocking_dependencies,
+        test_get_ready_features,
+        test_get_blocked_features,
+    ]
+
+    passed = 0
+    failed = 0
+
+    for test in tests:
+        try:
+            if test():
+                passed += 1
+            else:
+                failed += 1
+        except Exception as e:
+            print(f" ERROR: {e}")
+            failed += 1
+
+    print("\n" + "=" * 60)
+    print(f"Results: {passed} passed, {failed} failed")
+    print("=" * 60)
+
+    return failed == 0
+
+
+if __name__ == "__main__":
+    success = run_all_tests()
+    sys.exit(0 if success else 1)
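The tests above pin down the semantics of `would_create_circular_dependency(features, source_id, target_id)` without showing its implementation (`api.dependency_resolver` itself is not part of this diff): the call asks whether making `source_id` depend on `target_id` would close a cycle, which happens exactly when `target_id` can already reach `source_id` along existing "depends on" edges (self-reference being the trivial case). A minimal sketch consistent with those expectations — the body below is an assumption, not the repository's code:

```python
def would_create_circular_dependency(features, source_id, target_id):
    """Return True if making source_id depend on target_id closes a cycle.

    A cycle exists when target_id can already reach source_id by following
    existing "depends on" edges; a self-reference is the trivial case.
    """
    if source_id == target_id:
        return True

    # Adjacency map: feature id -> ids it depends on
    deps = {f["id"]: list(f.get("dependencies", [])) for f in features}

    # Iterative DFS from target_id along depends-on edges
    stack, seen = [target_id], set()
    while stack:
        node = stack.pop()
        if node == source_id:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(deps.get(node, []))
    return False
```

The `seen` set makes the walk terminate even when the existing graph already contains a cycle, which is the same robustness property the `compute_scheduling_scores` timeout tests above check for.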
@@ -107,6 +107,8 @@ def test_extract_commands():
        ("/usr/bin/node script.js", ["node"]),
        ("VAR=value ls", ["ls"]),
        ("git status || git init", ["git", "git"]),
+        # Fallback parser test: complex nested quotes that break shlex
+        ('docker exec container php -r "echo \\"test\\";"', ["docker"]),
    ]

    for cmd, expected in test_cases:
@@ -453,6 +455,21 @@ commands:
        print(" FAIL: Non-allowed command 'rustc' should be blocked")
        failed += 1

+    # Test 4: Empty command name is rejected
+    config_path.write_text("""version: 1
+commands:
+  - name: ""
+    description: Empty name should be rejected
+""")
+    result = load_project_commands(project_dir)
+    if result is None:
+        print(" PASS: Empty command name rejected in project config")
+        passed += 1
+    else:
+        print(" FAIL: Empty command name should be rejected in project config")
+        print(f" Got: {result}")
+        failed += 1
+
    return passed, failed

6  ui/package-lock.json  generated
@@ -3024,7 +3024,7 @@
      "version": "19.2.9",
      "resolved": "https://registry.npmjs.org/@types/react/-/react-19.2.9.tgz",
      "integrity": "sha512-Lpo8kgb/igvMIPeNV2rsYKTgaORYdO1XGVZ4Qz3akwOj0ySGYMPlQWa8BaLn0G63D1aSaAQ5ldR06wCpChQCjA==",
-      "dev": true,
+      "devOptional": true,
      "license": "MIT",
      "dependencies": {
        "csstype": "^3.2.2"
@@ -3034,7 +3034,7 @@
      "version": "19.2.3",
      "resolved": "https://registry.npmjs.org/@types/react-dom/-/react-dom-19.2.3.tgz",
      "integrity": "sha512-jp2L/eY6fn+KgVVQAOqYItbF0VY/YApe5Mz2F0aykSO8gx31bYCZyvSeYxCHKvzHG5eZjc+zyaS5BrBWya2+kQ==",
-      "dev": true,
+      "devOptional": true,
      "license": "MIT",
      "peerDependencies": {
        "@types/react": "^19.2.0"
@@ -3658,7 +3658,7 @@
      "version": "3.2.3",
      "resolved": "https://registry.npmjs.org/csstype/-/csstype-3.2.3.tgz",
      "integrity": "sha512-z1HGKcYy2xA8AGQfwrn0PAy+PB7X/GSj3UVJW9qKyn43xWa+gl5nXmU4qqLMRzWVLFC8KusUX8T/0kCiOYpAIQ==",
-      "dev": true,
+      "devOptional": true,
      "license": "MIT"
    },
    "node_modules/d3-color": {
@@ -26,8 +26,10 @@ import { ViewToggle, type ViewMode } from './components/ViewToggle'
import { DependencyGraph } from './components/DependencyGraph'
import { KeyboardShortcutsHelp } from './components/KeyboardShortcutsHelp'
import { ThemeSelector } from './components/ThemeSelector'
+import { ResetProjectModal } from './components/ResetProjectModal'
+import { ProjectSetupRequired } from './components/ProjectSetupRequired'
import { getDependencyGraph } from './lib/api'
-import { Loader2, Settings, Moon, Sun } from 'lucide-react'
+import { Loader2, Settings, Moon, Sun, RotateCcw } from 'lucide-react'
import type { Feature } from './lib/types'
import { Button } from '@/components/ui/button'
import { Card, CardContent } from '@/components/ui/card'
@@ -36,6 +38,9 @@ import { Badge } from '@/components/ui/badge'
const STORAGE_KEY = 'autocoder-selected-project'
const VIEW_MODE_KEY = 'autocoder-view-mode'

+// Bottom padding for main content when debug panel is collapsed (40px header + 8px margin)
+const COLLAPSED_DEBUG_PANEL_CLEARANCE = 48
+
function App() {
  // Initialize selected project from localStorage
  const [selectedProject, setSelectedProject] = useState<string | null>(() => {
@@ -56,6 +61,7 @@ function App() {
  const [showSettings, setShowSettings] = useState(false)
  const [showKeyboardHelp, setShowKeyboardHelp] = useState(false)
  const [isSpecCreating, setIsSpecCreating] = useState(false)
+  const [showResetModal, setShowResetModal] = useState(false)
  const [showSpecChat, setShowSpecChat] = useState(false) // For "Create Spec" button in empty kanban
  const [viewMode, setViewMode] = useState<ViewMode>(() => {
    try {
@@ -200,10 +206,18 @@ function App() {
        setShowKeyboardHelp(true)
      }

+      // R : Open reset modal (when project selected and agent not running)
+      if ((e.key === 'r' || e.key === 'R') && selectedProject && wsState.agentStatus !== 'running') {
+        e.preventDefault()
+        setShowResetModal(true)
+      }
+
      // Escape : Close modals
      if (e.key === 'Escape') {
        if (showKeyboardHelp) {
          setShowKeyboardHelp(false)
+        } else if (showResetModal) {
+          setShowResetModal(false)
        } else if (showExpandProject) {
          setShowExpandProject(false)
        } else if (showSettings) {
@@ -222,7 +236,7 @@ function App() {

    window.addEventListener('keydown', handleKeyDown)
    return () => window.removeEventListener('keydown', handleKeyDown)
-  }, [selectedProject, showAddFeature, showExpandProject, selectedFeature, debugOpen, debugActiveTab, assistantOpen, features, showSettings, showKeyboardHelp, isSpecCreating, viewMode])
+  }, [selectedProject, showAddFeature, showExpandProject, selectedFeature, debugOpen, debugActiveTab, assistantOpen, features, showSettings, showKeyboardHelp, isSpecCreating, viewMode, showResetModal, wsState.agentStatus])

  // Combine WebSocket progress with feature data
  const progress = wsState.progress.total > 0 ? wsState.progress : {
@@ -242,7 +256,7 @@ function App() {
  return (
    <div className="min-h-screen bg-background">
      {/* Header */}
-      <header className="bg-card text-foreground border-b-2 border-border">
+      <header className="sticky top-0 z-50 bg-card/80 backdrop-blur-md text-foreground border-b-2 border-border">
        <div className="max-w-7xl mx-auto px-4 py-4">
          <div className="flex items-center justify-between">
            {/* Logo and Title */}
@@ -265,6 +279,7 @@ function App() {
            <AgentControl
              projectName={selectedProject}
              status={wsState.agentStatus}
+              defaultConcurrency={selectedProjectData?.default_concurrency}
            />

            <DevServerControl
@@ -283,6 +298,17 @@ function App() {
              <Settings size={18} />
            </Button>

+            <Button
+              onClick={() => setShowResetModal(true)}
+              variant="outline"
+              size="sm"
+              title="Reset Project (R)"
+              aria-label="Reset Project"
+              disabled={wsState.agentStatus === 'running'}
+            >
+              <RotateCcw size={18} />
+            </Button>
+
            {/* Ollama Mode Indicator */}
            {settings?.ollama_mode && (
              <div
@@ -331,7 +357,7 @@ function App() {
      {/* Main Content */}
      <main
|
<main
|
||||||
className="max-w-7xl mx-auto px-4 py-8"
|
className="max-w-7xl mx-auto px-4 py-8"
|
||||||
style={{ paddingBottom: debugOpen ? debugPanelHeight + 32 : undefined }}
|
style={{ paddingBottom: debugOpen ? debugPanelHeight + 32 : COLLAPSED_DEBUG_PANEL_CLEARANCE }}
|
||||||
>
|
>
|
||||||
{!selectedProject ? (
|
{!selectedProject ? (
|
||||||
<div className="text-center mt-12">
|
<div className="text-center mt-12">
|
||||||
@@ -342,6 +368,16 @@ function App() {
|
|||||||
Select a project from the dropdown above or create a new one to get started.
|
Select a project from the dropdown above or create a new one to get started.
|
||||||
</p>
|
</p>
|
||||||
</div>
|
</div>
|
||||||
|
) : !hasSpec ? (
|
||||||
|
<ProjectSetupRequired
|
||||||
|
projectName={selectedProject}
|
||||||
|
projectPath={selectedProjectData?.path}
|
||||||
|
onCreateWithClaude={() => setShowSpecChat(true)}
|
||||||
|
onEditManually={() => {
|
||||||
|
// Open debug panel for the user to see the project path
|
||||||
|
setDebugOpen(true)
|
||||||
|
}}
|
||||||
|
/>
|
||||||
) : (
|
) : (
|
||||||
<div className="space-y-8">
|
<div className="space-y-8">
|
||||||
{/* Progress Dashboard */}
|
{/* Progress Dashboard */}
|
||||||
@@ -508,6 +544,21 @@ function App() {
|
|||||||
{/* Keyboard Shortcuts Help */}
|
{/* Keyboard Shortcuts Help */}
|
||||||
<KeyboardShortcutsHelp isOpen={showKeyboardHelp} onClose={() => setShowKeyboardHelp(false)} />
|
<KeyboardShortcutsHelp isOpen={showKeyboardHelp} onClose={() => setShowKeyboardHelp(false)} />
|
||||||
|
|
||||||
|
{/* Reset Project Modal */}
|
||||||
|
{showResetModal && selectedProject && (
|
||||||
|
<ResetProjectModal
|
||||||
|
isOpen={showResetModal}
|
||||||
|
projectName={selectedProject}
|
||||||
|
onClose={() => setShowResetModal(false)}
|
||||||
|
onResetComplete={(wasFullReset) => {
|
||||||
|
// If full reset, the spec was deleted - show spec creation chat
|
||||||
|
if (wasFullReset) {
|
||||||
|
setShowSpecChat(true)
|
||||||
|
}
|
||||||
|
}}
|
||||||
|
/>
|
||||||
|
)}
|
||||||
|
|
||||||
{/* Celebration Overlay - shows when a feature is completed by an agent */}
|
{/* Celebration Overlay - shows when a feature is completed by an agent */}
|
||||||
{wsState.celebration && (
|
{wsState.celebration && (
|
||||||
<CelebrationOverlay
|
<CelebrationOverlay
|
||||||
|
|||||||
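The Escape branch in the hunk above closes only the topmost open modal per key press (keyboard help first, then the new reset modal, then expand-project, then settings). That precedence chain can be factored as a small pure helper; everything below is an illustrative sketch, not code from this commit.

```typescript
// Sketch of the Escape-key precedence shown above: given modal flags in
// priority order (most recently layered first), close only the first open
// one. The flag names mirror the diff; the helper itself is hypothetical.
type ModalEntry = { name: string; open: boolean; close: () => void }

export function closeTopModal(modals: ModalEntry[]): string | null {
  for (const m of modals) {
    if (m.open) {
      m.close()
      return m.name // only the topmost open modal closes per Escape press
    }
  }
  return null // nothing open, Escape is a no-op
}
```

Keeping the branches ordered this way means a newly added modal (like `showResetModal`) only needs one `else if` in the right position rather than changes to every handler.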
```diff
@@ -1,9 +1,10 @@
-import { useState } from 'react'
+import { useState, useEffect, useRef, useCallback } from 'react'
 import { Play, Square, Loader2, GitBranch, Clock } from 'lucide-react'
 import {
 useStartAgent,
 useStopAgent,
 useSettings,
+useUpdateProjectSettings,
 } from '../hooks/useProjects'
 import { useNextScheduledRun } from '../hooks/useSchedules'
 import { formatNextRun, formatEndTime } from '../lib/timeUtils'
@@ -15,14 +16,47 @@ import { Badge } from '@/components/ui/badge'
 interface AgentControlProps {
 projectName: string
 status: AgentStatus
+defaultConcurrency?: number
 }

-export function AgentControl({ projectName, status }: AgentControlProps) {
+export function AgentControl({ projectName, status, defaultConcurrency = 3 }: AgentControlProps) {
 const { data: settings } = useSettings()
 const yoloMode = settings?.yolo_mode ?? false

 // Concurrency: 1 = single agent, 2-5 = parallel
-const [concurrency, setConcurrency] = useState(3)
+const [concurrency, setConcurrency] = useState(defaultConcurrency)
+
+// Sync concurrency when project changes or defaultConcurrency updates
+useEffect(() => {
+setConcurrency(defaultConcurrency)
+}, [defaultConcurrency])
+
+// Debounced save for concurrency changes
+const updateProjectSettings = useUpdateProjectSettings(projectName)
+const saveTimeoutRef = useRef<ReturnType<typeof setTimeout> | null>(null)
+
+const handleConcurrencyChange = useCallback((newConcurrency: number) => {
+setConcurrency(newConcurrency)
+
+// Clear previous timeout
+if (saveTimeoutRef.current) {
+clearTimeout(saveTimeoutRef.current)
+}
+
+// Debounce save (500ms)
+saveTimeoutRef.current = setTimeout(() => {
+updateProjectSettings.mutate({ default_concurrency: newConcurrency })
+}, 500)
+}, [updateProjectSettings])
+
+// Cleanup timeout on unmount
+useEffect(() => {
+return () => {
+if (saveTimeoutRef.current) {
+clearTimeout(saveTimeoutRef.current)
+}
+}
+}, [])
+
 const startAgent = useStartAgent(projectName)
 const stopAgent = useStopAgent(projectName)
@@ -57,7 +91,7 @@ export function AgentControl({ projectName, status }: AgentControlProps) {
 min={1}
 max={5}
 value={concurrency}
-onChange={(e) => setConcurrency(Number(e.target.value))}
+onChange={(e) => handleConcurrencyChange(Number(e.target.value))}
 disabled={isLoading}
 className="w-16 h-2 accent-primary cursor-pointer"
 title={`${concurrency} concurrent agent${concurrency > 1 ? 's' : ''}`}
```
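The `handleConcurrencyChange` hunk above debounces persistence so dragging the slider issues one save instead of one per tick. Outside React, the same pattern reduces to a timer that is cleared and re-armed on every call; the generic sketch below is illustrative (`makeDebounced` is not part of the repository).

```typescript
// Generic debounce: each call cancels the pending timer and re-arms it,
// so only the last value within `delayMs` reaches `save`. This is the same
// clearTimeout/setTimeout dance performed with saveTimeoutRef in the diff.
export function makeDebounced<T>(save: (value: T) => void, delayMs = 500) {
  let timer: ReturnType<typeof setTimeout> | null = null
  return (value: T) => {
    if (timer) clearTimeout(timer)
    timer = setTimeout(() => save(value), delayMs)
  }
}
```

In the component, the equivalent timer handle lives in a ref so it survives re-renders, and the cleanup effect clears it on unmount to avoid saving after the component is gone.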
```diff
@@ -12,6 +12,7 @@ import { useAssistantChat } from '../hooks/useAssistantChat'
 import { ChatMessage as ChatMessageComponent } from './ChatMessage'
 import { ConversationHistory } from './ConversationHistory'
 import type { ChatMessage } from '../lib/types'
+import { isSubmitEnter } from '../lib/keyboard'
 import { Button } from '@/components/ui/button'
 import { Textarea } from '@/components/ui/textarea'

@@ -134,7 +135,7 @@ export function AssistantChat({
 }

 const handleKeyDown = (e: React.KeyboardEvent<HTMLTextAreaElement>) => {
-if (e.key === 'Enter' && !e.shiftKey) {
+if (isSubmitEnter(e)) {
 e.preventDefault()
 handleSend()
 }
```
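Several diffs in this commit replace inline `e.key === 'Enter' && !e.shiftKey` checks with a shared `isSubmitEnter` helper from `../lib/keyboard`, whose body is not shown here. Below is a sketch consistent with its call sites, `isSubmitEnter(e)` for textareas and `isSubmitEnter(e, false)` for single-line inputs; the signature is inferred from usage, not taken from the repository.

```typescript
// Inferred shape of the shared keyboard helper: submit on Enter, and by
// default treat Shift+Enter as "insert newline" rather than submit.
// This is a reconstruction from the call sites, not the actual module.
interface KeyEventLike {
  key: string
  shiftKey?: boolean
}

export function isSubmitEnter(e: KeyEventLike, blockShiftEnter = true): boolean {
  if (e.key !== 'Enter') return false
  if (blockShiftEnter && e.shiftKey) return false // Shift+Enter keeps typing
  return true
}
```

Centralizing the check means every chat box and inline editor agrees on what "submit" means, and future tweaks (for example ignoring IME composition) land in one place.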
```diff
@@ -50,11 +50,23 @@ export function AssistantPanel({ projectName, isOpen, onClose }: AssistantPanelP
 )

 // Fetch conversation details when we have an ID
-const { data: conversationDetail, isLoading: isLoadingConversation } = useConversation(
+const { data: conversationDetail, isLoading: isLoadingConversation, error: conversationError } = useConversation(
 projectName,
 conversationId
 )
+
+// Clear stored conversation ID if it no longer exists (404 error)
+useEffect(() => {
+if (conversationError && conversationId) {
+const message = conversationError.message.toLowerCase()
+// Only clear for 404 errors, not transient network issues
+if (message.includes('not found') || message.includes('404')) {
+console.warn(`Conversation ${conversationId} not found, clearing stored ID`)
+setConversationId(null)
+}
+}
+}, [conversationError, conversationId])

 // Convert API messages to ChatMessage format for the chat component
 const initialMessages: ChatMessage[] | undefined = conversationDetail?.messages.map((msg) => ({
 id: `db-${msg.id}`,
```
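The effect above deliberately clears the stale conversation ID only for "not found" responses, never for transient network failures. That string matching can be isolated as a predicate; this is an illustrative refactoring sketch, not code from the commit.

```typescript
// True only for errors that look like a missing resource (HTTP 404).
// Transient network failures must not wipe the stored conversation ID,
// so anything that doesn't mention 404 / "not found" is left alone.
export function isNotFoundError(error: { message: string } | null | undefined): boolean {
  if (!error) return false
  const message = error.message.toLowerCase()
  return message.includes('not found') || message.includes('404')
}
```

Matching on the message text is fragile compared to a status code on the error object, but it is the contract the diff above relies on.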
```diff
@@ -168,7 +168,7 @@ export function ConversationHistory({
 <Button
 variant="ghost"
 size="icon"
-onClick={(e) => handleDeleteClick(e, conversation)}
+onClick={(e: React.MouseEvent) => handleDeleteClick(e, conversation)}
 className={`h-8 w-8 mr-2 ${
 isCurrent
 ? 'opacity-60 hover:opacity-100'
```
```diff
@@ -349,7 +349,7 @@ export function DebugLogViewer({
 <Button
 variant={activeTab === 'agent' ? 'secondary' : 'ghost'}
 size="sm"
-onClick={(e) => {
+onClick={(e: React.MouseEvent) => {
 e.stopPropagation()
 setActiveTab('agent')
 }}
@@ -366,7 +366,7 @@ export function DebugLogViewer({
 <Button
 variant={activeTab === 'devserver' ? 'secondary' : 'ghost'}
 size="sm"
-onClick={(e) => {
+onClick={(e: React.MouseEvent) => {
 e.stopPropagation()
 setActiveTab('devserver')
 }}
@@ -383,7 +383,7 @@ export function DebugLogViewer({
 <Button
 variant={activeTab === 'terminal' ? 'secondary' : 'ghost'}
 size="sm"
-onClick={(e) => {
+onClick={(e: React.MouseEvent) => {
 e.stopPropagation()
 setActiveTab('terminal')
 }}
@@ -421,7 +421,7 @@ export function DebugLogViewer({
 <Button
 variant="ghost"
 size="icon"
-onClick={(e) => {
+onClick={(e: React.MouseEvent) => {
 e.stopPropagation()
 handleClear()
 }}
```
```diff
@@ -11,6 +11,7 @@ import { useExpandChat } from '../hooks/useExpandChat'
 import { ChatMessage } from './ChatMessage'
 import { TypingIndicator } from './TypingIndicator'
 import type { ImageAttachment } from '../lib/types'
+import { isSubmitEnter } from '../lib/keyboard'
 import { Button } from '@/components/ui/button'
 import { Input } from '@/components/ui/input'
 import { Card, CardContent } from '@/components/ui/card'
@@ -88,7 +89,7 @@ export function ExpandProjectChat({
 }

 const handleKeyDown = (e: React.KeyboardEvent) => {
-if (e.key === 'Enter' && !e.shiftKey) {
+if (isSubmitEnter(e)) {
 e.preventDefault()
 handleSendMessage()
 }
```
```diff
@@ -18,6 +18,7 @@ import {
 ArrowLeft,
 } from 'lucide-react'
 import * as api from '../lib/api'
+import { isSubmitEnter } from '../lib/keyboard'
 import type { DirectoryEntry, DriveInfo } from '../lib/types'
 import { Button } from '@/components/ui/button'
 import { Input } from '@/components/ui/input'
@@ -269,7 +270,7 @@ export function FolderBrowser({ onSelect, onCancel, initialPath }: FolderBrowser
 className="flex-1"
 autoFocus
 onKeyDown={(e) => {
-if (e.key === 'Enter') handleCreateFolder()
+if (isSubmitEnter(e, false)) handleCreateFolder()
 if (e.key === 'Escape') {
 setIsCreatingFolder(false)
 setNewFolderName('')
```
```diff
@@ -36,7 +36,7 @@ function getStateColor(state: OrchestratorState): string {
 case 'complete':
 return 'text-primary'
 case 'spawning':
-return 'text-violet-600 dark:text-violet-400'
+return 'text-primary'
 case 'scheduling':
 case 'monitoring':
 return 'text-primary'
@@ -65,7 +65,7 @@ export function OrchestratorStatusCard({ status }: OrchestratorStatusCardProps)
 const [showEvents, setShowEvents] = useState(false)

 return (
-<Card className="mb-4 bg-gradient-to-r from-violet-50 to-purple-50 dark:from-violet-950/30 dark:to-purple-950/30 border-violet-200 dark:border-violet-800/50 py-4">
+<Card className="mb-4 bg-primary/10 border-primary/30 py-4">
 <CardContent className="p-4">
 <div className="flex items-start gap-4">
 {/* Avatar */}
@@ -75,7 +75,7 @@ export function OrchestratorStatusCard({ status }: OrchestratorStatusCardProps)
 <div className="flex-1 min-w-0">
 {/* Header row */}
 <div className="flex items-center gap-2 mb-1">
-<span className="font-semibold text-lg text-violet-700 dark:text-violet-300">
+<span className="font-semibold text-lg text-primary">
 Maestro
 </span>
 <span className={`text-sm font-medium ${getStateColor(status.state)}`}>
@@ -124,7 +124,7 @@ export function OrchestratorStatusCard({ status }: OrchestratorStatusCardProps)
 variant="ghost"
 size="sm"
 onClick={() => setShowEvents(!showEvents)}
-className="text-violet-600 dark:text-violet-400 hover:bg-violet-100 dark:hover:bg-violet-900/30"
+className="text-primary hover:bg-primary/10"
 >
 <Sparkles size={12} />
 Activity
@@ -135,14 +135,14 @@ export function OrchestratorStatusCard({ status }: OrchestratorStatusCardProps)

 {/* Collapsible recent events */}
 {showEvents && status.recentEvents.length > 0 && (
-<div className="mt-3 pt-3 border-t border-violet-200 dark:border-violet-800/50">
+<div className="mt-3 pt-3 border-t border-primary/20">
 <div className="space-y-1.5">
 {status.recentEvents.map((event, idx) => (
 <div
 key={`${event.timestamp}-${idx}`}
 className="flex items-start gap-2 text-xs"
 >
-<span className="text-violet-500 dark:text-violet-400 shrink-0 font-mono">
+<span className="text-primary shrink-0 font-mono">
 {formatRelativeTime(event.timestamp)}
 </span>
 <span className="text-foreground">
```
```diff
@@ -120,7 +120,7 @@ export function ProjectSelector({
 <Button
 variant="ghost"
 size="icon-xs"
-onClick={(e) => handleDeleteClick(e, project.name)}
+onClick={(e: React.MouseEvent) => handleDeleteClick(e, project.name)}
 className="text-muted-foreground hover:text-destructive"
 >
 <Trash2 size={14} />
```
**ui/src/components/ProjectSetupRequired.tsx** (new file, 90 lines)

```diff
@@ -0,0 +1,90 @@
+import { Sparkles, FileEdit, FolderOpen } from 'lucide-react'
+import { Button } from '@/components/ui/button'
+import { Card, CardContent, CardDescription, CardHeader, CardTitle } from '@/components/ui/card'
+
+interface ProjectSetupRequiredProps {
+projectName: string
+projectPath?: string
+onCreateWithClaude: () => void
+onEditManually: () => void
+}
+
+export function ProjectSetupRequired({
+projectName,
+projectPath,
+onCreateWithClaude,
+onEditManually,
+}: ProjectSetupRequiredProps) {
+return (
+<div className="max-w-2xl mx-auto mt-8">
+<Card className="border-2">
+<CardHeader className="text-center">
+<CardTitle className="text-2xl font-display">
+Project Setup Required
+</CardTitle>
+<CardDescription className="text-base">
+<span className="font-semibold">{projectName}</span> needs an app spec to get started
+</CardDescription>
+{projectPath && (
+<div className="flex items-center justify-center gap-2 text-sm text-muted-foreground mt-2">
+<FolderOpen size={14} />
+<code className="bg-muted px-2 py-0.5 rounded text-xs">{projectPath}</code>
+</div>
+)}
+</CardHeader>
+<CardContent className="space-y-4">
+<p className="text-center text-muted-foreground">
+Choose how you want to create your app specification:
+</p>
+
+<div className="grid gap-4 md:grid-cols-2">
+{/* Create with Claude Option */}
+<Card
+className="cursor-pointer border-2 transition-all hover:border-primary hover:shadow-md"
+onClick={onCreateWithClaude}
+>
+<CardContent className="pt-6 text-center space-y-3">
+<div className="w-12 h-12 mx-auto bg-primary/10 rounded-full flex items-center justify-center">
+<Sparkles className="text-primary" size={24} />
+</div>
+<h3 className="font-semibold text-lg">Create with Claude</h3>
+<p className="text-sm text-muted-foreground">
+Describe your app idea and Claude will help create a detailed specification
+</p>
+<Button className="w-full">
+<Sparkles size={16} className="mr-2" />
+Start Chat
+</Button>
+</CardContent>
+</Card>
+
+{/* Edit Manually Option */}
+<Card
+className="cursor-pointer border-2 transition-all hover:border-primary hover:shadow-md"
+onClick={onEditManually}
+>
+<CardContent className="pt-6 text-center space-y-3">
+<div className="w-12 h-12 mx-auto bg-muted rounded-full flex items-center justify-center">
+<FileEdit className="text-muted-foreground" size={24} />
+</div>
+<h3 className="font-semibold text-lg">Edit Templates Manually</h3>
+<p className="text-sm text-muted-foreground">
+Create the prompts directory and edit template files yourself
+</p>
+<Button variant="outline" className="w-full">
+<FileEdit size={16} className="mr-2" />
+View Templates
+</Button>
+</CardContent>
+</Card>
+</div>
+
+<p className="text-center text-xs text-muted-foreground pt-4">
+The app spec tells the agent what to build. It includes the application name,
+description, tech stack, and feature requirements.
+</p>
+</CardContent>
+</Card>
+</div>
+)
+}
```
**ui/src/components/ResetProjectModal.tsx** (new file, 194 lines)

```diff
@@ -0,0 +1,194 @@
+import { useState } from 'react'
+import { Loader2, AlertTriangle, RotateCcw, Trash2, Check, X } from 'lucide-react'
+import { useResetProject } from '../hooks/useProjects'
+import {
+Dialog,
+DialogContent,
+DialogHeader,
+DialogTitle,
+DialogDescription,
+DialogFooter,
+} from '@/components/ui/dialog'
+import { Button } from '@/components/ui/button'
+import { Alert, AlertDescription } from '@/components/ui/alert'
+
+interface ResetProjectModalProps {
+isOpen: boolean
+projectName: string
+onClose: () => void
+onResetComplete?: (wasFullReset: boolean) => void
+}
+
+export function ResetProjectModal({
+isOpen,
+projectName,
+onClose,
+onResetComplete,
+}: ResetProjectModalProps) {
+const [resetType, setResetType] = useState<'quick' | 'full'>('quick')
+const resetProject = useResetProject(projectName)
+
+const handleReset = async () => {
+const isFullReset = resetType === 'full'
+try {
+await resetProject.mutateAsync(isFullReset)
+onResetComplete?.(isFullReset)
+onClose()
+} catch {
+// Error is handled by the mutation state
+}
+}
+
+const handleClose = () => {
+if (!resetProject.isPending) {
+resetProject.reset()
+setResetType('quick')
+onClose()
+}
+}
+
+return (
+<Dialog open={isOpen} onOpenChange={(open) => !open && handleClose()}>
+<DialogContent className="sm:max-w-md">
+<DialogHeader>
+<DialogTitle className="flex items-center gap-2">
+<RotateCcw size={20} />
+Reset Project
+</DialogTitle>
+<DialogDescription>
+Reset <span className="font-semibold">{projectName}</span> to start fresh
+</DialogDescription>
+</DialogHeader>
+
+<div className="space-y-4 py-4">
+{/* Reset Type Toggle */}
+<div className="flex rounded-lg border-2 border-border overflow-hidden">
+<button
+onClick={() => setResetType('quick')}
+disabled={resetProject.isPending}
+className={`flex-1 py-3 px-4 text-sm font-medium transition-colors flex items-center justify-center gap-2 ${
+resetType === 'quick'
+? 'bg-primary text-primary-foreground'
+: 'bg-background text-foreground hover:bg-muted'
+} ${resetProject.isPending ? 'opacity-50 cursor-not-allowed' : ''}`}
+>
+<RotateCcw size={16} />
+Quick Reset
+</button>
+<button
+onClick={() => setResetType('full')}
+disabled={resetProject.isPending}
+className={`flex-1 py-3 px-4 text-sm font-medium transition-colors flex items-center justify-center gap-2 ${
+resetType === 'full'
+? 'bg-destructive text-destructive-foreground'
+: 'bg-background text-foreground hover:bg-muted'
+} ${resetProject.isPending ? 'opacity-50 cursor-not-allowed' : ''}`}
+>
+<Trash2 size={16} />
+Full Reset
+</button>
+</div>
+
+{/* Warning Box */}
+<Alert variant={resetType === 'full' ? 'destructive' : 'default'} className="border-2">
+<AlertTriangle className="h-4 w-4" />
+<AlertDescription>
+<div className="font-semibold mb-2">
+{resetType === 'quick' ? 'What will be deleted:' : 'What will be deleted:'}
+</div>
+<ul className="list-none space-y-1 text-sm">
+<li className="flex items-center gap-2">
+<X size={14} className="text-destructive" />
+All features and progress
+</li>
+<li className="flex items-center gap-2">
+<X size={14} className="text-destructive" />
+Assistant chat history
+</li>
+<li className="flex items-center gap-2">
+<X size={14} className="text-destructive" />
+Agent settings
+</li>
+{resetType === 'full' && (
+<li className="flex items-center gap-2">
+<X size={14} className="text-destructive" />
+App spec and prompts
+</li>
+)}
+</ul>
+</AlertDescription>
+</Alert>
+
+{/* What will be preserved */}
+<div className="bg-muted/50 rounded-lg border-2 border-border p-3">
+<div className="font-semibold mb-2 text-sm">
+{resetType === 'quick' ? 'What will be preserved:' : 'What will be preserved:'}
+</div>
+<ul className="list-none space-y-1 text-sm text-muted-foreground">
+{resetType === 'quick' ? (
+<>
+<li className="flex items-center gap-2">
+<Check size={14} className="text-green-600" />
+App spec and prompts
+</li>
+<li className="flex items-center gap-2">
+<Check size={14} className="text-green-600" />
+Project code and files
+</li>
+</>
+) : (
+<>
+<li className="flex items-center gap-2">
+<Check size={14} className="text-green-600" />
+Project code and files
+</li>
+<li className="flex items-center gap-2 text-muted-foreground/70">
+<AlertTriangle size={14} />
+Setup wizard will appear
+</li>
+</>
+)}
+</ul>
+</div>
+
+{/* Error Message */}
+{resetProject.isError && (
+<Alert variant="destructive">
+<AlertDescription>
+{resetProject.error instanceof Error
+? resetProject.error.message
+: 'Failed to reset project. Please try again.'}
+</AlertDescription>
+</Alert>
+)}
+</div>
+
+<DialogFooter className="gap-2">
+<Button
+variant="outline"
+onClick={handleClose}
+disabled={resetProject.isPending}
+>
+Cancel
+</Button>
+<Button
+variant={resetType === 'full' ? 'destructive' : 'default'}
+onClick={handleReset}
+disabled={resetProject.isPending}
+>
+{resetProject.isPending ? (
+<>
+<Loader2 className="animate-spin mr-2" size={16} />
+Resetting...
+</>
+) : (
+<>
+{resetType === 'quick' ? 'Quick Reset' : 'Full Reset'}
+</>
+)}
+</Button>
+</DialogFooter>
+</DialogContent>
+</Dialog>
+)
+}
```
```diff
@@ -335,7 +335,7 @@ export function ScheduleModal({ projectName, isOpen, onClose }: ScheduleModalPro
 <Checkbox
 id="yolo-mode"
 checked={newSchedule.yolo_mode}
-onCheckedChange={(checked) =>
+onCheckedChange={(checked: boolean | 'indeterminate') =>
 setNewSchedule((prev) => ({ ...prev, yolo_mode: checked === true }))
 }
 />
```
```diff
@@ -12,6 +12,7 @@ import { ChatMessage } from './ChatMessage'
 import { QuestionOptions } from './QuestionOptions'
 import { TypingIndicator } from './TypingIndicator'
 import type { ImageAttachment } from '../lib/types'
+import { isSubmitEnter } from '../lib/keyboard'
 import { Button } from '@/components/ui/button'
 import { Textarea } from '@/components/ui/textarea'
 import { Card, CardContent } from '@/components/ui/card'
@@ -127,7 +128,7 @@ export function SpecCreationChat({
 }

 const handleKeyDown = (e: React.KeyboardEvent) => {
-if (e.key === 'Enter' && !e.shiftKey) {
+if (isSubmitEnter(e)) {
 e.preventDefault()
 handleSendMessage()
 }
```
@@ -8,6 +8,7 @@
 import { useState, useRef, useEffect, useCallback } from 'react'
 import { Plus, X } from 'lucide-react'
 import type { TerminalInfo } from '@/lib/types'
+import { isSubmitEnter } from '@/lib/keyboard'
 import { Button } from '@/components/ui/button'
 import { Input } from '@/components/ui/input'

@@ -96,7 +97,7 @@ export function TerminalTabs({
   // Handle key events during editing
   const handleKeyDown = useCallback(
     (e: React.KeyboardEvent) => {
-      if (e.key === 'Enter') {
+      if (isSubmitEnter(e, false)) {
         e.preventDefault()
         submitEdit()
       } else if (e.key === 'Escape') {
@@ -13,7 +13,7 @@ export function ThemeSelector({ themes, currentTheme, onThemeChange }: ThemeSele
   const [isOpen, setIsOpen] = useState(false)
   const [previewTheme, setPreviewTheme] = useState<ThemeId | null>(null)
   const containerRef = useRef<HTMLDivElement>(null)
-  const timeoutRef = useRef<NodeJS.Timeout | null>(null)
+  const timeoutRef = useRef<ReturnType<typeof setTimeout> | null>(null)

   // Close dropdown when clicking outside
   useEffect(() => {
@@ -32,7 +32,7 @@ export function ThemeSelector({ themes, currentTheme, onThemeChange }: ThemeSele
   useEffect(() => {
     if (previewTheme) {
       const root = document.documentElement
-      root.classList.remove('theme-claude', 'theme-neo-brutalism', 'theme-retro-arcade', 'theme-aurora')
+      root.classList.remove('theme-claude', 'theme-neo-brutalism', 'theme-retro-arcade', 'theme-aurora', 'theme-business')
       if (previewTheme === 'claude') {
         root.classList.add('theme-claude')
       } else if (previewTheme === 'neo-brutalism') {
@@ -41,6 +41,8 @@ export function ThemeSelector({ themes, currentTheme, onThemeChange }: ThemeSele
         root.classList.add('theme-retro-arcade')
       } else if (previewTheme === 'aurora') {
         root.classList.add('theme-aurora')
+      } else if (previewTheme === 'business') {
+        root.classList.add('theme-business')
       }
     }

@@ -48,7 +50,7 @@ export function ThemeSelector({ themes, currentTheme, onThemeChange }: ThemeSele
     return () => {
       if (previewTheme) {
         const root = document.documentElement
-        root.classList.remove('theme-claude', 'theme-neo-brutalism', 'theme-retro-arcade', 'theme-aurora')
+        root.classList.remove('theme-claude', 'theme-neo-brutalism', 'theme-retro-arcade', 'theme-aurora', 'theme-business')
         if (currentTheme === 'claude') {
           root.classList.add('theme-claude')
         } else if (currentTheme === 'neo-brutalism') {
@@ -57,6 +59,8 @@ export function ThemeSelector({ themes, currentTheme, onThemeChange }: ThemeSele
           root.classList.add('theme-retro-arcade')
         } else if (currentTheme === 'aurora') {
           root.classList.add('theme-aurora')
+        } else if (currentTheme === 'business') {
+          root.classList.add('theme-business')
         }
       }
     }
@@ -26,6 +26,16 @@ export function useConversation(projectName: string | null, conversationId: numb
     queryFn: () => api.getAssistantConversation(projectName!, conversationId!),
     enabled: !!projectName && !!conversationId,
     staleTime: 30_000, // Cache for 30 seconds
+    retry: (failureCount, error) => {
+      // Don't retry on "not found" errors (404) - conversation doesn't exist
+      if (error instanceof Error && (
+        error.message.toLowerCase().includes('not found') ||
+        error.message === 'HTTP 404'
+      )) {
+        return false
+      }
+      return failureCount < 3
+    },
   })
 }
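The retry predicate in the hunk above can be exercised on its own. The sketch below copies its logic into a standalone `shouldRetry` function (a name introduced here for illustration; in the diff the predicate is an inline `retry` option passed to TanStack Query), assuming API failures surface as `Error` objects whose message carries the HTTP status text:

```typescript
// Standalone copy of the retry predicate from the useConversation hook above.
function shouldRetry(failureCount: number, error: unknown): boolean {
  // Never retry "not found" failures - the conversation simply doesn't exist.
  if (
    error instanceof Error &&
    (error.message.toLowerCase().includes('not found') ||
      error.message === 'HTTP 404')
  ) {
    return false
  }
  // Transient failures retry up to 3 times.
  return failureCount < 3
}

console.log(shouldRetry(0, new Error('HTTP 404')))              // a 404 is never retried
console.log(shouldRetry(1, new Error('Conversation not found'))) // "not found" text is matched case-insensitively
console.log(shouldRetry(1, new Error('network error')))          // transient errors are retried
console.log(shouldRetry(3, new Error('network error')))          // gives up after 3 attempts
```

TanStack Query calls this predicate after every failed attempt, so returning `false` immediately for a 404 avoids three pointless refetches of a conversation that was deleted.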
@@ -4,7 +4,7 @@

 import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query'
 import * as api from '../lib/api'
-import type { FeatureCreate, FeatureUpdate, ModelsResponse, Settings, SettingsUpdate } from '../lib/types'
+import type { FeatureCreate, FeatureUpdate, ModelsResponse, ProjectSettingsUpdate, Settings, SettingsUpdate } from '../lib/types'

 // ============================================================================
 // Projects
@@ -48,6 +48,33 @@ export function useDeleteProject() {
   })
 }

+export function useResetProject(projectName: string) {
+  const queryClient = useQueryClient()
+
+  return useMutation({
+    mutationFn: (fullReset: boolean) => api.resetProject(projectName, fullReset),
+    onSuccess: () => {
+      queryClient.invalidateQueries({ queryKey: ['projects'] })
+      queryClient.invalidateQueries({ queryKey: ['project', projectName] })
+      queryClient.invalidateQueries({ queryKey: ['features', projectName] })
+      queryClient.invalidateQueries({ queryKey: ['agent-status', projectName] })
+    },
+  })
+}
+
+export function useUpdateProjectSettings(projectName: string) {
+  const queryClient = useQueryClient()
+
+  return useMutation({
+    mutationFn: (settings: ProjectSettingsUpdate) =>
+      api.updateProjectSettings(projectName, settings),
+    onSuccess: () => {
+      queryClient.invalidateQueries({ queryKey: ['projects'] })
+      queryClient.invalidateQueries({ queryKey: ['project', projectName] })
+    },
+  })
+}
+
 // ============================================================================
 // Features
 // ============================================================================
@@ -1,6 +1,6 @@
 import { useState, useEffect, useCallback } from 'react'

-export type ThemeId = 'twitter' | 'claude' | 'neo-brutalism' | 'retro-arcade' | 'aurora'
+export type ThemeId = 'twitter' | 'claude' | 'neo-brutalism' | 'retro-arcade' | 'aurora' | 'business'

 export interface ThemeOption {
   id: ThemeId
@@ -43,6 +43,12 @@ export const THEMES: ThemeOption[] = [
     name: 'Aurora',
     description: 'Deep violet and teal, like northern lights',
     previewColors: { primary: '#8b5cf6', background: '#faf8ff', accent: '#2dd4bf' }
+  },
+  {
+    id: 'business',
+    name: 'Business',
+    description: 'Deep navy (#000e4e) and gray monochrome',
+    previewColors: { primary: '#000e4e', background: '#eaecef', accent: '#6b7280' }
   }
 ]

@@ -61,6 +67,8 @@ function getThemeClass(themeId: ThemeId): string {
       return 'theme-retro-arcade'
     case 'aurora':
       return 'theme-aurora'
+    case 'business':
+      return 'theme-business'
     default:
       return ''
   }
@@ -70,7 +78,7 @@ export function useTheme() {
   const [theme, setThemeState] = useState<ThemeId>(() => {
     try {
       const stored = localStorage.getItem(THEME_STORAGE_KEY)
-      if (stored === 'twitter' || stored === 'claude' || stored === 'neo-brutalism' || stored === 'retro-arcade' || stored === 'aurora') {
+      if (stored === 'twitter' || stored === 'claude' || stored === 'neo-brutalism' || stored === 'retro-arcade' || stored === 'aurora' || stored === 'business') {
         return stored
       }
     } catch {
@@ -92,7 +100,7 @@ export function useTheme() {
     const root = document.documentElement

     // Remove all theme classes
-    root.classList.remove('theme-claude', 'theme-neo-brutalism', 'theme-retro-arcade', 'theme-aurora')
+    root.classList.remove('theme-claude', 'theme-neo-brutalism', 'theme-retro-arcade', 'theme-aurora', 'theme-business')

     // Add current theme class (if not twitter/default)
     const themeClass = getThemeClass(theme)
@@ -6,6 +6,7 @@ import type {
   ProjectSummary,
   ProjectDetail,
   ProjectPrompts,
+  ProjectSettingsUpdate,
   FeatureListResponse,
   Feature,
   FeatureCreate,
@@ -100,6 +101,33 @@ export async function updateProjectPrompts(
   })
 }

+export async function updateProjectSettings(
+  name: string,
+  settings: ProjectSettingsUpdate
+): Promise<ProjectDetail> {
+  return fetchJSON(`/projects/${encodeURIComponent(name)}/settings`, {
+    method: 'PATCH',
+    body: JSON.stringify(settings),
+  })
+}
+
+export interface ResetProjectResponse {
+  success: boolean
+  reset_type: 'quick' | 'full'
+  deleted_files: string[]
+  message: string
+}
+
+export async function resetProject(
+  name: string,
+  fullReset: boolean = false
+): Promise<ResetProjectResponse> {
+  const params = fullReset ? '?full_reset=true' : ''
+  return fetchJSON(`/projects/${encodeURIComponent(name)}/reset${params}`, {
+    method: 'POST',
+  })
+}
+
 // ============================================================================
 // Features API
 // ============================================================================
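The `resetProject` client added above builds its request URL from the project name and an optional `full_reset` flag. A minimal sketch of just that URL construction (the `resetProjectUrl` helper is hypothetical, factored out here for illustration; `fetchJSON` itself is not reproduced):

```typescript
// Sketch of the URL that resetProject sends its POST to.
// encodeURIComponent keeps project names with spaces or slashes safe in the path.
function resetProjectUrl(name: string, fullReset: boolean = false): string {
  const params = fullReset ? '?full_reset=true' : ''
  return `/projects/${encodeURIComponent(name)}/reset${params}`
}

console.log(resetProjectUrl('my app'))     // "/projects/my%20app/reset"
console.log(resetProjectUrl('demo', true)) // "/projects/demo/reset?full_reset=true"
```

Encoding the name rather than interpolating it raw matters because project names come from user input and land in a path segment.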
38  ui/src/lib/keyboard.ts  Normal file
@@ -0,0 +1,38 @@
+/**
+ * Keyboard event utilities
+ *
+ * Helpers for handling keyboard events, particularly for IME-aware input handling.
+ */
+
+/**
+ * Check if an Enter keypress should trigger form submission.
+ *
+ * Returns false during IME composition (e.g., Japanese, Chinese, Korean input)
+ * to prevent accidental submission while selecting characters.
+ *
+ * @param e - The keyboard event from React
+ * @param allowShiftEnter - If true, Shift+Enter returns false (for multiline input)
+ * @returns true if Enter should submit, false if it should be ignored
+ *
+ * @example
+ * // In a chat input (Shift+Enter for newline)
+ * if (isSubmitEnter(e)) {
+ *   e.preventDefault()
+ *   handleSend()
+ * }
+ *
+ * @example
+ * // In a single-line input (Enter always submits)
+ * if (isSubmitEnter(e, false)) {
+ *   handleSubmit()
+ * }
+ */
+export function isSubmitEnter(
+  e: React.KeyboardEvent,
+  allowShiftEnter: boolean = true
+): boolean {
+  if (e.key !== 'Enter') return false
+  if (allowShiftEnter && e.shiftKey) return false
+  if (e.nativeEvent.isComposing) return false
+  return true
+}
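The new `isSubmitEnter` helper is small enough to check against a table of inputs. The sketch below duplicates its logic with a minimal structural stand-in for `React.KeyboardEvent` (the `EnterEvent` interface and `makeEvent` helper are introduced here for illustration only; they are not part of the diff):

```typescript
// Minimal stand-in for the parts of React.KeyboardEvent that isSubmitEnter reads.
interface EnterEvent {
  key: string
  shiftKey: boolean
  nativeEvent: { isComposing: boolean }
}

// Same logic as ui/src/lib/keyboard.ts above, typed against the stand-in.
function isSubmitEnter(e: EnterEvent, allowShiftEnter: boolean = true): boolean {
  if (e.key !== 'Enter') return false
  if (allowShiftEnter && e.shiftKey) return false
  if (e.nativeEvent.isComposing) return false
  return true
}

// Hypothetical factory for building test events.
const makeEvent = (key: string, shiftKey = false, isComposing = false): EnterEvent => ({
  key,
  shiftKey,
  nativeEvent: { isComposing },
})

console.log(isSubmitEnter(makeEvent('Enter')))               // plain Enter submits
console.log(isSubmitEnter(makeEvent('Enter', true)))         // Shift+Enter means "insert newline"
console.log(isSubmitEnter(makeEvent('Enter', true), false))  // single-line mode ignores Shift
console.log(isSubmitEnter(makeEvent('Enter', false, true)))  // mid-IME composition never submits
```

The `isComposing` guard is the whole point of the refactor in the hunks above: the old `e.key === 'Enter'` checks would submit a chat message while a CJK user was still picking a candidate character.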
@@ -15,6 +15,7 @@ export interface ProjectSummary {
   path: string
   has_spec: boolean
   stats: ProjectStats
+  default_concurrency: number
 }

 export interface ProjectDetail extends ProjectSummary {
@@ -536,6 +537,10 @@ export interface SettingsUpdate {
   testing_agent_ratio?: number
 }

+export interface ProjectSettingsUpdate {
+  default_concurrency?: number
+}
+
 // ============================================================================
 // Schedule Types
 // ============================================================================
@@ -1,5 +1,5 @@
 @import "tailwindcss";
-@import "tw-animate-css";
+@import url("tw-animate-css");

 /* Enable class-based dark mode in Tailwind v4 */
 @custom-variant dark (&:where(.dark, .dark *));
@@ -590,6 +590,139 @@
   --color-status-done: oklch(0.4500 0.1500 285);
 }

+/* ============================================================================
+   Theme: Business
+   Professional deep navy (#000e4e) and gray palette for corporate use
+   Designed for trust, readability, and modern depth with card shadows
+   ============================================================================ */
+
+.theme-business {
+  --radius: 0.5rem;
+  /* Concrete-like blue-gray background */
+  --background: oklch(0.9500 0.0080 265);
+  /* Deep navy foreground #000e4e */
+  --foreground: oklch(0.1700 0.0900 265);
+  /* White cards with shadow depth */
+  --card: oklch(1.0000 0 0);
+  --card-foreground: oklch(0.1700 0.0900 265);
+  --popover: oklch(1.0000 0 0);
+  --popover-foreground: oklch(0.1700 0.0900 265);
+  /* Primary: Deep navy #000e4e */
+  --primary: oklch(0.1700 0.0900 265);
+  --primary-foreground: oklch(1.0000 0 0);
+  /* Secondary: Light gray */
+  --secondary: oklch(0.9500 0.0020 265);
+  --secondary-foreground: oklch(0.1700 0.0900 265);
+  /* Muted: Soft gray */
+  --muted: oklch(0.9300 0.0020 265);
+  --muted-foreground: oklch(0.4500 0.0100 265);
+  /* Accent: Medium gray (no teal) */
+  --accent: oklch(0.5500 0.0100 265);
+  --accent-foreground: oklch(1.0000 0 0);
+  --destructive: oklch(0.5800 0.2000 25);
+  --destructive-foreground: oklch(1.0000 0 0);
+  /* Border: Subtle gray */
+  --border: oklch(0.8800 0.0030 265);
+  --input: oklch(0.9300 0.0020 265);
+  --ring: oklch(0.1700 0.0900 265);
+  /* Chart colors: Navy-gray monochrome scale */
+  --chart-1: oklch(0.1700 0.0900 265);
+  --chart-2: oklch(0.3500 0.0600 265);
+  --chart-3: oklch(0.5000 0.0400 265);
+  --chart-4: oklch(0.6500 0.0200 265);
+  --chart-5: oklch(0.8000 0.0100 265);
+  --sidebar: oklch(0.9300 0.0100 265);
+  --sidebar-foreground: oklch(0.1700 0.0900 265);
+  --sidebar-primary: oklch(0.1700 0.0900 265);
+  --sidebar-primary-foreground: oklch(1.0000 0 0);
+  --sidebar-accent: oklch(0.5500 0.0100 265);
+  --sidebar-accent-foreground: oklch(1.0000 0 0);
+  --sidebar-border: oklch(0.8800 0.0030 265);
+  --sidebar-ring: oklch(0.1700 0.0900 265);
+
+  /* Shadow variables - pronounced for card depth (2026 trend) */
+  --shadow-sm: 0 1px 3px 0 rgb(0 14 78 / 0.06), 0 1px 2px -1px rgb(0 14 78 / 0.04);
+  --shadow: 0 2px 8px 0 rgb(0 14 78 / 0.08), 0 1px 3px -1px rgb(0 14 78 / 0.06);
+  --shadow-md: 0 6px 16px -2px rgb(0 14 78 / 0.10), 0 3px 6px -3px rgb(0 14 78 / 0.08);
+  --shadow-lg: 0 12px 32px -4px rgb(0 14 78 / 0.12), 0 6px 12px -6px rgb(0 14 78 / 0.10);
+
+  /* Log level colors - professional muted tones */
+  --color-log-error: #dc2626;
+  --color-log-warning: #d97706;
+  --color-log-info: #000e4e;
+  --color-log-debug: #6b7280;
+  --color-log-success: #059669;
+
+  /* Status colors for Kanban - gray-navy scale */
+  --color-status-pending: oklch(0.9300 0.0030 265);
+  --color-status-progress: oklch(0.8500 0.0200 265);
+  --color-status-done: oklch(0.7500 0.0400 265);
+
+  /* Font stacks - system fonts for professional feel */
+  --font-sans: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif;
+  --font-mono: 'SF Mono', SFMono-Regular, ui-monospace, Consolas, 'Liberation Mono', monospace;
+}
+
+.theme-business.dark {
+  /* Very dark navy background */
+  --background: oklch(0.1200 0.0400 265);
+  --foreground: oklch(0.9400 0.0050 265);
+  /* Dark navy cards with elevation */
+  --card: oklch(0.1600 0.0500 265);
+  --card-foreground: oklch(0.9400 0.0050 265);
+  --popover: oklch(0.1400 0.0450 265);
+  --popover-foreground: oklch(0.9400 0.0050 265);
+  /* Primary: Lighter navy for dark mode */
+  --primary: oklch(0.5500 0.1200 265);
+  --primary-foreground: oklch(0.9800 0 0);
+  /* Secondary: Dark gray */
+  --secondary: oklch(0.2200 0.0200 265);
+  --secondary-foreground: oklch(0.9400 0.0050 265);
+  /* Muted: Medium-dark gray */
+  --muted: oklch(0.2500 0.0150 265);
+  --muted-foreground: oklch(0.6000 0.0100 265);
+  /* Accent: Light gray */
+  --accent: oklch(0.6500 0.0100 265);
+  --accent-foreground: oklch(0.1200 0.0400 265);
+  --destructive: oklch(0.6500 0.2000 25);
+  --destructive-foreground: oklch(1.0000 0 0);
+  --border: oklch(0.2800 0.0200 265);
+  --input: oklch(0.2200 0.0200 265);
+  --ring: oklch(0.5500 0.1200 265);
+  /* Chart colors: Navy-gray scale for dark mode */
+  --chart-1: oklch(0.5500 0.1200 265);
+  --chart-2: oklch(0.6500 0.0800 265);
+  --chart-3: oklch(0.7500 0.0400 265);
+  --chart-4: oklch(0.5000 0.0600 265);
+  --chart-5: oklch(0.4000 0.0400 265);
+  --sidebar: oklch(0.1000 0.0350 265);
+  --sidebar-foreground: oklch(0.9400 0.0050 265);
+  --sidebar-primary: oklch(0.5500 0.1200 265);
+  --sidebar-primary-foreground: oklch(0.9800 0 0);
+  --sidebar-accent: oklch(0.6500 0.0100 265);
+  --sidebar-accent-foreground: oklch(0.1200 0.0400 265);
+  --sidebar-border: oklch(0.2600 0.0180 265);
+  --sidebar-ring: oklch(0.5500 0.1200 265);
+
+  /* Shadow variables - dark mode with stronger depth */
+  --shadow-sm: 0 1px 3px 0 rgb(0 0 0 / 0.4), 0 1px 2px -1px rgb(0 0 0 / 0.3);
+  --shadow: 0 2px 8px 0 rgb(0 0 0 / 0.5), 0 1px 3px -1px rgb(0 0 0 / 0.4);
+  --shadow-md: 0 6px 16px -2px rgb(0 0 0 / 0.6), 0 3px 6px -3px rgb(0 0 0 / 0.5);
+  --shadow-lg: 0 12px 32px -4px rgb(0 0 0 / 0.7), 0 6px 12px -6px rgb(0 0 0 / 0.6);
+
+  /* Log level colors - dark mode */
+  --color-log-error: #f87171;
+  --color-log-warning: #fbbf24;
+  --color-log-info: #93c5fd;
+  --color-log-debug: #9ca3af;
+  --color-log-success: #34d399;
+
+  /* Status colors for Kanban - dark mode */
+  --color-status-pending: oklch(0.2500 0.0200 265);
+  --color-status-progress: oklch(0.3500 0.0400 265);
+  --color-status-done: oklch(0.4500 0.0600 265);
+}
+
 /* ============================================================================
    ShadCN Tailwind v4 Theme Integration
    ============================================================================ */
@@ -4,6 +4,7 @@
   "lib": ["ES2023"],
   "module": "ESNext",
   "skipLibCheck": true,
+  "types": ["node"],

   /* Bundler mode */
   "moduleResolution": "bundler",