Mirror of https://github.com/leonvanzyl/autocoder.git (synced 2026-02-02 15:23:37 +00:00)

Compare commits: 3edb380b58...master (36 commits)
| SHA1 |
|---|
| b2ab1ecc7a |
| 016eead8b4 |
| 1607fc8175 |
| e1e5209866 |
| 24481d474d |
| 94e0b05cb1 |
| dc5bcc4ae9 |
| c4d0c6c9b2 |
| e7aeea6b77 |
| e348383c1f |
| d27db31f21 |
| e01e311541 |
| 494ccffbab |
| b1419baf34 |
| 064aa0a62f |
| d8a8c83447 |
| 6609a0f7d6 |
| 4336252b30 |
| f2eb468c46 |
| 5a0085433b |
| a050fd1543 |
| 338622b734 |
| 89f6721cfa |
| 88c695259f |
| f018b4c1d8 |
| 79d02a1410 |
| 813fcde18b |
| b693de2999 |
| 21fe28f51d |
| 80b6af7b2b |
| 099f52b19c |
| dcf8b99dca |
| cf8dec9abf |
| ff1a63d104 |
| bf194ad72f |
| 43494c337f |
@@ -8,7 +8,7 @@ This command **requires** the project directory as an argument via `$ARGUMENTS`.
 
 **Example:** `/create-spec generations/my-app`
 
-**Output location:** `$ARGUMENTS/prompts/app_spec.txt` and `$ARGUMENTS/prompts/initializer_prompt.md`
+**Output location:** `$ARGUMENTS/.autocoder/prompts/app_spec.txt` and `$ARGUMENTS/.autocoder/prompts/initializer_prompt.md`
 
 If `$ARGUMENTS` is empty, inform the user they must provide a project path and exit.
 
@@ -347,13 +347,13 @@ First ask in conversation if they want to make changes.
 
 ## Output Directory
 
-The output directory is: `$ARGUMENTS/prompts/`
+The output directory is: `$ARGUMENTS/.autocoder/prompts/`
 
 Once the user approves, generate these files:
 
 ## 1. Generate `app_spec.txt`
 
-**Output path:** `$ARGUMENTS/prompts/app_spec.txt`
+**Output path:** `$ARGUMENTS/.autocoder/prompts/app_spec.txt`
 
 Create a new file using this XML structure:
 
@@ -489,7 +489,7 @@ Create a new file using this XML structure:
 
 ## 2. Update `initializer_prompt.md`
 
-**Output path:** `$ARGUMENTS/prompts/initializer_prompt.md`
+**Output path:** `$ARGUMENTS/.autocoder/prompts/initializer_prompt.md`
 
 If the output directory has an existing `initializer_prompt.md`, read it and update the feature count.
 If not, copy from `.claude/templates/initializer_prompt.template.md` first, then update.
@@ -512,7 +512,7 @@ After: **CRITICAL:** You must create exactly **25** features using the `feature
 
 ## 3. Write Status File (REQUIRED - Do This Last)
 
-**Output path:** `$ARGUMENTS/prompts/.spec_status.json`
+**Output path:** `$ARGUMENTS/.autocoder/prompts/.spec_status.json`
 
 **CRITICAL:** After you have completed ALL requested file changes, write this status file to signal completion to the UI. This is required for the "Continue to Project" button to appear.
 
@@ -524,8 +524,8 @@ Write this JSON file:
 "version": 1,
 "timestamp": "[current ISO 8601 timestamp, e.g., 2025-01-15T14:30:00.000Z]",
 "files_written": [
-"prompts/app_spec.txt",
-"prompts/initializer_prompt.md"
+".autocoder/prompts/app_spec.txt",
+".autocoder/prompts/initializer_prompt.md"
 ],
 "feature_count": [the feature count from Phase 4L]
 }
@@ -539,9 +539,9 @@ Write this JSON file:
 "version": 1,
 "timestamp": "2025-01-15T14:30:00.000Z",
 "files_written": [
-"prompts/app_spec.txt",
-"prompts/initializer_prompt.md",
-"prompts/coding_prompt.md"
+".autocoder/prompts/app_spec.txt",
+".autocoder/prompts/initializer_prompt.md",
+".autocoder/prompts/coding_prompt.md"
 ],
 "feature_count": 35
 }
@@ -559,11 +559,11 @@ Write this JSON file:
 
 Once files are generated, tell the user what to do next:
 
-> "Your specification files have been created in `$ARGUMENTS/prompts/`!
+> "Your specification files have been created in `$ARGUMENTS/.autocoder/prompts/`!
 >
 > **Files created:**
-> - `$ARGUMENTS/prompts/app_spec.txt`
-> - `$ARGUMENTS/prompts/initializer_prompt.md`
+> - `$ARGUMENTS/.autocoder/prompts/app_spec.txt`
+> - `$ARGUMENTS/.autocoder/prompts/initializer_prompt.md`
 >
 > The **Continue to Project** button should now appear. Click it to start the autonomous coding agent!
 >
@@ -42,7 +42,7 @@ You are the **Project Expansion Assistant** - an expert at understanding existin
 # FIRST: Read and Understand Existing Project
 
 **Step 1:** Read the existing specification:
-- Read `$ARGUMENTS/prompts/app_spec.txt`
+- Read `$ARGUMENTS/.autocoder/prompts/app_spec.txt`
 
 **Step 2:** Present a summary to the user:
 
@@ -231,4 +231,4 @@ If they want to add more, go back to Phase 1.
 
 # BEGIN
 
-Start by reading the app specification file at `$ARGUMENTS/prompts/app_spec.txt`, then greet the user with a summary of their existing project and ask what they want to add.
+Start by reading the app specification file at `$ARGUMENTS/.autocoder/prompts/app_spec.txt`, then greet the user with a summary of their existing project and ask what they want to add.
@@ -5,6 +5,6 @@ description: Convert GSD codebase mapping to Autocoder app_spec.txt
 
 # GSD to Autocoder Spec
 
-Convert `.planning/codebase/*.md` (from `/gsd:map-codebase`) to Autocoder's `prompts/app_spec.txt`.
+Convert `.planning/codebase/*.md` (from `/gsd:map-codebase`) to Autocoder's `.autocoder/prompts/app_spec.txt`.
 
 @.claude/skills/gsd-to-autocoder-spec/SKILL.md
@@ -40,15 +40,36 @@ Pull request(s): $ARGUMENTS
 - For Medium PRs: spawn 1-2 agents focusing on the most impacted areas
 - For Complex PRs: spawn up to 3 agents to cover security, performance, and architectural concerns
 
-4. **Vision Alignment Check**
+4. **PR Scope & Title Alignment Check**
+- Compare the PR title and description against the actual diff content
+- Check whether the PR is focused on a single coherent change or contains multiple unrelated changes
+- If the title/description describe one thing but the PR contains significantly more (e.g., title says "fix typo in README" but the diff touches 20 files across multiple domains), flag this as a **scope mismatch**
+- A scope mismatch is a **merge blocker** — recommend the author split the PR into smaller, focused PRs
+- Suggest specific ways to split the PR (e.g., "separate the refactor from the feature addition")
+- Reviewing large, unfocused PRs is impractical and error-prone; the review cannot provide adequate assurance for such changes
+
+5. **Vision Alignment Check**
 - Read the project's README.md and CLAUDE.md to understand the application's core purpose
 - Assess whether this PR aligns with the application's intended functionality
 - If the changes deviate significantly from the core vision or add functionality that doesn't serve the application's purpose, note this in the review
 - This is not a blocker, but should be flagged for the reviewer's consideration
 
-5. **Safety Assessment**
+6. **Safety Assessment**
 - Provide a review on whether the PR is safe to merge as-is
 - Provide any feedback in terms of risk level
 
-6. **Improvements**
+7. **Improvements**
 - Propose any improvements in terms of importance and complexity
+
+8. **Merge Recommendation**
+- Based on all findings, provide a clear merge/don't-merge recommendation
+- If all concerns are minor (cosmetic issues, naming suggestions, small style nits, missing comments, etc.), recommend **merging the PR** and note that the reviewer can address these minor concerns themselves with a quick follow-up commit pushed directly to master
+- If there are significant concerns (bugs, security issues, architectural problems, scope mismatch), recommend **not merging** and explain what needs to be resolved first
+
+9. **TLDR**
+- End the review with a `## TLDR` section
+- In 3-5 bullet points maximum, summarize:
+- What this PR is actually about (one sentence)
+- The key concerns, if any (or "no significant concerns")
+- **Verdict: MERGE** / **MERGE (with minor follow-up)** / **DON'T MERGE** with a one-line reason
+- This section should be scannable in under 10 seconds
@@ -9,7 +9,7 @@ description: |
 
 # GSD to Autocoder Spec Converter
 
-Converts `.planning/codebase/*.md` (GSD mapping output) to `prompts/app_spec.txt` (Autocoder format).
+Converts `.planning/codebase/*.md` (GSD mapping output) to `.autocoder/prompts/app_spec.txt` (Autocoder format).
 
 ## When to Use
 
@@ -84,7 +84,7 @@ Extract:
 
 Create `prompts/` directory:
 ```bash
-mkdir -p prompts
+mkdir -p .autocoder/prompts
 ```
 
 **Mapping GSD Documents to Autocoder Spec:**
@@ -114,7 +114,7 @@ mkdir -p prompts
 **Write the spec file** using the XML format from [references/app-spec-format.md](references/app-spec-format.md):
 
 ```bash
-cat > prompts/app_spec.txt << 'EOF'
+cat > .autocoder/prompts/app_spec.txt << 'EOF'
 <project_specification>
 <project_name>{from package.json or directory}</project_name>
 
@@ -173,9 +173,9 @@ EOF
 ### Step 5: Verify Generated Spec
 
 ```bash
-head -100 prompts/app_spec.txt
+head -100 .autocoder/prompts/app_spec.txt
 echo "---"
-grep -c "User can\|System\|API\|Feature" prompts/app_spec.txt || echo "0"
+grep -c "User can\|System\|API\|Feature" .autocoder/prompts/app_spec.txt || echo "0"
 ```
 
 **Validation checklist:**
@@ -194,7 +194,7 @@ Output:
 app_spec.txt generated from GSD codebase mapping.
 
 Source: .planning/codebase/*.md
-Output: prompts/app_spec.txt
+Output: .autocoder/prompts/app_spec.txt
 
 Next: Start Autocoder
 
@@ -49,51 +49,21 @@ Otherwise, start servers manually and document the process.
 
 #### TEST-DRIVEN DEVELOPMENT MINDSET (CRITICAL)
 
-Features are **test cases** that drive development. This is test-driven development:
+Features are **test cases** that drive development. If functionality doesn't exist, **BUILD IT** -- you are responsible for implementing ALL required functionality. Missing pages, endpoints, database tables, or components are NOT blockers; they are your job to create.
 
-- **If you can't test a feature because functionality doesn't exist → BUILD IT**
-- You are responsible for implementing ALL required functionality
-- Never assume another process will build it later
-- "Missing functionality" is NOT a blocker - it's your job to create it
-
-**Example:** Feature says "User can filter flashcards by difficulty level"
-- WRONG: "Flashcard page doesn't exist yet" → skip feature
-- RIGHT: "Flashcard page doesn't exist yet" → build flashcard page → implement filter → test feature
-
-**Note:** Your feature has been pre-assigned by the orchestrator. Use `feature_get_by_id` with your assigned feature ID to get the details.
-
-Once you've retrieved the feature, **mark it as in-progress** (if not already):
+**Note:** Your feature has been pre-assigned by the orchestrator. Use `feature_get_by_id` with your assigned feature ID to get the details. Then mark it as in-progress:
 
 ```
-# Mark feature as in-progress
 Use the feature_mark_in_progress tool with feature_id={your_assigned_id}
 ```
 
 If you get "already in-progress" error, that's OK - continue with implementation.
 
-Focus on completing one feature perfectly and completing its testing steps in this session before moving on to other features.
-It's ok if you only complete one feature in this session, as there will be more sessions later that continue to make progress.
+Focus on completing one feature perfectly in this session. It's ok if you only complete one feature, as more sessions will follow.
 
 #### When to Skip a Feature (EXTREMELY RARE)
 
-**Skipping should almost NEVER happen.** Only skip for truly external blockers you cannot control:
-
-- **External API not configured**: Third-party service credentials missing (e.g., Stripe keys, OAuth secrets)
-- **External service unavailable**: Dependency on service that's down or inaccessible
-- **Environment limitation**: Hardware or system requirement you cannot fulfill
-
-**NEVER skip because:**
-
-| Situation | Wrong Action | Correct Action |
-|-----------|--------------|----------------|
-| "Page doesn't exist" | Skip | Create the page |
-| "API endpoint missing" | Skip | Implement the endpoint |
-| "Database table not ready" | Skip | Create the migration |
-| "Component not built" | Skip | Build the component |
-| "No data to test with" | Skip | Create test data or build data entry flow |
-| "Feature X needs to be done first" | Skip | Build feature X as part of this feature |
-
-If a feature requires building other functionality first, **build that functionality**. You are the coding agent - your job is to make the feature work, not to defer it.
+Only skip for truly external blockers: missing third-party credentials (Stripe keys, OAuth secrets), unavailable external services, or unfulfillable environment requirements. **NEVER** skip because a page, endpoint, component, or data doesn't exist yet -- build it. If a feature requires other functionality first, build that functionality as part of this feature.
 
 If you must skip (truly external blocker only):
 
@@ -139,130 +109,22 @@ Use browser automation tools:
 
 ### STEP 5.5: MANDATORY VERIFICATION CHECKLIST (BEFORE MARKING ANY TEST PASSING)
 
-**You MUST complete ALL of these checks before marking any feature as "passes": true**
+**Complete ALL applicable checks before marking any feature as passing:**
 
-#### Security Verification (for protected features)
-
-- [ ] Feature respects user role permissions
-- [ ] Unauthenticated access is blocked (redirects to login)
-- [ ] API endpoint checks authorization (returns 401/403 appropriately)
-- [ ] Cannot access other users' data by manipulating URLs
-
-#### Real Data Verification (CRITICAL - NO MOCK DATA)
-
-- [ ] Created unique test data via UI (e.g., "TEST_12345_VERIFY_ME")
-- [ ] Verified the EXACT data I created appears in UI
-- [ ] Refreshed page - data persists (proves database storage)
-- [ ] Deleted the test data - verified it's gone everywhere
-- [ ] NO unexplained data appeared (would indicate mock data)
-- [ ] Dashboard/counts reflect real numbers after my changes
-- [ ] **Ran extended mock data grep (STEP 5.6) - no hits in src/ (excluding tests)**
-- [ ] **Verified no globalThis, devStore, or dev-store patterns**
-- [ ] **Server restart test passed (STEP 5.7) - data persists across restart**
-
-#### Navigation Verification
-
-- [ ] All buttons on this page link to existing routes
-- [ ] No 404 errors when clicking any interactive element
-- [ ] Back button returns to correct previous page
-- [ ] Related links (edit, view, delete) have correct IDs in URLs
-
-#### Integration Verification
-
-- [ ] Console shows ZERO JavaScript errors
-- [ ] Network tab shows successful API calls (no 500s)
-- [ ] Data returned from API matches what UI displays
-- [ ] Loading states appeared during API calls
-- [ ] Error states handle failures gracefully
+- **Security:** Feature respects role permissions; unauthenticated access blocked; API checks auth (401/403); no cross-user data leaks via URL manipulation
+- **Real Data:** Create unique test data via UI, verify it appears, refresh to confirm persistence, delete and verify removal. No unexplained data (indicates mocks). Dashboard counts reflect real numbers
+- **Mock Data Grep:** Run STEP 5.6 grep checks - no hits in src/ (excluding tests). No globalThis, devStore, or dev-store patterns
+- **Server Restart:** For data features, run STEP 5.7 - data persists across server restart
+- **Navigation:** All buttons link to existing routes, no 404s, back button works, edit/view/delete links have correct IDs
+- **Integration:** Zero JS console errors, no 500s in network tab, API data matches UI, loading/error states work
 
 ### STEP 5.6: MOCK DATA DETECTION (Before marking passing)
 
-**Run ALL these grep checks. Any hits in src/ (excluding test files) require investigation:**
-
-```bash
-# Common exclusions for test files
-EXCLUDE="--exclude=*.test.* --exclude=*.spec.* --exclude=*__test__* --exclude=*__mocks__*"
-
-# 1. In-memory storage patterns (CRITICAL - catches dev-store)
-grep -r "globalThis\." --include="*.ts" --include="*.tsx" --include="*.js" $EXCLUDE src/
-grep -r "dev-store\|devStore\|DevStore\|mock-db\|mockDb" --include="*.ts" --include="*.tsx" --include="*.js" $EXCLUDE src/
-
-# 2. Mock data variables
-grep -r "mockData\|fakeData\|sampleData\|dummyData\|testData" --include="*.ts" --include="*.tsx" --include="*.js" $EXCLUDE src/
-
-# 3. TODO/incomplete markers
-grep -r "TODO.*real\|TODO.*database\|TODO.*API\|STUB\|MOCK" --include="*.ts" --include="*.tsx" --include="*.js" $EXCLUDE src/
-
-# 4. Development-only conditionals
-grep -r "isDevelopment\|isDev\|process\.env\.NODE_ENV.*development" --include="*.ts" --include="*.tsx" --include="*.js" $EXCLUDE src/
-
-# 5. In-memory collections as data stores
-grep -r "new Map\(\)\|new Set\(\)" --include="*.ts" --include="*.tsx" --include="*.js" $EXCLUDE src/ 2>/dev/null
-```
-
-**Rule:** If ANY grep returns results in production code → investigate → FIX before marking passing.
-
-**Runtime verification:**
-1. Create unique data (e.g., "TEST_12345") → verify in UI → delete → verify gone
-2. Check database directly - all displayed data must come from real DB queries
-3. If unexplained data appears, it's mock data - fix before marking passing.
+Before marking a feature passing, grep for mock/placeholder data patterns in src/ (excluding test files): `globalThis`, `devStore`, `dev-store`, `mockDb`, `mockData`, `fakeData`, `sampleData`, `dummyData`, `testData`, `TODO.*real`, `TODO.*database`, `STUB`, `MOCK`, `isDevelopment`, `isDev`. Any hits in production code must be investigated and fixed. Also create unique test data (e.g., "TEST_12345"), verify it appears in UI, then delete and confirm removal - unexplained data indicates mock implementations.
 
 ### STEP 5.7: SERVER RESTART PERSISTENCE TEST (MANDATORY for data features)
 
-**When required:** Any feature involving CRUD operations or data persistence.
-
-**This test is NON-NEGOTIABLE. It catches in-memory storage implementations that pass all other tests.**
-
-**Steps:**
-
-1. Create unique test data via UI or API (e.g., item named "RESTART_TEST_12345")
-2. Verify data appears in UI and API response
-
-3. **STOP the server completely:**
-```bash
-# Kill by port (safer - only kills the dev server, not VS Code/Claude Code/etc.)
-# Unix/macOS:
-lsof -ti :${PORT:-3000} | xargs kill -TERM 2>/dev/null || true
-sleep 3
-lsof -ti :${PORT:-3000} | xargs kill -9 2>/dev/null || true
-sleep 2
-
-# Windows alternative (use if lsof not available):
-# netstat -ano | findstr :${PORT:-3000} | findstr LISTENING
-# taskkill /F /PID <pid_from_above> 2>nul
-
-# Verify server is stopped
-if lsof -ti :${PORT:-3000} > /dev/null 2>&1; then
-echo "ERROR: Server still running on port ${PORT:-3000}!"
-exit 1
-fi
-```
-
-4. **RESTART the server:**
-```bash
-./init.sh &
-sleep 15 # Allow server to fully start
-# Verify server is responding
-if ! curl -f http://localhost:${PORT:-3000}/api/health && ! curl -f http://localhost:${PORT:-3000}; then
-echo "ERROR: Server failed to start after restart"
-exit 1
-fi
-```
-
-5. **Query for test data - it MUST still exist**
-- Via UI: Navigate to data location, verify data appears
-- Via API: `curl http://localhost:${PORT:-3000}/api/items` - verify data in response
-
-6. **If data is GONE:** Implementation uses in-memory storage → CRITICAL FAIL
-- Run all grep commands from STEP 5.6 to identify the mock pattern
-- You MUST fix the in-memory storage implementation before proceeding
-- Replace in-memory storage with real database queries
-
-7. **Clean up test data** after successful verification
-
-**Why this test exists:** In-memory stores like `globalThis.devStore` pass all other tests because data persists during a single server run. Only a full server restart reveals this bug. Skipping this step WILL allow dev-store implementations to slip through.
-
-**YOLO Mode Note:** Even in YOLO mode, this verification is MANDATORY for data features. Use curl instead of browser automation.
+For any feature involving CRUD or data persistence: create unique test data (e.g., "RESTART_TEST_12345"), verify it exists, then fully stop and restart the dev server. After restart, verify the test data still exists. If data is gone, the implementation uses in-memory storage -- run STEP 5.6 greps, find the mock pattern, and replace with real database queries. Clean up test data after verification. This test catches in-memory stores like `globalThis.devStore` that pass all other tests but lose data on restart.
 
 ### STEP 6: UPDATE FEATURE STATUS (CAREFULLY!)
 
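The condensed STEP 5.6 wording above lists the forbidden patterns but no longer carries a ready-made command. A single-pass check covering the same list might look like the sketch below; the `src/` path, file extensions, and exclusion globs are assumptions that may need adjusting per project.

```bash
# Sketch only: one grep pass over the mock-data patterns named in STEP 5.6 above.
PATTERNS='globalThis\.|dev-store|devStore|mockDb|mockData|fakeData|sampleData|dummyData|testData|TODO.*real|TODO.*database|STUB|MOCK|isDevelopment|isDev'
grep -rnE "$PATTERNS" src/ \
  --include='*.ts' --include='*.tsx' --include='*.js' \
  --exclude='*.test.*' --exclude='*.spec.*' --exclude='*__mocks__*' \
  && echo "Hits found - investigate before marking the feature passing" \
  || echo "No mock-data patterns found"
```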
@@ -1,58 +1,29 @@
 ## YOUR ROLE - TESTING AGENT
 
-You are a **testing agent** responsible for **regression testing** previously-passing features.
+You are a **testing agent** responsible for **regression testing** previously-passing features. If you find a regression, you must fix it.
 
-Your job is to ensure that features marked as "passing" still work correctly. If you find a regression (a feature that no longer works), you must fix it.
-
-### STEP 1: GET YOUR BEARINGS (MANDATORY)
-
-Start by orienting yourself:
-
-```bash
-# 1. See your working directory
-pwd
-
-# 2. List files to understand project structure
-ls -la
-
-# 3. Read progress notes from previous sessions (last 200 lines)
-tail -200 claude-progress.txt
-
-# 4. Check recent git history
-git log --oneline -10
-```
-
-Then use MCP tools to check feature status:
+## ASSIGNED FEATURES FOR REGRESSION TESTING
+
+You are assigned to test the following features: {{TESTING_FEATURE_IDS}}
+
+### Workflow for EACH feature:
+1. Call `feature_get_by_id` with the feature ID
+2. Read the feature's verification steps
+3. Test the feature in the browser
+4. Call `feature_mark_passing` or `feature_mark_failing`
+5. Move to the next feature
+
+---
+
+### STEP 1: GET YOUR ASSIGNED FEATURE(S)
+
+Your features have been pre-assigned by the orchestrator. For each feature ID listed above, use `feature_get_by_id` to get the details:
 
 ```
-# 5. Get progress statistics
-Use the feature_get_stats tool
+Use the feature_get_by_id tool with feature_id=<ID>
 ```
 
-### STEP 2: START SERVERS (IF NOT RUNNING)
-
-If `init.sh` exists, run it:
-
-```bash
-chmod +x init.sh
-./init.sh
-```
-
-Otherwise, start servers manually.
-
-### STEP 3: GET YOUR ASSIGNED FEATURE
-
-Your feature has been pre-assigned by the orchestrator. Use `feature_get_by_id` to get the details:
-
-```
-Use the feature_get_by_id tool with feature_id={your_assigned_id}
-```
-
-The orchestrator has already claimed this feature for testing (set `testing_in_progress=true`).
-
-**CRITICAL:** You MUST call `feature_release_testing` when done, regardless of pass/fail.
-
-### STEP 4: VERIFY THE FEATURE
+### STEP 2: VERIFY THE FEATURE
 
 **CRITICAL:** You MUST verify the feature through the actual UI using browser automation.
 
@@ -81,21 +52,11 @@ Use browser automation tools:
 - browser_console_messages - Get browser console output (check for errors)
 - browser_network_requests - Monitor API calls
 
-### STEP 5: HANDLE RESULTS
+### STEP 3: HANDLE RESULTS
 
 #### If the feature PASSES:
 
-The feature still works correctly. Release the claim and end your session:
-
-```
-# Release the testing claim (tested_ok=true)
-Use the feature_release_testing tool with feature_id={id} and tested_ok=true
-
-# Log the successful verification
-echo "[Testing] Feature #{id} verified - still passing" >> claude-progress.txt
-```
-
-**DO NOT** call feature_mark_passing again - it's already passing.
+The feature still works correctly. **DO NOT** call feature_mark_passing again -- it's already passing. End your session.
 
 #### If the feature FAILS (regression found):
 
@@ -125,13 +86,7 @@ A regression has been introduced. You MUST fix it:
 Use the feature_mark_passing tool with feature_id={id}
 ```
 
-6. **Release the testing claim:**
-```
-Use the feature_release_testing tool with feature_id={id} and tested_ok=false
-```
-Note: tested_ok=false because we found a regression (even though we fixed it).
-
-7. **Commit the fix:**
+6. **Commit the fix:**
 ```bash
 git add .
 git commit -m "Fix regression in [feature name]
@@ -141,14 +96,6 @@ A regression has been introduced. You MUST fix it:
 - Verified with browser automation"
 ```
 
-### STEP 6: UPDATE PROGRESS AND END
-
-Update `claude-progress.txt`:
-
-```bash
-echo "[Testing] Session complete - verified/fixed feature #{id}" >> claude-progress.txt
-```
-
 ---
 
 ## AVAILABLE MCP TOOLS
@@ -156,12 +103,11 @@ echo "[Testing] Session complete - verified/fixed feature #{id}" >> claude-progr
 ### Feature Management
 - `feature_get_stats` - Get progress overview (passing/in_progress/total counts)
 - `feature_get_by_id` - Get your assigned feature details
-- `feature_release_testing` - **REQUIRED** - Release claim after testing (pass tested_ok=true/false)
 - `feature_mark_failing` - Mark a feature as failing (when you find a regression)
 - `feature_mark_passing` - Mark a feature as passing (after fixing a regression)
 
 ### Browser Automation (Playwright)
-All interaction tools have **built-in auto-wait** - no manual timeouts needed.
+All interaction tools have **built-in auto-wait** -- no manual timeouts needed.
 
 - `browser_navigate` - Navigate to URL
 - `browser_take_screenshot` - Capture screenshot
@@ -178,9 +124,7 @@ All interaction tools have **built-in auto-wait** - no manual timeouts needed.
 
 ## IMPORTANT REMINDERS
 
-**Your Goal:** Verify that passing features still work, and fix any regressions found.
+**Your Goal:** Test each assigned feature thoroughly. Verify it still works, and fix any regression found. Process ALL features in your list before ending your session.
 
-**This Session's Goal:** Test ONE feature thoroughly.
-
 **Quality Bar:**
 - Zero console errors
@@ -188,21 +132,15 @@ All interaction tools have **built-in auto-wait** - no manual timeouts needed.
 - Visual appearance correct
 - API calls succeed
 
-**CRITICAL - Always release your claim:**
-- Call `feature_release_testing` when done, whether pass or fail
-- Pass `tested_ok=true` if the feature passed
-- Pass `tested_ok=false` if you found a regression
-
 **If you find a regression:**
 1. Mark the feature as failing immediately
 2. Fix the issue
 3. Verify the fix with browser automation
 4. Mark as passing only after thorough verification
-5. Release the testing claim with `tested_ok=false`
-6. Commit the fix
+5. Commit the fix
 
-**You have one iteration.** Focus on testing ONE feature thoroughly.
+**You have one iteration.** Test all assigned features before ending.
 
 ---
 
-Begin by running Step 1 (Get Your Bearings).
+Begin by running Step 1 for the first feature in your assigned list.
.env.example (12 changed lines)

@@ -22,6 +22,18 @@
 # Example: EXTRA_READ_PATHS=/Volumes/Data/dev,/Users/shared/libs
 # EXTRA_READ_PATHS=
 
+# Google Cloud Vertex AI Configuration (Optional)
+# To use Claude via Vertex AI on Google Cloud Platform, uncomment and set these variables.
+# Requires: gcloud CLI installed and authenticated (run: gcloud auth application-default login)
+# Note: Use @ instead of - in model names (e.g., claude-opus-4-5@20251101)
+#
+# CLAUDE_CODE_USE_VERTEX=1
+# CLOUD_ML_REGION=us-east5
+# ANTHROPIC_VERTEX_PROJECT_ID=your-gcp-project-id
+# ANTHROPIC_DEFAULT_OPUS_MODEL=claude-opus-4-5@20251101
+# ANTHROPIC_DEFAULT_SONNET_MODEL=claude-sonnet-4-5@20250929
+# ANTHROPIC_DEFAULT_HAIKU_MODEL=claude-3-5-haiku@20241022
+
 # GLM/Alternative API Configuration (Optional)
 # To use Zhipu AI's GLM models instead of Claude, uncomment and set these variables.
 # This only affects AutoCoder - your global Claude Code settings remain unchanged.
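Before uncommenting the Vertex AI block above, it is worth confirming that Application Default Credentials are actually in place. A quick sanity check with the gcloud CLI (the project ID below is a placeholder):

```bash
# Authenticate once (opens a browser)
gcloud auth application-default login

# Verify ADC can mint a token; if this fails, rerun the login above
gcloud auth application-default print-access-token > /dev/null && echo "ADC OK"

# Optional: align the ADC quota project with ANTHROPIC_VERTEX_PROJECT_ID
gcloud auth application-default set-quota-project your-gcp-project-id
```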
.gitignore (vendored, 2 changed lines)

@@ -76,6 +76,8 @@ ui/playwright-report/
 .dmypy.json
 dmypy.json
 
+.ruff_cache/
+
 # ===================
 # Claude Code
 # ===================
CLAUDE.md (175 changed lines)

@@ -54,6 +54,12 @@ python autonomous_agent_demo.py --project-dir my-app --yolo
 
 # Parallel mode: run multiple agents concurrently (1-5 agents)
 python autonomous_agent_demo.py --project-dir my-app --parallel --max-concurrency 3
 
+# Batch mode: implement multiple features per agent session (1-3)
+python autonomous_agent_demo.py --project-dir my-app --batch-size 3
+
+# Batch specific features by ID
+python autonomous_agent_demo.py --project-dir my-app --batch-features 1,2,3
 ```
 
 ### YOLO Mode (Rapid Prototyping)
@@ -68,7 +74,7 @@ python autonomous_agent_demo.py --project-dir my-app --yolo
 ```
 
 **What's different in YOLO mode:**
-- No regression testing (skips `feature_get_for_regression`)
+- No regression testing
 - No Playwright MCP server (browser automation disabled)
 - Features marked passing after lint/type-check succeeds
 - Faster iteration for prototyping
@@ -99,8 +105,11 @@ npm run lint # Run ESLint
 ```bash
 ruff check . # Lint
 mypy . # Type check
-python test_security.py # Security unit tests (163 tests)
+python test_security.py # Security unit tests (12 tests)
 python test_security_integration.py # Integration tests (9 tests)
+python -m pytest test_client.py # Client tests (20 tests)
+python -m pytest test_dependency_resolver.py # Dependency resolver tests (12 tests)
+python -m pytest test_rate_limit_utils.py # Rate limit tests (22 tests)
 ```
 
 ### React UI
@@ -108,11 +117,17 @@ python test_security_integration.py # Integration tests (9 tests)
 ```bash
 cd ui
 npm run lint # ESLint
-npm run build # Type check + build
+npm run build # Type check + build (Vite 7)
 npm run test:e2e # Playwright end-to-end tests
 npm run test:e2e:ui # Playwright tests with UI
 ```
 
+### CI/CD
+
+GitHub Actions (`.github/workflows/ci.yml`) runs on push/PR to master:
+- **Python job**: ruff lint + security tests
+- **UI job**: ESLint + TypeScript build
+
 ### Code Quality
 
 Configuration in `pyproject.toml`:
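The CI jobs added above can be approximated locally before pushing, using the commands already documented in this file; a rough sketch:

```bash
#!/usr/bin/env bash
# Rough local approximation of the two CI jobs described above.
set -euo pipefail

# Python job: ruff lint + security tests
ruff check .
python test_security.py
python test_security_integration.py

# UI job: ESLint + TypeScript build
cd ui
npm run lint
npm run build
```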
@@ -124,15 +139,21 @@ Configuration in `pyproject.toml`:
 ### Core Python Modules
 
 - `start.py` - CLI launcher with project creation/selection menu
-- `autonomous_agent_demo.py` - Entry point for running the agent
+- `autonomous_agent_demo.py` - Entry point for running the agent (supports `--yolo`, `--parallel`, `--batch-size`, `--batch-features`)
+- `autocoder_paths.py` - Central path resolution with dual-path backward compatibility and migration
 - `agent.py` - Agent session loop using Claude Agent SDK
-- `client.py` - ClaudeSDKClient configuration with security hooks and MCP servers
+- `client.py` - ClaudeSDKClient configuration with security hooks, MCP servers, and Vertex AI support
 - `security.py` - Bash command allowlist validation (ALLOWED_COMMANDS whitelist)
-- `prompts.py` - Prompt template loading with project-specific fallback
+- `prompts.py` - Prompt template loading with project-specific fallback and batch feature prompts
 - `progress.py` - Progress tracking, database queries, webhook notifications
-- `registry.py` - Project registry for mapping names to paths (cross-platform)
+- `registry.py` - Project registry for mapping names to paths (cross-platform), global settings model
 - `parallel_orchestrator.py` - Concurrent agent execution with dependency-aware scheduling
+- `auth.py` - Authentication error detection for Claude CLI
+- `env_constants.py` - Shared environment variable constants (API_ENV_VARS) used by client.py and chat sessions
+- `rate_limit_utils.py` - Rate limit detection, retry parsing, exponential backoff with jitter
+- `api/database.py` - SQLAlchemy models (Feature, Schedule, ScheduleOverride)
 - `api/dependency_resolver.py` - Cycle detection (Kahn's algorithm + DFS) and dependency validation
+- `api/migration.py` - JSON-to-SQLite migration utility
 
 ### Project Registry
 
@@ -146,13 +167,36 @@ The registry uses:
 
 ### Server API (server/)
 
-The FastAPI server provides REST endpoints for the UI:
+The FastAPI server provides REST and WebSocket endpoints for the UI:
 
-- `server/routers/projects.py` - Project CRUD with registry integration
-- `server/routers/features.py` - Feature management
-- `server/routers/agent.py` - Agent control (start/stop/pause/resume)
-- `server/routers/filesystem.py` - Filesystem browser API with security controls
-- `server/routers/spec_creation.py` - WebSocket for interactive spec creation
+**Routers** (`server/routers/`):
+- `projects.py` - Project CRUD with registry integration
+- `features.py` - Feature management
+- `agent.py` - Agent control (start/stop/pause/resume)
+- `filesystem.py` - Filesystem browser API with security controls
+- `spec_creation.py` - WebSocket for interactive spec creation
+- `expand_project.py` - Interactive project expansion via natural language
+- `assistant_chat.py` - Read-only project assistant chat (WebSocket/REST)
+- `terminal.py` - Interactive terminal I/O with PTY support (WebSocket bidirectional)
+- `devserver.py` - Dev server control (start/stop) and config
+- `schedules.py` - CRUD for time-based agent scheduling
+- `settings.py` - Global settings management (model selection, YOLO, batch size, headless browser)
+
+**Services** (`server/services/`):
+- `process_manager.py` - Agent process lifecycle management
+- `project_config.py` - Project type detection and dev command management
+- `terminal_manager.py` - Terminal session management with PTY (`pywinpty` on Windows)
+- `scheduler_service.py` - APScheduler-based automated agent scheduling
+- `dev_server_manager.py` - Dev server lifecycle management
+- `assistant_chat_session.py` / `assistant_database.py` - Assistant chat sessions with SQLite persistence
+- `spec_chat_session.py` - Spec creation chat sessions
+- `expand_chat_session.py` - Expand project chat sessions
+- `chat_constants.py` - Shared constants for chat services
+
+**Utilities** (`server/utils/`):
+- `process_utils.py` - Process management utilities
+- `project_helpers.py` - Project path resolution helpers
+- `validation.py` - Project name validation
 
 ### Feature Management
 
@@ -163,18 +207,26 @@ Features are stored in SQLite (`features.db`) via SQLAlchemy. The agent interact
 
 MCP tools available to the agent:
 - `feature_get_stats` - Progress statistics
-- `feature_get_next` - Get highest-priority pending feature (respects dependencies)
-- `feature_claim_next` - Atomically claim next available feature (for parallel mode)
-- `feature_get_for_regression` - Random passing features for regression testing
+- `feature_get_by_id` - Get a single feature by ID
+- `feature_get_summary` - Get summary of all features
+- `feature_get_ready` - Get features ready to work on (dependencies met)
+- `feature_get_blocked` - Get features blocked by unmet dependencies
+- `feature_get_graph` - Get full dependency graph
+- `feature_claim_and_get` - Atomically claim next available feature (for parallel mode)
+- `feature_mark_in_progress` - Mark feature as in progress
 - `feature_mark_passing` - Mark feature complete
+- `feature_mark_failing` - Mark feature as failing
 - `feature_skip` - Move feature to end of queue
+- `feature_clear_in_progress` - Clear in-progress status
 - `feature_create_bulk` - Initialize all features (used by initializer)
+- `feature_create` - Create a single feature
 - `feature_add_dependency` - Add dependency between features (with cycle detection)
 - `feature_remove_dependency` - Remove a dependency
+- `feature_set_dependencies` - Set all dependencies for a feature at once
 
 ### React UI (ui/)
 
-- Tech stack: React 19, TypeScript, TanStack Query, Tailwind CSS v4, Radix UI, dagre (graph layout)
+- Tech stack: React 19, TypeScript, Vite 7, TanStack Query, Tailwind CSS v4, Radix UI, dagre (graph layout), xterm.js (terminal)
 - `src/App.tsx` - Main app with project selection, kanban board, agent controls
 - `src/hooks/useWebSocket.ts` - Real-time updates via WebSocket (progress, agent status, logs, agent updates)
 - `src/hooks/useProjects.ts` - React Query hooks for API calls
@@ -186,6 +238,12 @@ Key components:
 - `DependencyGraph.tsx` - Interactive node graph visualization with dagre layout
 - `CelebrationOverlay.tsx` - Confetti animation on feature completion
 - `FolderBrowser.tsx` - Server-side filesystem browser for project folder selection
+- `Terminal.tsx` / `TerminalTabs.tsx` - xterm.js-based multi-tab terminal
+- `AssistantPanel.tsx` / `AssistantChat.tsx` - AI assistant for project Q&A
+- `ExpandProjectModal.tsx` / `ExpandProjectChat.tsx` - Add features via natural language
+- `DevServerControl.tsx` - Dev server start/stop control
+- `ScheduleModal.tsx` - Schedule management UI
+- `SettingsModal.tsx` - Global settings panel
 
 Keyboard shortcuts (press `?` for help):
 - `D` - Toggle debug panel
@@ -197,12 +255,17 @@ Keyboard shortcuts (press `?` for help):
 ### Project Structure for Generated Apps
 
 Projects can be stored in any directory (registered in `~/.autocoder/registry.db`). Each project contains:
-- `prompts/app_spec.txt` - Application specification (XML format)
-- `prompts/initializer_prompt.md` - First session prompt
-- `prompts/coding_prompt.md` - Continuation session prompt
-- `features.db` - SQLite database with feature test cases
-- `.agent.lock` - Lock file to prevent multiple agent instances
+- `.autocoder/prompts/app_spec.txt` - Application specification (XML format)
+- `.autocoder/prompts/initializer_prompt.md` - First session prompt
+- `.autocoder/prompts/coding_prompt.md` - Continuation session prompt
+- `.autocoder/features.db` - SQLite database with feature test cases
+- `.autocoder/.agent.lock` - Lock file to prevent multiple agent instances
 - `.autocoder/allowed_commands.yaml` - Project-specific bash command allowlist (optional)
+- `.autocoder/.gitignore` - Ignores runtime files
+- `CLAUDE.md` - Stays at project root (SDK convention)
+- `app_spec.txt` - Root copy for agent template compatibility
+
+Legacy projects with files at root level (e.g., `features.db`, `prompts/`) are auto-migrated to `.autocoder/` on next agent start. Dual-path resolution ensures old and new layouts work transparently.
 
 ### Security Model
 
@@ -242,15 +305,6 @@ The following directories (relative to home) are always blocked:
 - `.docker`, `.config/gcloud` - Container/cloud configs
 - `.npmrc`, `.pypirc`, `.netrc` - Package manager credentials
 
-**Example Output:**
-
-```
-Created security settings at /path/to/project/.claude_settings.json
-- Sandbox enabled (OS-level bash isolation)
-- Filesystem restricted to: /path/to/project
-- Extra read paths (validated): /Users/me/docs, /opt/shared-libs
-```
-
 #### Per-Project Allowed Commands
 
 The agent's bash command access is controlled through a hierarchical configuration system:
@@ -312,13 +366,28 @@ blocked_commands:
 
 **Files:**
 - `security.py` - Command validation logic and hardcoded blocklist
-- `test_security.py` - Unit tests for security system (136 tests)
-- `test_security_integration.py` - Integration tests with real hooks (9 tests)
-- `TEST_SECURITY.md` - Quick testing reference guide
+- `test_security.py` - Unit tests for security system
+- `test_security_integration.py` - Integration tests with real hooks
 - `examples/project_allowed_commands.yaml` - Project config example (all commented by default)
 - `examples/org_config.yaml` - Org config example (all commented by default)
 - `examples/README.md` - Comprehensive guide with use cases, testing, and troubleshooting
-- `PHASE3_SPEC.md` - Specification for mid-session approval feature (future enhancement)
+
+### Vertex AI Configuration (Optional)
+
+Run coding agents via Google Cloud Vertex AI:
+
+1. Install and authenticate gcloud CLI: `gcloud auth application-default login`
+2. Configure `.env`:
+```
+CLAUDE_CODE_USE_VERTEX=1
+CLOUD_ML_REGION=us-east5
+ANTHROPIC_VERTEX_PROJECT_ID=your-gcp-project-id
+ANTHROPIC_DEFAULT_OPUS_MODEL=claude-opus-4-5@20251101
+ANTHROPIC_DEFAULT_SONNET_MODEL=claude-sonnet-4-5@20250929
+ANTHROPIC_DEFAULT_HAIKU_MODEL=claude-3-5-haiku@20241022
+```
+
+**Note:** Use `@` instead of `-` in model names for Vertex AI.
 
 ### Ollama Local Models (Optional)
 
@@ -355,8 +424,24 @@ Run coding agents using local models via Ollama v0.14.0+:
 
 ## Claude Code Integration
 
-- `.claude/commands/create-spec.md` - `/create-spec` slash command for interactive spec creation
-- `.claude/skills/frontend-design/SKILL.md` - Skill for distinctive UI design
+**Slash commands** (`.claude/commands/`):
+- `/create-spec` - Interactive spec creation for new projects
+- `/expand-project` - Expand existing project with new features
+- `/gsd-to-autocoder-spec` - Convert GSD codebase mapping to app_spec.txt
+- `/check-code` - Run lint and type-check for code quality
+- `/checkpoint` - Create comprehensive checkpoint commit
+- `/review-pr` - Review pull requests
+
+**Custom agents** (`.claude/agents/`):
+- `coder.md` - Elite software architect agent for code implementation (Opus)
+- `code-review.md` - Code review agent for quality/security/performance analysis (Opus)
+- `deep-dive.md` - Technical investigator for deep analysis and debugging (Opus)
+
+**Skills** (`.claude/skills/`):
+- `frontend-design` - Distinctive, production-grade UI design
+- `gsd-to-autocoder-spec` - Convert GSD codebase mapping to Autocoder app_spec format
+
+**Other:**
 - `.claude/templates/` - Prompt templates copied to new projects
 - `examples/` - Configuration examples and documentation for security settings
 
@@ -364,12 +449,12 @@ Run coding agents using local models via Ollama v0.14.0+:
|
|||||||
|
|
||||||
### Prompt Loading Fallback Chain

1. Project-specific: `{project_dir}/.autocoder/prompts/{name}.md` (or legacy `{project_dir}/prompts/{name}.md`)
2. Base template: `.claude/templates/{name}.template.md`

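As a rough illustration of that chain, a loader might look like the following sketch; the function name `load_prompt` is hypothetical, and the real implementation in `prompts.py` may differ:

```python
from pathlib import Path

def load_prompt(project_dir: Path, name: str) -> str:
    """Minimal sketch of the fallback chain above (hypothetical helper)."""
    candidates = [
        project_dir / ".autocoder" / "prompts" / f"{name}.md",  # new project layout
        project_dir / "prompts" / f"{name}.md",                 # legacy root-level layout
        Path(".claude/templates") / f"{name}.template.md",      # base template
    ]
    for path in candidates:
        if path.exists():
            return path.read_text(encoding="utf-8")
    raise FileNotFoundError(f"No prompt found for {name!r}")
```
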
### Agent Session Flow

1. Check if `.autocoder/features.db` has features (determines initializer vs coding agent)
2. Create ClaudeSDKClient with security settings
3. Send prompt and stream response
4. Auto-continue with 3-second delay between sessions

@@ -387,7 +472,7 @@ The UI receives updates via WebSocket (`/ws/projects/{project_name}`):

When running with `--parallel`, the orchestrator:

1. Spawns multiple Claude agents as subprocesses (up to `--max-concurrency`)
2. Each agent claims features atomically via `feature_claim_and_get` (sketched below)
3. Features blocked by unmet dependencies are skipped
4. Browser contexts are isolated per agent using `--isolated` flag
5. AgentTracker parses output and emits `agent_update` messages for UI

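A minimal sketch of what an atomic claim could look like; the real `feature_claim_and_get` MCP tool is not shown in this diff, the import path is illustrative, and the `Feature` column names used here (`in_progress`, `priority`) are assumptions:

```python
from api.database import Feature, atomic_transaction  # atomic_transaction appears later in this diff

def claim_next_feature(session_maker):
    """Sketch: atomically claim one unclaimed feature, or return None."""
    with atomic_transaction(session_maker) as session:
        # BEGIN IMMEDIATE (configured on the engine) holds the write lock for the
        # whole block, so two agents cannot both see the same row as unclaimed.
        feature = (
            session.query(Feature)
            .filter(Feature.in_progress.is_(False))
            .order_by(Feature.priority)
            .first()
        )
        if feature is not None:
            feature.in_progress = True  # claim it before the lock is released
        return feature
```
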
@@ -400,6 +485,16 @@ The orchestrator enforces strict bounds on concurrent processes:

- Testing agents are capped at `max_concurrency` (same as coding agents)
- Total process count never exceeds 11 Python processes (1 orchestrator + 5 coding + 5 testing)

### Multi-Feature Batching

Agents can implement multiple features per session using `--batch-size` (1-3, default: 3):

- `--batch-size N` - Max features per coding agent batch
- `--testing-batch-size N` - Features per testing batch (1-5, default: 3)
- `--batch-features 1,2,3` - Specific feature IDs for batch implementation
- `--testing-batch-features 1,2,3` - Specific feature IDs for batch regression testing
- `prompts.py` provides `get_batch_feature_prompt()` for multi-feature prompt generation (see the sketch after this list)
- Configurable in UI via settings panel

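The actual `get_batch_feature_prompt()` implementation is not included in this excerpt; a hedged sketch of the shape such a helper could take (the prompt wording below is an assumption):

```python
from pathlib import Path

def get_batch_feature_prompt(feature_ids: list[int], project_dir: Path, yolo_mode: bool = False) -> str:
    """Sketch: build one prompt covering several features for a single agent session."""
    ids = ", ".join(f"#{fid}" for fid in feature_ids)
    testing_note = "" if yolo_mode else "Verify each feature in the browser before marking it passing.\n"
    return (
        f"Project directory: {project_dir}\n"
        f"Implement these features in order: {ids}.\n"
        "Mark each feature complete in the feature database as you finish it.\n"
        f"{testing_note}"
    )
```
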
### Design System

The UI uses a **neobrutalism** design with Tailwind CSS v4:

@@ -1,228 +0,0 @@
# Custom Updates - AutoCoder

This document tracks all customizations made to AutoCoder that deviate from the upstream repository. Reference this file before any updates to preserve these changes.

---

## Table of Contents

1. [UI Theme Customization](#1-ui-theme-customization)
2. [Playwright Browser Configuration](#2-playwright-browser-configuration)
3. [Update Checklist](#update-checklist)

---

## 1. UI Theme Customization

### Overview

The UI has been customized from the default **neobrutalism** style to a clean **Twitter/Supabase-style** design.

**Design Changes:**
- No shadows
- Thin borders (1px)
- Rounded corners (1.3rem base)
- Blue accent color (Twitter blue)
- Clean typography (Open Sans)

### Modified Files

#### `ui/src/styles/custom-theme.css`

**Purpose:** Main theme override file that replaces neo design with clean Twitter style.

**Key Changes:**
- All `--shadow-neo-*` variables set to `none`
- All status colors (`pending`, `progress`, `done`) use Twitter blue
- Rounded corners: `--radius-neo-lg: 1.3rem`
- Font: Open Sans
- Removed all transform effects on hover
- Dark mode with proper contrast

**CSS Variables (Light Mode):**
```css
--color-neo-accent: oklch(0.6723 0.1606 244.9955); /* Twitter blue */
--color-neo-pending: oklch(0.6723 0.1606 244.9955);
--color-neo-progress: oklch(0.6723 0.1606 244.9955);
--color-neo-done: oklch(0.6723 0.1606 244.9955);
```

**CSS Variables (Dark Mode):**
```css
--color-neo-bg: oklch(0.08 0 0);
--color-neo-card: oklch(0.16 0.005 250);
--color-neo-border: oklch(0.30 0 0);
```

**How to preserve:** This file should NOT be overwritten. It loads after `globals.css` and overrides it.

---

#### `ui/src/components/KanbanColumn.tsx`

**Purpose:** Modified to support themeable kanban columns without inline styles.

**Changes:**

1. **colorMap changed from inline colors to CSS classes:**
```tsx
// BEFORE (original):
const colorMap = {
  pending: 'var(--color-neo-pending)',
  progress: 'var(--color-neo-progress)',
  done: 'var(--color-neo-done)',
}

// AFTER (customized):
const colorMap = {
  pending: 'kanban-header-pending',
  progress: 'kanban-header-progress',
  done: 'kanban-header-done',
}
```

2. **Column div uses CSS class instead of inline style:**
```tsx
// BEFORE:
<div className="neo-card overflow-hidden" style={{ borderColor: colorMap[color] }}>

// AFTER:
<div className={`neo-card overflow-hidden kanban-column ${colorMap[color]}`}>
```

3. **Header div simplified (removed duplicate color class):**
```tsx
// BEFORE:
<div className={`... ${colorMap[color]}`} style={{ backgroundColor: colorMap[color] }}>

// AFTER:
<div className="kanban-header px-4 py-3 border-b border-[var(--color-neo-border)]">
```

4. **Title text color:**
```tsx
// BEFORE:
text-[var(--color-neo-text-on-bright)]

// AFTER:
text-[var(--color-neo-text)]
```

---

## 2. Playwright Browser Configuration

### Overview

Changed default Playwright settings for better performance:
- **Default browser:** Firefox (lower CPU usage)
- **Default mode:** Headless (saves resources)

### Modified Files

#### `client.py`

**Changes:**

```python
# BEFORE:
DEFAULT_PLAYWRIGHT_HEADLESS = False

# AFTER:
DEFAULT_PLAYWRIGHT_HEADLESS = True
DEFAULT_PLAYWRIGHT_BROWSER = "firefox"
```

**New function added:**
```python
def get_playwright_browser() -> str:
    """
    Get the browser to use for Playwright.
    Options: chrome, firefox, webkit, msedge
    Firefox is recommended for lower CPU usage.
    """
    return os.getenv("PLAYWRIGHT_BROWSER", DEFAULT_PLAYWRIGHT_BROWSER).lower()
```

**Playwright args updated:**
```python
playwright_args = [
    "@playwright/mcp@latest",
    "--viewport-size", "1280x720",
    "--browser", browser,  # NEW: configurable browser
]
```

---

#### `.env.example`

**Updated documentation:**
```bash
# PLAYWRIGHT_BROWSER: Which browser to use for testing
# - firefox: Lower CPU usage, recommended (default)
# - chrome: Google Chrome
# - webkit: Safari engine
# - msedge: Microsoft Edge
# PLAYWRIGHT_BROWSER=firefox

# PLAYWRIGHT_HEADLESS: Run browser without visible window
# - true: Browser runs in background, saves CPU (default)
# - false: Browser opens a visible window (useful for debugging)
# PLAYWRIGHT_HEADLESS=true
```

---

## 3. Update Checklist

When updating AutoCoder from upstream, verify these items:

### UI Changes
- [ ] `ui/src/styles/custom-theme.css` is preserved
- [ ] `ui/src/components/KanbanColumn.tsx` changes are preserved
- [ ] Run `npm run build` in `ui/` directory
- [ ] Test both light and dark modes

### Backend Changes
- [ ] `client.py` - Playwright browser/headless defaults preserved
- [ ] `.env.example` - Documentation updates preserved

### General
- [ ] Verify Playwright uses Firefox by default
- [ ] Check that browser runs headless by default

---

## Reverting to Defaults

### UI Only
```bash
rm ui/src/styles/custom-theme.css
git checkout ui/src/components/KanbanColumn.tsx
cd ui && npm run build
```

### Backend Only
```bash
git checkout client.py .env.example
```

---

## Files Summary

| File | Type | Change Description |
|------|------|-------------------|
| `ui/src/styles/custom-theme.css` | UI | Twitter-style theme |
| `ui/src/components/KanbanColumn.tsx` | UI | Themeable kanban columns |
| `ui/src/main.tsx` | UI | Imports custom theme |
| `client.py` | Backend | Firefox + headless defaults |
| `.env.example` | Config | Updated documentation |

---

## Last Updated

**Date:** January 2026
**PR:** #93 - Twitter-style UI theme with custom theme override system

1591 PHASE3_SPEC.md
File diff suppressed because it is too large

@@ -1,22 +0,0 @@
Let's call it Simple Todo. This is a really simple web app that I can use to track my to-do items using a Kanban
board. I should be able to add to-dos and then drag and drop them through the Kanban board. The different columns in
the Kanban board are:

- To Do
- In Progress
- Done

The app should use a neobrutalism design.

There is no need for user authentication either. All the to-dos will be stored in local storage, so each user has
access to all of their to-dos when they open their browser. So do not worry about implementing a backend with user
authentication or a database. Simply store everything in local storage. As for the design, please try to avoid AI
slop, so use your front-end design skills to design something beautiful and practical. As for the content of the
to-dos, we should store:

- The name or the title at the very least
- Optionally, we can also set tags, due dates, and priorities which should be represented as beautiful little badges
on the to-do card.

Users should have the ability to easily clear out all the completed To-Dos. They should also be able to filter and
search for To-Dos as well.

You choose the rest. Keep it simple. Should be 25 features.

129 agent.py
@@ -23,14 +23,27 @@ if sys.platform == "win32":
    sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding="utf-8", errors="replace", line_buffering=True)

from client import create_client
from progress import (
    count_passing_tests,
    has_features,
    print_progress_summary,
    print_session_header,
)
from prompts import (
    copy_spec_to_project,
    get_batch_feature_prompt,
    get_coding_prompt,
    get_initializer_prompt,
    get_single_feature_prompt,
    get_testing_prompt,
)
from rate_limit_utils import (
    calculate_error_backoff,
    calculate_rate_limit_backoff,
    clamp_retry_delay,
    is_rate_limit_error,
    parse_retry_after,
)

# Configuration
AUTO_CONTINUE_DELAY_SECONDS = 3

@@ -106,8 +119,19 @@ async def run_agent_session(
        return "continue", response_text

    except Exception as e:
        error_str = str(e)
        print(f"Error during agent session: {error_str}")

        # Detect rate limit errors from exception message
        if is_rate_limit_error(error_str):
            # Try to extract retry-after time from error
            retry_seconds = parse_retry_after(error_str)
            if retry_seconds is not None:
                return "rate_limit", str(retry_seconds)
            else:
                return "rate_limit", "unknown"

        return "error", error_str


async def run_autonomous_agent(

@@ -116,8 +140,10 @@ async def run_autonomous_agent(
    max_iterations: Optional[int] = None,
    yolo_mode: bool = False,
    feature_id: Optional[int] = None,
    feature_ids: Optional[list[int]] = None,
    agent_type: Optional[str] = None,
    testing_feature_id: Optional[int] = None,
    testing_feature_ids: Optional[list[int]] = None,
) -> None:
    """
    Run the autonomous agent loop.

@@ -128,8 +154,10 @@ async def run_autonomous_agent(
        max_iterations: Maximum number of iterations (None for unlimited)
        yolo_mode: If True, skip browser testing in coding agent prompts
        feature_id: If set, work only on this specific feature (used by orchestrator for coding agents)
        feature_ids: If set, work on these features in batch (used by orchestrator for batch mode)
        agent_type: Type of agent: "initializer", "coding", "testing", or None (auto-detect)
        testing_feature_id: For testing agents, the pre-claimed feature ID to test (legacy single mode)
        testing_feature_ids: For testing agents, list of feature IDs to batch test
    """
    print("\n" + "=" * 70)
    print(" AUTONOMOUS CODING AGENT")

@@ -140,7 +168,9 @@ async def run_autonomous_agent(
    print(f"Agent type: {agent_type}")
    if yolo_mode:
        print("Mode: YOLO (testing agents disabled)")
    if feature_ids and len(feature_ids) > 1:
        print(f"Feature batch: {', '.join(f'#{fid}' for fid in feature_ids)}")
    elif feature_id:
        print(f"Feature assignment: #{feature_id}")
    if max_iterations:
        print(f"Max iterations: {max_iterations}")

@@ -183,6 +213,8 @@ async def run_autonomous_agent(
    # Main loop
    iteration = 0
    rate_limit_retries = 0  # Track consecutive rate limit errors for exponential backoff
    error_retries = 0  # Track consecutive non-rate-limit errors

    while True:
        iteration += 1

@@ -212,23 +244,29 @@ async def run_autonomous_agent(
        import os
        if agent_type == "testing":
            agent_id = f"testing-{os.getpid()}"  # Unique ID for testing agents
        elif feature_ids and len(feature_ids) > 1:
            agent_id = f"batch-{feature_ids[0]}"
        elif feature_id:
            agent_id = f"feature-{feature_id}"
        else:
            agent_id = None
        client = create_client(project_dir, model, yolo_mode=yolo_mode, agent_id=agent_id, agent_type=agent_type)

        # Choose prompt based on agent type
        if agent_type == "initializer":
            prompt = get_initializer_prompt(project_dir)
        elif agent_type == "testing":
            prompt = get_testing_prompt(project_dir, testing_feature_id, testing_feature_ids)
        elif feature_ids and len(feature_ids) > 1:
            # Batch mode (used by orchestrator for multi-feature coding agents)
            prompt = get_batch_feature_prompt(feature_ids, project_dir, yolo_mode)
        elif feature_id or (feature_ids is not None and len(feature_ids) == 1):
            # Single-feature mode (used by orchestrator for coding agents)
            fid = feature_id if feature_id is not None else feature_ids[0]  # type: ignore[index]
            prompt = get_single_feature_prompt(fid, project_dir, yolo_mode)
        else:
            # General coding prompt (legacy path)
            prompt = get_coding_prompt(project_dir, yolo_mode=yolo_mode)

        # Run session with async context manager
        # Wrap in try/except to handle MCP server startup failures gracefully

@@ -250,13 +288,28 @@ async def run_autonomous_agent(
        # Handle status
        if status == "continue":
            # Reset error retries on success; rate-limit retries reset only if no signal
            error_retries = 0
            reset_rate_limit_retries = True

            delay_seconds = AUTO_CONTINUE_DELAY_SECONDS
            target_time_str = None

            # Check for rate limit indicators in response text
            if is_rate_limit_error(response):
                print("Claude Agent SDK indicated rate limit reached.")
                reset_rate_limit_retries = False

                # Try to extract retry-after from response text first
                retry_seconds = parse_retry_after(response)
                if retry_seconds is not None:
                    delay_seconds = clamp_retry_delay(retry_seconds)
                else:
                    # Use exponential backoff when retry-after unknown
                    delay_seconds = calculate_rate_limit_backoff(rate_limit_retries)
                rate_limit_retries += 1

                # Try to parse reset time from response (more specific format)
                match = re.search(
                    r"(?i)\bresets(?:\s+at)?\s+(\d+)(?::(\d+))?\s*(am|pm)\s*\(([^)]+)\)",
                    response,

@@ -285,9 +338,7 @@ async def run_autonomous_agent(
                            target += timedelta(days=1)

                        delta = target - now
                        delay_seconds = min(max(int(delta.total_seconds()), 1), 24 * 60 * 60)
                        target_time_str = target.strftime("%B %d, %Y at %I:%M %p %Z")

                    except Exception as e:

@@ -316,20 +367,56 @@ async def run_autonomous_agent(
                print("The autonomous agent has finished its work.")
                break

            # Single-feature mode, batch mode, or testing agent: exit after one session
            if feature_ids and len(feature_ids) > 1:
                print(f"\nBatch mode: Features {', '.join(f'#{fid}' for fid in feature_ids)} session complete.")
                break
            elif feature_id is not None or (feature_ids is not None and len(feature_ids) == 1):
                fid = feature_id if feature_id is not None else feature_ids[0]  # type: ignore[index]
                if agent_type == "testing":
                    print("\nTesting agent complete. Terminating session.")
                else:
                    print(f"\nSingle-feature mode: Feature #{fid} session complete.")
                break
            elif agent_type == "testing":
                print("\nTesting agent complete. Terminating session.")
                break

            # Reset rate limit retries only if no rate limit signal was detected
            if reset_rate_limit_retries:
                rate_limit_retries = 0

            await asyncio.sleep(delay_seconds)

        elif status == "rate_limit":
            # Smart rate limit handling with exponential backoff
            # Reset error counter so mixed events don't inflate delays
            error_retries = 0
            if response != "unknown":
                try:
                    delay_seconds = clamp_retry_delay(int(response))
                except (ValueError, TypeError):
                    # Malformed value - fall through to exponential backoff
                    response = "unknown"
            if response == "unknown":
                # Use exponential backoff when retry-after unknown or malformed
                delay_seconds = calculate_rate_limit_backoff(rate_limit_retries)
                rate_limit_retries += 1
                print(f"\nRate limit hit. Backoff wait: {delay_seconds} seconds (attempt #{rate_limit_retries})...")
            else:
                print(f"\nRate limit hit. Waiting {delay_seconds} seconds before retry...")

            await asyncio.sleep(delay_seconds)

        elif status == "error":
            # Non-rate-limit errors: linear backoff capped at 5 minutes
            # Reset rate limit counter so mixed events don't inflate delays
            rate_limit_retries = 0
            error_retries += 1
            delay_seconds = calculate_error_backoff(error_retries)
            print("\nSession encountered an error")
            print(f"Will retry in {delay_seconds}s (attempt #{error_retries})...")
            await asyncio.sleep(delay_seconds)

        # Small delay between sessions
        if max_iterations is None or iteration < max_iterations:

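agent.py now leans on `rate_limit_utils`, which is not included in this excerpt. A minimal sketch of helpers with matching names, assuming exponential backoff for rate limits and linear backoff for other errors (the constants and regexes below are assumptions, not the repo's actual values):

```python
import re

MAX_RETRY_DELAY_SECONDS = 60 * 60   # assumed 1-hour cap on rate-limit waits
MAX_ERROR_DELAY_SECONDS = 5 * 60    # "capped at 5 minutes" per the comment in agent.py

def is_rate_limit_error(text: str) -> bool:
    """Heuristic check for rate-limit wording in an error or response string."""
    return bool(re.search(r"rate.?limit|limit reached|429", text, re.IGNORECASE))

def parse_retry_after(text: str) -> int | None:
    """Extract a 'retry after N seconds' style hint, if one is present."""
    match = re.search(r"retry.{0,10}?(\d+)\s*s", text, re.IGNORECASE)
    return int(match.group(1)) if match else None

def clamp_retry_delay(seconds: int) -> int:
    """Keep a parsed retry-after within sane bounds."""
    return max(1, min(seconds, MAX_RETRY_DELAY_SECONDS))

def calculate_rate_limit_backoff(retries: int) -> int:
    """Exponential backoff: 60s, 120s, 240s, ... up to the cap."""
    return clamp_retry_delay(60 * (2 ** retries))

def calculate_error_backoff(retries: int) -> int:
    """Linear backoff for ordinary errors: 30s per consecutive failure, capped."""
    return min(30 * retries, MAX_ERROR_DELAY_SECONDS)
```
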
135 api/database.py
@@ -8,7 +8,7 @@ SQLite database schema for feature storage using SQLAlchemy.
import sys
from datetime import datetime, timezone
from pathlib import Path
from typing import Generator, Optional


def _utc_now() -> datetime:

@@ -26,13 +26,16 @@ from sqlalchemy import (
    String,
    Text,
    create_engine,
    event,
    text,
)
from sqlalchemy.orm import DeclarativeBase, Session, relationship, sessionmaker
from sqlalchemy.types import JSON


class Base(DeclarativeBase):
    """SQLAlchemy 2.0 style declarative base."""
    pass


class Feature(Base):

@@ -180,7 +183,8 @@ class ScheduleOverride(Base):
def get_database_path(project_dir: Path) -> Path:
    """Return the path to the SQLite database for a project."""
    from autocoder_paths import get_features_db_path
    return get_features_db_path(project_dir)


def get_database_url(project_dir: Path) -> str:

@@ -307,11 +311,11 @@ def _migrate_add_schedules_tables(engine) -> None:
    # Create schedules table if missing
    if "schedules" not in existing_tables:
        Schedule.__table__.create(bind=engine)  # type: ignore[attr-defined]

    # Create schedule_overrides table if missing
    if "schedule_overrides" not in existing_tables:
        ScheduleOverride.__table__.create(bind=engine)  # type: ignore[attr-defined]

    # Add crash_count column if missing (for upgrades)
    if "schedules" in existing_tables:

@@ -332,6 +336,35 @@ def _migrate_add_schedules_tables(engine) -> None:
            conn.commit()


def _configure_sqlite_immediate_transactions(engine) -> None:
    """Configure engine for IMMEDIATE transactions via event hooks.

    Per SQLAlchemy docs: https://docs.sqlalchemy.org/en/20/dialects/sqlite.html

    This replaces fragile pysqlite implicit transaction handling with explicit
    BEGIN IMMEDIATE at transaction start. Benefits:
    - Acquires write lock immediately, preventing stale reads
    - Works correctly regardless of prior ORM operations
    - Future-proof: won't break when pysqlite legacy mode is removed in Python 3.16
    """
    @event.listens_for(engine, "connect")
    def do_connect(dbapi_connection, connection_record):
        # Disable pysqlite's implicit transaction handling
        dbapi_connection.isolation_level = None

        # Set busy_timeout on raw connection before any transactions
        cursor = dbapi_connection.cursor()
        try:
            cursor.execute("PRAGMA busy_timeout=30000")
        finally:
            cursor.close()

    @event.listens_for(engine, "begin")
    def do_begin(conn):
        # Use IMMEDIATE for all transactions to prevent stale reads
        conn.exec_driver_sql("BEGIN IMMEDIATE")


def create_database(project_dir: Path) -> tuple:
    """
    Create database and return engine + session maker.

@@ -351,21 +384,41 @@ def create_database(project_dir: Path) -> tuple:
        return _engine_cache[cache_key]

    db_url = get_database_url(project_dir)

    # Ensure parent directory exists (for .autocoder/ layout)
    db_path = get_database_path(project_dir)
    db_path.parent.mkdir(parents=True, exist_ok=True)

    # Choose journal mode based on filesystem type
    # WAL mode doesn't work reliably on network filesystems and can cause corruption
    is_network = _is_network_path(project_dir)
    journal_mode = "DELETE" if is_network else "WAL"

    engine = create_engine(db_url, connect_args={
        "check_same_thread": False,
        "timeout": 30  # Wait up to 30s for locks
    })

    # Set journal mode BEFORE configuring event hooks
    # PRAGMA journal_mode must run outside of a transaction, and our event hooks
    # start a transaction with BEGIN IMMEDIATE on every operation
    with engine.connect() as conn:
        # Get raw DBAPI connection to execute PRAGMA outside transaction
        raw_conn = conn.connection.dbapi_connection
        if raw_conn is None:
            raise RuntimeError("Failed to get raw DBAPI connection")
        cursor = raw_conn.cursor()
        try:
            cursor.execute(f"PRAGMA journal_mode={journal_mode}")
            cursor.execute("PRAGMA busy_timeout=30000")
        finally:
            cursor.close()

    # Configure IMMEDIATE transactions via event hooks AFTER setting PRAGMAs
    # This must happen before create_all() and migrations run
    _configure_sqlite_immediate_transactions(engine)

    Base.metadata.create_all(bind=engine)

    # Migrate existing databases
    _migrate_add_in_progress_column(engine)

@@ -417,7 +470,7 @@ def set_session_maker(session_maker: sessionmaker) -> None:
    _session_maker = session_maker


def get_db() -> Generator[Session, None, None]:
    """
    Dependency for FastAPI to get database session.

@@ -429,5 +482,55 @@ def get_db() -> Session:
    db = _session_maker()
    try:
        yield db
    except Exception:
        db.rollback()
        raise
    finally:
        db.close()


# =============================================================================
# Atomic Transaction Helpers for Parallel Mode
# =============================================================================
# These helpers prevent database corruption when multiple processes access the
# same SQLite database concurrently. They use IMMEDIATE transactions which
# acquire write locks at the start (preventing stale reads) and atomic
# UPDATE ... WHERE clauses (preventing check-then-modify races).


from contextlib import contextmanager


@contextmanager
def atomic_transaction(session_maker):
    """Context manager for atomic SQLite transactions.

    Acquires a write lock immediately via BEGIN IMMEDIATE (configured by
    engine event hooks), preventing stale reads in read-modify-write patterns.
    This is essential for preventing race conditions in parallel mode.

    Args:
        session_maker: SQLAlchemy sessionmaker

    Yields:
        SQLAlchemy session with automatic commit/rollback

    Example:
        with atomic_transaction(session_maker) as session:
            # All reads in this block are protected by write lock
            feature = session.query(Feature).filter(...).first()
            feature.priority = new_priority
            # Commit happens automatically on exit
    """
    session = session_maker()
    try:
        yield session
        session.commit()
    except Exception:
        try:
            session.rollback()
        except Exception:
            pass  # Don't let rollback failure mask original error
        raise
    finally:
        session.close()

@@ -7,6 +7,7 @@ Includes cycle detection, validation, and helper functions for dependency management.
"""

import heapq
from collections import deque
from typing import TypedDict

# Security: Prevent DoS via excessive dependencies

@@ -301,19 +302,20 @@ def compute_scheduling_scores(features: list[dict]) -> dict[int, float]:
    # Calculate depths via BFS from roots
    # Use visited set to prevent infinite loops from circular dependencies
    # Use deque for O(1) popleft instead of list.pop(0) which is O(n)
    depths: dict[int, int] = {}
    visited: set[int] = set()
    roots = [f["id"] for f in features if not parents[f["id"]]]
    bfs_queue: deque[tuple[int, int]] = deque((root, 0) for root in roots)
    while bfs_queue:
        node_id, depth = bfs_queue.popleft()
        if node_id in visited:
            continue  # Skip already visited nodes (handles cycles)
        visited.add(node_id)
        depths[node_id] = depth
        for child_id in children[node_id]:
            if child_id not in visited:
                bfs_queue.append((child_id, depth + 1))

    # Handle orphaned nodes (shouldn't happen but be safe)
    for f in features:

290 autocoder_paths.py (new file)

@@ -0,0 +1,290 @@
"""
|
||||||
|
Autocoder Path Resolution
|
||||||
|
=========================
|
||||||
|
|
||||||
|
Central module for resolving paths to autocoder-generated files within a project.
|
||||||
|
|
||||||
|
Implements a dual-path resolution strategy for backward compatibility:
|
||||||
|
|
||||||
|
1. Check ``project_dir / ".autocoder" / X`` (new layout)
|
||||||
|
2. Check ``project_dir / X`` (legacy root-level layout)
|
||||||
|
3. Default to the new location for fresh projects
|
||||||
|
|
||||||
|
This allows existing projects with root-level ``features.db``, ``.agent.lock``,
|
||||||
|
etc. to keep working while new projects store everything under ``.autocoder/``.
|
||||||
|
|
||||||
|
The ``migrate_project_layout`` function can move an old-layout project to the
|
||||||
|
new layout safely, with full integrity checks for SQLite databases.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import logging
|
||||||
|
import shutil
|
||||||
|
import sqlite3
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# .gitignore content written into every .autocoder/ directory
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
_GITIGNORE_CONTENT = """\
|
||||||
|
# Autocoder runtime files
|
||||||
|
features.db
|
||||||
|
features.db-wal
|
||||||
|
features.db-shm
|
||||||
|
assistant.db
|
||||||
|
assistant.db-wal
|
||||||
|
assistant.db-shm
|
||||||
|
.agent.lock
|
||||||
|
.devserver.lock
|
||||||
|
.claude_settings.json
|
||||||
|
.claude_assistant_settings.json
|
||||||
|
.claude_settings.expand.*.json
|
||||||
|
.progress_cache
|
||||||
|
"""
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# Private helpers
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
def _resolve_path(project_dir: Path, filename: str) -> Path:
|
||||||
|
"""Resolve a file path using dual-path strategy.
|
||||||
|
|
||||||
|
Checks the new ``.autocoder/`` location first, then falls back to the
|
||||||
|
legacy root-level location. If neither exists, returns the new location
|
||||||
|
so that newly-created files land in ``.autocoder/``.
|
||||||
|
"""
|
||||||
|
new = project_dir / ".autocoder" / filename
|
||||||
|
if new.exists():
|
||||||
|
return new
|
||||||
|
old = project_dir / filename
|
||||||
|
if old.exists():
|
||||||
|
return old
|
||||||
|
return new # default for new projects
|
||||||
|
|
||||||
|
|
||||||
|
def _resolve_dir(project_dir: Path, dirname: str) -> Path:
|
||||||
|
"""Resolve a directory path using dual-path strategy.
|
||||||
|
|
||||||
|
Same logic as ``_resolve_path`` but intended for directories such as
|
||||||
|
``prompts/``.
|
||||||
|
"""
|
||||||
|
new = project_dir / ".autocoder" / dirname
|
||||||
|
if new.exists():
|
||||||
|
return new
|
||||||
|
old = project_dir / dirname
|
||||||
|
if old.exists():
|
||||||
|
return old
|
||||||
|
return new
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# .autocoder directory management
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
def get_autocoder_dir(project_dir: Path) -> Path:
|
||||||
|
"""Return the ``.autocoder`` directory path. Does NOT create it."""
|
||||||
|
return project_dir / ".autocoder"
|
||||||
|
|
||||||
|
|
||||||
|
def ensure_autocoder_dir(project_dir: Path) -> Path:
|
||||||
|
"""Create the ``.autocoder/`` directory (if needed) and write its ``.gitignore``.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
The path to the ``.autocoder`` directory.
|
||||||
|
"""
|
||||||
|
autocoder_dir = get_autocoder_dir(project_dir)
|
||||||
|
autocoder_dir.mkdir(parents=True, exist_ok=True)
|
||||||
|
|
||||||
|
gitignore_path = autocoder_dir / ".gitignore"
|
||||||
|
gitignore_path.write_text(_GITIGNORE_CONTENT, encoding="utf-8")
|
||||||
|
|
||||||
|
return autocoder_dir
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# Dual-path file helpers
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
def get_features_db_path(project_dir: Path) -> Path:
|
||||||
|
"""Resolve the path to ``features.db``."""
|
||||||
|
return _resolve_path(project_dir, "features.db")
|
||||||
|
|
||||||
|
|
||||||
|
def get_assistant_db_path(project_dir: Path) -> Path:
|
||||||
|
"""Resolve the path to ``assistant.db``."""
|
||||||
|
return _resolve_path(project_dir, "assistant.db")
|
||||||
|
|
||||||
|
|
||||||
|
def get_agent_lock_path(project_dir: Path) -> Path:
|
||||||
|
"""Resolve the path to ``.agent.lock``."""
|
||||||
|
return _resolve_path(project_dir, ".agent.lock")
|
||||||
|
|
||||||
|
|
||||||
|
def get_devserver_lock_path(project_dir: Path) -> Path:
|
||||||
|
"""Resolve the path to ``.devserver.lock``."""
|
||||||
|
return _resolve_path(project_dir, ".devserver.lock")
|
||||||
|
|
||||||
|
|
||||||
|
def get_claude_settings_path(project_dir: Path) -> Path:
|
||||||
|
"""Resolve the path to ``.claude_settings.json``."""
|
||||||
|
return _resolve_path(project_dir, ".claude_settings.json")
|
||||||
|
|
||||||
|
|
||||||
|
def get_claude_assistant_settings_path(project_dir: Path) -> Path:
|
||||||
|
"""Resolve the path to ``.claude_assistant_settings.json``."""
|
||||||
|
return _resolve_path(project_dir, ".claude_assistant_settings.json")
|
||||||
|
|
||||||
|
|
||||||
|
def get_progress_cache_path(project_dir: Path) -> Path:
|
||||||
|
"""Resolve the path to ``.progress_cache``."""
|
||||||
|
return _resolve_path(project_dir, ".progress_cache")
|
||||||
|
|
||||||
|
|
||||||
|
def get_prompts_dir(project_dir: Path) -> Path:
|
||||||
|
"""Resolve the path to the ``prompts/`` directory."""
|
||||||
|
return _resolve_dir(project_dir, "prompts")
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# Non-dual-path helpers (always use new location)
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
def get_expand_settings_path(project_dir: Path, uuid_hex: str) -> Path:
|
||||||
|
"""Return the path for an ephemeral expand-session settings file.
|
||||||
|
|
||||||
|
These files are short-lived and always stored in ``.autocoder/``.
|
||||||
|
"""
|
||||||
|
return project_dir / ".autocoder" / f".claude_settings.expand.{uuid_hex}.json"
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# Lock-file safety check
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
def has_agent_running(project_dir: Path) -> bool:
|
||||||
|
"""Check whether any agent or dev-server lock file exists at either location.
|
||||||
|
|
||||||
|
Inspects both the legacy root-level paths and the new ``.autocoder/``
|
||||||
|
paths so that a running agent is detected regardless of project layout.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
``True`` if any ``.agent.lock`` or ``.devserver.lock`` exists.
|
||||||
|
"""
|
||||||
|
lock_names = (".agent.lock", ".devserver.lock")
|
||||||
|
for name in lock_names:
|
||||||
|
if (project_dir / name).exists():
|
||||||
|
return True
|
||||||
|
if (project_dir / ".autocoder" / name).exists():
|
||||||
|
return True
|
||||||
|
return False
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# Migration
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
def migrate_project_layout(project_dir: Path) -> list[str]:
|
||||||
|
"""Migrate a project from the legacy root-level layout to ``.autocoder/``.
|
||||||
|
|
||||||
|
The migration is incremental and safe:
|
||||||
|
|
||||||
|
* If the agent is running (lock files present) the migration is skipped
|
||||||
|
entirely to avoid corrupting in-use databases.
|
||||||
|
* Each file/directory is migrated independently. If any single step
|
||||||
|
fails the error is logged and migration continues with the remaining
|
||||||
|
items. Partial migration is safe because the dual-path resolution
|
||||||
|
strategy will find files at whichever location they ended up in.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
A list of human-readable descriptions of what was migrated, e.g.
|
||||||
|
``["prompts/ -> .autocoder/prompts/", "features.db -> .autocoder/features.db"]``.
|
||||||
|
An empty list means nothing was migrated (either everything is
|
||||||
|
already migrated, or the agent is running).
|
||||||
|
"""
|
||||||
|
# Safety: refuse to migrate while an agent is running
|
||||||
|
if has_agent_running(project_dir):
|
||||||
|
logger.warning("Migration skipped: agent or dev-server is running for %s", project_dir)
|
||||||
|
return []
|
||||||
|
|
||||||
|
autocoder_dir = ensure_autocoder_dir(project_dir)
|
||||||
|
migrated: list[str] = []
|
||||||
|
|
||||||
|
# --- 1. Migrate prompts/ directory -----------------------------------
|
||||||
|
try:
|
||||||
|
old_prompts = project_dir / "prompts"
|
||||||
|
new_prompts = autocoder_dir / "prompts"
|
||||||
|
if old_prompts.exists() and old_prompts.is_dir() and not new_prompts.exists():
|
||||||
|
shutil.copytree(str(old_prompts), str(new_prompts))
|
||||||
|
shutil.rmtree(str(old_prompts))
|
||||||
|
migrated.append("prompts/ -> .autocoder/prompts/")
|
||||||
|
logger.info("Migrated prompts/ -> .autocoder/prompts/")
|
||||||
|
except Exception:
|
||||||
|
logger.warning("Failed to migrate prompts/ directory", exc_info=True)
|
||||||
|
|
||||||
|
# --- 2. Migrate SQLite databases (features.db, assistant.db) ---------
|
||||||
|
db_names = ("features.db", "assistant.db")
|
||||||
|
for db_name in db_names:
|
||||||
|
try:
|
||||||
|
old_db = project_dir / db_name
|
||||||
|
new_db = autocoder_dir / db_name
|
||||||
|
if old_db.exists() and not new_db.exists():
|
||||||
|
# Flush WAL to ensure all data is in the main database file
|
||||||
|
conn = sqlite3.connect(str(old_db))
|
||||||
|
try:
|
||||||
|
cursor = conn.cursor()
|
||||||
|
cursor.execute("PRAGMA wal_checkpoint(TRUNCATE)")
|
||||||
|
finally:
|
||||||
|
conn.close()
|
||||||
|
|
||||||
|
# Copy the main database file (WAL is now flushed)
|
||||||
|
shutil.copy2(str(old_db), str(new_db))
|
||||||
|
|
||||||
|
# Verify the copy is intact
|
||||||
|
verify_conn = sqlite3.connect(str(new_db))
|
||||||
|
try:
|
||||||
|
verify_cursor = verify_conn.cursor()
|
||||||
|
result = verify_cursor.execute("PRAGMA integrity_check").fetchone()
|
||||||
|
if result is None or result[0] != "ok":
|
||||||
|
logger.error(
|
||||||
|
"Integrity check failed for migrated %s: %s",
|
||||||
|
db_name, result,
|
||||||
|
)
|
||||||
|
# Remove the broken copy; old file stays in place
|
||||||
|
new_db.unlink(missing_ok=True)
|
||||||
|
continue
|
||||||
|
finally:
|
||||||
|
verify_conn.close()
|
||||||
|
|
||||||
|
# Remove old database files (.db, .db-wal, .db-shm)
|
||||||
|
old_db.unlink(missing_ok=True)
|
||||||
|
for suffix in ("-wal", "-shm"):
|
||||||
|
wal_file = project_dir / f"{db_name}{suffix}"
|
||||||
|
wal_file.unlink(missing_ok=True)
|
||||||
|
|
||||||
|
migrated.append(f"{db_name} -> .autocoder/{db_name}")
|
||||||
|
logger.info("Migrated %s -> .autocoder/%s", db_name, db_name)
|
||||||
|
except Exception:
|
||||||
|
logger.warning("Failed to migrate %s", db_name, exc_info=True)
|
||||||
|
|
||||||
|
# --- 3. Migrate simple files -----------------------------------------
|
||||||
|
simple_files = (
|
||||||
|
".agent.lock",
|
||||||
|
".devserver.lock",
|
||||||
|
".claude_settings.json",
|
||||||
|
".claude_assistant_settings.json",
|
||||||
|
".progress_cache",
|
||||||
|
)
|
||||||
|
for filename in simple_files:
|
||||||
|
try:
|
||||||
|
old_file = project_dir / filename
|
||||||
|
new_file = autocoder_dir / filename
|
||||||
|
if old_file.exists() and not new_file.exists():
|
||||||
|
shutil.move(str(old_file), str(new_file))
|
||||||
|
migrated.append(f"{filename} -> .autocoder/{filename}")
|
||||||
|
logger.info("Migrated %s -> .autocoder/%s", filename, filename)
|
||||||
|
except Exception:
|
||||||
|
logger.warning("Failed to migrate %s", filename, exc_info=True)
|
||||||
|
|
||||||
|
return migrated
|
||||||
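A short usage sketch of the new module (the project path below is illustrative):

```python
from pathlib import Path
from autocoder_paths import ensure_autocoder_dir, get_features_db_path, migrate_project_layout

project = Path("generations/my-app")      # illustrative project directory
ensure_autocoder_dir(project)             # creates .autocoder/ and writes its .gitignore
print(get_features_db_path(project))      # .autocoder/features.db, or the legacy features.db if only that exists
print(migrate_project_layout(project))    # e.g. ["features.db -> .autocoder/features.db"], or [] if nothing to do
```
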
@@ -133,6 +133,13 @@ Authentication:
        help="Work on a specific feature ID only (used by orchestrator for coding agents)",
    )

    parser.add_argument(
        "--feature-ids",
        type=str,
        default=None,
        help="Comma-separated feature IDs to implement in batch (e.g., '5,8,12')",
    )

    # Agent type for subprocess mode
    parser.add_argument(
        "--agent-type",

@@ -145,7 +152,14 @@ Authentication:
        "--testing-feature-id",
        type=int,
        default=None,
        help="Feature ID to regression test (used by orchestrator for testing agents, legacy single mode)",
    )

    parser.add_argument(
        "--testing-feature-ids",
        type=str,
        default=None,
        help="Comma-separated feature IDs to regression test in batch (e.g., '5,12,18')",
    )

    # Testing agent configuration

@@ -156,6 +170,20 @@ Authentication:
        help="Testing agents per coding agent (0-3, default: 1). Set to 0 to disable testing agents.",
    )

    parser.add_argument(
        "--testing-batch-size",
        type=int,
        default=3,
        help="Number of features per testing batch (1-5, default: 3)",
    )

    parser.add_argument(
        "--batch-size",
        type=int,
        default=3,
        help="Max features per coding agent batch (1-3, default: 3)",
    )

    return parser.parse_args()

@@ -193,6 +221,30 @@ def main() -> None:
        print("Use an absolute path or register the project first.")
        return

    # Migrate project layout to .autocoder/ if needed (idempotent, safe)
    from autocoder_paths import migrate_project_layout
    migrated = migrate_project_layout(project_dir)
    if migrated:
        print(f"Migrated project files to .autocoder/: {', '.join(migrated)}", flush=True)

    # Parse batch testing feature IDs (comma-separated string -> list[int])
    testing_feature_ids: list[int] | None = None
    if args.testing_feature_ids:
        try:
            testing_feature_ids = [int(x.strip()) for x in args.testing_feature_ids.split(",") if x.strip()]
        except ValueError:
            print(f"Error: --testing-feature-ids must be comma-separated integers, got: {args.testing_feature_ids}")
            return

    # Parse batch coding feature IDs (comma-separated string -> list[int])
    coding_feature_ids: list[int] | None = None
    if args.feature_ids:
        try:
            coding_feature_ids = [int(x.strip()) for x in args.feature_ids.split(",") if x.strip()]
        except ValueError:
            print(f"Error: --feature-ids must be comma-separated integers, got: {args.feature_ids}")
            return

    try:
        if args.agent_type:
            # Subprocess mode - spawned by orchestrator for a specific role

@@ -203,8 +255,10 @@ def main() -> None:
                    max_iterations=args.max_iterations or 1,
                    yolo_mode=args.yolo,
                    feature_id=args.feature_id,
                    feature_ids=coding_feature_ids,
                    agent_type=args.agent_type,
                    testing_feature_id=args.testing_feature_id,
                    testing_feature_ids=testing_feature_ids,
                )
            )
        else:

@@ -223,6 +277,8 @@ def main() -> None:
                    model=args.model,
                    yolo_mode=args.yolo,
                    testing_agent_ratio=args.testing_ratio,
                    testing_batch_size=args.testing_batch_size,
                    batch_size=args.batch_size,
                )
            )
    except KeyboardInterrupt:

270 client.py
@@ -7,6 +7,7 @@ Functions for creating and configuring the Claude Agent SDK client.
|
|||||||
|
|
||||||
import json
|
import json
|
||||||
import os
|
import os
|
||||||
|
import re
|
||||||
import shutil
|
import shutil
|
||||||
import sys
|
import sys
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
@@ -15,7 +16,8 @@ from claude_agent_sdk import ClaudeAgentOptions, ClaudeSDKClient
|
|||||||
from claude_agent_sdk.types import HookContext, HookInput, HookMatcher, SyncHookJSONOutput
|
from claude_agent_sdk.types import HookContext, HookInput, HookMatcher, SyncHookJSONOutput
|
||||||
from dotenv import load_dotenv
|
from dotenv import load_dotenv
|
||||||
|
|
||||||
from security import bash_security_hook
|
from env_constants import API_ENV_VARS
|
||||||
|
from security import SENSITIVE_DIRECTORIES, bash_security_hook
|
||||||
|
|
||||||
# Load environment variables from .env file if present
|
# Load environment variables from .env file if present
|
||||||
load_dotenv()
|
load_dotenv()
|
||||||
@@ -30,39 +32,44 @@ DEFAULT_PLAYWRIGHT_HEADLESS = True
 # Firefox is recommended for lower CPU usage
 DEFAULT_PLAYWRIGHT_BROWSER = "firefox"
 
-# Environment variables to pass through to Claude CLI for API configuration
-# These allow using alternative API endpoints (e.g., GLM via z.ai) without
-# affecting the user's global Claude Code settings
-API_ENV_VARS = [
-    "ANTHROPIC_BASE_URL",  # Custom API endpoint (e.g., https://api.z.ai/api/anthropic)
-    "ANTHROPIC_AUTH_TOKEN",  # API authentication token
-    "API_TIMEOUT_MS",  # Request timeout in milliseconds
-    "ANTHROPIC_DEFAULT_SONNET_MODEL",  # Model override for Sonnet
-    "ANTHROPIC_DEFAULT_OPUS_MODEL",  # Model override for Opus
-    "ANTHROPIC_DEFAULT_HAIKU_MODEL",  # Model override for Haiku
-]
-
 # Extra read paths for cross-project file access (read-only)
 # Set EXTRA_READ_PATHS environment variable with comma-separated absolute paths
 # Example: EXTRA_READ_PATHS=/Volumes/Data/dev,/Users/shared/libs
 EXTRA_READ_PATHS_VAR = "EXTRA_READ_PATHS"
 
-# Sensitive directories that should never be allowed via EXTRA_READ_PATHS
-# These contain credentials, keys, or system-critical files
-EXTRA_READ_PATHS_BLOCKLIST = {
-    ".ssh",
-    ".aws",
-    ".azure",
-    ".kube",
-    ".gnupg",
-    ".gpg",
-    ".password-store",
-    ".docker",
-    ".config/gcloud",
-    ".npmrc",
-    ".pypirc",
-    ".netrc",
-}
+# Sensitive directories that should never be allowed via EXTRA_READ_PATHS.
+# Delegates to the canonical SENSITIVE_DIRECTORIES set in security.py so that
+# this blocklist and the filesystem browser API share a single source of truth.
+EXTRA_READ_PATHS_BLOCKLIST = SENSITIVE_DIRECTORIES
+
+
+def convert_model_for_vertex(model: str) -> str:
+    """
+    Convert model name format for Vertex AI compatibility.
+
+    Vertex AI uses @ to separate model name from version (e.g., claude-opus-4-5@20251101)
+    while the Anthropic API uses - (e.g., claude-opus-4-5-20251101).
+
+    Args:
+        model: Model name in Anthropic format (with hyphens)
+
+    Returns:
+        Model name in Vertex AI format (with @ before date) if Vertex AI is enabled,
+        otherwise returns the model unchanged.
+    """
+    # Only convert if Vertex AI is enabled
+    if os.getenv("CLAUDE_CODE_USE_VERTEX") != "1":
+        return model
+
+    # Pattern: claude-{name}-{version}-{date} -> claude-{name}-{version}@{date}
+    # Example: claude-opus-4-5-20251101 -> claude-opus-4-5@20251101
+    # The date is always 8 digits at the end
+    match = re.match(r'^(claude-.+)-(\d{8})$', model)
+    if match:
+        base_name, date = match.groups()
+        return f"{base_name}@{date}"
+
+    # If already in @ format or doesn't match expected pattern, return as-is
+    return model
 
 
 def get_playwright_headless() -> bool:
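A quick usage sketch of the conversion added above (the import path and model IDs are illustrative assumptions, not taken from the diff):

    import os
    from client import convert_model_for_vertex  # assuming client.py sits at the repo root

    os.environ["CLAUDE_CODE_USE_VERTEX"] = "1"
    print(convert_model_for_vertex("claude-opus-4-5-20251101"))  # -> claude-opus-4-5@20251101
    print(convert_model_for_vertex("claude-opus-4-5@20251101"))  # -> unchanged (already @ format)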
@@ -175,32 +182,55 @@ def get_extra_read_paths() -> list[Path]:
     return validated_paths
 
 
-# Feature MCP tools for feature/test management
-FEATURE_MCP_TOOLS = [
-    # Core feature operations
+# Per-agent-type MCP tool lists.
+# Only expose the tools each agent type actually needs, reducing tool schema
+# overhead and preventing agents from calling tools meant for other roles.
+#
+# Tools intentionally omitted from ALL agent lists (UI/orchestrator only):
+# feature_get_ready, feature_get_blocked, feature_get_graph,
+# feature_remove_dependency
+#
+# The ghost tool "feature_release_testing" was removed entirely -- it was
+# listed here but never implemented in mcp_server/feature_mcp.py.
+
+CODING_AGENT_TOOLS = [
     "mcp__features__feature_get_stats",
-    "mcp__features__feature_get_by_id",  # Get assigned feature details
-    "mcp__features__feature_get_summary",  # Lightweight: id, name, status, deps only
+    "mcp__features__feature_get_by_id",
+    "mcp__features__feature_get_summary",
+    "mcp__features__feature_claim_and_get",
     "mcp__features__feature_mark_in_progress",
-    "mcp__features__feature_claim_and_get",  # Atomic claim + get details
     "mcp__features__feature_mark_passing",
-    "mcp__features__feature_mark_failing",  # Mark regression detected
+    "mcp__features__feature_mark_failing",
     "mcp__features__feature_skip",
-    "mcp__features__feature_create_bulk",
-    "mcp__features__feature_create",
     "mcp__features__feature_clear_in_progress",
-    "mcp__features__feature_release_testing",  # Release testing claim
-    # Dependency management
-    "mcp__features__feature_add_dependency",
-    "mcp__features__feature_remove_dependency",
-    "mcp__features__feature_set_dependencies",
-    # Query tools
-    "mcp__features__feature_get_ready",
-    "mcp__features__feature_get_blocked",
-    "mcp__features__feature_get_graph",
 ]
 
-# Playwright MCP tools for browser automation
+TESTING_AGENT_TOOLS = [
+    "mcp__features__feature_get_stats",
+    "mcp__features__feature_get_by_id",
+    "mcp__features__feature_get_summary",
+    "mcp__features__feature_mark_passing",
+    "mcp__features__feature_mark_failing",
+]
+
+INITIALIZER_AGENT_TOOLS = [
+    "mcp__features__feature_get_stats",
+    "mcp__features__feature_create_bulk",
+    "mcp__features__feature_create",
+    "mcp__features__feature_add_dependency",
+    "mcp__features__feature_set_dependencies",
+]
+
+# Union of all agent tool lists -- used for permissions (all tools remain
+# *permitted* so the MCP server can respond, but only the agent-type-specific
+# list is included in allowed_tools, which controls what the LLM sees).
+ALL_FEATURE_MCP_TOOLS = sorted(
+    set(CODING_AGENT_TOOLS) | set(TESTING_AGENT_TOOLS) | set(INITIALIZER_AGENT_TOOLS)
+)
+
+# Playwright MCP tools for browser automation.
+# Full set of tools for comprehensive UI testing including drag-and-drop,
+# hover menus, file uploads, tab management, etc.
 PLAYWRIGHT_TOOLS = [
     # Core navigation & screenshots
     "mcp__playwright__browser_navigate",
@@ -213,9 +243,10 @@ PLAYWRIGHT_TOOLS = [
     "mcp__playwright__browser_type",
     "mcp__playwright__browser_fill_form",
     "mcp__playwright__browser_select_option",
-    "mcp__playwright__browser_hover",
-    "mcp__playwright__browser_drag",
     "mcp__playwright__browser_press_key",
+    "mcp__playwright__browser_drag",
+    "mcp__playwright__browser_hover",
+    "mcp__playwright__browser_file_upload",
 
     # JavaScript & debugging
     "mcp__playwright__browser_evaluate",
@@ -224,16 +255,17 @@ PLAYWRIGHT_TOOLS = [
     "mcp__playwright__browser_network_requests",
 
     # Browser management
-    "mcp__playwright__browser_close",
     "mcp__playwright__browser_resize",
-    "mcp__playwright__browser_tabs",
     "mcp__playwright__browser_wait_for",
     "mcp__playwright__browser_handle_dialog",
-    "mcp__playwright__browser_file_upload",
     "mcp__playwright__browser_install",
+    "mcp__playwright__browser_close",
+    "mcp__playwright__browser_tabs",
 ]
 
-# Built-in tools
+# Built-in tools available to agents.
+# WebFetch and WebSearch are included so coding agents can look up current
+# documentation for frameworks and libraries they are implementing.
 BUILTIN_TOOLS = [
     "Read",
     "Write",
@@ -251,6 +283,7 @@ def create_client(
     model: str,
     yolo_mode: bool = False,
     agent_id: str | None = None,
+    agent_type: str = "coding",
 ):
     """
     Create a Claude Agent SDK client with multi-layered security.
@@ -261,6 +294,8 @@ def create_client(
         yolo_mode: If True, skip Playwright MCP server for rapid prototyping
         agent_id: Optional unique identifier for browser isolation in parallel mode.
             When provided, each agent gets its own browser profile.
+        agent_type: One of "coding", "testing", or "initializer". Controls which
+            MCP tools are exposed and the max_turns limit.
 
     Returns:
         Configured ClaudeSDKClient (from claude_agent_sdk)
@@ -274,13 +309,34 @@ def create_client(
     Note: Authentication is handled by start.bat/start.sh before this runs.
     The Claude SDK auto-detects credentials from the Claude CLI configuration
     """
-    # Build allowed tools list based on mode
-    # In YOLO mode, exclude Playwright tools for faster prototyping
-    allowed_tools = [*BUILTIN_TOOLS, *FEATURE_MCP_TOOLS]
+    # Select the feature MCP tools appropriate for this agent type
+    feature_tools_map = {
+        "coding": CODING_AGENT_TOOLS,
+        "testing": TESTING_AGENT_TOOLS,
+        "initializer": INITIALIZER_AGENT_TOOLS,
+    }
+    feature_tools = feature_tools_map.get(agent_type, CODING_AGENT_TOOLS)
+
+    # Select max_turns based on agent type:
+    # - coding/initializer: 300 turns (complex multi-step implementation)
+    # - testing: 100 turns (focused verification of a single feature)
+    max_turns_map = {
+        "coding": 300,
+        "testing": 100,
+        "initializer": 300,
+    }
+    max_turns = max_turns_map.get(agent_type, 300)
+
+    # Build allowed tools list based on mode and agent type.
+    # In YOLO mode, exclude Playwright tools for faster prototyping.
+    allowed_tools = [*BUILTIN_TOOLS, *feature_tools]
     if not yolo_mode:
         allowed_tools.extend(PLAYWRIGHT_TOOLS)
 
-    # Build permissions list
+    # Build permissions list.
+    # We permit ALL feature MCP tools at the security layer (so the MCP server
+    # can respond if called), but the LLM only *sees* the agent-type-specific
+    # subset via allowed_tools above.
     permissions_list = [
         # Allow all file operations within the project directory
         "Read(./**)",
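In practice the mapping above means a call like the following (a hypothetical invocation -- parameter names other than those visible in this hunk, such as project_dir, are assumptions) would expose only the testing tool subset and cap the session at 100 turns:

    from pathlib import Path

    client = create_client(
        project_dir=Path("generations/my-app"),  # assumed earlier parameter
        model="claude-sonnet-4-5",               # illustrative model id
        yolo_mode=False,
        agent_id="tester-1",
        agent_type="testing",                    # -> TESTING_AGENT_TOOLS, max_turns=100
    )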
@@ -291,11 +347,11 @@ def create_client(
         # Bash permission granted here, but actual commands are validated
         # by the bash_security_hook (see security.py for allowed commands)
         "Bash(*)",
-        # Allow web tools for documentation lookup
-        "WebFetch",
-        "WebSearch",
+        # Allow web tools for looking up framework/library documentation
+        "WebFetch(*)",
+        "WebSearch(*)",
         # Allow Feature MCP tools for feature management
-        *FEATURE_MCP_TOOLS,
+        *ALL_FEATURE_MCP_TOOLS,
     ]
 
     # Add extra read paths from environment variable (read-only access)
@@ -326,7 +382,9 @@ def create_client(
     project_dir.mkdir(parents=True, exist_ok=True)
 
     # Write settings to a file in the project directory
-    settings_file = project_dir / ".claude_settings.json"
+    from autocoder_paths import get_claude_settings_path
+    settings_file = get_claude_settings_path(project_dir)
+    settings_file.parent.mkdir(parents=True, exist_ok=True)
     with open(settings_file, "w") as f:
         json.dump(security_settings, f, indent=2)
 
@@ -400,14 +458,19 @@ def create_client(
         if value:
             sdk_env[var] = value
 
-    # Detect alternative API mode (Ollama or GLM)
+    # Detect alternative API mode (Ollama, GLM, or Vertex AI)
     base_url = sdk_env.get("ANTHROPIC_BASE_URL", "")
-    is_alternative_api = bool(base_url)
+    is_vertex = sdk_env.get("CLAUDE_CODE_USE_VERTEX") == "1"
+    is_alternative_api = bool(base_url) or is_vertex
     is_ollama = "localhost:11434" in base_url or "127.0.0.1:11434" in base_url
+    model = convert_model_for_vertex(model)
     if sdk_env:
         print(f" - API overrides: {', '.join(sdk_env.keys())}")
-    if is_ollama:
+    if is_vertex:
+        project_id = sdk_env.get("ANTHROPIC_VERTEX_PROJECT_ID", "unknown")
+        region = sdk_env.get("CLOUD_ML_REGION", "unknown")
+        print(f" - Vertex AI Mode: Using GCP project '{project_id}' with model '{model}' in region '{region}'")
+    elif is_ollama:
         print(" - Ollama Mode: Using local models")
     elif "ANTHROPIC_BASE_URL" in sdk_env:
         print(f" - GLM Mode: Using {sdk_env['ANTHROPIC_BASE_URL']}")
@@ -420,9 +483,10 @@ def create_client(
         context["project_dir"] = str(project_dir.resolve())
         return await bash_security_hook(input_data, tool_use_id, context)
 
-    # PreCompact hook for logging and customizing context compaction
+    # PreCompact hook for logging and customizing context compaction.
     # Compaction is handled automatically by Claude Code CLI when context approaches limits.
-    # This hook allows us to log when compaction occurs and optionally provide custom instructions.
+    # This hook provides custom instructions that guide the summarizer to preserve
+    # critical workflow state while discarding verbose/redundant content.
     async def pre_compact_hook(
         input_data: HookInput,
         tool_use_id: str | None,
@@ -435,8 +499,9 @@ def create_client(
        - "auto": Automatic compaction when context approaches token limits
        - "manual": User-initiated compaction via /compact command
 
-        The hook can customize compaction via hookSpecificOutput:
-        - customInstructions: String with focus areas for summarization
+        Returns custom instructions that guide the compaction summarizer to:
+        1. Preserve critical workflow state (feature ID, modified files, test results)
+        2. Discard verbose content (screenshots, long grep outputs, repeated reads)
        """
        trigger = input_data.get("trigger", "auto")
        custom_instructions = input_data.get("custom_instructions")
@@ -447,18 +512,53 @@ def create_client(
            print("[Context] Manual compaction requested")
 
        if custom_instructions:
-            print(f"[Context] Custom instructions: {custom_instructions}")
+            print(f"[Context] Custom instructions provided: {custom_instructions}")
 
-        # Return empty dict to allow compaction to proceed with default behavior
-        # To customize, return:
-        # {
-        #     "hookSpecificOutput": {
-        #         "hookEventName": "PreCompact",
-        #         "customInstructions": "Focus on preserving file paths and test results"
-        #     }
-        # }
-        return SyncHookJSONOutput()
+        # Build compaction instructions that preserve workflow-critical context
+        # while discarding verbose content that inflates token usage.
+        #
+        # The summarizer receives these instructions and uses them to decide
+        # what to keep vs. discard during context compaction.
+        compaction_guidance = "\n".join([
+            "## PRESERVE (critical workflow state)",
+            "- Current feature ID, feature name, and feature status (pending/in_progress/passing/failing)",
+            "- List of all files created or modified during this session, with their paths",
+            "- Last test/lint/type-check results: command run, pass/fail status, and key error messages",
+            "- Current step in the workflow (e.g., implementing, testing, fixing lint errors)",
+            "- Any dependency information (which features block this one)",
+            "- Git operations performed (commits, branches created)",
+            "- MCP tool call results (feature_claim_and_get, feature_mark_passing, etc.)",
+            "- Key architectural decisions made during this session",
+            "",
+            "## DISCARD (verbose content safe to drop)",
+            "- Full screenshot base64 data (just note that a screenshot was taken and what it showed)",
+            "- Long grep/find/glob output listings (summarize to: searched for X, found Y relevant files)",
+            "- Repeated file reads of the same file (keep only the latest read or a summary of changes)",
+            "- Full file contents from Read tool (summarize to: read file X, key sections were Y)",
+            "- Verbose npm/pip install output (just note: dependencies installed successfully/failed)",
+            "- Full lint/type-check output when passing (just note: lint passed with no errors)",
+            "- Browser console message dumps (summarize to: N errors found, key error was X)",
+            "- Redundant tool result confirmations ([Done] markers)",
+        ])
+
+        print("[Context] Applying custom compaction instructions (preserve workflow state, discard verbose content)")
+
+        # The SDK's HookSpecificOutput union type does not yet include a
+        # PreCompactHookSpecificOutput variant, but the CLI protocol accepts
+        # {"hookEventName": "PreCompact", "customInstructions": "..."}.
+        # The dict is serialized to JSON and sent to the CLI process directly,
+        # so the runtime behavior is correct despite the type mismatch.
+        return SyncHookJSONOutput(
+            hookSpecificOutput={  # type: ignore[typeddict-item]
+                "hookEventName": "PreCompact",
+                "customInstructions": compaction_guidance,
+            }
+        )
+
+    # PROMPT CACHING: The Claude Code CLI applies cache_control breakpoints internally.
+    # Our system_prompt benefits from automatic caching without explicit configuration.
+    # If explicit cache_control is needed, the SDK would need to accept content blocks
+    # with cache_control fields (not currently supported in v0.1.x).
     return ClaudeSDKClient(
         options=ClaudeAgentOptions(
             model=model,
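When this hook fires, the returned value is serialized and handed to the CLI; conceptually the payload looks like the following (a sketch of the shape, not captured output -- the customInstructions value is truncated here):

    {
        "hookSpecificOutput": {
            "hookEventName": "PreCompact",
            "customInstructions": "## PRESERVE (critical workflow state)\n- Current feature ID, ...",
        }
    }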
@@ -467,7 +567,7 @@ def create_client(
            setting_sources=["project"],  # Enable skills, commands, and CLAUDE.md from project dir
            max_buffer_size=10 * 1024 * 1024,  # 10MB for large Playwright screenshots
            allowed_tools=allowed_tools,
-            mcp_servers=mcp_servers,
+            mcp_servers=mcp_servers,  # type: ignore[arg-type]  # SDK accepts dict config at runtime
            hooks={
                "PreToolUse": [
                    HookMatcher(matcher="Bash", hooks=[bash_hook_with_context]),
@@ -479,14 +579,14 @@ def create_client(
                    HookMatcher(hooks=[pre_compact_hook]),
                ],
            },
-            max_turns=1000,
+            max_turns=max_turns,
            cwd=str(project_dir.resolve()),
            settings=str(settings_file.resolve()),  # Use absolute path
            env=sdk_env,  # Pass API configuration overrides to CLI subprocess
            # Enable extended context beta for better handling of long sessions.
            # This provides up to 1M tokens of context with automatic compaction.
            # See: https://docs.anthropic.com/en/api/beta-headers
-            # Disabled for alternative APIs (Ollama, GLM) as they don't support Claude-specific betas.
+            # Disabled for alternative APIs (Ollama, GLM, Vertex AI) as they don't support this beta.
            betas=[] if is_alternative_api else ["context-1m-2025-08-07"],
            # Note on context management:
            # The Claude Agent SDK handles context management automatically through the
@@ -497,7 +597,7 @@ def create_client(
            # parameters. Instead, context is managed via:
            # 1. betas=["context-1m-2025-08-07"] - Extended context window
            # 2. PreCompact hook - Intercept and customize compaction behavior
-            # 3. max_turns - Limit conversation turns (set to 1000 for long sessions)
+            # 3. max_turns - Limit conversation turns (per agent type: coding=300, testing=100)
            #
            # Future SDK versions may add explicit compaction controls. When available,
            # consider adding:
27  env_constants.py  Normal file
@@ -0,0 +1,27 @@
+"""
+Shared Environment Variable Constants
+======================================
+
+Single source of truth for environment variables forwarded to Claude CLI
+subprocesses. Imported by both ``client.py`` (agent sessions) and
+``server/services/chat_constants.py`` (chat sessions) to avoid maintaining
+duplicate lists.
+
+These allow autocoder to use alternative API endpoints (Ollama, GLM,
+Vertex AI) without affecting the user's global Claude Code settings.
+"""
+
+API_ENV_VARS: list[str] = [
+    # Core API configuration
+    "ANTHROPIC_BASE_URL",  # Custom API endpoint (e.g., https://api.z.ai/api/anthropic)
+    "ANTHROPIC_AUTH_TOKEN",  # API authentication token
+    "API_TIMEOUT_MS",  # Request timeout in milliseconds
+    # Model tier overrides
+    "ANTHROPIC_DEFAULT_SONNET_MODEL",  # Model override for Sonnet
+    "ANTHROPIC_DEFAULT_OPUS_MODEL",  # Model override for Opus
+    "ANTHROPIC_DEFAULT_HAIKU_MODEL",  # Model override for Haiku
+    # Vertex AI configuration
+    "CLAUDE_CODE_USE_VERTEX",  # Enable Vertex AI mode (set to "1")
+    "CLOUD_ML_REGION",  # GCP region (e.g., us-east5)
+    "ANTHROPIC_VERTEX_PROJECT_ID",  # GCP project ID
+]
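A minimal sketch of how a caller such as client.py can consume this list to forward only the variables that are actually set (this mirrors the `if value: sdk_env[var] = value` loop visible in the client.py hunk above):

    import os
    from env_constants import API_ENV_VARS

    sdk_env = {var: os.environ[var] for var in API_ENV_VARS if os.environ.get(var)}
    # e.g. {"ANTHROPIC_BASE_URL": "https://api.z.ai/api/anthropic", ...} when those are exported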
@@ -30,18 +30,18 @@ orchestrator, not by agents. Agents receive pre-assigned feature IDs.
 import json
 import os
 import sys
-import threading
 from contextlib import asynccontextmanager
 from pathlib import Path
 from typing import Annotated
 
 from mcp.server.fastmcp import FastMCP
 from pydantic import BaseModel, Field
+from sqlalchemy import text
 
 # Add parent directory to path so we can import from api module
 sys.path.insert(0, str(Path(__file__).parent.parent))
 
-from api.database import Feature, create_database
+from api.database import Feature, atomic_transaction, create_database
 from api.dependency_resolver import (
     MAX_DEPENDENCIES_PER_FEATURE,
     compute_scheduling_scores,
@@ -96,8 +96,9 @@ class BulkCreateInput(BaseModel):
 _session_maker = None
 _engine = None
 
-# Lock for priority assignment to prevent race conditions
-_priority_lock = threading.Lock()
+# NOTE: The old threading.Lock() was removed because it only worked per-process,
+# not cross-process. In parallel mode, multiple MCP servers run in separate
+# processes, so the lock was useless. We now use atomic SQL operations instead.
 
 
 @asynccontextmanager
@@ -243,15 +244,25 @@ def feature_mark_passing(
     """
     session = get_session()
     try:
-        feature = session.query(Feature).filter(Feature.id == feature_id).first()
-
-        if feature is None:
-            return json.dumps({"error": f"Feature with ID {feature_id} not found"})
-
-        feature.passes = True
-        feature.in_progress = False
+        # Atomic update with state guard - prevents double-pass in parallel mode
+        result = session.execute(text("""
+            UPDATE features
+            SET passes = 1, in_progress = 0
+            WHERE id = :id AND passes = 0
+        """), {"id": feature_id})
         session.commit()
 
+        if result.rowcount == 0:
+            # Check why the update didn't match
+            feature = session.query(Feature).filter(Feature.id == feature_id).first()
+            if feature is None:
+                return json.dumps({"error": f"Feature with ID {feature_id} not found"})
+            if feature.passes:
+                return json.dumps({"error": f"Feature with ID {feature_id} is already passing"})
+            return json.dumps({"error": "Failed to mark feature passing for unknown reason"})
+
+        # Get the feature name for the response
+        feature = session.query(Feature).filter(Feature.id == feature_id).first()
         return json.dumps({"success": True, "feature_id": feature_id, "name": feature.name})
     except Exception as e:
         session.rollback()
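The guarded UPDATE above is the core of the parallel-safety change: the write succeeds for at most one caller, and rowcount tells the loser why it missed. A minimal standalone sketch of the same pattern against a bare SQLite table (illustrative schema, not the project's models):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE features (id INTEGER PRIMARY KEY, passes INTEGER DEFAULT 0, in_progress INTEGER DEFAULT 0)")
    conn.execute("INSERT INTO features (id) VALUES (1)")

    cur = conn.execute("UPDATE features SET passes = 1, in_progress = 0 WHERE id = ? AND passes = 0", (1,))
    print(cur.rowcount)  # 1 -> this caller marked the feature passing

    cur = conn.execute("UPDATE features SET passes = 1, in_progress = 0 WHERE id = ? AND passes = 0", (1,))
    print(cur.rowcount)  # 0 -> already passing, so a second caller takes the error branch instead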
@@ -284,14 +295,20 @@ def feature_mark_failing(
     """
     session = get_session()
     try:
+        # Check if feature exists first
         feature = session.query(Feature).filter(Feature.id == feature_id).first()
 
         if feature is None:
             return json.dumps({"error": f"Feature with ID {feature_id} not found"})
 
-        feature.passes = False
-        feature.in_progress = False
+        # Atomic update for parallel safety
+        session.execute(text("""
+            UPDATE features
+            SET passes = 0, in_progress = 0
+            WHERE id = :id
+        """), {"id": feature_id})
         session.commit()
 
+        # Refresh to get updated state
         session.refresh(feature)
 
         return json.dumps({
@@ -337,25 +354,28 @@ def feature_skip(
|
|||||||
return json.dumps({"error": "Cannot skip a feature that is already passing"})
|
return json.dumps({"error": "Cannot skip a feature that is already passing"})
|
||||||
|
|
||||||
old_priority = feature.priority
|
old_priority = feature.priority
|
||||||
|
name = feature.name
|
||||||
|
|
||||||
# Use lock to prevent race condition in priority assignment
|
# Atomic update: set priority to max+1 in a single statement
|
||||||
with _priority_lock:
|
# This prevents race conditions where two features get the same priority
|
||||||
# Get max priority and set this feature to max + 1
|
session.execute(text("""
|
||||||
max_priority_result = session.query(Feature.priority).order_by(Feature.priority.desc()).first()
|
UPDATE features
|
||||||
new_priority = (max_priority_result[0] + 1) if max_priority_result else 1
|
SET priority = (SELECT COALESCE(MAX(priority), 0) + 1 FROM features),
|
||||||
|
in_progress = 0
|
||||||
feature.priority = new_priority
|
WHERE id = :id
|
||||||
feature.in_progress = False
|
"""), {"id": feature_id})
|
||||||
session.commit()
|
session.commit()
|
||||||
|
|
||||||
|
# Refresh to get new priority
|
||||||
session.refresh(feature)
|
session.refresh(feature)
|
||||||
|
new_priority = feature.priority
|
||||||
|
|
||||||
return json.dumps({
|
return json.dumps({
|
||||||
"id": feature.id,
|
"id": feature_id,
|
||||||
"name": feature.name,
|
"name": name,
|
||||||
"old_priority": old_priority,
|
"old_priority": old_priority,
|
||||||
"new_priority": new_priority,
|
"new_priority": new_priority,
|
||||||
"message": f"Feature '{feature.name}' moved to end of queue"
|
"message": f"Feature '{name}' moved to end of queue"
|
||||||
})
|
})
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
session.rollback()
|
session.rollback()
|
||||||
@@ -381,21 +401,27 @@ def feature_mark_in_progress(
|
|||||||
"""
|
"""
|
||||||
session = get_session()
|
session = get_session()
|
||||||
try:
|
try:
|
||||||
feature = session.query(Feature).filter(Feature.id == feature_id).first()
|
# Atomic claim: only succeeds if feature is not already claimed or passing
|
||||||
|
result = session.execute(text("""
|
||||||
|
UPDATE features
|
||||||
|
SET in_progress = 1
|
||||||
|
WHERE id = :id AND passes = 0 AND in_progress = 0
|
||||||
|
"""), {"id": feature_id})
|
||||||
|
session.commit()
|
||||||
|
|
||||||
|
if result.rowcount == 0:
|
||||||
|
# Check why the claim failed
|
||||||
|
feature = session.query(Feature).filter(Feature.id == feature_id).first()
|
||||||
if feature is None:
|
if feature is None:
|
||||||
return json.dumps({"error": f"Feature with ID {feature_id} not found"})
|
return json.dumps({"error": f"Feature with ID {feature_id} not found"})
|
||||||
|
|
||||||
if feature.passes:
|
if feature.passes:
|
||||||
return json.dumps({"error": f"Feature with ID {feature_id} is already passing"})
|
return json.dumps({"error": f"Feature with ID {feature_id} is already passing"})
|
||||||
|
|
||||||
if feature.in_progress:
|
if feature.in_progress:
|
||||||
return json.dumps({"error": f"Feature with ID {feature_id} is already in-progress"})
|
return json.dumps({"error": f"Feature with ID {feature_id} is already in-progress"})
|
||||||
|
return json.dumps({"error": "Failed to mark feature in-progress for unknown reason"})
|
||||||
|
|
||||||
feature.in_progress = True
|
# Fetch the claimed feature
|
||||||
session.commit()
|
feature = session.query(Feature).filter(Feature.id == feature_id).first()
|
||||||
session.refresh(feature)
|
|
||||||
|
|
||||||
return json.dumps(feature.to_dict())
|
return json.dumps(feature.to_dict())
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
session.rollback()
|
session.rollback()
|
||||||
@@ -421,24 +447,35 @@ def feature_claim_and_get(
|
|||||||
"""
|
"""
|
||||||
session = get_session()
|
session = get_session()
|
||||||
try:
|
try:
|
||||||
|
# First check if feature exists
|
||||||
feature = session.query(Feature).filter(Feature.id == feature_id).first()
|
feature = session.query(Feature).filter(Feature.id == feature_id).first()
|
||||||
|
|
||||||
if feature is None:
|
if feature is None:
|
||||||
return json.dumps({"error": f"Feature with ID {feature_id} not found"})
|
return json.dumps({"error": f"Feature with ID {feature_id} not found"})
|
||||||
|
|
||||||
if feature.passes:
|
if feature.passes:
|
||||||
return json.dumps({"error": f"Feature with ID {feature_id} is already passing"})
|
return json.dumps({"error": f"Feature with ID {feature_id} is already passing"})
|
||||||
|
|
||||||
# Idempotent: if already in-progress, just return details
|
# Try atomic claim: only succeeds if not already claimed
|
||||||
already_claimed = feature.in_progress
|
result = session.execute(text("""
|
||||||
if not already_claimed:
|
UPDATE features
|
||||||
feature.in_progress = True
|
SET in_progress = 1
|
||||||
|
WHERE id = :id AND passes = 0 AND in_progress = 0
|
||||||
|
"""), {"id": feature_id})
|
||||||
session.commit()
|
session.commit()
|
||||||
session.refresh(feature)
|
|
||||||
|
|
||||||
result = feature.to_dict()
|
# Determine if we claimed it or it was already claimed
|
||||||
result["already_claimed"] = already_claimed
|
already_claimed = result.rowcount == 0
|
||||||
return json.dumps(result)
|
if already_claimed:
|
||||||
|
# Verify it's in_progress (not some other failure condition)
|
||||||
|
session.refresh(feature)
|
||||||
|
if not feature.in_progress:
|
||||||
|
return json.dumps({"error": f"Failed to claim feature {feature_id} for unknown reason"})
|
||||||
|
|
||||||
|
# Refresh to get current state
|
||||||
|
session.refresh(feature)
|
||||||
|
result_dict = feature.to_dict()
|
||||||
|
result_dict["already_claimed"] = already_claimed
|
||||||
|
return json.dumps(result_dict)
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
session.rollback()
|
session.rollback()
|
||||||
return json.dumps({"error": f"Failed to claim feature: {str(e)}"})
|
return json.dumps({"error": f"Failed to claim feature: {str(e)}"})
|
||||||
@@ -463,15 +500,20 @@ def feature_clear_in_progress(
|
|||||||
"""
|
"""
|
||||||
session = get_session()
|
session = get_session()
|
||||||
try:
|
try:
|
||||||
|
# Check if feature exists
|
||||||
feature = session.query(Feature).filter(Feature.id == feature_id).first()
|
feature = session.query(Feature).filter(Feature.id == feature_id).first()
|
||||||
|
|
||||||
if feature is None:
|
if feature is None:
|
||||||
return json.dumps({"error": f"Feature with ID {feature_id} not found"})
|
return json.dumps({"error": f"Feature with ID {feature_id} not found"})
|
||||||
|
|
||||||
feature.in_progress = False
|
# Atomic update - idempotent, safe in parallel mode
|
||||||
|
session.execute(text("""
|
||||||
|
UPDATE features
|
||||||
|
SET in_progress = 0
|
||||||
|
WHERE id = :id
|
||||||
|
"""), {"id": feature_id})
|
||||||
session.commit()
|
session.commit()
|
||||||
session.refresh(feature)
|
|
||||||
|
|
||||||
|
session.refresh(feature)
|
||||||
return json.dumps(feature.to_dict())
|
return json.dumps(feature.to_dict())
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
session.rollback()
|
session.rollback()
|
||||||
@@ -506,13 +548,14 @@ def feature_create_bulk(
|
|||||||
Returns:
|
Returns:
|
||||||
JSON with: created (int) - number of features created, with_dependencies (int)
|
JSON with: created (int) - number of features created, with_dependencies (int)
|
||||||
"""
|
"""
|
||||||
session = get_session()
|
|
||||||
try:
|
try:
|
||||||
# Use lock to prevent race condition in priority assignment
|
# Use atomic transaction for bulk inserts to prevent priority conflicts
|
||||||
with _priority_lock:
|
with atomic_transaction(_session_maker) as session:
|
||||||
# Get the starting priority
|
# Get the starting priority atomically within the transaction
|
||||||
max_priority_result = session.query(Feature.priority).order_by(Feature.priority.desc()).first()
|
result = session.execute(text("""
|
||||||
start_priority = (max_priority_result[0] + 1) if max_priority_result else 1
|
SELECT COALESCE(MAX(priority), 0) FROM features
|
||||||
|
""")).fetchone()
|
||||||
|
start_priority = (result[0] or 0) + 1
|
||||||
|
|
||||||
# First pass: validate all features and their index-based dependencies
|
# First pass: validate all features and their index-based dependencies
|
||||||
for i, feature_data in enumerate(features):
|
for i, feature_data in enumerate(features):
|
||||||
@@ -546,7 +589,7 @@ def feature_create_bulk(
|
|||||||
"error": f"Feature at index {i} cannot depend on feature at index {idx} (forward reference not allowed)"
|
"error": f"Feature at index {i} cannot depend on feature at index {idx} (forward reference not allowed)"
|
||||||
})
|
})
|
||||||
|
|
||||||
# Second pass: create all features
|
# Second pass: create all features with reserved priorities
|
||||||
created_features: list[Feature] = []
|
created_features: list[Feature] = []
|
||||||
for i, feature_data in enumerate(features):
|
for i, feature_data in enumerate(features):
|
||||||
db_feature = Feature(
|
db_feature = Feature(
|
||||||
@@ -571,20 +614,16 @@ def feature_create_bulk(
|
|||||||
if indices:
|
if indices:
|
||||||
# Convert indices to actual feature IDs
|
# Convert indices to actual feature IDs
|
||||||
dep_ids = [created_features[idx].id for idx in indices]
|
dep_ids = [created_features[idx].id for idx in indices]
|
||||||
created_features[i].dependencies = sorted(dep_ids)
|
created_features[i].dependencies = sorted(dep_ids) # type: ignore[assignment] # SQLAlchemy JSON Column accepts list at runtime
|
||||||
deps_count += 1
|
deps_count += 1
|
||||||
|
|
||||||
session.commit()
|
# Commit happens automatically on context manager exit
|
||||||
|
|
||||||
return json.dumps({
|
return json.dumps({
|
||||||
"created": len(created_features),
|
"created": len(created_features),
|
||||||
"with_dependencies": deps_count
|
"with_dependencies": deps_count
|
||||||
})
|
})
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
session.rollback()
|
|
||||||
return json.dumps({"error": str(e)})
|
return json.dumps({"error": str(e)})
|
||||||
finally:
|
|
||||||
session.close()
|
|
||||||
|
|
||||||
|
|
||||||
@mcp.tool()
|
@mcp.tool()
|
||||||
@@ -608,13 +647,14 @@ def feature_create(
|
|||||||
Returns:
|
Returns:
|
||||||
JSON with the created feature details including its ID
|
JSON with the created feature details including its ID
|
||||||
"""
|
"""
|
||||||
session = get_session()
|
|
||||||
try:
|
try:
|
||||||
# Use lock to prevent race condition in priority assignment
|
# Use atomic transaction to prevent priority collisions
|
||||||
with _priority_lock:
|
with atomic_transaction(_session_maker) as session:
|
||||||
# Get the next priority
|
# Get the next priority atomically within the transaction
|
||||||
max_priority_result = session.query(Feature.priority).order_by(Feature.priority.desc()).first()
|
result = session.execute(text("""
|
||||||
next_priority = (max_priority_result[0] + 1) if max_priority_result else 1
|
SELECT COALESCE(MAX(priority), 0) + 1 FROM features
|
||||||
|
""")).fetchone()
|
||||||
|
next_priority = result[0]
|
||||||
|
|
||||||
db_feature = Feature(
|
db_feature = Feature(
|
||||||
priority=next_priority,
|
priority=next_priority,
|
||||||
@@ -626,20 +666,18 @@ def feature_create(
|
|||||||
in_progress=False,
|
in_progress=False,
|
||||||
)
|
)
|
||||||
session.add(db_feature)
|
session.add(db_feature)
|
||||||
session.commit()
|
session.flush() # Get the ID
|
||||||
|
|
||||||
session.refresh(db_feature)
|
feature_dict = db_feature.to_dict()
|
||||||
|
# Commit happens automatically on context manager exit
|
||||||
|
|
||||||
return json.dumps({
|
return json.dumps({
|
||||||
"success": True,
|
"success": True,
|
||||||
"message": f"Created feature: {name}",
|
"message": f"Created feature: {name}",
|
||||||
"feature": db_feature.to_dict()
|
"feature": feature_dict
|
||||||
})
|
})
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
session.rollback()
|
|
||||||
return json.dumps({"error": str(e)})
|
return json.dumps({"error": str(e)})
|
||||||
finally:
|
|
||||||
session.close()
|
|
||||||
|
|
||||||
|
|
||||||
@mcp.tool()
|
@mcp.tool()
|
||||||
@@ -659,12 +697,13 @@ def feature_add_dependency(
|
|||||||
Returns:
|
Returns:
|
||||||
JSON with success status and updated dependencies list, or error message
|
JSON with success status and updated dependencies list, or error message
|
||||||
"""
|
"""
|
||||||
session = get_session()
|
|
||||||
try:
|
try:
|
||||||
# Security: Self-reference check
|
# Security: Self-reference check (can do before transaction)
|
||||||
if feature_id == dependency_id:
|
if feature_id == dependency_id:
|
||||||
return json.dumps({"error": "A feature cannot depend on itself"})
|
return json.dumps({"error": "A feature cannot depend on itself"})
|
||||||
|
|
||||||
|
# Use atomic transaction for consistent cycle detection
|
||||||
|
with atomic_transaction(_session_maker) as session:
|
||||||
feature = session.query(Feature).filter(Feature.id == feature_id).first()
|
feature = session.query(Feature).filter(Feature.id == feature_id).first()
|
||||||
dependency = session.query(Feature).filter(Feature.id == dependency_id).first()
|
dependency = session.query(Feature).filter(Feature.id == dependency_id).first()
|
||||||
|
|
||||||
@@ -684,27 +723,23 @@ def feature_add_dependency(
|
|||||||
return json.dumps({"error": "Dependency already exists"})
|
return json.dumps({"error": "Dependency already exists"})
|
||||||
|
|
||||||
# Security: Circular dependency check
|
# Security: Circular dependency check
|
||||||
# would_create_circular_dependency(features, source_id, target_id)
|
# Within IMMEDIATE transaction, snapshot is protected by write lock
|
||||||
# source_id = feature gaining the dependency, target_id = feature being depended upon
|
|
||||||
all_features = [f.to_dict() for f in session.query(Feature).all()]
|
all_features = [f.to_dict() for f in session.query(Feature).all()]
|
||||||
if would_create_circular_dependency(all_features, feature_id, dependency_id):
|
if would_create_circular_dependency(all_features, feature_id, dependency_id):
|
||||||
return json.dumps({"error": "Cannot add: would create circular dependency"})
|
return json.dumps({"error": "Cannot add: would create circular dependency"})
|
||||||
|
|
||||||
# Add dependency
|
# Add dependency atomically
|
||||||
current_deps.append(dependency_id)
|
new_deps = sorted(current_deps + [dependency_id])
|
||||||
feature.dependencies = sorted(current_deps)
|
feature.dependencies = new_deps
|
||||||
session.commit()
|
# Commit happens automatically on context manager exit
|
||||||
|
|
||||||
return json.dumps({
|
return json.dumps({
|
||||||
"success": True,
|
"success": True,
|
||||||
"feature_id": feature_id,
|
"feature_id": feature_id,
|
||||||
"dependencies": feature.dependencies
|
"dependencies": new_deps
|
||||||
})
|
})
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
session.rollback()
|
|
||||||
return json.dumps({"error": f"Failed to add dependency: {str(e)}"})
|
return json.dumps({"error": f"Failed to add dependency: {str(e)}"})
|
||||||
finally:
|
|
||||||
session.close()
|
|
||||||
|
|
||||||
|
|
||||||
@mcp.tool()
|
@mcp.tool()
|
||||||
@@ -721,8 +756,9 @@ def feature_remove_dependency(
|
|||||||
Returns:
|
Returns:
|
||||||
JSON with success status and updated dependencies list, or error message
|
JSON with success status and updated dependencies list, or error message
|
||||||
"""
|
"""
|
||||||
session = get_session()
|
|
||||||
try:
|
try:
|
||||||
|
# Use atomic transaction for consistent read-modify-write
|
||||||
|
with atomic_transaction(_session_maker) as session:
|
||||||
feature = session.query(Feature).filter(Feature.id == feature_id).first()
|
feature = session.query(Feature).filter(Feature.id == feature_id).first()
|
||||||
if not feature:
|
if not feature:
|
||||||
return json.dumps({"error": f"Feature {feature_id} not found"})
|
return json.dumps({"error": f"Feature {feature_id} not found"})
|
||||||
@@ -731,20 +767,18 @@ def feature_remove_dependency(
|
|||||||
if dependency_id not in current_deps:
|
if dependency_id not in current_deps:
|
||||||
return json.dumps({"error": "Dependency does not exist"})
|
return json.dumps({"error": "Dependency does not exist"})
|
||||||
|
|
||||||
current_deps.remove(dependency_id)
|
# Remove dependency atomically
|
||||||
feature.dependencies = current_deps if current_deps else None
|
new_deps = [d for d in current_deps if d != dependency_id]
|
||||||
session.commit()
|
feature.dependencies = new_deps if new_deps else None
|
||||||
|
# Commit happens automatically on context manager exit
|
||||||
|
|
||||||
return json.dumps({
|
return json.dumps({
|
||||||
"success": True,
|
"success": True,
|
||||||
"feature_id": feature_id,
|
"feature_id": feature_id,
|
||||||
"dependencies": feature.dependencies or []
|
"dependencies": new_deps
|
||||||
})
|
})
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
session.rollback()
|
|
||||||
return json.dumps({"error": f"Failed to remove dependency: {str(e)}"})
|
return json.dumps({"error": f"Failed to remove dependency: {str(e)}"})
|
||||||
finally:
|
|
||||||
session.close()
|
|
||||||
|
|
||||||
|
|
||||||
@mcp.tool()
|
@mcp.tool()
|
||||||
@@ -897,9 +931,8 @@ def feature_set_dependencies(
|
|||||||
Returns:
|
Returns:
|
||||||
JSON with success status and updated dependencies list, or error message
|
JSON with success status and updated dependencies list, or error message
|
||||||
"""
|
"""
|
||||||
session = get_session()
|
|
||||||
try:
|
try:
|
||||||
# Security: Self-reference check
|
# Security: Self-reference check (can do before transaction)
|
||||||
if feature_id in dependency_ids:
|
if feature_id in dependency_ids:
|
||||||
return json.dumps({"error": "A feature cannot depend on itself"})
|
return json.dumps({"error": "A feature cannot depend on itself"})
|
||||||
|
|
||||||
@@ -911,6 +944,8 @@ def feature_set_dependencies(
|
|||||||
if len(dependency_ids) != len(set(dependency_ids)):
|
if len(dependency_ids) != len(set(dependency_ids)):
|
||||||
return json.dumps({"error": "Duplicate dependencies not allowed"})
|
return json.dumps({"error": "Duplicate dependencies not allowed"})
|
||||||
|
|
||||||
|
# Use atomic transaction for consistent cycle detection
|
||||||
|
with atomic_transaction(_session_maker) as session:
|
||||||
feature = session.query(Feature).filter(Feature.id == feature_id).first()
|
feature = session.query(Feature).filter(Feature.id == feature_id).first()
|
||||||
if not feature:
|
if not feature:
|
||||||
return json.dumps({"error": f"Feature {feature_id} not found"})
|
return json.dumps({"error": f"Feature {feature_id} not found"})
|
||||||
@@ -922,8 +957,8 @@ def feature_set_dependencies(
|
|||||||
return json.dumps({"error": f"Dependencies not found: {missing}"})
|
return json.dumps({"error": f"Dependencies not found: {missing}"})
|
||||||
|
|
||||||
# Check for circular dependencies
|
# Check for circular dependencies
|
||||||
|
# Within IMMEDIATE transaction, snapshot is protected by write lock
|
||||||
all_features = [f.to_dict() for f in session.query(Feature).all()]
|
all_features = [f.to_dict() for f in session.query(Feature).all()]
|
||||||
# Temporarily update the feature's dependencies for cycle check
|
|
||||||
test_features = []
|
test_features = []
|
||||||
for f in all_features:
|
for f in all_features:
|
||||||
if f["id"] == feature_id:
|
if f["id"] == feature_id:
|
||||||
@@ -932,24 +967,21 @@ def feature_set_dependencies(
|
|||||||
test_features.append(f)
|
test_features.append(f)
|
||||||
|
|
||||||
for dep_id in dependency_ids:
|
for dep_id in dependency_ids:
|
||||||
# source_id = feature_id (gaining dep), target_id = dep_id (being depended upon)
|
|
||||||
if would_create_circular_dependency(test_features, feature_id, dep_id):
|
if would_create_circular_dependency(test_features, feature_id, dep_id):
|
||||||
return json.dumps({"error": f"Cannot add dependency {dep_id}: would create circular dependency"})
|
return json.dumps({"error": f"Cannot add dependency {dep_id}: would create circular dependency"})
|
||||||
|
|
||||||
# Set dependencies
|
# Set dependencies atomically
|
||||||
feature.dependencies = sorted(dependency_ids) if dependency_ids else None
|
sorted_deps = sorted(dependency_ids) if dependency_ids else None
|
||||||
session.commit()
|
feature.dependencies = sorted_deps
|
||||||
|
# Commit happens automatically on context manager exit
|
||||||
|
|
||||||
return json.dumps({
|
return json.dumps({
|
||||||
"success": True,
|
"success": True,
|
||||||
"feature_id": feature_id,
|
"feature_id": feature_id,
|
||||||
"dependencies": feature.dependencies or []
|
"dependencies": sorted_deps or []
|
||||||
})
|
})
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
session.rollback()
|
|
||||||
return json.dumps({"error": f"Failed to set dependencies: {str(e)}"})
|
return json.dumps({"error": f"Failed to set dependencies: {str(e)}"})
|
||||||
finally:
|
|
||||||
session.close()
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
if __name__ == "__main__":
|
||||||
|
|||||||
File diff suppressed because it is too large
36  progress.py
@@ -10,12 +10,21 @@ import json
 import os
 import sqlite3
 import urllib.request
+from contextlib import closing
 from datetime import datetime, timezone
 from pathlib import Path
 
 WEBHOOK_URL = os.environ.get("PROGRESS_N8N_WEBHOOK_URL")
 PROGRESS_CACHE_FILE = ".progress_cache"
 
+# SQLite connection settings for parallel mode safety
+SQLITE_TIMEOUT = 30  # seconds to wait for locks
+
+
+def _get_connection(db_file: Path) -> sqlite3.Connection:
+    """Get a SQLite connection with proper timeout settings for parallel mode."""
+    return sqlite3.connect(db_file, timeout=SQLITE_TIMEOUT)
+
+
 def has_features(project_dir: Path) -> bool:
     """
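A small sketch of why the helper matters under parallel writers: `timeout=30` makes a reader wait out another process's write lock instead of failing immediately with "database is locked", and `closing()` guarantees the handle is released even on exceptions (the path here is illustrative):

    import sqlite3
    from contextlib import closing
    from pathlib import Path

    db = Path("features.db")  # illustrative path; the project resolves it via autocoder_paths
    with closing(sqlite3.connect(db, timeout=30)) as conn:
        (count,) = conn.execute("SELECT COUNT(*) FROM features").fetchone()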
@@ -31,25 +40,23 @@ def has_features(project_dir: Path) -> bool:
 
     Returns False if no features exist (initializer needs to run).
     """
-    import sqlite3
-
     # Check legacy JSON file first
     json_file = project_dir / "feature_list.json"
     if json_file.exists():
         return True
 
     # Check SQLite database
-    db_file = project_dir / "features.db"
+    from autocoder_paths import get_features_db_path
+    db_file = get_features_db_path(project_dir)
     if not db_file.exists():
         return False
 
     try:
-        conn = sqlite3.connect(db_file)
-        cursor = conn.cursor()
-        cursor.execute("SELECT COUNT(*) FROM features")
-        count = cursor.fetchone()[0]
-        conn.close()
-        return count > 0
+        with closing(_get_connection(db_file)) as conn:
+            cursor = conn.cursor()
+            cursor.execute("SELECT COUNT(*) FROM features")
+            count: int = cursor.fetchone()[0]
+            return bool(count > 0)
     except Exception:
         # Database exists but can't be read or has no features table
         return False
@@ -65,12 +72,13 @@ def count_passing_tests(project_dir: Path) -> tuple[int, int, int]:
     Returns:
         (passing_count, in_progress_count, total_count)
     """
-    db_file = project_dir / "features.db"
+    from autocoder_paths import get_features_db_path
+    db_file = get_features_db_path(project_dir)
     if not db_file.exists():
         return 0, 0, 0
 
     try:
-        conn = sqlite3.connect(db_file)
+        with closing(_get_connection(db_file)) as conn:
             cursor = conn.cursor()
             # Single aggregate query instead of 3 separate COUNT queries
             # Handle case where in_progress column doesn't exist yet (legacy DBs)
@@ -98,7 +106,6 @@ def count_passing_tests(project_dir: Path) -> tuple[int, int, int]:
             total = row[0] or 0
             passing = row[1] or 0
             in_progress = 0
-        conn.close()
         return passing, in_progress, total
     except Exception as e:
         print(f"[Database error in count_passing_tests: {e}]")
@@ -115,12 +122,13 @@ def get_all_passing_features(project_dir: Path) -> list[dict]:
     Returns:
         List of dicts with id, category, name for each passing feature
     """
-    db_file = project_dir / "features.db"
+    from autocoder_paths import get_features_db_path
+    db_file = get_features_db_path(project_dir)
     if not db_file.exists():
         return []
 
     try:
-        conn = sqlite3.connect(db_file)
+        with closing(_get_connection(db_file)) as conn:
             cursor = conn.cursor()
             cursor.execute(
                 "SELECT id, category, name FROM features WHERE passes = 1 ORDER BY priority ASC"
@@ -129,7 +137,6 @@ def get_all_passing_features(project_dir: Path) -> list[dict]:
                 {"id": row[0], "category": row[1], "name": row[2]}
                 for row in cursor.fetchall()
             ]
-        conn.close()
         return features
     except Exception:
         return []
@@ -140,7 +147,8 @@ def send_progress_webhook(passing: int, total: int, project_dir: Path) -> None:
|
|||||||
if not WEBHOOK_URL:
|
if not WEBHOOK_URL:
|
||||||
return # Webhook not configured
|
return # Webhook not configured
|
||||||
|
|
||||||
cache_file = project_dir / PROGRESS_CACHE_FILE
|
from autocoder_paths import get_progress_cache_path
|
||||||
|
cache_file = get_progress_cache_path(project_dir)
|
||||||
previous = 0
|
previous = 0
|
||||||
previous_passing_ids = set()
|
previous_passing_ids = set()
|
||||||
|
|
||||||
|
|||||||
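The hunks above replace bare sqlite3.connect(...) / conn.close() pairs with a closing(...) context manager, so the connection is released even when a query raises. A minimal sketch of the pattern, assuming a _get_connection helper that simply wraps sqlite3.connect (the real helper may configure timeouts or row factories):

    import sqlite3
    from contextlib import closing
    from pathlib import Path

    def _get_connection(db_file: Path) -> sqlite3.Connection:
        # Hypothetical stand-in for the project's helper, shown only to keep
        # the example self-contained.
        return sqlite3.connect(db_file)

    def has_any_features(db_file: Path) -> bool:
        # closing() guarantees conn.close() runs even if execute() raises,
        # which the explicit conn.close() calls removed above could not do.
        with closing(_get_connection(db_file)) as conn:
            cursor = conn.cursor()
            cursor.execute("SELECT COUNT(*) FROM features")
            count: int = cursor.fetchone()[0]
            return count > 0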
187  prompts.py
@@ -9,6 +9,7 @@ Fallback chain:
 2. Base template: .claude/templates/{name}.template.md
 """

+import re
 import shutil
 from pathlib import Path

@@ -18,7 +19,8 @@ TEMPLATES_DIR = Path(__file__).parent / ".claude" / "templates"

 def get_project_prompts_dir(project_dir: Path) -> Path:
     """Get the prompts directory for a specific project."""
-    return project_dir / "prompts"
+    from autocoder_paths import get_prompts_dir
+    return get_prompts_dir(project_dir)


 def load_prompt(name: str, project_dir: Path | None = None) -> str:
@@ -69,43 +71,120 @@ def get_initializer_prompt(project_dir: Path | None = None) -> str:
     return load_prompt("initializer_prompt", project_dir)


-def get_coding_prompt(project_dir: Path | None = None) -> str:
-    """Load the coding agent prompt (project-specific if available)."""
-    return load_prompt("coding_prompt", project_dir)
+def _strip_browser_testing_sections(prompt: str) -> str:
+    """Strip browser automation and Playwright testing instructions from prompt.
+
+    Used in YOLO mode where browser testing is skipped entirely. Replaces
+    browser-related sections with a brief YOLO-mode note while preserving
+    all non-testing instructions (implementation, git, progress notes, etc.).
+
+    Args:
+        prompt: The full coding prompt text.
+
+    Returns:
+        The prompt with browser testing sections replaced by YOLO guidance.
+    """
+    original_prompt = prompt
+
+    # Replace STEP 5 (browser automation verification) with YOLO note
+    prompt = re.sub(
+        r"### STEP 5: VERIFY WITH BROWSER AUTOMATION.*?(?=### STEP 5\.5:)",
+        "### STEP 5: VERIFY FEATURE (YOLO MODE)\n\n"
+        "**YOLO mode is active.** Skip browser automation testing. "
+        "Instead, verify your feature works by ensuring:\n"
+        "- Code compiles without errors (lint and type-check pass)\n"
+        "- Server starts without errors after your changes\n"
+        "- No obvious runtime errors in server logs\n\n",
+        prompt,
+        flags=re.DOTALL,
+    )
+
+    # Replace the screenshots-only marking rule with YOLO-appropriate wording
+    prompt = prompt.replace(
+        "**ONLY MARK A FEATURE AS PASSING AFTER VERIFICATION WITH SCREENSHOTS.**",
+        "**YOLO mode: Mark a feature as passing after lint/type-check succeeds and server starts cleanly.**",
+    )
+
+    # Replace the BROWSER AUTOMATION reference section
+    prompt = re.sub(
+        r"## BROWSER AUTOMATION\n\n.*?(?=---)",
+        "## VERIFICATION (YOLO MODE)\n\n"
+        "Browser automation is disabled in YOLO mode. "
+        "Verify features by running lint, type-check, and confirming the dev server starts without errors.\n\n",
+        prompt,
+        flags=re.DOTALL,
+    )
+
+    # In STEP 4, replace browser automation reference with YOLO guidance
+    prompt = prompt.replace(
+        "2. Test manually using browser automation (see Step 5)",
+        "2. Verify code compiles (lint and type-check pass)",
+    )
+
+    if prompt == original_prompt:
+        print("[YOLO] Warning: No browser testing sections found to strip. "
+              "Project-specific prompt may need manual YOLO adaptation.")
+
+    return prompt


-def get_testing_prompt(project_dir: Path | None = None, testing_feature_id: int | None = None) -> str:
-    """Load the testing agent prompt (project-specific if available).
+def get_coding_prompt(project_dir: Path | None = None, yolo_mode: bool = False) -> str:
+    """Load the coding agent prompt (project-specific if available).

     Args:
         project_dir: Optional project directory for project-specific prompts
-        testing_feature_id: If provided, the pre-assigned feature ID to test.
-            The orchestrator claims the feature before spawning the agent.
+        yolo_mode: If True, strip browser automation / Playwright testing
+            instructions and replace with YOLO-mode guidance. This reduces
+            prompt tokens since YOLO mode skips all browser testing anyway.

     Returns:
-        The testing prompt, with pre-assigned feature instructions if applicable.
+        The coding prompt, optionally stripped of testing instructions.
+    """
+    prompt = load_prompt("coding_prompt", project_dir)
+
+    if yolo_mode:
+        prompt = _strip_browser_testing_sections(prompt)
+
+    return prompt
+
+
+def get_testing_prompt(
+    project_dir: Path | None = None,
+    testing_feature_id: int | None = None,
+    testing_feature_ids: list[int] | None = None,
+) -> str:
+    """Load the testing agent prompt (project-specific if available).
+
+    Supports both single-feature and multi-feature testing modes. When
+    testing_feature_ids is provided, the template's {{TESTING_FEATURE_IDS}}
+    placeholder is replaced with the comma-separated list. Falls back to
+    the legacy single-feature header when only testing_feature_id is given.
+
+    Args:
+        project_dir: Optional project directory for project-specific prompts
+        testing_feature_id: If provided, the pre-assigned feature ID to test (legacy single mode).
+        testing_feature_ids: If provided, a list of feature IDs to test (batch mode).
+            Takes precedence over testing_feature_id when both are set.
+
+    Returns:
+        The testing prompt, with feature assignment instructions populated.
     """
     base_prompt = load_prompt("testing_prompt", project_dir)

+    # Batch mode: replace the {{TESTING_FEATURE_IDS}} placeholder in the template
+    if testing_feature_ids is not None and len(testing_feature_ids) > 0:
+        ids_str = ", ".join(str(fid) for fid in testing_feature_ids)
+        return base_prompt.replace("{{TESTING_FEATURE_IDS}}", ids_str)
+
+    # Legacy single-feature mode: prepend header and replace placeholder
     if testing_feature_id is not None:
-        # Prepend pre-assigned feature instructions
-        pre_assigned_header = f"""## ASSIGNED FEATURE
-
-**You are assigned to regression test Feature #{testing_feature_id}.**
-
-### Your workflow:
-1. Call `feature_get_by_id` with ID {testing_feature_id} to get the feature details
-2. Verify the feature through the UI using browser automation
-3. If regression found, call `feature_mark_failing` with feature_id={testing_feature_id}
-4. Exit when done (no cleanup needed)
-
----
-
-"""
-        return pre_assigned_header + base_prompt
-
-    return base_prompt
+        # Replace the placeholder with the single ID for template consistency
+        base_prompt = base_prompt.replace("{{TESTING_FEATURE_IDS}}", str(testing_feature_id))
+        return base_prompt
+
+    # No feature assignment -- return template with placeholder cleared
+    return base_prompt.replace("{{TESTING_FEATURE_IDS}}", "(none assigned)")


 def get_single_feature_prompt(feature_id: int, project_dir: Path | None = None, yolo_mode: bool = False) -> str:
     """Prepend single-feature assignment header to base coding prompt.
@@ -117,13 +196,13 @@ def get_single_feature_prompt(feature_id: int, project_dir: Path | None = None,
     Args:
         feature_id: The specific feature ID to work on
         project_dir: Optional project directory for project-specific prompts
-        yolo_mode: Ignored (kept for backward compatibility). Testing is now
-            handled by separate testing agents, not YOLO prompts.
+        yolo_mode: If True, strip browser testing instructions from the base
+            coding prompt for reduced token usage in YOLO mode.

     Returns:
         The prompt with single-feature header prepended
     """
-    base_prompt = get_coding_prompt(project_dir)
+    base_prompt = get_coding_prompt(project_dir, yolo_mode=yolo_mode)

     # Minimal header - the base prompt already contains the full workflow
     single_feature_header = f"""## ASSIGNED FEATURE: #{feature_id}
@@ -138,6 +217,52 @@ If blocked, use `feature_skip` and document the blocker.
     return single_feature_header + base_prompt


+def get_batch_feature_prompt(
+    feature_ids: list[int],
+    project_dir: Path | None = None,
+    yolo_mode: bool = False,
+) -> str:
+    """Prepend batch-feature assignment header to base coding prompt.
+
+    Used in parallel mode to assign multiple features to an agent.
+    Features should be implemented sequentially in the given order.
+
+    Args:
+        feature_ids: List of feature IDs to implement in order
+        project_dir: Optional project directory for project-specific prompts
+        yolo_mode: If True, strip browser testing instructions from the base prompt
+
+    Returns:
+        The prompt with batch-feature header prepended
+    """
+    base_prompt = get_coding_prompt(project_dir, yolo_mode=yolo_mode)
+    ids_str = ", ".join(f"#{fid}" for fid in feature_ids)
+
+    batch_header = f"""## ASSIGNED FEATURES (BATCH): {ids_str}
+
+You have been assigned {len(feature_ids)} features to implement sequentially.
+Process them IN ORDER: {ids_str}
+
+### Workflow for each feature:
+1. Call `feature_claim_and_get` with the feature ID to get its details
+2. Implement the feature fully
+3. Verify it works (browser testing if applicable)
+4. Call `feature_mark_passing` to mark it complete
+5. Git commit the changes
+6. Move to the next feature
+
+### Important:
+- Complete each feature fully before starting the next
+- Mark each feature passing individually as you go
+- If blocked on a feature, use `feature_skip` and move to the next one
+- Other agents are handling other features - focus only on yours
+
+---
+
+"""
+    return batch_header + base_prompt
+
+
 def get_app_spec(project_dir: Path) -> str:
     """
     Load the app spec from the project.
@@ -190,9 +315,9 @@ def scaffold_project_prompts(project_dir: Path) -> Path:
     project_prompts = get_project_prompts_dir(project_dir)
     project_prompts.mkdir(parents=True, exist_ok=True)

-    # Create .autocoder directory for configuration files
-    autocoder_dir = project_dir / ".autocoder"
-    autocoder_dir.mkdir(parents=True, exist_ok=True)
+    # Create .autocoder directory with .gitignore for runtime files
+    from autocoder_paths import ensure_autocoder_dir
+    autocoder_dir = ensure_autocoder_dir(project_dir)

     # Define template mappings: (source_template, destination_name)
     templates = [
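The reworked get_testing_prompt resolves the {{TESTING_FEATURE_IDS}} placeholder in three ways: a batch list, a single legacy ID, or no assignment at all. A small sketch of the same substitution logic, using an inline template string instead of the real testing_prompt template file:

    def fill_feature_ids(
        template: str,
        feature_ids: list[int] | None = None,
        feature_id: int | None = None,
    ) -> str:
        # Batch mode takes precedence: join the IDs into a comma-separated list.
        if feature_ids:
            return template.replace("{{TESTING_FEATURE_IDS}}", ", ".join(str(f) for f in feature_ids))
        # Legacy single-feature mode reuses the same placeholder.
        if feature_id is not None:
            return template.replace("{{TESTING_FEATURE_IDS}}", str(feature_id))
        # No assignment: make the gap explicit instead of leaving the placeholder behind.
        return template.replace("{{TESTING_FEATURE_IDS}}", "(none assigned)")

    template = "Regression test these features: {{TESTING_FEATURE_IDS}}"
    print(fill_feature_ids(template, feature_ids=[3, 7, 12]))
    # Regression test these features: 3, 7, 12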
132  rate_limit_utils.py  (new file)
@@ -0,0 +1,132 @@
"""
Rate Limit Utilities
====================

Shared utilities for detecting and handling API rate limits.
Used by both agent.py (production) and test_rate_limit_utils.py (tests).
"""

import random
import re
from typing import Optional

# Regex patterns for rate limit detection (used in both exception messages and response text)
# These patterns use word boundaries to avoid false positives like "PR #429" or "please wait while I..."
RATE_LIMIT_REGEX_PATTERNS = [
    r"\brate[_\s]?limit",  # "rate limit", "rate_limit", "ratelimit"
    r"\btoo\s+many\s+requests",  # "too many requests"
    r"\bhttp\s*429\b",  # "http 429", "http429"
    r"\bstatus\s*429\b",  # "status 429", "status429"
    r"\berror\s*429\b",  # "error 429", "error429"
    r"\b429\s+too\s+many",  # "429 too many"
    r"\b(?:server|api|system)\s+(?:is\s+)?overloaded\b",  # "server is overloaded", "api overloaded"
    r"\bquota\s*exceeded\b",  # "quota exceeded"
]

# Compiled regex for efficient matching
_RATE_LIMIT_REGEX = re.compile(
    "|".join(RATE_LIMIT_REGEX_PATTERNS),
    re.IGNORECASE
)


def parse_retry_after(error_message: str) -> Optional[int]:
    """
    Extract retry-after seconds from various error message formats.

    Handles common formats:
    - "Retry-After: 60"
    - "retry after 60 seconds"
    - "try again in 5 seconds"
    - "30 seconds remaining"

    Args:
        error_message: The error message to parse

    Returns:
        Seconds to wait, or None if not parseable.
    """
    # Patterns require explicit "seconds" or "s" unit, OR no unit at all (end of string/sentence)
    # This prevents matching "30 minutes" or "1 hour" since those have non-seconds units
    patterns = [
        r"retry.?after[:\s]+(\d+)\s*(?:seconds?|s\b)",  # Requires seconds unit
        r"retry.?after[:\s]+(\d+)(?:\s*$|\s*[,.])",  # Or end of string/sentence
        r"try again in\s+(\d+)\s*(?:seconds?|s\b)",  # Requires seconds unit
        r"try again in\s+(\d+)(?:\s*$|\s*[,.])",  # Or end of string/sentence
        r"(\d+)\s*seconds?\s*(?:remaining|left|until)",
    ]

    for pattern in patterns:
        match = re.search(pattern, error_message, re.IGNORECASE)
        if match:
            return int(match.group(1))

    return None


def is_rate_limit_error(error_message: str) -> bool:
    """
    Detect if an error message indicates a rate limit.

    Uses regex patterns with word boundaries to avoid false positives
    like "PR #429", "please wait while I...", or "Node v14.29.0".

    Args:
        error_message: The error message to check

    Returns:
        True if the message indicates a rate limit, False otherwise.
    """
    return bool(_RATE_LIMIT_REGEX.search(error_message))


def calculate_rate_limit_backoff(retries: int) -> int:
    """
    Calculate exponential backoff with jitter for rate limits.

    Base formula: min(15 * 2^retries, 3600)
    Jitter: adds 0-30% random jitter to prevent thundering herd.
    Base sequence: ~15-20s, ~30-40s, ~60-78s, ~120-156s, ...

    The lower starting delay (15s vs 60s) allows faster recovery from
    transient rate limits, while jitter prevents synchronized retries
    when multiple agents hit limits simultaneously.

    Args:
        retries: Number of consecutive rate limit retries (0-indexed)

    Returns:
        Delay in seconds (clamped to 1-3600 range, with jitter)
    """
    base = int(min(max(15 * (2 ** retries), 1), 3600))
    jitter = random.uniform(0, base * 0.3)
    return int(base + jitter)


def calculate_error_backoff(retries: int) -> int:
    """
    Calculate linear backoff for non-rate-limit errors.

    Formula: min(30 * retries, 300) - caps at 5 minutes
    Sequence: 30s, 60s, 90s, 120s, ... 300s

    Args:
        retries: Number of consecutive error retries (1-indexed)

    Returns:
        Delay in seconds (clamped to 1-300 range)
    """
    return min(max(30 * retries, 1), 300)


def clamp_retry_delay(delay_seconds: int) -> int:
    """
    Clamp a retry delay to a safe range (1-3600 seconds).

    Args:
        delay_seconds: The raw delay value

    Returns:
        Delay clamped to 1-3600 seconds
    """
    return min(max(delay_seconds, 1), 3600)
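A short usage sketch of the new module wired into a retry loop. The call_agent callable and the loop itself are placeholders; only the rate_limit_utils calls mirror the file above.

    import time

    from rate_limit_utils import (
        calculate_error_backoff,
        calculate_rate_limit_backoff,
        clamp_retry_delay,
        is_rate_limit_error,
        parse_retry_after,
    )

    def run_with_retries(call_agent, max_retries: int = 5) -> str:
        # call_agent is a hypothetical zero-argument callable that raises on failure.
        for attempt in range(max_retries):
            try:
                return call_agent()
            except Exception as exc:
                message = str(exc)
                if is_rate_limit_error(message):
                    # Prefer the server-provided hint, otherwise exponential backoff with jitter.
                    delay = parse_retry_after(message) or calculate_rate_limit_backoff(attempt)
                else:
                    # Non-rate-limit errors back off linearly and cap at 5 minutes.
                    delay = calculate_error_backoff(attempt + 1)
                time.sleep(clamp_retry_delay(delay))
        raise RuntimeError("agent call kept failing after retries")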
19  registry.py
@@ -17,8 +17,7 @@ from pathlib import Path
 from typing import Any

 from sqlalchemy import Column, DateTime, Integer, String, create_engine, text
-from sqlalchemy.ext.declarative import declarative_base
-from sqlalchemy.orm import sessionmaker
+from sqlalchemy.orm import DeclarativeBase, sessionmaker

 # Module logger
 logger = logging.getLogger(__name__)
@@ -39,7 +38,17 @@ AVAILABLE_MODELS = [
 VALID_MODELS = [m["id"] for m in AVAILABLE_MODELS]

 # Default model and settings
-DEFAULT_MODEL = "claude-opus-4-5-20251101"
+# Respect ANTHROPIC_DEFAULT_OPUS_MODEL env var for Foundry/custom deployments
+# Guard against empty/whitespace values by trimming and falling back when blank
+_env_default_model = os.getenv("ANTHROPIC_DEFAULT_OPUS_MODEL")
+if _env_default_model is not None:
+    _env_default_model = _env_default_model.strip()
+DEFAULT_MODEL = _env_default_model or "claude-opus-4-5-20251101"
+
+# Ensure env-provided DEFAULT_MODEL is in VALID_MODELS for validation consistency
+# (idempotent: only adds if missing, doesn't alter AVAILABLE_MODELS semantics)
+if DEFAULT_MODEL and DEFAULT_MODEL not in VALID_MODELS:
+    VALID_MODELS.append(DEFAULT_MODEL)
 DEFAULT_YOLO_MODE = False

 # SQLite connection settings
@@ -75,7 +84,9 @@ class RegistryPermissionDenied(RegistryError):
 # SQLAlchemy Model
 # =============================================================================

-Base = declarative_base()
+class Base(DeclarativeBase):
+    """SQLAlchemy 2.0 style declarative base."""
+    pass


 class Project(Base):
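The registry change reads ANTHROPIC_DEFAULT_OPUS_MODEL and falls back to the hard-coded ID when the variable is unset or blank. A condensed sketch of that guard, with the fallback and model list reduced to what the diff shows:

    import os

    FALLBACK_MODEL = "claude-opus-4-5-20251101"  # fallback ID taken from the diff above
    VALID_MODELS = [FALLBACK_MODEL]

    def resolve_default_model() -> str:
        # Treat unset, empty, and whitespace-only values the same way.
        env_value = (os.getenv("ANTHROPIC_DEFAULT_OPUS_MODEL") or "").strip()
        model = env_value or FALLBACK_MODEL
        # Keep validation consistent: an env-provided model must still pass VALID_MODELS checks.
        if model not in VALID_MODELS:
            VALID_MODELS.append(model)
        return model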
@@ -15,3 +15,4 @@ pyyaml>=6.0.0
 ruff>=0.8.0
 mypy>=1.13.0
 pytest>=8.0.0
+types-PyYAML>=6.0.0
207  security.py
@@ -97,6 +97,31 @@ BLOCKED_COMMANDS = {
     "ufw",
 }

+# Sensitive directories (relative to home) that should never be exposed.
+# Used by both the EXTRA_READ_PATHS validator (client.py) and the filesystem
+# browser API (server/routers/filesystem.py) to block credential/key directories.
+# This is the single source of truth -- import from here in both places.
+#
+# SENSITIVE_DIRECTORIES is the union of the previous filesystem browser blocklist
+# (filesystem.py) and the previous EXTRA_READ_PATHS blocklist (client.py).
+# Some entries are new to each consumer -- this is intentional for defense-in-depth.
+SENSITIVE_DIRECTORIES = {
+    ".ssh",
+    ".aws",
+    ".azure",
+    ".kube",
+    ".gnupg",
+    ".gpg",
+    ".password-store",
+    ".docker",
+    ".config/gcloud",
+    ".config/gh",
+    ".npmrc",
+    ".pypirc",
+    ".netrc",
+    ".terraform",
+}
+
 # Commands that trigger emphatic warnings but CAN be approved (Phase 3)
 # For now, these are blocked like BLOCKED_COMMANDS until Phase 3 implements approval
 DANGEROUS_COMMANDS = {
@@ -413,24 +438,6 @@ def validate_init_script(command_string: str) -> tuple[bool, str]:
         return False, f"Only ./init.sh is allowed, got: {script}"


-def get_command_for_validation(cmd: str, segments: list[str]) -> str:
-    """
-    Find the specific command segment that contains the given command.
-
-    Args:
-        cmd: The command name to find
-        segments: List of command segments
-
-    Returns:
-        The segment containing the command, or empty string if not found
-    """
-    for segment in segments:
-        segment_commands = extract_commands(segment)
-        if cmd in segment_commands:
-            return segment
-    return ""
-
-
 def matches_pattern(command: str, pattern: str) -> bool:
     """
     Check if a command matches a pattern.
@@ -472,6 +479,75 @@ def matches_pattern(command: str, pattern: str) -> bool:
     return False


+def _validate_command_list(commands: list, config_path: Path, field_name: str) -> bool:
+    """
+    Validate a list of command entries from a YAML config.
+
+    Each entry must be a dict with a non-empty string 'name' field.
+    Used by both load_org_config() and load_project_commands() to avoid
+    duplicating the same validation logic.
+
+    Args:
+        commands: List of command entries to validate
+        config_path: Path to the config file (for log messages)
+        field_name: Name of the YAML field being validated (e.g., 'allowed_commands', 'commands')
+
+    Returns:
+        True if all entries are valid, False otherwise
+    """
+    if not isinstance(commands, list):
+        logger.warning(f"Config at {config_path}: '{field_name}' must be a list")
+        return False
+    for i, cmd in enumerate(commands):
+        if not isinstance(cmd, dict):
+            logger.warning(f"Config at {config_path}: {field_name}[{i}] must be a dict")
+            return False
+        if "name" not in cmd:
+            logger.warning(f"Config at {config_path}: {field_name}[{i}] missing 'name'")
+            return False
+        if not isinstance(cmd["name"], str) or cmd["name"].strip() == "":
+            logger.warning(f"Config at {config_path}: {field_name}[{i}] has invalid 'name'")
+            return False
+    return True
+
+
+def _validate_pkill_processes(config: dict, config_path: Path) -> Optional[list[str]]:
+    """
+    Validate and normalize pkill_processes from a YAML config.
+
+    Each entry must be a non-empty string matching VALID_PROCESS_NAME_PATTERN
+    (alphanumeric, dots, underscores, hyphens only -- no regex metacharacters).
+    Used by both load_org_config() and load_project_commands().
+
+    Args:
+        config: Parsed YAML config dict that may contain 'pkill_processes'
+        config_path: Path to the config file (for log messages)
+
+    Returns:
+        Normalized list of process names, or None if validation fails.
+        Returns an empty list if 'pkill_processes' is not present.
+    """
+    if "pkill_processes" not in config:
+        return []
+
+    processes = config["pkill_processes"]
+    if not isinstance(processes, list):
+        logger.warning(f"Config at {config_path}: 'pkill_processes' must be a list")
+        return None
+
+    normalized = []
+    for i, proc in enumerate(processes):
+        if not isinstance(proc, str):
+            logger.warning(f"Config at {config_path}: pkill_processes[{i}] must be a string")
+            return None
+        proc = proc.strip()
+        if not proc or not VALID_PROCESS_NAME_PATTERN.fullmatch(proc):
+            logger.warning(f"Config at {config_path}: pkill_processes[{i}] has invalid value '{proc}'")
+            return None
+        normalized.append(proc)
+    return normalized
+
+
 def get_org_config_path() -> Path:
     """
     Get the organization-level config file path.
@@ -513,20 +589,7 @@ def load_org_config() -> Optional[dict]:

     # Validate allowed_commands if present
     if "allowed_commands" in config:
-        allowed = config["allowed_commands"]
-        if not isinstance(allowed, list):
-            logger.warning(f"Org config at {config_path}: 'allowed_commands' must be a list")
-            return None
-        for i, cmd in enumerate(allowed):
-            if not isinstance(cmd, dict):
-                logger.warning(f"Org config at {config_path}: allowed_commands[{i}] must be a dict")
-                return None
-            if "name" not in cmd:
-                logger.warning(f"Org config at {config_path}: allowed_commands[{i}] missing 'name'")
-                return None
-            # Validate that name is a non-empty string
-            if not isinstance(cmd["name"], str) or cmd["name"].strip() == "":
-                logger.warning(f"Org config at {config_path}: allowed_commands[{i}] has invalid 'name'")
+        if not _validate_command_list(config["allowed_commands"], config_path, "allowed_commands"):
             return None

     # Validate blocked_commands if present
@@ -541,23 +604,10 @@ def load_org_config() -> Optional[dict]:
         return None

     # Validate pkill_processes if present
-    if "pkill_processes" in config:
-        processes = config["pkill_processes"]
-        if not isinstance(processes, list):
-            logger.warning(f"Org config at {config_path}: 'pkill_processes' must be a list")
-            return None
-        # Normalize and validate each process name against safe pattern
-        normalized = []
-        for i, proc in enumerate(processes):
-            if not isinstance(proc, str):
-                logger.warning(f"Org config at {config_path}: pkill_processes[{i}] must be a string")
-                return None
-            proc = proc.strip()
-            # Block empty strings and regex metacharacters
-            if not proc or not VALID_PROCESS_NAME_PATTERN.fullmatch(proc):
-                logger.warning(f"Org config at {config_path}: pkill_processes[{i}] has invalid value '{proc}'")
-                return None
-            normalized.append(proc)
+    normalized = _validate_pkill_processes(config, config_path)
+    if normalized is None:
+        return None
+    if normalized:
         config["pkill_processes"] = normalized

     return config
@@ -603,46 +653,21 @@ def load_project_commands(project_dir: Path) -> Optional[dict]:
         return None

     commands = config.get("commands", [])
-    if not isinstance(commands, list):
-        logger.warning(f"Project config at {config_path}: 'commands' must be a list")
-        return None

     # Enforce 100 command limit
-    if len(commands) > 100:
+    if isinstance(commands, list) and len(commands) > 100:
         logger.warning(f"Project config at {config_path} exceeds 100 command limit ({len(commands)} commands)")
         return None

-    # Validate each command entry
-    for i, cmd in enumerate(commands):
-        if not isinstance(cmd, dict):
-            logger.warning(f"Project config at {config_path}: commands[{i}] must be a dict")
-            return None
-        if "name" not in cmd:
-            logger.warning(f"Project config at {config_path}: commands[{i}] missing 'name'")
-            return None
-        # Validate name is a non-empty string
-        if not isinstance(cmd["name"], str) or cmd["name"].strip() == "":
-            logger.warning(f"Project config at {config_path}: commands[{i}] has invalid 'name'")
-            return None
+    # Validate each command entry using shared helper
+    if not _validate_command_list(commands, config_path, "commands"):
+        return None

     # Validate pkill_processes if present
-    if "pkill_processes" in config:
-        processes = config["pkill_processes"]
-        if not isinstance(processes, list):
-            logger.warning(f"Project config at {config_path}: 'pkill_processes' must be a list")
-            return None
-        # Normalize and validate each process name against safe pattern
-        normalized = []
-        for i, proc in enumerate(processes):
-            if not isinstance(proc, str):
-                logger.warning(f"Project config at {config_path}: pkill_processes[{i}] must be a string")
-                return None
-            proc = proc.strip()
-            # Block empty strings and regex metacharacters
-            if not proc or not VALID_PROCESS_NAME_PATTERN.fullmatch(proc):
-                logger.warning(f"Project config at {config_path}: pkill_processes[{i}] has invalid value '{proc}'")
-                return None
-            normalized.append(proc)
+    normalized = _validate_pkill_processes(config, config_path)
+    if normalized is None:
+        return None
+    if normalized:
         config["pkill_processes"] = normalized

     return config
@@ -659,8 +684,12 @@ def validate_project_command(cmd_config: dict) -> tuple[bool, str]:
     """
     Validate a single command entry from project config.

+    Checks that the command has a valid name and is not in any blocklist.
+    Called during hierarchy resolution to gate each project command before
+    it is added to the effective allowed set.
+
     Args:
-        cmd_config: Dict with command configuration (name, description, args)
+        cmd_config: Dict with command configuration (name, description)

     Returns:
         Tuple of (is_valid, error_message)
@@ -690,15 +719,6 @@ def validate_project_command(cmd_config: dict) -> tuple[bool, str]:
     if "description" in cmd_config and not isinstance(cmd_config["description"], str):
         return False, "Description must be a string"

-    # Args validation (Phase 1 - just check structure)
-    if "args" in cmd_config:
-        args = cmd_config["args"]
-        if not isinstance(args, list):
-            return False, "Args must be a list"
-        for arg in args:
-            if not isinstance(arg, str):
-                return False, "Each arg must be a string"
-
     return True, ""


@@ -899,8 +919,13 @@ async def bash_security_hook(input_data, tool_use_id=None, context=None):

     # Additional validation for sensitive commands
     if cmd in COMMANDS_NEEDING_EXTRA_VALIDATION:
-        # Find the specific segment containing this command
-        cmd_segment = get_command_for_validation(cmd, segments)
+        # Find the specific segment containing this command by searching
+        # each segment's extracted commands for a match
+        cmd_segment = ""
+        for segment in segments:
+            if cmd in extract_commands(segment):
+                cmd_segment = segment
+                break
         if not cmd_segment:
             cmd_segment = command  # Fallback to full command
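The new _validate_command_list helper accepts only a list of dicts, each with a non-empty string 'name'. A quick illustration of the shapes it accepts and rejects, using a trimmed stand-in for the helper:

    import logging
    from pathlib import Path

    logging.basicConfig(level=logging.WARNING)
    logger = logging.getLogger(__name__)

    def validate_command_list(commands: object, config_path: Path, field_name: str) -> bool:
        # Mirrors the checks in the diff: a list of dicts, each with a non-empty string 'name'.
        if not isinstance(commands, list):
            logger.warning(f"Config at {config_path}: '{field_name}' must be a list")
            return False
        for i, cmd in enumerate(commands):
            if not isinstance(cmd, dict) or not isinstance(cmd.get("name"), str) or not cmd["name"].strip():
                logger.warning(f"Config at {config_path}: {field_name}[{i}] is invalid")
                return False
        return True

    cfg = Path("commands.yaml")  # hypothetical path, used only in log messages
    print(validate_command_list([{"name": "npm"}, {"name": "npx"}], cfg, "allowed_commands"))  # True
    print(validate_command_list([{"name": "  "}], cfg, "allowed_commands"))                    # False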
@@ -7,6 +7,7 @@ Provides REST API, WebSocket, and static file serving.
 """

 import asyncio
+import logging
 import os
 import shutil
 import sys
@@ -42,6 +43,7 @@ from .routers import (
 )
 from .schemas import SetupStatus
 from .services.assistant_chat_session import cleanup_all_sessions as cleanup_assistant_sessions
+from .services.chat_constants import ROOT_DIR
 from .services.dev_server_manager import (
     cleanup_all_devservers,
     cleanup_orphaned_devserver_locks,
@@ -53,7 +55,6 @@ from .services.terminal_manager import cleanup_all_terminals
 from .websocket import project_websocket

 # Paths
-ROOT_DIR = Path(__file__).parent.parent
 UI_DIST_DIR = ROOT_DIR / "ui" / "dist"


@@ -88,10 +89,19 @@ app = FastAPI(
     lifespan=lifespan,
 )

+# Module logger
+logger = logging.getLogger(__name__)
+
 # Check if remote access is enabled via environment variable
 # Set by start_ui.py when --host is not 127.0.0.1
 ALLOW_REMOTE = os.environ.get("AUTOCODER_ALLOW_REMOTE", "").lower() in ("1", "true", "yes")

+if ALLOW_REMOTE:
+    logger.warning(
+        "ALLOW_REMOTE is enabled. Terminal WebSocket is exposed without sandboxing. "
+        "Only use this in trusted network environments."
+    )
+
 # CORS - allow all origins when remote access is enabled, otherwise localhost only
 if ALLOW_REMOTE:
     app.add_middleware(
@@ -222,7 +232,14 @@ if UI_DIST_DIR.exists():
             raise HTTPException(status_code=404)

         # Try to serve the file directly
-        file_path = UI_DIST_DIR / path
+        file_path = (UI_DIST_DIR / path).resolve()
+
+        # Ensure resolved path is within UI_DIST_DIR (prevent path traversal)
+        try:
+            file_path.relative_to(UI_DIST_DIR.resolve())
+        except ValueError:
+            raise HTTPException(status_code=404)
+
         if file_path.exists() and file_path.is_file():
             return FileResponse(file_path)
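The static-file hunk resolves the requested path and serves it only if it stays inside UI_DIST_DIR. The core check is Path.relative_to, which raises ValueError when the resolved target escapes the base directory; a standalone sketch:

    from pathlib import Path

    def is_within(base: Path, requested: str) -> bool:
        # Resolve symlinks and ".." segments before comparing against the base.
        target = (base / requested).resolve()
        try:
            target.relative_to(base.resolve())
            return True
        except ValueError:
            # The resolved path escaped the base directory (e.g. "../../etc/passwd").
            return False

    ui_dist = Path("ui/dist")  # example base directory
    print(is_within(ui_dist, "assets/index.js"))   # True
    print(is_within(ui_dist, "../../etc/passwd"))  # False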
@@ -6,31 +6,22 @@ API endpoints for agent control (start/stop/pause/resume).
 Uses project registry for path lookups.
 """

-import re
 from pathlib import Path

 from fastapi import APIRouter, HTTPException

 from ..schemas import AgentActionResponse, AgentStartRequest, AgentStatus
+from ..services.chat_constants import ROOT_DIR
 from ..services.process_manager import get_manager
+from ..utils.project_helpers import get_project_path as _get_project_path
+from ..utils.validation import validate_project_name


-def _get_project_path(project_name: str) -> Path:
-    """Get project path from registry."""
-    import sys
-    root = Path(__file__).parent.parent.parent
-    if str(root) not in sys.path:
-        sys.path.insert(0, str(root))
-
-    from registry import get_project_path
-    return get_project_path(project_name)
-
-
-def _get_settings_defaults() -> tuple[bool, str, int]:
+def _get_settings_defaults() -> tuple[bool, str, int, bool, int]:
     """Get defaults from global settings.

     Returns:
-        Tuple of (yolo_mode, model, testing_agent_ratio)
+        Tuple of (yolo_mode, model, testing_agent_ratio, playwright_headless, batch_size)
     """
     import sys
     root = Path(__file__).parent.parent.parent
@@ -49,24 +40,18 @@ def _get_settings_defaults() -> tuple[bool, str, int]:
     except (ValueError, TypeError):
         testing_agent_ratio = 1

-    return yolo_mode, model, testing_agent_ratio
+    playwright_headless = (settings.get("playwright_headless") or "true").lower() == "true"
+
+    try:
+        batch_size = int(settings.get("batch_size", "3"))
+    except (ValueError, TypeError):
+        batch_size = 3
+
+    return yolo_mode, model, testing_agent_ratio, playwright_headless, batch_size


 router = APIRouter(prefix="/api/projects/{project_name}/agent", tags=["agent"])

-# Root directory for process manager
-ROOT_DIR = Path(__file__).parent.parent.parent
-
-
-def validate_project_name(name: str) -> str:
-    """Validate and sanitize project name to prevent path traversal."""
-    if not re.match(r'^[a-zA-Z0-9_-]{1,50}$', name):
-        raise HTTPException(
-            status_code=400,
-            detail="Invalid project name"
-        )
-    return name
-
-
 def get_project_manager(project_name: str):
     """Get the process manager for a project."""
@@ -111,18 +96,22 @@ async def start_agent(
     manager = get_project_manager(project_name)

     # Get defaults from global settings if not provided in request
-    default_yolo, default_model, default_testing_ratio = _get_settings_defaults()
+    default_yolo, default_model, default_testing_ratio, playwright_headless, default_batch_size = _get_settings_defaults()

     yolo_mode = request.yolo_mode if request.yolo_mode is not None else default_yolo
     model = request.model if request.model else default_model
     max_concurrency = request.max_concurrency or 1
     testing_agent_ratio = request.testing_agent_ratio if request.testing_agent_ratio is not None else default_testing_ratio

+    batch_size = default_batch_size
+
     success, message = await manager.start(
         yolo_mode=yolo_mode,
         model=model,
         max_concurrency=max_concurrency,
         testing_agent_ratio=testing_agent_ratio,
+        playwright_headless=playwright_headless,
+        batch_size=batch_size,
     )

     # Notify scheduler of manual start (to prevent auto-stop during scheduled window)
@@ -7,8 +7,6 @@ WebSocket and REST endpoints for the read-only project assistant.

 import json
 import logging
-import re
-from pathlib import Path
 from typing import Optional

 from fastapi import APIRouter, HTTPException, WebSocket, WebSocketDisconnect
@@ -27,30 +25,13 @@ from ..services.assistant_database import (
     get_conversation,
     get_conversations,
 )
+from ..utils.project_helpers import get_project_path as _get_project_path
+from ..utils.validation import is_valid_project_name as validate_project_name

 logger = logging.getLogger(__name__)

 router = APIRouter(prefix="/api/assistant", tags=["assistant-chat"])

-# Root directory
-ROOT_DIR = Path(__file__).parent.parent.parent
-
-
-def _get_project_path(project_name: str) -> Optional[Path]:
-    """Get project path from registry."""
-    import sys
-    root = Path(__file__).parent.parent.parent
-    if str(root) not in sys.path:
-        sys.path.insert(0, str(root))
-
-    from registry import get_project_path
-    return get_project_path(project_name)
-
-
-def validate_project_name(name: str) -> bool:
-    """Validate project name to prevent path traversal."""
-    return bool(re.match(r'^[a-zA-Z0-9_-]{1,50}$', name))
-
-
 # ============================================================================
 # Pydantic Models
@@ -145,9 +126,9 @@ async def create_project_conversation(project_name: str):

     conversation = create_conversation(project_dir, project_name)
     return ConversationSummary(
-        id=conversation.id,
-        project_name=conversation.project_name,
-        title=conversation.title,
+        id=int(conversation.id),
+        project_name=str(conversation.project_name),
+        title=str(conversation.title) if conversation.title else None,
         created_at=conversation.created_at.isoformat() if conversation.created_at else None,
         updated_at=conversation.updated_at.isoformat() if conversation.updated_at else None,
         message_count=0,
@@ -6,7 +6,7 @@ API endpoints for dev server control (start/stop) and configuration.
 Uses project registry for path lookups and project_config for command detection.
 """

-import re
+import logging
 import sys
 from pathlib import Path

@@ -26,38 +26,22 @@ from ..services.project_config import (
     get_project_config,
     set_dev_command,
 )
+from ..utils.project_helpers import get_project_path as _get_project_path
+from ..utils.validation import validate_project_name

-# Add root to path for registry import
+# Add root to path for security module import
 _root = Path(__file__).parent.parent.parent
 if str(_root) not in sys.path:
     sys.path.insert(0, str(_root))

-from registry import get_project_path as registry_get_project_path
+from security import extract_commands, get_effective_commands, is_command_allowed

-
-def _get_project_path(project_name: str) -> Path | None:
-    """Get project path from registry."""
-    return registry_get_project_path(project_name)
-
+logger = logging.getLogger(__name__)

 router = APIRouter(prefix="/api/projects/{project_name}/devserver", tags=["devserver"])

-
-# ============================================================================
-# Helper Functions
-# ============================================================================
-
-
-def validate_project_name(name: str) -> str:
-    """Validate and sanitize project name to prevent path traversal."""
-    if not re.match(r'^[a-zA-Z0-9_-]{1,50}$', name):
-        raise HTTPException(
-            status_code=400,
-            detail="Invalid project name"
-        )
-    return name
-

 def get_project_dir(project_name: str) -> Path:
     """
     Get the validated project directory for a project name.
@@ -106,6 +90,45 @@ def get_project_devserver_manager(project_name: str):
     return get_devserver_manager(project_name, project_dir)


+def validate_dev_command(command: str, project_dir: Path) -> None:
+    """
+    Validate a dev server command against the security allowlist.
+
+    Extracts all commands from the shell string and checks each against
+    the effective allowlist (global + org + project). Raises HTTPException
+    if any command is blocked or not allowed.
+
+    Args:
+        command: The shell command string to validate
+        project_dir: Project directory for loading project-level allowlists
+
+    Raises:
+        HTTPException 400: If the command fails validation
+    """
+    commands = extract_commands(command)
+    if not commands:
+        raise HTTPException(
+            status_code=400,
+            detail="Could not parse command for security validation"
+        )
+
+    allowed_commands, blocked_commands = get_effective_commands(project_dir)
+
+    for cmd in commands:
+        if cmd in blocked_commands:
+            logger.warning("Blocked dev server command '%s' (in blocklist) for project dir %s", cmd, project_dir)
+            raise HTTPException(
+                status_code=400,
+                detail=f"Command '{cmd}' is blocked and cannot be used as a dev server command"
+            )
+        if not is_command_allowed(cmd, allowed_commands):
+            logger.warning("Rejected dev server command '%s' (not in allowlist) for project dir %s", cmd, project_dir)
+            raise HTTPException(
+                status_code=400,
+                detail=f"Command '{cmd}' is not in the allowed commands list"
+            )
+
+
 # ============================================================================
 # Endpoints
 # ============================================================================
@@ -167,7 +190,10 @@ async def start_devserver(
             detail="No dev command available. Configure a custom command or ensure project type can be detected."
         )

-    # Now command is definitely str
+    # Validate command against security allowlist before execution
+    validate_dev_command(command, project_dir)
+
+    # Now command is definitely str and validated
     success, message = await manager.start(command)

     return DevServerActionResponse(
@@ -258,6 +284,9 @@ async def update_devserver_config(
         except ValueError as e:
             raise HTTPException(status_code=400, detail=str(e))
     else:
+        # Validate command against security allowlist before persisting
+        validate_dev_command(update.custom_command, project_dir)
+
         # Set the custom command
         try:
             set_dev_command(project_dir, update.custom_command)
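validate_dev_command splits the shell string into individual commands and checks each one against the effective blocklist and allowlist before the dev server command is run or persisted. A simplified sketch with plain sets standing in for extract_commands and get_effective_commands; the real parser in security.py handles far more shell syntax:

    import shlex

    def check_dev_command(command: str, allowed: set[str], blocked: set[str]) -> None:
        # Rough stand-in for extract_commands(): first token of each "&&" segment.
        segments = [seg.strip() for seg in command.split("&&")]
        names = [shlex.split(seg)[0] for seg in segments if seg]
        if not names:
            raise ValueError("Could not parse command for security validation")
        for name in names:
            if name in blocked:
                raise ValueError(f"Command '{name}' is blocked")
            if name not in allowed:
                raise ValueError(f"Command '{name}' is not in the allowed commands list")

    # Passes: both "npm" invocations resolve to an allowed command name.
    check_dev_command("npm install && npm run dev", allowed={"npm", "node"}, blocked={"rm"})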
@@ -8,7 +8,6 @@ Allows adding multiple features to existing projects via natural language.

 import json
 import logging
-from pathlib import Path
 from typing import Optional

 from fastapi import APIRouter, HTTPException, WebSocket, WebSocketDisconnect
@@ -22,27 +21,13 @@ from ..services.expand_chat_session import (
     list_expand_sessions,
     remove_expand_session,
 )
+from ..utils.project_helpers import get_project_path as _get_project_path
 from ..utils.validation import validate_project_name

 logger = logging.getLogger(__name__)

 router = APIRouter(prefix="/api/expand", tags=["expand-project"])

-# Root directory
-ROOT_DIR = Path(__file__).parent.parent.parent
-
-
-def _get_project_path(project_name: str) -> Path:
-    """Get project path from registry."""
-    import sys
-    root = Path(__file__).parent.parent.parent
-    if str(root) not in sys.path:
-        sys.path.insert(0, str(root))
-
-    from registry import get_project_path
-    return get_project_path(project_name)
-

 # ============================================================================
@@ -136,7 +121,8 @@ async def expand_project_websocket(websocket: WebSocket, project_name: str):
         return

     # Verify project has app_spec.txt
-    spec_path = project_dir / "prompts" / "app_spec.txt"
+    from autocoder_paths import get_prompts_dir
+    spec_path = get_prompts_dir(project_dir) / "app_spec.txt"
     if not spec_path.exists():
         await websocket.close(code=4004, reason="Project has no spec. Create spec first.")
         return
@@ -8,10 +8,12 @@ API endpoints for feature/test case management.
 import logging
 from contextlib import contextmanager
 from pathlib import Path
+from typing import Literal

 from fastapi import APIRouter, HTTPException

 from ..schemas import (
+    DependencyGraphEdge,
     DependencyGraphNode,
     DependencyGraphResponse,
     DependencyUpdate,

@@ -22,6 +24,7 @@ from ..schemas import (
     FeatureResponse,
     FeatureUpdate,
 )
+from ..utils.project_helpers import get_project_path as _get_project_path
 from ..utils.validation import validate_project_name

 # Lazy imports to avoid circular dependencies

@@ -31,17 +34,6 @@ _Feature = None
 logger = logging.getLogger(__name__)


-def _get_project_path(project_name: str) -> Path:
-    """Get project path from registry."""
-    import sys
-    root = Path(__file__).parent.parent.parent
-    if str(root) not in sys.path:
-        sys.path.insert(0, str(root))
-
-    from registry import get_project_path
-    return get_project_path(project_name)
-
-
 def _get_db_classes():
     """Lazy import of database classes."""
     global _create_database, _Feature

@@ -71,6 +63,9 @@ def get_db_session(project_dir: Path):
     session = SessionLocal()
     try:
         yield session
+    except Exception:
+        session.rollback()
+        raise
     finally:
         session.close()

@@ -131,7 +126,8 @@ async def list_features(project_name: str):
     if not project_dir.exists():
         raise HTTPException(status_code=404, detail="Project directory not found")

-    db_file = project_dir / "features.db"
+    from autocoder_paths import get_features_db_path
+    db_file = get_features_db_path(project_dir)
     if not db_file.exists():
         return FeatureListResponse(pending=[], in_progress=[], done=[])

@@ -326,7 +322,8 @@ async def get_dependency_graph(project_name: str):
     if not project_dir.exists():
         raise HTTPException(status_code=404, detail="Project directory not found")

-    db_file = project_dir / "features.db"
+    from autocoder_paths import get_features_db_path
+    db_file = get_features_db_path(project_dir)
     if not db_file.exists():
         return DependencyGraphResponse(nodes=[], edges=[])

@@ -344,6 +341,7 @@ async def get_dependency_graph(project_name: str):
             deps = f.dependencies or []
             blocking = [d for d in deps if d not in passing_ids]

+            status: Literal["pending", "in_progress", "done", "blocked"]
             if f.passes:
                 status = "done"
             elif blocking:

@@ -363,7 +361,7 @@ async def get_dependency_graph(project_name: str):
             ))

             for dep_id in deps:
-                edges.append({"source": dep_id, "target": f.id})
+                edges.append(DependencyGraphEdge(source=dep_id, target=f.id))

         return DependencyGraphResponse(nodes=nodes, edges=edges)
     except HTTPException:

@@ -390,7 +388,8 @@ async def get_feature(project_name: str, feature_id: int):
     if not project_dir.exists():
         raise HTTPException(status_code=404, detail="Project directory not found")

-    db_file = project_dir / "features.db"
+    from autocoder_paths import get_features_db_path
+    db_file = get_features_db_path(project_dir)
     if not db_file.exists():
         raise HTTPException(status_code=404, detail="No features database found")

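The dependency-graph hunk pre-declares `status` with a `typing.Literal` annotation so a type checker can verify every branch assigns one of the allowed values, and it swaps the raw edge dict for the typed `DependencyGraphEdge` schema. A minimal standalone illustration of the `Literal` part (the schema classes themselves are not shown in this diff):

```python
from typing import Literal

Status = Literal["pending", "in_progress", "done", "blocked"]


def classify(passes: bool, blocking: list[int]) -> Status:
    # Declaring the narrow type up front lets mypy check each branch.
    status: Status
    if passes:
        status = "done"
    elif blocking:
        status = "blocked"
    else:
        status = "pending"
    return status


print(classify(False, [3]))  # blocked
```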
@@ -6,6 +6,7 @@ API endpoints for browsing the filesystem for project folder selection.
 Provides cross-platform support for Windows, macOS, and Linux.
 """

+import functools
 import logging
 import os
 import re

@@ -14,6 +15,8 @@ from pathlib import Path

 from fastapi import APIRouter, HTTPException, Query

+from security import SENSITIVE_DIRECTORIES
+
 # Module logger
 logger = logging.getLogger(__name__)

@@ -77,17 +80,10 @@ LINUX_BLOCKED = {
     "/opt",
 }

-# Universal blocked paths (relative to home directory)
-UNIVERSAL_BLOCKED_RELATIVE = {
-    ".ssh",
-    ".aws",
-    ".gnupg",
-    ".config/gh",
-    ".netrc",
-    ".docker",
-    ".kube",
-    ".terraform",
-}
+# Universal blocked paths (relative to home directory).
+# Delegates to the canonical SENSITIVE_DIRECTORIES set in security.py so that
+# the filesystem browser and the EXTRA_READ_PATHS validator share one source of truth.
+UNIVERSAL_BLOCKED_RELATIVE = SENSITIVE_DIRECTORIES

 # Patterns for files that should not be shown
 HIDDEN_PATTERNS = [

@@ -99,8 +95,14 @@ HIDDEN_PATTERNS = [
 ]


-def get_blocked_paths() -> set[Path]:
-    """Get the set of blocked paths for the current platform."""
+@functools.lru_cache(maxsize=1)
+def get_blocked_paths() -> frozenset[Path]:
+    """
+    Get the set of blocked paths for the current platform.
+
+    Cached because the platform and home directory do not change at runtime,
+    and this function is called once per directory entry in list_directory().
+    """
     home = Path.home()
     blocked = set()

@@ -119,7 +121,7 @@ def get_blocked_paths() -> set[Path]:
     for rel in UNIVERSAL_BLOCKED_RELATIVE:
         blocked.add((home / rel).resolve())

-    return blocked
+    return frozenset(blocked)


 def is_path_blocked(path: Path) -> bool:
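The change above memoizes `get_blocked_paths()` with `functools.lru_cache(maxsize=1)` and returns a `frozenset` so the cached value cannot be mutated by callers. A small self-contained example of the same pattern:

```python
import functools
from pathlib import Path


@functools.lru_cache(maxsize=1)
def get_blocked() -> frozenset[Path]:
    # Computed once per process; the cached value is shared by every caller,
    # so returning an immutable frozenset prevents accidental mutation.
    home = Path.home()
    return frozenset({home / ".ssh", home / ".aws"})


assert get_blocked() is get_blocked()  # repeat calls return the cached object
```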
@@ -10,6 +10,7 @@ import re
 import shutil
 import sys
 from pathlib import Path
+from typing import Any, Callable

 from fastapi import APIRouter, HTTPException

@@ -24,11 +25,12 @@ from ..schemas import (
 )

 # Lazy imports to avoid circular dependencies
+# These are initialized by _init_imports() before first use.
 _imports_initialized = False
-_check_spec_exists = None
-_scaffold_project_prompts = None
-_get_project_prompts_dir = None
-_count_passing_tests = None
+_check_spec_exists: Callable[..., Any] | None = None
+_scaffold_project_prompts: Callable[..., Any] | None = None
+_get_project_prompts_dir: Callable[..., Any] | None = None
+_count_passing_tests: Callable[..., Any] | None = None


 def _init_imports():

@@ -99,6 +101,7 @@ def validate_project_name(name: str) -> str:
 def get_project_stats(project_dir: Path) -> ProjectStats:
     """Get statistics for a project."""
     _init_imports()
+    assert _count_passing_tests is not None  # guaranteed by _init_imports()
     passing, in_progress, total = _count_passing_tests(project_dir)
     percentage = (passing / total * 100) if total > 0 else 0.0
     return ProjectStats(

@@ -113,6 +116,7 @@ def get_project_stats(project_dir: Path) -> ProjectStats:
 async def list_projects():
     """List all registered projects."""
     _init_imports()
+    assert _check_spec_exists is not None  # guaranteed by _init_imports()
     (_, _, _, list_registered_projects, validate_project_path,
      get_project_concurrency, _) = _get_registry_functions()

@@ -145,6 +149,7 @@ async def list_projects():
 async def create_project(project: ProjectCreate):
     """Create a new project at the specified path."""
     _init_imports()
+    assert _scaffold_project_prompts is not None  # guaranteed by _init_imports()
     (register_project, _, get_project_path, list_registered_projects,
      _, _, _) = _get_registry_functions()

@@ -225,6 +230,8 @@ async def create_project(project: ProjectCreate):
 async def get_project(name: str):
     """Get detailed information about a project."""
     _init_imports()
+    assert _check_spec_exists is not None  # guaranteed by _init_imports()
+    assert _get_project_prompts_dir is not None  # guaranteed by _init_imports()
     (_, _, get_project_path, _, _, get_project_concurrency, _) = _get_registry_functions()

     name = validate_project_name(name)

@@ -269,8 +276,8 @@ async def delete_project(name: str, delete_files: bool = False):
         raise HTTPException(status_code=404, detail=f"Project '{name}' not found")

     # Check if agent is running
-    lock_file = project_dir / ".agent.lock"
-    if lock_file.exists():
+    from autocoder_paths import has_agent_running
+    if has_agent_running(project_dir):
         raise HTTPException(
             status_code=409,
             detail="Cannot delete project while agent is running. Stop the agent first."

@@ -296,6 +303,7 @@ async def delete_project(name: str, delete_files: bool = False):
 async def get_project_prompts(name: str):
     """Get the content of project prompt files."""
     _init_imports()
+    assert _get_project_prompts_dir is not None  # guaranteed by _init_imports()
     (_, _, get_project_path, _, _, _, _) = _get_registry_functions()

     name = validate_project_name(name)

@@ -307,7 +315,7 @@ async def get_project_prompts(name: str):
     if not project_dir.exists():
         raise HTTPException(status_code=404, detail="Project directory not found")

-    prompts_dir = _get_project_prompts_dir(project_dir)
+    prompts_dir: Path = _get_project_prompts_dir(project_dir)

     def read_file(filename: str) -> str:
         filepath = prompts_dir / filename

@@ -329,6 +337,7 @@ async def get_project_prompts(name: str):
 async def update_project_prompts(name: str, prompts: ProjectPromptsUpdate):
     """Update project prompt files."""
     _init_imports()
+    assert _get_project_prompts_dir is not None  # guaranteed by _init_imports()
     (_, _, get_project_path, _, _, _, _) = _get_registry_functions()

     name = validate_project_name(name)

@@ -398,8 +407,8 @@ async def reset_project(name: str, full_reset: bool = False):
         raise HTTPException(status_code=404, detail="Project directory not found")

     # Check if agent is running
-    lock_file = project_dir / ".agent.lock"
-    if lock_file.exists():
+    from autocoder_paths import has_agent_running
+    if has_agent_running(project_dir):
         raise HTTPException(
             status_code=409,
             detail="Cannot reset project while agent is running. Stop the agent first."

@@ -415,36 +424,58 @@ async def reset_project(name: str, full_reset: bool = False):

     deleted_files: list[str] = []

-    # Files to delete in quick reset
-    quick_reset_files = [
-        "features.db",
-        "features.db-wal",  # WAL mode journal file
-        "features.db-shm",  # WAL mode shared memory file
-        "assistant.db",
-        "assistant.db-wal",
-        "assistant.db-shm",
-        ".claude_settings.json",
-        ".claude_assistant_settings.json",
+    from autocoder_paths import (
+        get_assistant_db_path,
+        get_claude_assistant_settings_path,
+        get_claude_settings_path,
+        get_features_db_path,
+    )
+
+    # Build list of files to delete using path helpers (finds files at current location)
+    # Plus explicit old-location fallbacks for backward compatibility
+    db_path = get_features_db_path(project_dir)
+    asst_path = get_assistant_db_path(project_dir)
+    reset_files: list[Path] = [
+        db_path,
+        db_path.with_suffix(".db-wal"),
+        db_path.with_suffix(".db-shm"),
+        asst_path,
+        asst_path.with_suffix(".db-wal"),
+        asst_path.with_suffix(".db-shm"),
+        get_claude_settings_path(project_dir),
+        get_claude_assistant_settings_path(project_dir),
+        # Also clean old root-level locations if they exist
+        project_dir / "features.db",
+        project_dir / "features.db-wal",
+        project_dir / "features.db-shm",
+        project_dir / "assistant.db",
+        project_dir / "assistant.db-wal",
+        project_dir / "assistant.db-shm",
+        project_dir / ".claude_settings.json",
+        project_dir / ".claude_assistant_settings.json",
     ]

-    for filename in quick_reset_files:
-        file_path = project_dir / filename
+    for file_path in reset_files:
         if file_path.exists():
             try:
+                relative = file_path.relative_to(project_dir)
                 file_path.unlink()
-                deleted_files.append(filename)
+                deleted_files.append(str(relative))
             except Exception as e:
-                raise HTTPException(status_code=500, detail=f"Failed to delete {filename}: {e}")
+                raise HTTPException(status_code=500, detail=f"Failed to delete {file_path.name}: {e}")

     # Full reset: also delete prompts directory
     if full_reset:
-        prompts_dir = project_dir / "prompts"
-        if prompts_dir.exists():
-            try:
-                shutil.rmtree(prompts_dir)
-                deleted_files.append("prompts/")
-            except Exception as e:
-                raise HTTPException(status_code=500, detail=f"Failed to delete prompts/: {e}")
+        from autocoder_paths import get_prompts_dir
+        # Delete prompts from both possible locations
+        for prompts_dir in [get_prompts_dir(project_dir), project_dir / "prompts"]:
+            if prompts_dir.exists():
+                try:
+                    relative = prompts_dir.relative_to(project_dir)
+                    shutil.rmtree(prompts_dir)
+                    deleted_files.append(f"{relative}/")
+                except Exception as e:
+                    raise HTTPException(status_code=500, detail=f"Failed to delete prompts: {e}")

     return {
         "success": True,

@@ -458,6 +489,8 @@ async def reset_project(name: str, full_reset: bool = False):
 async def update_project_settings(name: str, settings: ProjectSettingsUpdate):
     """Update project-level settings (concurrency, etc.)."""
     _init_imports()
+    assert _check_spec_exists is not None  # guaranteed by _init_imports()
+    assert _get_project_prompts_dir is not None  # guaranteed by _init_imports()
     (_, _, get_project_path, _, _, get_project_concurrency,
      set_project_concurrency) = _get_registry_functions()

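The reset hunk derives the SQLite WAL/SHM sidecar paths with `Path.with_suffix()` and reports deletions relative to the project root with `Path.relative_to()`. A quick check of how those two pathlib calls behave (paths here are illustrative only):

```python
from pathlib import Path

project_dir = Path("/projects/demo")
db_path = project_dir / ".autocoder" / "features.db"

# with_suffix replaces the final ".db" suffix, which is how the WAL/SHM
# sidecar files that sit next to a SQLite database are named.
print(db_path.with_suffix(".db-wal"))    # /projects/demo/.autocoder/features.db-wal
print(db_path.with_suffix(".db-shm"))    # /projects/demo/.autocoder/features.db-shm

# relative_to strips the project prefix for user-facing reporting.
print(db_path.relative_to(project_dir))  # .autocoder/features.db
```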
@@ -6,12 +6,10 @@ API endpoints for managing agent schedules.
 Provides CRUD operations for time-based schedule configuration.
 """

-import re
-import sys
 from contextlib import contextmanager
 from datetime import datetime, timedelta, timezone
 from pathlib import Path
-from typing import Generator, Tuple
+from typing import TYPE_CHECKING, Generator, Tuple

 from fastapi import APIRouter, HTTPException
 from sqlalchemy.orm import Session

@@ -26,17 +24,21 @@ from ..schemas import (
     ScheduleResponse,
     ScheduleUpdate,
 )
+from ..utils.project_helpers import get_project_path as _get_project_path
+from ..utils.validation import validate_project_name

+if TYPE_CHECKING:
+    from api.database import Schedule as ScheduleModel


-def _get_project_path(project_name: str) -> Path:
-    """Get project path from registry."""
-    root = Path(__file__).parent.parent.parent
-    if str(root) not in sys.path:
-        sys.path.insert(0, str(root))
-
-    from registry import get_project_path
-    return get_project_path(project_name)
+def _schedule_to_response(schedule: "ScheduleModel") -> ScheduleResponse:
+    """Convert a Schedule ORM object to a ScheduleResponse Pydantic model.
+
+    SQLAlchemy Column descriptors resolve to Python types at instance access time,
+    but mypy sees the Column[T] descriptor type. Using model_validate with
+    from_attributes handles this conversion correctly.
+    """
+    return ScheduleResponse.model_validate(schedule, from_attributes=True)

 router = APIRouter(
     prefix="/api/projects/{project_name}/schedules",

@@ -44,16 +46,6 @@ router = APIRouter(
 )


-def validate_project_name(name: str) -> str:
-    """Validate and sanitize project name to prevent path traversal."""
-    if not re.match(r'^[a-zA-Z0-9_-]{1,50}$', name):
-        raise HTTPException(
-            status_code=400,
-            detail="Invalid project name"
-        )
-    return name
-
-
 @contextmanager
 def _get_db_session(project_name: str) -> Generator[Tuple[Session, Path], None, None]:
     """Get database session for a project as a context manager.

@@ -84,6 +76,9 @@ def _get_db_session(project_name: str) -> Generator[Tuple[Session, Path], None,
     db = SessionLocal()
     try:
         yield db, project_path
+    except Exception:
+        db.rollback()
+        raise
     finally:
         db.close()

@@ -99,21 +94,7 @@ async def list_schedules(project_name: str):
         ).order_by(Schedule.start_time).all()

         return ScheduleListResponse(
-            schedules=[
-                ScheduleResponse(
-                    id=s.id,
-                    project_name=s.project_name,
-                    start_time=s.start_time,
-                    duration_minutes=s.duration_minutes,
-                    days_of_week=s.days_of_week,
-                    enabled=s.enabled,
-                    yolo_mode=s.yolo_mode,
-                    model=s.model,
-                    crash_count=s.crash_count,
-                    created_at=s.created_at,
-                )
-                for s in schedules
-            ]
+            schedules=[_schedule_to_response(s) for s in schedules]
         )


@@ -187,18 +168,7 @@ async def create_schedule(project_name: str, data: ScheduleCreate):
         except Exception as e:
             logger.error(f"Failed to start agent for schedule {schedule.id}: {e}", exc_info=True)

-    return ScheduleResponse(
-        id=schedule.id,
-        project_name=schedule.project_name,
-        start_time=schedule.start_time,
-        duration_minutes=schedule.duration_minutes,
-        days_of_week=schedule.days_of_week,
-        enabled=schedule.enabled,
-        yolo_mode=schedule.yolo_mode,
-        model=schedule.model,
-        crash_count=schedule.crash_count,
-        created_at=schedule.created_at,
-    )
+    return _schedule_to_response(schedule)


 @router.get("/next", response_model=NextRunResponse)

@@ -256,8 +226,8 @@ async def get_next_scheduled_run(project_name: str):

     return NextRunResponse(
         has_schedules=True,
-        next_start=next_start.isoformat() if (active_count == 0 and next_start) else None,
-        next_end=latest_end.isoformat() if latest_end else None,
+        next_start=next_start if active_count == 0 else None,
+        next_end=latest_end,
         is_currently_running=active_count > 0,
         active_schedule_count=active_count,
     )

@@ -277,18 +247,7 @@ async def get_schedule(project_name: str, schedule_id: int):
         if not schedule:
             raise HTTPException(status_code=404, detail="Schedule not found")

-        return ScheduleResponse(
-            id=schedule.id,
-            project_name=schedule.project_name,
-            start_time=schedule.start_time,
-            duration_minutes=schedule.duration_minutes,
-            days_of_week=schedule.days_of_week,
-            enabled=schedule.enabled,
-            yolo_mode=schedule.yolo_mode,
-            model=schedule.model,
-            crash_count=schedule.crash_count,
-            created_at=schedule.created_at,
-        )
+        return _schedule_to_response(schedule)


 @router.patch("/{schedule_id}", response_model=ScheduleResponse)

@@ -331,18 +290,7 @@ async def update_schedule(
             # Was enabled, now disabled - remove jobs
             scheduler.remove_schedule(schedule_id)

-        return ScheduleResponse(
-            id=schedule.id,
-            project_name=schedule.project_name,
-            start_time=schedule.start_time,
-            duration_minutes=schedule.duration_minutes,
-            days_of_week=schedule.days_of_week,
-            enabled=schedule.enabled,
-            yolo_mode=schedule.yolo_mode,
-            model=schedule.model,
-            crash_count=schedule.crash_count,
-            created_at=schedule.created_at,
-        )
+        return _schedule_to_response(schedule)


 @router.delete("/{schedule_id}", status_code=204)
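The schedules hunks collapse four copies of field-by-field `ScheduleResponse(...)` construction into one `_schedule_to_response()` helper built on Pydantic v2's `model_validate(..., from_attributes=True)`. A minimal standalone version of that pattern, using a plain class as a stand-in for the ORM object:

```python
from pydantic import BaseModel


class ScheduleResponse(BaseModel):
    id: int
    start_time: str
    enabled: bool


class ScheduleRow:
    """Stand-in for a SQLAlchemy model; only attribute access matters here."""
    def __init__(self) -> None:
        self.id = 1
        self.start_time = "22:00"
        self.enabled = True


def to_response(row: ScheduleRow) -> ScheduleResponse:
    # from_attributes tells Pydantic to read plain attributes rather than
    # requiring a dict, which is exactly what an ORM row provides.
    return ScheduleResponse.model_validate(row, from_attributes=True)


print(to_response(ScheduleRow()))
```

Beyond removing duplication, the helper gives the field mapping a single place to change when the schema grows.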
@@ -9,17 +9,16 @@ Settings are stored in the registry database and shared across all projects.
 import mimetypes
 import os
 import sys
-from pathlib import Path

 from fastapi import APIRouter

 from ..schemas import ModelInfo, ModelsResponse, SettingsResponse, SettingsUpdate
+from ..services.chat_constants import ROOT_DIR

 # Mimetype fix for Windows - must run before StaticFiles is mounted
 mimetypes.add_type("text/javascript", ".js", True)

-# Add root to path for registry import
-ROOT_DIR = Path(__file__).parent.parent.parent
+# Ensure root is on sys.path for registry import
 if str(ROOT_DIR) not in sys.path:
     sys.path.insert(0, str(ROOT_DIR))

@@ -92,6 +91,8 @@ async def get_settings():
         glm_mode=_is_glm_mode(),
         ollama_mode=_is_ollama_mode(),
         testing_agent_ratio=_parse_int(all_settings.get("testing_agent_ratio"), 1),
+        playwright_headless=_parse_bool(all_settings.get("playwright_headless"), default=True),
+        batch_size=_parse_int(all_settings.get("batch_size"), 3),
     )


@@ -107,6 +108,12 @@ async def update_settings(update: SettingsUpdate):
     if update.testing_agent_ratio is not None:
         set_setting("testing_agent_ratio", str(update.testing_agent_ratio))

+    if update.playwright_headless is not None:
+        set_setting("playwright_headless", "true" if update.playwright_headless else "false")
+
+    if update.batch_size is not None:
+        set_setting("batch_size", str(update.batch_size))
+
     # Return updated settings
     all_settings = get_all_settings()
     return SettingsResponse(

@@ -115,4 +122,6 @@ async def update_settings(update: SettingsUpdate):
         glm_mode=_is_glm_mode(),
         ollama_mode=_is_ollama_mode(),
         testing_agent_ratio=_parse_int(all_settings.get("testing_agent_ratio"), 1),
+        playwright_headless=_parse_bool(all_settings.get("playwright_headless"), default=True),
+        batch_size=_parse_int(all_settings.get("batch_size"), 3),
     )
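The new `playwright_headless` and `batch_size` settings round-trip through the registry as strings, which is why the response is built with `_parse_bool` / `_parse_int`. Those helpers are defined elsewhere in the file and are not shown in this diff, so the following is only a guess at their shape:

```python
# Sketch of string-settings parsers; the real _parse_bool/_parse_int in this
# route module are not shown here, so defaults and accepted spellings are assumptions.
def _parse_bool(value: str | None, default: bool = False) -> bool:
    if value is None:
        return default
    return value.strip().lower() in {"1", "true", "yes", "on"}


def _parse_int(value: str | None, default: int) -> int:
    try:
        return int(value) if value is not None else default
    except ValueError:
        return default


print(_parse_bool("true", default=False))  # True
print(_parse_int(None, 3))                 # 3
```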
@@ -7,8 +7,6 @@ WebSocket and REST endpoints for interactive spec creation with Claude.

 import json
 import logging
-import re
-from pathlib import Path
 from typing import Optional

 from fastapi import APIRouter, HTTPException, WebSocket, WebSocketDisconnect

@@ -22,30 +20,13 @@ from ..services.spec_chat_session import (
     list_sessions,
     remove_session,
 )
+from ..utils.project_helpers import get_project_path as _get_project_path
+from ..utils.validation import is_valid_project_name as validate_project_name

 logger = logging.getLogger(__name__)

 router = APIRouter(prefix="/api/spec", tags=["spec-creation"])

-# Root directory
-ROOT_DIR = Path(__file__).parent.parent.parent
-
-
-def _get_project_path(project_name: str) -> Path:
-    """Get project path from registry."""
-    import sys
-    root = Path(__file__).parent.parent.parent
-    if str(root) not in sys.path:
-        sys.path.insert(0, str(root))
-
-    from registry import get_project_path
-    return get_project_path(project_name)
-
-
-def validate_project_name(name: str) -> bool:
-    """Validate project name to prevent path traversal."""
-    return bool(re.match(r'^[a-zA-Z0-9_-]{1,50}$', name))
-
-
 # ============================================================================
 # REST Endpoints

@@ -124,7 +105,8 @@ async def get_spec_file_status(project_name: str):
     if not project_dir.exists():
         raise HTTPException(status_code=404, detail="Project directory not found")

-    status_file = project_dir / "prompts" / ".spec_status.json"
+    from autocoder_paths import get_prompts_dir
+    status_file = get_prompts_dir(project_dir) / ".spec_status.json"

     if not status_file.exists():
         return SpecFileStatus(
@@ -12,8 +12,6 @@ import base64
 import json
 import logging
 import re
-import sys
-from pathlib import Path

 from fastapi import APIRouter, HTTPException, WebSocket, WebSocketDisconnect
 from pydantic import BaseModel

@@ -27,13 +25,8 @@ from ..services.terminal_manager import (
     rename_terminal,
     stop_terminal_session,
 )
-
-# Add project root to path for registry import
-_root = Path(__file__).parent.parent.parent
-if str(_root) not in sys.path:
-    sys.path.insert(0, str(_root))
-
-from registry import get_project_path as registry_get_project_path
+from ..utils.project_helpers import get_project_path as _get_project_path
+from ..utils.validation import is_valid_project_name as validate_project_name

 logger = logging.getLogger(__name__)

@@ -48,27 +41,6 @@ class TerminalCloseCode:
     FAILED_TO_START = 4500


-def _get_project_path(project_name: str) -> Path | None:
-    """Get project path from registry."""
-    return registry_get_project_path(project_name)
-
-
-def validate_project_name(name: str) -> bool:
-    """
-    Validate project name to prevent path traversal attacks.
-
-    Allows only alphanumeric characters, underscores, and hyphens.
-    Maximum length of 50 characters.
-
-    Args:
-        name: The project name to validate
-
-    Returns:
-        True if valid, False otherwise
-    """
-    return bool(re.match(r"^[a-zA-Z0-9_-]{1,50}$", name))
-
-
 def validate_terminal_id(terminal_id: str) -> bool:
     """
     Validate terminal ID format.
@@ -398,6 +398,8 @@ class SettingsResponse(BaseModel):
     glm_mode: bool = False  # True if GLM API is configured via .env
     ollama_mode: bool = False  # True if Ollama API is configured via .env
     testing_agent_ratio: int = 1  # Regression testing agents (0-3)
+    playwright_headless: bool = True
+    batch_size: int = 3  # Features per coding agent batch (1-3)


 class ModelsResponse(BaseModel):

@@ -411,6 +413,8 @@ class SettingsUpdate(BaseModel):
     yolo_mode: bool | None = None
     model: str | None = None
     testing_agent_ratio: int | None = None  # 0-3
+    playwright_headless: bool | None = None
+    batch_size: int | None = None  # Features per agent batch (1-3)

     @field_validator('model')
     @classmethod

@@ -426,6 +430,13 @@ class SettingsUpdate(BaseModel):
             raise ValueError("testing_agent_ratio must be between 0 and 3")
         return v

+    @field_validator('batch_size')
+    @classmethod
+    def validate_batch_size(cls, v: int | None) -> int | None:
+        if v is not None and (v < 1 or v > 3):
+            raise ValueError("batch_size must be between 1 and 3")
+        return v
+

 # ============================================================================
 # Dev Server Schemas
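The schemas hunk bounds the new `batch_size` field with a Pydantic v2 `@field_validator`. A self-contained version of the same check, showing what callers see when the value is out of range:

```python
from pydantic import BaseModel, ValidationError, field_validator


class SettingsUpdate(BaseModel):
    batch_size: int | None = None  # Features per agent batch (1-3)

    @field_validator("batch_size")
    @classmethod
    def validate_batch_size(cls, v: int | None) -> int | None:
        # None means "leave unchanged"; only concrete values are range-checked.
        if v is not None and (v < 1 or v > 3):
            raise ValueError("batch_size must be between 1 and 3")
        return v


print(SettingsUpdate(batch_size=2))
try:
    SettingsUpdate(batch_size=9)
except ValidationError as exc:
    print(exc.errors()[0]["msg"])
```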
@@ -25,25 +25,13 @@ from .assistant_database import (
     create_conversation,
     get_messages,
 )
+from .chat_constants import API_ENV_VARS, ROOT_DIR

 # Load environment variables from .env file if present
 load_dotenv()

 logger = logging.getLogger(__name__)

-# Root directory of the project
-ROOT_DIR = Path(__file__).parent.parent.parent
-
-# Environment variables to pass through to Claude CLI for API configuration
-API_ENV_VARS = [
-    "ANTHROPIC_BASE_URL",
-    "ANTHROPIC_AUTH_TOKEN",
-    "API_TIMEOUT_MS",
-    "ANTHROPIC_DEFAULT_SONNET_MODEL",
-    "ANTHROPIC_DEFAULT_OPUS_MODEL",
-    "ANTHROPIC_DEFAULT_HAIKU_MODEL",
-]
-
 # Read-only feature MCP tools
 READONLY_FEATURE_MCP_TOOLS = [
     "mcp__features__feature_get_stats",

@@ -76,7 +64,8 @@ def get_system_prompt(project_name: str, project_dir: Path) -> str:
     """Generate the system prompt for the assistant with project context."""
     # Try to load app_spec.txt for context
     app_spec_content = ""
-    app_spec_path = project_dir / "prompts" / "app_spec.txt"
+    from autocoder_paths import get_prompts_dir
+    app_spec_path = get_prompts_dir(project_dir) / "app_spec.txt"
     if app_spec_path.exists():
         try:
             app_spec_content = app_spec_path.read_text(encoding="utf-8")

@@ -90,6 +79,8 @@ def get_system_prompt(project_name: str, project_dir: Path) -> str:

 Your role is to help users understand the codebase, answer questions about features, and manage the project backlog. You can READ files and CREATE/MANAGE features, but you cannot modify source code.

+You have MCP tools available for feature management. Use them directly by calling the tool -- do not suggest CLI commands, bash commands, or curl commands to the user. You can create features yourself using the feature_create and feature_create_bulk tools.
+
 ## What You CAN Do

 **Codebase Analysis (Read-Only):**

@@ -134,17 +125,21 @@ If the user asks you to modify code, explain that you're a project assistant and

 ## Creating Features

-When a user asks to add a feature, gather the following information:
-1. **Category**: A grouping like "Authentication", "API", "UI", "Database"
-2. **Name**: A concise, descriptive name
-3. **Description**: What the feature should do
-4. **Steps**: How to verify/implement the feature (as a list)
+When a user asks to add a feature, use the `feature_create` or `feature_create_bulk` MCP tools directly:
+
+For a **single feature**, call `feature_create` with:
+- category: A grouping like "Authentication", "API", "UI", "Database"
+- name: A concise, descriptive name
+- description: What the feature should do
+- steps: List of verification/implementation steps
+
+For **multiple features**, call `feature_create_bulk` with an array of feature objects.

 You can ask clarifying questions if the user's request is vague, or make reasonable assumptions for simple requests.

 **Example interaction:**
 User: "Add a feature for S3 sync"
-You: I'll create that feature. Let me add it to the backlog...
+You: I'll create that feature now.
 [calls feature_create with appropriate parameters]
 You: Done! I've added "S3 Sync Integration" to your backlog. It's now visible on the kanban board.

@@ -208,7 +203,7 @@ class AssistantChatSession:
         # Create a new conversation if we don't have one
         if is_new_conversation:
             conv = create_conversation(self.project_dir, self.project_name)
-            self.conversation_id = conv.id
+            self.conversation_id = int(conv.id)  # type coercion: Column[int] -> int
             yield {"type": "conversation_created", "conversation_id": self.conversation_id}

         # Build permissions list for assistant access (read + feature management)

@@ -229,7 +224,9 @@ class AssistantChatSession:
                 "allow": permissions_list,
             },
         }
-        settings_file = self.project_dir / ".claude_assistant_settings.json"
+        from autocoder_paths import get_claude_assistant_settings_path
+        settings_file = get_claude_assistant_settings_path(self.project_dir)
+        settings_file.parent.mkdir(parents=True, exist_ok=True)
         with open(settings_file, "w") as f:
             json.dump(security_settings, f, indent=2)

@@ -261,7 +258,11 @@ class AssistantChatSession:
         system_cli = shutil.which("claude")

         # Build environment overrides for API configuration
-        sdk_env = {var: os.getenv(var) for var in API_ENV_VARS if os.getenv(var)}
+        sdk_env: dict[str, str] = {}
+        for var in API_ENV_VARS:
+            value = os.getenv(var)
+            if value:
+                sdk_env[var] = value

         # Determine model from environment or use default
         # This allows using alternative APIs (e.g., GLM via z.ai) that may not support Claude model names

@@ -277,7 +278,7 @@ class AssistantChatSession:
             # This avoids Windows command line length limit (~8191 chars)
             setting_sources=["project"],
             allowed_tools=[*READONLY_BUILTIN_TOOLS, *ASSISTANT_FEATURE_TOOLS],
-            mcp_servers=mcp_servers,
+            mcp_servers=mcp_servers,  # type: ignore[arg-type]  # SDK accepts dict config at runtime
             permission_mode="bypassPermissions",
             max_turns=100,
             cwd=str(self.project_dir.resolve()),

@@ -303,6 +304,8 @@ class AssistantChatSession:
             greeting = f"Hello! I'm your project assistant for **{self.project_name}**. I can help you understand the codebase, explain features, and answer questions about the project. What would you like to know?"

             # Store the greeting in the database
+            # conversation_id is guaranteed non-None here (set on line 206 above)
+            assert self.conversation_id is not None
             add_message(self.project_dir, self.conversation_id, "assistant", greeting)

             yield {"type": "text", "content": greeting}
@@ -7,20 +7,28 @@ Each project has its own assistant.db file in the project directory.
 """

 import logging
+import threading
 from datetime import datetime, timezone
 from pathlib import Path
 from typing import Optional

 from sqlalchemy import Column, DateTime, ForeignKey, Integer, String, Text, create_engine, func
-from sqlalchemy.orm import declarative_base, relationship, sessionmaker
+from sqlalchemy.engine import Engine
+from sqlalchemy.orm import DeclarativeBase, relationship, sessionmaker

 logger = logging.getLogger(__name__)

-Base = declarative_base()
+class Base(DeclarativeBase):
+    """SQLAlchemy 2.0 style declarative base."""
+    pass
+
 # Engine cache to avoid creating new engines for each request
 # Key: project directory path (as posix string), Value: SQLAlchemy engine
-_engine_cache: dict[str, object] = {}
+_engine_cache: dict[str, Engine] = {}

+# Lock for thread-safe access to the engine cache
+# Prevents race conditions when multiple threads create engines simultaneously
+_cache_lock = threading.Lock()
+

 def _utc_now() -> datetime:

@@ -56,7 +64,8 @@ class ConversationMessage(Base):

 def get_db_path(project_dir: Path) -> Path:
     """Get the path to the assistant database for a project."""
-    return project_dir / "assistant.db"
+    from autocoder_paths import get_assistant_db_path
+    return get_assistant_db_path(project_dir)


 def get_engine(project_dir: Path):

@@ -64,14 +73,30 @@ def get_engine(project_dir: Path):

     Uses a cache to avoid creating new engines for each request, which improves
     performance by reusing database connections.
+
+    Thread-safe: Uses a lock to prevent race conditions when multiple threads
+    try to create engines simultaneously for the same project.
     """
     cache_key = project_dir.as_posix()

-    if cache_key not in _engine_cache:
-        db_path = get_db_path(project_dir)
-        # Use as_posix() for cross-platform compatibility with SQLite connection strings
-        db_url = f"sqlite:///{db_path.as_posix()}"
-        engine = create_engine(db_url, echo=False)
-        Base.metadata.create_all(engine)
-        _engine_cache[cache_key] = engine
-        logger.debug(f"Created new database engine for {cache_key}")
+    # Double-checked locking for thread safety and performance
+    if cache_key in _engine_cache:
+        return _engine_cache[cache_key]
+
+    with _cache_lock:
+        # Check again inside the lock in case another thread created it
+        if cache_key not in _engine_cache:
+            db_path = get_db_path(project_dir)
+            # Use as_posix() for cross-platform compatibility with SQLite connection strings
+            db_url = f"sqlite:///{db_path.as_posix()}"
+            engine = create_engine(
+                db_url,
+                echo=False,
+                connect_args={
+                    "check_same_thread": False,
+                    "timeout": 30,  # Wait up to 30s for locks
+                }
+            )
+            Base.metadata.create_all(engine)
+            _engine_cache[cache_key] = engine
+            logger.debug(f"Created new database engine for {cache_key}")

server/services/chat_constants.py (new file, 57 lines)
@@ -0,0 +1,57 @@
+"""
+Chat Session Constants
+======================
+
+Shared constants for all chat session types (assistant, spec, expand).
+
+The canonical ``API_ENV_VARS`` list lives in ``env_constants.py`` at the
+project root and is re-exported here for convenience so that existing
+imports (``from .chat_constants import API_ENV_VARS``) continue to work.
+"""
+
+import sys
+from pathlib import Path
+from typing import AsyncGenerator
+
+# -------------------------------------------------------------------
+# Root directory of the autocoder project (repository root).
+# Used throughout the server package whenever the repo root is needed.
+# -------------------------------------------------------------------
+ROOT_DIR = Path(__file__).parent.parent.parent
+
+# Ensure the project root is on sys.path so we can import env_constants
+# from the root-level module without requiring a package install.
+_root_str = str(ROOT_DIR)
+if _root_str not in sys.path:
+    sys.path.insert(0, _root_str)
+
+# -------------------------------------------------------------------
+# Environment variables forwarded to Claude CLI subprocesses.
+# Single source of truth lives in env_constants.py at the project root.
+# Re-exported here so existing ``from .chat_constants import API_ENV_VARS``
+# imports continue to work unchanged.
+# -------------------------------------------------------------------
+from env_constants import API_ENV_VARS  # noqa: E402, F401
+
+
+async def make_multimodal_message(content_blocks: list[dict]) -> AsyncGenerator[dict, None]:
+    """Yield a single multimodal user message in Claude Agent SDK format.
+
+    The Claude Agent SDK's ``query()`` method accepts either a plain string
+    or an ``AsyncIterable[dict]`` for custom message formats. This helper
+    wraps a list of content blocks (text and/or images) in the expected
+    envelope.
+
+    Args:
+        content_blocks: List of content-block dicts, e.g.
+            ``[{"type": "text", "text": "..."}, {"type": "image", ...}]``.
+
+    Yields:
+        A single dict representing the user message.
+    """
+    yield {
+        "type": "user",
+        "message": {"role": "user", "content": content_blocks},
+        "parent_tool_use_id": None,
+        "session_id": "default",
+    }
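The `get_engine()` rewrite above uses double-checked locking: a lock-free fast path for the common cache hit, then a re-check inside the lock so two threads racing on a cache miss cannot both create an engine. A reduced version of the same pattern with a plain dict (no SQLAlchemy dependency), to show why the second check matters:

```python
import threading

_cache: dict[str, object] = {}
_cache_lock = threading.Lock()


def get_resource(key: str) -> object:
    # Fast path: no lock acquisition when the value already exists.
    cached = _cache.get(key)
    if cached is not None:
        return cached

    with _cache_lock:
        # Re-check inside the lock: another thread may have created the
        # resource between the fast-path check and acquiring the lock.
        if key not in _cache:
            _cache[key] = object()  # stand-in for create_engine(...)
        return _cache[key]


print(get_resource("a") is get_resource("a"))  # True: one shared instance
```

The `check_same_thread=False` and 30-second `timeout` connect args in the real change relax SQLite's per-thread restriction and tolerate short write locks; whether those defaults suit other deployments is a separate judgment call.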
@@ -24,6 +24,7 @@ from typing import Awaitable, Callable, Literal, Set
 import psutil

 from registry import list_registered_projects
+from security import extract_commands, get_effective_commands, is_command_allowed
 from server.utils.process_utils import kill_process_tree

 logger = logging.getLogger(__name__)

@@ -114,7 +115,8 @@ class DevServerProcessManager:
         self._callbacks_lock = threading.Lock()

         # Lock file to prevent multiple instances (stored in project directory)
-        self.lock_file = self.project_dir / ".devserver.lock"
+        from autocoder_paths import get_devserver_lock_path
+        self.lock_file = get_devserver_lock_path(self.project_dir)

     @property
     def status(self) -> Literal["stopped", "running", "crashed"]:

@@ -304,6 +306,20 @@ class DevServerProcessManager:
         if not self.project_dir.exists():
             return False, f"Project directory does not exist: {self.project_dir}"

+        # Defense-in-depth: validate command against security allowlist
+        commands = extract_commands(command)
+        if not commands:
+            return False, "Could not parse command for security validation"
+
+        allowed_commands, blocked_commands = get_effective_commands(self.project_dir)
+        for cmd in commands:
+            if cmd in blocked_commands:
+                logger.warning("Blocked dev server command '%s' (in blocklist) for %s", cmd, self.project_name)
+                return False, f"Command '{cmd}' is blocked and cannot be used as a dev server command"
+            if not is_command_allowed(cmd, allowed_commands):
+                logger.warning("Rejected dev server command '%s' (not in allowlist) for %s", cmd, self.project_name)
+                return False, f"Command '{cmd}' is not in the allowed commands list"
+
         self._command = command
         self._detected_url = None  # Reset URL detection

@@ -487,8 +503,18 @@ def cleanup_orphaned_devserver_locks() -> int:
         if not project_path.exists():
             continue

-        lock_file = project_path / ".devserver.lock"
-        if not lock_file.exists():
+        # Check both legacy and new locations for lock files
+        from autocoder_paths import get_autocoder_dir
+        lock_locations = [
+            project_path / ".devserver.lock",
+            get_autocoder_dir(project_path) / ".devserver.lock",
+        ]
+        lock_file = None
+        for candidate in lock_locations:
+            if candidate.exists():
+                lock_file = candidate
+                break
+        if lock_file is None:
             continue

         try:
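The dev-server hunk adds a defense-in-depth check that parses the user-supplied command and rejects anything outside the project's effective allowlist. The real `extract_commands`, `get_effective_commands`, and `is_command_allowed` live in `security.py` and are not shown in this diff, so the sketch below uses simplified stand-ins with hypothetical allow/block sets just to illustrate the shape of the check:

```python
import shlex

# Simplified stand-ins; the real helpers and command sets come from security.py.
ALLOWED = {"npm", "pnpm", "yarn", "python"}
BLOCKED = {"rm", "curl"}


def extract_commands(command: str) -> list[str]:
    # Take the first token of each shell segment, e.g. "npm install && npm run dev".
    tokens = shlex.split(command)
    commands, expect_cmd = [], True
    for tok in tokens:
        if tok in {"&&", "||", ";", "|"}:
            expect_cmd = True
        elif expect_cmd:
            commands.append(tok)
            expect_cmd = False
    return commands


def validate(command: str) -> tuple[bool, str]:
    for cmd in extract_commands(command):
        if cmd in BLOCKED:
            return False, f"Command '{cmd}' is blocked"
        if cmd not in ALLOWED:
            return False, f"Command '{cmd}' is not in the allowed commands list"
    return True, "ok"


print(validate("npm install && npm run dev"))  # (True, 'ok')
print(validate("curl http://example.test"))    # rejected: blocked command
```

Validating at process-spawn time complements any UI-side checks: even if a request bypasses the frontend, the manager refuses to start a disallowed command.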
@@ -16,28 +16,19 @@ import threading
 import uuid
 from datetime import datetime
 from pathlib import Path
-from typing import AsyncGenerator, Optional
+from typing import Any, AsyncGenerator, Optional

 from claude_agent_sdk import ClaudeAgentOptions, ClaudeSDKClient
 from dotenv import load_dotenv

 from ..schemas import ImageAttachment
+from .chat_constants import API_ENV_VARS, ROOT_DIR, make_multimodal_message

 # Load environment variables from .env file if present
 load_dotenv()

 logger = logging.getLogger(__name__)

-# Environment variables to pass through to Claude CLI for API configuration
-API_ENV_VARS = [
-    "ANTHROPIC_BASE_URL",
-    "ANTHROPIC_AUTH_TOKEN",
-    "API_TIMEOUT_MS",
-    "ANTHROPIC_DEFAULT_SONNET_MODEL",
-    "ANTHROPIC_DEFAULT_OPUS_MODEL",
-    "ANTHROPIC_DEFAULT_HAIKU_MODEL",
-]
-
 # Feature MCP tools needed for expand session
 EXPAND_FEATURE_TOOLS = [
     "mcp__features__feature_create",

@@ -46,22 +37,6 @@ EXPAND_FEATURE_TOOLS = [
 ]


-async def _make_multimodal_message(content_blocks: list[dict]) -> AsyncGenerator[dict, None]:
-    """
-    Create an async generator that yields a properly formatted multimodal message.
-    """
-    yield {
-        "type": "user",
-        "message": {"role": "user", "content": content_blocks},
-        "parent_tool_use_id": None,
-        "session_id": "default",
-    }
-
-
-# Root directory of the project
-ROOT_DIR = Path(__file__).parent.parent.parent
-
-
 class ExpandChatSession:
     """
     Manages a project expansion conversation.

@@ -128,7 +103,8 @@ class ExpandChatSession:
             return

         # Verify project has existing spec
-        spec_path = self.project_dir / "prompts" / "app_spec.txt"
+        from autocoder_paths import get_prompts_dir
+        spec_path = get_prompts_dir(self.project_dir) / "app_spec.txt"
         if not spec_path.exists():
             yield {
                 "type": "error",

@@ -162,10 +138,13 @@ class ExpandChatSession:
                 "allow": [
                     "Read(./**)",
                     "Glob(./**)",
+                    *EXPAND_FEATURE_TOOLS,
                 ],
             },
         }
-        settings_file = self.project_dir / f".claude_settings.expand.{uuid.uuid4().hex}.json"
+        from autocoder_paths import get_expand_settings_path
+        settings_file = get_expand_settings_path(self.project_dir, uuid.uuid4().hex)
+        settings_file.parent.mkdir(parents=True, exist_ok=True)
         self._settings_file = settings_file
         with open(settings_file, "w", encoding="utf-8") as f:
             json.dump(security_settings, f, indent=2)

@@ -175,7 +154,12 @@ class ExpandChatSession:
         system_prompt = skill_content.replace("$ARGUMENTS", project_path)

         # Build environment overrides for API configuration
-        sdk_env = {var: os.getenv(var) for var in API_ENV_VARS if os.getenv(var)}
+        # Filter to only include vars that are actually set (non-None)
+        sdk_env: dict[str, str] = {}
+        for var in API_ENV_VARS:
+            value = os.getenv(var)
+            if value:
+                sdk_env[var] = value

         # Determine model from environment or use default
         # This allows using alternative APIs (e.g., GLM via z.ai) that may not support Claude model names

@@ -203,9 +187,12 @@ class ExpandChatSession:
             allowed_tools=[
                 "Read",
                 "Glob",
+                "Grep",
+                "WebFetch",
+                "WebSearch",
                 *EXPAND_FEATURE_TOOLS,
             ],
-            mcp_servers=mcp_servers,
+            mcp_servers=mcp_servers,  # type: ignore[arg-type]  # SDK accepts dict config at runtime
             permission_mode="bypassPermissions",
             max_turns=100,
             cwd=str(self.project_dir.resolve()),

@@ -299,7 +286,7 @@ class ExpandChatSession:

         # Build the message content
         if attachments and len(attachments) > 0:
-            content_blocks = []
+            content_blocks: list[dict[str, Any]] = []
             if message:
                 content_blocks.append({"type": "text", "text": message})
             for att in attachments:

@@ -311,7 +298,7 @@ class ExpandChatSession:
                         "data": att.base64Data,
                     }
                 })
-            await self.client.query(_make_multimodal_message(content_blocks))
+            await self.client.query(make_multimodal_message(content_blocks))
             logger.info(f"Sent multimodal message with {len(attachments)} image(s)")
         else:
             await self.client.query(message)
@@ -15,7 +15,7 @@ import sys
 import threading
 from datetime import datetime
 from pathlib import Path
-from typing import Awaitable, Callable, Literal, Set
+from typing import Any, Awaitable, Callable, Literal, Set

 import psutil

@@ -92,7 +92,8 @@ class AgentProcessManager:
         self._callbacks_lock = threading.Lock()

         # Lock file to prevent multiple instances (stored in project directory)
-        self.lock_file = self.project_dir / ".agent.lock"
+        from autocoder_paths import get_agent_lock_path
+        self.lock_file = get_agent_lock_path(self.project_dir)

     @property
     def status(self) -> Literal["stopped", "running", "paused", "crashed"]:

@@ -296,6 +297,8 @@ class AgentProcessManager:
         parallel_mode: bool = False,
         max_concurrency: int | None = None,
         testing_agent_ratio: int = 1,
+        playwright_headless: bool = True,
+        batch_size: int = 3,
     ) -> tuple[bool, str]:
         """
         Start the agent as a subprocess.

@@ -306,6 +309,7 @@ class AgentProcessManager:
             parallel_mode: DEPRECATED - ignored, always uses unified orchestrator
             max_concurrency: Max concurrent coding agents (1-5, default 1)
             testing_agent_ratio: Number of regression testing agents (0-3, default 1)
+            playwright_headless: If True, run browser in headless mode

         Returns:
             Tuple of (success, message)

@@ -346,18 +350,21 @@ class AgentProcessManager:
         # Add testing agent configuration
         cmd.extend(["--testing-ratio", str(testing_agent_ratio)])

+        # Add --batch-size flag for multi-feature batching
+        cmd.extend(["--batch-size", str(batch_size)])
+
         try:
             # Start subprocess with piped stdout/stderr
             # Use project_dir as cwd so Claude SDK sandbox allows access to project files
             # stdin=DEVNULL prevents blocking if Claude CLI or child process tries to read stdin
             # CREATE_NO_WINDOW on Windows prevents console window pop-ups
             # PYTHONUNBUFFERED ensures output isn't delayed
-            popen_kwargs = {
+            popen_kwargs: dict[str, Any] = {
                 "stdin": subprocess.DEVNULL,
                 "stdout": subprocess.PIPE,
                 "stderr": subprocess.STDOUT,
                 "cwd": str(self.project_dir),
-                "env": {**os.environ, "PYTHONUNBUFFERED": "1"},
+                "env": {**os.environ, "PYTHONUNBUFFERED": "1", "PLAYWRIGHT_HEADLESS": "true" if playwright_headless else "false"},
             }
             if sys.platform == "win32":
                 popen_kwargs["creationflags"] = subprocess.CREATE_NO_WINDOW

@@ -579,8 +586,18 @@ def cleanup_orphaned_locks() -> int:
         if not project_path.exists():
             continue

-        lock_file = project_path / ".agent.lock"
-        if not lock_file.exists():
+        # Check both legacy and new locations for lock files
+        from autocoder_paths import get_autocoder_dir
+        lock_locations = [
+            project_path / ".agent.lock",
+            get_autocoder_dir(project_path) / ".agent.lock",
+        ]
+        lock_file = None
+        for candidate in lock_locations:
+            if candidate.exists():
+                lock_file = candidate
+                break
+        if lock_file is None:
             continue

         try:
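The manager only sets `PLAYWRIGHT_HEADLESS` in the subprocess environment; how the agent process consumes it is not shown in this diff. A plausible reading on the agent side, assuming the literal "true"/"false" strings written above, would be:

# Assumption: a hypothetical consumer of the flag set by AgentProcessManager above.
import os

playwright_headless = os.environ.get("PLAYWRIGHT_HEADLESS", "true").lower() != "false"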
@@ -92,8 +92,9 @@ class SchedulerService:
     async def _load_project_schedules(self, project_name: str, project_dir: Path) -> int:
         """Load schedules for a single project. Returns count of schedules loaded."""
         from api.database import Schedule, create_database
+        from autocoder_paths import get_features_db_path

-        db_path = project_dir / "features.db"
+        db_path = get_features_db_path(project_dir)
         if not db_path.exists():
             return 0

@@ -567,8 +568,9 @@ class SchedulerService:
     ):
         """Check if a project should be started on server startup."""
         from api.database import Schedule, ScheduleOverride, create_database
+        from autocoder_paths import get_features_db_path

-        db_path = project_dir / "features.db"
+        db_path = get_features_db_path(project_dir)
         if not db_path.exists():
             return

@@ -13,49 +13,19 @@ import shutil
 import threading
 from datetime import datetime
 from pathlib import Path
-from typing import AsyncGenerator, Optional
+from typing import Any, AsyncGenerator, Optional

 from claude_agent_sdk import ClaudeAgentOptions, ClaudeSDKClient
 from dotenv import load_dotenv

 from ..schemas import ImageAttachment
+from .chat_constants import API_ENV_VARS, ROOT_DIR, make_multimodal_message

 # Load environment variables from .env file if present
 load_dotenv()

 logger = logging.getLogger(__name__)

-# Environment variables to pass through to Claude CLI for API configuration
-API_ENV_VARS = [
-    "ANTHROPIC_BASE_URL",
-    "ANTHROPIC_AUTH_TOKEN",
-    "API_TIMEOUT_MS",
-    "ANTHROPIC_DEFAULT_SONNET_MODEL",
-    "ANTHROPIC_DEFAULT_OPUS_MODEL",
-    "ANTHROPIC_DEFAULT_HAIKU_MODEL",
-]
-
-
-async def _make_multimodal_message(content_blocks: list[dict]) -> AsyncGenerator[dict, None]:
-    """
-    Create an async generator that yields a properly formatted multimodal message.
-
-    The Claude Agent SDK's query() method accepts either:
-    - A string (simple text)
-    - An AsyncIterable[dict] (for custom message formats)
-
-    This function wraps content blocks in the expected message format.
-    """
-    yield {
-        "type": "user",
-        "message": {"role": "user", "content": content_blocks},
-        "parent_tool_use_id": None,
-        "session_id": "default",
-    }
-
-
-# Root directory of the project
-ROOT_DIR = Path(__file__).parent.parent.parent
-
-
 class SpecChatSession:
     """

@@ -125,7 +95,8 @@ class SpecChatSession:
         # Delete app_spec.txt so Claude can create it fresh
         # The SDK requires reading existing files before writing, but app_spec.txt is created new
         # Note: We keep initializer_prompt.md so Claude can read and update the template
-        prompts_dir = self.project_dir / "prompts"
+        from autocoder_paths import get_prompts_dir
+        prompts_dir = get_prompts_dir(self.project_dir)
         app_spec_path = prompts_dir / "app_spec.txt"
         if app_spec_path.exists():
             app_spec_path.unlink()

@@ -145,7 +116,9 @@ class SpecChatSession:
                 ],
             },
         }
-        settings_file = self.project_dir / ".claude_settings.json"
+        from autocoder_paths import get_claude_settings_path
+        settings_file = get_claude_settings_path(self.project_dir)
+        settings_file.parent.mkdir(parents=True, exist_ok=True)
         with open(settings_file, "w") as f:
             json.dump(security_settings, f, indent=2)

@@ -167,7 +140,12 @@ class SpecChatSession:
         system_cli = shutil.which("claude")

         # Build environment overrides for API configuration
-        sdk_env = {var: os.getenv(var) for var in API_ENV_VARS if os.getenv(var)}
+        # Filter to only include vars that are actually set (non-None)
+        sdk_env: dict[str, str] = {}
+        for var in API_ENV_VARS:
+            value = os.getenv(var)
+            if value:
+                sdk_env[var] = value

         # Determine model from environment or use default
         # This allows using alternative APIs (e.g., GLM via z.ai) that may not support Claude model names

@@ -289,7 +267,7 @@ class SpecChatSession:
         # Build the message content
         if attachments and len(attachments) > 0:
             # Multimodal message: build content blocks array
-            content_blocks = []
+            content_blocks: list[dict[str, Any]] = []

             # Add text block if there's text
             if message:

@@ -308,7 +286,7 @@ class SpecChatSession:

             # Send multimodal content to Claude using async generator format
             # The SDK's query() accepts AsyncIterable[dict] for custom message formats
-            await self.client.query(_make_multimodal_message(content_blocks))
+            await self.client.query(make_multimodal_message(content_blocks))
             logger.info(f"Sent multimodal message with {len(attachments)} image(s)")
         else:
             # Text-only message: use string format

@@ -317,7 +295,7 @@ class SpecChatSession:
         current_text = ""

         # Track pending writes for BOTH required files
-        pending_writes = {
+        pending_writes: dict[str, dict[str, Any] | None] = {
             "app_spec": None,  # {"tool_id": ..., "path": ...}
             "initializer": None,  # {"tool_id": ..., "path": ...}
         }

@@ -392,7 +370,8 @@ class SpecChatSession:
                         logger.warning(f"Tool error: {content}")
                         # Clear any pending writes that failed
                         for key in pending_writes:
-                            if pending_writes[key] and tool_use_id == pending_writes[key].get("tool_id"):
+                            pending_write = pending_writes[key]
+                            if pending_write is not None and tool_use_id == pending_write.get("tool_id"):
                                 logger.error(f"{key} write failed: {content}")
                                 pending_writes[key] = None
                     else:
@@ -371,7 +371,7 @@ class TerminalSession:
             # Reap zombie if not already reaped
             if self._child_pid is not None:
                 try:
-                    os.waitpid(self._child_pid, os.WNOHANG)
+                    os.waitpid(self._child_pid, os.WNOHANG)  # type: ignore[attr-defined]  # Unix-only method, guarded by runtime platform selection
                 except ChildProcessError:
                     pass
                 except Exception:

@@ -736,7 +736,7 @@ async def cleanup_all_terminals() -> None:
     Called on server shutdown to ensure all PTY processes are terminated.
     """
     with _sessions_lock:
-        all_sessions = []
+        all_sessions: list[TerminalSession] = []
        for project_sessions in _sessions.values():
            all_sessions.extend(project_sessions.values())

server/utils/project_helpers.py (new file, 32 lines)
@@ -0,0 +1,32 @@
"""
Project Helper Utilities
========================

Shared project path lookup used across all server routers and websocket handlers.
Consolidates the previously duplicated _get_project_path() function.
"""

import sys
from pathlib import Path

# Ensure the project root is on sys.path so `registry` can be imported.
# This is necessary because `registry.py` lives at the repository root,
# outside the `server` package.
_root = Path(__file__).parent.parent.parent
if str(_root) not in sys.path:
    sys.path.insert(0, str(_root))

from registry import get_project_path as _registry_get_project_path


def get_project_path(project_name: str) -> Path | None:
    """Look up a project's filesystem path from the global registry.

    Args:
        project_name: The registered name of the project.

    Returns:
        The resolved ``Path`` to the project directory, or ``None`` if the
        project is not found in the registry.
    """
    return _registry_get_project_path(project_name)
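For context, the consolidated helper is imported elsewhere in this change set (the websocket module below pulls it in as `_get_project_path`). A call-site sketch against the signature above; the absolute import path is an assumption, in-package code would use a relative import:

# Usage sketch only (import path assumed).
from server.utils.project_helpers import get_project_path

project_dir = get_project_path("my-app")
if project_dir is None:
    # Project is not registered; REST handlers would typically turn this into a 404.
    ...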
@@ -1,26 +1,52 @@
 """
-Shared validation utilities for the server.
+Shared Validation Utilities
+============================
+
+Project name validation used across REST endpoints and WebSocket handlers.
+Two variants are provided:
+
+* ``is_valid_project_name`` -- returns ``bool``, suitable for WebSocket
+  handlers where raising an HTTPException is not appropriate.
+* ``validate_project_name`` -- raises ``HTTPException(400)`` on failure,
+  suitable for REST endpoint handlers.
 """

 import re

 from fastapi import HTTPException

+# Compiled once; reused by both variants.
+_PROJECT_NAME_RE = re.compile(r'^[a-zA-Z0-9_-]{1,50}$')
+
+
+def is_valid_project_name(name: str) -> bool:
+    """Check whether *name* is a valid project name.
+
+    Allows only ASCII letters, digits, hyphens, and underscores (1-50 chars).
+    Returns ``True`` if valid, ``False`` otherwise.
+
+    Use this in WebSocket handlers where you need to close the socket
+    yourself rather than raise an HTTP error.
+    """
+    return bool(_PROJECT_NAME_RE.match(name))
+
+
 def validate_project_name(name: str) -> str:
-    """
-    Validate and sanitize project name to prevent path traversal.
+    """Validate and return *name*, or raise ``HTTPException(400)``.
+
+    Suitable for REST endpoint handlers where FastAPI will convert the
+    exception into an HTTP 400 response automatically.

     Args:
-        name: Project name to validate
+        name: Project name to validate.

     Returns:
-        The validated project name
+        The validated project name (unchanged).

     Raises:
-        HTTPException: If name is invalid
+        HTTPException: If *name* is invalid.
     """
-    if not re.match(r'^[a-zA-Z0-9_-]{1,50}$', name):
+    if not _PROJECT_NAME_RE.match(name):
         raise HTTPException(
             status_code=400,
             detail="Invalid project name. Use only letters, numbers, hyphens, and underscores (1-50 chars)."
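A short sketch of how the two variants above are meant to be used, one per transport, grounded in the docstrings and in the websocket import later in this diff (route paths and handler bodies are illustrative, and the import path is assumed):

# Sketch only; endpoint names are hypothetical.
from fastapi import FastAPI, WebSocket
from server.utils.validation import is_valid_project_name, validate_project_name  # import path assumed

app = FastAPI()

@app.get("/api/projects/{name}")
async def get_project(name: str):
    validate_project_name(name)  # raises HTTPException(400) on bad input
    return {"project": name}

@app.websocket("/ws/{name}")
async def project_ws(websocket: WebSocket, name: str):
    if not is_valid_project_name(name):
        await websocket.close(code=1008)  # close the socket instead of raising
        return
    await websocket.accept()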
@@ -16,8 +16,11 @@ from typing import Set
 from fastapi import WebSocket, WebSocketDisconnect

 from .schemas import AGENT_MASCOTS
+from .services.chat_constants import ROOT_DIR
 from .services.dev_server_manager import get_devserver_manager
 from .services.process_manager import get_manager
+from .utils.project_helpers import get_project_path as _get_project_path
+from .utils.validation import is_valid_project_name as validate_project_name

 # Lazy imports
 _count_passing_tests = None

@@ -36,6 +39,14 @@ TESTING_AGENT_START_PATTERN = re.compile(r'Started testing agent for feature #(\
 # Matches: "Feature #123 testing completed" or "Feature #123 testing failed"
 TESTING_AGENT_COMPLETE_PATTERN = re.compile(r'Feature #(\d+) testing (completed|failed)')

+# Pattern to detect batch coding agent start message
+# Matches: "Started coding agent for features #5, #8, #12"
+BATCH_CODING_AGENT_START_PATTERN = re.compile(r'Started coding agent for features (#\d+(?:,\s*#\d+)*)')
+
+# Pattern to detect batch completion
+# Matches: "Features #5, #8, #12 completed" or "Features #5, #8, #12 failed"
+BATCH_FEATURES_COMPLETE_PATTERN = re.compile(r'Features (#\d+(?:,\s*#\d+)*)\s+(completed|failed)')
+
 # Patterns for detecting agent activity and thoughts
 THOUGHT_PATTERNS = [
     # Claude's tool usage patterns (actual format: [Tool: name])

@@ -61,9 +72,9 @@ ORCHESTRATOR_PATTERNS = {
     'capacity_check': re.compile(r'\[DEBUG\] Spawning loop: (\d+) ready, (\d+) slots'),
     'at_capacity': re.compile(r'At max capacity|at max testing agents|At max total agents'),
     'feature_start': re.compile(r'Starting feature \d+/\d+: #(\d+) - (.+)'),
-    'coding_spawn': re.compile(r'Started coding agent for feature #(\d+)'),
+    'coding_spawn': re.compile(r'Started coding agent for features? #(\d+)'),
     'testing_spawn': re.compile(r'Started testing agent for feature #(\d+)'),
-    'coding_complete': re.compile(r'Feature #(\d+) (completed|failed)'),
+    'coding_complete': re.compile(r'Features? #(\d+)(?:,\s*#\d+)* (completed|failed)'),
     'testing_complete': re.compile(r'Feature #(\d+) testing (completed|failed)'),
     'all_complete': re.compile(r'All features complete'),
     'blocked_features': re.compile(r'(\d+) blocked by dependencies'),
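To make the new batch patterns concrete, here is how a batched orchestrator line parses; both the pattern and the ID-extraction expression come from this diff (the same expression is used by AgentTracker below):

import re

BATCH_CODING_AGENT_START_PATTERN = re.compile(r'Started coding agent for features (#\d+(?:,\s*#\d+)*)')

line = "Started coding agent for features #5, #8, #12"
m = BATCH_CODING_AGENT_START_PATTERN.match(line)
# Split the captured "#5, #8, #12" group into integer feature IDs.
feature_ids = [int(x.strip().lstrip('#')) for x in m.group(1).split(',')]
assert feature_ids == [5, 8, 12]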
@@ -93,12 +104,24 @@ class AgentTracker:
         # Check for orchestrator status messages first
         # These don't have [Feature #X] prefix

-        # Coding agent start: "Started coding agent for feature #X"
-        if line.startswith("Started coding agent for feature #"):
+        # Batch coding agent start: "Started coding agent for features #5, #8, #12"
+        batch_start_match = BATCH_CODING_AGENT_START_PATTERN.match(line)
+        if batch_start_match:
             try:
-                feature_id = int(re.search(r'#(\d+)', line).group(1))
+                feature_ids = [int(x.strip().lstrip('#')) for x in batch_start_match.group(1).split(',')]
+                if feature_ids:
+                    return await self._handle_batch_agent_start(feature_ids, "coding")
+            except ValueError:
+                pass
+
+        # Single coding agent start: "Started coding agent for feature #X"
+        if line.startswith("Started coding agent for feature #"):
+            m = re.search(r'#(\d+)', line)
+            if m:
+                try:
+                    feature_id = int(m.group(1))
                     return await self._handle_agent_start(feature_id, line, agent_type="coding")
-            except (AttributeError, ValueError):
-                pass
+                except ValueError:
+                    pass

         # Testing agent start: "Started testing agent for feature #X (PID xxx)"

@@ -114,13 +137,26 @@ class AgentTracker:
             is_success = testing_complete_match.group(2) == "completed"
             return await self._handle_agent_complete(feature_id, is_success, agent_type="testing")

+        # Batch features complete: "Features #5, #8, #12 completed/failed"
+        batch_complete_match = BATCH_FEATURES_COMPLETE_PATTERN.match(line)
+        if batch_complete_match:
+            try:
+                feature_ids = [int(x.strip().lstrip('#')) for x in batch_complete_match.group(1).split(',')]
+                is_success = batch_complete_match.group(2) == "completed"
+                if feature_ids:
+                    return await self._handle_batch_agent_complete(feature_ids, is_success, "coding")
+            except ValueError:
+                pass
+
         # Coding agent complete: "Feature #X completed/failed" (without "testing" keyword)
         if line.startswith("Feature #") and ("completed" in line or "failed" in line) and "testing" not in line:
+            m = re.search(r'#(\d+)', line)
+            if m:
                 try:
-                feature_id = int(re.search(r'#(\d+)', line).group(1))
+                    feature_id = int(m.group(1))
                     is_success = "completed" in line
                     return await self._handle_agent_complete(feature_id, is_success, agent_type="coding")
-            except (AttributeError, ValueError):
-                pass
+                except ValueError:
+                    pass

         # Check for feature-specific output lines: [Feature #X] content

@@ -151,6 +187,7 @@ class AgentTracker:
                 'name': AGENT_MASCOTS[agent_index % len(AGENT_MASCOTS)],
                 'agent_index': agent_index,
                 'agent_type': 'coding',
+                'feature_ids': [feature_id],
                 'state': 'thinking',
                 'feature_name': f'Feature #{feature_id}',
                 'last_thought': None,

@@ -158,6 +195,10 @@ class AgentTracker:

         agent = self.active_agents[key]

+        # Update current_feature_id for batch agents when output comes from a different feature
+        if 'current_feature_id' in agent and feature_id in agent.get('feature_ids', []):
+            agent['current_feature_id'] = feature_id
+
         # Detect state and thought from content
         state = 'working'
         thought = None

@@ -181,6 +222,7 @@ class AgentTracker:
             'agentName': agent['name'],
             'agentType': agent['agent_type'],
             'featureId': feature_id,
+            'featureIds': agent.get('feature_ids', [feature_id]),
             'featureName': agent['feature_name'],
             'state': state,
             'thought': thought,

@@ -237,6 +279,7 @@ class AgentTracker:
                 'name': AGENT_MASCOTS[agent_index % len(AGENT_MASCOTS)],
                 'agent_index': agent_index,
                 'agent_type': agent_type,
+                'feature_ids': [feature_id],
                 'state': 'thinking',
                 'feature_name': feature_name,
                 'last_thought': 'Starting work...',

@@ -248,12 +291,55 @@ class AgentTracker:
             'agentName': AGENT_MASCOTS[agent_index % len(AGENT_MASCOTS)],
             'agentType': agent_type,
             'featureId': feature_id,
+            'featureIds': [feature_id],
             'featureName': feature_name,
             'state': 'thinking',
             'thought': 'Starting work...',
             'timestamp': datetime.now().isoformat(),
         }

+    async def _handle_batch_agent_start(self, feature_ids: list[int], agent_type: str = "coding") -> dict | None:
+        """Handle batch agent start message from orchestrator."""
+        if not feature_ids:
+            return None
+        primary_id = feature_ids[0]
+        async with self._lock:
+            key = (primary_id, agent_type)
+            agent_index = self._next_agent_index
+            self._next_agent_index += 1
+
+            feature_name = f'Features {", ".join(f"#{fid}" for fid in feature_ids)}'
+
+            self.active_agents[key] = {
+                'name': AGENT_MASCOTS[agent_index % len(AGENT_MASCOTS)],
+                'agent_index': agent_index,
+                'agent_type': agent_type,
+                'feature_ids': list(feature_ids),
+                'current_feature_id': primary_id,
+                'state': 'thinking',
+                'feature_name': feature_name,
+                'last_thought': 'Starting batch work...',
+            }
+
+            # Register all feature IDs so output lines can find this agent
+            for fid in feature_ids:
+                secondary_key = (fid, agent_type)
+                if secondary_key != key:
+                    self.active_agents[secondary_key] = self.active_agents[key]
+
+            return {
+                'type': 'agent_update',
+                'agentIndex': agent_index,
+                'agentName': AGENT_MASCOTS[agent_index % len(AGENT_MASCOTS)],
+                'agentType': agent_type,
+                'featureId': primary_id,
+                'featureIds': list(feature_ids),
+                'featureName': feature_name,
+                'state': 'thinking',
+                'thought': 'Starting batch work...',
+                'timestamp': datetime.now().isoformat(),
+            }
+
     async def _handle_agent_complete(self, feature_id: int, is_success: bool, agent_type: str = "coding") -> dict | None:
         """Handle agent completion - ALWAYS emits a message, even if agent wasn't tracked.

@@ -275,6 +361,7 @@ class AgentTracker:
                 'agentName': agent['name'],
                 'agentType': agent.get('agent_type', agent_type),
                 'featureId': feature_id,
+                'featureIds': agent.get('feature_ids', [feature_id]),
                 'featureName': agent['feature_name'],
                 'state': state,
                 'thought': 'Completed successfully!' if is_success else 'Failed to complete',

@@ -291,6 +378,7 @@ class AgentTracker:
                 'agentName': 'Unknown',
                 'agentType': agent_type,
                 'featureId': feature_id,
+                'featureIds': [feature_id],
                 'featureName': f'Feature #{feature_id}',
                 'state': state,
                 'thought': 'Completed successfully!' if is_success else 'Failed to complete',

@@ -298,6 +386,49 @@ class AgentTracker:
                 'synthetic': True,
             }

+    async def _handle_batch_agent_complete(self, feature_ids: list[int], is_success: bool, agent_type: str = "coding") -> dict | None:
+        """Handle batch agent completion."""
+        if not feature_ids:
+            return None
+        primary_id = feature_ids[0]
+        async with self._lock:
+            state = 'success' if is_success else 'error'
+            key = (primary_id, agent_type)
+
+            if key in self.active_agents:
+                agent = self.active_agents[key]
+                result = {
+                    'type': 'agent_update',
+                    'agentIndex': agent['agent_index'],
+                    'agentName': agent['name'],
+                    'agentType': agent.get('agent_type', agent_type),
+                    'featureId': primary_id,
+                    'featureIds': agent.get('feature_ids', list(feature_ids)),
+                    'featureName': agent['feature_name'],
+                    'state': state,
+                    'thought': 'Batch completed successfully!' if is_success else 'Batch failed to complete',
+                    'timestamp': datetime.now().isoformat(),
+                }
+                # Clean up all keys for this batch
+                for fid in feature_ids:
+                    self.active_agents.pop((fid, agent_type), None)
+                return result
+            else:
+                # Synthetic completion
+                return {
+                    'type': 'agent_update',
+                    'agentIndex': -1,
+                    'agentName': 'Unknown',
+                    'agentType': agent_type,
+                    'featureId': primary_id,
+                    'featureIds': list(feature_ids),
+                    'featureName': f'Features {", ".join(f"#{fid}" for fid in feature_ids)}',
+                    'state': state,
+                    'thought': 'Batch completed successfully!' if is_success else 'Batch failed to complete',
+                    'timestamp': datetime.now().isoformat(),
+                    'synthetic': True,
+                }
+
+
 class OrchestratorTracker:
     """Tracks orchestrator state for Mission Control observability.

@@ -444,7 +575,7 @@ class OrchestratorTracker:
         timestamp = datetime.now().isoformat()

         # Add to recent events (keep last 5)
-        event = {
+        event: dict[str, str | int] = {
             'eventType': event_type,
             'message': message,
             'timestamp': timestamp,

@@ -487,17 +618,6 @@ class OrchestratorTracker:
         self.recent_events.clear()


-def _get_project_path(project_name: str) -> Path:
-    """Get project path from registry."""
-    import sys
-    root = Path(__file__).parent.parent
-    if str(root) not in sys.path:
-        sys.path.insert(0, str(root))
-
-    from registry import get_project_path
-    return get_project_path(project_name)
-
-
 def _get_count_passing_tests():
     """Lazy import of count_passing_tests."""
     global _count_passing_tests

@@ -564,15 +684,6 @@ class ConnectionManager:
 # Global connection manager
 manager = ConnectionManager()

-# Root directory
-ROOT_DIR = Path(__file__).parent.parent
-
-
-def validate_project_name(name: str) -> bool:
-    """Validate project name to prevent path traversal."""
-    return bool(re.match(r'^[a-zA-Z0-9_-]{1,50}$', name))
-
-
 async def poll_progress(websocket: WebSocket, project_name: str, project_dir: Path):
     """Poll database for progress changes and send updates."""
     count_passing_tests = _get_count_passing_tests()

@@ -652,7 +763,7 @@ async def project_websocket(websocket: WebSocket, project_name: str):
             agent_index, _ = await agent_tracker.get_agent_info(feature_id)

             # Send the raw log line with optional feature/agent attribution
-            log_msg = {
+            log_msg: dict[str, str | int] = {
                 "type": "log",
                 "line": line,
                 "timestamp": datetime.now().isoformat(),
@@ -202,7 +202,7 @@ def build_frontend() -> bool:
         trigger_file = "dist/ directory missing"
     elif src_dir.exists():
         # Find the newest file in dist/ directory
-        newest_dist_mtime = 0
+        newest_dist_mtime: float = 0
         for dist_file in dist_dir.rglob("*"):
             try:
                 if dist_file.is_file():
test_client.py (new file, 265 lines)
@@ -0,0 +1,265 @@
#!/usr/bin/env python3
"""
Client Utility Tests
====================

Tests for the client module utility functions.
Run with: python test_client.py
"""

import os
import sys
import tempfile
import unittest
from pathlib import Path

from client import (
    EXTRA_READ_PATHS_BLOCKLIST,
    EXTRA_READ_PATHS_VAR,
    convert_model_for_vertex,
    get_extra_read_paths,
)


class TestConvertModelForVertex(unittest.TestCase):
    """Tests for convert_model_for_vertex function."""

    def setUp(self):
        """Save original env state."""
        self._orig_vertex = os.environ.get("CLAUDE_CODE_USE_VERTEX")

    def tearDown(self):
        """Restore original env state."""
        if self._orig_vertex is None:
            os.environ.pop("CLAUDE_CODE_USE_VERTEX", None)
        else:
            os.environ["CLAUDE_CODE_USE_VERTEX"] = self._orig_vertex

    # --- Vertex AI disabled (default) ---

    def test_returns_model_unchanged_when_vertex_disabled(self):
        os.environ.pop("CLAUDE_CODE_USE_VERTEX", None)
        self.assertEqual(
            convert_model_for_vertex("claude-opus-4-5-20251101"),
            "claude-opus-4-5-20251101",
        )

    def test_returns_model_unchanged_when_vertex_set_to_zero(self):
        os.environ["CLAUDE_CODE_USE_VERTEX"] = "0"
        self.assertEqual(
            convert_model_for_vertex("claude-opus-4-5-20251101"),
            "claude-opus-4-5-20251101",
        )

    def test_returns_model_unchanged_when_vertex_set_to_empty(self):
        os.environ["CLAUDE_CODE_USE_VERTEX"] = ""
        self.assertEqual(
            convert_model_for_vertex("claude-sonnet-4-5-20250929"),
            "claude-sonnet-4-5-20250929",
        )

    # --- Vertex AI enabled: standard conversions ---

    def test_converts_opus_model(self):
        os.environ["CLAUDE_CODE_USE_VERTEX"] = "1"
        self.assertEqual(
            convert_model_for_vertex("claude-opus-4-5-20251101"),
            "claude-opus-4-5@20251101",
        )

    def test_converts_sonnet_model(self):
        os.environ["CLAUDE_CODE_USE_VERTEX"] = "1"
        self.assertEqual(
            convert_model_for_vertex("claude-sonnet-4-5-20250929"),
            "claude-sonnet-4-5@20250929",
        )

    def test_converts_haiku_model(self):
        os.environ["CLAUDE_CODE_USE_VERTEX"] = "1"
        self.assertEqual(
            convert_model_for_vertex("claude-3-5-haiku-20241022"),
            "claude-3-5-haiku@20241022",
        )

    # --- Vertex AI enabled: already converted or non-matching ---

    def test_already_vertex_format_unchanged(self):
        os.environ["CLAUDE_CODE_USE_VERTEX"] = "1"
        self.assertEqual(
            convert_model_for_vertex("claude-opus-4-5@20251101"),
            "claude-opus-4-5@20251101",
        )

    def test_non_claude_model_unchanged(self):
        os.environ["CLAUDE_CODE_USE_VERTEX"] = "1"
        self.assertEqual(
            convert_model_for_vertex("gpt-4o"),
            "gpt-4o",
        )

    def test_model_without_date_suffix_unchanged(self):
        os.environ["CLAUDE_CODE_USE_VERTEX"] = "1"
        self.assertEqual(
            convert_model_for_vertex("claude-opus-4-5"),
            "claude-opus-4-5",
        )

    def test_empty_string_unchanged(self):
        os.environ["CLAUDE_CODE_USE_VERTEX"] = "1"
        self.assertEqual(convert_model_for_vertex(""), "")


class TestExtraReadPathsBlocklist(unittest.TestCase):
    """Tests for EXTRA_READ_PATHS sensitive directory blocking in get_extra_read_paths()."""

    def setUp(self):
        """Save original environment and home directory state."""
        self._orig_extra_read = os.environ.get(EXTRA_READ_PATHS_VAR)
        self._orig_home = os.environ.get("HOME")
        self._orig_userprofile = os.environ.get("USERPROFILE")
        self._orig_homedrive = os.environ.get("HOMEDRIVE")
        self._orig_homepath = os.environ.get("HOMEPATH")

    def tearDown(self):
        """Restore original environment state."""
        restore_map = {
            EXTRA_READ_PATHS_VAR: self._orig_extra_read,
            "HOME": self._orig_home,
            "USERPROFILE": self._orig_userprofile,
            "HOMEDRIVE": self._orig_homedrive,
            "HOMEPATH": self._orig_homepath,
        }
        for key, value in restore_map.items():
            if value is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = value

    def _set_home(self, home_path: str):
        """Set the home directory for both Unix and Windows."""
        os.environ["HOME"] = home_path
        if sys.platform == "win32":
            os.environ["USERPROFILE"] = home_path
            drive, path = os.path.splitdrive(home_path)
            if drive:
                os.environ["HOMEDRIVE"] = drive
                os.environ["HOMEPATH"] = path

    def test_sensitive_directory_is_blocked(self):
        """Path that IS a sensitive directory (e.g., ~/.ssh) should be blocked."""
        with tempfile.TemporaryDirectory() as tmpdir:
            self._set_home(tmpdir)
            # Create the sensitive directory so it exists
            ssh_dir = Path(tmpdir) / ".ssh"
            ssh_dir.mkdir()

            os.environ[EXTRA_READ_PATHS_VAR] = str(ssh_dir)
            result = get_extra_read_paths()
            self.assertEqual(result, [], "Path that IS ~/.ssh should be blocked")

    def test_path_inside_sensitive_directory_is_blocked(self):
        """Path INSIDE a sensitive directory (e.g., ~/.ssh/keys) should be blocked."""
        with tempfile.TemporaryDirectory() as tmpdir:
            self._set_home(tmpdir)
            ssh_dir = Path(tmpdir) / ".ssh"
            keys_dir = ssh_dir / "keys"
            keys_dir.mkdir(parents=True)

            os.environ[EXTRA_READ_PATHS_VAR] = str(keys_dir)
            result = get_extra_read_paths()
            self.assertEqual(result, [], "Path inside ~/.ssh should be blocked")

    def test_path_containing_sensitive_directory_is_blocked(self):
        """Path that contains a sensitive directory inside it should be blocked.

        For example, if the extra read path is the user's home directory, and
        ~/.ssh exists inside it, the path should be blocked because granting
        read access to the parent would expose the sensitive subdirectory.
        """
        with tempfile.TemporaryDirectory() as tmpdir:
            self._set_home(tmpdir)
            # Create a sensitive dir inside the home so it triggers the
            # "sensitive dir is inside the requested path" check
            ssh_dir = Path(tmpdir) / ".ssh"
            ssh_dir.mkdir()

            os.environ[EXTRA_READ_PATHS_VAR] = tmpdir
            result = get_extra_read_paths()
            self.assertEqual(result, [], "Home dir containing .ssh should be blocked")

    def test_valid_non_sensitive_path_is_allowed(self):
        """A valid directory that is NOT sensitive should be allowed."""
        with tempfile.TemporaryDirectory() as tmpdir:
            self._set_home(tmpdir)
            # Create a non-sensitive directory under home
            docs_dir = Path(tmpdir) / "Documents" / "myproject"
            docs_dir.mkdir(parents=True)

            os.environ[EXTRA_READ_PATHS_VAR] = str(docs_dir)
            result = get_extra_read_paths()
            self.assertEqual(len(result), 1, "Non-sensitive path should be allowed")
            self.assertEqual(result[0], docs_dir.resolve())

    def test_all_blocklist_entries_are_checked(self):
        """Every directory in EXTRA_READ_PATHS_BLOCKLIST should actually be blocked."""
        with tempfile.TemporaryDirectory() as tmpdir:
            self._set_home(tmpdir)

            for sensitive_name in sorted(EXTRA_READ_PATHS_BLOCKLIST):
                sensitive_dir = Path(tmpdir) / sensitive_name
                sensitive_dir.mkdir(parents=True, exist_ok=True)

                os.environ[EXTRA_READ_PATHS_VAR] = str(sensitive_dir)
                result = get_extra_read_paths()
                self.assertEqual(
                    result, [],
                    f"Blocklist entry '{sensitive_name}' should be blocked"
                )

    def test_multiple_paths_mixed_sensitive_and_valid(self):
        """When given multiple paths, only non-sensitive ones should pass."""
        with tempfile.TemporaryDirectory() as tmpdir:
            self._set_home(tmpdir)

            # Create one sensitive and one valid directory
            ssh_dir = Path(tmpdir) / ".ssh"
            ssh_dir.mkdir()
            valid_dir = Path(tmpdir) / "projects"
            valid_dir.mkdir()

            os.environ[EXTRA_READ_PATHS_VAR] = f"{ssh_dir},{valid_dir}"
            result = get_extra_read_paths()
            self.assertEqual(len(result), 1, "Only the non-sensitive path should be returned")
            self.assertEqual(result[0], valid_dir.resolve())

    def test_empty_extra_read_paths_returns_empty(self):
        """Empty EXTRA_READ_PATHS should return empty list."""
        os.environ[EXTRA_READ_PATHS_VAR] = ""
        result = get_extra_read_paths()
        self.assertEqual(result, [])

    def test_unset_extra_read_paths_returns_empty(self):
        """Unset EXTRA_READ_PATHS should return empty list."""
        os.environ.pop(EXTRA_READ_PATHS_VAR, None)
        result = get_extra_read_paths()
        self.assertEqual(result, [])

    def test_nonexistent_path_is_skipped(self):
        """A path that does not exist should be skipped."""
        with tempfile.TemporaryDirectory() as tmpdir:
            self._set_home(tmpdir)
            nonexistent = Path(tmpdir) / "does_not_exist"

            os.environ[EXTRA_READ_PATHS_VAR] = str(nonexistent)
            result = get_extra_read_paths()
            self.assertEqual(result, [])

    def test_relative_path_is_skipped(self):
        """A relative path should be skipped."""
        os.environ[EXTRA_READ_PATHS_VAR] = "relative/path"
        result = get_extra_read_paths()
        self.assertEqual(result, [])


if __name__ == "__main__":
    unittest.main()
test_rate_limit_utils.py (new file, 205 lines)
@@ -0,0 +1,205 @@
"""
Unit tests for rate limit handling functions.

Tests the parse_retry_after(), is_rate_limit_error(), and backoff calculation
functions from rate_limit_utils.py (shared module).
"""

import unittest

from rate_limit_utils import (
    calculate_error_backoff,
    calculate_rate_limit_backoff,
    clamp_retry_delay,
    is_rate_limit_error,
    parse_retry_after,
)


class TestParseRetryAfter(unittest.TestCase):
    """Tests for parse_retry_after() function."""

    def test_retry_after_colon_format(self):
        """Test 'Retry-After: 60' format."""
        assert parse_retry_after("Retry-After: 60") == 60
        assert parse_retry_after("retry-after: 120") == 120
        assert parse_retry_after("retry after: 30 seconds") == 30

    def test_retry_after_space_format(self):
        """Test 'retry after 60 seconds' format."""
        assert parse_retry_after("retry after 60 seconds") == 60
        assert parse_retry_after("Please retry after 120 seconds") == 120
        assert parse_retry_after("Retry after 30") == 30

    def test_try_again_in_format(self):
        """Test 'try again in X seconds' format."""
        assert parse_retry_after("try again in 120 seconds") == 120
        assert parse_retry_after("Please try again in 60s") == 60
        assert parse_retry_after("Try again in 30 seconds") == 30

    def test_seconds_remaining_format(self):
        """Test 'X seconds remaining' format."""
        assert parse_retry_after("30 seconds remaining") == 30
        assert parse_retry_after("60 seconds left") == 60
        assert parse_retry_after("120 seconds until reset") == 120

    def test_retry_after_zero(self):
        """Test 'Retry-After: 0' returns 0 (not None)."""
        assert parse_retry_after("Retry-After: 0") == 0
        assert parse_retry_after("retry after 0 seconds") == 0

    def test_no_match(self):
        """Test messages that don't contain retry-after info."""
        assert parse_retry_after("no match here") is None
        assert parse_retry_after("Connection refused") is None
        assert parse_retry_after("Internal server error") is None
        assert parse_retry_after("") is None

    def test_minutes_not_supported(self):
        """Test that minutes are not parsed (by design)."""
        # We only support seconds to avoid complexity
        # These patterns should NOT match when followed by minute/hour units
        assert parse_retry_after("wait 5 minutes") is None
        assert parse_retry_after("try again in 2 minutes") is None
        assert parse_retry_after("retry after 5 minutes") is None
        assert parse_retry_after("retry after 1 hour") is None
        assert parse_retry_after("try again in 30 min") is None


class TestIsRateLimitError(unittest.TestCase):
    """Tests for is_rate_limit_error() function."""

    def test_rate_limit_patterns(self):
        """Test various rate limit error messages."""
        assert is_rate_limit_error("Rate limit exceeded") is True
        assert is_rate_limit_error("rate_limit_exceeded") is True
        assert is_rate_limit_error("Too many requests") is True
        assert is_rate_limit_error("HTTP 429 Too Many Requests") is True
        assert is_rate_limit_error("API quota exceeded") is True
        assert is_rate_limit_error("Server is overloaded") is True

    def test_specific_429_patterns(self):
        """Test that 429 is detected with proper context."""
        assert is_rate_limit_error("http 429") is True
        assert is_rate_limit_error("HTTP429") is True
        assert is_rate_limit_error("status 429") is True
        assert is_rate_limit_error("error 429") is True
        assert is_rate_limit_error("429 too many requests") is True

    def test_case_insensitive(self):
        """Test that detection is case-insensitive."""
        assert is_rate_limit_error("RATE LIMIT") is True
        assert is_rate_limit_error("Rate Limit") is True
        assert is_rate_limit_error("rate limit") is True
        assert is_rate_limit_error("RaTe LiMiT") is True

    def test_non_rate_limit_errors(self):
        """Test non-rate-limit error messages."""
        assert is_rate_limit_error("Connection refused") is False
        assert is_rate_limit_error("Authentication failed") is False
        assert is_rate_limit_error("Invalid API key") is False
        assert is_rate_limit_error("Internal server error") is False
        assert is_rate_limit_error("Network timeout") is False
        assert is_rate_limit_error("") is False


class TestFalsePositives(unittest.TestCase):
    """Verify non-rate-limit messages don't trigger detection."""

    def test_version_numbers_with_429(self):
        """Version numbers should not trigger."""
        assert is_rate_limit_error("Node v14.29.0") is False
        assert is_rate_limit_error("Python 3.12.429") is False
        assert is_rate_limit_error("Version 2.429 released") is False

    def test_issue_and_pr_numbers(self):
        """Issue/PR numbers should not trigger."""
        assert is_rate_limit_error("See PR #429") is False
        assert is_rate_limit_error("Fixed in issue 429") is False
        assert is_rate_limit_error("Closes #429") is False

    def test_line_numbers(self):
        """Line numbers in errors should not trigger."""
        assert is_rate_limit_error("Error at line 429") is False
        assert is_rate_limit_error("See file.py:429") is False

    def test_port_numbers(self):
        """Port numbers should not trigger."""
        assert is_rate_limit_error("port 4293") is False
        assert is_rate_limit_error("localhost:4290") is False

    def test_legitimate_wait_messages(self):
        """Legitimate wait instructions should not trigger."""
        # These would fail if "please wait" pattern still exists
        assert is_rate_limit_error("Please wait for the build to complete") is False
        assert is_rate_limit_error("Please wait while I analyze this") is False

    def test_retry_discussion_messages(self):
        """Messages discussing retry logic should not trigger."""
        # These would fail if "try again later" pattern still exists
        assert is_rate_limit_error("Try again later after maintenance") is False
        assert is_rate_limit_error("The user should try again later") is False

    def test_limit_discussion_messages(self):
        """Messages discussing limits should not trigger (removed pattern)."""
        # These would fail if "limit reached" pattern still exists
        assert is_rate_limit_error("File size limit reached") is False
        assert is_rate_limit_error("Memory limit reached, consider optimization") is False

    def test_overloaded_in_programming_context(self):
        """Method/operator overloading discussions should not trigger."""
        assert is_rate_limit_error("I will create an overloaded constructor") is False
        assert is_rate_limit_error("The + operator is overloaded") is False
        assert is_rate_limit_error("Here is the overloaded version of the function") is False
        assert is_rate_limit_error("The method is overloaded to accept different types") is False
        # But actual API overload messages should still match
        assert is_rate_limit_error("Server is overloaded") is True
        assert is_rate_limit_error("API overloaded") is True
        assert is_rate_limit_error("system is overloaded") is True


class TestBackoffFunctions(unittest.TestCase):
    """Test backoff calculation functions from rate_limit_utils."""

    def test_rate_limit_backoff_sequence(self):
        """Test that rate limit backoff follows expected exponential sequence with jitter.

        Base formula: 15 * 2^retries with 0-30% jitter.
        Base values: 15, 30, 60, 120, 240, 480, 960, 1920, 3600, 3600
        With jitter the result should be in [base, base * 1.3].
        """
        base_values = [15, 30, 60, 120, 240, 480, 960, 1920, 3600, 3600]
        for retries, base in enumerate(base_values):
            delay = calculate_rate_limit_backoff(retries)
            # Delay must be at least the base value (jitter is non-negative)
            assert delay >= base, f"Retry {retries}: {delay} < base {base}"
            # Delay must not exceed base + 30% jitter (int truncation means <= base * 1.3)
            max_with_jitter = int(base * 1.3)
            assert delay <= max_with_jitter, f"Retry {retries}: {delay} > max {max_with_jitter}"

    def test_error_backoff_sequence(self):
        """Test that error backoff follows expected linear sequence."""
        expected = [30, 60, 90, 120, 150, 180, 210, 240, 270, 300, 300]  # Caps at 300
        for retries in range(1, len(expected) + 1):
            delay = calculate_error_backoff(retries)
            expected_delay = expected[retries - 1]
            assert delay == expected_delay, f"Retry {retries}: expected {expected_delay}, got {delay}"

    def test_clamp_retry_delay(self):
        """Test that retry delay is clamped to valid range."""
        # Values within range stay the same
        assert clamp_retry_delay(60) == 60
        assert clamp_retry_delay(1800) == 1800
        assert clamp_retry_delay(3600) == 3600

        # Values below minimum get clamped to 1
        assert clamp_retry_delay(0) == 1
        assert clamp_retry_delay(-10) == 1

        # Values above maximum get clamped to 3600
        assert clamp_retry_delay(7200) == 3600
        assert clamp_retry_delay(86400) == 3600


if __name__ == "__main__":
    unittest.main()
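The backoff assertions above fully determine the numeric behaviour of the helpers under test. A minimal sketch consistent with those assertions (not the actual rate_limit_utils.py source) looks like this:

```python
# Reference implementations inferred from the test assertions above.
import random


def calculate_rate_limit_backoff(retries: int) -> int:
    """Exponential backoff: 15 * 2^retries seconds, capped at 3600, plus 0-30% jitter."""
    base = min(15 * (2 ** retries), 3600)
    jitter = random.uniform(0.0, 0.3)
    # int() truncation keeps the result within [base, int(base * 1.3)]
    return int(base * (1 + jitter))


def calculate_error_backoff(retries: int) -> int:
    """Linear backoff for generic errors: 30 seconds per retry, capped at 300."""
    return min(30 * retries, 300)


def clamp_retry_delay(delay: int) -> int:
    """Clamp a parsed Retry-After value to the 1..3600 second range."""
    return max(1, min(delay, 3600))
```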
@@ -992,31 +992,26 @@ def main():
     failed += pkill_failed
 
     # Commands that SHOULD be blocked
+    # Note: blocklisted commands (sudo, shutdown, dd, aws) are tested in
+    # test_blocklist_enforcement(). chmod validation is tested in
+    # test_validate_chmod(). init.sh validation is tested in
+    # test_validate_init_script(). pkill validation is tested in
+    # test_pkill_extensibility(). The entries below focus on scenarios
+    # NOT covered by those dedicated tests.
     print("\nCommands that should be BLOCKED:\n")
     dangerous = [
         # Not in allowlist - dangerous system commands
-        "shutdown now",
         "reboot",
-        "dd if=/dev/zero of=/dev/sda",
         # Not in allowlist - common commands excluded from minimal set
         "wget https://example.com",
         "python app.py",
         "killall node",
-        # pkill with non-dev processes
+        # pkill with non-dev processes (pkill python tested in test_pkill_extensibility)
         "pkill bash",
         "pkill chrome",
-        "pkill python",
         # Shell injection attempts
         "$(echo pkill) node",
         'eval "pkill node"',
-        # chmod with disallowed modes
-        "chmod 777 file.sh",
-        "chmod 755 file.sh",
-        "chmod +w file.sh",
-        "chmod -R +x dir/",
-        # Non-init.sh scripts
-        "./setup.sh",
-        "./malicious.sh",
     ]
 
     for cmd in dangerous:
@@ -1026,6 +1021,10 @@ def main():
         failed += 1
 
     # Commands that SHOULD be allowed
+    # Note: chmod +x variants are tested in test_validate_chmod().
+    # init.sh variants are tested in test_validate_init_script().
+    # The combined "chmod +x init.sh && ./init.sh" below serves as the
+    # integration test verifying the hook routes to both validators correctly.
     print("\nCommands that should be ALLOWED:\n")
     safe = [
         # File inspection
@@ -1076,16 +1075,7 @@ def main():
         "ls | grep test",
         # Full paths
         "/usr/local/bin/node app.js",
-        # chmod +x (allowed)
-        "chmod +x init.sh",
-        "chmod +x script.sh",
-        "chmod u+x init.sh",
-        "chmod a+x init.sh",
-        # init.sh execution (allowed)
-        "./init.sh",
-        "./init.sh --production",
-        "/path/to/init.sh",
-        # Combined chmod and init.sh
+        # Combined chmod and init.sh (integration test for both validators)
         "chmod +x init.sh && ./init.sh",
     ]
 
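These test entries exercise a bash-command hook that combines an allowlist with dedicated validators for chmod modes, init.sh execution, and pkill targets. A minimal sketch of the routing behaviour the tests imply is shown below; every name in it (ALLOWED_EXECUTABLES, DEV_PROCESSES, the validator helpers) is an illustrative assumption rather than the project's real API.

```python
# Illustrative sketch of the routing the tests above exercise; names are assumptions.
import re
import shlex

ALLOWED_EXECUTABLES = {"cat", "ls", "grep", "node", "npm", "chmod", "pkill"}  # assumed subset
DEV_PROCESSES = {"node", "npm", "vite"}  # processes pkill may target (assumed)


def validate_chmod(tokens: list[str]) -> bool:
    # Only "+x"-style mode changes pass (no 777/755, no +w, no -R).
    return len(tokens) >= 3 and re.fullmatch(r"[ugoa]*\+x", tokens[1]) is not None


def validate_pkill(tokens: list[str]) -> bool:
    # Only known dev processes may be killed.
    return len(tokens) >= 2 and tokens[-1] in DEV_PROCESSES


def validate_init_script(tokens: list[str]) -> bool:
    # Only scripts literally named init.sh may be executed directly.
    return tokens[0].rsplit("/", 1)[-1] == "init.sh"


def is_command_allowed(command: str) -> bool:
    """Route each "&&"-chained part to the allowlist and its dedicated validator."""
    for part in (p.strip() for p in command.split("&&")):
        try:
            tokens = shlex.split(part)
        except ValueError:
            return False  # unparseable input is rejected
        if not tokens:
            return False
        executable = tokens[0].rsplit("/", 1)[-1]
        if executable == "chmod":
            if not validate_chmod(tokens):
                return False
        elif executable == "pkill":
            if not validate_pkill(tokens):
                return False
        elif executable.endswith(".sh"):
            if not validate_init_script(tokens):
                return False
        elif executable not in ALLOWED_EXECUTABLES:
            return False
    return True
```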
ui/package-lock.json (generated, 322 lines)
@@ -12,16 +12,9 @@
         "@radix-ui/react-dialog": "^1.1.15",
         "@radix-ui/react-dropdown-menu": "^2.1.16",
         "@radix-ui/react-label": "^2.1.8",
-        "@radix-ui/react-popover": "^1.1.15",
-        "@radix-ui/react-radio-group": "^1.3.8",
-        "@radix-ui/react-scroll-area": "^1.2.10",
-        "@radix-ui/react-select": "^2.2.6",
         "@radix-ui/react-separator": "^1.1.8",
         "@radix-ui/react-slot": "^1.2.4",
         "@radix-ui/react-switch": "^1.2.6",
-        "@radix-ui/react-tabs": "^1.1.13",
-        "@radix-ui/react-toggle": "^1.1.10",
-        "@radix-ui/react-tooltip": "^1.2.8",
         "@tanstack/react-query": "^5.72.0",
         "@xterm/addon-fit": "^0.11.0",
         "@xterm/addon-web-links": "^0.12.0",
The remaining hunks drop the corresponding generated node_modules entries:
@@ -1093,12 +1086,6 @@   removes node_modules/@radix-ui/number (1.1.1)
@@ -1519,61 +1506,6 @@   removes node_modules/@radix-ui/react-popover (1.1.15) and its nested @radix-ui/react-slot (1.2.3)
@@ -1695,38 +1627,6 @@   removes node_modules/@radix-ui/react-radio-group (1.3.8)
@@ -1758,98 +1658,6 @@   removes node_modules/@radix-ui/react-scroll-area (1.2.10), node_modules/@radix-ui/react-select (2.2.6), and the select entry's nested @radix-ui/react-slot (1.2.3)
@@ -1943,113 +1751,6 @@  removes node_modules/@radix-ui/react-tabs (1.1.13), node_modules/@radix-ui/react-toggle (1.1.10), node_modules/@radix-ui/react-tooltip (1.2.8), and the tooltip entry's nested @radix-ui/react-slot (1.2.3)
@@ -2186,29 +1887,6 @@   removes node_modules/@radix-ui/react-visually-hidden (1.2.3)
ui/package.json
@@ -16,16 +16,9 @@
     "@radix-ui/react-dialog": "^1.1.15",
     "@radix-ui/react-dropdown-menu": "^2.1.16",
     "@radix-ui/react-label": "^2.1.8",
-    "@radix-ui/react-popover": "^1.1.15",
-    "@radix-ui/react-radio-group": "^1.3.8",
-    "@radix-ui/react-scroll-area": "^1.2.10",
-    "@radix-ui/react-select": "^2.2.6",
     "@radix-ui/react-separator": "^1.1.8",
     "@radix-ui/react-slot": "^1.2.4",
     "@radix-ui/react-switch": "^1.2.6",
-    "@radix-ui/react-tabs": "^1.1.13",
-    "@radix-ui/react-toggle": "^1.1.10",
-    "@radix-ui/react-tooltip": "^1.2.8",
    "@tanstack/react-query": "^5.72.0",
     "@xterm/addon-fit": "^0.11.0",
     "@xterm/addon-web-links": "^0.12.0",
App.tsx
@@ -13,7 +13,6 @@ import { SetupWizard } from './components/SetupWizard'
 import { AddFeatureForm } from './components/AddFeatureForm'
 import { FeatureModal } from './components/FeatureModal'
 import { DebugLogViewer, type TabType } from './components/DebugLogViewer'
-import { AgentThought } from './components/AgentThought'
 import { AgentMissionControl } from './components/AgentMissionControl'
 import { CelebrationOverlay } from './components/CelebrationOverlay'
 import { AssistantFAB } from './components/AssistantFAB'
@@ -28,8 +27,8 @@ import { KeyboardShortcutsHelp } from './components/KeyboardShortcutsHelp'
 import { ThemeSelector } from './components/ThemeSelector'
 import { ResetProjectModal } from './components/ResetProjectModal'
 import { ProjectSetupRequired } from './components/ProjectSetupRequired'
-import { getDependencyGraph } from './lib/api'
-import { Loader2, Settings, Moon, Sun, RotateCcw } from 'lucide-react'
+import { getDependencyGraph, startAgent } from './lib/api'
+import { Loader2, Settings, Moon, Sun, RotateCcw, BookOpen } from 'lucide-react'
 import type { Feature } from './lib/types'
 import { Button } from '@/components/ui/button'
 import { Card, CardContent } from '@/components/ui/card'
@@ -41,6 +40,8 @@ const VIEW_MODE_KEY = 'autocoder-view-mode'
 // Bottom padding for main content when debug panel is collapsed (40px header + 8px margin)
 const COLLAPSED_DEBUG_PANEL_CLEARANCE = 48
 
+type InitializerStatus = 'idle' | 'starting' | 'error'
+
 function App() {
   // Initialize selected project from localStorage
   const [selectedProject, setSelectedProject] = useState<string | null>(() => {
@@ -63,6 +64,8 @@ function App() {
   const [isSpecCreating, setIsSpecCreating] = useState(false)
   const [showResetModal, setShowResetModal] = useState(false)
   const [showSpecChat, setShowSpecChat] = useState(false) // For "Create Spec" button in empty kanban
+  const [specInitializerStatus, setSpecInitializerStatus] = useState<InitializerStatus>('idle')
+  const [specInitializerError, setSpecInitializerError] = useState<string | null>(null)
   const [viewMode, setViewMode] = useState<ViewMode>(() => {
     try {
       const stored = localStorage.getItem(VIEW_MODE_KEY)
@@ -332,6 +335,17 @@ function App() {
           </>
         )}
 
+        {/* Docs link */}
+        <Button
+          onClick={() => { window.location.hash = '#/docs' }}
+          variant="outline"
+          size="sm"
+          title="Documentation"
+          aria-label="Open Documentation"
+        >
+          <BookOpen size={18} />
+        </Button>
+
         {/* Theme selector */}
         <ThemeSelector
           themes={themes}
@@ -386,6 +400,8 @@ function App() {
          total={progress.total}
          percentage={progress.percentage}
          isConnected={wsState.isConnected}
+          logs={wsState.activeAgents.length === 0 ? wsState.logs : undefined}
+          agentStatus={wsState.activeAgents.length === 0 ? wsState.agentStatus : undefined}
        />
 
        {/* Agent Mission Control - shows orchestrator status and active agents in parallel mode */}
@@ -396,13 +412,6 @@ function App() {
          getAgentLogs={wsState.getAgentLogs}
        />
 
-       {/* Agent Thought - shows latest agent narrative (single agent mode) */}
-       {wsState.activeAgents.length === 0 && (
-         <AgentThought
-           logs={wsState.logs}
-           agentStatus={wsState.agentStatus}
-         />
-       )}
 
        {/* Initializing Features State - show when agent is running but no features yet */}
        {features &&
@@ -495,14 +504,31 @@ function App() {
        <div className="fixed inset-0 z-50 bg-background">
          <SpecCreationChat
            projectName={selectedProject}
-            onComplete={() => {
+            onComplete={async (_specPath, yoloMode) => {
+              setSpecInitializerStatus('starting')
+              try {
+                await startAgent(selectedProject, {
+                  yoloMode: yoloMode ?? false,
+                  maxConcurrency: 3,
+                })
+                // Success — close chat and refresh
              setShowSpecChat(false)
-              // Refresh projects to update has_spec
+              setSpecInitializerStatus('idle')
              queryClient.invalidateQueries({ queryKey: ['projects'] })
              queryClient.invalidateQueries({ queryKey: ['features', selectedProject] })
+              } catch (err) {
+                setSpecInitializerStatus('error')
+                setSpecInitializerError(err instanceof Error ? err.message : 'Failed to start agent')
+              }
+            }}
+            onCancel={() => { setShowSpecChat(false); setSpecInitializerStatus('idle') }}
+            onExitToProject={() => { setShowSpecChat(false); setSpecInitializerStatus('idle') }}
+            initializerStatus={specInitializerStatus}
+            initializerError={specInitializerError}
+            onRetryInitializer={() => {
+              setSpecInitializerError(null)
+              setSpecInitializerStatus('idle')
            }}
-            onCancel={() => setShowSpecChat(false)}
-            onExitToProject={() => setShowSpecChat(false)}
          />
        </div>
      )}
AgentAvatar.tsx
@@ -1,4 +1,10 @@
 import { type AgentMascot, type AgentState } from '../lib/types'
+import {
+  AVATAR_COLORS,
+  UNKNOWN_COLORS,
+  MASCOT_SVGS,
+  UnknownMascotSVG,
+} from './mascotData'
 
 interface AgentAvatarProps {
   name: AgentMascot | 'Unknown'
@@ -7,515 +13,12 @@ interface AgentAvatarProps {
   showName?: boolean
 }
 
-// Fallback colors for unknown agents (neutral gray)
-const UNKNOWN_COLORS = { primary: '#6B7280', secondary: '#9CA3AF', accent: '#F3F4F6' }
-
-const AVATAR_COLORS: Record<AgentMascot, { primary: string; secondary: string; accent: string }> = {
-  // Original 5
-  Spark: { primary: '#3B82F6', secondary: '#60A5FA', accent: '#DBEAFE' }, // Blue robot
-  Fizz: { primary: '#F97316', secondary: '#FB923C', accent: '#FFEDD5' }, // Orange fox
-  Octo: { primary: '#8B5CF6', secondary: '#A78BFA', accent: '#EDE9FE' }, // Purple octopus
-  Hoot: { primary: '#22C55E', secondary: '#4ADE80', accent: '#DCFCE7' }, // Green owl
-  Buzz: { primary: '#EAB308', secondary: '#FACC15', accent: '#FEF9C3' }, // Yellow bee
-  // Tech-inspired
-  Pixel: { primary: '#EC4899', secondary: '#F472B6', accent: '#FCE7F3' }, // Pink
-  Byte: { primary: '#06B6D4', secondary: '#22D3EE', accent: '#CFFAFE' }, // Cyan
-  Nova: { primary: '#F43F5E', secondary: '#FB7185', accent: '#FFE4E6' }, // Rose
-  Chip: { primary: '#84CC16', secondary: '#A3E635', accent: '#ECFCCB' }, // Lime
-  Bolt: { primary: '#FBBF24', secondary: '#FCD34D', accent: '#FEF3C7' }, // Amber
-  // Energetic
-  Dash: { primary: '#14B8A6', secondary: '#2DD4BF', accent: '#CCFBF1' }, // Teal
-  Zap: { primary: '#A855F7', secondary: '#C084FC', accent: '#F3E8FF' }, // Violet
-  Gizmo: { primary: '#64748B', secondary: '#94A3B8', accent: '#F1F5F9' }, // Slate
-  Turbo: { primary: '#EF4444', secondary: '#F87171', accent: '#FEE2E2' }, // Red
-  Blip: { primary: '#10B981', secondary: '#34D399', accent: '#D1FAE5' }, // Emerald
-  // Playful
-  Neon: { primary: '#D946EF', secondary: '#E879F9', accent: '#FAE8FF' }, // Fuchsia
-  Widget: { primary: '#6366F1', secondary: '#818CF8', accent: '#E0E7FF' }, // Indigo
-  Zippy: { primary: '#F59E0B', secondary: '#FBBF24', accent: '#FEF3C7' }, // Orange-yellow
-  Quirk: { primary: '#0EA5E9', secondary: '#38BDF8', accent: '#E0F2FE' }, // Sky
-  Flux: { primary: '#7C3AED', secondary: '#8B5CF6', accent: '#EDE9FE' }, // Purple
-}
-
 const SIZES = {
   sm: { svg: 32, font: 'text-xs' },
   md: { svg: 48, font: 'text-sm' },
   lg: { svg: 64, font: 'text-base' },
 }
 
-// SVG mascot definitions - simple cute characters
   (rest of hunk: the inline per-mascot SVG components removed here, SparkSVG, FizzSVG, OctoSVG, HootSVG, BuzzSVG, PixelSVG, ByteSVG, NovaSVG, ChipSVG, BoltSVG, DashSVG, ZapSVG, GizmoSVG, TurboSVG, BlipSVG, NeonSVG, WidgetSVG, ZippySVG, ..., are now imported from ./mascotData)
||||||
<circle cx="40" cy="34" r="5" fill="white" />
|
|
||||||
<circle cx="25" cy="35" r="2.5" fill="#1a1a1a" />
|
|
||||||
<circle cx="41" cy="35" r="2.5" fill="#1a1a1a" />
|
|
||||||
{/* Nose and mouth */}
|
|
||||||
<ellipse cx="32" cy="44" rx="3" ry="2" fill={colors.secondary} />
|
|
||||||
<path d="M32,46 L32,50 M28,52 Q32,56 36,52" stroke="#1a1a1a" strokeWidth="1.5" fill="none" />
|
|
||||||
</svg>
|
|
||||||
)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Quirk - question mark character
|
|
||||||
function QuirkSVG({ colors, size }: { colors: typeof AVATAR_COLORS.Quirk; size: number }) {
|
|
||||||
return (
|
|
||||||
<svg width={size} height={size} viewBox="0 0 64 64" fill="none">
|
|
||||||
{/* Question mark body */}
|
|
||||||
<path d="M24,20 Q24,8 32,8 Q44,8 44,20 Q44,28 32,32 L32,40"
|
|
||||||
stroke={colors.primary} strokeWidth="8" fill="none" strokeLinecap="round" />
|
|
||||||
<circle cx="32" cy="52" r="6" fill={colors.primary} />
|
|
||||||
{/* Face on the dot */}
|
|
||||||
<circle cx="29" cy="51" r="1.5" fill="white" />
|
|
||||||
<circle cx="35" cy="51" r="1.5" fill="white" />
|
|
||||||
<circle cx="29" cy="51" r="0.75" fill="#1a1a1a" />
|
|
||||||
<circle cx="35" cy="51" r="0.75" fill="#1a1a1a" />
|
|
||||||
{/* Decorative swirl */}
|
|
||||||
<circle cx="32" cy="20" r="4" fill={colors.secondary} />
|
|
||||||
</svg>
|
|
||||||
)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Flux - flowing wave character
|
|
||||||
function FluxSVG({ colors, size }: { colors: typeof AVATAR_COLORS.Flux; size: number }) {
|
|
||||||
return (
|
|
||||||
<svg width={size} height={size} viewBox="0 0 64 64" fill="none">
|
|
||||||
{/* Wave body */}
|
|
||||||
<path d="M8,32 Q16,16 32,32 Q48,48 56,32" stroke={colors.primary} strokeWidth="16" fill="none" strokeLinecap="round" />
|
|
||||||
<path d="M8,32 Q16,16 32,32 Q48,48 56,32" stroke={colors.secondary} strokeWidth="10" fill="none" strokeLinecap="round" />
|
|
||||||
{/* Face */}
|
|
||||||
<circle cx="28" cy="28" r="4" fill="white" />
|
|
||||||
<circle cx="40" cy="36" r="4" fill="white" />
|
|
||||||
<circle cx="29" cy="29" r="2" fill="#1a1a1a" />
|
|
||||||
<circle cx="41" cy="37" r="2" fill="#1a1a1a" />
|
|
||||||
{/* Sparkles */}
|
|
||||||
<circle cx="16" cy="24" r="2" fill={colors.accent} className="animate-pulse" />
|
|
||||||
<circle cx="48" cy="40" r="2" fill={colors.accent} className="animate-pulse" />
|
|
||||||
</svg>
|
|
||||||
)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Unknown agent fallback - simple question mark icon
|
|
||||||
function UnknownSVG({ colors, size }: { colors: typeof UNKNOWN_COLORS; size: number }) {
|
|
||||||
return (
|
|
||||||
<svg width={size} height={size} viewBox="0 0 64 64" fill="none" xmlns="http://www.w3.org/2000/svg">
|
|
||||||
{/* Circle background */}
|
|
||||||
<circle cx="32" cy="32" r="28" fill={colors.primary} />
|
|
||||||
<circle cx="32" cy="32" r="24" fill={colors.secondary} />
|
|
||||||
{/* Question mark */}
|
|
||||||
<text x="32" y="44" textAnchor="middle" fontSize="32" fontWeight="bold" fill="white">?</text>
|
|
||||||
</svg>
|
|
||||||
)
|
|
||||||
}
|
|
||||||
|
|
||||||
const MASCOT_SVGS: Record<AgentMascot, typeof SparkSVG> = {
|
|
||||||
// Original 5
|
|
||||||
Spark: SparkSVG,
|
|
||||||
Fizz: FizzSVG,
|
|
||||||
Octo: OctoSVG,
|
|
||||||
Hoot: HootSVG,
|
|
||||||
Buzz: BuzzSVG,
|
|
||||||
// Tech-inspired
|
|
||||||
Pixel: PixelSVG,
|
|
||||||
Byte: ByteSVG,
|
|
||||||
Nova: NovaSVG,
|
|
||||||
Chip: ChipSVG,
|
|
||||||
Bolt: BoltSVG,
|
|
||||||
// Energetic
|
|
||||||
Dash: DashSVG,
|
|
||||||
Zap: ZapSVG,
|
|
||||||
Gizmo: GizmoSVG,
|
|
||||||
Turbo: TurboSVG,
|
|
||||||
Blip: BlipSVG,
|
|
||||||
// Playful
|
|
||||||
Neon: NeonSVG,
|
|
||||||
Widget: WidgetSVG,
|
|
||||||
Zippy: ZippySVG,
|
|
||||||
Quirk: QuirkSVG,
|
|
||||||
Flux: FluxSVG,
|
|
||||||
}
|
|
||||||
|
|
||||||
// Animation classes based on state
function getStateAnimation(state: AgentState): string {
  switch (state) {

@@ -581,7 +84,7 @@ export function AgentAvatar({ name, state, size = 'md', showName = false }: Agen
  const isUnknown = name === 'Unknown'
  const colors = isUnknown ? UNKNOWN_COLORS : AVATAR_COLORS[name]
  const { svg: svgSize, font } = SIZES[size]
-  const SvgComponent = isUnknown ? UnknownSVG : MASCOT_SVGS[name]
+  const SvgComponent = isUnknown ? UnknownMascotSVG : MASCOT_SVGS[name]
  const stateDesc = getStateDescription(state)
  const ariaLabel = `Agent ${name} is ${stateDesc}`

@@ -112,12 +112,25 @@ export function AgentCard({ agent, onShowLogs }: AgentCardProps) {

        {/* Feature info */}
        <div>
+          {agent.featureIds && agent.featureIds.length > 1 ? (
+            <>
+              <div className="text-xs text-muted-foreground mb-0.5">
+                Batch: {agent.featureIds.map(id => `#${id}`).join(', ')}
+              </div>
+              <div className="text-sm font-bold truncate">
+                Active: Feature #{agent.featureId}
+              </div>
+            </>
+          ) : (
+            <>
          <div className="text-xs text-muted-foreground mb-0.5">
            Feature #{agent.featureId}
          </div>
          <div className="text-sm font-medium truncate" title={agent.featureName}>
            {agent.featureName}
          </div>
+            </>
+          )}
        </div>

        {/* Thought bubble */}
@@ -195,7 +208,10 @@ export function AgentLogModal({ agent, logs, onClose }: AgentLogModalProps) {
            </Badge>
          </div>
          <p className="text-sm text-muted-foreground">
-            Feature #{agent.featureId}: {agent.featureName}
+            {agent.featureIds && agent.featureIds.length > 1
+              ? `Batch: ${agent.featureIds.map(id => `#${id}`).join(', ')}`
+              : `Feature #${agent.featureId}: ${agent.featureName}`
+            }
          </p>
        </div>
      </div>
@@ -227,10 +227,14 @@ function DependencyGraphInner({ graphData, onNodeClick, activeAgents = [] }: Dep
  }, [])

  // Create a map of featureId to agent info for quick lookup
+  // Maps ALL batch feature IDs to the same agent
  const agentByFeatureId = useMemo(() => {
    const map = new Map<number, NodeAgentInfo>()
    for (const agent of activeAgents) {
-      map.set(agent.featureId, { name: agent.agentName, state: agent.state })
+      const ids = agent.featureIds || [agent.featureId]
+      for (const fid of ids) {
+        map.set(fid, { name: agent.agentName, state: agent.state })
+      }
    }
    return map
  }, [activeAgents])
@@ -41,9 +41,14 @@ export function KanbanColumn({
  showCreateSpec,
}: KanbanColumnProps) {
  // Create a map of feature ID to active agent for quick lookup
-  const agentByFeatureId = new Map(
-    activeAgents.map(agent => [agent.featureId, agent])
-  )
+  // Maps ALL batch feature IDs to the same agent
+  const agentByFeatureId = new Map<number, ActiveAgent>()
+  for (const agent of activeAgents) {
+    const ids = agent.featureIds || [agent.featureId]
+    for (const fid of ids) {
+      agentByFeatureId.set(fid, agent)
+    }
+  }

  return (
    <Card className={`overflow-hidden ${colorMap[color]} py-0`}>
@@ -10,6 +10,7 @@
 */

import { useState } from 'react'
+import { createPortal } from 'react-dom'
import { Bot, FileEdit, ArrowRight, ArrowLeft, Loader2, CheckCircle2, Folder } from 'lucide-react'
import { useCreateProject } from '../hooks/useProjects'
import { SpecCreationChat } from './SpecCreationChat'
@@ -200,10 +201,10 @@ export function NewProjectModal({
    }
  }

-  // Full-screen chat view
+  // Full-screen chat view - use portal to render at body level
  if (step === 'chat') {
-    return (
+    return createPortal(
-      <div className="fixed inset-0 z-50 bg-background">
+      <div className="fixed inset-0 z-50 bg-background flex flex-col">
        <SpecCreationChat
          projectName={projectName.trim()}
          onComplete={handleSpecComplete}
@@ -213,7 +214,8 @@ export function NewProjectModal({
          initializerError={initializerError}
          onRetryInitializer={handleRetryInitializer}
        />
-      </div>
+      </div>,
+      document.body
    )
  }

@@ -1,12 +1,40 @@
-import { Wifi, WifiOff } from 'lucide-react'
+import { useMemo, useState, useEffect } from 'react'
+import { Wifi, WifiOff, Brain, Sparkles } from 'lucide-react'
import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card'
import { Badge } from '@/components/ui/badge'
+import type { AgentStatus } from '../lib/types'

interface ProgressDashboardProps {
  passing: number
  total: number
  percentage: number
  isConnected: boolean
+  logs?: Array<{ line: string; timestamp: string }>
+  agentStatus?: AgentStatus
+}
+
+const IDLE_TIMEOUT = 30000
+
+function isAgentThought(line: string): boolean {
+  const trimmed = line.trim()
+  if (/^\[Tool:/.test(trimmed)) return false
+  if (/^\s*Input:\s*\{/.test(trimmed)) return false
+  if (/^\[(Done|Error)\]/.test(trimmed)) return false
+  if (/^Output:/.test(trimmed)) return false
+  if (/^[[{]/.test(trimmed)) return false
+  if (trimmed.length < 10) return false
+  if (/^[A-Za-z]:\\/.test(trimmed)) return false
+  if (/^\/[a-z]/.test(trimmed)) return false
+  return true
+}
+
+function getLatestThought(logs: Array<{ line: string; timestamp: string }>): string | null {
+  for (let i = logs.length - 1; i >= 0; i--) {
+    if (isAgentThought(logs[i].line)) {
+      return logs[i].line.trim()
+    }
+  }
+  return null
}

export function ProgressDashboard({
@@ -14,10 +42,43 @@ export function ProgressDashboard({
  total,
  percentage,
  isConnected,
+  logs = [],
+  agentStatus,
}: ProgressDashboardProps) {
+  const thought = useMemo(() => getLatestThought(logs), [logs])
+  const [displayedThought, setDisplayedThought] = useState<string | null>(null)
+  const [textVisible, setTextVisible] = useState(true)
+
+  const lastLogTimestamp = logs.length > 0
+    ? new Date(logs[logs.length - 1].timestamp).getTime()
+    : 0
+
+  const showThought = useMemo(() => {
+    if (!thought) return false
+    if (agentStatus === 'running') return true
+    if (agentStatus === 'paused') {
+      return Date.now() - lastLogTimestamp < IDLE_TIMEOUT
+    }
+    return false
+  }, [thought, agentStatus, lastLogTimestamp])
+
+  useEffect(() => {
+    if (thought !== displayedThought && thought) {
+      setTextVisible(false)
+      const timeout = setTimeout(() => {
+        setDisplayedThought(thought)
+        setTextVisible(true)
+      }, 150)
+      return () => clearTimeout(timeout)
+    }
+  }, [thought, displayedThought])
+
+  const isRunning = agentStatus === 'running'
+
  return (
    <Card>
-      <CardHeader className="flex-row items-center justify-between space-y-0 pb-4">
+      <CardHeader className="flex-row items-center justify-between space-y-0 pb-0">
+        <div className="flex items-center gap-3">
        <CardTitle className="text-xl uppercase tracking-wide">
          Progress
        </CardTitle>
@@ -34,47 +95,56 @@ export function ProgressDashboard({
          </>
        )}
        </Badge>
-      </CardHeader>
+        </div>
+        <div className="flex items-baseline gap-1">
-      <CardContent>
+          <span className="font-mono text-lg font-bold text-primary">
-        {/* Large Percentage */}
+            {passing}
-        <div className="text-center mb-6">
-          <span className="inline-flex items-baseline">
-            <span className="text-6xl font-bold tabular-nums">
-              {percentage.toFixed(1)}
-            </span>
-            <span className="text-3xl font-semibold text-muted-foreground">
-              %
          </span>
+          <span className="text-sm text-muted-foreground">/</span>
+          <span className="font-mono text-lg font-bold">
+            {total}
          </span>
        </div>
+      </CardHeader>
+
+      <CardContent className="pt-3 pb-3">
+        <div className="flex items-center gap-4">
        {/* Progress Bar */}
-        <div className="h-3 bg-muted rounded-full overflow-hidden mb-6">
+          <div className="h-2.5 bg-muted rounded-full overflow-hidden flex-1">
          <div
            className="h-full bg-primary rounded-full transition-all duration-500 ease-out"
            style={{ width: `${percentage}%` }}
          />
        </div>
+          {/* Percentage */}
-        {/* Stats */}
+          <span className="text-sm font-bold tabular-nums text-muted-foreground w-12 text-right">
-        <div className="flex justify-center gap-8 text-center">
+            {percentage.toFixed(1)}%
-          <div>
-            <span className="font-mono text-3xl font-bold text-primary">
-              {passing}
-            </span>
-            <span className="block text-sm text-muted-foreground uppercase">
-              Passing
          </span>
        </div>
-          <div className="text-4xl text-muted-foreground">/</div>
-          <div>
+        {/* Agent Thought */}
-            <span className="font-mono text-3xl font-bold">
+        <div
-              {total}
+          className={`
-            </span>
+            transition-all duration-300 ease-out overflow-hidden
-            <span className="block text-sm text-muted-foreground uppercase">
+            ${showThought && displayedThought ? 'opacity-100 max-h-10 mt-3' : 'opacity-0 max-h-0 mt-0'}
-              Total
+          `}
-            </span>
+        >
+          <div className="flex items-center gap-2">
+            <div className="relative shrink-0">
+              <Brain size={16} className="text-primary" strokeWidth={2.5} />
+              {isRunning && (
+                <Sparkles size={8} className="absolute -top-1 -right-1 text-yellow-500 animate-pulse" />
+              )}
+            </div>
+            <p
+              className="font-mono text-sm truncate text-muted-foreground transition-all duration-150 ease-out"
+              style={{
+                opacity: textVisible ? 1 : 0,
+                transform: textVisible ? 'translateY(0)' : 'translateY(-4px)',
+              }}
+            >
+              {displayedThought?.replace(/:$/, '')}
+            </p>
          </div>
        </div>
      </CardContent>
@@ -41,6 +41,12 @@ export function SettingsModal({ isOpen, onClose }: SettingsModalProps) {
    }
  }

+  const handleBatchSizeChange = (size: number) => {
+    if (!updateSettings.isPending) {
+      updateSettings.mutate({ batch_size: size })
+    }
+  }
+
  const models = modelsData?.models ?? []
  const isSaving = updateSettings.isPending

@@ -171,6 +177,24 @@ export function SettingsModal({ isOpen, onClose }: SettingsModalProps) {
            />
          </div>

+          {/* Headless Browser Toggle */}
+          <div className="flex items-center justify-between">
+            <div className="space-y-0.5">
+              <Label htmlFor="playwright-headless" className="font-medium">
+                Headless Browser
+              </Label>
+              <p className="text-sm text-muted-foreground">
+                Run browser without visible window (saves CPU)
+              </p>
+            </div>
+            <Switch
+              id="playwright-headless"
+              checked={settings.playwright_headless}
+              onCheckedChange={() => updateSettings.mutate({ playwright_headless: !settings.playwright_headless })}
+              disabled={isSaving}
+            />
+          </div>
+
          {/* Model Selection */}
          <div className="space-y-2">
            <Label className="font-medium">Model</Label>
@@ -216,6 +240,30 @@ export function SettingsModal({ isOpen, onClose }: SettingsModalProps) {
            </div>
          </div>

+          {/* Features per Agent */}
+          <div className="space-y-2">
+            <Label className="font-medium">Features per Agent</Label>
+            <p className="text-sm text-muted-foreground">
+              Number of features assigned to each coding agent
+            </p>
+            <div className="flex rounded-lg border overflow-hidden">
+              {[1, 2, 3].map((size) => (
+                <button
+                  key={size}
+                  onClick={() => handleBatchSizeChange(size)}
+                  disabled={isSaving}
+                  className={`flex-1 py-2 px-3 text-sm font-medium transition-colors ${
+                    (settings.batch_size ?? 1) === size
+                      ? 'bg-primary text-primary-foreground'
+                      : 'bg-background text-foreground hover:bg-muted'
+                  } ${isSaving ? 'opacity-50 cursor-not-allowed' : ''}`}
+                >
+                  {size}
+                </button>
+              ))}
+            </div>
+          </div>
+
          {/* Update Error */}
          {updateSettings.isError && (
            <Alert variant="destructive">
@@ -228,7 +228,7 @@ export function SpecCreationChat({
  }

  return (
-    <div className="flex flex-col h-full bg-background">
+    <div className="flex flex-col h-screen bg-background">
      {/* Header */}
      <div className="flex items-center justify-between p-4 border-b-2 border-border bg-card">
        <div className="flex items-center gap-3">
@@ -303,7 +303,7 @@ export function SpecCreationChat({
      )}

      {/* Messages area */}
-      <div className="flex-1 overflow-y-auto py-4">
+      <div className="flex-1 overflow-y-auto py-4 min-h-0">
        {messages.length === 0 && !isLoading && (
          <div className="flex flex-col items-center justify-center h-full text-center p-8">
            <Card className="p-6 max-w-md">
@@ -451,8 +451,7 @@ export function SpecCreationChat({

      {/* Completion footer */}
      {isComplete && (
-        <div className={`p-4 border-t-2 border-border ${
-          initializerStatus === 'error' ? 'bg-destructive' : 'bg-green-500'
+        <div className={`p-4 border-t-2 border-border ${initializerStatus === 'error' ? 'bg-destructive' : 'bg-green-500'
        }`}>
          <div className="flex items-center justify-between">
            <div className="flex items-center gap-2">
ui/src/components/docs/DocsContent.tsx (new file, 130 lines)
@@ -0,0 +1,130 @@
/**
 * DocsContent Component
 *
 * Renders all 13 documentation section components in order.
 * Uses IntersectionObserver to detect which section heading is currently
 * visible in the viewport, and notifies the parent so the sidebar
 * can highlight the active section.
 */

import { useEffect, useRef, useCallback } from 'react'
import { DOC_SECTIONS } from './docsData'
// Section components -- lazy-load candidates in the future, but imported
// statically for now to keep the build simple and deterministic.
import { GettingStarted } from './sections/GettingStarted'
import { AppSpecSetup } from './sections/AppSpecSetup'
import { ProjectStructure } from './sections/ProjectStructure'
import { FeaturesKanban } from './sections/FeaturesKanban'
import { AgentSystem } from './sections/AgentSystem'
import { SettingsConfig } from './sections/SettingsConfig'
import { DeveloperTools } from './sections/DeveloperTools'
import { AIAssistant } from './sections/AIAssistant'
import { Scheduling } from './sections/Scheduling'
import { AppearanceThemes } from './sections/AppearanceThemes'
import { Security } from './sections/Security'
import { AdvancedConfig } from './sections/AdvancedConfig'
import { FAQ } from './sections/FAQ'

interface DocsContentProps {
  activeSectionId: string | null
  onSectionVisible: (id: string) => void
}

/**
 * Maps each section id from docsData to its corresponding React component.
 * Order matches DOC_SECTIONS so we can iterate safely.
 */
const SECTION_COMPONENTS: Record<string, React.FC> = {
  'getting-started': GettingStarted,
  'app-spec-setup': AppSpecSetup,
  'project-structure': ProjectStructure,
  'features-kanban': FeaturesKanban,
  'agent-system': AgentSystem,
  'settings-config': SettingsConfig,
  'developer-tools': DeveloperTools,
  'ai-assistant': AIAssistant,
  scheduling: Scheduling,
  'appearance-themes': AppearanceThemes,
  security: Security,
  'advanced-config': AdvancedConfig,
  faq: FAQ,
}

export function DocsContent({ onSectionVisible }: DocsContentProps) {
  const containerRef = useRef<HTMLDivElement>(null)
  // Store refs to each section heading element so the observer can watch them
  const headingRefs = useRef<Map<string, HTMLElement>>(new Map())

  // Stable callback ref setter -- avoids recreating refs on every render
  const setHeadingRef = useCallback((id: string, element: HTMLElement | null) => {
    if (element) {
      headingRefs.current.set(id, element)
    } else {
      headingRefs.current.delete(id)
    }
  }, [])

  // IntersectionObserver: track which section heading is at or near the top of the viewport
  useEffect(() => {
    const headings = headingRefs.current
    if (headings.size === 0) return

    // rootMargin: trigger when a heading enters the top 20% of the viewport.
    // This ensures the sidebar updates *before* the user scrolls past the heading.
    const observer = new IntersectionObserver(
      (entries) => {
        // Find the topmost visible heading -- the one closest to the top of the viewport
        const visible = entries
          .filter((entry) => entry.isIntersecting)
          .sort((a, b) => a.boundingClientRect.top - b.boundingClientRect.top)

        if (visible.length > 0) {
          const topEntry = visible[0]
          const sectionId = topEntry.target.getAttribute('data-section-id')
          if (sectionId) {
            onSectionVisible(sectionId)
          }
        }
      },
      {
        // Observe from the very top of the viewport down to -60% from the bottom,
        // so headings are detected while in the upper portion of the screen.
        rootMargin: '0px 0px -60% 0px',
        threshold: 0,
      },
    )

    headings.forEach((element) => observer.observe(element))

    return () => observer.disconnect()
  }, [onSectionVisible])

  return (
    <div ref={containerRef} className="docs-prose">
      {DOC_SECTIONS.map((section) => {
        const SectionComponent = SECTION_COMPONENTS[section.id]
        if (!SectionComponent) return null

        const Icon = section.icon

        return (
          <div key={section.id} id={section.id} className="scroll-mt-24 mb-16">
            {/* Section heading with anchor */}
            <h2
              ref={(el) => setHeadingRef(section.id, el)}
              data-section-id={section.id}
              className="font-display text-2xl font-bold tracking-tight mb-6 flex items-center gap-3
                text-foreground border-b-2 border-border pb-3"
            >
              <Icon size={24} className="text-primary shrink-0" />
              {section.title}
            </h2>

            {/* Section body */}
            <SectionComponent />
          </div>
        )
      })}
    </div>
  )
}
ui/src/components/docs/DocsPage.tsx (new file, 215 lines)
@@ -0,0 +1,215 @@
/**
 * DocsPage Component
 *
 * Main layout for the documentation route (#/docs).
 * Full-page layout with a sticky header, collapsible sidebar on the left,
 * and scrollable content area on the right.
 *
 * Mobile-responsive: sidebar collapses behind a hamburger menu that
 * opens as an overlay.
 */

import { useState, useEffect, useCallback } from 'react'
import { ArrowLeft, Menu, X, Moon, Sun } from 'lucide-react'
import { useHashRoute } from '../../hooks/useHashRoute'
import { useTheme } from '../../hooks/useTheme'
import { ThemeSelector } from '../ThemeSelector'
import { Button } from '@/components/ui/button'
import { Badge } from '@/components/ui/badge'
import { DocsSidebar } from './DocsSidebar'
import { DocsSearch } from './DocsSearch'
import { DocsContent } from './DocsContent'

export function DocsPage() {
  const [activeSectionId, setActiveSectionId] = useState<string | null>(null)
  const [searchQuery, setSearchQuery] = useState('')
  const [mobileSidebarOpen, setMobileSidebarOpen] = useState(false)

  const { section: initialSection } = useHashRoute()
  const { theme, setTheme, darkMode, toggleDarkMode, themes } = useTheme()

  // On mount, if the hash includes a section id (e.g. #/docs/getting-started),
  // scroll to it and set it as active
  useEffect(() => {
    if (initialSection) {
      setActiveSectionId(initialSection)
      // Delay scroll slightly so the DOM is rendered
      requestAnimationFrame(() => {
        const element = document.getElementById(initialSection)
        if (element) {
          element.scrollIntoView({ behavior: 'smooth', block: 'start' })
        }
      })
    }
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, []) // Run only on mount

  // When a sidebar item is clicked, scroll the corresponding element into view
  const handleSectionClick = useCallback((id: string) => {
    setActiveSectionId(id)

    // Update hash for linkability (without triggering a route change)
    history.replaceState(null, '', `#/docs/${id}`)

    const element = document.getElementById(id)
    if (element) {
      element.scrollIntoView({ behavior: 'smooth', block: 'start' })
    }
  }, [])

  // Called by DocsContent's IntersectionObserver when a heading scrolls into view
  const handleSectionVisible = useCallback((id: string) => {
    setActiveSectionId(id)
  }, [])

  // Close mobile sidebar when pressing Escape
  useEffect(() => {
    const handleKeyDown = (e: KeyboardEvent) => {
      if (e.key === 'Escape' && mobileSidebarOpen) {
        setMobileSidebarOpen(false)
      }
    }

    window.addEventListener('keydown', handleKeyDown)
    return () => window.removeEventListener('keydown', handleKeyDown)
  }, [mobileSidebarOpen])

  // Prevent body scroll when mobile sidebar overlay is open
  useEffect(() => {
    if (mobileSidebarOpen) {
      document.body.style.overflow = 'hidden'
    } else {
      document.body.style.overflow = ''
    }
    return () => {
      document.body.style.overflow = ''
    }
  }, [mobileSidebarOpen])

  return (
    <div className="min-h-screen bg-background">
      {/* Sticky header */}
      <header className="sticky top-0 z-50 bg-card/80 backdrop-blur-md text-foreground border-b-2 border-border">
        <div className="max-w-7xl mx-auto px-4 py-3">
          <div className="flex items-center justify-between">
            {/* Left side: hamburger (mobile) + title + badge */}
            <div className="flex items-center gap-3">
              {/* Mobile hamburger button -- only visible below lg breakpoint */}
              <Button
                variant="ghost"
                size="icon-sm"
                className="lg:hidden"
                onClick={() => setMobileSidebarOpen(!mobileSidebarOpen)}
                aria-label={mobileSidebarOpen ? 'Close sidebar' : 'Open sidebar'}
              >
                {mobileSidebarOpen ? <X size={20} /> : <Menu size={20} />}
              </Button>

              <a
                href="#/"
                className="font-display text-xl font-bold tracking-tight uppercase text-foreground
                  hover:text-primary transition-colors"
              >
                AutoCoder
              </a>

              <Badge variant="secondary" className="text-xs font-medium">
                Documentation
              </Badge>
            </div>

            {/* Right side: theme controls + back button */}
            <div className="flex items-center gap-2">
              <ThemeSelector
                themes={themes}
                currentTheme={theme}
                onThemeChange={setTheme}
              />

              <Button
                onClick={toggleDarkMode}
                variant="outline"
                size="sm"
                title="Toggle dark mode"
                aria-label="Toggle dark mode"
              >
                {darkMode ? <Sun size={18} /> : <Moon size={18} />}
              </Button>

              <Button
                variant="outline"
                size="sm"
                asChild
              >
                <a href="#/" className="inline-flex items-center gap-1.5">
                  <ArrowLeft size={16} />
                  <span className="hidden sm:inline">Back to App</span>
                </a>
              </Button>
            </div>
          </div>
        </div>
      </header>

      {/* Body: sidebar + content */}
      <div className="max-w-7xl mx-auto flex">
        {/* ----------------------------------------------------------------
            Desktop sidebar -- visible at lg and above
            Fixed width, sticky below the header, independently scrollable
            ---------------------------------------------------------------- */}
        <aside
          className="hidden lg:block w-[280px] shrink-0 sticky top-[57px] h-[calc(100vh-57px)]
            overflow-y-auto border-r border-border p-4 space-y-4"
        >
          <DocsSearch value={searchQuery} onChange={setSearchQuery} />
          <DocsSidebar
            activeSectionId={activeSectionId}
            onSectionClick={handleSectionClick}
            searchQuery={searchQuery}
          />
        </aside>

        {/* ----------------------------------------------------------------
            Mobile sidebar overlay -- visible below lg breakpoint
            ---------------------------------------------------------------- */}
        {mobileSidebarOpen && (
          <>
            {/* Backdrop */}
            <div
              className="fixed inset-0 z-40 bg-background/60 backdrop-blur-sm lg:hidden"
              onClick={() => setMobileSidebarOpen(false)}
              aria-hidden="true"
            />

            {/* Sidebar panel */}
            <aside
              className="fixed top-[57px] left-0 z-50 w-[280px] h-[calc(100vh-57px)]
                overflow-y-auto bg-card border-r-2 border-border p-4 space-y-4
                animate-slide-in lg:hidden"
            >
              <DocsSearch value={searchQuery} onChange={setSearchQuery} />
              <DocsSidebar
                activeSectionId={activeSectionId}
                onSectionClick={handleSectionClick}
                searchQuery={searchQuery}
                onMobileClose={() => setMobileSidebarOpen(false)}
              />
            </aside>
          </>
        )}

        {/* ----------------------------------------------------------------
            Content area -- fills remaining space, scrollable
            ---------------------------------------------------------------- */}
        <main className="flex-1 min-w-0 px-6 py-8 lg:px-10">
          <div className="max-w-[65ch] mx-auto">
            <DocsContent
              activeSectionId={activeSectionId}
              onSectionVisible={handleSectionVisible}
            />
          </div>
        </main>
      </div>
    </div>
  )
}
ui/src/components/docs/DocsSearch.tsx (new file, 78 lines)
@@ -0,0 +1,78 @@
/**
 * DocsSearch Component
 *
 * Search input for the documentation sidebar.
 * Supports Ctrl/Cmd+K keyboard shortcut to focus,
 * and shows a keyboard hint when the input is empty.
 */

import { useRef, useEffect } from 'react'
import { Search, X } from 'lucide-react'

interface DocsSearchProps {
  value: string
  onChange: (value: string) => void
}

export function DocsSearch({ value, onChange }: DocsSearchProps) {
  const inputRef = useRef<HTMLInputElement>(null)

  // Global keyboard shortcut: Ctrl/Cmd+K focuses the search input
  useEffect(() => {
    const handleKeyDown = (e: KeyboardEvent) => {
      if ((e.ctrlKey || e.metaKey) && e.key === 'k') {
        e.preventDefault()
        inputRef.current?.focus()
      }
    }

    window.addEventListener('keydown', handleKeyDown)
    return () => window.removeEventListener('keydown', handleKeyDown)
  }, [])

  return (
    <div className="relative">
      {/* Search icon */}
      <Search
        size={16}
        className="absolute left-3 top-1/2 -translate-y-1/2 text-muted-foreground pointer-events-none"
      />

      <input
        ref={inputRef}
        type="text"
        value={value}
        onChange={(e) => onChange(e.target.value)}
        placeholder="Search docs..."
        className="w-full pl-9 pr-16 py-2 text-sm bg-muted border border-border rounded-lg
          text-foreground placeholder:text-muted-foreground
          focus:outline-none focus:ring-2 focus:ring-ring/50 focus:border-ring
          transition-colors"
      />

      {/* Right side: clear button when has value, otherwise Ctrl+K hint */}
      {value ? (
        <button
          onClick={() => {
            onChange('')
            inputRef.current?.focus()
          }}
          className="absolute right-3 top-1/2 -translate-y-1/2 text-muted-foreground
            hover:text-foreground transition-colors"
          aria-label="Clear search"
        >
          <X size={16} />
        </button>
      ) : (
        <kbd
          className="absolute right-3 top-1/2 -translate-y-1/2
            text-[10px] text-muted-foreground bg-background
            border border-border rounded px-1.5 py-0.5
            pointer-events-none select-none"
        >
          Ctrl+K
        </kbd>
      )}
    </div>
  )
}
ui/src/components/docs/DocsSidebar.tsx (new file, 189 lines)
@@ -0,0 +1,189 @@
/**
 * DocsSidebar Component
 *
 * Left sidebar navigation for the documentation page.
 * Lists all sections from docsData with expandable subsections.
 * Supports search filtering with auto-expansion of matching sections.
 */

import { useState, useMemo } from 'react'
import { ChevronRight } from 'lucide-react'
import { DOC_SECTIONS, type DocSection } from './docsData'

interface DocsSidebarProps {
  activeSectionId: string | null
  onSectionClick: (id: string) => void
  searchQuery: string
  onMobileClose?: () => void
}

export function DocsSidebar({
  activeSectionId,
  onSectionClick,
  searchQuery,
  onMobileClose,
}: DocsSidebarProps) {
  // Track which top-level sections are manually expanded by the user
  const [expandedSections, setExpandedSections] = useState<Set<string>>(() => {
    // Start with the first section expanded so the sidebar is not fully collapsed
    const initial = new Set<string>()
    if (DOC_SECTIONS.length > 0) {
      initial.add(DOC_SECTIONS[0].id)
    }
    return initial
  })

  const normalizedQuery = searchQuery.trim().toLowerCase()

  // Filter sections based on search query, matching against section title,
  // subsection titles, and keywords
  const filteredSections = useMemo(() => {
    if (!normalizedQuery) {
      return DOC_SECTIONS
    }

    return DOC_SECTIONS.filter((section) => {
      // Check section title
      if (section.title.toLowerCase().includes(normalizedQuery)) return true

      // Check keywords
      if (section.keywords.some((kw) => kw.toLowerCase().includes(normalizedQuery))) return true

      // Check subsection titles
      if (section.subsections.some((sub) => sub.title.toLowerCase().includes(normalizedQuery))) {
        return true
      }

      return false
    })
  }, [normalizedQuery])

  // Determine which sections should appear expanded:
  // - When searching: auto-expand all matching sections
  // - Otherwise: use manual expanded state, plus expand whichever section contains the active item
  const isSectionExpanded = (sectionId: string): boolean => {
    if (normalizedQuery) return true

    if (expandedSections.has(sectionId)) return true

    // Also expand the section that contains the currently active subsection
    if (activeSectionId) {
      const section = DOC_SECTIONS.find((s) => s.id === sectionId)
      if (section) {
        if (section.id === activeSectionId) return true
        if (section.subsections.some((sub) => sub.id === activeSectionId)) return true
      }
    }

    return false
  }

  const toggleSection = (sectionId: string) => {
    setExpandedSections((prev) => {
      const next = new Set(prev)
      if (next.has(sectionId)) {
        next.delete(sectionId)
      } else {
        next.add(sectionId)
      }
      return next
    })
  }

  /**
   * Checks whether a given id (section or subsection) is the currently active item.
   * Active items get a highlighted visual treatment.
   */
  const isActive = (id: string): boolean => activeSectionId === id

  /**
   * Checks whether a section contains the active subsection.
   * Used to highlight parent sections in a muted way.
   */
  const sectionContainsActive = (section: DocSection): boolean => {
    if (!activeSectionId) return false
    return section.subsections.some((sub) => sub.id === activeSectionId)
  }

  const handleItemClick = (id: string) => {
    onSectionClick(id)
    // On mobile, close the sidebar after navigation
    onMobileClose?.()
  }

  return (
    <nav aria-label="Documentation navigation" className="space-y-1">
      {filteredSections.map((section) => {
        const Icon = section.icon
        const expanded = isSectionExpanded(section.id)
        const active = isActive(section.id)
        const containsActive = sectionContainsActive(section)

        return (
          <div key={section.id}>
            {/* Section header (clickable to expand/collapse and navigate) */}
            <button
              onClick={() => {
                toggleSection(section.id)
                handleItemClick(section.id)
              }}
              className={`w-full flex items-center gap-2 px-3 py-2 text-sm rounded-md
                transition-colors cursor-pointer group
                ${active
                  ? 'bg-primary/10 border-l-2 border-primary text-foreground font-semibold'
                  : containsActive
                    ? 'text-foreground font-medium'
                    : 'text-muted-foreground hover:text-foreground hover:bg-muted'
                }`}
              aria-expanded={expanded}
            >
              <Icon
                size={16}
                className={`shrink-0 ${active ? 'text-primary' : 'text-muted-foreground group-hover:text-foreground'}`}
              />

              <span className="flex-1 text-left truncate">{section.title}</span>

              <ChevronRight
                size={14}
                className={`shrink-0 text-muted-foreground transition-transform duration-200
                  ${expanded ? 'rotate-90' : ''}`}
              />
            </button>

            {/* Subsections (shown when expanded) */}
            {expanded && (
              <div className="ml-4 mt-0.5 space-y-0.5 border-l border-border animate-slide-in-down">
                {section.subsections.map((sub) => {
                  const subActive = isActive(sub.id)

                  return (
                    <button
                      key={sub.id}
                      onClick={() => handleItemClick(sub.id)}
                      className={`w-full text-left px-3 py-1.5 text-sm rounded-r-md
                        transition-colors cursor-pointer
                        ${subActive
                          ? 'bg-primary/10 border-l-2 border-primary text-foreground font-medium -ml-px'
                          : 'text-muted-foreground hover:text-foreground hover:bg-muted'
                        }`}
                    >
                      {sub.title}
                    </button>
                  )
                })}
              </div>
            )}
          </div>
        )
      })}

      {/* No results message when search filters everything out */}
      {normalizedQuery && filteredSections.length === 0 && (
        <div className="px-3 py-6 text-center text-sm text-muted-foreground">
          No sections match “{searchQuery}”
        </div>
      )}
    </nav>
  )
}
ui/src/components/docs/docsData.ts (new file, 222 lines)
@@ -0,0 +1,222 @@
import {
  Rocket,
  FileText,
  FolderTree,
  LayoutGrid,
  Bot,
  Settings,
  Terminal,
  MessageSquare,
  Clock,
  Palette,
  Shield,
  Wrench,
  HelpCircle,
  type LucideIcon,
} from 'lucide-react'

export interface DocSubsection {
  id: string
  title: string
}

export interface DocSection {
  id: string
  title: string
  icon: LucideIcon
  subsections: DocSubsection[]
  keywords: string[]
}

export const DOC_SECTIONS: DocSection[] = [
  {
    id: 'getting-started',
    title: 'Getting Started',
    icon: Rocket,
    subsections: [
      { id: 'what-is-autocoder', title: 'What is AutoCoder?' },
      { id: 'quick-start', title: 'Quick Start' },
      { id: 'creating-a-project', title: 'Creating a New Project' },
      { id: 'existing-project', title: 'Adding to an Existing Project' },
      { id: 'system-requirements', title: 'System Requirements' },
    ],
    keywords: ['install', 'setup', 'start', 'begin', 'new', 'requirements', 'prerequisites'],
  },
  {
    id: 'app-spec-setup',
    title: 'App Spec & Project Setup',
    icon: FileText,
    subsections: [
      { id: 'what-is-app-spec', title: 'What is an App Spec?' },
      { id: 'creating-spec-with-claude', title: 'Creating a Spec with Claude' },
      { id: 'writing-spec-manually', title: 'Writing a Spec Manually' },
      { id: 'initializer-agent', title: 'The Initializer Agent' },
      { id: 'starting-after-spec', title: 'Starting After Spec Creation' },
    ],
    keywords: ['spec', 'specification', 'xml', 'app_spec', 'initializer', 'prompt', 'template'],
  },
  {
    id: 'project-structure',
    title: 'Target Project Structure',
    icon: FolderTree,
    subsections: [
      { id: 'autocoder-directory', title: '.autocoder/ Directory Layout' },
      { id: 'features-db', title: 'Features Database' },
      { id: 'prompts-directory', title: 'Prompts Directory' },
      { id: 'allowed-commands-yaml', title: 'Allowed Commands Config' },
      { id: 'claude-md', title: 'CLAUDE.md Convention' },
      { id: 'legacy-migration', title: 'Legacy Layout Migration' },
      { id: 'claude-inheritance', title: 'Claude Inheritance' },
    ],
    keywords: ['folder', 'directory', 'structure', 'layout', 'files', 'database', 'sqlite', 'migration'],
  },
  {
    id: 'features-kanban',
    title: 'Features & Kanban Board',
    icon: LayoutGrid,
    subsections: [
      { id: 'kanban-overview', title: 'Kanban Board Overview' },
      { id: 'feature-cards', title: 'Feature Cards' },
      { id: 'dependency-graph', title: 'Dependency Graph View' },
      { id: 'adding-features', title: 'Adding Features' },
      { id: 'editing-features', title: 'Editing & Deleting Features' },
      { id: 'feature-dependencies', title: 'Feature Dependencies' },
      { id: 'expanding-with-ai', title: 'Expanding Project with AI' },
      { id: 'feature-priority', title: 'Priority & Ordering' },
    ],
    keywords: ['kanban', 'board', 'feature', 'card', 'dependency', 'graph', 'priority', 'pending', 'progress', 'done'],
  },
  {
    id: 'agent-system',
    title: 'Agent System',
    icon: Bot,
    subsections: [
      { id: 'maestro-orchestrator', title: 'Maestro: The Orchestrator' },
      { id: 'coding-agents', title: 'Coding Agents' },
      { id: 'testing-agents', title: 'Testing Agents' },
      { id: 'agent-lifecycle', title: 'Agent Lifecycle' },
      { id: 'concurrency', title: 'Concurrency Control' },
      { id: 'mission-control', title: 'Agent Mission Control' },
      { id: 'agent-mascots', title: 'Agent Mascots & States' },
      { id: 'agent-logs', title: 'Viewing Agent Logs' },
      { id: 'process-limits', title: 'Process Limits' },
    ],
    keywords: ['agent', 'maestro', 'orchestrator', 'coding', 'testing', 'parallel', 'concurrency', 'mascot', 'spark', 'fizz', 'octo', 'batch'],
  },
  {
    id: 'settings-config',
    title: 'Settings & Configuration',
    icon: Settings,
    subsections: [
      { id: 'opening-settings', title: 'Opening Settings' },
      { id: 'yolo-mode', title: 'YOLO Mode' },
      { id: 'headless-browser', title: 'Headless Browser' },
      { id: 'model-selection', title: 'Model Selection' },
      { id: 'regression-agents', title: 'Regression Agents' },
      { id: 'features-per-agent', title: 'Features per Agent (Batch Size)' },
      { id: 'concurrency-setting', title: 'Concurrency' },
      { id: 'settings-persistence', title: 'How Settings are Persisted' },
    ],
    keywords: ['settings', 'config', 'yolo', 'headless', 'model', 'opus', 'sonnet', 'haiku', 'batch', 'regression'],
  },
  {
    id: 'developer-tools',
    title: 'Developer Tools',
    icon: Terminal,
    subsections: [
      { id: 'debug-panel', title: 'Debug Panel' },
      { id: 'agent-logs-tab', title: 'Agent Logs Tab' },
      { id: 'dev-server-logs', title: 'Dev Server Logs Tab' },
      { id: 'terminal', title: 'Terminal' },
      { id: 'dev-server-control', title: 'Dev Server Control' },
      { id: 'per-agent-logs', title: 'Per-Agent Logs' },
    ],
    keywords: ['debug', 'terminal', 'logs', 'dev server', 'console', 'xterm', 'shell'],
  },
  {
    id: 'ai-assistant',
    title: 'AI Assistant',
    icon: MessageSquare,
    subsections: [
      { id: 'what-is-assistant', title: 'What is the Assistant?' },
      { id: 'opening-assistant', title: 'Opening the Assistant' },
      { id: 'assistant-capabilities', title: 'What It Can Do' },
      { id: 'assistant-limitations', title: 'What It Cannot Do' },
      { id: 'conversation-history', title: 'Conversation History' },
    ],
    keywords: ['assistant', 'ai', 'chat', 'help', 'question', 'conversation'],
  },
  {
    id: 'scheduling',
    title: 'Scheduling',
    icon: Clock,
    subsections: [
      { id: 'what-scheduling-does', title: 'What Scheduling Does' },
      { id: 'creating-schedule', title: 'Creating a Schedule' },
      { id: 'schedule-settings', title: 'Schedule Settings' },
      { id: 'schedule-overrides', title: 'Schedule Overrides' },
      { id: 'crash-recovery', title: 'Crash Recovery' },
    ],
    keywords: ['schedule', 'timer', 'automated', 'cron', 'run', 'recurring', 'utc'],
  },
  {
    id: 'appearance-themes',
    title: 'Appearance & Themes',
    icon: Palette,
    subsections: [
      { id: 'themes-overview', title: 'Themes Overview' },
      { id: 'dark-light-mode', title: 'Dark & Light Mode' },
      { id: 'theme-selector', title: 'Theme Selector' },
      { id: 'keyboard-shortcuts', title: 'Keyboard Shortcuts' },
    ],
    keywords: ['theme', 'dark', 'light', 'color', 'appearance', 'twitter', 'claude', 'neo', 'brutalism', 'retro', 'aurora', 'business', 'keyboard', 'shortcut'],
  },
  {
    id: 'security',
    title: 'Security',
    icon: Shield,
    subsections: [
      { id: 'command-validation', title: 'Command Validation Overview' },
      { id: 'command-hierarchy', title: 'Command Hierarchy' },
      { id: 'hardcoded-blocklist', title: 'Hardcoded Blocklist' },
      { id: 'global-allowlist', title: 'Global Allowlist' },
      { id: 'project-allowlist', title: 'Per-Project Allowed Commands' },
      { id: 'org-config', title: 'Organization Configuration' },
      { id: 'extra-read-paths', title: 'Extra Read Paths' },
      { id: 'filesystem-sandboxing', title: 'Filesystem Sandboxing' },
    ],
    keywords: ['security', 'sandbox', 'allowlist', 'blocklist', 'command', 'bash', 'permission', 'filesystem'],
  },
  {
    id: 'advanced-config',
    title: 'Advanced Configuration',
    icon: Wrench,
    subsections: [
      { id: 'vertex-ai', title: 'Vertex AI Setup' },
      { id: 'ollama', title: 'Ollama Local Models' },
      { id: 'env-variables', title: 'Environment Variables' },
      { id: 'cli-arguments', title: 'CLI Arguments' },
      { id: 'webhooks', title: 'Webhook Support' },
      { id: 'project-registry', title: 'Project Registry' },
    ],
    keywords: ['vertex', 'gcloud', 'ollama', 'local', 'env', 'environment', 'cli', 'webhook', 'n8n', 'registry', 'api'],
  },
  {
    id: 'faq',
    title: 'FAQ & Troubleshooting',
    icon: HelpCircle,
    subsections: [
      { id: 'faq-new-project', title: 'Starting a New Project' },
      { id: 'faq-existing-project', title: 'Adding to Existing Project' },
      { id: 'faq-agent-crash', title: 'Agent Crashes' },
      { id: 'faq-custom-commands', title: 'Custom Bash Commands' },
      { id: 'faq-blocked-features', title: 'Blocked Features' },
      { id: 'faq-parallel', title: 'Running in Parallel' },
      { id: 'faq-local-model', title: 'Using Local Models' },
      { id: 'faq-reset', title: 'Resetting a Project' },
|
||||||
|
{ id: 'faq-agent-types', title: 'Coding vs Testing Agents' },
|
||||||
|
{ id: 'faq-real-time', title: 'Monitoring in Real Time' },
|
||||||
|
],
|
||||||
|
keywords: ['faq', 'troubleshoot', 'help', 'problem', 'issue', 'fix', 'error', 'stuck', 'reset', 'crash'],
|
||||||
|
},
|
||||||
|
]
|
||||||
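The array above is the documentation navigation config: each section carries an id, title, lucide icon, subsections, and search keywords. As a rough illustration of how such a structure can be consumed, here is a minimal TypeScript sketch of keyword search over it; the `DocSection` type and `filterSections` helper are assumptions for illustration, not code from this repository.

```ts
// Minimal sketch, not the repository's implementation: filtering the section
// entries by a search query over titles, subsection titles, and keywords.
// The DocSection/DocSubsection types and filterSections name are assumed;
// `icon` is typed loosely (a lucide-react component in the real config).
interface DocSubsection {
  id: string
  title: string
}

interface DocSection {
  id: string
  title: string
  icon: unknown
  subsections: DocSubsection[]
  keywords: string[]
}

export function filterSections(sections: DocSection[], query: string): DocSection[] {
  const q = query.trim().toLowerCase()
  if (!q) return sections
  return sections.filter(
    (s) =>
      s.title.toLowerCase().includes(q) ||
      s.subsections.some((sub) => sub.title.toLowerCase().includes(q)) ||
      s.keywords.some((k) => k.toLowerCase().includes(q)),
  )
}

// Example: filterSections(SECTIONS, 'ollama') would surface the Advanced
// Configuration section via its keywords.
```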
ui/src/components/docs/sections/AIAssistant.tsx (new file, 75 lines)
@@ -0,0 +1,75 @@
/**
 * AIAssistant Documentation Section
 *
 * Covers the project assistant: what it is, how to open it,
 * its capabilities and limitations, and conversation history.
 */

import { Badge } from '@/components/ui/badge'

export function AIAssistant() {
  return (
    <div>
      {/* What is the Assistant? */}
      <h3 id="what-is-assistant" className="text-lg font-semibold text-foreground mt-8 mb-3">
        What is the Assistant?
      </h3>
      <p className="text-muted-foreground mb-4">
        The AI Assistant is a read-only project helper that can answer questions about your project, search
        code, view progress, and help you understand what’s happening — without making any changes.
      </p>

      {/* Opening the Assistant */}
      <h3 id="opening-assistant" className="text-lg font-semibold text-foreground mt-8 mb-3">
        Opening the Assistant
      </h3>
      <ul className="list-disc space-y-2 ml-4 text-muted-foreground">
        <li>
          Press <Badge variant="secondary">A</Badge> to toggle the assistant panel
        </li>
        <li>Or click the floating action button (chat bubble) in the bottom-right corner</li>
        <li>The panel slides in from the right side</li>
      </ul>

      {/* What It Can Do */}
      <h3 id="assistant-capabilities" className="text-lg font-semibold text-foreground mt-8 mb-3">
        What It Can Do
      </h3>
      <ul className="list-disc space-y-2 ml-4 text-muted-foreground">
        <li>Read and search your project’s source code</li>
        <li>Answer questions about code architecture and implementation</li>
        <li>View feature progress and status</li>
        <li>Create new features based on your description</li>
        <li>Explain what agents have done or are currently doing</li>
        <li>Help debug issues by analyzing code and logs</li>
      </ul>

      {/* What It Cannot Do */}
      <h3 id="assistant-limitations" className="text-lg font-semibold text-foreground mt-8 mb-3">
        What It Cannot Do
      </h3>
      <ul className="list-disc space-y-2 ml-4 text-muted-foreground">
        <li>Modify files (read-only access)</li>
        <li>Run bash commands</li>
        <li>Mark features as passing/failing</li>
        <li>Start or stop agents</li>
        <li>Access external APIs or the internet</li>
      </ul>
      <div className="border-l-4 border-primary pl-4 italic text-muted-foreground mt-4">
        This is a deliberate security design — the assistant is a safe way to interact with your project
        without risk of unintended changes.
      </div>

      {/* Conversation History */}
      <h3 id="conversation-history" className="text-lg font-semibold text-foreground mt-8 mb-3">
        Conversation History
      </h3>
      <ul className="list-disc space-y-2 ml-4 text-muted-foreground">
        <li>Conversations are stored per-project in SQLite database</li>
        <li>Multiple conversations supported — start new ones as needed</li>
        <li>Switch between conversations using the conversation selector</li>
        <li>History persists across browser sessions</li>
      </ul>
    </div>
  )
}
ui/src/components/docs/sections/AdvancedConfig.tsx (new file, 220 lines)
@@ -0,0 +1,220 @@
|
|||||||
|
/**
|
||||||
|
* AdvancedConfig Documentation Section
|
||||||
|
*
|
||||||
|
* Covers Vertex AI setup, Ollama local models, environment variables,
|
||||||
|
* CLI arguments, webhook support, and the project registry.
|
||||||
|
*/
|
||||||
|
|
||||||
|
import { Badge } from '@/components/ui/badge'
|
||||||
|
|
||||||
|
/** Environment variable descriptor for the reference table. */
|
||||||
|
interface EnvVar {
|
||||||
|
name: string
|
||||||
|
description: string
|
||||||
|
}
|
||||||
|
|
||||||
|
const ENV_VARS: EnvVar[] = [
|
||||||
|
{ name: 'CLAUDE_CODE_USE_VERTEX', description: 'Enable Vertex AI (1)' },
|
||||||
|
{ name: 'CLOUD_ML_REGION', description: 'GCP region' },
|
||||||
|
{ name: 'ANTHROPIC_VERTEX_PROJECT_ID', description: 'GCP project ID' },
|
||||||
|
{ name: 'ANTHROPIC_BASE_URL', description: 'Custom API base URL (for Ollama)' },
|
||||||
|
{ name: 'ANTHROPIC_AUTH_TOKEN', description: 'API auth token' },
|
||||||
|
{ name: 'API_TIMEOUT_MS', description: 'API timeout in milliseconds' },
|
||||||
|
{ name: 'EXTRA_READ_PATHS', description: 'Comma-separated extra read directories' },
|
||||||
|
{ name: 'ANTHROPIC_DEFAULT_OPUS_MODEL', description: 'Override Opus model name' },
|
||||||
|
{ name: 'ANTHROPIC_DEFAULT_SONNET_MODEL', description: 'Override Sonnet model name' },
|
||||||
|
{ name: 'ANTHROPIC_DEFAULT_HAIKU_MODEL', description: 'Override Haiku model name' },
|
||||||
|
]
|
||||||
|
|
||||||
|
/** CLI argument descriptor for the reference table. */
|
||||||
|
interface CliArg {
|
||||||
|
name: string
|
||||||
|
description: string
|
||||||
|
}
|
||||||
|
|
||||||
|
const CLI_ARGS: CliArg[] = [
|
||||||
|
{ name: '--project-dir', description: 'Project directory path or registered name' },
|
||||||
|
{ name: '--yolo', description: 'Enable YOLO mode' },
|
||||||
|
{ name: '--parallel', description: 'Enable parallel mode' },
|
||||||
|
{ name: '--max-concurrency N', description: 'Max concurrent agents (1-5)' },
|
||||||
|
{ name: '--batch-size N', description: 'Features per coding agent (1-3)' },
|
||||||
|
{ name: '--batch-features 1,2,3', description: 'Specific feature IDs to implement' },
|
||||||
|
{ name: '--testing-batch-size N', description: 'Features per testing batch (1-5)' },
|
||||||
|
{ name: '--testing-batch-features 1,2,3', description: 'Specific testing feature IDs' },
|
||||||
|
]
|
||||||
|
|
||||||
|
export function AdvancedConfig() {
|
||||||
|
return (
|
||||||
|
<div>
|
||||||
|
{/* Vertex AI Setup */}
|
||||||
|
<h3 id="vertex-ai" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Vertex AI Setup
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Run coding agents via Google Cloud Vertex AI:
|
||||||
|
</p>
|
||||||
|
<ol className="list-decimal space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
Install and authenticate the gcloud CLI:{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">
|
||||||
|
gcloud auth application-default login
|
||||||
|
</span>
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
Configure your{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">.env</span> file:
|
||||||
|
</li>
|
||||||
|
</ol>
|
||||||
|
<div className="bg-muted rounded-lg p-4 font-mono text-sm mt-3">
|
||||||
|
<pre><code>{`CLAUDE_CODE_USE_VERTEX=1
|
||||||
|
CLOUD_ML_REGION=us-east5
|
||||||
|
ANTHROPIC_VERTEX_PROJECT_ID=your-gcp-project-id
|
||||||
|
ANTHROPIC_DEFAULT_OPUS_MODEL=claude-opus-4-5@20251101
|
||||||
|
ANTHROPIC_DEFAULT_SONNET_MODEL=claude-sonnet-4-5@20250929
|
||||||
|
ANTHROPIC_DEFAULT_HAIKU_MODEL=claude-3-5-haiku@20241022`}</code></pre>
|
||||||
|
</div>
|
||||||
|
<blockquote className="border-l-4 border-primary pl-4 italic text-muted-foreground mt-4">
|
||||||
|
Use <span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono not-italic">@</span>{' '}
|
||||||
|
instead of <span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono not-italic">-</span>{' '}
|
||||||
|
in model names for Vertex AI.
|
||||||
|
</blockquote>
|
||||||
|
|
||||||
|
{/* Ollama Local Models */}
|
||||||
|
<h3 id="ollama" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Ollama Local Models
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Run coding agents using local models via Ollama v0.14.0+:
|
||||||
|
</p>
|
||||||
|
<ol className="list-decimal space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
Install Ollama from{' '}
|
||||||
|
<a href="https://ollama.com" target="_blank" rel="noreferrer" className="text-primary underline">
|
||||||
|
ollama.com
|
||||||
|
</a>
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
Start Ollama:{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">ollama serve</span>
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
Pull a coding model:{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">ollama pull qwen3-coder</span>
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
Configure your{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">.env</span>:
|
||||||
|
</li>
|
||||||
|
</ol>
|
||||||
|
<div className="bg-muted rounded-lg p-4 font-mono text-sm mt-3">
|
||||||
|
<pre><code>{`ANTHROPIC_BASE_URL=http://localhost:11434
|
||||||
|
ANTHROPIC_AUTH_TOKEN=ollama
|
||||||
|
API_TIMEOUT_MS=3000000
|
||||||
|
ANTHROPIC_DEFAULT_SONNET_MODEL=qwen3-coder`}</code></pre>
|
||||||
|
</div>
|
||||||
|
<p className="text-muted-foreground mt-3">
|
||||||
|
<strong className="text-foreground">Recommended models:</strong>{' '}
|
||||||
|
<Badge variant="secondary">qwen3-coder</Badge>{' '}
|
||||||
|
<Badge variant="secondary">deepseek-coder-v2</Badge>{' '}
|
||||||
|
<Badge variant="secondary">codellama</Badge>
|
||||||
|
</p>
|
||||||
|
<p className="text-muted-foreground mt-2">
|
||||||
|
<strong className="text-foreground">Limitations:</strong> Smaller context windows than Claude
|
||||||
|
(model-dependent), extended context beta disabled (not supported by Ollama), and performance
|
||||||
|
depends on local hardware (GPU recommended).
|
||||||
|
</p>
|
||||||
|
|
||||||
|
{/* Environment Variables */}
|
||||||
|
<h3 id="env-variables" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Environment Variables
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Key environment variables for configuring AutoCoder:
|
||||||
|
</p>
|
||||||
|
<table className="w-full text-sm mt-3">
|
||||||
|
<thead>
|
||||||
|
<tr className="bg-muted/50">
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">
|
||||||
|
Variable
|
||||||
|
</th>
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">
|
||||||
|
Description
|
||||||
|
</th>
|
||||||
|
</tr>
|
||||||
|
</thead>
|
||||||
|
<tbody className="text-muted-foreground">
|
||||||
|
{ENV_VARS.map((v) => (
|
||||||
|
<tr key={v.name}>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">{v.name}</span>
|
||||||
|
</td>
|
||||||
|
<td className="border border-border px-3 py-2">{v.description}</td>
|
||||||
|
</tr>
|
||||||
|
))}
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
|
|
||||||
|
{/* CLI Arguments */}
|
||||||
|
<h3 id="cli-arguments" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
CLI Arguments
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Command-line arguments for{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">
|
||||||
|
autonomous_agent_demo.py
|
||||||
|
</span>
|
||||||
|
:
|
||||||
|
</p>
|
||||||
|
<table className="w-full text-sm mt-3">
|
||||||
|
<thead>
|
||||||
|
<tr className="bg-muted/50">
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">
|
||||||
|
Argument
|
||||||
|
</th>
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">
|
||||||
|
Description
|
||||||
|
</th>
|
||||||
|
</tr>
|
||||||
|
</thead>
|
||||||
|
<tbody className="text-muted-foreground">
|
||||||
|
{CLI_ARGS.map((arg) => (
|
||||||
|
<tr key={arg.name}>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">{arg.name}</span>
|
||||||
|
</td>
|
||||||
|
<td className="border border-border px-3 py-2">{arg.description}</td>
|
||||||
|
</tr>
|
||||||
|
))}
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
|
|
||||||
|
{/* Webhook Support */}
|
||||||
|
<h3 id="webhooks" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Webhook Support
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>AutoCoder can send webhook notifications on feature completion</li>
|
||||||
|
<li>Compatible with N8N and similar automation tools</li>
|
||||||
|
<li>Configure the webhook URL in project settings</li>
|
||||||
|
<li>
|
||||||
|
Payload includes: feature name, status, and project info
|
||||||
|
</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Project Registry */}
|
||||||
|
<h3 id="project-registry" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Project Registry
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
All projects are registered in{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">~/.autocoder/registry.db</span>{' '}
|
||||||
|
(SQLite)
|
||||||
|
</li>
|
||||||
|
<li>Maps project names to filesystem paths</li>
|
||||||
|
<li>Uses POSIX path format (forward slashes) for cross-platform compatibility</li>
|
||||||
|
<li>SQLAlchemy ORM with SQLite's built-in transaction handling</li>
|
||||||
|
</ul>
|
||||||
|
</div>
|
||||||
|
)
|
||||||
|
}
|
||||||
ui/src/components/docs/sections/AgentSystem.tsx (new file, 280 lines)
@@ -0,0 +1,280 @@
|
|||||||
|
/**
|
||||||
|
* AgentSystem Documentation Section
|
||||||
|
*
|
||||||
|
* Covers the orchestrator (Maestro), coding agents, testing agents,
|
||||||
|
* agent lifecycle, concurrency control, mission control dashboard,
|
||||||
|
* agent mascots and states, viewing logs, and process limits.
|
||||||
|
*/
|
||||||
|
|
||||||
|
import { Badge } from '@/components/ui/badge'
|
||||||
|
|
||||||
|
export function AgentSystem() {
|
||||||
|
return (
|
||||||
|
<div>
|
||||||
|
{/* Maestro: The Orchestrator */}
|
||||||
|
<h3 id="maestro-orchestrator" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Maestro: The Orchestrator
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Maestro is the central orchestrator that coordinates all agents. It acts as the conductor,
|
||||||
|
ensuring features are implemented efficiently and in the correct order.
|
||||||
|
</p>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>Manages the full lifecycle of coding and testing agents</li>
|
||||||
|
<li>Schedules which features to work on based on dependencies and priority</li>
|
||||||
|
<li>Monitors agent health and restarts crashed agents automatically</li>
|
||||||
|
<li>Reports status to the UI in real time via WebSocket</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Coding Agents */}
|
||||||
|
<h3 id="coding-agents" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Coding Agents
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>Implement features one at a time, or in batches of 1–3</li>
|
||||||
|
<li>
|
||||||
|
Claim features atomically via the{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">
|
||||||
|
feature_claim_and_get
|
||||||
|
</span>{' '}
|
||||||
|
MCP tool — no two agents work on the same feature
|
||||||
|
</li>
|
||||||
|
<li>Run in isolated environments with their own browser context</li>
|
||||||
|
<li>
|
||||||
|
Use the Claude Code SDK with project-specific tools and{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">CLAUDE.md</span>
|
||||||
|
</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Testing Agents */}
|
||||||
|
<h3 id="testing-agents" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Testing Agents
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>Run regression tests after features are implemented</li>
|
||||||
|
<li>Verify that new code does not break existing features</li>
|
||||||
|
<li>Configurable ratio: 0–3 testing agents per coding agent</li>
|
||||||
|
<li>Can batch-test multiple features per session (1–5)</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Agent Lifecycle */}
|
||||||
|
<h3 id="agent-lifecycle" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Agent Lifecycle
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Agents are controlled through the UI or CLI. The lifecycle states are:
|
||||||
|
</p>
|
||||||
|
<table className="w-full text-sm mt-3">
|
||||||
|
<thead>
|
||||||
|
<tr className="bg-muted/50">
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">
|
||||||
|
Action
|
||||||
|
</th>
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">
|
||||||
|
Behavior
|
||||||
|
</th>
|
||||||
|
</tr>
|
||||||
|
</thead>
|
||||||
|
<tbody className="text-muted-foreground">
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2 font-medium">Start</td>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
Click the Play button or run the CLI command
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2 font-medium">Stop</td>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
Gracefully terminates all running agents
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2 font-medium">Pause</td>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
Temporarily halts work (agents finish their current task first)
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2 font-medium">Resume</td>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
Continues from where the agents were paused
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
|
<p className="text-muted-foreground mt-3">
|
||||||
|
Agents auto-continue between sessions with a 3-second delay, so they keep working until
|
||||||
|
all features are complete or they are explicitly stopped.
|
||||||
|
</p>
|
||||||
|
|
||||||
|
{/* Concurrency Control */}
|
||||||
|
<h3 id="concurrency" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Concurrency Control
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
A slider in the agent control bar sets the number of concurrent coding agents
|
||||||
|
(1–5)
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
More agents means faster progress, but also higher API usage
|
||||||
|
</li>
|
||||||
|
<li>Each agent runs as an independent subprocess</li>
|
||||||
|
<li>
|
||||||
|
Feature claiming is atomic — no two agents will ever work on the same feature
|
||||||
|
simultaneously
|
||||||
|
</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Agent Mission Control */}
|
||||||
|
<h3 id="mission-control" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Agent Mission Control
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
The Mission Control dashboard provides a real-time overview of all active agents:
|
||||||
|
</p>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>Active agent cards with mascot icons and current status</li>
|
||||||
|
<li>The feature each agent is currently working on</li>
|
||||||
|
<li>Agent state indicators (thinking, working, testing, etc.)</li>
|
||||||
|
<li>Orchestrator status and a recent activity feed</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Agent Mascots & States */}
|
||||||
|
<h3 id="agent-mascots" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Agent Mascots & States
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Each agent is assigned a unique mascot for easy identification:{' '}
|
||||||
|
<strong className="text-foreground">Spark</strong>,{' '}
|
||||||
|
<strong className="text-foreground">Fizz</strong>,{' '}
|
||||||
|
<strong className="text-foreground">Octo</strong>,{' '}
|
||||||
|
<strong className="text-foreground">Hoot</strong>,{' '}
|
||||||
|
<strong className="text-foreground">Buzz</strong>, and more. Agent states include:
|
||||||
|
</p>
|
||||||
|
<table className="w-full text-sm mt-3">
|
||||||
|
<thead>
|
||||||
|
<tr className="bg-muted/50">
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">
|
||||||
|
State
|
||||||
|
</th>
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">
|
||||||
|
Animation
|
||||||
|
</th>
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">
|
||||||
|
Description
|
||||||
|
</th>
|
||||||
|
</tr>
|
||||||
|
</thead>
|
||||||
|
<tbody className="text-muted-foreground">
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
<Badge variant="secondary">Thinking</Badge>
|
||||||
|
</td>
|
||||||
|
<td className="border border-border px-3 py-2">Bouncing</td>
|
||||||
|
<td className="border border-border px-3 py-2">Agent is planning its approach</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
<Badge variant="secondary">Working</Badge>
|
||||||
|
</td>
|
||||||
|
<td className="border border-border px-3 py-2">Shake</td>
|
||||||
|
<td className="border border-border px-3 py-2">Actively writing code</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
<Badge variant="secondary">Testing</Badge>
|
||||||
|
</td>
|
||||||
|
<td className="border border-border px-3 py-2">Rotating</td>
|
||||||
|
<td className="border border-border px-3 py-2">Running tests</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
<Badge variant="default">Success</Badge>
|
||||||
|
</td>
|
||||||
|
<td className="border border-border px-3 py-2">Celebration</td>
|
||||||
|
<td className="border border-border px-3 py-2">Feature completed</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
<Badge variant="destructive">Error</Badge>
|
||||||
|
</td>
|
||||||
|
<td className="border border-border px-3 py-2">Red shake</td>
|
||||||
|
<td className="border border-border px-3 py-2">Encountered an issue</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
<Badge variant="outline">Struggling</Badge>
|
||||||
|
</td>
|
||||||
|
<td className="border border-border px-3 py-2">Concerned expression</td>
|
||||||
|
<td className="border border-border px-3 py-2">Multiple consecutive failures</td>
|
||||||
|
</tr>
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
|
|
||||||
|
{/* Viewing Agent Logs */}
|
||||||
|
<h3 id="agent-logs" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Viewing Agent Logs
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>Click any agent card in Mission Control to see its log output</li>
|
||||||
|
<li>Logs are color-coded by level (info, warning, error)</li>
|
||||||
|
<li>Output streams in real time via WebSocket</li>
|
||||||
|
<li>Each agent's logs are isolated and filterable</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Process Limits */}
|
||||||
|
<h3 id="process-limits" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Process Limits
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
The orchestrator enforces strict bounds on concurrent processes to prevent resource
|
||||||
|
exhaustion:
|
||||||
|
</p>
|
||||||
|
<table className="w-full text-sm mt-3">
|
||||||
|
<thead>
|
||||||
|
<tr className="bg-muted/50">
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">
|
||||||
|
Limit
|
||||||
|
</th>
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">
|
||||||
|
Value
|
||||||
|
</th>
|
||||||
|
</tr>
|
||||||
|
</thead>
|
||||||
|
<tbody className="text-muted-foreground">
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">
|
||||||
|
MAX_PARALLEL_AGENTS
|
||||||
|
</span>
|
||||||
|
</td>
|
||||||
|
<td className="border border-border px-3 py-2">5 (maximum concurrent coding agents)</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">
|
||||||
|
MAX_TOTAL_AGENTS
|
||||||
|
</span>
|
||||||
|
</td>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
10 (hard limit on coding + testing combined)
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">Testing agents</td>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
Capped at the same count as coding agents
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">Total Python processes</td>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
Never exceeds 11 (1 orchestrator + 5 coding + 5 testing)
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
|
</div>
|
||||||
|
)
|
||||||
|
}
|
||||||
ui/src/components/docs/sections/AppSpecSetup.tsx (new file, 130 lines)
@@ -0,0 +1,130 @@
|
|||||||
|
/**
|
||||||
|
* AppSpecSetup Documentation Section
|
||||||
|
*
|
||||||
|
* Explains what an app spec is, how to create one interactively
|
||||||
|
* or manually, the initializer agent, and starting after spec creation.
|
||||||
|
*/
|
||||||
|
|
||||||
|
export function AppSpecSetup() {
|
||||||
|
return (
|
||||||
|
<div>
|
||||||
|
{/* What is an App Spec? */}
|
||||||
|
<h3 id="what-is-app-spec" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
What is an App Spec?
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
The app spec is an XML document that describes the application to be built. It lives at{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">
|
||||||
|
.autocoder/prompts/app_spec.txt
|
||||||
|
</span>{' '}
|
||||||
|
and tells the initializer agent what features to create. The spec defines your app's name,
|
||||||
|
description, tech stack, and the features that should be implemented.
|
||||||
|
</p>
|
||||||
|
<div className="bg-muted rounded-lg p-4 font-mono text-sm">
|
||||||
|
<pre><code>{`<app>
|
||||||
|
<name>My App</name>
|
||||||
|
<description>A task management app</description>
|
||||||
|
<features>
|
||||||
|
<feature>User authentication with login/signup</feature>
|
||||||
|
<feature>Task CRUD with categories</feature>
|
||||||
|
</features>
|
||||||
|
</app>`}</code></pre>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Creating a Spec with Claude */}
|
||||||
|
<h3 id="creating-spec-with-claude" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Creating a Spec with Claude
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
In the UI, select your project and click{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">Create Spec</span>
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
An interactive chat with Claude helps you define your app — it asks about
|
||||||
|
your app's purpose, features, and tech stack
|
||||||
|
</li>
|
||||||
|
<li>The spec is generated and saved automatically</li>
|
||||||
|
<li>After creation, the initializer agent can be started immediately</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Writing a Spec Manually */}
|
||||||
|
<h3 id="writing-spec-manually" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Writing a Spec Manually
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
Create{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">
|
||||||
|
.autocoder/prompts/app_spec.txt
|
||||||
|
</span>{' '}
|
||||||
|
in your project directory
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
Use XML format with app name, description, tech stack, and a feature list
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
Be specific about each feature — the initializer creates test cases from these
|
||||||
|
descriptions
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
Include technical constraints where needed (e.g.,{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">
|
||||||
|
"use PostgreSQL"
|
||||||
|
</span>
|
||||||
|
,{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">
|
||||||
|
"React with TypeScript"
|
||||||
|
</span>
|
||||||
|
)
|
||||||
|
</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* The Initializer Agent */}
|
||||||
|
<h3 id="initializer-agent" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
The Initializer Agent
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
The initializer agent is the first agent to run on a new project. It bridges the gap between
|
||||||
|
your spec and the coding agents that implement features.
|
||||||
|
</p>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>Runs automatically on first agent start when no features exist in the database</li>
|
||||||
|
<li>Reads the app spec and creates features with descriptions, steps, and priorities</li>
|
||||||
|
<li>
|
||||||
|
Sets up feature dependencies (e.g., "auth must be done before user profile")
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
Creates the feature database at{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">
|
||||||
|
.autocoder/features.db
|
||||||
|
</span>
|
||||||
|
</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Starting After Spec Creation */}
|
||||||
|
<h3 id="starting-after-spec" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Starting After Spec Creation
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Once your spec is ready, you can kick off the agents:
|
||||||
|
</p>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
From the UI, click the <strong className="text-foreground">Play</strong> button to start
|
||||||
|
the agent
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
Or run from the CLI:
|
||||||
|
</li>
|
||||||
|
</ul>
|
||||||
|
<div className="bg-muted rounded-lg p-4 font-mono text-sm mt-3">
|
||||||
|
<pre><code>python autonomous_agent_demo.py --project-dir your-project</code></pre>
|
||||||
|
</div>
|
||||||
|
<p className="text-muted-foreground mt-3">
|
||||||
|
The initializer runs first to create features, then coding agents take over to implement
|
||||||
|
them. Progress is shown in real time on the Kanban board.
|
||||||
|
</p>
|
||||||
|
</div>
|
||||||
|
)
|
||||||
|
}
|
||||||
ui/src/components/docs/sections/AppearanceThemes.tsx (new file, 185 lines)
@@ -0,0 +1,185 @@
|
|||||||
|
/**
|
||||||
|
* AppearanceThemes Documentation Section
|
||||||
|
*
|
||||||
|
* Covers built-in themes with color previews, dark/light mode toggling,
|
||||||
|
* the theme selector dropdown, and global keyboard shortcuts.
|
||||||
|
*/
|
||||||
|
|
||||||
|
import { Badge } from '@/components/ui/badge'
|
||||||
|
|
||||||
|
/** Theme descriptor used to render the preview rows. */
|
||||||
|
interface ThemePreview {
|
||||||
|
name: string
|
||||||
|
description: string
|
||||||
|
colors: { label: string; hex: string }[]
|
||||||
|
}
|
||||||
|
|
||||||
|
const THEMES: ThemePreview[] = [
|
||||||
|
{
|
||||||
|
name: 'Twitter',
|
||||||
|
description: 'Clean, modern blue design. Primary: blue, Background: white/dark gray.',
|
||||||
|
colors: [
|
||||||
|
{ label: 'Background', hex: '#ffffff' },
|
||||||
|
{ label: 'Primary', hex: '#4a9eff' },
|
||||||
|
{ label: 'Accent', hex: '#e8f4ff' },
|
||||||
|
],
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: 'Claude',
|
||||||
|
description: "Warm beige/cream tones with orange accents. Inspired by Anthropic's Claude brand.",
|
||||||
|
colors: [
|
||||||
|
{ label: 'Background', hex: '#faf6f0' },
|
||||||
|
{ label: 'Primary', hex: '#c75b2a' },
|
||||||
|
{ label: 'Accent', hex: '#f5ede4' },
|
||||||
|
],
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: 'Neo Brutalism',
|
||||||
|
description: 'Bold colors, hard shadows, no border radius. High contrast, expressive design.',
|
||||||
|
colors: [
|
||||||
|
{ label: 'Background', hex: '#ffffff' },
|
||||||
|
{ label: 'Primary', hex: '#ff4d00' },
|
||||||
|
{ label: 'Accent', hex: '#ffeb00' },
|
||||||
|
],
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: 'Retro Arcade',
|
||||||
|
description: 'Vibrant pink and teal with pixel-art inspired styling.',
|
||||||
|
colors: [
|
||||||
|
{ label: 'Background', hex: '#f0e6d3' },
|
||||||
|
{ label: 'Primary', hex: '#e8457c' },
|
||||||
|
{ label: 'Accent', hex: '#4eb8a5' },
|
||||||
|
],
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: 'Aurora',
|
||||||
|
description: 'Deep violet and luminous teal, inspired by the northern lights.',
|
||||||
|
colors: [
|
||||||
|
{ label: 'Background', hex: '#faf8ff' },
|
||||||
|
{ label: 'Primary', hex: '#8b5cf6' },
|
||||||
|
{ label: 'Accent', hex: '#2dd4bf' },
|
||||||
|
],
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: 'Business',
|
||||||
|
description: 'Professional deep navy and gray monochrome palette for corporate use.',
|
||||||
|
colors: [
|
||||||
|
{ label: 'Background', hex: '#eaecef' },
|
||||||
|
{ label: 'Primary', hex: '#000e4e' },
|
||||||
|
{ label: 'Accent', hex: '#6b7280' },
|
||||||
|
],
|
||||||
|
},
|
||||||
|
]
|
||||||
|
|
||||||
|
/** Keyboard shortcut descriptor for the shortcuts table. */
|
||||||
|
interface Shortcut {
|
||||||
|
key: string
|
||||||
|
action: string
|
||||||
|
}
|
||||||
|
|
||||||
|
const SHORTCUTS: Shortcut[] = [
|
||||||
|
{ key: '?', action: 'Show keyboard shortcuts help' },
|
||||||
|
{ key: 'D', action: 'Toggle debug panel' },
|
||||||
|
{ key: 'T', action: 'Toggle terminal' },
|
||||||
|
{ key: 'G', action: 'Toggle Kanban/Graph view' },
|
||||||
|
{ key: 'N', action: 'Add new feature' },
|
||||||
|
{ key: 'E', action: 'Expand project with AI' },
|
||||||
|
{ key: 'A', action: 'Toggle AI assistant' },
|
||||||
|
{ key: ',', action: 'Open settings' },
|
||||||
|
{ key: 'R', action: 'Reset project' },
|
||||||
|
{ key: 'Escape', action: 'Close current modal' },
|
||||||
|
]
|
||||||
|
|
||||||
|
export function AppearanceThemes() {
|
||||||
|
return (
|
||||||
|
<div>
|
||||||
|
{/* Themes Overview */}
|
||||||
|
<h3 id="themes-overview" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Themes Overview
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-4">
|
||||||
|
AutoCoder comes with 6 built-in themes. Each theme provides a complete visual identity including
|
||||||
|
colors, accents, and dark mode variants.
|
||||||
|
</p>
|
||||||
|
<div className="space-y-4">
|
||||||
|
{THEMES.map((theme) => (
|
||||||
|
<div key={theme.name} className="flex items-start gap-4">
|
||||||
|
{/* Color swatches */}
|
||||||
|
<div className="flex gap-1.5 shrink-0 mt-1">
|
||||||
|
{theme.colors.map((color) => (
|
||||||
|
<div
|
||||||
|
key={color.label}
|
||||||
|
title={`${color.label}: ${color.hex}`}
|
||||||
|
className="w-6 h-6 rounded border border-border"
|
||||||
|
style={{ backgroundColor: color.hex }}
|
||||||
|
/>
|
||||||
|
))}
|
||||||
|
</div>
|
||||||
|
{/* Description */}
|
||||||
|
<div>
|
||||||
|
<strong className="text-foreground">{theme.name}</strong>
|
||||||
|
{theme.name === 'Twitter' && (
|
||||||
|
<>
|
||||||
|
{' '}
|
||||||
|
<Badge variant="secondary">Default</Badge>
|
||||||
|
</>
|
||||||
|
)}
|
||||||
|
<span className="text-muted-foreground"> — {theme.description}</span>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
))}
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Dark & Light Mode */}
|
||||||
|
<h3 id="dark-light-mode" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Dark & Light Mode
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>Toggle with the sun/moon icon in the header</li>
|
||||||
|
<li>All 6 themes have dedicated dark mode variants</li>
|
||||||
|
<li>
|
||||||
|
Preference is saved in browser{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">localStorage</span>
|
||||||
|
</li>
|
||||||
|
<li>Dark mode affects all UI elements including the docs page</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Theme Selector */}
|
||||||
|
<h3 id="theme-selector" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Theme Selector
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>Hover over the palette icon in the header to open the theme dropdown</li>
|
||||||
|
<li>Preview themes by hovering over each option (live preview)</li>
|
||||||
|
<li>Click to select — the change is applied instantly</li>
|
||||||
|
<li>Theme preference persists across sessions</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Keyboard Shortcuts */}
|
||||||
|
<h3 id="keyboard-shortcuts" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Keyboard Shortcuts
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Press <Badge variant="secondary">?</Badge> anywhere in the UI to see the shortcuts help overlay.
|
||||||
|
</p>
|
||||||
|
<table className="w-full text-sm mt-3">
|
||||||
|
<thead>
|
||||||
|
<tr className="bg-muted/50">
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">Key</th>
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">Action</th>
|
||||||
|
</tr>
|
||||||
|
</thead>
|
||||||
|
<tbody className="text-muted-foreground">
|
||||||
|
{SHORTCUTS.map((shortcut) => (
|
||||||
|
<tr key={shortcut.key}>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
<Badge variant="secondary">{shortcut.key}</Badge>
|
||||||
|
</td>
|
||||||
|
<td className="border border-border px-3 py-2">{shortcut.action}</td>
|
||||||
|
</tr>
|
||||||
|
))}
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
|
</div>
|
||||||
|
)
|
||||||
|
}
|
||||||
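The AppearanceThemes component above states that theme and dark/light preferences persist in browser localStorage. A minimal sketch of that persistence follows, assuming hypothetical storage keys and helper names; the project's actual implementation is not shown in this diff.

```ts
// Minimal sketch (assumption): persisting theme and color mode in localStorage,
// as described in the "Dark & Light Mode" and "Theme Selector" subsections.
const THEME_KEY = 'autocoder-theme'
const MODE_KEY = 'autocoder-color-mode'

export function saveThemePreference(theme: string, mode: 'light' | 'dark'): void {
  localStorage.setItem(THEME_KEY, theme)
  localStorage.setItem(MODE_KEY, mode)
}

export function loadThemePreference(): { theme: string; mode: 'light' | 'dark' } {
  const mode = localStorage.getItem(MODE_KEY)
  return {
    theme: localStorage.getItem(THEME_KEY) ?? 'Twitter', // 'Twitter' is the default theme per the docs above
    mode: mode === 'dark' ? 'dark' : 'light',
  }
}
```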
ui/src/components/docs/sections/DeveloperTools.tsx (new file, 104 lines)
@@ -0,0 +1,104 @@
|
|||||||
|
/**
|
||||||
|
* DeveloperTools Documentation Section
|
||||||
|
*
|
||||||
|
* Covers the debug panel, agent logs tab, dev server logs,
|
||||||
|
* terminal, dev server control, and per-agent logs.
|
||||||
|
*/
|
||||||
|
|
||||||
|
import { Badge } from '@/components/ui/badge'
|
||||||
|
|
||||||
|
export function DeveloperTools() {
|
||||||
|
return (
|
||||||
|
<div>
|
||||||
|
{/* Debug Panel */}
|
||||||
|
<h3 id="debug-panel" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Debug Panel
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
Press <Badge variant="secondary">D</Badge> to toggle the debug panel at the bottom of the screen
|
||||||
|
</li>
|
||||||
|
<li>Resizable by dragging the top edge</li>
|
||||||
|
<li>
|
||||||
|
Three tabs: <strong className="text-foreground">Agent Logs</strong>,{' '}
|
||||||
|
<strong className="text-foreground">Dev Server Logs</strong>, and{' '}
|
||||||
|
<strong className="text-foreground">Terminal</strong>
|
||||||
|
</li>
|
||||||
|
<li>Shows real-time output from agents and dev server</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Agent Logs Tab */}
|
||||||
|
<h3 id="agent-logs-tab" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Agent Logs Tab
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
Color-coded log levels:{' '}
|
||||||
|
<span className="text-[var(--color-log-error)] font-medium">Error</span>,{' '}
|
||||||
|
<span className="text-[var(--color-log-warning)] font-medium">Warning</span>,{' '}
|
||||||
|
<span className="text-[var(--color-log-info)] font-medium">Info</span>,{' '}
|
||||||
|
<span className="text-[var(--color-log-debug)] font-medium">Debug</span>,{' '}
|
||||||
|
<span className="text-[var(--color-log-success)] font-medium">Success</span>
|
||||||
|
</li>
|
||||||
|
<li>Timestamps on each log entry</li>
|
||||||
|
<li>Auto-scrolls to latest entry</li>
|
||||||
|
<li>Clear button to reset log view</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Dev Server Logs Tab */}
|
||||||
|
<h3 id="dev-server-logs" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Dev Server Logs Tab
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
Shows stdout/stderr from the project’s dev server (e.g.,{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">npm run dev</span>)
|
||||||
|
</li>
|
||||||
|
<li>Useful for seeing compilation errors, hot reload status</li>
|
||||||
|
<li>Clear button available</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Terminal */}
|
||||||
|
<h3 id="terminal" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Terminal
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
Press <Badge variant="secondary">T</Badge> to open terminal (opens debug panel on the terminal tab)
|
||||||
|
</li>
|
||||||
|
<li>Full xterm.js terminal emulator with WebSocket backend</li>
|
||||||
|
<li>Multi-tab support: create multiple terminal sessions</li>
|
||||||
|
<li>Rename tabs by double-clicking the tab title</li>
|
||||||
|
<li>Each tab runs an independent PTY (pseudo-terminal) session</li>
|
||||||
|
<li>Supports standard terminal features: colors, cursor movement, history</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Dev Server Control */}
|
||||||
|
<h3 id="dev-server-control" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Dev Server Control
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>Start/stop button in the header bar</li>
|
||||||
|
<li>
|
||||||
|
Auto-detects project type (Next.js, Vite, CRA, etc.) and runs the appropriate dev command
|
||||||
|
</li>
|
||||||
|
<li>Shows the dev server URL when running</li>
|
||||||
|
<li>Automatic crash detection and restart option</li>
|
||||||
|
<li>Dev server output piped to the Dev Server Logs tab</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Per-Agent Logs */}
|
||||||
|
<h3 id="per-agent-logs" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Per-Agent Logs
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>In Agent Mission Control, click any agent card to see its individual logs</li>
|
||||||
|
<li>
|
||||||
|
Logs include: what feature the agent is working on, code changes, test results
|
||||||
|
</li>
|
||||||
|
<li>Separate logs for coding agents and testing agents</li>
|
||||||
|
<li>Real-time streaming — see agent output as it happens</li>
|
||||||
|
</ul>
|
||||||
|
</div>
|
||||||
|
)
|
||||||
|
}
|
||||||
ui/src/components/docs/sections/FAQ.tsx (new file, 157 lines)
@@ -0,0 +1,157 @@
|
|||||||
|
/**
|
||||||
|
* FAQ Documentation Section
|
||||||
|
*
|
||||||
|
* Covers frequently asked questions about project setup, agent behavior,
|
||||||
|
* customization, troubleshooting, and real-time monitoring.
|
||||||
|
*/
|
||||||
|
|
||||||
|
export function FAQ() {
|
||||||
|
return (
|
||||||
|
<div>
|
||||||
|
{/* Starting a New Project */}
|
||||||
|
<h3 id="faq-new-project" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Starting a New Project
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground italic mb-2">
|
||||||
|
How do I use AutoCoder on a new project?
|
||||||
|
</p>
|
||||||
|
<p className="text-muted-foreground">
|
||||||
|
From the UI, select "Create New Project" in the project dropdown. Choose a folder and
|
||||||
|
name. Then create an app spec using the interactive chat or write one manually. Click Start to run
|
||||||
|
the initializer agent, which creates features from your spec. Coding agents then implement features
|
||||||
|
automatically.
|
||||||
|
</p>
|
||||||
|
|
||||||
|
{/* Adding to Existing Project */}
|
||||||
|
<h3 id="faq-existing-project" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Adding to Existing Project
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground italic mb-2">
|
||||||
|
How do I add AutoCoder to an existing project?
|
||||||
|
</p>
|
||||||
|
<p className="text-muted-foreground">
|
||||||
|
Register the project folder through the UI project selector using "Add Existing".
|
||||||
|
AutoCoder creates a{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">.autocoder/</span> directory
|
||||||
|
alongside your existing code. Write an app spec describing what to build (new features), and the
|
||||||
|
agent works within your existing codebase.
|
||||||
|
</p>
|
||||||
|
|
||||||
|
{/* Agent Crashes */}
|
||||||
|
<h3 id="faq-agent-crash" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Agent Crashes
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground italic mb-2">
|
||||||
|
What happens if an agent crashes?
|
||||||
|
</p>
|
||||||
|
<p className="text-muted-foreground">
|
||||||
|
The orchestrator (Maestro) automatically detects crashed agents and can restart them. Features
|
||||||
|
claimed by a crashed agent are released back to the pending queue. Scheduled runs use exponential
|
||||||
|
backoff with up to 3 retries. Check the agent logs in the debug panel for crash details.
|
||||||
|
</p>
|
||||||
|
|
||||||
|
{/* Custom Bash Commands */}
|
||||||
|
<h3 id="faq-custom-commands" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Custom Bash Commands
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground italic mb-2">
|
||||||
|
How do I customize which bash commands the agent can use?
|
||||||
|
</p>
|
||||||
|
<p className="text-muted-foreground">
|
||||||
|
Create{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">
|
||||||
|
.autocoder/allowed_commands.yaml
|
||||||
|
</span>{' '}
|
||||||
|
in your project with a list of allowed commands. Supports exact names, wildcards (e.g.,{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">swift*</span>), and local
|
||||||
|
scripts. See the Security section for full details on the command hierarchy.
|
||||||
|
</p>
|
||||||
|
|
||||||
|
{/* Blocked Features */}
|
||||||
|
<h3 id="faq-blocked-features" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Blocked Features
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground italic mb-2">
|
||||||
|
Why are my features stuck in "blocked" status?
|
||||||
|
</p>
|
||||||
|
<p className="text-muted-foreground">
|
||||||
|
Features with unmet dependencies show as blocked. Check the Dependency Graph view (press{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">G</span>) to see which
|
||||||
|
features are waiting on others. A feature can only start when all its dependencies are marked as
|
||||||
|
"passing". Remove or reorder dependencies if needed.
|
||||||
|
</p>
|
||||||
|
|
||||||
|
{/* Running in Parallel */}
|
||||||
|
<h3 id="faq-parallel" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Running in Parallel
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground italic mb-2">
|
||||||
|
How do I run multiple agents in parallel?
|
||||||
|
</p>
|
||||||
|
<p className="text-muted-foreground">
|
||||||
|
Use the concurrency slider in the agent control bar (1–5 agents) or pass{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">
|
||||||
|
--parallel --max-concurrency N
|
||||||
|
</span>{' '}
|
||||||
|
on the CLI. Each agent claims features atomically, so there is no conflict. More agents means
|
||||||
|
faster progress but higher API cost.
|
||||||
|
</p>
|
||||||
|
|
||||||
|
{/* Using Local Models */}
|
||||||
|
<h3 id="faq-local-model" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Using Local Models
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground italic mb-2">
|
||||||
|
Can I use a local model instead of the Claude API?
|
||||||
|
</p>
|
||||||
|
<p className="text-muted-foreground">
|
||||||
|
Yes, via Ollama v0.14.0+. Install Ollama, pull a coding model (e.g.,{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">qwen3-coder</span>), and
|
||||||
|
configure your{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">.env</span> to point to
|
||||||
|
localhost. See the Advanced Configuration section for full setup instructions.
|
||||||
|
</p>
|
||||||
|
|
||||||
|
{/* Resetting a Project */}
|
||||||
|
<h3 id="faq-reset" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Resetting a Project
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground italic mb-2">
|
||||||
|
How do I reset a project and start over?
|
||||||
|
</p>
|
||||||
|
<p className="text-muted-foreground">
|
||||||
|
Press <span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">R</span> (when agents
|
||||||
|
are stopped) to open the Reset modal. Choose between: "Reset Features" (clears the
|
||||||
|
feature database, keeps the spec) or "Full Reset" (removes the spec too, starts fresh).
|
||||||
|
After a full reset, you will be prompted to create a new spec.
|
||||||
|
</p>
|
||||||
|
|
||||||
|
{/* Coding vs Testing Agents */}
|
||||||
|
<h3 id="faq-agent-types" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Coding vs Testing Agents
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground italic mb-2">
|
||||||
|
What's the difference between coding and testing agents?
|
||||||
|
</p>
|
||||||
|
<p className="text-muted-foreground">
|
||||||
|
Coding agents implement features — they write code, create files, and run feature-specific
|
||||||
|
tests. Testing agents run regression tests across completed features to ensure new code does not
|
||||||
|
break existing functionality. Configure the testing agent ratio (0–3) in settings.
|
||||||
|
</p>
|
||||||
|
|
||||||
|
{/* Monitoring in Real Time */}
|
||||||
|
<h3 id="faq-real-time" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Monitoring in Real Time
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground italic mb-2">
|
||||||
|
How do I view what an agent is doing in real time?
|
||||||
|
</p>
|
||||||
|
<p className="text-muted-foreground">
|
||||||
|
Multiple ways: (1) Watch the Kanban board for feature status changes. (2) Open the debug panel
|
||||||
|
(<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">D</span> key) for live
|
||||||
|
agent logs. (3) Click agent cards in Mission Control for per-agent logs. (4) The progress bar
|
||||||
|
updates in real time via WebSocket.
|
||||||
|
</p>
|
||||||
|
</div>
|
||||||
|
)
|
||||||
|
}
|
||||||
ui/src/components/docs/sections/FeaturesKanban.tsx (new file, 182 lines)
@@ -0,0 +1,182 @@
|
|||||||
|
/**
|
||||||
|
* FeaturesKanban Documentation Section
|
||||||
|
*
|
||||||
|
* Covers the Kanban board, feature cards, dependency graph view,
|
||||||
|
* adding/editing features, dependencies, expanding with AI,
|
||||||
|
* and priority ordering.
|
||||||
|
*/
|
||||||
|
|
||||||
|
import { Badge } from '@/components/ui/badge'
|
||||||
|
|
||||||
|
export function FeaturesKanban() {
|
||||||
|
return (
|
||||||
|
<div>
|
||||||
|
{/* Kanban Board Overview */}
|
||||||
|
<h3 id="kanban-overview" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Kanban Board Overview
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
The main view organizes features into three columns representing their current status:
|
||||||
|
</p>
|
||||||
|
<table className="w-full text-sm mt-3 mb-4">
|
||||||
|
<thead>
|
||||||
|
<tr className="bg-muted/50">
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">
|
||||||
|
Column
|
||||||
|
</th>
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">
|
||||||
|
Color
|
||||||
|
</th>
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">
|
||||||
|
Meaning
|
||||||
|
</th>
|
||||||
|
</tr>
|
||||||
|
</thead>
|
||||||
|
<tbody className="text-muted-foreground">
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2 font-medium">Pending</td>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
<Badge variant="outline" className="border-yellow-500 text-yellow-600">Yellow</Badge>
|
||||||
|
</td>
|
||||||
|
<td className="border border-border px-3 py-2">Waiting to be picked up</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2 font-medium">In Progress</td>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
<Badge variant="outline" className="border-cyan-500 text-cyan-600">Cyan</Badge>
|
||||||
|
</td>
|
||||||
|
<td className="border border-border px-3 py-2">An agent is actively working on it</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2 font-medium">Done</td>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
<Badge variant="outline" className="border-green-500 text-green-600">Green</Badge>
|
||||||
|
</td>
|
||||||
|
<td className="border border-border px-3 py-2">Implemented and passing</td>
|
||||||
|
</tr>
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
|
<p className="text-muted-foreground">
|
||||||
|
Each feature appears as a card showing its name, priority, and category. The board updates
|
||||||
|
in real time as agents work.
|
||||||
|
</p>
|
||||||
|
|
||||||
|
{/* Feature Cards */}
|
||||||
|
<h3 id="feature-cards" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Feature Cards
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
Each card displays a priority badge (<Badge variant="secondary">P1</Badge> through{' '}
|
||||||
|
<Badge variant="secondary">P5</Badge>), a category tag, and the feature name
|
||||||
|
</li>
|
||||||
|
<li>Status icons indicate the current state of the feature</li>
|
||||||
|
<li>Click a card to open the detail modal with the full description and test steps</li>
|
||||||
|
<li>
|
||||||
|
Cards in the "In Progress" column show which agent is currently working on them
|
||||||
|
</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Dependency Graph View */}
|
||||||
|
<h3 id="dependency-graph" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Dependency Graph View
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
An alternative to the Kanban board that visualizes feature relationships as a directed graph.
|
||||||
|
</p>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
Press <Badge variant="secondary">G</Badge> to toggle between Kanban and Graph view
|
||||||
|
</li>
|
||||||
|
<li>Uses the dagre layout engine for automatic node positioning</li>
|
||||||
|
<li>
|
||||||
|
Nodes are colored by status — pending, in-progress, and done each have
|
||||||
|
distinct colors
|
||||||
|
</li>
|
||||||
|
<li>Arrows show dependency relationships between features</li>
|
||||||
|
<li>Click any node to open the feature detail modal</li>
|
||||||
|
<li>Supports both horizontal and vertical layout orientations</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Adding Features */}
|
||||||
|
<h3 id="adding-features" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Adding Features
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
Press <Badge variant="secondary">N</Badge> to open the Add Feature form
|
||||||
|
</li>
|
||||||
|
<li>Fill in: name, description, category, and priority</li>
|
||||||
|
<li>Optionally define steps (test criteria the agent must pass to complete the feature)</li>
|
||||||
|
<li>New features are added to the Pending column immediately</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Editing & Deleting Features */}
|
||||||
|
<h3 id="editing-features" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Editing & Deleting Features
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>Click a feature card to open the detail modal</li>
|
||||||
|
<li>
|
||||||
|
Click <strong className="text-foreground">Edit</strong> to modify the name, description,
|
||||||
|
category, priority, or steps
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
<strong className="text-foreground">Delete</strong> removes the feature permanently
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
<strong className="text-foreground">Skip</strong> moves a feature to the end of the queue
|
||||||
|
without deleting it
|
||||||
|
</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Feature Dependencies */}
|
||||||
|
<h3 id="feature-dependencies" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Feature Dependencies
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Features can declare dependencies on other features, ensuring they are implemented in the
|
||||||
|
correct order.
|
||||||
|
</p>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>Set dependencies in the feature edit modal</li>
|
||||||
|
<li>
|
||||||
|
Cycle detection prevents circular dependencies (uses Kahn's algorithm combined
|
||||||
|
with DFS)
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
Blocked features display a lock icon and cannot be claimed by agents until their
|
||||||
|
dependencies are met
|
||||||
|
</li>
|
||||||
|
<li>The Dependency Graph view makes these relationships easy to visualize</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Expanding Project with AI */}
|
||||||
|
<h3 id="expanding-with-ai" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Expanding Project with AI
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
Press <Badge variant="secondary">E</Badge> to open the Expand Project modal
|
||||||
|
</li>
|
||||||
|
<li>Chat with Claude to describe the new features you want to add</li>
|
||||||
|
<li>Supports image attachments for UI mockups or design references</li>
|
||||||
|
<li>Claude creates properly structured features with appropriate dependencies</li>
|
||||||
|
<li>New features appear on the board immediately after creation</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Priority & Ordering */}
|
||||||
|
<h3 id="feature-priority" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Priority & Ordering
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
Features are ordered by priority: <Badge variant="secondary">P1</Badge> is the highest
|
||||||
|
and <Badge variant="secondary">P5</Badge> is the lowest
|
||||||
|
</li>
|
||||||
|
<li>Within the same priority level, features are ordered by creation time</li>
|
||||||
|
<li>Agents always pick up the highest-priority ready feature first</li>
|
||||||
|
</ul>
|
||||||
|
</div>
|
||||||
|
)
|
||||||
|
}
|
||||||
134
ui/src/components/docs/sections/GettingStarted.tsx
Normal file
134
ui/src/components/docs/sections/GettingStarted.tsx
Normal file
@@ -0,0 +1,134 @@
|
|||||||
|
/**
|
||||||
|
* GettingStarted Documentation Section
|
||||||
|
*
|
||||||
|
* Covers what AutoCoder is, quick start commands,
|
||||||
|
* creating and adding projects, and system requirements.
|
||||||
|
*/
|
||||||
|
|
||||||
|
import { Badge } from '@/components/ui/badge'
|
||||||
|
|
||||||
|
export function GettingStarted() {
|
||||||
|
return (
|
||||||
|
<div>
|
||||||
|
{/* What is AutoCoder? */}
|
||||||
|
<h3 id="what-is-autocoder" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
What is AutoCoder?
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-4">
|
||||||
|
AutoCoder is an autonomous coding agent system that builds complete applications over multiple
|
||||||
|
sessions using a two-agent pattern:
|
||||||
|
</p>
|
||||||
|
<ol className="list-decimal space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
<strong className="text-foreground">Initializer Agent</strong> — reads your app spec
|
||||||
|
and creates features in a SQLite database
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
<strong className="text-foreground">Coding Agent</strong> — implements features one by
|
||||||
|
one, marking each as passing when complete
|
||||||
|
</li>
|
||||||
|
</ol>
|
||||||
|
<p className="text-muted-foreground mt-4">
|
||||||
|
It comes with a React-based UI for monitoring progress, managing features, and controlling agents
|
||||||
|
in real time.
|
||||||
|
</p>
|
||||||
|
|
||||||
|
{/* Quick Start */}
|
||||||
|
<h3 id="quick-start" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Quick Start
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Launch AutoCoder with a single command. The CLI menu lets you create or select a project,
|
||||||
|
while the Web UI provides a full dashboard experience.
|
||||||
|
</p>
|
||||||
|
<div className="bg-muted rounded-lg p-4 font-mono text-sm">
|
||||||
|
<pre><code>{`# Windows
|
||||||
|
start.bat # CLI menu
|
||||||
|
start_ui.bat # Web UI
|
||||||
|
|
||||||
|
# macOS/Linux
|
||||||
|
./start.sh # CLI menu
|
||||||
|
./start_ui.sh # Web UI`}</code></pre>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Creating a New Project */}
|
||||||
|
<h3 id="creating-a-project" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Creating a New Project
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
From the UI, click the project dropdown and select{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">Create New Project</span>
|
||||||
|
</li>
|
||||||
|
<li>Enter a name and select or browse to a folder for the project</li>
|
||||||
|
<li>
|
||||||
|
Create an app spec interactively with Claude, or write one manually in XML format
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
The initializer agent reads your spec and creates features automatically
|
||||||
|
</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Adding to an Existing Project */}
|
||||||
|
<h3 id="existing-project" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Adding to an Existing Project
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>Register the project folder via the UI project selector</li>
|
||||||
|
<li>
|
||||||
|
AutoCoder creates a{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">.autocoder/</span>{' '}
|
||||||
|
directory inside your project
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
Existing code is preserved — AutoCoder adds its configuration alongside it
|
||||||
|
</li>
|
||||||
|
<li>Write or generate an app spec describing what to build</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* System Requirements */}
|
||||||
|
<h3 id="system-requirements" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
System Requirements
|
||||||
|
</h3>
|
||||||
|
<table className="w-full text-sm mt-3">
|
||||||
|
<thead>
|
||||||
|
<tr className="bg-muted/50">
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">
|
||||||
|
Requirement
|
||||||
|
</th>
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">
|
||||||
|
Details
|
||||||
|
</th>
|
||||||
|
</tr>
|
||||||
|
</thead>
|
||||||
|
<tbody className="text-muted-foreground">
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">Python</td>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
<Badge variant="secondary">3.11+</Badge>
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">Node.js</td>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
<Badge variant="secondary">20+</Badge>{' '}
|
||||||
|
<span className="text-xs">(for UI development)</span>
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">Claude Code CLI</td>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
Required for running agents
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">Operating System</td>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
Windows, macOS, or Linux
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
|
</div>
|
||||||
|
)
|
||||||
|
}
|
||||||
162
ui/src/components/docs/sections/ProjectStructure.tsx
Normal file
162
ui/src/components/docs/sections/ProjectStructure.tsx
Normal file
@@ -0,0 +1,162 @@
|
|||||||
|
/**
|
||||||
|
* ProjectStructure Documentation Section
|
||||||
|
*
|
||||||
|
* Covers the .autocoder/ directory layout, features database,
|
||||||
|
* prompts directory, allowed commands, CLAUDE.md convention,
|
||||||
|
* legacy migration, and Claude inheritance.
|
||||||
|
*/
|
||||||
|
|
||||||
|
export function ProjectStructure() {
|
||||||
|
return (
|
||||||
|
<div>
|
||||||
|
{/* .autocoder/ Directory Layout */}
|
||||||
|
<h3 id="autocoder-directory" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
.autocoder/ Directory Layout
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Every AutoCoder project stores its configuration and runtime files in a{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">.autocoder/</span>{' '}
|
||||||
|
directory at the project root.
|
||||||
|
</p>
|
||||||
|
<div className="bg-muted rounded-lg p-4 font-mono text-sm">
|
||||||
|
<pre><code>{`your-project/
|
||||||
|
\u251C\u2500\u2500 .autocoder/
|
||||||
|
\u2502 \u251C\u2500\u2500 features.db # SQLite feature database
|
||||||
|
\u2502 \u251C\u2500\u2500 .agent.lock # Lock file (prevents multiple instances)
|
||||||
|
\u2502 \u251C\u2500\u2500 .gitignore # Ignores runtime files
|
||||||
|
\u2502 \u251C\u2500\u2500 allowed_commands.yaml # Per-project bash command allowlist
|
||||||
|
\u2502 \u2514\u2500\u2500 prompts/
|
||||||
|
\u2502 \u251C\u2500\u2500 app_spec.txt # Application specification (XML)
|
||||||
|
\u2502 \u251C\u2500\u2500 initializer_prompt.md # First session prompt
|
||||||
|
\u2502 \u2514\u2500\u2500 coding_prompt.md # Continuation session prompt
|
||||||
|
\u251C\u2500\u2500 CLAUDE.md # Claude Code convention file
|
||||||
|
\u2514\u2500\u2500 app_spec.txt # Root copy for template compatibility`}</code></pre>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Features Database */}
|
||||||
|
<h3 id="features-db" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Features Database
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
SQLite database managed by SQLAlchemy, stored at{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">
|
||||||
|
.autocoder/features.db
|
||||||
|
</span>
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
Each feature record includes: id, priority, category, name, description, steps, status
|
||||||
|
(<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">pending</span>,{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">in_progress</span>,{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">passing</span>,{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">failing</span>),
|
||||||
|
and dependencies
|
||||||
|
</li>
|
||||||
|
<li>Agents interact with features through MCP server tools, not direct database access</li>
|
||||||
|
<li>Viewable in the UI via the Kanban board or the Dependency Graph view</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Prompts Directory */}
|
||||||
|
<h3 id="prompts-directory" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Prompts Directory
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Prompts control how agents behave during each session:
|
||||||
|
</p>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">app_spec.txt</span>{' '}
|
||||||
|
— your application specification in XML format
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">
|
||||||
|
initializer_prompt.md
|
||||||
|
</span>{' '}
|
||||||
|
— prompt for the initializer agent (creates features from the spec)
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">
|
||||||
|
coding_prompt.md
|
||||||
|
</span>{' '}
|
||||||
|
— prompt for coding agents (implements features)
|
||||||
|
</li>
|
||||||
|
</ul>
|
||||||
|
<p className="text-muted-foreground mt-3">
|
||||||
|
These can be customized per project. If not present, defaults from{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">
|
||||||
|
.claude/templates/
|
||||||
|
</span>{' '}
|
||||||
|
are used as a fallback.
|
||||||
|
</p>
|
||||||
|
|
||||||
|
{/* Allowed Commands Config */}
|
||||||
|
<h3 id="allowed-commands-yaml" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Allowed Commands Config
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
The optional{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">
|
||||||
|
.autocoder/allowed_commands.yaml
|
||||||
|
</span>{' '}
|
||||||
|
file lets you grant project-specific bash commands to the agent. This is useful when your
|
||||||
|
project requires tools beyond the default allowlist (e.g., language-specific compilers or
|
||||||
|
custom build scripts).
|
||||||
|
</p>
|
||||||
|
<p className="text-muted-foreground">
|
||||||
|
See the <strong className="text-foreground">Security</strong> section for full details on
|
||||||
|
the command hierarchy and how project-level commands interact with global and organization
|
||||||
|
policies.
|
||||||
|
</p>
|
||||||
|
|
||||||
|
{/* CLAUDE.md Convention */}
|
||||||
|
<h3 id="claude-md" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
CLAUDE.md Convention
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">CLAUDE.md</span>{' '}
|
||||||
|
lives at the project root, as required by the Claude Code SDK
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
Contains project-specific instructions that the agent follows during every coding session
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
Automatically inherited by all agents working on the project — no additional
|
||||||
|
configuration needed
|
||||||
|
</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Legacy Layout Migration */}
|
||||||
|
<h3 id="legacy-migration" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Legacy Layout Migration
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Older projects stored configuration files directly at the project root (e.g.,{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">features.db</span>,{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">prompts/</span>).
|
||||||
|
</p>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
On the next agent start, these files are automatically migrated into{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">.autocoder/</span>
|
||||||
|
</li>
|
||||||
|
<li>Dual-path resolution ensures both old and new layouts work transparently</li>
|
||||||
|
<li>No manual migration is needed — it happens seamlessly</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Claude Inheritance */}
|
||||||
|
<h3 id="claude-inheritance" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Claude Inheritance
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Agents inherit all MCP servers, tools, skills, custom commands, and{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">CLAUDE.md</span>{' '}
|
||||||
|
from the target project folder.
|
||||||
|
</p>
|
||||||
|
<div className="border-l-4 border-primary pl-4 italic text-muted-foreground">
|
||||||
|
If your project has its own MCP servers or Claude commands, the coding agent can use them.
|
||||||
|
The agent essentially runs as if Claude Code was opened in your project directory.
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
)
|
||||||
|
}
|
||||||
102
ui/src/components/docs/sections/Scheduling.tsx
Normal file
102
ui/src/components/docs/sections/Scheduling.tsx
Normal file
@@ -0,0 +1,102 @@
|
|||||||
|
/**
|
||||||
|
* Scheduling Documentation Section
|
||||||
|
*
|
||||||
|
* Covers schedule creation, per-schedule settings,
|
||||||
|
* overrides, and crash recovery with exponential backoff.
|
||||||
|
*/
|
||||||
|
|
||||||
|
import { Badge } from '@/components/ui/badge'
|
||||||
|
|
||||||
|
export function Scheduling() {
|
||||||
|
return (
|
||||||
|
<div>
|
||||||
|
{/* What Scheduling Does */}
|
||||||
|
<h3 id="what-scheduling-does" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
What Scheduling Does
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-4">
|
||||||
|
Scheduling automates agent runs at specific times. Set up a schedule and AutoCoder will automatically
|
||||||
|
start agents on your project — useful for overnight builds, periodic maintenance, or continuous
|
||||||
|
development.
|
||||||
|
</p>
|
||||||
|
|
||||||
|
{/* Creating a Schedule */}
|
||||||
|
<h3 id="creating-schedule" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Creating a Schedule
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>Click the clock icon in the header to open the Schedule modal</li>
|
||||||
|
<li>Set: start time, duration (how long agents run), days of the week</li>
|
||||||
|
<li>Optionally configure: YOLO mode, concurrency, model selection</li>
|
||||||
|
<li>Schedule is saved and starts at the next matching time</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Schedule Settings */}
|
||||||
|
<h3 id="schedule-settings" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Schedule Settings
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Each schedule can override global settings:
|
||||||
|
</p>
|
||||||
|
<table className="w-full text-sm mt-3">
|
||||||
|
<thead>
|
||||||
|
<tr className="bg-muted/50">
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">Setting</th>
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">Details</th>
|
||||||
|
</tr>
|
||||||
|
</thead>
|
||||||
|
<tbody className="text-muted-foreground">
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">YOLO mode</td>
|
||||||
|
<td className="border border-border px-3 py-2">On/off per schedule</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">Concurrency</td>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
<Badge variant="secondary">1–5</Badge> agents
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">Model tier</td>
|
||||||
|
<td className="border border-border px-3 py-2">Opus / Sonnet / Haiku</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">Duration</td>
|
||||||
|
<td className="border border-border px-3 py-2">How long the session runs before auto-stopping</td>
|
||||||
|
</tr>
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
|
<div className="border-l-4 border-primary pl-4 italic text-muted-foreground mt-4">
|
||||||
|
All schedule times are in UTC timezone.
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Schedule Overrides */}
|
||||||
|
<h3 id="schedule-overrides" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Schedule Overrides
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>Manually skip a scheduled run (one-time override)</li>
|
||||||
|
<li>Pause a schedule temporarily (resumes on next period)</li>
|
||||||
|
<li>
|
||||||
|
View upcoming runs with{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">Running until</span> /{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">Next run</span> indicators
|
||||||
|
</li>
|
||||||
|
<li>Override without deleting the schedule</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Crash Recovery */}
|
||||||
|
<h3 id="crash-recovery" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Crash Recovery
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>If a scheduled agent crashes, it uses exponential backoff for retries</li>
|
||||||
|
<li>
|
||||||
|
Maximum <Badge variant="secondary">3</Badge> retry attempts per scheduled run
|
||||||
|
</li>
|
||||||
|
<li>Backoff prevents rapid restart loops</li>
|
||||||
|
<li>Failed runs are logged for troubleshooting</li>
|
||||||
|
</ul>
|
||||||
|
</div>
|
||||||
|
)
|
||||||
|
}
|
||||||
218
ui/src/components/docs/sections/Security.tsx
Normal file
218
ui/src/components/docs/sections/Security.tsx
Normal file
@@ -0,0 +1,218 @@
|
|||||||
|
/**
|
||||||
|
* Security Documentation Section
|
||||||
|
*
|
||||||
|
* Covers the defense-in-depth security model: command validation layers,
|
||||||
|
* the hierarchical allowlist/blocklist system, per-project and org-level
|
||||||
|
* configuration, extra read paths, and filesystem sandboxing.
|
||||||
|
*/
|
||||||
|
|
||||||
|
import { Badge } from '@/components/ui/badge'
|
||||||
|
|
||||||
|
export function Security() {
|
||||||
|
return (
|
||||||
|
<div>
|
||||||
|
{/* Command Validation Overview */}
|
||||||
|
<h3 id="command-validation" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Command Validation Overview
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
AutoCoder uses a defense-in-depth approach for security. All three layers must pass before any
|
||||||
|
command is executed:
|
||||||
|
</p>
|
||||||
|
<ol className="list-decimal space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
<strong className="text-foreground">OS-level sandbox</strong> — bash commands run inside
|
||||||
|
a restricted sandbox environment
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
<strong className="text-foreground">Filesystem restriction</strong> — agents can only
|
||||||
|
access the project directory (plus configured extra read paths)
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
<strong className="text-foreground">Hierarchical allowlist</strong> — every bash command
|
||||||
|
is validated against a multi-level allowlist system
|
||||||
|
</li>
|
||||||
|
</ol>
|
||||||
|
|
||||||
|
{/* Command Hierarchy */}
|
||||||
|
<h3 id="command-hierarchy" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Command Hierarchy
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Commands are evaluated against a 5-level hierarchy, from highest to lowest priority:
|
||||||
|
</p>
|
||||||
|
<ol className="list-decimal space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
<strong className="text-foreground">Hardcoded Blocklist</strong>{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">security.py</span>{' '}
|
||||||
|
— NEVER allowed, cannot be overridden
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
<strong className="text-foreground">Org Blocklist</strong>{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">~/.autocoder/config.yaml</span>{' '}
|
||||||
|
— org-wide blocks, cannot be project-overridden
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
<strong className="text-foreground">Org Allowlist</strong>{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">~/.autocoder/config.yaml</span>{' '}
|
||||||
|
— available to all projects
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
<strong className="text-foreground">Global Allowlist</strong>{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">security.py</span>{' '}
|
||||||
|
— default commands (npm, git, curl, etc.)
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
<strong className="text-foreground">Project Allowlist</strong>{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">
|
||||||
|
.autocoder/allowed_commands.yaml
|
||||||
|
</span>{' '}
|
||||||
|
— project-specific additions
|
||||||
|
</li>
|
||||||
|
</ol>
|
||||||
|
<blockquote className="border-l-4 border-primary pl-4 italic text-muted-foreground mt-4">
|
||||||
|
Higher priority levels always win. A command blocked at level 1 or 2 can never be allowed by
|
||||||
|
lower levels.
|
||||||
|
</blockquote>
|
||||||
|
|
||||||
|
{/* Hardcoded Blocklist */}
|
||||||
|
<h3 id="hardcoded-blocklist" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Hardcoded Blocklist
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
The following commands can <strong className="text-foreground">never</strong> be allowed, regardless
|
||||||
|
of any configuration. They are hardcoded in{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">security.py</span> and
|
||||||
|
cannot be overridden:
|
||||||
|
</p>
|
||||||
|
<div className="flex flex-wrap gap-2">
|
||||||
|
{['dd', 'sudo', 'su', 'shutdown', 'reboot', 'poweroff', 'mkfs', 'fdisk', 'mount', 'umount', 'systemctl'].map(
|
||||||
|
(cmd) => (
|
||||||
|
<Badge key={cmd} variant="destructive">
|
||||||
|
{cmd}
|
||||||
|
</Badge>
|
||||||
|
),
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Global Allowlist */}
|
||||||
|
<h3 id="global-allowlist" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Global Allowlist
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Default commands available to all projects out of the box. These are the standard development
|
||||||
|
commands needed for most projects:
|
||||||
|
</p>
|
||||||
|
<div className="flex flex-wrap gap-2">
|
||||||
|
{['npm', 'npx', 'node', 'git', 'curl', 'python', 'pip', 'cat', 'ls', 'mkdir', 'cp', 'mv', 'rm', 'grep', 'find'].map(
|
||||||
|
(cmd) => (
|
||||||
|
<Badge key={cmd} variant="secondary">
|
||||||
|
{cmd}
|
||||||
|
</Badge>
|
||||||
|
),
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Per-Project Allowed Commands */}
|
||||||
|
<h3 id="project-allowlist" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Per-Project Allowed Commands
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Each project can define additional allowed commands in{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">
|
||||||
|
.autocoder/allowed_commands.yaml
|
||||||
|
</span>
|
||||||
|
:
|
||||||
|
</p>
|
||||||
|
<div className="bg-muted rounded-lg p-4 font-mono text-sm">
|
||||||
|
<pre><code>{`# .autocoder/allowed_commands.yaml
|
||||||
|
version: 1
|
||||||
|
commands:
|
||||||
|
# Exact command name
|
||||||
|
- name: swift
|
||||||
|
description: Swift compiler
|
||||||
|
|
||||||
|
# Wildcard - matches swiftc, swiftlint, swiftformat
|
||||||
|
- name: swift*
|
||||||
|
description: All Swift tools (wildcard)
|
||||||
|
|
||||||
|
# Local project scripts
|
||||||
|
- name: ./scripts/build.sh
|
||||||
|
description: Project build script`}</code></pre>
|
||||||
|
</div>
|
||||||
|
<p className="text-muted-foreground mt-3">
|
||||||
|
<strong className="text-foreground">Pattern matching:</strong> exact match (
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">swift</span>), wildcard (
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">swift*</span> matches swiftc,
|
||||||
|
swiftlint, etc.), and scripts (
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">./scripts/build.sh</span>).
|
||||||
|
Limit: 100 commands per project.
|
||||||
|
</p>
|
||||||
|
|
||||||
|
{/* Organization Configuration */}
|
||||||
|
<h3 id="org-config" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Organization Configuration
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
System administrators can set org-wide policies in{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">~/.autocoder/config.yaml</span>:
|
||||||
|
</p>
|
||||||
|
<div className="bg-muted rounded-lg p-4 font-mono text-sm">
|
||||||
|
<pre><code>{`# ~/.autocoder/config.yaml
|
||||||
|
version: 1
|
||||||
|
|
||||||
|
# Commands available to ALL projects
|
||||||
|
allowed_commands:
|
||||||
|
- name: jq
|
||||||
|
description: JSON processor
|
||||||
|
|
||||||
|
# Commands blocked across ALL projects (cannot be overridden)
|
||||||
|
blocked_commands:
|
||||||
|
- aws # Prevent accidental cloud operations
|
||||||
|
- kubectl # Block production deployments`}</code></pre>
|
||||||
|
</div>
|
||||||
|
<p className="text-muted-foreground mt-3">
|
||||||
|
Org-level blocked commands cannot be overridden by any project configuration.
|
||||||
|
</p>
|
||||||
|
|
||||||
|
{/* Extra Read Paths */}
|
||||||
|
<h3 id="extra-read-paths" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Extra Read Paths
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Allow agents to read files from directories outside the project folder via the{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">EXTRA_READ_PATHS</span>{' '}
|
||||||
|
environment variable:
|
||||||
|
</p>
|
||||||
|
<div className="bg-muted rounded-lg p-4 font-mono text-sm">
|
||||||
|
<pre><code>EXTRA_READ_PATHS=/path/to/docs,/path/to/shared-libs</code></pre>
|
||||||
|
</div>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground mt-3">
|
||||||
|
<li>Must be absolute paths and must exist as directories</li>
|
||||||
|
<li>Only read operations allowed (Read, Glob, Grep — no Write/Edit)</li>
|
||||||
|
<li>
|
||||||
|
Sensitive directories are always blocked:{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">.ssh</span>,{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">.aws</span>,{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">.gnupg</span>,{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">.docker</span>,{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">.kube</span>, etc.
|
||||||
|
</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Filesystem Sandboxing */}
|
||||||
|
<h3 id="filesystem-sandboxing" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Filesystem Sandboxing
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>Agents can only write to the project directory</li>
|
||||||
|
<li>Read access is limited to the project directory plus configured extra read paths</li>
|
||||||
|
<li>
|
||||||
|
Path traversal attacks are prevented via canonicalization (
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">Path.resolve()</span>)
|
||||||
|
</li>
|
||||||
|
<li>File operations are validated before execution</li>
|
||||||
|
</ul>
|
||||||
|
</div>
|
||||||
|
)
|
||||||
|
}
|
||||||
188
ui/src/components/docs/sections/SettingsConfig.tsx
Normal file
188
ui/src/components/docs/sections/SettingsConfig.tsx
Normal file
@@ -0,0 +1,188 @@
|
|||||||
|
/**
|
||||||
|
* SettingsConfig Documentation Section
|
||||||
|
*
|
||||||
|
* Covers global settings: opening the modal, YOLO mode, headless browser,
|
||||||
|
* model selection, regression agents, batch size, concurrency, and persistence.
|
||||||
|
*/
|
||||||
|
|
||||||
|
import { Badge } from '@/components/ui/badge'
|
||||||
|
|
||||||
|
export function SettingsConfig() {
|
||||||
|
return (
|
||||||
|
<div>
|
||||||
|
{/* Opening Settings */}
|
||||||
|
<h3 id="opening-settings" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Opening Settings
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-4">
|
||||||
|
Press the <Badge variant="secondary">,</Badge> (comma) key or click the gear icon in the header bar to
|
||||||
|
open the Settings modal. Settings are global and apply to all projects.
|
||||||
|
</p>
|
||||||
|
|
||||||
|
{/* YOLO Mode */}
|
||||||
|
<h3 id="yolo-mode" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
YOLO Mode
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
YOLO mode is for rapid prototyping — it skips testing for faster iteration:
|
||||||
|
</p>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
<strong className="text-foreground">What’s skipped:</strong> Regression testing, Playwright MCP
|
||||||
|
server (browser automation disabled)
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
<strong className="text-foreground">What still runs:</strong> Lint and type-check (to verify code
|
||||||
|
compiles), Feature MCP server for tracking
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
Toggle via the lightning bolt button in the UI or the{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">--yolo</span> CLI flag
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
<strong className="text-foreground">When to use:</strong> Early prototyping when you want to scaffold
|
||||||
|
features quickly without verification overhead
|
||||||
|
</li>
|
||||||
|
<li>Switch back to standard mode for production-quality development</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Headless Browser */}
|
||||||
|
<h3 id="headless-browser" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Headless Browser
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>When enabled, Playwright runs without a visible browser window</li>
|
||||||
|
<li>Saves CPU/GPU resources on machines running multiple agents</li>
|
||||||
|
<li>Tests still run fully — just no visible browser UI</li>
|
||||||
|
<li>Toggle in settings or via the UI button</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Model Selection */}
|
||||||
|
<h3 id="model-selection" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Model Selection
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Choose which Claude model tier to use for your agents:
|
||||||
|
</p>
|
||||||
|
<table className="w-full text-sm mt-3">
|
||||||
|
<thead>
|
||||||
|
<tr className="bg-muted/50">
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">Tier</th>
|
||||||
|
<th className="border border-border px-3 py-2 text-left font-medium text-foreground">
|
||||||
|
Characteristics
|
||||||
|
</th>
|
||||||
|
</tr>
|
||||||
|
</thead>
|
||||||
|
<tbody className="text-muted-foreground">
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
<Badge variant="default">Opus</Badge>
|
||||||
|
</td>
|
||||||
|
<td className="border border-border px-3 py-2">Most capable, highest quality</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
<Badge variant="secondary">Sonnet</Badge>
|
||||||
|
</td>
|
||||||
|
<td className="border border-border px-3 py-2">Balanced speed and quality</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td className="border border-border px-3 py-2">
|
||||||
|
<Badge variant="outline">Haiku</Badge>
|
||||||
|
</td>
|
||||||
|
<td className="border border-border px-3 py-2">Fastest, most economical</td>
|
||||||
|
</tr>
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground mt-4">
|
||||||
|
<li>Model can be set globally in settings</li>
|
||||||
|
<li>Per-schedule model override is also available</li>
|
||||||
|
<li>
|
||||||
|
When using Vertex AI, model names use{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">@</span> instead of{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">-</span> (e.g.,{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">
|
||||||
|
claude-opus-4-5@20251101
|
||||||
|
</span>
|
||||||
|
)
|
||||||
|
</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Regression Agents */}
|
||||||
|
<h3 id="regression-agents" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Regression Agents
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Controls how many testing agents run alongside coding agents (0–3):
|
||||||
|
</p>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
<strong className="text-foreground">0:</strong> No regression testing (like YOLO but coding agents
|
||||||
|
still test their own feature)
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
<strong className="text-foreground">1:</strong> One testing agent runs in background verifying
|
||||||
|
completed features
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
<strong className="text-foreground">2–3:</strong> Multiple testing agents for thorough
|
||||||
|
verification
|
||||||
|
</li>
|
||||||
|
<li>Testing agents batch-test 1–5 features per session</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Features per Agent / Batch Size */}
|
||||||
|
<h3 id="features-per-agent" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Features per Agent (Batch Size)
|
||||||
|
</h3>
|
||||||
|
<p className="text-muted-foreground mb-3">
|
||||||
|
Controls how many features each coding agent implements per session (1–3):
|
||||||
|
</p>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
<strong className="text-foreground">1:</strong> One feature per session (most focused, lower risk of
|
||||||
|
conflicts)
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
<strong className="text-foreground">2–3:</strong> Multiple features per session (more efficient,
|
||||||
|
fewer session startups)
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
Set via settings UI or the{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">--batch-size</span> CLI flag
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
Can also target specific features:{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">--batch-features 1,2,3</span>
|
||||||
|
</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* Concurrency */}
|
||||||
|
<h3 id="concurrency-setting" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
Concurrency
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>Per-project default concurrency saved in project settings</li>
|
||||||
|
<li>Override at runtime with the concurrency slider in agent controls</li>
|
||||||
|
<li>
|
||||||
|
Range: <Badge variant="secondary">1–5</Badge> concurrent coding agents
|
||||||
|
</li>
|
||||||
|
<li>Higher concurrency = faster progress but more API cost</li>
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{/* How Settings are Persisted */}
|
||||||
|
<h3 id="settings-persistence" className="text-lg font-semibold text-foreground mt-8 mb-3">
|
||||||
|
How Settings are Persisted
|
||||||
|
</h3>
|
||||||
|
<ul className="list-disc space-y-2 ml-4 text-muted-foreground">
|
||||||
|
<li>
|
||||||
|
Global settings stored in SQLite registry at{' '}
|
||||||
|
<span className="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">~/.autocoder/registry.db</span>
|
||||||
|
</li>
|
||||||
|
<li>Per-project settings (like default concurrency) stored in the project registry entry</li>
|
||||||
|
<li>UI settings (theme, dark mode) stored in browser localStorage</li>
|
||||||
|
<li>Settings survive app restarts and are shared across UI sessions</li>
|
||||||
|
</ul>
|
||||||
|
</div>
|
||||||
|
)
|
||||||
|
}
|
||||||
529
ui/src/components/mascotData.tsx
Normal file
529
ui/src/components/mascotData.tsx
Normal file
@@ -0,0 +1,529 @@
|
|||||||
|
/**
|
||||||
|
* SVG mascot definitions and color palettes for agent avatars.
|
||||||
|
*
|
||||||
|
* Each mascot is a simple, cute SVG character rendered as a React component.
|
||||||
|
* Colors are keyed by AgentMascot name so avatars stay visually distinct
|
||||||
|
* when multiple agents run in parallel.
|
||||||
|
*/
|
||||||
|
|
||||||
|
import type { AgentMascot } from '../lib/types'
|
||||||
|
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
// Color types and palettes
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
export interface MascotColorPalette {
|
||||||
|
primary: string
|
||||||
|
secondary: string
|
||||||
|
accent: string
|
||||||
|
}
|
||||||
|
|
||||||
|
/** Props shared by every mascot SVG component. */
|
||||||
|
export interface MascotSVGProps {
|
||||||
|
colors: MascotColorPalette
|
||||||
|
size: number
|
||||||
|
}
|
||||||
|
|
||||||
|
/** Fallback colors for unknown / untracked agents (neutral gray). */
|
||||||
|
export const UNKNOWN_COLORS: MascotColorPalette = {
|
||||||
|
primary: '#6B7280',
|
||||||
|
secondary: '#9CA3AF',
|
||||||
|
accent: '#F3F4F6',
|
||||||
|
}
|
||||||
|
|
||||||
|
export const AVATAR_COLORS: Record<AgentMascot, MascotColorPalette> = {
|
||||||
|
// Original 5
|
||||||
|
Spark: { primary: '#3B82F6', secondary: '#60A5FA', accent: '#DBEAFE' }, // Blue robot
|
||||||
|
Fizz: { primary: '#F97316', secondary: '#FB923C', accent: '#FFEDD5' }, // Orange fox
|
||||||
|
Octo: { primary: '#8B5CF6', secondary: '#A78BFA', accent: '#EDE9FE' }, // Purple octopus
|
||||||
|
Hoot: { primary: '#22C55E', secondary: '#4ADE80', accent: '#DCFCE7' }, // Green owl
|
||||||
|
Buzz: { primary: '#EAB308', secondary: '#FACC15', accent: '#FEF9C3' }, // Yellow bee
|
||||||
|
// Tech-inspired
|
||||||
|
Pixel: { primary: '#EC4899', secondary: '#F472B6', accent: '#FCE7F3' }, // Pink
|
||||||
|
Byte: { primary: '#06B6D4', secondary: '#22D3EE', accent: '#CFFAFE' }, // Cyan
|
||||||
|
Nova: { primary: '#F43F5E', secondary: '#FB7185', accent: '#FFE4E6' }, // Rose
|
||||||
|
Chip: { primary: '#84CC16', secondary: '#A3E635', accent: '#ECFCCB' }, // Lime
|
||||||
|
Bolt: { primary: '#FBBF24', secondary: '#FCD34D', accent: '#FEF3C7' }, // Amber
|
||||||
|
// Energetic
|
||||||
|
Dash: { primary: '#14B8A6', secondary: '#2DD4BF', accent: '#CCFBF1' }, // Teal
|
||||||
|
Zap: { primary: '#A855F7', secondary: '#C084FC', accent: '#F3E8FF' }, // Violet
|
||||||
|
Gizmo: { primary: '#64748B', secondary: '#94A3B8', accent: '#F1F5F9' }, // Slate
|
||||||
|
Turbo: { primary: '#EF4444', secondary: '#F87171', accent: '#FEE2E2' }, // Red
|
||||||
|
Blip: { primary: '#10B981', secondary: '#34D399', accent: '#D1FAE5' }, // Emerald
|
||||||
|
// Playful
|
||||||
|
Neon: { primary: '#D946EF', secondary: '#E879F9', accent: '#FAE8FF' }, // Fuchsia
|
||||||
|
Widget: { primary: '#6366F1', secondary: '#818CF8', accent: '#E0E7FF' }, // Indigo
|
||||||
|
Zippy: { primary: '#F59E0B', secondary: '#FBBF24', accent: '#FEF3C7' }, // Orange-yellow
|
||||||
|
Quirk: { primary: '#0EA5E9', secondary: '#38BDF8', accent: '#E0F2FE' }, // Sky
|
||||||
|
Flux: { primary: '#7C3AED', secondary: '#8B5CF6', accent: '#EDE9FE' }, // Purple
|
||||||
|
}
|
||||||
|
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
// SVG mascot components - simple cute characters
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
function SparkSVG({ colors, size }: MascotSVGProps) {
|
||||||
|
return (
|
||||||
|
<svg width={size} height={size} viewBox="0 0 64 64" fill="none">
|
||||||
|
{/* Robot body */}
|
||||||
|
<rect x="16" y="20" width="32" height="28" rx="4" fill={colors.primary} />
|
||||||
|
{/* Robot head */}
|
||||||
|
<rect x="12" y="8" width="40" height="24" rx="4" fill={colors.secondary} />
|
||||||
|
{/* Antenna */}
|
||||||
|
<circle cx="32" cy="4" r="4" fill={colors.primary} className="animate-pulse" />
|
||||||
|
<rect x="30" y="4" width="4" height="8" fill={colors.primary} />
|
||||||
|
{/* Eyes */}
|
||||||
|
<circle cx="24" cy="18" r="4" fill="white" />
|
||||||
|
<circle cx="40" cy="18" r="4" fill="white" />
|
||||||
|
<circle cx="25" cy="18" r="2" fill={colors.primary} />
|
||||||
|
<circle cx="41" cy="18" r="2" fill={colors.primary} />
|
||||||
|
{/* Mouth */}
|
||||||
|
<rect x="26" y="24" width="12" height="2" rx="1" fill="white" />
|
||||||
|
{/* Arms */}
|
||||||
|
<rect x="6" y="24" width="8" height="4" rx="2" fill={colors.primary} />
|
||||||
|
<rect x="50" y="24" width="8" height="4" rx="2" fill={colors.primary} />
|
||||||
|
</svg>
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
function FizzSVG({ colors, size }: MascotSVGProps) {
|
||||||
|
return (
|
||||||
|
<svg width={size} height={size} viewBox="0 0 64 64" fill="none">
|
||||||
|
{/* Ears */}
|
||||||
|
<polygon points="12,12 20,28 4,28" fill={colors.primary} />
|
||||||
|
<polygon points="52,12 60,28 44,28" fill={colors.primary} />
|
||||||
|
<polygon points="14,14 18,26 8,26" fill={colors.accent} />
|
||||||
|
<polygon points="50,14 56,26 44,26" fill={colors.accent} />
|
||||||
|
{/* Head */}
|
||||||
|
<ellipse cx="32" cy="36" rx="24" ry="22" fill={colors.primary} />
|
||||||
|
{/* Face */}
|
||||||
|
<ellipse cx="32" cy="40" rx="18" ry="14" fill={colors.accent} />
|
||||||
|
{/* Eyes */}
|
||||||
|
<ellipse cx="24" cy="32" rx="4" ry="5" fill="white" />
|
||||||
|
<ellipse cx="40" cy="32" rx="4" ry="5" fill="white" />
|
||||||
|
<circle cx="25" cy="33" r="2" fill="#1a1a1a" />
|
||||||
|
<circle cx="41" cy="33" r="2" fill="#1a1a1a" />
|
||||||
|
{/* Nose */}
|
||||||
|
<ellipse cx="32" cy="42" rx="4" ry="3" fill={colors.primary} />
|
||||||
|
{/* Whiskers */}
|
||||||
|
<line x1="8" y1="38" x2="18" y2="40" stroke={colors.primary} strokeWidth="2" />
|
||||||
|
<line x1="8" y1="44" x2="18" y2="44" stroke={colors.primary} strokeWidth="2" />
|
||||||
|
<line x1="46" y1="40" x2="56" y2="38" stroke={colors.primary} strokeWidth="2" />
|
||||||
|
<line x1="46" y1="44" x2="56" y2="44" stroke={colors.primary} strokeWidth="2" />
|
||||||
|
</svg>
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
function OctoSVG({ colors, size }: MascotSVGProps) {
|
||||||
|
return (
|
||||||
|
    <svg width={size} height={size} viewBox="0 0 64 64" fill="none">
      {/* Tentacles */}
      <path d="M12,48 Q8,56 12,60 Q16,64 20,58" fill={colors.secondary} />
      <path d="M22,50 Q20,58 24,62" fill={colors.secondary} />
      <path d="M32,52 Q32,60 36,62" fill={colors.secondary} />
      <path d="M42,50 Q44,58 40,62" fill={colors.secondary} />
      <path d="M52,48 Q56,56 52,60 Q48,64 44,58" fill={colors.secondary} />
      {/* Head */}
      <ellipse cx="32" cy="32" rx="22" ry="24" fill={colors.primary} />
      {/* Eyes */}
      <ellipse cx="24" cy="28" rx="6" ry="8" fill="white" />
      <ellipse cx="40" cy="28" rx="6" ry="8" fill="white" />
      <ellipse cx="25" cy="30" rx="3" ry="4" fill={colors.primary} />
      <ellipse cx="41" cy="30" rx="3" ry="4" fill={colors.primary} />
      {/* Smile */}
      <path d="M24,42 Q32,48 40,42" stroke={colors.accent} strokeWidth="2" fill="none" strokeLinecap="round" />
    </svg>
  )
}

function HootSVG({ colors, size }: MascotSVGProps) {
  return (
    <svg width={size} height={size} viewBox="0 0 64 64" fill="none">
      {/* Ear tufts */}
      <polygon points="14,8 22,24 6,20" fill={colors.primary} />
      <polygon points="50,8 58,20 42,24" fill={colors.primary} />
      {/* Body */}
      <ellipse cx="32" cy="40" rx="20" ry="18" fill={colors.primary} />
      {/* Head */}
      <circle cx="32" cy="28" r="20" fill={colors.secondary} />
      {/* Eye circles */}
      <circle cx="24" cy="26" r="10" fill={colors.accent} />
      <circle cx="40" cy="26" r="10" fill={colors.accent} />
      {/* Eyes */}
      <circle cx="24" cy="26" r="6" fill="white" />
      <circle cx="40" cy="26" r="6" fill="white" />
      <circle cx="25" cy="27" r="3" fill="#1a1a1a" />
      <circle cx="41" cy="27" r="3" fill="#1a1a1a" />
      {/* Beak */}
      <polygon points="32,32 28,40 36,40" fill="#F97316" />
      {/* Belly */}
      <ellipse cx="32" cy="46" rx="10" ry="8" fill={colors.accent} />
    </svg>
  )
}

function BuzzSVG({ colors, size }: MascotSVGProps) {
  return (
    <svg width={size} height={size} viewBox="0 0 64 64" fill="none">
      {/* Wings */}
      <ellipse cx="14" cy="32" rx="10" ry="14" fill={colors.accent} opacity="0.8" className="animate-pulse" />
      <ellipse cx="50" cy="32" rx="10" ry="14" fill={colors.accent} opacity="0.8" className="animate-pulse" />
      {/* Body stripes */}
      <ellipse cx="32" cy="36" rx="14" ry="20" fill={colors.primary} />
      <ellipse cx="32" cy="30" rx="12" ry="6" fill="#1a1a1a" />
      <ellipse cx="32" cy="44" rx="12" ry="6" fill="#1a1a1a" />
      {/* Head */}
      <circle cx="32" cy="16" r="12" fill={colors.primary} />
      {/* Antennae */}
      <line x1="26" y1="8" x2="22" y2="2" stroke="#1a1a1a" strokeWidth="2" />
      <line x1="38" y1="8" x2="42" y2="2" stroke="#1a1a1a" strokeWidth="2" />
      <circle cx="22" cy="2" r="2" fill="#1a1a1a" />
      <circle cx="42" cy="2" r="2" fill="#1a1a1a" />
      {/* Eyes */}
      <circle cx="28" cy="14" r="4" fill="white" />
      <circle cx="36" cy="14" r="4" fill="white" />
      <circle cx="29" cy="15" r="2" fill="#1a1a1a" />
      <circle cx="37" cy="15" r="2" fill="#1a1a1a" />
      {/* Smile */}
      <path d="M28,20 Q32,24 36,20" stroke="#1a1a1a" strokeWidth="1.5" fill="none" strokeLinecap="round" />
    </svg>
  )
}

function PixelSVG({ colors, size }: MascotSVGProps) {
  return (
    <svg width={size} height={size} viewBox="0 0 64 64" fill="none">
      {/* Blocky body */}
      <rect x="20" y="28" width="24" height="28" fill={colors.primary} />
      <rect x="16" y="32" width="8" height="20" fill={colors.secondary} />
      <rect x="40" y="32" width="8" height="20" fill={colors.secondary} />
      {/* Head */}
      <rect x="16" y="8" width="32" height="24" fill={colors.primary} />
      {/* Eyes */}
      <rect x="20" y="14" width="8" height="8" fill="white" />
      <rect x="36" y="14" width="8" height="8" fill="white" />
      <rect x="24" y="16" width="4" height="4" fill="#1a1a1a" />
      <rect x="38" y="16" width="4" height="4" fill="#1a1a1a" />
      {/* Mouth */}
      <rect x="26" y="26" width="12" height="4" fill={colors.accent} />
    </svg>
  )
}

function ByteSVG({ colors, size }: MascotSVGProps) {
  return (
    <svg width={size} height={size} viewBox="0 0 64 64" fill="none">
      {/* 3D cube body */}
      <polygon points="32,8 56,20 56,44 32,56 8,44 8,20" fill={colors.primary} />
      <polygon points="32,8 56,20 32,32 8,20" fill={colors.secondary} />
      <polygon points="32,32 56,20 56,44 32,56" fill={colors.accent} opacity="0.6" />
      {/* Face */}
      <circle cx="24" cy="28" r="4" fill="white" />
      <circle cx="40" cy="28" r="4" fill="white" />
      <circle cx="25" cy="29" r="2" fill="#1a1a1a" />
      <circle cx="41" cy="29" r="2" fill="#1a1a1a" />
      <path d="M26,38 Q32,42 38,38" stroke="white" strokeWidth="2" fill="none" strokeLinecap="round" />
    </svg>
  )
}

function NovaSVG({ colors, size }: MascotSVGProps) {
  return (
    <svg width={size} height={size} viewBox="0 0 64 64" fill="none">
      {/* Star points */}
      <polygon points="32,2 38,22 58,22 42,36 48,56 32,44 16,56 22,36 6,22 26,22" fill={colors.primary} />
      <circle cx="32" cy="32" r="14" fill={colors.secondary} />
      {/* Face */}
      <circle cx="27" cy="30" r="3" fill="white" />
      <circle cx="37" cy="30" r="3" fill="white" />
      <circle cx="28" cy="31" r="1.5" fill="#1a1a1a" />
      <circle cx="38" cy="31" r="1.5" fill="#1a1a1a" />
      <path d="M28,37 Q32,40 36,37" stroke="#1a1a1a" strokeWidth="1.5" fill="none" strokeLinecap="round" />
    </svg>
  )
}

function ChipSVG({ colors, size }: MascotSVGProps) {
  return (
    <svg width={size} height={size} viewBox="0 0 64 64" fill="none">
      {/* Chip body */}
      <rect x="16" y="16" width="32" height="32" rx="4" fill={colors.primary} />
      {/* Pins */}
      <rect x="20" y="10" width="4" height="8" fill={colors.secondary} />
      <rect x="30" y="10" width="4" height="8" fill={colors.secondary} />
      <rect x="40" y="10" width="4" height="8" fill={colors.secondary} />
      <rect x="20" y="46" width="4" height="8" fill={colors.secondary} />
      <rect x="30" y="46" width="4" height="8" fill={colors.secondary} />
      <rect x="40" y="46" width="4" height="8" fill={colors.secondary} />
      {/* Face */}
      <circle cx="26" cy="28" r="4" fill={colors.accent} />
      <circle cx="38" cy="28" r="4" fill={colors.accent} />
      <circle cx="26" cy="28" r="2" fill="#1a1a1a" />
      <circle cx="38" cy="28" r="2" fill="#1a1a1a" />
      <rect x="26" y="38" width="12" height="3" rx="1" fill={colors.accent} />
    </svg>
  )
}

function BoltSVG({ colors, size }: MascotSVGProps) {
  return (
    <svg width={size} height={size} viewBox="0 0 64 64" fill="none">
      {/* Lightning bolt body */}
      <polygon points="36,4 20,28 30,28 24,60 48,32 36,32 44,4" fill={colors.primary} />
      <polygon points="34,8 24,26 32,26 28,52 42,34 34,34 40,8" fill={colors.secondary} />
      {/* Face */}
      <circle cx="30" cy="30" r="3" fill="white" />
      <circle cx="38" cy="26" r="3" fill="white" />
      <circle cx="31" cy="31" r="1.5" fill="#1a1a1a" />
      <circle cx="39" cy="27" r="1.5" fill="#1a1a1a" />
    </svg>
  )
}

function DashSVG({ colors, size }: MascotSVGProps) {
  return (
    <svg width={size} height={size} viewBox="0 0 64 64" fill="none">
      {/* Speed lines */}
      <rect x="4" y="28" width="12" height="3" rx="1" fill={colors.accent} opacity="0.6" />
      <rect x="8" y="34" width="10" height="3" rx="1" fill={colors.accent} opacity="0.4" />
      {/* Aerodynamic body */}
      <ellipse cx="36" cy="32" rx="20" ry="16" fill={colors.primary} />
      <ellipse cx="40" cy="32" rx="14" ry="12" fill={colors.secondary} />
      {/* Face */}
      <circle cx="38" cy="28" r="4" fill="white" />
      <circle cx="48" cy="28" r="4" fill="white" />
      <circle cx="39" cy="29" r="2" fill="#1a1a1a" />
      <circle cx="49" cy="29" r="2" fill="#1a1a1a" />
      <path d="M40,36 Q44,39 48,36" stroke="#1a1a1a" strokeWidth="1.5" fill="none" strokeLinecap="round" />
    </svg>
  )
}

function ZapSVG({ colors, size }: MascotSVGProps) {
  return (
    <svg width={size} height={size} viewBox="0 0 64 64" fill="none">
      {/* Electric sparks */}
      <path d="M12,32 L20,28 L16,32 L22,30" stroke={colors.secondary} strokeWidth="2" className="animate-pulse" />
      <path d="M52,32 L44,28 L48,32 L42,30" stroke={colors.secondary} strokeWidth="2" className="animate-pulse" />
      {/* Orb */}
      <circle cx="32" cy="32" r="18" fill={colors.primary} />
      <circle cx="32" cy="32" r="14" fill={colors.secondary} />
      {/* Face */}
      <circle cx="26" cy="30" r="4" fill="white" />
      <circle cx="38" cy="30" r="4" fill="white" />
      <circle cx="27" cy="31" r="2" fill={colors.primary} />
      <circle cx="39" cy="31" r="2" fill={colors.primary} />
      <path d="M28,40 Q32,44 36,40" stroke="white" strokeWidth="2" fill="none" strokeLinecap="round" />
    </svg>
  )
}

function GizmoSVG({ colors, size }: MascotSVGProps) {
  return (
    <svg width={size} height={size} viewBox="0 0 64 64" fill="none">
      {/* Gear teeth */}
      <rect x="28" y="4" width="8" height="8" fill={colors.primary} />
      <rect x="28" y="52" width="8" height="8" fill={colors.primary} />
      <rect x="4" y="28" width="8" height="8" fill={colors.primary} />
      <rect x="52" y="28" width="8" height="8" fill={colors.primary} />
      {/* Gear body */}
      <circle cx="32" cy="32" r="20" fill={colors.primary} />
      <circle cx="32" cy="32" r="14" fill={colors.secondary} />
      {/* Face */}
      <circle cx="26" cy="30" r="4" fill="white" />
      <circle cx="38" cy="30" r="4" fill="white" />
      <circle cx="27" cy="31" r="2" fill="#1a1a1a" />
      <circle cx="39" cy="31" r="2" fill="#1a1a1a" />
      <path d="M28,40 Q32,43 36,40" stroke="#1a1a1a" strokeWidth="2" fill="none" strokeLinecap="round" />
    </svg>
  )
}

function TurboSVG({ colors, size }: MascotSVGProps) {
  return (
    <svg width={size} height={size} viewBox="0 0 64 64" fill="none">
      {/* Flames */}
      <ellipse cx="32" cy="58" rx="8" ry="6" fill="#FBBF24" className="animate-pulse" />
      <ellipse cx="32" cy="56" rx="5" ry="4" fill="#FCD34D" />
      {/* Rocket body */}
      <ellipse cx="32" cy="32" rx="14" ry="24" fill={colors.primary} />
      {/* Nose cone */}
      <ellipse cx="32" cy="12" rx="8" ry="10" fill={colors.secondary} />
      {/* Fins */}
      <polygon points="18,44 10,56 18,52" fill={colors.secondary} />
      <polygon points="46,44 54,56 46,52" fill={colors.secondary} />
      {/* Window/Face */}
      <circle cx="32" cy="28" r="8" fill={colors.accent} />
      <circle cx="29" cy="27" r="2" fill="#1a1a1a" />
      <circle cx="35" cy="27" r="2" fill="#1a1a1a" />
      <path d="M29,32 Q32,34 35,32" stroke="#1a1a1a" strokeWidth="1" fill="none" />
    </svg>
  )
}

function BlipSVG({ colors, size }: MascotSVGProps) {
  return (
    <svg width={size} height={size} viewBox="0 0 64 64" fill="none">
      {/* Radar rings */}
      <circle cx="32" cy="32" r="28" stroke={colors.accent} strokeWidth="2" fill="none" opacity="0.3" />
      <circle cx="32" cy="32" r="22" stroke={colors.accent} strokeWidth="2" fill="none" opacity="0.5" />
      {/* Main dot */}
      <circle cx="32" cy="32" r="14" fill={colors.primary} />
      <circle cx="32" cy="32" r="10" fill={colors.secondary} />
      {/* Face */}
      <circle cx="28" cy="30" r="3" fill="white" />
      <circle cx="36" cy="30" r="3" fill="white" />
      <circle cx="29" cy="31" r="1.5" fill="#1a1a1a" />
      <circle cx="37" cy="31" r="1.5" fill="#1a1a1a" />
      <path d="M29,37 Q32,40 35,37" stroke="white" strokeWidth="1.5" fill="none" strokeLinecap="round" />
    </svg>
  )
}

function NeonSVG({ colors, size }: MascotSVGProps) {
  return (
    <svg width={size} height={size} viewBox="0 0 64 64" fill="none">
      {/* Glow effect */}
      <circle cx="32" cy="32" r="26" fill={colors.accent} opacity="0.3" />
      <circle cx="32" cy="32" r="22" fill={colors.accent} opacity="0.5" />
      {/* Body */}
      <circle cx="32" cy="32" r="18" fill={colors.primary} />
      {/* Inner glow */}
      <circle cx="32" cy="32" r="12" fill={colors.secondary} />
      {/* Face */}
      <circle cx="27" cy="30" r="4" fill="white" />
      <circle cx="37" cy="30" r="4" fill="white" />
      <circle cx="28" cy="31" r="2" fill={colors.primary} />
      <circle cx="38" cy="31" r="2" fill={colors.primary} />
      <path d="M28,38 Q32,42 36,38" stroke="white" strokeWidth="2" fill="none" strokeLinecap="round" />
    </svg>
  )
}

function WidgetSVG({ colors, size }: MascotSVGProps) {
  return (
    <svg width={size} height={size} viewBox="0 0 64 64" fill="none">
      {/* Window frame */}
      <rect x="8" y="12" width="48" height="40" rx="4" fill={colors.primary} />
      {/* Title bar */}
      <rect x="8" y="12" width="48" height="10" rx="4" fill={colors.secondary} />
      <circle cx="16" cy="17" r="2" fill="#EF4444" />
      <circle cx="24" cy="17" r="2" fill="#FBBF24" />
      <circle cx="32" cy="17" r="2" fill="#22C55E" />
      {/* Content area / Face */}
      <rect x="12" y="26" width="40" height="22" rx="2" fill={colors.accent} />
      <circle cx="24" cy="34" r="4" fill="white" />
      <circle cx="40" cy="34" r="4" fill="white" />
      <circle cx="25" cy="35" r="2" fill={colors.primary} />
      <circle cx="41" cy="35" r="2" fill={colors.primary} />
      <rect x="28" y="42" width="8" height="3" rx="1" fill={colors.primary} />
    </svg>
  )
}

function ZippySVG({ colors, size }: MascotSVGProps) {
  return (
    <svg width={size} height={size} viewBox="0 0 64 64" fill="none">
      {/* Ears */}
      <ellipse cx="22" cy="14" rx="6" ry="14" fill={colors.primary} />
      <ellipse cx="42" cy="14" rx="6" ry="14" fill={colors.primary} />
      <ellipse cx="22" cy="14" rx="3" ry="10" fill={colors.accent} />
      <ellipse cx="42" cy="14" rx="3" ry="10" fill={colors.accent} />
      {/* Head */}
      <circle cx="32" cy="38" r="20" fill={colors.primary} />
      {/* Face */}
      <circle cx="24" cy="34" r="5" fill="white" />
      <circle cx="40" cy="34" r="5" fill="white" />
      <circle cx="25" cy="35" r="2.5" fill="#1a1a1a" />
      <circle cx="41" cy="35" r="2.5" fill="#1a1a1a" />
      {/* Nose and mouth */}
      <ellipse cx="32" cy="44" rx="3" ry="2" fill={colors.secondary} />
      <path d="M32,46 L32,50 M28,52 Q32,56 36,52" stroke="#1a1a1a" strokeWidth="1.5" fill="none" />
    </svg>
  )
}

function QuirkSVG({ colors, size }: MascotSVGProps) {
  return (
    <svg width={size} height={size} viewBox="0 0 64 64" fill="none">
      {/* Question mark body */}
      <path d="M24,20 Q24,8 32,8 Q44,8 44,20 Q44,28 32,32 L32,40"
        stroke={colors.primary} strokeWidth="8" fill="none" strokeLinecap="round" />
      <circle cx="32" cy="52" r="6" fill={colors.primary} />
      {/* Face on the dot */}
      <circle cx="29" cy="51" r="1.5" fill="white" />
      <circle cx="35" cy="51" r="1.5" fill="white" />
      <circle cx="29" cy="51" r="0.75" fill="#1a1a1a" />
      <circle cx="35" cy="51" r="0.75" fill="#1a1a1a" />
      {/* Decorative swirl */}
      <circle cx="32" cy="20" r="4" fill={colors.secondary} />
    </svg>
  )
}

function FluxSVG({ colors, size }: MascotSVGProps) {
  return (
    <svg width={size} height={size} viewBox="0 0 64 64" fill="none">
      {/* Wave body */}
      <path d="M8,32 Q16,16 32,32 Q48,48 56,32" stroke={colors.primary} strokeWidth="16" fill="none" strokeLinecap="round" />
      <path d="M8,32 Q16,16 32,32 Q48,48 56,32" stroke={colors.secondary} strokeWidth="10" fill="none" strokeLinecap="round" />
      {/* Face */}
      <circle cx="28" cy="28" r="4" fill="white" />
      <circle cx="40" cy="36" r="4" fill="white" />
      <circle cx="29" cy="29" r="2" fill="#1a1a1a" />
      <circle cx="41" cy="37" r="2" fill="#1a1a1a" />
      {/* Sparkles */}
      <circle cx="16" cy="24" r="2" fill={colors.accent} className="animate-pulse" />
      <circle cx="48" cy="40" r="2" fill={colors.accent} className="animate-pulse" />
    </svg>
  )
}

/** Fallback icon for unknown / untracked agents. */
function UnknownSVG({ colors, size }: MascotSVGProps) {
  return (
    <svg width={size} height={size} viewBox="0 0 64 64" fill="none" xmlns="http://www.w3.org/2000/svg">
      {/* Circle background */}
      <circle cx="32" cy="32" r="28" fill={colors.primary} />
      <circle cx="32" cy="32" r="24" fill={colors.secondary} />
      {/* Question mark */}
      <text x="32" y="44" textAnchor="middle" fontSize="32" fontWeight="bold" fill="white">?</text>
    </svg>
  )
}

// ---------------------------------------------------------------------------
// Mascot component lookup
// ---------------------------------------------------------------------------

/** Maps each mascot name to its SVG component. */
export const MASCOT_SVGS: Record<AgentMascot, React.FC<MascotSVGProps>> = {
  // Original 5
  Spark: SparkSVG,
  Fizz: FizzSVG,
  Octo: OctoSVG,
  Hoot: HootSVG,
  Buzz: BuzzSVG,
  // Tech-inspired
  Pixel: PixelSVG,
  Byte: ByteSVG,
  Nova: NovaSVG,
  Chip: ChipSVG,
  Bolt: BoltSVG,
  // Energetic
  Dash: DashSVG,
  Zap: ZapSVG,
  Gizmo: GizmoSVG,
  Turbo: TurboSVG,
  Blip: BlipSVG,
  // Playful
  Neon: NeonSVG,
  Widget: WidgetSVG,
  Zippy: ZippySVG,
  Quirk: QuirkSVG,
  Flux: FluxSVG,
}

/** The SVG component for unknown agents. Exported separately because
 * it is not part of the AgentMascot union type. */
export const UnknownMascotSVG: React.FC<MascotSVGProps> = UnknownSVG
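For orientation, a minimal lookup sketch assuming the exports above; the `AgentAvatar` component and its props are hypothetical and not part of this diff. It resolves the agent's mascot name against `MASCOT_SVGS` and falls back to `UnknownMascotSVG` for names outside the `AgentMascot` union.

// Hypothetical usage sketch -- not part of this diff.
function AgentAvatar({ name, colors, size = 48 }: {
  name: AgentMascot | 'Unknown'
  colors: MascotSVGProps['colors']
  size?: number
}) {
  // Names outside the AgentMascot union fall back to the question-mark icon.
  const Svg = name === 'Unknown' ? UnknownMascotSVG : MASCOT_SVGS[name]
  return <Svg colors={colors} size={size} />
}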
@@ -1,87 +0,0 @@
import * as React from "react"
import * as PopoverPrimitive from "@radix-ui/react-popover"

import { cn } from "@/lib/utils"

function Popover({
  ...props
}: React.ComponentProps<typeof PopoverPrimitive.Root>) {
  return <PopoverPrimitive.Root data-slot="popover" {...props} />
}

function PopoverTrigger({
  ...props
}: React.ComponentProps<typeof PopoverPrimitive.Trigger>) {
  return <PopoverPrimitive.Trigger data-slot="popover-trigger" {...props} />
}

function PopoverContent({
  className,
  align = "center",
  sideOffset = 4,
  ...props
}: React.ComponentProps<typeof PopoverPrimitive.Content>) {
  return (
    <PopoverPrimitive.Portal>
      <PopoverPrimitive.Content
        data-slot="popover-content"
        align={align}
        sideOffset={sideOffset}
        className={cn(
          "bg-popover text-popover-foreground data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 data-[state=closed]:zoom-out-95 data-[state=open]:zoom-in-95 data-[side=bottom]:slide-in-from-top-2 data-[side=left]:slide-in-from-right-2 data-[side=right]:slide-in-from-left-2 data-[side=top]:slide-in-from-bottom-2 z-50 w-72 origin-(--radix-popover-content-transform-origin) rounded-md border p-4 shadow-md outline-hidden",
          className
        )}
        {...props}
      />
    </PopoverPrimitive.Portal>
  )
}

function PopoverAnchor({
  ...props
}: React.ComponentProps<typeof PopoverPrimitive.Anchor>) {
  return <PopoverPrimitive.Anchor data-slot="popover-anchor" {...props} />
}

function PopoverHeader({ className, ...props }: React.ComponentProps<"div">) {
  return (
    <div
      data-slot="popover-header"
      className={cn("flex flex-col gap-1 text-sm", className)}
      {...props}
    />
  )
}

function PopoverTitle({ className, ...props }: React.ComponentProps<"h2">) {
  return (
    <div
      data-slot="popover-title"
      className={cn("font-medium", className)}
      {...props}
    />
  )
}

function PopoverDescription({
  className,
  ...props
}: React.ComponentProps<"p">) {
  return (
    <p
      data-slot="popover-description"
      className={cn("text-muted-foreground", className)}
      {...props}
    />
  )
}

export {
  Popover,
  PopoverTrigger,
  PopoverContent,
  PopoverAnchor,
  PopoverHeader,
  PopoverTitle,
  PopoverDescription,
}
@@ -1,45 +0,0 @@
"use client"

import * as React from "react"
import * as RadioGroupPrimitive from "@radix-ui/react-radio-group"
import { CircleIcon } from "lucide-react"

import { cn } from "@/lib/utils"

function RadioGroup({
  className,
  ...props
}: React.ComponentProps<typeof RadioGroupPrimitive.Root>) {
  return (
    <RadioGroupPrimitive.Root
      data-slot="radio-group"
      className={cn("grid gap-3", className)}
      {...props}
    />
  )
}

function RadioGroupItem({
  className,
  ...props
}: React.ComponentProps<typeof RadioGroupPrimitive.Item>) {
  return (
    <RadioGroupPrimitive.Item
      data-slot="radio-group-item"
      className={cn(
        "border-input text-primary focus-visible:border-ring focus-visible:ring-ring/50 aria-invalid:ring-destructive/20 dark:aria-invalid:ring-destructive/40 aria-invalid:border-destructive dark:bg-input/30 aspect-square size-4 shrink-0 rounded-full border shadow-xs transition-[color,box-shadow] outline-none focus-visible:ring-[3px] disabled:cursor-not-allowed disabled:opacity-50",
        className
      )}
      {...props}
    >
      <RadioGroupPrimitive.Indicator
        data-slot="radio-group-indicator"
        className="relative flex items-center justify-center"
      >
        <CircleIcon className="fill-primary absolute top-1/2 left-1/2 size-2 -translate-x-1/2 -translate-y-1/2" />
      </RadioGroupPrimitive.Indicator>
    </RadioGroupPrimitive.Item>
  )
}

export { RadioGroup, RadioGroupItem }
@@ -1,56 +0,0 @@
import * as React from "react"
import * as ScrollAreaPrimitive from "@radix-ui/react-scroll-area"

import { cn } from "@/lib/utils"

function ScrollArea({
  className,
  children,
  ...props
}: React.ComponentProps<typeof ScrollAreaPrimitive.Root>) {
  return (
    <ScrollAreaPrimitive.Root
      data-slot="scroll-area"
      className={cn("relative", className)}
      {...props}
    >
      <ScrollAreaPrimitive.Viewport
        data-slot="scroll-area-viewport"
        className="focus-visible:ring-ring/50 size-full rounded-[inherit] transition-[color,box-shadow] outline-none focus-visible:ring-[3px] focus-visible:outline-1"
      >
        {children}
      </ScrollAreaPrimitive.Viewport>
      <ScrollBar />
      <ScrollAreaPrimitive.Corner />
    </ScrollAreaPrimitive.Root>
  )
}

function ScrollBar({
  className,
  orientation = "vertical",
  ...props
}: React.ComponentProps<typeof ScrollAreaPrimitive.ScrollAreaScrollbar>) {
  return (
    <ScrollAreaPrimitive.ScrollAreaScrollbar
      data-slot="scroll-area-scrollbar"
      orientation={orientation}
      className={cn(
        "flex touch-none p-px transition-colors select-none",
        orientation === "vertical" &&
          "h-full w-2.5 border-l border-l-transparent",
        orientation === "horizontal" &&
          "h-2.5 flex-col border-t border-t-transparent",
        className
      )}
      {...props}
    >
      <ScrollAreaPrimitive.ScrollAreaThumb
        data-slot="scroll-area-thumb"
        className="bg-border relative flex-1 rounded-full"
      />
    </ScrollAreaPrimitive.ScrollAreaScrollbar>
  )
}

export { ScrollArea, ScrollBar }
@@ -1,190 +0,0 @@
"use client"

import * as React from "react"
import * as SelectPrimitive from "@radix-ui/react-select"
import { CheckIcon, ChevronDownIcon, ChevronUpIcon } from "lucide-react"

import { cn } from "@/lib/utils"

function Select({
  ...props
}: React.ComponentProps<typeof SelectPrimitive.Root>) {
  return <SelectPrimitive.Root data-slot="select" {...props} />
}

function SelectGroup({
  ...props
}: React.ComponentProps<typeof SelectPrimitive.Group>) {
  return <SelectPrimitive.Group data-slot="select-group" {...props} />
}

function SelectValue({
  ...props
}: React.ComponentProps<typeof SelectPrimitive.Value>) {
  return <SelectPrimitive.Value data-slot="select-value" {...props} />
}

function SelectTrigger({
  className,
  size = "default",
  children,
  ...props
}: React.ComponentProps<typeof SelectPrimitive.Trigger> & {
  size?: "sm" | "default"
}) {
  return (
    <SelectPrimitive.Trigger
      data-slot="select-trigger"
      data-size={size}
      className={cn(
        "border-input data-[placeholder]:text-muted-foreground [&_svg:not([class*='text-'])]:text-muted-foreground focus-visible:border-ring focus-visible:ring-ring/50 aria-invalid:ring-destructive/20 dark:aria-invalid:ring-destructive/40 aria-invalid:border-destructive dark:bg-input/30 dark:hover:bg-input/50 flex w-fit items-center justify-between gap-2 rounded-md border bg-transparent px-3 py-2 text-sm whitespace-nowrap shadow-xs transition-[color,box-shadow] outline-none focus-visible:ring-[3px] disabled:cursor-not-allowed disabled:opacity-50 data-[size=default]:h-9 data-[size=sm]:h-8 *:data-[slot=select-value]:line-clamp-1 *:data-[slot=select-value]:flex *:data-[slot=select-value]:items-center *:data-[slot=select-value]:gap-2 [&_svg]:pointer-events-none [&_svg]:shrink-0 [&_svg:not([class*='size-'])]:size-4",
        className
      )}
      {...props}
    >
      {children}
      <SelectPrimitive.Icon asChild>
        <ChevronDownIcon className="size-4 opacity-50" />
      </SelectPrimitive.Icon>
    </SelectPrimitive.Trigger>
  )
}

function SelectContent({
  className,
  children,
  position = "item-aligned",
  align = "center",
  ...props
}: React.ComponentProps<typeof SelectPrimitive.Content>) {
  return (
    <SelectPrimitive.Portal>
      <SelectPrimitive.Content
        data-slot="select-content"
        className={cn(
          "bg-popover text-popover-foreground data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 data-[state=closed]:zoom-out-95 data-[state=open]:zoom-in-95 data-[side=bottom]:slide-in-from-top-2 data-[side=left]:slide-in-from-right-2 data-[side=right]:slide-in-from-left-2 data-[side=top]:slide-in-from-bottom-2 relative z-50 max-h-(--radix-select-content-available-height) min-w-[8rem] origin-(--radix-select-content-transform-origin) overflow-x-hidden overflow-y-auto rounded-md border shadow-md",
          position === "popper" &&
            "data-[side=bottom]:translate-y-1 data-[side=left]:-translate-x-1 data-[side=right]:translate-x-1 data-[side=top]:-translate-y-1",
          className
        )}
        position={position}
        align={align}
        {...props}
      >
        <SelectScrollUpButton />
        <SelectPrimitive.Viewport
          className={cn(
            "p-1",
            position === "popper" &&
              "h-[var(--radix-select-trigger-height)] w-full min-w-[var(--radix-select-trigger-width)] scroll-my-1"
          )}
        >
          {children}
        </SelectPrimitive.Viewport>
        <SelectScrollDownButton />
      </SelectPrimitive.Content>
    </SelectPrimitive.Portal>
  )
}

function SelectLabel({
  className,
  ...props
}: React.ComponentProps<typeof SelectPrimitive.Label>) {
  return (
    <SelectPrimitive.Label
      data-slot="select-label"
      className={cn("text-muted-foreground px-2 py-1.5 text-xs", className)}
      {...props}
    />
  )
}

function SelectItem({
  className,
  children,
  ...props
}: React.ComponentProps<typeof SelectPrimitive.Item>) {
  return (
    <SelectPrimitive.Item
      data-slot="select-item"
      className={cn(
        "focus:bg-accent focus:text-accent-foreground [&_svg:not([class*='text-'])]:text-muted-foreground relative flex w-full cursor-default items-center gap-2 rounded-sm py-1.5 pr-8 pl-2 text-sm outline-hidden select-none data-[disabled]:pointer-events-none data-[disabled]:opacity-50 [&_svg]:pointer-events-none [&_svg]:shrink-0 [&_svg:not([class*='size-'])]:size-4 *:[span]:last:flex *:[span]:last:items-center *:[span]:last:gap-2",
        className
      )}
      {...props}
    >
      <span
        data-slot="select-item-indicator"
        className="absolute right-2 flex size-3.5 items-center justify-center"
      >
        <SelectPrimitive.ItemIndicator>
          <CheckIcon className="size-4" />
        </SelectPrimitive.ItemIndicator>
      </span>
      <SelectPrimitive.ItemText>{children}</SelectPrimitive.ItemText>
    </SelectPrimitive.Item>
  )
}

function SelectSeparator({
  className,
  ...props
}: React.ComponentProps<typeof SelectPrimitive.Separator>) {
  return (
    <SelectPrimitive.Separator
      data-slot="select-separator"
      className={cn("bg-border pointer-events-none -mx-1 my-1 h-px", className)}
      {...props}
    />
  )
}

function SelectScrollUpButton({
  className,
  ...props
}: React.ComponentProps<typeof SelectPrimitive.ScrollUpButton>) {
  return (
    <SelectPrimitive.ScrollUpButton
      data-slot="select-scroll-up-button"
      className={cn(
        "flex cursor-default items-center justify-center py-1",
        className
      )}
      {...props}
    >
      <ChevronUpIcon className="size-4" />
    </SelectPrimitive.ScrollUpButton>
  )
}

function SelectScrollDownButton({
  className,
  ...props
}: React.ComponentProps<typeof SelectPrimitive.ScrollDownButton>) {
  return (
    <SelectPrimitive.ScrollDownButton
      data-slot="select-scroll-down-button"
      className={cn(
        "flex cursor-default items-center justify-center py-1",
        className
      )}
      {...props}
    >
      <ChevronDownIcon className="size-4" />
    </SelectPrimitive.ScrollDownButton>
  )
}

export {
  Select,
  SelectContent,
  SelectGroup,
  SelectItem,
  SelectLabel,
  SelectScrollDownButton,
  SelectScrollUpButton,
  SelectSeparator,
  SelectTrigger,
  SelectValue,
}
@@ -1,89 +0,0 @@
import * as React from "react"
import * as TabsPrimitive from "@radix-ui/react-tabs"
import { cva, type VariantProps } from "class-variance-authority"

import { cn } from "@/lib/utils"

function Tabs({
  className,
  orientation = "horizontal",
  ...props
}: React.ComponentProps<typeof TabsPrimitive.Root>) {
  return (
    <TabsPrimitive.Root
      data-slot="tabs"
      data-orientation={orientation}
      orientation={orientation}
      className={cn(
        "group/tabs flex gap-2 data-[orientation=horizontal]:flex-col",
        className
      )}
      {...props}
    />
  )
}

const tabsListVariants = cva(
  "rounded-lg p-[3px] group-data-[orientation=horizontal]/tabs:h-9 data-[variant=line]:rounded-none group/tabs-list text-muted-foreground inline-flex w-fit items-center justify-center group-data-[orientation=vertical]/tabs:h-fit group-data-[orientation=vertical]/tabs:flex-col",
  {
    variants: {
      variant: {
        default: "bg-muted",
        line: "gap-1 bg-transparent",
      },
    },
    defaultVariants: {
      variant: "default",
    },
  }
)

function TabsList({
  className,
  variant = "default",
  ...props
}: React.ComponentProps<typeof TabsPrimitive.List> &
  VariantProps<typeof tabsListVariants>) {
  return (
    <TabsPrimitive.List
      data-slot="tabs-list"
      data-variant={variant}
      className={cn(tabsListVariants({ variant }), className)}
      {...props}
    />
  )
}

function TabsTrigger({
  className,
  ...props
}: React.ComponentProps<typeof TabsPrimitive.Trigger>) {
  return (
    <TabsPrimitive.Trigger
      data-slot="tabs-trigger"
      className={cn(
        "focus-visible:border-ring focus-visible:ring-ring/50 focus-visible:outline-ring text-foreground/60 hover:text-foreground dark:text-muted-foreground dark:hover:text-foreground relative inline-flex h-[calc(100%-1px)] flex-1 items-center justify-center gap-1.5 rounded-md border border-transparent px-2 py-1 text-sm font-medium whitespace-nowrap transition-all group-data-[orientation=vertical]/tabs:w-full group-data-[orientation=vertical]/tabs:justify-start focus-visible:ring-[3px] focus-visible:outline-1 disabled:pointer-events-none disabled:opacity-50 group-data-[variant=default]/tabs-list:data-[state=active]:shadow-sm group-data-[variant=line]/tabs-list:data-[state=active]:shadow-none [&_svg]:pointer-events-none [&_svg]:shrink-0 [&_svg:not([class*='size-'])]:size-4",
        "group-data-[variant=line]/tabs-list:bg-transparent group-data-[variant=line]/tabs-list:data-[state=active]:bg-transparent dark:group-data-[variant=line]/tabs-list:data-[state=active]:border-transparent dark:group-data-[variant=line]/tabs-list:data-[state=active]:bg-transparent",
        "data-[state=active]:bg-background dark:data-[state=active]:text-foreground dark:data-[state=active]:border-input dark:data-[state=active]:bg-input/30 data-[state=active]:text-foreground",
        "after:bg-foreground after:absolute after:opacity-0 after:transition-opacity group-data-[orientation=horizontal]/tabs:after:inset-x-0 group-data-[orientation=horizontal]/tabs:after:bottom-[-5px] group-data-[orientation=horizontal]/tabs:after:h-0.5 group-data-[orientation=vertical]/tabs:after:inset-y-0 group-data-[orientation=vertical]/tabs:after:-right-1 group-data-[orientation=vertical]/tabs:after:w-0.5 group-data-[variant=line]/tabs-list:data-[state=active]:after:opacity-100",
        className
      )}
      {...props}
    />
  )
}

function TabsContent({
  className,
  ...props
}: React.ComponentProps<typeof TabsPrimitive.Content>) {
  return (
    <TabsPrimitive.Content
      data-slot="tabs-content"
      className={cn("flex-1 outline-none", className)}
      {...props}
    />
  )
}

export { Tabs, TabsList, TabsTrigger, TabsContent, tabsListVariants }
@@ -1,47 +0,0 @@
"use client"

import * as React from "react"
import * as TogglePrimitive from "@radix-ui/react-toggle"
import { cva, type VariantProps } from "class-variance-authority"

import { cn } from "@/lib/utils"

const toggleVariants = cva(
  "inline-flex items-center justify-center gap-2 rounded-md text-sm font-medium hover:bg-muted hover:text-muted-foreground disabled:pointer-events-none disabled:opacity-50 data-[state=on]:bg-accent data-[state=on]:text-accent-foreground [&_svg]:pointer-events-none [&_svg:not([class*='size-'])]:size-4 [&_svg]:shrink-0 focus-visible:border-ring focus-visible:ring-ring/50 focus-visible:ring-[3px] outline-none transition-[color,box-shadow] aria-invalid:ring-destructive/20 dark:aria-invalid:ring-destructive/40 aria-invalid:border-destructive whitespace-nowrap",
  {
    variants: {
      variant: {
        default: "bg-transparent",
        outline:
          "border border-input bg-transparent shadow-xs hover:bg-accent hover:text-accent-foreground",
      },
      size: {
        default: "h-9 px-2 min-w-9",
        sm: "h-8 px-1.5 min-w-8",
        lg: "h-10 px-2.5 min-w-10",
      },
    },
    defaultVariants: {
      variant: "default",
      size: "default",
    },
  }
)

function Toggle({
  className,
  variant,
  size,
  ...props
}: React.ComponentProps<typeof TogglePrimitive.Root> &
  VariantProps<typeof toggleVariants>) {
  return (
    <TogglePrimitive.Root
      data-slot="toggle"
      className={cn(toggleVariants({ variant, size, className }))}
      {...props}
    />
  )
}

export { Toggle, toggleVariants }
@@ -1,61 +0,0 @@
"use client"

import * as React from "react"
import * as TooltipPrimitive from "@radix-ui/react-tooltip"

import { cn } from "@/lib/utils"

function TooltipProvider({
  delayDuration = 0,
  ...props
}: React.ComponentProps<typeof TooltipPrimitive.Provider>) {
  return (
    <TooltipPrimitive.Provider
      data-slot="tooltip-provider"
      delayDuration={delayDuration}
      {...props}
    />
  )
}

function Tooltip({
  ...props
}: React.ComponentProps<typeof TooltipPrimitive.Root>) {
  return (
    <TooltipProvider>
      <TooltipPrimitive.Root data-slot="tooltip" {...props} />
    </TooltipProvider>
  )
}

function TooltipTrigger({
  ...props
}: React.ComponentProps<typeof TooltipPrimitive.Trigger>) {
  return <TooltipPrimitive.Trigger data-slot="tooltip-trigger" {...props} />
}

function TooltipContent({
  className,
  sideOffset = 0,
  children,
  ...props
}: React.ComponentProps<typeof TooltipPrimitive.Content>) {
  return (
    <TooltipPrimitive.Portal>
      <TooltipPrimitive.Content
        data-slot="tooltip-content"
        sideOffset={sideOffset}
        className={cn(
          "bg-foreground text-background animate-in fade-in-0 zoom-in-95 data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=closed]:zoom-out-95 data-[side=bottom]:slide-in-from-top-2 data-[side=left]:slide-in-from-right-2 data-[side=right]:slide-in-from-left-2 data-[side=top]:slide-in-from-bottom-2 z-50 w-fit origin-(--radix-tooltip-content-transform-origin) rounded-md px-3 py-1.5 text-xs text-balance",
          className
        )}
        {...props}
      >
        {children}
        <TooltipPrimitive.Arrow className="bg-foreground fill-foreground z-50 size-2.5 translate-y-[calc(-50%_-_2px)] rotate-45 rounded-[2px]" />
      </TooltipPrimitive.Content>
    </TooltipPrimitive.Portal>
  )
}

export { Tooltip, TooltipTrigger, TooltipContent, TooltipProvider }
36 ui/src/hooks/useHashRoute.ts Normal file
@@ -0,0 +1,36 @@
import { useState, useEffect, useCallback } from 'react'

export type Route = 'app' | 'docs'

interface HashRouteState {
  route: Route
  section: string | null
  navigate: (hash: string) => void
}

function parseHash(hash: string): { route: Route; section: string | null } {
  const cleaned = hash.replace(/^#\/?/, '')
  if (cleaned === 'docs' || cleaned.startsWith('docs/')) {
    const section = cleaned.slice(5) || null // Remove 'docs/' prefix
    return { route: 'docs', section }
  }
  return { route: 'app', section: null }
}

export function useHashRoute(): HashRouteState {
  const [state, setState] = useState(() => parseHash(window.location.hash))

  useEffect(() => {
    const handleHashChange = () => {
      setState(parseHash(window.location.hash))
    }
    window.addEventListener('hashchange', handleHashChange)
    return () => window.removeEventListener('hashchange', handleHashChange)
  }, [])

  const navigate = useCallback((hash: string) => {
    window.location.hash = hash
  }, [])

  return { ...state, navigate }
}
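A minimal consumer sketch (the `DocsLink` component and its markup are hypothetical, not part of this diff): the hook re-renders on `hashchange`, and `navigate` simply rewrites `window.location.hash`, so a hash like `#/docs/getting-started` resolves to `route === 'docs'` with `section === 'getting-started'`.

// Hypothetical usage sketch -- not part of this diff.
import { useHashRoute } from './hooks/useHashRoute'

function DocsLink() {
  const { route, section, navigate } = useHashRoute()
  return (
    <button onClick={() => navigate('#/docs/getting-started')}>
      {route === 'docs' ? `Reading: ${section ?? 'overview'}` : 'Open docs'}
    </button>
  )
}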
@@ -266,6 +266,8 @@ const DEFAULT_SETTINGS: Settings = {
   glm_mode: false,
   ollama_mode: false,
   testing_agent_ratio: 1,
+  playwright_headless: true,
+  batch_size: 3,
 }
 
 export function useAvailableModels() {
@@ -210,6 +210,7 @@ export function useProjectWebSocket(projectName: string | null) {
         agentName: message.agentName,
         agentType: message.agentType || 'coding', // Default to coding for backwards compat
         featureId: message.featureId,
+        featureIds: message.featureIds || [message.featureId],
         featureName: message.featureName,
         state: message.state,
         thought: message.thought,
@@ -225,6 +226,7 @@ export function useProjectWebSocket(projectName: string | null) {
         agentName: message.agentName,
         agentType: message.agentType || 'coding', // Default to coding for backwards compat
         featureId: message.featureId,
+        featureIds: message.featureIds || [message.featureId],
         featureName: message.featureName,
         state: message.state,
         thought: message.thought,
@@ -199,7 +199,8 @@ export interface ActiveAgent {
   agentIndex: number // -1 for synthetic completions
   agentName: AgentMascot | 'Unknown'
   agentType: AgentType // "coding" or "testing"
-  featureId: number
+  featureId: number // Current/primary feature (backward compat)
+  featureIds: number[] // All features in batch
   featureName: string
   state: AgentState
   thought?: string
@@ -270,6 +271,7 @@ export interface WSAgentUpdateMessage {
   agentName: AgentMascot | 'Unknown'
   agentType: AgentType // "coding" or "testing"
   featureId: number
+  featureIds?: number[] // All features in batch (may be absent for backward compat)
   featureName: string
   state: AgentState
   thought?: string
@@ -529,12 +531,16 @@
   glm_mode: boolean
   ollama_mode: boolean
   testing_agent_ratio: number // Regression testing agents (0-3)
+  playwright_headless: boolean
+  batch_size: number // Features per coding agent batch (1-3)
 }
 
 export interface SettingsUpdate {
   yolo_mode?: boolean
   model?: string
   testing_agent_ratio?: number
+  playwright_headless?: boolean
+  batch_size?: number
 }
 
 export interface ProjectSettingsUpdate {
@@ -1,7 +1,9 @@
 import { StrictMode } from 'react'
 import { createRoot } from 'react-dom/client'
 import { QueryClient, QueryClientProvider } from '@tanstack/react-query'
+import { useHashRoute } from './hooks/useHashRoute'
 import App from './App'
+import { DocsPage } from './components/docs/DocsPage'
 import './styles/globals.css'
 // Note: Custom theme removed - using shadcn/ui theming instead
@@ -14,10 +16,16 @@ const queryClient = new QueryClient({
   },
 })
 
+function Router() {
+  const { route } = useHashRoute()
+  if (route === 'docs') return <DocsPage />
+  return <App />
+}
+
 createRoot(document.getElementById('root')!).render(
   <StrictMode>
     <QueryClientProvider client={queryClient}>
-      <App />
+      <Router />
     </QueryClientProvider>
   </StrictMode>,
 )
@@ -1,5 +1,5 @@
 @import "tailwindcss";
-@import url("tw-animate-css");
+@import "tw-animate-css";
 
 /* Enable class-based dark mode in Tailwind v4 */
 @custom-variant dark (&:where(.dark, .dark *));
@@ -1134,6 +1134,143 @@
   }
 }
 
+/* ============================================================================
+   Documentation Prose Typography
+   ============================================================================ */
+
+.docs-prose {
+  line-height: 1.7;
+  color: var(--muted-foreground);
+}
+
+.docs-prose h2 {
+  font-size: 1.5rem;
+  font-weight: 700;
+  color: var(--foreground);
+  margin-top: 3rem;
+  margin-bottom: 1rem;
+  padding-bottom: 0.5rem;
+  border-bottom: 2px solid var(--border);
+  scroll-margin-top: 5rem;
+}
+
+.docs-prose h2:first-child {
+  margin-top: 0;
+}
+
+.docs-prose h3 {
+  font-size: 1.15rem;
+  font-weight: 600;
+  color: var(--foreground);
+  margin-top: 2rem;
+  margin-bottom: 0.75rem;
+  scroll-margin-top: 5rem;
+}
+
+.docs-prose p {
+  margin-bottom: 1rem;
+  max-width: 65ch;
+}
+
+.docs-prose ul,
+.docs-prose ol {
+  margin-bottom: 1rem;
+  padding-left: 1.5rem;
+}
+
+.docs-prose ul {
+  list-style-type: disc;
+}
+
+.docs-prose ol {
+  list-style-type: decimal;
+}
+
+.docs-prose li {
+  margin-bottom: 0.375rem;
+}
+
+.docs-prose li > ul,
+.docs-prose li > ol {
+  margin-top: 0.375rem;
+  margin-bottom: 0;
+}
+
+.docs-prose pre {
+  background: var(--muted);
+  border: 1px solid var(--border);
+  border-radius: var(--radius);
+  padding: 1rem;
+  overflow-x: auto;
+  margin-bottom: 1rem;
+  font-family: var(--font-mono);
+  font-size: 0.8125rem;
+  line-height: 1.6;
+}
+
+.docs-prose code:not(pre code) {
+  background: var(--muted);
+  padding: 0.125rem 0.375rem;
+  border-radius: 0.25rem;
+  font-family: var(--font-mono);
+  font-size: 0.8125rem;
+}
+
+.docs-prose table {
+  width: 100%;
+  border-collapse: collapse;
+  margin-bottom: 1rem;
+  font-size: 0.875rem;
+}
+
+.docs-prose th {
+  background: var(--muted);
+  font-weight: 600;
+  color: var(--foreground);
+  text-align: left;
+  padding: 0.5rem 0.75rem;
+  border: 1px solid var(--border);
+}
+
+.docs-prose td {
+  padding: 0.5rem 0.75rem;
+  border: 1px solid var(--border);
+}
+
+.docs-prose tr:nth-child(even) td {
+  background: var(--muted);
+  opacity: 0.5;
+}
+
+.docs-prose blockquote {
+  border-left: 4px solid var(--primary);
+  padding-left: 1rem;
+  margin-bottom: 1rem;
+  font-style: italic;
+  color: var(--muted-foreground);
+}
+
+.docs-prose a {
+  color: var(--primary);
+  text-decoration: underline;
+  text-underline-offset: 2px;
+}
+
+.docs-prose a:hover {
+  opacity: 0.8;
+}
+
+.docs-prose strong {
+  color: var(--foreground);
+  font-weight: 600;
+}
+
+.docs-prose hr {
+  border: none;
+  border-top: 1px solid var(--border);
+  margin: 2rem 0;
+}
+
 /* ============================================================================
    Scrollbar Styling
    ============================================================================ */
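How these styles are consumed is not shown in this excerpt; a plausible (hypothetical) consumer wraps rendered documentation HTML in the `docs-prose` class so the typography rules above apply to everything inside it.

// Hypothetical sketch -- the actual DocsPage markup is not shown in this diff.
function DocsSection({ html }: { html: string }) {
  // Headings, lists, tables, and code inside inherit the .docs-prose rules.
  return <article className="docs-prose" dangerouslySetInnerHTML={{ __html: html }} />
}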
Some files were not shown because too many files have changed in this diff.