Add prominent warnings about Anthropic's Agent SDK policy regarding
subscription-based authentication for third-party agents. Users are
now advised to use API keys instead of `claude login` to avoid
potential account suspension.
Changes:
- README: Add WARNING and NOTE admonition boxes at top (auth policy
+ repo no longer actively maintained)
- README: Flip auth recommendation to API key first, subscription second
- SettingsModal: Add amber warning Alert when Claude provider is selected
- auth.py: Update CLI/server help messages to recommend API key as Option 1
- Start scripts (start.sh, start.bat, start_ui.sh): Mention ANTHROPIC_API_KEY
alongside claude login in all auth hints
- start.py, autonomous_agent_demo.py: Update help text references
No functionality removed: subscription auth still works, and the
warnings are informational only.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add support for uploading Markdown, Text, Word (.docx), CSV, Excel (.xlsx),
PDF, and PowerPoint (.pptx) files in addition to existing JPEG/PNG image
uploads in the spec creation and project expansion chat interfaces.
Backend changes:
- New server/utils/document_extraction.py: in-memory text extraction for all
document formats using python-docx, openpyxl, PyPDF2, python-pptx (no disk
persistence)
- Rename ImageAttachment to FileAttachment across schemas, routers, and
chat session services
- Add build_attachment_content_blocks() helper in chat_constants.py to route
images as image content blocks and documents as extracted text blocks
- Separate size limits: 5MB for images, 20MB for documents
- Handle extraction errors (corrupt files, encrypted PDFs) gracefully
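The extraction routing might look roughly like the sketch below; this is a hedged illustration, not the actual module. The names (`extract_text`, `ExtractionError`) and the lazy third-party imports are assumptions, and only a few of the supported formats are shown:

```python
import io

class ExtractionError(Exception):
    """Raised when a document cannot be parsed (corrupt, encrypted, unsupported)."""

def extract_text(filename: str, data: bytes) -> str:
    """Extract plain text from an uploaded file's bytes, entirely in memory."""
    ext = filename.rsplit(".", 1)[-1].lower()
    if ext in ("md", "txt", "csv"):
        return data.decode("utf-8", errors="replace")
    if ext == "docx":
        import docx  # python-docx, imported lazily
        document = docx.Document(io.BytesIO(data))  # parsed in memory, never written to disk
        return "\n".join(p.text for p in document.paragraphs)
    if ext == "pdf":
        import PyPDF2
        try:
            reader = PyPDF2.PdfReader(io.BytesIO(data))
            if reader.is_encrypted:
                raise ExtractionError("encrypted PDF")
            return "\n".join(page.extract_text() or "" for page in reader.pages)
        except PyPDF2.errors.PdfReadError as exc:
            raise ExtractionError(f"corrupt PDF: {exc}") from exc
    raise ExtractionError(f"unsupported extension: {ext}")
```

Keeping the bytes in a `BytesIO` buffer (rather than a temp file) is what "no disk persistence" amounts to in practice.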
Frontend changes:
- Widen accepted MIME types and file extensions in both chat components
- Add resolveMimeType() fallback for browsers that don't set MIME on .md files
- Document attachments display with FileText icon instead of image thumbnail
- ChatMessage renders documents as compact pills with filename and size
- Update help text from "attach images" to "attach files"
Dependencies added: python-docx, openpyxl, PyPDF2, python-pptx
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Update rollup, minimatch, ajv, and lodash to patched versions
via npm audit fix (2 high, 2 moderate → 0 vulnerabilities).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add a new scaffold system that lets users choose a project template
(blank or agentic starter) during project creation. This inserts a
template selection step between folder selection and spec method choice.
Backend:
- New server/routers/scaffold.py with SSE streaming endpoint for
running hardcoded scaffold commands (npx create-agentic-app)
- Path validation, security checks, and cross-platform npx resolution
- Registered scaffold_router in server/main.py and routers/__init__.py
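A stripped-down sketch of the streaming core, assuming the real endpoint wraps an async generator like this in an SSE response (the template table, function names, and frame helper are illustrative, not the actual router code):

```python
import asyncio
import shutil

# Only hardcoded commands are ever executed; the template key is the sole user input.
ALLOWED_TEMPLATES = {"agentic": ["create-agentic-app"]}

def sse_frame(text: str) -> str:
    # One Server-Sent Events data frame per output line.
    return f"data: {text}\n\n"

async def stream_scaffold(template: str, project_dir: str):
    # Cross-platform npx resolution (npx.cmd on Windows).
    npx = shutil.which("npx") or shutil.which("npx.cmd")
    if npx is None:
        yield sse_frame("error: npx not found on PATH")
        return
    proc = await asyncio.create_subprocess_exec(
        npx, *ALLOWED_TEMPLATES[template], project_dir,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.STDOUT,  # interleave stderr into the log stream
    )
    assert proc.stdout is not None
    async for line in proc.stdout:
        yield sse_frame(line.decode(errors="replace").rstrip())
    yield sse_frame(f"done exit={await proc.wait()}")
```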
Frontend (NewProjectModal.tsx):
- New "template" step with Blank Project and Agentic Starter cards
- Real-time scaffold output streaming with auto-scroll log viewer
- Success, error, and retry states with proper back-navigation
- Updated step flow: name → folder → template → method → chat/complete
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The Claude Code CLI v2.1.45+ emits a `rate_limit_event` message type that
the Python SDK v0.1.19 cannot parse, raising MessageParseError. Two bugs
resulted:
1. **False-positive rate limit**: check_rate_limit_error() matched
"rate_limit" in the exception string "Unknown message type:
rate_limit_event" via both an explicit type check and a regex fallback,
triggering 15-19s backoff + query re-send on every session.
2. **One-message-behind**: The MessageParseError killed the
receive_response() async generator, but the CLI subprocess was still
alive with buffered response data. Catching the error and returning
early meant the response was never consumed, so the next send_message()
read the previous response first, creating a one-behind offset.
Changes:
- chat_constants.py: check_rate_limit_error() now returns (False, None)
for any MessageParseError, blocking both false-positive paths. Added
safe_receive_response() helper that retries receive_response() on
MessageParseError — the SDK's decoupled producer/consumer architecture
(anyio memory channel) allows the new generator to continue reading
remaining messages without data loss. Removed calculate_rate_limit_backoff
re-export and MAX_CHAT_RATE_LIMIT_RETRIES constant.
- spec_chat_session.py, assistant_chat_session.py, expand_chat_session.py:
Replaced retry-with-backoff loops with safe_receive_response() wrapper.
Removed asyncio.sleep backoff, query re-send, and rate_limited yield.
Cleaned up unused imports (asyncio, calculate_rate_limit_backoff,
MAX_CHAT_RATE_LIMIT_RETRIES).
- agent.py: Added inner retry loop around receive_response() with same
MessageParseError skip-and-restart pattern. Removed early-return that
truncated responses.
- types.ts: Removed SpecChatRateLimitedMessage,
AssistantChatRateLimitedMessage, and their union entries.
- useSpecChat.ts, useAssistantChat.ts, useExpandChat.ts: Removed dead
'rate_limited' case handlers.
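The skip-and-restart pattern behind safe_receive_response() can be sketched as follows, with a stub MessageParseError and an assumed client interface (the real SDK types and signatures differ):

```python
import asyncio

class MessageParseError(Exception):
    """Stub for the SDK error raised on unknown message types like rate_limit_event."""

async def safe_receive_response(client, max_restarts: int = 10):
    """Yield messages, restarting the generator when a parse error kills it."""
    restarts = 0
    while True:
        try:
            async for message in client.receive_response():
                yield message
            return  # generator finished cleanly: full response consumed
        except MessageParseError:
            # The SDK's producer and consumer are decoupled (anyio memory
            # channel), so a fresh receive_response() generator resumes from
            # the next buffered message without data loss: skip and restart.
            restarts += 1
            if restarts > max_restarts:
                raise
```

The key property is that the unparseable message is dropped but everything after it is still delivered, which removes the one-behind offset.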
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The Claude CLI sends `rate_limit_event` messages that the SDK's
`parse_message()` doesn't recognize, raising `MessageParseError` and
crashing all three chat session types (spec, assistant, expand).
Changes:
- Bump claude-agent-sdk minimum from 0.1.0 to 0.1.39
- Add `check_rate_limit_error()` helper in chat_constants.py that
detects rate limits from both MessageParseError data payloads and
error message text patterns
- Wrap `receive_response()` loops in all three `_query_claude()` methods
with retry-on-rate-limit logic (up to 3 retries with backoff)
- Gracefully log and skip non-rate-limit MessageParseError instead of
crashing the session
- Add `rate_limited` message type to frontend TypeScript types and
handle it in useSpecChat, useAssistantChat, useExpandChat hooks to
show "Rate limited. Retrying in Xs..." system messages
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Create VISION.md establishing AutoForge as a Claude Agent SDK wrapper
exclusively, rejecting integrations with other AI SDKs/CLIs/platforms
- Update review-pr.md step 6 to make vision deviation a merge blocker
(previously informational only) and auto-reject PRs modifying VISION.md
- Add .claude/launch.json with backend (uvicorn) and frontend (Vite)
dev server configurations
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Validate select option structure (value/label keys, non-empty strings)
and reject options on non-select field types.
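Sketched as a standalone validator; the field dict shape and error type are assumptions, not the project's actual API:

```python
def validate_field_options(field: dict) -> None:
    """Raise ValueError unless the field's options are structurally valid."""
    options = field.get("options")
    if field.get("type") != "select":
        if options is not None:
            raise ValueError("options are only allowed on select fields")
        return
    if not isinstance(options, list) or not options:
        raise ValueError("select fields require a non-empty options list")
    for opt in options:
        if not isinstance(opt, dict) or set(opt) != {"value", "label"}:
            raise ValueError("each option needs exactly value/label keys")
        if not all(isinstance(opt[k], str) and opt[k].strip() for k in ("value", "label")):
            raise ValueError("option value/label must be non-empty strings")
```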
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add merge conflict detection as step 2 in PR review command, surfacing
conflicts early before the rest of the review proceeds
- Refine merge recommendations: always fix issues on the PR branch before
merging rather than merging first and fixing on main afterward
- Update verdict definitions (MERGE / MERGE after fixes / DON'T MERGE)
with clearer action guidance for each outcome
- Add GLM 5 model to the GLM API provider in registry
- Clean up ui/package-lock.json (remove unnecessary peer flags)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
A) Graph view: add needs_human_input bucket to handleGraphNodeClick so
clicking blocked nodes opens the feature modal
B) MCP validation: validate field type enum, require options for select,
enforce unique non-empty field IDs and labels
C) Progress fallback: include needs_human_input in non-WebSocket total
D) WebSocket: track needs_human_input count in progress state
E) Cleanup guard: remove a now-unnecessary needs_human_input check in
   _cleanup_stale_features (applied while resolving a merge conflict)
F) Defensive SQL: require in_progress=1 in feature_request_human_input
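Fix F amounts to a guard clause in the UPDATE's WHERE condition; the table and column names below are assumptions based on the commit text:

```python
import sqlite3

def feature_request_human_input(conn: sqlite3.Connection, feature_id: int) -> bool:
    """Flag a feature as needing human input, but only if it is in progress."""
    cur = conn.execute(
        "UPDATE features SET needs_human_input = 1 "
        "WHERE id = ? AND in_progress = 1",  # defensive: no-op unless in progress
        (feature_id,),
    )
    conn.commit()
    return cur.rowcount == 1  # False -> feature was not in progress
```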
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Follow-up fixes after merging PR #183 (graceful pause/drain mode):
- process_manager: _stream_output finally block now transitions from
pausing/paused_graceful to crashed/stopped (not just running), and
cleans up the drain signal file on process exit
- App.tsx: block Reset button and R shortcut during pausing/paused_graceful
- AgentThought/ProgressDashboard: keep thought bubble visible while pausing
- OrchestratorAvatar: add draining/paused cases to animation, glow, and
description switch statements
- AgentMissionControl: show Draining/Paused badge text for new states
- registry.py: remove redundant type annotation to fix mypy no-redef
- process_manager.py: add type:ignore for SQLAlchemy Column assignment
- websocket.py: reclassify test-pass lines as 'testing' not 'success'
- review-pr.md: add post-review recommended action guidance
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Set unique PLAYWRIGHT_CLI_SESSION environment variable for each spawned
agent subprocess to prevent concurrent agents from sharing a single
browser instance and interfering with each other's navigation.
- _spawn_coding_agent: session named "coding-{feature_id}"
- _spawn_coding_agent_batch: session named "coding-{primary_id}"
- _spawn_testing_agent: session named "testing-{counter}" using an
incrementing counter (since multiple testing agents can test
overlapping features, feature ID alone isn't sufficient)
Previously, after migrating from Playwright MCP to CLI, all parallel
agents shared the default browser session, causing them to navigate
away from each other's pages.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add feature_get_ready, feature_get_blocked, and feature_get_graph to
CODING_AGENT_TOOLS, TESTING_AGENT_TOOLS, and INITIALIZER_AGENT_TOOLS.
These read-only tools were available on the MCP server but blocked by
the allowed_tools lists, causing "blocked/not allowed" errors when
agents tried to query project state.
Fix SettingsModal custom base URL input:
- Remove fallback to current settings value when saving, so empty input
is not silently replaced with the existing URL
- Remove .trim() on the input value to prevent cursor jumping while typing
- Fix "Change" button pre-fill using empty string instead of space
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>