Add support for uploading Markdown, Text, Word (.docx), CSV, Excel (.xlsx),
PDF, and PowerPoint (.pptx) files in addition to existing JPEG/PNG image
uploads in the spec creation and project expansion chat interfaces.
Backend changes:
- New server/utils/document_extraction.py: in-memory text extraction for all
document formats using python-docx, openpyxl, PyPDF2, python-pptx (no disk
persistence)
- Rename ImageAttachment to FileAttachment across schemas, routers, and
chat session services
- Add build_attachment_content_blocks() helper in chat_constants.py to route
images as image content blocks and documents as extracted text blocks
- Separate size limits: 5MB for images, 20MB for documents
- Handle extraction errors (corrupt files, encrypted PDFs) gracefully
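The routing helper can be sketched as follows. This is a minimal sketch, not the actual implementation: the content-block shapes follow the Anthropic Messages format, but the attachment dict layout and the `extract_text()` signature are assumptions, and real binary extraction would go through the listed libraries via an in-memory `io.BytesIO`.

```python
import base64

MAX_IMAGE_BYTES = 5 * 1024 * 1024      # 5MB limit for images
MAX_DOCUMENT_BYTES = 20 * 1024 * 1024  # 20MB limit for documents

IMAGE_MIME_TYPES = {"image/jpeg", "image/png"}

def extract_text(data: bytes, mime_type: str) -> str:
    # Plain-text formats decode directly; binary formats (.docx, .xlsx,
    # .pdf, .pptx) would dispatch to python-docx / openpyxl / PyPDF2 /
    # python-pptx here, reading from io.BytesIO so nothing touches disk.
    if mime_type.startswith("text/"):
        return data.decode("utf-8", errors="replace")
    raise ValueError(f"unsupported document type: {mime_type}")

def build_attachment_content_blocks(attachments: list[dict]) -> list[dict]:
    """Route images to image content blocks, documents to text blocks."""
    blocks = []
    for att in attachments:
        mime, data = att["mime_type"], att["data"]
        if mime in IMAGE_MIME_TYPES:
            if len(data) > MAX_IMAGE_BYTES:
                raise ValueError(f"image too large: {att['filename']}")
            blocks.append({
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": mime,
                    "data": base64.b64encode(data).decode("ascii"),
                },
            })
        else:
            if len(data) > MAX_DOCUMENT_BYTES:
                raise ValueError(f"document too large: {att['filename']}")
            try:
                text = extract_text(data, mime)
            except Exception as exc:  # corrupt file, encrypted PDF, ...
                text = f"[could not extract {att['filename']}: {exc}]"
            blocks.append({
                "type": "text",
                "text": f"Contents of {att['filename']}:\n{text}",
            })
    return blocks
```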
Frontend changes:
- Widen accepted MIME types and file extensions in both chat components
- Add resolveMimeType() fallback for browsers that don't set MIME on .md files
- Document attachments display with FileText icon instead of image thumbnail
- ChatMessage renders documents as compact pills with filename and size
- Update help text from "attach images" to "attach files"
Dependencies added: python-docx, openpyxl, PyPDF2, python-pptx
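The resolveMimeType() fallback logic, sketched in Python for consistency with the backend examples (the extension map below is illustrative, not the actual frontend table):

```python
# Extension-to-MIME fallback for files where the browser reports an
# empty type (common for .md on many platforms).
FALLBACK_MIME_TYPES = {
    ".md": "text/markdown",
    ".txt": "text/plain",
    ".csv": "text/csv",
    ".pdf": "application/pdf",
    ".docx": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
    ".xlsx": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
    ".pptx": "application/vnd.openxmlformats-officedocument.presentationml.presentation",
}

def resolve_mime_type(filename: str, browser_type: str) -> str:
    """Prefer the browser-reported MIME type; fall back to the extension."""
    if browser_type:
        return browser_type
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return FALLBACK_MIME_TYPES.get(ext, "application/octet-stream")
```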
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The Claude Code CLI v2.1.45+ emits a `rate_limit_event` message type that
the Python SDK v0.1.19 cannot parse, raising MessageParseError. Two bugs
resulted:
1. **False-positive rate limit**: check_rate_limit_error() matched
"rate_limit" in the exception string "Unknown message type:
rate_limit_event" via both an explicit type check and a regex fallback,
triggering a 15-19s backoff and a query re-send on every session.
2. **One-message-behind**: The MessageParseError killed the
receive_response() async generator while the CLI subprocess was still
alive with buffered response data. Catching the error and returning meant
that response was never consumed, so the next send_message() read the
previous response first, leaving every subsequent exchange one behind.
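Bug 1 is easy to reproduce with a naive substring check. This is a sketch of the failure mode only, not the SDK's or chat_constants.py's actual code:

```python
import re

def naive_is_rate_limit(exc: Exception) -> bool:
    # The buggy pattern: any exception whose text mentions "rate_limit"
    # is treated as a real rate limit, including parse failures.
    return re.search(r"rate.?limit", str(exc), re.IGNORECASE) is not None

# The parse error for the unknown message type contains the substring,
# so the check fires even though no rate limit occurred:
parse_error = ValueError("Unknown message type: rate_limit_event")
assert naive_is_rate_limit(parse_error)  # false positive
```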
Changes:
- chat_constants.py: check_rate_limit_error() now returns (False, None)
for any MessageParseError, blocking both false-positive paths. Added
safe_receive_response() helper that retries receive_response() on
MessageParseError — the SDK's decoupled producer/consumer architecture
(anyio memory channel) allows the new generator to continue reading
remaining messages without data loss. Removed calculate_rate_limit_backoff
re-export and MAX_CHAT_RATE_LIMIT_RETRIES constant.
- spec_chat_session.py, assistant_chat_session.py, expand_chat_session.py:
Replaced retry-with-backoff loops with safe_receive_response() wrapper.
Removed asyncio.sleep backoff, query re-send, and rate_limited yield.
Cleaned up unused imports (asyncio, calculate_rate_limit_backoff,
MAX_CHAT_RATE_LIMIT_RETRIES).
- agent.py: Added inner retry loop around receive_response() with same
MessageParseError skip-and-restart pattern. Removed early-return that
truncated responses.
- types.ts: Removed SpecChatRateLimitedMessage,
AssistantChatRateLimitedMessage, and their union entries.
- useSpecChat.ts, useAssistantChat.ts, useExpandChat.ts: Removed dead
'rate_limited' case handlers.
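A minimal sketch of the safe_receive_response() retry shape, with a stand-in exception and client (names mirror the commit; the SDK's actual API surface and channel internals are assumptions):

```python
import asyncio

class MessageParseError(Exception):
    """Stand-in for the SDK exception raised on unknown message types."""

async def safe_receive_response(client, max_restarts: int = 10):
    # Retry receive_response() when a single message fails to parse.
    # Because the producer (CLI reader task) and consumer (this
    # generator) are decoupled by a memory channel, a fresh generator
    # resumes from the next buffered message instead of dropping the
    # rest of the response.
    restarts = 0
    while True:
        try:
            async for message in client.receive_response():
                yield message
            return  # response finished normally
        except MessageParseError:
            restarts += 1
            if restarts > max_restarts:
                raise  # give up rather than loop forever
            # skip the unparseable message and keep reading

class FakeClient:
    """Stand-in client: yields queued items, raising on a poison value."""
    def __init__(self, items):
        self.items = list(items)

    async def receive_response(self):
        while self.items:
            item = self.items.pop(0)
            if item == "rate_limit_event":
                raise MessageParseError("Unknown message type: rate_limit_event")
            yield item

async def demo():
    client = FakeClient(["text", "rate_limit_event", "result"])
    return [m async for m in safe_receive_response(client)]

messages = asyncio.run(demo())
assert messages == ["text", "result"]  # nothing after the bad message is lost
```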
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

The Claude CLI sends `rate_limit_event` messages that the SDK's
`parse_message()` doesn't recognize, raising `MessageParseError` and
crashing all three chat session types (spec, assistant, expand).
Changes:
- Bump claude-agent-sdk minimum from 0.1.0 to 0.1.39
- Add `check_rate_limit_error()` helper in chat_constants.py that
detects rate limits from both MessageParseError data payloads and
error message text patterns
- Wrap `receive_response()` loops in all three `_query_claude()` methods
with retry-on-rate-limit logic (up to 3 retries with backoff)
- Gracefully log and skip non-rate-limit MessageParseError instead of
crashing the session
- Add `rate_limited` message type to frontend TypeScript types and
handle it in useSpecChat, useAssistantChat, useExpandChat hooks to
show "Rate limited. Retrying in Xs..." system messages
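The retry shape this commit added can be sketched as follows (stand-in exception, payload fields, and backoff values, not the actual chat_constants.py code; the commit above later replaced this approach):

```python
import asyncio
import random

MAX_CHAT_RATE_LIMIT_RETRIES = 3

class MessageParseError(Exception):
    """Stand-in for the SDK parse error, optionally carrying a data payload."""
    def __init__(self, msg, data=None):
        super().__init__(msg)
        self.data = data

def check_rate_limit_error(exc):
    """Return (is_rate_limit, retry_after_seconds or None)."""
    data = getattr(exc, "data", None) or {}
    if data.get("type") == "rate_limit_event":      # structured payload
        return True, data.get("retry_after")
    if "rate limit" in str(exc).lower():            # text-pattern fallback
        return True, None
    return False, None

async def query_with_retries(run_query):
    for attempt in range(MAX_CHAT_RATE_LIMIT_RETRIES + 1):
        try:
            return await run_query()
        except MessageParseError as exc:
            limited, retry_after = check_rate_limit_error(exc)
            if not limited or attempt == MAX_CHAT_RATE_LIMIT_RETRIES:
                raise
            # honor the server-suggested delay, else exponential backoff
            delay = retry_after or (2 ** attempt + random.random())
            await asyncio.sleep(delay)
```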
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

All 5 WebSocket endpoints (expand, spec, assistant, terminal, project)
closed the connection before calling accept() when validation failed.
Starlette turns a close before accept() into a bare HTTP 403 handshake
rejection, giving clients no meaningful error information.
Server changes:
- Move websocket.accept() before all validation checks in every WS handler
- Send JSON error message before closing so clients get actionable errors
- Fix validate_project_name usage (it raises HTTPException rather than
returning a bool)
- ConnectionManager.connect() no longer calls accept() (caller's job)
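The fixed ordering, sketched with a minimal fake WebSocket so the pattern runs without Starlette (the handler shape, error payload, and close code 4004 are illustrative):

```python
import asyncio

class FakeWebSocket:
    """Records calls so the accept/send/close ordering is testable."""
    def __init__(self):
        self.events = []

    async def accept(self):
        self.events.append(("accept",))

    async def send_json(self, payload):
        self.events.append(("send_json", payload))

    async def close(self, code=1000):
        self.events.append(("close", code))

async def ws_endpoint(websocket, project_name, known_projects):
    # Accept FIRST: a close before accept() surfaces to clients as an
    # opaque HTTP 403, so validation failures are reported in-band.
    await websocket.accept()
    if project_name not in known_projects:
        await websocket.send_json(
            {"type": "error", "message": f"Unknown project: {project_name}"})
        # 4000-4999 are application-defined close codes (RFC 6455);
        # clients treat them as non-retryable.
        await websocket.close(code=4004)
        return
    await websocket.send_json({"type": "ready"})
```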
Client changes:
- All 3 WS hooks (useWebSocket, useExpandChat, useSpecChat) skip
reconnection on 4xxx close codes (application errors won't self-resolve)
- Gate expand button, keyboard shortcut, and modal on hasSpec
- Add hasSpec to the useEffect dependency array to prevent a stale closure
- Update keyboard shortcuts help text for E key context
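The reconnection guard reduces to a one-line predicate (sketched in Python for consistency with the other examples; the hooks implement the same check in TypeScript):

```python
def should_reconnect(close_code: int) -> bool:
    # 4000-4999 are application-defined errors (bad project name, missing
    # spec, ...) that will not resolve by retrying; anything else, e.g.
    # 1006 abnormal closure, gets the usual reconnect timer.
    return not (4000 <= close_code <= 4999)
```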
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

- Add language specifier to fenced code block in expand-project.md
- Remove detailed exception strings from WebSocket responses (security)
- Make WebSocket "start" message idempotent to avoid session reset
- Fix race condition in bulk feature creation with row-level lock
- Add validation for starting_priority (must be >= 1)
- Fix _query_claude to handle multiple feature blocks and deduplicate
- Add FileReader error handling in ExpandProjectChat
- Fix disconnect() to clear pending reconnect timeout
- Enable sandbox mode and validate CLI path in expand_chat_session
- Clean up temporary settings file on session close
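A sketch of the serialized bulk create, using SQLite's BEGIN IMMEDIATE in place of the Postgres-style row-level lock (SELECT ... FOR UPDATE); the schema and function names are illustrative, not the actual features router:

```python
import sqlite3

def bulk_create_features(conn, names, starting_priority=None):
    # Assign contiguous priorities under a write transaction so two
    # concurrent bulk creates cannot claim the same priority range.
    # BEGIN IMMEDIATE takes SQLite's write lock up front; in Postgres
    # the equivalent is SELECT ... FOR UPDATE on the contended rows.
    if starting_priority is not None and starting_priority < 1:
        raise ValueError("starting_priority must be >= 1")
    conn.execute("BEGIN IMMEDIATE")
    try:
        if starting_priority is None:
            row = conn.execute(
                "SELECT COALESCE(MAX(priority), 0) + 1 FROM features"
            ).fetchone()
            starting_priority = row[0]
        conn.executemany(
            "INSERT INTO features (name, priority) VALUES (?, ?)",
            [(n, starting_priority + i) for i, n in enumerate(names)])
        conn.commit()
    except Exception:
        conn.rollback()
        raise
```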
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Adds the ability to add multiple features to an existing project through
a natural language conversation with Claude, similar to how initial spec
creation works.
Features:
- New "Expand" button in header (keyboard shortcut: E)
- Full-screen chat interface for describing new features
- Claude reads existing app_spec.txt for context
- Features created directly in database after user approval
- Bulk feature creation endpoint for batch operations
New files:
- .claude/commands/expand-project.md - Claude skill for expansion
- server/services/expand_chat_session.py - Chat session service
- server/routers/expand_project.py - WebSocket endpoint
- ui/src/components/ExpandProjectChat.tsx - Chat UI
- ui/src/components/ExpandProjectModal.tsx - Modal wrapper
- ui/src/hooks/useExpandChat.ts - WebSocket hook
Modified:
- Added POST /bulk endpoint to features router
- Added FeatureBulkCreate schemas
- Integrated Expand button and modal in App.tsx
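Creating features after approval depends on parsing proposed-feature blocks out of Claude's reply. A hypothetical sketch of that step: the `<feature>` wire format, field names, and helper are assumptions for illustration, not the actual _query_claude protocol.

```python
import re

FEATURE_BLOCK_RE = re.compile(r"<feature>(.*?)</feature>", re.DOTALL)

def parse_feature_blocks(response_text: str) -> list[dict]:
    # Hypothetical wire format: each proposed feature arrives as a
    # <feature>...</feature> block of "key: value" lines. Parse every
    # block, not just the first, and deduplicate by name.
    seen, features = set(), []
    for block in FEATURE_BLOCK_RE.findall(response_text):
        fields = {}
        for line in block.strip().splitlines():
            key, _, value = line.partition(":")
            if value:
                fields[key.strip().lower()] = value.strip()
        name = fields.get("name")
        if name and name not in seen:
            seen.add(name)
            features.append(fields)
    return features
```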
Co-Authored-By: Claude <noreply@anthropic.com>