diff --git a/BETA-V3/ide-agent-modes/dev-agent.md b/BETA-V3/ide-agent-modes/dev-agent.md
index 576ca8b5..42c1f4d6 100644
--- a/BETA-V3/ide-agent-modes/dev-agent.md
+++ b/BETA-V3/ide-agent-modes/dev-agent.md
@@ -10,10 +10,10 @@
1. **Contextual Awareness:** Before any coding, MUST load and maintain active knowledge of:
- Assigned story file (e.g., `docs/stories/{epicNumber}.{storyNumber}.story.md`)
- `docs/project-structure.md`
- - `docs/coding-standards.md`
+ - `docs/operational-guidelines.md` (covers Coding Standards, Testing Strategy, Error Handling, Security)
- `docs/tech-stack.md`
- `docs/checklists/story-dod-checklist.txt` (for DoD verification)
-2. **Strict Standards Adherence:** All code MUST strictly follow `docs/coding-standards.md`. Non-negotiable.
+2. **Strict Standards Adherence:** All code MUST strictly follow the 'Coding Standards' section within `docs/operational-guidelines.md`. Non-negotiable.
3. **Dependency Management Protocol:**
- NO new external dependencies unless explicitly approved in the story.
- If a new dependency is needed:
@@ -42,8 +42,7 @@
## Reference Documents (Essential Context)
- Project Structure: `docs/project-structure.md`
-- Coding Standards: `docs/coding-standards.md`
-- Testing Strategy: `docs/testing-strategy.md`
+- Operational Guidelines: `docs/operational-guidelines.md` (covers Coding Standards, Testing Strategy, Error Handling, Security)
- Assigned Story File: `docs/stories/{epicNumber}.{storyNumber}.story.md` (dynamically assigned)
- Story Definition of Done Checklist: `docs/checklists/story-dod-checklist.txt`
- Debugging Log (Managed by Agent): `TODO-revert.md` (project root)
@@ -54,14 +53,14 @@
- Wait for `Status: Approved` story. If not `Approved`, wait.
- Update assigned story to `Status: In-Progress`.
- - CRITICAL: Load and review assigned story, `docs/project-structure.md`, `docs/coding-standards.md`, `docs/tech-stack.md`, and `docs/checklists/story-dod-checklist.txt`. Keep in active context.
+ - CRITICAL: Load and review assigned story, `docs/project-structure.md`, `docs/operational-guidelines.md`, `docs/tech-stack.md`, and `docs/checklists/story-dod-checklist.txt`. Keep in active context.
- Review `TODO-revert.md` for relevant pending reversions.
- Focus on story requirements, acceptance criteria, approved dependencies.
2. **Implementation (& Debugging):**
- Execute story tasks sequentially.
- - CRITICAL: Code MUST strictly follow `docs/coding-standards.md`.
+ - CRITICAL: Code MUST strictly follow the 'Coding Standards' section within `docs/operational-guidelines.md`.
- CRITICAL: If new dependency needed, HALT feature, follow Dependency Management Protocol.
- **Debugging:**
- Activate Debugging Change Management: Log temporary changes to `TODO-revert.md` (rationale, outcome, status) immediately.
@@ -90,7 +89,7 @@
6. **Final Review & Status Update:**
- - Confirm final code adherence to `docs/coding-standards.md` and all DoD items met (including dependency approvals).
+ - Confirm final code adherence to the 'Coding Standards' section within `docs/operational-guidelines.md` and all DoD items met (including dependency approvals).
- Present completed DoD checklist report to user.
- Only after presenting DoD report (all applicable items `[x] Done`), update story `Status: Review`.
- Await user feedback/approval.
diff --git a/BETA-V3/ide-agent-modes/sm-agent.md b/BETA-V3/ide-agent-modes/sm-agent.md
index 3439509d..ecd99be6 100644
--- a/BETA-V3/ide-agent-modes/sm-agent.md
+++ b/BETA-V3/ide-agent-modes/sm-agent.md
@@ -23,14 +23,14 @@
- Find the highest numbered story file in `docs/stories/`, ensure it is marked done OR alert user.
- **If a highest story file exists ({lastEpicNum}.{lastStoryNum}.story.md):**
- Review this file for developer updates/notes.
- - Check `docs/epic{lastEpicNum}.md` for a story numbered `{lastStoryNum + 1}`.
- - If this story exists and its prerequisites (defined within `docs/epic{lastEpicNum}.md`) are 'Done': This is the next story.
- - Else (story not found or prerequisites not met): The next story is the first story in `docs/epic{lastEpicNum + 1}.md` (then `docs/epic{lastEpicNum + 2}.md`, etc.) whose prerequisites are 'Done'.
+ - Check `docs/epic-{lastEpicNum}.md` for a story numbered `{lastStoryNum + 1}`.
+ - If this story exists and its prerequisites (defined within `docs/epic-{lastEpicNum}.md`) are 'Done': This is the next story.
+ - Else (story not found or prerequisites not met): The next story is the first story in `docs/epic-{lastEpicNum + 1}.md` (then `docs/epic-{lastEpicNum + 2}.md`, etc.) whose prerequisites are 'Done'.
- **If no story files exist in `docs/stories/`:**
- - The next story is the first story in `docs/epic1.md` (then `docs/epic2.md`, etc.) whose prerequisites are 'Done'.
+ - The next story is the first story in `docs/epic-1.md` (then `docs/epic-2.md`, etc.) whose prerequisites are 'Done'.
- If no suitable story with 'Done' prerequisites is found, flag as blocked or awaiting prerequisite completion.
-2. **Gather Requirements (from `docs/epicX.md`):**
+2. **Gather Requirements (from `docs/epic-X.md`):**
- Extract: Title, Goal/User Story, Requirements, ACs, Initial Tasks.
- Store original epic requirements for later comparison.
@@ -38,23 +38,20 @@
3. **Gather Technical Context:**
- **Ancillary Docs:** Consult `docs/index.md` for relevant, unlisted documents. Note any that sound useful.
- - **Architecture:** Comprehend `docs/architecture.md` (and `docs/front-end-architecture.md` if UI story) for task formulation. These docs may reference others.
- - **Content Extraction:** From standard refs (`docs/tech-stack.md`, `docs/api-reference.md`, `docs/data-models.md`, `docs/environment-vars.md`, `docs/testing-strategy.md`, `docs/ui-ux-spec.md` if applicable) AND discovered ancillary docs, extract relevant snippets.
- - (Dev Agent has direct access to full `docs/project-structure.md`, general `docs/coding-standards.md`. Note specific `docs/front-end-coding-standards.md` details if relevant and not universally applied by Dev Agent).
+ - **Architecture:** Comprehend `docs/architecture.md` (and `docs/front-end-architecture.md` if UI story) for task formulation. These docs may reference other documents in multiple sections; consult those referenced documents as needed. `docs/index.md` can also help you locate specific documents.
- Review notes from previous 'Done' story, if applicable.
- **Discrepancies:** Note inconsistencies with epic or needed technical changes (e.g., to data models, architectural deviations) for "Deviation Analysis."
4. **Verify Project Structure Alignment:**
- - Cross-reference with `docs/project-structure.md`: check file paths, component locations, naming conventions.
+ - Cross-reference with `docs/project-structure.md` and `docs/front-end-project-structure.md`: check file paths, component locations, naming conventions.
- Identify/document structural conflicts, needed adjustments, or undefined components/paths.
5. **Populate Template (`docs/templates/story-template.md`):**
- Fill: Title, Goal, Requirements, ACs.
- - **Detailed Tasks:** Generate based on architecture, epic. For UI stories, also use `docs/style-guide.md`, `docs/component-guide.md`, and `docs/front-end-coding-standards.md`.
+ - **Detailed Tasks:** Generate tasks based on the architecture and epic, drawing on the style-guide, component-guide, environment-vars, project-structure, front-end-project-structure, operational-guidelines, tech-stack, data-models, and api-reference docs as needed to fill in story-specific details, subtasks, and notes for the Dev Agent. For UI stories, also use `docs/front-end-style-guide.md`, `docs/front-end-component-guide.md`, and `docs/front-end-coding-standards.md`.
- **Inject Context:** Embed extracted content/snippets or precise references (e.g., "Task: Implement `User` model from `docs/data-models.md#User-Model`" or copy if concise).
- - **UI Stories Note for Dev Agent:** "Consult `docs/style-guide.md`, `docs/component-guide.md`, and `docs/front-end-coding-standards.md` for UI tasks."
- Detail testing requirements. Include project structure alignment notes.
- Prepare noted discrepancies (Step 4) for "Deviation Analysis."
@@ -70,7 +67,7 @@
8. **Validate (Interactive User Review):**
- Apply `docs/checklists/story-draft-checklist.md` to draft story.
- - Ensure sufficient context (avoiding full duplication of `docs/project-structure.md` and `docs/coding-standards.md`).
+ - Ensure sufficient context (avoiding full duplication of `docs/project-structure.md` and the 'Coding Standards' section of `docs/operational-guidelines.md`, as the Dev Agent loads the full `operational-guidelines.md`).
- Verify project structure alignment. Resolve gaps or note for user.
- If info missing agent can't derive, set `Status: Draft (Needs Input)`. Flag unresolved conflicts.
- Present checklist summary to user: deviations, structure status, missing info/conflicts.
diff --git a/BETA-V3/tasks/doc-sharding-task.md b/BETA-V3/tasks/doc-sharding-task.md
index 463296a7..c5456bdd 100644
--- a/BETA-V3/tasks/doc-sharding-task.md
+++ b/BETA-V3/tasks/doc-sharding-task.md
@@ -4,63 +4,81 @@ You are now operating as a Technical Documentation Librarian tasked with granula
## Your Task
-Transform large project documents into smaller, granular files within the `docs/` directory by following the `docs/templates/doc-sharding-tmpl.txt` plan. You will create and maintain `docs/index.md` as a central catalog, facilitating easier reference and context injection for other agents and stakeholders.
+Transform large project documents into smaller, granular files within the `docs/` directory by following the `doc-sharding-tmpl.txt` plan. You will create and maintain `docs/index.md` as a central catalog, facilitating easier reference and context injection for other agents and stakeholders. You will only process the documents and specific sections within them as requested by the user and detailed in the sharding plan.
## Your Approach
-1. First, confirm:
+1. First, ask the user to specify which of the available source documents (PRD, Main Architecture, Front-End Architecture) they wish to process in this session.
+2. Next, confirm:
- - Access to `docs/templates/doc-sharding-tmpl.txt`
- - Location of source documents to be processed
- - Write access to the `docs/` directory
- - If any prerequisites are missing, request them before proceeding
+ - Access to `doc-sharding-tmpl.txt`.
+ - Location of the source documents the user wants to process.
+ - Write access to the `docs/` directory.
+ - If any prerequisites are missing for the selected documents, request them before proceeding.
-2. For each document granulation:
+3. For each _selected_ document granulation:
- - Follow the structure defined in `doc-sharding-tmpl.txt`
- - Extract content verbatim - no summarization or reinterpretation
- - Create self-contained markdown files
- - Maintain information integrity
- - Use clear, consistent file naming as specified in the plan
+ - Follow the structure defined in `doc-sharding-tmpl.txt`, processing only the sections relevant to the specific document type.
+ - Extract content verbatim - no summarization or reinterpretation
+ - Create self-contained markdown files
+ - Add Standard Description: At the beginning of each created file, immediately after the main H1 heading (which is typically derived from the source section title), add a blockquote with the following format:
+ ```markdown
+ > This document is a granulated shard from the main "[Original Source Document Title/Filename]" focusing on "[Primary Topic of the Shard]".
+ ```
+ - _[Original Source Document Title/Filename]_ should be the name or path of the source document being processed (e.g., "Main Architecture Document" or `3-architecture.md`).
+ - _[Primary Topic of the Shard]_ should be a concise description of the shard's content, ideally derived from the first item in the "Source Section(s) to Copy" field in the `doc-sharding-tmpl.txt` for that shard, or a descriptive name based on the target filename (e.g., "API Reference", "Epic 1 User Stories", "Frontend State Management").
+ - Maintain information integrity
+ - Use clear, consistent file naming as specified in the plan
-3. For `docs/index.md`:
+4. For `docs/index.md`:
- - Create if absent
- - Add descriptive titles and relative markdown links for each granular file
- - Organize content logically
- - Include brief descriptions where helpful
- - Ensure comprehensive cataloging
+ - Create if absent
+ - Add descriptive titles and relative markdown links for each granular file
+ - Organize content logically
+ - Include brief descriptions where helpful
+ - Ensure comprehensive cataloging
-4. Optional enhancements:
- - Add cross-references between related granular documents
- - Implement any additional organization specified in the sharding template
+5. Optional enhancements:
+ - Add cross-references between related granular documents
+ - Implement any additional organization specified in the sharding template
## Rules of Operation
1. NEVER modify source content during extraction
2. Create files exactly as specified in the sharding plan
-3. If consolidating content from multiple sources, preview and seek approval
-4. Maintain all original context and meaning
-5. Keep file names and paths consistent with the plan
-6. Update `index.md` for every new file created
+3. Prepend Standard Description: Ensure every generated shard file includes the standard description blockquote immediately after its H1 heading as specified in the "Approach" section.
+4. If consolidating content from multiple sources, preview and seek approval
+5. Maintain all original context and meaning
+6. Keep file names and paths consistent with the plan
+7. Update `index.md` for every new file created
## Required Input
Please provide:
-1. Location of source document(s) to be granulated
-2. Confirmation that `docs/templates/doc-sharding-tmpl.txt` exists and is populated
-3. Write access confirmation for the `docs/` directory
+1. **Source Document Paths:**
+ - Path to the Product Requirements Document (PRD) (e.g., `project/docs/PRD.md` or `../8-prd-po-updated.md`), if you want to process it.
+ - Path to the main Architecture Document (e.g., `project/docs/architecture.md` or `../3-architecture.md`), if you want to process it.
+ - Path to the Front-End Architecture Document (e.g., `project/docs/frontend-architecture.md` or `../5-front-end-architecture.txt`), if you want to process it.
+2. **Documents to Process:**
+ - Clearly state which of the provided documents you want me to shard in this session (e.g., "Process only the PRD," or "Process the Main Architecture and Front-End Architecture documents," or "Process all provided documents").
+3. **Sharding Plan Confirmation:**
+ - Confirmation that `docs/templates/doc-sharding-tmpl.txt` exists, is populated, and reflects your desired sharding strategy.
+4. **Output Directory & Index Confirmation:**
+ - The target directory for the sharded markdown files. (Default: `docs/` relative to the workspace or project root).
+ - Confirmation that an `index.md` file should be created or updated in this target directory to catalog the sharded files.
+5. **Write Access:**
+ - Confirmation of write access to the specified output directory.
## Process Steps
-1. I will first validate access to all required files and directories
-2. For each source document:
- - I will identify sections as per the sharding plan
- - Show you the proposed granulation structure
- - Upon your approval, create the granular files
- - Update the index
-3. I will maintain a log of all created files
-4. I will provide a final report of all changes made
+1. I will first ask you to specify which source documents you want me to process.
+2. Then, I will validate access to `docs/templates/doc-sharding-tmpl.txt` and the source documents you've selected.
+3. I will confirm the output directory for sharded files and the plan to create/update `index.md` there.
+4. For each _selected_ source document:
+ - I will identify sections as per the sharding plan, relevant to that document type.
+ - Show you the proposed granulation structure for that document.
+ - Upon your approval, create the granular files and update `index.md` accordingly.
+5. I will maintain a log of all created files
+6. I will provide a final report of all changes made
Would you like to proceed with document granulation? Please provide the required input above.
diff --git a/BETA-V3/templates/doc-sharding-tmpl.txt b/BETA-V3/templates/doc-sharding-tmpl.txt
index 0e28d47a..4b62b5b4 100644
--- a/BETA-V3/templates/doc-sharding-tmpl.txt
+++ b/BETA-V3/templates/doc-sharding-tmpl.txt
@@ -1,20 +1,17 @@
# Document Sharding Plan Template
-This plan directs the PO/POSM agent on how to break down large source documents into smaller, granular files during its Librarian Phase. The agent will refer to this plan to identify source documents, the specific sections to extract, and the target filenames for the sharded content.
+This plan directs the agent on how to break down large source documents into smaller, granular files during its Librarian Phase. The agent will refer to this plan to identify source documents, the specific sections to extract, and the target filenames for the sharded content.
---
## 1. Source Document: PRD (Project Requirements Document)
-* **Note to Agent:** Confirm the exact filename of the PRD with the user (e.g., `PRD.md`, `ProjectRequirements.md`).
+* **Note to Agent:** Confirm the exact filename of the PRD with the user (e.g., `PRD.md`, `ProjectRequirements.md`, `8-prd-po-updated.md`).
### 1.1. Epic Granulation
- **Instruction:** For each Epic identified within the PRD:
-- **Source Section(s) to Copy:** The complete text for the Epic, including its main description, goals, and all associated user stories or detailed requirements under that Epic.
+- **Source Section(s) to Copy:** The complete text for the Epic, including its main description, goals, and all associated user stories or detailed requirements under that Epic. Be sure to capture content starting from a heading like "**Epic X:**" up to the next such heading or the end of the "Epic Overview" section.
- **Target File Pattern:** `docs/epic-<N>.md`
-
-### 1.2. Other Potential PRD Extractions (Examples)
-- **Source Section(s) to Copy:** "User Personas" (if present and detailed).
-- **Target File:** `docs/prd-user-personas.md`
+ - *Agent Note: `<N>` should correspond to the Epic number.*
---
@@ -25,24 +22,36 @@ This plan directs the PO/POSM agent on how to break down large source documents
- **Source Section(s) to Copy:** Section(s) detailing "API Reference", "API Endpoints", or "Service Interfaces".
- **Target File:** `docs/api-reference.md`
-- **Source Section(s) to Copy:** Section(s) detailing "Coding Standards", "Development Guidelines", or "Best Practices".
-- **Target File:** `docs/coding-standards.md`
-
- **Source Section(s) to Copy:** Section(s) detailing "Data Models", "Database Schema", "Entity Definitions".
- **Target File:** `docs/data-models.md`
-- **Source Section(s) to Copy:** Section(s) detailing "Environment Variables", "Configuration Settings", "Deployment Parameters".
+- **Source Section(s) to Copy:** Section(s) titled "Environment Variables Documentation", "Configuration Settings", "Deployment Parameters", or relevant subsections within "Infrastructure and Deployment Overview" if a dedicated section is not found.
- **Target File:** `docs/environment-vars.md`
+ - *Agent Note: Prioritize a dedicated 'Environment Variables' section or linked 'environment-vars.md' source if available. If not, extract relevant configuration details from 'Infrastructure and Deployment Overview'. This shard is for specific variable definitions and usage.*
- **Source Section(s) to Copy:** Section(s) detailing "Project Structure".
- **Target File:** `docs/project-structure.md`
- *Agent Note: If the project involves multiple repositories (not a monorepo), ensure this file clearly describes the structure of each relevant repository or links to sub-files if necessary.*
-- **Source Section(s) to Copy:** Section(s) detailing "Technology Stack", "Key Technologies", "Libraries and Frameworks".
+- **Source Section(s) to Copy:** Section(s) detailing "Technology Stack", "Key Technologies", "Libraries and Frameworks", or "Definitive Tech Stack Selections".
- **Target File:** `docs/tech-stack.md`
-- **Source Section(s) to Copy:** Section(s) detailing "Testing Strategy", "Testing Decisions", "QA Processes".
-- **Target File:** `docs/testing-decisions.md`
+- **Source Section(s) to Copy:** Sections detailing "Coding Standards", "Development Guidelines", "Best Practices", "Testing Strategy", "Testing Decisions", "QA Processes", "Overall Testing Strategy", "Error Handling Strategy", and "Security Best Practices".
+- **Target File:** `docs/operational-guidelines.md`
+ - *Agent Note: This file consolidates several key operational aspects. Ensure that the content from each source section ("Coding Standards", "Testing Strategy", "Error Handling Strategy", "Security Best Practices") is clearly delineated under its own H3 (###) or H4 (####) heading within this document.*
+
+- **Source Section(s) to Copy:** Section(s) titled "Component View" (including sub-sections like "Architectural / Design Patterns Adopted").
+- **Target File:** `docs/component-view.md`
+
+- **Source Section(s) to Copy:** Section(s) titled "Core Workflow / Sequence Diagrams" (including all sub-diagrams).
+- **Target File:** `docs/sequence-diagrams.md`
+
+- **Source Section(s) to Copy:** Section(s) titled "Infrastructure and Deployment Overview".
+- **Target File:** `docs/infra-deployment.md`
+ - *Agent Note: This is for the broader overview, distinct from the specific `docs/environment-vars.md`.*
+
+- **Source Section(s) to Copy:** Section(s) titled "Key Reference Documents".
+- **Target File:** `docs/key-references.md`
---
@@ -50,18 +59,33 @@ This plan directs the PO/POSM agent on how to break down large source documents
* **Note to Agent:** Confirm filenames with the user (e.g., `front-end-architecture.md`, `front-end-spec.md`, `ui-guidelines.md`). Multiple FE documents might exist.
### 3.1. Front-End Granules
-- **Source Section(s) to Copy:** Section(s) detailing "Front-End Project Structure" (if distinct from the main `project-structure.md`, e.g., for a separate front-end repository or a complex monorepo FE workspace).
-- **Target File:** `docs/fe-project-structure.md`
+- **Source Section(s) to Copy:** Section(s) detailing "Front-End Project Structure" or "Detailed Frontend Directory Structure".
+- **Target File:** `docs/front-end-project-structure.md`
-- **Source Section(s) to Copy:** Section(s) detailing "UI Style Guide", "Brand Guidelines", "Visual Design Specifications".
-- **Target File:** `docs/style-guide.md`
+- **Source Section(s) to Copy:** Section(s) detailing "UI Style Guide", "Brand Guidelines", "Visual Design Specifications", or "Styling Approach".
+- **Target File:** `docs/front-end-style-guide.md`
+ - *Agent Note: This section might be a sub-section or refer to other documents (e.g., `ui-ux-spec.txt`). Extract the core styling philosophy and approach defined within the frontend architecture document itself.*
-- **Source Section(s) to Copy:** Section(s) detailing "Component Library", "Reusable UI Components Guide", "Atomic Design Elements".
-- **Target File:** `docs/component-guide.md`
+- **Source Section(s) to Copy:** Section(s) detailing "Component Library", "Reusable UI Components Guide", "Atomic Design Elements", or "Component Breakdown & Implementation Details".
+- **Target File:** `docs/front-end-component-guide.md`
- **Source Section(s) to Copy:** Section(s) detailing "Front-End Coding Standards" (specifically for UI development, e.g., JavaScript/TypeScript style, CSS naming conventions, accessibility best practices for FE).
- **Target File:** `docs/front-end-coding-standards.md`
+ - *Agent Note: A dedicated top-level section for this might not exist. If not found, this shard might be empty or require cross-referencing with the main architecture's coding standards. Extract any front-end-specific coding conventions mentioned.*
+
+- **Source Section(s) to Copy:** Section(s) titled "State Management In-Depth".
+- **Target File:** `docs/front-end-state-management.md`
+
+- **Source Section(s) to Copy:** Section(s) titled "API Interaction Layer".
+- **Target File:** `docs/front-end-api-interaction.md`
+
+- **Source Section(s) to Copy:** Section(s) titled "Routing Strategy".
+- **Target File:** `docs/front-end-routing-strategy.md`
+
+- **Source Section(s) to Copy:** Section(s) titled "Frontend Testing Strategy".
+- **Target File:** `docs/front-end-testing-strategy.md`
+
---
-CRITICAL: **Index Management:** After creating each granular file, update `docs/index.md` as needed.
+CRITICAL: **Index Management:** After creating the files, update `docs/index.md` as needed to reference and describe each new doc. Describe only each doc's purpose; do not mention granules or where the content was sharded from, since the index may also contain references to other documents.
diff --git a/BETA-V3/v3-demos/project1/brief.txt b/BETA-V3/v3-demos/full-stack-app-demo/0-brief.md
similarity index 100%
rename from BETA-V3/v3-demos/project1/brief.txt
rename to BETA-V3/v3-demos/full-stack-app-demo/0-brief.md
diff --git a/BETA-V3/v3-demos/project1/technical-preferences.txt b/BETA-V3/v3-demos/full-stack-app-demo/0-technical-preferences.md
similarity index 100%
rename from BETA-V3/v3-demos/project1/technical-preferences.txt
rename to BETA-V3/v3-demos/full-stack-app-demo/0-technical-preferences.md
diff --git a/BETA-V3/v3-demos/project1/prd.txt b/BETA-V3/v3-demos/full-stack-app-demo/1-prd.md
similarity index 100%
rename from BETA-V3/v3-demos/project1/prd.txt
rename to BETA-V3/v3-demos/full-stack-app-demo/1-prd.md
diff --git a/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/api-reference.md b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/api-reference.md
new file mode 100644
index 00000000..56fb996e
--- /dev/null
+++ b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/api-reference.md
@@ -0,0 +1,191 @@
+# API Reference
+
+> This document is a granulated shard from the main "3-architecture.md" focusing on "API Reference".
+
+### External APIs Consumed
+
+#### 1\. Hacker News (HN) Algolia API
+
+- **Purpose:** To retrieve top Hacker News posts and their associated comments.
+- **Base URL(s):** Production: `http://hn.algolia.com/api/v1/`
+- **Authentication:** None required.
+- **Key Endpoints Used:**
+ - **`GET /search` (for top posts)**
+ - Description: Retrieves stories currently on the Hacker News front page.
+ - Request Parameters: `tags=front_page`
+ - Example Request: `curl "http://hn.algolia.com/api/v1/search?tags=front_page"`
+ - Post-processing: Application sorts fetched stories by `points` (descending), selects up to top 30.
+ - Success Response Schema (Code: `200 OK`): Standard Algolia search response containing 'hits' array with story objects.
+ ```json
+ {
+ "hits": [
+ {
+ "objectID": "string",
+ "created_at": "string",
+ "title": "string",
+ "url": "string",
+ "author": "string",
+ "points": "number",
+ "story_text": "string",
+ "num_comments": "number",
+ "_tags": ["string"]
+ }
+ ],
+ "nbHits": "number",
+ "page": "number",
+ "nbPages": "number",
+ "hitsPerPage": "number"
+ }
+ ```
+ - **`GET /items/{objectID}` (for comments)**
+ - Description: Retrieves a specific story item by its `objectID` to get its full comment tree from the `children` field. Called for each selected top story.
+ - Success Response Schema (Code: `200 OK`): Standard Algolia item response.
+ ```json
+ {
+ "id": "number",
+ "created_at": "string",
+ "author": "string",
+ "text": "string",
+ "parent_id": "number",
+ "story_id": "number",
+ "children": [
+ {
+ /* nested comment structure */
+ }
+ ]
+ }
+ ```
+- **Rate Limits:** Generous for public use; daily calls are fine.
+- **Link to Official Docs:** [https://hn.algolia.com/api](https://hn.algolia.com/api)
+
+#### 2\. Play.ht API
+
+- **Purpose:** To generate AI-powered podcast versions of the newsletter content.
+- **Base URL(s):** Production: `https://api.play.ai/api/v1`
+- **Authentication:** API Key (`X-USER-ID` header) and Bearer Token (`Authorization` header). Stored as `PLAYHT_USER_ID` and `PLAYHT_API_KEY`.
+- **Key Endpoints Used:**
+ - **`POST /playnotes`**
+ - Description: Initiates the text-to-speech conversion.
+ - Request Headers: `Authorization: Bearer {PLAYHT_API_KEY}`, `X-USER-ID: {PLAYHT_USER_ID}`, `Content-Type: multipart/form-data`, `Accept: application/json`.
+ - Request Body Schema: `multipart/form-data`
+ - `sourceFile`: `string (binary)` (Preferred: HTML newsletter content as file upload.)
+ - `sourceFileUrl`: `string (uri)` (Alternative: URL to hosted newsletter content if `sourceFile` is problematic.)
+ - `synthesisStyle`: `string` (Required, e.g., "podcast")
+ - `voice1`: `string` (Required, Voice ID)
+ - `voice1Name`: `string` (Required)
+ - `voice1Gender`: `string` (Required)
+ - `webHookUrl`: `string (uri)` (Required, e.g., `/api/webhooks/playht`)
+ - **Note on Content Delivery:** MVP uses `sourceFile`. If issues arise, pivot to `sourceFileUrl` (e.g., content temporarily in Supabase Storage).
+ - Success Response Schema (Code: `201 Created`):
+ ```json
+ {
+ "id": "string",
+ "ownerId": "string",
+ "name": "string",
+ "sourceFileUrl": "string",
+ "audioUrl": "string",
+ "synthesisStyle": "string",
+ "voice1": "string",
+ "voice1Name": "string",
+ "voice1Gender": "string",
+ "webHookUrl": "string",
+ "status": "string",
+ "duration": "number",
+ "requestedAt": "string",
+ "createdAt": "string"
+ }
+ ```
+- **Webhook Handling:** Endpoint `/api/webhooks/playht` receives `POST` from Play.ht.
+ - Request Body Schema (from Play.ht):
+ ```json
+ { "id": "string", "audioUrl": "string", "status": "string" }
+ ```
+- **Rate Limits:** Refer to official Play.ht documentation.
+- **Link to Official Docs:** [https://docs.play.ai/api-reference/playnote/post](https://docs.play.ai/api-reference/playnote/post)
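+
+For illustration, a minimal sketch of how the `PodcastGenerationService` might call this endpoint from a Supabase Function (Deno runtime assumed; `newsletterHtml`, the voice values, and `appBaseUrl` are placeholders, not values defined in this document):
+
+```typescript
+// Illustrative sketch only - initiates a PlayNote using the fields listed above.
+const newsletterHtml = "<h1>BMad DiCaster - Daily Digest</h1>"; // placeholder content
+const appBaseUrl = "https://example.com"; // placeholder deployment URL
+
+const form = new FormData();
+form.append("sourceFile", new Blob([newsletterHtml], { type: "text/html" }), "newsletter.html");
+form.append("synthesisStyle", "podcast");
+form.append("voice1", "<play-ht-voice-id>"); // placeholder Play.ht voice ID
+form.append("voice1Name", "Host"); // placeholder
+form.append("voice1Gender", "female"); // placeholder
+form.append("webHookUrl", `${appBaseUrl}/api/webhooks/playht`);
+
+const res = await fetch("https://api.play.ai/api/v1/playnotes", {
+  method: "POST",
+  headers: {
+    Authorization: `Bearer ${Deno.env.get("PLAYHT_API_KEY")}`,
+    "X-USER-ID": Deno.env.get("PLAYHT_USER_ID") ?? "",
+    Accept: "application/json",
+    // Content-Type (multipart/form-data with boundary) is set automatically for a FormData body.
+  },
+  body: form,
+});
+
+const playNote = await res.json(); // expect 201 Created with the response schema shown above
+```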
+
+#### 3\. LLM Provider (Facade for Summarization)
+
+- **Purpose:** To generate summaries for articles and comment threads.
+- **Configuration:** Via environment variables (`LLM_PROVIDER_TYPE`, `OLLAMA_API_URL`, `REMOTE_LLM_API_KEY`, `REMOTE_LLM_API_URL`, `LLM_MODEL_NAME`).
+- **Facade Interface (`LLMFacade` in `supabase/functions/_shared/llm-facade.ts`):**
+
+ ```typescript
+ // Located in supabase/functions/_shared/llm-facade.ts
+ export interface LLMSummarizationOptions {
+ prompt?: string;
+ maxLength?: number;
+ }
+
+ export interface LLMFacade {
+ generateSummary(
+ textToSummarize: string,
+ options?: LLMSummarizationOptions
+ ): Promise<string>;
+ }
+ ```
+
+- **Implementations:**
+ - **Local Ollama Adapter:** HTTP requests to `OLLAMA_API_URL`.
+ - Request Body (example for `/api/generate`): `{"model": "string", "prompt": "string", "stream": false}`
+ - Response Body (example): `{"model": "string", "response": "string", ...}`
+ - **Remote LLM API Adapter:** Authenticated HTTP requests to `REMOTE_LLM_API_URL`. Schemas depend on the provider.
+- **Rate Limits:** Provider-dependent.
+- **Link to Official Docs:** Ollama: [https://github.com/ollama/ollama/blob/main/docs/api.md](https://github.com/ollama/ollama/blob/main/docs/api.md)
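+
+A minimal sketch of what the Local Ollama Adapter behind this facade might look like (illustrative only; the import path, defaults, and prompt wrapping are assumptions based on the interface and request/response shapes above):
+
+```typescript
+// Illustrative sketch only - assumes the LLMFacade interface shown above.
+import type { LLMFacade, LLMSummarizationOptions } from "./llm-facade.ts";
+
+export class OllamaAdapter implements LLMFacade {
+  constructor(
+    private apiUrl: string = Deno.env.get("OLLAMA_API_URL") ?? "http://localhost:11434",
+    private model: string = Deno.env.get("LLM_MODEL_NAME") ?? "llama3" // placeholder default
+  ) {}
+
+  async generateSummary(
+    textToSummarize: string,
+    options?: LLMSummarizationOptions
+  ): Promise<string> {
+    const prompt = options?.prompt
+      ? `${options.prompt}\n\n${textToSummarize}`
+      : `Summarize the following text:\n\n${textToSummarize}`;
+
+    // Matches the /api/generate request body shape described above.
+    const res = await fetch(`${this.apiUrl}/api/generate`, {
+      method: "POST",
+      headers: { "Content-Type": "application/json" },
+      body: JSON.stringify({ model: this.model, prompt, stream: false }),
+    });
+    if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
+
+    const data = await res.json();
+    return data.response; // Ollama returns the generated text in `response`.
+  }
+}
+```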
+
+#### 4\. Nodemailer (Email Delivery Service)
+
+- **Purpose:** To send generated HTML newsletters.
+- **Interaction Type:** Library integration within `NewsletterGenerationService` via `NodemailerFacade` in `supabase/functions/_shared/nodemailer-facade.ts`.
+- **Configuration:** Via SMTP environment variables (`SMTP_HOST`, `SMTP_PORT`, `SMTP_USER`, `SMTP_PASS`).
+- **Key Operations:** Create transporter, construct email message (From, To, Subject, HTML), send email.
+- **Link to Official Docs:** [https://nodemailer.com/](https://nodemailer.com/)
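+
+A minimal sketch of the kind of wrapper the `NodemailerFacade` might provide (illustrative only; a Node-compatible runtime and the SMTP variables listed above are assumed, and the sender address is a placeholder):
+
+```typescript
+// Illustrative sketch only - sends one newsletter email via the configured SMTP server.
+import nodemailer from "nodemailer";
+
+export async function sendNewsletterEmail(to: string, subject: string, html: string): Promise<void> {
+  const transporter = nodemailer.createTransport({
+    host: process.env.SMTP_HOST,
+    port: Number(process.env.SMTP_PORT ?? 587),
+    auth: { user: process.env.SMTP_USER, pass: process.env.SMTP_PASS },
+  });
+
+  await transporter.sendMail({
+    from: process.env.SMTP_USER, // placeholder sender; not specified in this document
+    to,
+    subject,
+    html,
+  });
+}
+```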
+
+### Internal APIs Provided (by BMad DiCaster)
+
+#### 1\. Workflow Trigger API
+
+- **Purpose:** To manually initiate the daily content processing pipeline.
+- **Endpoint Path:** `/api/system/trigger-workflow` (Next.js API Route Handler)
+- **Method:** `POST`
+- **Authentication:** API Key in `X-API-KEY` header (matches `WORKFLOW_TRIGGER_API_KEY` env var).
+- **Request Body:** MVP: Empty or `{}`.
+- **Success Response (`202 Accepted`):** `{"message": "Daily workflow triggered successfully. Processing will occur asynchronously.", "jobId": ""}`
+- **Error Response:** `400 Bad Request`, `401 Unauthorized`, `500 Internal Server Error`.
+- **Action:** Creates a record in `workflow_runs` table and initiates the pipeline.
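+
+For illustration, a caller (e.g., a CLI script or cron job) might invoke this endpoint as follows (the base URL is a placeholder; only the header name and response shape come from this document):
+
+```typescript
+// Illustrative sketch only - triggers the daily workflow and reads back the jobId.
+const res = await fetch("https://<your-vercel-deployment>/api/system/trigger-workflow", {
+  method: "POST",
+  headers: {
+    "X-API-KEY": process.env.WORKFLOW_TRIGGER_API_KEY ?? "",
+    "Content-Type": "application/json",
+  },
+  body: JSON.stringify({}), // MVP: empty body
+});
+
+if (res.status === 202) {
+  const { jobId } = await res.json();
+  console.log(`Workflow triggered, jobId: ${jobId}`); // poll /api/system/workflow-status/{jobId} with this id
+}
+```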
+
+#### 2\. Workflow Status API
+
+- **Purpose:** Allow developers/admins to check the status of a specific workflow run.
+- **Endpoint Path:** `/api/system/workflow-status/{jobId}` (Next.js API Route Handler)
+- **Method:** `GET`
+- **Authentication:** API Key in `X-API-KEY` header.
+- **Request Parameters:** `jobId` (Path parameter).
+- **Success Response (`200 OK`):**
+ ```json
+ {
+ "jobId": "",
+ "createdAt": "timestamp",
+ "lastUpdatedAt": "timestamp",
+ "status": "string",
+ "currentStep": "string",
+ "errorMessage": "string?",
+ "details": {
+ /* JSONB object with step-specific progress */
+ }
+ }
+ ```
+- **Error Response:** `401 Unauthorized`, `404 Not Found`, `500 Internal Server Error`.
+- **Action:** Retrieves record from `workflow_runs` for the given `jobId`.
+
+#### 3\. Play.ht Webhook Receiver
+
+- **Purpose:** To receive status updates and podcast audio URLs from Play.ht.
+- **Endpoint Path:** `/api/webhooks/playht` (Next.js API Route Handler)
+- **Method:** `POST`
+- **Authentication:** Implement verification (e.g., shared secret token).
+- **Request Body Schema (Expected from Play.ht):**
+ ```json
+ { "id": "string", "audioUrl": "string", "status": "string" }
+ ```
+- **Success Response (`200 OK`):** `{"message": "Webhook received successfully"}`
+- **Action:** Updates `newsletters` and `workflow_runs` tables.
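+
+For illustration, a minimal App Router route handler for this webhook might look like the following sketch (the secret header and env var names and the database update are assumptions; only the payload and response shapes come from this document):
+
+```typescript
+// Illustrative sketch only - app/api/webhooks/playht/route.ts
+import { NextResponse } from "next/server";
+
+export async function POST(request: Request) {
+  // Shared-secret verification is assumed; the header and env var names are placeholders.
+  if (request.headers.get("x-webhook-secret") !== process.env.PLAYHT_WEBHOOK_SECRET) {
+    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
+  }
+
+  const { id, audioUrl, status } = await request.json();
+
+  // Update `newsletters` (podcast_url, podcast_status) and `workflow_runs.details`
+  // for the run associated with this Play.ht job id, e.g. via the Supabase server client.
+
+  return NextResponse.json({ message: "Webhook received successfully" });
+}
+```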
diff --git a/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/architecture.md b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/architecture.md
new file mode 100644
index 00000000..b82ee2fe
--- /dev/null
+++ b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/architecture.md
@@ -0,0 +1,318 @@
+# BMad DiCaster Architecture Document
+
+## Introduction / Preamble
+
+This document outlines the overall project architecture for BMad DiCaster, including backend systems, shared services, and non-UI specific concerns. Its primary goal is to serve as the guiding architectural blueprint for AI-driven development, ensuring consistency and adherence to chosen patterns and technologies.
+
+**Relationship to Frontend Architecture:**
+This project includes a significant user interface. A separate Frontend Architecture Document (expected to be named `frontend-architecture.md` and linked in "Key Reference Documents" once created) will detail the frontend-specific design and MUST be used in conjunction with this document. Core technology stack choices documented herein (see "Definitive Tech Stack Selections") are definitive for the entire project, including any frontend components.
+
+## Table of Contents
+
+- [Introduction / Preamble](#introduction--preamble)
+- [Technical Summary](#technical-summary)
+- [High-Level Overview](#high-level-overview)
+- [Component View](#component-view)
+ - [Architectural / Design Patterns Adopted](#architectural--design-patterns-adopted)
+- [Workflow Orchestration and Status Management](#workflow-orchestration-and-status-management)
+- [Project Structure](#project-structure)
+ - [Key Directory Descriptions](#key-directory-descriptions)
+ - [Monorepo Management](#monorepo-management)
+ - [Notes](#notes)
+- [API Reference](#api-reference)
+ - [External APIs Consumed](#external-apis-consumed)
+ - [1. Hacker News (HN) Algolia API](#1-hacker-news-hn-algolia-api)
+ - [2. Play.ht API](#2-playht-api)
+ - [3. LLM Provider (Facade for Summarization)](#3-llm-provider-facade-for-summarization)
+ - [4. Nodemailer (Email Delivery Service)](#4-nodemailer-email-delivery-service)
+ - [Internal APIs Provided (by BMad DiCaster)](#internal-apis-provided-by-bmad-dicaster)
+ - [1. Workflow Trigger API](#1-workflow-trigger-api)
+ - [2. Workflow Status API](#2-workflow-status-api)
+ - [3. Play.ht Webhook Receiver](#3-playht-webhook-receiver)
+- [Data Models](#data-models)
+ - [Core Application Entities / Domain Objects](#core-application-entities--domain-objects)
+ - [1. `WorkflowRun`](#1-workflowrun)
+ - [2. `HNPost`](#2-hnpost)
+ - [3. `HNComment`](#3-hncomment)
+ - [4. `ScrapedArticle`](#4-scrapedarticle)
+ - [5. `ArticleSummary`](#5-articlesummary)
+ - [6. `CommentSummary`](#6-commentsummary)
+ - [7. `Newsletter`](#7-newsletter)
+ - [8. `Subscriber`](#8-subscriber)
+ - [9. `SummarizationPrompt`](#9-summarizationprompt)
+ - [10. `NewsletterTemplate`](#10-newslettertemplate)
+ - [Database Schemas (Supabase PostgreSQL)](#database-schemas-supabase-postgresql)
+ - [1. `workflow_runs`](#1-workflow_runs)
+ - [2. `hn_posts`](#2-hn_posts)
+ - [3. `hn_comments`](#3-hn_comments)
+ - [4. `scraped_articles`](#4-scraped_articles)
+ - [5. `article_summaries`](#5-article_summaries)
+ - [6. `comment_summaries`](#6-comment_summaries)
+ - [7. `newsletters`](#7-newsletters)
+ - [8. `subscribers`](#8-subscribers)
+ - [9. `summarization_prompts`](#9-summarization_prompts)
+ - [10. `newsletter_templates`](#10-newsletter_templates)
+- [Core Workflow / Sequence Diagrams](#core-workflow--sequence-diagrams)
+ - [1. Daily Workflow Initiation & HN Content Acquisition](#1-daily-workflow-initiation--hn-content-acquisition)
+ - [2. Article Scraping & Summarization Flow](#2-article-scraping--summarization-flow)
+ - [3. Newsletter, Podcast, and Delivery Flow](#3-newsletter-podcast-and-delivery-flow)
+- [Definitive Tech Stack Selections](#definitive-tech-stack-selections)
+- [Infrastructure and Deployment Overview](#infrastructure-and-deployment-overview)
+- [Error Handling Strategy](#error-handling-strategy)
+- [Coding Standards](#coding-standards)
+ - [Detailed Language & Framework Conventions](#detailed-language--framework-conventions)
+ - [TypeScript/Node.js (Next.js & Supabase Functions) Specifics](#typescriptnodejs-nextjs--supabase-functions-specifics)
+- [Overall Testing Strategy](#overall-testing-strategy)
+- [Security Best Practices](#security-best-practices)
+- [Key Reference Documents](#key-reference-documents)
+- [Change Log](#change-log)
+- [Prompt for Design Architect: Frontend Architecture Definition](#prompt-for-design-architect-frontend-architecture-definition)
+
+## Technical Summary
+
+BMad DiCaster is a web application designed to provide daily, concise summaries of top Hacker News (HN) posts, delivered as an HTML newsletter and an optional AI-generated podcast, accessible via a Next.js web interface. The system employs a serverless, event-driven architecture hosted on Vercel, with Supabase providing PostgreSQL database services and function hosting. Key components include services for HN content retrieval, article scraping (using Cheerio), AI-powered summarization (via a configurable LLM facade for Ollama/remote APIs), podcast generation (Play.ht), newsletter generation (Nodemailer), and workflow orchestration. The architecture emphasizes modularity, clear separation of concerns (pragmatic hexagonal approach for complex functions), and robust error handling, aiming for efficient development, particularly by AI developer agents.
+
+## High-Level Overview
+
+The BMad DiCaster application will adopt a **serverless, event-driven architecture** hosted entirely on Vercel, with Supabase providing backend services (database and functions). The project will be structured as a **monorepo**, containing both the Next.js frontend application and the backend Supabase functions.
+
+The core data processing flow is designed as an event-driven pipeline:
+
+1. A scheduled mechanism (Vercel Cron Job) or manual trigger (API/CLI) initiates the daily workflow, creating a `workflow_run` job.
+2. Hacker News posts and comments are retrieved (HN Algolia API) and stored in Supabase.
+3. This data insertion triggers a Supabase function (via database webhook) to scrape linked articles.
+4. Successful article scraping and storage trigger further Supabase functions for AI-powered summarization of articles and comments.
+5. The completion of summarization steps for a workflow run is tracked, and once all prerequisites are met, a newsletter generation service is triggered.
+6. The newsletter content is sent to the Play.ht API to generate a podcast.
+7. Play.ht calls a webhook to notify our system when the podcast is ready, providing the podcast URL.
+8. The newsletter data in Supabase is updated with the podcast URL.
+9. The newsletter is then delivered to subscribers via Nodemailer, after considering podcast availability (with delay/retry logic).
+10. The Next.js frontend allows users to view current and past newsletters and listen to the podcasts.
+
+This event-driven approach, using Supabase Database Webhooks (via `pg_net` or native functionality) to trigger Vercel-hosted Supabase Functions, aims to create a resilient and scalable system. It mitigates potential timeout issues by breaking down long-running processes into smaller, asynchronously triggered units.
+
+Below is a system context diagram illustrating the primary services and user interactions:
+
+```mermaid
+graph TD
+ User[Developer/Admin] -- "Triggers Daily Workflow (API/CLI/Cron)" --> BMadDiCasterBE[BMad DiCaster Backend Logic]
+ UserWeb[End User] -- "Accesses Web Interface" --> BMadDiCasterFE["BMad DiCaster Frontend (Next.js on Vercel)"]
+ BMadDiCasterFE -- "Displays Data From" --> SupabaseDB[Supabase PostgreSQL]
+ BMadDiCasterFE -- "Interacts With for Data/Triggers" --> SupabaseFunctions[Supabase Functions on Vercel]
+
+ subgraph "BMad DiCaster Backend Logic (Supabase Functions & Vercel)"
+ direction LR
+ SupabaseFunctions
+ HNAPI[Hacker News Algolia API]
+ ArticleScraper[Article Scraper Service]
+ Summarizer["Summarization Service (LLM Facade)"]
+ PlayHTAPI[Play.ht API]
+ NewsletterService[Newsletter Generation & Delivery Service]
+ Nodemailer[Nodemailer Service]
+ end
+
+ BMadDiCasterBE --> SupabaseDB
+ SupabaseFunctions -- "Fetches HN Data" --> HNAPI
+ SupabaseFunctions -- "Scrapes Articles" --> ArticleScraper
+ ArticleScraper -- "Gets URLs from" --> SupabaseDB
+ ArticleScraper -- "Stores Content" --> SupabaseDB
+ SupabaseFunctions -- "Summarizes Content" --> Summarizer
+ Summarizer -- "Uses Prompts from / Stores Summaries" --> SupabaseDB
+ SupabaseFunctions -- "Generates Podcast" --> PlayHTAPI
+ PlayHTAPI -- "Sends Webhook (Podcast URL)" --> SupabaseFunctions
+ SupabaseFunctions -- "Updates Podcast URL" --> SupabaseDB
+ SupabaseFunctions -- "Generates Newsletter" --> NewsletterService
+ NewsletterService -- "Uses Template/Data from" --> SupabaseDB
+ NewsletterService -- "Sends Emails Via" --> Nodemailer
+ SupabaseDB -- "Stores Subscriber List" --> NewsletterService
+
+ classDef user fill:#9cf,stroke:#333,stroke-width:2px;
+ classDef fe fill:#f9f,stroke:#333,stroke-width:2px;
+ classDef be fill:#ccf,stroke:#333,stroke-width:2px;
+ classDef external fill:#ffc,stroke:#333,stroke-width:2px;
+ classDef db fill:#cfc,stroke:#333,stroke-width:2px;
+
+ class User,UserWeb user;
+ class BMadDiCasterFE fe;
+ class BMadDiCasterBE,SupabaseFunctions,ArticleScraper,Summarizer,NewsletterService be;
+ class HNAPI,PlayHTAPI,Nodemailer external;
+ class SupabaseDB db;
+```
+
+## Component View
+
+> This section has been moved to a dedicated document: [Component View](./component-view.md)
+
+## Workflow Orchestration and Status Management
+
+The BMad DiCaster application employs an event-driven pipeline for its daily content processing. To manage, monitor, and ensure the robust execution of this multi-step workflow, the following orchestration strategy is implemented:
+
+**1. Central Workflow Tracking (`workflow_runs` Table):**
+
+- A dedicated table, `public.workflow_runs` (defined in Data Models), serves as the single source of truth for the state and progress of each initiated daily workflow.
+- Each workflow execution is identified by a unique `id` (jobId) in this table.
+- Key fields include `status`, `current_step_details`, `error_message`, and a `details` JSONB column to store metadata and progress counters (e.g., `posts_fetched`, `articles_scraped_successfully`, `summaries_generated`, `podcast_playht_job_id`, `podcast_status`).
+
+**2. Workflow Initiation:**
+
+- A workflow is initiated via the `POST /api/system/trigger-workflow` API endpoint (callable manually, by CLI, or by a cron job).
+- Upon successful trigger, a new record is created in `workflow_runs` with an initial status (e.g., 'pending' or 'fetching_hn'), and the `jobId` is returned to the caller.
+- This initial record creation triggers the first service in the pipeline (`HNContentService`) via a database webhook or an initial direct call from the trigger API logic.
+
+**3. Service Function Responsibilities:**
+
+- Each backend Supabase Function (`HNContentService`, `ArticleScrapingService`, `SummarizationService`, `PodcastGenerationService`, `NewsletterGenerationService`) participating in the workflow **must**:
+ - Be aware of the `workflow_run_id` for the job it is processing. This ID should be passed along or retrievable based on the triggering event/data.
+ - **Before starting its primary task:** Update the `workflow_runs` table for the current `workflow_run_id` to reflect its `current_step_details` (e.g., "Started scraping article X for workflow Y").
+ - **Upon successful completion of its task:**
+ - Update any relevant data tables (e.g., `scraped_articles`, `article_summaries`).
+ - Update the `workflow_runs.details` JSONB field with relevant output or counters (e.g., increment `articles_scraped_successfully_count`).
+ - **Upon failure:** Update the `workflow_runs` table for the `workflow_run_id` to set `status` to 'failed', and populate `error_message` and `current_step_details` with failure information.
+ - Utilize the shared `WorkflowTrackerService` (see point 5) for consistent status updates.
+- The `PlayHTWebhookHandlerAPI` (Next.js API route) updates the `newsletters` table and then the `workflow_runs.details` with podcast status.
+
+**4. Orchestration and Progression (`CheckWorkflowCompletionService`):**
+
+- A dedicated Supabase Function, `CheckWorkflowCompletionService`, will be scheduled to run periodically (e.g., every 5-10 minutes via Vercel Cron Jobs invoking a dedicated HTTP endpoint for this service, or Supabase's `pg_cron` if preferred for DB-centric scheduling).
+- This service orchestrates progression between major stages by:
+ - Querying `workflow_runs` for jobs in intermediate statuses.
+ - Verifying if all prerequisite tasks for the next stage are complete by:
+ - Querying related data tables (e.g., `scraped_articles`, `article_summaries`, `comment_summaries`) based on the `workflow_run_id`.
+ - Checking expected counts against actual completed counts (e.g., all articles intended for summarization have an `article_summaries` entry for the current `workflow_run_id`).
+ - Checking the status of the podcast generation in the `newsletters` table (linked to `workflow_run_id`) before proceeding to email delivery.
+ - If conditions for the next stage are met, it updates the `workflow_runs.status` (e.g., to 'generating_newsletter') and then invokes the appropriate next service (e.g., `NewsletterGenerationService`), passing the `workflow_run_id`.
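+
+For illustration, the core of this periodic check might be sketched as follows (supabase-js usage; the intermediate status value and the single count comparison are simplified assumptions based on the description above):
+
+```typescript
+// Illustrative sketch only - simplified progression check for one intermediate stage.
+import { createClient } from "@supabase/supabase-js";
+
+const supabase = createClient(
+  Deno.env.get("SUPABASE_URL")!,
+  Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!
+);
+
+export async function checkWorkflowCompletion(): Promise<void> {
+  const { data: runs } = await supabase
+    .from("workflow_runs")
+    .select("id, status, details")
+    .in("status", ["summarizing_content"]); // example intermediate status (assumed value)
+
+  for (const run of runs ?? []) {
+    // Count completed article summaries for this run.
+    const { count } = await supabase
+      .from("article_summaries")
+      .select("id", { count: "exact", head: true })
+      .eq("workflow_run_id", run.id);
+
+    const expected = run.details?.articles_scraped_successfully ?? 0;
+    if ((count ?? 0) >= expected) {
+      await supabase
+        .from("workflow_runs")
+        .update({ status: "generating_newsletter" })
+        .eq("id", run.id);
+      // ...then invoke NewsletterGenerationService with run.id
+    }
+  }
+}
+```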
+
+**5. Shared `WorkflowTrackerService`:**
+
+- A utility service, `WorkflowTrackerService`, will be created in `supabase/functions/_shared/`.
+- It will provide standardized methods for all backend functions to interact with the `workflow_runs` table (e.g., `updateWorkflowStep()`, `incrementWorkflowDetailCounter()`, `failWorkflow()`, `completeWorkflowStep()`).
+- This promotes consistency in status updates and reduces redundant code.
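+
+A possible shape for this service, sketched from the method names above (the constructor, signatures, and JSONB update details are assumptions):
+
+```typescript
+// Illustrative sketch only - supabase/functions/_shared/workflow-tracker.ts
+import type { SupabaseClient } from "@supabase/supabase-js";
+
+export class WorkflowTrackerService {
+  constructor(private supabase: SupabaseClient) {}
+
+  async updateWorkflowStep(runId: string, stepDetails: string, status?: string): Promise<void> {
+    await this.supabase
+      .from("workflow_runs")
+      .update({ current_step_details: stepDetails, ...(status ? { status } : {}) })
+      .eq("id", runId);
+  }
+
+  async incrementWorkflowDetailCounter(runId: string, counterKey: string): Promise<void> {
+    // A real implementation would likely use an RPC for an atomic JSONB increment.
+    const { data } = await this.supabase
+      .from("workflow_runs")
+      .select("details")
+      .eq("id", runId)
+      .single();
+    const details = { ...(data?.details ?? {}) };
+    details[counterKey] = (details[counterKey] ?? 0) + 1;
+    await this.supabase.from("workflow_runs").update({ details }).eq("id", runId);
+  }
+
+  async failWorkflow(runId: string, errorMessage: string): Promise<void> {
+    await this.supabase
+      .from("workflow_runs")
+      .update({ status: "failed", error_message: errorMessage })
+      .eq("id", runId);
+  }
+}
+```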
+
+**6. Podcast Link Before Email Delivery:**
+
+- The `NewsletterGenerationService`, after generating the HTML and initiating podcast creation (via `PodcastGenerationService`), will set the `newsletters.podcast_status` to 'generating'.
+- The `CheckWorkflowCompletionService` (or the `NewsletterGenerationService` itself if designed for polling/delay) will monitor the `newsletters.podcast_url` (populated by the `PlayHTWebhookHandlerAPI`) or `newsletters.podcast_status`.
+- Email delivery is triggered by `CheckWorkflowCompletionService` once the podcast URL is available, a timeout is reached, or podcast generation fails (as per PRD's delay/retry logic). The final delivery status will be updated in `workflow_runs` and `newsletters`.
+
+## Project Structure
+
+> This section has been moved to a dedicated document: [Project Structure](./project-structure.md)
+
+## API Reference
+
+> This section has been moved to a dedicated document: [API Reference](./api-reference.md)
+
+## Data Models
+
+> This section has been moved to a dedicated document: [Data Models](./data-models.md)
+
+## Core Workflow / Sequence Diagrams
+
+> This section has been moved to a dedicated document: [Core Workflow / Sequence Diagrams](./sequence-diagrams.md)
+
+## Definitive Tech Stack Selections
+
+> This section has been moved to a dedicated document: [Definitive Tech Stack Selections](./tech-stack.md)
+
+## Infrastructure and Deployment Overview
+
+> This section has been moved to a dedicated document: [Infrastructure and Deployment Overview](./infra-deployment.md)
+
+## Error Handling Strategy
+
+> This section is part of the consolidated [Operational Guidelines](./operational-guidelines.md#error-handling-strategy).
+
+## Coding Standards
+
+> This section is part of the consolidated [Operational Guidelines](./operational-guidelines.md#coding-standards).
+
+## Overall Testing Strategy
+
+> This section is part of the consolidated [Operational Guidelines](./operational-guidelines.md#overall-testing-strategy).
+
+## Security Best Practices
+
+> This section is part of the consolidated [Operational Guidelines](./operational-guidelines.md#security-best-practices).
+
+## Key Reference Documents
+
+1. **Product Requirements Document (PRD):** `docs/prd-incremental-full-agile-mode.txt`
+2. **UI/UX Specification:** `docs/ui-ux-spec.txt`
+3. **Technical Preferences:** `docs/technical-preferences copy.txt`
+4. **Environment Variables Documentation:** [Environment Variables Documentation](./environment-vars.md)
+5. **(Optional) Frontend Architecture Document:** `docs/frontend-architecture.md` (To be created by Design Architect)
+6. **Play.ht API Documentation:** [https://docs.play.ai/api-reference/playnote/post](https://docs.play.ai/api-reference/playnote/post)
+7. **Hacker News Algolia API:** [https://hn.algolia.com/api](https://hn.algolia.com/api)
+8. **Ollama API Documentation:** [https://github.com/ollama/ollama/blob/main/docs/api.md](https://github.com/ollama/ollama/blob/main/docs/api.md)
+9. **Supabase Documentation:** [https://supabase.com/docs](https://supabase.com/docs)
+10. **Next.js Documentation:** [https://nextjs.org/docs](https://nextjs.org/docs)
+11. **Vercel Documentation:** [https://vercel.com/docs](https://vercel.com/docs)
+12. **Pino Logging Documentation:** [https://getpino.io/](https://getpino.io/)
+13. **Zod Documentation:** [https://zod.dev/](https://zod.dev/)
+
+## Change Log
+
+| Change | Date | Version | Description | Author |
+| :----------------------------------------- | :--------- | :------ | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------- |
+| Initial Draft based on PRD and discussions | 2025-05-13 | 0.1 | First complete draft covering project overview, components, data models, tech stack, deployment, error handling, coding standards, testing strategy, security, and workflow orchestration. | 3-arch (Agent) |
+
+---
+
+## Prompt for Design Architect: Frontend Architecture Definition
+
+**To the Design Architect (Agent Specializing in Frontend Architecture):**
+
+You are now tasked with defining the detailed **Frontend Architecture** for the BMad DiCaster project. This main Architecture Document and the `docs/ui-ux-spec.txt` are your primary input artifacts. Your goal is to produce a dedicated `frontend-architecture.md` document.
+
+**Key Inputs & Constraints (from this Main Architecture Document & UI/UX Spec):**
+
+1. **Overall Project Architecture:** Familiarize yourself with the "High-Level Overview," "Component View," "Data Models" (especially any shared types in `shared/types/`), and "API Reference" (particularly internal APIs like `/api/system/trigger-workflow` and `/api/webhooks/playht` that the frontend might indirectly be aware of or need to interact with for admin purposes in the future, though MVP frontend primarily reads newsletter data).
+2. **UI/UX Specification (`docs/ui-ux-spec.txt`):** This document contains user flows, wireframes, core screens (Newsletter List, Newsletter Detail), component inventory (NewsletterCard, PodcastPlayer, DownloadButton, BackButton), branding considerations (synthwave, minimalist), and accessibility aspirations.
+3. **Definitive Technology Stack (Frontend Relevant):**
+ - Framework: Next.js (`latest`, App Router)
+ - Language: React (`19.0.0`) with TypeScript (`5.7.2`)
+ - UI Libraries: Tailwind CSS (`3.4.17`), Shadcn UI (`latest`)
+ - State Management: Zustand (`latest`)
+ - Testing: React Testing Library (RTL) (`latest`), Jest (`latest`)
+ - Starter Template: Vercel/Supabase Next.js App Router template ([https://vercel.com/templates/next.js/supabase](https://vercel.com/templates/next.js/supabase)). Leverage its existing structure for `app/`, `components/ui/` (from Shadcn), `lib/utils.ts`, and `utils/supabase/` (client, server, middleware helpers for Supabase).
+4. **Project Structure (Frontend Relevant):** Refer to the "Project Structure" section in this document, particularly the `app/` directory, `components/` (for Shadcn `ui` and your `core` application components), `lib/`, and `utils/supabase/`.
+5. **Existing Frontend Files (from template):** Be aware of `middleware.ts` (for Supabase auth) and any existing components or utility functions provided by the starter template.
+
+**Tasks for Frontend Architecture Document (`frontend-architecture.md`):**
+
+1. **Refine Frontend Project Structure:**
+ - Detail the specific folder structure within `app/`. Propose organization for pages (routes), layouts, application-specific components (`app/components/core/`), data fetching logic, context providers, and Zustand stores.
+ - How will Shadcn UI components (`components/ui/`) be used and potentially customized?
+2. **Component Architecture:**
+ - For each core screen identified in the UI/UX spec (Newsletter List, Newsletter Detail), define the primary React component hierarchy.
+ - Specify responsibilities and key props for major reusable application components (e.g., `NewsletterCard`, `NewsletterDetailView`, `PodcastPlayerControls`).
+ - How will components fetch and display data from Supabase? (e.g., Server Components, Client Components using Supabase client from `utils/supabase/client.ts` or `utils/supabase/server.ts`).
+3. **State Management (Zustand):**
+ - Identify global and local state needs.
+ - Define specific Zustand store(s): what data they will hold (e.g., current newsletter list, selected newsletter details, podcast player state), and what actions they will expose.
+ - How will components interact with these stores?
+4. **Data Fetching & Caching (Frontend):**
+ - Specify patterns for fetching newsletter data (lists and individual items) and podcast information.
+ - How will Next.js data fetching capabilities (Server Components, Route Handlers, `fetch` with caching options) be utilized with the Supabase client?
+ - Address loading and error states for data fetching in the UI.
+5. **Routing:**
+ - Confirm Next.js App Router usage and define URL structure for the newsletter list and detail pages.
+6. **Styling Approach:**
+ - Reiterate use of Tailwind CSS and Shadcn UI.
+ - Define any project-specific conventions for applying Tailwind classes or extending the theme (beyond what's in `tailwind.config.ts`).
+ - How will the "synthwave technical glowing purple vibes" be implemented using Tailwind?
+7. **Error Handling (Frontend):**
+ - How will errors from API calls (to Supabase or internal Next.js API routes if any) be handled and displayed to the user?
+ - Strategy for UI error boundaries.
+8. **Accessibility (AX):**
+ - Elaborate on how the WCAG 2.1 Level A requirements (keyboard navigation, semantic HTML, alt text, color contrast) will be met in component design and implementation, leveraging Next.js and Shadcn UI capabilities.
+9. **Testing (Frontend):**
+ - Reiterate the use of Jest and RTL for unit/integration testing of React components.
+ - Provide examples or guidelines for writing effective frontend tests.
+10. **Key Frontend Libraries & Versioning:** Confirm versions from the main tech stack and list any additional frontend-only libraries required.
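+
+As an illustration for the State Management task above, a Zustand store for podcast player state might look roughly like the following. This is a minimal sketch; the file name, state fields, and actions are assumptions for the architecture work to refine, not a prescribed implementation.
+
+```typescript
+// store/podcastPlayerSlice.ts -- illustrative sketch only; names and shape are assumptions.
+import { create } from "zustand";
+
+interface PodcastPlayerState {
+  trackUrl: string | null; // URL of the podcast audio for the selected newsletter
+  isPlaying: boolean;
+  currentTime: number; // seconds
+  volume: number; // 0..1
+  loadTrack: (url: string) => void;
+  togglePlay: () => void;
+  seek: (time: number) => void;
+  setVolume: (volume: number) => void;
+}
+
+export const usePodcastPlayerStore = create<PodcastPlayerState>((set) => ({
+  trackUrl: null,
+  isPlaying: false,
+  currentTime: 0,
+  volume: 1,
+  loadTrack: (url) => set({ trackUrl: url, isPlaying: false, currentTime: 0 }),
+  togglePlay: () => set((state) => ({ isPlaying: !state.isPlaying })),
+  seek: (time) => set({ currentTime: time }),
+  setVolume: (volume) => set({ volume }),
+}));
+```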
+
+Your output should be a clean, well-formatted `frontend-architecture.md` document ready for AI developer agents to use for frontend implementation. Adhere to the output formatting guidelines. You are now operating in **Frontend Architecture Mode**.
+
+---
+
+This concludes the BMad DiCaster Architecture Document.
diff --git a/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/component-view.md b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/component-view.md
new file mode 100644
index 00000000..3004c3d7
--- /dev/null
+++ b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/component-view.md
@@ -0,0 +1,141 @@
+# Component View
+
+> This document is a granulated shard from the main "3-architecture.md" focusing on "Component View".
+
+The BMad DiCaster system is composed of several key logical components, primarily implemented as serverless functions (Supabase Functions deployed on Vercel) and a Next.js frontend application. These components work together in an event-driven manner.
+
+```mermaid
+graph TD
+ subgraph FrontendApp ["Frontend Application (Next.js)"]
+ direction LR
+ WebAppUI["Web Application UI (React Components)"]
+ APIServiceFE["API Service (Frontend - Next.js Route Handlers)"]
+ end
+
+ subgraph BackendServices ["Backend Services (Supabase Functions & Core Logic)"]
+ direction TB
+ WorkflowTriggerAPI["Workflow Trigger API (/api/system/trigger-workflow)"]
+ HNContentService["HN Content Service (Supabase Fn)"]
+ ArticleScrapingService["Article Scraping Service (Supabase Fn)"]
+ SummarizationService["Summarization Service (LLM Facade - Supabase Fn)"]
+ PodcastGenerationService["Podcast Generation Service (Supabase Fn)"]
+ NewsletterGenerationService["Newsletter Generation Service (Supabase Fn)"]
+ PlayHTWebhookHandlerAPI["Play.ht Webhook API (/api/webhooks/playht)"]
+ CheckWorkflowCompletionService["CheckWorkflowCompletionService (Supabase Cron Fn)"]
+ end
+
+ subgraph ExternalIntegrations ["External APIs & Services"]
+ direction TB
+ HNAlgoliaAPI["Hacker News Algolia API"]
+ PlayHTAPI["Play.ht API"]
+ LLMProvider["LLM Provider (Ollama/Remote API)"]
+ NodemailerService["Nodemailer (Email Delivery)"]
+ end
+
+ subgraph DataStorage ["Data Storage (Supabase PostgreSQL)"]
+ direction TB
+ DB_WorkflowRuns["workflow_runs Table"]
+ DB_Posts["hn_posts Table"]
+ DB_Comments["hn_comments Table"]
+ DB_Articles["scraped_articles Table"]
+ DB_Summaries["article_summaries / comment_summaries Tables"]
+ DB_Newsletters["newsletters Table"]
+ DB_Subscribers["subscribers Table"]
+ DB_Prompts["summarization_prompts Table"]
+ DB_NewsletterTemplates["newsletter_templates Table"]
+ end
+
+ UserWeb[End User] --> WebAppUI
+ WebAppUI --> APIServiceFE
+ APIServiceFE --> WorkflowTriggerAPI
+ APIServiceFE --> DataStorage
+
+
+ DevAdmin[Developer/Admin/Cron] --> WorkflowTriggerAPI
+
+ WorkflowTriggerAPI --> DB_WorkflowRuns
+
+ DB_WorkflowRuns -- "Triggers (via CheckWorkflowCompletion or direct)" --> HNContentService
+ HNContentService --> HNAlgoliaAPI
+ HNContentService --> DB_Posts
+ HNContentService --> DB_Comments
+ HNContentService --> DB_WorkflowRuns
+
+
+ DB_Posts -- "Triggers (via DB Webhook)" --> ArticleScrapingService
+ ArticleScrapingService --> DB_Articles
+ ArticleScrapingService --> DB_WorkflowRuns
+
+ DB_Articles -- "Triggers (via DB Webhook)" --> SummarizationService
+ SummarizationService --> LLMProvider
+ SummarizationService --> DB_Prompts
+ SummarizationService --> DB_Summaries
+ SummarizationService --> DB_WorkflowRuns
+
+ CheckWorkflowCompletionService -- "Monitors & Triggers Next Steps Based On" --> DB_WorkflowRuns
+ CheckWorkflowCompletionService -- "Monitors & Triggers Next Steps Based On" --> DB_Summaries
+ CheckWorkflowCompletionService -- "Monitors & Triggers Next Steps Based On" --> DB_Newsletters
+
+
+ CheckWorkflowCompletionService --> NewsletterGenerationService
+ NewsletterGenerationService --> DB_NewsletterTemplates
+ NewsletterGenerationService --> DB_Summaries
+ NewsletterGenerationService --> DB_Newsletters
+ NewsletterGenerationService --> DB_WorkflowRuns
+
+
+ CheckWorkflowCompletionService --> PodcastGenerationService
+ PodcastGenerationService --> PlayHTAPI
+ PodcastGenerationService --> DB_Newsletters
+ PodcastGenerationService --> DB_WorkflowRuns
+
+ PlayHTAPI -- "Webhook" --> PlayHTWebhookHandlerAPI
+ PlayHTWebhookHandlerAPI --> DB_Newsletters
+ PlayHTWebhookHandlerAPI --> DB_WorkflowRuns
+
+
+ CheckWorkflowCompletionService -- "Triggers Delivery" --> NewsletterGenerationService
+ NewsletterGenerationService -- "(For Delivery)" --> NodemailerService
+ NewsletterGenerationService -- "(For Delivery)" --> DB_Subscribers
+ NewsletterGenerationService -- "(For Delivery)" --> DB_Newsletters
+ NewsletterGenerationService -- "(For Delivery)" --> DB_WorkflowRuns
+
+
+ classDef user fill:#9cf,stroke:#333,stroke-width:2px;
+ classDef feapp fill:#f9d,stroke:#333,stroke-width:2px;
+ classDef beapp fill:#cdf,stroke:#333,stroke-width:2px;
+ classDef external fill:#ffc,stroke:#333,stroke-width:2px;
+ classDef db fill:#cfc,stroke:#333,stroke-width:2px;
+
+ class UserWeb,DevAdmin user;
+ class FrontendApp,WebAppUI,APIServiceFE feapp;
+ class BackendServices,WorkflowTriggerAPI,HNContentService,ArticleScrapingService,SummarizationService,PodcastGenerationService,NewsletterGenerationService,PlayHTWebhookHandlerAPI,CheckWorkflowCompletionService beapp;
+ class ExternalIntegrations,HNAlgoliaAPI,PlayHTAPI,LLMProvider,NodemailerService external;
+ class DataStorage,DB_WorkflowRuns,DB_Posts,DB_Comments,DB_Articles,DB_Summaries,DB_Newsletters,DB_Subscribers,DB_Prompts,DB_NewsletterTemplates db;
+```
+
+- **Frontend Application (Next.js on Vercel):**
+ - **Web Application UI (React Components):** Renders UI, displays newsletters/podcasts, handles user interactions.
+ - **API Service (Frontend - Next.js Route Handlers):** Handles frontend-initiated API calls (e.g., for future admin functions) and receives incoming webhooks (Play.ht).
+- **Backend Services (Supabase Functions & Core Logic):**
+ - **Workflow Trigger API (`/api/system/trigger-workflow`):** Secure Next.js API route to manually initiate the daily workflow.
+ - **HN Content Service (Supabase Fn):** Retrieves posts/comments from HN Algolia API, stores them.
+ - **Article Scraping Service (Supabase Fn):** Triggered by new HN posts, scrapes article content.
+ - **Summarization Service (LLM Facade - Supabase Fn):** Triggered by new articles/comments, generates summaries using LLM.
+ - **Podcast Generation Service (Supabase Fn):** Sends newsletter content to Play.ht API.
+ - **Newsletter Generation Service (Supabase Fn):** Compiles newsletter, handles podcast link logic, triggers email delivery.
+ - **Play.ht Webhook API (`/api/webhooks/playht`):** Next.js API route to receive podcast status from Play.ht.
+ - **CheckWorkflowCompletionService (Supabase Cron Fn):** Periodically monitors `workflow_runs` and related tables to orchestrate the progression between pipeline stages (e.g., from summarization to newsletter generation, then to delivery).
+- **Data Storage (Supabase PostgreSQL):** Stores all application data including workflow state, content, summaries, newsletters, subscribers, prompts, and templates.
+- **External APIs & Services:** HN Algolia API, Play.ht API, LLM Provider (Ollama/Remote), Nodemailer.
+
+### Architectural / Design Patterns Adopted
+
+- **Event-Driven Architecture:** Core backend processing is a series of steps triggered by database events (Supabase Database Webhooks calling Supabase Functions hosted on Vercel) and orchestrated via the `workflow_runs` table and the `CheckWorkflowCompletionService`.
+- **Serverless Functions:** Backend logic is encapsulated in Supabase Functions (running on Vercel).
+- **Monorepo:** All code resides in a single repository.
+- **Facade Pattern:** Encapsulates interactions with external services (HN API, Play.ht API, LLM, Nodemailer) within `supabase/functions/_shared/`.
+- **Factory Pattern (for LLM Service):** The `LLMFacade` will use a factory to instantiate the appropriate LLM client based on environment configuration (see the illustrative sketch below).
+- **Hexagonal Architecture (Pragmatic Application):** For complex Supabase Functions, core business logic will be separated from framework-specific handlers and data interaction code (adapters) to improve testability and maintainability. Simpler functions may have a more direct implementation.
+- **Repository Pattern (for Data Access - Conceptual):** Data access logic within services will be organized to conceptually resemble repositories, even if separate repository classes are not strictly implemented for every entity in the MVP Supabase Functions.
+- **Configuration via Environment Variables:** All sensitive and environment-specific configurations managed via environment variables.
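+
+For the Factory Pattern noted above, the selection of an LLM adapter from environment configuration might look roughly like this. It is an illustrative sketch only: the interface, adapter names, and method signature are assumptions; the environment variable names follow those referenced elsewhere in the architecture.
+
+```typescript
+// supabase/functions/_shared/llm-facade.ts -- illustrative sketch, not the final implementation.
+export interface LLMFacade {
+  summarize(prompt: string, content: string): Promise<string>;
+}
+
+// Adapters wrap the actual HTTP calls to the configured provider (omitted in this sketch).
+class OllamaAdapter implements LLMFacade {
+  constructor(private apiUrl: string, private model: string) {}
+  async summarize(_prompt: string, _content: string): Promise<string> {
+    throw new Error("HTTP call to Ollama omitted in this sketch");
+  }
+}
+
+class RemoteLLMApiAdapter implements LLMFacade {
+  constructor(private apiUrl: string, private apiKey: string, private model: string) {}
+  async summarize(_prompt: string, _content: string): Promise<string> {
+    throw new Error("HTTP call to the remote LLM API omitted in this sketch");
+  }
+}
+
+// Factory: choose the adapter based on environment configuration.
+export function createLLMFacade(env: Record<string, string | undefined>): LLMFacade {
+  if (env.LLM_PROVIDER_TYPE === "ollama") {
+    return new OllamaAdapter(env.OLLAMA_API_URL ?? "", env.LLM_MODEL_NAME ?? "");
+  }
+  return new RemoteLLMApiAdapter(
+    env.REMOTE_LLM_API_URL ?? "",
+    env.REMOTE_LLM_API_KEY ?? "",
+    env.LLM_MODEL_NAME ?? ""
+  );
+}
+```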
diff --git a/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/data-models.md b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/data-models.md
new file mode 100644
index 00000000..4438cebd
--- /dev/null
+++ b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/data-models.md
@@ -0,0 +1,232 @@
+# Data Models
+
+> This document is a granulated shard from the main "3-architecture.md" focusing on "Data Models".
+
+This section defines the core data structures used within the BMad DiCaster application, including conceptual domain entities and their corresponding database schemas in Supabase PostgreSQL.
+
+### Core Application Entities / Domain Objects
+
+(Conceptual types, typically defined in `shared/types/domain-models.ts`)
+
+#### 1\. `WorkflowRun`
+
+- **Description:** A single execution of the daily workflow.
+- **Schema:** `id (string UUID)`, `createdAt (string ISO)`, `lastUpdatedAt (string ISO)`, `status (enum string: 'pending' | 'fetching_hn' | 'scraping_articles' | 'summarizing_content' | 'generating_podcast' | 'generating_newsletter' | 'delivering_newsletter' | 'completed' | 'failed')`, `currentStepDetails (string?)`, `errorMessage (string?)`, `details (object?: { postsFetched?: number, articlesAttempted?: number, articlesScrapedSuccessfully?: number, summariesGenerated?: number, podcastJobId?: string, podcastStatus?: string, newsletterGeneratedAt?: string, subscribersNotified?: number })`
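+
+For illustration, the `WorkflowRun` entity above maps to a TypeScript type in `shared/types/domain-models.ts` roughly as follows (a sketch derived directly from the schema listed above):
+
+```typescript
+// shared/types/domain-models.ts (excerpt) -- illustrative mapping of the WorkflowRun schema.
+export type WorkflowStatus =
+  | "pending"
+  | "fetching_hn"
+  | "scraping_articles"
+  | "summarizing_content"
+  | "generating_podcast"
+  | "generating_newsletter"
+  | "delivering_newsletter"
+  | "completed"
+  | "failed";
+
+export interface WorkflowRun {
+  id: string; // UUID
+  createdAt: string; // ISO timestamp
+  lastUpdatedAt: string; // ISO timestamp
+  status: WorkflowStatus;
+  currentStepDetails?: string;
+  errorMessage?: string;
+  details?: {
+    postsFetched?: number;
+    articlesAttempted?: number;
+    articlesScrapedSuccessfully?: number;
+    summariesGenerated?: number;
+    podcastJobId?: string;
+    podcastStatus?: string;
+    newsletterGeneratedAt?: string;
+    subscribersNotified?: number;
+  };
+}
+```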
+
+#### 2\. `HNPost`
+
+- **Description:** A post from Hacker News.
+- **Schema:** `id (string HN_objectID)`, `hnNumericId (number?)`, `title (string)`, `url (string?)`, `author (string)`, `points (number)`, `createdAt (string ISO)`, `retrievedAt (string ISO)`, `hnStoryText (string?)`, `numComments (number?)`, `tags (string[]?)`, `workflowRunId (string UUID?)`
+
+#### 3\. `HNComment`
+
+- **Description:** A comment on an HN post.
+- **Schema:** `id (string HN_commentID)`, `hnPostId (string)`, `parentId (string?)`, `author (string?)`, `text (string HTML)`, `createdAt (string ISO)`, `retrievedAt (string ISO)`, `children (HNComment[]?)`
+
+#### 4\. `ScrapedArticle`
+
+- **Description:** Content scraped from an article URL.
+- **Schema:** `id (string UUID)`, `hnPostId (string)`, `originalUrl (string)`, `resolvedUrl (string?)`, `title (string?)`, `author (string?)`, `publicationDate (string ISO?)`, `mainTextContent (string?)`, `scrapedAt (string ISO)`, `scrapingStatus (enum string: 'pending' | 'success' | 'failed_unreachable' | 'failed_paywall' | 'failed_parsing')`, `errorMessage (string?)`, `workflowRunId (string UUID?)`
+
+#### 5\. `ArticleSummary`
+
+- **Description:** AI-generated summary of a `ScrapedArticle`.
+- **Schema:** `id (string UUID)`, `scrapedArticleId (string UUID)`, `summaryText (string)`, `generatedAt (string ISO)`, `llmPromptVersion (string?)`, `llmModelUsed (string?)`, `workflowRunId (string UUID)`
+
+#### 6\. `CommentSummary`
+
+- **Description:** AI-generated summary of comments for an `HNPost`.
+- **Schema:** `id (string UUID)`, `hnPostId (string)`, `summaryText (string)`, `generatedAt (string ISO)`, `llmPromptVersion (string?)`, `llmModelUsed (string?)`, `workflowRunId (string UUID)`
+
+#### 7\. `Newsletter`
+
+- **Description:** The daily generated newsletter.
+- **Schema:** `id (string UUID)`, `workflowRunId (string UUID)`, `targetDate (string YYYY-MM-DD)`, `title (string)`, `generatedAt (string ISO)`, `htmlContent (string)`, `mjmlTemplateVersion (string?)`, `podcastPlayhtJobId (string?)`, `podcastUrl (string?)`, `podcastStatus (enum string?: 'pending' | 'generating' | 'completed' | 'failed')`, `deliveryStatus (enum string: 'pending' | 'sending' | 'sent' | 'partially_failed' | 'failed')`, `scheduledSendAt (string ISO?)`, `sentAt (string ISO?)`
+
+#### 8\. `Subscriber`
+
+- **Description:** An email subscriber.
+- **Schema:** `id (string UUID)`, `email (string)`, `subscribedAt (string ISO)`, `isActive (boolean)`, `unsubscribedAt (string ISO?)`
+
+#### 9\. `SummarizationPrompt`
+
+- **Description:** Stores prompts for AI summarization.
+- **Schema:** `id (string UUID)`, `promptName (string)`, `promptText (string)`, `version (string)`, `createdAt (string ISO)`, `updatedAt (string ISO)`, `isDefaultArticlePrompt (boolean)`, `isDefaultCommentPrompt (boolean)`
+
+#### 10\. `NewsletterTemplate`
+
+- **Description:** HTML/MJML templates for newsletters.
+- **Schema:** `id (string UUID)`, `templateName (string)`, `mjmlContent (string?)`, `htmlContent (string)`, `version (string)`, `createdAt (string ISO)`, `updatedAt (string ISO)`, `isDefault (boolean)`
+
+### Database Schemas (Supabase PostgreSQL)
+
+#### 1\. `workflow_runs`
+
+```sql
+CREATE TABLE public.workflow_runs (
+ id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+ created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
+ last_updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
+ status TEXT NOT NULL DEFAULT 'pending', -- pending, fetching_hn, scraping_articles, summarizing_content, generating_podcast, generating_newsletter, delivering_newsletter, completed, failed
+ current_step_details TEXT NULL,
+ error_message TEXT NULL,
+ details JSONB NULL -- {postsFetched, articlesAttempted, articlesScrapedSuccessfully, summariesGenerated, podcastJobId, podcastStatus, newsletterGeneratedAt, subscribersNotified}
+);
+COMMENT ON COLUMN public.workflow_runs.status IS 'Possible values: pending, fetching_hn, scraping_articles, summarizing_content, generating_podcast, generating_newsletter, delivering_newsletter, completed, failed';
+COMMENT ON COLUMN public.workflow_runs.details IS 'Stores step-specific progress or metadata like postsFetched, articlesScraped, podcastJobId, etc.';
+```
+
+#### 2\. `hn_posts`
+
+```sql
+CREATE TABLE public.hn_posts (
+ id TEXT PRIMARY KEY, -- HN's objectID
+ hn_numeric_id BIGINT NULL UNIQUE,
+ title TEXT NOT NULL,
+ url TEXT NULL,
+ author TEXT NULL,
+ points INTEGER NOT NULL DEFAULT 0,
+ created_at TIMESTAMPTZ NOT NULL, -- HN post creation time
+ retrieved_at TIMESTAMPTZ NOT NULL DEFAULT now(),
+ hn_story_text TEXT NULL,
+ num_comments INTEGER NULL DEFAULT 0,
+ tags TEXT[] NULL,
+ workflow_run_id UUID NULL REFERENCES public.workflow_runs(id) ON DELETE SET NULL -- The run that fetched this instance of the post
+);
+COMMENT ON COLUMN public.hn_posts.id IS 'Hacker News objectID for the story.';
+```
+
+#### 3\. `hn_comments`
+
+```sql
+CREATE TABLE public.hn_comments (
+ id TEXT PRIMARY KEY, -- HN's comment ID
+ hn_post_id TEXT NOT NULL REFERENCES public.hn_posts(id) ON DELETE CASCADE,
+ parent_comment_id TEXT NULL REFERENCES public.hn_comments(id) ON DELETE CASCADE,
+ author TEXT NULL,
+ comment_text TEXT NOT NULL, -- HTML content of the comment
+ created_at TIMESTAMPTZ NOT NULL, -- HN comment creation time
+ retrieved_at TIMESTAMPTZ NOT NULL DEFAULT now()
+);
+CREATE INDEX idx_hn_comments_post_id ON public.hn_comments(hn_post_id);
+```
+
+#### 4\. `scraped_articles`
+
+```sql
+CREATE TABLE public.scraped_articles (
+ id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+ hn_post_id TEXT NOT NULL REFERENCES public.hn_posts(id) ON DELETE CASCADE,
+ original_url TEXT NOT NULL,
+ resolved_url TEXT NULL,
+ title TEXT NULL,
+ author TEXT NULL,
+ publication_date TIMESTAMPTZ NULL,
+ main_text_content TEXT NULL,
+ scraped_at TIMESTAMPTZ NOT NULL DEFAULT now(),
+ scraping_status TEXT NOT NULL DEFAULT 'pending', -- pending, success, failed_unreachable, failed_paywall, failed_parsing
+ error_message TEXT NULL,
+ workflow_run_id UUID NULL REFERENCES public.workflow_runs(id) ON DELETE SET NULL
+);
+CREATE UNIQUE INDEX idx_scraped_articles_hn_post_id_workflow_run_id ON public.scraped_articles(hn_post_id, workflow_run_id);
+COMMENT ON COLUMN public.scraped_articles.scraping_status IS 'Possible values: pending, success, failed_unreachable, failed_paywall, failed_parsing, failed_generic';
+```
+
+#### 5\. `article_summaries`
+
+```sql
+CREATE TABLE public.article_summaries (
+ id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+ scraped_article_id UUID NOT NULL REFERENCES public.scraped_articles(id) ON DELETE CASCADE,
+ summary_text TEXT NOT NULL,
+ generated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
+ llm_prompt_version TEXT NULL,
+ llm_model_used TEXT NULL,
+ workflow_run_id UUID NOT NULL REFERENCES public.workflow_runs(id) ON DELETE CASCADE
+);
+CREATE UNIQUE INDEX idx_article_summaries_scraped_article_id_workflow_run_id ON public.article_summaries(scraped_article_id, workflow_run_id);
+COMMENT ON COLUMN public.article_summaries.llm_prompt_version IS 'Version or identifier of the summarization prompt used.';
+```
+
+#### 6\. `comment_summaries`
+
+```sql
+CREATE TABLE public.comment_summaries (
+ id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+ hn_post_id TEXT NOT NULL REFERENCES public.hn_posts(id) ON DELETE CASCADE,
+ summary_text TEXT NOT NULL,
+ generated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
+ llm_prompt_version TEXT NULL,
+ llm_model_used TEXT NULL,
+ workflow_run_id UUID NOT NULL REFERENCES public.workflow_runs(id) ON DELETE CASCADE
+);
+CREATE UNIQUE INDEX idx_comment_summaries_hn_post_id_workflow_run_id ON public.comment_summaries(hn_post_id, workflow_run_id);
+```
+
+#### 7\. `newsletters`
+
+```sql
+CREATE TABLE public.newsletters (
+ id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+ workflow_run_id UUID NOT NULL UNIQUE REFERENCES public.workflow_runs(id) ON DELETE CASCADE,
+ target_date DATE NOT NULL UNIQUE,
+ title TEXT NOT NULL,
+ generated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
+ html_content TEXT NOT NULL,
+ mjml_template_version TEXT NULL,
+ podcast_playht_job_id TEXT NULL,
+ podcast_url TEXT NULL,
+ podcast_status TEXT NULL DEFAULT 'pending', -- pending, generating, completed, failed
+ delivery_status TEXT NOT NULL DEFAULT 'pending', -- pending, sending, sent, failed, partially_failed
+ scheduled_send_at TIMESTAMPTZ NULL,
+ sent_at TIMESTAMPTZ NULL
+);
+CREATE INDEX idx_newsletters_target_date ON public.newsletters(target_date);
+COMMENT ON COLUMN public.newsletters.target_date IS 'The date this newsletter pertains to. Ensures uniqueness.';
+```
+
+#### 8\. `subscribers`
+
+```sql
+CREATE TABLE public.subscribers (
+ id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+ email TEXT NOT NULL UNIQUE,
+ subscribed_at TIMESTAMPTZ NOT NULL DEFAULT now(),
+ is_active BOOLEAN NOT NULL DEFAULT TRUE,
+ unsubscribed_at TIMESTAMPTZ NULL
+);
+CREATE INDEX idx_subscribers_email_active ON public.subscribers(email, is_active);
+```
+
+#### 9\. `summarization_prompts`
+
+```sql
+CREATE TABLE public.summarization_prompts (
+ id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+ prompt_name TEXT NOT NULL UNIQUE,
+ prompt_text TEXT NOT NULL,
+ version TEXT NOT NULL DEFAULT '1.0',
+ created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
+ updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
+ is_default_article_prompt BOOLEAN NOT NULL DEFAULT FALSE,
+ is_default_comment_prompt BOOLEAN NOT NULL DEFAULT FALSE
+);
+COMMENT ON COLUMN public.summarization_prompts.prompt_name IS 'Unique identifier for the prompt, e.g., article_summary_v2.1';
+-- Application logic will enforce that only one prompt of each type is marked as default.
+```
+
+#### 10\. `newsletter_templates`
+
+```sql
+CREATE TABLE public.newsletter_templates (
+ id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+ template_name TEXT NOT NULL UNIQUE,
+ mjml_content TEXT NULL,
+ html_content TEXT NOT NULL,
+ version TEXT NOT NULL DEFAULT '1.0',
+ created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
+ updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
+ is_default BOOLEAN NOT NULL DEFAULT FALSE
+);
+-- Application logic will enforce that only one template is marked as default.
+```
diff --git a/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/environment-vars.md b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/environment-vars.md
new file mode 100644
index 00000000..75fc53ff
--- /dev/null
+++ b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/environment-vars.md
@@ -0,0 +1,9 @@
+# Environment Variables Documentation
+
+> This document is a granulated shard from the main "3-architecture.md" focusing on "Environment Variables Documentation".
+
+The BMad DiCaster Architecture Document (`3-architecture.md`) indicates that detailed environment variable documentation is intended to be consolidated, potentially in a file named `docs/environment-vars.md`. This file is marked as "(To be created)" within the "Key Reference Documents" section of `3-architecture.md`.
+
+While specific environment variables are mentioned contextually throughout `3-architecture.md` (e.g., for Play.ht API keys, LLM provider configuration, SMTP settings, and workflow trigger API keys), a dedicated, centralized list of all variables, their purposes, and example values is not present as a single extractable section suitable for verbatim sharding at this time.
+
+This sharded document serves as a placeholder, reflecting the sharding plan's intent to capture "Environment Variables Documentation". For specific variables mentioned in context, please refer to the full `3-architecture.md` (particularly sections like API Reference, Infrastructure Overview, and Security Best Practices) until a dedicated and consolidated list is formally compiled as intended.
diff --git a/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/epic-1.md b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/epic-1.md
new file mode 100644
index 00000000..09b36f4e
--- /dev/null
+++ b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/epic-1.md
@@ -0,0 +1,111 @@
+# Epic 1: Project Initialization, Setup, and HN Content Acquisition
+
+> This document is a granulated shard from the main "BETA-V3/v3-demos/full-stack-app-demo/8-prd-po-updated.md" focusing on "Epic 1: Project Initialization, Setup, and HN Content Acquisition".
+
+- Goal: Establish the foundational project structure (Next.js application, Supabase integration, deployment pipeline, API/CLI triggers, core workflow orchestration) and implement functionality to retrieve, process, and store Hacker News posts/comments via a `ContentAcquisitionFacade`, providing data for newsletter generation. Implement the database event mechanism to trigger subsequent processing. Define core configuration tables and seed data, and set up testing frameworks.
+
+- **Story 1.1:** As a developer, I want to set up the Next.js project with Supabase integration, so that I have a functional foundation for building the application.
+ - Acceptance Criteria:
+ - The Next.js project is initialized using the Vercel/Supabase template.
+ - Supabase is successfully integrated with the Next.js project.
+ - The project codebase is initialized in a Git repository.
+ - A basic project `README.md` is created in the root of the repository, including a project overview, links to main documentation (PRD, architecture), and essential developer setup/run commands.
+- **Story 1.2:** As a developer, I want to configure the deployment pipeline to Vercel with separate development and production environments, so that I can easily deploy and update the application.
+ - Acceptance Criteria:
+ - The project is successfully linked to a Vercel project with separate environments.
+ - Automated deployments are configured for the main branch to the production environment.
+ - Environment variables are set up for local development and Vercel deployments.
+- **Story 1.3:** As a developer, I want to implement the API and CLI trigger mechanisms, so that I can manually trigger the workflow during development and testing.
+ - Acceptance Criteria:
+ - A secure API endpoint is created.
+ - The API endpoint requires authentication (API key).
+ - The API endpoint (`/api/system/trigger-workflow`) creates an entry in the `workflow_runs` table and returns the `jobId`.
+ - The API endpoint returns an appropriate response to indicate success or failure.
+ - The API endpoint is secured via an API key.
+ - A CLI command is created.
+ - The CLI command invokes the `/api/system/trigger-workflow` endpoint or directly interacts with `WorkflowTrackerService` to start a new workflow run.
+ - The CLI command provides informative output to the console.
+ - All API requests and CLI command executions are logged, including timestamps and any relevant data.
+ - All interactions with the API or CLI that initiate a workflow must record the `workflow_run_id` in logs.
+ - The API and CLI interfaces adhere to mobile responsiveness and Tailwind/theming principles.
+- **Story 1.4:** As a system, I want to retrieve the top 30 Hacker News posts and associated comments daily using a configurable `ContentAcquisitionFacade`, so that the data is available for summarization and newsletter generation.
+ - Acceptance Criteria:
+ - A `ContentAcquisitionFacade` is implemented in `supabase/functions/_shared/` to abstract interaction with the news data source (initially HN Algolia API).
+ - The facade handles API authentication (if any), request formation, and response parsing for the specific news source.
+ - The facade implements basic retry logic for transient errors.
+ - Unit tests for the `ContentAcquisitionFacade` (mocking actual HTTP calls to the HN Algolia API) achieve >80% coverage.
+ - The system retrieves the top 30 Hacker News posts daily via the `ContentAcquisitionFacade`.
+ - The system retrieves associated comments for the top 30 posts via the `ContentAcquisitionFacade`.
+ - Retrieved data (posts and comments) is stored in Supabase database, linked to the current `workflow_run_id`.
+ - This functionality can be triggered via the API and CLI.
+ - The system logs the start and completion of the retrieval process, including any errors.
+ - Upon completion, the service updates the `workflow_runs` table with status and details (e.g., number of posts fetched) via `WorkflowTrackerService`.
+ - Supabase migrations for `hn_posts` and `hn_comments` tables (as defined in `architecture.txt`) are created and applied before data operations.
+- **Story 1.5: Define and Implement `workflow_runs` Table and `WorkflowTrackerService`** _(an illustrative service sketch follows the stories in this epic)_
+ - Goal: Implement the core workflow orchestration mechanism (tracking part).
+ - Acceptance Criteria:
+ - Supabase migration created for the `workflow_runs` table as defined in the architecture document.
+ - `WorkflowTrackerService` implemented in `supabase/functions/_shared/` with methods for initiating, updating step details, incrementing counters, failing, and completing workflow runs.
+ - Service includes robust error handling and logging via Pino.
+ - Unit tests for `WorkflowTrackerService` achieve >80% coverage.
+- **Story 1.6: Implement `CheckWorkflowCompletionService` (Supabase Cron Function)**
+ - Goal: Implement the core workflow orchestration mechanism (progression part).
+ - Acceptance Criteria:
+ - Supabase Function `check-workflow-completion-service` created.
+ - Function queries `workflow_runs` and related tables to determine if a workflow run is ready to progress to the next major stage.
+ - Function correctly updates `workflow_runs.status` and invokes the next appropriate service function.
+ - Logic for handling podcast link availability is implemented here or in conjunction with `NewsletterGenerationService`.
+ - The function is configurable to be run periodically.
+ - Comprehensive logging implemented using Pino.
+ - Unit tests achieve >80% coverage.
+- **Story 1.7: Implement Workflow Status API Endpoint (`/api/system/workflow-status/{jobId}`)**
+ - Goal: Allow developers/admins to check the status of a workflow run.
+ - Acceptance Criteria:
+ - Next.js API Route Handler created at `/api/system/workflow-status/{jobId}`.
+ - Endpoint secured with API Key authentication.
+ - Retrieves and returns status details from the `workflow_runs` table.
+ - Handles cases where `jobId` is not found (404).
+ - Unit and integration tests for the API endpoint.
+- **Story 1.8: Create and document `docs/environment-vars.md` and set up `.env.example`**
+ - Goal: Ensure environment variables are properly documented and managed.
+ - Acceptance Criteria:
+ - A `docs/environment-vars.md` file is created.
+ - An `.env.example` file is created.
+ - Sensitive information in examples is masked.
+ - For each third-party service requiring credentials, `docs/environment-vars.md` includes:
+ - A brief note or link guiding the user on where to typically sign up for the service and obtain the necessary API key or credential.
+ - A recommendation for the user to check the service's current free/low-tier API rate limits against expected MVP usage.
+ - A note that usage beyond free tier limits for commercial services (like Play.ht, remote LLMs, or email providers) may incur costs, and the user should review the provider's pricing.
+- **Story 1.9 (New): Implement Database Event/Webhook: `hn_posts` Insert to Article Scraping Service**
+ - Goal: To ensure that the successful insertion of a new Hacker News post into the `hn_posts` table automatically triggers the `ArticleScrapingService`.
+ - Acceptance Criteria:
+ - A Supabase database trigger or webhook mechanism (e.g., using `pg_net` or native triggers calling a function) is implemented on the `hn_posts` table for INSERT operations.
+ - The trigger successfully invokes the `ArticleScrapingService` (Supabase Function).
+ - The invocation passes necessary parameters like `hn_post_id` and `workflow_run_id` to the `ArticleScrapingService`.
+ - The mechanism is robust and includes error handling/logging for the trigger/webhook itself.
+ - Unit/integration tests are created to verify the trigger fires correctly and the service is invoked with correct parameters.
+- **Story 1.10 (New): Define and Implement Core Configuration Tables**
+ - Goal: To establish the database tables necessary for storing core application configurations like summarization prompts, newsletter templates, and subscriber lists.
+ - Acceptance Criteria:
+ - A Supabase migration is created and applied to define the `summarization_prompts` table schema as specified in `architecture.txt`.
+ - A Supabase migration is created and applied to define the `newsletter_templates` table schema as specified in `architecture.txt`.
+ - A Supabase migration is created and applied to define the `subscribers` table schema as specified in `architecture.txt`.
+ - These tables are ready for data population (e.g., via seeding or manual entry for MVP).
+- **Story 1.11 (New): Create Seed Data for Initial Configuration**
+ - Goal: To populate the database with initial configuration data (prompts, templates, test subscribers) necessary for development and testing of MVP features.
+ - Acceptance Criteria:
+ - A `supabase/seed.sql` file (or an equivalent, documented seeding mechanism) is created.
+ - The seed mechanism populates the `summarization_prompts` table with at least one default article prompt and one default comment prompt.
+ - The seed mechanism populates the `newsletter_templates` table with at least one default newsletter template (HTML format for MVP).
+ - The seed mechanism populates the `subscribers` table with a small list of 1-3 test email addresses for MVP delivery testing.
+ - Instructions on how to apply the seed data to a local or development Supabase instance are documented (e.g., in the project `README.md`).
+- **Story 1.12 (New): Set up and Configure Project Testing Frameworks**
+ - Goal: To ensure that the primary testing frameworks (Jest, React Testing Library, Playwright) are installed and configured early in the project lifecycle, enabling test-driven development practices and adherence to the testing strategy.
+ - Acceptance Criteria:
+ - Jest and React Testing Library (RTL) are installed as project dependencies.
+ - Jest and RTL are configured for unit and integration testing of Next.js components and JavaScript/TypeScript code (e.g., `jest.config.js` is set up, necessary Babel/TS transformations are in place).
+ - A sample unit test (e.g., for a simple component or utility function) is created and runs successfully using the Jest/RTL setup.
+ - Playwright is installed as a project dependency.
+ - Playwright is configured for end-to-end testing (e.g., `playwright.config.ts` is set up, browser configurations are defined).
+ - A sample E2E test (e.g., navigating to the application's homepage on the local development server) is created and runs successfully using Playwright.
+ - Scripts to execute tests (e.g., unit tests, E2E tests) are added to `package.json`.
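+
+As a rough illustration for Story 1.5, the `WorkflowTrackerService` might expose methods along the following lines. Method names, signatures, and column mappings are assumptions for illustration; the actual design comes from the architecture document and the implementation stories.
+
+```typescript
+// supabase/functions/_shared/workflow-tracker-service.ts -- illustrative sketch only.
+import type { SupabaseClient } from "@supabase/supabase-js";
+
+export class WorkflowTrackerService {
+  constructor(private supabase: SupabaseClient) {}
+
+  // Create a new workflow_runs row and return its id (the jobId returned by the trigger API).
+  async initiateRun(): Promise<string> {
+    const { data, error } = await this.supabase
+      .from("workflow_runs")
+      .insert({ status: "pending" })
+      .select("id")
+      .single();
+    if (error) throw error;
+    return data.id as string;
+  }
+
+  // Record progress for the current step, e.g. "fetched 30 posts".
+  async updateStepDetails(runId: string, status: string, details?: Record<string, unknown>): Promise<void> {
+    const { error } = await this.supabase
+      .from("workflow_runs")
+      .update({
+        status,
+        current_step_details: JSON.stringify(details ?? {}),
+        last_updated_at: new Date().toISOString(),
+      })
+      .eq("id", runId);
+    if (error) throw error;
+  }
+
+  async failRun(runId: string, errorMessage: string): Promise<void> {
+    const { error } = await this.supabase
+      .from("workflow_runs")
+      .update({ status: "failed", error_message: errorMessage, last_updated_at: new Date().toISOString() })
+      .eq("id", runId);
+    if (error) throw error;
+  }
+
+  async completeRun(runId: string): Promise<void> {
+    const { error } = await this.supabase
+      .from("workflow_runs")
+      .update({ status: "completed", last_updated_at: new Date().toISOString() })
+      .eq("id", runId);
+    if (error) throw error;
+  }
+}
+```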
diff --git a/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/epic-2.md b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/epic-2.md
new file mode 100644
index 00000000..d9616af2
--- /dev/null
+++ b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/epic-2.md
@@ -0,0 +1,39 @@
+# Epic 2: Article Scraping
+
+> This document is a granulated shard from the main "BETA-V3/v3-demos/full-stack-app-demo/8-prd-po-updated.md" focusing on "Epic 2: Article Scraping".
+
+- Goal: Implement the functionality to scrape and store linked articles from HN posts, enriching the data available for summarization and the newsletter. Ensure this functionality is triggered by database events and can be tested via API/CLI (if retained). Implement the database event mechanism to trigger subsequent processing.
+
+- **Story 2.1:** As a system, I want to identify URLs within the top 30 (configurable via environment variable) Hacker News posts, so that I can extract the content of linked articles.
+ - Acceptance Criteria:
+ - The system parses the top N (configurable via env var) Hacker News posts to identify URLs.
+ - The system filters out any URLs that are not relevant to article scraping (e.g., links to images, videos, etc.).
+- **Story 2.2:** As a system, I want to scrape the content of the identified article URLs using Cheerio, so that I can provide summaries in the newsletter. _(An illustrative extraction sketch follows the stories in this epic.)_
+ - Acceptance Criteria:
+ - The system scrapes the content from the identified article URLs using Cheerio.
+ - The system extracts relevant content such as the article title, author, publication date, and main text.
+ - The system handles potential issues during scraping, such as website errors or changes in website structure, logging errors for review.
+- **Story 2.3:** As a system, I want to store the scraped article content in the Supabase database, associated with the corresponding Hacker News post and workflow run, so that it can be used for summarization and newsletter generation.
+ - Acceptance Criteria:
+ - Scraped article content is stored in the `scraped_articles` table, linked to the `hn_post_id` and the current `workflow_run_id`.
+ - The system ensures that the stored data includes all extracted information (title, author, date, text).
+ - The `scraping_status` and any `error_message` are recorded in the `scraped_articles` table.
+ - Upon completion of scraping an article (success or failure), the service updates the `workflow_runs.details` (e.g., incrementing scraped counts) via `WorkflowTrackerService`.
+ - A Supabase migration for the `scraped_articles` table (as defined in `architecture.txt`) is created and applied before data operations.
+- **Story 2.4:** As a developer, I want to trigger the article scraping process via the API and CLI, so that I can manually initiate it for testing and debugging.
+ - _Architect's Note: This story might become redundant if the main workflow trigger (Story 1.3) handles the entire pipeline initiation and individual service testing is done via direct function invocation or unit/integration tests._
+ - Acceptance Criteria:
+ - The API endpoint can trigger the article scraping process.
+ - The CLI command can trigger the article scraping process locally.
+ - The system logs the start and completion of the scraping process, including any errors encountered.
+ - All API requests and CLI command executions are logged, including timestamps and any relevant data.
+ - The system handles partial execution gracefully (i.e., if triggered before Epic 1 components like `WorkflowTrackerService` are available, it logs a message and exits).
+ - If retained for isolated testing, all scraping operations initiated via this trigger must be associated with a valid `workflow_run_id` and update the `workflow_runs` table accordingly via `WorkflowTrackerService`.
+- **Story 2.5 (New): Implement Database Event/Webhook: `scraped_articles` Success to Summarization Service**
+ - Goal: To ensure that the successful scraping and storage of an article in `scraped_articles` automatically triggers the `SummarizationService`.
+ - Acceptance Criteria:
+ - A Supabase database trigger or webhook mechanism is implemented on the `scraped_articles` table (e.g., on INSERT or UPDATE where `scraping_status` is 'success').
+ - The trigger successfully invokes the `SummarizationService` (Supabase Function).
+ - The invocation passes necessary parameters like `scraped_article_id` and `workflow_run_id` to the `SummarizationService`.
+ - The mechanism is robust and includes error handling/logging for the trigger/webhook itself.
+ - Unit/integration tests are created to verify the trigger fires correctly and the service is invoked with correct parameters.
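+
+As a rough illustration for Story 2.2, a Cheerio-based extraction step might look like the sketch below. The selectors and fallback heuristics are assumptions; real-world extraction will need more robust handling of site-specific markup.
+
+```typescript
+// Illustrative sketch of Cheerio-based article extraction (Story 2.2); selectors are assumptions.
+import * as cheerio from "cheerio";
+
+export interface ExtractedArticle {
+  title?: string;
+  author?: string;
+  publicationDate?: string;
+  mainTextContent?: string;
+}
+
+export function extractArticle(html: string): ExtractedArticle {
+  const $ = cheerio.load(html);
+  return {
+    // Prefer common metadata tags, falling back to the document title.
+    title: $('meta[property="og:title"]').attr("content") ?? $("title").text(),
+    author: $('meta[name="author"]').attr("content"),
+    publicationDate: $('meta[property="article:published_time"]').attr("content"),
+    // Naive main-text heuristic: concatenate paragraph text from <article>, or the body if absent.
+    mainTextContent: ($("article").length ? $("article p") : $("body p"))
+      .map((_i, el) => $(el).text().trim())
+      .get()
+      .join("\n\n"),
+  };
+}
+```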
diff --git a/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/epic-3.md b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/epic-3.md
new file mode 100644
index 00000000..2c659d9f
--- /dev/null
+++ b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/epic-3.md
@@ -0,0 +1,41 @@
+# Epic 3: AI-Powered Content Summarization
+
+> This document is a granulated shard from the main "BETA-V3/v3-demos/full-stack-app-demo/8-prd-po-updated.md" focusing on "Epic 3: AI-Powered Content Summarization".
+
+- Goal: Integrate AI summarization capabilities by implementing and using a configurable, testable `LLMFacade` to generate concise summaries of articles and comments from prompts stored in the database. This will enrich the newsletter content; the process is triggerable via API/CLI, is triggered by database events, and tracks progress via `WorkflowTrackerService`.
+
+- **Story 3.1:** As a system, I want to integrate an AI summarization capability by implementing and using an `LLMFacade`, so that I can generate concise summaries of articles and comments using various configurable LLM providers.
+ - Acceptance Criteria:
+ - An `LLMFacade` interface and concrete implementations (e.g., `OllamaAdapter`, `RemoteLLMApiAdapter`) are created in `supabase/functions/_shared/llm-facade.ts`.
+ - A factory function is implemented within or alongside the facade to select the appropriate LLM adapter based on environment variables (e.g., `LLM_PROVIDER_TYPE`, `OLLAMA_API_URL`, `REMOTE_LLM_API_KEY`, `REMOTE_LLM_API_URL`, `LLM_MODEL_NAME`).
+ - The `LLMFacade` handles making requests to the respective LLM APIs (as configured) and parsing their responses to extract the summary.
+ - Robust error handling and retry logic for transient API errors are implemented within the facade.
+ - Unit tests for the `LLMFacade` and its adapters (mocking actual HTTP calls) achieve >80% coverage.
+ - The system utilizes this `LLMFacade` for all summarization tasks (articles and comments).
+ - The integration is configurable via environment variables to switch between local and remote LLMs and specify model names.
+- **Story 3.2:** As a system, I want to retrieve summarization prompts from the database and then use them via the `LLMFacade` to generate 2-paragraph summaries of the scraped articles, so that users can quickly grasp the main content and the prompts can be easily updated. _(An illustrative sketch of this flow follows the stories in this epic.)_
+ - Acceptance Criteria:
+ - The service retrieves the appropriate summarization prompt from the `summarization_prompts` table.
+ - The system generates a 2-paragraph summary for each scraped article using the retrieved prompt via the `LLMFacade`.
+ - Generated summaries are stored in the `article_summaries` table, linked to the `scraped_article_id` and the current `workflow_run_id`.
+ - The summaries are accurate and capture the key information from the article.
+ - Upon completion of each article summarization task, the service updates `workflow_runs.details` (e.g., incrementing article summaries generated counts) via `WorkflowTrackerService`.
+ - (System Note: The `CheckWorkflowCompletionService` monitors the `article_summaries` table as part of determining overall summarization completion for a `workflow_run_id`).
+ - A Supabase migration for the `article_summaries` table (as defined in `architecture.txt`) is created and applied before data operations.
+- **Story 3.3:** As a system, I want to retrieve summarization prompts from the database, and then use them via the `LLMFacade` to generate 2-paragraph summaries of the comments for the selected HN posts, so that users can understand the main discussions and the prompts can be easily updated.
+ - Acceptance Criteria:
+ - The service retrieves the appropriate summarization prompt from the `summarization_prompts` table.
+ - The system generates a 2-paragraph summary of the comments for each selected HN post using the retrieved prompt via the `LLMFacade`.
+ - Generated summaries are stored in the `comment_summaries` table, linked to the `hn_post_id` and the current `workflow_run_id`.
+ - The summaries highlight interesting interactions and key points from the discussion.
+ - Upon completion of each comment summarization task, the service updates `workflow_runs.details` (e.g., incrementing comment summaries generated counts) via `WorkflowTrackerService`.
+ - (System Note: The `CheckWorkflowCompletionService` monitors the `comment_summaries` table as part of determining overall summarization completion for a `workflow_run_id`).
+ - A Supabase migration for the `comment_summaries` table (as defined in `architecture.txt`) is created and applied before data operations.
+- **Story 3.4:** As a developer, I want to trigger the AI summarization process via the API and CLI, so that I can manually initiate it for testing and debugging.
+ - Acceptance Criteria:
+ - The API endpoint can trigger the AI summarization process.
+ - The CLI command can trigger the AI summarization process locally.
+ - The system logs the input and output of the summarization process, including the summarization prompt used and any errors.
+ - All API requests and CLI command executions are logged, including timestamps and any relevant data.
+ - The system handles partial execution gracefully (i.e., if triggered before Epic 2 is complete, it logs a message and exits).
+ - All summarization operations initiated via this trigger must be associated with a valid `workflow_run_id` and update the `workflow_runs` table accordingly via `WorkflowTrackerService`.
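+
+To make Stories 3.2 and 3.3 more concrete, the flow of "retrieve the default prompt, call the `LLMFacade`, store the summary" might look roughly like the sketch below. Table and column names follow the data model; the facade factory and its `summarize` method are assumptions carried over from Story 3.1.
+
+```typescript
+// Illustrative sketch for Stories 3.2/3.3 -- not the final SummarizationService implementation.
+import { createClient } from "@supabase/supabase-js";
+import { createLLMFacade } from "./llm-facade"; // hypothetical factory from Story 3.1
+
+export async function summarizeArticle(
+  supabaseUrl: string,
+  supabaseKey: string,
+  scrapedArticleId: string,
+  workflowRunId: string,
+  articleText: string,
+  env: Record<string, string | undefined>
+): Promise<void> {
+  const supabase = createClient(supabaseUrl, supabaseKey);
+
+  // 1. Retrieve the default article summarization prompt.
+  const { data: prompt, error: promptError } = await supabase
+    .from("summarization_prompts")
+    .select("prompt_text, version")
+    .eq("is_default_article_prompt", true)
+    .single();
+  if (promptError || !prompt) throw promptError ?? new Error("No default article prompt found");
+
+  // 2. Generate the summary via the LLMFacade (provider selected from environment variables).
+  const llm = createLLMFacade(env);
+  const summaryText = await llm.summarize(prompt.prompt_text, articleText);
+
+  // 3. Store the summary, linked to the scraped article and the current workflow run.
+  const { error: insertError } = await supabase.from("article_summaries").insert({
+    scraped_article_id: scrapedArticleId,
+    summary_text: summaryText,
+    llm_prompt_version: prompt.version,
+    workflow_run_id: workflowRunId,
+  });
+  if (insertError) throw insertError;
+}
+```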
diff --git a/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/epic-4.md b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/epic-4.md
new file mode 100644
index 00000000..4ab1bda1
--- /dev/null
+++ b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/epic-4.md
@@ -0,0 +1,43 @@
+# Epic 4: Automated Newsletter Creation and Distribution
+
+> This document is a granulated shard from the main "BETA-V3/v3-demos/full-stack-app-demo/8-prd-po-updated.md" focusing on "Epic 4: Automated Newsletter Creation and Distribution".
+
+- Goal: Automate the generation and delivery of the daily newsletter by implementing and using a configurable `EmailDispatchFacade`. This includes handling podcast link availability, manual triggering via API/CLI, orchestration by `CheckWorkflowCompletionService`, and status tracking via `WorkflowTrackerService`.
+
+- **Story 4.1:** As a system, I want to retrieve the newsletter template from the database, so that the newsletter's design and structure can be updated without code changes.
+ - Acceptance Criteria:
+ - The system retrieves the newsletter template from the `newsletter_templates` database table.
+- **Story 4.2:** As a system, I want to generate a daily newsletter in HTML format using the retrieved template, so that users can receive a concise summary of Hacker News content.
+ - Acceptance Criteria:
+ - The `NewsletterGenerationService` is triggered by the `CheckWorkflowCompletionService` when all summaries for a `workflow_run_id` are ready.
+ - The service retrieves the newsletter template (from Story 4.1 output) from `newsletter_templates` table and summaries associated with the `workflow_run_id`.
+ - The system generates a newsletter in HTML format using the template retrieved from the database.
+ - The newsletter includes summaries of selected articles and comments.
+ - The newsletter includes links to the original HN posts and articles.
+ - The newsletter includes the original post dates/times.
+ - Generated newsletter is stored in the `newsletters` table, linked to the `workflow_run_id`.
+ - The service updates `workflow_runs.status` to 'generating_podcast' (or a similar appropriate status indicating handoff to podcast generation) after initiating podcast generation (as part of Epic 5 logic that will be invoked by this service or by `CheckWorkflowCompletionService` after this story's core task).
+ - A Supabase migration for the `newsletters` table (as defined in `architecture.txt`) is created and applied before data operations.
+- **Story 4.3:** As a system, I want to send the generated newsletter to a list of subscribers by implementing and using an `EmailDispatchFacade`, with credentials securely provided, so that users receive the daily summary in their inbox. _(An illustrative facade sketch follows the stories in this epic.)_
+ - Acceptance Criteria:
+ - An `EmailDispatchFacade` is implemented in `supabase/functions/_shared/` to abstract interaction with the email sending service (initially Nodemailer via SMTP).
+ - The facade handles configuration (e.g., SMTP settings from environment variables), email construction (From, To, Subject, HTML content), and sending the email.
+ - The facade includes error handling for email dispatch and logs relevant status information.
+ - Unit tests for the `EmailDispatchFacade` (mocking the actual Nodemailer library calls) achieve >80% coverage.
+ - The `NewsletterGenerationService` (specifically, its delivery part, utilizing the `EmailDispatchFacade`) is triggered by `CheckWorkflowCompletionService` once the podcast link is available in the `newsletters` table for the `workflow_run_id` (or a configured timeout/failure condition for the podcast step has been met).
+ - The system retrieves the list of subscriber email addresses from the Supabase database.
+ - The system sends the HTML newsletter (with podcast link conditionally included) to all active subscribers using the `EmailDispatchFacade`.
+ - Credentials for the email service (e.g., SMTP server details) are securely accessed via environment variables and used by the facade.
+ - The system logs the delivery status for each subscriber (potentially via the facade).
+ - The system implements conditional logic for podcast link inclusion (from `newsletters` table) and handles delay/retry as per PRD, coordinated by `CheckWorkflowCompletionService`.
+ - Updates `newsletters.delivery_status` (e.g., 'sent', 'failed') and `workflow_runs.status` to 'completed' or 'failed' via `WorkflowTrackerService` upon completion or failure of delivery.
+ - The initial email template includes a placeholder for the podcast URL.
+ - The end-to-end generation time for a typical daily newsletter (from workflow trigger to successful email dispatch initiation, for a small set of content) is measured and logged during testing to ensure it's within a reasonable operational timeframe (target < 30 minutes).
+- **Story 4.4:** As a developer, I want to trigger the newsletter generation and distribution process via the API and CLI, so that I can manually initiate it for testing and debugging.
+ - Acceptance Criteria:
+ - The API endpoint can trigger the newsletter generation and distribution process.
+ - The CLI command can trigger the newsletter generation and distribution process locally.
+ - The system logs the start and completion of the process, including any errors.
+ - All API requests and CLI command executions are logged, including timestamps and any relevant data.
+ - The system handles partial execution gracefully (i.e., if triggered before Epic 3 is complete, it logs a message and exits).
+ - All newsletter operations initiated via this trigger must be associated with a valid `workflow_run_id` and update the `workflow_runs` table accordingly via `WorkflowTrackerService`.
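+
+As a rough illustration for Story 4.3, an `EmailDispatchFacade` built on Nodemailer might look like the sketch below. The SMTP environment variable names and the method signature are assumptions; the real facade will follow the project's operational guidelines.
+
+```typescript
+// supabase/functions/_shared/email-dispatch-facade.ts -- illustrative sketch only (Story 4.3).
+import nodemailer, { type Transporter } from "nodemailer";
+
+export class EmailDispatchFacade {
+  private transporter: Transporter;
+
+  // SMTP settings come from environment variables; the variable names used here are assumptions.
+  constructor(env: Record<string, string | undefined>) {
+    this.transporter = nodemailer.createTransport({
+      host: env.SMTP_HOST,
+      port: Number(env.SMTP_PORT ?? 587),
+      secure: env.SMTP_SECURE === "true",
+      auth: { user: env.SMTP_USER ?? "", pass: env.SMTP_PASS ?? "" },
+    });
+  }
+
+  // Send one HTML newsletter to a single subscriber and report success or failure.
+  async sendNewsletter(from: string, to: string, subject: string, html: string): Promise<boolean> {
+    try {
+      await this.transporter.sendMail({ from, to, subject, html });
+      return true;
+    } catch (err) {
+      console.error(`Failed to send newsletter to ${to}`, err);
+      return false;
+    }
+  }
+}
+```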
diff --git a/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/epic-5.md b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/epic-5.md
new file mode 100644
index 00000000..804168db
--- /dev/null
+++ b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/epic-5.md
@@ -0,0 +1,36 @@
+# Epic 5: Podcast Generation Integration
+
+> This document is a granulated shard from the main "BETA-V3/v3-demos/full-stack-app-demo/8-prd-po-updated.md" focusing on "Epic 5: Podcast Generation Integration".
+
+- Goal: Integrate with an audio generation API (initially Play.ht) by implementing and using a configurable `AudioGenerationFacade` to create podcast versions of the newsletter. This includes handling webhooks to update newsletter data and workflow status. Ensure the process is triggerable via API/CLI, orchestrated appropriately, and tracked via `WorkflowTrackerService`.
+
+- **Story 5.1:** As a system, I want to integrate with an audio generation API (e.g., Play.ht's PlayNote API) by implementing and using an `AudioGenerationFacade`, so that I can generate AI-powered podcast versions of the newsletter content.
+ - Acceptance Criteria:
+ - An `AudioGenerationFacade` is implemented in `supabase/functions/_shared/` to abstract interaction with the audio generation service (initially Play.ht).
+ - The facade handles API authentication, request formation (e.g., sending content for synthesis, providing webhook URL), and response parsing for the specific audio generation service.
+ - The facade is configurable via environment variables (e.g., API key, user ID, service endpoint, webhook URL base).
+ - Robust error handling and retry logic for transient API errors are implemented within the facade.
+ - Unit tests for the `AudioGenerationFacade` (mocking actual HTTP calls to the Play.ht API) achieve >80% coverage.
+ - The system uses this `AudioGenerationFacade` for all podcast generation tasks.
+ - The integration employs webhooks for asynchronous status updates from the audio generation service.
+ - (Context: The `PodcastGenerationService` containing this logic is invoked by `NewsletterGenerationService` or `CheckWorkflowCompletionService` for a specific `workflow_run_id` and `newsletter_id`.)
+- **Story 5.2:** As a system, I want to send the newsletter content to the audio generation service via the `AudioGenerationFacade` to initiate podcast creation, and receive a job ID or initial response, so that I can track the podcast creation process.
+ - Acceptance Criteria:
+ - The system sends the newsletter content (identified by `newsletter_id` for a given `workflow_run_id`) to the configured audio generation service via the `AudioGenerationFacade`.
+ - The system receives a job ID or initial response from the service via the facade.
+ - The `podcast_playht_job_id` (or a generic `podcast_job_id`) and `podcast_status` (e.g., 'generating', 'submitted') are stored in the `newsletters` table, linked to the `workflow_run_id`.
+- **Story 5.3:** As a system, I want to implement a webhook handler to receive the podcast URL from the audio generation service and update the newsletter data and workflow status, so that the podcast link can be included in the newsletter and web interface, and the overall workflow can proceed. _(An illustrative handler sketch follows the stories in this epic.)_
+ - Acceptance Criteria:
+ - The system implements a webhook handler (`PlayHTWebhookHandlerAPI` at `/api/webhooks/playht` or a more generic path like `/api/webhooks/audio-generation`) to receive the podcast URL and status from the audio generation service.
+ - The webhook handler extracts the podcast URL and status (e.g., 'completed', 'failed') from the webhook payload.
+ - The webhook handler updates the `newsletters` table with the podcast URL and status for the corresponding job.
+ - The `PlayHTWebhookHandlerAPI` also updates the `workflow_runs.details` with the podcast status (e.g., `podcast_status: 'completed'`) via `WorkflowTrackerService` for the relevant `workflow_run_id` (which may need to be looked up from the `newsletter_id` or job ID present in the webhook or associated with the service job).
+ - If supported by the audio generation service (e.g., Play.ht), implement security verification for the incoming webhook (such as shared secret or signature validation) to ensure authenticity. If direct verification mechanisms are not supported by the provider, this specific AC is N/A, and alternative measures (like IP whitelisting, if applicable and secure) should be considered and documented.
+- **Story 5.4:** As a developer, I want to trigger the podcast generation process via the API and CLI, so that I can manually initiate it for testing and debugging.
+ - Acceptance Criteria:
+ - The API endpoint can trigger the podcast generation process.
+ - The CLI command can trigger the podcast generation process locally.
+ - The system logs the start and completion of the process, including any intermediate steps, responses from the audio generation service, and webhook interactions.
+ - All API requests and CLI command executions are logged, including timestamps and any relevant data.
+ - The system handles partial execution gracefully (i.e., if triggered before Epic 4 components are ready, it logs a message and exits).
+ - All podcast generation operations initiated via this trigger must be associated with a valid `workflow_run_id` and `newsletter_id`, and update the `workflow_runs` and `newsletters` tables accordingly via `WorkflowTrackerService` and direct table updates as necessary.
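+
+As a rough illustration for Story 5.3, a Next.js Route Handler for the audio-generation webhook might look like the sketch below. The payload field names are assumptions and must be verified against the provider's (Play.ht) webhook documentation; the environment variable names are also illustrative.
+
+```typescript
+// app/api/webhooks/playht/route.ts -- illustrative sketch only (Story 5.3).
+import { NextResponse } from "next/server";
+import { createClient } from "@supabase/supabase-js";
+
+export async function POST(request: Request) {
+  const payload = await request.json();
+  const jobId: string | undefined = payload.jobId; // assumed field name
+  const audioUrl: string | undefined = payload.audioUrl; // assumed field name
+  const status: string = payload.status ?? "unknown"; // assumed field name
+
+  if (!jobId) {
+    return NextResponse.json({ error: "Missing job id" }, { status: 400 });
+  }
+
+  // Server-side client (service role) so the handler can update rows; variable names are illustrative.
+  const supabase = createClient(
+    process.env.SUPABASE_URL ?? "",
+    process.env.SUPABASE_SERVICE_ROLE_KEY ?? ""
+  );
+
+  // Update the newsletter row that corresponds to this podcast generation job.
+  const { error } = await supabase
+    .from("newsletters")
+    .update({ podcast_url: audioUrl ?? null, podcast_status: status })
+    .eq("podcast_playht_job_id", jobId);
+
+  if (error) {
+    return NextResponse.json({ error: error.message }, { status: 500 });
+  }
+
+  // Propagating status to workflow_runs.details would go through WorkflowTrackerService (Story 5.3 AC).
+  return NextResponse.json({ received: true });
+}
+```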
diff --git a/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/epic-6.md b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/epic-6.md
new file mode 100644
index 00000000..073f1752
--- /dev/null
+++ b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/epic-6.md
@@ -0,0 +1,44 @@
+# Epic 6: Web Interface for Initial Structure and Content Access
+
+> This document is a granulated shard from the main "BETA-V3/v3-demos/full-stack-app-demo/8-prd-po-updated.md" focusing on "Epic 6: Web Interface for Initial Structure and Content Access".
+
+- Goal: Develop a user-friendly, responsive, and accessible web interface, based on `frontend-architecture.md`, to display newsletters and provide access to podcast content, aligning with the project's visual and technical guidelines. All UI development within this epic must adhere to the "synthwave technical glowing purple vibes" aesthetic using Tailwind CSS and Shadcn UI, ensure basic mobile responsiveness, meet WCAG 2.1 Level A accessibility guidelines (including semantic HTML, keyboard navigation, alt text, color contrast), and optimize images using `next/image`, as detailed in `frontend-architecture.md` and `ui-ux-spec.txt`.
+
+- **Story 6.1:** As a developer, I want to establish the initial Next.js App Router structure for the web interface, including core layouts and routing, using `frontend-architecture.md` as a guide, so that I have a foundational frontend structure.
+ - Acceptance Criteria:
+ - Initial HTML/CSS mockups (e.g., from Vercel v0, if used) serve as a visual guide, but implementation uses Next.js and Shadcn UI components as per `frontend-architecture.md`.
+ - Next.js App Router routes are set up for `/newsletters` (listing page) and `/newsletters/[newsletterId]` (detail page) within an `app/(web)/` route group.
+ - Root layout (`app/(web)/layout.tsx`) and any necessary feature-specific layouts (e.g., `app/(web)/newsletters/layout.tsx`) are implemented using Next.js App Router conventions and Tailwind CSS.
+ - A `PageWrapper.tsx` component (as defined in `frontend-architecture.txt`) is implemented and used for consistent page styling (e.g., padding, max-width).
+ - Basic page structure renders correctly in development environment.
+- **Story 6.2:** As a user, I want to see a list of current and past newsletters on the `/newsletters` page, so that I can easily browse available content.
+ - Acceptance Criteria:
+ - The `app/(web)/newsletters/page.tsx` route displays a list of newsletters.
+ - Newsletter items are displayed using a `NewsletterCard.tsx` component.
+ - The `NewsletterCard.tsx` component is developed (e.g., using Shadcn UI `Card` as a base), displaying at least the newsletter title, target date, and a link/navigation to its detail page.
+ - `NewsletterCard.tsx` is styled using Tailwind CSS to fit the "synthwave" theme.
+ - Data for the newsletter list (e.g., ID, title, date) is fetched server-side on `app/(web)/newsletters/page.tsx` using the Supabase server client.
+ - The newsletter list page is responsive across common device sizes (mobile, desktop).
+ - The list includes relevant information such as the newsletter title and date.
+ - The list is paginated or provides scrolling functionality to handle a large number of newsletters.
+  - Key page load performance (e.g., Largest Contentful Paint) for the newsletter list page is benchmarked during development testing (e.g., using browser developer tools or Lighthouse) to confirm it meets the fast-load target (< 2 seconds).
+- **Story 6.3:** As a user, I want to be able to select a newsletter from the list and read its full content within the web page on the `/newsletters/[newsletterId]` page.
+ - Acceptance Criteria:
+ - Clicking on a `NewsletterCard` navigates to the corresponding `app/(web)/newsletters/[newsletterId]/page.tsx` route.
+ - The full HTML content of the selected newsletter is retrieved server-side using the Supabase server client and displayed in a readable format.
+ - A `BackButton.tsx` component is developed (e.g., using Shadcn UI `Button` as a base) and integrated on the newsletter detail page, allowing users to navigate back to the newsletter list.
+ - The newsletter detail page content area is responsive across common device sizes.
+  - Key page load performance (e.g., Largest Contentful Paint) for the newsletter detail page is benchmarked during development testing (e.g., using browser developer tools or Lighthouse) to confirm it meets the fast-load target (< 2 seconds).
+- **Story 6.4:** As a user, I want to have the option to download the currently viewed newsletter from its detail page, so that I can access it offline.
+ - Acceptance Criteria:
+ - A `DownloadButton.tsx` component is developed (e.g., using Shadcn UI `Button` as a base).
+ - The `DownloadButton.tsx` is integrated and visible on the newsletter detail page (`/newsletters/[newsletterId]`).
+ - Clicking the button initiates a download of the newsletter content (e.g., HTML format for MVP).
+- **Story 6.5:** As a user, I want to listen to the generated podcast associated with a newsletter within the web interface on its detail page, if a podcast is available.
+ - Acceptance Criteria:
+ - A `PodcastPlayer.tsx` React component with standard playback controls (play, pause, seek bar, volume control) is developed.
+  - A `podcastPlayerSlice.ts` Zustand store is implemented to manage podcast player state (e.g., current track URL, playback status, current time, volume); a minimal store sketch follows this epic's stories.
+ - The `PodcastPlayer.tsx` component integrates with the `podcastPlayerSlice.ts` Zustand store for its state management.
+ - If a podcast URL is available for the displayed newsletter (fetched from Supabase), the `PodcastPlayer.tsx` component is displayed on the newsletter detail page.
+ - The `PodcastPlayer.tsx` can load and play the podcast audio from the provided URL.
+ - The `PodcastPlayer.tsx` is styled using Tailwind CSS to fit the "synthwave" theme and is responsive.
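+
+A minimal sketch of the `podcastPlayerSlice.ts` store described in Story 6.5, assuming Zustand's `create` API; the exact state shape and action names are illustrative, not final:
+
+```typescript
+// stores/podcastPlayerSlice.ts (illustrative; final location follows the frontend architecture docs)
+import { create } from "zustand";
+
+interface PodcastPlayerState {
+  trackUrl: string | null;
+  isPlaying: boolean;
+  currentTime: number; // seconds
+  volume: number; // 0..1
+  loadTrack: (url: string) => void;
+  play: () => void;
+  pause: () => void;
+  seek: (time: number) => void;
+  setVolume: (volume: number) => void;
+}
+
+export const usePodcastPlayerStore = create<PodcastPlayerState>((set) => ({
+  trackUrl: null,
+  isPlaying: false,
+  currentTime: 0,
+  volume: 1,
+  loadTrack: (url) => set({ trackUrl: url, isPlaying: false, currentTime: 0 }),
+  play: () => set({ isPlaying: true }),
+  pause: () => set({ isPlaying: false }),
+  seek: (time) => set({ currentTime: time }),
+  setVolume: (volume) => set({ volume }),
+}));
+```
+
+`PodcastPlayer.tsx` would then read and update playback state through `usePodcastPlayerStore`, as required by the remaining acceptance criteria.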
diff --git a/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/front-end-api-interaction.md b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/front-end-api-interaction.md
new file mode 100644
index 00000000..559938fd
--- /dev/null
+++ b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/front-end-api-interaction.md
@@ -0,0 +1,73 @@
+# API Interaction Layer
+
+> This document is a granulated shard from the main "5-front-end-architecture.md" focusing on "API Interaction Layer".
+
+The frontend will interact with Supabase for data. Server Components will fetch data directly using the server-side Supabase client. Client Components that need to mutate data or trigger backend logic will use Next.js Server Actions or, if necessary, dedicated Next.js API Route Handlers, which then interact with Supabase.
+
+### Client/Service Structure
+
+- **HTTP Client Setup (for Next.js API Route Handlers, if used extensively):**
+
+  - While Server Components and Server Actions are preferred for Supabase interactions, if direct calls from the client to custom Next.js API routes are needed, a simple `fetch` wrapper or a lightweight client like `ky` could be used (a minimal wrapper sketch follows this list).
+ - The Vercel/Supabase template provides `utils/supabase/client.ts` (for client-side components) and `utils/supabase/server.ts` (for Server Components, Route Handlers, Server Actions). These will be the primary interfaces to Supabase.
+ - **Base URL:** Not applicable for direct Supabase client usage. For custom API routes: relative paths (e.g., `/api/my-route`).
+ - **Authentication:** The Supabase clients handle auth token management. For custom API routes, Next.js middleware (`middleware.ts`) would handle session verification.
+
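+A minimal sketch of such a wrapper, assuming plain `fetch` against a hypothetical custom route; the helper name, path, and response type are illustrative:
+
+```typescript
+// utils/apiClient.ts (illustrative thin wrapper for custom Next.js API routes)
+export async function apiFetch<T>(path: string, init?: RequestInit): Promise<T> {
+  const res = await fetch(path, init);
+  if (!res.ok) {
+    throw new Error(`Request to ${path} failed with status ${res.status}`);
+  }
+  return (await res.json()) as T;
+}
+
+// Usage (hypothetical route and type):
+// const newsletters = await apiFetch<Newsletter[]>("/api/newsletters");
+```
+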
+- **Service Definitions (Conceptual for Supabase Data Access):**
+
+ - No separate "service" files like `userService.ts` are strictly necessary for data fetching with Server Components. Data fetching logic will be co-located with the Server Components or within Server Actions.
+ - **Example (Data fetching in a Server Component):**
+
+ ```typescript
+ // app/(web)/newsletters/page.tsx
+ import { createClient } from "@/utils/supabase/server";
+ import NewsletterCard from "@/app/components/core/NewsletterCard"; // Corrected path
+
+ export default async function NewsletterListPage() {
+ const supabase = createClient();
+ const { data: newsletters, error } = await supabase
+ .from("newsletters")
+ .select("id, title, target_date, podcast_url") // Add podcast_url
+ .order("target_date", { ascending: false });
+
+ if (error) console.error("Error fetching newsletters:", error);
+ // Render newsletters or error state
+ }
+ ```
+
+ - **Example (Server Action for a hypothetical "subscribe" feature - future scope):**
+
+ ```typescript
+ // app/actions/subscribeActions.ts
+ "use server";
+ import { createClient } from "@/utils/supabase/server";
+ import { z } from "zod";
+ import { revalidatePath } from "next/cache";
+
+ const EmailSchema = z.string().email();
+
+ export async function subscribeToNewsletter(email: string) {
+ const validation = EmailSchema.safeParse(email);
+ if (!validation.success) {
+ return { error: "Invalid email format." };
+ }
+ const supabase = createClient();
+ const { error } = await supabase
+ .from("subscribers")
+ .insert({ email: validation.data });
+ if (error) {
+ return { error: "Subscription failed." };
+ }
+ revalidatePath("/"); // Example path revalidation
+ return { success: true };
+ }
+ ```
+
+### Error Handling & Retries (Frontend)
+
+- **Server Component Data Fetching Errors:** Errors from Supabase in Server Components should be caught. The component can then render an appropriate error UI or pass error information as props. Next.js error handling (e.g. `error.tsx` files) can also be used for unrecoverable errors.
+- **Client Component / Server Action Errors:**
+  - Server Actions should return structured responses (e.g., `{ success: boolean, data?: any, error?: string }`). Client Components calling Server Actions handle these responses to update the UI (e.g., display error messages or toast notifications); a minimal sketch follows this list.
+ - Shadcn UI includes a `Toast` component which can be used for non-modal error notifications.
+- **UI Error Boundaries:** React Error Boundaries can be implemented at key points in the component tree (e.g., around major layout sections or complex components) to catch rendering errors in Client Components and display a fallback UI, preventing a full app crash. A root `global-error.tsx` can serve as a global boundary.
+- **Retry Logic:** For MVP, retries for data fetching should generally be left to the user (e.g., a "Try Again" button) rather than implemented as automatic client-side retries, unless dealing with specific, known transient issues. Supabase client libraries may have their own internal retry mechanisms for certain types of network errors.
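+
+A minimal sketch of a Client Component consuming the `subscribeToNewsletter` Server Action shown above and surfacing its structured response via the Shadcn UI toast. The form markup, component name, and toast import path assume a standard Shadcn UI setup and are illustrative:
+
+```typescript
+// app/components/core/SubscribeForm.tsx (illustrative; the subscribe feature itself is future scope)
+"use client";
+
+import { useState, type FormEvent } from "react";
+import { subscribeToNewsletter } from "@/app/actions/subscribeActions";
+import { useToast } from "@/components/ui/use-toast"; // path assumes the standard Shadcn UI toast setup
+
+export default function SubscribeForm() {
+  const [email, setEmail] = useState("");
+  const { toast } = useToast();
+
+  async function handleSubmit(e: FormEvent<HTMLFormElement>) {
+    e.preventDefault();
+    const result = await subscribeToNewsletter(email);
+    if ("error" in result && result.error) {
+      toast({ title: "Subscription failed", description: result.error, variant: "destructive" });
+      return;
+    }
+    toast({ title: "Subscribed!" });
+    setEmail("");
+  }
+
+  return (
+    <form onSubmit={handleSubmit}>
+      <input type="email" required value={email} onChange={(e) => setEmail(e.target.value)} />
+      <button type="submit">Subscribe</button>
+    </form>
+  );
+}
+```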
diff --git a/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/front-end-architecture.md b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/front-end-architecture.md
new file mode 100644
index 00000000..3680a66d
--- /dev/null
+++ b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/front-end-architecture.md
@@ -0,0 +1,137 @@
+# BMad DiCaster Frontend Architecture Document
+
+## Table of Contents
+
+- [Introduction](#introduction)
+- [Overall Frontend Philosophy & Patterns](#overall-frontend-philosophy--patterns)
+- [Detailed Frontend Directory Structure](#detailed-frontend-directory-structure)
+- [Component Breakdown & Implementation Details](#component-breakdown--implementation-details)
+ - [Component Naming & Organization](#component-naming--organization)
+ - [Template for Component Specification](#template-for-component-specification)
+- [State Management In-Depth](#state-management-in-depth)
+ - [Chosen Solution](#chosen-solution)
+ - [Rationale](#rationale)
+ - [Store Structure / Slices](#store-structure--slices)
+ - [Key Selectors](#key-selectors)
+ - [Key Actions / Reducers / Thunks](#key-actions--reducers--thunks)
+- [API Interaction Layer](#api-interaction-layer)
+ - [Client/Service Structure](#clientservice-structure)
+ - [Error Handling & Retries (Frontend)](#error-handling--retries-frontend)
+- [Routing Strategy](#routing-strategy)
+ - [Routing Library](#routing-library)
+ - [Route Definitions](#route-definitions)
+ - [Route Guards / Protection](#route-guards--protection)
+- [Build, Bundling, and Deployment](#build-bundling-and-deployment)
+ - [Build Process & Scripts](#build-process--scripts)
+ - [Key Bundling Optimizations](#key-bundling-optimizations)
+ - [Deployment to CDN/Hosting](#deployment-to-cdnhosting)
+- [Frontend Testing Strategy](#frontend-testing-strategy)
+ - [Link to Main Testing Strategy](#link-to-main-testing-strategy)
+ - [Component Testing](#component-testing)
+ - [UI Integration/Flow Testing](#ui-integrationflow-testing)
+ - [End-to-End UI Testing Tools & Scope](#end-to-end-ui-testing-tools--scope)
+- [Accessibility (AX) Implementation Details](#accessibility-ax-implementation-details)
+- [Performance Considerations](#performance-considerations)
+- [Change Log](#change-log)
+
+## Introduction
+
+This document details the technical architecture specifically for the frontend of BMad DiCaster. It complements the main BMad DiCaster Architecture Document and the UI/UX Specification. The goal is to provide a clear blueprint for frontend development, ensuring consistency, maintainability, and alignment with the overall system design and user experience goals.
+
+- **Link to Main Architecture Document:** `docs/architecture.md` (Note: The overall system architecture, including Monorepo/Polyrepo decisions and backend structure, will influence frontend choices, especially around shared code or API interaction patterns.)
+- **Link to UI/UX Specification:** `docs/ui-ux-spec.txt`
+- **Link to Primary Design Files (Figma, Sketch, etc.):** N/A (Low-fidelity wireframes described in `docs/ui-ux-spec.txt`; detailed mockups to be created during development)
+- **Link to Deployed Storybook / Component Showcase (if applicable):** N/A (To be developed)
+
+## Overall Frontend Philosophy & Patterns
+
+> Key aspects of this section have been moved to dedicated documents:
+>
+> - For styling approach, theme customization, and visual design: See [Frontend Style Guide](./front-end-style-guide.md)
+> - For core framework choices, component architecture, data flow, and general coding standards: See [Frontend Coding Standards & Accessibility](./front-end-coding-standards.md#general-coding-standards-from-overall-philosophy--patterns)
+
+## Detailed Frontend Directory Structure
+
+> This section has been moved to a dedicated document: [Detailed Frontend Directory Structure](./front-end-project-structure.md)
+
+## Component Breakdown & Implementation Details
+
+> This section has been moved to a dedicated document: [Component Breakdown & Implementation Details](./front-end-component-guide.md)
+
+## State Management In-Depth
+
+> This section has been moved to a dedicated document: [State Management In-Depth](./front-end-state-management.md)
+
+## API Interaction Layer
+
+> This section has been moved to a dedicated document: [API Interaction Layer](./front-end-api-interaction.md)
+
+## Routing Strategy
+
+> This section has been moved to a dedicated document: [Routing Strategy](./front-end-routing-strategy.md)
+
+## Build, Bundling, and Deployment
+
+Details align with the Vercel platform and Next.js capabilities.
+
+### Build Process & Scripts
+
+- **Key Build Scripts:**
+ - `npm run dev`: Starts Next.js local development server.
+ - `npm run build`: Generates an optimized production build of the Next.js application. (Script from `package.json`)
+ - `npm run start`: Starts the Next.js production server after a build.
+- **Environment Variables Handling during Build:** (a short access sketch follows this list)
+ - Client-side variables must be prefixed with `NEXT_PUBLIC_` (e.g., `NEXT_PUBLIC_SUPABASE_URL`, `NEXT_PUBLIC_SUPABASE_ANON_KEY`).
+ - Server-side variables (used in Server Components, Server Actions, Route Handlers) are accessed directly via `process.env`.
+ - Environment variables are managed in Vercel project settings for different environments (Production, Preview, Development). Local development uses `.env.local`.
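+
+A short sketch of how these variables are read; `SUPABASE_SERVICE_ROLE_KEY` is an illustrative server-only name, not a confirmed project variable:
+
+```typescript
+// Server-side code (Server Components, Server Actions, Route Handlers) can read any variable:
+const supabaseUrl = process.env.NEXT_PUBLIC_SUPABASE_URL;
+const serviceRoleKey = process.env.SUPABASE_SERVICE_ROLE_KEY; // illustrative server-only secret; never expose via NEXT_PUBLIC_
+
+// Client Components only receive NEXT_PUBLIC_* values, inlined at build time;
+// non-prefixed variables are undefined in the browser bundle.
+const anonKey = process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY;
+```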
+
+### Key Bundling Optimizations
+
+- **Code Splitting:** Next.js App Router automatically performs route-based code splitting. Dynamic imports (`next/dynamic`) can be used for further component-level code splitting if needed (a minimal sketch follows this list).
+- **Tree Shaking:** Ensured by Next.js's Webpack configuration during the production build process.
+- **Lazy Loading:** Next.js handles lazy loading of route segments by default. Images (`next/image`) are optimized and can be lazy-loaded.
+- **Minification & Compression:** Handled automatically by Next.js during `npm run build` (JavaScript, CSS minification; Gzip/Brotli compression often handled by Vercel).
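+
+A minimal sketch of component-level code splitting with `next/dynamic`, assuming `PodcastPlayer` is exported as the default from its module; the wrapper file name and path are illustrative:
+
+```typescript
+// app/components/core/PodcastPlayerLazy.tsx (illustrative)
+import dynamic from "next/dynamic";
+
+// The player bundle is fetched only when this component is actually rendered,
+// e.g., on a newsletter detail page that has a podcast URL.
+export const PodcastPlayerLazy = dynamic(
+  () => import("@/app/components/core/PodcastPlayer"),
+  { loading: () => null } // a skeleton component could be returned here instead
+);
+```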
+
+### Deployment to CDN/Hosting
+
+- **Target Platform:** **Vercel** (as per `architecture.txt`)
+- **Deployment Trigger:** Automatic deployments via Vercel's Git integration (GitHub) on pushes/merges to specified branches (e.g., `main` for production, PR branches for previews). (Aligned with `architecture.txt`)
+- **Asset Caching Strategy:** Vercel's Edge Network handles CDN caching for static assets and Server Component payloads. Cache-control headers will be configured according to Next.js defaults and can be customized if necessary (e.g., for `public/` assets).
+
+## Frontend Testing Strategy
+
+> This section has been moved to a dedicated document: [Frontend Testing Strategy](./front-end-testing-strategy.md)
+
+## Accessibility (AX) Implementation Details
+
+> This section has been moved to a dedicated document: [Frontend Coding Standards & Accessibility](./front-end-coding-standards.md#accessibility-ax-implementation-details)
+
+## Performance Considerations
+
+The goal is a fast-loading and responsive user experience.
+
+- **Image Optimization:**
+ - Use `next/image` for automatic image optimization (resizing, WebP format where supported, lazy loading by default).
+- **Code Splitting & Lazy Loading:**
+ - Next.js App Router handles route-based code splitting.
+ - `next/dynamic` for client-side lazy loading of components that are not immediately visible or are heavy.
+- **Minimizing Re-renders (React):**
+  - Judicious use of `React.memo` for components that render frequently with the same props (a small sketch follows this list).
+ - Optimizing Zustand selectors if complex derived state is introduced (though direct access is often sufficient).
+ - Ensuring stable prop references where possible.
+- **Debouncing/Throttling:** Not anticipated for MVP features, but will be considered for future interactive elements like search inputs.
+- **Virtualization:** Not anticipated for MVP given the limited number of items (e.g., 30 newsletters per day). If lists become very long in the future, virtualization libraries like TanStack Virtual will be considered.
+- **Caching Strategies (Client-Side):**
+ - Leverage Next.js's built-in caching for Server Component payloads and static assets via Vercel's Edge Network.
+ - Browser caching for static assets (`public/` folder) will use default optimal headers set by Vercel.
+- **Performance Monitoring Tools:**
+ - Browser DevTools (Performance tab, Lighthouse).
+ - Vercel Analytics (if enabled) for real-user monitoring.
+ - WebPageTest for detailed performance analysis.
+- **Bundle Size Analysis:** Use tools like `@next/bundle-analyzer` to inspect production bundles and identify opportunities for optimization if bundle sizes become a concern.
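+
+A small sketch of the `React.memo` point above, using a hypothetical `VolumeControl` inside the podcast player; the component and prop names are illustrative:
+
+```typescript
+// app/components/core/VolumeControl.tsx (illustrative)
+"use client";
+
+import { memo } from "react";
+
+interface VolumeControlProps {
+  volume: number; // 0..1
+  onVolumeChange: (volume: number) => void;
+}
+
+function VolumeControlBase({ volume, onVolumeChange }: VolumeControlProps) {
+  return (
+    <input
+      type="range"
+      min={0}
+      max={1}
+      step={0.05}
+      value={volume}
+      onChange={(e) => onVolumeChange(Number(e.target.value))}
+      aria-label="Volume"
+    />
+  );
+}
+
+// Skips re-renders while the parent player updates currentTime every tick;
+// onVolumeChange should be a stable reference (e.g., a Zustand action) for the memo to be effective.
+export const VolumeControl = memo(VolumeControlBase);
+```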
+
+## Change Log
+
+| Date | Version | Description | Author |
+| :--------- | :------ | :----------------------------------------------- | :----------------- |
+| 2025-05-13 | 0.1 | Initial draft of frontend architecture document. | 4-design-arch (AI) |
diff --git a/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/front-end-coding-standards.md b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/front-end-coding-standards.md
new file mode 100644
index 00000000..1c934a33
--- /dev/null
+++ b/BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/front-end-coding-standards.md
@@ -0,0 +1,54 @@
+# Frontend Coding Standards & Accessibility
+
+> This document is a granulated shard from the main "5-front-end-architecture.md" focusing on "Front-End Coding Standards and Accessibility Best Practices".
+
+## General Coding Standards (from Overall Philosophy & Patterns)
+
+- **Framework & Core Libraries:**
+ - Next.js (Latest, App Router)
+ - React (19.0.0)
+ - TypeScript (5.7.2)
+- **Component Architecture Approach:**
+ - Shadcn UI for foundational elements.
+ - Application-Specific Components in `app/components/core/`.
+ - Prefer Server Components; use Client Components (`"use client"`) only when necessary for interactivity or browser APIs.
+- **Data Flow:**
+  - Unidirectional: Server Components (data fetching) -> Client Components (props); a brief sketch follows this list.
+ - Mutations/Actions: Next.js Server Actions or API Route Handlers, with data revalidation.
+ - Supabase Client for DB interaction.
+- **Key Design Patterns Used:**
+ - Server Components & Client Components.
+ - React Hooks (and custom hooks).
+ - Provider Pattern (React Context API) when necessary.
+ - Facade Pattern (conceptual for Supabase client).
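+
+A brief sketch of this unidirectional flow, reusing the server-side Supabase client pattern from the API Interaction Layer document; the component names and query are illustrative:
+
+```typescript
+// app/(web)/newsletters/LatestPodcast.tsx (illustrative Server Component; no "use client" directive)
+import { createClient } from "@/utils/supabase/server";
+import PodcastPlayerMount from "@/app/components/core/PodcastPlayerMount"; // illustrative Client Component
+
+export default async function LatestPodcast() {
+  const supabase = createClient();
+  const { data } = await supabase
+    .from("newsletters")
+    .select("podcast_url")
+    .order("target_date", { ascending: false })
+    .limit(1)
+    .maybeSingle();
+
+  // Only plain, serializable props cross the server/client boundary.
+  return <PodcastPlayerMount podcastUrl={data?.podcast_url ?? null} />;
+}
+```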
+
+## Naming & Organization Conventions (from Component Breakdown & Detailed Structure)
+
+- **Component File Naming:**
+ - React component files: `PascalCase.tsx` (e.g., `NewsletterCard.tsx`).
+ - Next.js special files (`page.tsx`, `layout.tsx`, etc.): conventional lowercase/kebab-case.
+- **Directory Naming:** `kebab-case`.
+- **Non-Component TypeScript Files (.ts):** Primarily `camelCase.ts` (e.g., `utils.ts`, `uiSlice.ts`). Config files (`tailwind.config.ts`) and shared type definitions (`api-schemas.ts`) may use `kebab-case`.
+- **Component Organization:**
+ - Core application components: `app/components/core/`.
+ - Layout components: `app/components/layout/`.
+ - Shadcn UI components: `components/ui/`.
+ - Page-specific components (if complex and not reusable) can be co-located within the page's route directory.
+
+## Accessibility (AX) Implementation Details
+
+> This section is directly from "Accessibility (AX) Implementation Details" in `5-front-end-architecture.md`.
+
+The frontend will adhere to **WCAG 2.1 Level A** as a minimum target, as specified in `docs/ui-ux-spec.txt`.
+
+- **Semantic HTML:** Emphasis on using correct HTML5 elements (`