full beta demo - few minor issues to tweak, but 90% there!
@@ -10,10 +10,10 @@
1. **Contextual Awareness:** Before any coding, MUST load and maintain active knowledge of:

   - Assigned story file (e.g., `docs/stories/{epicNumber}.{storyNumber}.story.md`)
   - `docs/project-structure.md`
   - `docs/coding-standards.md`
   - `docs/operational-guidelines.md` (covers Coding Standards, Testing Strategy, Error Handling, Security)
   - `docs/tech-stack.md`
   - `docs/checklists/story-dod-checklist.txt` (for DoD verification)

2. **Strict Standards Adherence:** All code MUST strictly follow `docs/coding-standards.md`. Non-negotiable.
2. **Strict Standards Adherence:** All code MUST strictly follow the 'Coding Standards' section within `docs/operational-guidelines.md`. Non-negotiable.
3. **Dependency Management Protocol:**

   - NO new external dependencies unless explicitly approved in the story.
   - If a new dependency is needed:
@@ -42,8 +42,7 @@
## Reference Documents (Essential Context)

- Project Structure: `docs/project-structure.md`
- Coding Standards: `docs/coding-standards.md`
- Testing Strategy: `docs/testing-strategy.md`
- Operational Guidelines: `docs/operational-guidelines.md` (covers Coding Standards, Testing Strategy, Error Handling, Security)
- Assigned Story File: `docs/stories/{epicNumber}.{storyNumber}.story.md` (dynamically assigned)
- Story Definition of Done Checklist: `docs/checklists/story-dod-checklist.txt`
- Debugging Log (Managed by Agent): `TODO-revert.md` (project root)
@@ -54,14 +53,14 @@
- Wait for `Status: Approved` story. If not `Approved`, wait.
- Update assigned story to `Status: In-Progress`.
- <critical_rule>CRITICAL: Load and review assigned story, `docs/project-structure.md`, `docs/coding-standards.md`, `docs/tech-stack.md`, and `docs/checklists/story-dod-checklist.txt`. Keep in active context.</critical_rule>
- <critical_rule>CRITICAL: Load and review assigned story, `docs/project-structure.md`, `docs/operational-guidelines.md`, `docs/tech-stack.md`, and `docs/checklists/story-dod-checklist.txt`. Keep in active context.</critical_rule>
- Review `TODO-revert.md` for relevant pending reversions.
- Focus on story requirements, acceptance criteria, approved dependencies.

2. **Implementation (& Debugging):**

   - Execute story tasks sequentially.
   - <critical_rule>CRITICAL: Code MUST strictly follow `docs/coding-standards.md`.</critical_rule>
   - <critical_rule>CRITICAL: Code MUST strictly follow the 'Coding Standards' section within `docs/operational-guidelines.md`.</critical_rule>
   - <critical_rule>CRITICAL: If new dependency needed, HALT feature, follow Dependency Management Protocol.</critical_rule>
   - **Debugging:**
     - <critical_rule>Activate Debugging Change Management: Log temporary changes to `TODO-revert.md` (rationale, outcome, status) immediately.</critical_rule>
@@ -90,7 +89,7 @@
6. **Final Review & Status Update:**

   - <important_note>Confirm final code adherence to `docs/coding-standards.md` and all DoD items met (including dependency approvals).</important_note>
   - <important_note>Confirm final code adherence to the 'Coding Standards' section within `docs/operational-guidelines.md` and all DoD items met (including dependency approvals).</important_note>
   - Present completed DoD checklist report to user.
   - <critical_rule>Only after presenting DoD report (all applicable items `[x] Done`), update story `Status: Review`.</critical_rule>
   - Await user feedback/approval.
@@ -23,14 +23,14 @@
- Find the highest numbered story file in `docs/stories/`, ensure it is marked done OR alert user.
- **If a highest story file exists ({lastEpicNum}.{lastStoryNum}.story.md):**
  - Review this file for developer updates/notes.
  - Check `docs/epic{lastEpicNum}.md` for a story numbered `{lastStoryNum + 1}`.
  - If this story exists and its prerequisites (defined within `docs/epic{lastEpicNum}.md`) are 'Done': This is the next story.
  - Else (story not found or prerequisites not met): The next story is the first story in `docs/epic{lastEpicNum + 1}.md` (then `docs/epic{lastEpicNum + 2}.md`, etc.) whose prerequisites are 'Done'.
  - Check `docs/epic-{lastEpicNum}.md` for a story numbered `{lastStoryNum + 1}`.
  - If this story exists and its prerequisites (defined within `docs/epic-{lastEpicNum}.md`) are 'Done': This is the next story.
  - Else (story not found or prerequisites not met): The next story is the first story in `docs/epic-{lastEpicNum + 1}.md` (then `docs/epic-{lastEpicNum + 2}.md`, etc.) whose prerequisites are 'Done'.
- **If no story files exist in `docs/stories/`:**
  - The next story is the first story in `docs/epic1.md` (then `docs/epic2.md`, etc.) whose prerequisites are 'Done'.
  - The next story is the first story in `docs/epic-1.md` (then `docs/epic-2.md`, etc.) whose prerequisites are 'Done'.
- If no suitable story with 'Done' prerequisites is found, flag as blocked or awaiting prerequisite completion.
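For illustration, the selection procedure above can be sketched as follows; the helper functions are hypothetical stand-ins for reading `docs/stories/` and `docs/epic-N.md`, not part of the agent's actual tooling:

```typescript
// Illustrative sketch of the next-story selection described above.
// The helpers are stand-ins for parsing docs/stories/ and docs/epic-N.md.
interface StoryRef { epic: number; story: number }

function epicHasStory(ref: StoryRef): boolean { /* parse docs/epic-{ref.epic}.md */ return false; }
function prerequisitesDone(ref: StoryRef): boolean { /* are prerequisites marked 'Done'? */ return false; }
function epicCount(): number { /* number of docs/epic-N.md files */ return 0; }

function scanEpicsFrom(epic: number): StoryRef | "blocked" {
  for (let e = epic; e <= epicCount(); e++) {
    for (let s = 1; epicHasStory({ epic: e, story: s }); s++) {
      if (prerequisitesDone({ epic: e, story: s })) return { epic: e, story: s };
    }
  }
  return "blocked"; // no suitable story with 'Done' prerequisites
}

function findNextStory(existingStories: StoryRef[]): StoryRef | "blocked" {
  if (existingStories.length === 0) return scanEpicsFrom(1);
  // Highest numbered story file: max epic, then max story within that epic.
  const last = existingStories.reduce((a, b) =>
    b.epic > a.epic || (b.epic === a.epic && b.story > a.story) ? b : a
  );
  const next = { epic: last.epic, story: last.story + 1 };
  if (epicHasStory(next) && prerequisitesDone(next)) return next;
  return scanEpicsFrom(last.epic + 1);
}
```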
2. **Gather Requirements (from `docs/epicX.md`):**
2. **Gather Requirements (from `docs/epic-X.md`):**

   - Extract: Title, Goal/User Story, Requirements, ACs, Initial Tasks.
   - Store original epic requirements for later comparison.
@@ -38,23 +38,20 @@
3. **Gather Technical Context:**

   - **Ancillary Docs:** Consult `docs/index.md` for relevant, unlisted documents. Note any that sound useful.
   - **Architecture:** Comprehend `docs/architecture.md` (and `docs/front-end-architecture.md` if UI story) for task formulation. These docs may reference others.
   - **Content Extraction:** From standard refs (`docs/tech-stack.md`, `docs/api-reference.md`, `docs/data-models.md`, `docs/environment-vars.md`, `docs/testing-strategy.md`, `docs/ui-ux-spec.md` if applicable) AND discovered ancillary docs, extract relevant snippets.
   - (Dev Agent has direct access to full `docs/project-structure.md`, general `docs/coding-standards.md`. Note specific `docs/front-end-coding-standards.md` details if relevant and not universally applied by Dev Agent).
   - **Architecture:** Comprehend `docs/architecture.md` (and `docs/front-end-architecture.md` if UI story) for task formulation. These docs may reference others in multiple sections; follow those references as needed. `docs/index.md` can also help you locate specific documents.
   - Review notes from previous 'Done' story, if applicable.
   - **Discrepancies:** Note inconsistencies with the epic or needed technical changes (e.g., to data models, architectural deviations) for "Deviation Analysis."

4. **Verify Project Structure Alignment:**

   - Cross-reference with `docs/project-structure.md`: check file paths, component locations, naming conventions.
   - Cross-reference with `docs/project-structure.md` and `docs/front-end-project-structure.md`: check file paths, component locations, naming conventions.
   - Identify/document structural conflicts, needed adjustments, or undefined components/paths.

5. **Populate Template (`docs/templates/story-template.md`):**

   - Fill: Title, Goal, Requirements, ACs.
   - **Detailed Tasks:** Generate based on architecture, epic. For UI stories, also use `docs/style-guide.md`, `docs/component-guide.md`, and `docs/front-end-coding-standards.md`.
   - **Detailed Tasks:** Generate based on the architecture, epic, style-guide, component-guide, environment-vars, project-structure, front-end-project-structure, operational-guidelines, tech-stack, data-models, and api-reference docs as needed, filling in details relevant to the story when producing tasks, subtasks, or additional notes in the story file for the dev agent. For UI stories, also use `docs/front-end-style-guide.md`, `docs/front-end-component-guide.md`, and `docs/front-end-coding-standards.md`.
   - **Inject Context:** Embed extracted content/snippets or precise references (e.g., "Task: Implement `User` model from `docs/data-models.md#User-Model`" or copy if concise).
   - **UI Stories Note for Dev Agent:** "Consult `docs/style-guide.md`, `docs/component-guide.md`, and `docs/front-end-coding-standards.md` for UI tasks."
   - Detail testing requirements. Include project structure alignment notes.
   - Prepare noted discrepancies (Step 4) for "Deviation Analysis."
@@ -70,7 +67,7 @@
8. **Validate (Interactive User Review):**

   - Apply `docs/checklists/story-draft-checklist.md` to draft story.
   - Ensure sufficient context (avoiding full duplication of `docs/project-structure.md` and `docs/coding-standards.md`).
   - Ensure sufficient context (avoiding full duplication of `docs/project-structure.md` and the 'Coding Standards' section of `docs/operational-guidelines.md`, as the Dev Agent loads the full `operational-guidelines.md`).
   - Verify project structure alignment. Resolve gaps or note for user.
   - If info is missing that the agent can't derive, set `Status: Draft (Needs Input)`. Flag unresolved conflicts.
   - Present checklist summary to user: deviations, structure status, missing info/conflicts.
@@ -4,63 +4,81 @@ You are now operating as a Technical Documentation Librarian tasked with granula
## Your Task

Transform large project documents into smaller, granular files within the `docs/` directory by following the `docs/templates/doc-sharding-tmpl.txt` plan. You will create and maintain `docs/index.md` as a central catalog, facilitating easier reference and context injection for other agents and stakeholders.
Transform large project documents into smaller, granular files within the `docs/` directory by following the `doc-sharding-tmpl.txt` plan. You will create and maintain `docs/index.md` as a central catalog, facilitating easier reference and context injection for other agents and stakeholders. You will only process the documents and specific sections within them as requested by the user and detailed in the sharding plan.

## Your Approach

1. First, confirm:
1. First, ask the user to specify which of the available source documents (PRD, Main Architecture, Front-End Architecture) they wish to process in this session.
2. Next, confirm:

   - Access to `docs/templates/doc-sharding-tmpl.txt`
   - Location of source documents to be processed
   - Write access to the `docs/` directory
   - If any prerequisites are missing, request them before proceeding
   - Access to `doc-sharding-tmpl.txt`.
   - Location of the source documents the user wants to process.
   - Write access to the `docs/` directory.
   - If any prerequisites are missing for the selected documents, request them before proceeding.

2. For each document granulation:
3. For each _selected_ document granulation:

   - Follow the structure defined in `doc-sharding-tmpl.txt`
   - Extract content verbatim - no summarization or reinterpretation
   - Create self-contained markdown files
   - Maintain information integrity
   - Use clear, consistent file naming as specified in the plan
   - Follow the structure defined in `doc-sharding-tmpl.txt`, processing only the sections relevant to the specific document type.
   - Extract content verbatim - no summarization or reinterpretation
   - Create self-contained markdown files
   - Add Standard Description: At the beginning of each created file, immediately after the main H1 heading (which is typically derived from the source section title), add a blockquote with the following format:

     ```markdown
     > This document is a granulated shard from the main "[Original Source Document Title/Filename]" focusing on "[Primary Topic of the Shard]".
     ```

     - _[Original Source Document Title/Filename]_ should be the name or path of the source document being processed (e.g., "Main Architecture Document" or `3-architecture.md`).
     - _[Primary Topic of the Shard]_ should be a concise description of the shard's content, ideally derived from the first item in the "Source Section(s) to Copy" field in the `doc-sharding-tmpl.txt` for that shard, or a descriptive name based on the target filename (e.g., "API Reference", "Epic 1 User Stories", "Frontend State Management").

   - Maintain information integrity
   - Use clear, consistent file naming as specified in the plan

3. For `docs/index.md`:
4. For `docs/index.md`:

   - Create if absent
   - Add descriptive titles and relative markdown links for each granular file
   - Organize content logically
   - Include brief descriptions where helpful
   - Ensure comprehensive cataloging

4. Optional enhancements:
5. Optional enhancements:

   - Add cross-references between related granular documents
   - Implement any additional organization specified in the sharding template

## Rules of Operation

1. NEVER modify source content during extraction
2. Create files exactly as specified in the sharding plan
3. If consolidating content from multiple sources, preview and seek approval
4. Maintain all original context and meaning
5. Keep file names and paths consistent with the plan
6. Update `index.md` for every new file created
3. Prepend Standard Description: Ensure every generated shard file includes the standard description blockquote immediately after its H1 heading as specified in the "Approach" section.
4. If consolidating content from multiple sources, preview and seek approval
5. Maintain all original context and meaning
6. Keep file names and paths consistent with the plan
7. Update `index.md` for every new file created

## Required Input

Please provide:

1. Location of source document(s) to be granulated
2. Confirmation that `docs/templates/doc-sharding-tmpl.txt` exists and is populated
3. Write access confirmation for the `docs/` directory
1. **Source Document Paths:**
   - Path to the Product Requirements Document (PRD) (e.g., `project/docs/PRD.md` or `../8-prd-po-updated.md`), if you want to process it.
   - Path to the main Architecture Document (e.g., `project/docs/architecture.md` or `../3-architecture.md`), if you want to process it.
   - Path to the Front-End Architecture Document (e.g., `project/docs/frontend-architecture.md` or `../5-front-end-architecture.txt`), if you want to process it.
2. **Documents to Process:**
   - Clearly state which of the provided documents you want me to shard in this session (e.g., "Process only the PRD," or "Process the Main Architecture and Front-End Architecture documents," or "Process all provided documents").
3. **Sharding Plan Confirmation:**
   - Confirmation that `docs/templates/doc-sharding-tmpl.txt` exists, is populated, and reflects your desired sharding strategy.
4. **Output Directory & Index Confirmation:**
   - The target directory for the sharded markdown files. (Default: `docs/` relative to the workspace or project root.)
   - Confirmation that an `index.md` file should be created or updated in this target directory to catalog the sharded files.
5. **Write Access:**
   - Confirmation of write access to the specified output directory.

## Process Steps

1. I will first validate access to all required files and directories
2. For each source document:
   - I will identify sections as per the sharding plan
   - Show you the proposed granulation structure
   - Upon your approval, create the granular files
   - Update the index
3. I will maintain a log of all created files
4. I will provide a final report of all changes made
1. I will first ask you to specify which source documents you want me to process.
2. Then, I will validate access to `docs/templates/doc-sharding-tmpl.txt` and the source documents you've selected.
3. I will confirm the output directory for sharded files and the plan to create/update `index.md` there.
4. For each _selected_ source document:
   - I will identify sections as per the sharding plan, relevant to that document type.
   - Show you the proposed granulation structure for that document.
5. I will maintain a log of all created files
6. I will provide a final report of all changes made

Would you like to proceed with document granulation? Please provide the required input above.
@@ -1,20 +1,17 @@
# Document Sharding Plan Template

This plan directs the PO/POSM agent on how to break down large source documents into smaller, granular files during its Librarian Phase. The agent will refer to this plan to identify source documents, the specific sections to extract, and the target filenames for the sharded content.
This plan directs the agent on how to break down large source documents into smaller, granular files during its Librarian Phase. The agent will refer to this plan to identify source documents, the specific sections to extract, and the target filenames for the sharded content.

---

## 1. Source Document: PRD (Project Requirements Document)

* **Note to Agent:** Confirm the exact filename of the PRD with the user (e.g., `PRD.md`, `ProjectRequirements.md`).
* **Note to Agent:** Confirm the exact filename of the PRD with the user (e.g., `PRD.md`, `ProjectRequirements.md`, `8-prd-po-updated.md`).

### 1.1. Epic Granulation

- **Instruction:** For each Epic identified within the PRD:
- **Source Section(s) to Copy:** The complete text for the Epic, including its main description, goals, and all associated user stories or detailed requirements under that Epic.
- **Source Section(s) to Copy:** The complete text for the Epic, including its main description, goals, and all associated user stories or detailed requirements under that Epic. Ensure you capture content starting from a heading like "**Epic X:**" up to the next such heading or the end of the "Epic Overview" section.
- **Target File Pattern:** `docs/epic-<id>.md`

### 1.2. Other Potential PRD Extractions (Examples)

- **Source Section(s) to Copy:** "User Personas" (if present and detailed).
- **Target File:** `docs/prd-user-personas.md`
- *Agent Note: `<id>` should correspond to the Epic number.*

---
@@ -25,24 +22,36 @@ This plan directs the PO/POSM agent on how to break down large source documents
- **Source Section(s) to Copy:** Section(s) detailing "API Reference", "API Endpoints", or "Service Interfaces".
- **Target File:** `docs/api-reference.md`

- **Source Section(s) to Copy:** Section(s) detailing "Coding Standards", "Development Guidelines", or "Best Practices".
- **Target File:** `docs/coding-standards.md`

- **Source Section(s) to Copy:** Section(s) detailing "Data Models", "Database Schema", "Entity Definitions".
- **Target File:** `docs/data-models.md`

- **Source Section(s) to Copy:** Section(s) detailing "Environment Variables", "Configuration Settings", "Deployment Parameters".
- **Source Section(s) to Copy:** Section(s) titled "Environment Variables Documentation", "Configuration Settings", "Deployment Parameters", or relevant subsections within "Infrastructure and Deployment Overview" if a dedicated section is not found.
- **Target File:** `docs/environment-vars.md`
- *Agent Note: Prioritize a dedicated 'Environment Variables' section or linked 'environment-vars.md' source if available. If not, extract relevant configuration details from 'Infrastructure and Deployment Overview'. This shard is for specific variable definitions and usage.*

- **Source Section(s) to Copy:** Section(s) detailing "Project Structure".
- **Target File:** `docs/project-structure.md`
- *Agent Note: If the project involves multiple repositories (not a monorepo), ensure this file clearly describes the structure of each relevant repository or links to sub-files if necessary.*

- **Source Section(s) to Copy:** Section(s) detailing "Technology Stack", "Key Technologies", "Libraries and Frameworks".
- **Source Section(s) to Copy:** Section(s) detailing "Technology Stack", "Key Technologies", "Libraries and Frameworks", or "Definitive Tech Stack Selections".
- **Target File:** `docs/tech-stack.md`

- **Source Section(s) to Copy:** Section(s) detailing "Testing Strategy", "Testing Decisions", "QA Processes".
- **Target File:** `docs/testing-decisions.md`
- **Source Section(s) to Copy:** Sections detailing "Coding Standards", "Development Guidelines", "Best Practices", "Testing Strategy", "Testing Decisions", "QA Processes", "Overall Testing Strategy", "Error Handling Strategy", and "Security Best Practices".
- **Target File:** `docs/operational-guidelines.md`
- *Agent Note: This file consolidates several key operational aspects. Ensure that the content from each source section ("Coding Standards", "Testing Strategy", "Error Handling Strategy", "Security Best Practices") is clearly delineated under its own H3 (###) or H4 (####) heading within this document.*

- **Source Section(s) to Copy:** Section(s) titled "Component View" (including sub-sections like "Architectural / Design Patterns Adopted").
- **Target File:** `docs/component-view.md`

- **Source Section(s) to Copy:** Section(s) titled "Core Workflow / Sequence Diagrams" (including all sub-diagrams).
- **Target File:** `docs/sequence-diagrams.md`

- **Source Section(s) to Copy:** Section(s) titled "Infrastructure and Deployment Overview".
- **Target File:** `docs/infra-deployment.md`
- *Agent Note: This is for the broader overview, distinct from the specific `docs/environment-vars.md`.*

- **Source Section(s) to Copy:** Section(s) titled "Key Reference Documents".
- **Target File:** `docs/key-references.md`

---
@@ -50,18 +59,33 @@ This plan directs the PO/POSM agent on how to break down large source documents
* **Note to Agent:** Confirm filenames with the user (e.g., `front-end-architecture.md`, `front-end-spec.md`, `ui-guidelines.md`). Multiple FE documents might exist.

### 3.1. Front-End Granules

- **Source Section(s) to Copy:** Section(s) detailing "Front-End Project Structure" (if distinct from the main `project-structure.md`, e.g., for a separate front-end repository or a complex monorepo FE workspace).
- **Target File:** `docs/fe-project-structure.md`
- **Source Section(s) to Copy:** Section(s) detailing "Front-End Project Structure" or "Detailed Frontend Directory Structure".
- **Target File:** `docs/front-end-project-structure.md`

- **Source Section(s) to Copy:** Section(s) detailing "UI Style Guide", "Brand Guidelines", "Visual Design Specifications".
- **Target File:** `docs/style-guide.md`
- **Source Section(s) to Copy:** Section(s) detailing "UI Style Guide", "Brand Guidelines", "Visual Design Specifications", or "Styling Approach".
- **Target File:** `docs/front-end-style-guide.md`
- *Agent Note: This section might be a sub-section or refer to other documents (e.g., `ui-ux-spec.txt`). Extract the core styling philosophy and approach defined within the frontend architecture document itself.*

- **Source Section(s) to Copy:** Section(s) detailing "Component Library", "Reusable UI Components Guide", "Atomic Design Elements".
- **Target File:** `docs/component-guide.md`
- **Source Section(s) to Copy:** Section(s) detailing "Component Library", "Reusable UI Components Guide", "Atomic Design Elements", or "Component Breakdown & Implementation Details".
- **Target File:** `docs/front-end-component-guide.md`

- **Source Section(s) to Copy:** Section(s) detailing "Front-End Coding Standards" (specifically for UI development, e.g., JavaScript/TypeScript style, CSS naming conventions, accessibility best practices for FE).
- **Target File:** `docs/front-end-coding-standards.md`
- *Agent Note: A dedicated top-level section for this might not exist. If not found, this shard might be empty or require cross-referencing with the main architecture's coding standards. Extract any front-end-specific coding conventions mentioned.*

- **Source Section(s) to Copy:** Section(s) titled "State Management In-Depth".
- **Target File:** `docs/front-end-state-management.md`

- **Source Section(s) to Copy:** Section(s) titled "API Interaction Layer".
- **Target File:** `docs/front-end-api-interaction.md`

- **Source Section(s) to Copy:** Section(s) titled "Routing Strategy".
- **Target File:** `docs/front-end-routing-strategy.md`

- **Source Section(s) to Copy:** Section(s) titled "Frontend Testing Strategy".
- **Target File:** `docs/front-end-testing-strategy.md`

---

CRITICAL: **Index Management:** After creating each granular file, update `docs/index.md` as needed.
CRITICAL: **Index Management:** After creating the files, update `docs/index.md` as needed to reference and describe each doc - describe only the doc's purpose, do not mention granules or where it was sharded from - since the index may also contain other doc references.
@@ -0,0 +1,191 @@
# API Reference

> This document is a granulated shard from the main "3-architecture.md" focusing on "API Reference".

### External APIs Consumed

#### 1\. Hacker News (HN) Algolia API

- **Purpose:** To retrieve top Hacker News posts and their associated comments.
- **Base URL(s):** Production: `http://hn.algolia.com/api/v1/`
- **Authentication:** None required.
- **Key Endpoints Used:**
  - **`GET /search` (for top posts)**
    - Description: Retrieves stories currently on the Hacker News front page.
    - Request Parameters: `tags=front_page`
    - Example Request: `curl "http://hn.algolia.com/api/v1/search?tags=front_page"`
    - Post-processing: Application sorts fetched stories by `points` (descending), selects up to top 30.
    - Success Response Schema (Code: `200 OK`): Standard Algolia search response containing a 'hits' array with story objects.
      ```json
      {
        "hits": [
          {
            "objectID": "string",
            "created_at": "string",
            "title": "string",
            "url": "string",
            "author": "string",
            "points": "number",
            "story_text": "string",
            "num_comments": "number",
            "_tags": ["string"]
          }
        ],
        "nbHits": "number",
        "page": "number",
        "nbPages": "number",
        "hitsPerPage": "number"
      }
      ```
  - **`GET /items/{objectID}` (for comments)**
    - Description: Retrieves a specific story item by its `objectID` to get its full comment tree from the `children` field. Called for each selected top story.
    - Success Response Schema (Code: `200 OK`): Standard Algolia item response.
      ```json
      {
        "id": "number",
        "created_at": "string",
        "author": "string",
        "text": "string",
        "parent_id": "number",
        "story_id": "number",
        "children": [
          {
            /* nested comment structure */
          }
        ]
      }
      ```
- **Rate Limits:** Generous for public use; daily calls are fine.
- **Link to Official Docs:** [https://hn.algolia.com/api](https://hn.algolia.com/api)
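To illustrate the `/search` call and the post-processing described above, a minimal sketch (not the actual service code; the function name is illustrative):

```typescript
// Illustrative sketch of the /search call plus the post-processing
// described above: sort by points descending, keep up to 30 stories.
interface HNHit {
  objectID: string;
  title: string;
  url: string;
  points: number;
  num_comments: number;
}

async function fetchTopStories(): Promise<HNHit[]> {
  const res = await fetch("http://hn.algolia.com/api/v1/search?tags=front_page");
  if (!res.ok) throw new Error(`HN Algolia request failed: ${res.status}`);
  const { hits } = (await res.json()) as { hits: HNHit[] };
  return hits.sort((a, b) => b.points - a.points).slice(0, 30);
}
```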
#### 2\. Play.ht API

- **Purpose:** To generate AI-powered podcast versions of the newsletter content.
- **Base URL(s):** Production: `https://api.play.ai/api/v1`
- **Authentication:** API Key (`X-USER-ID` header) and Bearer Token (`Authorization` header). Stored as `PLAYHT_USER_ID` and `PLAYHT_API_KEY`.
- **Key Endpoints Used:**
  - **`POST /playnotes`**
    - Description: Initiates the text-to-speech conversion.
    - Request Headers: `Authorization: Bearer {PLAYHT_API_KEY}`, `X-USER-ID: {PLAYHT_USER_ID}`, `Content-Type: multipart/form-data`, `Accept: application/json`.
    - Request Body Schema: `multipart/form-data`
      - `sourceFile`: `string (binary)` (Preferred: HTML newsletter content as file upload.)
      - `sourceFileUrl`: `string (uri)` (Alternative: URL to hosted newsletter content if `sourceFile` is problematic.)
      - `synthesisStyle`: `string` (Required, e.g., "podcast")
      - `voice1`: `string` (Required, Voice ID)
      - `voice1Name`: `string` (Required)
      - `voice1Gender`: `string` (Required)
      - `webHookUrl`: `string (uri)` (Required, e.g., `<YOUR_APP_DOMAIN>/api/webhooks/playht`)
    - **Note on Content Delivery:** MVP uses `sourceFile`. If issues arise, pivot to `sourceFileUrl` (e.g., content temporarily in Supabase Storage).
    - Success Response Schema (Code: `201 Created`):
      ```json
      {
        "id": "string",
        "ownerId": "string",
        "name": "string",
        "sourceFileUrl": "string",
        "audioUrl": "string",
        "synthesisStyle": "string",
        "voice1": "string",
        "voice1Name": "string",
        "voice1Gender": "string",
        "webHookUrl": "string",
        "status": "string",
        "duration": "number",
        "requestedAt": "string",
        "createdAt": "string"
      }
      ```
- **Webhook Handling:** Endpoint `/api/webhooks/playht` receives `POST` from Play.ht.
  - Request Body Schema (from Play.ht):
    ```json
    { "id": "string", "audioUrl": "string", "status": "string" }
    ```
- **Rate Limits:** Refer to official Play.ht documentation.
- **Link to Official Docs:** [https://docs.play.ai/api-reference/playnote/post](https://docs.play.ai/api-reference/playnote/post)
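For illustration, a minimal sketch of initiating a PlayNote job based on the request shape above; the voice values are placeholders and this is not the project's actual service code:

```typescript
// Minimal sketch of POST /playnotes per the request shape above.
// Voice values are placeholders; error handling is elided.
async function createPlaynote(newsletterHtml: string): Promise<string> {
  const form = new FormData();
  form.append("sourceFile", new Blob([newsletterHtml], { type: "text/html" }), "newsletter.html");
  form.append("synthesisStyle", "podcast");
  form.append("voice1", "<VOICE_ID>"); // placeholder Voice ID
  form.append("voice1Name", "Narrator");
  form.append("voice1Gender", "female");
  form.append("webHookUrl", `${process.env.APP_DOMAIN}/api/webhooks/playht`);

  const res = await fetch("https://api.play.ai/api/v1/playnotes", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.PLAYHT_API_KEY}`,
      "X-USER-ID": process.env.PLAYHT_USER_ID!,
      Accept: "application/json",
      // Content-Type (with multipart boundary) is set automatically for FormData.
    },
    body: form,
  });
  if (!res.ok) throw new Error(`Play.ht request failed: ${res.status}`);
  const body = await res.json();
  return body.id; // PlayNote job id; completion arrives via the webhook
}
```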
#### 3\. LLM Provider (Facade for Summarization)

- **Purpose:** To generate summaries for articles and comment threads.
- **Configuration:** Via environment variables (`LLM_PROVIDER_TYPE`, `OLLAMA_API_URL`, `REMOTE_LLM_API_KEY`, `REMOTE_LLM_API_URL`, `LLM_MODEL_NAME`).
- **Facade Interface (`LLMFacade` in `supabase/functions/_shared/llm-facade.ts`):**

  ```typescript
  // Located in supabase/functions/_shared/llm-facade.ts
  export interface LLMSummarizationOptions {
    prompt?: string;
    maxLength?: number;
  }

  export interface LLMFacade {
    generateSummary(
      textToSummarize: string,
      options?: LLMSummarizationOptions
    ): Promise<string>;
  }
  ```

- **Implementations:**
  - **Local Ollama Adapter:** HTTP requests to `OLLAMA_API_URL`.
    - Request Body (example for `/api/generate`): `{"model": "string", "prompt": "string", "stream": false}`
    - Response Body (example): `{"model": "string", "response": "string", ...}`
  - **Remote LLM API Adapter:** Authenticated HTTP requests to `REMOTE_LLM_API_URL`. Schemas depend on the provider.
- **Rate Limits:** Provider-dependent.
- **Link to Official Docs:** Ollama: [https://github.com/ollama/ollama/blob/main/docs/api.md](https://github.com/ollama/ollama/blob/main/docs/api.md)
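For illustration, an Ollama-backed adapter for this facade might look like the sketch below; the default model name, file layout, and prompt wrapping are assumptions, not the project's actual adapter:

```typescript
import { LLMFacade, LLMSummarizationOptions } from "./llm-facade.ts";

// Illustrative sketch of a local Ollama adapter for the facade above,
// using the non-streaming /api/generate endpoint shown in the examples.
// Defaults and prompt wrapping are assumptions.
export class OllamaAdapter implements LLMFacade {
  constructor(
    private apiUrl: string = Deno.env.get("OLLAMA_API_URL") ?? "http://localhost:11434",
    private model: string = Deno.env.get("LLM_MODEL_NAME") ?? "llama3" // assumed default
  ) {}

  async generateSummary(
    textToSummarize: string,
    options?: LLMSummarizationOptions
  ): Promise<string> {
    const prompt = options?.prompt
      ? `${options.prompt}\n\n${textToSummarize}`
      : `Summarize the following:\n\n${textToSummarize}`;

    const res = await fetch(`${this.apiUrl}/api/generate`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model: this.model, prompt, stream: false }),
    });
    if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
    const data = await res.json();
    return data.response; // non-streaming responses carry the full generated text
  }
}
```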
#### 4\. Nodemailer (Email Delivery Service)

- **Purpose:** To send generated HTML newsletters.
- **Interaction Type:** Library integration within `NewsletterGenerationService` via `NodemailerFacade` in `supabase/functions/_shared/nodemailer-facade.ts`.
- **Configuration:** Via SMTP environment variables (`SMTP_HOST`, `SMTP_PORT`, `SMTP_USER`, `SMTP_PASS`).
- **Key Operations:** Create transporter, construct email message (From, To, Subject, HTML), send email.
- **Link to Official Docs:** [https://nodemailer.com/](https://nodemailer.com/)

### Internal APIs Provided (by BMad DiCaster)

#### 1\. Workflow Trigger API

- **Purpose:** To manually initiate the daily content processing pipeline.
- **Endpoint Path:** `/api/system/trigger-workflow` (Next.js API Route Handler)
- **Method:** `POST`
- **Authentication:** API Key in `X-API-KEY` header (matches `WORKFLOW_TRIGGER_API_KEY` env var).
- **Request Body:** MVP: Empty or `{}`.
- **Success Response (`202 Accepted`):** `{"message": "Daily workflow triggered successfully. Processing will occur asynchronously.", "jobId": "<UUID_of_the_workflow_run>"}`
- **Error Response:** `400 Bad Request`, `401 Unauthorized`, `500 Internal Server Error`.
- **Action:** Creates a record in `workflow_runs` table and initiates the pipeline.
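A minimal sketch of calling this trigger endpoint; the domain is a placeholder:

```typescript
// Illustrative trigger call; <YOUR_APP_DOMAIN> is a placeholder.
const res = await fetch("https://<YOUR_APP_DOMAIN>/api/system/trigger-workflow", {
  method: "POST",
  headers: {
    "X-API-KEY": process.env.WORKFLOW_TRIGGER_API_KEY!,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({}),
});
const { jobId } = await res.json(); // 202 Accepted returns the workflow run's UUID
```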
#### 2\. Workflow Status API

- **Purpose:** Allow developers/admins to check the status of a specific workflow run.
- **Endpoint Path:** `/api/system/workflow-status/{jobId}` (Next.js API Route Handler)
- **Method:** `GET`
- **Authentication:** API Key in `X-API-KEY` header.
- **Request Parameters:** `jobId` (Path parameter).
- **Success Response (`200 OK`):**
  ```json
  {
    "jobId": "<UUID>",
    "createdAt": "timestamp",
    "lastUpdatedAt": "timestamp",
    "status": "string",
    "currentStep": "string",
    "errorMessage": "string?",
    "details": {
      /* JSONB object with step-specific progress */
    }
  }
  ```
- **Error Response:** `401 Unauthorized`, `404 Not Found`, `500 Internal Server Error`.
- **Action:** Retrieves record from `workflow_runs` for the given `jobId`.
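And a hedged sketch of polling that status until a run finishes; the poll interval and terminal status names are assumptions, not part of the documented contract:

```typescript
// Illustrative polling loop; terminal status names are assumed.
async function waitForWorkflow(jobId: string): Promise<string> {
  for (;;) {
    const res = await fetch(
      `https://<YOUR_APP_DOMAIN>/api/system/workflow-status/${jobId}`,
      { headers: { "X-API-KEY": process.env.WORKFLOW_TRIGGER_API_KEY! } }
    );
    const run = await res.json();
    if (run.status === "completed" || run.status === "failed") return run.status;
    await new Promise((r) => setTimeout(r, 30_000)); // check every 30s
  }
}
```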
#### 3\. Play.ht Webhook Receiver

- **Purpose:** To receive status updates and podcast audio URLs from Play.ht.
- **Endpoint Path:** `/api/webhooks/playht` (Next.js API Route Handler)
- **Method:** `POST`
- **Authentication:** Implement verification (e.g., shared secret token).
- **Request Body Schema (Expected from Play.ht):**
  ```json
  { "id": "string", "audioUrl": "string", "status": "string" }
  ```
- **Success Response (`200 OK`):** `{"message": "Webhook received successfully"}`
- **Action:** Updates `newsletters` and `workflow_runs` tables.
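A minimal App Router handler sketch for this webhook; the secret header name and the persistence helper are illustrative assumptions, not the actual implementation:

```typescript
// app/api/webhooks/playht/route.ts - illustrative sketch only.
import { NextRequest, NextResponse } from "next/server";

async function persistPodcastResult(id: string, audioUrl: string, status: string) {
  // Stand-in for updating the newsletters and workflow_runs tables
  // via the Supabase server client.
}

export async function POST(req: NextRequest) {
  // Hypothetical shared-secret check; the real verification may differ.
  if (req.headers.get("x-webhook-secret") !== process.env.PLAYHT_WEBHOOK_SECRET) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }
  const { id, audioUrl, status } = await req.json();
  await persistPodcastResult(id, audioUrl, status);
  return NextResponse.json({ message: "Webhook received successfully" });
}
```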
@@ -0,0 +1,318 @@
# BMad DiCaster Architecture Document

## Introduction / Preamble

This document outlines the overall project architecture for BMad DiCaster, including backend systems, shared services, and non-UI specific concerns. Its primary goal is to serve as the guiding architectural blueprint for AI-driven development, ensuring consistency and adherence to chosen patterns and technologies.

**Relationship to Frontend Architecture:**
This project includes a significant user interface. A separate Frontend Architecture Document (expected to be named `frontend-architecture.md` and linked in "Key Reference Documents" once created) will detail the frontend-specific design and MUST be used in conjunction with this document. Core technology stack choices documented herein (see "Definitive Tech Stack Selections") are definitive for the entire project, including any frontend components.

## Table of Contents

- [Introduction / Preamble](#introduction--preamble)
- [Technical Summary](#technical-summary)
- [High-Level Overview](#high-level-overview)
- [Component View](#component-view)
  - [Architectural / Design Patterns Adopted](#architectural--design-patterns-adopted)
- [Workflow Orchestration and Status Management](#workflow-orchestration-and-status-management)
- [Project Structure](#project-structure)
  - [Key Directory Descriptions](#key-directory-descriptions)
  - [Monorepo Management](#monorepo-management)
  - [Notes](#notes)
- [API Reference](#api-reference)
  - [External APIs Consumed](#external-apis-consumed)
    - [1. Hacker News (HN) Algolia API](#1-hacker-news-hn-algolia-api)
    - [2. Play.ht API](#2-playht-api)
    - [3. LLM Provider (Facade for Summarization)](#3-llm-provider-facade-for-summarization)
    - [4. Nodemailer (Email Delivery Service)](#4-nodemailer-email-delivery-service)
  - [Internal APIs Provided (by BMad DiCaster)](#internal-apis-provided-by-bmad-dicaster)
    - [1. Workflow Trigger API](#1-workflow-trigger-api)
    - [2. Workflow Status API](#2-workflow-status-api)
    - [3. Play.ht Webhook Receiver](#3-playht-webhook-receiver)
- [Data Models](#data-models)
  - [Core Application Entities / Domain Objects](#core-application-entities--domain-objects)
    - [1. `WorkflowRun`](#1-workflowrun)
    - [2. `HNPost`](#2-hnpost)
    - [3. `HNComment`](#3-hncomment)
    - [4. `ScrapedArticle`](#4-scrapedarticle)
    - [5. `ArticleSummary`](#5-articlesummary)
    - [6. `CommentSummary`](#6-commentsummary)
    - [7. `Newsletter`](#7-newsletter)
    - [8. `Subscriber`](#8-subscriber)
    - [9. `SummarizationPrompt`](#9-summarizationprompt)
    - [10. `NewsletterTemplate`](#10-newslettertemplate)
  - [Database Schemas (Supabase PostgreSQL)](#database-schemas-supabase-postgresql)
    - [1. `workflow_runs`](#1-workflow_runs)
    - [2. `hn_posts`](#2-hn_posts)
    - [3. `hn_comments`](#3-hn_comments)
    - [4. `scraped_articles`](#4-scraped_articles)
    - [5. `article_summaries`](#5-article_summaries)
    - [6. `comment_summaries`](#6-comment_summaries)
    - [7. `newsletters`](#7-newsletters)
    - [8. `subscribers`](#8-subscribers)
    - [9. `summarization_prompts`](#9-summarization_prompts)
    - [10. `newsletter_templates`](#10-newsletter_templates)
- [Core Workflow / Sequence Diagrams](#core-workflow--sequence-diagrams)
  - [1. Daily Workflow Initiation & HN Content Acquisition](#1-daily-workflow-initiation--hn-content-acquisition)
  - [2. Article Scraping & Summarization Flow](#2-article-scraping--summarization-flow)
  - [3. Newsletter, Podcast, and Delivery Flow](#3-newsletter-podcast-and-delivery-flow)
- [Definitive Tech Stack Selections](#definitive-tech-stack-selections)
- [Infrastructure and Deployment Overview](#infrastructure-and-deployment-overview)
- [Error Handling Strategy](#error-handling-strategy)
- [Coding Standards](#coding-standards)
  - [Detailed Language & Framework Conventions](#detailed-language--framework-conventions)
    - [TypeScript/Node.js (Next.js & Supabase Functions) Specifics](#typescriptnodejs-nextjs--supabase-functions-specifics)
- [Overall Testing Strategy](#overall-testing-strategy)
- [Security Best Practices](#security-best-practices)
- [Key Reference Documents](#key-reference-documents)
- [Change Log](#change-log)
- [Prompt for Design Architect: Frontend Architecture Definition](#prompt-for-design-architect-frontend-architecture-definition)

## Technical Summary

BMad DiCaster is a web application designed to provide daily, concise summaries of top Hacker News (HN) posts, delivered as an HTML newsletter and an optional AI-generated podcast, accessible via a Next.js web interface. The system employs a serverless, event-driven architecture hosted on Vercel, with Supabase providing PostgreSQL database services and function hosting. Key components include services for HN content retrieval, article scraping (using Cheerio), AI-powered summarization (via a configurable LLM facade for Ollama/remote APIs), podcast generation (Play.ht), newsletter generation (Nodemailer), and workflow orchestration. The architecture emphasizes modularity, clear separation of concerns (pragmatic hexagonal approach for complex functions), and robust error handling, aiming for efficient development, particularly by AI developer agents.

## High-Level Overview

The BMad DiCaster application will adopt a **serverless, event-driven architecture** hosted entirely on Vercel, with Supabase providing backend services (database and functions). The project will be structured as a **monorepo**, containing both the Next.js frontend application and the backend Supabase functions.

The core data processing flow is designed as an event-driven pipeline:

1. A scheduled mechanism (Vercel Cron Job) or manual trigger (API/CLI) initiates the daily workflow, creating a `workflow_run` job.
2. Hacker News posts and comments are retrieved (HN Algolia API) and stored in Supabase.
3. This data insertion triggers a Supabase function (via database webhook) to scrape linked articles.
4. Successful article scraping and storage trigger further Supabase functions for AI-powered summarization of articles and comments.
5. The completion of summarization steps for a workflow run is tracked, and once all prerequisites are met, a newsletter generation service is triggered.
6. The newsletter content is sent to the Play.ht API to generate a podcast.
7. Play.ht calls a webhook to notify our system when the podcast is ready, providing the podcast URL.
8. The newsletter data in Supabase is updated with the podcast URL.
9. The newsletter is then delivered to subscribers via Nodemailer, after considering podcast availability (with delay/retry logic).
10. The Next.js frontend allows users to view current and past newsletters and listen to the podcasts.

This event-driven approach, using Supabase Database Webhooks (via `pg_net` or native functionality) to trigger Vercel-hosted Supabase Functions, aims to create a resilient and scalable system. It mitigates potential timeout issues by breaking down long-running processes into smaller, asynchronously triggered units.

Below is a system context diagram illustrating the primary services and user interactions:

```mermaid
graph TD
    User[Developer/Admin] -- "Triggers Daily Workflow (API/CLI/Cron)" --> BMadDiCasterBE[BMad DiCaster Backend Logic]
    UserWeb[End User] -- "Accesses Web Interface" --> BMadDiCasterFE["BMad DiCaster Frontend (Next.js on Vercel)"]
    BMadDiCasterFE -- "Displays Data From" --> SupabaseDB[Supabase PostgreSQL]
    BMadDiCasterFE -- "Interacts With for Data/Triggers" --> SupabaseFunctions[Supabase Functions on Vercel]

    subgraph "BMad DiCaster Backend Logic (Supabase Functions & Vercel)"
        direction LR
        SupabaseFunctions
        HNAPI[Hacker News Algolia API]
        ArticleScraper[Article Scraper Service]
        Summarizer["Summarization Service (LLM Facade)"]
        PlayHTAPI[Play.ht API]
        NewsletterService["Newsletter Generation & Delivery Service"]
        Nodemailer[Nodemailer Service]
    end

    BMadDiCasterBE --> SupabaseDB
    SupabaseFunctions -- "Fetches HN Data" --> HNAPI
    SupabaseFunctions -- "Scrapes Articles" --> ArticleScraper
    ArticleScraper -- "Gets URLs from" --> SupabaseDB
    ArticleScraper -- "Stores Content" --> SupabaseDB
    SupabaseFunctions -- "Summarizes Content" --> Summarizer
    Summarizer -- "Uses Prompts from / Stores Summaries" --> SupabaseDB
    SupabaseFunctions -- "Generates Podcast" --> PlayHTAPI
    PlayHTAPI -- "Sends Webhook (Podcast URL)" --> SupabaseFunctions
    SupabaseFunctions -- "Updates Podcast URL" --> SupabaseDB
    SupabaseFunctions -- "Generates Newsletter" --> NewsletterService
    NewsletterService -- "Uses Template/Data from" --> SupabaseDB
    NewsletterService -- "Sends Emails Via" --> Nodemailer
    SupabaseDB -- "Stores Subscriber List" --> NewsletterService

    classDef user fill:#9cf,stroke:#333,stroke-width:2px;
    classDef fe fill:#f9f,stroke:#333,stroke-width:2px;
    classDef be fill:#ccf,stroke:#333,stroke-width:2px;
    classDef external fill:#ffc,stroke:#333,stroke-width:2px;
    classDef db fill:#cfc,stroke:#333,stroke-width:2px;

    class User,UserWeb user;
    class BMadDiCasterFE fe;
    class BMadDiCasterBE,SupabaseFunctions,ArticleScraper,Summarizer,NewsletterService be;
    class HNAPI,PlayHTAPI,Nodemailer external;
    class SupabaseDB db;
```

## Component View

> This section has been moved to a dedicated document: [Component View](./component-view.md)

## Workflow Orchestration and Status Management

The BMad DiCaster application employs an event-driven pipeline for its daily content processing. To manage, monitor, and ensure the robust execution of this multi-step workflow, the following orchestration strategy is implemented:

**1. Central Workflow Tracking (`workflow_runs` Table):**

- A dedicated table, `public.workflow_runs` (defined in Data Models), serves as the single source of truth for the state and progress of each initiated daily workflow.
- Each workflow execution is identified by a unique `id` (jobId) in this table.
- Key fields include `status`, `current_step_details`, `error_message`, and a `details` JSONB column to store metadata and progress counters (e.g., `posts_fetched`, `articles_scraped_successfully`, `summaries_generated`, `podcast_playht_job_id`, `podcast_status`).

**2. Workflow Initiation:**

- A workflow is initiated via the `POST /api/system/trigger-workflow` API endpoint (callable manually, by CLI, or by a cron job).
- Upon successful trigger, a new record is created in `workflow_runs` with an initial status (e.g., 'pending' or 'fetching_hn'), and the `jobId` is returned to the caller.
- This initial record creation triggers the first service in the pipeline (`HNContentService`) via a database webhook or an initial direct call from the trigger API logic.

**3. Service Function Responsibilities:**

- Each backend Supabase Function (`HNContentService`, `ArticleScrapingService`, `SummarizationService`, `PodcastGenerationService`, `NewsletterGenerationService`) participating in the workflow **must**:
  - Be aware of the `workflow_run_id` for the job it is processing. This ID should be passed along or retrievable based on the triggering event/data.
  - **Before starting its primary task:** Update the `workflow_runs` table for the current `workflow_run_id` to reflect its `current_step_details` (e.g., "Started scraping article X for workflow Y").
  - **Upon successful completion of its task:**
    - Update any relevant data tables (e.g., `scraped_articles`, `article_summaries`).
    - Update the `workflow_runs.details` JSONB field with relevant output or counters (e.g., increment `articles_scraped_successfully_count`).
  - **Upon failure:** Update the `workflow_runs` table for the `workflow_run_id` to set `status` to 'failed', and populate `error_message` and `current_step_details` with failure information.
  - Utilize the shared `WorkflowTrackerService` (see point 5) for consistent status updates.
- The `PlayHTWebhookHandlerAPI` (Next.js API route) updates the `newsletters` table and then the `workflow_runs.details` with podcast status.

**4. Orchestration and Progression (`CheckWorkflowCompletionService`):**

- A dedicated Supabase Function, `CheckWorkflowCompletionService`, will be scheduled to run periodically (e.g., every 5-10 minutes via Vercel Cron Jobs invoking a dedicated HTTP endpoint for this service, or Supabase's `pg_cron` if preferred for DB-centric scheduling).
- This service orchestrates progression between major stages by:
  - Querying `workflow_runs` for jobs in intermediate statuses.
  - Verifying if all prerequisite tasks for the next stage are complete by:
    - Querying related data tables (e.g., `scraped_articles`, `article_summaries`, `comment_summaries`) based on the `workflow_run_id`.
    - Checking expected counts against actual completed counts (e.g., all articles intended for summarization have an `article_summaries` entry for the current `workflow_run_id`).
    - Checking the status of the podcast generation in the `newsletters` table (linked to `workflow_run_id`) before proceeding to email delivery.
  - If conditions for the next stage are met, it updates the `workflow_runs.status` (e.g., to 'generating_newsletter') and then invokes the appropriate next service (e.g., `NewsletterGenerationService`), passing the `workflow_run_id`.
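For illustration, one such prerequisite check might be sketched as follows; the count comparison and client wiring are assumptions, not the definitive implementation:

```typescript
import { createClient } from "@supabase/supabase-js";

// Illustrative sketch of one CheckWorkflowCompletionService check:
// "are all scraped articles for this run summarized?"
const supabase = createClient(
  Deno.env.get("SUPABASE_URL")!,
  Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!
);

async function readyForNewsletter(workflowRunId: string): Promise<boolean> {
  const { count: scraped } = await supabase
    .from("scraped_articles")
    .select("*", { count: "exact", head: true })
    .eq("workflow_run_id", workflowRunId);

  const { count: summarized } = await supabase
    .from("article_summaries")
    .select("*", { count: "exact", head: true })
    .eq("workflow_run_id", workflowRunId);

  return scraped !== null && scraped > 0 && scraped === summarized;
}
```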
**5. Shared `WorkflowTrackerService`:**

- A utility service, `WorkflowTrackerService`, will be created in `supabase/functions/_shared/`.
- It will provide standardized methods for all backend functions to interact with the `workflow_runs` table (e.g., `updateWorkflowStep()`, `incrementWorkflowDetailCounter()`, `failWorkflow()`, `completeWorkflowStep()`).
- This promotes consistency in status updates and reduces redundant code.
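A sketch of what this shared service's surface might look like; the method names come from the list above, while the signatures and column names are assumptions:

```typescript
// supabase/functions/_shared/workflow-tracker.ts - illustrative sketch only.
import { SupabaseClient } from "@supabase/supabase-js";

export class WorkflowTrackerService {
  constructor(private db: SupabaseClient) {}

  async updateWorkflowStep(runId: string, details: string): Promise<void> {
    await this.db
      .from("workflow_runs")
      .update({ current_step_details: details })
      .eq("id", runId);
  }

  async incrementWorkflowDetailCounter(runId: string, counter: string): Promise<void> {
    // Sketch: read-modify-write on the JSONB details column; a real
    // implementation would likely use an atomic RPC instead.
    const { data } = await this.db
      .from("workflow_runs").select("details").eq("id", runId).single();
    const details = {
      ...(data?.details ?? {}),
      [counter]: ((data?.details?.[counter] as number) ?? 0) + 1,
    };
    await this.db.from("workflow_runs").update({ details }).eq("id", runId);
  }

  async failWorkflow(runId: string, errorMessage: string): Promise<void> {
    await this.db
      .from("workflow_runs")
      .update({ status: "failed", error_message: errorMessage })
      .eq("id", runId);
  }
}
```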
|
||||
|
||||
**6. Podcast Link Before Email Delivery:**
|
||||
|
||||
- The `NewsletterGenerationService`, after generating the HTML and initiating podcast creation (via `PodcastGenerationService`), will set the `newsletters.podcast_status` to 'generating'.
|
||||
- The `CheckWorkflowCompletionService` (or the `NewsletterGenerationService` itself if designed for polling/delay) will monitor the `newsletters.podcast_url` (populated by the `PlayHTWebhookHandlerAPI`) or `newsletters.podcast_status`.
|
||||
- Email delivery is triggered by `CheckWorkflowCompletionService` once the podcast URL is available, a timeout is reached, or podcast generation fails (as per PRD's delay/retry logic). The final delivery status will be updated in `workflow_runs` and `newsletters`.
|
||||
|
||||
## Project Structure
|
||||
|
||||
> This section has been moved to a dedicated document: [Project Structure](./project-structure.md)
|
||||
|
||||
## API Reference
|
||||
|
||||
> This section has been moved to a dedicated document: [API Reference](./api-reference.md)
|
||||
|
||||
## Data Models
|
||||
|
||||
> This section has been moved to a dedicated document: [Data Models](./data-models.md)
|
||||
|
||||
## Core Workflow / Sequence Diagrams
|
||||
|
||||
> This section has been moved to a dedicated document: [Core Workflow / Sequence Diagrams](./sequence-diagrams.md)
|
||||
|
||||
## Definitive Tech Stack Selections
|
||||
|
||||
> This section has been moved to a dedicated document: [Definitive Tech Stack Selections](./tech-stack.md)
|
||||
|
||||
## Infrastructure and Deployment Overview
|
||||
|
||||
> This section has been moved to a dedicated document: [Infrastructure and Deployment Overview](./infra-deployment.md)
|
||||
|
||||
## Error Handling Strategy
|
||||
|
||||
> This section is part of the consolidated [Operational Guidelines](./operational-guidelines.md#error-handling-strategy).
|
||||
|
||||
## Coding Standards
|
||||
|
||||
> This section is part of the consolidated [Operational Guidelines](./operational-guidelines.md#coding-standards).
|
||||
|
||||
## Overall Testing Strategy
|
||||
|
||||
> This section is part of the consolidated [Operational Guidelines](./operational-guidelines.md#overall-testing-strategy).
|
||||
|
||||
## Security Best Practices
|
||||
|
||||
> This section is part of the consolidated [Operational Guidelines](./operational-guidelines.md#security-best-practices).
|
||||
|
||||
## Key Reference Documents
|
||||
|
||||
1. **Product Requirements Document (PRD):** `docs/prd-incremental-full-agile-mode.txt`
|
||||
2. **UI/UX Specification:** `docs/ui-ux-spec.txt`
|
||||
3. **Technical Preferences:** `docs/technical-preferences copy.txt`
|
||||
4. **Environment Variables Documentation:** [Environment Variables Documentation](./environment-vars.md)
|
||||
5. **(Optional) Frontend Architecture Document:** `docs/frontend-architecture.md` (To be created by Design Architect)
|
||||
6. **Play.ht API Documentation:** [https://docs.play.ai/api-reference/playnote/post](https://docs.play.ai/api-reference/playnote/post)
|
||||
7. **Hacker News Algolia API:** [https://hn.algolia.com/api](https://hn.algolia.com/api)
|
||||
8. **Ollama API Documentation:** [https://github.com/ollama/ollama/blob/main/docs/api.md](https://www.google.com/search?q=https://github.com/ollama/ollama/blob/main/docs/api.md)
|
||||
9. **Supabase Documentation:** [https://supabase.com/docs](https://supabase.com/docs)
|
||||
10. **Next.js Documentation:** [https://nextjs.org/docs](https://nextjs.org/docs)
|
||||
11. **Vercel Documentation:** [https://vercel.com/docs](https://vercel.com/docs)
|
||||
12. **Pino Logging Documentation:** [https://getpino.io/](https://getpino.io/)
|
||||
13. **Zod Documentation:** [https://zod.dev/](https://zod.dev/)
|
||||
|
||||
## Change Log
|
||||
|
||||
| Change | Date | Version | Description | Author |
|
||||
| :----------------------------------------- | :--------- | :------ | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------- |
|
||||
| Initial Draft based on PRD and discussions | 2025-05-13 | 0.1 | First complete draft covering project overview, components, data models, tech stack, deployment, error handling, coding standards, testing strategy, security, and workflow orchestration. | 3-arch (Agent) |
|
||||
|
||||
---
|
||||
|
||||
## Prompt for Design Architect: Frontend Architecture Definition
|
||||
|
||||
**To the Design Architect (Agent Specializing in Frontend Architecture):**
|
||||
|
||||
You are now tasked with defining the detailed **Frontend Architecture** for the BMad DiCaster project. This main Architecture Document and the `docs/ui-ux-spec.txt` are your primary input artifacts. Your goal is to produce a dedicated `frontend-architecture.md` document.
|
||||
|
||||

**Key Inputs & Constraints (from this Main Architecture Document & UI/UX Spec):**

1. **Overall Project Architecture:** Familiarize yourself with the "High-Level Overview," "Component View," "Data Models" (especially any shared types in `shared/types/`), and "API Reference." Pay particular attention to internal APIs such as `/api/system/trigger-workflow` and `/api/webhooks/playht`; the MVP frontend primarily reads newsletter data, but it may need to interact with these for admin purposes in the future.
2. **UI/UX Specification (`docs/ui-ux-spec.txt`):** This document contains user flows, wireframes, core screens (Newsletter List, Newsletter Detail), component inventory (NewsletterCard, PodcastPlayer, DownloadButton, BackButton), branding considerations (synthwave, minimalist), and accessibility aspirations.
3. **Definitive Technology Stack (Frontend Relevant):**
   - Framework: Next.js (`latest`, App Router)
   - Language: React (`19.0.0`) with TypeScript (`5.7.2`)
   - UI Libraries: Tailwind CSS (`3.4.17`), Shadcn UI (`latest`)
   - State Management: Zustand (`latest`)
   - Testing: React Testing Library (RTL) (`latest`), Jest (`latest`)
   - Starter Template: Vercel/Supabase Next.js App Router template ([https://vercel.com/templates/next.js/supabase](https://vercel.com/templates/next.js/supabase)). Leverage its existing structure for `app/`, `components/ui/` (from Shadcn), `lib/utils.ts`, and `utils/supabase/` (client, server, middleware helpers for Supabase).
4. **Project Structure (Frontend Relevant):** Refer to the "Project Structure" section in this document, particularly the `app/` directory, `components/` (for Shadcn `ui` and your `core` application components), `lib/`, and `utils/supabase/`.
5. **Existing Frontend Files (from template):** Be aware of `middleware.ts` (for Supabase auth) and any existing components or utility functions provided by the starter template.

**Tasks for Frontend Architecture Document (`frontend-architecture.md`):**

1. **Refine Frontend Project Structure:**
   - Detail the specific folder structure within `app/`. Propose organization for pages (routes), layouts, application-specific components (`app/components/core/`), data fetching logic, context providers, and Zustand stores.
   - How will Shadcn UI components (`components/ui/`) be used and potentially customized?
2. **Component Architecture:**
   - For each core screen identified in the UI/UX spec (Newsletter List, Newsletter Detail), define the primary React component hierarchy.
   - Specify responsibilities and key props for major reusable application components (e.g., `NewsletterCard`, `NewsletterDetailView`, `PodcastPlayerControls`).
   - How will components fetch and display data from Supabase? (e.g., Server Components, Client Components using the Supabase client from `utils/supabase/client.ts` or `utils/supabase/server.ts`).
3. **State Management (Zustand):**
   - Identify global and local state needs.
   - Define specific Zustand store(s): what data they will hold (e.g., current newsletter list, selected newsletter details, podcast player state), and what actions they will expose.
   - How will components interact with these stores? (A store sketch follows this item.)
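
For illustration only, a podcast player store along the lines described above might look like the following sketch; `usePodcastPlayerStore` and its fields are assumptions for the Design Architect to refine.

```ts
// A minimal sketch of a Zustand store for podcast player state.
// Store name and fields are illustrative, not final.
import { create } from "zustand";

interface PodcastPlayerState {
  currentPodcastUrl: string | null;
  isPlaying: boolean;
  play: (url: string) => void;
  pause: () => void;
}

export const usePodcastPlayerStore = create<PodcastPlayerState>((set) => ({
  currentPodcastUrl: null,
  isPlaying: false,
  play: (url) => set({ currentPodcastUrl: url, isPlaying: true }),
  pause: () => set({ isPlaying: false }),
}));
```

Components would subscribe with selectors, e.g. `const isPlaying = usePodcastPlayerStore((s) => s.isPlaying);`, so only the slices they actually read trigger re-renders.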
4. **Data Fetching & Caching (Frontend):**
   - Specify patterns for fetching newsletter data (lists and individual items) and podcast information.
   - How will Next.js data fetching capabilities (Server Components, Route Handlers, `fetch` with caching options) be utilized with the Supabase client?
   - Address loading and error states for data fetching in the UI. (A Server Component sketch follows this item.)
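
As one possible pattern for the list page, a Server Component could read directly from Supabase. This sketch assumes the starter template's `createClient` helper in `utils/supabase/server.ts` and the `newsletters` table defined in the Data Models shard; it is not a final implementation.

```tsx
// A minimal sketch of a Server Component reading the newsletter list.
// `createClient` may be sync or async depending on the template version;
// `await` is safe either way.
import { createClient } from "@/utils/supabase/server";

export default async function NewsletterListPage() {
  const supabase = await createClient();
  const { data: newsletters, error } = await supabase
    .from("newsletters")
    .select("id, title, target_date, podcast_url")
    .order("target_date", { ascending: false });

  if (error) return <p>Could not load newsletters.</p>;

  return (
    <ul>
      {newsletters?.map((n) => (
        <li key={n.id}>{n.title}</li>
      ))}
    </ul>
  );
}
```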
5. **Routing:**
   - Confirm Next.js App Router usage and define the URL structure for the newsletter list and detail pages.
6. **Styling Approach:**
   - Reiterate use of Tailwind CSS and Shadcn UI.
   - Define any project-specific conventions for applying Tailwind classes or extending the theme (beyond what's in `tailwind.config.ts`).
   - How will the "synthwave technical glowing purple vibes" be implemented using Tailwind? (A possible theme extension is sketched after this item.)
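
One way to carry the synthwave vibe through Tailwind is a theme extension; the color values and token names below are illustrative assumptions, not approved branding.

```ts
// tailwind.config.ts — a sketch of a possible synthwave theme extension.
import type { Config } from "tailwindcss";

const config: Config = {
  content: ["./app/**/*.{ts,tsx}", "./components/**/*.{ts,tsx}"],
  theme: {
    extend: {
      colors: {
        synthwave: {
          purple: "#9d00ff", // illustrative hex values only
          glow: "#c084fc",
          background: "#0f0a1e",
        },
      },
      boxShadow: {
        // Soft purple glow for cards and buttons.
        glow: "0 0 12px rgba(157, 0, 255, 0.6)",
      },
    },
  },
  plugins: [],
};

export default config;
```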
7. **Error Handling (Frontend):**
   - How will errors from API calls (to Supabase or internal Next.js API routes, if any) be handled and displayed to the user?
   - Strategy for UI error boundaries. (A minimal boundary sketch follows this item.)
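
For error boundaries, the App Router convention of an `error.tsx` file per route segment is a natural fit. A minimal sketch, assuming the `app/(web)/newsletters` segment used elsewhere in this document:

```tsx
// app/(web)/newsletters/error.tsx — App Router error boundary sketch.
// Error files must be Client Components and receive `error` and `reset`.
"use client";

export default function NewslettersError({
  error,
  reset,
}: {
  error: Error;
  reset: () => void;
}) {
  return (
    <div role="alert">
      <p>Something went wrong loading newsletters: {error.message}</p>
      <button onClick={() => reset()}>Try again</button>
    </div>
  );
}
```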
8. **Accessibility (AX):**
   - Elaborate on how the WCAG 2.1 Level A requirements (keyboard navigation, semantic HTML, alt text, color contrast) will be met in component design and implementation, leveraging Next.js and Shadcn UI capabilities.
9. **Testing (Frontend):**
   - Reiterate the use of Jest and RTL for unit/integration testing of React components.
   - Provide examples or guidelines for writing effective frontend tests, along the lines of the sketch below.
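
As a guideline, a component test might look like the following sketch. It assumes a `NewsletterCard` that renders its `title` prop (the component's real props are still to be defined) and that `@testing-library/jest-dom` matchers are configured.

```tsx
// A sketch of an RTL unit test for a hypothetical NewsletterCard component.
import { render, screen } from "@testing-library/react";
import { NewsletterCard } from "@/app/components/core/NewsletterCard";

describe("NewsletterCard", () => {
  it("renders the newsletter title", () => {
    render(<NewsletterCard title="BMad DiCaster - 2025-05-13" />);
    // Requires @testing-library/jest-dom for toBeInTheDocument.
    expect(
      screen.getByText("BMad DiCaster - 2025-05-13")
    ).toBeInTheDocument();
  });
});
```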
10. **Key Frontend Libraries & Versioning:** Confirm versions from the main tech stack and list any additional frontend-only libraries required.

Your output should be a clean, well-formatted `frontend-architecture.md` document ready for AI developer agents to use for frontend implementation. Adhere to the output formatting guidelines. You are now operating in **Frontend Architecture Mode**.

---

This concludes the BMad DiCaster Architecture Document.

@@ -0,0 +1,141 @@

# Component View

> This document is a granulated shard from the main "3-architecture.md" focusing on "Component View".

The BMad DiCaster system is composed of several key logical components, primarily implemented as serverless functions (Supabase Functions deployed on Vercel) and a Next.js frontend application. These components work together in an event-driven manner.

```mermaid
graph TD
    subgraph FrontendApp ["Frontend Application (Next.js)"]
        direction LR
        WebAppUI["Web Application UI (React Components)"]
        APIServiceFE["API Service (Frontend - Next.js Route Handlers)"]
    end

    subgraph BackendServices ["Backend Services (Supabase Functions & Core Logic)"]
        direction TB
        WorkflowTriggerAPI["Workflow Trigger API (/api/system/trigger-workflow)"]
        HNContentService["HN Content Service (Supabase Fn)"]
        ArticleScrapingService["Article Scraping Service (Supabase Fn)"]
        SummarizationService["Summarization Service (LLM Facade - Supabase Fn)"]
        PodcastGenerationService["Podcast Generation Service (Supabase Fn)"]
        NewsletterGenerationService["Newsletter Generation Service (Supabase Fn)"]
        PlayHTWebhookHandlerAPI["Play.ht Webhook API (/api/webhooks/playht)"]
        CheckWorkflowCompletionService["CheckWorkflowCompletionService (Supabase Cron Fn)"]
    end

    subgraph ExternalIntegrations ["External APIs & Services"]
        direction TB
        HNAlgoliaAPI["Hacker News Algolia API"]
        PlayHTAPI["Play.ht API"]
        LLMProvider["LLM Provider (Ollama/Remote API)"]
        NodemailerService["Nodemailer (Email Delivery)"]
    end

    subgraph DataStorage ["Data Storage (Supabase PostgreSQL)"]
        direction TB
        DB_WorkflowRuns["workflow_runs Table"]
        DB_Posts["hn_posts Table"]
        DB_Comments["hn_comments Table"]
        DB_Articles["scraped_articles Table"]
        DB_Summaries["article_summaries / comment_summaries Tables"]
        DB_Newsletters["newsletters Table"]
        DB_Subscribers["subscribers Table"]
        DB_Prompts["summarization_prompts Table"]
        DB_NewsletterTemplates["newsletter_templates Table"]
    end

    UserWeb[End User] --> WebAppUI
    WebAppUI --> APIServiceFE
    APIServiceFE --> WorkflowTriggerAPI
    APIServiceFE --> DataStorage

    DevAdmin[Developer/Admin/Cron] --> WorkflowTriggerAPI

    WorkflowTriggerAPI --> DB_WorkflowRuns

    DB_WorkflowRuns -- "Triggers (via CheckWorkflowCompletion or direct)" --> HNContentService
    HNContentService --> HNAlgoliaAPI
    HNContentService --> DB_Posts
    HNContentService --> DB_Comments
    HNContentService --> DB_WorkflowRuns

    DB_Posts -- "Triggers (via DB Webhook)" --> ArticleScrapingService
    ArticleScrapingService --> DB_Articles
    ArticleScrapingService --> DB_WorkflowRuns

    DB_Articles -- "Triggers (via DB Webhook)" --> SummarizationService
    SummarizationService --> LLMProvider
    SummarizationService --> DB_Prompts
    SummarizationService --> DB_Summaries
    SummarizationService --> DB_WorkflowRuns

    CheckWorkflowCompletionService -- "Monitors & Triggers Next Steps Based On" --> DB_WorkflowRuns
    CheckWorkflowCompletionService -- "Monitors & Triggers Next Steps Based On" --> DB_Summaries
    CheckWorkflowCompletionService -- "Monitors & Triggers Next Steps Based On" --> DB_Newsletters

    CheckWorkflowCompletionService --> NewsletterGenerationService
    NewsletterGenerationService --> DB_NewsletterTemplates
    NewsletterGenerationService --> DB_Summaries
    NewsletterGenerationService --> DB_Newsletters
    NewsletterGenerationService --> DB_WorkflowRuns

    CheckWorkflowCompletionService --> PodcastGenerationService
    PodcastGenerationService --> PlayHTAPI
    PodcastGenerationService --> DB_Newsletters
    PodcastGenerationService --> DB_WorkflowRuns

    PlayHTAPI -- "Webhook" --> PlayHTWebhookHandlerAPI
    PlayHTWebhookHandlerAPI --> DB_Newsletters
    PlayHTWebhookHandlerAPI --> DB_WorkflowRuns

    CheckWorkflowCompletionService -- "Triggers Delivery" --> NewsletterGenerationService
    NewsletterGenerationService -- "(For Delivery)" --> NodemailerService
    NewsletterGenerationService -- "(For Delivery)" --> DB_Subscribers
    NewsletterGenerationService -- "(For Delivery)" --> DB_Newsletters
    NewsletterGenerationService -- "(For Delivery)" --> DB_WorkflowRuns

    classDef user fill:#9cf,stroke:#333,stroke-width:2px;
    classDef feapp fill:#f9d,stroke:#333,stroke-width:2px;
    classDef beapp fill:#cdf,stroke:#333,stroke-width:2px;
    classDef external fill:#ffc,stroke:#333,stroke-width:2px;
    classDef db fill:#cfc,stroke:#333,stroke-width:2px;

    class UserWeb,DevAdmin user;
    class FrontendApp,WebAppUI,APIServiceFE feapp;
    class BackendServices,WorkflowTriggerAPI,HNContentService,ArticleScrapingService,SummarizationService,PodcastGenerationService,NewsletterGenerationService,PlayHTWebhookHandlerAPI,CheckWorkflowCompletionService beapp;
    class ExternalIntegrations,HNAlgoliaAPI,PlayHTAPI,LLMProvider,NodemailerService external;
    class DataStorage,DB_WorkflowRuns,DB_Posts,DB_Comments,DB_Articles,DB_Summaries,DB_Newsletters,DB_Subscribers,DB_Prompts,DB_NewsletterTemplates db;
```

- **Frontend Application (Next.js on Vercel):**
  - **Web Application UI (React Components):** Renders the UI, displays newsletters/podcasts, handles user interactions.
  - **API Service (Frontend - Next.js Route Handlers):** Handles frontend-initiated API calls (e.g., for future admin functions) and receives incoming webhooks (Play.ht).
- **Backend Services (Supabase Functions & Core Logic):**
  - **Workflow Trigger API (`/api/system/trigger-workflow`):** Secure Next.js API route to manually initiate the daily workflow.
  - **HN Content Service (Supabase Fn):** Retrieves posts/comments from the HN Algolia API and stores them.
  - **Article Scraping Service (Supabase Fn):** Triggered by new HN posts, scrapes article content.
  - **Summarization Service (LLM Facade - Supabase Fn):** Triggered by new articles/comments, generates summaries using the LLM.
  - **Podcast Generation Service (Supabase Fn):** Sends newsletter content to the Play.ht API.
  - **Newsletter Generation Service (Supabase Fn):** Compiles the newsletter, handles podcast link logic, triggers email delivery.
  - **Play.ht Webhook API (`/api/webhooks/playht`):** Next.js API route to receive podcast status from Play.ht.
  - **CheckWorkflowCompletionService (Supabase Cron Fn):** Periodically monitors `workflow_runs` and related tables to orchestrate the progression between pipeline stages (e.g., from summarization to newsletter generation, then to delivery).
- **Data Storage (Supabase PostgreSQL):** Stores all application data, including workflow state, content, summaries, newsletters, subscribers, prompts, and templates.
- **External APIs & Services:** HN Algolia API, Play.ht API, LLM Provider (Ollama/Remote), Nodemailer.

### Architectural / Design Patterns Adopted

- **Event-Driven Architecture:** Core backend processing is a series of steps triggered by database events (Supabase Database Webhooks calling Supabase Functions hosted on Vercel) and orchestrated via the `workflow_runs` table and the `CheckWorkflowCompletionService`.
- **Serverless Functions:** Backend logic is encapsulated in Supabase Functions (running on Vercel).
- **Monorepo:** All code resides in a single repository.
- **Facade Pattern:** Encapsulates interactions with external services (HN API, Play.ht API, LLM, Nodemailer) within `supabase/functions/_shared/`.
- **Factory Pattern (for LLM Service):** The `LLMFacade` will use a factory to instantiate the appropriate LLM client based on environment configuration (see the sketch after this list).
- **Hexagonal Architecture (Pragmatic Application):** For complex Supabase Functions, core business logic will be separated from framework-specific handlers and data interaction code (adapters) to improve testability and maintainability. Simpler functions may have a more direct implementation.
- **Repository Pattern (for Data Access - Conceptual):** Data access logic within services will be organized to conceptually resemble repositories, even if not strictly implemented with separate repository classes for all entities in MVP Supabase Functions.
- **Configuration via Environment Variables:** All sensitive and environment-specific configurations are managed via environment variables.
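
To make the Factory Pattern concrete, a minimal sketch of the facade and its factory follows. It assumes the environment variable names listed in Epic 3 (`LLM_PROVIDER_TYPE`, `OLLAMA_API_URL`, `LLM_MODEL_NAME`) and the Deno runtime of Supabase Functions; adapter internals are illustrative only.

```ts
// Sketch of the LLMFacade + factory; the real interface lives in
// supabase/functions/_shared/llm-facade.ts. Defaults are illustrative.
export interface LLMFacade {
  summarize(prompt: string, content: string): Promise<string>;
}

class OllamaAdapter implements LLMFacade {
  constructor(private apiUrl: string, private model: string) {}

  async summarize(prompt: string, content: string): Promise<string> {
    // Ollama's /api/generate returns { response: string } when stream: false
    // (see the Ollama API documentation referenced earlier).
    const res = await fetch(`${this.apiUrl}/api/generate`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: this.model,
        prompt: `${prompt}\n\n${content}`,
        stream: false,
      }),
    });
    if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
    const json = await res.json();
    return json.response;
  }
}

export function createLLMFacade(env = Deno.env.toObject()): LLMFacade {
  switch (env.LLM_PROVIDER_TYPE) {
    case "ollama":
      return new OllamaAdapter(
        env.OLLAMA_API_URL ?? "http://localhost:11434",
        env.LLM_MODEL_NAME ?? "llama3" // model name is an assumption
      );
    // case "remote": return new RemoteLLMApiAdapter(...);
    default:
      throw new Error(`Unsupported LLM_PROVIDER_TYPE: ${env.LLM_PROVIDER_TYPE}`);
  }
}
```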
@@ -0,0 +1,232 @@

# Data Models

> This document is a granulated shard from the main "3-architecture.md" focusing on "Data Models".

This section defines the core data structures used within the BMad DiCaster application, including conceptual domain entities and their corresponding database schemas in Supabase PostgreSQL.

### Core Application Entities / Domain Objects

(Conceptual types, typically defined in `shared/types/domain-models.ts`)

#### 1\. `WorkflowRun`

- **Description:** A single execution of the daily workflow.
- **Schema:** `id (string UUID)`, `createdAt (string ISO)`, `lastUpdatedAt (string ISO)`, `status (enum string: 'pending' | 'fetching_hn' | 'scraping_articles' | 'summarizing_content' | 'generating_podcast' | 'generating_newsletter' | 'delivering_newsletter' | 'completed' | 'failed')`, `currentStepDetails (string?)`, `errorMessage (string?)`, `details (object?: { postsFetched?: number, articlesAttempted?: number, articlesScrapedSuccessfully?: number, summariesGenerated?: number, podcastJobId?: string, podcastStatus?: string, newsletterGeneratedAt?: string, subscribersNotified?: number })`
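
Expressed in `shared/types/domain-models.ts`, the entity above might look roughly like this sketch (field names mirror the schema listed above):

```ts
// Sketch of the WorkflowRun domain type, mirroring the schema above.
export type WorkflowRunStatus =
  | "pending"
  | "fetching_hn"
  | "scraping_articles"
  | "summarizing_content"
  | "generating_podcast"
  | "generating_newsletter"
  | "delivering_newsletter"
  | "completed"
  | "failed";

export interface WorkflowRun {
  id: string; // UUID
  createdAt: string; // ISO timestamp
  lastUpdatedAt: string; // ISO timestamp
  status: WorkflowRunStatus;
  currentStepDetails?: string;
  errorMessage?: string;
  details?: {
    postsFetched?: number;
    articlesAttempted?: number;
    articlesScrapedSuccessfully?: number;
    summariesGenerated?: number;
    podcastJobId?: string;
    podcastStatus?: string;
    newsletterGeneratedAt?: string;
    subscribersNotified?: number;
  };
}
```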

#### 2\. `HNPost`

- **Description:** A post from Hacker News.
- **Schema:** `id (string HN_objectID)`, `hnNumericId (number?)`, `title (string)`, `url (string?)`, `author (string)`, `points (number)`, `createdAt (string ISO)`, `retrievedAt (string ISO)`, `hnStoryText (string?)`, `numComments (number?)`, `tags (string[]?)`, `workflowRunId (string UUID?)`

#### 3\. `HNComment`

- **Description:** A comment on an HN post.
- **Schema:** `id (string HN_commentID)`, `hnPostId (string)`, `parentId (string?)`, `author (string?)`, `text (string HTML)`, `createdAt (string ISO)`, `retrievedAt (string ISO)`, `children (HNComment[]?)`

#### 4\. `ScrapedArticle`

- **Description:** Content scraped from an article URL.
- **Schema:** `id (string UUID)`, `hnPostId (string)`, `originalUrl (string)`, `resolvedUrl (string?)`, `title (string?)`, `author (string?)`, `publicationDate (string ISO?)`, `mainTextContent (string?)`, `scrapedAt (string ISO)`, `scrapingStatus (enum string: 'pending' | 'success' | 'failed_unreachable' | 'failed_paywall' | 'failed_parsing')`, `errorMessage (string?)`, `workflowRunId (string UUID?)`

#### 5\. `ArticleSummary`

- **Description:** AI-generated summary of a `ScrapedArticle`.
- **Schema:** `id (string UUID)`, `scrapedArticleId (string UUID)`, `summaryText (string)`, `generatedAt (string ISO)`, `llmPromptVersion (string?)`, `llmModelUsed (string?)`, `workflowRunId (string UUID)`

#### 6\. `CommentSummary`

- **Description:** AI-generated summary of comments for an `HNPost`.
- **Schema:** `id (string UUID)`, `hnPostId (string)`, `summaryText (string)`, `generatedAt (string ISO)`, `llmPromptVersion (string?)`, `llmModelUsed (string?)`, `workflowRunId (string UUID)`

#### 7\. `Newsletter`

- **Description:** The daily generated newsletter.
- **Schema:** `id (string UUID)`, `workflowRunId (string UUID)`, `targetDate (string YYYY-MM-DD)`, `title (string)`, `generatedAt (string ISO)`, `htmlContent (string)`, `mjmlTemplateVersion (string?)`, `podcastPlayhtJobId (string?)`, `podcastUrl (string?)`, `podcastStatus (enum string?: 'pending' | 'generating' | 'completed' | 'failed')`, `deliveryStatus (enum string: 'pending' | 'sending' | 'sent' | 'partially_failed' | 'failed')`, `scheduledSendAt (string ISO?)`, `sentAt (string ISO?)`

#### 8\. `Subscriber`

- **Description:** An email subscriber.
- **Schema:** `id (string UUID)`, `email (string)`, `subscribedAt (string ISO)`, `isActive (boolean)`, `unsubscribedAt (string ISO?)`

#### 9\. `SummarizationPrompt`

- **Description:** Stores prompts for AI summarization.
- **Schema:** `id (string UUID)`, `promptName (string)`, `promptText (string)`, `version (string)`, `createdAt (string ISO)`, `updatedAt (string ISO)`, `isDefaultArticlePrompt (boolean)`, `isDefaultCommentPrompt (boolean)`

#### 10\. `NewsletterTemplate`

- **Description:** HTML/MJML templates for newsletters.
- **Schema:** `id (string UUID)`, `templateName (string)`, `mjmlContent (string?)`, `htmlContent (string)`, `version (string)`, `createdAt (string ISO)`, `updatedAt (string ISO)`, `isDefault (boolean)`

### Database Schemas (Supabase PostgreSQL)

#### 1\. `workflow_runs`

```sql
CREATE TABLE public.workflow_runs (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  last_updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  status TEXT NOT NULL DEFAULT 'pending', -- pending, fetching_hn, scraping_articles, summarizing_content, generating_podcast, generating_newsletter, delivering_newsletter, completed, failed
  current_step_details TEXT NULL,
  error_message TEXT NULL,
  details JSONB NULL -- {postsFetched, articlesAttempted, articlesScrapedSuccessfully, summariesGenerated, podcastJobId, podcastStatus, newsletterGeneratedAt, subscribersNotified}
);
COMMENT ON COLUMN public.workflow_runs.status IS 'Possible values: pending, fetching_hn, scraping_articles, summarizing_content, generating_podcast, generating_newsletter, delivering_newsletter, completed, failed';
COMMENT ON COLUMN public.workflow_runs.details IS 'Stores step-specific progress or metadata like postsFetched, articlesScraped, podcastJobId, etc.';
```
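
For context, a status update against this table via the Supabase client might look like the sketch below; the `WorkflowTrackerService` described in Epic 1 would wrap calls of this shape (function and variable names are illustrative).

```ts
// Sketch: updating a workflow run's status, as WorkflowTrackerService might.
// Assumes a service-role Supabase client inside a Supabase Function (Deno).
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  Deno.env.get("SUPABASE_URL")!,
  Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!
);

export async function markStep(
  runId: string,
  status: string,
  details?: Record<string, unknown>
): Promise<void> {
  const { error } = await supabase
    .from("workflow_runs")
    .update({
      status,
      details,
      last_updated_at: new Date().toISOString(),
    })
    .eq("id", runId);
  if (error) throw new Error(`Failed to update workflow_runs: ${error.message}`);
}
```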

#### 2\. `hn_posts`

```sql
CREATE TABLE public.hn_posts (
  id TEXT PRIMARY KEY, -- HN's objectID
  hn_numeric_id BIGINT NULL UNIQUE,
  title TEXT NOT NULL,
  url TEXT NULL,
  author TEXT NULL,
  points INTEGER NOT NULL DEFAULT 0,
  created_at TIMESTAMPTZ NOT NULL, -- HN post creation time
  retrieved_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  hn_story_text TEXT NULL,
  num_comments INTEGER NULL DEFAULT 0,
  tags TEXT[] NULL,
  workflow_run_id UUID NULL REFERENCES public.workflow_runs(id) ON DELETE SET NULL -- The run that fetched this instance of the post
);
COMMENT ON COLUMN public.hn_posts.id IS 'Hacker News objectID for the story.';
```

#### 3\. `hn_comments`

```sql
CREATE TABLE public.hn_comments (
  id TEXT PRIMARY KEY, -- HN's comment ID
  hn_post_id TEXT NOT NULL REFERENCES public.hn_posts(id) ON DELETE CASCADE,
  parent_comment_id TEXT NULL REFERENCES public.hn_comments(id) ON DELETE CASCADE,
  author TEXT NULL,
  comment_text TEXT NOT NULL, -- HTML content of the comment
  created_at TIMESTAMPTZ NOT NULL, -- HN comment creation time
  retrieved_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE INDEX idx_hn_comments_post_id ON public.hn_comments(hn_post_id);
```

#### 4\. `scraped_articles`

```sql
CREATE TABLE public.scraped_articles (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  hn_post_id TEXT NOT NULL REFERENCES public.hn_posts(id) ON DELETE CASCADE,
  original_url TEXT NOT NULL,
  resolved_url TEXT NULL,
  title TEXT NULL,
  author TEXT NULL,
  publication_date TIMESTAMPTZ NULL,
  main_text_content TEXT NULL,
  scraped_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  scraping_status TEXT NOT NULL DEFAULT 'pending', -- pending, success, failed_unreachable, failed_paywall, failed_parsing
  error_message TEXT NULL,
  workflow_run_id UUID NULL REFERENCES public.workflow_runs(id) ON DELETE SET NULL
);
CREATE UNIQUE INDEX idx_scraped_articles_hn_post_id_workflow_run_id ON public.scraped_articles(hn_post_id, workflow_run_id);
COMMENT ON COLUMN public.scraped_articles.scraping_status IS 'Possible values: pending, success, failed_unreachable, failed_paywall, failed_parsing, failed_generic';
```

#### 5\. `article_summaries`

```sql
CREATE TABLE public.article_summaries (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  scraped_article_id UUID NOT NULL REFERENCES public.scraped_articles(id) ON DELETE CASCADE,
  summary_text TEXT NOT NULL,
  generated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  llm_prompt_version TEXT NULL,
  llm_model_used TEXT NULL,
  workflow_run_id UUID NOT NULL REFERENCES public.workflow_runs(id) ON DELETE CASCADE
);
CREATE UNIQUE INDEX idx_article_summaries_scraped_article_id_workflow_run_id ON public.article_summaries(scraped_article_id, workflow_run_id);
COMMENT ON COLUMN public.article_summaries.llm_prompt_version IS 'Version or identifier of the summarization prompt used.';
```

#### 6\. `comment_summaries`

```sql
CREATE TABLE public.comment_summaries (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  hn_post_id TEXT NOT NULL REFERENCES public.hn_posts(id) ON DELETE CASCADE,
  summary_text TEXT NOT NULL,
  generated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  llm_prompt_version TEXT NULL,
  llm_model_used TEXT NULL,
  workflow_run_id UUID NOT NULL REFERENCES public.workflow_runs(id) ON DELETE CASCADE
);
CREATE UNIQUE INDEX idx_comment_summaries_hn_post_id_workflow_run_id ON public.comment_summaries(hn_post_id, workflow_run_id);
```

#### 7\. `newsletters`

```sql
CREATE TABLE public.newsletters (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  workflow_run_id UUID NOT NULL UNIQUE REFERENCES public.workflow_runs(id) ON DELETE CASCADE,
  target_date DATE NOT NULL UNIQUE,
  title TEXT NOT NULL,
  generated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  html_content TEXT NOT NULL,
  mjml_template_version TEXT NULL,
  podcast_playht_job_id TEXT NULL,
  podcast_url TEXT NULL,
  podcast_status TEXT NULL DEFAULT 'pending', -- pending, generating, completed, failed
  delivery_status TEXT NOT NULL DEFAULT 'pending', -- pending, sending, sent, failed, partially_failed
  scheduled_send_at TIMESTAMPTZ NULL,
  sent_at TIMESTAMPTZ NULL
);
CREATE INDEX idx_newsletters_target_date ON public.newsletters(target_date);
COMMENT ON COLUMN public.newsletters.target_date IS 'The date this newsletter pertains to. Ensures uniqueness.';
```

#### 8\. `subscribers`

```sql
CREATE TABLE public.subscribers (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  email TEXT NOT NULL UNIQUE,
  subscribed_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  is_active BOOLEAN NOT NULL DEFAULT TRUE,
  unsubscribed_at TIMESTAMPTZ NULL
);
CREATE INDEX idx_subscribers_email_active ON public.subscribers(email, is_active);
```

#### 9\. `summarization_prompts`

```sql
CREATE TABLE public.summarization_prompts (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  prompt_name TEXT NOT NULL UNIQUE,
  prompt_text TEXT NOT NULL,
  version TEXT NOT NULL DEFAULT '1.0',
  created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  is_default_article_prompt BOOLEAN NOT NULL DEFAULT FALSE,
  is_default_comment_prompt BOOLEAN NOT NULL DEFAULT FALSE
);
COMMENT ON COLUMN public.summarization_prompts.prompt_name IS 'Unique identifier for the prompt, e.g., article_summary_v2.1';
-- Application logic will enforce that only one prompt of each type is marked as default.
```

#### 10\. `newsletter_templates`

```sql
CREATE TABLE public.newsletter_templates (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  template_name TEXT NOT NULL UNIQUE,
  mjml_content TEXT NULL,
  html_content TEXT NOT NULL,
  version TEXT NOT NULL DEFAULT '1.0',
  created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  is_default BOOLEAN NOT NULL DEFAULT FALSE
);
-- Application logic will enforce that only one template is marked as default.
```

@@ -0,0 +1,9 @@

# Environment Variables Documentation

> This document is a granulated shard from the main "3-architecture.md" focusing on "Environment Variables Documentation".

The BMad DiCaster Architecture Document (`3-architecture.md`) indicates that detailed environment variable documentation is intended to be consolidated, potentially in a file named `docs/environment-vars.md`. This file is marked as "(To be created)" within the "Key Reference Documents" section of `3-architecture.md`.

While specific environment variables are mentioned contextually throughout `3-architecture.md` (e.g., for Play.ht API keys, LLM provider configuration, SMTP settings, and workflow trigger API keys), a dedicated, centralized list of all variables, their purposes, and example values is not present as a single extractable section suitable for verbatim sharding at this time.

This sharded document serves as a placeholder, reflecting the sharding plan's intent to capture "Environment Variables Documentation". For specific variables mentioned in context, please refer to the full `3-architecture.md` (particularly sections like API Reference, Infrastructure Overview, and Security Best Practices) until a dedicated and consolidated list is formally compiled as intended.

@@ -0,0 +1,111 @@

# Epic 1: Project Initialization, Setup, and HN Content Acquisition

> This document is a granulated shard from the main "BETA-V3/v3-demos/full-stack-app-demo/8-prd-po-updated.md" focusing on "Epic 1: Project Initialization, Setup, and HN Content Acquisition".

- Goal: Establish the foundational project structure, including the Next.js application, Supabase integration, deployment pipeline, API/CLI triggers, core workflow orchestration, and implement functionality to retrieve, process, and store Hacker News posts/comments via a `ContentAcquisitionFacade`, providing data for newsletter generation. Implement the database event mechanism to trigger subsequent processing. Define core configuration tables, seed data, and set up testing frameworks.
- **Story 1.1:** As a developer, I want to set up the Next.js project with Supabase integration, so that I have a functional foundation for building the application.
  - Acceptance Criteria:
    - The Next.js project is initialized using the Vercel/Supabase template.
    - Supabase is successfully integrated with the Next.js project.
    - The project codebase is initialized in a Git repository.
    - A basic project `README.md` is created in the root of the repository, including a project overview, links to main documentation (PRD, architecture), and essential developer setup/run commands.
- **Story 1.2:** As a developer, I want to configure the deployment pipeline to Vercel with separate development and production environments, so that I can easily deploy and update the application.
  - Acceptance Criteria:
    - The project is successfully linked to a Vercel project with separate environments.
    - Automated deployments are configured for the main branch to the production environment.
    - Environment variables are set up for local development and Vercel deployments.
- **Story 1.3:** As a developer, I want to implement the API and CLI trigger mechanisms, so that I can manually trigger the workflow during development and testing.
  - Acceptance Criteria:
    - A secure API endpoint is created.
    - The API endpoint requires authentication and is secured via an API key.
    - The API endpoint (`/api/system/trigger-workflow`) creates an entry in the `workflow_runs` table and returns the `jobId` (a route handler sketch follows this story).
    - The API endpoint returns an appropriate response to indicate success or failure.
    - A CLI command is created.
    - The CLI command invokes the `/api/system/trigger-workflow` endpoint or directly interacts with `WorkflowTrackerService` to start a new workflow run.
    - The CLI command provides informative output to the console.
    - All API requests and CLI command executions are logged, including timestamps and any relevant data.
    - All interactions with the API or CLI that initiate a workflow must record the `workflow_run_id` in logs.
    - The API and CLI interfaces adhere to mobile responsiveness and Tailwind/theming principles.
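
A minimal sketch of the trigger endpoint from Story 1.3 follows; the `x-api-key` header and `WORKFLOW_TRIGGER_API_KEY` variable name are illustrative assumptions.

```ts
// app/api/system/trigger-workflow/route.ts — sketch only.
import { NextResponse } from "next/server";
import { createClient } from "@supabase/supabase-js";

export async function POST(request: Request) {
  // API-key check; header and env var names are assumptions.
  if (request.headers.get("x-api-key") !== process.env.WORKFLOW_TRIGGER_API_KEY) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }

  const supabase = createClient(
    process.env.SUPABASE_URL!,
    process.env.SUPABASE_SERVICE_ROLE_KEY!
  );

  // Create the workflow_runs entry; the new row's id doubles as the jobId.
  const { data, error } = await supabase
    .from("workflow_runs")
    .insert({ status: "pending" })
    .select("id")
    .single();

  if (error) {
    return NextResponse.json({ error: error.message }, { status: 500 });
  }
  return NextResponse.json({ jobId: data.id }, { status: 202 });
}
```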
- **Story 1.4:** As a system, I want to retrieve the top 30 Hacker News posts and associated comments daily using a configurable `ContentAcquisitionFacade`, so that the data is available for summarization and newsletter generation.
  - Acceptance Criteria:
    - A `ContentAcquisitionFacade` is implemented in `supabase/functions/_shared/` to abstract interaction with the news data source (initially the HN Algolia API).
    - The facade handles API authentication (if any), request formation, and response parsing for the specific news source.
    - The facade implements basic retry logic for transient errors.
    - Unit tests for the `ContentAcquisitionFacade` (mocking actual HTTP calls to the HN Algolia API) achieve >80% coverage.
    - The system retrieves the top 30 Hacker News posts daily via the `ContentAcquisitionFacade`.
    - The system retrieves associated comments for the top 30 posts via the `ContentAcquisitionFacade`.
    - Retrieved data (posts and comments) is stored in the Supabase database, linked to the current `workflow_run_id`.
    - This functionality can be triggered via the API and CLI.
    - The system logs the start and completion of the retrieval process, including any errors.
    - Upon completion, the service updates the `workflow_runs` table with status and details (e.g., number of posts fetched) via `WorkflowTrackerService`.
    - Supabase migrations for `hn_posts` and `hn_comments` tables (as defined in `architecture.txt`) are created and applied before data operations.
- **Story 1.5: Define and Implement `workflow_runs` Table and `WorkflowTrackerService`**
  - Goal: Implement the core workflow orchestration mechanism (tracking part).
  - Acceptance Criteria:
    - Supabase migration created for the `workflow_runs` table as defined in the architecture document.
    - `WorkflowTrackerService` implemented in `supabase/functions/_shared/` with methods for initiating, updating step details, incrementing counters, failing, and completing workflow runs.
    - Service includes robust error handling and logging via Pino.
    - Unit tests for `WorkflowTrackerService` achieve >80% coverage.
- **Story 1.6: Implement `CheckWorkflowCompletionService` (Supabase Cron Function)**
  - Goal: Implement the core workflow orchestration mechanism (progression part).
  - Acceptance Criteria:
    - Supabase Function `check-workflow-completion-service` created.
    - Function queries `workflow_runs` and related tables to determine if a workflow run is ready to progress to the next major stage (a readiness-check sketch follows this story).
    - Function correctly updates `workflow_runs.status` and invokes the next appropriate service function.
    - Logic for handling podcast link availability is implemented here or in conjunction with `NewsletterGenerationService`.
    - The function is configurable to be run periodically.
    - Comprehensive logging implemented using Pino.
    - Unit tests achieve >80% coverage.
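
As a sketch of the progression check in Story 1.6, the cron function might count outstanding summaries for a run before advancing it. The exact readiness rule below is an assumption; table and column names come from the Data Models shard.

```ts
// Sketch: one readiness check inside check-workflow-completion-service.
import { createClient, SupabaseClient } from "@supabase/supabase-js";

const supabase: SupabaseClient = createClient(
  Deno.env.get("SUPABASE_URL")!,
  Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!
);

async function readyForNewsletter(runId: string): Promise<boolean> {
  const { count: articlesScraped } = await supabase
    .from("scraped_articles")
    .select("id", { count: "exact", head: true })
    .eq("workflow_run_id", runId)
    .eq("scraping_status", "success");

  const { count: summaries } = await supabase
    .from("article_summaries")
    .select("id", { count: "exact", head: true })
    .eq("workflow_run_id", runId);

  // Assumed rule: ready when every successfully scraped article has a summary.
  return (articlesScraped ?? 0) > 0 && summaries === articlesScraped;
}
```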
- **Story 1.7: Implement Workflow Status API Endpoint (`/api/system/workflow-status/{jobId}`)**
  - Goal: Allow developers/admins to check the status of a workflow run.
  - Acceptance Criteria:
    - Next.js API Route Handler created at `/api/system/workflow-status/{jobId}`.
    - Endpoint secured with API Key authentication.
    - Retrieves and returns status details from the `workflow_runs` table.
    - Handles cases where `jobId` is not found (404).
    - Unit and integration tests for the API endpoint.
- **Story 1.8: Create and document `docs/environment-vars.md` and set up `.env.example`**
  - Goal: Ensure environment variables are properly documented and managed.
  - Acceptance Criteria:
    - A `docs/environment-vars.md` file is created.
    - An `.env.example` file is created.
    - Sensitive information in examples is masked.
    - For each third-party service requiring credentials, `docs/environment-vars.md` includes:
      - A brief note or link guiding the user on where to typically sign up for the service and obtain the necessary API key or credential.
      - A recommendation for the user to check the service's current free/low-tier API rate limits against expected MVP usage.
      - A note that usage beyond free tier limits for commercial services (like Play.ht, remote LLMs, or email providers) may incur costs, and the user should review the provider's pricing.
- **Story 1.9 (New): Implement Database Event/Webhook: `hn_posts` Insert to Article Scraping Service**
  - Goal: To ensure that the successful insertion of a new Hacker News post into the `hn_posts` table automatically triggers the `ArticleScrapingService`.
  - Acceptance Criteria:
    - A Supabase database trigger or webhook mechanism (e.g., using `pg_net` or native triggers calling a function) is implemented on the `hn_posts` table for INSERT operations.
    - The trigger successfully invokes the `ArticleScrapingService` (Supabase Function); a handler sketch follows this story.
    - The invocation passes necessary parameters like `hn_post_id` and `workflow_run_id` to the `ArticleScrapingService`.
    - The mechanism is robust and includes error handling/logging for the trigger/webhook itself.
    - Unit/integration tests are created to verify the trigger fires correctly and the service is invoked with correct parameters.
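
The receiving side of the Story 1.9 trigger could look like this sketch of the `ArticleScrapingService` entry point; the payload shape follows Supabase's standard database webhook format, and the scraping logic itself is elided.

```ts
// Sketch: article-scraping-service entry point receiving a Supabase
// database webhook for INSERTs on hn_posts. Scraping logic elided.
Deno.serve(async (req: Request) => {
  const payload = await req.json();
  // Standard Supabase database webhook shape: { type, table, record, ... }.
  if (payload.type !== "INSERT" || payload.table !== "hn_posts") {
    return new Response("Ignored", { status: 200 });
  }

  const { id: hnPostId, url, workflow_run_id: workflowRunId } = payload.record;
  console.log(`Scraping article for post ${hnPostId} (run ${workflowRunId}): ${url}`);

  // ... fetch the URL, extract content with Cheerio, write scraped_articles ...

  return new Response(JSON.stringify({ ok: true }), {
    headers: { "Content-Type": "application/json" },
  });
});
```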
- **Story 1.10 (New): Define and Implement Core Configuration Tables**
  - Goal: To establish the database tables necessary for storing core application configurations like summarization prompts, newsletter templates, and subscriber lists.
  - Acceptance Criteria:
    - A Supabase migration is created and applied to define the `summarization_prompts` table schema as specified in `architecture.txt`.
    - A Supabase migration is created and applied to define the `newsletter_templates` table schema as specified in `architecture.txt`.
    - A Supabase migration is created and applied to define the `subscribers` table schema as specified in `architecture.txt`.
    - These tables are ready for data population (e.g., via seeding or manual entry for MVP).
- **Story 1.11 (New): Create Seed Data for Initial Configuration**
  - Goal: To populate the database with initial configuration data (prompts, templates, test subscribers) necessary for development and testing of MVP features.
  - Acceptance Criteria:
    - A `supabase/seed.sql` file (or an equivalent, documented seeding mechanism) is created.
    - The seed mechanism populates the `summarization_prompts` table with at least one default article prompt and one default comment prompt.
    - The seed mechanism populates the `newsletter_templates` table with at least one default newsletter template (HTML format for MVP).
    - The seed mechanism populates the `subscribers` table with a small list of 1-3 test email addresses for MVP delivery testing.
    - Instructions on how to apply the seed data to a local or development Supabase instance are documented (e.g., in the project `README.md`).
- **Story 1.12 (New): Set up and Configure Project Testing Frameworks**
  - Goal: To ensure that the primary testing frameworks (Jest, React Testing Library, Playwright) are installed and configured early in the project lifecycle, enabling test-driven development practices and adherence to the testing strategy.
  - Acceptance Criteria:
    - Jest and React Testing Library (RTL) are installed as project dependencies.
    - Jest and RTL are configured for unit and integration testing of Next.js components and JavaScript/TypeScript code (e.g., `jest.config.js` is set up, necessary Babel/TS transformations are in place).
    - A sample unit test (e.g., for a simple component or utility function) is created and runs successfully using the Jest/RTL setup.
    - Playwright is installed as a project dependency.
    - Playwright is configured for end-to-end testing (e.g., `playwright.config.ts` is set up, browser configurations are defined).
    - A sample E2E test (e.g., navigating to the application's homepage on the local development server) is created and runs successfully using Playwright.
    - Scripts to execute tests (e.g., unit tests, E2E tests) are added to `package.json`.

@@ -0,0 +1,39 @@

# Epic 2: Article Scraping

> This document is a granulated shard from the main "BETA-V3/v3-demos/full-stack-app-demo/8-prd-po-updated.md" focusing on "Epic 2: Article Scraping".

- Goal: Implement the functionality to scrape and store linked articles from HN posts, enriching the data available for summarization and the newsletter. Ensure this functionality is triggered by database events and can be tested via API/CLI (if retained). Implement the database event mechanism to trigger subsequent processing.
- **Story 2.1:** As a system, I want to identify URLs within the top 30 (configurable via environment variable) Hacker News posts, so that I can extract the content of linked articles.
  - Acceptance Criteria:
    - The system parses the top N (configurable via env var) Hacker News posts to identify URLs.
    - The system filters out any URLs that are not relevant to article scraping (e.g., links to images, videos, etc.).
- **Story 2.2:** As a system, I want to scrape the content of the identified article URLs using Cheerio, so that I can provide summaries in the newsletter.
  - Acceptance Criteria:
    - The system scrapes the content from the identified article URLs using Cheerio (a sketch follows this story).
    - The system extracts relevant content such as the article title, author, publication date, and main text.
    - The system handles potential issues during scraping, such as website errors or changes in website structure, logging errors for review.
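
A sketch of the Cheerio extraction described in Story 2.2; the selectors are generic assumptions, and production extraction would need sturdier heuristics per site.

```ts
// Sketch: extracting basic article fields with Cheerio.
import * as cheerio from "cheerio";

export async function scrapeArticle(url: string) {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Fetch failed: ${res.status}`);
  const $ = cheerio.load(await res.text());

  return {
    title: $("meta[property='og:title']").attr("content") ?? $("title").text(),
    author: $("meta[name='author']").attr("content") ?? null,
    publicationDate:
      $("meta[property='article:published_time']").attr("content") ?? null,
    // Naive main-text heuristic: join paragraph text inside <article>, else <body>.
    mainTextContent: ($("article p").length ? $("article p") : $("body p"))
      .map((_, el) => $(el).text())
      .get()
      .join("\n\n"),
  };
}
```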
- **Story 2.3:** As a system, I want to store the scraped article content in the Supabase database, associated with the corresponding Hacker News post and workflow run, so that it can be used for summarization and newsletter generation.
  - Acceptance Criteria:
    - Scraped article content is stored in the `scraped_articles` table, linked to the `hn_post_id` and the current `workflow_run_id`.
    - The system ensures that the stored data includes all extracted information (title, author, date, text).
    - The `scraping_status` and any `error_message` are recorded in the `scraped_articles` table.
    - Upon completion of scraping an article (success or failure), the service updates the `workflow_runs.details` (e.g., incrementing scraped counts) via `WorkflowTrackerService`.
    - A Supabase migration for the `scraped_articles` table (as defined in `architecture.txt`) is created and applied before data operations.
- **Story 2.4:** As a developer, I want to trigger the article scraping process via the API and CLI, so that I can manually initiate it for testing and debugging.
  - _Architect's Note: This story might become redundant if the main workflow trigger (Story 1.3) handles the entire pipeline initiation and individual service testing is done via direct function invocation or unit/integration tests._
  - Acceptance Criteria:
    - The API endpoint can trigger the article scraping process.
    - The CLI command can trigger the article scraping process locally.
    - The system logs the start and completion of the scraping process, including any errors encountered.
    - All API requests and CLI command executions are logged, including timestamps and any relevant data.
    - The system handles partial execution gracefully (i.e., if triggered before Epic 1 components like `WorkflowTrackerService` are available, it logs a message and exits).
    - If retained for isolated testing, all scraping operations initiated via this trigger must be associated with a valid `workflow_run_id` and update the `workflow_runs` table accordingly via `WorkflowTrackerService`.
- **Story 2.5 (New): Implement Database Event/Webhook: `scraped_articles` Success to Summarization Service**
  - Goal: To ensure that the successful scraping and storage of an article in `scraped_articles` automatically triggers the `SummarizationService`.
  - Acceptance Criteria:
    - A Supabase database trigger or webhook mechanism is implemented on the `scraped_articles` table (e.g., on INSERT or UPDATE where `scraping_status` is 'success').
    - The trigger successfully invokes the `SummarizationService` (Supabase Function).
    - The invocation passes necessary parameters like `scraped_article_id` and `workflow_run_id` to the `SummarizationService`.
    - The mechanism is robust and includes error handling/logging for the trigger/webhook itself.
    - Unit/integration tests are created to verify the trigger fires correctly and the service is invoked with correct parameters.

@@ -0,0 +1,41 @@

# Epic 3: AI-Powered Content Summarization

> This document is a granulated shard from the main "BETA-V3/v3-demos/full-stack-app-demo/8-prd-po-updated.md" focusing on "Epic 3: AI-Powered Content Summarization".

- Goal: Integrate AI summarization capabilities, by implementing and using a configurable and testable `LLMFacade`, to generate concise summaries of articles and comments from prompts stored in the database. This enriches the newsletter content; summarization is triggerable via API/CLI, is also triggered by database events, and tracks progress via `WorkflowTrackerService`.
- **Story 3.1:** As a system, I want to integrate an AI summarization capability by implementing and using an `LLMFacade`, so that I can generate concise summaries of articles and comments using various configurable LLM providers.
  - Acceptance Criteria:
    - An `LLMFacade` interface and concrete implementations (e.g., `OllamaAdapter`, `RemoteLLMApiAdapter`) are created in `supabase/functions/_shared/llm-facade.ts`.
    - A factory function is implemented within or alongside the facade to select the appropriate LLM adapter based on environment variables (e.g., `LLM_PROVIDER_TYPE`, `OLLAMA_API_URL`, `REMOTE_LLM_API_KEY`, `REMOTE_LLM_API_URL`, `LLM_MODEL_NAME`).
    - The `LLMFacade` handles making requests to the respective LLM APIs (as configured) and parsing their responses to extract the summary.
    - Robust error handling and retry logic for transient API errors are implemented within the facade.
    - Unit tests for the `LLMFacade` and its adapters (mocking actual HTTP calls) achieve >80% coverage.
    - The system utilizes this `LLMFacade` for all summarization tasks (articles and comments).
    - The integration is configurable via environment variables to switch between local and remote LLMs and specify model names.
- **Story 3.2:** As a system, I want to retrieve summarization prompts from the database, and then use them via the `LLMFacade` to generate 2-paragraph summaries of the scraped articles, so that users can quickly grasp the main content and the prompts can be easily updated.
  - Acceptance Criteria:
    - The service retrieves the appropriate summarization prompt from the `summarization_prompts` table (a sketch follows this story).
    - The system generates a 2-paragraph summary for each scraped article using the retrieved prompt via the `LLMFacade`.
    - Generated summaries are stored in the `article_summaries` table, linked to the `scraped_article_id` and the current `workflow_run_id`.
    - The summaries are accurate and capture the key information from the article.
    - Upon completion of each article summarization task, the service updates `workflow_runs.details` (e.g., incrementing article summaries generated counts) via `WorkflowTrackerService`.
    - (System Note: The `CheckWorkflowCompletionService` monitors the `article_summaries` table as part of determining overall summarization completion for a `workflow_run_id`).
    - A Supabase migration for the `article_summaries` table (as defined in `architecture.txt`) is created and applied before data operations.
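
A sketch of the Story 3.2 flow, reusing the `createLLMFacade` shape from the Component View sketch and the client setup from earlier sketches; names and the import path are illustrative.

```ts
// Sketch: fetch the default article prompt, summarize, store the result.
import { createClient } from "@supabase/supabase-js";
import { createLLMFacade } from "../_shared/llm-facade.ts"; // path per Story 3.1

const supabase = createClient(
  Deno.env.get("SUPABASE_URL")!,
  Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!
);

export async function summarizeArticle(
  article: { id: string; main_text_content: string },
  runId: string
): Promise<void> {
  const { data: prompt, error } = await supabase
    .from("summarization_prompts")
    .select("prompt_text, version")
    .eq("is_default_article_prompt", true)
    .single();
  if (error || !prompt) throw new Error("No default article prompt found");

  const llm = createLLMFacade();
  const summaryText = await llm.summarize(
    prompt.prompt_text,
    article.main_text_content
  );

  await supabase.from("article_summaries").insert({
    scraped_article_id: article.id,
    summary_text: summaryText,
    llm_prompt_version: prompt.version,
    workflow_run_id: runId,
  });
}
```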
- **Story 3.3:** As a system, I want to retrieve summarization prompts from the database, and then use them via the `LLMFacade` to generate 2-paragraph summaries of the comments for the selected HN posts, so that users can understand the main discussions and the prompts can be easily updated.
  - Acceptance Criteria:
    - The service retrieves the appropriate summarization prompt from the `summarization_prompts` table.
    - The system generates a 2-paragraph summary of the comments for each selected HN post using the retrieved prompt via the `LLMFacade`.
    - Generated summaries are stored in the `comment_summaries` table, linked to the `hn_post_id` and the current `workflow_run_id`.
    - The summaries highlight interesting interactions and key points from the discussion.
    - Upon completion of each comment summarization task, the service updates `workflow_runs.details` (e.g., incrementing comment summaries generated counts) via `WorkflowTrackerService`.
    - (System Note: The `CheckWorkflowCompletionService` monitors the `comment_summaries` table as part of determining overall summarization completion for a `workflow_run_id`).
    - A Supabase migration for the `comment_summaries` table (as defined in `architecture.txt`) is created and applied before data operations.
- **Story 3.4:** As a developer, I want to trigger the AI summarization process via the API and CLI, so that I can manually initiate it for testing and debugging.
  - Acceptance Criteria:
    - The API endpoint can trigger the AI summarization process.
    - The CLI command can trigger the AI summarization process locally.
    - The system logs the input and output of the summarization process, including the summarization prompt used and any errors.
    - All API requests and CLI command executions are logged, including timestamps and any relevant data.
    - The system handles partial execution gracefully (i.e., if triggered before Epic 2 is complete, it logs a message and exits).
    - All summarization operations initiated via this trigger must be associated with a valid `workflow_run_id` and update the `workflow_runs` table accordingly via `WorkflowTrackerService`.

@@ -0,0 +1,43 @@

# Epic 4: Automated Newsletter Creation and Distribution

> This document is a granulated shard from the main "BETA-V3/v3-demos/full-stack-app-demo/8-prd-po-updated.md" focusing on "Epic 4: Automated Newsletter Creation and Distribution".

- Goal: Automate the generation and delivery of the daily newsletter by implementing and using a configurable `EmailDispatchFacade`. This includes handling podcast link availability, being triggerable via API/CLI, orchestration by `CheckWorkflowCompletionService`, and status tracking via `WorkflowTrackerService`.
- **Story 4.1:** As a system, I want to retrieve the newsletter template from the database, so that the newsletter's design and structure can be updated without code changes.
  - Acceptance Criteria:
    - The system retrieves the newsletter template from the `newsletter_templates` database table.
- **Story 4.2:** As a system, I want to generate a daily newsletter in HTML format using the retrieved template, so that users can receive a concise summary of Hacker News content.
  - Acceptance Criteria:
    - The `NewsletterGenerationService` is triggered by the `CheckWorkflowCompletionService` when all summaries for a `workflow_run_id` are ready.
    - The service retrieves the newsletter template (from Story 4.1 output) from the `newsletter_templates` table and the summaries associated with the `workflow_run_id`.
    - The system generates a newsletter in HTML format using the template retrieved from the database.
    - The newsletter includes summaries of selected articles and comments.
    - The newsletter includes links to the original HN posts and articles.
    - The newsletter includes the original post dates/times.
    - The generated newsletter is stored in the `newsletters` table, linked to the `workflow_run_id`.
    - After initiating podcast generation, the service updates `workflow_runs.status` to 'generating_podcast' (or a similar status indicating handoff to podcast generation); that generation is part of Epic 5 logic invoked by this service or by `CheckWorkflowCompletionService` after this story's core task.
    - A Supabase migration for the `newsletters` table (as defined in `architecture.txt`) is created and applied before data operations.
- **Story 4.3:** As a system, I want to send the generated newsletter to a list of subscribers by implementing and using an `EmailDispatchFacade`, with credentials securely provided, so that users receive the daily summary in their inbox.
  - Acceptance Criteria:
    - An `EmailDispatchFacade` is implemented in `supabase/functions/_shared/` to abstract interaction with the email sending service (initially Nodemailer via SMTP); a facade sketch follows this story.
    - The facade handles configuration (e.g., SMTP settings from environment variables), email construction (From, To, Subject, HTML content), and sending the email.
    - The facade includes error handling for email dispatch and logs relevant status information.
    - Unit tests for the `EmailDispatchFacade` (mocking the actual Nodemailer library calls) achieve >80% coverage.
    - The `NewsletterGenerationService` (specifically, its delivery part, utilizing the `EmailDispatchFacade`) is triggered by `CheckWorkflowCompletionService` once the podcast link is available in the `newsletters` table for the `workflow_run_id` (or a configured timeout/failure condition for the podcast step has been met).
    - The system retrieves the list of subscriber email addresses from the Supabase database.
    - The system sends the HTML newsletter (with the podcast link conditionally included) to all active subscribers using the `EmailDispatchFacade`.
    - Credentials for the email service (e.g., SMTP server details) are securely accessed via environment variables and used by the facade.
    - The system logs the delivery status for each subscriber (potentially via the facade).
    - The system implements conditional logic for podcast link inclusion (from the `newsletters` table) and handles delay/retry as per the PRD, coordinated by `CheckWorkflowCompletionService`.
    - Updates `newsletters.delivery_status` (e.g., 'sent', 'failed') and `workflow_runs.status` to 'completed' or 'failed' via `WorkflowTrackerService` upon completion or failure of delivery.
    - The initial email template includes a placeholder for the podcast URL.
    - The end-to-end generation time for a typical daily newsletter (from workflow trigger to successful email dispatch initiation, for a small set of content) is measured and logged during testing to ensure it's within a reasonable operational timeframe (target < 30 minutes).
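
A minimal sketch of the `EmailDispatchFacade` over Nodemailer; the SMTP environment variable names are assumptions pending `docs/environment-vars.md`.

```ts
// Sketch of the EmailDispatchFacade over Nodemailer/SMTP.
import nodemailer from "nodemailer";

export class EmailDispatchFacade {
  private transporter = nodemailer.createTransport({
    host: process.env.SMTP_HOST, // env var names are illustrative
    port: Number(process.env.SMTP_PORT ?? 587),
    auth: {
      user: process.env.SMTP_USER,
      pass: process.env.SMTP_PASS,
    },
  });

  async sendNewsletter(to: string, subject: string, html: string): Promise<void> {
    try {
      await this.transporter.sendMail({
        from: process.env.NEWSLETTER_FROM_ADDRESS,
        to,
        subject,
        html,
      });
    } catch (err) {
      // Log and rethrow so the caller can record delivery_status = 'failed'.
      console.error(`Email dispatch to ${to} failed`, err);
      throw err;
    }
  }
}
```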
- **Story 4.4:** As a developer, I want to trigger the newsletter generation and distribution process via the API and CLI, so that I can manually initiate it for testing and debugging.
  - Acceptance Criteria:
    - The API endpoint can trigger the newsletter generation and distribution process.
    - The CLI command can trigger the newsletter generation and distribution process locally.
    - The system logs the start and completion of the process, including any errors.
    - All API requests and CLI command executions are logged, including timestamps and any relevant data.
    - The system handles partial execution gracefully (i.e., if triggered before Epic 3 is complete, it logs a message and exits).
    - All newsletter operations initiated via this trigger must be associated with a valid `workflow_run_id` and update the `workflow_runs` table accordingly via `WorkflowTrackerService`.

@@ -0,0 +1,36 @@

# Epic 5: Podcast Generation Integration

> This document is a granulated shard from the main "BETA-V3/v3-demos/full-stack-app-demo/8-prd-po-updated.md" focusing on "Epic 5: Podcast Generation Integration".

- Goal: Integrate with an audio generation API (initially Play.ht) by implementing and using a configurable `AudioGenerationFacade` to create podcast versions of the newsletter. This includes handling webhooks to update newsletter data and workflow status. Ensure this is triggerable via API/CLI, orchestrated appropriately, and uses `WorkflowTrackerService`.
- **Story 5.1:** As a system, I want to integrate with an audio generation API (e.g., Play.ht's PlayNote API) by implementing and using an `AudioGenerationFacade`, so that I can generate AI-powered podcast versions of the newsletter content.
  - Acceptance Criteria:
    - An `AudioGenerationFacade` is implemented in `supabase/functions/_shared/` to abstract interaction with the audio generation service (initially Play.ht).
    - The facade handles API authentication, request formation (e.g., sending content for synthesis, providing the webhook URL), and response parsing for the specific audio generation service.
    - The facade is configurable via environment variables (e.g., API key, user ID, service endpoint, webhook URL base).
    - Robust error handling and retry logic for transient API errors are implemented within the facade.
    - Unit tests for the `AudioGenerationFacade` (mocking actual HTTP calls to the Play.ht API) achieve >80% coverage.
    - The system uses this `AudioGenerationFacade` for all podcast generation tasks.
    - The integration employs webhooks for asynchronous status updates from the audio generation service.
    - (Context: The `PodcastGenerationService` containing this logic is invoked by `NewsletterGenerationService` or `CheckWorkflowCompletionService` for a specific `workflow_run_id` and `newsletter_id`.)
- **Story 5.2:** As a system, I want to send the newsletter content to the audio generation service via the `AudioGenerationFacade` to initiate podcast creation, and receive a job ID or initial response, so that I can track the podcast creation process.
  - Acceptance Criteria:
    - The system sends the newsletter content (identified by `newsletter_id` for a given `workflow_run_id`) to the configured audio generation service via the `AudioGenerationFacade`.
    - The system receives a job ID or initial response from the service via the facade.
    - The `podcast_playht_job_id` (or a generic `podcast_job_id`) and `podcast_status` (e.g., 'generating', 'submitted') are stored in the `newsletters` table, linked to the `workflow_run_id`.
- **Story 5.3:** As a system, I want to implement a webhook handler to receive the podcast URL from the audio generation service, and update the newsletter data and workflow status, so that the podcast link can be included in the newsletter and web interface, and the overall workflow can proceed.
|
||||
- Acceptance Criteria:
|
||||
- The system implements a webhook handler (`PlayHTWebhookHandlerAPI` at `/api/webhooks/playht` or a more generic path like `/api/webhooks/audio-generation`) to receive the podcast URL and status from the audio generation service.
|
||||
- The webhook handler extracts the podcast URL and status (e.g., 'completed', 'failed') from the webhook payload.
|
||||
- The webhook handler updates the `newsletters` table with the podcast URL and status for the corresponding job.
|
||||
- The `PlayHTWebhookHandlerAPI` also updates the `workflow_runs.details` with the podcast status (e.g., `podcast_status: 'completed'`) via `WorkflowTrackerService` for the relevant `workflow_run_id` (which may need to be looked up from the `newsletter_id` or job ID present in the webhook or associated with the service job).
|
||||
- If supported by the audio generation service (e.g., Play.ht), implement security verification for the incoming webhook (such as shared secret or signature validation) to ensure authenticity. If direct verification mechanisms are not supported by the provider, this specific AC is N/A, and alternative measures (like IP whitelisting, if applicable and secure) should be considered and documented.
|
||||
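A minimal sketch of the Story 5.3 handler as a Next.js route handler. The header name, payload shape, and shared-secret scheme are assumptions; the `WorkflowTrackerService` update is left as a comment because its API is defined elsewhere:

```typescript
// app/(api)/webhooks/playht/route.ts - illustrative sketch only.
import { NextResponse } from "next/server";
import { createClient } from "@/utils/supabase/server";

export async function POST(request: Request) {
  // Verify authenticity when the provider supports a shared secret (per the AC above).
  const secret = process.env.PLAYHT_WEBHOOK_SECRET;
  if (secret && request.headers.get("x-webhook-secret") !== secret) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }

  // Assumed payload fields: jobId, status ('completed' | 'failed'), audioUrl.
  const { jobId, status, audioUrl } = await request.json();

  const supabase = createClient();
  const { error } = await supabase
    .from("newsletters")
    .update({ podcast_url: audioUrl ?? null, podcast_status: status })
    .eq("podcast_playht_job_id", jobId);

  if (error) {
    return NextResponse.json({ error: "Update failed" }, { status: 500 });
  }

  // Next: look up the workflow_run_id for this job and record the podcast
  // status via WorkflowTrackerService (omitted in this sketch).
  return NextResponse.json({ received: true });
}
```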
- **Story 5.4:** As a developer, I want to trigger the podcast generation process via the API and CLI, so that I can manually initiate it for testing and debugging.
  - Acceptance Criteria:
    - The API endpoint can trigger the podcast generation process.
    - The CLI command can trigger the podcast generation process locally.
    - The system logs the start and completion of the process, including any intermediate steps, responses from the audio generation service, and webhook interactions.
    - All API requests and CLI command executions are logged, including timestamps and any relevant data.
    - The system handles partial execution gracefully (i.e., if triggered before Epic 4 components are ready, it logs a message and exits).
    - All podcast generation operations initiated via this trigger must be associated with a valid `workflow_run_id` and `newsletter_id`, and update the `workflow_runs` and `newsletters` tables accordingly via `WorkflowTrackerService` and direct table updates as necessary.
@@ -0,0 +1,44 @@
# Epic 6: Web Interface for Initial Structure and Content Access

> This document is a granulated shard from the main "BETA-V3/v3-demos/full-stack-app-demo/8-prd-po-updated.md" focusing on "Epic 6: Web Interface for Initial Structure and Content Access".

- Goal: Develop a user-friendly, responsive, and accessible web interface, based on `frontend-architecture.md`, to display newsletters and provide access to podcast content, aligning with the project's visual and technical guidelines. All UI development within this epic must adhere to the "synthwave technical glowing purple vibes" aesthetic using Tailwind CSS and Shadcn UI, ensure basic mobile responsiveness, meet WCAG 2.1 Level A accessibility guidelines (including semantic HTML, keyboard navigation, alt text, and color contrast), and optimize images using `next/image`, as detailed in `frontend-architecture.txt` and `ui-ux-spec.txt`.

- **Story 6.1:** As a developer, I want to establish the initial Next.js App Router structure for the web interface, including core layouts and routing, using `frontend-architecture.md` as a guide, so that I have a foundational frontend structure.
  - Acceptance Criteria:
    - Initial HTML/CSS mockups (e.g., from Vercel v0, if used) serve as a visual guide, but the implementation uses Next.js and Shadcn UI components as per `frontend-architecture.md`.
    - Next.js App Router routes are set up for `/newsletters` (listing page) and `/newsletters/[newsletterId]` (detail page) within an `app/(web)/` route group.
    - The root layout (`app/(web)/layout.tsx`) and any necessary feature-specific layouts (e.g., `app/(web)/newsletters/layout.tsx`) are implemented using Next.js App Router conventions and Tailwind CSS.
    - A `PageWrapper.tsx` component (as defined in `frontend-architecture.txt`) is implemented and used for consistent page styling (e.g., padding, max-width); a sketch follows this story.
    - The basic page structure renders correctly in the development environment.
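`PageWrapper.tsx` is small enough to sketch here; the exact Tailwind classes are assumptions consistent with the styling approach described in the frontend architecture:

```typescript
// app/components/layout/PageWrapper.tsx - illustrative sketch; class names are assumptions.
import type { ReactNode } from "react";

export default function PageWrapper({ children }: { children: ReactNode }) {
  // Consistent padding and max-width for all (web) pages.
  return (
    <div className="mx-auto w-full max-w-3xl px-4 py-8 sm:px-6 lg:px-8">
      {children}
    </div>
  );
}
```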
- **Story 6.2:** As a user, I want to see a list of current and past newsletters on the `/newsletters` page, so that I can easily browse available content.
  - Acceptance Criteria:
    - The `app/(web)/newsletters/page.tsx` route displays a list of newsletters.
    - Newsletter items are displayed using a `NewsletterCard.tsx` component.
    - The `NewsletterCard.tsx` component is developed (e.g., using the Shadcn UI `Card` as a base), displaying at least the newsletter title, target date, and a link/navigation to its detail page.
    - `NewsletterCard.tsx` is styled using Tailwind CSS to fit the "synthwave" theme.
    - Data for the newsletter list (e.g., ID, title, date) is fetched server-side in `app/(web)/newsletters/page.tsx` using the Supabase server client.
    - The newsletter list page is responsive across common device sizes (mobile, desktop).
    - The list includes relevant information such as the newsletter title and date.
    - The list is paginated or provides scrolling functionality to handle a large number of newsletters.
    - Key page load performance (e.g., Largest Contentful Paint) for the newsletter list page is benchmarked (e.g., using browser developer tools or Lighthouse) during development testing to ensure it aligns with the fast-load target (< 2 seconds).
- **Story 6.3:** As a user, I want to be able to select a newsletter from the list and read its full content within the web page on the `/newsletters/[newsletterId]` page.
  - Acceptance Criteria:
    - Clicking a `NewsletterCard` navigates to the corresponding `app/(web)/newsletters/[newsletterId]/page.tsx` route.
    - The full HTML content of the selected newsletter is retrieved server-side using the Supabase server client and displayed in a readable format.
    - A `BackButton.tsx` component is developed (e.g., using the Shadcn UI `Button` as a base) and integrated on the newsletter detail page, allowing users to navigate back to the newsletter list.
    - The newsletter detail page content area is responsive across common device sizes.
    - Key page load performance (e.g., Largest Contentful Paint) for the newsletter detail page is benchmarked (e.g., using browser developer tools or Lighthouse) during development testing to ensure it aligns with the fast-load target (< 2 seconds).
- **Story 6.4:** As a user, I want to have the option to download the currently viewed newsletter from its detail page, so that I can access it offline.
  - Acceptance Criteria:
    - A `DownloadButton.tsx` component is developed (e.g., using the Shadcn UI `Button` as a base).
    - The `DownloadButton.tsx` is integrated and visible on the newsletter detail page (`/newsletters/[newsletterId]`).
    - Clicking the button initiates a download of the newsletter content (e.g., HTML format for MVP).
- **Story 6.5:** As a user, I want to listen to the generated podcast associated with a newsletter within the web interface on its detail page, if a podcast is available.
  - Acceptance Criteria:
    - A `PodcastPlayer.tsx` React component with standard playback controls (play, pause, seek bar, volume control) is developed.
    - A `podcastPlayerSlice.ts` Zustand store is implemented to manage podcast player state (e.g., current track URL, playback status, current time, volume).
    - The `PodcastPlayer.tsx` component integrates with the `podcastPlayerSlice.ts` Zustand store for its state management.
    - If a podcast URL is available for the displayed newsletter (fetched from Supabase), the `PodcastPlayer.tsx` component is displayed on the newsletter detail page.
    - The `PodcastPlayer.tsx` can load and play the podcast audio from the provided URL.
    - The `PodcastPlayer.tsx` is styled using Tailwind CSS to fit the "synthwave" theme and is responsive.
@@ -0,0 +1,73 @@
# API Interaction Layer

> This document is a granulated shard from the main "5-front-end-architecture.md" focusing on "API Interaction Layer".

The frontend will interact with Supabase for data. Server Components will fetch data directly using the server-side Supabase client. Client Components that need to mutate data or trigger backend logic will use Next.js Server Actions or, if necessary, dedicated Next.js API Route Handlers, which then interact with Supabase.

### Client/Service Structure

- **HTTP Client Setup (for Next.js API Route Handlers, if used extensively):**

  - While Server Components and Server Actions are preferred for Supabase interactions, if direct calls from the client to custom Next.js API routes are needed, a simple `fetch` wrapper or a lightweight client like `ky` could be used.
  - The Vercel/Supabase template provides `utils/supabase/client.ts` (for client-side components) and `utils/supabase/server.ts` (for Server Components, Route Handlers, and Server Actions). These will be the primary interfaces to Supabase.
  - **Base URL:** Not applicable for direct Supabase client usage. For custom API routes: relative paths (e.g., `/api/my-route`).
  - **Authentication:** The Supabase clients handle auth token management. For custom API routes, Next.js middleware (`middleware.ts`) would handle session verification.

- **Service Definitions (Conceptual for Supabase Data Access):**

  - No separate "service" files like `userService.ts` are strictly necessary for data fetching with Server Components. Data-fetching logic will be co-located with the Server Components or within Server Actions.
  - **Example (Data fetching in a Server Component):**

    ```typescript
    // app/(web)/newsletters/page.tsx
    import { createClient } from "@/utils/supabase/server";
    import NewsletterCard from "@/app/components/core/NewsletterCard"; // Corrected path

    export default async function NewsletterListPage() {
      const supabase = createClient();
      const { data: newsletters, error } = await supabase
        .from("newsletters")
        .select("id, title, target_date, podcast_url") // Add podcast_url
        .order("target_date", { ascending: false });

      if (error) console.error("Error fetching newsletters:", error);

      // Render the list or an error state (NewsletterCard props are illustrative).
      return (
        <ul>
          {(newsletters ?? []).map((n) => (
            <li key={n.id}>
              <NewsletterCard title={n.title} targetDate={n.target_date} />
            </li>
          ))}
        </ul>
      );
    }
    ```
  - **Example (Server Action for a hypothetical "subscribe" feature - future scope):**

    ```typescript
    // app/actions/subscribeActions.ts
    "use server";
    import { createClient } from "@/utils/supabase/server";
    import { z } from "zod";
    import { revalidatePath } from "next/cache";

    const EmailSchema = z.string().email();

    export async function subscribeToNewsletter(email: string) {
      const validation = EmailSchema.safeParse(email);
      if (!validation.success) {
        return { error: "Invalid email format." };
      }
      const supabase = createClient();
      const { error } = await supabase
        .from("subscribers")
        .insert({ email: validation.data });
      if (error) {
        return { error: "Subscription failed." };
      }
      revalidatePath("/"); // Example path revalidation
      return { success: true };
    }
    ```

### Error Handling & Retries (Frontend)

- **Server Component Data Fetching Errors:** Errors from Supabase in Server Components should be caught. The component can then render an appropriate error UI or pass error information down as props. Next.js error handling (e.g., `error.tsx` files) can also be used for unrecoverable errors.
- **Client Component / Server Action Errors:**
  - Server Actions should return structured responses (e.g., `{ success: boolean, data?: any, error?: string }`). Client Components calling Server Actions will handle these responses to update the UI (e.g., display error messages or toast notifications); see the sketch after this list.
  - Shadcn UI includes a `Toast` component, which can be used for non-modal error notifications.
- **UI Error Boundaries:** React Error Boundaries can be implemented at key points in the component tree (e.g., around major layout sections or complex components) to catch rendering errors in Client Components and display a fallback UI, preventing a full app crash. A root `global-error.tsx` can serve as a global boundary.
- **Retry Logic:** In general, retries for data fetching should be left to the user (e.g., a "Try Again" button) rather than automatic client-side retries for MVP, unless dealing with specific, known transient issues. Supabase client libraries may have their own internal retry mechanisms for certain types of network errors.
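A minimal Client Component showing the structured-response convention, reusing the hypothetical `subscribeToNewsletter` action above. The toast wiring assumes Shadcn UI's `useToast` hook is installed; treat the whole component as a sketch:

```typescript
// app/components/core/SubscribeForm.tsx - illustrative sketch (future scope).
"use client";

import { useState, type FormEvent } from "react";
import { subscribeToNewsletter } from "@/app/actions/subscribeActions";
import { useToast } from "@/components/ui/use-toast"; // Shadcn UI toast hook (assumed installed)

export default function SubscribeForm() {
  const [email, setEmail] = useState("");
  const { toast } = useToast();

  async function handleSubmit(event: FormEvent<HTMLFormElement>) {
    event.preventDefault();
    const result = await subscribeToNewsletter(email);
    // Handle the structured response from the Server Action.
    if ("error" in result) {
      toast({ title: "Subscription failed", description: result.error });
    } else {
      toast({ title: "Subscribed!" });
      setEmail("");
    }
  }

  return (
    <form onSubmit={handleSubmit}>
      <input
        type="email"
        value={email}
        onChange={(e) => setEmail(e.target.value)}
        aria-label="Email address"
      />
      <button type="submit">Subscribe</button>
    </form>
  );
}
```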
@@ -0,0 +1,137 @@
# BMad DiCaster Frontend Architecture Document

## Table of Contents

- [Introduction](#introduction)
- [Overall Frontend Philosophy & Patterns](#overall-frontend-philosophy--patterns)
- [Detailed Frontend Directory Structure](#detailed-frontend-directory-structure)
- [Component Breakdown & Implementation Details](#component-breakdown--implementation-details)
  - [Component Naming & Organization](#component-naming--organization)
  - [Template for Component Specification](#template-for-component-specification)
- [State Management In-Depth](#state-management-in-depth)
  - [Chosen Solution](#chosen-solution)
  - [Rationale](#rationale)
  - [Store Structure / Slices](#store-structure--slices)
  - [Key Selectors](#key-selectors)
  - [Key Actions / Reducers / Thunks](#key-actions--reducers--thunks)
- [API Interaction Layer](#api-interaction-layer)
  - [Client/Service Structure](#clientservice-structure)
  - [Error Handling & Retries (Frontend)](#error-handling--retries-frontend)
- [Routing Strategy](#routing-strategy)
  - [Routing Library](#routing-library)
  - [Route Definitions](#route-definitions)
  - [Route Guards / Protection](#route-guards--protection)
- [Build, Bundling, and Deployment](#build-bundling-and-deployment)
  - [Build Process & Scripts](#build-process--scripts)
  - [Key Bundling Optimizations](#key-bundling-optimizations)
  - [Deployment to CDN/Hosting](#deployment-to-cdnhosting)
- [Frontend Testing Strategy](#frontend-testing-strategy)
  - [Link to Main Testing Strategy](#link-to-main-testing-strategy)
  - [Component Testing](#component-testing)
  - [UI Integration/Flow Testing](#ui-integrationflow-testing)
  - [End-to-End UI Testing Tools & Scope](#end-to-end-ui-testing-tools--scope)
- [Accessibility (AX) Implementation Details](#accessibility-ax-implementation-details)
- [Performance Considerations](#performance-considerations)
- [Change Log](#change-log)

## Introduction

This document details the technical architecture specifically for the frontend of BMad DiCaster. It complements the main BMad DiCaster Architecture Document and the UI/UX Specification. The goal is to provide a clear blueprint for frontend development, ensuring consistency, maintainability, and alignment with the overall system design and user experience goals.

- **Link to Main Architecture Document:** `docs/architecture.md` (Note: The overall system architecture, including Monorepo/Polyrepo decisions and backend structure, will influence frontend choices, especially around shared code and API interaction patterns.)
- **Link to UI/UX Specification:** `docs/ui-ux-spec.txt`
- **Link to Primary Design Files (Figma, Sketch, etc.):** N/A (Low-fidelity wireframes are described in `docs/ui-ux-spec.txt`; detailed mockups to be created during development)
- **Link to Deployed Storybook / Component Showcase (if applicable):** N/A (To be developed)
## Overall Frontend Philosophy & Patterns

> Key aspects of this section have been moved to dedicated documents:
>
> - For styling approach, theme customization, and visual design: see [Frontend Style Guide](./front-end-style-guide.md)
> - For core framework choices, component architecture, data flow, and general coding standards: see [Frontend Coding Standards & Accessibility](./front-end-coding-standards.md#general-coding-standards-from-overall-philosophy--patterns)

## Detailed Frontend Directory Structure

> This section has been moved to a dedicated document: [Detailed Frontend Directory Structure](./front-end-project-structure.md)

## Component Breakdown & Implementation Details

> This section has been moved to a dedicated document: [Component Breakdown & Implementation Details](./front-end-component-guide.md)

## State Management In-Depth

> This section has been moved to a dedicated document: [State Management In-Depth](./front-end-state-management.md)

## API Interaction Layer

> This section has been moved to a dedicated document: [API Interaction Layer](./front-end-api-interaction.md)

## Routing Strategy

> This section has been moved to a dedicated document: [Routing Strategy](./front-end-routing-strategy.md)

## Build, Bundling, and Deployment

Details align with the Vercel platform and Next.js capabilities.

### Build Process & Scripts

- **Key Build Scripts:**
  - `npm run dev`: Starts the Next.js local development server.
  - `npm run build`: Generates an optimized production build of the Next.js application. (Script from `package.json`)
  - `npm run start`: Starts the Next.js production server after a build.
- **Environment Variables Handling during Build:**
  - Client-side variables must be prefixed with `NEXT_PUBLIC_` (e.g., `NEXT_PUBLIC_SUPABASE_URL`, `NEXT_PUBLIC_SUPABASE_ANON_KEY`).
  - Server-side variables (used in Server Components, Server Actions, and Route Handlers) are accessed directly via `process.env`.
  - Environment variables are managed in the Vercel project settings for the different environments (Production, Preview, Development). Local development uses `.env.local`.

### Key Bundling Optimizations

- **Code Splitting:** The Next.js App Router automatically performs route-based code splitting. Dynamic imports (`next/dynamic`) can be used for further component-level code splitting if needed.
- **Tree Shaking:** Ensured by Next.js's Webpack configuration during the production build.
- **Lazy Loading:** Next.js lazy-loads route segments by default. Images (`next/image`) are optimized and can be lazy-loaded.
- **Minification & Compression:** Handled automatically by Next.js during `npm run build` (JavaScript and CSS minification; Gzip/Brotli compression is typically handled by Vercel).

### Deployment to CDN/Hosting

- **Target Platform:** **Vercel** (as per `architecture.txt`)
- **Deployment Trigger:** Automatic deployments via Vercel's Git integration (GitHub) on pushes/merges to specified branches (e.g., `main` for production, PR branches for previews). (Aligned with `architecture.txt`)
- **Asset Caching Strategy:** Vercel's Edge Network handles CDN caching for static assets and Server Component payloads. Cache-control headers will follow Next.js defaults and can be customized if necessary (e.g., for `public/` assets).
## Frontend Testing Strategy

> This section has been moved to a dedicated document: [Frontend Testing Strategy](./front-end-testing-strategy.md)

## Accessibility (AX) Implementation Details

> This section has been moved to a dedicated document: [Frontend Coding Standards & Accessibility](./front-end-coding-standards.md#accessibility-ax-implementation-details)

## Performance Considerations

The goal is a fast-loading and responsive user experience.

- **Image Optimization:**
  - Use `next/image` for automatic image optimization (resizing, WebP format where supported, lazy loading by default).
- **Code Splitting & Lazy Loading:**
  - The Next.js App Router handles route-based code splitting.
  - Use `next/dynamic` for client-side lazy loading of components that are not immediately visible or are heavy (see the sketch after this list).
- **Minimizing Re-renders (React):**
  - Judicious use of `React.memo` for components that render frequently with the same props.
  - Optimizing Zustand selectors if complex derived state is introduced (though direct access is often sufficient).
  - Ensuring stable prop references where possible.
- **Debouncing/Throttling:** Not anticipated for MVP features, but will be considered for future interactive elements like search inputs.
- **Virtualization:** Not anticipated for MVP given the limited number of items (e.g., 30 newsletters per day). If lists become very long in the future, virtualization libraries like TanStack Virtual will be considered.
- **Caching Strategies (Client-Side):**
  - Leverage Next.js's built-in caching for Server Component payloads and static assets via Vercel's Edge Network.
  - Browser caching for static assets (`public/` folder) will use the optimal default headers set by Vercel.
- **Performance Monitoring Tools:**
  - Browser DevTools (Performance tab, Lighthouse).
  - Vercel Analytics (if enabled) for real-user monitoring.
  - WebPageTest for detailed performance analysis.
- **Bundle Size Analysis:** Use tools like `@next/bundle-analyzer` to inspect production bundles and identify optimization opportunities if bundle sizes become a concern.
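As an example of the component-level lazy loading mentioned above, the podcast player could be deferred on the detail page. This is a sketch, not a mandated pattern; with `ssr: false` it must live in a Client Component file:

```typescript
// In a Client Component (.tsx): defer the heavy, browser-only player bundle.
"use client";

import dynamic from "next/dynamic";

// Loaded only when rendered; skipping SSR suits an audio widget that needs browser APIs.
const PodcastPlayer = dynamic(
  () => import("@/app/components/core/PodcastPlayer"),
  { ssr: false, loading: () => <p>Loading player...</p> }
);
```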
## Change Log

| Date       | Version | Description                                      | Author             |
| :--------- | :------ | :----------------------------------------------- | :----------------- |
| 2025-05-13 | 0.1     | Initial draft of frontend architecture document. | 4-design-arch (AI) |

@@ -0,0 +1,54 @@
# Frontend Coding Standards & Accessibility

> This document is a granulated shard from the main "5-front-end-architecture.md" focusing on "Front-End Coding Standards and Accessibility Best Practices".

## General Coding Standards (from Overall Philosophy & Patterns)

- **Framework & Core Libraries:**
  - Next.js (Latest, App Router)
  - React (19.0.0)
  - TypeScript (5.7.2)
- **Component Architecture Approach:**
  - Shadcn UI for foundational elements.
  - Application-specific components in `app/components/core/`.
  - Prefer Server Components; use Client Components (`"use client"`) only when necessary for interactivity or browser APIs.
- **Data Flow:**
  - Unidirectional: Server Components (data fetching) -> Client Components (props).
  - Mutations/actions: Next.js Server Actions or API Route Handlers, with data revalidation.
  - Supabase client for DB interaction.
- **Key Design Patterns Used:**
  - Server Components & Client Components.
  - React Hooks (and custom hooks).
  - Provider pattern (React Context API) when necessary.
  - Facade pattern (conceptual, for the Supabase client).

## Naming & Organization Conventions (from Component Breakdown & Detailed Structure)

- **Component File Naming:**
  - React component files: `PascalCase.tsx` (e.g., `NewsletterCard.tsx`).
  - Next.js special files (`page.tsx`, `layout.tsx`, etc.): conventional lowercase/kebab-case.
- **Directory Naming:** `kebab-case`.
- **Non-Component TypeScript Files (.ts):** Primarily `camelCase.ts` (e.g., `utils.ts`, `uiSlice.ts`). Config files (`tailwind.config.ts`) and shared type definitions (`api-schemas.ts`) may use `kebab-case`.
- **Component Organization:**
  - Core application components: `app/components/core/`.
  - Layout components: `app/components/layout/`.
  - Shadcn UI components: `components/ui/`.
  - Page-specific components (if complex and not reusable) can be co-located within the page's route directory.

## Accessibility (AX) Implementation Details

> This section is taken directly from "Accessibility (AX) Implementation Details" in `5-front-end-architecture.md`.

The frontend will adhere to **WCAG 2.1 Level A** as a minimum target, as specified in `docs/ui-ux-spec.txt`.

- **Semantic HTML:** Emphasis on using correct HTML5 elements (`<nav>`, `<main>`, `<article>`, `<aside>`, `<button>`, etc.) to provide inherent meaning and structure.
- **ARIA Implementation:**
  - Shadcn UI components are built with accessibility in mind, often including appropriate ARIA attributes.
  - For custom components, relevant ARIA roles (e.g., `role="region"`, `role="alert"`) and attributes (e.g., `aria-label`, `aria-describedby`, `aria-live`, `aria-expanded`) will be used for dynamic content, interactive elements, and custom widgets so that assistive technologies can interpret them correctly (see the sketch after this list).
- **Keyboard Navigation:** All interactive elements (links, buttons, inputs, custom controls) must be focusable and operable using only the keyboard, in a logical order. Focus indicators will be clear and visible.
- **Focus Management:** For dynamic UI elements like modals or non-native dropdowns (if any are built custom, beyond Shadcn's capabilities), focus will be managed programmatically so that it moves to and is trapped within the element as appropriate, and returns to the trigger element upon dismissal.
- **Alternative Text:** All meaningful images will have descriptive `alt` text. Decorative images will have an empty `alt=""`.
- **Color Contrast:** Adherence to WCAG 2.1 Level A color contrast ratios for text and interactive elements against their backgrounds. The "synthwave" theme's purple accents will be chosen carefully to meet these requirements. Tools will be used to verify contrast.
- **Testing Tools for AX:**
  - Automated: Axe DevTools browser extension, Lighthouse accessibility audits.
  - Manual: keyboard-only navigation testing and screen reader testing (e.g., NVDA, VoiceOver) for key user flows.
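To illustrate the ARIA guidance for custom widgets, a fragment of what the player controls might render. The store import path, component name, and attribute choices are assumptions within the WCAG 2.1 A target:

```typescript
// Illustrative only: ARIA usage on a custom control (names and import path are assumptions).
"use client";

import { usePodcastPlayerStore } from "@/store/slices/podcastPlayerSlice";

export function PlayPauseControl() {
  const { isPlaying, play, pause } = usePodcastPlayerStore();
  return (
    <div role="region" aria-label="Podcast player">
      <button
        aria-label={isPlaying ? "Pause podcast" : "Play podcast"}
        aria-pressed={isPlaying}
        onClick={isPlaying ? pause : play}
      >
        {isPlaying ? "Pause" : "Play"}
      </button>
      {/* aria-live region announces playback changes without stealing focus. */}
      <p aria-live="polite" className="sr-only">
        {isPlaying ? "Playing" : "Paused"}
      </p>
    </div>
  );
}
```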
@@ -0,0 +1,77 @@
# Component Breakdown & Implementation Details

> This document is a granulated shard from the main "5-front-end-architecture.md" focusing on "Component Library, Reusable UI Components Guide, Atomic Design Elements, or Component Breakdown & Implementation Details".

This section outlines the conventions and templates for defining UI components. While a few globally shared or foundational components (e.g., main layout structures) might be specified here upfront to ensure consistency, the detailed specification for most feature-specific components will emerge as user stories are implemented. The key is for the development team (or AI agent) to follow the "Template for Component Specification" below whenever a new component is identified for development.

### Component Naming & Organization

- **Component File Naming:**

  - React component files will use `PascalCase.tsx`, for example `NewsletterCard.tsx` and `PodcastPlayer.tsx`.
  - Next.js special files like `page.tsx`, `layout.tsx`, `loading.tsx`, `error.tsx`, `global-error.tsx`, and `not-found.tsx` will use their conventional lowercase or kebab-case names.

- **Component Organization (Reiteration from Directory Structure):**

  - **Application-Specific Core Components:** Reusable components specific to BMad DiCaster (e.g., `NewsletterCard`, `PodcastPlayer`) will reside in `app/components/core/`.
  - **Application-Specific Layout Components:** Components used for structuring page layouts (e.g., `PageWrapper.tsx`) will reside in `app/components/layout/`.
  - **Shadcn UI Components:** Components added via the Shadcn UI CLI will reside in `components/ui/` (e.g., `Button.tsx`, `Card.tsx`).
  - **Page-Specific Components:** If a component is complex but _only_ used on a single page, it can be co-located with that page's route file, for instance in a `components` subfolder within that route's directory. However, the preference is to place reusable components in `app/components/core/` or `app/components/layout/`.

### Template for Component Specification

This template should be used to define and document each significant UI component identified from the UI/UX Specification (`docs/ui-ux-spec.txt`) and any subsequent design iterations. The goal is to provide sufficient detail for a developer or an AI agent to implement the component with minimal ambiguity. Most feature-specific components will be detailed emergently during development, following this template.

---

#### Component: `{ComponentName}` (e.g., `NewsletterCard`, `PodcastPlayerControls`)

- **Purpose:** {Briefly describe what this component does and its primary role in the user interface. What user need does it address?}
- **Source File(s):** {e.g., `app/components/core/NewsletterCard.tsx`}
- **Visual Reference:** {Link to a specific Figma frame/component if available, or a detailed description/sketch if not. If based on a Shadcn UI component, note that and any key customizations.}
- **Props (Properties):**

  {List each prop the component accepts. Specify its name, TypeScript type, whether it's required, any default value, and a clear description.}

  | Prop Name     | Type                                | Required? | Default Value | Description                          |
  | :------------ | :---------------------------------- | :-------- | :------------ | :----------------------------------- |
  | `exampleProp` | `string`                            | Yes       | N/A           | Example string prop.                 |
  | `items`       | `Array<{id: string, name: string}>` | Yes       | N/A           | An array of item objects to display. |
  | `variant`     | `'primary' \| 'secondary'`          | No        | `'primary'`   | Visual variant of the component.     |
  | `onClick`     | `(event: React.MouseEvent) => void` | No        | N/A           | Optional click handler.              |

- **Internal State (if any):**

  {Describe any significant internal state the component manages using React hooks (e.g., `useState`).}

  | State Variable | Type             | Initial Value | Description                                   |
  | :------------- | :--------------- | :------------ | :-------------------------------------------- |
  | `isLoading`    | `boolean`        | `false`       | Tracks whether data for the component is loading. |
  | `selectedItem` | `string \| null` | `null`        | Stores the ID of the currently selected item. |

- **Key UI Elements / Structure (Conceptual):**

  {Describe the main visual parts of the component and their general layout. Reference Shadcn UI components if used as building blocks.}

  ```jsx
  // Example for a Card component
  <Card>
    <CardHeader>
      <CardTitle>{"{{titleProp}}"}</CardTitle>
      <CardDescription>{"{{descriptionProp}}"}</CardDescription>
    </CardHeader>
    <CardContent>{/* {list of items or main content} */}</CardContent>
    <CardFooter>{/* {action buttons or footer content} */}</CardFooter>
  </Card>
  ```

- **Events Handled / Emitted:**
  - **Handles:** {List significant DOM events the component handles directly.}
  - **Emits (Callbacks):** {If the component uses props to emit events (callbacks) to its parent, list them here.}
- **Actions Triggered (Side Effects):**
  - **State Management (Zustand):** {If the component interacts with a Zustand store, specify which store and which actions.}
  - **API Calls / Data Fetching:** {Specify how Client Components trigger mutations or re-fetches (e.g., Server Actions).}
- **Styling Notes:**
  - {Reference to specific Shadcn UI components used.}
  - {Key Tailwind CSS classes or custom styles for the "synthwave" theme.}
  - {Specific responsiveness behavior.}
- **Accessibility (AX) Notes:**
  - {Specific ARIA attributes needed.}
  - {Keyboard navigation considerations.}
  - {Focus management details.}
  - {Notes on color contrast.}

---

_This template will be applied to each new significant component during the development process._
@@ -0,0 +1,81 @@
# Detailed Frontend Directory Structure

> This document is a granulated shard from the main "5-front-end-architecture.md" focusing on "Detailed Frontend Directory Structure".

The BMad DiCaster frontend will adhere to the Next.js App Router conventions and build upon the structure provided by the Vercel/Supabase Next.js App Router template. The monorepo structure defined in the main Architecture Document (`docs/architecture.md`) already outlines the top-level directories. This section details the frontend-specific organization.

**Naming Conventions Adopted:**

- **Directories:** `kebab-case` (e.g., `app/(web)/newsletter-list/`, `app/components/core/`)
- **React Component Files (.tsx):** `PascalCase.tsx` (e.g., `NewsletterCard.tsx`, `PodcastPlayer.tsx`). Next.js App Router special files (e.g., `page.tsx`, `layout.tsx`, `loading.tsx`, `global-error.tsx`, `not-found.tsx`) retain their conventional lowercase or kebab-case names.
- **Non-Component TypeScript Files (.ts):** Primarily `camelCase.ts` (e.g., `utils.ts`, `uiSlice.ts`). Configuration files (e.g., `tailwind.config.ts`) and shared type definition files (e.g., `api-schemas.ts`, `domain-models.ts`) may retain `kebab-case` as per common practice or previous agreement.

```plaintext
{project-root}/
├── app/                              # Next.js App Router (frontend pages, layouts, API routes)
│   ├── (web)/                        # Group for user-facing web pages
│   │   ├── newsletters/              # Route group for newsletter features
│   │   │   ├── [newsletterId]/       # Dynamic route for individual newsletter detail
│   │   │   │   ├── page.tsx          # Newsletter Detail Page component
│   │   │   │   └── loading.tsx       # Optional: Loading UI for this route
│   │   │   ├── page.tsx              # Newsletter List Page component
│   │   │   └── layout.tsx            # Optional: Layout specific to /newsletters routes
│   │   ├── layout.tsx                # Root layout for all (web) pages
│   │   └── page.tsx                  # Homepage (displays newsletter list)
│   ├── (api)/                        # API route handlers (as defined in the main architecture)
│   │   ├── system/
│   │   │   └── ...
│   │   └── webhooks/
│   │       └── ...
│   ├── components/                   # Application-specific UI React components (core logic)
│   │   ├── core/                     # Core, reusable application components
│   │   │   ├── NewsletterCard.tsx
│   │   │   ├── PodcastPlayer.tsx
│   │   │   ├── DownloadButton.tsx
│   │   │   └── BackButton.tsx
│   │   └── layout/                   # General layout components
│   │       └── PageWrapper.tsx       # Consistent padding/max-width for pages
│   ├── auth/                         # Auth-related pages and components (from template; MVP frontend is public)
│   ├── login/page.tsx                # Login page (from template; MVP frontend is public)
│   ├── global-error.tsx              # Optional: Custom global error UI (Next.js special file)
│   └── not-found.tsx                 # Optional: Custom 404 page UI (Next.js special file)
├── components/                       # Shadcn UI components root (as configured by components.json)
│   └── ui/                           # Base UI elements from Shadcn (e.g., Button.tsx, Card.tsx)
├── lib/                              # General utility functions for the frontend
│   ├── utils.ts                      # General utility functions (date formatting, etc.)
│   └── hooks/                        # Custom global React hooks
│       └── useScreenWidth.ts         # Example custom hook
├── store/                            # Zustand state management
│   ├── index.ts                      # Main store setup/export (can be store.ts or index.ts)
│   └── slices/                       # Individual state slices
│       └── podcastPlayerSlice.ts     # State for the podcast player
├── public/                           # Static assets (images, favicon, etc.)
│   └── logo.svg                      # Application logo (to be provided)
├── shared/                           # Shared code/types between frontend and Supabase functions
│   └── types/
│       ├── api-schemas.ts            # Zod schemas for API req/res
│       └── domain-models.ts          # Core entity types (HNPost, Newsletter, etc. from main arch)
├── styles/                           # Global styles
│   └── globals.css                   # Tailwind base styles, custom global styles
├── utils/                            # Root utilities (from template)
│   └── supabase/                     # Supabase helper functions FOR FRONTEND (from template)
│       ├── client.ts                 # Client-side Supabase client
│       ├── middleware.ts             # Logic for Next.js middleware (Supabase auth)
│       └── server.ts                 # Server-side Supabase client
├── tailwind.config.ts                # Tailwind CSS configuration
└── tsconfig.json                     # TypeScript configuration (includes path aliases like @/*)
```
### Notes on Frontend Structure:

- **`app/(web)/`**: Route group for user-facing pages.
  - **`newsletters/page.tsx`**: Server Component for listing newsletters.
  - **`newsletters/[newsletterId]/page.tsx`**: Server Component for displaying a single newsletter.
- **`app/components/core/`**: Houses application-specific React components like `NewsletterCard.tsx`, `PodcastPlayer.tsx`, `DownloadButton.tsx`, and `BackButton.tsx` (identified in `ux-ui-spec.txt`). Components follow `PascalCase.tsx`.
- **`app/components/layout/`**: For structural layout components, e.g., `PageWrapper.tsx`. Components follow `PascalCase.tsx`.
- **`components/ui/`**: Standard directory for Shadcn UI components (e.g., `Button.tsx`, `Card.tsx`).
- **`lib/hooks/`**: Custom React hooks (e.g., `useScreenWidth.ts`); files follow `camelCase.ts`.
- **`store/slices/`**: Zustand state slices; `podcastPlayerSlice.ts` holds the podcast player state. Files follow `camelCase.ts`.
- **`shared/types/`**: Type definitions. `api-schemas.ts` and `domain-models.ts` use `kebab-case.ts`.
- **`utils/supabase/`**: Template-provided Supabase clients. Files follow `camelCase.ts`.
- **Path Aliases**: `tsconfig.json` uses `@/*` aliases.
@@ -0,0 +1,24 @@
# Routing Strategy

> This document is a granulated shard from the main "5-front-end-architecture.md" focusing on "Routing Strategy".

Navigation and routing will be handled by the Next.js App Router.

- **Routing Library:** **Next.js App Router** (as per `architecture.txt`)

### Route Definitions

Based on `ux-ui-spec.txt` and the PRD.

| Path Pattern                  | Component/Page (`app/(web)/...`)      | Protection | Notes                                                                         |
| :---------------------------- | :------------------------------------ | :--------- | :---------------------------------------------------------------------------- |
| `/`                           | `newsletters/page.tsx` (effectively)  | Public     | Homepage displays the newsletter list.                                        |
| `/newsletters`                | `newsletters/page.tsx`                | Public     | Displays a list of current and past newsletters.                              |
| `/newsletters/[newsletterId]` | `newsletters/[newsletterId]/page.tsx` | Public     | Displays the detail page for a selected newsletter. `newsletterId` is a UUID. |

_(Note: The main architecture document shows an `app/page.tsx` for the homepage. For MVP, this can either redirect to `/newsletters`, as sketched below, or directly render the newsletter list content. The table above assumes it effectively serves the newsletter list.)_
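If the redirect option in the note above is chosen, the homepage reduces to a few lines using Next.js's `redirect` helper:

```typescript
// app/(web)/page.tsx - one possible MVP approach per the note above.
import { redirect } from "next/navigation";

export default function HomePage() {
  // Send visitors straight to the newsletter list.
  redirect("/newsletters");
}
```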
### Route Guards / Protection

- **Authentication Guard:** The MVP frontend is public-facing, displaying newsletters and podcasts without user login. The Vercel/Supabase template includes middleware (`middleware.ts`) for protecting routes based on Supabase Auth. This will be relevant for any future admin sections but is not actively used to gate content for general users in the MVP.
- **Authorization Guard:** Not applicable for MVP.
@@ -0,0 +1,121 @@
# State Management In-Depth

> This document is a granulated shard from the main "5-front-end-architecture.md" focusing on "State Management In-Depth".

This section expands on the state management strategy chosen (Zustand) and outlined in the "Overall Frontend Philosophy & Patterns".

- **Chosen Solution:** **Zustand** (latest version, as per `architecture.txt`)
- **Rationale:** Zustand was chosen for its simplicity, small bundle size, and unopinionated nature, which suit BMad DiCaster's relatively simple frontend state needs (e.g., podcast player status). Server-side data is primarily managed by Next.js Server Components.

### Store Structure / Slices

Global client-side state will be organized into distinct "slices" within `store/slices/`. Components can import and use individual stores directly.

- **Conventions:**
  - Each slice lives in its own file: `store/slices/camelCaseSlice.ts`.
  - Define the state interface, the initial state, and action functions.
- **Core Slice: `podcastPlayerSlice.ts`** (for MVP)

  - **Purpose:** Manages the state of the podcast player (current track, playback status, time, volume).
  - **Source File:** `store/slices/podcastPlayerSlice.ts`
  - **State Shape (Example):**

    ```typescript
    interface PodcastTrack {
      id: string; // Could be newsletterId or a specific audio ID
      title: string;
      audioUrl: string;
      duration?: number; // in seconds
    }

    interface PodcastPlayerState {
      currentTrack: PodcastTrack | null;
      isPlaying: boolean;
      currentTime: number; // in seconds
      volume: number; // 0 to 1
      isLoading: boolean;
      error: string | null;
    }

    interface PodcastPlayerActions {
      loadTrack: (track: PodcastTrack) => void;
      play: () => void;
      pause: () => void;
      setCurrentTime: (time: number) => void;
      setVolume: (volume: number) => void;
      setError: (message: string | null) => void;
      resetPlayer: () => void;
    }
    ```

  - **Key Actions:** `loadTrack`, `play`, `pause`, `setCurrentTime`, `setVolume`, `setError`, `resetPlayer`.
  - **Zustand Store Definition:**

    ```typescript
    import { create } from "zustand";

    // Previously defined interfaces: PodcastTrack, PodcastPlayerState, PodcastPlayerActions

    const initialPodcastPlayerState: PodcastPlayerState = {
      currentTrack: null,
      isPlaying: false,
      currentTime: 0,
      volume: 0.75,
      isLoading: false,
      error: null,
    };

    export const usePodcastPlayerStore = create<
      PodcastPlayerState & PodcastPlayerActions
    >((set) => ({
      ...initialPodcastPlayerState,
      loadTrack: (track) =>
        set({
          currentTrack: track,
          isLoading: true, // Assume loading until the actual audio element confirms
          error: null,
          isPlaying: false, // Usually don't autoplay on load
          currentTime: 0,
        }),
      play: () =>
        set((state) => {
          if (!state.currentTrack) return {}; // No track loaded
          return { isPlaying: true, isLoading: false, error: null };
        }),
      pause: () => set({ isPlaying: false }),
      setCurrentTime: (time) => set({ currentTime: time }),
      setVolume: (volume) => set({ volume: Math.max(0, Math.min(1, volume)) }),
      setError: (message) =>
        set({ error: message, isLoading: false, isPlaying: false }),
      resetPlayer: () => set({ ...initialPodcastPlayerState }),
    }));
    ```

### Key Selectors

Selectors are functions that derive data from the store state. With Zustand, state is typically accessed directly from the hook, but memoized selectors can be created with libraries like `reselect` if complex derived data is needed; for simple cases, direct access is fine. Both access styles are sketched after this list.

- **Convention:** For direct state access, components will use: `const { currentTrack, isPlaying, play } = usePodcastPlayerStore();`
- **Example Selectors (if using `reselect` or similar, for more complex derivations later):**
  - `selectCurrentTrackTitle`: Returns `state.currentTrack?.title || 'No track loaded'`.
  - `selectIsPodcastPlaying`: Returns `state.isPlaying`.
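Zustand also accepts an inline selector function, which scopes re-renders to the slice of state a component actually reads. A short sketch of both access styles against the store defined above:

```typescript
// Direct access: the component re-renders on changes to any destructured field.
const { currentTrack, isPlaying, play } = usePodcastPlayerStore();

// Inline selector: the component re-renders only when the derived value changes.
const trackTitle = usePodcastPlayerStore(
  (state) => state.currentTrack?.title ?? "No track loaded"
);
```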
### Key Actions / Reducers / Thunks

Zustand actions are functions defined within the `create` call that use `set` to update state. Asynchronous operations (such as fetching data, though this is less common for Zustand, which is often used for UI state) can be handled by calling async functions within these actions and then calling `set` upon completion.

- **Convention:** Actions are part of the store hook: `const { loadTrack } = usePodcastPlayerStore();`.
- **Asynchronous Example (conceptual; if a slice needed to fetch data):**

  ```typescript
  // In a hypothetical userSettingsSlice.ts; `api.fetchUserSettings` is an
  // illustrative helper, not an existing module.
  fetchUserSettings: async () => {
    set({ isLoading: true });
    try {
      const settings = await api.fetchUserSettings();
      set({ userSettings: settings, isLoading: false });
    } catch (error) {
      set({ error: "Failed to fetch settings", isLoading: false });
    }
  },
  ```

For the BMad DiCaster MVP, most data fetching happens via Server Components. Client-side async actions in Zustand would primarily serve client-specific operations not directly tied to server data fetching.
@@ -0,0 +1,31 @@
# Frontend Style Guide

> This document is a granulated shard from the main "5-front-end-architecture.md" focusing on "UI Style Guide, Brand Guidelines, Visual Design Specifications, or Styling Approach".

The frontend for BMad DiCaster will be built using modern, efficient, and maintainable practices, leveraging the Vercel/Supabase Next.js App Router template as a starting point. The core philosophy is to create a responsive, fast-loading, and accessible user interface that aligns with the "synthwave technical glowing purple vibes" aesthetic.

- **Framework & Core Libraries relevant to Styling:**

  - **Next.js (Latest, e.g., 14.x.x, App Router):** Chosen for its robust full-stack capabilities and seamless integration with Vercel for deployment.
  - **React (19.0.0):** The underlying UI library for Next.js.
  - **TypeScript (5.7.2):** For strong typing and improved code quality.

- **Component Architecture relevant to Styling:**

  - **Shadcn UI (Latest):** This collection of reusable UI components, built on Radix UI and Tailwind CSS, will be used for foundational elements like buttons, cards, and dialogs.
  - **Application-Specific Components:** Custom components will be developed for unique UI parts.

- **Styling Approach:**

  - **Tailwind CSS (3.4.17):** A utility-first CSS framework for rapid UI development and consistent styling. It will be used for all styling, including achieving the "synthwave technical glowing purple vibes."
  - **Shadcn UI:** Leverages Tailwind CSS for its components.
  - **Global Styles:** `app/globals.css` will be used for base Tailwind directives and any genuinely global style definitions.
  - **Theme Customization:** `tailwind.config.ts` will be used to extend Tailwind's default theme with custom colors (e.g., synthwave purples like `#800080` as an accent), fonts, or spacing as needed to achieve the desired aesthetic (a sketch follows this list). The "synthwave technical glowing purple vibes" will be achieved through a dark base theme, with purple accents for interactive elements and highlights, and potentially subtle text shadows or glows on specific headings or decorative elements. Font choices will lean towards modern, clean sans-serifs as specified in `ux-ui-spec.txt`, potentially with a more stylized font for major headings if it fits the theme without compromising readability.
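A sketch of the kind of theme extension described above. Only `#800080` comes from the spec; the token names, the second hex value, and the glow shadow are assumptions for illustration:

```typescript
// tailwind.config.ts - illustrative theme extension only.
import type { Config } from "tailwindcss";

const config: Config = {
  content: ["./app/**/*.{ts,tsx}", "./components/**/*.{ts,tsx}"],
  theme: {
    extend: {
      colors: {
        // Synthwave purples; only #800080 is from the spec, the rest are assumptions.
        synthwave: {
          DEFAULT: "#800080",
          glow: "#b026ff",
        },
      },
      boxShadow: {
        // Subtle glow for interactive elements and headings.
        glow: "0 0 12px rgba(176, 38, 255, 0.6)",
      },
    },
  },
  plugins: [],
};

export default config;
```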
- **Visual Design Specifications (derived from the UI/UX Spec and Architecture):**
  - **Aesthetic:** "Synthwave technical glowing purple vibes."
  - **Layout:** Minimalist and clean, focusing on content readability.
  - **Color Palette:** Dark base theme with purple accents (e.g., `#800080`). Ensure high contrast for accessibility.
  - **Typography:** Modern, clean sans-serif fonts. A stylized font for major headings if it fits the theme and maintains readability.
  - **Iconography:** To be determined; likely a standard library such as Heroicons or Phosphor Icons, integrated as SVG or via Shadcn UI where applicable.
  - **Responsiveness:** The UI must be responsive and adapt to various screen sizes (desktop, tablet, mobile).
@@ -0,0 +1,44 @@
# Frontend Testing Strategy

> This document is a granulated shard from the main "5-front-end-architecture.md" focusing on "Frontend Testing Strategy".

This section elaborates on the overall testing strategy defined in `architecture.txt`, focusing on frontend specifics.

- **Link to Main Testing Strategy:** `docs/architecture.md#overall-testing-strategy` (and `docs/architecture.md#coding-standards` for test file co-location).

### Component Testing

- **Scope:** Testing individual React components in isolation, focusing primarily on UI rendering based on props and basic interactions.
- **Tools:** **Jest** (test runner, assertion library, mocking) and **React Testing Library (RTL)** (for user-centric component querying and interaction).
- **Focus:**
  - Correct rendering based on props.
  - User interactions (e.g., button clicks triggering callbacks).
  - Conditional rendering logic.
  - Accessibility attributes.
- **Location:** Test files (`*.test.tsx` or `*.spec.tsx`) will be co-located with the component files (e.g., `app/components/core/NewsletterCard.test.tsx`).
- **Example Guideline:** "A `NewsletterCard` component should render the title and date passed as props. Clicking the card should navigate (mocked) or call an `onClick` prop." A test sketch follows.
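The guideline above might translate into a test like this. `NewsletterCard`'s props (`title`, `targetDate`, `onClick`) are assumed for illustration, and the assertions rely on `@testing-library/jest-dom` being set up:

```typescript
// app/components/core/NewsletterCard.test.tsx - illustrative; props are assumptions.
import { render, screen, fireEvent } from "@testing-library/react";
import "@testing-library/jest-dom";
import NewsletterCard from "./NewsletterCard";

test("renders title and date, and calls onClick when clicked", () => {
  const onClick = jest.fn();
  render(
    <NewsletterCard title="Daily HN Digest" targetDate="2025-05-13" onClick={onClick} />
  );

  // Rendering based on props.
  expect(screen.getByText("Daily HN Digest")).toBeInTheDocument();
  expect(screen.getByText("2025-05-13")).toBeInTheDocument();

  // Interaction: clicking the card invokes the callback.
  fireEvent.click(screen.getByText("Daily HN Digest"));
  expect(onClick).toHaveBeenCalledTimes(1);
});
```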
### UI Integration/Flow Testing

- **Scope:** Testing interactions between multiple components that compose a piece of UI or a small user flow, potentially with mocked Supabase client responses or Zustand store states.
- **Tools:** Jest and React Testing Library.
- **Focus:**
  - Data flow between a parent and its child components.
  - State updates in a Zustand store affecting multiple components.
  - Rendering of a sequence of UI elements in a simple flow (e.g., selecting an item from a list and seeing details appear).
- **Example Guideline:** "The `NewsletterListPage` should correctly render multiple `NewsletterCard` components when provided with mock newsletter data. Clicking a card should correctly invoke the navigation logic."

### End-to-End UI Testing Tools & Scope

- **Tools:** **Playwright**.
- **Scope (Frontend Focus):**
  - Verify the "Viewing a Newsletter" user flow:
    1. Navigate to the newsletter list page.
    2. Verify newsletters are listed.
    3. Click on a newsletter.
    4. Verify the newsletter detail page loads with content.
    5. Verify the podcast player is present if a podcast URL exists.
    6. Verify the download button is present.
    7. Verify the "Back to List" button works.
  - Basic mobile responsiveness checks for key pages (list and detail).
- **Test Data Management for UI:** E2E tests will rely on data populated in the development Supabase instance, or use mocked API responses when targeting isolated frontend tests with Playwright's network interception. For true E2E runs against a live dev environment, pre-seeded data in the Supabase dev instance will be used.
@@ -0,0 +1,37 @@
# Index

## PRD Epics and Stories

- [Product Requirements Document (PRD)](./prd.md) - The main PRD document, linking to individual epics.
- [Epic 1: Project Initialization, Setup, and HN Content Acquisition](./epic-1.md)
- [Epic 2: Article Scraping](./epic-2.md)
- [Epic 3: AI-Powered Content Summarization](./epic-3.md)
- [Epic 4: Automated Newsletter Creation and Distribution](./epic-4.md)
- [Epic 5: Podcast Generation Integration](./epic-5.md)
- [Epic 6: Web Interface for Initial Structure and Content Access](./epic-6.md)

## Architecture Documents

- [System Architecture Document](./architecture.md) - The main system architecture document, linking to detailed shards.
- [API Reference](./api-reference.md) - Details on external and internal APIs.
- [Component View](./component-view.md) - Logical components and architectural patterns.
- [Data Models](./data-models.md) - Core application entities and database schemas.
- [Environment Variables Documentation](./environment-vars.md) - Placeholder for consolidated environment variable information.
- [Infrastructure and Deployment Overview](./infra-deployment.md) - Cloud providers, core services, and deployment strategy.
- [Key Reference Documents](./key-references.md) - List of key documents referenced in the architecture.
- [Operational Guidelines](./operational-guidelines.md) - Consolidated guidelines for error handling, coding standards, testing, and security.
- [Project Structure](./project-structure.md) - Monorepo organization and key directory descriptions.
- [Sequence Diagrams](./sequence-diagrams.md) - Core workflow and sequence diagrams.
- [Technology Stack](./tech-stack.md) - Definitive technology selections for the project.

### Frontend Specific Architecture Documents

- [Frontend Architecture Document](./front-end-architecture.md) - The main frontend architecture document, linking to detailed shards.
- [Frontend Project Structure](./front-end-project-structure.md) - Detailed frontend directory structure and naming conventions.
- [Frontend Style Guide](./front-end-style-guide.md) - Styling approach, theme customization, and visual design specifications.
- [Frontend Component Guide](./front-end-component-guide.md) - Component naming, organization, and a template for component specification.
- [Frontend Coding Standards & Accessibility](./front-end-coding-standards.md) - Frontend-specific coding standards and accessibility (AX) implementation details.
- [Frontend State Management](./front-end-state-management.md) - In-depth details of the Zustand store structure, slices, selectors, and actions.
- [Frontend API Interaction Layer](./front-end-api-interaction.md) - Client/service structure for API interactions and frontend error handling.
- [Frontend Routing Strategy](./front-end-routing-strategy.md) - Route definitions and protection mechanisms.
- [Frontend Testing Strategy](./front-end-testing-strategy.md) - Component, UI integration, and end-to-end testing strategies for the frontend.
@@ -0,0 +1 @@
# Infrastructure and Deployment Overview

> This document is a granulated shard from the main "3-architecture.md" focusing on "Infrastructure and Deployment Overview".

- **Cloud Provider(s):**
  - **Vercel:** For hosting the Next.js frontend application, Next.js API routes (including the Play.ht webhook receiver and the workflow trigger API), and Supabase Functions (Edge/Serverless Functions deployed via the Supabase CLI and Vercel integration).
  - **Supabase:** Provides the managed PostgreSQL database, authentication, storage, and an environment for deploying backend functions. Supabase itself runs on underlying cloud infrastructure (e.g., AWS).
- **Core Services Used:**
  - **Vercel:** Next.js hosting (SSR, SSG, ISR, Edge runtime), Serverless Functions (for Next.js API routes), Edge Functions (for Next.js middleware and potentially some API routes), Global CDN, CI/CD (via GitHub integration), environment variables management, Vercel Cron Jobs (for scheduled triggering of the `/api/system/trigger-workflow` endpoint).
  - **Supabase:** PostgreSQL database, Supabase Auth, Supabase Storage (for temporary file hosting if needed for Play.ht, or other static assets), Supabase Functions (backend logic for the event-driven pipeline, deployed via the Supabase CLI; runs on Vercel infrastructure), Database Webhooks (using `pg_net` or built-in functionality to trigger Supabase/Vercel functions), Supabase CLI (for local development, migrations, and function deployment).
- **Infrastructure as Code (IaC):**
  - **Supabase Migrations:** SQL migration files in `supabase/migrations/` define the database schema and are managed by the Supabase CLI. This is the primary IaC for the database.
  - **Vercel Configuration:** `vercel.json` (if needed for custom configurations beyond what the Vercel dashboard and Next.js provide) and project settings via the Vercel dashboard.
  - No explicit IaC for Vercel services beyond its declarative nature and Next.js conventions is anticipated for MVP.
- **Deployment Strategy:**
  - **Source Control:** GitHub will be used for version control.
  - **CI/CD Tool:** GitHub Actions (as defined in `/.github/workflows/main.yml`).
  - **Frontend (Next.js app on Vercel):** Continuous deployment triggered by pushes/merges to the main branch. Preview deployments are automatically created for pull requests.
  - **Backend (Supabase Functions):** Deployed via Supabase CLI commands (e.g., `supabase functions deploy <function_name> --project-ref <your-project-ref>`), run as part of the GitHub Actions workflow.
  - **Database Migrations (Supabase):** Applied via a CI/CD step using `supabase migration up --linked` or the Supabase CLI against the remote DB.
- **Environments:**
  - **Local Development:** Next.js local dev server (`next dev`), local Supabase stack (`supabase start`), local `.env.local`.
  - **Development/Preview (on Vercel):** Auto-deployed per PR/dev branch push, connected to a **Development Supabase instance**.
  - **Production (on Vercel):** Deployed from the main branch, connected to a **Production Supabase instance**.
- **Environment Promotion:** Local -> Dev/Preview (PR) -> Production (merge to main).
- **Rollback Strategy:** Vercel dashboard/CLI for app/function rollbacks; Supabase migrations (revert migration) or Point-in-Time Recovery for the database.
@@ -0,0 +1,17 @@
# Key Reference Documents

> This document is a granulated shard from the main "3-architecture.md" focusing on "Key Reference Documents".

1. **Product Requirements Document (PRD):** `docs/prd-incremental-full-agile-mode.txt`
2. **UI/UX Specification:** `docs/ui-ux-spec.txt`
3. **Technical Preferences:** `docs/technical-preferences copy.txt`
4. **Environment Variables Documentation:** `docs/environment-vars.md` (To be created)
5. **(Optional) Frontend Architecture Document:** `docs/frontend-architecture.md` (To be created by Design Architect)
6. **Play.ht API Documentation:** [https://docs.play.ai/api-reference/playnote/post](https://docs.play.ai/api-reference/playnote/post)
7. **Hacker News Algolia API:** [https://hn.algolia.com/api](https://hn.algolia.com/api)
8. **Ollama API Documentation:** [https://github.com/ollama/ollama/blob/main/docs/api.md](https://github.com/ollama/ollama/blob/main/docs/api.md)
9. **Supabase Documentation:** [https://supabase.com/docs](https://supabase.com/docs)
10. **Next.js Documentation:** [https://nextjs.org/docs](https://nextjs.org/docs)
11. **Vercel Documentation:** [https://vercel.com/docs](https://vercel.com/docs)
12. **Pino Logging Documentation:** [https://getpino.io/](https://getpino.io/)
13. **Zod Documentation:** [https://zod.dev/](https://zod.dev/)
@@ -0,0 +1,122 @@
# Operational Guidelines

> This document is a granulated shard from the main "3-architecture.md" focusing on "Operational Guidelines (Coding Standards, Testing, Error Handling, Security)".

### Error Handling Strategy

A robust error handling strategy is essential for the reliability of the BMad DiCaster pipeline. This involves consistent error logging, appropriate retry mechanisms, and clear error propagation. The `workflow_runs` table will be a central piece in tracking errors for entire workflow executions.

- **General Approach:**
  - Standard JavaScript `Error` objects (or custom extensions of `Error`) will be used for exceptions within TypeScript code.
  - Each Supabase Function in the pipeline will catch its own errors, log them using Pino, update the `workflow_runs` table with an error status/message (via `WorkflowTrackerService`), and prevent unhandled promise rejections.
  - Next.js API routes will catch errors, log them, and return appropriate HTTP error responses (e.g., 4xx, 500) with a JSON error payload.
- **Logging (Pino):**
  - **Library/Method:** Pino (`pino`) is the standard logging library for Supabase Functions and Next.js API routes.
  - **Configuration:** A shared Pino logger instance (e.g., `supabase/functions/_shared/logger.ts`) will be configured for JSON output, ISO timestamps, and environment-aware pretty-printing for development.

    ```typescript
    // Example: supabase/functions/_shared/logger.ts
    import pino from "pino";

    export const logger = pino({
      level: process.env.LOG_LEVEL || "info",
      formatters: { level: (label) => ({ level: label }) },
      timestamp: pino.stdTimeFunctions.isoTime,
      ...(process.env.NODE_ENV === "development" && {
        transport: {
          target: "pino-pretty",
          options: {
            colorize: true,
            translateTime: "SYS:standard",
            ignore: "pid,hostname",
          },
        },
      }),
    });
    ```

  - **Format:** Structured JSON.
  - **Levels:** `trace`, `debug`, `info`, `warn`, `error`, `fatal`.
  - **Context:** Logs must include `timestamp`, `severity`, `workflowRunId` (where applicable), `service` or `functionName`, a clear `message`, and relevant `details` (sanitized). **Sensitive data must NEVER be logged.** Pass error objects directly to Pino: `logger.error({ err: errorInstance, workflowRunId }, "Operation failed");`.
- **Specific Handling Patterns:**
  - **External API Calls (HN Algolia, Play.ht, LLM Provider):**
    - **Facades:** Calls are made through dedicated facades in `supabase/functions/_shared/`.
    - **Timeouts:** Implement reasonable connect and read timeouts.
    - **Retries:** Facades implement limited retries (2-3) with exponential backoff for transient errors (network issues, 5xx errors). A sketch of this pattern appears after this list.
    - **Error Propagation:** Facades catch, log, and throw standardized custom errors (e.g., `ExternalApiError`) containing contextual information.
  - **Internal Errors / Business Logic Exceptions (Supabase Functions):**
    - Use `try...catch`. Critical errors preventing task completion for a `workflow_run_id` must: 1. Log the detailed error (Pino). 2. Call `WorkflowTrackerService.failWorkflow(...)`.
    - Next.js API routes return generic JSON errors (e.g., `{"error": "Internal server error"}`) and appropriate HTTP status codes.
  - **Database Operations (Supabase):** Critical errors are treated as internal errors (log, update `workflow_runs` to 'failed').
  - **Scraping/Summarization/Podcast/Delivery Failures:** Individual item failures are logged and their status updated (e.g., `scraped_articles.scraping_status`). The overall workflow may continue with available data, with partial success noted in `workflow_runs.details`. Systemic failures lead to `workflow_runs.status = 'failed'`.
  - **`CheckWorkflowCompletionService`:** Must be resilient. Errors processing one `workflow_run_id` should be logged but must not prevent processing of other runs or subsequent scheduled invocations.
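
As a concrete illustration of the facade retry pattern described above, here is a minimal sketch. It is not the project's actual facade code: the helper name, retry counts, and delays are illustrative, and `ExternalApiError` follows the error-propagation bullet.

```typescript
// Minimal sketch of the facade retry pattern (illustrative, not the actual facade code).
import { logger } from "./logger";

export class ExternalApiError extends Error {
  constructor(message: string, public readonly service: string, cause?: unknown) {
    super(message, { cause });
    this.name = "ExternalApiError";
  }
}

// Retries a call with exponential backoff, then wraps the final failure.
// A real facade would inspect the error and retry only transient failures
// (network errors, 5xx responses), not every exception.
export async function withRetries<T>(
  service: string,
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      logger.warn({ err, service, attempt }, "External API call failed; will retry if attempts remain");
      if (attempt < maxRetries) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw new ExternalApiError(
    `${service} call failed after ${maxRetries + 1} attempts`,
    service,
    lastError
  );
}
```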
### Coding Standards

These standards are mandatory for all code generation by AI agents and human developers.

- **Primary Language & Runtime:** TypeScript `5.7.2`, Node.js `22.10.2`.
- **Style Guide & Linter:** ESLint (configured with Next.js defaults and TypeScript support) and Prettier (`3.3.3`). Configurations live in the repository root. Linting/formatting are mandatory.
- **Naming Conventions:**
  - Variables & Functions/Methods: `camelCase`
  - Classes/Types/Interfaces: `PascalCase`
  - Constants: `UPPER_SNAKE_CASE`
  - Files (.ts, .tsx): `kebab-case` (e.g., `newsletter-card.tsx`)
  - Supabase function directories: `kebab-case` (e.g., `hn-content-service`)
- **File Structure:** Adhere to "Project Structure." Unit tests (`*.test.ts(x)`/`*.spec.ts(x)`) are co-located with source files.
- **Asynchronous Operations:** Always use `async`/`await` for Promises; ensure rejections are properly handled.
- **Type Safety (TypeScript):** Adhere to `tsconfig.json` (`"strict": true`). Avoid `any`; use `unknown` with type narrowing. Shared types live in `shared/types/`.
- **Comments & Documentation:** Explain _why_, not _what_. Use TSDoc for exported members. READMEs for modules/services.
- **Dependency Management:** Use `npm`. Vet new dependencies. Pin versions or use `^` for non-breaking updates. Resolve `latest` tags to specific versions upon setup.
- **Environment Variables:** Manage configuration via environment variables (`.env.example` provided). Use Zod for runtime parsing/validation (see the sketch after this list).
- **Modularity & Reusability:** Break down complexity. Use shared utilities/facades.
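
A minimal sketch of the Zod-based environment validation mentioned above might look like the following. Apart from `LOG_LEVEL` (used in the logger example) and `HN_POST_LIMIT_FOR_SCRAPING` (from Epic 2), the variable names are assumptions.

```typescript
// Sketch: supabase/functions/_shared/env.ts (illustrative; most names are assumptions).
import { z } from "zod";

const envSchema = z.object({
  LOG_LEVEL: z.enum(["trace", "debug", "info", "warn", "error", "fatal"]).default("info"),
  SUPABASE_URL: z.string().url(),               // assumed variable name
  SUPABASE_SERVICE_ROLE_KEY: z.string().min(1), // assumed variable name
  HN_POST_LIMIT_FOR_SCRAPING: z.coerce.number().int().positive().default(10),
});

// Fails fast at startup with a readable error if anything is missing or malformed.
export const env = envSchema.parse(process.env);
```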
#### Detailed Language & Framework Conventions

##### TypeScript/Node.js (Next.js & Supabase Functions) Specifics:

- **Immutability:** Prefer immutable data structures (e.g., `Readonly<T>`, `as const`). Follow Zustand patterns for immutable state updates in React.
- **Functional vs. OOP:** Favor functional constructs for data transformation/utilities. Use classes for services/facades managing state, or as the framework dictates (e.g., React functional components with Hooks preferred).
- **Error Handling Specifics:** `throw new Error('...')` or custom error classes. Ensure `Promise` rejections are `Error` objects.
- **Null/Undefined Handling:** With `strictNullChecks`, handle explicitly. Avoid the `!` non-null assertion; prefer explicit checks, `?.`, and `??`.
- **Module System:** Use ES Modules (`import`/`export`) exclusively.
- **Logging Specifics (Pino):** Use the shared Pino logger. Include a context object (`logger.info({ context }, "message")`), especially `workflowRunId`.
- **Next.js Conventions:** Follow App Router conventions. Use Server Components for data fetching where appropriate. Route Handlers for API endpoints.
- **Supabase Function Conventions:** `index.ts` as the entry point. Keep functions self-contained or use `_shared/` utilities. Initialize clients securely (admin vs. user). A skeleton illustrating these conventions follows this list.
- **Code Generation Anti-Patterns to Avoid:** Overly nested logic, single-letter variables (except trivial loop counters), disabling linter/TS errors without cause, bypassing framework security, monolithic functions.
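
The following skeleton illustrates several of these conventions together: the `index.ts` entry point, the shared logger with `workflowRunId` context, explicit null handling via `??`, and failure reporting through `WorkflowTrackerService`. It is a sketch only; the payload shape and the `workflowTracker` export name are assumptions, and the real trigger signature depends on how the function is invoked.

```typescript
// Sketch: supabase/functions/example-service/index.ts (illustrative skeleton).
import { logger } from "../_shared/logger";
// `workflowTracker` is an assumed export name for the shared tracker service.
import { workflowTracker } from "../_shared/workflow-tracker-service";

interface TriggerPayload {
  workflowRunId?: string; // hypothetical payload shape for this sketch
}

export async function handler(payload: TriggerPayload): Promise<void> {
  // Explicit null/undefined handling instead of the `!` assertion.
  const workflowRunId = payload.workflowRunId ?? null;
  if (workflowRunId === null) {
    logger.error({ payload }, "Missing workflowRunId; aborting");
    return;
  }

  try {
    logger.info({ workflowRunId }, "Service started");
    // ... service-specific work goes here ...
  } catch (err) {
    // Log the detailed error, then mark the whole run failed per the strategy above.
    logger.error({ err, workflowRunId }, "Service failed");
    await workflowTracker.failWorkflow(
      workflowRunId,
      err instanceof Error ? err.message : String(err)
    );
  }
}
```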
### Overall Testing Strategy

- **Tools:** Jest (unit/integration), React Testing Library (RTL) (React components), Playwright (E2E). Supabase CLI for local DB/function testing.
- **Unit Tests:**
  - **Scope:** Isolate individual functions, methods, classes, and React components. Focus on logic, transformations, and component rendering.
  - **Location & Naming:** Co-located with source files (`*.test.ts`, `*.spec.ts`, `*.test.tsx`, `*.spec.tsx`).
  - **Mocking/Stubbing:** Jest mocks for dependencies. External API facades are mocked when testing services that use them. Facades themselves are tested by mocking the underlying HTTP client or library's network calls (see the sketch after this list).
  - **AI Agent Responsibility:** Generate unit tests covering logic paths, props, events, edge cases, and error conditions for new/modified code.
- **Integration Tests:**
  - **Scope:** Interactions between components/services (e.g., API route -> service -> DB).
  - **Location:** `tests/integration/`.
  - **Environment:** Local Supabase dev environment. Consider `msw` for mocking HTTP services called by the frontend/backend.
  - **AI Agent Responsibility:** Generate tests for key service interactions or API contracts.
- **End-to-End (E2E) Tests:**
  - **Scope:** Validate complete user flows via the UI.
  - **Tool:** Playwright. Location: `tests/e2e/`.
  - **Key Scenarios (MVP):** View newsletter list, view detail, play podcast, download newsletter.
  - **AI Agent Responsibility:** Generate E2E test stubs/scripts for critical paths.
- **Test Coverage:**
  - **Target:** Aim for **80% unit test coverage** for new business logic and critical components. Quality over quantity.
  - **Measurement:** Jest coverage reports.
- **Mocking/Stubbing Strategy (General):** Test one unit at a time. Mock external dependencies for unit tests. For facade unit tests: use the real library but mock its external calls at the library's boundary.
- **Test Data Management:** Inline mock data for unit tests. Factories/fixtures or `seed.sql` for integration/E2E tests.
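
As a sketch of the facade-mocking approach for service unit tests: `generateSummary` is the facade method named in the architecture's sequence diagrams, while `summarizeArticle` and the module paths are hypothetical names used only for illustration.

```typescript
// Sketch: summarization-service.test.ts (paths and service function are hypothetical).
import { generateSummary } from "../_shared/llm-facade";
// `summarizeArticle` is a hypothetical service function for this sketch.
import { summarizeArticle } from "./summarization-service";

// Replace the whole facade module with Jest auto-mocks.
jest.mock("../_shared/llm-facade");

const mockedGenerateSummary = jest.mocked(generateSummary);

test("stores the summary text returned by the LLM facade", async () => {
  mockedGenerateSummary.mockResolvedValue("Two-paragraph summary...");

  const result = await summarizeArticle("article text", { prompt: "Summarize." });

  expect(result).toBe("Two-paragraph summary...");
  expect(mockedGenerateSummary).toHaveBeenCalledTimes(1);
});
```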
### Security Best Practices

- **Input Sanitization/Validation:** Zod for all external inputs (API requests, function payloads, external API responses). Validate at component boundaries.
- **Output Encoding:** Rely on React JSX auto-escaping for the frontend. Ensure newsletter HTML is sanitized if dynamic data is injected outside of a secure templating engine.
- **Secrets Management:** Via environment variables (Vercel UI, `.env.local`). Never hardcode or log secrets. Access via `process.env`. Use the Supabase service role key only in backend functions.
- **Dependency Security:** Run `npm audit` regularly. Vet new dependencies.
- **Authentication/Authorization:**
  - Workflow Trigger/Status APIs: API key (`X-API-KEY`), as sketched after this list.
  - Play.ht Webhook: Shared secret or signature verification.
  - Supabase RLS: Enable on tables and define policies (especially for `subscribers` and any data queried directly by the frontend).
- **Principle of Least Privilege:** Scope API keys and database roles narrowly.
- **API Security (General):** HTTPS (Vercel default). Consider rate limiting for public APIs. Standard HTTP security headers.
- **Error Handling & Information Disclosure:** Log detailed errors server-side; return generic messages/error IDs to clients.
- **Regular Security Audits/Testing (Post-MVP):** Consider for future enhancements.
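
A simplified sketch of the `X-API-KEY` check for the workflow trigger route follows. The 202/`jobId` response shape comes from the sequence diagrams; the env var name `WORKFLOW_TRIGGER_API_KEY` is an assumption.

```typescript
// Sketch: app/(api)/system/trigger-workflow/route.ts (API-key check only; simplified).
import { NextRequest, NextResponse } from "next/server";

export async function POST(request: NextRequest) {
  // WORKFLOW_TRIGGER_API_KEY is an assumed env var name.
  const expectedKey = process.env.WORKFLOW_TRIGGER_API_KEY;
  const providedKey = request.headers.get("x-api-key");

  if (!expectedKey || providedKey !== expectedKey) {
    // Generic message only; details stay in server-side logs.
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }

  // ... initiate the workflow via WorkflowTrackerService here ...
  return NextResponse.json({ jobId: "<new_workflow_run_id>" }, { status: 202 });
}
```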
BETA-V3/v3-demos/full-stack-app-demo/10-sharded-docs/prd.md
@@ -0,0 +1,172 @@
# BMad DiCaster Product Requirements Document (PRD)

## Goal, Objective and Context

**Goal:** To develop a web application that provides a daily, concise summary of top Hacker News (HN) posts, delivered as a newsletter and accessible via a web interface.

**Objective:** To streamline the consumption of HN content by curating the top stories, providing AI-powered summaries, and offering an optional AI-generated podcast version.

**Context:** Busy professionals and enthusiasts want to stay updated on HN but lack the time to sift through numerous posts and discussions. This application addresses that problem by automating the delivery of summarized content.

## Functional Requirements (MVP)

- **HN Content Retrieval & Storage:**
  - Daily retrieval of the top 30 Hacker News posts and associated comments using the HN Algolia API.
  - Scraping and storage of up to 10 linked articles per day.
  - Storage of all retrieved data (posts, comments, articles) with date association.
- **AI-Powered Summarization:**
  - AI-powered summarization of the 10 selected articles (2-paragraph summaries).
  - AI-powered summarization of comments for the 10 selected posts (2-paragraph summaries highlighting interesting interactions).
  - Configuration for local or remote LLM usage via environment variables.
- **Newsletter Generation & Delivery:**
  - Generation of a daily newsletter in HTML format, including summaries, links to HN posts and articles, and original post dates/times.
  - Automated delivery of the newsletter to a manually configured list of subscribers in Supabase. The list of emails will be manually populated in the database. Account information for the Nodemailer service will be provided via environment variables.
- **Podcast Generation & Integration:**
  - Integration with Play.ht's PlayNote API for AI-generated podcast creation from the newsletter content.
  - Webhook handler to update the newsletter with the generated podcast link.
- **Web Interface (MVP):**
  - Display of current and past newsletters.
  - Functionality to read the newsletter content within the web page.
  - Download option for newsletters.
  - Web player for listening to generated podcasts.
  - Basic mobile responsiveness for displaying newsletters and podcasts.
- **API & Triggering:**
  - Secure API endpoint to manually trigger the daily workflow, secured with API keys.
  - CLI command to manually trigger the daily workflow locally.

## Non-Functional Requirements (MVP)

- **Performance:**
  - The system should retrieve HN posts and generate the newsletter within a reasonable timeframe (e.g., under 30 minutes) to ensure timely delivery.
  - The web interface should load quickly (e.g., within 2 seconds) to provide a smooth user experience.
- **Scalability:**
  - The system is designed for an initial MVP delivery to 3-5 email subscribers. Scalability beyond this will be considered post-MVP.
- **Security:**
  - The API endpoint for triggering the daily workflow must be secured with API keys.
  - User data (email addresses) must be stored securely. No other security measures are required for the MVP.
- **Reliability:**
  - No specific uptime or availability requirements are defined for the MVP.
  - The newsletter generation and delivery process should be robust and handle potential errors gracefully.
  - The system must be executable from a local development environment.
- **Maintainability:**
  - The codebase should adhere to good quality coding standards, including separation of concerns.
  - The system should employ facades and factories to facilitate future expansion.
  - The system should be built as an event-driven pipeline, leveraging Supabase to capture data at each stage and trigger subsequent functions asynchronously. This approach mitigates potential timeout issues with Vercel hosting.

## User Interaction and Design Goals

This section captures the high-level vision and goals for the User Experience (UX) to guide the Design Architect.

- **Overall Vision & Experience:**
  - The desired look and feel is modern and minimalist, with synthwave technical glowing purple vibes.
  - Users should have a clean and efficient experience when accessing and consuming newsletter content and podcasts.
- **Key Interaction Paradigms:**
  - Interaction paradigms will be determined by the Design Architect.
- **Core Screens/Views (Conceptual):**
  - The MVP will consist of two pages:
    - A list page to display current and past newsletters.
    - A detail page to display the selected newsletter content, including:
      - Download option for the newsletter.
      - Web player for listening to the generated podcast.
      - The article laid out for viewing.
- **Accessibility Aspirations:**
  - The web interface (Epic 6) will adhere to WCAG 2.1 Level A guidelines as detailed in `frontend-architecture.md`. (Updated during checklist review)
- **Branding Considerations (High-Level):**
  - A logo for the application will be provided.
  - The application will use the name "BMad DiCaster".
- **Target Devices/Platforms:**
  - The application will be designed as a mobile-first responsive web app, ensuring it looks good on both mobile and desktop devices.

## Technical Assumptions

This section captures existing technical information that will guide the Architect in the technical design.

- The application will be developed using the Next.js/Supabase template and hosted entirely on Vercel.
- This implies a monorepo structure, as the frontend (Next.js) and backend (Supabase functions) will reside within the same repository.
- The backend will primarily leverage serverless functions provided by Vercel and Supabase.
- Frontend development will be in Next.js with React.
- Data storage will be handled by Supabase's PostgreSQL database.
- Separate Supabase instances will be used for the development and production environments to ensure data isolation and stability.
- For local development, developers can use the Supabase CLI and Vercel CLI to emulate the production environment, primarily for testing functions and deployments; the development Supabase instance remains the primary source of dev data.
- Testing will include unit tests, integration tests (especially for interactions with Supabase), and end-to-end tests.
- The system should be built as an event-driven pipeline, leveraging Supabase to capture data at each stage and trigger subsequent functions asynchronously to mitigate potential timeout issues with Vercel.

## Epic Overview

_(Note: Epics will be developed sequentially. Development will start with Epic 1 and proceed to the next epic only after the previous one is fully completed and verified. Per the BMAD method, every story must be self-contained and done before the next one is started.)_

_(Note: All UI development across all epics must adhere to mobile responsiveness and Tailwind CSS/theming principles to ensure a consistent and maintainable user experience.)_

**(General Note on Service Implementation for All Epics):** All backend services (Supabase Functions) developed as part of any epic must implement robust error handling. They should log extensively using Pino, ensuring that all log entries include the relevant `workflow_run_id` for traceability. Furthermore, services must interact with the `WorkflowTrackerService` to update the `workflow_runs` table on both successful completion of their tasks and in case of any failures, recording status and error messages as applicable. An illustrative sketch of this service's surface follows.
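
The method names below are collected from the architecture's sequence diagrams; the exact signatures are assumptions for illustration only, not a definitive contract.

```typescript
// Illustrative surface of WorkflowTrackerService (signatures are assumptions;
// method names are taken from the architecture sequence diagrams).
export interface WorkflowTrackerService {
  initiateNewWorkflow(): Promise<{ jobId: string }>;
  updateWorkflowStep(workflowRunId: string, stepDetails: string, status: string): Promise<void>;
  updateWorkflowDetails(workflowRunId: string, details: Record<string, unknown>): Promise<void>;
  completeWorkflow(workflowRunId: string, details: Record<string, unknown>): Promise<void>;
  failWorkflow(workflowRunId: string, errorMessage: string): Promise<void>;
}
```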
- **Epic 1: Project Initialization, Setup, and HN Content Acquisition**
  - Goal: Establish the foundational project structure, including the Next.js application, Supabase integration, deployment pipeline, API/CLI triggers, core workflow orchestration, and implement functionality to retrieve, process, and store Hacker News posts/comments via a `ContentAcquisitionFacade`, providing data for newsletter generation. Implement the database event mechanism to trigger subsequent processing. Define core configuration tables, seed data, and set up testing frameworks.
- **Epic 2: Article Scraping**
  - Goal: Implement the functionality to scrape and store linked articles from HN posts, enriching the data available for summarization and the newsletter. Ensure this functionality is triggered by database events and can be tested via API/CLI (if retained). Implement the database event mechanism to trigger subsequent processing.
- **Epic 3: AI-Powered Content Summarization**
  - Goal: Integrate AI summarization capabilities, by implementing and using a configurable and testable `LLMFacade`, to generate concise summaries of articles and comments from prompts stored in the database. This will enrich the newsletter content, be triggerable via API/CLI, be triggered by database events, and track progress via `WorkflowTrackerService`.
- **Epic 4: Automated Newsletter Creation and Distribution**
  - Goal: Automate the generation and delivery of the daily newsletter by implementing and using a configurable `EmailDispatchFacade`. This includes handling podcast link availability, being triggerable via API/CLI, orchestration by `CheckWorkflowCompletionService`, and status tracking via `WorkflowTrackerService`.
- **Epic 5: Podcast Generation Integration**
  - Goal: Integrate with an audio generation API (initially Play.ht) by implementing and using a configurable `AudioGenerationFacade` to create podcast versions of the newsletter. This includes handling webhooks to update newsletter data and workflow status. Ensure this is triggerable via API/CLI, orchestrated appropriately, and uses `WorkflowTrackerService`.
- **Epic 6: Web Interface for Initial Structure and Content Access**
  - Goal: Develop a user-friendly, responsive, and accessible web interface, based on the `frontend-architecture.md`, to display newsletters and provide access to podcast content, aligning with the project's visual and technical guidelines. All UI development within this epic must adhere to the "synthwave technical glowing purple vibes" aesthetic using Tailwind CSS and Shadcn UI, ensure basic mobile responsiveness, meet WCAG 2.1 Level A accessibility guidelines (including semantic HTML, keyboard navigation, alt text, color contrast), and optimize images using `next/image`, as detailed in the `frontend-architecture.txt` and `ui-ux-spec.txt`.

---

**Epic 1: Project Initialization, Setup, and HN Content Acquisition**

> This section has been moved to a dedicated document: [Epic 1: Project Initialization, Setup, and HN Content Acquisition](./epic-1.md)

---

**Epic 2: Article Scraping**

> This section has been moved to a dedicated document: [Epic 2: Article Scraping](./epic-2.md)

---

**Epic 3: AI-Powered Content Summarization**

> This section has been moved to a dedicated document: [Epic 3: AI-Powered Content Summarization](./epic-3.md)

---

**Epic 4: Automated Newsletter Creation and Distribution**

> This section has been moved to a dedicated document: [Epic 4: Automated Newsletter Creation and Distribution](./epic-4.md)

---

**Epic 5: Podcast Generation Integration**

> This section has been moved to a dedicated document: [Epic 5: Podcast Generation Integration](./epic-5.md)

---

**Epic 6: Web Interface for Initial Structure and Content Access**

> This section has been moved to a dedicated document: [Epic 6: Web Interface for Initial Structure and Content Access](./epic-6.md)

---

## Out of Scope Ideas Post MVP

- User Authentication and Management
- Subscription Management
- Admin Dashboard
- Viewing and updating daily podcast settings
- Prompt management for summarization
- UI for template modification
- Enhanced Newsletter Customization
- Additional Content Digests
- Configuration and creation of different digests
- Support for content sources beyond Hacker News
- Advanced scraping techniques (e.g., Playwright)

## Change Log

| Change | Date | Version | Description | Author |
| :--- | :--- | :--- | :--- | :--- |
| Initial Draft | 2025-05-13 | 0.1 | Initial draft of the Product Requirements Document | 2-pm |
| Updates from Arch suggestions & Checklist Review | 2025-05-14 | 0.3 | Incorporated changes from `arch-suggested-changes.txt`, `fea-suggested-changes.txt`, and Master Checklist review, including new stories & AC refinements. | 5-posm |
@@ -0,0 +1,110 @@
# Project Structure

> This document is a granulated shard from the main "3-architecture.md" focusing on "Project Structure".

The BMad DiCaster project is organized as a monorepo, leveraging the Vercel/Supabase Next.js App Router template as its foundation.

```plaintext
{project-root}/
├── app/                                  # Next.js App Router
│   ├── (api)/                            # API route handlers
│   │   ├── system/
│   │   │   ├── trigger-workflow/route.ts
│   │   │   └── workflow-status/[jobId]/route.ts
│   │   └── webhooks/
│   │       └── playht/route.ts
│   ├── components/                       # Application-specific UI react components
│   │   └── core/                         # e.g., NewsletterCard, PodcastPlayer
│   ├── newsletters/
│   │   ├── [newsletterId]/page.tsx
│   │   └── page.tsx
│   ├── auth/                             # Auth-related pages and components (from template)
│   ├── login/page.tsx                    # Login page (from template)
│   ├── layout.tsx
│   └── page.tsx                          # Homepage
├── components/                           # Shadcn UI components root (as configured by components.json)
│   ├── tutorial/                         # Example/template components (can be removed)
│   ├── typography/                       # Example/template components (can be removed)
│   └── ui/                               # Base UI elements (button.tsx, card.tsx etc.)
├── docs/                                 # Project documentation
│   ├── prd.md                            # Or prd-incremental-full-agile-mode.txt
│   ├── architecture.md                   # This document
│   ├── ui-ux-spec.md                     # Or ui-ux-spec.txt
│   ├── technical-preferences.md          # Or technical-preferences copy.txt
│   ├── ADR/                              # Architecture Decision Records (to be created as needed)
│   └── environment-vars.md               # (To be created)
├── lib/                                  # General utility functions for frontend (e.g., utils.ts from template)
│   └── utils.ts
├── supabase/                             # Supabase specific project files (backend logic)
│   ├── functions/                        # Supabase Edge Functions (for event-driven pipeline)
│   │   ├── hn-content-service/index.ts
│   │   ├── article-scraper-service/index.ts
│   │   ├── summarization-service/index.ts
│   │   ├── podcast-generation-service/index.ts
│   │   ├── newsletter-generation-service/index.ts
│   │   ├── check-workflow-completion-service/index.ts  # Cron-triggered orchestrator
│   │   └── _shared/                      # Shared utilities/facades FOR Supabase backend functions
│   │       ├── supabase-admin-client.ts
│   │       ├── llm-facade.ts
│   │       ├── playht-facade.ts
│   │       ├── nodemailer-facade.ts
│   │       └── workflow-tracker-service.ts  # For updating workflow_runs table
│   ├── migrations/                       # Database schema migrations
│   │   └── YYYYMMDDHHMMSS_initial_schema.sql
│   └── config.toml                       # Supabase project configuration (for CLI)
├── public/                               # Static assets (images, favicon, etc.)
├── shared/                               # Shared code/types between frontend and Supabase functions
│   └── types/
│       ├── api-schemas.ts                # Request/response types for app/(api) routes
│       ├── domain-models.ts              # Core entity types (HNPost, ArticleSummary etc.)
│       └── index.ts                      # Barrel file for shared types
├── styles/                               # Global styles (e.g., globals.css for Tailwind base)
├── tests/                                # Automated tests
│   ├── e2e/                              # Playwright E2E tests
│   │   ├── newsletter-view.spec.ts
│   │   └── playwright.config.ts
│   └── integration/                      # Integration tests
│       └── api-trigger-workflow.integration.test.ts
│                                         # (Unit tests are co-located with source files, e.g., app/components/core/MyComponent.test.tsx)
├── utils/                                # Root utilities (from template)
│   └── supabase/                         # Supabase helper functions FOR FRONTEND (from template)
│       ├── client.ts                     # Client-side Supabase client
│       ├── middleware.ts                 # Logic for Next.js middleware
│       └── server.ts                     # Server-side Supabase client
├── .env.example
├── .gitignore
├── components.json                       # Shadcn UI configuration
├── middleware.ts                         # Next.js middleware (root, uses utils/supabase/middleware.ts)
├── next-env.d.ts
├── next.config.mjs
├── package.json
├── postcss.config.js
├── README.md
├── tailwind.config.ts
└── tsconfig.json
```

### Key Directory Descriptions:

- **`app/`**: Next.js frontend (pages, UI components, Next.js API routes).
- **`app/(api)/`**: Backend API routes hosted on Vercel, including webhook receivers and system triggers.
- **`app/components/core/`**: Application-specific reusable React components.
- **`components/`**: Root for Shadcn UI components.
- **`docs/`**: All project documentation.
- **`lib/`**: Frontend-specific utility functions.
- **`supabase/functions/`**: Backend serverless functions (event-driven pipeline steps).
- **`supabase/functions/_shared/`**: Utilities and facades for these backend functions, including `WorkflowTrackerService`.
- **`supabase/migrations/`**: Database migrations managed by the Supabase CLI.
- **`shared/types/`**: TypeScript types/interfaces shared between the frontend and `supabase/functions/`. The path alias `@shared/*` is to be configured in `tsconfig.json`.
- **`tests/`**: Contains E2E and integration tests. Unit tests are co-located with source files.
- **`utils/supabase/`**: Frontend-focused Supabase client helpers provided by the starter template.

### Monorepo Management:

- Standard `npm` (or `pnpm`/`yarn` workspaces if adopted later) for managing dependencies.
- The root `tsconfig.json` includes path aliases (`@/*`, `@shared/*`), illustrated below.
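
With those aliases configured, imports might look like the following; the specific modules shown are illustrative.

```typescript
// Hypothetical imports using the path aliases above.
import type { HNPost } from "@shared/types"; // resolves to shared/types/index.ts
import { cn } from "@/lib/utils";            // resolves to lib/utils.ts (template helper, assumed)
```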
### Notes:

- Supabase functions in `supabase/functions/` are deployed to Vercel via the Supabase CLI and Vercel integration.
- The `CheckWorkflowCompletionService` might be invoked via a Vercel Cron Job calling a simple HTTP trigger endpoint for that function, or via `pg_cron` if direct database scheduling is preferred.
@@ -0,0 +1,216 @@
# Core Workflow / Sequence Diagrams

> This document is a granulated shard from the main "3-architecture.md" focusing on "Core Workflow / Sequence Diagrams".

These diagrams illustrate the key sequences of operations in the BMad DiCaster system.

### 1\. Daily Workflow Initiation & HN Content Acquisition

This diagram shows the manual/API trigger initiating a new workflow run, followed by the fetching of Hacker News posts and comments.

```mermaid
sequenceDiagram
    actor Caller as Manual/API/CLI/Cron
    participant TriggerAPI as POST /api/system/trigger-workflow
    participant WorkflowRunsDB as workflow_runs (DB Table)
    participant WorkflowTracker as WorkflowTrackerService
    participant HNContentService as HNContentService (Supabase Fn)
    participant HNAlgoliaAPI as HN Algolia API
    participant HNPostsDB as hn_posts (DB Table)
    participant HNCommentsDB as hn_comments (DB Table)
    participant EventTrigger1 as DB Event/Webhook (on hn_posts insert)

    Caller->>+TriggerAPI: Request to start daily workflow
    TriggerAPI->>+WorkflowTracker: initiateNewWorkflow()
    WorkflowTracker->>+WorkflowRunsDB: INSERT new run (status='pending', details={})
    WorkflowRunsDB-->>-WorkflowTracker: new_workflow_run_id
    WorkflowTracker-->>TriggerAPI: { jobId: new_workflow_run_id }
    TriggerAPI-->>-Caller: HTTP 202 Accepted { jobId }

    alt Initial Trigger for HN Content Fetch
        WorkflowTracker->>+HNContentService: triggerFetch(workflow_run_id)
        Note over WorkflowTracker,HNContentService: This could be a direct call or an event insertion that HNContentService picks up.
    else Alternative: Event from WorkflowRunsDB insert
        WorkflowRunsDB-->>EventTrigger1: New workflow_run record
        EventTrigger1->>+HNContentService: Invoke(workflow_run_id, event_payload)
    end

    HNContentService->>+WorkflowTracker: updateWorkflowStep(workflow_run_id, 'fetching_hn_posts', 'fetching_hn')
    WorkflowTracker->>+WorkflowRunsDB: UPDATE workflow_runs (status, current_step_details)
    WorkflowRunsDB-->>-WorkflowTracker: ack

    HNContentService->>+HNAlgoliaAPI: GET /search?tags=front_page
    HNAlgoliaAPI-->>-HNContentService: Front page story items

    loop For each story item (up to 30 after sorting by points)
        HNContentService->>+HNPostsDB: INSERT story (hn_post_id, title, url, points, created_at, workflow_run_id)
        HNPostsDB-->>EventTrigger1: Notifies: New hn_post inserted
        EventTrigger1-->>ArticleScrapingService: (Async) Trigger ArticleScrapingService(hn_post_id, workflow_run_id)
        Note right of EventTrigger1: Triggers article scraping (next diagram)

        HNContentService->>+HNAlgoliaAPI: GET /items/{story_objectID} (to fetch comments)
        HNAlgoliaAPI-->>-HNContentService: Story details with comments
        loop For each comment
            HNContentService->>+HNCommentsDB: INSERT comment (comment_id, hn_post_id, text, author, created_at)
            HNCommentsDB-->>-HNContentService: ack
        end
    end
    HNContentService->>+WorkflowTracker: updateWorkflowDetails(workflow_run_id, {posts_fetched: X, comments_fetched: Y})
    WorkflowTracker->>+WorkflowRunsDB: UPDATE workflow_runs (details)
    WorkflowRunsDB-->>-WorkflowTracker: ack
    Note over HNContentService: HN Content Service might mark its part for the workflow as 'hn_data_fetched'. The overall workflow status will be managed by CheckWorkflowCompletionService.
```
### 2\. Article Scraping & Summarization Flow

This diagram shows the flow starting from a new HN post being available, leading to article scraping, and then summarization of the article content and HN comments.

```mermaid
sequenceDiagram
    participant EventTrigger1 as DB Event/Webhook (on hn_posts insert)
    participant ArticleScrapingService as ArticleScrapingService (Supabase Fn)
    participant ScrapedArticlesDB as scraped_articles (DB Table)
    participant WorkflowTracker as WorkflowTrackerService
    participant WorkflowRunsDB as workflow_runs (DB Table)
    participant EventTrigger2 as DB Event/Webhook (on scraped_articles insert/update)
    participant SummarizationService as SummarizationService (Supabase Fn)
    participant LLMFacade as LLMFacade (shared function)
    participant LLMProvider as LLM Provider (Ollama/Remote)
    participant SummariesDB as article_summaries / comment_summaries (DB Tables)
    participant PromptsDB as summarization_prompts (DB Table)

    EventTrigger1->>+ArticleScrapingService: Invoke(hn_post_id, workflow_run_id, article_url)
    ArticleScrapingService->>+WorkflowTracker: updateWorkflowStep(workflow_run_id, 'scraping_article_for_post_' + hn_post_id, 'scraping_articles')
    WorkflowTracker->>WorkflowRunsDB: UPDATE workflow_runs (current_step_details)

    ArticleScrapingService->>ArticleScrapingService: Identify relevant URL from hn_post (if multiple)
    ArticleScrapingService->>+ScrapedArticlesDB: INSERT new article (hn_post_id, original_url, status='pending', workflow_run_id)
    ScrapedArticlesDB-->>-ArticleScrapingService: new_scraped_article_id

    alt Article URL is valid and scrapeable
        ArticleScrapingService->>ArticleScrapingService: Fetch HTML content from article_url (using Cheerio compatible fetch)
        ArticleScrapingService->>ArticleScrapingService: Parse HTML with Cheerio, extract title, author, date, main_text
        ArticleScrapingService->>+ScrapedArticlesDB: UPDATE scraped_articles SET main_text_content, title, author, status='success' WHERE id=new_scraped_article_id
    else Scraping fails or URL invalid
        ArticleScrapingService->>+ScrapedArticlesDB: UPDATE scraped_articles SET status='failed_parsing/unreachable', error_message='...' WHERE id=new_scraped_article_id
    end
    ScrapedArticlesDB-->>EventTrigger2: Notifies: New/Updated scraped_article (status='success')
    EventTrigger2-->>SummarizationService: (Async) Trigger SummarizationService(scraped_article_id, workflow_run_id, 'article')
    Note right of EventTrigger2: Triggers article summarization

    ArticleScrapingService->>+WorkflowTracker: updateWorkflowDetails(workflow_run_id, {articles_attempted_increment: 1, articles_scraped_successfully_increment: (success ? 1:0) })
    WorkflowTracker->>WorkflowRunsDB: UPDATE workflow_runs (details)

    Note over SummarizationService: Comment data is read from the hn_posts/hn_comments tables (not shown in this diagram).
    Note right of SummarizationService: HN comments are also summarized for the hn_post_id associated with this workflow_run_id. This might be a separate invocation or part of a broader summarization task for the post.
    SummarizationService->>+WorkflowTracker: updateWorkflowStep(workflow_run_id, 'summarizing_content_for_post_' + hn_post_id, 'summarizing_content')
    WorkflowTracker->>WorkflowRunsDB: UPDATE workflow_runs (current_step_details)

    alt Summarize Article
        SummarizationService->>SummarizationService: Get text_content from scraped_articles WHERE id=scraped_article_id
        SummarizationService->>+PromptsDB: SELECT prompt_text WHERE is_default_article_prompt=TRUE
        PromptsDB-->>-SummarizationService: article_prompt_text
        SummarizationService->>+LLMFacade: generateSummary(text_content, {prompt: article_prompt_text})
        LLMFacade->>+LLMProvider: Request summary (Ollama or Remote API call)
        LLMProvider-->>-LLMFacade: summary_response
        LLMFacade-->>-SummarizationService: article_summary_text
        SummarizationService->>+SummariesDB: INSERT into article_summaries (scraped_article_id, summary_text, workflow_run_id, llm_model_used)
        SummariesDB-->>-SummarizationService: ack
    end

    alt Summarize Comments (for each relevant hn_post_id in the workflow_run)
        SummarizationService->>SummarizationService: Get all comments for hn_post_id from hn_comments table
        SummarizationService->>SummarizationService: Concatenate/prepare comment text
        SummarizationService->>+PromptsDB: SELECT prompt_text WHERE is_default_comment_prompt=TRUE
        PromptsDB-->>-SummarizationService: comment_prompt_text
        SummarizationService->>+LLMFacade: generateSummary(all_comments_text, {prompt: comment_prompt_text})
        LLMFacade->>+LLMProvider: Request summary
        LLMProvider-->>-LLMFacade: summary_response
        LLMFacade-->>-SummarizationService: comment_summary_text
        SummarizationService->>+SummariesDB: INSERT into comment_summaries (hn_post_id, summary_text, workflow_run_id, llm_model_used)
        SummariesDB-->>-SummarizationService: ack
    end
    SummarizationService->>+WorkflowTracker: updateWorkflowDetails(workflow_run_id, {summaries_generated_increment: 1_or_2})
    WorkflowTracker->>WorkflowRunsDB: UPDATE workflow_runs (details)
    Note over SummarizationService: After all expected summaries for the workflow_run are done, the CheckWorkflowCompletionService will eventually pick this up.
```
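
The `LLMFacade` participant above can be summarized with a small TypeScript sketch. This is an assumption-laden illustration: the env var name `LLM_PROVIDER` and the factory shape are not specified in this shard, and only `generateSummary(text, { prompt })` is taken from the diagram.

```typescript
// Illustrative sketch of the LLMFacade shape (not the actual implementation).
export interface SummaryOptions {
  prompt: string;
}

export interface LLMFacade {
  generateSummary(textContent: string, options: SummaryOptions): Promise<string>;
}

// A facade factory might pick the provider from configuration; the env var
// name and provider values here are assumptions.
export function createLlmFacade(): LLMFacade {
  const provider = process.env.LLM_PROVIDER ?? "ollama";
  return {
    async generateSummary(textContent, options) {
      // A real implementation would call Ollama's /api/generate or a remote LLM
      // API here, using `options.prompt` and `textContent`, with timeouts and
      // retries per the error handling strategy.
      throw new Error(`LLM provider '${provider}' is not wired up in this sketch`);
    },
  };
}
```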
### 3\. Newsletter, Podcast, and Delivery Flow

This diagram shows the steps from completed summarization to newsletter generation, podcast creation, webhook handling, and final email delivery. It assumes the `CheckWorkflowCompletionService` has determined that all summaries for a given `workflow_run_id` are ready.

```mermaid
sequenceDiagram
    participant CheckWorkflowService as CheckWorkflowCompletionService (Supabase Cron Fn)
    participant WorkflowRunsDB as workflow_runs (DB Table)
    participant WorkflowTracker as WorkflowTrackerService
    participant NewsletterGenService as NewsletterGenerationService (Supabase Fn)
    participant PodcastGenService as PodcastGenerationService (Supabase Fn)
    participant PlayHTAPI as Play.ht API
    participant NewsletterTemplatesDB as newsletter_templates (DB Table)
    participant SummariesDB as article_summaries / comment_summaries (DB Tables)
    participant NewslettersDB as newsletters (DB Table)
    participant PlayHTWebhook as POST /api/webhooks/playht (Next.js API Route)
    participant NodemailerService as NodemailerFacade (shared function)
    participant SubscribersDB as subscribers (DB Table)
    participant ExternalEmailService as Email Service (e.g., Gmail SMTP)

    CheckWorkflowService->>+WorkflowRunsDB: Query for runs with status 'summarizing_content' and all summaries complete
    WorkflowRunsDB-->>-CheckWorkflowService: workflow_run_id (ready for newsletter)

    CheckWorkflowService->>+WorkflowTracker: updateWorkflowStep(workflow_run_id, 'starting_newsletter_generation', 'generating_newsletter')
    WorkflowTracker->>+WorkflowRunsDB: UPDATE workflow_runs (status, current_step_details)
    WorkflowRunsDB-->>-WorkflowTracker: ack
    CheckWorkflowService->>+NewsletterGenService: Invoke(workflow_run_id)

    NewsletterGenService->>+NewsletterTemplatesDB: SELECT html_content, version WHERE is_default=TRUE
    NewsletterTemplatesDB-->>-NewsletterGenService: template_html, template_version
    NewsletterGenService->>+SummariesDB: SELECT article_summaries, comment_summaries WHERE workflow_run_id=...
    SummariesDB-->>-NewsletterGenService: summaries_data
    NewsletterGenService->>NewsletterGenService: Compile HTML newsletter using template and summaries_data
    NewsletterGenService->>+NewslettersDB: INSERT newsletter (workflow_run_id, title, html_content, podcast_status='pending', delivery_status='pending', target_date)
    NewslettersDB-->>-NewsletterGenService: new_newsletter_id

    NewsletterGenService->>+PodcastGenService: initiatePodcast(new_newsletter_id, html_content_for_podcast, workflow_run_id)
    PodcastGenService->>+WorkflowTracker: updateWorkflowStep(workflow_run_id, 'podcast_generation_initiated', 'generating_podcast')
    WorkflowTracker->>WorkflowRunsDB: UPDATE workflow_runs
    PodcastGenService->>+PlayHTAPI: POST /playnotes (sourceFile=html_content, webHookUrl=...)
    PlayHTAPI-->>-PodcastGenService: { playht_job_id, status: 'generating' }
    PodcastGenService->>+NewslettersDB: UPDATE newsletters SET podcast_playht_job_id, podcast_status='generating' WHERE id=new_newsletter_id
    NewslettersDB-->>-PodcastGenService: ack
    Note over NewsletterGenService, PodcastGenService: Newsletter is now generated; podcast is being generated by Play.ht. Email delivery will wait for podcast completion or timeout.

    PlayHTAPI-->>+PlayHTWebhook: POST (status='completed', audioUrl='...', id=playht_job_id)
    PlayHTWebhook->>+NewslettersDB: UPDATE newsletters SET podcast_url, podcast_status='completed' WHERE podcast_playht_job_id=...
    NewslettersDB-->>-PlayHTWebhook: ack
    PlayHTWebhook->>+WorkflowTracker: updateWorkflowDetails(workflow_run_id_from_newsletter, {podcast_status: 'completed'})
    WorkflowTracker->>WorkflowRunsDB: UPDATE workflow_runs (details)
    PlayHTWebhook-->>-PlayHTAPI: HTTP 200 OK

    CheckWorkflowService->>+WorkflowRunsDB: Query for runs with status 'generating_podcast' AND newsletters.podcast_status IN ('completed', 'failed') OR timeout reached
    WorkflowRunsDB-->>-CheckWorkflowService: workflow_run_id (ready for delivery)

    CheckWorkflowService->>+WorkflowTracker: updateWorkflowStep(workflow_run_id, 'starting_newsletter_delivery', 'delivering_newsletter')
    WorkflowTracker->>+WorkflowRunsDB: UPDATE workflow_runs (status, current_step_details)
    WorkflowRunsDB-->>-WorkflowTracker: ack
    CheckWorkflowService->>+NewsletterGenService: triggerDelivery(newsletter_id_for_workflow_run)

    NewsletterGenService->>+NewslettersDB: SELECT html_content, podcast_url WHERE id=newsletter_id
    NewslettersDB-->>-NewsletterGenService: newsletter_data
    NewsletterGenService->>NewsletterGenService: (If podcast_url available, embed it in html_content)
    NewsletterGenService->>+SubscribersDB: SELECT email WHERE is_active=TRUE
    SubscribersDB-->>-NewsletterGenService: subscriber_emails[]

    loop For each subscriber_email
        NewsletterGenService->>+NodemailerService: sendEmail(to=subscriber_email, subject=newsletter_title, html=final_html_content)
        NodemailerService->>+ExternalEmailService: SMTP send
        ExternalEmailService-->>-NodemailerService: delivery_success/failure
        NodemailerService-->>-NewsletterGenService: status
    end
    NewsletterGenService->>+NewslettersDB: UPDATE newsletters SET delivery_status='sent' (or 'partially_failed'), sent_at=now()
    NewslettersDB-->>-NewsletterGenService: ack
    NewsletterGenService->>+WorkflowTracker: completeWorkflow(workflow_run_id, {delivery_status: 'sent', subscribers_notified: X})
    WorkflowTracker->>+WorkflowRunsDB: UPDATE workflow_runs (status='completed', details)
    WorkflowRunsDB-->>-WorkflowTracker: ack
```
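
As an illustration of the webhook leg above, a simplified Route Handler might look like the following. The table and column names come from the diagram; the header name `x-playht-secret`, the env var names, and the exact Play.ht payload shape are assumptions.

```typescript
// Sketch: app/(api)/webhooks/playht/route.ts (simplified; assumptions noted inline).
import { NextRequest, NextResponse } from "next/server";
import { createClient } from "@supabase/supabase-js";

export async function POST(request: NextRequest) {
  // Shared-secret check (header and env var names are assumed).
  if (request.headers.get("x-playht-secret") !== process.env.PLAYHT_WEBHOOK_SECRET) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }

  // Payload fields follow the diagram; the real Play.ht schema may differ.
  const { id, status, audioUrl } = await request.json();

  // Service-role client: backend-only, per the security guidelines.
  const supabaseUrl = process.env.SUPABASE_URL;
  const serviceRoleKey = process.env.SUPABASE_SERVICE_ROLE_KEY;
  if (!supabaseUrl || !serviceRoleKey) {
    return NextResponse.json({ error: "Internal server error" }, { status: 500 });
  }
  const supabase = createClient(supabaseUrl, serviceRoleKey);

  const { error } = await supabase
    .from("newsletters")
    .update({ podcast_url: audioUrl, podcast_status: status })
    .eq("podcast_playht_job_id", id);

  if (error) {
    // Detailed error stays server-side; client gets a generic message.
    return NextResponse.json({ error: "Internal server error" }, { status: 500 });
  }
  return NextResponse.json({ ok: true }, { status: 200 });
}
```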
@@ -0,0 +1,37 @@
# Definitive Tech Stack Selections

> This document is a granulated shard from the main "3-architecture.md" focusing on "Definitive Tech Stack Selections".

This section outlines the definitive technology choices for the BMad DiCaster project. These selections are the single source of truth for all technology choices. "Latest" implies the latest stable version available at the time of project setup (2025-05-13); the specific version chosen should be pinned in `package.json` and this document updated accordingly.

- **Preferred Starter Template Frontend & Backend:** Vercel/Supabase Next.js App Router Template ([https://vercel.com/templates/next.js/supabase](https://vercel.com/templates/next.js/supabase))

| Category | Technology | Version / Details | Description / Purpose | Justification (Optional, from PRD/User) |
| :--- | :--- | :--- | :--- | :--- |
| **Languages** | TypeScript | `5.7.2` | Primary language for backend/frontend | Strong typing, community support, aligns with Next.js/React |
| **Runtime** | Node.js | `22.10.2` | Server-side execution environment for Next.js & Supabase Functions | Compatible with Next.js, Vercel environment |
| **Frameworks** | Next.js | `latest` (e.g., 14.2.3 at time of writing) | Full-stack React framework | App Router, SSR, API routes, Vercel synergy |
| | React | `19.0.0` | Frontend UI library | Component-based, declarative |
| **UI Libraries** | Tailwind CSS | `3.4.17` | Utility-first CSS framework | Rapid UI development, consistent styling |
| | Shadcn UI | `latest` (CLI based) | React component library (via CLI) | Pre-styled, accessible components, built on Radix & Tailwind |
| **Databases** | PostgreSQL | (via Supabase) | Primary relational data store | Provided by Supabase, robust, scalable |
| **Cloud Platform** | Vercel | N/A | Hosting platform for Next.js app & Supabase Functions | Seamless Next.js/Supabase deployment, Edge Network |
| **Cloud Services** | Supabase Functions | N/A (via Vercel deploy) | Serverless compute for backend pipeline & APIs | Integrated with Supabase DB, event-driven capabilities |
| | Supabase Auth | N/A | User authentication and management | Integrated with Supabase, RLS |
| | Supabase Storage | N/A | File storage (e.g., for temporary newsletter files if needed for Play.ht) | Integrated with Supabase |
| **Infrastructure** | Supabase CLI | `latest` | Local development, migrations, function deployment | Official tool for Supabase development |
| | Docker | `latest` (via Supabase CLI) | Containerization for local Supabase services | Local development consistency |
| **State Management** | Zustand | `latest` | Frontend state management | Simple, unopinionated, performant for React |
| **Testing** | React Testing Library (RTL) | `latest` | Testing React components | User-centric testing, works well with Jest |
| | Jest | `latest` | Unit/Integration testing framework for JS/TS | Widely used, good support for Next.js/React |
| | Playwright | `latest` | End-to-end testing framework | Modern, reliable, cross-browser |
| **CI/CD** | GitHub Actions | N/A | Continuous Integration/Deployment | Integration with GitHub, automation of build/deploy/test |
| **Other Tools** | Cheerio | `latest` | HTML parsing/scraping for articles | Server-side HTML manipulation |
| | Nodemailer | `latest` | Email sending library for newsletters | Robust email sending from Node.js |
| | Zod | `latest` | TypeScript-first schema declaration and validation | Data validation for API inputs, environment variables etc. |
| | `tsx` / `ts-node` | `latest` (for scripts) | TypeScript execution for Node.js scripts (e.g. `scripts/`) | Running TS scripts directly |
| | Prettier | `3.3.3` | Code formatter | Consistent code style |
| | ESLint | `latest` | Linter for TypeScript/JavaScript | Code quality and error prevention |
| | Pino | `latest` | High-performance JSON logger for Node.js | Structured and efficient logging |
@@ -0,0 +1,345 @@
Okay, I will generate all stories for Epic 2 in a single output, performing the necessary analysis of the provided documents to ensure each story is detailed and actionable.

Operating in **Story Creator Phase**.

Here are all the stories for Epic 2:

---

# Story 2.1: Identify URLs within Hacker News posts for article extraction

## Status: Draft

## Story

- As a system
- I want to identify URLs within the top 30 (configurable via environment variable) Hacker News posts
- so that I can extract the content of linked articles.

## Acceptance Criteria (ACs)

1. The system parses the top N (configurable via env var `HN_POST_LIMIT_FOR_SCRAPING`, defaulting to 10 as per PRD Functional Req `[8-prd-po-updated.txt#HN Content Retrieval & Storage]`) Hacker News posts (retrieved in Epic 1) to identify URLs from the `url` field of `hn_posts` table entries associated with the current `workflow_run_id`.
2. The system filters out any URLs that are not relevant to article scraping (e.g., links to `news.ycombinator.com` itself, known non-article domains if a list is maintained, or links that are empty/null).

## Tasks / Subtasks

- [ ] Task 1: Develop URL identification logic. (AC: 1)
  - [ ] Within the `ArticleScrapingService` (Supabase Function), add logic to fetch `hn_posts` records relevant to the current `workflow_run_id`.
  - [ ] Retrieve the `url` field from these records.
  - [ ] Implement configuration to limit processing to N posts (e.g., using an environment variable `HN_POST_LIMIT_FOR_SCRAPING`, defaulting to 10). The PRD mentions "up to 10 linked articles per day" (`[8-prd-po-updated.txt#Functional Requirements (MVP)]`). This might mean the top 10 posts with valid URLs from the fetched 30.
- [ ] Task 2: Implement URL filtering. (AC: 2)
  - [ ] Create a filtering mechanism to exclude irrelevant URLs.
  - [ ] Initial filters should exclude:
    - Null or empty URLs.
    - URLs pointing to `news.ycombinator.com` (item or user links).
    - (Optional, for future enhancement) URLs matching a configurable blocklist of domains (e.g., image hosts, video platforms if not desired).
  - [ ] Log any URLs that are filtered out and the reason.
- [ ] Task 3: Prepare URLs for the Scraping Task.
  - [ ] For each valid and filtered URL, create a corresponding 'pending' entry in the `scraped_articles` table (this might be done here or as the first step in Story 2.2, just before actual scraping). This is important for tracking.

## Dev Technical Guidance

- **Service Context:** This logic will be part of the `ArticleScrapingService` Supabase Function, which is triggered by the database event from `hn_posts` insertion (Story 1.9). The service will receive `hn_post_id`, `workflow_run_id`, and `article_url` (the URL from the `hn_posts` table). This story's tasks refine how the service _validates_ and _prepares_ this URL before actual scraping.
- **Configuration:**
  - Environment Variable: `HN_POST_LIMIT_FOR_SCRAPING` (default to 10). This dictates how many of the HN posts (those with URLs) from the current `workflow_run_id` will have their articles attempted for scraping.
  - The PRD `[8-prd-po-updated.txt#HN Content Retrieval & Storage]` says "Scraping and storage of up to 10 linked articles per day." This implies a selection or prioritization if more than 10 valid article URLs are found among the top 30 HN posts. The service might process the first 10 valid URLs it encounters based on post ranking or fetch order.
- **URL Filtering Logic:**
  - Basic validation: check that the URL is non-empty and has a valid HTTP/HTTPS structure.
  - Domain checking: Use the `URL` object in JavaScript/TypeScript to parse and inspect hostnames.
  - Example filter: `if (!url || new URL(url).hostname === 'news.ycombinator.com') return 'filtered_out_internal_link';` (expanded into a fuller sketch below).
- **Input:** The `ArticleScrapingService` will receive `hn_post_id` and its associated `article_url` from the trigger (Story 1.9). This story focuses on the service deciding _if_ it should proceed with _this specific_ `article_url` based on overall limits and URL validity.
- **Logging:** Use Pino. Log the `workflow_run_id`, `hn_post_id`, the URL being processed, and the outcome of identification/filtering.
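
Expanding the example filter above into a fuller sketch (the function name is illustrative; a robust check also handles malformed URLs, which make the `URL` constructor throw):

```typescript
// Sketch of the URL identification/filtering step (function name is illustrative).
export function isScrapableArticleUrl(url: string | null | undefined): boolean {
  if (!url) return false; // filter null/empty URLs
  try {
    const parsed = new URL(url);
    // Only plain HTTP/HTTPS links are scrapeable.
    if (parsed.protocol !== "http:" && parsed.protocol !== "https:") return false;
    // Filter internal HN links (item or user pages).
    if (parsed.hostname === "news.ycombinator.com") return false;
    return true;
  } catch {
    return false; // malformed URL
  }
}
```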
## Story Progress Notes
|
||||
|
||||
### Agent Model Used: `<Agent Model Name/Version>`
|
||||
|
||||
### Completion Notes List
|
||||
|
||||
{Any notes about implementation choices, difficulties, or follow-up needed}
|
||||
|
||||
### Change Log
|
||||
|
||||
---
|
||||
|
||||
# Story 2.2: Scrape content of identified article URLs using Cheerio

## Status: Draft

## Story

- As a system
- I want to scrape the content of the identified article URLs using Cheerio
- so that I can provide summaries in the newsletter.

## Acceptance Criteria (ACs)

1. The system scrapes the content from the identified article URLs using Cheerio.
2. The system extracts relevant content such as the article title, author, publication date, and main text.
3. The system handles potential issues during scraping, such as website errors or changes in website structure, logging errors for review.

## Tasks / Subtasks

- [ ] Task 1: Set up `ArticleScrapingService` Supabase Function.
  - [ ] Create the Supabase Function `article-scraper-service` in `supabase/functions/article-scraper-service/index.ts`.
  - [ ] This function is triggered by the event from Story 1.9 (new `hn_post` insert). It receives `hn_post_id`, `workflow_run_id`, and `original_url`.
  - [ ] Initialize the Pino logger and Supabase admin client.
- [ ] Task 2: Implement Article Content Fetching. (AC: 1)
  - [ ] For the given `original_url`, make an HTTP GET request to fetch the HTML content of the article. Use a robust HTTP client (e.g., `node-fetch` or `axios`).
  - [ ] Implement basic error handling for the fetch (e.g., timeouts, non-2xx responses).
- [ ] Task 3: Implement Content Extraction using Cheerio. (AC: 1, 2)
  - [ ] Load the fetched HTML content into Cheerio.
  - [ ] Implement logic to extract:
    - Article Title (e.g., from the `<title>` tag, `<h1>` tags, or OpenGraph meta tags like `og:title`).
    - Author (e.g., from meta tags like `author`, `article:author`, or common HTML patterns).
    - Publication Date (e.g., from meta tags like `article:published_time`, `datePublished`, or common HTML patterns; attempt to parse into ISO format).
    - Main Text Content (the most complex part: attempt to identify the main article body, stripping away boilerplate like navs, footers, and ads; look for common patterns like `<article>` tags or `div`s with classes such as `content` or `post-body`; paragraphs (`<p>`) within these main containers are the primary targets).
  - [ ] Store the `resolved_url` if the fetch involved redirects.
- [ ] Task 4: Implement Scraping Error Handling. (AC: 3)
  - [ ] If fetching fails (network error, 4xx/5xx status), record `scraping_status = 'failed_unreachable'` or similar, and log the error.
  - [ ] If HTML parsing or content extraction fails significantly, record `scraping_status = 'failed_parsing'`, and log the error.
  - [ ] Consider a generic `failed_generic` status for other errors.
  - [ ] The PRD mentions `failed_paywall` (`[3-architecture.txt#ScrapedArticle]`). Implement basic detection if possible (e.g., looking for keywords like "subscribe to read" in a limited part of the body if the main content is very short); otherwise, this may be a manual classification or future enhancement.
- [ ] Task 5: Update `scraped_articles` Table (Initial Entry).
  - [ ] Before attempting to scrape, the `ArticleScrapingService` should create or update an entry in `scraped_articles` for the given `hn_post_id` and `workflow_run_id`, setting `original_url` and `scraping_status = 'pending'`. The `id` of this new row serves as the `scraped_article_id` for subsequent updates.
  - [ ] (This task might overlap with Story 2.1 Task 3 or Story 2.3 Task 1; ensure it's done once, logically.)

## Dev Technical Guidance

- **Service:** `article-scraper-service` Supabase Function.
- **Technology:**
  - HTTP Client: `node-fetch` or `axios` in Node.js environments; if the function runtime is Deno-based, the global `fetch` is available without an extra dependency.
  - HTML Parsing: Cheerio (`[3-architecture.txt#Definitive Tech Stack Selections]`).
- **Content Extraction Strategy (Cheerio):** (a sketch follows this list)
  - This is heuristic-based and can be fragile. Start with common patterns.
  - **Title:** `$('title').text()`, `$('meta[property="og:title"]').attr('content')`, `$('h1').first().text()`.
  - **Author:** `$('meta[name="author"]').attr('content')`, `$('meta[property="article:author"]').attr('content')`.
  - **Date:** `$('meta[property="article:published_time"]').attr('content')`, `$('time').attr('datetime')`. Use a library like `date-fns` to parse various date formats into a consistent ISO string.
  - **Main Text:** This is the hardest part. Libraries like `@mozilla/readability` can be used in conjunction with, or as an alternative to, custom Cheerio selectors for extracting the main article content, as they are specifically designed for this. If using only Cheerio, look for large blocks of text within `<p>` tags, often nested under `<article>` or common `div` classes. Remove script/style tags.
- **Data to Extract:** `title`, `author`, `publication_date`, `main_text_content`, `resolved_url` (if different from the original).
- **Error Logging:** Log `workflow_run_id`, `hn_post_id`, `original_url`, and specific error messages from Cheerio or fetch.
- **Workflow Interaction:**
  - The service is triggered by Story 1.9.
  - It updates the `workflow_runs` table via `WorkflowTrackerService` (e.g., `incrementWorkflowDetailCounter(jobId, 'articles_attempted_scraping')`) before attempting the scrape for an article.
  - The success/failure status for _this specific article_ is recorded in the `scraped_articles` table (Story 2.3).
  - The _overall_ status of the scraping stage for the `workflow_run_id` (e.g., moving from 'scraping_articles' to 'summarizing_content') is managed by `CheckWorkflowCompletionService` (Story 1.6) once all triggered scraping tasks for that run are no longer 'pending'.
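
A sketch of the extraction heuristics above using Cheerio; the selector fallbacks and the `extractArticleFields` helper are assumptions, not a prescribed implementation:

```typescript
import * as cheerio from 'cheerio';

// Return the first candidate that is a non-empty string after trimming.
const firstNonEmpty = (...values: Array<string | undefined>) =>
  values.find((v) => v && v.trim().length > 0)?.trim() ?? null;

export function extractArticleFields(html: string) {
  const $ = cheerio.load(html);
  $('script, style').remove(); // strip non-content tags before text extraction

  const title = firstNonEmpty(
    $('meta[property="og:title"]').attr('content'),
    $('title').text(),
    $('h1').first().text()
  );

  const author = firstNonEmpty(
    $('meta[name="author"]').attr('content'),
    $('meta[property="article:author"]').attr('content')
  );

  const publicationDate = firstNonEmpty(
    $('meta[property="article:published_time"]').attr('content'),
    $('time').attr('datetime')
  );

  // Naive main-text heuristic: prefer <article>, fall back to the whole body.
  // @mozilla/readability is a stronger alternative for this step.
  const container = $('article').length ? $('article') : $('body');
  const mainTextContent = container
    .find('p')
    .map((_, el) => $(el).text().trim())
    .get()
    .filter((t) => t.length > 0)
    .join('\n\n');

  return { title, author, publicationDate, mainTextContent };
}
```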
## Story Progress Notes

### Agent Model Used: `<Agent Model Name/Version>`

### Completion Notes List

{Any notes about implementation choices, difficulties, or follow-up needed}

### Change Log

---

# Story 2.3: Store scraped article content in Supabase

## Status: Draft

## Story

- As a system
- I want to store the scraped article content in the Supabase database, associated with the corresponding Hacker News post and workflow run
- so that it can be used for summarization and newsletter generation.

## Acceptance Criteria (ACs)

1. Scraped article content is stored in the `scraped_articles` table, linked to the `hn_post_id` and the current `workflow_run_id`.
2. The system ensures that the stored data includes all extracted information (title, author, date, text, resolved URL).
3. The `scraping_status` and any `error_message` are recorded in the `scraped_articles` table.
4. Upon completion of scraping an article (success or failure), the service updates the `workflow_runs.details` (e.g., incrementing scraped counts) via `WorkflowTrackerService`.
5. A Supabase migration for the `scraped_articles` table (as defined in `architecture.txt`) is created and applied before data operations.

## Tasks / Subtasks

- [ ] Task 1: Create `scraped_articles` Table Migration. (AC: 5)
  - [ ] Create a Supabase migration file in `supabase/migrations/`.
  - [ ] Define the SQL for the `scraped_articles` table as specified in `[3-architecture.txt#scraped_articles]`, including columns: `id`, `hn_post_id`, `original_url`, `resolved_url`, `title`, `author`, `publication_date`, `main_text_content`, `scraped_at`, `scraping_status`, `error_message`, `workflow_run_id`.
  - [ ] Include the unique index and comments as specified.
  - [ ] Apply the migration.
- [ ] Task 2: Implement Data Storage Logic in `ArticleScrapingService`. (AC: 1, 2, 3)
  - [ ] After scraping (Story 2.2), or if scraping failed, the `ArticleScrapingService` will update the existing 'pending' record in `scraped_articles` (identified by `hn_post_id` and `workflow_run_id`, or by the `scraped_article_id` if created earlier).
  - [ ] Populate `title`, `author`, `publication_date` (parsed to TIMESTAMPTZ), `main_text_content`, `resolved_url`.
  - [ ] Set `scraped_at = now()`.
  - [ ] Set `scraping_status` to 'success', 'failed_unreachable', 'failed_paywall', 'failed_parsing', or 'failed_generic'.
  - [ ] Populate `error_message` if scraping failed.
  - [ ] Ensure `hn_post_id` and `workflow_run_id` are correctly associated.
- [ ] Task 3: Update `WorkflowTrackerService`. (AC: 4)
  - [ ] After attempting to scrape and updating `scraped_articles`, the `ArticleScrapingService` should call `WorkflowTrackerService`.
  - [ ] Example calls:
    - `WorkflowTrackerService.incrementWorkflowDetailCounter(workflow_run_id, 'articles_scraping_attempted', 1)`
    - If successful: `WorkflowTrackerService.incrementWorkflowDetailCounter(workflow_run_id, 'articles_scraped_successfully', 1)`
    - If failed: `WorkflowTrackerService.incrementWorkflowDetailCounter(workflow_run_id, 'articles_scraping_failed', 1)`
  - [ ] Log these updates.
- [ ] Task 4: Ensure `ArticleScrapingService` creates the initial 'pending' record if not already handled.
  - [ ] As the very first step when `ArticleScrapingService` is invoked for an `hn_post_id` and `workflow_run_id`, it must ensure an entry exists in `scraped_articles` with `scraping_status = 'pending'`. This can be an `INSERT ... ON CONFLICT DO NOTHING` or an explicit check. This record's `id` is the `scraped_article_id`. This prevents issues if the trigger fires multiple times or if other logic expects this row.

## Dev Technical Guidance

- **Service:** `ArticleScrapingService` Supabase Function.
- **Database Table:** `scraped_articles`. The schema definition from `[3-architecture.txt#scraped_articles]` is the source of truth.
  - `scraping_status` enum values: 'pending', 'success', 'failed_unreachable', 'failed_paywall', 'failed_parsing', 'failed_generic'.
- **Data Flow:**
  1. `ArticleScrapingService` is triggered (Story 1.9) with `hn_post_id`, `workflow_run_id`, `original_url`.
  2. (Task 4 of this story / Story 2.2 Task 5): Service ensures/creates a `scraped_articles` row for this task, status 'pending'. Gets `scraped_article_id`.
  3. Service attempts scraping (Story 2.1, Story 2.2).
  4. (Task 2 of this story): Service updates the `scraped_articles` row with results (content, status, error message).
  5. (Task 3 of this story): Service updates `workflow_runs.details` via `WorkflowTrackerService`.
- **Supabase Client:** Use the Supabase admin client for `INSERT` and `UPDATE` operations on `scraped_articles`.
- **Error Handling:** If database operations fail, the `ArticleScrapingService` should log this critically. The overall workflow's `error_message` might need an update via `WorkflowTrackerService.failWorkflow()` if a DB error in scraping is deemed critical for the whole run.
- **Unique Constraint:** The `idx_scraped_articles_hn_post_id_workflow_run_id` unique index in `[3-architecture.txt#scraped_articles]` ensures that for a given workflow run, an HN post is processed only once by the scraping service. The initial insert (Task 4) should handle potential conflicts gracefully (e.g., `ON CONFLICT DO UPDATE` to reset the status to 'pending' if it was somehow different, or `ON CONFLICT DO NOTHING` if an identical pending record already exists); see the sketch after this list.
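
A sketch of the 'pending' upsert and result update described above, using the Supabase admin client; the environment variable and helper names are assumptions (a Deno runtime would read them via `Deno.env.get`):

```typescript
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

// Ensure the 'pending' row exists; relies on the unique index on
// (hn_post_id, workflow_run_id). The upsert merges on conflict so a row
// (and its id) is always returned.
export async function ensurePendingRecord(
  hnPostId: string,
  workflowRunId: string,
  originalUrl: string
): Promise<string> {
  const { data, error } = await supabase
    .from('scraped_articles')
    .upsert(
      {
        hn_post_id: hnPostId,
        workflow_run_id: workflowRunId,
        original_url: originalUrl,
        scraping_status: 'pending',
      },
      { onConflict: 'hn_post_id,workflow_run_id' }
    )
    .select('id')
    .single();
  if (error) throw error;
  return data.id; // scraped_article_id for subsequent updates
}

// Record the outcome once scraping finishes (success or failure).
export async function recordScrapeResult(
  scrapedArticleId: string,
  status: 'success' | 'failed_unreachable' | 'failed_paywall' | 'failed_parsing' | 'failed_generic',
  fields: Record<string, unknown> = {},
  errorMessage?: string
): Promise<void> {
  const { error } = await supabase
    .from('scraped_articles')
    .update({
      ...fields,
      scraping_status: status,
      error_message: errorMessage ?? null,
      scraped_at: new Date().toISOString(),
    })
    .eq('id', scrapedArticleId);
  if (error) throw error;
}
```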
## Story Progress Notes

### Agent Model Used: `<Agent Model Name/Version>`

### Completion Notes List

{Any notes about implementation choices, difficulties, or follow-up needed}

### Change Log

---

# Story 2.4: Trigger article scraping process via API and CLI

## Status: Draft

## Story

- As a developer
- I want to trigger the article scraping process via the API and CLI
- so that I can manually initiate it for testing and debugging.

## Acceptance Criteria (ACs)

1. The API endpoint can trigger the article scraping process.
2. The CLI command can trigger the article scraping process locally.
3. The system logs the start and completion of the scraping process, including any errors encountered.
4. All API requests and CLI command executions are logged, including timestamps and any relevant data.
5. The system handles partial execution gracefully (i.e., if triggered before Epic 1 components like `WorkflowTrackerService` are available, it logs a message and exits).
6. If retained for isolated testing, all scraping operations initiated via this trigger must be associated with a valid `workflow_run_id` and update the `workflow_runs` table accordingly via `WorkflowTrackerService`.

**(Self-correction/Architect's Note from PRD `[8-prd-po-updated.txt#Story 2.4]`):** "This story might become redundant if the main workflow trigger (Story 1.3) handles the entire pipeline initiation and individual service testing is done via direct function invocation or unit/integration tests."

**Decision for this story:** Proceed with the understanding that this provides a way to trigger scraping for a _specific, existing_ `workflow_run_id`, and potentially for a _specific_ `hn_post_id` within that run, rather than initiating a full new workflow. This makes it distinct from Story 1.3 and useful for targeted testing/re-processing of a single article. If the main workflow trigger (1.3) is the _only_ intended way to start scraping, then this story could be skipped, or its scope significantly reduced to just documenting how to test `ArticleScrapingService` via unit/integration tests. Assuming the former (targeted trigger) for now.

## Tasks / Subtasks

- [ ] Task 1: Design API endpoint for targeted scraping. (AC: 1)
  - [ ] Define a new Next.js API Route, e.g., `POST /api/system/trigger-scraping`.
  - [ ] The request body should accept `workflow_run_id` and `hn_post_id` (or `article_url` if more direct).
  - [ ] Secure with an API key (same as Story 1.3).
- [ ] Task 2: Implement API endpoint logic. (AC: 1, 3, 4, 6)
  - [ ] Authenticate the request.
  - [ ] Validate inputs (`workflow_run_id`, `hn_post_id`).
  - [ ] Log initiation with Pino, including parameters.
  - [ ] Directly invoke `ArticleScrapingService` with the provided parameters. This might involve making an HTTP call to the service's endpoint if it's designed as a callable function, or importing and calling its handler directly (if co-located or packaged appropriately for internal calls).
  - [ ] `ArticleScrapingService` should already handle `WorkflowTrackerService` updates for the specific article. This endpoint mainly orchestrates the call.
  - [ ] Return a response indicating success/failure of _triggering_ the scrape.
- [ ] Task 3: Implement CLI command for targeted scraping. (AC: 2, 3, 4, 6)
  - [ ] Create a new script `scripts/trigger-article-scrape.ts`.
  - [ ] Accept `workflow_run_id` and `hn_post_id` as command-line arguments.
  - [ ] The script calls the new API endpoint from Task 1 or directly invokes the `ArticleScrapingService` logic.
  - [ ] Log initiation and outcome to the console.
  - [ ] Add to `package.json` scripts.
- [ ] Task 4: Handle graceful partial execution. (AC: 5)
  - [ ] Ensure that if `WorkflowTrackerService` or other critical Epic 1 components are not available (e.g., during early development phases), the API/CLI logs a clear error and exits without crashing. This is more of a general robustness measure.

## Dev Technical Guidance

- **Purpose of this Trigger:** Unlike Story 1.3 (which starts a _new_ full workflow), this trigger is for re-scraping a specific article within an _existing_ workflow, or for testing the `ArticleScrapingService` in isolation with specific inputs.
- **API Endpoint:**
  - `POST /api/system/trigger-scraping`
  - Request Body: `{ "workflow_run_id": "uuid", "hn_post_id": "string" }` (or alternatively, the direct `article_url` if the `hn_post_id` lookup is an extra step).
  - Authentication: Use `WORKFLOW_TRIGGER_API_KEY` in the `X-API-KEY` header.
- **CLI Command:**
  - Example: `npm run trigger-scrape -- --workflowId <uuid> --postId <string>`
  - Use a library like `yargs` for parsing command-line arguments if it becomes complex.
- **Invoking `ArticleScrapingService`:** (a sketch of the API route follows this list)
  - If `ArticleScrapingService` is an HTTP-triggered Supabase Function, the API/CLI will make an HTTP request to its endpoint. This is cleaner for decoupling.
  - The payload to `ArticleScrapingService` should be what it expects (e.g., `{ hn_post_id, workflow_run_id, article_url }`).
- **Logging:** Essential for tracking manual triggers. Log all input parameters and the outcome of the trigger. `ArticleScrapingService` itself will log its detailed scraping activities.
- **Redundancy Check:** Re-evaluate whether this story is truly needed if unit/integration tests for `ArticleScrapingService` are comprehensive and the main workflow trigger (Story 1.3) is sufficient for end-to-end testing. If kept, its specific purpose (targeted re-processing/testing) should be clear.
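
A sketch of what the API route from Tasks 1-2 could look like; the `ARTICLE_SCRAPER_FN_URL` variable and response shape are assumptions:

```typescript
// app/(api)/system/trigger-scraping/route.ts (sketch)
import { NextRequest, NextResponse } from 'next/server';

export async function POST(req: NextRequest) {
  // Authenticate with the same API key as Story 1.3.
  if (req.headers.get('x-api-key') !== process.env.WORKFLOW_TRIGGER_API_KEY) {
    return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
  }

  const { workflow_run_id, hn_post_id, article_url } = await req.json();
  if (!workflow_run_id || !hn_post_id) {
    return NextResponse.json(
      { error: 'workflow_run_id and hn_post_id are required' },
      { status: 400 }
    );
  }

  // Forward to the HTTP-triggered ArticleScrapingService (decoupled invocation).
  const res = await fetch(process.env.ARTICLE_SCRAPER_FN_URL!, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ workflow_run_id, hn_post_id, article_url }),
  });

  // The response reflects success/failure of *triggering* the scrape only.
  return NextResponse.json({ triggered: res.ok }, { status: res.ok ? 202 : 502 });
}
```

The CLI script in Task 3 can simply POST to this route with the same headers and body.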
## Story Progress Notes

### Agent Model Used: `<Agent Model Name/Version>`

### Completion Notes List

{Any notes about implementation choices, difficulties, or follow-up needed. Specifically, confirm if this targeted trigger is required or if testing will be handled by other means.}

### Change Log

---

# Story 2.5: Implement Database Event/Webhook: `scraped_articles` Success to Summarization Service

## Status: Draft

## Story

- As a system
- I want the successful scraping and storage of an article in `scraped_articles` to automatically trigger the `SummarizationService`
- so that content summarization can begin as soon as an article's text is available.

## Acceptance Criteria (ACs)

1. A Supabase database trigger or webhook mechanism is implemented on the `scraped_articles` table (e.g., on INSERT or UPDATE where `scraping_status` is 'success').
2. The trigger successfully invokes the `SummarizationService` (Supabase Function).
3. The invocation passes necessary parameters like `scraped_article_id` and `workflow_run_id` to the `SummarizationService`.
4. The mechanism is robust and includes error handling/logging for the trigger/webhook itself.
5. Unit/integration tests are created to verify the trigger fires correctly and the service is invoked with correct parameters.

## Tasks / Subtasks

- [ ] Task 1: Design Trigger Mechanism for `scraped_articles`.
  - [ ] Similar to Story 1.9, decide on a PostgreSQL trigger + `pg_net` vs. Supabase Function Hooks on database events, if available and suitable.
  - [ ] The trigger must fire only once, when an article first becomes successfully scraped. Note that PostgreSQL does not allow an `INSERT` trigger's `WHEN` clause to reference `OLD`, so a single `AFTER INSERT OR UPDATE` trigger cannot use a condition like `OLD.scraping_status IS DISTINCT FROM 'success'`. Use two triggers instead: `AFTER INSERT ... WHEN (NEW.scraping_status = 'success')` and `AFTER UPDATE ... WHEN (NEW.scraping_status = 'success' AND OLD.scraping_status IS DISTINCT FROM 'success')`, or move the check into the trigger function.
- [ ] Task 2: Implement Database Trigger and PL/pgSQL Function (if `pg_net` chosen). (AC: 1)
  - [ ] Create a migration file in `supabase/migrations/`.
  - [ ] Write SQL for the PL/pgSQL function. It will construct a payload (e.g., `{ "scraped_article_id": NEW.id, "workflow_run_id": NEW.workflow_run_id }`) and use `net.http_post` (provided by the `pg_net` extension) to call the `SummarizationService`'s invocation URL.
  - [ ] Write SQL to create the trigger(s) on `scraped_articles`.
  - [ ] The `SummarizationService` (from Epic 3) needs a known invocation URL.
- [ ] Task 3: Configure `SummarizationService` for Invocation. (AC: 2, 3)
  - [ ] Ensure `SummarizationService` (to be developed in Epic 3) is designed to accept `scraped_article_id` and `workflow_run_id` via its request body (if HTTP triggered).
  - [ ] Implement security for this invocation URL (e.g., a shared internal secret token).
- [ ] Task 4: Implement Error Handling and Logging for this Trigger. (AC: 4)
  - [ ] The PL/pgSQL function should log errors from `pg_net` calls (e.g., to `stderr`).
- [ ] Task 5: Create Tests. (AC: 5)
  - [ ] **Integration Test:**
    - Set up the trigger.
    - Insert/Update a row in `scraped_articles` to meet trigger conditions (`scraping_status = 'success'`).
    - Verify that a (mocked) `SummarizationService` endpoint receives an invocation with the correct `scraped_article_id` and `workflow_run_id`.

## Dev Technical Guidance

- **Trigger Condition:** Crucially, the trigger should only fire when an article is _newly_ marked as successfully scraped, to avoid re-triggering summarization unnecessarily. Because `OLD` cannot be referenced in an `INSERT` trigger's `WHEN` clause, implement this as two triggers (or check `TG_OP` inside the function): for inserts, `WHEN (NEW.scraping_status = 'success')`; for updates, `WHEN (NEW.scraping_status = 'success' AND OLD.scraping_status IS DISTINCT FROM 'success')`. Together these cover both new inserts and updates.
- **`pg_net` or Function Hooks:** Same considerations as Story 1.9. If Supabase Function Hooks on DB events are a simpler alternative to `pg_net` for invoking Vercel-hosted Supabase Functions, that path is preferable.
- Payload to `SummarizationService`:

  ```json
  {
    "scraped_article_id": "UUID of the scraped article",
    "workflow_run_id": "UUID of the current workflow"
  }
  ```

- **Security:** The invocation URL for `SummarizationService` should be protected.
- **Error Handling:** Similar to Story 1.9, errors in the trigger/`pg_net` call should be logged but ideally not cause the update to `scraped_articles` to fail. The `CheckWorkflowCompletionService` can serve as a backup to find successfully scraped articles that somehow didn't trigger summarization.
- **Target Service:** The `SummarizationService` will be defined in Epic 3. For testing this story, its endpoint can be a mock that just logs received payloads (a sketch follows).
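
For the integration test in Task 5, the mocked `SummarizationService` endpoint can be as small as this sketch (the server name and port handling are up to the test):

```typescript
import { createServer } from 'node:http';

// Payloads captured for assertions in the integration test.
export const received: Array<{ scraped_article_id: string; workflow_run_id: string }> = [];

export const mockSummarizationService = createServer((req, res) => {
  let body = '';
  req.on('data', (chunk) => (body += chunk));
  req.on('end', () => {
    received.push(JSON.parse(body));
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ ok: true }));
  });
});

// In the test: mockSummarizationService.listen(0), point the trigger's
// invocation URL at the assigned port, then assert on `received`.
```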
## Story Progress Notes

### Agent Model Used: `<Agent Model Name/Version>`

### Completion Notes List

{Any notes about implementation choices, difficulties, or follow-up needed}

### Change Log

---

@@ -0,0 +1,379 @@

# Epic 3: AI-Powered Content Summarization Stories

## Story 3.1: Implement LLMFacade for AI Summarization

### Status: Approved

### Story

- As a system
- I want to integrate an AI summarization capability by implementing and using an `LLMFacade`
- so that I can generate concise summaries of articles and comments using various configurable LLM providers.

### Acceptance Criteria (ACs)

1. An `LLMFacade` interface and concrete implementations (e.g., `OllamaAdapter`, `RemoteLLMApiAdapter`) are created in `supabase/functions/_shared/llm-facade.ts`.
2. A factory function is implemented within or alongside the facade to select the appropriate LLM adapter based on environment variables (e.g., `LLM_PROVIDER_TYPE`, `OLLAMA_API_URL`, `REMOTE_LLM_API_KEY`, `REMOTE_LLM_API_URL`, `LLM_MODEL_NAME`).
3. The `LLMFacade` handles making requests to the respective LLM APIs (as configured) and parsing their responses to extract the summary.
4. Robust error handling and retry logic for transient API errors are implemented within the facade.
5. Unit tests for the `LLMFacade` and its adapters (mocking actual HTTP calls) achieve >80% coverage.
6. The system utilizes this `LLMFacade` for all summarization tasks (articles and comments).
7. The integration is configurable via environment variables to switch between local and remote LLMs and specify model names.

### Tasks / Subtasks

- [ ] Create the `LLMFacade` interface in `supabase/functions/_shared/llm-facade.ts` (AC: 1)
  - [ ] Define core interface methods like `summarize(content: string, instructions: string): Promise<string>`
  - [ ] Include retry count, timeout, and other configurable parameters
- [ ] Implement the `OllamaAdapter` class (AC: 1, 3, 4)
  - [ ] Create an HTTP client to communicate with the Ollama API
  - [ ] Implement retry logic for transient errors
  - [ ] Add appropriate logging and error handling
  - [ ] Implement response parsing to extract the summary text
- [ ] Implement the `RemoteLLMApiAdapter` class for commercial APIs (AC: 1, 3, 4)
  - [ ] Create an HTTP client to communicate with the remote API (e.g., OpenAI, Amazon Bedrock, etc.)
  - [ ] Implement retry logic for transient errors
  - [ ] Add appropriate logging and error handling
  - [ ] Implement response parsing to extract the summary text
- [ ] Create a factory function for instantiating the right adapter (AC: 2)
  - [ ] Implement logic to choose the adapter based on environment variables
  - [ ] Add validation for required environment variables
  - [ ] Create helpful error messages for missing/invalid configurations
- [ ] Add appropriate TypeScript interfaces and types (AC: 1-3)
- [ ] Add unit tests for the `LLMFacade` and adapters (AC: 5)
  - [ ] Test the factory function with different environment configurations
  - [ ] Test retry logic with mocked API responses
  - [ ] Test error handling with different error scenarios
  - [ ] Test response parsing with sample API responses
- [ ] Create documentation in code comments (AC: 6)
  - [ ] Document the facade pattern implementation
  - [ ] Document environment variable requirements
  - [ ] Include examples of usage for future services

### Dev Technical Guidance

The `LLMFacade` follows the facade design pattern to abstract away the complexity of interacting with different LLM providers. This is a critical component that will be used by both the article and comment summarization services.

#### Environment Variables

The facade should look for these environment variables:

- `LLM_PROVIDER_TYPE`: Values like "ollama" or "remote"
- `OLLAMA_API_URL`: URL for the Ollama API (e.g., "http://localhost:11434")
- `REMOTE_LLM_API_KEY`: API key for the remote LLM service
- `REMOTE_LLM_API_URL`: URL endpoint for the remote LLM service
- `LLM_MODEL_NAME`: Model name to use (e.g., "llama2" for Ollama, or specific model IDs for remote providers)

Refer to `environment-vars.md` for the exact format and required variables. A sketch of the interface, adapters, and factory follows.
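
This sketch assumes the interface shape and option names shown here; the Ollama call uses its `/api/generate` endpoint, while the remote adapter's payload and response fields are assumptions, since commercial APIs differ:

```typescript
// supabase/functions/_shared/llm-facade.ts (sketch)
export interface LLMFacadeOptions {
  maxRetries?: number; // default 3
  timeoutMs?: number;
}

export interface LLMFacade {
  summarize(content: string, instructions: string): Promise<string>;
}

class OllamaAdapter implements LLMFacade {
  constructor(private baseUrl: string, private model: string, private opts: LLMFacadeOptions) {}

  async summarize(content: string, instructions: string): Promise<string> {
    // Ollama's /api/generate returns { response: string } when stream is false.
    const res = await fetch(`${this.baseUrl}/api/generate`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        model: this.model,
        prompt: `${instructions}\n\n${content}`,
        stream: false,
      }),
    });
    if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
    const json = await res.json();
    return json.response;
  }
}

class RemoteLLMApiAdapter implements LLMFacade {
  constructor(
    private url: string,
    private apiKey: string,
    private model: string,
    private opts: LLMFacadeOptions
  ) {}

  async summarize(content: string, instructions: string): Promise<string> {
    // Request/response shapes differ per provider; this assumes a simple JSON contract.
    const res = await fetch(this.url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${this.apiKey}`,
      },
      body: JSON.stringify({ model: this.model, prompt: `${instructions}\n\n${content}` }),
    });
    if (!res.ok) throw new Error(`Remote LLM request failed: ${res.status}`);
    const json = await res.json();
    return json.summary ?? json.text; // assumed response fields
  }
}

export function createLLMFacade(
  env: Record<string, string | undefined>,
  opts: LLMFacadeOptions = {}
): LLMFacade {
  const model = env.LLM_MODEL_NAME;
  if (!model) throw new Error('LLM_MODEL_NAME is required');

  switch (env.LLM_PROVIDER_TYPE) {
    case 'ollama':
      if (!env.OLLAMA_API_URL) throw new Error('OLLAMA_API_URL is required for provider "ollama"');
      return new OllamaAdapter(env.OLLAMA_API_URL, model, opts);
    case 'remote':
      if (!env.REMOTE_LLM_API_URL || !env.REMOTE_LLM_API_KEY) {
        throw new Error('REMOTE_LLM_API_URL and REMOTE_LLM_API_KEY are required for provider "remote"');
      }
      return new RemoteLLMApiAdapter(env.REMOTE_LLM_API_URL, env.REMOTE_LLM_API_KEY, model, opts);
    default:
      throw new Error(`Unknown LLM_PROVIDER_TYPE: ${env.LLM_PROVIDER_TYPE ?? '(unset)'}`);
  }
}
```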
#### Error Handling

Implement robust error handling with categorization of errors:

- Network/connectivity issues (should trigger retries)
- Authentication failures (should fail immediately with a clear message)
- Rate limiting issues (should back off and retry)
- Malformed responses (should be logged in detail)

Use exponential backoff for retries with configurable maximum attempts (default: 3).
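
A minimal sketch of that retry policy; the `isTransient` predicate is supplied by the caller (e.g., matching network errors and 429 responses):

```typescript
export async function withRetries<T>(
  fn: () => Promise<T>,
  isTransient: (err: unknown) => boolean,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Non-transient errors (e.g., auth failures) fail immediately.
      if (!isTransient(err) || attempt === maxAttempts) throw err;
      const delay = baseDelayMs * 2 ** (attempt - 1); // 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastErr; // unreachable; satisfies the type checker
}
```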
#### Testing

Mock the HTTP calls to external APIs in tests. For adapter tests, use sample API responses from real LLM providers to ensure proper parsing. Test both successful and error scenarios.

### Story Progress Notes

#### Agent Model Used: `Claude 3.7 Sonnet`

#### Completion Notes List

#### Change Log

---

## Story 3.2: Implement Article Summarization Service

### Status: Approved

### Story

- As a system
- I want to retrieve summarization prompts from the database, and then use them via the `LLMFacade` to generate 2-paragraph summaries of the scraped articles
- so that users can quickly grasp the main content and the prompts can be easily updated.

### Acceptance Criteria (ACs)

1. The service retrieves the appropriate summarization prompt from the `summarization_prompts` table.
2. The system generates a 2-paragraph summary for each scraped article using the retrieved prompt via the `LLMFacade`.
3. Generated summaries are stored in the `article_summaries` table, linked to the `scraped_article_id` and the current `workflow_run_id`.
4. The summaries are accurate and capture the key information from the article.
5. Upon completion of each article summarization task, the service updates `workflow_runs.details` (e.g., incrementing article summaries generated counts) via `WorkflowTrackerService`.
6. (System Note: The `CheckWorkflowCompletionService` monitors the `article_summaries` table as part of determining overall summarization completion for a `workflow_run_id`.)
7. A Supabase migration for the `article_summaries` table (as defined in `architecture.txt`) is created and applied before data operations.

### Tasks / Subtasks

- [ ] Create a Supabase migration for the `article_summaries` table if it does not already exist (AC: 7)
  - [ ] Use the SQL schema from `data-models.md`
  - [ ] Include proper indexes and foreign key constraints
  - [ ] Add the migration to the appropriate migration folder
- [ ] Implement the article summarization service in `supabase/functions/summarization-service/index.ts` (AC: 1-6)
  - [ ] Implement a function to retrieve the default article summarization prompt from the database
  - [ ] Import and use the `LLMFacade` from `_shared/llm-facade.ts`
  - [ ] Add logic to get scraped articles for the current workflow run
  - [ ] Process each article sequentially to avoid rate limiting
- [ ] Implement error handling and status tracking (AC: 5)
  - [ ] Use the `WorkflowTrackerService` to update workflow status
  - [ ] Implement appropriate error handling for failed summarizations
  - [ ] Update workflow details with counts of successful/failed summarizations
- [ ] Add logging for monitoring and debugging (AC: 2-5)
  - [ ] Log the start and end of the summarization process
  - [ ] Log key metrics (articles processed, time taken, etc.)
  - [ ] Log errors with appropriate context
- [ ] Create unit tests for the service
  - [ ] Test prompt retrieval logic
  - [ ] Test article processing logic with a mocked `LLMFacade`
  - [ ] Test error handling scenarios

### Dev Technical Guidance

This service is triggered after article scraping is completed. It should batch-process articles but be mindful of potential rate limits from LLM providers. Consider processing articles with an appropriate delay between requests.

#### Database Interactions

The service needs to:

1. Retrieve the default article summarization prompt (where `is_default_article_prompt = true`)
2. Fetch all scraped articles for the current workflow run
3. Insert summaries into the `article_summaries` table
4. Update workflow status via the `WorkflowTrackerService`

The service should check if summaries already exist for the article+workflow combination to avoid duplicates.

#### Prompt Engineering

The service should combine the template prompt with the article text in a format that gives the LLM sufficient context but stays within token limits. A simple approach (sketched in code after the template):

```
{summarization_prompt_text}

ARTICLE TEXT:
{article_text}

SUMMARY:
```
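
A sketch of the prompt assembly and the duplicate check described above; the helper names are illustrative:

```typescript
import type { SupabaseClient } from '@supabase/supabase-js';

// Combine the stored prompt template with the article text (template above).
export function buildArticlePrompt(promptText: string, articleText: string): string {
  return `${promptText}\n\nARTICLE TEXT:\n${articleText}\n\nSUMMARY:\n`;
}

// Skip articles that already have a summary for this workflow run.
export async function alreadySummarized(
  supabase: SupabaseClient,
  scrapedArticleId: string,
  workflowRunId: string
): Promise<boolean> {
  const { count, error } = await supabase
    .from('article_summaries')
    .select('id', { count: 'exact', head: true })
    .eq('scraped_article_id', scrapedArticleId)
    .eq('workflow_run_id', workflowRunId);
  if (error) throw error;
  return (count ?? 0) > 0;
}
```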
#### WorkflowTracker Integration

Use the `WorkflowTrackerService` to:

1. Update the workflow status to "summarizing_articles" at the start
2. Increment the "article_summaries_generated" counter in the workflow details for each success
3. Update the workflow status appropriately on completion or failure

### Story Progress Notes

#### Agent Model Used: `Claude 3.7 Sonnet`

#### Completion Notes List

#### Change Log

---

## Story 3.3: Implement Comment Summarization Service

### Status: Approved

### Story

- As a system
- I want to retrieve summarization prompts from the database, and then use them via the `LLMFacade` to generate 2-paragraph summaries of the comments for the selected HN posts
- so that users can understand the main discussions and the prompts can be easily updated.

### Acceptance Criteria (ACs)

1. The service retrieves the appropriate summarization prompt from the `summarization_prompts` table.
2. The system generates a 2-paragraph summary of the comments for each selected HN post using the retrieved prompt via the `LLMFacade`.
3. Generated summaries are stored in the `comment_summaries` table, linked to the `hn_post_id` and the current `workflow_run_id`.
4. The summaries highlight interesting interactions and key points from the discussion.
5. Upon completion of each comment summarization task, the service updates `workflow_runs.details` (e.g., incrementing comment summaries generated counts) via `WorkflowTrackerService`.
6. (System Note: The `CheckWorkflowCompletionService` monitors the `comment_summaries` table as part of determining overall summarization completion for a `workflow_run_id`.)
7. A Supabase migration for the `comment_summaries` table (as defined in `architecture.txt`) is created and applied before data operations.

### Tasks / Subtasks

- [ ] Create a Supabase migration for the `comment_summaries` table if it does not already exist (AC: 7)
  - [ ] Use the SQL schema from `data-models.md`
  - [ ] Include proper indexes and foreign key constraints
  - [ ] Add the migration to the appropriate migration folder
- [ ] Implement the comment summarization service in the same `supabase/functions/summarization-service/index.ts` file as the article summarization (AC: 1-6)
  - [ ] Add a function to retrieve the default comment summarization prompt from the database
  - [ ] Add logic to fetch all comments for the selected HN posts in the current workflow run
  - [ ] Concatenate comments with appropriate context before sending to the LLM
- [ ] Group comments by HN post for summarization (AC: 2, 4)
  - [ ] Implement logic to combine comments for each post
  - [ ] Format grouped comments in a way that preserves threading information
  - [ ] Handle potentially large comment volumes (pagination/chunking)
- [ ] Use the `LLMFacade` to generate summaries (AC: 2, 4)
  - [ ] Pass comments along with the prompt to the facade
  - [ ] Implement error handling for individual summarization failures
- [ ] Store results in the database (AC: 3)
  - [ ] Insert summaries into the `comment_summaries` table
  - [ ] Link to the appropriate `hn_post_id` and `workflow_run_id`
- [ ] Track workflow progress (AC: 5)
  - [ ] Update workflow status via `WorkflowTrackerService`
  - [ ] Increment the comment summary count in workflow details
- [ ] Create unit tests for the service
  - [ ] Test comment grouping logic
  - [ ] Test integration with the `LLMFacade` (with mocks)
  - [ ] Test error handling scenarios

### Dev Technical Guidance

The comment summarization service follows a similar pattern to the article summarization service but has additional complexity due to the comment threading structure and the potential volume of comments.

#### Handling Comment Threads

HN comments can be deeply nested. When presenting comments to the LLM, maintain context by:

1. Sorting comments by timestamp
2. Including parent-child relationships with simple indentation or prefixing (see the sketch after this list)
3. Including usernames to maintain conversation flow
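
A sketch of that flattening, assuming a nested `HNComment` shape derived from the data model (the interface itself is an assumption):

```typescript
interface HNComment {
  author: string;
  text: string;
  created_at: string; // ISO timestamp, so string comparison sorts chronologically
  children?: HNComment[];
}

// Render a comment tree as indented "author: text" lines for the LLM prompt.
export function flattenThread(comments: HNComment[], depth = 0): string {
  return [...comments]
    .sort((a, b) => a.created_at.localeCompare(b.created_at))
    .map((c) => {
      const line = `${'  '.repeat(depth)}${c.author}: ${c.text}`;
      const replies = c.children?.length ? '\n' + flattenThread(c.children, depth + 1) : '';
      return line + replies;
    })
    .join('\n');
}
```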
#### Token Limit Management

Comments for popular posts can exceed LLM context windows. Consider:

1. Limiting to the top N comments by points/engagement
2. Chunking comments and generating multiple summaries, then combining them
3. Including a "summarize the most interesting discussions" instruction in the prompt

#### Error Handling

Comments may contain challenging content. Implement robust error handling:

1. Track which HN posts have failed summarization
2. Implement a fallback mechanism for failed summaries
3. Continue processing other posts if one fails

#### WorkflowTracker Integration

Similar to article summarization, use the `WorkflowTrackerService` to track progress and update workflow status.

### Story Progress Notes

#### Agent Model Used: `Claude 3.7 Sonnet`

#### Completion Notes List

#### Change Log

---

## Story 3.4: Implement API and CLI for Triggering AI Summarization

### Status: Approved

### Story

- As a developer
- I want to trigger the AI summarization process via the API and CLI
- so that I can manually initiate it for testing and debugging.

### Acceptance Criteria (ACs)

1. The API endpoint can trigger the AI summarization process.
2. The CLI command can trigger the AI summarization process locally.
3. The system logs the input and output of the summarization process, including the summarization prompt used and any errors.
4. All API requests and CLI command executions are logged, including timestamps and any relevant data.
5. The system handles partial execution gracefully (i.e., if triggered before Epic 2 is complete, it logs a message and exits).
6. All summarization operations initiated via this trigger must be associated with a valid `workflow_run_id` and update the `workflow_runs` table accordingly via `WorkflowTrackerService`.

### Tasks / Subtasks

- [ ] Extend the existing API endpoint in `app/(api)/system/trigger-workflow/route.ts` (AC: 1, 3, 4, 6)
  - [ ] Add an option to trigger only the summarization step
  - [ ] Add a parameter to specify the `workflow_run_id` of an existing workflow
  - [ ] Add validation to check that workflow prerequisites are met
  - [ ] Implement appropriate error handling and logging
- [ ] Create a CLI command for triggering summarization (AC: 2, 3, 4, 6)
  - [ ] Create an entry in `package.json` scripts for the CLI command
  - [ ] Implement the CLI logic in an appropriate location
  - [ ] Add an option to specify the `workflow_run_id`
  - [ ] Add help text and usage examples
- [ ] Implement validation logic for prerequisites (AC: 5)
  - [ ] Check whether the required data exists in the database for the specified workflow
  - [ ] Return appropriate error messages if prerequisites are not met
  - [ ] Log validation failures in detail
- [ ] Add detailed logging (AC: 3, 4)
  - [ ] Log API/CLI invocation details
  - [ ] Log validation results
  - [ ] Log summarization prompt retrieval
  - [ ] Log workflow progression
- [ ] Implement workflow progress tracking (AC: 6)
  - [ ] Update workflow status via `WorkflowTrackerService`
  - [ ] Ensure workflow details are updated appropriately
- [ ] Create unit tests for the new functionality
  - [ ] Test the API endpoint with various scenarios
  - [ ] Test CLI command functionality
  - [ ] Test validation logic with valid/invalid workflows

### Dev Technical Guidance

The API endpoint and CLI should both leverage the same core logic for triggering the summarization process. Consider implementing a shared service that both can call (a sketch follows the example request below).

#### API Endpoint Design

Extend the existing workflow trigger API to support:

1. Triggering specific steps (like summarization) by adding a `step` parameter
2. Providing an existing `workflow_run_id` to continue processing
3. Returning detailed validation results for failed requests

Example request:

```json
{
  "step": "summarization",
  "workflow_run_id": "123e4567-e89b-12d3-a456-426614174000",
  "options": {
    "force": false
  }
}
```
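
A sketch of such a shared service; the function name, `ValidationError`, and the `SUMMARIZATION_FN_URL` env var are illustrative assumptions:

```typescript
import type { SupabaseClient } from '@supabase/supabase-js';

export class ValidationError extends Error {}

export async function triggerSummarization(
  supabase: SupabaseClient,
  workflowRunId: string,
  opts: { force?: boolean } = {}
): Promise<void> {
  // 1. The workflow must exist.
  const { data: run } = await supabase
    .from('workflow_runs')
    .select('id')
    .eq('id', workflowRunId)
    .maybeSingle();
  if (!run) throw new ValidationError(`Workflow ${workflowRunId} not found`);

  // 2. It must have successfully scraped articles ready for summarization.
  const { count: scraped } = await supabase
    .from('scraped_articles')
    .select('id', { count: 'exact', head: true })
    .eq('workflow_run_id', workflowRunId)
    .eq('scraping_status', 'success');
  if (!scraped) throw new ValidationError('No successfully scraped articles for this workflow');

  // 3. Summarization must not already be complete (unless force is set).
  const { count: summaries } = await supabase
    .from('article_summaries')
    .select('id', { count: 'exact', head: true })
    .eq('workflow_run_id', workflowRunId);
  if (summaries && !opts.force) {
    throw new ValidationError('Summaries already exist; pass force to re-run');
  }

  // Invoke the HTTP-triggered summarization-service function.
  await fetch(process.env.SUMMARIZATION_FN_URL!, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ workflow_run_id: workflowRunId }),
  });
}
```

The API route and the CLI can both call this function, mapping `ValidationError` to a 4xx response or a non-zero exit code respectively.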
#### CLI Design

Use a command like:

```bash
npm run summarize -- --workflow-id=123e4567-e89b-12d3-a456-426614174000 [--force]
```

#### Validation Logic

Before triggering summarization, validate:

1. The workflow exists in the database
2. The workflow has scraped articles ready for summarization
3. The summarization hasn't already been completed for this workflow

If validation fails, provide clear error messages that help diagnose the issue.

#### Error Handling

Implement robust error handling that:

1. Distinguishes between client errors (invalid input) and server errors
2. Provides actionable error messages
3. Logs detailed context for debugging
4. Tracks failures in the workflow status

### Story Progress Notes

#### Agent Model Used: `Claude 3.7 Sonnet`

#### Completion Notes List

#### Change Log

@@ -9,37 +9,64 @@ This project includes a significant user interface. A separate Frontend Architec

## Table of Contents

- [Introduction / Preamble](#introduction--preamble)
- [Technical Summary](#technical-summary)
- [High-Level Overview](#high-level-overview)
- [Component View](#component-view)
  - [Architectural / Design Patterns Adopted](#architectural--design-patterns-adopted)
- [Workflow Orchestration and Status Management](#workflow-orchestration-and-status-management)
- [Project Structure](#project-structure)
  - [Key Directory Descriptions](#key-directory-descriptions)
  - [Monorepo Management](#monorepo-management)
  - [Notes](#notes)
- [API Reference](#api-reference)
  - [External APIs Consumed](#external-apis-consumed)
    - [1. Hacker News (HN) Algolia API](#1-hacker-news-hn-algolia-api)
    - [2. Play.ht API](#2-playht-api)
    - [3. LLM Provider (Facade for Summarization)](#3-llm-provider-facade-for-summarization)
    - [4. Nodemailer (Email Delivery Service)](#4-nodemailer-email-delivery-service)
  - [Internal APIs Provided (by BMad DiCaster)](#internal-apis-provided-by-bmad-dicaster)
    - [1. Workflow Trigger API](#1-workflow-trigger-api)
    - [2. Workflow Status API](#2-workflow-status-api)
    - [3. Play.ht Webhook Receiver](#3-playht-webhook-receiver)
- [Data Models](#data-models)
  - [Core Application Entities / Domain Objects](#core-application-entities--domain-objects)
    - [1. `WorkflowRun`](#1-workflowrun)
    - [2. `HNPost`](#2-hnpost)
    - [3. `HNComment`](#3-hncomment)
    - [4. `ScrapedArticle`](#4-scrapedarticle)
    - [5. `ArticleSummary`](#5-articlesummary)
    - [6. `CommentSummary`](#6-commentsummary)
    - [7. `Newsletter`](#7-newsletter)
    - [8. `Subscriber`](#8-subscriber)
    - [9. `SummarizationPrompt`](#9-summarizationprompt)
    - [10. `NewsletterTemplate`](#10-newslettertemplate)
  - [Database Schemas (Supabase PostgreSQL)](#database-schemas-supabase-postgresql)
    - [1. `workflow_runs`](#1-workflow_runs)
    - [2. `hn_posts`](#2-hn_posts)
    - [3. `hn_comments`](#3-hn_comments)
    - [4. `scraped_articles`](#4-scraped_articles)
    - [5. `article_summaries`](#5-article_summaries)
    - [6. `comment_summaries`](#6-comment_summaries)
    - [7. `newsletters`](#7-newsletters)
    - [8. `subscribers`](#8-subscribers)
    - [9. `summarization_prompts`](#9-summarization_prompts)
    - [10. `newsletter_templates`](#10-newsletter_templates)
- [Core Workflow / Sequence Diagrams](#core-workflow--sequence-diagrams)
  - [1. Daily Workflow Initiation & HN Content Acquisition](#1-daily-workflow-initiation--hn-content-acquisition)
  - [2. Article Scraping & Summarization Flow](#2-article-scraping--summarization-flow)
  - [3. Newsletter, Podcast, and Delivery Flow](#3-newsletter-podcast-and-delivery-flow)
- [Definitive Tech Stack Selections](#definitive-tech-stack-selections)
- [Infrastructure and Deployment Overview](#infrastructure-and-deployment-overview)
- [Error Handling Strategy](#error-handling-strategy)
- [Coding Standards](#coding-standards)
  - [Detailed Language & Framework Conventions](#detailed-language--framework-conventions)
    - [TypeScript/Node.js (Next.js & Supabase Functions) Specifics](#typescriptnodejs-nextjs--supabase-functions-specifics)
- [Overall Testing Strategy](#overall-testing-strategy)
- [Security Best Practices](#security-best-practices)
- [Key Reference Documents](#key-reference-documents)
- [Change Log](#change-log)
- [Prompt for Design Architect: Frontend Architecture Definition](#prompt-for-design-architect-frontend-architecture-definition)

## Technical Summary

@@ -271,30 +298,37 @@ The BMad DiCaster application employs an event-driven pipeline for its daily con

**3. Service Function Responsibilities:**

- Each backend Supabase Function (`HNContentService`, `ArticleScrapingService`, `SummarizationService`, `PodcastGenerationService`, `NewsletterGenerationService`) participating in the workflow **must**:
  - Be aware of the `workflow_run_id` for the job it is processing. This ID should be passed along or retrievable based on the triggering event/data.
  - **Before starting its primary task:** Update the `workflow_runs` table for the current `workflow_run_id` to reflect its `current_step_details` (e.g., "Started scraping article X for workflow Y").
  - **Upon successful completion of its task:**
    - Update any relevant data tables (e.g., `scraped_articles`, `article_summaries`).
    - Update the `workflow_runs.details` JSONB field with relevant output or counters (e.g., increment `articles_scraped_successfully_count`).
  - **Upon failure:** Update the `workflow_runs` table for the `workflow_run_id` to set `status` to 'failed', and populate `error_message` and `current_step_details` with failure information.
  - Utilize the shared `WorkflowTrackerService` (see point 5) for consistent status updates.
- The `PlayHTWebhookHandlerAPI` (Next.js API route) updates the `newsletters` table and then the `workflow_runs.details` with podcast status.

**4. Orchestration and Progression (`CheckWorkflowCompletionService`):**

- A dedicated Supabase Function, `CheckWorkflowCompletionService`, will be scheduled to run periodically (e.g., every 5-10 minutes via Vercel Cron Jobs invoking a dedicated HTTP endpoint for this service, or Supabase's `pg_cron` if preferred for DB-centric scheduling).
- This service orchestrates progression between major stages by:
  - Querying `workflow_runs` for jobs in intermediate statuses.
  - Verifying if all prerequisite tasks for the next stage are complete by:
    - Querying related data tables (e.g., `scraped_articles`, `article_summaries`, `comment_summaries`) based on the `workflow_run_id`.
    - Checking expected counts against actual completed counts (e.g., all articles intended for summarization have an `article_summaries` entry for the current `workflow_run_id`).
    - Checking the status of the podcast generation in the `newsletters` table (linked to `workflow_run_id`) before proceeding to email delivery.
  - If conditions for the next stage are met, it updates the `workflow_runs.status` (e.g., to 'generating_newsletter') and then invokes the appropriate next service (e.g., `NewsletterGenerationService`), passing the `workflow_run_id`.

**5. Shared `WorkflowTrackerService`:**

- A utility service, `WorkflowTrackerService`, will be created in `supabase/functions/_shared/`.
- It will provide standardized methods for all backend functions to interact with the `workflow_runs` table (e.g., `updateWorkflowStep()`, `incrementWorkflowDetailCounter()`, `failWorkflow()`, `completeWorkflowStep()`), sketched below.
- This promotes consistency in status updates and reduces redundant code.
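
A sketch of that service's surface; the `increment_workflow_detail` RPC is an assumed Postgres helper for atomic JSONB counter updates, not an existing function:

```typescript
// supabase/functions/_shared/workflow-tracker.ts (sketch)
import type { SupabaseClient } from '@supabase/supabase-js';

export class WorkflowTrackerService {
  constructor(private supabase: SupabaseClient) {}

  async updateWorkflowStep(runId: string, details: string): Promise<void> {
    await this.supabase
      .from('workflow_runs')
      .update({ current_step_details: details })
      .eq('id', runId);
  }

  async incrementWorkflowDetailCounter(runId: string, key: string, by = 1): Promise<void> {
    // Delegates to an assumed database function so concurrent services
    // can increment counters in the details JSONB column atomically.
    await this.supabase.rpc('increment_workflow_detail', {
      run_id: runId,
      detail_key: key,
      amount: by,
    });
  }

  async failWorkflow(runId: string, message: string): Promise<void> {
    await this.supabase
      .from('workflow_runs')
      .update({ status: 'failed', error_message: message })
      .eq('id', runId);
  }

  async completeWorkflowStep(runId: string, nextStatus: string): Promise<void> {
    await this.supabase.from('workflow_runs').update({ status: nextStatus }).eq('id', runId);
  }
}
```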

**6. Podcast Link Before Email Delivery:**

- The `NewsletterGenerationService`, after generating the HTML and initiating podcast creation (via `PodcastGenerationService`), will set the `newsletters.podcast_status` to 'generating'.
- The `CheckWorkflowCompletionService` (or the `NewsletterGenerationService` itself if designed for polling/delay) will monitor the `newsletters.podcast_url` (populated by the `PlayHTWebhookHandlerAPI`) or `newsletters.podcast_status`.
- Email delivery is triggered by `CheckWorkflowCompletionService` once the podcast URL is available, a timeout is reached, or podcast generation fails (as per PRD's delay/retry logic). The final delivery status will be updated in `workflow_runs` and `newsletters`.

## Project Structure

@@ -323,9 +357,10 @@ The BMad DiCaster project is organized as a monorepo, leveraging the Vercel/Supa
│   ├── typography/ # Example/template components (can be removed)
│   └── ui/ # Base UI elements (button.tsx, card.tsx etc.)
├── docs/ # Project documentation
│   ├── prd.md # Or prd-incremental-full-agile-mode.txt
│   ├── architecture.md # This document
│   ├── ui-ux-spec.md # Or ui-ux-spec.txt
│   ├── technical-preferences.md # Or technical-preferences copy.txt
│   ├── ADR/ # Architecture Decision Records (to be created as needed)
│   └── environment-vars.md # (To be created)
├── lib/ # General utility functions for frontend (e.g., utils.ts from template)
@@ -419,10 +454,46 @@ The BMad DiCaster project is organized as a monorepo, leveraging the Vercel/Supa
  - Request Parameters: `tags=front_page`
  - Example Request: `curl "http://hn.algolia.com/api/v1/search?tags=front_page"`
  - Post-processing: Application sorts fetched stories by `points` (descending), selects up to the top 30.
  - Success Response Schema (Code: `200 OK`): Standard Algolia search response containing a 'hits' array with story objects.
    ```json
    {
      "hits": [
        {
          "objectID": "string",
          "created_at": "string",
          "title": "string",
          "url": "string",
          "author": "string",
          "points": "number",
          "story_text": "string",
          "num_comments": "number",
          "_tags": ["string"]
        }
      ],
      "nbHits": "number",
      "page": "number",
      "nbPages": "number",
      "hitsPerPage": "number"
    }
    ```
- **`GET /items/{objectID}` (for comments)**
  - Description: Retrieves a specific story item by its `objectID` to get its full comment tree from the `children` field. Called for each selected top story.
  - Success Response Schema (Code: `200 OK`): Standard Algolia item response.
    ```json
    {
      "id": "number",
      "created_at": "string",
      "author": "string",
      "text": "string",
      "parent_id": "number",
      "story_id": "number",
      "children": [
        {
          /* nested comment structure */
        }
      ]
    }
    ```
- **Rate Limits:** Generous for public use; daily calls are fine.
|
||||
- **Link to Official Docs:** [https://hn.algolia.com/api](https://hn.algolia.com/api)
|
||||
|
||||
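A short sketch of walking that comment tree, assuming the item response shape above (the function name is illustrative):

```typescript
// Sketch: fetch a story item and flatten its nested comment tree.
interface AlgoliaItem {
  id: number;
  author: string | null;
  text: string | null;
  parent_id: number | null;
  children: AlgoliaItem[];
}

export async function fetchFlattenedComments(objectID: string): Promise<AlgoliaItem[]> {
  const res = await fetch(`https://hn.algolia.com/api/v1/items/${objectID}`);
  if (!res.ok) throw new Error(`HN Algolia item request failed: ${res.status}`);
  const story = (await res.json()) as AlgoliaItem;
  // Depth-first walk over `children` to collect every comment in the tree.
  const flat: AlgoliaItem[] = [];
  const walk = (node: AlgoliaItem): void => {
    for (const child of node.children ?? []) {
      flat.push(child);
      walk(child);
    }
  };
  walk(story);
  return flat;
}
```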
- **Authentication:** API Key (`X-USER-ID` header) and Bearer Token (`Authorization` header). Stored as `PLAYHT_USER_ID` and `PLAYHT_API_KEY`.
- **Key Endpoints Used:**
  - **`POST /playnotes`**
    - Description: Initiates the text-to-speech conversion.
    - Request Headers: `Authorization: Bearer {PLAYHT_API_KEY}`, `X-USER-ID: {PLAYHT_USER_ID}`, `Content-Type: multipart/form-data`, `Accept: application/json`.
    - Request Body Schema: `multipart/form-data`
      - `sourceFile`: `string (binary)` (Preferred: HTML newsletter content as a file upload.)
      - `sourceFileUrl`: `string (uri)` (Alternative: URL to hosted newsletter content if `sourceFile` is problematic.)
      - `synthesisStyle`: `string` (Required, e.g., "podcast")
      - `voice1`: `string` (Required, Voice ID)
      - `voice1Name`: `string` (Required)
      - `voice1Gender`: `string` (Required)
      - `webHookUrl`: `string (uri)` (Required, e.g., `<YOUR_APP_DOMAIN>/api/webhooks/playht`)
    - **Note on Content Delivery:** MVP uses `sourceFile`. If issues arise, pivot to `sourceFileUrl` (e.g., content temporarily hosted in Supabase Storage). A sketch of assembling this request appears after the response schemas below.
    - Success Response Schema (Code: `201 Created`):
      ```json
      {
        "id": "string",
        "ownerId": "string",
        "name": "string",
        "sourceFileUrl": "string",
        "audioUrl": "string",
        "synthesisStyle": "string",
        "voice1": "string",
        "voice1Name": "string",
        "voice1Gender": "string",
        "webHookUrl": "string",
        "status": "string",
        "duration": "number",
        "requestedAt": "string",
        "createdAt": "string"
      }
      ```
  - **Webhook Handling:** Endpoint `/api/webhooks/playht` receives `POST` requests from Play.ht.
    - Request Body Schema (from Play.ht):
      ```json
      { "id": "string", "audioUrl": "string", "status": "string" }
      ```
- **Rate Limits:** Refer to the official Play.ht documentation.
- **Link to Official Docs:** [https://docs.play.ai/api-reference/playnote/post](https://docs.play.ai/api-reference/playnote/post)
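The request assembly might look like the following sketch. The `PLAYHT_API_BASE` environment variable and the voice values are placeholders, not confirmed API values; only the header and form-field names documented above are taken from the spec.

```typescript
// Sketch of initiating a PlayNote from PodcastGenerationService.
// PLAYHT_API_BASE is an assumed env var for the Play.ht API base URL.
export async function initiatePlayNote(newsletterHtml: string): Promise<string> {
  const form = new FormData();
  form.append(
    "sourceFile",
    new Blob([newsletterHtml], { type: "text/html" }),
    "newsletter.html"
  );
  form.append("synthesisStyle", "podcast");
  form.append("voice1", "<VOICE_ID>"); // placeholder voice ID
  form.append("voice1Name", "Narrator"); // placeholder
  form.append("voice1Gender", "female"); // placeholder
  form.append("webHookUrl", `${process.env.APP_DOMAIN}/api/webhooks/playht`);

  const res = await fetch(`${process.env.PLAYHT_API_BASE}/playnotes`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.PLAYHT_API_KEY}`,
      "X-USER-ID": process.env.PLAYHT_USER_ID!,
      Accept: "application/json",
      // fetch sets the multipart boundary itself; do not set Content-Type manually.
    },
    body: form,
  });
  if (!res.ok) throw new Error(`Play.ht request failed: ${res.status}`);
  const { id } = (await res.json()) as { id: string };
  return id; // PlayNote job id, stored as newsletters.podcast_playht_job_id
}
```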
- **Purpose:** To generate summaries for articles and comment threads.
- **Configuration:** Via environment variables (`LLM_PROVIDER_TYPE`, `OLLAMA_API_URL`, `REMOTE_LLM_API_KEY`, `REMOTE_LLM_API_URL`, `LLM_MODEL_NAME`).
- **Facade Interface (`LLMFacade` in `supabase/functions/_shared/llm-facade.ts`):**

  ```typescript
  // Located in supabase/functions/_shared/llm-facade.ts
  export interface LLMSummarizationOptions {
    /* ... */
    prompt?: string;
    maxLength?: number;
  }

  export interface LLMFacade {
    generateSummary(
      textToSummarize: string,
      options?: LLMSummarizationOptions
    ): Promise<string>;
  }
  ```
- **Implementations:**
  - **Local Ollama Adapter:** HTTP requests to `OLLAMA_API_URL` (e.g., `POST /api/generate`); an adapter sketch follows this list.
    - Request Body (example for `/api/generate`): `{"model": "string", "prompt": "string", "stream": false}`
    - Response Body (example): `{"model": "string", "response": "string", ...}`
  - **Remote LLM API Adapter:** Authenticated HTTP requests to `REMOTE_LLM_API_URL`. Schemas depend on the provider.
- **Rate Limits:** Provider-dependent.
- **Link to Official Docs:** Ollama: [https://github.com/ollama/ollama/blob/main/docs/api.md](https://github.com/ollama/ollama/blob/main/docs/api.md)
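A minimal sketch of the local Ollama adapter, assuming the facade interface above and Ollama's `/api/generate` endpoint with `stream: false` (the class name and prompt framing are illustrative):

```typescript
// Sketch: Ollama-backed implementation of LLMFacade.
import { LLMFacade, LLMSummarizationOptions } from "./llm-facade";

export class OllamaLLMFacade implements LLMFacade {
  constructor(
    private readonly baseUrl = process.env.OLLAMA_API_URL!,
    private readonly model = process.env.LLM_MODEL_NAME!
  ) {}

  async generateSummary(
    textToSummarize: string,
    options?: LLMSummarizationOptions
  ): Promise<string> {
    // Prepend the configured prompt to the text being summarized.
    const prompt = `${options?.prompt ?? "Summarize the following:"}\n\n${textToSummarize}`;
    const res = await fetch(`${this.baseUrl}/api/generate`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model: this.model, prompt, stream: false }),
    });
    if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
    const body = (await res.json()) as { response: string };
    return body.response.trim();
  }
}
```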
#### 4\. Nodemailer (Email Delivery Service)

- **Purpose:** To send generated HTML newsletters.
- **Interaction Type:** Library integration within `NewsletterGenerationService` via `NodemailerFacade` in `supabase/functions/_shared/nodemailer-facade.ts`.
- **Configuration:** Via SMTP environment variables (`SMTP_HOST`, `SMTP_PORT`, `SMTP_USER`, `SMTP_PASS`).
- **Key Operations:** Create transporter, construct email message (From, To, Subject, HTML), send email (see the facade sketch below).
- **Link to Official Docs:** [https://nodemailer.com/](https://nodemailer.com/)
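Those key operations reduce to a few lines; this sketch assumes the SMTP env vars above, and the sender address defaulting to `SMTP_USER` is an assumption:

```typescript
// Sketch of the NodemailerFacade's transporter and send operation.
import nodemailer from "nodemailer";

const transporter = nodemailer.createTransport({
  host: process.env.SMTP_HOST,
  port: Number(process.env.SMTP_PORT ?? 587),
  secure: Number(process.env.SMTP_PORT) === 465, // implicit TLS on the SMTPS port
  auth: { user: process.env.SMTP_USER, pass: process.env.SMTP_PASS },
});

export async function sendNewsletterEmail(
  to: string,
  subject: string,
  html: string
): Promise<void> {
  await transporter.sendMail({
    from: process.env.SMTP_USER, // assumed sender; a dedicated FROM address may be configured
    to,
    subject,
    html,
  });
}
```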
### Internal APIs Provided (by BMad DiCaster)

#### 1\. Workflow Trigger API

- **Endpoint Path:** `/api/system/trigger-workflow` (Next.js API Route Handler)
- **Method:** `POST`
- **Authentication:** API Key in `X-API-KEY` header (matches `WORKFLOW_TRIGGER_API_KEY` env var).
- **Request Body:** MVP: Empty or `{}`.
- **Success Response (`202 Accepted`):** `{"message": "Daily workflow triggered successfully. Processing will occur asynchronously.", "jobId": "<UUID_of_the_workflow_run>"}`
- **Error Responses:** `400 Bad Request`, `401 Unauthorized`, `500 Internal Server Error`.
- **Action:** Creates a record in the `workflow_runs` table and initiates the pipeline (see the handler sketch below).
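A sketch of this endpoint as an App Router Route Handler (e.g., `app/api/system/trigger-workflow/route.ts`); the Supabase insert shape is illustrative and the database event wiring is assumed to take over from the inserted row:

```typescript
// Sketch: workflow trigger endpoint with X-API-KEY auth.
import { NextRequest, NextResponse } from "next/server";
import { createClient } from "@supabase/supabase-js";

export async function POST(req: NextRequest) {
  if (req.headers.get("x-api-key") !== process.env.WORKFLOW_TRIGGER_API_KEY) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }
  const supabase = createClient(
    process.env.SUPABASE_URL!,
    process.env.SUPABASE_SERVICE_ROLE_KEY!
  );
  // Insert a new workflow_runs row; database triggers/events pick it up from here.
  const { data, error } = await supabase
    .from("workflow_runs")
    .insert({ status: "pending" })
    .select("id")
    .single();
  if (error) {
    return NextResponse.json({ error: "Internal server error" }, { status: 500 });
  }
  return NextResponse.json(
    {
      message:
        "Daily workflow triggered successfully. Processing will occur asynchronously.",
      jobId: data.id,
    },
    { status: 202 }
  );
}
```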
#### 2\. Workflow Status API

- **Purpose:** Allow developers/admins to check the status of a specific workflow run.
- **Endpoint Path:** `/api/system/workflow-status/{jobId}` (Next.js API Route Handler)
- **Method:** `GET`
- **Authentication:** API Key in `X-API-KEY` header.
- **Request Parameters:** `jobId` (path parameter).
- **Success Response (`200 OK`):**
  ```json
  {
    "jobId": "<UUID>",
    "createdAt": "timestamp",
    "lastUpdatedAt": "timestamp",
    "status": "string",
    "currentStep": "string",
    "errorMessage": "string?",
    "details": {
      /* JSONB object with step-specific progress */
    }
  }
  ```
- **Error Responses:** `401 Unauthorized`, `404 Not Found`, `500 Internal Server Error`.
- **Action:** Retrieves the record from `workflow_runs` for the given `jobId`.
#### 3\. Play.ht Webhook Receiver

- **Purpose:** To receive status updates and podcast audio URLs from Play.ht.
- **Endpoint Path:** `/api/webhooks/playht` (Next.js API Route Handler)
- **Method:** `POST`
- **Authentication:** Implement verification (e.g., a shared secret token).
- **Request Body Schema (expected from Play.ht):**
  ```json
  { "id": "string", "audioUrl": "string", "status": "string" }
  ```
- **Success Response (`200 OK`):** `{"message": "Webhook received successfully"}`
- **Action:** Updates the `newsletters` and `workflow_runs` tables (see the handler sketch below).
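A sketch of the receiver (e.g., `app/api/webhooks/playht/route.ts`); the shared-secret query parameter is an assumed verification scheme, and the status mapping is illustrative:

```typescript
// Sketch: Play.ht webhook receiver updating the matching newsletter row.
import { NextRequest, NextResponse } from "next/server";
import { createClient } from "@supabase/supabase-js";

export async function POST(req: NextRequest) {
  if (req.nextUrl.searchParams.get("token") !== process.env.PLAYHT_WEBHOOK_SECRET) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }
  const { id, audioUrl, status } = (await req.json()) as {
    id: string;
    audioUrl?: string;
    status: string;
  };
  const supabase = createClient(
    process.env.SUPABASE_URL!,
    process.env.SUPABASE_SERVICE_ROLE_KEY!
  );
  // Match the newsletter by its stored PlayNote job id.
  const { error } = await supabase
    .from("newsletters")
    .update({
      podcast_url: audioUrl ?? null,
      podcast_status: status === "completed" ? "completed" : "failed",
    })
    .eq("podcast_playht_job_id", id);
  if (error) {
    return NextResponse.json({ error: "Internal server error" }, { status: 500 });
  }
  return NextResponse.json({ message: "Webhook received successfully" });
}
```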
## Data Models

This section defines the core data structures used within the BMad DiCaster application, including conceptual domain entities and their corresponding database schemas in Supabase PostgreSQL.

### Core Application Entities / Domain Objects

(Conceptual types, typically defined in `shared/types/domain-models.ts`)
#### 1\. `WorkflowRun`

- **Description:** A single execution of the daily workflow (see the TypeScript rendering below).
- **Schema:** `id (string UUID)`, `createdAt (string ISO)`, `lastUpdatedAt (string ISO)`, `status (enum string: 'pending' | 'fetching_hn' | 'scraping_articles' | 'summarizing_content' | 'generating_podcast' | 'generating_newsletter' | 'delivering_newsletter' | 'completed' | 'failed')`, `currentStepDetails (string?)`, `errorMessage (string?)`, `details (object?: { postsFetched?: number, articlesAttempted?: number, articlesScrapedSuccessfully?: number, summariesGenerated?: number, podcastJobId?: string, podcastStatus?: string, newsletterGeneratedAt?: string, subscribersNotified?: number })`
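For illustration, the schema above translates directly into `shared/types/domain-models.ts` roughly as follows (a sketch derived from the listed fields, not the canonical file):

```typescript
// Sketch of the WorkflowRun domain type in shared/types/domain-models.ts.
export type WorkflowStatus =
  | "pending"
  | "fetching_hn"
  | "scraping_articles"
  | "summarizing_content"
  | "generating_podcast"
  | "generating_newsletter"
  | "delivering_newsletter"
  | "completed"
  | "failed";

export interface WorkflowRun {
  id: string; // UUID
  createdAt: string; // ISO timestamp
  lastUpdatedAt: string; // ISO timestamp
  status: WorkflowStatus;
  currentStepDetails?: string;
  errorMessage?: string;
  details?: {
    postsFetched?: number;
    articlesAttempted?: number;
    articlesScrapedSuccessfully?: number;
    summariesGenerated?: number;
    podcastJobId?: string;
    podcastStatus?: string;
    newsletterGeneratedAt?: string;
    subscribersNotified?: number;
  };
}
```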
#### 2\. `HNPost`

- **Description:** A post from Hacker News.
- **Schema:** `id (string HN_objectID)`, `hnNumericId (number?)`, `title (string)`, `url (string?)`, `author (string)`, `points (number)`, `createdAt (string ISO)`, `retrievedAt (string ISO)`, `hnStoryText (string?)`, `numComments (number?)`, `tags (string[]?)`, `workflowRunId (string UUID?)`

#### 3\. `HNComment`

- **Description:** A comment on an HN post.
- **Schema:** `id (string HN_commentID)`, `hnPostId (string)`, `parentId (string?)`, `author (string?)`, `text (string HTML)`, `createdAt (string ISO)`, `retrievedAt (string ISO)`, `children (HNComment[]?)`

#### 4\. `ScrapedArticle`

- **Description:** Content scraped from an article URL.
- **Schema:** `id (string UUID)`, `hnPostId (string)`, `originalUrl (string)`, `resolvedUrl (string?)`, `title (string?)`, `author (string?)`, `publicationDate (string ISO?)`, `mainTextContent (string?)`, `scrapedAt (string ISO)`, `scrapingStatus (enum string: 'pending' | 'success' | 'failed_unreachable' | 'failed_paywall' | 'failed_parsing')`, `errorMessage (string?)`, `workflowRunId (string UUID?)`

#### 5\. `ArticleSummary`

- **Description:** AI-generated summary of a `ScrapedArticle`.
- **Schema:** `id (string UUID)`, `scrapedArticleId (string UUID)`, `summaryText (string)`, `generatedAt (string ISO)`, `llmPromptVersion (string?)`, `llmModelUsed (string?)`, `workflowRunId (string UUID)`

#### 6\. `CommentSummary`

- **Description:** AI-generated summary of comments for an `HNPost`.
- **Schema:** `id (string UUID)`, `hnPostId (string)`, `summaryText (string)`, `generatedAt (string ISO)`, `llmPromptVersion (string?)`, `llmModelUsed (string?)`, `workflowRunId (string UUID)`

#### 7\. `Newsletter`

- **Description:** The daily generated newsletter.
- **Schema:** `id (string UUID)`, `workflowRunId (string UUID)`, `targetDate (string YYYY-MM-DD)`, `title (string)`, `generatedAt (string ISO)`, `htmlContent (string)`, `mjmlTemplateVersion (string?)`, `podcastPlayhtJobId (string?)`, `podcastUrl (string?)`, `podcastStatus (enum string?: 'pending' | 'generating' | 'completed' | 'failed')`, `deliveryStatus (enum string: 'pending' | 'sending' | 'sent' | 'partially_failed' | 'failed')`, `scheduledSendAt (string ISO?)`, `sentAt (string ISO?)`

#### 8\. `Subscriber`

- **Description:** An email subscriber.
- **Schema:** `id (string UUID)`, `email (string)`, `subscribedAt (string ISO)`, `isActive (boolean)`, `unsubscribedAt (string ISO?)`

#### 9\. `SummarizationPrompt`

- **Description:** Stores prompts for AI summarization.
- **Schema:** `id (string UUID)`, `promptName (string)`, `promptText (string)`, `version (string)`, `createdAt (string ISO)`, `updatedAt (string ISO)`, `isDefaultArticlePrompt (boolean)`, `isDefaultCommentPrompt (boolean)`

#### 10\. `NewsletterTemplate`

- **Description:** HTML/MJML templates for newsletters.
- **Schema:** `id (string UUID)`, `templateName (string)`, `mjmlContent (string?)`, `htmlContent (string)`, `version (string)`, `createdAt (string ISO)`, `updatedAt (string ISO)`, `isDefault (boolean)`
### Database Schemas (Supabase PostgreSQL)

#### 1\. `workflow_runs`

```sql
CREATE TABLE public.workflow_runs (
  -- ...
  status TEXT NOT NULL DEFAULT 'pending', -- pending, fetching_hn, scraping_articles, summarizing_content, generating_podcast, generating_newsletter, delivering_newsletter, completed, failed
  current_step_details TEXT NULL,
  error_message TEXT NULL,
  details JSONB NULL -- {postsFetched, articlesAttempted, articlesScrapedSuccessfully, summariesGenerated, podcastJobId, podcastStatus, newsletterGeneratedAt, subscribersNotified}
);
COMMENT ON COLUMN public.workflow_runs.status IS 'Possible values: pending, fetching_hn, scraping_articles, summarizing_content, generating_podcast, generating_newsletter, delivering_newsletter, completed, failed';
COMMENT ON COLUMN public.workflow_runs.details IS 'Stores step-specific progress or metadata like postsFetched, articlesScraped, podcastJobId, etc.';
```
#### 2\. `hn_posts`

```sql
CREATE TABLE public.hn_posts (
  -- ...
  url TEXT NULL,
  author TEXT NULL,
  points INTEGER NOT NULL DEFAULT 0,
  created_at TIMESTAMPTZ NOT NULL, -- HN post creation time
  retrieved_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  hn_story_text TEXT NULL,
  num_comments INTEGER NULL DEFAULT 0,
  tags TEXT[] NULL,
  workflow_run_id UUID NULL REFERENCES public.workflow_runs(id) ON DELETE SET NULL -- The run that fetched this instance of the post
);
COMMENT ON COLUMN public.hn_posts.id IS 'Hacker News objectID for the story.';
```
#### 3\. `hn_comments`

```sql
CREATE TABLE public.hn_comments (
  -- ...
  hn_post_id TEXT NOT NULL REFERENCES public.hn_posts(id) ON DELETE CASCADE,
  parent_comment_id TEXT NULL REFERENCES public.hn_comments(id) ON DELETE CASCADE,
  author TEXT NULL,
  comment_text TEXT NOT NULL, -- HTML content of the comment
  created_at TIMESTAMPTZ NOT NULL, -- HN comment creation time
  retrieved_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE INDEX idx_hn_comments_post_id ON public.hn_comments(hn_post_id);
```
#### 4\. `scraped_articles`

```sql
CREATE TABLE public.scraped_articles (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  hn_post_id TEXT NOT NULL REFERENCES public.hn_posts(id) ON DELETE CASCADE,
  original_url TEXT NOT NULL,
  resolved_url TEXT NULL,
  title TEXT NULL,
  -- ...
  workflow_run_id UUID NULL REFERENCES public.workflow_runs(id) ON DELETE SET NULL
);
CREATE UNIQUE INDEX idx_scraped_articles_hn_post_id_workflow_run_id ON public.scraped_articles(hn_post_id, workflow_run_id);
COMMENT ON COLUMN public.scraped_articles.scraping_status IS 'Possible values: pending, success, failed_unreachable, failed_paywall, failed_parsing, failed_generic';
```
#### 5\. `article_summaries`

```sql
CREATE TABLE public.article_summaries (
  -- ...
  generated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  llm_prompt_version TEXT NULL,
  llm_model_used TEXT NULL,
  workflow_run_id UUID NOT NULL REFERENCES public.workflow_runs(id) ON DELETE CASCADE
);
CREATE UNIQUE INDEX idx_article_summaries_scraped_article_id_workflow_run_id ON public.article_summaries(scraped_article_id, workflow_run_id);
COMMENT ON COLUMN public.article_summaries.llm_prompt_version IS 'Version or identifier of the summarization prompt used.';
```
#### 6\. `comment_summaries`

```sql
CREATE TABLE public.comment_summaries (
  -- ...
  generated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  llm_prompt_version TEXT NULL,
  llm_model_used TEXT NULL,
  workflow_run_id UUID NOT NULL REFERENCES public.workflow_runs(id) ON DELETE CASCADE
);
CREATE UNIQUE INDEX idx_comment_summaries_hn_post_id_workflow_run_id ON public.comment_summaries(hn_post_id, workflow_run_id);
```
#### 7\. `newsletters`

```sql
CREATE TABLE public.newsletters (
  -- ...
  scheduled_send_at TIMESTAMPTZ NULL,
  sent_at TIMESTAMPTZ NULL
);
CREATE INDEX idx_newsletters_target_date ON public.newsletters(target_date);
COMMENT ON COLUMN public.newsletters.target_date IS 'The date this newsletter pertains to. Ensures uniqueness.';
```
#### 8\. `subscribers`

```sql
CREATE TABLE public.subscribers (
  -- ...
  is_active BOOLEAN NOT NULL DEFAULT TRUE,
  unsubscribed_at TIMESTAMPTZ NULL
);
CREATE INDEX idx_subscribers_email_active ON public.subscribers(email, is_active);
```
#### 9\. `summarization_prompts`

```sql
CREATE TABLE public.summarization_prompts (
  -- ...
  updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  is_default_article_prompt BOOLEAN NOT NULL DEFAULT FALSE,
  is_default_comment_prompt BOOLEAN NOT NULL DEFAULT FALSE
);
COMMENT ON COLUMN public.summarization_prompts.prompt_name IS 'Unique identifier for the prompt, e.g., article_summary_v2.1';
-- Application logic will enforce that only one prompt of each type is marked as default.
```
#### 10\. `newsletter_templates`

```sql
CREATE TABLE public.newsletter_templates (
  -- ...
  created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  is_default BOOLEAN NOT NULL DEFAULT FALSE
);
-- Application logic will enforce that only one template is marked as default.
```
## Core Workflow / Sequence Diagrams

These diagrams illustrate the key sequences of operations in the BMad DiCaster system.

### 1\. Daily Workflow Initiation & HN Content Acquisition

This diagram shows the manual/API trigger initiating a new workflow run, followed by the fetching of Hacker News posts and comments.
```mermaid
sequenceDiagram
    actor Caller as Manual/API/CLI/Cron
    %% ... (remaining participant declarations elided in this excerpt)

    alt Initial Trigger for HN Content Fetch
        WorkflowTracker->>+HNContentService: triggerFetch(workflow_run_id)
        Note over WorkflowTracker,HNContentService: This could be a direct call or an event insertion that HNContentService picks up.
    else Alternative: Event from WorkflowRunsDB insert
        WorkflowRunsDB-->>EventTrigger1: New workflow_run record
        EventTrigger1->>+HNContentService: Invoke(workflow_run_id, event_payload)
    end

    HNContentService->>+WorkflowTracker: updateWorkflowStep(workflow_run_id, 'fetching_hn_posts', 'fetching_hn')
    WorkflowTracker->>+WorkflowRunsDB: UPDATE workflow_runs (status, current_step_details)
    WorkflowRunsDB-->>-WorkflowTracker: ack

    HNContentService->>+HNAlgoliaAPI: GET /search?tags=front_page
    HNAlgoliaAPI-->>-HNContentService: Front page story items

    loop For each story item (up to 30 after sorting by points)
        HNContentService->>+HNPostsDB: INSERT story (hn_post_id, title, url, points, created_at, workflow_run_id)
        HNPostsDB-->>EventTrigger1: Notifies: New hn_post inserted
        EventTrigger1-->>ArticleScrapingService: (Async) Trigger ArticleScrapingService(hn_post_id, workflow_run_id)
        Note right of EventTrigger1: Triggers article scraping (next diagram)

        HNContentService->>+HNAlgoliaAPI: GET /items/{story_objectID} (to fetch comments)
        HNAlgoliaAPI-->>-HNContentService: Story details with comments
        loop For each comment
            HNContentService->>+HNCommentsDB: INSERT comment (comment_id, hn_post_id, text, author, created_at)
            HNCommentsDB-->>-HNContentService: ack
        end
    end
    HNContentService->>+WorkflowTracker: updateWorkflowDetails(workflow_run_id, {posts_fetched: X, comments_fetched: Y})
    WorkflowTracker->>+WorkflowRunsDB: UPDATE workflow_runs (details)
    WorkflowRunsDB-->>-WorkflowTracker: ack
    Note over HNContentService: HNContentService might mark its part of the workflow as 'hn_data_fetched'. The overall workflow status is managed by CheckWorkflowCompletionService.
```
### 2\. Article Scraping & Summarization Flow

This diagram shows the flow starting from a new HN post being available, leading to article scraping, and then summarization of the article content and HN comments.
```mermaid
sequenceDiagram
    participant EventTrigger1 as DB Event/Webhook (on hn_posts insert)
    %% ... (remaining participant declarations elided in this excerpt)

    EventTrigger1->>+ArticleScrapingService: Invoke(hn_post_id, workflow_run_id, article_url)
    ArticleScrapingService->>+WorkflowTracker: updateWorkflowStep(workflow_run_id, 'scraping_article_for_post_' + hn_post_id, 'scraping_articles')
    WorkflowTracker->>WorkflowRunsDB: UPDATE workflow_runs (current_step_details)

    ArticleScrapingService->>ArticleScrapingService: Identify relevant URL from hn_post (if multiple)
    ArticleScrapingService->>+ScrapedArticlesDB: INSERT new article (hn_post_id, original_url, status='pending', workflow_run_id)
    ScrapedArticlesDB-->>-ArticleScrapingService: new_scraped_article_id

    alt Article URL is valid and scrapeable
        ArticleScrapingService->>ArticleScrapingService: Fetch HTML content from article_url
        ArticleScrapingService->>ArticleScrapingService: Parse HTML with Cheerio, extract title, author, date, main_text
        ArticleScrapingService->>+ScrapedArticlesDB: UPDATE scraped_articles SET main_text_content, title, author, status='success' WHERE id=new_scraped_article_id
    else Scraping fails or URL invalid
        ArticleScrapingService->>+ScrapedArticlesDB: UPDATE scraped_articles SET status='failed_parsing/unreachable', error_message='...' WHERE id=new_scraped_article_id
    end
    ScrapedArticlesDB-->>EventTrigger2: Notifies: New/Updated scraped_article (status='success')
    EventTrigger2-->>SummarizationService: (Async) Trigger SummarizationService(scraped_article_id, workflow_run_id, 'article')
    Note right of EventTrigger2: Triggers article summarization

    ArticleScrapingService->>+WorkflowTracker: updateWorkflowDetails(workflow_run_id, {articles_attempted_increment: 1, articles_scraped_successfully_increment: (success ? 1 : 0)})
    WorkflowTracker->>WorkflowRunsDB: UPDATE workflow_runs (details)

    Note right of SummarizationService: HN comments are also summarized for the hn_post_id associated with this workflow_run_id (comment data is read from the hn_comments table). This might be a separate invocation or part of a broader summarization task for the post.
    SummarizationService->>+WorkflowTracker: updateWorkflowStep(workflow_run_id, 'summarizing_content_for_post_' + hn_post_id, 'summarizing_content')
    WorkflowTracker->>WorkflowRunsDB: UPDATE workflow_runs (current_step_details)

    alt Summarize Article
        SummarizationService->>SummarizationService: Get text_content from scraped_articles WHERE id=scraped_article_id
        SummarizationService->>+PromptsDB: SELECT prompt_text WHERE is_default_article_prompt=TRUE
        PromptsDB-->>-SummarizationService: article_prompt_text
        SummarizationService->>+LLMFacade: generateSummary(text_content, {prompt: article_prompt_text})
        LLMFacade->>+LLMProvider: Request summary (Ollama or Remote API call)
        LLMProvider-->>-LLMFacade: summary_response
        LLMFacade-->>-SummarizationService: article_summary_text
        SummarizationService->>+SummariesDB: INSERT into article_summaries (scraped_article_id, summary_text, workflow_run_id, llm_model_used)
        SummariesDB-->>-SummarizationService: ack
    end

    alt Summarize Comments (for each relevant hn_post_id in the workflow_run)
        SummarizationService->>SummarizationService: Get all comments for hn_post_id from hn_comments table
        SummarizationService->>SummarizationService: Concatenate/prepare comment text
        SummarizationService->>+PromptsDB: SELECT prompt_text WHERE is_default_comment_prompt=TRUE
        PromptsDB-->>-SummarizationService: comment_prompt_text
        SummarizationService->>+LLMFacade: generateSummary(all_comments_text, {prompt: comment_prompt_text})
        LLMFacade->>+LLMProvider: Request summary
        LLMProvider-->>-LLMFacade: summary_response
        LLMFacade-->>-SummarizationService: comment_summary_text
        SummarizationService->>+SummariesDB: INSERT into comment_summaries (hn_post_id, summary_text, workflow_run_id, llm_model_used)
        SummariesDB-->>-SummarizationService: ack
    end
    SummarizationService->>+WorkflowTracker: updateWorkflowDetails(workflow_run_id, {summaries_generated_increment: 1_or_2})
    WorkflowTracker->>WorkflowRunsDB: UPDATE workflow_runs (details)
    Note over SummarizationService: After all expected summaries for the workflow_run are done, the CheckWorkflowCompletionService will eventually pick this up.
```
### 3\. Newsletter, Podcast, and Delivery Flow

This diagram shows the steps from completed summarization to newsletter generation, podcast creation, webhook handling, and final email delivery. It assumes the `CheckWorkflowCompletionService` has determined that all summaries for a given `workflow_run_id` are ready.
```mermaid
sequenceDiagram
    participant CheckWorkflowService as CheckWorkflowCompletionService (Supabase Cron Fn)
    %% ... (remaining participant declarations elided in this excerpt)
    participant SubscribersDB as subscribers (DB Table)
    participant ExternalEmailService as Email Service (e.g., Gmail SMTP)

    CheckWorkflowService->>+WorkflowRunsDB: Query for runs with status 'summarizing_content' and all summaries complete
    WorkflowRunsDB-->>-CheckWorkflowService: workflow_run_id (ready for newsletter)

    CheckWorkflowService->>+WorkflowTracker: updateWorkflowStep(workflow_run_id, 'starting_newsletter_generation', 'generating_newsletter')
    WorkflowTracker->>+WorkflowRunsDB: UPDATE workflow_runs (status, current_step_details)
    WorkflowRunsDB-->>-WorkflowTracker: ack
    CheckWorkflowService->>+NewsletterGenService: Invoke(workflow_run_id)

    NewsletterGenService->>+NewsletterTemplatesDB: SELECT html_content, version WHERE is_default=TRUE
    NewsletterTemplatesDB-->>-NewsletterGenService: template_html, template_version
    NewsletterGenService->>+SummariesDB: SELECT article_summaries, comment_summaries WHERE workflow_run_id=...
    SummariesDB-->>-NewsletterGenService: summaries_data
    NewsletterGenService->>NewsletterGenService: Compile HTML newsletter using template and summaries_data
    NewsletterGenService->>+NewslettersDB: INSERT newsletter (workflow_run_id, title, html_content, podcast_status='pending', delivery_status='pending', target_date)
    NewslettersDB-->>-NewsletterGenService: new_newsletter_id

    NewsletterGenService->>+PodcastGenService: initiatePodcast(new_newsletter_id, html_content_for_podcast, workflow_run_id)
    PodcastGenService->>+WorkflowTracker: updateWorkflowStep(workflow_run_id, 'podcast_generation_initiated', 'generating_podcast')
    WorkflowTracker->>WorkflowRunsDB: UPDATE workflow_runs
    PodcastGenService->>+PlayHTAPI: POST /playnotes (sourceFile=html_content, webHookUrl=...)
    PlayHTAPI-->>-PodcastGenService: { playht_job_id, status: 'generating' }
    PodcastGenService->>+NewslettersDB: UPDATE newsletters SET podcast_playht_job_id, podcast_status='generating' WHERE id=new_newsletter_id
    NewslettersDB-->>-PodcastGenService: ack
    Note over NewsletterGenService, PodcastGenService: Newsletter is now generated; podcast is being generated by Play.ht. Email delivery will wait for podcast completion or timeout.

    PlayHTAPI-->>+PlayHTWebhook: POST (status='completed', audioUrl='...', id=playht_job_id)
    PlayHTWebhook->>+NewslettersDB: UPDATE newsletters SET podcast_url, podcast_status='completed' WHERE podcast_playht_job_id=...
    NewslettersDB-->>-PlayHTWebhook: ack
    PlayHTWebhook->>+WorkflowTracker: updateWorkflowDetails(workflow_run_id_from_newsletter, {podcast_status: 'completed'})
    WorkflowTracker->>WorkflowRunsDB: UPDATE workflow_runs (details)
    PlayHTWebhook-->>-PlayHTAPI: HTTP 200 OK

    CheckWorkflowService->>+WorkflowRunsDB: Query for runs with status 'generating_podcast' AND newsletters.podcast_status IN ('completed', 'failed') OR timeout reached
    WorkflowRunsDB-->>-CheckWorkflowService: workflow_run_id (ready for delivery)

    CheckWorkflowService->>+WorkflowTracker: updateWorkflowStep(workflow_run_id, 'starting_newsletter_delivery', 'delivering_newsletter')
    WorkflowTracker->>+WorkflowRunsDB: UPDATE workflow_runs (status, current_step_details)
    WorkflowRunsDB-->>-WorkflowTracker: ack
    CheckWorkflowService->>+NewsletterGenService: triggerDelivery(newsletter_id_for_workflow_run)

    NewsletterGenService->>+NewslettersDB: SELECT html_content, podcast_url WHERE id=newsletter_id
    NewslettersDB-->>-NewsletterGenService: newsletter_data
    NewsletterGenService->>NewsletterGenService: (If podcast_url available, embed it in html_content)
    NewsletterGenService->>+SubscribersDB: SELECT email WHERE is_active=TRUE
    SubscribersDB-->>-NewsletterGenService: subscriber_emails[]

    loop For each subscriber_email
        NewsletterGenService->>+NodemailerService: sendEmail(to=subscriber_email, subject=newsletter_title, html=final_html_content)
        NodemailerService->>+ExternalEmailService: SMTP send
        ExternalEmailService-->>-NodemailerService: delivery_success/failure
        NodemailerService-->>-NewsletterGenService: status
    end
    NewsletterGenService->>+NewslettersDB: UPDATE newsletters SET delivery_status='sent' (or 'partially_failed'), sent_at=now()
    NewslettersDB-->>-NewsletterGenService: ack
    NewsletterGenService->>+WorkflowTracker: completeWorkflow(workflow_run_id, {delivery_status: 'sent', subscribers_notified: X})
    WorkflowTracker->>+WorkflowRunsDB: UPDATE workflow_runs (status='completed', details)
    WorkflowRunsDB-->>-WorkflowTracker: ack
```
## Definitive Tech Stack Selections

This section outlines the definitive technology choices for the BMad DiCaster project. These selections are the single source of truth for all technology choices. "Latest" implies the latest stable version available at the time of project setup (2025-05-13); the specific version chosen should be pinned in `package.json` and this document updated accordingly.

- **Preferred Starter Template (Frontend & Backend):** Vercel/Supabase Next.js App Router Template ([https://vercel.com/templates/next.js/supabase](https://vercel.com/templates/next.js/supabase))
| Category | Technology | Version / Details | Description / Purpose | Justification (Optional, from PRD/User) |
| :--- | :--- | :--- | :--- | :--- |
| **Languages** | TypeScript | `5.7.2` | Primary language for backend/frontend | Strong typing, community support, aligns with Next.js/React |
| **Runtime** | Node.js | `22.10.2` | Server-side execution environment for Next.js & Supabase Functions | Compatible with Next.js, Vercel environment |
| **Frameworks** | Next.js | `latest` (e.g., 14.2.3 at time of writing) | Full-stack React framework | App Router, SSR, API routes, Vercel synergy |
| | React | `19.0.0` | Frontend UI library | Component-based, declarative |
| **UI Libraries** | Tailwind CSS | `3.4.17` | Utility-first CSS framework | Rapid UI development, consistent styling |
| | Shadcn UI | `latest` (CLI based) | React component library (via CLI) | Pre-styled, accessible components, built on Radix & Tailwind |
| **Databases** | PostgreSQL | (via Supabase) | Primary relational data store | Provided by Supabase, robust, scalable |
| **Cloud Platform** | Vercel | N/A | Hosting platform for Next.js app & Supabase Functions | Seamless Next.js/Supabase deployment, Edge Network |
| **Cloud Services** | Supabase Functions | N/A (via Vercel deploy) | Serverless compute for backend pipeline & APIs | Integrated with Supabase DB, event-driven capabilities |
| | Supabase Auth | N/A | User authentication and management | Integrated with Supabase, RLS |
| | Supabase Storage | N/A | File storage (e.g., for temporary newsletter files if needed for Play.ht) | Integrated with Supabase |
| **Infrastructure** | Supabase CLI | `latest` | Local development, migrations, function deployment | Official tool for Supabase development |
| | Docker | `latest` (via Supabase CLI) | Containerization for local Supabase services | Local development consistency |
| **State Management** | Zustand | `latest` | Frontend state management | Simple, unopinionated, performant for React |
| **Testing** | React Testing Library (RTL) | `latest` | Testing React components | User-centric testing, works well with Jest |
| | Jest | `latest` | Unit/Integration testing framework for JS/TS | Widely used, good support for Next.js/React |
| | Playwright | `latest` | End-to-end testing framework | Modern, reliable, cross-browser |
| **CI/CD** | GitHub Actions | N/A | Continuous Integration/Deployment | Integration with GitHub, automation of build/deploy/test |
| **Other Tools** | Cheerio | `latest` | HTML parsing/scraping for articles | Server-side HTML manipulation |
| | Nodemailer | `latest` | Email sending library for newsletters | Robust email sending from Node.js |
| | Zod | `latest` | TypeScript-first schema declaration and validation | Data validation for API inputs, environment variables etc. |
| | `tsx` / `ts-node` | `latest` (for scripts) | TypeScript execution for Node.js scripts (e.g. `scripts/`) | Running TS scripts directly |
| | Prettier | `3.3.3` | Code formatter | Consistent code style |
| | ESLint | `latest` | Linter for TypeScript/JavaScript | Code quality and error prevention |
| | Pino | `latest` | High-performance JSON logger for Node.js | Structured and efficient logging |
## Infrastructure and Deployment Overview

- **Cloud Provider(s):**
  - **Vercel:** For hosting the Next.js frontend application, Next.js API routes (including the Play.ht webhook receiver and the workflow trigger API), and Supabase Functions (Edge/Serverless Functions deployed via Supabase CLI and Vercel integration).
  - **Supabase:** Provides the managed PostgreSQL database, authentication, storage, and an environment for deploying backend functions. Supabase itself runs on underlying cloud infrastructure (e.g., AWS).
- **Core Services Used:**
  - **Vercel:** Next.js Hosting (SSR, SSG, ISR, Edge runtime), Serverless Functions (for Next.js API routes), Edge Functions (for Next.js middleware and potentially some API routes), Global CDN, CI/CD (via GitHub integration), Environment Variables Management, Vercel Cron Jobs (for scheduled triggering of the `/api/system/trigger-workflow` endpoint).
  - **Supabase:** PostgreSQL Database, Supabase Auth, Supabase Storage (for temporary file hosting if needed for Play.ht, or other static assets), Supabase Functions (backend logic for the event-driven pipeline, deployed via Supabase CLI, runs on Vercel infrastructure), Database Webhooks (using `pg_net` or built-in functionality to trigger Supabase/Vercel functions), Supabase CLI (for local development, migrations, function deployment).
- **Infrastructure as Code (IaC):**
  - **Supabase Migrations:** SQL migration files in `supabase/migrations/` define the database schema and are managed by the Supabase CLI. This is the primary IaC for the database.
  - **Vercel Configuration:** `vercel.json` (if needed for custom configurations beyond what the Vercel dashboard and Next.js provide) and project settings via the Vercel dashboard.
  - No explicit IaC for Vercel services beyond its declarative nature and Next.js conventions is anticipated for MVP.
- **Deployment Strategy:**
  - **Source Control:** GitHub is used for version control.
  - **CI/CD Tool:** GitHub Actions (as defined in `/.github/workflows/main.yml`).
  - **Frontend (Next.js app on Vercel):** Continuous deployment triggered by pushes/merges to the main branch. Preview deployments are created automatically for pull requests.
  - **Backend (Supabase Functions):** Deployed via Supabase CLI commands (e.g., `supabase functions deploy <function_name> --project-ref <your-project-ref>`), run as part of the GitHub Actions workflow.
  - **Database Migrations (Supabase):** Applied via a CI/CD step using `supabase migration up --linked` or the Supabase CLI against the remote DB.
- **Environments:**
  - **Local Development:** Next.js local dev server (`next dev`), local Supabase stack (`supabase start`), local `.env.local`.
  - **Development/Preview (on Vercel):** Auto-deployed per PR/dev branch push, connected to a **Development Supabase instance**.
  - **Production (on Vercel):** Deployed from the main branch, connected to a **Production Supabase instance**.
- **Environment Promotion:** Local -\> Dev/Preview (PR) -\> Production (merge to main).
- **Rollback Strategy:** Vercel dashboard/CLI for app/function rollbacks; Supabase migrations (revert migration) or Point-in-Time Recovery for database.
## Error Handling Strategy

A robust error handling strategy is essential for the reliability of the BMad DiCaster pipeline. This involves consistent error logging, appropriate retry mechanisms, and clear error propagation. The `workflow_runs` table is the central record for tracking errors across entire workflow executions.

- **General Approach:**
  - Standard JavaScript `Error` objects (or custom extensions of `Error`) are used for exceptions within TypeScript code.
  - Each Supabase Function in the pipeline catches its own errors, logs them using Pino, updates the `workflow_runs` table with an error status/message (via `WorkflowTrackerService`), and prevents unhandled promise rejections.
  - Next.js API routes catch errors, log them, and return appropriate HTTP error responses (e.g., 4xx, 500) with a JSON error payload.
- **Logging (Pino):**
  - **Library/Method:** Pino (`pino`) is the standard logging library for Supabase Functions and Next.js API routes.
  - **Configuration:** A shared Pino logger instance (e.g., `supabase/functions/_shared/logger.ts`) is configured for JSON output, ISO timestamps, and environment-aware pretty-printing for development.
    ```typescript
    // Example: supabase/functions/_shared/logger.ts
    import pino from "pino";

    export const logger = pino({
      level: process.env.LOG_LEVEL || "info",
      formatters: { level: (label) => ({ level: label }) },
      timestamp: pino.stdTimeFunctions.isoTime,
      // Pretty-print only during local development.
      ...(process.env.NODE_ENV === "development" && {
        transport: {
          target: "pino-pretty",
          options: {
            colorize: true,
            translateTime: "SYS:standard",
            ignore: "pid,hostname",
          },
        },
      }),
    });
    ```
  - **Format:** Structured JSON.
  - **Levels:** `trace`, `debug`, `info`, `warn`, `error`, `fatal`.
  - **Context:** Logs must include `timestamp`, `severity`, `workflowRunId` (where applicable), `service` or `functionName`, a clear `message`, and relevant `details` (sanitized). **Sensitive data must NEVER be logged.** Pass error objects directly to Pino: `logger.error({ err: errorInstance, workflowRunId }, "Operation failed");`.
- **Specific Handling Patterns:**
  - **External API Calls (HN Algolia, Play.ht, LLM Provider):**
    - **Facades:** Calls are made through dedicated facades in `supabase/functions/_shared/`.
    - **Timeouts:** Implement reasonable connect and read timeouts.
    - **Retries:** Facades implement limited retries (2-3) with exponential backoff for transient errors (network issues, 5xx errors). A sketch of this pattern follows this list.
    - **Error Propagation:** Facades catch, log, and throw standardized custom errors (e.g., `ExternalApiError`) containing contextual information.
  - **Internal Errors / Business Logic Exceptions (Supabase Functions):**
    - Use `try...catch`. Critical errors preventing task completion for a `workflow_run_id` must: 1. Log the detailed error (Pino). 2. Call `WorkflowTrackerService.failWorkflow(...)`.
    - Next.js API routes return generic JSON errors (e.g., `{"error": "Internal server error"}`) and appropriate HTTP status codes.
  - **Database Operations (Supabase):** Critical errors are treated as internal errors (log, update `workflow_runs` to 'failed').
  - **Scraping/Summarization/Podcast/Delivery Failures:** Individual item failures are logged and their status updated (e.g., `scraped_articles.scraping_status`). The overall workflow may continue with available data, with partial success noted in `workflow_runs.details`. Systemic failures lead to `workflow_runs.status = 'failed'`.
  - **`CheckWorkflowCompletionService`:** Must be resilient. Errors processing one `workflow_run_id` are logged but do not prevent processing of other runs or subsequent scheduled invocations.
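A minimal sketch of the retry-with-backoff helper the facades would share; the helper and error-class names are illustrative, not existing project code:

```typescript
// Sketch: limited retries with exponential backoff for transient failures.
export class ExternalApiError extends Error {
  constructor(message: string, readonly cause?: unknown) {
    super(message);
    this.name = "ExternalApiError";
  }
}

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

export async function withRetries<T>(
  operation: () => Promise<T>,
  label: string,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        // Wait 500ms, then 1000ms, then 2000ms, ... between attempts.
        await sleep(baseDelayMs * 2 ** (attempt - 1));
      }
    }
  }
  throw new ExternalApiError(`${label} failed after ${maxAttempts} attempts`, lastError);
}
```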

## Coding Standards

(As detailed previously, including TypeScript, Node.js, ESLint, Prettier, naming conventions, co-located unit tests `*.test.ts(x)`/`*.spec.ts(x)`, async/await, strict type safety, Pino logging, and specific framework/anti-pattern guidelines.)

These standards are mandatory for all code generation by AI agents and human developers.

- **Primary Language & Runtime:** TypeScript `5.7.2`, Node.js `22.10.2`.
- **Style Guide & Linter:** ESLint (configured with Next.js defaults and TypeScript support) and Prettier (`3.3.3`). Configurations live in the repo root. Linting/formatting are mandatory.
- **Naming Conventions:**
  - Variables & Functions/Methods: `camelCase`
  - Classes/Types/Interfaces: `PascalCase`
  - Constants: `UPPER_SNAKE_CASE`
  - Files (.ts, .tsx): `kebab-case` (e.g., `newsletter-card.tsx`)
  - Supabase function directories: `kebab-case` (e.g., `hn-content-service`)
- **File Structure:** Adhere to "Project Structure." Unit tests (`*.test.ts(x)`/`*.spec.ts(x)`) are co-located with source files.
- **Asynchronous Operations:** Always use `async`/`await` for Promises; ensure rejections are handled.
- **Type Safety (TypeScript):** Adhere to `tsconfig.json` (`"strict": true`). Avoid `any`; use `unknown` with type narrowing. Shared types live in `shared/types/`.
- **Comments & Documentation:** Explain _why_, not _what_. Use TSDoc for exported members. READMEs for modules/services.
- **Dependency Management:** Use `npm`. Vet new dependencies. Pin versions or use `^` for non-breaking updates. Resolve `latest` tags to specific versions upon setup.
- **Environment Variables:** Manage configuration via environment variables (`.env.example` provided). Use Zod for runtime parsing/validation (a sketch follows this list).
- **Modularity & Reusability:** Break down complexity. Use shared utilities/facades.
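
To illustrate the environment-variable rule above, here is a minimal sketch of Zod-based runtime validation; the file location and variable set are illustrative assumptions, not the project's actual schema:

```typescript
// shared/config.ts (hypothetical location): validate process.env once at startup.
import { z } from "zod";

const EnvSchema = z.object({
  LOG_LEVEL: z
    .enum(["trace", "debug", "info", "warn", "error", "fatal"])
    .default("info"),
  NODE_ENV: z.enum(["development", "test", "production"]).default("development"),
  SUPABASE_URL: z.string().url(),
  SUPABASE_SERVICE_ROLE_KEY: z.string().min(1),
});

// Throws with a readable report if a required variable is missing or malformed,
// so the rest of the code can consume typed, validated config.
export const env = EnvSchema.parse(process.env);
```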

### Detailed Language & Framework Conventions

#### TypeScript/Node.js (Next.js & Supabase Functions) Specifics:

- **Immutability:** Prefer immutable data structures (e.g., `Readonly<T>`, `as const`). Follow Zustand patterns for immutable state updates in React. (Several of these conventions are sketched after this list.)
- **Functional vs. OOP:** Favor functional constructs for data transformation and utilities. Use classes for services/facades managing state, or as the framework dictates (e.g., React functional components with Hooks preferred).
- **Error Handling Specifics:** `throw new Error('...')` or custom error classes. Ensure `Promise` rejections are `Error` objects.
- **Null/Undefined Handling:** With `strictNullChecks`, handle explicitly. Avoid the `!` non-null assertion; prefer explicit checks, `?.`, and `??`.
- **Module System:** Use ES Modules (`import`/`export`) exclusively.
- **Logging Specifics (Pino):** Use the shared Pino logger. Include a context object (`logger.info({ context }, "message")`), especially `workflowRunId`.
- **Next.js Conventions:** Follow App Router conventions. Use Server Components for data fetching where appropriate; Route Handlers for API endpoints.
- **Supabase Function Conventions:** `index.ts` as the entry point. Functions are self-contained or use `_shared/` utilities. Secure client initialization (admin vs. user).
- **Code Generation Anti-Patterns to Avoid:** Overly nested logic, single-letter variables (except trivial loops), disabling linter/TS errors without cause, bypassing framework security, monolithic functions.
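
A compact, hedged sketch tying several of these conventions together (immutability via `as const`/`Readonly<T>`, custom error classes, and `?.`/`??` instead of `!`); all identifiers are invented for illustration:

```typescript
// Immutability: freeze literal shapes with `as const`, accept Readonly<T> inputs.
const WORKFLOW_STATUSES = ["pending", "failed", "completed"] as const;
type WorkflowStatus = (typeof WORKFLOW_STATUSES)[number];

interface WorkflowRun {
  id: string;
  status: WorkflowStatus;
}

function describeRun(run: Readonly<WorkflowRun>): string {
  // run.status = "failed"; // compile error: `run` is readonly
  return `${run.id}: ${run.status}`;
}

// Error handling: throw Error subclasses so rejections are always Error objects.
class ExternalApiError extends Error {
  constructor(message: string, public readonly service: string) {
    super(message);
    this.name = "ExternalApiError";
  }
}

// Null/undefined handling: prefer `?.` and `??` over the `!` assertion.
function trackTitle(run?: { details?: { title?: string } }): string {
  return run?.details?.title ?? "untitled";
}
```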

## Overall Testing Strategy

(As detailed previously, covering Unit Tests with Jest/RTL, Integration Tests, E2E Tests with Playwright, an 80% unit test coverage target, specific mocking strategies for facades and external dependencies, and test data management.)

- **Tools:** Jest (unit/integration), React Testing Library (RTL) (React components), Playwright (E2E). Supabase CLI for local DB/function testing.
- **Unit Tests:**
  - **Scope:** Isolate individual functions, methods, classes, React components. Focus on logic, transformations, component rendering.
  - **Location & Naming:** Co-located with source files (`*.test.ts`, `*.spec.ts`, `*.test.tsx`, `*.spec.tsx`).
  - **Mocking/Stubbing:** Jest mocks for dependencies. External API facades are mocked when testing services that use them. Facades themselves are tested by mocking the underlying HTTP client or library's network calls (see the sketch after this list).
  - **AI Agent Responsibility:** Generate unit tests covering logic paths, props, events, edge cases, and error conditions for new/modified code.
- **Integration Tests:**
  - **Scope:** Interactions between components/services (e.g., API route -> service -> DB).
  - **Location:** `tests/integration/`.
  - **Environment:** Local Supabase dev environment. Consider `msw` for mocking HTTP services called by frontend/backend.
  - **AI Agent Responsibility:** Generate tests for key service interactions or API contracts.
- **End-to-End (E2E) Tests:**
  - **Scope:** Validate complete user flows via the UI.
  - **Tool:** Playwright. Location: `tests/e2e/`.
  - **Key Scenarios (MVP):** View newsletter list, view detail, play podcast, download newsletter.
  - **AI Agent Responsibility:** Generate E2E test stubs/scripts for critical paths.
- **Test Coverage:**
  - **Target:** Aim for **80% unit test coverage** for new business logic and critical components. Quality over quantity.
  - **Measurement:** Jest coverage reports.
- **Mocking/Stubbing Strategy (General):** Test one unit at a time. Mock external dependencies for unit tests. For facade unit tests: use the real library but mock its external calls at the library's boundary.
- **Test Data Management:** Inline mock data for unit tests. Factories/fixtures or `seed.sql` for integration/E2E tests.
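
As one hedged example of the facade-mocking rule above, a unit test for a service might stub the facade module entirely; the module paths and function names here are illustrative assumptions, not the project's actual API:

```typescript
// newsletter-generation-service.test.ts (sketch; paths/names are illustrative)
import { generateNewsletter } from "./newsletter-generation-service";
import { sendEmail } from "../_shared/nodemailer-facade";

// Mock the facade module so the unit under test never touches the network.
jest.mock("../_shared/nodemailer-facade");
const sendEmailMock = sendEmail as jest.MockedFunction<typeof sendEmail>;

describe("generateNewsletter", () => {
  it("sends the rendered newsletter through the facade", async () => {
    sendEmailMock.mockResolvedValueOnce({ delivered: true });

    await generateNewsletter({ workflowRunId: "run-123" });

    expect(sendEmailMock).toHaveBeenCalledWith(
      expect.objectContaining({ subject: expect.any(String) })
    );
  });

  it("propagates facade failures", async () => {
    sendEmailMock.mockRejectedValueOnce(new Error("SMTP unavailable"));

    await expect(
      generateNewsletter({ workflowRunId: "run-123" })
    ).rejects.toThrow();
  });
});
```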

## Security Best Practices

(As detailed previously, including Zod for input validation, output encoding, secrets management via environment variables, dependency security scanning, API key authentication for system APIs, Play.ht webhook verification, Supabase RLS, principle of least privilege, HTTPS, and secure error information disclosure.)

- **Input Sanitization/Validation:** Zod for all external inputs (API requests, function payloads, external API responses). Validate at component boundaries. (Validation and API-key checks are sketched after this list.)
- **Output Encoding:** Rely on React JSX auto-escaping for the frontend. Ensure newsletter HTML is sanitized if dynamic data is injected outside of a secure templating engine.
- **Secrets Management:** Via environment variables (Vercel UI, `.env.local`). Never hardcode or log secrets. Access via `process.env`. Use the Supabase service role key only in backend functions.
- **Dependency Security:** Regular `npm audit`. Vet new dependencies.
- **Authentication/Authorization:**
  - Workflow Trigger/Status APIs: API Key (`X-API-KEY`).
  - Play.ht Webhook: Shared secret or signature verification.
  - Supabase RLS: Enable on tables, define policies (especially for `subscribers` and any data directly queried by the frontend).
- **Principle of Least Privilege:** Scope API keys and database roles narrowly.
- **API Security (General):** HTTPS (Vercel default). Consider rate limiting for public APIs. Standard HTTP security headers.
- **Error Handling & Information Disclosure:** Log detailed errors server-side; return generic messages/error IDs to clients.
- **Regular Security Audits/Testing (Post-MVP):** Consider for future enhancements.
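
A minimal sketch combining the API-key and input-validation rules above in a Next.js Route Handler; the route path matches the trigger endpoint named elsewhere in this document, while the file location, env var name, and response shape are illustrative assumptions:

```typescript
// app/(api)/system/trigger-workflow/route.ts (sketch; location is illustrative)
import { NextResponse } from "next/server";
import { z } from "zod";

const BodySchema = z.object({
  source: z.enum(["api", "cli"]).default("api"),
});

export async function POST(request: Request) {
  // API key authentication for system endpoints (env var name is invented).
  if (request.headers.get("X-API-KEY") !== process.env.WORKFLOW_API_KEY) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }

  // Validate input at the boundary; never echo internals back to the client.
  const parsed = BodySchema.safeParse(await request.json().catch(() => ({})));
  if (!parsed.success) {
    return NextResponse.json({ error: "Invalid request body" }, { status: 400 });
  }

  // ...create the workflow_runs row here, then return the job identifier:
  return NextResponse.json({ jobId: "generated-workflow-run-id" }, { status: 202 });
}
```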

## Key Reference Documents

@@ -969,6 +1267,7 @@ This section outlines the definitive technology choices for the BMad DiCaster pr

10. **Next.js Documentation:** [https://nextjs.org/docs](https://nextjs.org/docs)
11. **Vercel Documentation:** [https://vercel.com/docs](https://vercel.com/docs)
12. **Pino Logging Documentation:** [https://getpino.io/](https://getpino.io/)
13. **Zod Documentation:** [https://zod.dev/](https://zod.dev/)

## Change Log

@@ -1032,3 +1331,7 @@ You are now tasked with defining the detailed **Frontend Architecture** for the

10. **Key Frontend Libraries & Versioning:** Confirm versions from the main tech stack and list any additional frontend-only libraries required.

Your output should be a clean, well-formatted `frontend-architecture.md` document ready for AI developer agents to use for frontend implementation. Adhere to the output formatting guidelines. You are now operating in **Frontend Architecture Mode**.

---

This concludes the BMad DiCaster Architecture Document.

`BETA-V3/v3-demos/full-stack-app-demo/4-arch-suggested-changes.md` (new file, 110 lines)

@@ -0,0 +1,110 @@

Here’s a summary of potential additions and adjustments:

**New Technical Epics/Stories (or significant additions to existing ones):**

1. **New Technical Story (or Sub-tasks under Epic 1): Workflow Orchestration Setup**

   - **Goal:** Implement the core workflow orchestration mechanism.
   - **Stories/Tasks:**
     - **Story: Define and Implement `workflow_runs` Table and `WorkflowTrackerService`** (one possible interface for this service is sketched after this epic's stories)
       - Acceptance Criteria:
         - Supabase migration created for the `workflow_runs` table as defined in the architecture document.
         - `WorkflowTrackerService` implemented in `supabase/functions/_shared/` with methods for initiating, updating step details, incrementing counters, failing, and completing workflow runs.
         - Service includes robust error handling and logging via Pino.
         - Unit tests for `WorkflowTrackerService` achieve >80% coverage.
     - **Story: Implement `CheckWorkflowCompletionService` (Supabase Cron Function)**
       - Acceptance Criteria:
         - Supabase Function `check-workflow-completion-service` created.
         - Function queries `workflow_runs` and related tables to determine whether a workflow run is ready to progress to the next major stage (e.g., from summarization to newsletter generation, or from podcast initiated to delivery).
         - Function correctly updates `workflow_runs.status` and invokes the next appropriate service function (e.g., `NewsletterGenerationService`) via a Supabase database webhook trigger or a direct HTTP call if preferred.
         - Logic for handling podcast link availability (delay/retry/timeout before sending email) is implemented here or in conjunction with `NewsletterGenerationService`.
         - The function is configurable to run periodically via Vercel Cron Jobs (or `pg_cron`).
         - Comprehensive logging implemented using Pino.
         - Unit tests achieve >80% coverage.
     - **Story: Implement Workflow Status API Endpoint (`/api/system/workflow-status/{jobId}`)**
       - Acceptance Criteria:
         - Next.js API Route Handler created at `/api/system/workflow-status/{jobId}`.
         - Endpoint secured with API key authentication.
         - Retrieves and returns status details from the `workflow_runs` table for the given `jobId`.
         - Handles cases where `jobId` is not found (404).
         - Unit and integration tests for the API endpoint.
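
To make these acceptance criteria more concrete, one possible shape for the service is sketched below; this is illustrative only (method names simply mirror the verbs in the criteria, and the column names and RPC are invented), not a committed API:

```typescript
// supabase/functions/_shared/workflow-tracker-service.ts (illustrative sketch)
import type { SupabaseClient } from "@supabase/supabase-js";

export interface WorkflowTrackerService {
  initiateRun(trigger: "api" | "cli" | "schedule"): Promise<{ workflowRunId: string }>;
  updateStepDetails(runId: string, step: string, details: Record<string, unknown>): Promise<void>;
  incrementCounter(runId: string, counter: string, by?: number): Promise<void>;
  failWorkflow(runId: string, error: Error): Promise<void>;
  completeWorkflow(runId: string): Promise<void>;
}

export function createWorkflowTracker(supabase: SupabaseClient): WorkflowTrackerService {
  // Shared helper: patch a workflow_runs row and surface DB errors.
  const update = async (runId: string, patch: Record<string, unknown>) => {
    const { error } = await supabase.from("workflow_runs").update(patch).eq("id", runId);
    if (error) throw error;
  };

  return {
    async initiateRun(trigger) {
      const { data, error } = await supabase
        .from("workflow_runs")
        .insert({ status: "pending", details: { trigger } })
        .select("id")
        .single();
      if (error) throw error;
      return { workflowRunId: data.id };
    },
    updateStepDetails: (runId, step, details) =>
      update(runId, { current_step: step, details }),
    incrementCounter: async (runId, counter, by = 1) => {
      // A Postgres function keeps the increment atomic (RPC name is invented).
      const { error } = await supabase.rpc("increment_workflow_counter", {
        run_id: runId,
        counter_name: counter,
        amount: by,
      });
      if (error) throw error;
    },
    failWorkflow: (runId, err) =>
      update(runId, { status: "failed", error_message: err.message }),
    completeWorkflow: (runId) => update(runId, { status: "completed" }),
  };
}
```

A factory taking the Supabase client keeps the service testable: unit tests can pass a mocked client rather than stubbing module internals.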

2. **New Technical Story (under Epic 3: AI-Powered Content Summarization): Implement LLM Facade and Configuration**

   - **Goal:** Create a flexible interface for interacting with different LLM providers for summarization.
   - **Story: Design and Implement `LLMFacade`** (a sketch of the interface and factory follows this story)
     - Acceptance Criteria:
       - `LLMFacade` interface and concrete implementations (e.g., `OllamaAdapter`, `RemoteLLMApiAdapter`) created in `supabase/functions/_shared/llm-facade.ts`.
       - Factory function implemented to select the LLM adapter based on environment variables (`LLM_PROVIDER_TYPE`, `OLLAMA_API_URL`, `REMOTE_LLM_API_KEY`, `REMOTE_LLM_API_URL`, `LLM_MODEL_NAME`).
       - Facade handles making requests to the respective LLM APIs and parsing responses.
       - Error handling and retry logic for transient API errors implemented within the facade.
       - Unit tests for the facade and adapters (mocking actual HTTP calls) achieve >80% coverage.
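
A hedged sketch of the interface and env-driven factory described above; the Ollama request shape follows its public API, the remote adapter is elided, and retry logic is omitted for brevity:

```typescript
// supabase/functions/_shared/llm-facade.ts (sketch; adapter internals elided)
export interface LLMFacade {
  summarize(text: string, prompt: string): Promise<string>;
}

class OllamaAdapter implements LLMFacade {
  constructor(private apiUrl: string, private model: string) {}

  async summarize(text: string, prompt: string): Promise<string> {
    // Ollama's generate endpoint; request/response shape per its public API.
    const res = await fetch(`${this.apiUrl}/api/generate`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: this.model,
        prompt: `${prompt}\n\n${text}`,
        stream: false,
      }),
    });
    if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
    const data = await res.json();
    return data.response;
  }
}

// Factory: selects the adapter from the env vars named in the story above.
export function createLLMFacade(): LLMFacade {
  switch (process.env.LLM_PROVIDER_TYPE) {
    case "ollama": {
      const url = process.env.OLLAMA_API_URL;
      const model = process.env.LLM_MODEL_NAME;
      if (!url || !model) {
        throw new Error("OLLAMA_API_URL and LLM_MODEL_NAME must be set");
      }
      return new OllamaAdapter(url, model);
    }
    // case "remote": return new RemoteLLMApiAdapter(...); // elided
    default:
      throw new Error(
        `Unsupported LLM_PROVIDER_TYPE: ${process.env.LLM_PROVIDER_TYPE}`
      );
  }
}
```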

3. **New Technical Story (or Sub-task under relevant Epics): Implement Facades for External Services**

   - **Goal:** Encapsulate all external service interactions.
   - **Stories/Tasks:**
     - **Story: Implement `HNAlgoliaFacade`** (for HN Content Service)
     - **Story: Implement `PlayHTFacade`** (for Podcast Generation Service)
     - **Story: Implement `NodemailerFacade`** (for Newsletter Generation Service)
   - Acceptance Criteria (general, for each facade):
     - Facade created in `supabase/functions/_shared/`.
     - Handles API authentication, request formation, and response parsing for the specific service.
     - Implements basic retry logic for transient errors.
     - Unit tests (mocking actual HTTP/library calls) achieve >80% coverage.

**Adjustments to Existing Epics/Stories:**

- **Epic 1: Project Initialization, Setup, and HN Content Acquisition**

  - **Story 1.3 (API and CLI trigger):**
    - Modify AC: "The API endpoint (`/api/system/trigger-workflow`) creates an entry in the `workflow_runs` table and returns the `jobId`."
    - Add AC: "The API endpoint is secured via an API key."
    - Add AC: "The CLI command invokes the `/api/system/trigger-workflow` endpoint or directly interacts with `WorkflowTrackerService` to start a new workflow run."
    - Add AC: "All interactions with the API or CLI that initiate a workflow must record the `workflow_run_id` in logs."
  - **Story 1.4 (Retrieve HN Posts & Comments):**
    - Modify AC: "Retrieved data (posts and comments) is stored in the Supabase database, linked to the current `workflow_run_id`."
    - Add AC: "Upon completion, the service updates the `workflow_runs` table with status and details (e.g., number of posts fetched) via `WorkflowTrackerService`."
    - Add AC: "For each new `hn_posts` record, a database event/webhook is configured to trigger the Article Scraping Service (Epic 2), passing `hn_post_id` and `workflow_run_id`."

- **Epic 2: Article Scraping**

  - **Story 2.2 & 2.3 (Scrape & Store):**
    - Modify AC: "Scraped article content is stored in the `scraped_articles` table, linked to the `hn_post_id` and the current `workflow_run_id`."
    - Add AC: "The `scraping_status` and any `error_message` are recorded in the `scraped_articles` table."
    - Add AC: "Upon completion of scraping an article (success or failure), the service updates `workflow_runs.details` (e.g., incrementing scraped counts) via `WorkflowTrackerService`."
    - Add AC: "For each successfully scraped article, a database event/webhook is configured to trigger the Summarization Service (Epic 3) for that article, passing `scraped_article_id` and `workflow_run_id`."
  - **Story 2.4 (Trigger scraping via API/CLI):** This story may become redundant if the main workflow trigger (Story 1.3) handles the entire pipeline initiation. If retained for isolated testing, ensure it also works with the `workflow_run_id` concept.

- **Epic 3: AI-Powered Content Summarization**

  - **Story 3.1 (Integrate AI summarization):** Refine to "Integrate with `LLMFacade` for summarization."
  - **Story 3.2 & 3.3 (Generate Summaries):**
    - Modify AC: "Summaries are stored in the `article_summaries` or `comment_summaries` tables, linked to the `scraped_article_id` (for articles) or `hn_post_id` (for comments), and the current `workflow_run_id`."
    - Add AC: "The service retrieves prompts from the `summarization_prompts` table."
    - Add AC: "Upon completion of each summarization task, the service updates `workflow_runs.details` (e.g., incrementing summaries-generated counts) via `WorkflowTrackerService`."
    - Add AC (implicit): The `CheckWorkflowCompletionService` will monitor these tables to determine when all summarization for a `workflow_run_id` is complete.

- **Epic 4: Automated Newsletter Creation and Distribution**

  - **Story 4.1 & 4.2 (Retrieve template & Generate Newsletter):**
    - Modify AC: "The `NewsletterGenerationService` is triggered by the `CheckWorkflowCompletionService` when all summaries for a `workflow_run_id` are ready."
    - Add AC: "The service retrieves the newsletter template from the `newsletter_templates` table and the summaries associated with the `workflow_run_id`."
    - Modify AC: "The generated newsletter is stored in the `newsletters` table, linked to the `workflow_run_id`."
    - Add AC: "The service updates `workflow_runs.status` to 'generating_podcast' (or similar) after initiating podcast generation."
  - **Story 4.3 (Send newsletter):**
    - Modify AC: "The `NewsletterGenerationService` (specifically, its delivery part) is triggered by `CheckWorkflowCompletionService` once the podcast link is available for the `workflow_run_id` (or a timeout/failure occurred for the podcast step)."
    - Modify AC: "Implements conditional logic for podcast link inclusion and handles delay/retry as per the PRD, coordinated by `CheckWorkflowCompletionService`."
    - Add AC: "Updates `newsletters.delivery_status` and `workflow_runs.status` to 'completed' or 'failed' via `WorkflowTrackerService`."

- **Epic 5: Podcast Generation Integration**

  - **Story 5.1, 5.2, 5.3 (Integrate Play.ht, Send Content, Webhook Handler):**
    - Modify AC: The `PodcastGenerationService` is invoked by `NewsletterGenerationService` (or `CheckWorkflowCompletionService`) for a specific `workflow_run_id` and `newsletter_id`.
    - Modify AC: The `podcast_playht_job_id` and `podcast_status` are stored in the `newsletters` table.
    - Modify AC: The `PlayHTWebhookHandlerAPI` (`/api/webhooks/playht`) updates the `newsletters` table with the podcast URL and status.
    - Add AC: The `PlayHTWebhookHandlerAPI` also updates `workflow_runs.details` with the podcast status via `WorkflowTrackerService` for the relevant `workflow_run_id` (looked up from the `newsletter_id` or `playht_job_id`).

**General Considerations for all Epics:**

- **Error Handling & Logging:** Reinforce that all services must implement robust error handling, log extensively using Pino (including `workflow_run_id`), and update `workflow_runs` appropriately on success or failure using the `WorkflowTrackerService`.
- **Database Webhook/Trigger Configuration:** Add technical tasks/stories for setting up the actual Supabase database webhooks (e.g., `pg_net` calls) or triggers that connect the pipeline steps (e.g., an `hn_posts` insert triggering `ArticleScrapingService`). This is a crucial implementation detail of the event-driven flow.
- **Environment Variable Management:** A story to create and document `docs/environment-vars.md` and set up `.env.example`.

`BETA-V3/v3-demos/full-stack-app-demo/5-front-end-architecture.md` (new file, 572 lines)

@@ -0,0 +1,572 @@

# BMad DiCaster Frontend Architecture Document

## Table of Contents

- [Introduction](#introduction)
- [Overall Frontend Philosophy & Patterns](#overall-frontend-philosophy--patterns)
- [Detailed Frontend Directory Structure](#detailed-frontend-directory-structure)
- [Component Breakdown & Implementation Details](#component-breakdown--implementation-details)
  - [Component Naming & Organization](#component-naming--organization)
  - [Template for Component Specification](#template-for-component-specification)
- [State Management In-Depth](#state-management-in-depth)
  - [Chosen Solution](#chosen-solution)
  - [Rationale](#rationale)
  - [Store Structure / Slices](#store-structure--slices)
  - [Key Selectors](#key-selectors)
  - [Key Actions / Reducers / Thunks](#key-actions--reducers--thunks)
- [API Interaction Layer](#api-interaction-layer)
  - [Client/Service Structure](#clientservice-structure)
  - [Error Handling & Retries (Frontend)](#error-handling--retries-frontend)
- [Routing Strategy](#routing-strategy)
  - [Routing Library](#routing-library)
  - [Route Definitions](#route-definitions)
  - [Route Guards / Protection](#route-guards--protection)
- [Build, Bundling, and Deployment](#build-bundling-and-deployment)
  - [Build Process & Scripts](#build-process--scripts)
  - [Key Bundling Optimizations](#key-bundling-optimizations)
  - [Deployment to CDN/Hosting](#deployment-to-cdnhosting)
- [Frontend Testing Strategy](#frontend-testing-strategy)
  - [Link to Main Testing Strategy](#link-to-main-testing-strategy)
  - [Component Testing](#component-testing)
  - [UI Integration/Flow Testing](#ui-integrationflow-testing)
  - [End-to-End UI Testing Tools & Scope](#end-to-end-ui-testing-tools--scope)
- [Accessibility (AX) Implementation Details](#accessibility-ax-implementation-details)
- [Performance Considerations](#performance-considerations)
- [Change Log](#change-log)

## Introduction

This document details the technical architecture specifically for the frontend of BMad DiCaster. It complements the main BMad DiCaster Architecture Document and the UI/UX Specification. The goal is to provide a clear blueprint for frontend development, ensuring consistency, maintainability, and alignment with the overall system design and user experience goals.

- **Link to Main Architecture Document:** `docs/architecture.md` (Note: the overall system architecture, including monorepo/polyrepo decisions and backend structure, will influence frontend choices, especially around shared code and API interaction patterns.)
- **Link to UI/UX Specification:** `docs/ui-ux-spec.txt`
- **Link to Primary Design Files (Figma, Sketch, etc.):** N/A (low-fidelity wireframes are described in `docs/ui-ux-spec.txt`; detailed mockups to be created during development)
- **Link to Deployed Storybook / Component Showcase (if applicable):** N/A (to be developed)

## Overall Frontend Philosophy & Patterns

The frontend for BMad DiCaster will be built using modern, efficient, and maintainable practices, leveraging the Vercel/Supabase Next.js App Router template as a starting point. The core philosophy is to create a responsive, fast-loading, and accessible user interface that aligns with the "synthwave technical glowing purple vibes" aesthetic.

- **Framework & Core Libraries:**

  - **Next.js (latest, e.g., 14.x, App Router):** Chosen for its robust full-stack capabilities, server-side rendering (SSR) and static site generation (SSG) options, optimized performance, and seamless integration with Vercel for deployment. The App Router will be used for routing and layouts. (Version: `latest`, aligned with `architecture.txt` [cite: 187, 188])
  - **React (19.0.0):** As the underlying UI library for Next.js, React's component-based architecture allows for modular and reusable UI elements. (Version: `19.0.0`, aligned with `architecture.txt` [cite: 190, 191])
  - **TypeScript (5.7.2):** For strong typing, improved code quality, and a better developer experience. (Version: `5.7.2`, aligned with `architecture.txt` [cite: 178, 179])

- **Component Architecture:**

  - **Shadcn UI (latest):** This collection of reusable UI components, built on Radix UI and Tailwind CSS, will be used for foundational elements like buttons, cards, and dialogs. Components are added via its CLI, making them directly part of our codebase and easily customizable. (Aligned with `architecture.txt` [cite: 198, 199])
  - **Application-Specific Components:** Custom components will be developed for unique UI parts not covered by Shadcn UI (e.g., `NewsletterCard`, `PodcastPlayer`). These will be organized within `app/components/core/`.
  - **Structure:** Components will primarily be Server Components by default for optimal performance, with Client Components (`"use client"`) adopted only when interactivity or browser-specific APIs are required (e.g., event handlers, state hooks).

- **State Management Strategy:**

  - **Zustand (latest):** A lightweight, unopinionated, and simple state management solution for React. It will be used for global client-side state that needs to be shared across multiple components, such as UI state (e.g., podcast player status). (Aligned with `architecture.txt` [cite: 228, 229])
  - **React Context API / Server Components:** For simpler state-sharing scenarios, or for passing data down component trees where Zustand would be overkill, React Context or Next.js Server Component props will be preferred.

- **Data Flow:**

  - **Unidirectional Data Flow:** Data will primarily flow from Server Components (fetching data from Supabase) down to Client Components via props.
  - **Server Actions / Route Handlers:** For mutations or actions triggered from Client Components (e.g., future admin functionality), Next.js Server Actions or dedicated API Route Handlers will interact with the backend, which in turn updates the Supabase database. Data revalidation (e.g., `revalidatePath` or `revalidateTag`) will refresh data in Server Components.
  - **Supabase Client:** The Supabase JS client (from `utils/supabase/client.ts` for Client Components and `utils/supabase/server.ts` for Server Components/Actions) will be the primary means of interacting with the Supabase backend for data fetching and mutations.

- **Styling Approach:**

  - **Tailwind CSS (3.4.17):** A utility-first CSS framework for rapid UI development and consistent styling. It will be used for all styling, including achieving the "synthwave technical glowing purple vibes." (Version `3.4.17`, aligned with `architecture.txt` [cite: 194, 195])
  - **Shadcn UI:** Leverages Tailwind CSS for its components.
  - **Global Styles:** `app/globals.css` will hold base Tailwind directives and any genuinely global style definitions.
  - **Theme Customization:** `tailwind.config.ts` will extend Tailwind's default theme with custom colors (e.g., synthwave purples like `#800080` as an accent [cite: 584]), fonts, or spacing as needed. The "synthwave technical glowing purple vibes" will be achieved through a dark base theme, with purple accents for interactive elements and highlights, and potentially subtle text shadows or glows on specific headings or decorative elements. Font choices will lean towards modern, clean sans-serifs as specified in `ux-ui-spec.txt` [cite: 585], potentially with a more stylized font for major headings if it fits the theme without compromising readability.

- **Key Design Patterns Used:**
  - **Server Components & Client Components:** Utilizing the Next.js App Router paradigm.
  - **Hooks:** Extensive use of React hooks (`useState`, `useEffect`, `useContext`) and custom hooks for reusable logic.
  - **Provider Pattern:** For React Context API usage when necessary.
  - **Facade Pattern (conceptual, for API interaction):** The Supabase client utilities (`utils/supabase/client.ts`, `utils/supabase/server.ts`) act as a facade abstracting direct database interactions. Data-fetching logic will be encapsulated in Server Components or specific data-fetching functions.

## Detailed Frontend Directory Structure

The BMad DiCaster frontend will adhere to the Next.js App Router conventions and build upon the structure provided by the Vercel/Supabase Next.js App Router template. The monorepo structure defined in the main Architecture Document (`docs/architecture.md`) already outlines the top-level directories. This section details the frontend-specific organization.

**Naming Conventions Adopted:**

- **Directories:** `kebab-case` (e.g., `app/(web)/newsletter-list/`, `app/components/core/`)
- **React Component Files (.tsx):** `PascalCase.tsx` (e.g., `NewsletterCard.tsx`, `PodcastPlayer.tsx`). Next.js App Router special files (e.g., `page.tsx`, `layout.tsx`, `loading.tsx`, `global-error.tsx`, `not-found.tsx`) retain their conventional lowercase or kebab-case names.
- **Non-Component TypeScript Files (.ts):** Primarily `camelCase.ts` (e.g., `utils.ts`, `uiSlice.ts`). Configuration files (e.g., `tailwind.config.ts`) and shared type definition files (e.g., `api-schemas.ts`, `domain-models.ts`) may retain `kebab-case` as per common practice or previous agreement.

```plaintext
{project-root}/
├── app/                          # Next.js App Router (Frontend Pages, Layouts, API Routes)
│   ├── (web)/                    # Group for user-facing web pages
│   │   ├── newsletters/          # Route group for newsletter features
│   │   │   ├── [newsletterId]/   # Dynamic route for individual newsletter detail
│   │   │   │   ├── page.tsx      # Newsletter Detail Page component
│   │   │   │   └── loading.tsx   # Optional: Loading UI for this route
│   │   │   ├── page.tsx          # Newsletter List Page component
│   │   │   └── layout.tsx        # Optional: Layout specific to /newsletters routes
│   │   ├── layout.tsx            # Root layout for all (web) pages
│   │   └── page.tsx              # Homepage (displays newsletter list)
│   ├── (api)/                    # API route handlers (as defined in main architecture [cite: 82, 127, 130, 133])
│   │   ├── system/
│   │   │   └── ...
│   │   └── webhooks/
│   │       └── ...
│   ├── components/               # Application-specific UI React components (core logic)
│   │   ├── core/                 # Core, reusable application components
│   │   │   ├── NewsletterCard.tsx
│   │   │   ├── PodcastPlayer.tsx
│   │   │   ├── DownloadButton.tsx
│   │   │   └── BackButton.tsx
│   │   └── layout/               # General layout components
│   │       └── PageWrapper.tsx   # Consistent padding/max-width for pages
│   ├── auth/                     # Auth-related pages and components (from template; MVP frontend is public)
│   ├── login/page.tsx            # Login page (from template; MVP frontend is public)
│   ├── global-error.tsx          # Optional: Custom global error UI (Next.js special file)
│   └── not-found.tsx             # Optional: Custom 404 page UI (Next.js special file)
├── components/                   # Shadcn UI components root (as configured by components.json [cite: 92])
│   └── ui/                       # Base UI elements from Shadcn (e.g., Button.tsx, Card.tsx)
├── lib/                          # General utility functions for the frontend [cite: 86, 309]
│   ├── utils.ts                  # General utility functions (date formatting, etc.)
│   └── hooks/                    # Custom global React hooks
│       └── useScreenWidth.ts     # Example custom hook
├── store/                        # Zustand state management
│   ├── index.ts                  # Main store setup/export (can be store.ts or index.ts)
│   └── slices/                   # Individual state slices
│       └── podcastPlayerSlice.ts # State for the podcast player
├── public/                       # Static assets (images, favicon, etc.) [cite: 89]
│   └── logo.svg                  # Application logo (to be provided [cite: 379])
├── shared/                       # Shared code/types between frontend and Supabase functions [cite: 89, 97]
│   └── types/
│       ├── api-schemas.ts        # Zod schemas for API req/res
│       └── domain-models.ts      # Core entity types (HNPost, Newsletter, etc. from main arch)
├── styles/                       # Global styles [cite: 90]
│   └── globals.css               # Tailwind base styles, custom global styles
├── utils/                        # Root utilities (from template [cite: 91])
│   └── supabase/                 # Supabase helper functions FOR FRONTEND (from template [cite: 92, 309])
│       ├── client.ts             # Client-side Supabase client
│       ├── middleware.ts         # Logic for Next.js middleware (Supabase auth [cite: 92, 311])
│       └── server.ts             # Server-side Supabase client
├── tailwind.config.ts            # Tailwind CSS configuration [cite: 93]
└── tsconfig.json                 # TypeScript configuration (includes path aliases like @/* [cite: 101])
```

### Notes on Frontend Structure:

- **`app/(web)/`**: Route group for user-facing pages.
  - **`newsletters/page.tsx`**: Server Component for listing newsletters. [cite: 375, 573]
  - **`newsletters/[newsletterId]/page.tsx`**: Server Component for displaying a single newsletter. [cite: 376, 576]
- **`app/components/core/`**: Houses application-specific React components like `NewsletterCard.tsx`, `PodcastPlayer.tsx`, `DownloadButton.tsx`, and `BackButton.tsx` (identified in `ux-ui-spec.txt`). Components follow `PascalCase.tsx`.
- **`app/components/layout/`**: For structural layout components, e.g., `PageWrapper.tsx`. Components follow `PascalCase.tsx`.
- **`components/ui/`**: Standard directory for Shadcn UI components (e.g., `Button.tsx`, `Card.tsx`).
- **`lib/hooks/`**: Custom React hooks (e.g., `useScreenWidth.ts`); files follow `camelCase.ts`.
- **`store/slices/`**: Zustand state slices. `podcastPlayerSlice.ts` holds podcast player state. Files follow `camelCase.ts`.
- **`shared/types/`**: Type definitions. `api-schemas.ts` and `domain-models.ts` use `kebab-case.ts`.
- **`utils/supabase/`**: Template-provided Supabase clients. Files follow `camelCase.ts`.
- **Path Aliases**: `tsconfig.json` uses `@/*` aliases. [cite: 98, 101]

## Component Breakdown & Implementation Details

This section outlines the conventions and templates for defining UI components. While a few globally shared or foundational components (e.g., main layout structures) might be specified upfront to ensure consistency, the detailed specification for most feature-specific components will emerge as user stories are implemented. The key is for the development team (or AI agent) to follow the "Template for Component Specification" below whenever a new component is identified for development.

### Component Naming & Organization

- **Component File Naming:**

  - React component files use `PascalCase.tsx`, e.g., `NewsletterCard.tsx`, `PodcastPlayer.tsx`.
  - Next.js special files like `page.tsx`, `layout.tsx`, `loading.tsx`, `error.tsx`, `global-error.tsx`, and `not-found.tsx` keep their conventional lowercase or kebab-case names.

- **Component Organization (reiterating the directory structure):**

  - **Application-Specific Core Components:** Reusable components specific to BMad DiCaster (e.g., `NewsletterCard`, `PodcastPlayer`) reside in `app/components/core/`.
  - **Application-Specific Layout Components:** Components for structuring page layouts (e.g., `PageWrapper.tsx`) reside in `app/components/layout/`.
  - **Shadcn UI Components:** Components added via the Shadcn UI CLI reside in `components/ui/` (e.g., `Button.tsx`, `Card.tsx`).
  - **Page-Specific Components:** If a component is complex but _only_ used on a single page, it can be co-located with that page's route file, for instance in a `components` subfolder within that route's directory. The preference, however, is to place reusable components in `app/components/core/` or `app/components/layout/`.

### Template for Component Specification

This template should be used to define and document each significant UI component identified from the UI/UX Specification (`docs/ui-ux-spec.txt`) and any subsequent design iterations. The goal is to provide sufficient detail for a developer or an AI agent to implement the component with minimal ambiguity. Most feature-specific components will be detailed emergently during development, following this template.

---

#### Component: `{ComponentName}` (e.g., `NewsletterCard`, `PodcastPlayerControls`)

- **Purpose:** {Briefly describe what this component does and its primary role in the user interface. What user need does it address?}
- **Source File(s):** {e.g., `app/components/core/NewsletterCard.tsx`}
- **Visual Reference:** {Link to a specific Figma frame/component if available, or a detailed description/sketch if not. If based on a Shadcn UI component, note that and any key customizations.}
- **Props (Properties):**

  {List each prop the component accepts. Specify its name, TypeScript type, whether it is required, any default value, and a clear description.}

  | Prop Name     | Type                                | Required? | Default Value | Description                          |
  | :------------ | :---------------------------------- | :-------- | :------------ | :----------------------------------- |
  | `exampleProp` | `string`                            | Yes       | N/A           | Example string prop.                 |
  | `items`       | `Array<{id: string, name: string}>` | Yes       | N/A           | An array of item objects to display. |
  | `variant`     | `'primary' \| 'secondary'`          | No        | `'primary'`   | Visual variant of the component.     |
  | `onClick`     | `(event: React.MouseEvent) => void` | No        | N/A           | Optional click handler.              |

- **Internal State (if any):**

  {Describe any significant internal state the component manages using React hooks (e.g., `useState`).}

  | State Variable | Type             | Initial Value | Description                                       |
  | :------------- | :--------------- | :------------ | :------------------------------------------------ |
  | `isLoading`    | `boolean`        | `false`       | Tracks whether data for the component is loading. |
  | `selectedItem` | `string \| null` | `null`        | Stores the ID of the currently selected item.     |

- **Key UI Elements / Structure (Conceptual):**

  {Describe the main visual parts of the component and their general layout. Reference Shadcn UI components if used as building blocks.}

  ```jsx
  // Example for a Card component
  <Card>
    <CardHeader>
      <CardTitle>{"{titleProp}"}</CardTitle>
      <CardDescription>{"{descriptionProp}"}</CardDescription>
    </CardHeader>
    <CardContent>{/* {list of items or main content} */}</CardContent>
    <CardFooter>{/* {action buttons or footer content} */}</CardFooter>
  </Card>
  ```

- **Events Handled / Emitted:**
  - **Handles:** {List significant DOM events the component handles directly.}
  - **Emits (Callbacks):** {If the component uses props to emit events (callbacks) to its parent, list them here.}
- **Actions Triggered (Side Effects):**
  - **State Management (Zustand):** {If the component interacts with a Zustand store, specify which store and which actions.}
  - **API Calls / Data Fetching:** {Specify how Client Components trigger mutations or re-fetches (e.g., Server Actions).}
- **Styling Notes:**
  - {Reference to specific Shadcn UI components used.}
  - {Key Tailwind CSS classes or custom styles for the "synthwave" theme.}
  - {Specific responsiveness behavior.}
- **Accessibility (AX) Notes:**
  - {Specific ARIA attributes needed.}
  - {Keyboard navigation considerations.}
  - {Focus management details.}
  - {Notes on color contrast.}

---

_This template will be applied to each new significant component during the development process._

## State Management In-Depth

This section expands on the state management strategy (Zustand) outlined in "Overall Frontend Philosophy & Patterns."

- **Chosen Solution:** **Zustand** (latest version, as per `architecture.txt` [cite: 228, 229])
- **Rationale:** Zustand was chosen for its simplicity, small bundle size, and unopinionated nature, which suits BMad DiCaster's relatively simple frontend state needs (e.g., podcast player status). Server-side data is primarily managed by Next.js Server Components.

### Store Structure / Slices

Global client-side state will be organized into distinct "slices" within `store/slices/`. Components can import and use individual stores directly.

- **Conventions:**
  - Each slice lives in its own file: `store/slices/camelCaseSlice.ts`.
  - Each slice defines its state interface, initial state, and action functions.
- **Core Slice: `podcastPlayerSlice.ts`** (for MVP)

  - **Purpose:** Manages the state of the podcast player (current track, playback status, time, volume).
  - **Source File:** `store/slices/podcastPlayerSlice.ts`
  - **State Shape (Example):**

    ```typescript
    interface PodcastTrack {
      id: string; // Could be newsletterId or a specific audio ID
      title: string;
      audioUrl: string;
      duration?: number; // in seconds
    }

    interface PodcastPlayerState {
      currentTrack: PodcastTrack | null;
      isPlaying: boolean;
      currentTime: number; // in seconds
      volume: number; // 0 to 1
      isLoading: boolean;
      error: string | null;
    }

    interface PodcastPlayerActions {
      loadTrack: (track: PodcastTrack) => void;
      play: () => void;
      pause: () => void;
      setCurrentTime: (time: number) => void;
      setVolume: (volume: number) => void;
      setError: (message: string | null) => void;
      resetPlayer: () => void;
    }
    ```

  - **Key Actions:** `loadTrack`, `play`, `pause`, `setCurrentTime`, `setVolume`, `setError`, `resetPlayer`.
  - **Zustand Store Definition:**

    ```typescript
    import { create } from "zustand";

    // Previously defined interfaces: PodcastTrack, PodcastPlayerState, PodcastPlayerActions

    const initialPodcastPlayerState: PodcastPlayerState = {
      currentTrack: null,
      isPlaying: false,
      currentTime: 0,
      volume: 0.75,
      isLoading: false,
      error: null,
    };

    export const usePodcastPlayerStore = create<
      PodcastPlayerState & PodcastPlayerActions
    >((set) => ({
      ...initialPodcastPlayerState,
      loadTrack: (track) =>
        set({
          currentTrack: track,
          isLoading: true, // Assume loading until the actual audio element confirms
          error: null,
          isPlaying: false, // Usually don't autoplay on load
          currentTime: 0,
        }),
      play: () =>
        set((state) => {
          if (!state.currentTrack) return {}; // No track loaded
          return { isPlaying: true, isLoading: false, error: null };
        }),
      pause: () => set({ isPlaying: false }),
      setCurrentTime: (time) => set({ currentTime: time }),
      setVolume: (volume) => set({ volume: Math.max(0, Math.min(1, volume)) }),
      setError: (message) =>
        set({ error: message, isLoading: false, isPlaying: false }),
      resetPlayer: () => set({ ...initialPodcastPlayerState }),
    }));
    ```

### Key Selectors

Selectors are functions that derive data from store state. With Zustand, state is typically accessed directly from the hook; memoized selectors (e.g., with `reselect`) can be introduced if complex derived data is needed, though direct access is fine for simple cases.

- **Convention:** For direct state access, components use: `const { currentTrack, isPlaying, play } = usePodcastPlayerStore();`
- **Example Selectors (if using `reselect` or similar for more complex derivations later):**
  - `selectCurrentTrackTitle`: Returns `state.currentTrack?.title || 'No track loaded'`.
  - `selectIsPodcastPlaying`: Returns `state.isPlaying`.

### Key Actions / Reducers / Thunks

Zustand actions are functions defined within the `create` call that use `set` to update state. Asynchronous operations (such as data fetching, though that is less common for Zustand, which here mostly holds UI state) are handled by calling async functions within these actions and invoking `set` upon completion.

- **Convention:** Actions are part of the store hook: `const { loadTrack } = usePodcastPlayerStore();`.
- **Asynchronous Example (conceptual, if a slice needed to fetch data):**

  ```typescript
  // In a hypothetical userSettingsSlice.ts
  // fetchUserSettings: async () => {
  //   set({ isLoading: true });
  //   try {
  //     const settings = await api.fetchUserSettings(); // api is an imported service
  //     set({ userSettings: settings, isLoading: false });
  //   } catch (error) {
  //     set({ error: 'Failed to fetch settings', isLoading: false });
  //   }
  // }
  ```

For the BMad DiCaster MVP, most data fetching happens via Server Components. Client-side async actions in Zustand are primarily for client-specific operations not directly tied to server data fetching.

## API Interaction Layer

The frontend interacts with Supabase for data. Server Components fetch data directly using the server-side Supabase client. Client Components that need to mutate data or trigger backend logic use Next.js Server Actions or, if necessary, dedicated Next.js API Route Handlers, which then interact with Supabase.

### Client/Service Structure

- **HTTP Client Setup (for Next.js API Route Handlers, if used extensively):**

  - While Server Components and Server Actions are preferred for Supabase interactions, if direct calls from the client to custom Next.js API routes are needed, a simple `fetch` wrapper or a lightweight client like `ky` could be used.
  - The Vercel/Supabase template provides `utils/supabase/client.ts` (for client-side components) and `utils/supabase/server.ts` (for Server Components, Route Handlers, Server Actions). These are the primary interfaces to Supabase.
  - **Base URL:** Not applicable for direct Supabase client usage. For custom API routes: relative paths (e.g., `/api/my-route`).
  - **Authentication:** The Supabase clients handle auth token management. For custom API routes, Next.js middleware (`middleware.ts` [cite: 92, 311]) would handle session verification.

- **Service Definitions (conceptual, for Supabase data access):**

  - No separate "service" files like `userService.ts` are strictly necessary for data fetching with Server Components. Data-fetching logic is co-located with the Server Components or lives within Server Actions.
  - **Example (data fetching in a Server Component):**

    ```typescript
    // app/(web)/newsletters/page.tsx
    import { createClient } from "@/utils/supabase/server";
    import NewsletterCard from "@/app/components/core/NewsletterCard";

    export default async function NewsletterListPage() {
      const supabase = createClient();
      const { data: newsletters, error } = await supabase
        .from("newsletters")
        .select("id, title, target_date, podcast_url")
        .order("target_date", { ascending: false });

      if (error) console.error("Error fetching newsletters:", error);
      // Render newsletters or error state
    }
    ```

  - **Example (Server Action for a hypothetical "subscribe" feature - future scope):**

    ```typescript
    // app/actions/subscribeActions.ts
    "use server";
    import { createClient } from "@/utils/supabase/server";
    import { z } from "zod";
    import { revalidatePath } from "next/cache";

    const EmailSchema = z.string().email();

    export async function subscribeToNewsletter(email: string) {
      const validation = EmailSchema.safeParse(email);
      if (!validation.success) {
        return { error: "Invalid email format." };
      }
      const supabase = createClient();
      const { error } = await supabase
        .from("subscribers")
        .insert({ email: validation.data });
      if (error) {
        return { error: "Subscription failed." };
      }
      revalidatePath("/"); // Example path revalidation
      return { success: true };
    }
    ```

### Error Handling & Retries (Frontend)

- **Server Component Data-Fetching Errors:** Errors from Supabase in Server Components should be caught; the component can then render an appropriate error UI or pass error information down as props. Next.js error handling (e.g., `error.tsx` files) can also be used for unrecoverable errors.
- **Client Component / Server Action Errors:**
  - Server Actions should return structured responses (e.g., `{ success: boolean, data?: any, error?: string }`). Client Components calling Server Actions handle these responses to update the UI (e.g., display error messages or toast notifications); a sketch follows this list.
  - Shadcn UI includes a `Toast` component, useful for non-modal error notifications.
- **UI Error Boundaries:** React Error Boundaries can be placed at key points in the component tree (e.g., around major layout sections or complex components) to catch rendering errors in Client Components and display a fallback UI, preventing a full app crash. A root `global-error.tsx` serves as the global boundary.
- **Retry Logic:** For MVP, retries for data fetching should generally be user-driven (e.g., a "Try Again" button) rather than automatic client-side retries, unless dealing with specific, known transient issues. Supabase client libraries may have their own internal retry mechanisms for certain network errors.
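
A minimal sketch of this structured-response pattern, reusing the `subscribeToNewsletter` Server Action shown earlier; the component itself is hypothetical:

```tsx
"use client";
// SubscribeForm.tsx (illustrative Client Component consuming a Server Action)
import { useState } from "react";
import { subscribeToNewsletter } from "@/app/actions/subscribeActions";

export default function SubscribeForm() {
  const [message, setMessage] = useState<string | null>(null);

  async function handleSubmit(formData: FormData) {
    const result = await subscribeToNewsletter(String(formData.get("email")));
    // The action returns { error } or { success }; render accordingly.
    setMessage("error" in result ? result.error : "Subscribed!");
  }

  return (
    <form action={handleSubmit}>
      <input name="email" type="email" required aria-label="Email address" />
      <button type="submit">Subscribe</button>
      {message && <p role="status">{message}</p>}
    </form>
  );
}
```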

## Routing Strategy

Navigation and routing are handled by the Next.js App Router.

- **Routing Library:** **Next.js App Router** (as per `architecture.txt` [cite: 187, 188, 308])

### Route Definitions

Based on `ux-ui-spec.txt` and the PRD [cite: 352, 375, 376].

| Path Pattern                  | Component/Page (`app/(web)/...`)      | Protection | Notes                                                                          |
| :---------------------------- | :------------------------------------ | :--------- | :----------------------------------------------------------------------------- |
| `/`                           | `newsletters/page.tsx` (effectively)  | Public     | Homepage displays the newsletter list.                                         |
| `/newsletters`                | `newsletters/page.tsx`                | Public     | Displays a list of current and past newsletters.                               |
| `/newsletters/[newsletterId]` | `newsletters/[newsletterId]/page.tsx` | Public     | Displays the detail page for a selected newsletter. `newsletterId` is a UUID.  |

_(Note: the main architecture document [cite: 83] shows an `app/page.tsx` for the homepage. For MVP, this can either redirect to `/newsletters` or directly render the newsletter list content. The table above assumes it effectively serves the newsletter list.)_

### Route Guards / Protection

- **Authentication Guard:** The MVP frontend is public-facing, displaying newsletters and podcasts without user login. The Vercel/Supabase template includes middleware (`middleware.ts` [cite: 92, 311]) for protecting routes based on Supabase Auth. This will matter for any future admin sections but is not used to gate content for general users in the MVP.
- **Authorization Guard:** Not applicable for MVP.

## Build, Bundling, and Deployment

Details align with the Vercel platform and Next.js capabilities.

### Build Process & Scripts

- **Key Build Scripts:**
  - `npm run dev`: Starts the Next.js local development server.
  - `npm run build`: Generates an optimized production build of the Next.js application (script from `package.json`).
  - `npm run start`: Starts the Next.js production server after a build.
- **Environment Variable Handling during Build:**
  - Client-side variables must be prefixed with `NEXT_PUBLIC_` (e.g., `NEXT_PUBLIC_SUPABASE_URL`, `NEXT_PUBLIC_SUPABASE_ANON_KEY`).
  - Server-side variables (used in Server Components, Server Actions, Route Handlers) are accessed directly via `process.env`.
  - Environment variables are managed in the Vercel project settings for the different environments (Production, Preview, Development). Local development uses `.env.local`.

### Key Bundling Optimizations

- **Code Splitting:** The Next.js App Router automatically performs route-based code splitting. Dynamic imports (`next/dynamic`) can be used for further component-level code splitting if needed; see the sketch after this list.
- **Tree Shaking:** Ensured by Next.js's Webpack configuration during the production build.
- **Lazy Loading:** Next.js lazy-loads route segments by default. Images (`next/image`) are optimized and can be lazy-loaded.
- **Minification & Compression:** Handled automatically by Next.js during `npm run build` (JavaScript and CSS minification; gzip/Brotli compression is typically handled by Vercel).
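
For example, the podcast player (a Client Component only needed when a newsletter has audio) could be deferred with `next/dynamic`; a sketch, assuming the component path used elsewhere in this document:

```tsx
import dynamic from "next/dynamic";

// Defer the player bundle until it is actually rendered on the detail page.
const PodcastPlayer = dynamic(
  () => import("@/app/components/core/PodcastPlayer"),
  { ssr: false, loading: () => <p>Loading player…</p> }
);
```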

### Deployment to CDN/Hosting

- **Target Platform:** **Vercel** (as per `architecture.txt` [cite: 206, 207, 382])
- **Deployment Trigger:** Automatic deployments via Vercel's Git integration (GitHub) on pushes/merges to specified branches (e.g., `main` for production, PR branches for previews). (Aligned with `architecture.txt` [cite: 279, 280])
- **Asset Caching Strategy:** Vercel's Edge Network handles CDN caching for static assets and Server Component payloads. Cache-control headers follow Next.js defaults and can be customized if necessary (e.g., for `public/` assets).

## Frontend Testing Strategy

This section elaborates on the overall testing strategy defined in `architecture.txt`, focusing on frontend specifics.

- **Link to Main Testing Strategy:** `docs/architecture.md#overall-testing-strategy` (and `docs/architecture.md#coding-standards` for test file co-location).

### Component Testing

- **Scope:** Testing individual React components in isolation, primarily focusing on UI rendering from props and basic interactions.
- **Tools:** **Jest** (test runner, assertion library, mocking, as per `architecture.txt` [cite: 236, 237]) and **React Testing Library (RTL)** (for user-centric component querying and interaction, as per `architecture.txt` [cite: 232, 233]).
- **Focus:**
  - Correct rendering based on props.
  - User interactions (e.g., button clicks triggering callbacks).
  - Conditional rendering logic.
  - Accessibility attributes.
- **Location:** Test files (`*.test.tsx` or `*.spec.tsx`) are co-located with the component files (e.g., `app/components/core/NewsletterCard.test.tsx`). [cite: 99, 295]
- **Example Guideline:** "A `NewsletterCard` component should render the title and date passed as props. Clicking the card should navigate (mocked) or call an `onClick` prop." A test along these lines is sketched below.
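
Applying that guideline, a hedged sketch of such a test (the component's actual props will come from its spec when written; assumes `@testing-library/jest-dom` is registered in the Jest setup):

```tsx
// app/components/core/NewsletterCard.test.tsx (illustrative RTL test)
import { render, screen, fireEvent } from "@testing-library/react";
import NewsletterCard from "./NewsletterCard";

it("renders title and date, and reports clicks", () => {
  const onClick = jest.fn();
  render(
    <NewsletterCard title="HN Daily #42" date="2025-05-20" onClick={onClick} />
  );

  expect(screen.getByText("HN Daily #42")).toBeInTheDocument();
  expect(screen.getByText("2025-05-20")).toBeInTheDocument();

  fireEvent.click(screen.getByText("HN Daily #42"));
  expect(onClick).toHaveBeenCalledTimes(1);
});
```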
|
||||
|
||||
### UI Integration/Flow Testing
|
||||
|
||||
- **Scope:** Testing interactions between multiple components that compose a piece of UI or a small user flow, potentially with mocked Supabase client responses or Zustand store states.
|
||||
- **Tools:** Jest and React Testing Library.
|
||||
- **Focus:**
|
||||
- Data flow between a parent and its child components.
|
||||
- State updates in a Zustand store affecting multiple components.
|
||||
- Rendering of a sequence of UI elements in a simple flow (e.g., selecting an item from a list and seeing details appear).
|
||||
- **Example Guideline:** "The `NewsletterListPage` should correctly render multiple `NewsletterCard` components when provided with mock newsletter data. Clicking a card should correctly invoke navigation logic."
|
||||
|
||||
### End-to-End UI Testing Tools & Scope

- **Tools:** **Playwright** (as per `architecture.txt` [cite: 240, 241]).
- **Scope (Frontend Focus):**
  - Verify the "Viewing a Newsletter" user flow (a Playwright sketch follows this list):
    1. Navigate to the newsletter list page.
    2. Verify newsletters are listed.
    3. Click on a newsletter.
    4. Verify the newsletter detail page loads with content.
    5. Verify the podcast player is present if a podcast URL exists.
    6. Verify the download button is present.
    7. Verify the "Back to List" button works.
  - Basic mobile responsiveness checks for key pages (list and detail).
- **Test Data Management for UI:** E2E tests will rely on data populated in the development Supabase instance, or use mocked API responses if targeting isolated frontend tests with Playwright's network interception. For true E2E against a live dev environment, pre-seeded data in the Supabase dev instance will be used.
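A minimal sketch of that flow, assuming the `/newsletters` routes described in this document and a hypothetical `newsletter-card` test id; selectors would need to match the implemented markup.

```ts
// newsletter-flow.spec.ts — a sketch of the "Viewing a Newsletter" flow.
import { test, expect } from "@playwright/test";

test("user can view a newsletter from the list", async ({ page }) => {
  await page.goto("/newsletters");

  // Newsletters are listed (at least one card is rendered).
  const cards = page.getByTestId("newsletter-card");
  await expect(cards.first()).toBeVisible();

  // Click through to the detail page.
  await cards.first().click();
  await expect(page).toHaveURL(/\/newsletters\/.+/);

  // Detail page content, download button, and back navigation.
  await expect(page.getByRole("article")).toBeVisible();
  await expect(page.getByRole("button", { name: /download/i })).toBeVisible();
  await page.getByRole("button", { name: /back to list/i }).click();
  await expect(page).toHaveURL(/\/newsletters$/);
});
```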
## Accessibility (AX) Implementation Details

The frontend will adhere to **WCAG 2.1 Level A** as a minimum target, as specified in `docs/ui-ux-spec.txt` [cite: 588].

- **Semantic HTML:** Emphasis on using correct HTML5 elements (`<nav>`, `<main>`, `<article>`, `<aside>`, `<button>`, etc.) to provide inherent meaning and structure. [cite: 589]
- **ARIA Implementation:**
  - Shadcn UI components are built with accessibility in mind, often including appropriate ARIA attributes.
  - For custom components, relevant ARIA roles (e.g., `role="region"`, `role="alert"`) and attributes (e.g., `aria-label`, `aria-describedby`, `aria-live`, `aria-expanded`) will be used for dynamic content, interactive elements, and custom widgets to ensure assistive technologies can interpret them correctly (see the sketch after this list).
- **Keyboard Navigation:** All interactive elements (links, buttons, inputs, custom controls) must be focusable and operable using only the keyboard in a logical order. [cite: 588] Focus indicators will be clear and visible.
- **Focus Management:** For dynamic UI elements like modals or non-native dropdowns (if any are built custom beyond Shadcn capabilities), focus will be managed programmatically to ensure it moves to and is trapped within the element as appropriate, and returns to the trigger element upon dismissal.
- **Alternative Text:** All meaningful images will have descriptive `alt` text. Decorative images will have an empty `alt=""`. [cite: 590]
- **Color Contrast:** Adherence to WCAG 2.1 Level A color contrast ratios for text and interactive elements against their backgrounds. The "synthwave" theme's purple accents [cite: 584] will be chosen carefully to meet these requirements. Tools will be used to verify contrast. [cite: 591]
- **Testing Tools for AX:**
  - Automated: Axe DevTools browser extension, Lighthouse accessibility audits.
  - Manual: Keyboard-only navigation testing, screen reader testing (e.g., NVDA, VoiceOver) for key user flows.
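As a small illustration of the custom-component ARIA guidance above, here is a hypothetical component combining a labelled region with a polite live region; the component name and props are invented for this example.

```tsx
// A hypothetical component showing the ARIA patterns above.
export function PodcastStatus({ status }: { status: string }) {
  return (
    <section role="region" aria-label="Podcast status">
      {/* aria-live="polite" announces text updates to assistive technologies */}
      <p aria-live="polite">{status}</p>
    </section>
  );
}
```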
## Performance Considerations

The goal is a fast-loading and responsive user experience. [cite: 360, 565]

- **Image Optimization:**
  - Use `next/image` for automatic image optimization (resizing, WebP format where supported, lazy loading by default).
- **Code Splitting & Lazy Loading:**
  - Next.js App Router handles route-based code splitting.
  - `next/dynamic` for client-side lazy loading of components that are not immediately visible or are heavy (see the sketch after this list).
- **Minimizing Re-renders (React):**
  - Judicious use of `React.memo` for components that render frequently with the same props.
  - Optimizing Zustand selectors if complex derived state is introduced (though direct access is often sufficient).
  - Ensuring stable prop references where possible.
- **Debouncing/Throttling:** Not anticipated for MVP features, but will be considered for future interactive elements like search inputs.
- **Virtualization:** Not anticipated for MVP given the limited number of items (e.g., 30 newsletters per day). If lists become very long in the future, virtualization libraries like TanStack Virtual will be considered.
- **Caching Strategies (Client-Side):**
  - Leverage Next.js's built-in caching for Server Component payloads and static assets via Vercel's Edge Network.
  - Browser caching for static assets (`public/` folder) will use default optimal headers set by Vercel.
- **Performance Monitoring Tools:**
  - Browser DevTools (Performance tab, Lighthouse).
  - Vercel Analytics (if enabled) for real-user monitoring.
  - WebPageTest for detailed performance analysis.
- **Bundle Size Analysis:** Use tools like `@next/bundle-analyzer` to inspect production bundles and identify opportunities for optimization if bundle sizes become a concern.
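A combined sketch of the lazy-loading and memoization points above; the component names and loading fallback are assumptions.

```tsx
"use client"; // `ssr: false` requires a Client Component in the App Router
import dynamic from "next/dynamic";
import { memo } from "react";

// Lazy-loaded on the client; excluded from the initial route bundle.
const PodcastPlayer = dynamic(() => import("./PodcastPlayer"), {
  ssr: false,
  loading: () => <p>Loading player…</p>,
});

// Skips re-rendering when `title` and `date` are unchanged between renders.
const NewsletterCard = memo(function NewsletterCard({
  title,
  date,
}: {
  title: string;
  date: string;
}) {
  return (
    <article>
      <h3>{title}</h3>
      <time dateTime={date}>{date}</time>
    </article>
  );
});

export { PodcastPlayer, NewsletterCard };
```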
## Change Log

| Date       | Version | Description                                      | Author             |
| :--------- | :------ | :----------------------------------------------- | :----------------- |
| 2025-05-13 | 0.1     | Initial draft of frontend architecture document. | 4-design-arch (AI) |

@@ -0,0 +1,91 @@
## General Impact

- **Technology Alignment:** The choice of Next.js, React, Tailwind CSS, Shadcn UI, and Zustand is well-aligned with the PRD's technical assumptions and the overall architecture. The frontend architecture document provides the specific implementation details for these choices.
- **Component-Driven Development:** The emphasis on component-based architecture, along with the "Template for Component Specification," will help in systematically building the UI elements mentioned in **Epic 6** (Web Interface).
- **Data Fetching:** The strategy of using Next.js Server Components with the Supabase server client for data fetching aligns well with efficient data loading for the newsletter list and detail pages.
- **Styling:** The "synthwave technical glowing purple vibes" [cite: 372] mentioned in the PRD are addressed in the frontend architecture's styling approach, guiding how Tailwind CSS and Shadcn UI will be customized.
**Specific Epic/Story Considerations:**

**Epic 6: Web Interface for Initial Structure and Content Access** [cite: 397]

- **Story 6.1:** "As a developer, I want to use a tool like Vercel v0 to generate the initial structure of the web interface..."

  - **Impact/Refinement:** While Vercel v0 can be used for initial HTML/CSS mockups, the actual implementation will follow the Next.js App Router structure, Server/Client component model, and Shadcn UI component integration as defined in `frontend-architecture.md`. The "initial structure" will be more about setting up the Next.js routes (`app/(web)/newsletters/page.tsx`, `app/(web)/newsletters/[newsletterId]/page.tsx`) and core layout components.
  - **New Sub-task (Implied):** "Implement core page layouts (`app/(web)/layout.tsx`, `app/(web)/newsletters/layout.tsx` if needed) using Next.js App Router and Tailwind CSS."

- **Story 6.2:** "As a user, I want to see a list of current and past newsletters..." [cite: 491]

  - **Impact/Refinement:** This directly maps to the `app/(web)/newsletters/page.tsx` route. The frontend architecture specifies this will be a Server Component fetching data from Supabase. The `NewsletterCard.tsx` component (to be detailed using the template) will be crucial here.
  - **New Sub-task (Implied):** "Develop `NewsletterCard.tsx` component as per UI/UX spec and component template to display individual newsletter summaries in the list."
  - **New Sub-task (Implied):** "Implement data fetching in `app/(web)/newsletters/page.tsx` using Supabase client to retrieve and display a list of `NewsletterCard` components."
  - **Acceptance Criteria Refinement:** Add "Newsletter items should be displayed using the `NewsletterCard` component." "Data should be fetched server-side via Supabase."

- **Story 6.3:** "As a user, I want to be able to read the newsletter content within the web page..." [cite: 495]

  - **Impact/Refinement:** Maps to `app/(web)/newsletters/[newsletterId]/page.tsx`. This will be a Server Component.
  - **New Sub-task (Implied):** "Implement data fetching in `app/(web)/newsletters/[newsletterId]/page.tsx` to retrieve and display the full HTML content of a selected newsletter."
  - **Acceptance Criteria Refinement:** Add "Newsletter content should be fetched server-side via Supabase." "The `BackButton.tsx` component should be present and functional."

- **Story 6.4:** "As a user, I want to have the option to download newsletters..." [cite: 498]

  - **Impact/Refinement:** Requires the `DownloadButton.tsx` component on the newsletter detail page. The frontend architecture will accommodate this standard component. The download mechanism itself (e.g., serving an HTML file or generating a PDF on the fly) is more of a backend/API concern, but the button is frontend.
  - **New Sub-task (Implied):** "Develop `DownloadButton.tsx` component as per UI/UX spec and component template."
  - **Acceptance Criteria Refinement:** Add "The `DownloadButton` component should be visible on the newsletter detail page."

- **Story 6.5:** "As a user, I want to listen to generated podcasts within the web interface..." [cite: 501]

  - **Impact/Refinement:** This requires the `PodcastPlayer.tsx` component and the `podcastPlayerSlice.ts` Zustand store as defined in the frontend architecture.
  - **New Sub-task (Implied):** "Develop `PodcastPlayer.tsx` component, integrating with `podcastPlayerSlice` Zustand store for state management (play, pause, volume, track loading)."
  - **Acceptance Criteria Refinement:** Add "The `PodcastPlayer` component should allow users to play/pause the podcast linked to the newsletter." "Podcast player state should be managed by Zustand."
- **PRD - User Interaction and Design Goals:**
  - "Basic mobile responsiveness for displaying newsletters and podcasts." [cite: 356]
    - **Impact:** The frontend architecture's reliance on Tailwind CSS directly supports this. Breakpoints and responsive prefixes are standard. This should be an acceptance criterion for all UI-related stories in Epic 6.
  - "The MVP will consist of two pages: A list page... A detail page..." [cite: 375]
    - **Impact:** Confirmed and mapped to routes in the frontend architecture.
**Summary of Proposed Additions/Refinements to User Stories (Frontend Focus for Epic 6):**

No new user stories seem necessary, but refinements to existing ones or the creation of more granular technical sub-tasks for **Epic 6** would be beneficial:

- **Story 6.1 Refinement/Sub-tasks:**

  - "Set up Next.js App Router routes for `/newsletters` and `/newsletters/[newsletterId]`."
  - "Implement root and feature-specific layouts (`layout.tsx`) using Next.js App Router and Tailwind CSS, including a `PageWrapper` component for consistent page styling."

- **Story 6.2 Refinement/Sub-tasks:**

  - "Develop `NewsletterCard.tsx` React component (using Shadcn UI `Card` as a base) to display newsletter title, date, and link to detail page, styled with Tailwind CSS for the synthwave theme."
  - "Implement server-side data fetching in `app/(web)/newsletters/page.tsx` to retrieve the newsletter list from Supabase." (A sketch follows this list.)
  - "Ensure `NewsletterListPage` is responsive as per UI/UX spec."
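As an illustration of the server-side fetching sub-task above, a minimal sketch, assuming a `newsletters` table with `id`, `title`, and `published_at` columns and a project-local `createServerClient` helper; both the schema and the helper path are assumptions, not final decisions.

```tsx
// app/(web)/newsletters/page.tsx — illustrative Server Component sketch.
import { createServerClient } from "@/lib/supabase/server";
import { NewsletterCard } from "@/app/components/core/NewsletterCard";

export default async function NewsletterListPage() {
  const supabase = createServerClient();
  const { data: newsletters, error } = await supabase
    .from("newsletters")
    .select("id, title, published_at")
    .order("published_at", { ascending: false });

  if (error) throw error; // surfaced by the nearest error boundary

  return (
    <main>
      {newsletters?.map((n) => (
        <NewsletterCard key={n.id} id={n.id} title={n.title} date={n.published_at} />
      ))}
    </main>
  );
}
```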
- **Story 6.3 Refinement/Sub-tasks:**

  - "Implement server-side data fetching in `app/(web)/newsletters/[newsletterId]/page.tsx` to retrieve and render full newsletter HTML content from Supabase."
  - "Develop and integrate `BackButton.tsx` component."
  - "Ensure `NewsletterDetailPage` content area is responsive."

- **Story 6.4 Refinement/Sub-tasks:**

  - "Develop `DownloadButton.tsx` React component (using Shadcn UI `Button` as a base)."
  - "Integrate `DownloadButton` into `NewsletterDetailPage`."
    _(Actual download mechanism likely relies on backend providing the correct content/headers.)_
- **Story 6.5 Refinement/Sub-tasks:**

  - "Develop `PodcastPlayer.tsx` React component with standard playback controls (play, pause, seek, volume)."
  - "Implement `podcastPlayerSlice.ts` Zustand store for managing podcast player state (current track, isPlaying, currentTime, volume)." (A store sketch follows this list.)
  - "Integrate `PodcastPlayer` with `NewsletterDetailPage`, enabling it to load and play the podcast URL from the newsletter data."
  - "Style `PodcastPlayer` using Tailwind CSS for the synthwave theme and ensure responsiveness."
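A minimal store sketch for `podcastPlayerSlice.ts`, assuming the state fields named in the sub-task above; the action names are illustrative assumptions.

```ts
// podcastPlayerSlice.ts — illustrative sketch of the store described above.
import { create } from "zustand";

interface PodcastPlayerState {
  currentTrackUrl: string | null;
  isPlaying: boolean;
  currentTime: number; // seconds
  volume: number; // 0..1
  loadTrack: (url: string) => void;
  play: () => void;
  pause: () => void;
  seek: (time: number) => void;
  setVolume: (volume: number) => void;
}

export const usePodcastPlayerStore = create<PodcastPlayerState>((set) => ({
  currentTrackUrl: null,
  isPlaying: false,
  currentTime: 0,
  volume: 1,
  loadTrack: (url) => set({ currentTrackUrl: url, isPlaying: false, currentTime: 0 }),
  play: () => set({ isPlaying: true }),
  pause: () => set({ isPlaying: false }),
  seek: (time) => set({ currentTime: time }),
  setVolume: (volume) => set({ volume }),
}));
```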
**Accessibility & Performance:**

- The frontend architecture's sections on "Accessibility (AX) Implementation Details" and "Performance Considerations" provide the "how-to" for meeting the PRD's non-functional requirements. These should be considered implicit requirements for all frontend development tasks within Epic 6. For example, "All components developed must meet WCAG 2.1 Level A" and "Images must be optimized using `next/image`."

---

This review suggests that the frontend architecture aligns well with the PRD's epics and stories, primarily impacting **Epic 6**. The main effect is to provide more specific technical direction and highlight the need for creating specific React components and data fetching strategies as sub-tasks within that epic. No fundamental changes to the epics themselves seem necessary.

What do you think? Does this summary of impacts make sense?

If you're good with this, we can proceed to the final step: the checklist review.

@@ -0,0 +1,39 @@
**Master Checklist Report - BMad DiCaster MVP Plan**

**Date of Review:** May 14, 2025

**Documents Reviewed:**

- `prd.txt` (as updated by `arch-suggested-changes.txt`, `fea-suggested-changes.txt`, and checklist findings)
- `architecture.txt`
- `front-end-architecture.txt`
- `po-master-checklist.txt` (as the framework for this review)

**Overall Assessment:** The project plan and associated documentation are largely comprehensive and well-structured for MVP development. Key architectural decisions are documented, and the PRD provides a good breakdown of work through epics and stories. The iterative review has identified areas for clarification and addition, primarily within the PRD's stories and acceptance criteria, which have been noted for incorporation.
**Category Statuses & Key Findings:**

| Category | Status | Critical Issues / Key Findings & Recommendations |
| :--- | :--- | :--- |
| **1. Project Setup & Initialization** | LARGELY COMPLIANT | - **Recommendation:** Add AC to Story 1.1 for creating a basic project `README.md` (Approved).<br>- Local dev setup relies on Vercel/hosted Supabase DB (User Clarification).<br>- Core dependencies and package management are clear.<br>- `docs/environment-vars.md` and `.env.example` (Story 1.8) are crucial for config. |
| **2. Infrastructure & Deployment Seq.** | LARGELY COMPLIANT | - **Action:** Add new stories to Epic 1 for core config table creation (1.10) and seed data (1.11). Add ACs for migrations to relevant stories (1.4, 2.3, 3.2, 3.3, 4.2) (Approved).<br>- RLS deferred for MVP per PRD.<br>- API/Service config clear. Webhook security AC for Story 5.3 to be conditional (Approved).<br>- Deployment via standard Vercel GitHub integration (User Clarification).<br>- **Action:** Add new Story 1.12 (Epic 1) for test framework setup (Approved). Architect to detail broader mocking strategy in `architecture.txt`. |
| **3. External Dependencies & Integrations** | LARGELY COMPLIANT | - **Action:** Story 1.8 AC refined to include guidance on obtaining third-party credentials, checking rate limits, and cost awareness (Approved).<br>- Fallback for LLM and podcast exists. For HN API/Email, retries are MVP scope.<br>- Infrastructure services (Vercel CDN, Email config) are covered. Custom domain not MVP. |
| **4. User/Agent Responsibility Delineation** | COMPLIANT | - User responsibilities (account creation, credential provision, manual data mgmt for MVP) are clear.<br>- Developer agent responsibilities (code, automation, config mechanisms, testing) are clear. |
| **5. Feature Sequencing & Dependencies** | COMPLIANT | - Functional, technical, and cross-epic dependencies are well-managed by sequential epic structure.<br>- Incremental value delivery is maintained. |
| **6. MVP Scope Alignment** | COMPLIANT | - Epics/stories align with PRD goals. No extraneous features.<br>- Critical user journeys covered. Error/UX/Accessibility addressed by architecture docs & Epic 6 general note (User chose to ignore findings for more explicit PRD ACs here).<br>- Technical requirements & constraints from PRD are met by architecture. Performance considerations addressed. |
| **7. Risk Management & Practicality** | LARGELY COMPLIANT | - **Action:** Add performance validation ACs to Story 4.3, 6.2, 6.3 (Approved). No explicit prototyping stories for LLM/Play.ht (User Decision). User will manually assess summarization quality.<br>- External dependency risks have mitigations (facades, retries, some fallbacks).<br>- Timeline Practicality: Sequential story development (BMAD method) is a key constraint impacting parallelism (User Clarification - "Not Compliant by Design" for parallel work). |
| **8. Documentation & Handoff** | LARGELY COMPLIANT | - Developer docs (architecture, `README.md`, inline code via DoD) are planned.<br>- User docs (for end-users) not needed for MVP simplicity. Setup docs for admin/dev covered. |
| **9. Post-MVP Considerations** | COMPLIANT | - Clear separation of MVP/future features. Architecture supports enhancements. Extensibility points exist.<br>- Analytics (Vercel option), informal feedback, basic monitoring (logs, DB table) are acceptable for MVP. Performance measurement ACs added. Proactive alerting not MVP. |
**Summary of Actionable Changes to `prd.txt` (already detailed prior to this report):**

- A general note regarding error handling and logging for all services to be added to the "Epic Overview."
- **Epic 1:** New stories 1.9 (webhook), 1.10 (config tables), 1.11 (seed data), 1.12 (test frameworks). AC modifications to 1.1 (README), 1.3 (API trigger), 1.4 (ContentAcquisitionFacade, migrations), 1.8 (env vars doc).
- **Epic 2:** New story 2.5 (webhook). AC modifications to 2.3 (migrations, workflow updates), 2.4 (API trigger).
- **Epic 3:** AC modifications to 3.1 (LLMFacade), 3.2 & 3.3 (migrations, workflow updates), 3.4 (API trigger).
- **Epic 4:** AC modifications to 4.2 (workflow, migrations), 4.3 (EmailDispatchFacade, workflow, perf validation), 4.4 (API trigger).
- **Epic 5:** AC modifications to 5.1 (AudioGenerationFacade), 5.2 (workflow updates), 5.3 (webhook security, workflow updates), 5.4 (API trigger).
- **Epic 6:** Goal refined to include general FE requirements. AC refinements for stories 6.1-6.5 based on `fea-suggested-changes.txt`. Performance validation ACs for 6.2 & 6.3.

**Overall Recommendation:**

The MVP plan, with the incorporation of the above-listed actionable changes to the `prd.txt`, is **APPROVED** for proceeding to the next phase. The documentation suite is robust and provides a solid foundation for development.
460 BETA-V3/v3-demos/full-stack-app-demo/8-prd-po-updated.md Normal file

@@ -0,0 +1,460 @@
# BMad DiCaster Product Requirements Document (PRD)

## Goal, Objective and Context

**Goal:** To develop a web application that provides a daily, concise summary of top Hacker News (HN) posts, delivered as a newsletter and accessible via a web interface.

**Objective:** To streamline the consumption of HN content by curating the top stories, providing AI-powered summaries, and offering an optional AI-generated podcast version.

**Context:** Busy professionals and enthusiasts want to stay updated on HN but lack the time to sift through numerous posts and discussions. This application will address this problem by automating the delivery of summarized content.
## Functional Requirements (MVP)

- **HN Content Retrieval & Storage:**
  - Daily retrieval of the top 30 Hacker News posts and associated comments using the HN Algolia API.
  - Scraping and storage of up to 10 linked articles per day.
  - Storage of all retrieved data (posts, comments, articles) with date association.
- **AI-Powered Summarization:**
  - AI-powered summarization of the 10 selected articles (2-paragraph summaries).
  - AI-powered summarization of comments for the 10 selected posts (2-paragraph summaries highlighting interesting interactions).
  - Configuration for local or remote LLM usage via environment variables.
- **Newsletter Generation & Delivery:**
  - Generation of a daily newsletter in HTML format, including summaries, links to HN posts and articles, and original post dates/times.
  - Automated delivery of the newsletter to a manually configured list of subscribers in Supabase. The list of emails will be manually populated in the database. Account information for the Nodemailer service will be provided via environment variables.
- **Podcast Generation & Integration:**
  - Integration with Play.ht's PlayNote API for AI-generated podcast creation from the newsletter content.
  - Webhook handler to update the newsletter with the generated podcast link.
- **Web Interface (MVP):**
  - Display of current and past newsletters.
  - Functionality to read the newsletter content within the web page.
  - Download option for newsletters.
  - Web player for listening to generated podcasts.
  - Basic mobile responsiveness for displaying newsletters and podcasts.
- **API & Triggering:**
  - Secure API endpoint to manually trigger the daily workflow, secured with API keys.
  - CLI command to manually trigger the daily workflow locally.
## Non-Functional Requirements (MVP)

- **Performance:**
  - The system should retrieve HN posts and generate the newsletter within a reasonable timeframe (e.g., under 30 minutes) to ensure timely delivery.
  - The web interface should load quickly (e.g., within 2 seconds) to provide a smooth user experience.
- **Scalability:**
  - The system is designed for an initial MVP delivery to 3-5 email subscribers. Scalability beyond this will be considered post-MVP.
- **Security:**
  - The API endpoint for triggering the daily workflow must be secure, using API keys.
  - User data (email addresses) should be stored securely. No other security measures are required for the MVP.
- **Reliability:**
  - No specific uptime or availability requirements are defined for the MVP.
  - The newsletter generation and delivery process should be robust and handle potential errors gracefully.
  - The system must be executable from a local development environment.
- **Maintainability:**
  - The codebase should adhere to good quality coding standards, including separation of concerns.
  - The system should employ facades and factories to facilitate future expansion.
  - The system should be built as an event-driven pipeline, leveraging Supabase to capture data at each stage and trigger subsequent functions asynchronously. This approach aims to mitigate potential timeout issues with Vercel hosting.
## User Interaction and Design Goals

This section captures the high-level vision and goals for the User Experience (UX) to guide the Design Architect.

- **Overall Vision & Experience:**
  - The desired look and feel is modern and minimalist, with synthwave technical glowing purple vibes.
  - Users should have a clean and efficient experience when accessing and consuming newsletter content and podcasts.
- **Key Interaction Paradigms:**
  - Interaction paradigms will be determined by the Design Architect.
- **Core Screens/Views (Conceptual):**
  - The MVP will consist of two pages:
    - A list page to display current and past newsletters.
    - A detail page to display the selected newsletter content, including:
      - Download option for the newsletter.
      - Web player for listening to the generated podcast.
      - The article laid out for viewing.
- **Accessibility Aspirations:**
  - The web interface (Epic 6) will adhere to WCAG 2.1 Level A guidelines as detailed in `frontend-architecture.md`. (Updated during checklist review)
- **Branding Considerations (High-Level):**
  - A logo for the application will be provided.
  - The application will use the name "BMad DiCaster".
- **Target Devices/Platforms:**
  - The application will be designed as a mobile-first responsive web app, ensuring it looks good on both mobile and desktop devices.
## Technical Assumptions

This section captures any existing technical information that will guide the Architect in the technical design.

- The application will be developed using the Next.js/Supabase template and hosted entirely on Vercel.
- This implies a monorepo structure, as the frontend (Next.js) and backend (Supabase functions) will reside within the same repository.
- The backend will primarily leverage serverless functions provided by Vercel and Supabase.
- Frontend development will be in Next.js with React.
- Data storage will be handled by Supabase's PostgreSQL database.
- Separate Supabase instances will be used for development and production environments to ensure data isolation and stability.
- For local development, developers can utilize the Supabase CLI and Vercel CLI to emulate the production environment, primarily for testing functions and deployments, but the development Supabase instance will be the primary source of dev data.
- Testing will include unit tests, integration tests (especially for interactions with Supabase), and end-to-end tests.
- The system should be built as an event-driven pipeline, leveraging Supabase to capture data at each stage and trigger subsequent functions asynchronously to mitigate potential timeout issues with Vercel.
## Epic Overview

_(Note: Epics will be developed sequentially. Development will start with Epic 1 and proceed to the next epic only after the previous one is fully completed and verified. Per the BMAD method, every story must be self-contained and done before the next one is started.)_

_(Note: All UI development across all epics must adhere to mobile responsiveness and Tailwind CSS/theming principles to ensure a consistent and maintainable user experience.)_

**(General Note on Service Implementation for All Epics):** All backend services (Supabase Functions) developed as part of any epic must implement robust error handling. They should log extensively using Pino, ensuring that all log entries include the relevant `workflow_run_id` for traceability. Furthermore, services must interact with the `WorkflowTrackerService` to update the `workflow_runs` table appropriately on both successful completion of their tasks and in case of any failures, recording status and error messages as applicable.

- **Epic 1: Project Initialization, Setup, and HN Content Acquisition**
  - Goal: Establish the foundational project structure, including the Next.js application, Supabase integration, deployment pipeline, API/CLI triggers, core workflow orchestration, and implement functionality to retrieve, process, and store Hacker News posts/comments via a `ContentAcquisitionFacade`, providing data for newsletter generation. Implement the database event mechanism to trigger subsequent processing. Define core configuration tables, seed data, and set up testing frameworks.
- **Epic 2: Article Scraping**
  - Goal: Implement the functionality to scrape and store linked articles from HN posts, enriching the data available for summarization and the newsletter. Ensure this functionality is triggered by database events and can be tested via API/CLI (if retained). Implement the database event mechanism to trigger subsequent processing.
- **Epic 3: AI-Powered Content Summarization**
  - Goal: Integrate AI summarization capabilities, by implementing and using a configurable and testable `LLMFacade`, to generate concise summaries of articles and comments from prompts stored in the database. This will enrich the newsletter content, be triggerable via API/CLI, is triggered by database events, and track progress via `WorkflowTrackerService`.
- **Epic 4: Automated Newsletter Creation and Distribution**
  - Goal: Automate the generation and delivery of the daily newsletter by implementing and using a configurable `EmailDispatchFacade`. This includes handling podcast link availability, being triggerable via API/CLI, orchestration by `CheckWorkflowCompletionService`, and status tracking via `WorkflowTrackerService`.
- **Epic 5: Podcast Generation Integration**
  - Goal: Integrate with an audio generation API (initially Play.ht) by implementing and using a configurable `AudioGenerationFacade` to create podcast versions of the newsletter. This includes handling webhooks to update newsletter data and workflow status. Ensure this is triggerable via API/CLI, orchestrated appropriately, and uses `WorkflowTrackerService`.
- **Epic 6: Web Interface for Initial Structure and Content Access**
  - Goal: Develop a user-friendly, responsive, and accessible web interface, based on the `frontend-architecture.md`, to display newsletters and provide access to podcast content, aligning with the project's visual and technical guidelines. All UI development within this epic must adhere to the "synthwave technical glowing purple vibes" aesthetic using Tailwind CSS and Shadcn UI, ensure basic mobile responsiveness, meet WCAG 2.1 Level A accessibility guidelines (including semantic HTML, keyboard navigation, alt text, color contrast), and optimize images using `next/image`, as detailed in the `frontend-architecture.txt` and `ui-ux-spec.txt`.

---
**Epic 1: Project Initialization, Setup, and HN Content Acquisition**

- Goal: Establish the foundational project structure, including the Next.js application, Supabase integration, deployment pipeline, API/CLI triggers, core workflow orchestration, and implement functionality to retrieve, process, and store Hacker News posts/comments via a `ContentAcquisitionFacade`, providing data for newsletter generation. Implement the database event mechanism to trigger subsequent processing. Define core configuration tables, seed data, and set up testing frameworks.
- **Story 1.1:** As a developer, I want to set up the Next.js project with Supabase integration, so that I have a functional foundation for building the application.
  - Acceptance Criteria:
    - The Next.js project is initialized using the Vercel/Supabase template.
    - Supabase is successfully integrated with the Next.js project.
    - The project codebase is initialized in a Git repository.
    - A basic project `README.md` is created in the root of the repository, including a project overview, links to main documentation (PRD, architecture), and essential developer setup/run commands.
- **Story 1.2:** As a developer, I want to configure the deployment pipeline to Vercel with separate development and production environments, so that I can easily deploy and update the application.
  - Acceptance Criteria:
    - The project is successfully linked to a Vercel project with separate environments.
    - Automated deployments are configured for the main branch to the production environment.
    - Environment variables are set up for local development and Vercel deployments.
- **Story 1.3:** As a developer, I want to implement the API and CLI trigger mechanisms, so that I can manually trigger the workflow during development and testing.
  - Acceptance Criteria:
    - A secure API endpoint is created, requiring authentication via an API key.
    - The API endpoint (`/api/system/trigger-workflow`) creates an entry in the `workflow_runs` table and returns the `jobId`. (An illustrative sketch of this handler follows this story.)
    - The API endpoint returns an appropriate response to indicate success or failure.
    - A CLI command is created.
    - The CLI command invokes the `/api/system/trigger-workflow` endpoint or directly interacts with `WorkflowTrackerService` to start a new workflow run.
    - The CLI command provides informative output to the console.
    - All API requests and CLI command executions are logged, including timestamps and any relevant data.
    - All interactions with the API or CLI that initiate a workflow must record the `workflow_run_id` in logs.
    - Any UI surfaces associated with the API and CLI triggers adhere to mobile responsiveness and Tailwind/theming principles.
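For illustration only, a sketch of what this trigger endpoint could look like as a Next.js Route Handler; the header name, environment variable, and `initiateWorkflowRun` helper are assumptions, not part of the PRD.

```ts
// app/api/system/trigger-workflow/route.ts — illustrative sketch.
import { NextRequest, NextResponse } from "next/server";

export async function POST(req: NextRequest) {
  // Simple shared-secret check; the key is provisioned via environment vars.
  if (req.headers.get("x-api-key") !== process.env.WORKFLOW_TRIGGER_API_KEY) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }

  try {
    // Hypothetical helper that inserts a `workflow_runs` row and returns its id.
    const { initiateWorkflowRun } = await import("@/lib/workflow-tracker");
    const jobId = await initiateWorkflowRun();
    return NextResponse.json({ jobId }, { status: 202 });
  } catch (err) {
    console.error("trigger-workflow failed", err);
    return NextResponse.json({ error: "Failed to start workflow" }, { status: 500 });
  }
}
```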
- **Story 1.4:** As a system, I want to retrieve the top 30 Hacker News posts and associated comments daily using a configurable `ContentAcquisitionFacade`, so that the data is available for summarization and newsletter generation.
  - Acceptance Criteria:
    - A `ContentAcquisitionFacade` is implemented in `supabase/functions/_shared/` to abstract interaction with the news data source (initially HN Algolia API).
    - The facade handles API authentication (if any), request formation, and response parsing for the specific news source.
    - The facade implements basic retry logic for transient errors. (A retry-helper sketch follows this story.)
    - Unit tests for the `ContentAcquisitionFacade` (mocking actual HTTP calls to the HN Algolia API) achieve >80% coverage.
    - The system retrieves the top 30 Hacker News posts daily via the `ContentAcquisitionFacade`.
    - The system retrieves associated comments for the top 30 posts via the `ContentAcquisitionFacade`.
    - Retrieved data (posts and comments) is stored in the Supabase database, linked to the current `workflow_run_id`.
    - This functionality can be triggered via the API and CLI.
    - The system logs the start and completion of the retrieval process, including any errors.
    - Upon completion, the service updates the `workflow_runs` table with status and details (e.g., number of posts fetched) via `WorkflowTrackerService`.
    - Supabase migrations for `hn_posts` and `hn_comments` tables (as defined in `architecture.txt`) are created and applied before data operations.
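The "basic retry logic" criterion could be satisfied with a small generic helper along these lines; a sketch, not a mandated implementation, with the HN Algolia search endpoint shown only as usage context.

```ts
// A generic exponential-backoff retry helper for transient errors.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === attempts) break; // out of attempts; rethrow below
      // Backoff between failures: 500ms, 1s, 2s, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
  throw lastError;
}

// Usage inside a hypothetical facade method:
// const res = await withRetry(() =>
//   fetch("https://hn.algolia.com/api/v1/search?tags=front_page"));
```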
- **Story 1.5: Define and Implement `workflow_runs` Table and `WorkflowTrackerService`**
  - Goal: Implement the core workflow orchestration mechanism (tracking part).
  - Acceptance Criteria:
    - Supabase migration created for the `workflow_runs` table as defined in the architecture document.
    - `WorkflowTrackerService` implemented in `supabase/functions/_shared/` with methods for initiating, updating step details, incrementing counters, failing, and completing workflow runs. (An interface sketch follows this story.)
    - Service includes robust error handling and logging via Pino.
    - Unit tests for `WorkflowTrackerService` achieve >80% coverage.
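A sketch of the service surface implied by these acceptance criteria; the method names paraphrase the ACs and are not a final contract.

```ts
// Illustrative shape of WorkflowTrackerService (assumed names, not a spec).
export interface WorkflowTrackerService {
  initiateRun(): Promise<string>; // returns the new workflow_run_id
  updateStepDetails(runId: string, details: Record<string, unknown>): Promise<void>;
  incrementCounter(runId: string, counter: string, by?: number): Promise<void>;
  failRun(runId: string, errorMessage: string): Promise<void>;
  completeRun(runId: string): Promise<void>;
}
```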
- **Story 1.6: Implement `CheckWorkflowCompletionService` (Supabase Cron Function)**
  - Goal: Implement the core workflow orchestration mechanism (progression part).
  - Acceptance Criteria:
    - Supabase Function `check-workflow-completion-service` created.
    - Function queries `workflow_runs` and related tables to determine if a workflow run is ready to progress to the next major stage.
    - Function correctly updates `workflow_runs.status` and invokes the next appropriate service function.
    - Logic for handling podcast link availability is implemented here or in conjunction with `NewsletterGenerationService`.
    - The function is configurable to be run periodically.
    - Comprehensive logging implemented using Pino.
    - Unit tests achieve >80% coverage.
- **Story 1.7: Implement Workflow Status API Endpoint (`/api/system/workflow-status/{jobId}`)**
  - Goal: Allow developers/admins to check the status of a workflow run.
  - Acceptance Criteria:
    - Next.js API Route Handler created at `/api/system/workflow-status/{jobId}`.
    - Endpoint secured with API key authentication.
    - Retrieves and returns status details from the `workflow_runs` table.
    - Handles cases where `jobId` is not found (404).
    - Unit and integration tests for the API endpoint.
- **Story 1.8: Create and document `docs/environment-vars.md` and set up `.env.example`**
  - Goal: Ensure environment variables are properly documented and managed.
  - Acceptance Criteria:
    - A `docs/environment-vars.md` file is created.
    - An `.env.example` file is created.
    - Sensitive information in examples is masked.
    - For each third-party service requiring credentials, `docs/environment-vars.md` includes:
      - A brief note or link guiding the user on where to typically sign up for the service and obtain the necessary API key or credential.
      - A recommendation for the user to check the service's current free/low-tier API rate limits against expected MVP usage.
      - A note that usage beyond free-tier limits for commercial services (like Play.ht, remote LLMs, or email providers) may incur costs, and the user should review the provider's pricing.
- **Story 1.9 (New): Implement Database Event/Webhook: `hn_posts` Insert to Article Scraping Service**
  - Goal: To ensure that the successful insertion of a new Hacker News post into the `hn_posts` table automatically triggers the `ArticleScrapingService`.
  - Acceptance Criteria:
    - A Supabase database trigger or webhook mechanism (e.g., using `pg_net` or native triggers calling a function) is implemented on the `hn_posts` table for INSERT operations.
    - The trigger successfully invokes the `ArticleScrapingService` (Supabase Function). (A sketch of the receiving function follows this story.)
    - The invocation passes necessary parameters like `hn_post_id` and `workflow_run_id` to the `ArticleScrapingService`.
    - The mechanism is robust and includes error handling/logging for the trigger/webhook itself.
    - Unit/integration tests are created to verify the trigger fires correctly and the service is invoked with correct parameters.
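A sketch of the receiving function's entry point, assuming a JSON payload carrying the `hn_post_id` and `workflow_run_id` named in the ACs; the status codes and logging are illustrative only.

```ts
// supabase/functions/article-scraping-service/index.ts — illustrative sketch.
Deno.serve(async (req) => {
  const { hn_post_id, workflow_run_id } = await req.json();

  if (!hn_post_id || !workflow_run_id) {
    return new Response(JSON.stringify({ error: "Missing parameters" }), {
      status: 400,
      headers: { "Content-Type": "application/json" },
    });
  }

  console.log(`Scraping requested for post ${hn_post_id} (run ${workflow_run_id})`);
  // ...fetch the post URL, scrape with Cheerio, write to scraped_articles...

  return new Response(JSON.stringify({ accepted: true }), {
    status: 202,
    headers: { "Content-Type": "application/json" },
  });
});
```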
- **Story 1.10 (New): Define and Implement Core Configuration Tables**
  - Goal: To establish the database tables necessary for storing core application configurations like summarization prompts, newsletter templates, and subscriber lists.
  - Acceptance Criteria:
    - A Supabase migration is created and applied to define the `summarization_prompts` table schema as specified in `architecture.txt`.
    - A Supabase migration is created and applied to define the `newsletter_templates` table schema as specified in `architecture.txt`.
    - A Supabase migration is created and applied to define the `subscribers` table schema as specified in `architecture.txt`.
    - These tables are ready for data population (e.g., via seeding or manual entry for MVP).
- **Story 1.11 (New): Create Seed Data for Initial Configuration**
  - Goal: To populate the database with initial configuration data (prompts, templates, test subscribers) necessary for development and testing of MVP features.
  - Acceptance Criteria:
    - A `supabase/seed.sql` file (or an equivalent, documented seeding mechanism) is created.
    - The seed mechanism populates the `summarization_prompts` table with at least one default article prompt and one default comment prompt.
    - The seed mechanism populates the `newsletter_templates` table with at least one default newsletter template (HTML format for MVP).
    - The seed mechanism populates the `subscribers` table with a small list of 1-3 test email addresses for MVP delivery testing.
    - Instructions on how to apply the seed data to a local or development Supabase instance are documented (e.g., in the project `README.md`).
- **Story 1.12 (New): Set up and Configure Project Testing Frameworks**
  - Goal: To ensure that the primary testing frameworks (Jest, React Testing Library, Playwright) are installed and configured early in the project lifecycle, enabling test-driven development practices and adherence to the testing strategy.
  - Acceptance Criteria:
    - Jest and React Testing Library (RTL) are installed as project dependencies.
    - Jest and RTL are configured for unit and integration testing of Next.js components and JavaScript/TypeScript code (e.g., `jest.config.js` is set up, necessary Babel/TS transformations are in place).
    - A sample unit test (e.g., for a simple component or utility function) is created and runs successfully using the Jest/RTL setup.
    - Playwright is installed as a project dependency.
    - Playwright is configured for end-to-end testing (e.g., `playwright.config.ts` is set up, browser configurations are defined).
    - A sample E2E test (e.g., navigating to the application's homepage on the local development server) is created and runs successfully using Playwright.
    - Scripts to execute tests (e.g., unit tests, E2E tests) are added to `package.json`.
---

**Epic 2: Article Scraping**

- Goal: Implement the functionality to scrape and store linked articles from HN posts, enriching the data available for summarization and the newsletter. Ensure this functionality is triggered by database events and can be tested via API/CLI (if retained). Implement the database event mechanism to trigger subsequent processing.
- **Story 2.1:** As a system, I want to identify URLs within the top 30 (configurable via environment variable) Hacker News posts, so that I can extract the content of linked articles.
  - Acceptance Criteria:
    - The system parses the top N (configurable via env var) Hacker News posts to identify URLs.
    - The system filters out any URLs that are not relevant to article scraping (e.g., links to images, videos, etc.).
- **Story 2.2:** As a system, I want to scrape the content of the identified article URLs using Cheerio, so that I can provide summaries in the newsletter.
  - Acceptance Criteria:
    - The system scrapes the content from the identified article URLs using Cheerio.
    - The system extracts relevant content such as the article title, author, publication date, and main text.
    - The system handles potential issues during scraping, such as website errors or changes in website structure, logging errors for review.
- **Story 2.3:** As a system, I want to store the scraped article content in the Supabase database, associated with the corresponding Hacker News post and workflow run, so that it can be used for summarization and newsletter generation.
  - Acceptance Criteria:
    - Scraped article content is stored in the `scraped_articles` table, linked to the `hn_post_id` and the current `workflow_run_id`.
    - The system ensures that the stored data includes all extracted information (title, author, date, text).
    - The `scraping_status` and any `error_message` are recorded in the `scraped_articles` table.
    - Upon completion of scraping an article (success or failure), the service updates the `workflow_runs.details` (e.g., incrementing scraped counts) via `WorkflowTrackerService`.
    - A Supabase migration for the `scraped_articles` table (as defined in `architecture.txt`) is created and applied before data operations.
- **Story 2.4:** As a developer, I want to trigger the article scraping process via the API and CLI, so that I can manually initiate it for testing and debugging.
  - _Architect's Note: This story might become redundant if the main workflow trigger (Story 1.3) handles the entire pipeline initiation and individual service testing is done via direct function invocation or unit/integration tests._
  - Acceptance Criteria:
    - The API endpoint can trigger the article scraping process.
    - The CLI command can trigger the article scraping process locally.
    - The system logs the start and completion of the scraping process, including any errors encountered.
    - All API requests and CLI command executions are logged, including timestamps and any relevant data.
    - The system handles partial execution gracefully (i.e., if triggered before Epic 1 components like `WorkflowTrackerService` are available, it logs a message and exits).
    - If retained for isolated testing, all scraping operations initiated via this trigger must be associated with a valid `workflow_run_id` and update the `workflow_runs` table accordingly via `WorkflowTrackerService`.
- **Story 2.5 (New): Implement Database Event/Webhook: `scraped_articles` Success to Summarization Service**
  - Goal: To ensure that the successful scraping and storage of an article in `scraped_articles` automatically triggers the `SummarizationService`.
  - Acceptance Criteria:
    - A Supabase database trigger or webhook mechanism is implemented on the `scraped_articles` table (e.g., on INSERT or UPDATE where `scraping_status` is 'success').
    - The trigger successfully invokes the `SummarizationService` (Supabase Function).
    - The invocation passes necessary parameters like `scraped_article_id` and `workflow_run_id` to the `SummarizationService`.
    - The mechanism is robust and includes error handling/logging for the trigger/webhook itself.
    - Unit/integration tests are created to verify the trigger fires correctly and the service is invoked with correct parameters.
---

**Epic 3: AI-Powered Content Summarization**

- Goal: Integrate AI summarization capabilities, by implementing and using a configurable and testable `LLMFacade`, to generate concise summaries of articles and comments from prompts stored in the database. This will enrich the newsletter content, be triggerable via API/CLI, is triggered by database events, and track progress via `WorkflowTrackerService`.
- **Story 3.1:** As a system, I want to integrate an AI summarization capability by implementing and using an `LLMFacade`, so that I can generate concise summaries of articles and comments using various configurable LLM providers.
  - Acceptance Criteria:
    - An `LLMFacade` interface and concrete implementations (e.g., `OllamaAdapter`, `RemoteLLMApiAdapter`) are created in `supabase/functions/_shared/llm-facade.ts`.
    - A factory function is implemented within or alongside the facade to select the appropriate LLM adapter based on environment variables (e.g., `LLM_PROVIDER_TYPE`, `OLLAMA_API_URL`, `REMOTE_LLM_API_KEY`, `REMOTE_LLM_API_URL`, `LLM_MODEL_NAME`). (A factory sketch follows this story.)
    - The `LLMFacade` handles making requests to the respective LLM APIs (as configured) and parsing their responses to extract the summary.
    - Robust error handling and retry logic for transient API errors are implemented within the facade.
    - Unit tests for the `LLMFacade` and its adapters (mocking actual HTTP calls) achieve >80% coverage.
    - The system utilizes this `LLMFacade` for all summarization tasks (articles and comments).
    - The integration is configurable via environment variables to switch between local and remote LLMs and specify model names.
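A sketch of the facade-plus-factory shape these ACs describe. The environment variable names come from the AC; the request/response shape reflects Ollama's `/api/generate` endpoint, and the default URL and model name are assumptions.

```ts
// llm-facade.ts — illustrative sketch, not a final implementation.
export interface LLMFacade {
  summarize(prompt: string, content: string): Promise<string>;
}

class OllamaAdapter implements LLMFacade {
  constructor(private apiUrl: string, private model: string) {}

  async summarize(prompt: string, content: string): Promise<string> {
    const res = await fetch(`${this.apiUrl}/api/generate`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: this.model,
        prompt: `${prompt}\n\n${content}`,
        stream: false,
      }),
    });
    if (!res.ok) throw new Error(`Ollama error: ${res.status}`);
    const data = await res.json();
    return data.response; // Ollama returns the generation in `response`
  }
}

export function createLLMFacade(env = Deno.env.toObject()): LLMFacade {
  switch (env.LLM_PROVIDER_TYPE) {
    case "ollama":
      // Defaults below are assumptions for local development.
      return new OllamaAdapter(
        env.OLLAMA_API_URL ?? "http://localhost:11434",
        env.LLM_MODEL_NAME ?? "llama3"
      );
    // case "remote": return new RemoteLLMApiAdapter(env.REMOTE_LLM_API_URL, env.REMOTE_LLM_API_KEY, env.LLM_MODEL_NAME);
    default:
      throw new Error(`Unsupported LLM_PROVIDER_TYPE: ${env.LLM_PROVIDER_TYPE}`);
  }
}
```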
- **Story 3.2:** As a system, I want to retrieve summarization prompts from the database, and then use them via the `LLMFacade` to generate 2-paragraph summaries of the scraped articles, so that users can quickly grasp the main content and the prompts can be easily updated.
  - Acceptance Criteria:
    - The service retrieves the appropriate summarization prompt from the `summarization_prompts` table.
    - The system generates a 2-paragraph summary for each scraped article using the retrieved prompt via the `LLMFacade`.
    - Generated summaries are stored in the `article_summaries` table, linked to the `scraped_article_id` and the current `workflow_run_id`.
    - The summaries are accurate and capture the key information from the article.
    - Upon completion of each article summarization task, the service updates `workflow_runs.details` (e.g., incrementing article summaries generated counts) via `WorkflowTrackerService`.
    - (System Note: The `CheckWorkflowCompletionService` monitors the `article_summaries` table as part of determining overall summarization completion for a `workflow_run_id`.)
    - A Supabase migration for the `article_summaries` table (as defined in `architecture.txt`) is created and applied before data operations.
- **Story 3.3:** As a system, I want to retrieve summarization prompts from the database, and then use them via the `LLMFacade` to generate 2-paragraph summaries of the comments for the selected HN posts, so that users can understand the main discussions and the prompts can be easily updated.
  - Acceptance Criteria:
    - The service retrieves the appropriate summarization prompt from the `summarization_prompts` table.
    - The system generates a 2-paragraph summary of the comments for each selected HN post using the retrieved prompt via the `LLMFacade`.
    - Generated summaries are stored in the `comment_summaries` table, linked to the `hn_post_id` and the current `workflow_run_id`.
    - The summaries highlight interesting interactions and key points from the discussion.
    - Upon completion of each comment summarization task, the service updates `workflow_runs.details` (e.g., incrementing comment summaries generated counts) via `WorkflowTrackerService`.
    - (System Note: The `CheckWorkflowCompletionService` monitors the `comment_summaries` table as part of determining overall summarization completion for a `workflow_run_id`.)
    - A Supabase migration for the `comment_summaries` table (as defined in `architecture.txt`) is created and applied before data operations.
- **Story 3.4:** As a developer, I want to trigger the AI summarization process via the API and CLI, so that I can manually initiate it for testing and debugging.
  - Acceptance Criteria:
    - The API endpoint can trigger the AI summarization process.
    - The CLI command can trigger the AI summarization process locally.
    - The system logs the input and output of the summarization process, including the summarization prompt used and any errors.
    - All API requests and CLI command executions are logged, including timestamps and any relevant data.
    - The system handles partial execution gracefully (i.e., if triggered before Epic 2 is complete, it logs a message and exits).
    - All summarization operations initiated via this trigger must be associated with a valid `workflow_run_id` and update the `workflow_runs` table accordingly via `WorkflowTrackerService`.
---

**Epic 4: Automated Newsletter Creation and Distribution**

- Goal: Automate the generation and delivery of the daily newsletter by implementing and using a configurable `EmailDispatchFacade`. This includes handling podcast link availability, being triggerable via API/CLI, orchestration by `CheckWorkflowCompletionService`, and status tracking via `WorkflowTrackerService`.
- **Story 4.1:** As a system, I want to retrieve the newsletter template from the database, so that the newsletter's design and structure can be updated without code changes.
|
||||
- Acceptance Criteria:
|
||||
- The system retrieves the newsletter template from the `newsletter_templates` database table.
|
||||
- **Story 4.2:** As a system, I want to generate a daily newsletter in HTML format using the retrieved template, so that users can receive a concise summary of Hacker News content.
|
||||
- Acceptance Criteria:
|
||||
- The `NewsletterGenerationService` is triggered by the `CheckWorkflowCompletionService` when all summaries for a `workflow_run_id` are ready.
|
||||
- The service retrieves the newsletter template (from Story 4.1 output) from `newsletter_templates` table and summaries associated with the `workflow_run_id`.
|
||||
- The system generates a newsletter in HTML format using the template retrieved from the database.
|
||||
- The newsletter includes summaries of selected articles and comments.
|
||||
- The newsletter includes links to the original HN posts and articles.
|
||||
- The newsletter includes the original post dates/times.
|
||||
- Generated newsletter is stored in the `newsletters` table, linked to the `workflow_run_id`.
|
||||
- The service updates `workflow_runs.status` to 'generating_podcast' (or a similar appropriate status indicating handoff to podcast generation) after initiating podcast generation (as part of Epic 5 logic that will be invoked by this service or by `CheckWorkflowCompletionService` after this story's core task).
|
||||
- A Supabase migration for the `newsletters` table (as defined in `architecture.txt`) is created and applied before data operations.
|
||||
- **Story 4.3:** As a system, I want to send the generated newsletter to a list of subscribers by implementing and using an `EmailDispatchFacade`, with credentials securely provided, so that users receive the daily summary in their inbox.
  - Acceptance Criteria:
    - An `EmailDispatchFacade` is implemented in `supabase/functions/_shared/` to abstract interaction with the email sending service (initially Nodemailer via SMTP); a sketch follows this list.
    - The facade handles configuration (e.g., SMTP settings from environment variables), email construction (From, To, Subject, HTML content), and sending the email.
    - The facade includes error handling for email dispatch and logs relevant status information.
    - Unit tests for the `EmailDispatchFacade` (mocking the actual Nodemailer library calls) achieve >80% coverage.
    - The `NewsletterGenerationService` (specifically its delivery step, which uses the `EmailDispatchFacade`) is triggered by `CheckWorkflowCompletionService` once the podcast link is available in the `newsletters` table for the `workflow_run_id` (or a configured timeout/failure condition for the podcast step has been met).
    - The system retrieves the list of subscriber email addresses from the Supabase database.
    - The system sends the HTML newsletter (with the podcast link conditionally included) to all active subscribers using the `EmailDispatchFacade`.
    - Credentials for the email service (e.g., SMTP server details) are securely accessed via environment variables and used by the facade.
    - The system logs the delivery status for each subscriber (potentially via the facade).
    - The system implements conditional logic for podcast link inclusion (from the `newsletters` table) and handles delay/retry as per the PRD, coordinated by `CheckWorkflowCompletionService`.
    - The service updates `newsletters.delivery_status` (e.g., 'sent', 'failed') and `workflow_runs.status` to 'completed' or 'failed' via `WorkflowTrackerService` upon completion or failure of delivery.
    - The initial email template includes a placeholder for the podcast URL.
    - The end-to-end generation time for a typical daily newsletter (from workflow trigger to successful email dispatch initiation, for a small set of content) is measured and logged during testing to ensure it is within a reasonable operational timeframe (target < 30 minutes).
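
A minimal sketch of the facade's shape, using the real Nodemailer `createTransport`/`sendMail` API; the environment variable names and Node-style env access are assumptions:

```typescript
// Sketch of the EmailDispatchFacade described above. Nodemailer's
// createTransport/sendMail calls are real API; env var names are assumptions.
import nodemailer from "nodemailer";

export class EmailDispatchFacade {
  private transporter = nodemailer.createTransport({
    host: process.env.SMTP_HOST,
    port: Number(process.env.SMTP_PORT ?? 587),
    secure: process.env.SMTP_SECURE === "true",
    auth: {
      user: process.env.SMTP_USER,
      pass: process.env.SMTP_PASS,
    },
  });

  // Sends one HTML email; returns true on success, false on failure,
  // logging the outcome either way so delivery status can be recorded.
  async send(to: string, subject: string, html: string): Promise<boolean> {
    try {
      const info = await this.transporter.sendMail({
        from: process.env.EMAIL_FROM,
        to,
        subject,
        html,
      });
      console.log(`Email dispatched to ${to}: ${info.messageId}`);
      return true;
    } catch (err) {
      console.error(`Email dispatch to ${to} failed:`, err);
      return false;
    }
  }
}
```
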
- **Story 4.4:** As a developer, I want to trigger the newsletter generation and distribution process via the API and CLI, so that I can manually initiate it for testing and debugging.
  - Acceptance Criteria:
    - The API endpoint can trigger the newsletter generation and distribution process (a sketch follows this list).
    - The CLI command can trigger the newsletter generation and distribution process locally.
    - The system logs the start and completion of the process, including any errors.
    - All API requests and CLI command executions are logged, including timestamps and any relevant data.
    - The system handles partial execution gracefully (i.e., if triggered before Epic 3 is complete, it logs a message and exits).
    - All newsletter operations initiated via this trigger must be associated with a valid `workflow_run_id` and update the `workflow_runs` table accordingly via `WorkflowTrackerService`.
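
A minimal sketch of what such a trigger endpoint could look like as a Next.js route handler; the route path and request body shape are assumptions, since the PRD does not prescribe an implementation:

```typescript
// app/api/system/trigger-newsletter/route.ts -- illustrative sketch only;
// the route path and request shape are assumptions.
import { NextResponse } from "next/server";

export async function POST(req: Request) {
  const { workflowRunId } = await req.json();
  if (!workflowRunId) {
    return NextResponse.json(
      { error: "workflowRunId is required" },
      { status: 400 }
    );
  }

  // Log the trigger with a timestamp, per the ACs above.
  console.log(
    `[${new Date().toISOString()}] Newsletter trigger received for ${workflowRunId}`
  );

  // The real handler would check that Epic 3 processing is complete for the
  // run, exit gracefully with a log message if not, and otherwise invoke
  // NewsletterGenerationService and update workflow_runs via WorkflowTrackerService.
  return NextResponse.json({ accepted: true, workflowRunId });
}
```
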

---

**Epic 5: Podcast Generation Integration**

- Goal: Integrate with an audio generation API (initially Play.ht) by implementing and using a configurable `AudioGenerationFacade` to create podcast versions of the newsletter. This includes handling webhooks to update newsletter data and workflow status. Ensure this is triggerable via API/CLI, orchestrated appropriately, and uses `WorkflowTrackerService`.

- **Story 5.1:** As a system, I want to integrate with an audio generation API (e.g., Play.ht's PlayNote API) by implementing and using an `AudioGenerationFacade`, so that I can generate AI-powered podcast versions of the newsletter content.
  - Acceptance Criteria:
    - An `AudioGenerationFacade` is implemented in `supabase/functions/_shared/` to abstract interaction with the audio generation service (initially Play.ht).
    - The facade handles API authentication, request formation (e.g., sending content for synthesis, providing the webhook URL), and response parsing for the specific audio generation service.
    - The facade is configurable via environment variables (e.g., API key, user ID, service endpoint, webhook URL base).
    - Robust error handling and retry logic for transient API errors are implemented within the facade.
    - Unit tests for the `AudioGenerationFacade` (mocking actual HTTP calls to the Play.ht API) achieve >80% coverage.
    - The system uses this `AudioGenerationFacade` for all podcast generation tasks.
    - The integration employs webhooks for asynchronous status updates from the audio generation service.
    - (Context: The `PodcastGenerationService` containing this logic is invoked by `NewsletterGenerationService` or `CheckWorkflowCompletionService` for a specific `workflow_run_id` and `newsletter_id`.)
- **Story 5.2:** As a system, I want to send the newsletter content to the audio generation service via the `AudioGenerationFacade` to initiate podcast creation, and receive a job ID or initial response, so that I can track the podcast creation process.
  - Acceptance Criteria:
    - The system sends the newsletter content (identified by `newsletter_id` for a given `workflow_run_id`) to the configured audio generation service via the `AudioGenerationFacade` (a sketch of the facade's submit call follows this list).
    - The system receives a job ID or initial response from the service via the facade.
    - The `podcast_playht_job_id` (or a generic `podcast_job_id`) and `podcast_status` (e.g., 'generating', 'submitted') are stored in the `newsletters` table, linked to the `workflow_run_id`.
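
A minimal sketch of the facade's submit call is shown below; the endpoint, header names, and payload/response field names are placeholders to be replaced with the provider's documented API:

```typescript
// Sketch of the AudioGenerationFacade's submit call. The request/response
// shapes are placeholders -- the real Play.ht PlayNote API fields must be
// taken from the provider's documentation.
export class AudioGenerationFacade {
  constructor(
    private endpoint = process.env.AUDIO_API_ENDPOINT!,
    private apiKey = process.env.AUDIO_API_KEY!,
    private userId = process.env.AUDIO_API_USER_ID!,
    private webhookUrl = process.env.AUDIO_WEBHOOK_URL!
  ) {}

  // Submits newsletter content for synthesis and returns the service's job ID.
  // Retry logic for transient errors is omitted for brevity.
  async submitJob(newsletterHtml: string): Promise<string> {
    const res = await fetch(this.endpoint, {
      method: "POST",
      headers: {
        Authorization: `Bearer ${this.apiKey}`,
        "X-User-Id": this.userId, // header name is an assumption
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        content: newsletterHtml, // field names are assumptions
        webhookUrl: this.webhookUrl,
      }),
    });
    if (!res.ok) {
      throw new Error(`Audio generation request failed: ${res.status}`);
    }
    const body = await res.json();
    return body.jobId; // assumed response field
  }
}
```
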
- **Story 5.3:** As a system, I want to implement a webhook handler to receive the podcast URL from the audio generation service and update the newsletter data and workflow status, so that the podcast link can be included in the newsletter and web interface, and the overall workflow can proceed.
  - Acceptance Criteria:
    - The system implements a webhook handler (`PlayHTWebhookHandlerAPI` at `/api/webhooks/playht`, or a more generic path like `/api/webhooks/audio-generation`) to receive the podcast URL and status from the audio generation service (a sketch follows this list).
    - The webhook handler extracts the podcast URL and status (e.g., 'completed', 'failed') from the webhook payload.
    - The webhook handler updates the `newsletters` table with the podcast URL and status for the corresponding job.
    - The `PlayHTWebhookHandlerAPI` also updates `workflow_runs.details` with the podcast status (e.g., `podcast_status: 'completed'`) via `WorkflowTrackerService` for the relevant `workflow_run_id` (which may need to be looked up from the `newsletter_id` or job ID present in the webhook or associated with the service job).
    - If supported by the audio generation service (e.g., Play.ht), implement security verification for the incoming webhook (such as shared secret or signature validation) to ensure authenticity. If the provider offers no direct verification mechanism, this AC is N/A, and alternative measures (such as IP whitelisting, if applicable and secure) should be considered and documented.
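
A minimal sketch of the handler as a Next.js route handler, assuming a JSON payload with `jobId`, `status`, and `audioUrl` fields and an optional shared-secret header (both are assumptions to verify against the provider's docs):

```typescript
// app/api/webhooks/playht/route.ts -- illustrative sketch only. Payload
// field names and the shared-secret header are assumptions.
import { NextResponse } from "next/server";
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

export async function POST(req: Request) {
  // Optional shared-secret check (only if the provider supports one).
  const secret = req.headers.get("x-webhook-secret");
  if (process.env.WEBHOOK_SECRET && secret !== process.env.WEBHOOK_SECRET) {
    return NextResponse.json({ error: "unauthorized" }, { status: 401 });
  }

  const { jobId, status, audioUrl } = await req.json(); // assumed payload shape

  // Update the newsletter row matching the service job; the real handler
  // would also update workflow_runs.details via WorkflowTrackerService.
  const { error } = await supabase
    .from("newsletters")
    .update({ podcast_status: status, podcast_url: audioUrl ?? null })
    .eq("podcast_playht_job_id", jobId);

  if (error) {
    return NextResponse.json({ error: error.message }, { status: 500 });
  }
  return NextResponse.json({ received: true });
}
```
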
- **Story 5.4:** As a developer, I want to trigger the podcast generation process via the API and CLI, so that I can manually initiate it for testing and debugging.
  - Acceptance Criteria:
    - The API endpoint can trigger the podcast generation process.
    - The CLI command can trigger the podcast generation process locally.
    - The system logs the start and completion of the process, including any intermediate steps, responses from the audio generation service, and webhook interactions.
    - All API requests and CLI command executions are logged, including timestamps and any relevant data.
    - The system handles partial execution gracefully (i.e., if triggered before Epic 4 components are ready, it logs a message and exits).
    - All podcast generation operations initiated via this trigger must be associated with a valid `workflow_run_id` and `newsletter_id`, and update the `workflow_runs` and `newsletters` tables accordingly via `WorkflowTrackerService` and direct table updates as necessary.

---

**Epic 6: Web Interface for Initial Structure and Content Access**

- Goal: Develop a user-friendly, responsive, and accessible web interface, based on `frontend-architecture.md`, to display newsletters and provide access to podcast content, aligning with the project's visual and technical guidelines. All UI development within this epic must adhere to the "synthwave technical glowing purple vibes" aesthetic using Tailwind CSS and Shadcn UI, ensure basic mobile responsiveness, meet WCAG 2.1 Level A accessibility guidelines (including semantic HTML, keyboard navigation, alt text, color contrast), and optimize images using `next/image`, as detailed in `frontend-architecture.txt` and `ui-ux-spec.txt`.

- **Story 6.1:** As a developer, I want to establish the initial Next.js App Router structure for the web interface, including core layouts and routing, using `frontend-architecture.md` as a guide, so that I have a foundational frontend structure.
  - Acceptance Criteria:
    - Initial HTML/CSS mockups (e.g., from Vercel v0, if used) serve as a visual guide, but the implementation uses Next.js and Shadcn UI components as per `frontend-architecture.md`.
    - Next.js App Router routes are set up for `/newsletters` (listing page) and `/newsletters/[newsletterId]` (detail page) within an `app/(web)/` route group.
    - The root layout (`app/(web)/layout.tsx`) and any necessary feature-specific layouts (e.g., `app/(web)/newsletters/layout.tsx`) are implemented using Next.js App Router conventions and Tailwind CSS.
    - A `PageWrapper.tsx` component (as defined in `frontend-architecture.txt`) is implemented and used for consistent page styling (e.g., padding, max-width).
    - The basic page structure renders correctly in the development environment.
- **Story 6.2:** As a user, I want to see a list of current and past newsletters on the `/newsletters` page, so that I can easily browse available content.
  - Acceptance Criteria:
    - The `app/(web)/newsletters/page.tsx` route displays a list of newsletters.
    - Newsletter items are displayed using a `NewsletterCard.tsx` component.
    - The `NewsletterCard.tsx` component is developed (e.g., using Shadcn UI `Card` as a base), displaying at least the newsletter title, target date, and a link/navigation to its detail page.
    - `NewsletterCard.tsx` is styled using Tailwind CSS to fit the "synthwave" theme.
    - Data for the newsletter list (e.g., ID, title, date) is fetched server-side in `app/(web)/newsletters/page.tsx` using the Supabase server client (see the sketch after this list).
    - The newsletter list page is responsive across common device sizes (mobile, desktop).
    - The list includes relevant information such as the newsletter title and date.
    - The list is paginated or provides scrolling functionality to handle a large number of newsletters.
    - Key page load performance (e.g., Largest Contentful Paint) for the newsletter list page is benchmarked (e.g., using browser developer tools or Lighthouse) during development testing to ensure it aligns with the target of fast load times (target < 2 seconds).
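
A minimal sketch of the server-side fetch; the `title` and `target_date` columns and the import alias are assumptions, and the plain `createClient` stands in for the project's server client helper:

```typescript
// app/(web)/newsletters/page.tsx -- illustrative sketch of the server-side
// fetch. Column names and the "@/..." alias are assumptions.
import { createClient } from "@supabase/supabase-js";
import { NewsletterCard } from "@/app/components/core/NewsletterCard";

export default async function NewslettersPage() {
  const supabase = createClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
  );

  // Fetch the most recent newsletters; .limit() is a simple pagination placeholder.
  const { data: newsletters } = await supabase
    .from("newsletters")
    .select("id, title, target_date")
    .order("target_date", { ascending: false })
    .limit(20);

  return (
    <div className="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 gap-4">
      {(newsletters ?? []).map((n) => (
        <NewsletterCard key={n.id} id={n.id} title={n.title} date={n.target_date} />
      ))}
    </div>
  );
}
```
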
- **Story 6.3:** As a user, I want to be able to select a newsletter from the list and read its full content within the web page on the `/newsletters/[newsletterId]` page.
  - Acceptance Criteria:
    - Clicking a `NewsletterCard` navigates to the corresponding `app/(web)/newsletters/[newsletterId]/page.tsx` route.
    - The full HTML content of the selected newsletter is retrieved server-side using the Supabase server client and displayed in a readable format.
    - A `BackButton.tsx` component is developed (e.g., using Shadcn UI `Button` as a base) and integrated on the newsletter detail page, allowing users to navigate back to the newsletter list.
    - The newsletter detail page content area is responsive across common device sizes.
    - Key page load performance (e.g., Largest Contentful Paint) for the newsletter detail page is benchmarked (e.g., using browser developer tools or Lighthouse) during development testing to ensure it aligns with the target of fast load times (target < 2 seconds).
- **Story 6.4:** As a user, I want the option to download the currently viewed newsletter from its detail page, so that I can access it offline.
  - Acceptance Criteria:
    - A `DownloadButton.tsx` component is developed (e.g., using Shadcn UI `Button` as a base).
    - The `DownloadButton.tsx` is integrated and visible on the newsletter detail page (`/newsletters/[newsletterId]`).
    - Clicking the button initiates a download of the newsletter content (e.g., HTML format for MVP).
- **Story 6.5:** As a user, I want to listen to the generated podcast associated with a newsletter within the web interface on its detail page, if a podcast is available.
  - Acceptance Criteria:
    - A `PodcastPlayer.tsx` React component with standard playback controls (play, pause, seek bar, volume control) is developed.
    - A `podcastPlayerSlice.ts` Zustand store is implemented to manage podcast player state (e.g., current track URL, playback status, current time, volume); a minimal sketch follows this list.
    - The `PodcastPlayer.tsx` component integrates with the `podcastPlayerSlice.ts` Zustand store for its state management.
    - If a podcast URL is available for the displayed newsletter (fetched from Supabase), the `PodcastPlayer.tsx` component is displayed on the newsletter detail page.
    - The `PodcastPlayer.tsx` can load and play the podcast audio from the provided URL.
    - The `PodcastPlayer.tsx` is styled using Tailwind CSS to fit the "synthwave" theme and is responsive.
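
A minimal sketch of the store, using the real Zustand `create` API; the state field names are assumed from the AC above:

```typescript
// podcastPlayerSlice.ts -- minimal sketch of the player store described
// above; field names are assumptions consistent with the AC.
import { create } from "zustand";

interface PodcastPlayerState {
  trackUrl: string | null;
  isPlaying: boolean;
  currentTime: number;
  volume: number;
  load: (url: string) => void;
  setPlaying: (playing: boolean) => void;
  setCurrentTime: (t: number) => void;
  setVolume: (v: number) => void;
}

export const usePodcastPlayerStore = create<PodcastPlayerState>((set) => ({
  trackUrl: null,
  isPlaying: false,
  currentTime: 0,
  volume: 1,
  // Loading a new track resets playback position and pauses.
  load: (url) => set({ trackUrl: url, isPlaying: false, currentTime: 0 }),
  setPlaying: (isPlaying) => set({ isPlaying }),
  setCurrentTime: (currentTime) => set({ currentTime }),
  setVolume: (volume) => set({ volume }),
}));
```
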

---

## Key Reference Documents

_(This section will be created later, from the sections prior to this being carved up into smaller documents)_

## Out of Scope Ideas Post MVP

- User Authentication and Management
- Subscription Management
- Admin Dashboard
- Viewing and updating daily podcast settings
- Prompt management for summarization
- UI for template modification
- Enhanced Newsletter Customization
- Additional Content Digests
- Configuration and creation of different digests
- Support for content sources beyond Hacker News
- Advanced scraping techniques (e.g., Playwright)

## Change Log

| Change                                           | Date       | Version | Description                                                                                                                                               | Author |
| :----------------------------------------------- | :--------- | :------ | :-------------------------------------------------------------------------------------------------------------------------------------------------------- | :----- |
| Initial Draft                                    | 2025-05-13 | 0.1     | Initial draft of the Product Requirements Document                                                                                                        | 2-pm   |
| Updates from Arch suggestions & Checklist Review | 2025-05-14 | 0.3     | Incorporated changes from `arch-suggested-changes.txt`, `fea-suggested-changes.txt`, and Master Checklist review, including new stories & AC refinements. | 5-posm |

BETA-V3/v3-demos/full-stack-app-demo/9-v0-one-shot-prompt.txt
@@ -0,0 +1,123 @@

Generate a Next.js 14 (App Router) application using React, TypeScript, and Tailwind CSS for a project called "BMad DiCaster".
The application's purpose is to display daily summaries of Hacker News posts, including an optional AI-generated podcast.

**1. Overall Project Context & Technology Stack:**

- Framework: Next.js 14+ (App Router)
- Language: TypeScript
- UI Library: React (19+)
- Styling: Tailwind CSS (v3.4+)
- Base Component Library: Shadcn UI (latest). Assume necessary Shadcn UI components (like Button, Card, Dialog, an audio player if available, or primitives to build one) can be easily added or are available.
- State Management (Client-Side): Zustand (for specific client components like the podcast player). For initial scaffolding, component-level state is acceptable.
- Data Source (for displayed content): Supabase (PostgreSQL). For this initial v0 generation, use placeholder data or clearly indicate where data fetching from Supabase would occur. Server Components should be preferred for data fetching.

**2. Design System & Visual Styling:**

- Theme: "Synthwave technical glowing purple vibes." This translates to:
  - A predominantly dark theme for the application background.
  - Accent Color: A vibrant purple (e.g., Tailwind's `purple-500` or a custom shade like `#800080`) for interactive elements, links, highlights, and potentially subtle glows or text shadows on headings.
- Layout: Modern, minimalist, and clean, focusing on content readability and efficient information consumption.
- Typography: Use Tailwind's default sans-serif font stack. Employ semantic HTML and Tailwind's typography utilities (e.g., `text-2xl font-bold` for titles, `text-base` for body).
- Responsiveness: The application must be mobile-first and responsive across common breakpoints (sm, md, lg, xl) using Tailwind CSS.
- Accessibility: Adhere to WCAG 2.1 Level A. This includes semantic HTML, keyboard navigability, sufficient color contrast (especially with the dark theme and purple accents), and alt text for any images (though the MVP is mostly text/content based).

**3. Application Structure & Routing (Next.js App Router):**

- The main application will live under the `/` path, effectively serving as the newsletter list page.
- `/newsletters`: This route should display a list of available newsletters. If the root `/` path doesn't directly serve this, it should redirect here, or this should be the primary view.
- `/newsletters/[newsletterId]`: This dynamic route will display the content of a single, selected newsletter. `[newsletterId]` will be a unique identifier (e.g., a UUID).

**4. Page Structure & Key Components:**

**A. PageWrapper Component (Conceptual - Create if useful for consistency):**

- A layout component that wraps page content.
- Provides consistent horizontal padding (e.g., `px-4 md:px-8`) and a max-width container (e.g., `max-w-4xl mx-auto`) to ensure content is well-centered and readable on larger screens.
- Should include a simple header placeholder (e.g., just the text "BMad DiCaster" with the logo if available, or a placeholder for it) and a simple footer placeholder (e.g., copyright text). A minimal sketch of the intended shape follows.
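
For reference, a minimal sketch of the intended shape (class names and copy are illustrative, not requirements):

```typescript
// PageWrapper.tsx -- illustrative sketch of the layout wrapper described above.
import type { ReactNode } from "react";

export function PageWrapper({ children }: { children: ReactNode }) {
  return (
    <div className="min-h-screen bg-gray-950 text-gray-100">
      <header className="px-4 md:px-8 py-4 font-bold text-purple-400">
        BMad DiCaster
      </header>
      {/* Consistent padding and max-width container for readable content */}
      <main className="px-4 md:px-8 max-w-4xl mx-auto">{children}</main>
      <footer className="px-4 md:px-8 py-4 text-sm text-gray-500">
        © 2025 BMad DiCaster
      </footer>
    </div>
  );
}
```
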
**B. Newsletter List Page (`/` or `/newsletters` -> `app/(web)/newsletters/page.tsx`):**

- Purpose: Display a list of available newsletters, ordered by date (most recent first).
- Key UI Elements:
  - Page Title: e.g., "Daily DiCaster Updates" or "Latest Newsletters".
  - List of `NewsletterCard` components.
- Data: Each card represents a newsletter and should display at least a title and date. Clicking a card navigates to the Newsletter Detail Page. For v0, use an array of 3-5 placeholder newsletter objects (e.g., `{ id: 'uuid-1', title: 'Tech Highlights - May 14, 2025', date: '2025-05-14', summary_short: 'A quick rundown of today\'s top tech news...' }`).
- Structure:

```html
<PageWrapper>
  <h1>[Page Title]</h1>
  <div class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 gap-4">
    <!-- NewsletterCard components render here -->
  </div>
</PageWrapper>
```
**C. Newsletter Detail Page (`/newsletters/[newsletterId]` -> `app/(web)/newsletters/[newsletterId]/page.tsx`):**

- Purpose: Display the full content of a selected newsletter, including its podcast version.
- Key UI Elements:
  - `BackButton` component to navigate back to the Newsletter List Page.
  - Newsletter Title.
  - Newsletter Date.
  - Full HTML content of the newsletter.
  - `PodcastPlayer` component (if a podcast URL is available for the newsletter).
  - `DownloadButton` component to download the newsletter.
- Data: For v0, use a placeholder newsletter object (e.g., `{ id: 'uuid-1', title: 'Tech Highlights - May 14, 2025', date: '2025-05-14', htmlContent: '<p>This is the full <b>HTML</b> content...</p><ul><li>Point 1</li></ul>', podcastUrl: 'placeholder_audio.mp3' }`).
- Structure:

```html
<PageWrapper>
  <BackButton />
  <h2>[Newsletter Title]</h2>
  <p class="text-sm text-gray-400">[Newsletter Date]</p>
  <article class="prose dark:prose-invert mt-4">
    <!-- Full newsletter HTML content renders here -->
  </article>
  <div class="mt-6">
    <PodcastPlayer audioUrl="{placeholder_audio.mp3 (if available)}" />
  </div>
  <div class="mt-4">
    <DownloadButton newsletterId="{newsletterId}" />
  </div>
</PageWrapper>
```

(Note: `prose` and `dark:prose-invert` are Tailwind Typography plugin classes. Assume this plugin is or can be installed.)
**5. Core Reusable Components (to be placed in `app/components/core/`):**

**a. `NewsletterCard.tsx`:**

- Purpose: Displays a summary of a newsletter in the list view.
- Props: `id: string`, `title: string`, `date: string`, `summary_short?: string`.
- UI: Use a Shadcn UI `Card` component as a base. Display the title, date, and summary. The entire card should be clickable and navigate to `/newsletters/[id]`.
- Styling: Minimalist, with synthwave accents on hover/focus.
**b. `PodcastPlayer.tsx`:**

- Purpose: Plays the podcast audio associated with a newsletter.
- Props: `audioUrl: string`.
- UI:
  - If a Shadcn UI audio player component is available, use it.
  - Otherwise, create a simple player using the HTML5 `<audio>` element and custom controls styled with Tailwind CSS.
  - Controls: Play/Pause button, current time / total duration display, volume control (slider or button), and a simple progress bar.
- State: Manage internal state for play/pause, current time, and volume using component-level state (useState) or a simple Zustand slice (`podcastPlayerSlice.ts`).
- Styling: Clean, integrated into the page, synthwave accents for controls. A minimal sketch follows.
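
For reference, a minimal sketch of the fallback HTML5 approach (play/pause only; the full component would add a seek bar, duration display, and volume control):

```typescript
// PodcastPlayer.tsx -- illustrative sketch of a bare-bones HTML5 audio wrapper.
"use client";

import { useRef, useState } from "react";

export function PodcastPlayer({ audioUrl }: { audioUrl: string }) {
  const audioRef = useRef<HTMLAudioElement>(null);
  const [isPlaying, setIsPlaying] = useState(false);

  // Toggle between playing and paused states on the underlying <audio> element.
  const togglePlay = () => {
    const audio = audioRef.current;
    if (!audio) return;
    if (isPlaying) {
      audio.pause();
    } else {
      void audio.play();
    }
    setIsPlaying(!isPlaying);
  };

  return (
    <div className="flex items-center gap-3 rounded-lg bg-gray-900 p-3">
      <audio ref={audioRef} src={audioUrl} onEnded={() => setIsPlaying(false)} />
      <button
        onClick={togglePlay}
        className="rounded bg-purple-500 px-3 py-1 text-white hover:bg-purple-400"
      >
        {isPlaying ? "Pause" : "Play"}
      </button>
    </div>
  );
}
```
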
**c. `DownloadButton.tsx`:**

- Purpose: Allows the user to download the newsletter.
- Props: `newsletterId: string` (or `downloadUrl: string` if preferred).
- UI: Use a Shadcn UI `Button` component. A download icon is a plus.
- Action: For v0, this can be a placeholder button. In the actual app, it would trigger a download.
- Styling: Consistent with other buttons, synthwave accent.
**d. `BackButton.tsx`:**

- Purpose: Navigates the user to the previous page (typically the newsletter list).
- UI: Use a Shadcn UI `Button` (perhaps with `variant="outline"` or `variant="ghost"`). Should ideally include a "back" icon and/or the text "Back to list".
- Action: Use the Next.js `useRouter` hook for navigation (`router.back()` or a specific path like `/newsletters`). A minimal sketch follows.
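
For reference, a minimal sketch using the real `next/navigation` `useRouter` API (a plain `<button>` stands in for the Shadcn UI `Button`):

```typescript
// BackButton.tsx -- illustrative sketch.
"use client";

import { useRouter } from "next/navigation";

export function BackButton() {
  const router = useRouter();
  return (
    <button
      onClick={() => router.back()}
      className="text-purple-400 hover:text-purple-300"
    >
      ← Back to list
    </button>
  );
}
```
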
**6. General Instructions for Vercel v0:**

- Generate separate, well-commented files for each component and page.
- Use placeholder data where actual data fetching from Supabase would occur. Clearly comment these locations.
- Ensure the basic folder structure aligns with Next.js App Router best practices (e.g., `app/(web)/newsletters/page.tsx`, `app/components/core/NewsletterCard.tsx`).
- Prioritize functional scaffolding of the layout and components over pixel-perfect styling if choices need to be made, but apply the synthwave theme (dark base, purple accents) generally.
- The code should be clean, readable, and easily modifiable.

**Example of a placeholder newsletter data structure (for v0):**

```typescript
interface PlaceholderNewsletter {
  id: string;
  title: string;
  date: string; // e.g., "YYYY-MM-DD"
  summary_short?: string; // For card view
  htmlContent?: string; // For detail view
  podcastUrl?: string; // URL to an mp3 file
}
```
BETA-V3/v3-demos/full-stack-app-demo/readme.md
@@ -0,0 +1,35 @@

# Project 1 Demo

Hacker News AI Podcast NextJS Monorepo

## Full Agile Workflow

## **Providing too much detail about wanting to use facades to wrap things really caused the AI to screw some things up that I did not catch before letting it make a lot of story updates - lesson learned: don't put too much into the technical-preferences, as the LLM will overuse the hell out of it when updating epics and stories, and if unnoticed these are a pain to clean up later.**

**The output of this nonsense is apparent in the stories generated haphazardly.**

# Demo Info

- 0-brief.md was generated with the Analyst through discussion of an idea (Gemini 2.5 Pro).
  - The end of the brief has the hand-off prompt to kick off the PM in PRD create mode.
- 1-prd.md was generated from a great discussion in interactive mode. The PRD has some citation noise in it that should not be in the release version.
  - The end of the PRD has the output from the checklist and also the prompt for the architect.
- Before running the architect, I also ran the design-architect in ui-ux-spec mode to produce that doc, which gives a little more UI focus to fill in some PRD-level details about the front-end app piece of this.
- I then ran with the PM prompt against the architect - the nice thing about these prompt carry-overs is they carry extra details that get discussed in conversation but do not belong in that doc. So for example, when I am talking to the PM, reviewing stories and epics at a high level, I randomly think of a lot of lower-level details - sequencing issues, technology choices - so I might let the PM know - he keeps track of them, and if they do not belong in the PRD they go into the prompt.
- Oh - another cool thing with the V3 PM - aside from YOLO or not - it also has 2 different styles of producing the high-level epics and stories: the way I think they should be, high-level and business-outcome focused, but also a mode that assumes you are going to skip the architect, where it will put a lot more technical detail into the story ACs. I am not a fan of that mode - and that's a bit how V2 was operating, a little too much at that level for my personal taste - but now it's a choice!
- I also created a 0-technical-preferences.md document - this is a new V3 feature that is IMO a game changer. As you build apps, you can keep track of architecture design and coding patterns and practices you favor. If they are in that doc, and they make sense for the project being discussed, the architect can include them or suggest them where they fit. So for example, if I want a hexagonal architecture, I don't have to explain it every time.
- I mentioned a bit too much about facades in an unclear way, and the architect, once the whole checklist was run and we went through all the sections, gave me an output of proposed changes - I should have read it a bit closer but was in a hurry to produce the demo content - so when I had the PO apply all the improvements to all stories (a few of which I had to have corrected), it went a bit facade crazy.
- But I am jumping ahead there - the architect was done in a super helpful interactive mode - talking me through every section of the template!
- I then ran the design architect in generate-front-end-architecture mode - it produced a fully detailed front-end architecture. I am not sure if it's overkill for this simple project UI with 2 simple screens, but I wanted to test it out. I need to do some more consulting with front-end experts, as I am more of a back-end engineer these days - but I think it produced a useful document.
- The best part was also having it generate a V0 prompt for me - this takes in the PRD, the Architecture, the ui-ux-spec, and the front-end architecture and crafts a detailed prompt - much better than I could manually write on the fly.
- I put the full prompt into V0 - and I had the fully functional UI I envisioned actually working - ready to replace the mock data with real data sources - UNBELIEVABLE! The prompt is in the file 9-v0-one-shot-prompt.txt - try it out! That was with no tweaks.
- I then had the PO take the arch suggested changes and front-end suggested changes - and the PO gave me a fully updated version of the full PRD (with all the epics and stories corrected in a one-shot output) after he confirmed all the changes with me - I should have corrected a few, but was in a hurry and just told him to YOLO all the changes LOL.
- I then had the PO do the full massive checklist run in a new instance with all of the updated docs to find any issues - which can be seen in the 7-po-checklist-result.md file.

BTW - the original PRD is in file 1-prd, and 8-prd-po-updated is the one with all the inclusions from the other suggestions, plus a few changes from the checklist run.

- Once all of this was in Cursor, I then used the new doc-sharding-task.md (which is powered by a doc-sharding-template, so it's easy to modify) - this took the final architecture, final PRD, and final front-end architecture, and it output all of the smaller granular files in the folder 10-sharded-docs. The sharding was done with just a normal agent (no rules or mode customization) - this is the nice thing about the task files: no need for custom agent slots being filled up with these seldom-used tasks.

- The dev agent and the SM are aligned on usage of these smaller docs, and have been tweaked to produce the stories. Folder 11 has samples of some of the granular stories created - just for fun, I had the POSM in Gemini 2.5 Pro web YOLO all of the detailed stories for the first few epics. I DON'T RECOMMEND DOING THIS IN REALITY - you lose the benefit of dev notes from one story, if things change or pivot or whatever, feeding into the next story's creation right before pick-up. But it's fun to do this to see how the model is at producing stories, and whether the level of story detail is dialed in for a given model.

- For the Epic 3 stories, I also YOLO'd it, but used Sonnet 3.7 - just to see how the quality of story output differs doing it from within Cursor with a very different model.
@@ -1,3 +0,0 @@
# Project 1 Demo

Hacker News AI Podcast NextJS Monorepo
@@ -61,21 +61,39 @@

   - Confirm access to all relevant project documents (e.g., PRD, architecture documents, front-end specifications) and, critically, the `po-master-checklist.txt`.
   - Explain the process: "We will now go through the `po-master-checklist.txt` section by section. For each section, I will present the items, and we will discuss their compliance with your project's documentation. I will record findings and any necessary changes."

2. **Iterative Checklist Review (Section by Section)**
2. **Pre-Checklist Documentation Update (Epics & Stories)**

   - Before proceeding to the checklist, inquire with the user: "Are there any suggested updates to the epics and stories from the Architect or Front-End Architect that we need to incorporate into the PRD (or the relevant document containing the master list of epics and stories)?"
   - **If the user indicates 'Yes' and provides updates:**
     - Confirm you have the latest version of the PRD (or the primary document containing said epics and stories).
     - Explain: "I will now incorporate these updates. I will present each affected epic to you one at a time, explaining the changes made based on the feedback. Please review each one, and once you approve it, we'll move to the next."
     - **Iterative Epic Review & Update:**
       - For each epic that has received suggestions:
         - Apply the suggested changes to the epic and its associated stories within your internal representation of the document.
         - Present the complete, updated text of the epic (including its stories) to the user. Clearly highlight or explain the modifications made.
         - State: "Please review this updated epic. Do you approve these changes?"
         - Await user approval before moving to the next updated epic. If the user requests further modifications, address them and re-present for approval.
     - **Consolidated Output:** Once all specified epics have been reviewed and approved individually, state: "All suggested updates have been incorporated and approved. I will now provide the complete, updated master list of epics and stories as a single output."
       - Present the full content of the PRD section (or document) containing all epics and stories with all approved changes integrated.
       - Inform the user: "We will use this updated version of the epics and stories for the subsequent checklist review."
   - **If the user indicates 'No' updates are needed, or if there were no updates provided:**
     - State: "Understood. We will proceed with the checklist review using the current project documentation."
3. **Iterative Checklist Review (Section by Section)**

   - For _each major section_ of the `po-master-checklist.txt`:
     - Present the checklist items for that specific section to the user.
     - For each item, discuss its relevance to the project and assess whether the current project documentation satisfies the item's requirements.
     - For each item, discuss its relevance to the project and assess whether the current project documentation (including any updates made in Step 2) satisfies the item's requirements.
     - Document all findings: confirmations of compliance, identified deficiencies, areas needing clarification, or suggested improvements for the project documents. Note which document(s) each finding pertains to.
     - Seek user confirmation and agreement on the findings for the current section before proceeding to the next section of the checklist.

3. **Compile Findings & Identify Changes**
4. **Compile Findings & Identify Changes**

   - After iterating through all sections of the `po-master-checklist.txt` with the user:
     - Consolidate all documented findings from each section.
     - Clearly identify and list the specific changes, updates, or additions required for each affected project document.

4. **Generate Master Checklist Report**
5. **Generate Master Checklist Report**

   - Produce a comprehensive final report that includes:
     - A statement confirming which sections of the `po-master-checklist.txt` were reviewed.

@@ -83,7 +101,7 @@

     - Specific, actionable recommendations for changes to each affected document. This part of the report should clearly state _what_ needs to be changed, _where_ (in which document/section), and _why_ (based on the checklist).
   - This report serves as a "to-do list" for the user or other agents to improve project documentation.

5. **Conclude Phase & Advise Next Steps**
6. **Conclude Phase & Advise Next Steps**

   - Present the final Master Checklist Report to the user.
   - Discuss the findings and recommendations.
   - Advise on potential next steps, such as:
@@ -142,7 +160,7 @@

  - `docs/tech-stack.md`
  - `docs/testing-decisions.md`
- **From Front-End Architecture (`front-end-architecture.txt`) and/or Front-End Spec (`front-end-spec.md`):**
  - `docs/fe-project-structure.md` (Create if distinct from the main `project-structure.md`, e.g., for a separate front-end repository).
  - `docs/frontend-project-structure.md`
  - `docs/style-guide.md`
  - `docs/component-guide.md`
  - `docs/front-end-coding-standards.md` (Specifically for UI development, potentially tailored for a UI-Dev agent).

@@ -224,12 +242,13 @@

1. **Check Prerequisite State & Inputs**

   - Confirm that the overall plan has been validated (e.g., through the **Master Checklist Phase** or equivalent user approval).
   - Confirm that project documentation has been processed into granular files, if applicable (i.e., the **Librarian Phase** has been run, or documents are already suitable).
   - Inform the user: "For story creation, I will primarily work with the main PRD, Architecture, and Front-End Architecture documents you provide. If these documents contain links to more specific, granular files that are essential for detailing a story, I will identify them. If I don't have access to a critical linked document, I will request it from you."
   - Ensure access to:
     - The `docs/index.md` (critical for locating specific granular information).
     - The collection of granular documents within the `docs/` folder.
     - The latest approved PRD (for overall epic/story definitions and high-level context).
     - Any overarching architecture diagrams or key summary documents if they exist separately from granular files.
     - The main, potentially unsharded, Architecture document (e.g., `architecture.md`).
     - The main, potentially unsharded, Front-End Architecture document (e.g., `front-end-architecture.md` or `front-end-spec.md`).
     - `docs/operational-guidelines.md` (if available, for general coding standards, testing, error handling, security). If its content is within the main architecture document, that's also acceptable.
     - `docs/index.md` (if available, as a supplementary guide to locate other relevant documents, including epics, or specific sections within larger documents if indexed).
   - Review the current state of the project: understand which epics and stories are already completed or in progress (this may require input from a tracking system or user).

2. **Identify Next Stories for Generation**
@@ -238,23 +257,33 @@

   - Determine which stories are not yet complete and are ready for generation, respecting their sequence and dependencies.
   - If the user specified a range of epics/stories, limit generation to that range. Otherwise, prepare to generate all remaining sequential stories.

3. **Gather Technical & Historical Context per Story (from Granular Docs)**
3. **Gather Technical & Historical Context per Story**

   - For each story to be generated:
     - **Primarily consult the `docs/index.md`** to locate the relevant granular documentation file(s) containing the detailed specifications for that story's components or features.
     - Extract _only_ the specific, relevant information from these targeted granular files. Avoid injecting entire large documents or unrelated granular files.
     - Examples of context to extract by looking up in `docs/index.md` and then opening files like `docs/prd-user-authentication.md`, `docs/api-endpoints-auth.md`, `docs/architecture-auth-module.md`:
       - Specific functional requirements for a feature.
       - Detailed API endpoint specifications (request/response schemas from a file like `docs/api-endpoint-xyz.md`).
       - UI element descriptions or interaction flows (from a file like `docs/ux-login-flow.md`).
       - Data model definitions (from a file like `docs/data-model-user.md`).
       - Relevant coding standards or patterns applicable to the story's scope.
     - **Primary Source Analysis:**
       - Thoroughly review the PRD for the specific epic and story requirements.
       - Analyze the main Architecture and Front-End Architecture documents to find all sections relevant to the current story.
       - Extract necessary details, such as: architecture concepts, relevant epic details, style guide information, component guide information, environment variables, project structure details, tech stack decisions, data models, and API reference sections.
     - **Operational Guidelines Check:**
       - Consult `docs/operational-guidelines.md` if available and separate. If its contents (coding standards, testing strategy, error handling, security best practices) are integrated within the main Architecture document, extract them from there. These are critical for informing task breakdowns and technical notes.
     - **Link Following & Granular Document Handling:**
       - While parsing the primary documents, identify any internal hyperlinks that point to other, potentially more granular, documents or specific attachments.
       - If a linked document appears essential for elaborating the story's details (e.g., a specific data model definition, a detailed API spec snippet, a particular component's standards) and you do not have its content:
         - Clearly state to the user: "The [main document name] references [linked document name/description] for [purpose]. To fully detail this story, I need access to this specific information. Could you please provide it or confirm if it's already attached?"
         - Await the information or clarification before proceeding with aspects dependent on it.
       - If linked documents _are_ available, extract the specific, relevant information from them.
     - **`docs/index.md` as a Secondary Reference:**
       - If direct information or links within the primary documents are insufficient for a particular detail, consult `docs/index.md` (if available) to see if it catalogs a relevant granular file (e.g., `epic-X.md`, a specific `data-model-user.md`, or `front-end-style-guide.md`) that can provide the missing piece.
     - **UI Story Specifics:**
       - For UI-specific stories, actively seek out details related to front-end style guides, component guides, and front-end coding standards, whether they are sections in the main Front-End Architecture document, in `operational-guidelines.md`, or in separate linked/indexed granular files.
     - **Avoid Redundancy:** Extract _only_ the specific, relevant information needed for the story. Avoid wholesale injection of large document sections if a precise reference or a small snippet will suffice, especially for information the Developer Agent is expected to know (like general coding standards from `operational-guidelines.md` or the overall project structure).
     - Review any previously completed (related) stories for relevant implementation details, patterns, or lessons learned that might inform the current story.

4. **Populate Story Template for Each Story**

   - Load the content structure from the `story-tmpl.txt`.
   - For each story identified:
     - Fill in standard information: Title, Goal/User Story (e.g., "As a [user/system], I want [action], so that [benefit]"), clear Requirements, detailed Acceptance Criteria (ACs), and an initial breakdown of development Tasks.
     - Fill in standard information: Title, Goal/User Story, clear Requirements, detailed Acceptance Criteria (ACs), and an initial breakdown of development Tasks.
     - Set the initial Status to "Draft."
     - Inject the story-specific technical context (gathered in Step 3 from granular documents) into appropriate sections of the template (e.g., "Technical Notes," "Implementation Details," or within Tasks/ACs). Clearly cite the source granular file if helpful (e.g., "Refer to `docs/api-endpoint-xyz.md`").
     - Inject the story-specific technical context (gathered in Step 3) into appropriate sections of the template (e.g., "Technical Notes," "Implementation Details," or within Tasks/ACs). Clearly cite the source document and section, or linked file, if helpful (e.g., "Refer to `architecture.md#Data-Validation-Strategy`" or "Details from `linked-component-spec.md`").
     - **Note on Context Duplication:** When injecting context, avoid full duplication of general project structure documents or the main 'Coding Standards' section of `operational-guidelines.md` (or its equivalent location in the main architecture document). The Developer Agent is expected to have these documents loaded. Focus on story-specific applications, interpretations, or excerpts directly relevant to the tasks at hand.
@@ -2,6 +2,8 @@

**BETA-V3 is the current focus of development and represents the latest iteration of the BMAD Method.** Find all V3 resources in the `BETA-V3/` directory.

#### A demo of a full beta run, all of its output artifacts, and an explanation of what each file represents is available [here](BETA-V3/v3-demos/full-stack-app-demo/readme.md)

If you want to jump right in, here are the [Setup Instructions for V3](./BETA-V3/instruction.md) for IDE, WEB, and Task setup.

## BETA-V3: Advancing AI-Driven Development