Adjusts rule to use kebab-case for long-form option flags.

Eyal Toledano
2025-03-25 00:22:43 -04:00
parent 33bcb0114a
commit d4f767c9b5
2 changed files with 12 additions and 12 deletions


@@ -35,7 +35,7 @@ alwaysApply: false
- ✅ DO: Use descriptive, action-oriented names
- **Option Names**:
-- ✅ DO: Use camelCase for long-form option names (`--outputFormat`)
+- ✅ DO: Use kebab-case for long-form option names (`--output-format`)
- ✅ DO: Provide single-letter shortcuts when appropriate (`-f, --file`)
- ✅ DO: Use consistent option names across similar commands
- ❌ DON'T: Use different names for the same concept (`--file` in one command, `--path` in another)

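For illustration, a minimal sketch of the updated convention, assuming a Commander.js-based CLI; the `generate-tests` command, defaults, and paths below are hypothetical and not taken from this commit:

```typescript
import { Command } from "commander";

const program = new Command();

program
  .command("generate-tests")
  .description("Generate Jest tests for a task")
  // Kebab-case long-form option; Commander exposes it as opts.outputFormat
  .option("--output-format <format>", "format of the generated test file", "ts")
  // Single-letter shortcut paired with the long-form name
  .option("-f, --file <path>", "path to the tasks file", "tasks/tasks.json")
  .action((opts) => {
    console.log(`Generating tests from ${opts.file} as ${opts.outputFormat}`);
  });

program.parse(process.argv);
```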

@@ -59,30 +59,30 @@ Testing approach:
- Test error handling for non-existent task IDs
- Test basic command flow with a mock task store
-## 2. Implement AI prompt construction [pending]
+## 2. Implement AI prompt construction and FastMCP integration [pending]
### Dependencies: 24.1
-### Description: Develop the logic to analyze tasks, construct appropriate AI prompts, and interact with the AI service using the existing ai-service.js to generate test content.
+### Description: Develop the logic to analyze tasks, construct appropriate AI prompts, and interact with the AI service using FastMCP to generate test content.
### Details:
Implementation steps:
1. Create a utility function to analyze task descriptions and subtasks for test requirements
2. Implement a prompt builder that formats task information into an effective AI prompt
-3. Use ai-service.js as needed to send the prompt and receive the response (streaming)
-4. Process the response to extract the generated test code
-5. Implement error handling for failures, rate limits, and malformed responses
-6. Add appropriate logging for the test generation process
+3. Use FastMCP to send the prompt and receive the response
+4. Process the FastMCP response to extract the generated test code
+5. Implement error handling for FastMCP failures, rate limits, and malformed responses
+6. Add appropriate logging for the FastMCP interaction process
Testing approach:
- Test prompt construction with various task types
-- Test ai services integration with mocked responses
-- Test error handling for ai service failures
-- Test response processing with sample ai-services.js outputs
+- Test FastMCP integration with mocked responses
+- Test error handling for FastMCP failures
+- Test response processing with sample FastMCP outputs
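A rough sketch of steps 2 and 4 above (prompt construction and response processing); the `Task` shape and both helpers are illustrative assumptions, and the actual FastMCP call is omitted:

```typescript
// Hypothetical task shape; the project's real task store may differ.
interface Task {
  id: number;
  title: string;
  description: string;
  details?: string;
  subtasks?: { id: number; title: string; description: string }[];
}

// Step 2: format task information into a test-generation prompt.
function buildTestPrompt(task: Task): string {
  const subtaskLines = (task.subtasks ?? [])
    .map((st) => `- [${task.id}.${st.id}] ${st.title}: ${st.description}`)
    .join("\n");

  return [
    "Generate a Jest test suite for the following task.",
    `Task ${task.id}: ${task.title}`,
    `Description: ${task.description}`,
    task.details ? `Details: ${task.details}` : "",
    subtaskLines ? `Subtasks:\n${subtaskLines}` : "",
    "Return only a complete test file, wrapped in a single fenced code block.",
  ]
    .filter(Boolean)
    .join("\n\n");
}

// Step 4: pull the generated code out of the model's reply,
// falling back to the raw text when no fenced block is present.
function extractTestCode(response: string): string {
  const fence = "`".repeat(3);
  const start = response.indexOf(fence);
  if (start === -1) return response.trim();
  const bodyStart = response.indexOf("\n", start) + 1;
  const end = response.indexOf(fence, bodyStart);
  return (end === -1 ? response.slice(bodyStart) : response.slice(bodyStart, end)).trim();
}
```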
## 3. Implement test file generation and output [pending]
### Dependencies: 24.2
### Description: Create functionality to format AI-generated tests into proper Jest test files and save them to the appropriate location.
### Details:
Implementation steps:
-1. Create a utility to format the ai-services.js response into a well-structured Jest test file
+1. Create a utility to format the FastMCP response into a well-structured Jest test file
2. Implement naming logic for test files (task_XXX.test.ts for parent tasks, task_XXX_YYY.test.ts for subtasks)
3. Add logic to determine the appropriate file path for saving the test
4. Implement file system operations to write the test file
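A sketch of the naming and path logic from steps 2-3; the three-digit zero-padding and the `tests/generated` output directory are assumptions for illustration:

```typescript
import path from "node:path";

// Hypothetical helpers; padding width and default directory are assumed.
const pad = (n: number) => String(n).padStart(3, "0");

function testFileName(taskId: number, subtaskId?: number): string {
  // task_XXX.test.ts for parent tasks, task_XXX_YYY.test.ts for subtasks
  return subtaskId === undefined
    ? `task_${pad(taskId)}.test.ts`
    : `task_${pad(taskId)}_${pad(subtaskId)}.test.ts`;
}

function testFilePath(taskId: number, subtaskId?: number, outputDir = "tests/generated"): string {
  return path.join(outputDir, testFileName(taskId, subtaskId));
}

// testFilePath(24, 2) -> "tests/generated/task_024_002.test.ts"
```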
@@ -93,7 +93,7 @@ Implementation steps:
Testing approach:
- Test file naming logic for various task/subtask combinations
-- Test file content formatting with sample ai-services.js outputs
+- Test file content formatting with sample FastMCP outputs
- Test file system operations with mocked fs module
- Test the complete flow from command input to file output
- Verify generated tests can be executed by Jest
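A minimal Jest sketch of the mocked-fs testing approach listed above; `writeTestFile` and its import path are hypothetical stand-ins for whatever the implementation exposes:

```typescript
import * as fs from "fs/promises";
// Hypothetical module under test; the real export and path will differ.
import { writeTestFile } from "../scripts/modules/generate-tests";

// Replace fs/promises with auto-mocks so no files are written during tests.
jest.mock("fs/promises");

describe("writeTestFile", () => {
  it("writes the generated test content to the resolved path", async () => {
    const mockedWrite = fs.writeFile as unknown as jest.Mock;
    mockedWrite.mockResolvedValue(undefined);

    await writeTestFile(
      "tests/generated/task_024.test.ts",
      "describe('task 24', () => {});"
    );

    expect(mockedWrite).toHaveBeenCalledTimes(1);
    expect(mockedWrite.mock.calls[0][0]).toBe("tests/generated/task_024.test.ts");
  });
});
```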