# {Project Name} Testing Strategy

## Overall Philosophy & Goals

{Describe the high-level approach. e.g., "Follow the Testing Pyramid/Trophy principle.", "Automate extensively.", "Focus on testing business logic and key integrations.", "Ensure tests run efficiently in CI/CD."}

- Goal 1: {e.g., Achieve X% code coverage for critical modules.}
- Goal 2: {e.g., Prevent regressions in core functionality.}
- Goal 3: {e.g., Enable confident refactoring.}

## Testing Levels

### Unit Tests

- **Scope:** Test individual functions, methods, or components in isolation. Focus on business logic, calculations, and conditional paths within a single module.
- **Tools:** {e.g., Jest, Pytest, Go testing package, JUnit, NUnit}
- **Mocking/Stubbing:** {How are dependencies mocked? e.g., Jest mocks, Mockito, Go interfaces}
- **Location:** {e.g., `test/unit/`, alongside source files (`*.test.ts`)}
- **Expectations:** {e.g., Should cover all significant logic paths. Fast execution.}

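As a minimal sketch of the scope and mocking points above (assuming pytest-style Python tests; `apply_discount` and the rate provider are hypothetical names, not part of this project):

```python
# Hypothetical example: a unit test for a discount calculation, with the
# rate-lookup dependency stubbed so only local logic is exercised.
from unittest.mock import Mock


def apply_discount(price: float, rate_provider) -> float:
    """Apply the discount rate returned by rate_provider to price."""
    rate = rate_provider.get_rate()
    return round(price * (1 - rate), 2)


def test_apply_discount_uses_provider_rate():
    provider = Mock()
    provider.get_rate.return_value = 0.10  # stub the external dependency
    assert apply_discount(100.0, provider) == 90.0
    provider.get_rate.assert_called_once()
```

Because the dependency is stubbed, the test stays deterministic and fast, which is what makes large unit suites viable in CI.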
### Integration Tests

- **Scope:** Verify the interaction and collaboration between multiple internal components or modules. Test the flow of data and control within a specific feature or workflow slice. May involve mocking external APIs or databases, or using test containers.
- **Tools:** {e.g., Jest, Pytest, Go testing package, Testcontainers, Supertest (for APIs)}
- **Location:** {e.g., `test/integration/`}
- **Expectations:** {e.g., Focus on module boundaries and contracts. Slower than unit tests.}

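A minimal sketch of a test at this level (Python assumed; `UserRepository` and `RegistrationService` are hypothetical names), using an in-memory SQLite database rather than a mock so the real module boundary is exercised:

```python
# Hypothetical example: an integration test covering a service together
# with its repository, backed by an in-memory SQLite database.
import sqlite3


class UserRepository:
    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        self.conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT UNIQUE)")

    def add(self, name: str) -> None:
        self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

    def count(self) -> int:
        return self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]


class RegistrationService:
    def __init__(self, repo: UserRepository):
        self.repo = repo

    def register(self, name: str) -> None:
        if not name:
            raise ValueError("name required")
        self.repo.add(name)


def test_registration_persists_user():
    repo = UserRepository(sqlite3.connect(":memory:"))
    service = RegistrationService(repo)
    service.register("ada")
    assert repo.count() == 1
```

The in-memory database gives each test a fresh, isolated store while still verifying the actual SQL contract between the two modules.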
### End-to-End (E2E) / Acceptance Tests

- **Scope:** Test the entire system flow from an end-user perspective. Interact with the application through its external interfaces (UI or API). Validate complete user journeys or business processes against real or near-real dependencies.
- **Tools:** {e.g., Playwright, Cypress, Selenium (for UI); Postman/Newman, K6 (for API)}
- **Environment:** {Run against deployed environments (e.g., Staging) or a locally composed setup (Docker Compose).}
- **Location:** {e.g., `test/e2e/`}
- **Expectations:** {Cover critical user paths. Slower, potentially flaky, run less frequently (e.g., pre-release, nightly).}

### Manual / Exploratory Testing (Optional)

- **Scope:** {Where is manual testing still required? e.g., Exploratory testing for usability, testing complex edge cases.}
- **Process:** {How is it performed and tracked?}

## Specialized Testing Types (Add sections as needed)

### Performance Testing

- **Scope & Goals:** {What needs performance testing? What are the targets (latency, throughput)?}
- **Tools:** {e.g., K6, JMeter, Locust}

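Where a full load-testing tool is not yet in place, a coarse latency guard can live in the regular test suite. A sketch (Python assumed; the function under test, the run count, and the 50 ms budget are all illustrative placeholders to be replaced with real targets):

```python
# Hypothetical example: fail the suite when the 95th-percentile latency
# of an operation exceeds a stated budget.
import statistics
import time


def measure_p95_ms(fn, runs: int = 200) -> float:
    """Time `runs` calls of fn and return the 95th-percentile latency in ms."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.quantiles(samples, n=100)[94]  # 95th percentile


def test_lookup_meets_latency_budget():
    table = {i: i * i for i in range(1000)}
    p95 = measure_p95_ms(lambda: table.get(500))
    assert p95 < 50.0  # generous placeholder budget; tune to the real target
```

Such in-suite checks catch gross regressions cheaply; dedicated tools like K6 or Locust remain the right choice for realistic load profiles.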
### Security Testing

- **Scope & Goals:** {e.g., Dependency scanning, SAST, DAST, penetration testing requirements.}
- **Tools:** {e.g., Snyk, OWASP ZAP, Dependabot}

### Accessibility Testing (UI)

- **Scope & Goals:** {Target WCAG level, key areas.}
- **Tools:** {e.g., Axe, Lighthouse, manual checks}

### Visual Regression Testing (UI)

- **Scope & Goals:** {Prevent unintended visual changes.}
- **Tools:** {e.g., Percy, Applitools Eyes, Playwright visual comparisons}

## Test Data Management

{How is test data generated, managed, and reset for different testing levels?}

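One common pattern worth considering here is a data factory with sensible defaults, so each test states only the fields it cares about. A sketch (Python assumed; `make_user` and its fields are hypothetical):

```python
# Hypothetical example: a factory function builds fresh, valid records
# per test, avoiding shared fixtures and cross-test coupling.
import itertools

_seq = itertools.count(1)  # monotonically increasing ids across a test run


def make_user(**overrides) -> dict:
    """Build a valid user record; override only what the test cares about."""
    n = next(_seq)
    user = {"id": n, "email": f"user{n}@example.test", "active": True}
    user.update(overrides)
    return user


def test_inactive_user_is_distinct():
    a = make_user()
    b = make_user(active=False)
    assert a["id"] != b["id"] and b["active"] is False
```

Because every call yields a new record, tests stay independent of execution order and no reset step is needed between unit tests; database-backed levels still need an explicit reset strategy.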
## CI/CD Integration

{How and when are tests executed in the CI/CD pipeline? What constitutes a pipeline failure?}

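A typical split runs fast suites on every push and the slower E2E suite less often. A sketch of one possible pipeline (GitHub Actions assumed purely for illustration; the `make` targets are hypothetical placeholders for the project's real test commands):

```yaml
# Hypothetical CI sketch: any failing step fails the pipeline.
name: test
on: [push, pull_request]
jobs:
  unit-and-integration:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test-unit          # fast; runs on every push
      - run: make test-integration   # module boundaries; every push
  e2e:
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test-e2e           # critical user paths; main branch only
```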
## Change Log

| Change        | Date       | Version | Description   | Author         |
| ------------- | ---------- | ------- | ------------- | -------------- |
| Initial draft | YYYY-MM-DD | 0.1     | Initial draft | {Agent/Person} |
| ...           | ...        | ...     | ...           | ...            |