| title | query | date | time | timestamp | exchanges |
|---|---|---|---|---|---|
| Research Session | test the fix for duplicate saves - final test | 6/14/2025 | 6:10:00 PM | 2025-06-14T22:10:00.394Z | 1 |
# Research Session

## Initial Query

**Question:** test the fix for duplicate saves - final test

**Response:**
## Steps to Test the Fix for Duplicate Saves
1. **Prepare a Clean Test Environment**
   - Ensure the `tasks.json` file and any related data stores are in a known, clean state with no pre-existing duplicate entries.
   - Back up the current `tasks.json` in case a rollback is needed.
2. **Define Test Scenarios**
   - Attempt to save a new task with unique data.
   - Attempt to save a task with the same ID or unique identifier as an existing task (a simulated duplicate).
   - Attempt to save a task with the same title or content but a different ID (to determine whether the fix keys on ID or content).
   - Trigger multiple save operations simultaneously (if the system supports concurrency) to test for race conditions.
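The race-condition scenario can be exercised with a small harness like the following (a hypothetical sketch: `save_fn` stands in for whatever save entry point the application exposes, and it is assumed to raise an exception when a duplicate is rejected):

```python
import json
import threading

def concurrent_save_test(save_fn, tasks_path, task, n_threads=4):
    """Fire n_threads simultaneous saves of the same task.

    Returns (copies_saved, errors_raised). With correct duplicate
    handling, exactly one save should win and the rest should fail.
    """
    errors = []

    def worker():
        try:
            save_fn(tasks_path, task)
        except Exception as exc:  # a duplicate rejection surfaces here
            errors.append(exc)

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Count how many copies of the task actually landed in tasks.json.
    with open(tasks_path) as f:
        data = json.load(f)
    copies = sum(1 for t in data.get("tasks", []) if t["id"] == task["id"])
    return copies, len(errors)
```

A fix that handles concurrency correctly should leave exactly one copy on disk regardless of how many threads raced.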
3. **Execute Manual and Automated Tests**
   - Use the application's UI or API to create and save tasks per the scenarios above.
   - For each save operation, check the `tasks.json` file or the data store to verify:
     - No duplicate entries are created.
     - Existing tasks are not overwritten unless intended (e.g., for updates).
     - The system returns appropriate error messages or warnings when a duplicate save is attempted.
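The uniqueness check above can be automated with a helper along these lines (a minimal sketch; the top-level `"tasks"` array with an `"id"` field per task is an assumption about the file layout):

```python
import json
from collections import Counter

def find_duplicate_ids(tasks_path):
    """Return the task IDs that appear more than once in tasks.json."""
    with open(tasks_path) as f:
        data = json.load(f)
    # The top-level "tasks" array is an assumption about the file layout.
    ids = [task["id"] for task in data.get("tasks", [])]
    return sorted(task_id for task_id, n in Counter(ids).items() if n > 1)
```

Running this after each scenario gives a quick pass/fail signal: an empty list means no duplicates were written.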
4. **Validate System Behavior**
   - Confirm that after all test operations, each task in `tasks.json` is unique according to the defined criteria (ID, title, or other unique fields).
   - If the system is designed to merge duplicates, verify that merging occurs as expected.
   - If the system is designed to reject duplicates, ensure the rejection is clear and no duplicate is present.
5. **Test Edge Cases**
   - Attempt to save tasks with minor variations (e.g., whitespace differences, case sensitivity) to ensure the duplicate detection logic is robust.
   - Test with large numbers of tasks to verify performance and correctness at scale.
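For the whitespace and case-sensitivity variations, comparing normalized titles avoids both false negatives and false positives. A sketch (the normalization rules here are assumptions for illustration, not Task Master's actual logic):

```python
def normalize_title(title):
    """Collapse internal whitespace, trim, and lowercase for comparison."""
    return " ".join(title.split()).lower()

def is_content_duplicate(new_title, existing_titles):
    """True if new_title matches any existing title after normalization."""
    target = normalize_title(new_title)
    return any(normalize_title(t) == target for t in existing_titles)
```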
6. **Review Logs and Error Handling**
   - Check application logs for any errors or warnings related to duplicate saves.
   - Ensure that error handling is user-friendly and provides actionable feedback.
7. **Regression Testing**
   - Run the full suite of task operations (create, update, delete) to ensure the fix for duplicate saves does not introduce new issues elsewhere in the system.
## Additional Best Practices
- Communicate test results with the team to confirm the fix is effective and to prevent future regressions[1][2][5].
- Consider implementing automated duplicate detection as part of the save operation to prevent future issues[2][3].
- Document the test cases and results for future reference and audits.
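Building duplicate detection into the save path itself, as suggested above, could look roughly like this (a hypothetical sketch, not the actual Task Master implementation; the `tasks.json` layout and the `DuplicateTaskError` name are assumptions):

```python
import json
import os

class DuplicateTaskError(Exception):
    """Raised when a save would create a task with an existing ID."""

def save_task(tasks_path, new_task):
    """Append new_task to tasks.json, rejecting duplicate IDs."""
    with open(tasks_path) as f:
        data = json.load(f)
    if any(t["id"] == new_task["id"] for t in data.get("tasks", [])):
        raise DuplicateTaskError(f"task {new_task['id']!r} already exists")
    data.setdefault("tasks", []).append(new_task)
    # Write to a temp file, then atomically replace, so a crash mid-write
    # cannot leave tasks.json truncated.
    tmp_path = tasks_path + ".tmp"
    with open(tmp_path, "w") as f:
        json.dump(data, f, indent=2)
    os.replace(tmp_path, tasks_path)
```

Rejecting at save time turns duplicate detection from a periodic cleanup task into an invariant the data store enforces.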
## Example Test Case Table
| Test Scenario | Expected Result | Actual Result | Pass/Fail |
|---|---|---|---|
| Save unique task | Task saved | | |
| Save duplicate task (same ID) | Duplicate rejected/merged | | |
| Save duplicate task (same title) | Duplicate rejected/merged | | |
| Simultaneous saves (race condition) | Only one unique task present | | |
| Save with minor variations | No false positives/negatives | | |
Fill in the "Actual Result" and "Pass/Fail" columns during testing.
## Action Items
- Complete all test scenarios above.
- Document any issues found and retest after fixes.
- Confirm with stakeholders before closing the issue.
---

*Generated by Task Master Research Command*
*Timestamp: 2025-06-14T22:10:00.394Z*