Compare commits

220 Commits

Author SHA1 Message Date
github-actions[bot]
9acb900153 Version Packages (#1303)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-10-14 11:47:13 +02:00
Ralph Khreish
c4f5d89e72 Merge pull request #1297 from eyaltoledano/next 2025-10-14 10:08:08 +02:00
Ralph Khreish
e308cf4f46 chore: exit pre mode 2025-10-13 22:46:24 +02:00
Ralph Khreish
11b7354010 fix: export url (#1288) 2025-10-13 21:51:19 +02:00
Ralph Khreish
4c1ef2ca94 fix: auth refresh token (#1299)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-10-13 21:50:22 +02:00
Ralph Khreish
663aa2dfe9 chore: fix CI 2025-10-12 17:10:24 +02:00
Ralph Khreish
8f60a0561e chore: add hiring banner 2025-10-12 16:52:58 +02:00
Ralph Khreish
9a22622e9c chore: cleanup changelog and pre exit 2025-10-11 21:21:04 +02:00
github-actions[bot]
8d3c7e4116 chore: rc version bump 2025-10-11 19:09:36 +00:00
Ralph Khreish
3010b90d98 feat: add claude code plugin support (#1293)
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-11 21:01:10 +02:00
Ralph Khreish
90e6bdcf1c fix: expand_all now uses complexity analysis recommendations (#1287)
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-11 11:54:31 +02:00
Ralph Khreish
25a00dca67 Merge pull request #1291 from eyaltoledano/raplh/chore/merge.main 2025-10-11 11:40:42 +02:00
Ralph Khreish
f263d4b2e0 Merge remote-tracking branch 'origin/main' into raplh/chore/merge.main 2025-10-11 11:32:47 +02:00
Ralph Khreish
f12a16d096 feat: add changelog highlights to auto-update notifications (#1286)
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-10 18:49:59 +02:00
Ralph Khreish
aaf903ff2f fix: resolve cross-level dependency bug in add-dependency command (#1191)
Co-authored-by: Ralph Khreish <Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Fixes #542
2025-10-08 15:30:20 +02:00
Ralph Khreish
2a910a40ba feat: add rpg method prd example template (#1285) 2025-10-08 15:00:52 +02:00
github-actions[bot]
0df6595245 Version Packages (#1283)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-10-07 20:43:13 +02:00
Ralph Khreish
33e3fbb20f Release 0.28.0 2025-10-07 20:08:47 +02:00
Ralph Khreish
5cb7ed557a chore: exit pre 2025-10-07 19:34:56 +02:00
github-actions[bot]
b9e644c556 chore: rc version bump 2025-10-06 14:06:45 +00:00
Ralph Khreish
7265a6cf53 feat: implement export tasks (#1260) 2025-10-06 16:03:56 +02:00
Ralph Khreish
db6f405f23 feat: add api-storage improvements (#1278) 2025-10-06 15:23:48 +02:00
Ralph Khreish
7b5a7c4495 fix: remove deprecated generateTaskFiles calls from MCP tools (#1277)
Co-authored-by: Ralph Khreish <Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Resolves issue #1271 - MCP Connection Closed Error After Upgrading to v0.27.3
2025-10-06 11:55:26 +02:00
Ralph Khreish
caee040907 fix(mcp-server): construct default tasks.json path when file parameter not provided (#1276)
Co-authored-by: Ralph Khreish <Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Fixes #1272
2025-10-06 11:50:45 +02:00
github-actions[bot]
4b5473860b docs: Auto-update and format models.md 2025-10-05 20:04:58 +00:00
Ben Vargas
b43b7ce201 feat: Add Codex CLI provider with OAuth authentication (#1273)
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-10-05 22:04:45 +02:00
github-actions[bot]
86027f1ee4 chore: rc version bump 2025-10-04 17:26:07 +00:00
Ralph Khreish
4f984f8a69 chore: fix build issues (#1274) 2025-10-04 19:24:31 +02:00
github-actions[bot]
f7646f41b5 chore: rc version bump 2025-10-04 16:56:52 +00:00
Ralph Khreish
20004a39ea fix: add complexity score to tm list and tm show (#1270) 2025-10-03 18:47:05 +02:00
Ralph Khreish
f1393f47b1 fix: pricing show 0 when it is defined (#1266) 2025-10-03 16:21:32 +02:00
Ralph Khreish
738ec51c04 feat: Migrate Task Master to generateObject for structured AI responses (#1262)
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: Ben Vargas <ben@example.com>
2025-10-02 16:23:34 +02:00
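
The commit above migrates structured AI responses to `generateObject`. A minimal sketch of that pattern with the Vercel AI SDK follows; the schema, model choice, and function name are illustrative assumptions, not Task Master's actual definitions.

```ts
// Minimal sketch of structured generation via the Vercel AI SDK's generateObject.
// Schema and prompt below are illustrative, not the project's real shapes.
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const taskSchema = z.object({
	title: z.string(),
	description: z.string(),
	subtasks: z.array(z.object({ title: z.string() }))
});

async function generateTaskDraft(prompt: string) {
	// generateObject validates the model output against the schema and
	// returns a typed object instead of free-form text to parse by hand.
	const { object } = await generateObject({
		model: openai('gpt-4o-mini'), // hypothetical model choice
		schema: taskSchema,
		prompt
	});
	return object;
}
```
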
Ralph Khreish
c7418c4594 fix: make tag listing table use dynamic column widths to prevent truncation (#1264)
Co-authored-by: Ralph Khreish <Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
2025-10-02 15:39:31 +02:00
Ralph Khreish
0747f1c772 Merge pull request #1265 from eyaltoledano/ralph/chore/update.from.main 2025-10-02 15:31:42 +02:00
Ralph Khreish
ffe24a2e35 Merge remote-tracking branch 'origin/main' into ralph/chore/update.from.main 2025-10-02 15:11:24 +02:00
Ralph Khreish
604b94baa9 chore: replace dotenv-mono with dotenv and try to fix env variables (#1261) 2025-10-02 11:52:25 +02:00
Ralph Khreish
2ea4bb6a81 chore: fix CI 2025-09-30 10:41:43 +02:00
Ralph Khreish
3e96387715 chore: fix extension CI 2025-09-30 10:41:43 +02:00
Ralph Khreish
100c3dc47d chore: apply requested changes 2025-09-30 10:41:43 +02:00
Ralph Khreish
986ac117ae feat: update grok-cli ai sdk provider to v5 (#1252) 2025-09-30 10:41:43 +02:00
tommy-ca
18aa416035 feat: Claude Code AI SDK v5 Integration (#1114)
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-09-30 10:41:43 +02:00
github-actions[bot]
3b3dbabed1 Version Packages (#1255)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-09-27 08:56:38 +02:00
Ralph Khreish
af53525cbc fix: handle subtasks in getTask method (#1254)
Co-authored-by: Ralph Khreish <Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
2025-09-26 20:58:15 +02:00
Joe Danziger
0079b7defd feat: Add Cursor IDE custom slash commands support (#1215) 2025-09-26 19:21:16 +02:00
Ralph Khreish
0b2c6967c4 fix: improve subtask & parent task management (#1251) 2025-09-26 11:04:38 +02:00
Ralph Khreish
c0682ac795 Merge pull request #1250 from eyaltoledano/chore/merge.main.september 2025-09-26 01:10:57 +02:00
Ralph Khreish
01a7faea8f Merge remote-tracking branch 'origin/main' into chore/merge.main.september 2025-09-26 01:10:12 +02:00
github-actions[bot]
b7f32eac5a Version Packages (#1249)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-09-26 01:06:52 +02:00
Ralph Khreish
044a7bfc98 fix: implement subtask status update functionality (#1248)
Co-authored-by: Ralph Khreish <Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
2025-09-26 01:01:55 +02:00
Ralph Khreish
814265cd33 chore: adjust CI to run on all PRs (#1244) 2025-09-24 20:19:09 +02:00
Ralph Khreish
9b7b2ca7b2 Merge pull request #1245 from eyaltoledano/ralph/chore/update.from.main 2025-09-24 20:14:00 +02:00
Ralph Khreish
949f091179 Merge remote-tracking branch 'origin/main' into ralph/chore/update.from.main 2025-09-24 20:10:19 +02:00
github-actions[bot]
51a351760c Version Packages (#1243)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-09-24 19:21:21 +02:00
Ralph Khreish
732b2c61ad Merge pull request #1233 from eyaltoledano/ralph/chore/fix.ci.failure.main 2025-09-24 17:10:42 +02:00
Ralph Khreish
32c2b03c23 Merge pull request #1242 from eyaltoledano/ralph/fix.main.merges
fix CI failing to release (#1232)
2025-09-24 14:28:33 +02:00
Ralph Khreish
3bfd999d81 Merge remote-tracking branch 'origin/main' into ralph/fix.main.merges 2025-09-24 14:28:09 +02:00
Ralph Khreish
9fa79eb026 chore: fix CI failing to release (#1232) 2025-09-24 14:26:41 +02:00
Jungwoo Song
875134247a Add Q Developer CLI at README.md (#1159) 2025-09-24 11:06:16 +02:00
Ralph Khreish
4c2801d5eb chore: run format and fix CI 2025-09-24 11:00:04 +02:00
Ralph Khreish
c911608f60 chore: last round of touchups and bug fixes 2025-09-24 10:57:17 +02:00
github-actions[bot]
8f1497407f chore: rc version bump 2025-09-23 20:43:32 +00:00
Ralph Khreish
10b64ec6f5 chore: re-enter rc mode for a last pre-release 2025-09-23 22:38:55 +02:00
Ralph Khreish
1a1879483b chore: do final test 2025-09-23 21:47:37 +02:00
Ralph Khreish
d691cbb7ae chore: CI fix format 2025-09-23 20:27:41 +02:00
Ralph Khreish
1b7c9637a5 chore: fix CI and tsdown config 2025-09-23 20:24:11 +02:00
Ralph Khreish
9ff5f158d5 chore: fix format 2025-09-23 19:27:57 +02:00
Ralph Khreish
b2ff06e8c5 fix: CI and unit tests 2025-09-23 19:26:02 +02:00
Ralph Khreish
c2fc61ddb3 chore: mintlify fix broken links (#1237) 2025-09-23 18:32:51 +02:00
Ralph Khreish
aaacc3dae3 fix: improve docs and command help for analzye-complexity (#1235) 2025-09-23 18:19:32 +02:00
Ralph Khreish
46cd5dc186 fix: add installation instructions for claude-code mcp (#1236) 2025-09-23 18:16:40 +02:00
github-actions[bot]
49a31be416 docs: Auto-update and format models.md 2025-09-23 15:45:03 +00:00
JeonSeongHyeon
2b69936ee7 fix: update model ID for sonar deep research (#1192)
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-09-23 17:44:40 +02:00
Ralph Khreish
6438f6c7c8 chore: exit pre-release mode and format 2025-09-23 11:46:20 +02:00
Ralph Khreish
6bbd777552 chore: fix --version weird error 2025-09-23 11:45:46 +02:00
github-actions[bot]
100482722f chore: rc version bump 2025-09-23 09:10:24 +00:00
Ralph Khreish
7ff882bf23 fix: improve commander instance in commands.js 2025-09-23 11:05:24 +02:00
Ralph Khreish
6ab768f6ec chore: fix CI failure 2025-09-22 22:41:21 +02:00
Julien Pelletier
b5fe723f8e Fix/claude code path executable setting (#1172)
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-09-22 22:39:37 +02:00
Ralph Khreish
f487736670 chore: fix CI failing to release (#1232) 2025-09-22 22:34:12 +02:00
olssonsten
d67b81d25d feat: add MCP timeout configuration for long-running operations (#1112) 2025-09-22 19:55:10 +02:00
Ralph Khreish
66c05053c0 Merge pull request #1231 from eyaltoledano/ralph/merge.from.main 2025-09-22 19:54:12 +02:00
Ralph Khreish
d7ab4609aa chore: fix CI 2025-09-22 19:25:44 +02:00
github-actions[bot]
05f6242f7e Version Packages (#1228)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-22 19:23:22 +02:00
Ralph Khreish
a58719cf50 Merge pull request #1230 from eyaltoledano/next 2025-09-22 19:20:40 +02:00
Ralph Khreish
674d1f6de7 fix: update MCP config paths in various files (#1229) 2025-09-22 19:15:17 +02:00
Ralph Khreish
f106fb8e0b Merge pull request Release 0.27.0 #1224 from eyaltoledano/next 2025-09-22 18:40:55 +02:00
Ralph Khreish
fd9dd43ee0 fix: improve weekly metrics workflow with 14-day window and debug output (#1226) 2025-09-22 15:47:03 +02:00
Ralph Khreish
c395e93696 chore: remove pre-mode (get out of RC) 2025-09-20 01:11:50 +02:00
Ralph Khreish
a621ff05ea feat: update tm models defaults (#1225) 2025-09-20 01:07:33 +02:00
github-actions[bot]
47ddb60231 docs: Auto-update and format models.md 2025-09-19 22:08:36 +00:00
Eyal Toledano
fce841490a Tm start (#1200)
Co-authored-by: Max Tuzzolino <maxtuzz@Maxs-MacBook-Pro.local>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Max Tuzzolino <max.tuzsmith@gmail.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-09-20 00:08:20 +02:00
Ralph Khreish
4e126430a0 chore: update docs to remove --package refs (#1220) 2025-09-18 23:39:50 +02:00
github-actions[bot]
a33abe6c21 chore: rc version bump 2025-09-18 21:17:55 +00:00
Eyal Toledano
2b0cbdbc84 feat: extends the tm context command to accept a brief ID directly or… (#1219)
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-09-18 23:12:08 +02:00
github-actions[bot]
f1cdf78aa6 chore: rc version bump 2025-09-18 16:37:06 +00:00
Ralph Khreish
e6de285cea feat: add auto-update to every command when your task-master instance is out of date (#1217) 2025-09-18 18:35:32 +02:00
github-actions[bot]
cf3339fa48 chore: rc version bump 2025-09-18 15:18:17 +00:00
Ralph Khreish
255b9f0334 chore: test pre-release functionality with new system 2025-09-18 17:16:26 +02:00
github-actions[bot]
cb2c266b2d chore: rc version bump 2025-09-18 12:56:01 +00:00
Ralph Khreish
170d6f2f65 feat: implement api update-task (#1214) 2025-09-18 01:48:01 +02:00
Ralph Khreish
137ef36278 chore: fix pre-release CI (#1213)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-09-18 00:34:13 +02:00
Ralph Khreish
1a3a528bf7 Merge pull request #1212 from eyaltoledano/ralph/main/rebase 2025-09-17 22:05:13 +02:00
Ralph Khreish
c164adc6ff chore: move to tsdown (#1211)
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-09-17 22:00:37 +02:00
Ralph Khreish
9d61e0447d fix: UI list and show (#1210) 2025-09-17 22:00:37 +02:00
Ralph Khreish
ee11b735b3 chore: fix env variables (#1204)
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-09-17 22:00:37 +02:00
losolosol
6d978228d9 feat: added vscode start task button (#1201)
Co-authored-by: Carlos Montoya <carlos@Carloss-MacBook-Pro.local>
Co-authored-by: Carlos Montoya <los@losmontoya.com>
2025-09-17 22:00:36 +02:00
Ralph Khreish
ea9341e7af chore: fix CI p2 2025-09-17 22:00:36 +02:00
Ralph Khreish
4296e383ea chore: fix CI 2025-09-17 22:00:36 +02:00
Ralph Khreish
97b2781709 feat: add tm show (#1199) 2025-09-17 22:00:36 +02:00
Ralph Khreish
96553e4a5f chore: fix CI with new typescript setup (#1194)
Co-authored-by: Ralph Khreish <Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
2025-09-17 22:00:36 +02:00
Ralph Khreish
7582219365 chore: fix format 2025-09-17 22:00:35 +02:00
Ralph Khreish
84baedc3d2 feat: implement tm list remote (#1185) 2025-09-17 22:00:35 +02:00
Ralph Khreish
78da39edff chore: address oauth PR concerns (#1184) 2025-09-17 22:00:35 +02:00
Ralph Khreish
4d1416b175 feat: add oauth with remote server (#1178) 2025-09-17 22:00:35 +02:00
Ralph Khreish
dc811eb45e feat: create tm-core and apps/cli (#1093)
- add typescript
- add npm workspaces
2025-09-17 22:00:34 +02:00
Ralph Khreish
3c41a113fe chore: move to tsdown (#1211)
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-09-17 21:55:53 +02:00
Ralph Khreish
0e8c42c7cb fix: UI list and show (#1210) 2025-09-17 15:05:33 +02:00
Ralph Khreish
799d1d2cce chore: fix env variables (#1204)
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-09-13 01:34:50 +02:00
losolosol
83af314879 feat: added vscode start task button (#1201)
Co-authored-by: Carlos Montoya <carlos@Carloss-MacBook-Pro.local>
Co-authored-by: Carlos Montoya <los@losmontoya.com>
2025-09-12 05:35:57 +02:00
Ralph Khreish
dd03374496 chore: fix CI p2 2025-09-11 18:44:40 -07:00
Ralph Khreish
4ab0affba7 chore: fix CI 2025-09-11 18:35:07 -07:00
Ralph Khreish
77e1ddc237 feat: add tm show (#1199) 2025-09-12 03:34:32 +02:00
Ralph Khreish
3eeb19590a chore: fix CI with new typescript setup (#1194)
Co-authored-by: Ralph Khreish <Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
2025-09-09 23:35:47 +02:00
Ralph Khreish
587745046f chore: fix format 2025-09-09 03:32:48 +02:00
Ralph Khreish
c61c73f827 feat: implement tm list remote (#1185) 2025-09-09 03:32:48 +02:00
Ralph Khreish
15900d9fd5 chore: address oauth PR concerns (#1184) 2025-09-09 03:32:48 +02:00
Ralph Khreish
7cf4004038 feat: add oauth with remote server (#1178) 2025-09-09 03:32:48 +02:00
Ralph Khreish
0f3ab00f26 feat: create tm-core and apps/cli (#1093)
- add typescript
- add npm workspaces
2025-09-09 03:32:48 +02:00
github-actions[bot]
e81040def5 Version Packages (#1189)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-09-09 03:17:22 +02:00
Ralph Khreish
597f6b03b4 Merge pull request #1164 (Release 0.26.0) 2025-09-08 22:17:30 +02:00
Ralph Khreish
a7ad4c8e92 chore: improve Claude documentation workflows (#1155) 2025-09-08 22:11:46 +02:00
Ralph Khreish
0d54747894 chore: fix CI 2025-09-08 12:46:07 -07:00
Jeremy Watt
df26c65632 fix: re-add and re-name claude code clear tm commands (#1133)
* fix: remove claude code clear tm commands

* readded taskmaster claude code commands previously removed

* Update crazy-zebras-drum.md

Updated via nudging from task rabbit - changeset now more direct and no longer mentions issues.

* claude code remove subtasks added back to assets

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: Jeremy Watt <jeremywatt@Jeremys-Mac-mini.local>
2025-09-01 11:54:17 +02:00
github-actions[bot]
e80e5bb7cd chore: rc version bump 2025-08-28 22:29:10 +00:00
Ralph Khreish
c4f92f6a0a feat: add optional feature flag for code context (#1165)
* feat: add featureFlag for codebase analysis

* chore: add changeset

* chore: run format
2025-08-29 00:28:00 +02:00
github-actions[bot]
be0c0f267c chore: rc version bump 2025-08-28 22:11:59 +00:00
Ralph Khreish
a983f75d4f Merge pull request #1167 from eyaltoledano/ralph/chore.merge.main.into.next.august
ralph/chore.merge.main.into.next.august
2025-08-29 00:10:36 +02:00
Ralph Khreish
e743aaa8c2 chore: run format 2025-08-29 00:10:05 +02:00
Ralph Khreish
16ffffaf68 Merge remote-tracking branch 'origin/main' into next-backup 2025-08-29 00:08:26 +02:00
Ralph Khreish
f254aed4a6 chore: run format 2025-08-29 00:07:34 +02:00
github-actions[bot]
dd3b47bb2b Version Packages (#1154)
* Version Packages

* chore: fix changelog

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-08-29 00:07:34 +02:00
Ralph Khreish
37af0f1912 feat: add codebase context capabilities to gemini-cli (#1163)
* feat: add support for claude code context

- code context for:
  - add-task
  - update-subtask
  - update-task
  - update

* feat: fix CI and format + refactor

* chore: format

* chore: fix test

* feat: add gemini-cli support for codebase context

* feat: add google cli integration and fix tests

* chore: apply requested coderabbit changes

* chore: bump gemini cli package
2025-08-28 22:44:52 +02:00
Parthy
8783708e5e Improve cross-tag move UX and safety; add MCP suggestions and CLI tips (#1135)
* docs: Auto-update and format models.md

* docs(ui,cli): remove --force from cross-tag move guidance; recommend --with-dependencies/--ignore-dependencies

- scripts/modules/ui.js: drop force tip in conflict resolution
- scripts/modules/commands.js: remove force examples from move help
- docs/cross-tag-task-movement.md: purge force mentions; add explicit with/ignore examples

* test(move): update cross-tag move tests to drop --force; assert with/ignore deps behavior and current-tag fallback

- CLI integration: remove force expectations, keep with/ignore, current-tag fallback
- Integration: remove force-path test
- Unit: add scoped traversal test, adjust fixtures to avoid id collision

* fix(move): scope dependency traversal to source tag; tag-aware ignore-dependencies filtering

- resolveDependencies: traverse only sourceTag tasks to avoid cross-tag contamination
- filter dependent IDs to those present in source tag, numeric only
- ignore-dependencies: drop deps pointing to tasks from sourceTag; keep targetTag deps

* test(mcp): ensure cross-tag move passes only with/ignore options and returns conflict suggestions

- new test: tests/unit/mcp/tools/move-task-cross-tag-options.test.js

* feat(move): add advisory tips when ignoring cross-tag dependencies; add integration test case

* feat(cli/move): improve ID collision UX for cross-tag moves\n\n- Print Next Steps tips when core returns them (e.g., after ignore-dependencies)\n- Add dedicated help block when an ID already exists in target tag

* feat(move/mcp): improve ID collision UX and suggestions\n\n- Core: include suggestions on TASK_ALREADY_EXISTS errors\n- MCP: map ID collision to TASK_ALREADY_EXISTS with suggestions\n- Tests: add MCP unit test for ID collision suggestions

* test(move/cli): print tips on ignore-dependencies results; print ID collision suggestions\n\n- CLI integration test: assert Next Steps tips printed when result.tips present\n- Integration test: assert TASK_ALREADY_EXISTS error includes suggestions payload

* chore(changeset): add changeset for cross-tag move UX improvements (CLI/MCP/core/tests)

* Add cross-tag task movement help and validation improvements

- Introduced a detailed help command for cross-tag task movement, enhancing user guidance on usage and options.
- Updated validation logic in `validateCrossTagMove` to include checks for indirect dependencies, improving accuracy in conflict detection.
- Refactored tests to ensure comprehensive coverage of new validation scenarios and error handling.
- Cleaned up documentation to reflect the latest changes in task movement functionality.

* refactor(commands): remove redundant tips printing after move operation

- Eliminated duplicate printing of tips for next steps after the move operation, streamlining the output for users.
- This change enhances clarity by ensuring tips are only displayed when relevant, improving overall user experience.

* docs(move): clarify "force move" options and improve examples

- Updated documentation to replace the deprecated "force move" concept with clear alternatives: `--with-dependencies` and `--ignore-dependencies`.
- Enhanced Scenario 3 with explicit options and improved inline comments for better readability.
- Removed confusing commented code in favor of a straightforward note in the Force Move section.

* chore: run formatter

* Update .changeset/clarify-force-move-docs.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update docs/cross-tag-task-movement.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update tests/unit/scripts/modules/task-manager/move-task-cross-tag.test.js

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* test(move): add test for dependency traversal scoping with --with-dependencies option

- Introduced a new test to ensure that the dependency traversal is limited to tasks from the source tag when using the --with-dependencies option, addressing potential ID collisions across tags.

* test(move): enhance tips validation in cross-tag task movement integration test

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-08-28 19:02:00 +02:00
Ralph Khreish
4dad2fd613 feat: add support for code context in more commands (#1162)
* feat: add support for claude code context

- code context for:
  - add-task
  - update-subtask
  - update-task
  - update

* feat: fix CI and format + refactor

* chore: format

* chore: fix broken tests

* chore: fix test
2025-08-28 18:05:31 +02:00
github-actions[bot]
4cae2991d4 Version Packages (#1154)
* Version Packages

* chore: fix changelog

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-08-23 00:44:25 +02:00
Ralph Khreish
0d7ff627c9 Merge pull request #1152 from eyaltoledano/next 2025-08-23 00:34:40 +02:00
Ralph Khreish
db720a954d Fix: disable streaming for parse prd (#1151)
* fix: temporarily disable streaming
2025-08-22 18:33:02 +02:00
Ben Vargas
89335578ff fix(claude-code): prevent "Right-hand side of instanceof is not an object" when SDK missing; improve CLI stability (#1146)
* fix: handle missing @anthropic-ai/claude-code SDK gracefully

Add defensive checks to prevent "Right-hand side of 'instanceof' is not an object" errors when the optional Claude Code SDK is not installed.

Changes:
- Check if AbortError exists before using instanceof
- Check if query function exists before calling it
- Provide clear error messages when SDK is missing

This fixes the issue reported by users in v0.24.0 and v0.25.0 where Task Master would crash with instanceof errors when using the claude-code provider without the SDK installed.

* chore: bump @anthropic-ai/claude-code to ^1.0.88 and regenerate lockfile
2025-08-22 17:01:34 +02:00
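
The commit above describes guarding against a missing optional SDK before using `instanceof` or calling `query`. A minimal sketch of that defensive pattern, with a simplified module shape that is an assumption rather than the SDK's real surface:

```ts
// Sketch: defensively load an optional SDK so `instanceof` and function calls
// never hit an undefined export. The module name is real
// (@anthropic-ai/claude-code); the shape shown here is simplified.
type ClaudeCodeSdk = {
	query?: (args: unknown) => AsyncIterable<unknown>;
	AbortError?: new (...args: unknown[]) => Error;
};

async function loadSdk(): Promise<ClaudeCodeSdk | null> {
	try {
		return (await import('@anthropic-ai/claude-code')) as ClaudeCodeSdk;
	} catch {
		return null; // optional dependency not installed
	}
}

function isSdkAbortError(sdk: ClaudeCodeSdk | null, err: unknown): boolean {
	// Only use instanceof when the constructor actually exists.
	if (!sdk || typeof sdk.AbortError !== 'function') return false;
	return err instanceof sdk.AbortError;
}

async function runQuery(sdk: ClaudeCodeSdk | null, args: unknown) {
	if (!sdk || typeof sdk.query !== 'function') {
		throw new Error(
			'claude-code provider requires the optional @anthropic-ai/claude-code package; install it or switch providers.'
		);
	}
	return sdk.query(args);
}
```
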
Ralph Khreish
781b8ef2af remove claude code mentioning 2025-08-22 17:01:34 +02:00
github-actions[bot]
7d564920b5 Version Packages (#1143)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-19 23:04:40 +02:00
Ralph Khreish
2737fbaa67 Release 0.25.0 #1134 from eyaltoledano/next
Release 0.25.0
2025-08-19 21:10:44 +02:00
Ralph Khreish
9feb8d2dbf chore: fix CI of claude code docs updater 2025-08-19 16:11:57 +02:00
Ralph Khreish
8a991587f1 chore: fix CI 2025-08-19 16:09:08 +02:00
Ralph Khreish
7ceba2f572 chore: exit pre-release mode 2025-08-19 16:01:31 +02:00
Ralph Khreish
10565f07d3 chore: run format 2025-08-19 16:00:48 +02:00
Ralph Khreish
f27ce34fe9 chore: fix changeset 2025-08-14 14:24:08 +02:00
github-actions[bot]
71be933a8d chore: rc version bump 2025-08-13 22:37:59 +00:00
Ralph Khreish
5d94f1b471 chore: add a bunch of automations (#1132)
* chore: add a bunch of automations

* chore: run format

* Update .github/scripts/auto-close-duplicates.mjs

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* chore: run format

---------

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-08-14 00:36:18 +02:00
Joe Danziger
3dee60dc3d fix: update Cursor one-click install link to new URL format (#1131)
* add /en to link

* add changeset
2025-08-13 18:02:47 +02:00
Ralph Khreish
f469515228 feat: add Claude documentation updater workflow (#1130)
This commit introduces a new GitHub Actions workflow that automatically updates documentation based on changes pushed to the 'next' branch. The workflow checks for modified files, creates a new branch for documentation updates, and utilizes the Claude Code Action to analyze changes and suggest necessary documentation revisions. If updates are made, a pull request is created for review.
2025-08-13 15:10:47 +02:00
Ralph Khreish
2fd0f026d3 chore: remove pre-release CI for extension, too much work and doesn't make sense for us (#1129) 2025-08-13 15:08:55 +02:00
Joe Danziger
e3ed4d7c14 feat: CLI & MCP progress tracking for parse-prd command (#1048)
* initial cutover

* update log to debug

* update tracker to pass units

* update test to match new base tracker format

* add streamTextService mocks

* remove unused imports

* Ensure the CLI waits for async main() completion

* refactor to reduce code duplication

* update comment

* reuse function

* ensure targetTag is defined in streaming mode

* avoid throwing inside process.exit spy

* check for null

* remove reference to generate

* fix formatting

* fix textStream assignment

* ensure no division by 0

* fix jest chalk mocks

* refactor for maintainability

* Improve bar chart calculation logic for consistent visual representation

* use custom streaming error types; fix mocks

* Update streamText extraction in parse-prd.js to match actual service response

* remove check - doesn't belong here

* update mocks

* remove streaming test that wasn't really doing anything

* add comment

* make parsing logic more DRY

* fix formatting

* Fix textStream extraction to match actual service response

* fix mock

* Add a cleanup method to ensure proper resource disposal and prevent memory leaks

* debounce progress updates to reduce UI flicker during rapid updates

* Implement timeout protection for streaming operations (60-second timeout) with automatic fallback to non-streaming mode.

* clear timeout properly

* Add a maximum buffer size limit (1MB) to prevent unbounded memory growth with very large streaming responses.

* fix formatting

* remove duplicate mock

* better docs

* fix formatting

* sanitize the dynamic property name

* Fix incorrect remaining progress calculation

* Use onError callback instead of console.warn

* Remove unused chalk import

* Add missing custom validator in fallback parsing configuration

* add custom validator parameter in fallback parsing

* chore: fix package-lock.json

* chore: large code refactor

* chore: increase timeout from 1 minute to 3 minutes

* fix: refactor and fix streaming

* Merge remote-tracking branch 'origin/next' into joedanz/parse-prd-progress

* fix: cleanup and fix unit tests

* chore: fix unit tests

* chore: fix format

* chore: run format

* chore: fix weird CI unit test error

* chore: fix format

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-08-12 22:37:07 +02:00
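
The parse-prd commit above mentions timeout protection with fallback to non-streaming mode, a ~1MB buffer cap, and proper timer cleanup. A minimal sketch of those safeguards, assuming illustrative names and limits:

```ts
// Sketch of the streaming safeguards described above: a hard timeout with
// fallback to a non-streaming call, plus a cap on buffered stream text.
// Names and limits are illustrative.
const STREAM_TIMEOUT_MS = 3 * 60 * 1000; // the commit raised 1 min to 3 min
const MAX_BUFFER_CHARS = 1024 * 1024; // rough ~1MB cap to bound memory

async function consumeStream(
	textStream: AsyncIterable<string>,
	onProgress: (buffered: string) => void
): Promise<string> {
	let buffer = '';
	let timer: ReturnType<typeof setTimeout> | undefined;
	const timeout = new Promise<never>((_, reject) => {
		timer = setTimeout(() => reject(new Error('stream timeout')), STREAM_TIMEOUT_MS);
	});

	const read = (async () => {
		for await (const chunk of textStream) {
			buffer += chunk;
			if (buffer.length > MAX_BUFFER_CHARS) {
				throw new Error('stream buffer limit exceeded');
			}
			onProgress(buffer);
		}
		return buffer;
	})();

	try {
		return await Promise.race([read, timeout]);
	} finally {
		if (timer) clearTimeout(timer); // release the timer once the race settles
	}
}

async function parseWithFallback(
	stream: () => Promise<AsyncIterable<string>>,
	nonStreaming: () => Promise<string>,
	onProgress: (buffered: string) => void
): Promise<string> {
	try {
		return await consumeStream(await stream(), onProgress);
	} catch {
		// On timeout or buffer overflow, fall back to one non-streaming call.
		return nonStreaming();
	}
}
```
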
Dominique Vidjanagni
fc47714340 Feat/add-kilocode-rules (#1040)
* feat: Add Kilo Code integration to TaskMaster

* feat: Add Kilo profile configuration to rule transformer tests

* refactor: Improve code formatting and consistency in Kilo profile and tests

* fix: Correct formatting of workspaces in package.json

* chore: add changeset for Kilo Code integration

* feat: add Kilo Code rules and mode configurations

- Add comprehensive rule sets for all modes (architect, ask, code, debug, orchestrator, test)
- Update .kilocodemodes configuration with mode-specific settings
- Configure MCP integration for Kilo Code profile
- Establish consistent rule structure across all modes

* refactor(kilo): simplify profile to reuse roo rules with replacements

Remove duplicate Kilo-specific rule files and assets in favor of reusing roo rules with dynamic replacements, eliminating 900+ lines of duplicated code while maintaining full Kilo functionality.

The profile now:
- Reuses ROO_MODES constant instead of maintaining separate KILO_MODES
- Applies text replacements to convert roo references to kilo
- Maps roo rule files to kilo equivalents via fileMap
- Removes all duplicate rule files from assets/kilocode directory

* refactor(kilo): restructure object literals for consistency and remove duplicate customReplacements array based on CodeRabbit's suggestion

* chore: remove disabled .mcp.json by mistake

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-08-12 22:35:57 +02:00
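
The Kilo Code commit above replaces duplicated rule files with a profile that reuses the roo rules via a fileMap and text replacements. A rough sketch of that idea; the interfaces, paths, and replacement patterns are assumptions for illustration, not the project's actual profile API:

```ts
// Illustrative sketch of "reuse roo rules with replacements": map roo rule
// files to kilo paths and rewrite roo references to kilo at copy time.
interface RuleProfile {
	name: string;
	fileMap: Record<string, string>; // source rule file -> target path (hypothetical entries)
	customReplacements: Array<{ from: RegExp; to: string }>;
}

const kiloProfile: RuleProfile = {
	name: 'kilo',
	fileMap: {
		'rules/roo-code.md': '.kilocode/rules/code.md',
		'rules/roo-debug.md': '.kilocode/rules/debug.md'
	},
	customReplacements: [
		{ from: /\broo\b/g, to: 'kilo' },
		{ from: /\.roomodes\b/g, to: '.kilocodemodes' }
	]
};

function transformRule(content: string, profile: RuleProfile): string {
	// Apply every replacement so the shared roo rule text reads as kilo text.
	return profile.customReplacements.reduce(
		(text, { from, to }) => text.replace(from, to),
		content
	);
}
```
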
github-actions[bot]
30ae0e9a57 docs: Auto-update and format models.md 2025-08-12 17:01:24 +00:00
Ralph Khreish
95640dcde8 feat: add gpt-oss models to ollama (#1124) 2025-08-12 19:01:10 +02:00
Ralph Khreish
311b2433e2 fix: remove claude code clear tm commands (#1123) 2025-08-11 18:59:35 +02:00
Parthy
04e11b5e82 feat: implement cross-tag task movement functionality (#1088)
* feat: enhance move command with cross-tag functionality

- Updated the `move` command to allow moving tasks between different tags, including options for handling dependencies.
- Added new options: `--from-tag`, `--to-tag`, `--with-dependencies`, `--ignore-dependencies`, and `--force`.
- Implemented validation for cross-tag moves and dependency checks.
- Introduced helper functions in the dependency manager for validating and resolving cross-tag dependencies.
- Added integration and unit tests to cover new functionality and edge cases.

* fix: refactor cross-tag move logic and enhance validation

- Moved the import of `moveTasksBetweenTags` to the correct location in `commands.js` for better clarity.
- Added new helper functions in `dependency-manager.js` to improve validation and error handling for cross-tag moves.
- Enhanced existing functions to ensure proper handling of task dependencies and conflicts.
- Updated tests to cover new validation scenarios and ensure robust error messaging for invalid task IDs and tags.

* fix: improve task ID handling and error messaging in cross-tag moves

- Refactored `moveTasksBetweenTags` to normalize task IDs for comparison, ensuring consistent handling of string and numeric IDs.
- Enhanced error messages for cases where source and target tags are the same but no destination is specified.
- Updated tests to validate new behavior, including handling string dependencies correctly during cross-tag moves.
- Cleaned up existing code for better readability and maintainability.

* test: add comprehensive tests for cross-tag move and dependency validation

- Introduced new test files for `move-cross-tag` and `cross-tag-dependencies` to cover various scenarios in cross-tag task movement.
- Implemented tests for handling task movement with and without dependencies, including edge cases for error handling.
- Enhanced existing tests in `fix-dependencies-command` and `move-task` to ensure robust validation of task IDs and dependencies.
- Mocked necessary modules and functions to isolate tests and improve reliability.
- Ensured coverage for both successful and failed cross-tag move operations, validating expected outcomes and error messages.

* test: refactor cross-tag move tests for better clarity and reusability

- Introduced a helper function `simulateCrossTagMove` to streamline cross-tag move test cases, reducing redundancy and improving readability.
- Updated existing tests to utilize the new helper function, ensuring consistent handling of expected messages and options.
- Enhanced test coverage for various scenarios, including handling of dependencies and flags.

* feat: add cross-tag task movement functionality

- Introduced new commands for moving tasks between different tags, enhancing project organization capabilities.
- Updated README with usage examples for cross-tag movement, including options for handling dependencies.
- Created comprehensive documentation for cross-tag task movement, detailing usage, error handling, and best practices.
- Implemented core logic for cross-tag moves, including validation for dependencies and error handling.
- Added integration and unit tests to ensure robust functionality and coverage for various scenarios, including edge cases.

* fix: enhance error handling and logging in cross-tag task movement

- Improved logging in `moveTaskCrossTagDirect` to include detailed arguments for better traceability.
- Refactored error handling to utilize structured error objects, providing clearer suggestions for resolving cross-tag dependency conflicts and subtask movement restrictions.
- Updated documentation to reflect changes in error handling and provide clearer guidance on task movement options.
- Added integration tests for cross-tag movement scenarios, ensuring robust validation of error handling and task movement logic.
- Cleaned up existing tests for clarity and reusability, enhancing overall test coverage.

* feat: enhance dependency resolution and error handling in task movement

- Added recursive dependency resolution for tasks in `moveTasksBetweenTags`, improving handling of complex task relationships.
- Introduced helper functions to find all dependencies and reverse dependencies, ensuring comprehensive coverage during task moves.
- Enhanced error messages in `validateSubtaskMove` and `displaySubtaskMoveError` for better clarity on movement restrictions.
- Updated tests to cover new functionality, including integration tests for complex cross-tag movement scenarios and edge cases.
- Refactored existing code for improved readability and maintainability, ensuring consistent handling of task IDs and dependencies.

* feat: unify dependency traversal and enhance task management utilities

- Introduced `traverseDependencies` utility for unified forward and reverse dependency traversal, improving code reusability and clarity.
- Refactored `findAllDependenciesRecursively` to leverage the new utility, streamlining dependency resolution in task management.
- Added `formatTaskIdForDisplay` helper for better task ID formatting in UI, enhancing user experience during error displays.
- Updated tests to cover new utility functions and ensure robust validation of dependency handling across various scenarios.
- Improved overall code organization and readability, ensuring consistent handling of task dependencies and IDs.

* fix: improve validation for dependency parameters in `findAllDependenciesRecursively`

- Added checks to ensure `sourceTasks` and `allTasks` are arrays, throwing errors if not, to prevent runtime issues.
- Updated documentation comment for clarity on the function's purpose and parameters.

* fix: remove `force` option from task movement parameters

- Eliminated the `force` parameter from the `moveTaskCrossTagDirect` function and related tools, simplifying the task movement logic.
- Updated documentation and tests to reflect the removal of the `force` option, ensuring clarity and consistency across the codebase.
- Adjusted related functions and tests to focus on `ignoreDependencies` as the primary control for handling dependency conflicts during task moves.

* Add cross-tag task movement functionality

- Introduced functionality for organizing tasks across different contexts by enabling cross-tag movement.
- Added `formatTaskIdForDisplay` helper to improve task ID formatting in UI error messages.
- Updated relevant tests to incorporate new functionality and ensure accurate error displays during task movements.

* Update scripts/modules/dependency-manager.js

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* refactor(dependency-manager): Fix subtask resolution and extract helper functions

1. Fix subtask finding logic (lines 1315-1330):
   - Correctly locate parent task by numeric ID
   - Search within parent's subtasks array instead of top-level tasks
   - Properly handle relative subtask references

2. Extract helper functions from getDependentTaskIds (lines 1440-1636):
   - Move findTasksThatDependOn as module-level function
   - Move taskDependsOnSource as module-level function
   - Move subtasksDependOnSource as module-level function
   - Improves readability, maintainability, and testability

Both fixes address architectural issues and improve code organization.

* refactor(dependency-manager): Enhance subtask resolution and dependency validation

- Improved subtask resolution logic to correctly find parent tasks and their subtasks, ensuring accurate identification of dependencies.
- Filtered out null/undefined dependencies before processing, enhancing robustness in dependency checks.
- Updated comments for clarity on the logic flow and purpose of changes, improving code maintainability.

* refactor(move-task): clarify destination ID description and improve skipped task handling

- Updated the description for the destination ID to clarify its usage in cross-tag moves.
- Simplified the handling of skipped tasks during multiple task movements, improving readability and logging.
- Enhanced the API result response to include detailed information about moved and skipped tasks, ensuring better feedback for users.

* refactor(commands): remove redundant tag validation logic

- Eliminated the check for identical source and target tags in the task movement logic, simplifying the code.
- This change streamlines the flow for within-tag moves, enhancing readability and maintainability.

* refactor(commands): enhance move command logic and error handling

- Introduced helper functions for better organization of cross-tag and within-tag move logic, improving code readability and maintainability.
- Enhanced error handling with structured error objects, providing clearer feedback for dependency conflicts and invalid tag combinations.
- Updated move command help output to include best practices and error resolution tips, ensuring users have comprehensive guidance during task movements.
- Streamlined task movement logic to handle multiple tasks more effectively, including detailed logging of successful and failed moves.

* test(dependency-manager): add subtasks to task structure and mock dependency traversal

- Updated `circular-dependencies.test.js` to include subtasks in task definitions, enhancing test coverage for task structures with nested dependencies.
- Mocked `traverseDependencies` in `fix-dependencies-command.test.js` to ensure consistent behavior during tests, improving reliability of dependency-related tests.

* refactor(dependency-manager): extract subtask finding logic into helper function

- Added `findSubtaskInParent` function to encapsulate subtask resolution within a parent task's subtasks array, improving code organization and readability.
- Updated `findDependencyTask` to utilize the new helper function, streamlining the logic for finding subtasks and enhancing maintainability.
- Enhanced comments for clarity on the purpose and functionality of the new subtask finding logic.

* refactor(ui): enhance subtask ID validation and improve error handling

- Added validation for subtask ID format in `formatDependenciesWithStatus` and `taskExists`, ensuring proper handling of invalid formats.
- Updated error logging in `displaySubtaskMoveError` to provide warnings for unexpected task ID formats, improving user feedback.
- Converted hints to a Set in `displayDependencyValidationHints` to ensure unique hints are displayed, enhancing clarity in the UI.

* test(cli): remove redundant timing check in complex cross-tag scenarios

- Eliminated the timing check for task completion within 5 seconds in `complex-cross-tag-scenarios.test.js`, streamlining the test logic.
- This change focuses on verifying task success without unnecessary timing constraints, enhancing test clarity and maintainability.

* test(integration): enhance task movement tests with mock file system

- Added integration tests for moving tasks within the same tag and between different tags using the actual `moveTask` and `moveTasksBetweenTags` functions.
- Implemented `mock-fs` to simulate file system interactions, improving test isolation and reliability.
- Verified task movement success and ensured proper handling of subtasks and dependencies, enhancing overall test coverage for task management functionality.
- Included error handling tests for missing tags and task IDs to ensure robustness in task movement operations.

* test(unit): add comprehensive tests for moveTaskCrossTagDirect functionality

- Introduced new test cases to verify mock functionality, ensuring that mocks for `findTasksPath` and `readJSON` are working as expected.
- Added tests for parameter validation, error handling, and function call flow, including scenarios for missing project roots and identical source/target tags.
- Enhanced coverage for ID parsing and move options, ensuring robust handling of various input conditions and improving overall test reliability.

* test(integration): skip tests for dependency conflict handling and withDependencies option

- Marked tests for handling dependency conflicts and the withDependencies option as skipped due to issues with the mock setup.
- Added TODOs to address the mock-fs setup for complex dependency scenarios, ensuring future improvements in test reliability.

* test(unit): expand cross-tag move command tests with comprehensive mocks

- Added extensive mocks for various modules to enhance the testing of the cross-tag move functionality in `move-cross-tag.test.js`.
- Implemented detailed test cases for handling cross-tag moves, including validation for missing parameters and identical source/target tags.
- Improved error handling tests to ensure robust feedback for invalid operations, enhancing overall test reliability and coverage.

* test(integration): add complex dependency scenarios to task movement tests

- Introduced new integration tests for handling complex dependency scenarios in task movement, utilizing the actual `moveTasksBetweenTags` function.
- Added tests for circular dependencies, nested dependency chains, and cross-tag dependency resolution, enhancing coverage and reliability.
- Documented limitations of the mock-fs setup for complex scenarios and provided warnings in the test output to guide future improvements.
- Skipped tests for dependency conflicts and the withDependencies option due to mock setup issues, with TODOs for resolution.

* test(unit): refactor move-cross-tag tests with focused mock system

- Simplified mocking in `move-cross-tag.test.js` by implementing a configuration-driven mock system, reducing the number of mocked modules from 20+ to 5 core functionalities.
- Introduced a reusable mock factory to streamline the creation of mocks based on configuration, enhancing maintainability and clarity.
- Added documentation for the new mock system, detailing usage examples and benefits, including reduced complexity and improved test focus.
- Implemented tests to validate the mock configuration, ensuring flexibility in enabling/disabling specific mocks.

* test(unit): clean up mocks and improve isEmpty function in fix-dependencies-command tests

- Removed the mock for `traverseDependencies` as it was unnecessary, simplifying the test setup.
- Updated the `isEmpty` function to clarify its behavior regarding null and undefined values, enhancing code readability and maintainability.

* test(unit): update traverseDependencies mock for consistency across tests

- Standardized the mock implementation of `traverseDependencies` in both `fix-dependencies-command.test.js` and `complexity-report-tag-isolation.test.js` to accept `sourceTasks`, `allTasks`, and `options` parameters, ensuring uniformity in test setups.
- This change enhances clarity and maintainability of the tests by aligning the mock behavior across different test files.

* fix(core): improve task movement error handling and ID normalization

- Wrapped task movement logic in a try-finally block to ensure console output is restored even on errors, enhancing reliability.
- Normalized source IDs to handle mixed string/number comparisons, preventing potential issues in dependency checks.
- Added tests for ID type consistency to verify that the normalization fix works correctly across various scenarios, improving test coverage and robustness.

* refactor(task-manager): restructure task movement logic for improved validation and execution

- Renamed and refactored `moveTasksBetweenTags` to streamline the task movement process into distinct phases: validation, data preparation, dependency resolution, execution, and finalization.
- Introduced `validateMove`, `prepareTaskData`, `resolveDependencies`, `executeMoveOperation`, and `finalizeMove` functions to enhance modularity and clarity.
- Updated documentation comments to reflect changes in function responsibilities and parameters.
- Added comprehensive unit tests for the new structure, ensuring robust validation and error handling across various scenarios.
- Improved handling of dependencies and task existence checks during the move operation, enhancing overall reliability.

* fix(move-task): streamline task movement logic and improve error handling

- Refactored the task movement process to enhance clarity and maintainability by replacing `forEach` with a `for...of` loop for better async handling.
- Consolidated error handling and result logging to ensure consistent feedback during task moves.
- Updated the logic for generating files only on the last move, improving performance and reducing unnecessary operations.
- Enhanced validation for skipped tasks, ensuring accurate reporting of moved and skipped tasks in the final result.

* fix(docs): update error message formatting and enhance clarity in task movement documentation

- Changed code block syntax from generic to `text` for better readability in error messages related to task movement and dependency conflicts.
- Ensured consistent formatting across all error message examples to improve user understanding of task movement restrictions and resolutions.
- Added a newline at the end of the file for proper formatting.

* Update .changeset/crazy-meals-hope.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* chore: improve changeset

* chore: improve changeset

* fix referenced bug in docs and remove docs

* chore: fix format

---------

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-08-11 18:58:51 +02:00
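
The cross-tag move commit above introduces a unified `traverseDependencies` utility for forward and reverse traversal, scoped to the source tag's tasks. A minimal breadth-first sketch in that spirit; the task shape and option names are simplified assumptions:

```ts
// Sketch of unified dependency traversal scoped to the source tag: walk
// either forward dependencies or reverse dependents, never leaving the
// provided task list.
interface Task {
	id: number;
	dependencies?: number[];
}

function traverseDependencies(
	startIds: number[],
	sourceTasks: Task[], // traversal is limited to the source tag's tasks
	direction: 'forward' | 'reverse' = 'forward'
): number[] {
	const byId = new Map(sourceTasks.map((t) => [t.id, t]));
	const visited = new Set<number>(startIds);
	const queue = [...startIds];
	const found: number[] = [];

	while (queue.length > 0) {
		const current = queue.shift()!;
		const next =
			direction === 'forward'
				? byId.get(current)?.dependencies ?? []
				: sourceTasks
						.filter((t) => (t.dependencies ?? []).includes(current))
						.map((t) => t.id);

		for (const id of next) {
			if (!byId.has(id) || visited.has(id)) continue; // stay inside the source tag
			visited.add(id);
			found.push(id);
			queue.push(id);
		}
	}
	return found;
}
```
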
Ladi
782728ff95 feat: add --compact flag for minimal task list output (#1054)
* feat: add --compact flag for minimal task list output

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-08-11 18:35:23 +02:00
Fábio Vedovelli
30ca144231 feat: Add task id to task details UI (#1100)
* Display current task ID on task details page

* Changeset

* Implement CodeRabbit review suggestion.

* chore: fix CI errors

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-08-11 14:42:31 +02:00
Ralph Khreish
0220d0e994 chore: pimp my readme (#1122) 2025-08-11 14:29:49 +02:00
Ralph Khreish
41a8c2406a chore: add docs to monorepo (#1111) 2025-08-09 13:31:45 +02:00
github-actions[bot]
a003041cd8 Version Packages (#1107)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-08 22:00:39 +02:00
Ralph Khreish
6b57ead106 Release 0.24.0 #1098 from eyaltoledano/next
Release 0.24.0
2025-08-08 21:58:41 +02:00
Ralph Khreish
7b6e117b1d chore: prepare for release (exit pre mode) 2025-08-08 21:21:25 +02:00
Ralph Khreish
03b045e9cd chore: run format 2025-08-08 21:18:33 +02:00
neonwatty
699afdae59 feat: add task-checker agent to assets directory 2025-08-08 08:47:20 -07:00
github-actions[bot]
80c09802e8 chore: rc version bump 2025-08-08 12:41:29 +00:00
github-actions[bot]
cf8f0f4b1c docs: Auto-update and format models.md 2025-08-08 12:38:58 +00:00
Ralph Khreish
75c514cf5b feat: add gpt-5 support (#1105)
* feat: add gpt-5 support
2025-08-08 14:38:44 +02:00
Ralph Khreish
41d1e671b1 chore: fix CI checker, improve it (#1099) 2025-08-07 15:52:49 +02:00
Ralph Khreish
a464e550b8 feat(extension): implement simple solution to --package flag (#1090)
* feat(extension): implement simple solution to --package flag
2025-08-07 15:10:34 +02:00
Ralph Khreish
3a852afdae chore: implement pre-release for extensions (#1097)
* chore: implement pre-release for extensions

* chore: run format
2025-08-07 15:08:14 +02:00
Ralph Khreish
4bb63706b8 feat: implement claude code agents (#1091)
* feat: implement claude code agents

* chore: add changeset

- run format

* feat: improve task-checker, executor, and orchestrator

* chore: improve changeset
2025-08-07 12:37:06 +02:00
Ralph Khreish
fcf14e09be chore: improve release check for release-check and release flow (#1095)
* chore: improve release check for release-check and release flow

* chore: fix format
2025-08-06 23:19:28 +02:00
Ralph Khreish
4357af3f13 fix(expand-task): include parent task context in complexity report variant (#1094)
- Fixed bug where expand task generated generic authentication subtasks
- The complexity-report prompt variant now includes parent task details
- Added comprehensive unit tests to prevent regression
- Added debug logging to help diagnose similar issues

Previously, when using a complexity report with expansionPrompt, only the
expansion guidance was sent to the AI, missing the actual task context.
This caused the AI to generate unrelated generic subtasks.

Fixes the issue where all tasks would get the same generic auth-related
subtasks regardless of their actual purpose (AWS infrastructure, Docker
containerization, etc.)

Co-authored-by: Sadaqat Ali <32377500+sadaqat12@users.noreply.github.com>
2025-08-06 21:00:32 +02:00
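
The expand-task fix above notes that the complexity-report prompt variant previously sent only the expansion guidance, dropping the parent task's context. A small sketch of the combined prompt idea; field and function names are illustrative, not the project's prompt templates:

```ts
// Sketch: when a complexity report supplies an expansionPrompt, still include
// the parent task's own details so generated subtasks stay on topic.
interface ParentTask {
	id: number;
	title: string;
	description: string;
	details?: string;
}

function buildExpansionPrompt(task: ParentTask, expansionPrompt?: string): string {
	const taskContext = [
		`Task ${task.id}: ${task.title}`,
		`Description: ${task.description}`,
		task.details ? `Details: ${task.details}` : ''
	]
		.filter(Boolean)
		.join('\n');

	// Previously only the expansion guidance was sent; combining both keeps
	// the subtasks tied to the actual task instead of generic boilerplate.
	return expansionPrompt
		? `${taskContext}\n\nExpansion guidance:\n${expansionPrompt}`
		: taskContext;
}
```
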
Ralph Khreish
59f7676051 feat: add prompt for claude code to include context when generating tasks (#1092) 2025-08-06 12:41:19 +02:00
Ralph Khreish
36468f3c93 feat: add prompt for claude code to include context when generating tasks 2025-08-06 12:17:49 +02:00
Ralph Khreish
ca4d93ee6a chore: prepare prd for tm-core creation phase 1 (#1045) 2025-08-06 10:10:29 +02:00
Ralph Khreish
37fb569a62 Release 0.23.1 #1084 from eyaltoledano/next
chore: exit pre-release mode
2025-08-04 20:43:46 +03:00
Ralph Khreish
ed0d4e6641 chore: implement requested changes 2025-08-04 14:40:11 +03:00
Ralph Khreish
5184f8e7b2 chore: implement requested changes 2025-08-04 14:37:24 +03:00
Ralph Khreish
587523a23b chore: improve release-check CI 2025-08-04 14:29:03 +03:00
Ralph Khreish
7a50f0c6ec chore: add release-check CI to check if we are in release mode (#1086) 2025-08-04 13:22:46 +02:00
Ralph Khreish
adeb76ee15 chore: exit pre-release mode 2025-08-04 10:02:00 +03:00
Ralph Khreish
d342070375 Merge pull request #1081 from eyaltoledano/next 2025-08-03 21:38:19 +03:00
github-actions[bot]
5e4dbac525 chore: rc version bump 2025-08-03 14:07:15 +00:00
Ralph Khreish
fb15c2eaf7 chore: improve pre-release CI to just use dropdown (#1080)
* chore: improve pre-release CI to just use dropdown

* chore: implement requested changes
2025-08-03 16:04:29 +02:00
Ralph Khreish
e8ceb08341 chore: improve release and pre-release CI (#1075)
* chore: improve release and pre-release CI

* chore: apply requested changes

* Update .github/workflows/pre-release.yml

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* chore: apply requested changes

* chore: apply requested changes

---------

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-08-03 15:17:18 +02:00
Ralph Khreish
e495b2b559 feat: improve scope up and down command & parse-prd improvements (#1079)
* feat: improve scope up and down command & parse-prd improvements

* chore: run format
2025-08-03 14:12:46 +02:00
Ralph Khreish
e0d1d03f33 chore: fix tag-extension and improve debugging (#1074) 2025-08-02 23:10:57 +02:00
Ralph Khreish
4a4bca905d chore: fix tag-extension package.json not found for extension (#1073)
* chore: fix tag-extension package.json not found for extension

* chore: fix format
2025-08-02 22:56:53 +02:00
github-actions[bot]
9d5f50ac8e Version Packages (#1072)
* Version Packages

* chore: add eyal instead of crunchyman to eyal's feature

* chore: run format

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-08-02 22:43:08 +02:00
Ralph Khreish
bbeaa9163a chore: exit pre (#1071) 2025-08-02 22:35:46 +02:00
Ralph Khreish
a4a172be94 Release 0.23.0 #1064 from eyaltoledano/next
Release 0.23.0
2025-08-02 23:18:05 +03:00
github-actions[bot]
028ed9c444 chore: rc version bump 2025-08-02 20:07:43 +00:00
Ralph Khreish
53903f1e8e chore: trick rc bump 2025-08-02 23:06:37 +03:00
github-actions[bot]
36c56231cc chore: rc version bump 2025-08-02 20:04:33 +00:00
Ralph Khreish
b82d858f81 chore: rename chagneset for rc bump 2025-08-02 23:03:10 +03:00
Ralph Khreish
9808967d6b chore: change from patch to minor 2025-08-02 23:00:56 +03:00
Ralph Khreish
3fee7515f3 fix: fix mcp tool call in extension (#1070)
* fix: fix mcp tool call in extension

- fix console.log directly being used in scope-adjutment.js breaking mcp

* chore: run format and fix tests

* chore: format
2025-08-02 21:59:02 +02:00
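
The fix above traces the broken MCP tool call to a direct `console.log`. MCP servers speak JSON-RPC over stdout, so any stray text on stdout corrupts the protocol framing; diagnostics belong on stderr or a logger. A minimal sketch of that routing, with a hypothetical logger shape:

```ts
// Sketch: route diagnostics to stderr so they never interleave with the
// JSON-RPC messages an MCP server writes to stdout. Logger shape is an
// assumption, not the project's actual logger.
const log = {
	info: (...args: unknown[]) => console.error('[info]', ...args),
	warn: (...args: unknown[]) => console.error('[warn]', ...args),
	error: (...args: unknown[]) => console.error('[error]', ...args)
};

function adjustScope(taskId: number, strength: string) {
	// console.log(`adjusting ${taskId}`); // would corrupt the stdout protocol stream
	log.info(`adjusting scope of task ${taskId} (${strength})`); // goes to stderr
}
```
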
github-actions[bot]
82b17bdb57 chore: rc version bump 2025-08-02 16:44:19 +00:00
Eyal Toledano
72ca68edeb Task 104: Implement 'scope-up' and 'scope-down' CLI Commands for Dynamic Task Complexity Adjustment (#1069)
* feat(task-104): Complete task 104 - Implement scope-up and scope-down CLI Commands

- Added new CLI commands 'scope-up' and 'scope-down' with comma-separated ID support
- Implemented strength levels (light/regular/heavy) and custom prompt functionality
- Created core complexity adjustment logic with AI integration
- Added MCP tool equivalents for integrated environments
- Comprehensive error handling and task validation
- Full test coverage with TDD approach
- Updated task manager core and UI components

Task 104: Implement 'scope-up' and 'scope-down' CLI Commands for Dynamic Task Complexity Adjustment - Complete implementation with CLI, MCP integration, and testing

* chore: Add changeset for scope-up and scope-down features

- Comprehensive user-facing description with usage examples
- Key features and benefits explanation
- CLI and MCP integration details
- Real-world use cases for agile workflows

* feat(extension): Add scope-up and scope-down to VS Code extension task details

- Added useScopeUpTask and useScopeDownTask hooks in useTaskQueries.ts
- Enhanced AIActionsSection with Task Complexity Adjustment section
- Added strength selection (light/regular/heavy) and custom prompt support
- Integrated scope buttons with proper loading states and error handling
- Uses existing mcpRequest handler for scope_up_task and scope_down_task tools
- Maintains consistent UI patterns with existing AI actions

Extension now supports dynamic task complexity adjustment directly from task details view.
2025-08-02 18:43:04 +02:00
DavidMaliglowka
64302dc191 feat(extension): complete VS Code extension with kanban board interface (#997)
---------
Co-authored-by: DavidMaliglowka <13022280+DavidMaliglowka@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-08-01 14:04:22 +02:00
github-actions[bot]
60c03c548d chore: rc version bump 2025-07-31 22:15:10 +00:00
Ralph Khreish
2ae6e7e6be fix: normalize task IDs to numbers on load to fix comparison issues (#1063)
* fix: normalize task IDs to numbers on load to fix comparison issues

When tasks.json contains string IDs (e.g., "5" instead of 5), task lookups
fail because the code uses parseInt() and strict equality (===) for comparisons.

This fix normalizes all task and subtask IDs to numbers when loading the JSON,
ensuring consistent comparisons throughout the codebase without requiring
changes to multiple comparison locations.

Fixes task not found errors when using string IDs in tasks.json.

* Added test

* Don't mess up formatting

* Fix formatting once and for all

* Update scripts/modules/utils.js

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update scripts/modules/utils.js

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update scripts/modules/utils.js

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* fix: normalize task IDs to numbers on load to fix comparison issues

- Added normalizeTaskIds function to convert string IDs to numbers
- Applied normalization in readJSON for all code paths
- Fixed set-task-status, add-task, and move-task to normalize IDs when working with raw data
- Exported normalizeTaskIds function for use in other modules
- Added test case for string ID normalization

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Simplified implementation

* refactor: normalize IDs once when loading JSON instead of scattered calls

- Normalize all tags' data when creating _rawTaggedData in readJSON
- Add support for handling malformed dotted subtask IDs (e.g., "5.1" -> 1)
- Remove redundant normalizeTaskIds calls from set-task-status, add-task, and move-task
- Add comprehensive test for mixed ID formats (string IDs and dotted notation)
- Cleaner, more maintainable solution that normalizes IDs at load time

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* chore: run format to resolve CI issues

---------

Co-authored-by: Carl Mercier <carl@carlmercier.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
2025-08-01 00:06:51 +02:00
Ben Vargas
45a14c323d fix: remove default value from complexity report option to enable tag-specific detection (#1049)
Removes the default empty array value from the complexity report option to properly detect when tags are explicitly provided vs when no tags are provided, fixing the expand --all command behavior with tagged tasks.

Co-authored-by: Ben Vargas <ben@example.com>
2025-07-26 15:26:45 +02:00
github-actions[bot]
29e67fafa4 Version Packages (#1050)
* Version Packages

* chore: fix release 0.22

todo: fix CI

* chore: run format

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-07-26 00:45:07 +02:00
Ralph Khreish
43e4d7c9d3 Release 0.22 #1038 from eyaltoledano/next
Release 0.22
2025-07-26 01:24:38 +03:00
Ralph Khreish
1bd1e64cac feat: add pull request templates for bug fixes, features, and integrations (#1044)
* feat: add pull request templates for bug fixes, features, and integrations

- Introduced a comprehensive pull request template structure to streamline contributions.
- Added specific templates for bug fixes, new features, and integrations to enhance clarity and consistency in PR submissions.
- Configured the pull request template settings for better user guidance during the contribution process.

* chore: fix format

* Update .github/PULL_REQUEST_TEMPLATE.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update .github/PULL_REQUEST_TEMPLATE.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update .github/PULL_REQUEST_TEMPLATE.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* chore: implement PR requested changes

---------

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-07-24 17:33:43 +02:00
Ralph Khreish
dc44ed9de8 feat: Improved the CLI user interface to suggest generating a complexity report if it is missing, instead of showing an error. (#1043)
* feat: fix CLI UI error when trying to display non-existent complexity report

* chore: fix git issue

* chore: run format

* Update .changeset/thick-squids-attend.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update .changeset/thick-squids-attend.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update .changeset/thick-squids-attend.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

---------

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-07-24 15:50:18 +02:00
627 changed files with 94474 additions and 13139 deletions

View File

@@ -2,13 +2,16 @@
"$schema": "https://unpkg.com/@changesets/config@3.1.1/schema.json",
"changelog": [
"@changesets/changelog-github",
{ "repo": "eyaltoledano/claude-task-master" }
{
"repo": "eyaltoledano/claude-task-master"
}
],
"commit": false,
"fixed": [],
"linked": [],
"access": "public",
"baseBranch": "main",
"updateInternalDependencies": "patch",
"ignore": []
}
"ignore": [
"docs",
"@tm/claude-code-plugin"
]
}

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Fix compatibility with @google/gemini-cli-core v0.1.12+ by updating ai-sdk-provider-gemini-cli to v0.1.1.

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Fix 'expand --all' and 'show' commands to correctly handle tag contexts for complexity reports and task display.

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Clean up remaining automatic task file generation calls

View File

@@ -1,24 +0,0 @@
---
"task-master-ai": minor
---
Add comprehensive Kiro IDE integration with autonomous task management hooks
- **Kiro Profile**: Added full support for Kiro IDE with automatic installation of 7 Taskmaster agent hooks
- **Hook-Driven Workflow**: Introduced natural language automation hooks that eliminate manual task status updates
- **Automatic Hook Installation**: Hooks are now automatically copied to `.kiro/hooks/` when running `task-master rules add kiro`
- **Language-Agnostic Support**: All hooks support multiple programming languages (JS, Python, Go, Rust, Java, etc.)
- **Frontmatter Transformation**: Kiro rules use simplified `inclusion: always` format instead of Cursor's complex frontmatter
- **Special Rule**: Added `taskmaster_hooks_workflow.md` that guides AI assistants to prefer hook-driven completion
Key hooks included:
- Task Dependency Auto-Progression: Automatically starts tasks when dependencies complete
- Code Change Task Tracker: Updates task progress as you save files
- Test Success Task Completer: Marks tasks done when tests pass
- Daily Standup Assistant: Provides personalized task status summaries
- PR Readiness Checker: Validates task completion before creating pull requests
- Complexity Analyzer: Auto-expands complex tasks into manageable subtasks
- Git Commit Task Linker: Links commits to tasks for better traceability
This creates a truly autonomous development workflow where task management happens naturally as you code!

View File

@@ -1,16 +0,0 @@
{
"mode": "pre",
"tag": "rc",
"initialVersions": {
"task-master-ai": "0.21.0",
"extension": "0.20.0"
},
"changesets": [
"fix-gemini-cli-dependency",
"fresh-bugs-squashed",
"happy-sites-stay",
"orange-pots-add",
"quiet-rabbits-bathe",
"swift-otters-argue"
]
}

View File

@@ -1,10 +0,0 @@
---
"task-master-ai": patch
---
Fix max_tokens limits for OpenRouter and Groq models
- Add special handling in config-manager.js for custom OpenRouter models to use a conservative default of 32,768 max_tokens
- Update qwen/qwen-turbo model max_tokens from 1,000,000 to 32,768 to match OpenRouter's actual limits
- Fix moonshotai/kimi-k2-instruct max_tokens to 16,384 to match Groq's actual limit (fixes #1028)
- This prevents "maximum context length exceeded" errors when using OpenRouter models not in our supported models list
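Roughly, the special handling described in the first bullet amounts to a clamp like the sketch below; the function name and the non-OpenRouter fallback are illustrative assumptions, not the actual config-manager.js code.
```typescript
// Illustrative sketch only; not the real config-manager.js implementation.
const OPENROUTER_CUSTOM_DEFAULT_MAX_TOKENS = 32_768;

interface SupportedModel {
  id: string;
  maxTokens: number;
}

function resolveMaxTokens(
  provider: string,
  modelId: string,
  supportedModels: SupportedModel[]
): number {
  const known = supportedModels.find((m) => m.id === modelId);
  // Models in the supported list keep their published limit
  // (e.g. qwen/qwen-turbo at 32,768, moonshotai/kimi-k2-instruct at 16,384).
  if (known) return known.maxTokens;
  // Custom OpenRouter models fall back to a conservative default so requests
  // never exceed the provider's real context window.
  if (provider === 'openrouter') return OPENROUTER_CUSTOM_DEFAULT_MAX_TOKENS;
  // Hypothetical fallback for other custom models.
  return 16_384;
}
```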

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Fix VSCode profile generation to use correct rule file names (using `.instructions.md` extension instead of `.md`) and front-matter properties (removing the unsupported `alwaysApply` property from instructions files' front-matter).

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---
Prompt to generate a complexity report when it is missing

View File

@@ -0,0 +1,32 @@
{
"name": "taskmaster",
"owner": {
"name": "Hamster",
"email": "ralph@tryhamster.com"
},
"metadata": {
"description": "Official marketplace for Taskmaster AI - AI-powered task management for ambitious development",
"version": "1.0.0"
},
"plugins": [
{
"name": "taskmaster",
"source": "./packages/claude-code-plugin",
"description": "AI-powered task management system for ambitious development workflows with intelligent orchestration, complexity analysis, and automated coordination",
"author": {
"name": "Hamster"
},
"homepage": "https://github.com/eyaltoledano/claude-task-master",
"repository": "https://github.com/eyaltoledano/claude-task-master",
"keywords": [
"task-management",
"ai",
"workflow",
"orchestration",
"automation",
"mcp"
],
"category": "productivity"
}
]
}

View File

@@ -0,0 +1,38 @@
---
allowed-tools: Bash(gh issue view:*), Bash(gh search:*), Bash(gh issue list:*), Bash(gh api:*), Bash(gh issue comment:*)
description: Find duplicate GitHub issues
---
Find up to 3 likely duplicate issues for a given GitHub issue.
To do this, follow these steps precisely:
1. Use an agent to check whether the GitHub issue (a) is closed, (b) does not need to be deduped (e.g. because it is broad product feedback without a specific solution, or positive feedback), or (c) already has a duplicates comment that you made earlier. If so, do not proceed.
2. Use an agent to view the GitHub issue and ask it to return a summary of the issue.
3. Then launch 5 parallel agents to search GitHub for duplicates of this issue, using diverse keywords and search approaches together with the summary from step 2 (see the `gh` sketch at the end of this command).
4. Next, feed the results from steps 2 and 3 into another agent so it can filter out false positives that are likely not actual duplicates of the original issue. If no duplicates remain, do not proceed.
5. Finally, comment back on the issue with a list of up to three duplicate issues (or zero, if there are no likely duplicates).
Notes (be sure to tell this to your agents, too):
- Use `gh` to interact with GitHub, rather than web fetch
- Do not use other tools beyond `gh` (e.g. don't use other MCP servers, file edit, etc.)
- Make a todo list first
- For your comment, follow the following format precisely (assuming for this example that you found 3 suspected duplicates):
---
Found 3 possible duplicate issues:
1. <link to issue>
2. <link to issue>
3. <link to issue>
This issue will be automatically closed as a duplicate in 3 days.
- If your issue is a duplicate, please close it and 👍 the existing issue instead
- To prevent auto-closure, add a comment or 👎 this comment
🤖 Generated with \[Task Master Bot\]
---
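A minimal sketch of the kind of `gh` calls involved in steps 2, 3, and 5; the repository, issue number, and search query are placeholders, and the script itself is illustrative rather than part of this command.
```typescript
// Sketch only: shells out to the GitHub CLI (`gh`), as the command requires.
import { execFileSync } from 'node:child_process';

const REPO = 'eyaltoledano/claude-task-master'; // assumed repository
const ISSUE = '1234'; // placeholder issue number

function gh(args: string[]): string {
  return execFileSync('gh', args, { encoding: 'utf8' });
}

// Step 2: fetch the issue so it can be summarized.
const issue = JSON.parse(
  gh(['issue', 'view', ISSUE, '--repo', REPO, '--json', 'title,body'])
);

// Step 3: one of several keyword searches for possible duplicates.
const candidates = JSON.parse(
  gh([
    'search', 'issues', issue.title,
    '--repo', REPO, '--state', 'open',
    '--limit', '5', '--json', 'number,title,url'
  ])
);

// Step 5: comment back with the surviving candidates (after filtering).
if (candidates.length > 0) {
  const body =
    `Found ${candidates.length} possible duplicate issues:\n` +
    candidates
      .map((c: { url: string }, i: number) => `${i + 1}. ${c.url}`)
      .join('\n');
  gh(['issue', 'comment', ISSUE, '--repo', REPO, '--body', body]);
}
```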

View File

@@ -1,76 +0,0 @@
Add a subtask to a parent task.
Arguments: $ARGUMENTS
Parse arguments to create a new subtask or convert existing task.
## Adding Subtasks
Creates subtasks to break down complex parent tasks into manageable pieces.
## Argument Parsing
Flexible natural language:
- "add subtask to 5: implement login form"
- "break down 5 with: setup, implement, test"
- "subtask for 5: handle edge cases"
- "5: validate user input" → adds subtask to task 5
## Execution Modes
### 1. Create New Subtask
```bash
task-master add-subtask --parent=<id> --title="<title>" --description="<desc>"
```
### 2. Convert Existing Task
```bash
task-master add-subtask --parent=<id> --task-id=<existing-id>
```
## Smart Features
1. **Automatic Subtask Generation**
- If title contains "and" or commas, create multiple (see the sketch after this list)
- Suggest common subtask patterns
- Inherit parent's context
2. **Intelligent Defaults**
- Priority based on parent
- Appropriate time estimates
- Logical dependencies between subtasks
3. **Validation**
- Check parent task complexity
- Warn if too many subtasks
- Ensure subtask makes sense
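A rough sketch of the title splitting mentioned in Smart Features above; the splitting rules are an illustrative assumption, not the actual implementation.
```typescript
// Illustrative sketch of multi-subtask title splitting.
function splitSubtaskTitles(title: string): string[] {
  // Break on commas and the word "and", then drop empty fragments.
  return title
    .split(/,|\band\b/i)
    .map((part) => part.trim())
    .filter((part) => part.length > 0);
}

// splitSubtaskTitles('setup, implement and test')
//   → ['setup', 'implement', 'test']
```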
## Creation Process
1. Parse parent task context
2. Generate subtask with ID like "5.1"
3. Set appropriate defaults
4. Link to parent task
5. Update parent's time estimate
## Example Flows
```
/project:tm/add-subtask to 5: implement user authentication
→ Created subtask #5.1: "implement user authentication"
→ Parent task #5 now has 1 subtask
→ Suggested next subtasks: tests, documentation
/project:tm/add-subtask 5: setup, implement, test
→ Created 3 subtasks:
#5.1: setup
#5.2: implement
#5.3: test
```
## Post-Creation
- Show updated task hierarchy
- Suggest logical next subtasks
- Update complexity estimates
- Recommend subtask order

View File

@@ -1,81 +0,0 @@
Automatically fix dependency issues found during validation.
## Automatic Dependency Repair
Intelligently fixes common dependency problems while preserving project logic (a minimal sketch of the auto-fixable cases appears after the lists below).
## Execution
```bash
task-master fix-dependencies
```
## What Gets Fixed
### 1. **Auto-Fixable Issues**
- Remove references to deleted tasks
- Break simple circular dependencies
- Remove self-dependencies
- Clean up duplicate dependencies
### 2. **Smart Resolutions**
- Reorder dependencies to maintain logic
- Suggest task merging for over-dependent tasks
- Flatten unnecessary dependency chains
- Remove redundant transitive dependencies
### 3. **Manual Review Required**
- Complex circular dependencies
- Critical path modifications
- Business logic dependencies
- High-impact changes
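A compact sketch of the auto-fixable cases above (dangling references, self-dependencies, duplicates); the task shape and function name are illustrative assumptions, not the command's actual code.
```typescript
// Illustrative sketch of the auto-fixable dependency cleanup.
interface Task {
  id: number;
  dependencies: number[];
}

function autoFixDependencies(tasks: Task[]): Task[] {
  const existingIds = new Set(tasks.map((t) => t.id));
  return tasks.map((task) => ({
    ...task,
    // Drop references to deleted tasks and self-dependencies, then de-duplicate.
    dependencies: Array.from(
      new Set(
        task.dependencies.filter(
          (dep) => existingIds.has(dep) && dep !== task.id
        )
      )
    )
  }));
}
```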
## Fix Process
1. **Analysis Phase**
- Run validation check
- Categorize issues by type
- Determine fix strategy
2. **Execution Phase**
- Apply automatic fixes
- Log all changes made
- Preserve task relationships
3. **Verification Phase**
- Re-validate after fixes
- Show before/after comparison
- Highlight manual fixes needed
## Smart Features
- Preserves intended task flow
- Minimal disruption approach
- Creates fix history/log
- Suggests manual interventions
## Output Example
```
Dependency Auto-Fix Report
━━━━━━━━━━━━━━━━━━━━━━━━
Fixed Automatically:
✅ Removed 2 references to deleted tasks
✅ Resolved 1 self-dependency
✅ Cleaned 3 redundant dependencies
Manual Review Needed:
⚠️ Complex circular dependency: #12 → #15 → #18 → #12
Suggestion: Make #15 not depend on #12
⚠️ Task #45 has 8 dependencies
Suggestion: Break into subtasks
Run '/project:tm/validate-dependencies' to verify fixes
```
## Safety
- Preview mode available
- Rollback capability
- Change logging
- No data loss

View File

@@ -1,81 +0,0 @@
Show help for Task Master commands.
Arguments: $ARGUMENTS
Display help for Task Master commands. If arguments provided, show specific command help.
## Task Master Command Help
### Quick Navigation
Type `/project:tm/` and use tab completion to explore all commands.
### Command Categories
#### 🚀 Setup & Installation
- `/project:tm/setup/install` - Comprehensive installation guide
- `/project:tm/setup/quick-install` - One-line global install
#### 📋 Project Setup
- `/project:tm/init` - Initialize new project
- `/project:tm/init/quick` - Quick setup with auto-confirm
- `/project:tm/models` - View AI configuration
- `/project:tm/models/setup` - Configure AI providers
#### 🎯 Task Generation
- `/project:tm/parse-prd` - Generate tasks from PRD
- `/project:tm/parse-prd/with-research` - Enhanced parsing
- `/project:tm/generate` - Create task files
#### 📝 Task Management
- `/project:tm/list` - List tasks (natural language filters)
- `/project:tm/show <id>` - Display task details
- `/project:tm/add-task` - Create new task
- `/project:tm/update` - Update tasks naturally
- `/project:tm/next` - Get next task recommendation
#### 🔄 Status Management
- `/project:tm/set-status/to-pending <id>`
- `/project:tm/set-status/to-in-progress <id>`
- `/project:tm/set-status/to-done <id>`
- `/project:tm/set-status/to-review <id>`
- `/project:tm/set-status/to-deferred <id>`
- `/project:tm/set-status/to-cancelled <id>`
#### 🔍 Analysis & Breakdown
- `/project:tm/analyze-complexity` - Analyze task complexity
- `/project:tm/expand <id>` - Break down complex task
- `/project:tm/expand/all` - Expand all eligible tasks
#### 🔗 Dependencies
- `/project:tm/add-dependency` - Add task dependency
- `/project:tm/remove-dependency` - Remove dependency
- `/project:tm/validate-dependencies` - Check for issues
#### 🤖 Workflows
- `/project:tm/workflows/smart-flow` - Intelligent workflows
- `/project:tm/workflows/pipeline` - Command chaining
- `/project:tm/workflows/auto-implement` - Auto-implementation
#### 📊 Utilities
- `/project:tm/utils/analyze` - Project analysis
- `/project:tm/status` - Project dashboard
- `/project:tm/learn` - Interactive learning
### Natural Language Examples
```
/project:tm/list pending high priority
/project:tm/update mark all API tasks as done
/project:tm/add-task create login system with OAuth
/project:tm/show current
```
### Getting Started
1. Install: `/project:tm/setup/quick-install`
2. Initialize: `/project:tm/init/quick`
3. Learn: `/project:tm/learn start`
4. Work: `/project:tm/workflows/smart-flow`
For detailed command info: `/project:tm/help <command-name>`

View File

@@ -1,50 +0,0 @@
Initialize a new Task Master project.
Arguments: $ARGUMENTS
Parse arguments to determine initialization preferences.
## Initialization Process
1. **Parse Arguments**
- PRD file path (if provided)
- Project name
- Auto-confirm flag (-y)
2. **Project Setup**
```bash
task-master init
```
3. **Smart Initialization**
- Detect existing project files
- Suggest project name from directory
- Check for git repository
- Verify AI provider configuration
## Configuration Options
Based on arguments:
- `quick` / `-y` → Skip confirmations
- `<file.md>` → Use as PRD after init
- `--name=<name>` → Set project name
- `--description=<desc>` → Set description
## Post-Initialization
After successful init:
1. Show project structure created
2. Verify AI models configured
3. Suggest next steps:
- Parse PRD if available
- Configure AI providers
- Set up git hooks
- Create first tasks
## Integration
If PRD file provided:
```
/project:tm/init my-prd.md
→ Automatically runs parse-prd after init
```

View File

@@ -1,62 +0,0 @@
Remove a dependency between tasks.
Arguments: $ARGUMENTS
Parse the task IDs to remove dependency relationship.
## Removing Dependencies
Removes a dependency relationship, potentially unblocking tasks.
## Argument Parsing
Parse natural language or IDs:
- "remove dependency between 5 and 3"
- "5 no longer needs 3"
- "unblock 5 from 3"
- "5 3" → remove dependency of 5 on 3
## Execution
```bash
task-master remove-dependency --id=<task-id> --depends-on=<dependency-id>
```
## Pre-Removal Checks
1. **Verify dependency exists**
2. **Check impact on task flow**
3. **Warn if it breaks logical sequence**
4. **Show what will be unblocked**
## Smart Analysis
Before removing:
- Show why dependency might have existed
- Check if removal makes tasks executable
- Verify no critical path disruption
- Suggest alternative dependencies
## Post-Removal
After removing:
1. Show updated task status
2. List newly unblocked tasks
3. Update project timeline
4. Suggest next actions
## Safety Features
- Confirm if removing critical dependency
- Show tasks that become immediately actionable
- Warn about potential issues
- Keep removal history
## Example
```
/project:tm/remove-dependency 5 from 3
→ Removed: Task #5 no longer depends on #3
→ Task #5 is now UNBLOCKED and ready to start
→ Warning: Consider if #5 still needs #2 completed first
```

View File

@@ -1,146 +0,0 @@
# Task Master Command Reference
Comprehensive command structure for Task Master integration with Claude Code.
## Command Organization
Commands are organized hierarchically to match Task Master's CLI structure while providing enhanced Claude Code integration.
## Project Setup & Configuration
### `/project:tm/init`
- `init-project` - Initialize new project (handles PRD files intelligently)
- `init-project-quick` - Quick setup with auto-confirmation (-y flag)
### `/project:tm/models`
- `view-models` - View current AI model configuration
- `setup-models` - Interactive model configuration
- `set-main` - Set primary generation model
- `set-research` - Set research model
- `set-fallback` - Set fallback model
## Task Generation
### `/project:tm/parse-prd`
- `parse-prd` - Generate tasks from PRD document
- `parse-prd-with-research` - Enhanced parsing with research mode
### `/project:tm/generate`
- `generate-tasks` - Create individual task files from tasks.json
## Task Management
### `/project:tm/list`
- `list-tasks` - Smart listing with natural language filters
- `list-tasks-with-subtasks` - Include subtasks in hierarchical view
- `list-tasks-by-status` - Filter by specific status
### `/project:tm/set-status`
- `to-pending` - Reset task to pending
- `to-in-progress` - Start working on task
- `to-done` - Mark task complete
- `to-review` - Submit for review
- `to-deferred` - Defer task
- `to-cancelled` - Cancel task
### `/project:tm/sync-readme`
- `sync-readme` - Export tasks to README.md with formatting
### `/project:tm/update`
- `update-task` - Update tasks with natural language
- `update-tasks-from-id` - Update multiple tasks from a starting point
- `update-single-task` - Update specific task
### `/project:tm/add-task`
- `add-task` - Add new task with AI assistance
### `/project:tm/remove-task`
- `remove-task` - Remove task with confirmation
## Subtask Management
### `/project:tm/add-subtask`
- `add-subtask` - Add new subtask to parent
- `convert-task-to-subtask` - Convert existing task to subtask
### `/project:tm/remove-subtask`
- `remove-subtask` - Remove subtask (with optional conversion)
### `/project:tm/clear-subtasks`
- `clear-subtasks` - Clear subtasks from specific task
- `clear-all-subtasks` - Clear all subtasks globally
## Task Analysis & Breakdown
### `/project:tm/analyze-complexity`
- `analyze-complexity` - Analyze and generate expansion recommendations
### `/project:tm/complexity-report`
- `complexity-report` - Display complexity analysis report
### `/project:tm/expand`
- `expand-task` - Break down specific task
- `expand-all-tasks` - Expand all eligible tasks
- `with-research` - Enhanced expansion
## Task Navigation
### `/project:tm/next`
- `next-task` - Intelligent next task recommendation
### `/project:tm/show`
- `show-task` - Display detailed task information
### `/project:tm/status`
- `project-status` - Comprehensive project dashboard
## Dependency Management
### `/project:tm/add-dependency`
- `add-dependency` - Add task dependency
### `/project:tm/remove-dependency`
- `remove-dependency` - Remove task dependency
### `/project:tm/validate-dependencies`
- `validate-dependencies` - Check for dependency issues
### `/project:tm/fix-dependencies`
- `fix-dependencies` - Automatically fix dependency problems
## Workflows & Automation
### `/project:tm/workflows`
- `smart-workflow` - Context-aware intelligent workflow execution
- `command-pipeline` - Chain multiple commands together
- `auto-implement-tasks` - Advanced auto-implementation with code generation
## Utilities
### `/project:tm/utils`
- `analyze-project` - Deep project analysis and insights
### `/project:tm/setup`
- `install-taskmaster` - Comprehensive installation guide
- `quick-install-taskmaster` - One-line global installation
## Usage Patterns
### Natural Language
Most commands accept natural language arguments:
```
/project:tm/add-task create user authentication system
/project:tm/update mark all API tasks as high priority
/project:tm/list show blocked tasks
```
### ID-Based Commands
Commands requiring IDs intelligently parse from $ARGUMENTS:
```
/project:tm/show 45
/project:tm/expand 23
/project:tm/set-status/to-done 67
```
### Smart Defaults
Commands provide intelligent defaults and suggestions based on context.

View File

@@ -1,119 +0,0 @@
Update a single specific task with new information.
Arguments: $ARGUMENTS
Parse task ID and update details.
## Single Task Update
Precisely update one task with AI assistance to maintain consistency.
## Argument Parsing
Natural language updates:
- "5: add caching requirement"
- "update 5 to include error handling"
- "task 5 needs rate limiting"
- "5 change priority to high"
## Execution
```bash
task-master update-task --id=<id> --prompt="<context>"
```
## Update Types
### 1. **Content Updates**
- Enhance description
- Add requirements
- Clarify details
- Update acceptance criteria
### 2. **Metadata Updates**
- Change priority
- Adjust time estimates
- Update complexity
- Modify dependencies
### 3. **Strategic Updates**
- Revise approach
- Change test strategy
- Update implementation notes
- Adjust subtask needs
## AI-Powered Updates
The AI:
1. **Understands Context**
- Reads current task state
- Identifies update intent
- Maintains consistency
- Preserves important info
2. **Applies Changes**
- Updates relevant fields
- Keeps style consistent
- Adds without removing
- Enhances clarity
3. **Validates Results**
- Checks coherence
- Verifies completeness
- Maintains relationships
- Suggests related updates
## Example Updates
```
/project:tm/update/single 5: add rate limiting
→ Updating Task #5: "Implement API endpoints"
Current: Basic CRUD endpoints
Adding: Rate limiting requirements
Updated sections:
✓ Description: Added rate limiting mention
✓ Details: Added specific limits (100/min)
✓ Test Strategy: Added rate limit tests
✓ Complexity: Increased from 5 to 6
✓ Time Estimate: Increased by 2 hours
Suggestion: Also update task #6 (API Gateway) for consistency?
```
## Smart Features
1. **Incremental Updates**
- Adds without overwriting
- Preserves work history
- Tracks what changed
- Shows diff view
2. **Consistency Checks**
- Related task alignment
- Subtask compatibility
- Dependency validity
- Timeline impact
3. **Update History**
- Timestamp changes
- Track who/what updated
- Reason for update
- Previous versions
## Field-Specific Updates
Quick syntax for specific fields:
- "5 priority:high" → Update priority only
- "5 add-time:4h" → Add to time estimate
- "5 status:review" → Change status
- "5 depends:3,4" → Add dependencies
## Post-Update
- Show updated task
- Highlight changes
- Check related tasks
- Update suggestions
- Timeline adjustments

View File

@@ -1,108 +0,0 @@
Update multiple tasks starting from a specific ID.
Arguments: $ARGUMENTS
Parse starting task ID and update context.
## Bulk Task Updates
Update multiple related tasks based on new requirements or context changes.
## Argument Parsing
- "from 5: add security requirements"
- "5 onwards: update API endpoints"
- "starting at 5: change to use new framework"
## Execution
```bash
task-master update --from=<id> --prompt="<context>"
```
## Update Process
### 1. **Task Selection**
Starting from specified ID:
- Include the task itself
- Include all dependent tasks
- Include related subtasks
- Smart boundary detection
### 2. **Context Application**
AI analyzes the update context and:
- Identifies what needs changing
- Maintains consistency
- Preserves completed work
- Updates related information
### 3. **Intelligent Updates**
- Modify descriptions appropriately
- Update test strategies
- Adjust time estimates
- Revise dependencies if needed
## Smart Features
1. **Scope Detection**
- Find natural task groupings
- Identify related features
- Stop at logical boundaries
- Avoid over-updating
2. **Consistency Maintenance**
- Keep naming conventions
- Preserve relationships
- Update cross-references
- Maintain task flow
3. **Change Preview**
```
Bulk Update Preview
━━━━━━━━━━━━━━━━━━
Starting from: Task #5
Tasks to update: 8 tasks + 12 subtasks
Context: "add security requirements"
Changes will include:
- Add security sections to descriptions
- Update test strategies for security
- Add security-related subtasks where needed
- Adjust time estimates (+20% average)
Continue? (y/n)
```
## Example Updates
```
/project:tm/update/from-id 5: change database to PostgreSQL
→ Analyzing impact starting from task #5
→ Found 6 related tasks to update
→ Updates will maintain consistency
→ Preview changes? (y/n)
Applied updates:
✓ Task #5: Updated connection logic references
✓ Task #6: Changed migration approach
✓ Task #7: Updated query syntax notes
✓ Task #8: Revised testing strategy
✓ Task #9: Updated deployment steps
✓ Task #12: Changed backup procedures
```
## Safety Features
- Preview all changes
- Selective confirmation
- Rollback capability
- Change logging
- Validation checks
## Post-Update
- Summary of changes
- Consistency verification
- Suggest review tasks
- Update timeline if needed

View File

@@ -2,7 +2,7 @@
"mcpServers": {
"task-master-ai": {
"command": "node",
"args": ["./mcp-server/server.js"],
"args": ["./dist/mcp-server.js"],
"env": {
"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",

View File

@@ -0,0 +1,803 @@
---
description:
globs:
alwaysApply: true
---
# Test Workflow & Development Process
## **Initial Testing Framework Setup**
Before implementing the TDD workflow, ensure your project has a proper testing framework configured. This section covers setup for different technology stacks.
### **Detecting Project Type & Framework Needs**
**AI Agent Assessment Checklist:**
1. **Language Detection**: Check for `package.json` (Node.js/JavaScript), `requirements.txt` (Python), `Cargo.toml` (Rust), etc.
2. **Existing Tests**: Look for test files (`.test.`, `.spec.`, `_test.`) or test directories
3. **Framework Detection**: Check for existing test runners in dependencies
4. **Project Structure**: Analyze directory structure for testing patterns
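The checklist above can be approximated programmatically; a minimal sketch, assuming a Node.js environment and simple marker-file checks (the function name and return values are illustrative).
```typescript
// Sketch: infer the project's language and testing needs from marker files.
import { existsSync } from 'node:fs';
import { join } from 'node:path';

type ProjectType = 'node' | 'python' | 'rust' | 'go' | 'unknown';

function detectProjectType(root: string): ProjectType {
  if (existsSync(join(root, 'package.json'))) return 'node';
  if (
    existsSync(join(root, 'requirements.txt')) ||
    existsSync(join(root, 'pyproject.toml'))
  )
    return 'python';
  if (existsSync(join(root, 'Cargo.toml'))) return 'rust';
  if (existsSync(join(root, 'go.mod'))) return 'go';
  return 'unknown';
}

// Example: detectProjectType(process.cwd()) → 'node' for a JavaScript repo.
```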
### **JavaScript/Node.js Projects (Jest Setup)**
#### **Prerequisites Check**
```bash
# Verify Node.js project
ls package.json # Should exist
# Check for existing testing setup
ls jest.config.js jest.config.ts # Check for Jest config
grep -E "(jest|vitest|mocha)" package.json # Check for test runners
```
#### **Jest Installation & Configuration**
**Step 1: Install Dependencies**
```bash
# Core Jest dependencies
npm install --save-dev jest
# TypeScript support (if using TypeScript)
npm install --save-dev ts-jest @types/jest
# Additional useful packages
npm install --save-dev supertest @types/supertest # For API testing
npm install --save-dev jest-watch-typeahead # Enhanced watch mode
```
**Step 2: Create Jest Configuration**
Create `jest.config.js` with the following production-ready configuration:
```javascript
/** @type {import('jest').Config} */
module.exports = {
// Use ts-jest preset for TypeScript support
preset: 'ts-jest',
// Test environment
testEnvironment: 'node',
// Roots for test discovery
roots: ['<rootDir>/src', '<rootDir>/tests'],
// Test file patterns
testMatch: ['**/__tests__/**/*.ts', '**/?(*.)+(spec|test).ts'],
// Transform files
transform: {
'^.+\\.ts$': [
'ts-jest',
{
tsconfig: {
target: 'es2020',
module: 'commonjs',
esModuleInterop: true,
allowSyntheticDefaultImports: true,
skipLibCheck: true,
strict: false,
noImplicitAny: false,
},
},
],
'^.+\\.js$': [
'ts-jest',
{
useESM: false,
tsconfig: {
target: 'es2020',
module: 'commonjs',
esModuleInterop: true,
allowSyntheticDefaultImports: true,
allowJs: true,
},
},
],
},
// Module file extensions
moduleFileExtensions: ['ts', 'tsx', 'js', 'jsx', 'json', 'node'],
// Transform ignore patterns - adjust for ES modules
transformIgnorePatterns: ['node_modules/(?!(your-es-module-deps|.*\\.mjs$))'],
// Coverage configuration
collectCoverage: true,
coverageDirectory: 'coverage',
coverageReporters: [
'text', // Console output
'text-summary', // Brief summary
'lcov', // For IDE integration
'html', // Detailed HTML report
],
// Files to collect coverage from
collectCoverageFrom: [
'src/**/*.ts',
'!src/**/*.d.ts',
'!src/**/*.test.ts',
'!src/**/index.ts', // Often just exports
'!src/generated/**', // Generated code
'!src/config/database.ts', // Database config (tested via integration)
],
// Coverage thresholds - TaskMaster standards
coverageThreshold: {
global: {
branches: 70,
functions: 80,
lines: 80,
statements: 80,
},
// Higher standards for critical business logic
'./src/utils/': {
branches: 85,
functions: 90,
lines: 90,
statements: 90,
},
'./src/middleware/': {
branches: 80,
functions: 85,
lines: 85,
statements: 85,
},
},
// Setup files
setupFilesAfterEnv: ['<rootDir>/tests/setup.ts'],
// Global teardown to prevent worker process leaks
globalTeardown: '<rootDir>/tests/teardown.ts',
// Module path mapping (if needed)
moduleNameMapper: {
'^@/(.*)$': '<rootDir>/src/$1',
},
// Clear mocks between tests
clearMocks: true,
// Restore mocks after each test
restoreMocks: true,
// Global test timeout
testTimeout: 10000,
// Projects for different test types
projects: [
// Unit tests - for pure functions only
{
displayName: 'unit',
testMatch: ['<rootDir>/src/**/*.test.ts'],
testPathIgnorePatterns: ['.*\\.integration\\.test\\.ts$', '/tests/'],
preset: 'ts-jest',
testEnvironment: 'node',
collectCoverageFrom: [
'src/**/*.ts',
'!src/**/*.d.ts',
'!src/**/*.test.ts',
'!src/**/*.integration.test.ts',
],
coverageThreshold: {
global: {
branches: 70,
functions: 80,
lines: 80,
statements: 80,
},
},
},
// Integration tests - real database/services
{
displayName: 'integration',
testMatch: [
'<rootDir>/src/**/*.integration.test.ts',
'<rootDir>/tests/integration/**/*.test.ts',
],
preset: 'ts-jest',
testEnvironment: 'node',
setupFilesAfterEnv: ['<rootDir>/tests/setup/integration.ts'],
testTimeout: 10000,
},
// E2E tests - full workflows
{
displayName: 'e2e',
testMatch: ['<rootDir>/tests/e2e/**/*.test.ts'],
preset: 'ts-jest',
testEnvironment: 'node',
setupFilesAfterEnv: ['<rootDir>/tests/setup/e2e.ts'],
testTimeout: 30000,
},
],
// Verbose output for better debugging
verbose: true,
// Run projects sequentially to avoid conflicts
maxWorkers: 1,
// Enable watch mode plugins
watchPlugins: ['jest-watch-typeahead/filename', 'jest-watch-typeahead/testname'],
};
```
**Step 3: Update package.json Scripts**
Add these scripts to your `package.json`:
```json
{
"scripts": {
"test": "jest",
"test:watch": "jest --watch",
"test:coverage": "jest --coverage",
"test:unit": "jest --selectProjects unit",
"test:integration": "jest --selectProjects integration",
"test:e2e": "jest --selectProjects e2e",
"test:ci": "jest --ci --coverage --watchAll=false"
}
}
```
**Step 4: Create Test Setup Files**
Create essential test setup files:
```typescript
// tests/setup.ts - Global setup
import { jest } from '@jest/globals';
// Global test configuration
beforeAll(() => {
// Set test timeout
jest.setTimeout(10000);
});
afterEach(() => {
// Clean up mocks after each test
jest.clearAllMocks();
});
```
```typescript
// tests/setup/integration.ts - Integration test setup
import { PrismaClient } from '@prisma/client';
const prisma = new PrismaClient();
beforeAll(async () => {
// Connect to test database
await prisma.$connect();
});
afterAll(async () => {
// Cleanup and disconnect
await prisma.$disconnect();
});
beforeEach(async () => {
// Clean test data before each test
// Add your cleanup logic here
});
```
```typescript
// tests/teardown.ts - Global teardown
export default async () => {
// Global cleanup after all tests
console.log('Global test teardown complete');
};
```
**Step 5: Create Initial Test Structure**
```bash
# Create test directories
mkdir -p tests/{setup,fixtures,unit,integration,e2e}
mkdir -p tests/unit/src/{utils,services,middleware}
# Create sample test fixtures
mkdir tests/fixtures
```
### **Generic Testing Framework Setup (Any Language)**
#### **Framework Selection Guide**
**Python Projects:**
- **pytest**: Recommended for most Python projects
- **unittest**: Built-in, suitable for simple projects
- **Coverage**: Use `coverage.py` for code coverage
```bash
# Python setup example
pip install pytest pytest-cov
echo "[tool:pytest]" > pytest.ini
echo "testpaths = tests" >> pytest.ini
echo "addopts = --cov=src --cov-report=html --cov-report=term" >> pytest.ini
```
**Go Projects:**
- **Built-in testing**: Use Go's built-in `testing` package
- **Coverage**: Built-in with `go test -cover`
```bash
# Go setup example
go mod init your-project
mkdir -p tests
# Tests are typically *_test.go files alongside source
```
**Rust Projects:**
- **Built-in testing**: Use Rust's built-in test framework
- **cargo-tarpaulin**: For coverage analysis
```bash
# Rust setup example
cargo new your-project
cd your-project
cargo install cargo-tarpaulin # For coverage
```
**Java Projects:**
- **JUnit 5**: Modern testing framework
- **Maven/Gradle**: Build tools with testing integration
```xml
<!-- Maven pom.xml example -->
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter</artifactId>
<version>5.9.2</version>
<scope>test</scope>
</dependency>
```
#### **Universal Testing Principles**
**Coverage Standards (Adapt to Your Language):**
- **Global Minimum**: 70-80% line coverage
- **Critical Code**: 85-90% coverage
- **New Features**: Must meet or exceed standards
- **Legacy Code**: Gradual improvement strategy
**Test Organization:**
- **Unit Tests**: Fast, isolated, no external dependencies
- **Integration Tests**: Test component interactions
- **E2E Tests**: Test complete user workflows
- **Performance Tests**: Load and stress testing (if applicable)
**Naming Conventions:**
- **Test Files**: `*.test.*`, `*_test.*`, or language-specific patterns
- **Test Functions**: Descriptive names (e.g., `should_return_error_for_invalid_input`)
- **Test Directories**: Organized by test type and mirroring source structure
#### **TaskMaster Integration for Any Framework**
**Document Testing Setup in Subtasks:**
```bash
# Update subtask with testing framework setup
task-master update-subtask --id=X.Y --prompt="Testing framework setup:
- Installed [Framework Name] with coverage support
- Configured [Coverage Tool] with thresholds: 80% lines, 70% branches
- Created test directory structure: unit/, integration/, e2e/
- Added test scripts to build configuration
- All setup tests passing"
```
**Testing Framework Verification:**
```bash
# Verify setup works
[test-command] # e.g., npm test, pytest, go test, cargo test
# Check coverage reporting
[coverage-command] # e.g., npm run test:coverage
# Update task with verification
task-master update-subtask --id=X.Y --prompt="Testing framework verified:
- Sample tests running successfully
- Coverage reporting functional
- CI/CD integration ready
- Ready to begin TDD workflow"
```
## **Test-Driven Development (TDD) Integration**
### **Core TDD Cycle with Jest**
```bash
# 1. Start development with watch mode
npm run test:watch
# 2. Write failing test first
# Create test file: src/utils/newFeature.test.ts
# Write test that describes expected behavior
# 3. Implement minimum code to make test pass
# 4. Refactor while keeping tests green
# 5. Add edge cases and error scenarios
```
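To make the red-green loop above concrete, here is a minimal pair in the style of the unit-test patterns later in this rule; `slugify` is a hypothetical utility used only for illustration.
```typescript
// 1. Red: write the failing test first (e.g. src/utils/slugify.test.ts).
import { slugify } from './slugify';

describe('slugify', () => {
  it('should lowercase and hyphenate words', () => {
    expect(slugify('Task Master AI')).toBe('task-master-ai');
  });
});
```
```typescript
// 2. Green: implement the minimum code to pass (src/utils/slugify.ts),
//    then refactor while the test stays green.
export function slugify(input: string): string {
  return input.trim().toLowerCase().replace(/\s+/g, '-');
}
```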
### **TDD Workflow Per Subtask**
```bash
# When starting a new subtask:
task-master set-status --id=4.1 --status=in-progress
# Begin TDD cycle:
npm run test:watch # Keep running during development
# Document TDD progress in subtask:
task-master update-subtask --id=4.1 --prompt="TDD Progress:
- Written 3 failing tests for core functionality
- Implemented basic feature, tests now passing
- Adding edge case tests for error handling"
# Complete subtask with test summary:
task-master update-subtask --id=4.1 --prompt="Implementation complete:
- Feature implemented with 8 unit tests
- Coverage: 95% statements, 88% branches
- All tests passing, TDD cycle complete"
```
## **Testing Commands & Usage**
### **Development Commands**
```bash
# Primary development command - use during coding
npm run test:watch # Watch mode with Jest
npm run test:watch -- --testNamePattern="auth" # Watch specific tests
# Targeted testing during development
npm run test:unit # Run only unit tests
npm run test:unit -- --coverage # Unit tests with coverage
# Integration testing when APIs are ready
npm run test:integration # Run integration tests
npm run test:integration -- --detectOpenHandles # Debug hanging tests
# End-to-end testing for workflows
npm run test:e2e # Run E2E tests
npm run test:e2e -- --timeout=30000 # Extended timeout for E2E
```
### **Quality Assurance Commands**
```bash
# Full test suite with coverage (before commits)
npm run test:coverage # Complete coverage analysis
# All tests (CI/CD pipeline)
npm test # Run all test projects
# Specific test file execution
npm test -- auth.test.ts # Run specific test file
npm test -- --testNamePattern="should handle errors" # Run specific tests
```
## **Test Implementation Patterns**
### **Unit Test Development**
```typescript
// ✅ DO: Follow established patterns from auth.test.ts
describe('FeatureName', () => {
beforeEach(() => {
jest.clearAllMocks();
// Setup mocks with proper typing
});
describe('functionName', () => {
it('should handle normal case', () => {
// Test implementation with specific assertions
});
it('should throw error for invalid input', async () => {
// Error scenario testing
await expect(functionName(invalidInput))
.rejects.toThrow('Specific error message');
});
});
});
```
### **Integration Test Development**
```typescript
// ✅ DO: Use supertest for API endpoint testing
import request from 'supertest';
import { app } from '../../src/app';
describe('POST /api/auth/register', () => {
beforeEach(async () => {
await integrationTestUtils.cleanupTestData();
});
it('should register user successfully', async () => {
const userData = createTestUser();
const response = await request(app)
.post('/api/auth/register')
.send(userData)
.expect(201);
expect(response.body).toMatchObject({
id: expect.any(String),
email: userData.email
});
// Verify database state
const user = await prisma.user.findUnique({
where: { email: userData.email }
});
expect(user).toBeTruthy();
});
});
```
### **E2E Test Development**
```typescript
// ✅ DO: Test complete user workflows
describe('User Authentication Flow', () => {
it('should complete registration → login → protected access', async () => {
// Step 1: Register
const userData = createTestUser();
await request(app)
.post('/api/auth/register')
.send(userData)
.expect(201);
// Step 2: Login
const loginResponse = await request(app)
.post('/api/auth/login')
.send({ email: userData.email, password: userData.password })
.expect(200);
const { token } = loginResponse.body;
// Step 3: Access protected resource
await request(app)
.get('/api/profile')
.set('Authorization', `Bearer ${token}`)
.expect(200);
}, 30000); // Extended timeout for E2E
});
```
## **Mocking & Test Utilities**
### **Established Mocking Patterns**
```typescript
// ✅ DO: Use established bcrypt mocking pattern
jest.mock('bcrypt');
import bcrypt from 'bcrypt';
const mockHash = bcrypt.hash as jest.MockedFunction<typeof bcrypt.hash>;
const mockCompare = bcrypt.compare as jest.MockedFunction<typeof bcrypt.compare>;
// ✅ DO: Use Prisma mocking for unit tests
jest.mock('@prisma/client', () => ({
PrismaClient: jest.fn().mockImplementation(() => ({
user: {
create: jest.fn(),
findUnique: jest.fn(),
},
$connect: jest.fn(),
$disconnect: jest.fn(),
})),
}));
```
### **Test Fixtures Usage**
```typescript
// ✅ DO: Use centralized test fixtures
import { createTestUser, adminUser, invalidUser } from '../fixtures/users';
describe('User Service', () => {
it('should handle admin user creation', async () => {
const userData = createTestUser(adminUser);
// Test implementation
});
it('should reject invalid user data', async () => {
const userData = createTestUser(invalidUser);
// Error testing
});
});
```
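The fixtures module imported above is not shown in this rule; a minimal sketch of what `tests/fixtures/users.ts` could look like, assuming `createTestUser` merges overrides into a base user (shape inferred from the usage above).
```typescript
// Hypothetical tests/fixtures/users.ts; shape assumed from the usage above.
export interface TestUser {
  email: string;
  password: string;
  role: 'user' | 'admin';
}

const baseUser: TestUser = {
  email: 'test@example.com',
  password: 'Str0ng-Passw0rd!',
  role: 'user'
};

export const adminUser: Partial<TestUser> = { role: 'admin' };
export const invalidUser: Partial<TestUser> = { email: 'not-an-email', password: '' };

export function createTestUser(overrides: Partial<TestUser> = {}): TestUser {
  // Unique email per call so parallel tests don't collide.
  return { ...baseUser, email: `user-${Date.now()}@example.com`, ...overrides };
}
```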
## **Coverage Standards & Monitoring**
### **Coverage Thresholds**
- **Global Standards**: 80% lines/functions, 70% branches
- **Critical Code**: 90% utils, 85% middleware
- **New Features**: Must meet or exceed global thresholds
- **Legacy Code**: Gradual improvement with each change
### **Coverage Reporting & Analysis**
```bash
# Generate coverage reports
npm run test:coverage
# View detailed HTML report
open coverage/lcov-report/index.html
# Coverage files generated:
# - coverage/lcov-report/index.html # Detailed HTML report
# - coverage/lcov.info # LCOV format for IDE integration
# - coverage/coverage-final.json # JSON format for tooling
```
### **Coverage Quality Checks**
```typescript
// ✅ DO: Test all code paths
describe('validateInput', () => {
it('should return true for valid input', () => {
expect(validateInput('valid')).toBe(true);
});
it('should return false for various invalid inputs', () => {
expect(validateInput('')).toBe(false); // Empty string
expect(validateInput(null)).toBe(false); // Null value
expect(validateInput(undefined)).toBe(false); // Undefined
});
it('should throw for unexpected input types', () => {
expect(() => validateInput(123)).toThrow('Invalid input type');
});
});
```
## **Testing During Development Phases**
### **Feature Development Phase**
```bash
# 1. Start feature development
task-master set-status --id=X.Y --status=in-progress
# 2. Begin TDD cycle
npm run test:watch
# 3. Document test progress in subtask
task-master update-subtask --id=X.Y --prompt="Test development:
- Created test file with 5 failing tests
- Implemented core functionality
- Tests passing, adding error scenarios"
# 4. Verify coverage before completion
npm run test:coverage
# 5. Update subtask with final test status
task-master update-subtask --id=X.Y --prompt="Testing complete:
- 12 unit tests with full coverage
- All edge cases and error scenarios covered
- Ready for integration testing"
```
### **Integration Testing Phase**
```bash
# After API endpoints are implemented
npm run test:integration
# Update integration test templates
# Replace placeholder tests with real endpoint calls
# Document integration test results
task-master update-subtask --id=X.Y --prompt="Integration tests:
- Updated auth endpoint tests
- Database integration verified
- All HTTP status codes and responses tested"
```
### **Pre-Commit Testing Phase**
```bash
# Before committing code
npm run test:coverage # Verify all tests pass with coverage
npm run test:unit # Quick unit test verification
npm run test:integration # Integration test verification (if applicable)
# Commit pattern for test updates
git add tests/ src/**/*.test.ts
git commit -m "test(task-X): Add comprehensive tests for Feature Y
- Unit tests with 95% coverage (exceeds 90% threshold)
- Integration tests for API endpoints
- Test fixtures for data generation
- Proper mocking patterns established
Task X: Feature Y - Testing complete"
```
## **Error Handling & Debugging**
### **Test Debugging Techniques**
```typescript
// ✅ DO: Use test utilities for debugging
import { testUtils } from '../setup';
it('should debug complex operation', () => {
testUtils.withConsole(() => {
// Console output visible only for this test
console.log('Debug info:', complexData);
service.complexOperation();
});
});
// ✅ DO: Use proper async debugging
it('should handle async operations', async () => {
const promise = service.asyncOperation();
// Test intermediate state
expect(service.isProcessing()).toBe(true);
const result = await promise;
expect(result).toBe('expected');
expect(service.isProcessing()).toBe(false);
});
```
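`testUtils.withConsole` lives in the shared setup file and is not shown here; one possible sketch, assuming the setup file itself is what silences console output and can therefore keep a handle on the real implementation.
```typescript
// Hypothetical addition to tests/setup.ts; illustrative only.
import { jest } from '@jest/globals';

const realLog = console.log.bind(console); // captured before mocking below

beforeAll(() => {
  // Keep test output quiet by default.
  jest.spyOn(console, 'log').mockImplementation(() => {});
});

export const testUtils = {
  // Temporarily route console.log to the real implementation for one callback.
  withConsole(fn: () => void): void {
    const mocked = console.log;
    console.log = realLog;
    try {
      fn();
    } finally {
      console.log = mocked; // re-silence for the rest of the suite
    }
  }
};
```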
### **Common Test Issues & Solutions**
```bash
# Hanging tests (common with database connections)
npm run test:integration -- --detectOpenHandles
# Memory leaks in tests
npm run test:unit -- --logHeapUsage
# Slow tests identification
npm run test:coverage -- --verbose
# Mock not working properly
# Check: mock is declared before imports
# Check: jest.clearAllMocks() in beforeEach
# Check: TypeScript typing is correct
```
## **Continuous Integration Integration**
### **CI/CD Pipeline Testing**
```yaml
# Example GitHub Actions integration
- name: Run tests
run: |
npm ci
npm run test:coverage
- name: Upload coverage reports
uses: codecov/codecov-action@v3
with:
file: ./coverage/lcov.info
```
### **Pre-commit Hooks**
```bash
# Setup pre-commit testing (recommended)
# In package.json scripts:
"pre-commit": "npm run test:unit && npm run test:integration"
# Husky integration example:
npx husky add .husky/pre-commit "npm run test:unit"
```
## **Test Maintenance & Evolution**
### **Adding Tests for New Features**
1. **Create test file** alongside source code or in `tests/unit/`
2. **Follow established patterns** from `src/utils/auth.test.ts`
3. **Use existing fixtures** from `tests/fixtures/`
4. **Apply proper mocking** patterns for dependencies
5. **Meet coverage thresholds** for the module
### **Updating Integration/E2E Tests**
1. **Update templates** in `tests/integration/` when APIs change
2. **Modify E2E workflows** in `tests/e2e/` for new user journeys
3. **Update test fixtures** for new data requirements
4. **Maintain database cleanup** utilities
### **Test Performance Optimization**
- **Parallel execution**: Jest runs tests in parallel by default
- **Test isolation**: Use proper setup/teardown for independence
- **Mock optimization**: Mock heavy dependencies appropriately
- **Database efficiency**: Use transaction rollbacks where possible
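For the last bullet, a hedged sketch of transaction-rollback isolation with Prisma: run each test's writes inside an interactive transaction and throw a marker error at the end so nothing persists. The `withRollback` helper and `Rollback` class are illustrative assumptions, not part of the codebase.
```typescript
// Sketch: per-test isolation by rolling back a Prisma interactive transaction.
import { PrismaClient, Prisma } from '@prisma/client';

const prisma = new PrismaClient();
class Rollback extends Error {}

async function withRollback(
  work: (tx: Prisma.TransactionClient) => Promise<void>
): Promise<void> {
  try {
    await prisma.$transaction(async (tx) => {
      await work(tx);
      // Throwing makes Prisma roll the whole transaction back,
      // so test writes never persist.
      throw new Rollback();
    });
  } catch (err) {
    if (!(err instanceof Rollback)) throw err; // real failures still surface
  }
}

// Usage inside an integration test (fixture call assumed from earlier examples):
// await withRollback(async (tx) => {
//   await tx.user.create({ data: createTestUser() });
//   // ...assertions...
// });
```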
---
**Key References:**
- [Testing Standards](mdc:.cursor/rules/tests.mdc)
- [Git Workflow](mdc:.cursor/rules/git_workflow.mdc)
- [Development Workflow](mdc:.cursor/rules/dev_workflow.mdc)
- [Jest Configuration](mdc:jest.config.js)

.github/PULL_REQUEST_TEMPLATE.md (new file, 45 lines)
View File

@@ -0,0 +1,45 @@
# What type of PR is this?
<!-- Check one -->
- [ ] 🐛 Bug fix
- [ ] ✨ Feature
- [ ] 🔌 Integration
- [ ] 📝 Docs
- [ ] 🧹 Refactor
- [ ] Other:
## Description
<!-- What does this PR do? -->
## Related Issues
<!-- Link issues: Fixes #123 -->
## How to Test This
<!-- Quick steps to verify the changes work -->
```bash
# Example commands or steps
```
**Expected result:**
<!-- What should happen? -->
## Contributor Checklist
- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check` (or `npm run format` to fix)
- [ ] Addressed CodeRabbit comments (if any)
- [ ] Linked related issues (if any)
- [ ] Manually tested the changes
## Changelog Entry
<!-- One line describing the change for users -->
<!-- Example: "Added Kiro IDE integration with automatic task status updates" -->
---
### For Maintainers
- [ ] PR title follows conventional commits
- [ ] Target branch correct
- [ ] Labels added
- [ ] Milestone assigned (if applicable)

.github/PULL_REQUEST_TEMPLATE/bugfix.md (new file, 39 lines)
View File

@@ -0,0 +1,39 @@
## 🐛 Bug Fix
### 🔍 Bug Description
<!-- Describe the bug -->
### 🔗 Related Issues
<!-- Fixes #123 -->
### ✨ Solution
<!-- How does this PR fix the bug? -->
## How to Test
### Steps that caused the bug:
1.
2.
**Before fix:**
**After fix:**
### Quick verification:
```bash
# Commands to verify the fix
```
## Contributor Checklist
- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check`
- [ ] Addressed CodeRabbit comments
- [ ] Added unit tests (if applicable)
- [ ] Manually verified the fix works
---
### For Maintainers
- [ ] Root cause identified
- [ ] Fix doesn't introduce new issues
- [ ] CI passes

View File

@@ -0,0 +1,11 @@
blank_issues_enabled: false
contact_links:
- name: 🐛 Bug Fix
url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=bugfix.md
about: Fix a bug in Task Master
- name: ✨ New Feature
url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=feature.md
about: Add a new feature to Task Master
- name: 🔌 New Integration
url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=integration.md
about: Add support for a new tool, IDE, or platform

View File

@@ -0,0 +1,49 @@
## ✨ New Feature
### 📋 Feature Description
<!-- Brief description -->
### 🎯 Problem Statement
<!-- What problem does this feature solve? Why is it needed? -->
### 💡 Solution
<!-- How does this feature solve the problem? What's the approach? -->
### 🔗 Related Issues
<!-- Link related issues: Fixes #123, Part of #456 -->
## How to Use It
### Quick Start
```bash
# Basic usage example
```
### Example
<!-- Show a real use case -->
```bash
# Practical example
```
**What you should see:**
<!-- Expected behavior -->
## Contributor Checklist
- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check`
- [ ] Addressed CodeRabbit comments
- [ ] Added tests for new functionality
- [ ] Manually tested in CLI mode
- [ ] Manually tested in MCP mode (if applicable)
## Changelog Entry
<!-- One-liner for release notes -->
---
### For Maintainers
- [ ] Feature aligns with project vision
- [ ] CIs pass
- [ ] Changeset file exists

View File

@@ -0,0 +1,53 @@
# 🔌 New Integration
## What tool/IDE is being integrated?
<!-- Name and brief description -->
## What can users do with it?
<!-- Key benefits -->
## How to Enable
### Setup
```bash
task-master rules add [name]
# Any other setup steps
```
### Example Usage
<!-- Show it in action -->
```bash
# Real example
```
### Natural Language Hooks (if applicable)
```
"When tests pass, mark task as done"
# Other examples
```
## Contributor Checklist
- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check`
- [ ] Addressed CodeRabbit comments
- [ ] Integration fully tested with target tool/IDE
- [ ] Error scenarios tested
- [ ] Added integration tests
- [ ] Documentation includes setup guide
- [ ] Examples are working and clear
---
## For Maintainers
- [ ] Integration stability verified
- [ ] Documentation comprehensive
- [ ] Examples working

View File

@@ -0,0 +1,259 @@
#!/usr/bin/env node
async function githubRequest(endpoint, token, method = 'GET', body) {
const response = await fetch(`https://api.github.com${endpoint}`, {
method,
headers: {
Authorization: `Bearer ${token}`,
Accept: 'application/vnd.github.v3+json',
'User-Agent': 'auto-close-duplicates-script',
...(body && { 'Content-Type': 'application/json' })
},
...(body && { body: JSON.stringify(body) })
});
if (!response.ok) {
throw new Error(
`GitHub API request failed: ${response.status} ${response.statusText}`
);
}
return response.json();
}
function extractDuplicateIssueNumber(commentBody) {
const match = commentBody.match(/#(\d+)/);
return match ? parseInt(match[1], 10) : null;
}
async function closeIssueAsDuplicate(
owner,
repo,
issueNumber,
duplicateOfNumber,
token
) {
await githubRequest(
`/repos/${owner}/${repo}/issues/${issueNumber}`,
token,
'PATCH',
{
state: 'closed',
state_reason: 'not_planned',
labels: ['duplicate']
}
);
await githubRequest(
`/repos/${owner}/${repo}/issues/${issueNumber}/comments`,
token,
'POST',
{
body: `This issue has been automatically closed as a duplicate of #${duplicateOfNumber}.
If this is incorrect, please re-open this issue or create a new one.
🤖 Generated with [Task Master Bot]`
}
);
}
async function autoCloseDuplicates() {
console.log('[DEBUG] Starting auto-close duplicates script');
const token = process.env.GITHUB_TOKEN;
if (!token) {
throw new Error('GITHUB_TOKEN environment variable is required');
}
console.log('[DEBUG] GitHub token found');
const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
console.log(`[DEBUG] Repository: ${owner}/${repo}`);
const threeDaysAgo = new Date();
threeDaysAgo.setDate(threeDaysAgo.getDate() - 3);
console.log(
`[DEBUG] Checking for duplicate comments older than: ${threeDaysAgo.toISOString()}`
);
console.log('[DEBUG] Fetching open issues created more than 3 days ago...');
const allIssues = [];
let page = 1;
const perPage = 100;
const MAX_PAGES = 50; // Increase limit for larger repos
while (true) {
const pageIssues = await githubRequest(
`/repos/${owner}/${repo}/issues?state=open&per_page=${perPage}&page=${page}&sort=created&direction=desc`,
token
);
if (pageIssues.length === 0) break;
// Filter for issues created more than 3 days ago
const oldEnoughIssues = pageIssues.filter(
(issue) => new Date(issue.created_at) <= threeDaysAgo
);
allIssues.push(...oldEnoughIssues);
// If every issue on the first page is newer than 3 days, stop early
if (oldEnoughIssues.length === 0 && page === 1) {
break;
}
// If we found some old issues but not all, continue to next page
// as there might be more old issues
page++;
// Safety limit to avoid infinite loops
if (page > MAX_PAGES) {
console.log(`[WARNING] Reached maximum page limit of ${MAX_PAGES}`);
break;
}
}
const issues = allIssues;
console.log(`[DEBUG] Found ${issues.length} open issues`);
let processedCount = 0;
let candidateCount = 0;
for (const issue of issues) {
processedCount++;
console.log(
`[DEBUG] Processing issue #${issue.number} (${processedCount}/${issues.length}): ${issue.title}`
);
console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
const comments = await githubRequest(
`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
token
);
console.log(
`[DEBUG] Issue #${issue.number} has ${comments.length} comments`
);
const dupeComments = comments.filter(
(comment) =>
comment.body.includes('Found') &&
comment.body.includes('possible duplicate') &&
comment.user.type === 'Bot'
);
console.log(
`[DEBUG] Issue #${issue.number} has ${dupeComments.length} duplicate detection comments`
);
if (dupeComments.length === 0) {
console.log(
`[DEBUG] Issue #${issue.number} - no duplicate comments found, skipping`
);
continue;
}
const lastDupeComment = dupeComments[dupeComments.length - 1];
const dupeCommentDate = new Date(lastDupeComment.created_at);
console.log(
`[DEBUG] Issue #${
issue.number
} - most recent duplicate comment from: ${dupeCommentDate.toISOString()}`
);
if (dupeCommentDate > threeDaysAgo) {
console.log(
`[DEBUG] Issue #${issue.number} - duplicate comment is too recent, skipping`
);
continue;
}
console.log(
`[DEBUG] Issue #${
issue.number
} - duplicate comment is old enough (${Math.floor(
(Date.now() - dupeCommentDate.getTime()) / (1000 * 60 * 60 * 24)
)} days)`
);
const commentsAfterDupe = comments.filter(
(comment) => new Date(comment.created_at) > dupeCommentDate
);
console.log(
`[DEBUG] Issue #${issue.number} - ${commentsAfterDupe.length} comments after duplicate detection`
);
if (commentsAfterDupe.length > 0) {
console.log(
`[DEBUG] Issue #${issue.number} - has activity after duplicate comment, skipping`
);
continue;
}
console.log(
`[DEBUG] Issue #${issue.number} - checking reactions on duplicate comment...`
);
const reactions = await githubRequest(
`/repos/${owner}/${repo}/issues/comments/${lastDupeComment.id}/reactions`,
token
);
console.log(
`[DEBUG] Issue #${issue.number} - duplicate comment has ${reactions.length} reactions`
);
const authorThumbsDown = reactions.some(
(reaction) =>
reaction.user.id === issue.user.id && reaction.content === '-1'
);
console.log(
`[DEBUG] Issue #${issue.number} - author thumbs down reaction: ${authorThumbsDown}`
);
if (authorThumbsDown) {
console.log(
`[DEBUG] Issue #${issue.number} - author disagreed with duplicate detection, skipping`
);
continue;
}
const duplicateIssueNumber = extractDuplicateIssueNumber(
lastDupeComment.body
);
if (!duplicateIssueNumber) {
console.log(
`[DEBUG] Issue #${issue.number} - could not extract duplicate issue number from comment, skipping`
);
continue;
}
candidateCount++;
const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;
try {
console.log(
`[INFO] Auto-closing issue #${issue.number} as duplicate of #${duplicateIssueNumber}: ${issueUrl}`
);
await closeIssueAsDuplicate(
owner,
repo,
issue.number,
duplicateIssueNumber,
token
);
console.log(
`[SUCCESS] Successfully closed issue #${issue.number} as duplicate of #${duplicateIssueNumber}`
);
} catch (error) {
console.error(
`[ERROR] Failed to close issue #${issue.number} as duplicate: ${error}`
);
}
}
console.log(
`[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates for auto-close`
);
}
autoCloseDuplicates().catch(console.error);

View File

@@ -0,0 +1,178 @@
#!/usr/bin/env node
async function githubRequest(endpoint, token, method = 'GET', body) {
const response = await fetch(`https://api.github.com${endpoint}`, {
method,
headers: {
Authorization: `Bearer ${token}`,
Accept: 'application/vnd.github.v3+json',
'User-Agent': 'backfill-duplicate-comments-script',
...(body && { 'Content-Type': 'application/json' })
},
...(body && { body: JSON.stringify(body) })
});
if (!response.ok) {
throw new Error(
`GitHub API request failed: ${response.status} ${response.statusText}`
);
}
return response.json();
}
async function triggerDedupeWorkflow(
owner,
repo,
issueNumber,
token,
dryRun = true
) {
if (dryRun) {
console.log(
`[DRY RUN] Would trigger dedupe workflow for issue #${issueNumber}`
);
return;
}
await githubRequest(
`/repos/${owner}/${repo}/actions/workflows/claude-dedupe-issues.yml/dispatches`,
token,
'POST',
{
ref: 'main',
inputs: {
issue_number: issueNumber.toString()
}
}
);
}
async function backfillDuplicateComments() {
console.log('[DEBUG] Starting backfill duplicate comments script');
const token = process.env.GITHUB_TOKEN;
if (!token) {
throw new Error(`GITHUB_TOKEN environment variable is required
Usage:
node .github/scripts/backfill-duplicate-comments.mjs
Environment Variables:
GITHUB_TOKEN - GitHub personal access token with repo and actions permissions (required)
DRY_RUN - Set to "false" to actually trigger workflows (default: true for safety)
DAYS_BACK - How many days back to look for old issues (default: 90)`);
}
console.log('[DEBUG] GitHub token found');
const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
const dryRun = process.env.DRY_RUN !== 'false';
const daysBack = parseInt(process.env.DAYS_BACK || '90', 10);
console.log(`[DEBUG] Repository: ${owner}/${repo}`);
console.log(`[DEBUG] Dry run mode: ${dryRun}`);
console.log(`[DEBUG] Looking back ${daysBack} days`);
const cutoffDate = new Date();
cutoffDate.setDate(cutoffDate.getDate() - daysBack);
console.log(
`[DEBUG] Fetching issues created since ${cutoffDate.toISOString()}...`
);
const allIssues = [];
let page = 1;
const perPage = 100;
while (true) {
const pageIssues = await githubRequest(
`/repos/${owner}/${repo}/issues?state=all&per_page=${perPage}&page=${page}&since=${cutoffDate.toISOString()}`,
token
);
if (pageIssues.length === 0) break;
allIssues.push(...pageIssues);
page++;
// Safety limit to avoid infinite loops
if (page > 100) {
console.log('[DEBUG] Reached page limit, stopping pagination');
break;
}
}
console.log(
`[DEBUG] Found ${allIssues.length} issues from the last ${daysBack} days`
);
let processedCount = 0;
let candidateCount = 0;
let triggeredCount = 0;
for (const issue of allIssues) {
processedCount++;
console.log(
`[DEBUG] Processing issue #${issue.number} (${processedCount}/${allIssues.length}): ${issue.title}`
);
console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
const comments = await githubRequest(
`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
token
);
console.log(
`[DEBUG] Issue #${issue.number} has ${comments.length} comments`
);
// Look for existing duplicate detection comments (from the dedupe bot)
const dupeDetectionComments = comments.filter(
(comment) =>
comment.body.includes('Found') &&
comment.body.includes('possible duplicate') &&
comment.user.type === 'Bot'
);
console.log(
`[DEBUG] Issue #${issue.number} has ${dupeDetectionComments.length} duplicate detection comments`
);
// Skip if there's already a duplicate detection comment
if (dupeDetectionComments.length > 0) {
console.log(
`[DEBUG] Issue #${issue.number} already has duplicate detection comment, skipping`
);
continue;
}
candidateCount++;
const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;
try {
console.log(
`[INFO] ${dryRun ? '[DRY RUN] ' : ''}Triggering dedupe workflow for issue #${issue.number}: ${issueUrl}`
);
await triggerDedupeWorkflow(owner, repo, issue.number, token, dryRun);
if (!dryRun) {
console.log(
`[SUCCESS] Successfully triggered dedupe workflow for issue #${issue.number}`
);
}
triggeredCount++;
} catch (error) {
console.error(
`[ERROR] Failed to trigger workflow for issue #${issue.number}: ${error}`
);
}
// Add a delay between workflow triggers to avoid overwhelming the system
await new Promise((resolve) => setTimeout(resolve, 1000));
}
console.log(
`[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates without duplicate comments, ${dryRun ? 'would trigger' : 'triggered'} ${triggeredCount} workflows`
);
}
backfillDuplicateComments().catch(console.error);

102
.github/scripts/check-pre-release-mode.mjs vendored Executable file
View File

@@ -0,0 +1,102 @@
#!/usr/bin/env node
import { readFileSync, existsSync } from 'node:fs';
import { join, dirname, resolve } from 'node:path';
import { fileURLToPath } from 'node:url';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
// Get context from command line argument or environment
const context = process.argv[2] || process.env.GITHUB_WORKFLOW || 'manual';
function findRootDir(startDir) {
let currentDir = resolve(startDir);
while (currentDir !== '/') {
if (existsSync(join(currentDir, 'package.json'))) {
try {
const pkg = JSON.parse(
readFileSync(join(currentDir, 'package.json'), 'utf8')
);
if (pkg.name === 'task-master-ai' || pkg.repository) {
return currentDir;
}
} catch {}
}
currentDir = dirname(currentDir);
}
throw new Error('Could not find root directory');
}
function checkPreReleaseMode() {
console.log('🔍 Checking if branch is in pre-release mode...');
const rootDir = findRootDir(__dirname);
const preJsonPath = join(rootDir, '.changeset', 'pre.json');
// Check if pre.json exists
if (!existsSync(preJsonPath)) {
console.log('✅ Not in active pre-release mode - safe to proceed');
process.exit(0);
}
try {
// Read and parse pre.json
const preJsonContent = readFileSync(preJsonPath, 'utf8');
const preJson = JSON.parse(preJsonContent);
// Check if we're in active pre-release mode
if (preJson.mode === 'pre') {
console.error('❌ ERROR: This branch is in active pre-release mode!');
console.error('');
// Provide context-specific error messages
if (context === 'Release Check' || context === 'pull_request') {
console.error(
'Pre-release mode must be exited before merging to main.'
);
console.error('');
console.error(
'To fix this, run the following commands in your branch:'
);
console.error(' npx changeset pre exit');
console.error(' git add -u');
console.error(' git commit -m "chore: exit pre-release mode"');
console.error(' git push');
console.error('');
console.error('Then update this pull request.');
} else if (context === 'Release' || context === 'main') {
console.error(
'Pre-release mode should only be used on feature branches, not main.'
);
console.error('');
console.error('To fix this, run the following commands locally:');
console.error(' npx changeset pre exit');
console.error(' git add -u');
console.error(' git commit -m "chore: exit pre-release mode"');
console.error(' git push origin main');
console.error('');
console.error('Then re-run this workflow.');
} else {
console.error('Pre-release mode must be exited before proceeding.');
console.error('');
console.error('To fix this, run the following commands:');
console.error(' npx changeset pre exit');
console.error(' git add -u');
console.error(' git commit -m "chore: exit pre-release mode"');
console.error(' git push');
}
process.exit(1);
}
console.log('✅ Not in active pre-release mode - safe to proceed');
process.exit(0);
} catch (error) {
console.error(`❌ ERROR: Unable to parse .changeset/pre.json, aborting.`);
console.error(`Error details: ${error.message}`);
process.exit(1);
}
}
// Run the check
checkPreReleaseMode();

157
.github/scripts/parse-metrics.mjs vendored Normal file
View File

@@ -0,0 +1,157 @@
#!/usr/bin/env node
import { readFileSync, existsSync, writeFileSync } from 'fs';
function parseMetricsTable(content, metricName) {
const lines = content.split('\n');
for (let i = 0; i < lines.length; i++) {
const line = lines[i].trim();
// Match a markdown table row like: | Metric Name | value | ...
const safeName = metricName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
const re = new RegExp(`^\\|\\s*${safeName}\\s*\\|\\s*([^|]+)\\|?`);
const match = line.match(re);
if (match) {
return match[1].trim() || 'N/A';
}
}
return 'N/A';
}
function parseCountMetric(content, metricName) {
const result = parseMetricsTable(content, metricName);
// Extract number from string, handling commas and spaces
const numberMatch = result.toString().match(/[\d,]+/);
if (numberMatch) {
const number = parseInt(numberMatch[0].replace(/,/g, ''), 10);
return isNaN(number) ? 0 : number;
}
return 0;
}
function main() {
const metrics = {
issues_created: 0,
issues_closed: 0,
prs_created: 0,
prs_merged: 0,
issue_avg_first_response: 'N/A',
issue_avg_time_to_close: 'N/A',
pr_avg_first_response: 'N/A',
pr_avg_merge_time: 'N/A'
};
// Parse issue metrics
if (existsSync('issue_metrics.md')) {
console.log('📄 Found issue_metrics.md, parsing...');
const issueContent = readFileSync('issue_metrics.md', 'utf8');
metrics.issues_created = parseCountMetric(
issueContent,
'Total number of items created'
);
metrics.issues_closed = parseCountMetric(
issueContent,
'Number of items closed'
);
metrics.issue_avg_first_response = parseMetricsTable(
issueContent,
'Time to first response'
);
metrics.issue_avg_time_to_close = parseMetricsTable(
issueContent,
'Time to close'
);
} else {
console.warn('[parse-metrics] issue_metrics.md not found; using defaults.');
}
// Parse PR created metrics
if (existsSync('pr_created_metrics.md')) {
console.log('📄 Found pr_created_metrics.md, parsing...');
const prCreatedContent = readFileSync('pr_created_metrics.md', 'utf8');
metrics.prs_created = parseCountMetric(
prCreatedContent,
'Total number of items created'
);
metrics.pr_avg_first_response = parseMetricsTable(
prCreatedContent,
'Time to first response'
);
} else {
console.warn(
'[parse-metrics] pr_created_metrics.md not found; using defaults.'
);
}
// Parse PR merged metrics (for more accurate merge data)
if (existsSync('pr_merged_metrics.md')) {
console.log('📄 Found pr_merged_metrics.md, parsing...');
const prMergedContent = readFileSync('pr_merged_metrics.md', 'utf8');
metrics.prs_merged = parseCountMetric(
prMergedContent,
'Total number of items created'
);
// For merged PRs, "Time to close" is actually time to merge
metrics.pr_avg_merge_time = parseMetricsTable(
prMergedContent,
'Time to close'
);
} else {
console.warn(
'[parse-metrics] pr_merged_metrics.md not found; falling back to pr_metrics.md.'
);
// Fallback: try old pr_metrics.md if it exists
if (existsSync('pr_metrics.md')) {
console.log('📄 Falling back to pr_metrics.md...');
const prContent = readFileSync('pr_metrics.md', 'utf8');
const mergedCount = parseCountMetric(prContent, 'Number of items merged');
metrics.prs_merged =
mergedCount || parseCountMetric(prContent, 'Number of items closed');
const maybeMergeTime = parseMetricsTable(
prContent,
'Average time to merge'
);
metrics.pr_avg_merge_time =
maybeMergeTime !== 'N/A'
? maybeMergeTime
: parseMetricsTable(prContent, 'Time to close');
} else {
console.warn('[parse-metrics] pr_metrics.md not found; using defaults.');
}
}
// Output for GitHub Actions
const output = Object.entries(metrics)
.map(([key, value]) => `${key}=${value}`)
.join('\n');
// Always output to stdout for debugging
console.log('\n=== FINAL METRICS ===');
Object.entries(metrics).forEach(([key, value]) => {
console.log(`${key}: ${value}`);
});
// Write to GITHUB_OUTPUT if in GitHub Actions
if (process.env.GITHUB_OUTPUT) {
try {
writeFileSync(process.env.GITHUB_OUTPUT, output + '\n', { flag: 'a' });
console.log(
`\nSuccessfully wrote metrics to ${process.env.GITHUB_OUTPUT}`
);
} catch (error) {
console.error(`Failed to write to GITHUB_OUTPUT: ${error.message}`);
process.exit(1);
}
} else {
console.log(
'\nNo GITHUB_OUTPUT environment variable found, skipping file write'
);
}
}
main();

30
.github/scripts/release.mjs vendored Executable file
View File

@@ -0,0 +1,30 @@
#!/usr/bin/env node
import { existsSync, unlinkSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { findRootDir, runCommand } from './utils.mjs';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const rootDir = findRootDir(__dirname);
console.log('🚀 Starting release process...');
// Double-check we're not in pre-release mode (safety net)
const preJsonPath = join(rootDir, '.changeset', 'pre.json');
if (existsSync(preJsonPath)) {
console.log('⚠️ Warning: pre.json still exists. Removing it...');
unlinkSync(preJsonPath);
}
// Check if the extension version has changed and tag it
// This prevents changeset from trying to publish the private package
runCommand('node', [join(__dirname, 'tag-extension.mjs')]);
// Run changeset publish for npm packages
runCommand('npx', ['changeset', 'publish']);
console.log('✅ Release process completed!');
// The extension tag (if created) will trigger the extension-release workflow

33
.github/scripts/tag-extension.mjs vendored Executable file
View File

@@ -0,0 +1,33 @@
#!/usr/bin/env node
import assert from 'node:assert/strict';
import { readFileSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { findRootDir, createAndPushTag } from './utils.mjs';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const rootDir = findRootDir(__dirname);
// Read the extension's package.json
const extensionDir = join(rootDir, 'apps', 'extension');
const pkgPath = join(extensionDir, 'package.json');
let pkg;
try {
const pkgContent = readFileSync(pkgPath, 'utf8');
pkg = JSON.parse(pkgContent);
} catch (error) {
console.error('Failed to read package.json:', error.message);
process.exit(1);
}
// Ensure we have required fields
assert(pkg.name, 'package.json must have a name field');
assert(pkg.version, 'package.json must have a version field');
const tag = `${pkg.name}@${pkg.version}`;
// Create and push the tag if it doesn't exist
createAndPushTag(tag);

88
.github/scripts/utils.mjs vendored Executable file
View File

@@ -0,0 +1,88 @@
#!/usr/bin/env node
import { spawnSync } from 'node:child_process';
import { readFileSync } from 'node:fs';
import { join, dirname, resolve } from 'node:path';
// Find the root directory by looking for package.json with task-master-ai
export function findRootDir(startDir) {
let currentDir = resolve(startDir);
while (currentDir !== '/') {
const pkgPath = join(currentDir, 'package.json');
try {
const pkg = JSON.parse(readFileSync(pkgPath, 'utf8'));
if (pkg.name === 'task-master-ai' || pkg.repository) {
return currentDir;
}
} catch {}
currentDir = dirname(currentDir);
}
throw new Error('Could not find root directory');
}
// Run a command with proper error handling
export function runCommand(command, args = [], options = {}) {
console.log(`Running: ${command} ${args.join(' ')}`);
const result = spawnSync(command, args, {
encoding: 'utf8',
stdio: 'inherit',
...options
});
if (result.status !== 0) {
console.error(`Command failed with exit code ${result.status}`);
process.exit(result.status);
}
return result;
}
// Get package version from a package.json file
export function getPackageVersion(packagePath) {
try {
const pkg = JSON.parse(readFileSync(packagePath, 'utf8'));
return pkg.version;
} catch (error) {
console.error(
`Failed to read package version from ${packagePath}:`,
error.message
);
process.exit(1);
}
}
// Check if a git tag exists on remote
export function tagExistsOnRemote(tag, remote = 'origin') {
const result = spawnSync('git', ['ls-remote', remote, tag], {
encoding: 'utf8'
});
return result.status === 0 && result.stdout.trim() !== '';
}
// Create and push a git tag if it doesn't exist
export function createAndPushTag(tag, remote = 'origin') {
// Check if tag already exists
if (tagExistsOnRemote(tag, remote)) {
console.log(`Tag ${tag} already exists on remote, skipping`);
return false;
}
console.log(`Creating new tag: ${tag}`);
// Create the tag locally
const tagResult = spawnSync('git', ['tag', tag]);
if (tagResult.status !== 0) {
console.error('Failed to create tag:', tagResult.error || tagResult.stderr);
process.exit(1);
}
// Push the tag to remote
const pushResult = spawnSync('git', ['push', remote, tag]);
if (pushResult.status !== 0) {
console.error('Failed to push tag:', pushResult.error || pushResult.stderr);
process.exit(1);
}
console.log(`✅ Successfully created and pushed tag: ${tag}`);
return true;
}

View File

@@ -0,0 +1,31 @@
name: Auto-close duplicate issues
# description: Auto-closes issues that are duplicates of existing issues
on:
schedule:
- cron: "0 9 * * *" # Runs daily at 9 AM UTC
workflow_dispatch:
jobs:
auto-close-duplicates:
runs-on: ubuntu-latest
timeout-minutes: 10
permissions:
contents: read
issues: write # Need write permission to close issues and add comments
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: 20
- name: Auto-close duplicate issues
run: node .github/scripts/auto-close-duplicates.mjs
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}

View File

@@ -0,0 +1,46 @@
name: Backfill Duplicate Comments
# description: Triggers duplicate detection for old issues that don't have duplicate comments
on:
workflow_dispatch:
inputs:
days_back:
description: "How many days back to look for old issues"
required: false
default: "90"
type: string
dry_run:
description: "Dry run mode (true to only log what would be done)"
required: false
default: "true"
type: choice
options:
- "true"
- "false"
jobs:
backfill-duplicate-comments:
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: read
issues: read
actions: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: 20
- name: Backfill duplicate comments
run: node .github/scripts/backfill-duplicate-comments.mjs
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
DAYS_BACK: ${{ inputs.days_back }}
DRY_RUN: ${{ inputs.dry_run }}

View File

@@ -6,73 +6,124 @@ on:
- main
- next
pull_request:
branches:
- main
- next
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
permissions:
contents: read
env:
DO_NOT_TRACK: 1
NODE_ENV: development
jobs:
setup:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: actions/setup-node@v4
with:
node-version: 20
cache: 'npm'
- name: Install Dependencies
id: install
run: npm ci
timeout-minutes: 2
- name: Cache node_modules
uses: actions/cache@v4
with:
path: node_modules
key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
# Fast checks that can run in parallel
format-check:
needs: setup
name: Format Check
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 2
- uses: actions/setup-node@v4
with:
node-version: 20
cache: "npm"
- name: Restore node_modules
uses: actions/cache@v4
with:
path: node_modules
key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
- name: Install dependencies
run: npm install --frozen-lockfile --prefer-offline
timeout-minutes: 5
- name: Format Check
run: npm run format-check
env:
FORCE_COLOR: 1
test:
needs: setup
typecheck:
name: Typecheck
timeout-minutes: 10
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 2
- uses: actions/setup-node@v4
with:
node-version: 20
cache: "npm"
- name: Restore node_modules
uses: actions/cache@v4
- name: Install dependencies
run: npm install --frozen-lockfile --prefer-offline
timeout-minutes: 5
- name: Typecheck
run: npm run turbo:typecheck
env:
FORCE_COLOR: 1
# Build job to ensure everything compiles
build:
name: Build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
path: node_modules
key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
fetch-depth: 2
- uses: actions/setup-node@v4
with:
node-version: 20
cache: "npm"
- name: Install dependencies
run: npm install --frozen-lockfile --prefer-offline
timeout-minutes: 5
- name: Build
run: npm run turbo:build
env:
NODE_ENV: production
FORCE_COLOR: 1
TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
- name: Upload build artifacts
uses: actions/upload-artifact@v4
with:
name: build-artifacts
path: dist/
retention-days: 1
test:
name: Test
timeout-minutes: 15
runs-on: ubuntu-latest
needs: [format-check, typecheck, build]
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 2
- uses: actions/setup-node@v4
with:
node-version: 20
cache: "npm"
- name: Install dependencies
run: npm install --frozen-lockfile --prefer-offline
timeout-minutes: 5
- name: Download build artifacts
uses: actions/download-artifact@v4
with:
name: build-artifacts
path: dist/
- name: Run Tests
run: |
@@ -81,7 +132,6 @@ jobs:
NODE_ENV: test
CI: true
FORCE_COLOR: 1
timeout-minutes: 10
- name: Upload Test Results
if: always()

View File

@@ -0,0 +1,81 @@
name: Claude Issue Dedupe
# description: Automatically dedupe GitHub issues using Claude Code
on:
issues:
types: [opened]
workflow_dispatch:
inputs:
issue_number:
description: "Issue number to process for duplicate detection"
required: true
type: string
jobs:
claude-dedupe-issues:
runs-on: ubuntu-latest
timeout-minutes: 10
permissions:
contents: read
issues: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Run Claude Code slash command
uses: anthropics/claude-code-base-action@beta
with:
prompt: "/dedupe ${{ github.repository }}/issues/${{ github.event.issue.number || inputs.issue_number }}"
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_env: |
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Log duplicate comment event to Statsig
if: always()
env:
STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
run: |
ISSUE_NUMBER=${{ github.event.issue.number || inputs.issue_number }}
REPO=${{ github.repository }}
if [ -z "$STATSIG_API_KEY" ]; then
echo "STATSIG_API_KEY not found, skipping Statsig logging"
exit 0
fi
# Prepare the event payload
EVENT_PAYLOAD=$(jq -n \
--arg issue_number "$ISSUE_NUMBER" \
--arg repo "$REPO" \
--arg triggered_by "${{ github.event_name }}" \
'{
events: [{
eventName: "github_duplicate_comment_added",
value: 1,
metadata: {
repository: $repo,
issue_number: ($issue_number | tonumber),
triggered_by: $triggered_by,
workflow_run_id: "${{ github.run_id }}"
},
time: (now | floor | tostring)
}]
}')
# Send to Statsig API
echo "Logging duplicate comment event to Statsig for issue #${ISSUE_NUMBER}"
RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
-H "Content-Type: application/json" \
-H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
-d "$EVENT_PAYLOAD")
HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
BODY=$(echo "$RESPONSE" | head -n-1)
if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
echo "Successfully logged duplicate comment event for issue #${ISSUE_NUMBER}"
else
echo "Failed to log duplicate comment event for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
fi

View File

@@ -0,0 +1,57 @@
name: Trigger Claude Documentation Update
on:
push:
branches:
- next
paths-ignore:
- "apps/docs/**"
- "*.md"
- ".github/workflows/**"
jobs:
trigger-docs-update:
# Only run if changes were merged (not direct pushes from bots)
if: github.actor != 'github-actions[bot]' && github.actor != 'dependabot[bot]'
runs-on: ubuntu-latest
permissions:
contents: read
actions: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 2 # Need previous commit for comparison
- name: Get changed files
id: changed-files
run: |
echo "Changed files in this push:"
git diff --name-only HEAD^ HEAD | tee changed_files.txt
# Store changed files for Claude to analyze (escaped for JSON)
CHANGED_FILES=$(git diff --name-only HEAD^ HEAD | jq -Rs .)
echo "changed_files=$CHANGED_FILES" >> $GITHUB_OUTPUT
# Get the commit message (escaped for JSON)
COMMIT_MSG=$(git log -1 --pretty=%B | jq -Rs .)
echo "commit_message=$COMMIT_MSG" >> $GITHUB_OUTPUT
# Get diff for documentation context (escaped for JSON)
COMMIT_DIFF=$(git diff HEAD^ HEAD --stat | jq -Rs .)
echo "commit_diff=$COMMIT_DIFF" >> $GITHUB_OUTPUT
# Get commit SHA
echo "commit_sha=${{ github.sha }}" >> $GITHUB_OUTPUT
- name: Trigger Claude workflow
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
# Trigger the Claude docs updater workflow with the change information
gh workflow run claude-docs-updater.yml \
--ref next \
-f commit_sha="${{ steps.changed-files.outputs.commit_sha }}" \
-f commit_message=${{ steps.changed-files.outputs.commit_message }} \
-f changed_files=${{ steps.changed-files.outputs.changed_files }} \
-f commit_diff=${{ steps.changed-files.outputs.commit_diff }}

View File

@@ -0,0 +1,145 @@
name: Claude Documentation Updater
on:
workflow_dispatch:
inputs:
commit_sha:
description: 'The commit SHA that triggered this update'
required: true
type: string
commit_message:
description: 'The commit message'
required: true
type: string
changed_files:
description: 'List of changed files'
required: true
type: string
commit_diff:
description: 'Diff summary of changes'
required: true
type: string
jobs:
update-docs:
runs-on: ubuntu-latest
permissions:
contents: write
pull-requests: write
issues: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
ref: next
fetch-depth: 0 # Need full history to checkout specific commit
- name: Create docs update branch
id: create-branch
run: |
BRANCH_NAME="docs/auto-update-$(date +%Y%m%d-%H%M%S)"
git checkout -b $BRANCH_NAME
echo "branch_name=$BRANCH_NAME" >> $GITHUB_OUTPUT
- name: Run Claude Code to Update Documentation
uses: anthropics/claude-code-action@beta
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
timeout_minutes: "30"
mode: "agent"
github_token: ${{ secrets.GITHUB_TOKEN }}
experimental_allowed_domains: |
.anthropic.com
.github.com
api.github.com
.githubusercontent.com
registry.npmjs.org
.task-master.dev
base_branch: "next"
direct_prompt: |
You are a documentation specialist. Analyze the recent changes pushed to the 'next' branch and update the documentation accordingly.
Recent changes:
- Commit: ${{ inputs.commit_message }}
- Changed files:
${{ inputs.changed_files }}
- Changes summary:
${{ inputs.commit_diff }}
Your task:
1. Analyze the changes to understand what functionality was added, modified, or removed
2. Check if these changes require documentation updates in apps/docs/
3. If documentation updates are needed:
- Update relevant documentation files in apps/docs/
- Ensure examples are updated if APIs changed
- Update any configuration documentation if config options changed
- Add new documentation pages if new features were added
- Update the changelog or release notes if applicable
4. If no documentation updates are needed, skip creating changes
Guidelines:
- Focus only on user-facing changes that need documentation
- Keep documentation clear, concise, and helpful
- Include code examples where appropriate
- Maintain consistent documentation style with existing docs
- Don't document internal implementation details unless they affect users
- Update navigation/menu files if new pages are added
Only make changes if the documentation truly needs updating based on the code changes.
- name: Check if changes were made
id: check-changes
run: |
if git diff --quiet; then
echo "has_changes=false" >> $GITHUB_OUTPUT
else
echo "has_changes=true" >> $GITHUB_OUTPUT
git add -A
git config --local user.email "github-actions[bot]@users.noreply.github.com"
git config --local user.name "github-actions[bot]"
git commit -m "docs: auto-update documentation based on changes in next branch
This PR was automatically generated to update documentation based on recent changes.
Original commit: ${{ inputs.commit_message }}
Co-authored-by: Claude <claude-assistant@anthropic.com>"
fi
- name: Push changes and create PR
if: steps.check-changes.outputs.has_changes == 'true'
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
git push origin ${{ steps.create-branch.outputs.branch_name }}
# Create PR using GitHub CLI
gh pr create \
--title "docs: update documentation for recent changes" \
--body "## 📚 Documentation Update
This PR automatically updates documentation based on recent changes merged to the \`next\` branch.
### Original Changes
**Commit:** ${{ inputs.commit_sha }}
**Message:** ${{ inputs.commit_message }}
### Changed Files in Original Commit
\`\`\`
${{ inputs.changed_files }}
\`\`\`
### Documentation Updates
This PR includes documentation updates to reflect the changes above. Please review to ensure:
- [ ] Documentation accurately reflects the changes
- [ ] Examples are correct and working
- [ ] No important details are missing
- [ ] Style is consistent with existing documentation
---
*This PR was automatically generated by Claude Code GitHub Action*" \
--base next \
--head ${{ steps.create-branch.outputs.branch_name }} \
--label "documentation" \
--label "automated"

View File

@@ -0,0 +1,107 @@
name: Claude Issue Triage
# description: Automatically triage GitHub issues using Claude Code
on:
issues:
types: [opened]
jobs:
triage-issue:
runs-on: ubuntu-latest
timeout-minutes: 10
permissions:
contents: read
issues: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Create triage prompt
run: |
mkdir -p /tmp/claude-prompts
cat > /tmp/claude-prompts/triage-prompt.txt << 'EOF'
You're an issue triage assistant for GitHub issues. Your task is to analyze the issue and select appropriate labels from the provided list.
IMPORTANT: Don't post any comments or messages to the issue. Your only action should be to apply labels.
Issue Information:
- REPO: ${{ github.repository }}
- ISSUE_NUMBER: ${{ github.event.issue.number }}
TASK OVERVIEW:
1. First, fetch the list of labels available in this repository by running: `gh label list`. Run exactly this command with nothing else.
2. Next, use the GitHub tools to get context about the issue:
- You have access to these tools:
- mcp__github__get_issue: Use this to retrieve the current issue's details including title, description, and existing labels
- mcp__github__get_issue_comments: Use this to read any discussion or additional context provided in the comments
- mcp__github__update_issue: Use this to apply labels to the issue (do not use this for commenting)
- mcp__github__search_issues: Use this to find similar issues that might provide context for proper categorization and to identify potential duplicate issues
- mcp__github__list_issues: Use this to understand patterns in how other issues are labeled
- Start by using mcp__github__get_issue to get the issue details
3. Analyze the issue content, considering:
- The issue title and description
- The type of issue (bug report, feature request, question, etc.)
- Technical areas mentioned
- Severity or priority indicators
- User impact
- Components affected
4. Select appropriate labels from the available labels list provided above:
- Choose labels that accurately reflect the issue's nature
- Be specific but comprehensive
- Select priority labels if you can determine urgency (high-priority, med-priority, or low-priority)
- Consider platform labels (android, ios) if applicable
- If you find similar issues using mcp__github__search_issues, consider using a "duplicate" label if appropriate. Only do so if the issue is a duplicate of another OPEN issue.
5. Apply the selected labels:
- Use mcp__github__update_issue to apply your selected labels
- DO NOT post any comments explaining your decision
- DO NOT communicate directly with users
- If no labels are clearly applicable, do not apply any labels
IMPORTANT GUIDELINES:
- Be thorough in your analysis
- Only select labels from the provided list above
- DO NOT post any comments to the issue
- Your ONLY action should be to apply labels using mcp__github__update_issue
- It's okay to not add any labels if none are clearly applicable
EOF
- name: Setup GitHub MCP Server
run: |
mkdir -p /tmp/mcp-config
cat > /tmp/mcp-config/mcp-servers.json << 'EOF'
{
"mcpServers": {
"github": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"GITHUB_PERSONAL_ACCESS_TOKEN",
"ghcr.io/github/github-mcp-server:sha-7aced2b"
],
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": "${{ secrets.GITHUB_TOKEN }}"
}
}
}
}
EOF
- name: Run Claude Code for Issue Triage
uses: anthropics/claude-code-base-action@beta
with:
prompt_file: /tmp/claude-prompts/triage-prompt.txt
allowed_tools: "Bash(gh label list),mcp__github__get_issue,mcp__github__get_issue_comments,mcp__github__update_issue,mcp__github__search_issues,mcp__github__list_issues"
timeout_minutes: "5"
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
mcp_config: /tmp/mcp-config/mcp-servers.json
claude_env: |
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

36
.github/workflows/claude.yml vendored Normal file
View File

@@ -0,0 +1,36 @@
name: Claude Code
on:
issue_comment:
types: [created]
pull_request_review_comment:
types: [created]
issues:
types: [opened, assigned]
pull_request_review:
types: [submitted]
jobs:
claude:
if: |
(github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
(github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
(github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
(github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: read
issues: read
id-token: write
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
fetch-depth: 1
- name: Run Claude Code
id: claude
uses: anthropics/claude-code-action@beta
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}

140
.github/workflows/extension-ci.yml vendored Normal file
View File

@@ -0,0 +1,140 @@
name: Extension CI
on:
push:
branches:
- main
- next
paths:
- 'apps/extension/**'
- '.github/workflows/extension-ci.yml'
pull_request:
branches:
- main
- next
paths:
- 'apps/extension/**'
- '.github/workflows/extension-ci.yml'
permissions:
contents: read
jobs:
setup:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: actions/setup-node@v4
with:
node-version: 20
- name: Cache node_modules
uses: actions/cache@v4
with:
path: |
node_modules
*/*/node_modules
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Install Monorepo Dependencies
run: npm ci
timeout-minutes: 5
typecheck:
needs: setup
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
- name: Restore node_modules
uses: actions/cache@v4
with:
path: |
node_modules
*/*/node_modules
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Install if cache miss
run: npm ci
timeout-minutes: 3
- name: Type Check Extension
working-directory: apps/extension
run: npm run check-types
env:
FORCE_COLOR: 1
build:
needs: setup
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
- name: Restore node_modules
uses: actions/cache@v4
with:
path: |
node_modules
*/*/node_modules
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Install if cache miss
run: npm ci
timeout-minutes: 3
- name: Build Extension
working-directory: apps/extension
run: npm run build
env:
FORCE_COLOR: 1
- name: Package Extension
working-directory: apps/extension
run: npm run package
env:
FORCE_COLOR: 1
- name: Verify Package Contents
working-directory: apps/extension
run: |
echo "Checking vsix-build contents..."
ls -la vsix-build/
echo "Checking dist contents..."
ls -la vsix-build/dist/
echo "Checking package.json exists..."
test -f vsix-build/package.json
- name: Create VSIX Package (Test)
working-directory: apps/extension/vsix-build
run: npx vsce package --no-dependencies
env:
FORCE_COLOR: 1
- name: Upload Extension Artifact
uses: actions/upload-artifact@v4
with:
name: extension-package
path: |
apps/extension/vsix-build/*.vsix
apps/extension/dist/
retention-days: 30

110
.github/workflows/extension-release.yml vendored Normal file
View File

@@ -0,0 +1,110 @@
name: Extension Release
on:
push:
tags:
- "extension@*"
permissions:
contents: write
concurrency: extension-release-${{ github.ref }}
jobs:
publish-extension:
runs-on: ubuntu-latest
environment: extension-release
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
- name: Cache node_modules
uses: actions/cache@v4
with:
path: |
node_modules
*/*/node_modules
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Install Monorepo Dependencies
run: npm ci
timeout-minutes: 5
- name: Type Check Extension
working-directory: apps/extension
run: npm run check-types
env:
FORCE_COLOR: 1
- name: Build Extension
working-directory: apps/extension
run: npm run build
env:
FORCE_COLOR: 1
- name: Package Extension
working-directory: apps/extension
run: npm run package
env:
FORCE_COLOR: 1
- name: Create VSIX Package
working-directory: apps/extension/vsix-build
run: npx vsce package --no-dependencies
env:
FORCE_COLOR: 1
- name: Get VSIX filename
id: vsix-info
working-directory: apps/extension/vsix-build
run: |
VSIX_FILE=$(find . -maxdepth 1 -name "*.vsix" -type f | head -n1 | xargs basename)
if [ -z "$VSIX_FILE" ]; then
echo "Error: No VSIX file found"
exit 1
fi
echo "vsix-filename=$VSIX_FILE" >> "$GITHUB_OUTPUT"
echo "Found VSIX: $VSIX_FILE"
- name: Publish to VS Code Marketplace
working-directory: apps/extension/vsix-build
run: npx vsce publish --packagePath "${{ steps.vsix-info.outputs.vsix-filename }}"
env:
VSCE_PAT: ${{ secrets.VSCE_PAT }}
FORCE_COLOR: 1
- name: Install Open VSX CLI
run: npm install -g ovsx
- name: Publish to Open VSX Registry
working-directory: apps/extension/vsix-build
run: ovsx publish "${{ steps.vsix-info.outputs.vsix-filename }}"
env:
OVSX_PAT: ${{ secrets.OVSX_PAT }}
FORCE_COLOR: 1
- name: Upload Build Artifacts
uses: actions/upload-artifact@v4
with:
name: extension-release-${{ github.ref_name }}
path: |
apps/extension/vsix-build/*.vsix
apps/extension/dist/
retention-days: 90
notify-success:
needs: publish-extension
if: success()
runs-on: ubuntu-latest
steps:
- name: Success Notification
run: |
echo "🎉 Extension ${{ github.ref_name }} successfully published!"
echo "📦 Available on VS Code Marketplace"
echo "🌍 Available on Open VSX Registry"
echo "🏷️ GitHub release created: ${{ github.ref_name }}"

176
.github/workflows/log-issue-events.yml vendored Normal file
View File

@@ -0,0 +1,176 @@
name: Log GitHub Issue Events
on:
issues:
types: [opened, closed]
jobs:
log-issue-created:
if: github.event.action == 'opened'
runs-on: ubuntu-latest
timeout-minutes: 5
permissions:
contents: read
issues: read
steps:
- name: Log issue creation to Statsig
env:
STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
run: |
ISSUE_NUMBER=${{ github.event.issue.number }}
REPO=${{ github.repository }}
ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
AUTHOR="${{ github.event.issue.user.login }}"
CREATED_AT="${{ github.event.issue.created_at }}"
if [ -z "$STATSIG_API_KEY" ]; then
echo "STATSIG_API_KEY not found, skipping Statsig logging"
exit 0
fi
# Prepare the event payload
EVENT_PAYLOAD=$(jq -n \
--arg issue_number "$ISSUE_NUMBER" \
--arg repo "$REPO" \
--arg title "$ISSUE_TITLE" \
--arg author "$AUTHOR" \
--arg created_at "$CREATED_AT" \
'{
events: [{
eventName: "github_issue_created",
value: 1,
metadata: {
repository: $repo,
issue_number: ($issue_number | tonumber),
issue_title: $title,
issue_author: $author,
created_at: $created_at
},
time: (now | floor | tostring)
}]
}')
# Send to Statsig API
echo "Logging issue creation to Statsig for issue #${ISSUE_NUMBER}"
RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
-H "Content-Type: application/json" \
-H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
-d "$EVENT_PAYLOAD")
HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
BODY=$(echo "$RESPONSE" | head -n-1)
if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
echo "Successfully logged issue creation for issue #${ISSUE_NUMBER}"
else
echo "Failed to log issue creation for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
fi
log-issue-closed:
if: github.event.action == 'closed'
runs-on: ubuntu-latest
timeout-minutes: 5
permissions:
contents: read
issues: read
steps:
- name: Log issue closure to Statsig
env:
STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
ISSUE_NUMBER=${{ github.event.issue.number }}
REPO=${{ github.repository }}
ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
CLOSED_BY="${{ github.event.issue.closed_by.login }}"
CLOSED_AT="${{ github.event.issue.closed_at }}"
STATE_REASON="${{ github.event.issue.state_reason }}"
if [ -z "$STATSIG_API_KEY" ]; then
echo "STATSIG_API_KEY not found, skipping Statsig logging"
exit 0
fi
# Get additional issue data via GitHub API
echo "Fetching additional issue data for #${ISSUE_NUMBER}"
ISSUE_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
-H "Accept: application/vnd.github.v3+json" \
"https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}")
COMMENTS_COUNT=$(echo "$ISSUE_DATA" | jq -r '.comments')
# Get reactions data
REACTIONS_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
-H "Accept: application/vnd.github.v3+json" \
"https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}/reactions")
REACTIONS_COUNT=$(echo "$REACTIONS_DATA" | jq '. | length')
# Check if issue was closed automatically (by checking if closed_by is a bot)
CLOSED_AUTOMATICALLY="false"
if [[ "$CLOSED_BY" == *"[bot]"* ]]; then
CLOSED_AUTOMATICALLY="true"
fi
# Check if closed as duplicate by state_reason
CLOSED_AS_DUPLICATE="false"
if [ "$STATE_REASON" = "duplicate" ]; then
CLOSED_AS_DUPLICATE="true"
fi
# Prepare the event payload
EVENT_PAYLOAD=$(jq -n \
--arg issue_number "$ISSUE_NUMBER" \
--arg repo "$REPO" \
--arg title "$ISSUE_TITLE" \
--arg closed_by "$CLOSED_BY" \
--arg closed_at "$CLOSED_AT" \
--arg state_reason "$STATE_REASON" \
--arg comments_count "$COMMENTS_COUNT" \
--arg reactions_count "$REACTIONS_COUNT" \
--arg closed_automatically "$CLOSED_AUTOMATICALLY" \
--arg closed_as_duplicate "$CLOSED_AS_DUPLICATE" \
'{
events: [{
eventName: "github_issue_closed",
value: 1,
metadata: {
repository: $repo,
issue_number: ($issue_number | tonumber),
issue_title: $title,
closed_by: $closed_by,
closed_at: $closed_at,
state_reason: $state_reason,
comments_count: ($comments_count | tonumber),
reactions_count: ($reactions_count | tonumber),
closed_automatically: ($closed_automatically | test("true")),
closed_as_duplicate: ($closed_as_duplicate | test("true"))
},
time: (now | floor | tostring)
}]
}')
# Send to Statsig API
echo "Logging issue closure to Statsig for issue #${ISSUE_NUMBER}"
RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
-H "Content-Type: application/json" \
-H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
-d "$EVENT_PAYLOAD")
HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
BODY=$(echo "$RESPONSE" | head -n-1)
if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
echo "Successfully logged issue closure for issue #${ISSUE_NUMBER}"
echo "Closed by: $CLOSED_BY"
echo "Comments: $COMMENTS_COUNT"
echo "Reactions: $REACTIONS_COUNT"
echo "Closed automatically: $CLOSED_AUTOMATICALLY"
echo "Closed as duplicate: $CLOSED_AS_DUPLICATE"
else
echo "Failed to log issue closure for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
fi

View File

@@ -3,11 +3,13 @@ name: Pre-Release (RC)
on:
workflow_dispatch: # Allows manual triggering from GitHub UI/API
concurrency: pre-release-${{ github.ref }}
concurrency: pre-release-${{ github.ref_name }}
jobs:
rc:
runs-on: ubuntu-latest
# Only allow pre-releases on non-main branches
if: github.ref != 'refs/heads/main'
environment: extension-release
steps:
- uses: actions/checkout@v4
with:
@@ -34,9 +36,26 @@ jobs:
- name: Enter RC mode (if not already in RC mode)
run: |
# ensure we're in the right pre-release mode (tag "rc")
if [ ! -f .changeset/pre.json ] \
|| [ "$(jq -r '.tag' .changeset/pre.json 2>/dev/null || echo '')" != "rc" ]; then
# Check if we're in pre-release mode with the "rc" tag
if [ -f .changeset/pre.json ]; then
MODE=$(jq -r '.mode' .changeset/pre.json 2>/dev/null || echo '')
TAG=$(jq -r '.tag' .changeset/pre.json 2>/dev/null || echo '')
if [ "$MODE" = "exit" ]; then
echo "Pre-release mode is in 'exit' state, re-entering RC mode..."
npx changeset pre enter rc
elif [ "$MODE" = "pre" ] && [ "$TAG" != "rc" ]; then
echo "In pre-release mode but with wrong tag ($TAG), switching to RC..."
npx changeset pre exit
npx changeset pre enter rc
elif [ "$MODE" = "pre" ] && [ "$TAG" = "rc" ]; then
echo "Already in RC pre-release mode"
else
echo "Unknown mode state: $MODE, entering RC mode..."
npx changeset pre enter rc
fi
else
echo "No pre.json found, entering RC mode..."
npx changeset pre enter rc
fi
@@ -46,10 +65,24 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
- name: Run format
run: npm run format
env:
FORCE_COLOR: 1
- name: Build packages
run: npm run turbo:build
env:
NODE_ENV: production
FORCE_COLOR: 1
TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
- name: Create Release Candidate Pull Request or Publish Release Candidate to npm
uses: changesets/action@v1
with:
publish: npm run release
publish: npx changeset publish
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}

21
.github/workflows/release-check.yml vendored Normal file
View File

@@ -0,0 +1,21 @@
name: Release Check
on:
pull_request:
branches:
- main
concurrency:
group: release-check-${{ github.head_ref }}
cancel-in-progress: true
jobs:
check-release-mode:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Check release mode
run: node ./.github/scripts/check-pre-release-mode.mjs "pull_request"

View File

@@ -6,6 +6,11 @@ on:
concurrency: ${{ github.workflow }}-${{ github.ref }}
permissions:
contents: write
pull-requests: write
id-token: write
jobs:
release:
runs-on: ubuntu-latest
@@ -17,7 +22,7 @@ jobs:
- uses: actions/setup-node@v4
with:
node-version: 20
cache: 'npm'
cache: "npm"
- name: Cache node_modules
uses: actions/cache@v4
@@ -33,13 +38,22 @@ jobs:
run: npm ci
timeout-minutes: 2
- name: Exit pre-release mode (safety check)
run: npx changeset pre exit || true
- name: Check pre-release mode
run: node ./.github/scripts/check-pre-release-mode.mjs "main"
- name: Build packages
run: npm run turbo:build
env:
NODE_ENV: production
FORCE_COLOR: 1
TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
- name: Create Release Pull Request or Publish to npm
uses: changesets/action@v1
with:
publish: npm run release
publish: node ./.github/scripts/release.mjs
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}

View File

@@ -0,0 +1,108 @@
name: Weekly Metrics to Discord
# description: Sends weekly metrics summary to Discord channel
on:
schedule:
- cron: "0 9 * * 1" # Every Monday at 9 AM
workflow_dispatch:
permissions:
contents: read
issues: read
pull-requests: read
jobs:
weekly-metrics:
runs-on: ubuntu-latest
env:
DISCORD_WEBHOOK: ${{ secrets.DISCORD_METRICS_WEBHOOK }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Get dates for last 14 days
run: |
set -Eeuo pipefail
# Last 14 days
first_day=$(date -d "14 days ago" +%Y-%m-%d)
last_day=$(date +%Y-%m-%d)
echo "first_day=$first_day" >> $GITHUB_ENV
echo "last_day=$last_day" >> $GITHUB_ENV
echo "week_of=$(date -d '7 days ago' +'Week of %B %d, %Y')" >> $GITHUB_ENV
echo "date_range=Past 14 days ($first_day to $last_day)" >> $GITHUB_ENV
- name: Generate issue metrics
uses: github/issue-metrics@v3
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SEARCH_QUERY: "repo:${{ github.repository }} is:issue created:${{ env.first_day }}..${{ env.last_day }}"
HIDE_TIME_TO_ANSWER: true
HIDE_LABEL_METRICS: false
OUTPUT_FILE: issue_metrics.md
- name: Generate PR created metrics
uses: github/issue-metrics@v3
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SEARCH_QUERY: "repo:${{ github.repository }} is:pr created:${{ env.first_day }}..${{ env.last_day }}"
OUTPUT_FILE: pr_created_metrics.md
- name: Generate PR merged metrics
uses: github/issue-metrics@v3
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SEARCH_QUERY: "repo:${{ github.repository }} is:pr is:merged merged:${{ env.first_day }}..${{ env.last_day }}"
OUTPUT_FILE: pr_merged_metrics.md
- name: Debug generated metrics
run: |
set -Eeuo pipefail
echo "Listing markdown files in workspace:"
ls -la *.md || true
for f in issue_metrics.md pr_created_metrics.md pr_merged_metrics.md; do
if [ -f "$f" ]; then
echo "== $f (first 10 lines) =="
head -n 10 "$f"
else
echo "Missing $f"
fi
done
- name: Parse metrics
id: metrics
run: node .github/scripts/parse-metrics.mjs
- name: Send to Discord
uses: sarisia/actions-status-discord@v1
if: env.DISCORD_WEBHOOK != ''
with:
webhook: ${{ env.DISCORD_WEBHOOK }}
status: Success
title: "📊 Weekly Metrics Report"
description: |
**${{ env.week_of }}**
*${{ env.date_range }}*
**🎯 Issues**
• Created: ${{ steps.metrics.outputs.issues_created }}
• Closed: ${{ steps.metrics.outputs.issues_closed }}
• Avg Response Time: ${{ steps.metrics.outputs.issue_avg_first_response }}
• Avg Time to Close: ${{ steps.metrics.outputs.issue_avg_time_to_close }}
**🔀 Pull Requests**
• Created: ${{ steps.metrics.outputs.prs_created }}
• Merged: ${{ steps.metrics.outputs.prs_merged }}
• Avg Response Time: ${{ steps.metrics.outputs.pr_avg_first_response }}
• Avg Time to Merge: ${{ steps.metrics.outputs.pr_avg_merge_time }}
**📈 Visual Analytics**
https://repobeats.axiom.co/api/embed/b439f28f0ab5bd7a2da19505355693cd2c55bfd4.svg
color: 0x58AFFF
username: Task Master Metrics Bot
avatar_url: https://raw.githubusercontent.com/eyaltoledano/claude-task-master/main/images/logo.png

10
.gitignore vendored
View File

@@ -87,3 +87,13 @@ dev-debug.log
*.njsproj
*.sln
*.sw?
# VS Code extension test files
.vscode-test/
apps/extension/.vscode-test/
# apps/extension
apps/extension/vsix-build/
# turbo
.turbo

View File

@@ -2,7 +2,7 @@
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"args": ["-y", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",

6
.manypkg.json Normal file
View File

@@ -0,0 +1,6 @@
{
"$schema": "https://unpkg.com/@manypkg/get-packages@1.1.3/schema.json",
"defaultBranch": "main",
"ignoredRules": ["ROOT_HAS_DEPENDENCIES", "INTERNAL_MISMATCH"],
"ignoredPackages": ["@tm/core", "@tm/cli", "@tm/build-config"]
}

2
.nvmrc
View File

@@ -1 +1 @@
22
22

View File

@@ -85,7 +85,7 @@ Task Master provides an MCP server that Claude Code can connect to. Configure in
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"args": ["-y", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "your_key_here",
"PERPLEXITY_API_KEY": "your_key_here",

View File

@@ -2,8 +2,8 @@
"models": {
"main": {
"provider": "anthropic",
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 120000,
"modelId": "claude-sonnet-4-20250514",
"maxTokens": 64000,
"temperature": 0.2
},
"research": {
@@ -14,8 +14,8 @@
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3-5-sonnet-20241022",
"maxTokens": 8192,
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 120000,
"temperature": 0.2
}
},
@@ -29,9 +29,15 @@
"ollamaBaseURL": "http://localhost:11434/api",
"bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com",
"responseLanguage": "English",
"enableCodebaseAnalysis": true,
"userId": "1234567890",
"azureBaseURL": "https://your-endpoint.azure.com/",
"defaultTag": "master"
},
"claudeCode": {}
"claudeCode": {},
"grokCli": {
"timeout": 120000,
"workingDirectory": null,
"defaultModel": "grok-4-latest"
}
}

View File

@@ -0,0 +1,188 @@
# Task Master Migration Roadmap
## Overview
Gradual migration from scripts-based architecture to a clean monorepo with separated concerns.
## Architecture Vision
```
┌─────────────────────────────────────────────────┐
│ User Interfaces │
├──────────┬──────────┬──────────┬────────────────┤
│ @tm/cli │ @tm/mcp │ @tm/ext │ @tm/web │
│ (CLI) │ (MCP) │ (VSCode)│ (Future) │
└──────────┴──────────┴──────────┴────────────────┘
┌──────────────────────┐
│ @tm/core │
│ (Business Logic) │
└──────────────────────┘
```
## Migration Phases
### Phase 1: Core Extraction ✅ (In Progress)
**Goal**: Move all business logic to @tm/core
- [x] Create @tm/core package structure
- [x] Move types and interfaces
- [x] Implement TaskMasterCore facade
- [x] Move storage adapters
- [x] Move task services
- [ ] Move AI providers
- [ ] Move parser logic
- [ ] Complete test coverage
### Phase 2: CLI Package Creation 🚧 (Started)
**Goal**: Create @tm/cli as a thin presentation layer
- [x] Create @tm/cli package structure
- [x] Implement Command interface pattern
- [x] Create CommandRegistry
- [x] Build legacy bridge/adapter
- [x] Migrate list-tasks command
- [ ] Migrate remaining commands one by one
- [ ] Remove UI logic from core
### Phase 3: Transitional Integration
**Goal**: Use new packages in existing scripts without breaking changes
```javascript
// scripts/modules/commands.js gradually adopts new commands
import { ListTasksCommand } from '@tm/cli';
const listCommand = new ListTasksCommand();
// Old interface remains the same
programInstance
.command('list')
.action(async (options) => {
// Use new command internally
const result = await listCommand.execute(convertOptions(options));
});
```
### Phase 4: MCP Package
**Goal**: Separate MCP server as its own package
- [ ] Create @tm/mcp package
- [ ] Move MCP server code
- [ ] Use @tm/core for all logic
- [ ] MCP becomes a thin RPC layer
### Phase 5: Complete Migration
**Goal**: Remove old scripts, pure monorepo
- [ ] All commands migrated to @tm/cli
- [ ] Remove scripts/modules/task-manager/*
- [ ] Remove scripts/modules/commands.js
- [ ] Update bin/task-master.js to use @tm/cli
- [ ] Clean up dependencies
## Current Transitional Strategy
### 1. Adapter Pattern (commands-adapter.js)
```javascript
// Checks if new CLI is available and uses it
// Falls back to legacy implementation if not
export async function listTasksAdapter(...args) {
if (cliAvailable) {
return useNewImplementation(...args);
}
return useLegacyImplementation(...args);
}
```
### 2. Command Bridge Pattern
```javascript
// Allows new commands to work in old code
const bridge = new CommandBridge(new ListTasksCommand());
const data = await bridge.run(legacyOptions); // Legacy style
const result = await bridge.execute(newOptions); // New style
```
### 3. Gradual File Migration
Instead of big-bang refactoring:
1. Create new implementation in @tm/cli
2. Add adapter in commands-adapter.js
3. Update commands.js to use adapter
4. Test both paths work
5. Eventually remove adapter when all migrated
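For example, step 3 above might be wired up in `commands.js` roughly as follows (a minimal sketch; the `registerListCommand` wrapper and option names are illustrative, not the actual code):
```typescript
// scripts/modules/commands.js — illustrative sketch of step 3 above
import { Command } from 'commander';
import { listTasksAdapter } from './commands-adapter.js';

export function registerListCommand(program: Command): void {
  program
    .command('list')
    .option('-s, --status <status>', 'Filter tasks by status')
    .action(async (options) => {
      // The adapter decides whether to use the legacy path or the new @tm/cli command
      const result = await listTasksAdapter(options);
      if (result) console.log(result);
    });
}
```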
## Benefits of This Approach
1. **No Breaking Changes**: Existing CLI continues to work
2. **Incremental PRs**: Each command can be migrated separately
3. **Parallel Development**: New features can use new architecture
4. **Easy Rollback**: Can disable the new implementation if issues arise
5. **Clear Separation**: Business logic (core) vs presentation (cli/mcp/etc)
## Example PR Sequence
### PR 1: Core Package Setup ✅
- Create @tm/core
- Move types and interfaces
- Basic TaskMasterCore implementation
### PR 2: CLI Package Foundation ✅
- Create @tm/cli
- Command interface and registry
- Legacy bridge utilities
### PR 3: First Command Migration
- Migrate list-tasks to new system
- Add adapter in scripts
- Test both implementations
### PR 4-N: Migrate Commands One by One
- Each PR migrates 1-2 related commands
- Small, reviewable changes
- Continuous delivery
### Final PR: Cleanup
- Remove legacy implementations
- Remove adapters
- Update documentation
## Testing Strategy
### Dual Testing During Migration
```javascript
describe('List Tasks', () => {
it('works with legacy implementation', async () => {
// Force legacy
const result = await legacyListTasks(...);
expect(result).toBeDefined();
});
it('works with new implementation', async () => {
// Force new
const command = new ListTasksCommand();
const result = await command.execute(...);
expect(result.success).toBe(true);
});
it('adapter chooses correctly', async () => {
// Let adapter decide
const result = await listTasksAdapter(...);
expect(result).toBeDefined();
});
});
```
## Success Metrics
- [ ] All commands migrated without breaking changes
- [ ] Test coverage maintained or improved
- [ ] Performance maintained or improved
- [ ] Cleaner, more maintainable codebase
- [ ] Easy to add new interfaces (web, desktop, etc.)
## Notes for Contributors
1. **Keep PRs Small**: Migrate one command at a time
2. **Test Both Paths**: Ensure legacy and new both work
3. **Document Changes**: Update this roadmap as you go
4. **Communicate**: Discuss in PRs if architecture needs adjustment
This is a living document - update as the migration progresses!

View File

@@ -0,0 +1,91 @@
<context>
# Overview
Add a new CLI command: `task-master start <task_id>` (alias: `tm start <task_id>`). This command hard-codes `claude-code` as the executor, fetches task details, builds a standardized prompt, runs claude-code, shows the result, checks for git changes, and auto-marks the task as done if successful.
We follow the Commander class pattern and reuse task retrieval from the `show` command flow. The scope is kept extremely minimal to fit a 1-hour hackathon timeline.
# Core Features
- `start` command (Commander class style)
- Hard-coded executor: `claude-code`
- Standardized prompt designed for minimal changes following existing patterns
- Shows claude-code output (no streaming)
- Git status check for success detection
- Auto-mark task done if successful
# User Experience
```
task-master start 12
```
1) Fetches Task #12 details
2) Builds standardized prompt with task context
3) Runs claude-code with the prompt
4) Shows output
5) Checks git status for changes
6) Auto-marks task done if changes detected
</context>
<PRD>
# Technical Architecture
- Command pattern:
- Create `apps/cli/src/commands/start.command.ts` modeled on [list.command.ts](mdc:apps/cli/src/commands/list.command.ts) and task lookup from [show.command.ts](mdc:apps/cli/src/commands/show.command.ts)
- Task retrieval:
- Use `@tm/core` via `createTaskMasterCore` to get task by ID
- Extract: id, title, description, details
- Executor (ultra-simple approach):
- Execute `claude "full prompt here"` command directly
- The prompt tells Claude to first run `tm show <task_id>` to get task details
- Then tells Claude to implement the code changes
- This opens Claude CLI interface naturally in the current terminal
- No subprocess management needed - just execute the command
- Execution flow:
1) Validate `<task_id>` exists; exit with error if not
2) Build standardized prompt that includes instructions to run `tm show <task_id>`
3) Execute `claude "prompt"` command directly in terminal
4) Claude CLI opens, runs `tm show`, then implements changes
5) After Claude session ends, run `git status --porcelain` to detect changes
6) If changes detected, auto-run `task-master set-status --id=<task_id> --status=done`
- Success criteria:
- Success = exit code 0 AND git shows modified/created files
- Print changed file paths; warn if no changes detected
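To make the flow above concrete, a minimal sketch of `start.command.ts` could look like the following (task validation via `@tm/core` is omitted and the prompt is abbreviated; names and details here are illustrative, not final):
```typescript
// apps/cli/src/commands/start.command.ts — illustrative sketch only
import { Command } from 'commander';
import { execSync } from 'node:child_process';

export class StartCommand {
  register(program: Command): void {
    program
      .command('start <taskId>')
      .description('Run claude-code against a task and auto-mark it done on success')
      .action((taskId: string) => {
        // 1-2) Build the standardized prompt (see Appendix for the full template)
        const prompt = [
          "You are an AI coding assistant with access to this repository's codebase.",
          `First, run this command to get the task details: tm show ${taskId}`,
          'Then implement the task with the smallest number of code changes possible.'
        ].join('\n');

        // 3-4) Run the claude CLI directly in the current terminal (throws on non-zero exit)
        execSync(`claude "${prompt.replace(/"/g, '\\"')}"`, { stdio: 'inherit' });

        // 5) Git changes are the success heuristic
        const changes = execSync('git status --porcelain').toString().trim();
        if (changes) {
          console.log(`Changed files:\n${changes}`);
          // 6) Auto-mark the task as done
          execSync(`task-master set-status --id=${taskId} --status=done`, { stdio: 'inherit' });
        } else {
          console.warn('No changes detected; task not marked done.');
        }
      });
  }
}
```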
# Development Roadmap
MVP (ship in ~1 hour):
1) Implement `start.command.ts` (Commander class), parse `<task_id>`
2) Validate task exists via tm-core
3) Build prompt that tells Claude to run `tm show <task_id>` then implement
4) Execute `claude "prompt"` command, then check git status and auto-mark done
# Risks and Mitigations
- Executor availability: fail with a clear error if the `claude-code` provider is unavailable
- False success: Git-change heuristic acceptable for hackathon MVP
# Appendix
**Standardized Prompt Template:**
```
You are an AI coding assistant with access to this repository's codebase.
First, run this command to get the task details:
tm show <task_id>
Then implement the task with these requirements:
- Make the SMALLEST number of code changes possible
- Follow ALL existing patterns in the codebase (you have access to analyze the code)
- Do NOT over-engineer the solution
- Use existing files/functions/patterns wherever possible
- When complete, print: COMPLETED: <brief summary of changes>
Begin by running tm show <task_id> to understand what needs to be implemented.
```
**Key References:**
- [list.command.ts](mdc:apps/cli/src/commands/list.command.ts) - Command structure
- [show.command.ts](mdc:apps/cli/src/commands/show.command.ts) - Task validation
- Node.js `child_process.exec()` - For executing `claude "prompt"` command
</PRD>

View File

@@ -0,0 +1,153 @@
---
title: Research Session
query: "do we need to add new commands or can we just weaponize the gh cli tool"
date: 2025-08-01
time: 2:27:52 p.m.
timestamp: 2025-08-01T11:27:52.249Z
exchanges: 3
---
# Research Session
## Initial Query
**Question:** do we need to add new commands or can we just weaponize the gh cli tool
**Response:**
For the implementation of the `task-master git` command suite to automate git workflows (Task 97), it is possible to leverage the existing **GitHub CLI (`gh`) tool** rather than creating entirely new git commands from scratch. The GitHub CLI provides a rich set of commands that cover many GitHub-related operations such as branch management, pull request handling, workflow triggering, and release management, which can be scripted and integrated into your CLI tool[1][2][3][4].
### Key Points Supporting Weaponizing GitHub CLI:
- **Comprehensive GitHub Operations**: The `gh` CLI supports creating branches, managing pull requests, releases, tags, and even triggering and monitoring GitHub Actions workflows. This aligns well with the requirements of automated branch management, commit generation, and PR description generation in Task 97[1][3].
- **Scripting and Automation Friendly**: GitHub CLI commands can be scripted and integrated into custom workflows or CLI tools. You can create aliases or wrap `gh` commands inside your own CLI commands to provide a seamless user experience while relying on the battle-tested GitHub CLI for underlying operations[1][3].
- **Authentication and Permissions**: `gh` uses environment variables like `GH_TOKEN` or `GITHUB_TOKEN` for authentication, which can be securely managed in CI/CD pipelines or local environments. This simplifies integration without needing to implement custom authentication logic[3][4].
- **Extensibility**: If some specific git workflow features are not directly supported by `gh`, you can extend it with custom scripts or combine it with native git commands. This hybrid approach reduces the need to reinvent core git operations while allowing customization[1].
### Considerations for Your Project:
- Since Task 97 requires **automated branch management** (branch creation, validation, switching, cleanup), **intelligent commit generation**, and **PR description generation**, you can implement these by invoking `gh` commands programmatically within your `task-master git` commands.
- For example, branch creation and switching can be done with native git commands wrapped in your CLI, while PR creation and management can be handled via `gh pr create`, `gh pr merge`, etc.
- You may still need to implement some **custom logic** for commit type detection, branch name validation, and PR description generation based on your task metadata, but the underlying git and GitHub operations can be delegated to `gh`.
- This approach aligns with your existing CLI foundation (Task 2) and leverages a well-supported tool, reducing development effort and increasing reliability.
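As a small illustration, a hypothetical `task-master git pr` helper could simply shell out to `gh` (only the standard `gh pr create` flags are used; the wrapper itself is an assumption):
```typescript
import { execFileSync } from 'node:child_process';

// Hypothetical helper: open a PR for the current branch via the GitHub CLI.
// In a real implementation, title and body would be generated from task metadata.
export function createPullRequest(title: string, body: string, base = 'main'): void {
  execFileSync('gh', ['pr', 'create', '--title', title, '--body', body, '--base', base], {
    stdio: 'inherit'
  });
}
```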
### Summary
You **do not necessarily need to add entirely new git commands** for Task 97. Instead, you can **weaponize the GitHub CLI (`gh`) tool** by wrapping and scripting its commands within your `task-master git` command suite. This will allow you to automate git workflows efficiently, maintain consistency, and leverage GitHub's API capabilities without reinventing the wheel[1][2][3][4].
If there are gaps in `gh` functionality for your specific workflow needs, you can supplement with native git commands or custom scripts, but the core git and GitHub operations should be handled by `gh` for best results.
---
## Follow-up 1
**Question:** how does this relate to git worktrees? i hear that they get super complicated when trying to implement them.
**Response:**
Git worktrees allow you to have multiple working directories attached to the same Git repository, each checked out to different branches simultaneously. This enables parallel development workflows without the overhead of cloning the repository multiple times or constantly switching branches in a single working directory[2][3].
However, **implementing or managing git worktrees can get complicated** for several reasons:
- **Complexity for users and tooling:** Managing multiple active worktrees requires careful tracking of which directory corresponds to which branch. This can be confusing and error-prone, especially for novices or when many worktrees are in use simultaneously. Mistakes like committing to the wrong branch or working in the wrong directory are common risks[1].
- **Maintenance overhead:** Worktrees accumulate over time and need to be pruned or cleaned up regularly (`git worktree prune`) to avoid clutter and potential confusion. This adds an extra maintenance step to the workflow[1].
- **Syncing and update challenges:** Each worktree is a separate checkout and can become outdated if not regularly updated from the main repository. This can lead to merge conflicts or redundant work if branches diverge significantly[1].
- **Tooling and integration limitations:** Not all Git tools, scripts, or GUIs fully support worktrees or are optimized for them. This can cause inconsistencies or require additional custom tooling to manage worktrees effectively[1].
- **Performance considerations:** Managing multiple worktrees simultaneously can have performance implications, especially in very large repositories, due to the overhead of maintaining multiple checkouts and metadata[1].
In the context of your project, which involves automating git workflows and possibly wrapping or extending git/GitHub CLI commands, these complexities imply:
- If you consider supporting git worktrees in your `task-master git` command suite, you will need to handle the complexity of managing multiple directories and branches carefully.
- You might need to implement additional logic or tooling to track and manage worktrees, avoid user confusion, and ensure synchronization between them.
- Since your project already plans to leverage the GitHub CLI (`gh`) and native git commands, you could integrate worktree commands (`git worktree add`, `git worktree remove`, `git worktree list`) but should be prepared for the added complexity in workflow management and user documentation.
- Custom wrappers or managers (like the one described in [3]) can help reduce friction by automating worktree creation, switching, and cleanup, which might be a useful approach to consider.
Overall, **git worktrees provide powerful parallel branch management but introduce complexity in implementation and user experience** that must be carefully managed through tooling, automation, and clear documentation[1][3][4].
---
**Summary Table of Git Worktree Challenges and Considerations**
| Aspect | Description | Implication for Project |
|-------------------------|-----------------------------------------------------------------------------------------------|----------------------------------------------------------|
| User Complexity | Confusing multiple directories and branches | Need clear UI/UX and safeguards |
| Maintenance Overhead | Requires pruning and cleanup of stale worktrees | Automate cleanup or provide commands |
| Syncing Issues | Worktrees can become outdated or diverge | Implement update checks and sync workflows |
| Tooling Support | Limited support in some Git tools and GUIs | May require custom tooling or wrappers |
| Performance | Overhead in large repos with many worktrees | Monitor and optimize performance |
| Workflow Automation | Manual worktree management is tedious | Build custom managers or wrappers to simplify usage |
---
Given your project's focus on automating git workflows and leveraging existing CLI tools, **implementing git worktree support is feasible but will require careful design to handle these complexities effectively**. You may want to start with basic worktree commands and incrementally add automation and safeguards as needed.
---
## Follow-up 2
**Question:** since we already have a way to have separated task lists per branch, couldn't we do the same thing for worktrees? each taskmaster list (tag) would have its own worktree based on the name of the tag?
**Response:**
Yes, you can apply a similar approach used for separated task lists per branch to git worktrees by associating each taskmaster list (tag) with its own git worktree named after the tag. This means each tag would correspond to a dedicated worktree directory checked out to a branch or commit related to that tag, enabling isolated parallel work environments for each task list.
### How This Maps to Git Worktrees
- **Git worktrees** allow multiple working directories linked to the same repository, each checked out to a different branch or commit[1][2]. This lets you work on multiple branches simultaneously without switching contexts in a single directory.
- When you create a worktree with `git worktree add <path> [<branch>]`, Git creates a new linked working tree at `<path>`, optionally checking out a specified branch or creating a new one[1][2]. Naming the worktree directory after the tag (taskmaster list) is straightforward and semantically clear.
- This setup mirrors your existing branch-based task list separation but extends it to physical directories, which can improve workflow isolation and reduce context switching.
### Benefits for Your Project
- **Isolated environments per task list/tag:** Each tag's worktree can have its own checked-out branch, allowing independent development, testing, or review without interference.
- **Parallel workflows:** Developers can switch between worktrees (directories) instead of switching branches in one directory, reducing errors and improving productivity.
- **Automation potential:** Your `task-master git` commands can automate creating, switching, and cleaning up these worktrees based on tags, similar to how you manage branches and task lists.
### Challenges and Considerations
- **Complexity in management:** Git worktrees add complexity in tracking multiple directories and ensuring users operate in the correct one[1]. Your CLI should provide clear commands and feedback to avoid confusion.
- **Maintenance overhead:** Worktrees need pruning and cleanup (`git worktree prune` or `git worktree remove`) to avoid clutter[1]. Automating cleanup in your tool would be beneficial.
- **Syncing and updates:** Each worktree is a separate checkout and can become outdated. Your automation should include commands to update or rebase worktrees as needed.
- **Tooling support:** Not all Git tools or scripts fully support worktrees, so your CLI wrappers should handle edge cases and provide a consistent experience[1][3].
### Practical Implementation Ideas
- When a new tag (taskmaster list) is created, run `git worktree add` to create a new worktree directory named after the tag.
- Store metadata linking tags to worktree paths in your taskmaster configuration to track and manage them.
- Provide commands like `task-master worktree list`, `task-master worktree add <tag>`, and `task-master worktree remove <tag>` to manage these worktrees.
- Integrate with GitHub CLI (`gh`) and native git commands to automate branch creation, PR handling, and syncing within each worktree.
- Consider inspiration from existing tools like [wt](https://github.com/taecontrol/wt) that automate worktree environment setup and teardown[3].
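A minimal sketch of the tag-to-worktree mapping described above (the `.worktrees` directory and the branch-per-tag convention are assumptions, not existing Task Master behavior):
```typescript
import { execSync } from 'node:child_process';
import path from 'node:path';

// Create (or reset) a worktree named after a Task Master tag, backed by a branch of the same name.
export function addWorktreeForTag(repoRoot: string, tag: string, worktreesDir = '.worktrees'): string {
  const worktreePath = path.join(repoRoot, worktreesDir, tag);
  // -B creates the branch if it does not exist yet (or resets it if it does)
  execSync(`git worktree add -B ${tag} "${worktreePath}"`, { cwd: repoRoot, stdio: 'inherit' });
  return worktreePath;
}

// Remove a tag's worktree and prune stale administrative files.
export function removeWorktreeForTag(repoRoot: string, tag: string, worktreesDir = '.worktrees'): void {
  const worktreePath = path.join(repoRoot, worktreesDir, tag);
  execSync(`git worktree remove "${worktreePath}"`, { cwd: repoRoot, stdio: 'inherit' });
  execSync('git worktree prune', { cwd: repoRoot, stdio: 'inherit' });
}
```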
### Summary
Using git worktrees per taskmaster tag is a feasible and potentially powerful extension of your current branch-based task list separation. It offers isolated, parallel working directories aligned with your tags, improving workflow clarity and reducing branch-switching overhead. However, it requires careful tooling and automation to manage complexity, maintenance, and synchronization effectively. Given your project's focus on automating git workflows and leveraging CLI tools, implementing this feature incrementally with robust user feedback and cleanup commands would be a practical approach.
---
*Generated by Task Master Research Command*
*Timestamp: 2025-08-01T11:27:52.249Z*

View File

@@ -0,0 +1,8 @@
Simple Todo App PRD
Create a basic todo list application with the following features:
1. Add new todos
2. Mark todos as complete
3. Delete todos
That's it. Keep it simple.

View File

@@ -0,0 +1,343 @@
# Product Requirements Document: tm-core Package - Parse PRD Feature
## Project Overview
Create a TypeScript package named `tm-core` at `packages/tm-core` that implements parse-prd functionality using class-based architecture similar to the existing AI providers pattern.
## Design Patterns & Architecture
### Patterns to Apply
1. **Factory Pattern**: Use for `ProviderFactory` to create AI provider instances
2. **Strategy Pattern**: Use for `IAIProvider` implementations and `IStorage` implementations
3. **Facade Pattern**: Use for `TaskMasterCore` as the main API entry point
4. **Template Method Pattern**: Use for `BaseProvider` abstract class
5. **Dependency Injection**: Use throughout for testability (pass dependencies via constructor)
6. **Repository Pattern**: Use for `FileStorage` to abstract data persistence
### Naming Conventions
- **Files**: kebab-case (e.g., `task-parser.ts`, `file-storage.ts`)
- **Classes**: PascalCase (e.g., `TaskParser`, `FileStorage`)
- **Interfaces**: PascalCase with 'I' prefix (e.g., `IStorage`, `IAIProvider`)
- **Methods**: camelCase (e.g., `parsePRD`, `loadTasks`)
- **Constants**: UPPER_SNAKE_CASE (e.g., `DEFAULT_MODEL`)
- **Type aliases**: PascalCase (e.g., `TaskStatus`, `ParseOptions`)
## Exact Folder Structure Required
```
packages/tm-core/
├── src/
│ ├── index.ts
│ ├── types/
│ │ └── index.ts
│ ├── interfaces/
│ │ ├── index.ts # Barrel export
│ │ ├── storage.interface.ts
│ │ ├── ai-provider.interface.ts
│ │ └── configuration.interface.ts
│ ├── tasks/
│ │ ├── index.ts # Barrel export
│ │ └── task-parser.ts
│ ├── ai/
│ │ ├── index.ts # Barrel export
│ │ ├── base-provider.ts
│ │ ├── provider-factory.ts
│ │ ├── prompt-builder.ts
│ │ └── providers/
│ │ ├── index.ts # Barrel export
│ │ ├── anthropic-provider.ts
│ │ ├── openai-provider.ts
│ │ └── google-provider.ts
│ ├── storage/
│ │ ├── index.ts # Barrel export
│ │ └── file-storage.ts
│ ├── config/
│ │ ├── index.ts # Barrel export
│ │ └── config-manager.ts
│ ├── utils/
│ │ ├── index.ts # Barrel export
│ │ └── id-generator.ts
│ └── errors/
│ ├── index.ts # Barrel export
│ └── task-master-error.ts
├── tests/
│ ├── task-parser.test.ts
│ ├── integration/
│ │ └── parse-prd.test.ts
│ └── mocks/
│ └── mock-provider.ts
├── package.json
├── tsconfig.json
├── tsup.config.js
└── jest.config.js
```
## Specific Implementation Requirements
### 1. Create types/index.ts
Define these exact TypeScript interfaces:
- `Task` interface with fields: id, title, description, status, priority, complexity, dependencies, subtasks, metadata, createdAt, updatedAt, source
- `Subtask` interface with fields: id, title, description, completed
- `TaskMetadata` interface with fields: parsedFrom, aiProvider, version, tags (optional)
- Type literals: `TaskStatus` = 'pending' | 'in-progress' | 'completed' | 'blocked'
- Type literals: `TaskPriority` = 'low' | 'medium' | 'high' | 'critical'
- Type literals: `TaskComplexity` = 'simple' | 'moderate' | 'complex'
- `ParseOptions` interface with fields: dryRun (optional), additionalContext (optional), tag (optional), maxTasks (optional)
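A sketch of how these definitions could be written out (field types are reasonable guesses where the list above only names the fields):
```typescript
// types/index.ts (sketch)
export type TaskStatus = 'pending' | 'in-progress' | 'completed' | 'blocked';
export type TaskPriority = 'low' | 'medium' | 'high' | 'critical';
export type TaskComplexity = 'simple' | 'moderate' | 'complex';

export interface Subtask {
  id: string;
  title: string;
  description: string;
  completed: boolean;
}

export interface TaskMetadata {
  parsedFrom: string;
  aiProvider: string;
  version: string;
  tags?: string[];
}

export interface Task {
  id: string;
  title: string;
  description: string;
  status: TaskStatus;
  priority: TaskPriority;
  complexity: TaskComplexity;
  dependencies: string[];
  subtasks: Subtask[];
  metadata: TaskMetadata;
  createdAt: string;
  updatedAt: string;
  source: string;
}

export interface ParseOptions {
  dryRun?: boolean;
  additionalContext?: string;
  tag?: string;
  maxTasks?: number;
}
```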
### 2. Create interfaces/storage.interface.ts
Define `IStorage` interface with these exact methods:
- `loadTasks(tag?: string): Promise<Task[]>`
- `saveTasks(tasks: Task[], tag?: string): Promise<void>`
- `appendTasks(tasks: Task[], tag?: string): Promise<void>`
- `updateTask(id: string, task: Partial<Task>, tag?: string): Promise<void>`
- `deleteTask(id: string, tag?: string): Promise<void>`
- `exists(tag?: string): Promise<boolean>`
### 3. Create interfaces/ai-provider.interface.ts
Define `IAIProvider` interface with these exact methods:
- `generateCompletion(prompt: string, options?: AIOptions): Promise<string>`
- `calculateTokens(text: string): number`
- `getName(): string`
- `getModel(): string`
Define `AIOptions` interface with fields: temperature (optional), maxTokens (optional), systemPrompt (optional)
### 4. Create interfaces/configuration.interface.ts
Define `IConfiguration` interface with fields:
- `projectPath: string`
- `aiProvider: string`
- `apiKey?: string`
- `aiOptions?: AIOptions`
- `mainModel?: string`
- `researchModel?: string`
- `fallbackModel?: string`
- `tasksPath?: string`
- `enableTags?: boolean`
### 5. Create tasks/task-parser.ts
Create class `TaskParser` with:
- Constructor accepting `aiProvider: IAIProvider` and `config: IConfiguration`
- Private property `promptBuilder: PromptBuilder`
- Public method `parsePRD(prdPath: string, options: ParseOptions = {}): Promise<Task[]>`
- Private method `readPRD(prdPath: string): Promise<string>`
- Private method `extractTasks(aiResponse: string): Partial<Task>[]`
- Private method `enrichTasks(rawTasks: Partial<Task>[], prdPath: string): Task[]`
- Apply **Dependency Injection** pattern via constructor
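A skeleton of the class described above (method bodies are stubbed; the JSON-array response format and the no-argument `PromptBuilder` constructor are assumptions):
```typescript
// tasks/task-parser.ts (skeleton sketch)
import { readFile } from 'node:fs/promises';
import type { IAIProvider } from '../interfaces/ai-provider.interface.js';
import type { IConfiguration } from '../interfaces/configuration.interface.js';
import type { ParseOptions, Task } from '../types/index.js';
import { PromptBuilder } from '../ai/prompt-builder.js';

export class TaskParser {
  private promptBuilder: PromptBuilder;

  // Dependencies are injected via the constructor for testability
  constructor(
    private aiProvider: IAIProvider,
    private config: IConfiguration // reserved for model/option selection in the full implementation
  ) {
    this.promptBuilder = new PromptBuilder();
  }

  async parsePRD(prdPath: string, options: ParseOptions = {}): Promise<Task[]> {
    const prd = await this.readPRD(prdPath);
    const prompt = this.promptBuilder.buildParsePrompt(prd, options);
    const response = await this.aiProvider.generateCompletion(prompt);
    return this.enrichTasks(this.extractTasks(response), prdPath);
  }

  private async readPRD(prdPath: string): Promise<string> {
    return readFile(prdPath, 'utf-8');
  }

  private extractTasks(aiResponse: string): Partial<Task>[] {
    // Assumes the prompt instructs the model to return a JSON array of tasks
    return JSON.parse(aiResponse) as Partial<Task>[];
  }

  private enrichTasks(rawTasks: Partial<Task>[], prdPath: string): Task[] {
    // Fill in ids, timestamps, and metadata (parsedFrom: prdPath) here; elided in this sketch
    return rawTasks as Task[];
  }
}
```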
### 6. Create ai/base-provider.ts
Copy existing base-provider.js and convert to TypeScript abstract class:
- Abstract class `BaseProvider` implementing `IAIProvider`
- Protected properties: `apiKey: string`, `model: string`
- Constructor accepting `apiKey: string` and `options: { model?: string }`
- Abstract methods matching IAIProvider interface
- Abstract method `getDefaultModel(): string`
- Apply **Template Method** pattern for common provider logic
### 7. Create ai/provider-factory.ts
Create class `ProviderFactory` with:
- Static method `create(config: { provider: string; apiKey?: string; model?: string }): Promise<IAIProvider>`
- Switch statement for providers: 'anthropic', 'openai', 'google'
- Dynamic imports for each provider
- Throw error for unknown providers
- Apply **Factory** pattern for creating provider instances
Example implementation structure:
```typescript
switch (provider.toLowerCase()) {
case 'anthropic':
const { AnthropicProvider } = await import('./providers/anthropic-provider.js');
return new AnthropicProvider(apiKey, { model });
}
```
### 8. Create ai/providers/anthropic-provider.ts
Create class `AnthropicProvider` extending `BaseProvider`:
- Import Anthropic SDK: `import { Anthropic } from '@anthropic-ai/sdk'`
- Private property `client: Anthropic`
- Implement all abstract methods from BaseProvider
- Default model: 'claude-3-sonnet-20240229'
- Handle API errors and wrap with meaningful messages
### 9. Create ai/providers/openai-provider.ts (placeholder)
Create class `OpenAIProvider` extending `BaseProvider`:
- Import OpenAI SDK when implemented
- For now, throw error: "OpenAI provider not yet implemented"
### 10. Create ai/providers/google-provider.ts (placeholder)
Create class `GoogleProvider` extending `BaseProvider`:
- Import Google Generative AI SDK when implemented
- For now, throw error: "Google provider not yet implemented"
### 11. Create ai/prompt-builder.ts
Create class `PromptBuilder` with:
- Method `buildParsePrompt(prdContent: string, options: ParseOptions = {}): string`
- Method `buildExpandPrompt(task: string, context?: string): string`
- Use template literals for prompt construction
- Include specific JSON format instructions in prompts
### 12. Create storage/file-storage.ts
Create class `FileStorage` implementing `IStorage`:
- Private property `basePath: string` set to `{projectPath}/.taskmaster`
- Constructor accepting `projectPath: string`
- Private method `getTasksPath(tag?: string): string` returning correct path based on tag
- Private method `ensureDirectory(dir: string): Promise<void>`
- Implement all IStorage methods
- Handle ENOENT errors by returning empty arrays
- Use JSON format with structure: `{ tasks: Task[], metadata: { version: string, lastModified: string } }`
- Apply **Repository** pattern for data access abstraction
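For illustration, the tag-aware path resolution might look like the sketch below; the per-tag file naming is an assumption, since the spec above only fixes the `.taskmaster` base path:
```typescript
// storage/file-storage.ts — path-handling portion only (sketch)
import path from 'node:path';
import { mkdir } from 'node:fs/promises';

export class FileStoragePaths {
  private basePath: string;

  constructor(projectPath: string) {
    this.basePath = path.join(projectPath, '.taskmaster');
  }

  // Default list lives at tasks/tasks.json; tagged lists get their own file (assumed layout)
  getTasksPath(tag?: string): string {
    return tag
      ? path.join(this.basePath, 'tasks', `tasks.${tag}.json`)
      : path.join(this.basePath, 'tasks', 'tasks.json');
  }

  async ensureDirectory(dir: string): Promise<void> {
    await mkdir(dir, { recursive: true });
  }
}
```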
### 13. Create config/config-manager.ts
Create class `ConfigManager`:
- Private property `config: IConfiguration`
- Constructor accepting `options: Partial<IConfiguration>`
- Use Zod for validation with schema matching IConfiguration
- Method `get<K extends keyof IConfiguration>(key: K): IConfiguration[K]`
- Method `getAll(): IConfiguration`
- Method `validate(): boolean`
- Default values: projectPath = process.cwd(), aiProvider = 'anthropic', enableTags = true
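A hedged sketch of the Zod-backed manager (the schema mirrors IConfiguration, with the nested aiOptions omitted for brevity):
```typescript
// config/config-manager.ts (sketch)
import { z } from 'zod';

const configSchema = z.object({
  projectPath: z.string().default(process.cwd()),
  aiProvider: z.string().default('anthropic'),
  apiKey: z.string().optional(),
  mainModel: z.string().optional(),
  researchModel: z.string().optional(),
  fallbackModel: z.string().optional(),
  tasksPath: z.string().optional(),
  enableTags: z.boolean().default(true)
});

export type ValidatedConfiguration = z.infer<typeof configSchema>;

export class ConfigManager {
  private config: ValidatedConfiguration;

  constructor(options: Partial<ValidatedConfiguration> = {}) {
    // Throws a ZodError if the provided options do not match the schema
    this.config = configSchema.parse(options);
  }

  get<K extends keyof ValidatedConfiguration>(key: K): ValidatedConfiguration[K] {
    return this.config[key];
  }

  getAll(): ValidatedConfiguration {
    return { ...this.config };
  }

  validate(): boolean {
    return configSchema.safeParse(this.config).success;
  }
}
```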
### 14. Create utils/id-generator.ts
Export functions:
- `generateTaskId(index: number = 0): string` returning format `task_{timestamp}_{index}_{random}`
- `generateSubtaskId(parentId: string, index: number = 0): string` returning format `{parentId}_sub_{index}_{random}`
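For instance (the length of the random suffix is an assumption):
```typescript
// utils/id-generator.ts (sketch)
function randomSuffix(): string {
  return Math.random().toString(36).slice(2, 8);
}

export function generateTaskId(index: number = 0): string {
  return `task_${Date.now()}_${index}_${randomSuffix()}`;
}

export function generateSubtaskId(parentId: string, index: number = 0): string {
  return `${parentId}_sub_${index}_${randomSuffix()}`;
}
```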
### 16. Create src/index.ts
Create main class `TaskMasterCore`:
- Private properties: `config: ConfigManager`, `storage: IStorage`, `aiProvider?: IAIProvider`, `parser?: TaskParser`
- Constructor accepting `options: Partial<IConfiguration>`
- Method `initialize(): Promise<void>` for lazy loading
- Method `parsePRD(prdPath: string, options: ParseOptions = {}): Promise<Task[]>`
- Method `getTasks(tag?: string): Promise<Task[]>`
- Apply **Facade** pattern to provide simple API over complex subsystems
Export:
- Class `TaskMasterCore`
- Function `createTaskMaster(options: Partial<IConfiguration>): TaskMasterCore`
- All types from './types'
- All interfaces from './interfaces/*'
Import statements should use kebab-case:
```typescript
import { TaskParser } from './tasks/task-parser';
import { FileStorage } from './storage/file-storage';
import { ConfigManager } from './config/config-manager';
import { ProviderFactory } from './ai/provider-factory';
```
### 17. Configure package.json
Create package.json with:
- name: "@task-master/core"
- version: "0.1.0"
- type: "module"
- main: "./dist/index.js"
- module: "./dist/index.mjs"
- types: "./dist/index.d.ts"
- exports map for proper ESM/CJS support
- scripts: build (tsup), dev (tsup --watch), test (jest), typecheck (tsc --noEmit)
- dependencies: zod@^3.23.8
- peerDependencies: @anthropic-ai/sdk, openai, @google/generative-ai
- devDependencies: typescript, tsup, jest, ts-jest, @types/node, @types/jest
### 18. Configure TypeScript
Create tsconfig.json with:
- target: "ES2022"
- module: "ESNext"
- strict: true (with all strict flags enabled)
- declaration: true
- outDir: "./dist"
- rootDir: "./src"
### 19. Configure tsup
Create tsup.config.js with:
- entry: ['src/index.ts']
- format: ['cjs', 'esm']
- dts: true
- sourcemap: true
- clean: true
- external: AI provider SDKs
### 20. Configure Jest
Create jest.config.js with:
- preset: 'ts-jest'
- testEnvironment: 'node'
- Coverage threshold: 80% for all metrics
## Build Process
1. Use tsup to compile TypeScript to both CommonJS and ESM
2. Generate .d.ts files for TypeScript consumers
3. Output to dist/ directory
4. Ensure tree-shaking works properly
## Testing Requirements
- Create unit tests for TaskParser in tests/task-parser.test.ts
- Create MockProvider class in tests/mocks/mock-provider.ts for testing without API calls
- Test error scenarios (file not found, invalid JSON, etc.)
- Create integration test in tests/integration/parse-prd.test.ts
- Follow kebab-case naming for all test files
## Success Criteria
- TypeScript compilation with zero errors
- No use of 'any' type
- All interfaces properly exported
- Compatible with existing tasks.json format
- Feature flag support via USE_TM_CORE environment variable
## Import/Export Conventions
- Use named exports for all classes and interfaces
- Use barrel exports (index.ts) in each directory
- Import types/interfaces with type-only imports: `import type { Task } from '../types'`
- Group imports in order: Node built-ins, external packages, internal packages, relative imports
- Use .js extension in import paths for ESM compatibility
## Error Handling Patterns
- Create custom error classes in `src/errors/` directory
- All public methods should catch and wrap errors with context
- Use error codes for different error types (e.g., 'FILE_NOT_FOUND', 'PARSE_ERROR')
- Never expose internal implementation details in error messages
- Log errors to console.error only in development mode
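A minimal sketch of the custom error class these rules imply (the error-code list is illustrative, not exhaustive):
```typescript
// errors/task-master-error.ts (sketch)
export type TaskMasterErrorCode =
  | 'FILE_NOT_FOUND'
  | 'PARSE_ERROR'
  | 'PROVIDER_ERROR'
  | 'CONFIG_ERROR';

export class TaskMasterError extends Error {
  constructor(
    message: string,
    public readonly code: TaskMasterErrorCode,
    public readonly details?: unknown
  ) {
    super(message);
    this.name = 'TaskMasterError';
    // Surface internals only in development, per the rules above
    if (process.env.NODE_ENV === 'development') {
      console.error(`[${code}]`, details ?? message);
    }
  }
}
```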
## Barrel Exports Content
### interfaces/index.ts
```typescript
export type { IStorage } from './storage.interface';
export type { IAIProvider, AIOptions } from './ai-provider.interface';
export type { IConfiguration } from './configuration.interface';
```
### tasks/index.ts
```typescript
export { TaskParser } from './task-parser';
```
### ai/index.ts
```typescript
export { BaseProvider } from './base-provider';
export { ProviderFactory } from './provider-factory';
export { PromptBuilder } from './prompt-builder';
```
### ai/providers/index.ts
```typescript
export { AnthropicProvider } from './anthropic-provider';
export { OpenAIProvider } from './openai-provider';
export { GoogleProvider } from './google-provider';
```
### storage/index.ts
```typescript
export { FileStorage } from './file-storage';
```
### config/index.ts
```typescript
export { ConfigManager } from './config-manager';
```
### utils/index.ts
```typescript
export { generateTaskId, generateSubtaskId } from './id-generator';
```
### errors/index.ts
```typescript
export { TaskMasterError } from './task-master-error';
```

View File

@@ -1,373 +1,21 @@
{
"meta": {
"generatedAt": "2025-05-27T16:34:53.088Z",
"generatedAt": "2025-08-02T14:28:59.851Z",
"tasksAnalyzed": 1,
"totalTasks": 84,
"analysisCount": 45,
"totalTasks": 93,
"analysisCount": 1,
"thresholdScore": 5,
"projectName": "Taskmaster",
"usedResearch": true
"usedResearch": false
},
"complexityAnalysis": [
{
"taskId": 24,
"taskTitle": "Implement AI-Powered Test Generation Command",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the implementation of the AI-powered test generation command into detailed subtasks covering: command structure setup, AI prompt engineering, test file generation logic, integration with Claude API, and comprehensive error handling.",
"reasoning": "This task involves complex integration with an AI service (Claude), requires sophisticated prompt engineering, and needs to generate structured code files. The existing 3 subtasks are a good start but could be expanded to include more detailed steps for AI integration, error handling, and test file formatting."
},
{
"taskId": 26,
"taskTitle": "Implement Context Foundation for AI Operations",
"complexityScore": 6,
"recommendedSubtasks": 4,
"expansionPrompt": "The current 4 subtasks for implementing the context foundation appear comprehensive. Consider if any additional subtasks are needed for testing, documentation, or integration with existing systems.",
"reasoning": "This task involves creating a foundation for context integration with several well-defined components. The existing 4 subtasks cover the main implementation areas (context-file flag, cursor rules integration, context extraction utility, and command handler updates). The complexity is moderate as it requires careful integration with existing systems but has clear requirements."
},
{
"taskId": 27,
"taskTitle": "Implement Context Enhancements for AI Operations",
"complexityScore": 7,
"recommendedSubtasks": 4,
"expansionPrompt": "The current 4 subtasks for implementing context enhancements appear well-structured. Consider if any additional subtasks are needed for testing, documentation, or performance optimization.",
"reasoning": "This task builds upon the foundation from Task #26 and adds more sophisticated context handling features. The 4 existing subtasks cover the main implementation areas (code context extraction, task history context, PRD context integration, and context formatting). The complexity is higher than the foundation task due to the need for intelligent context selection and optimization."
},
{
"taskId": 28,
"taskTitle": "Implement Advanced ContextManager System",
"complexityScore": 8,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the advanced ContextManager system appear comprehensive. Consider if any additional subtasks are needed for testing, documentation, or backward compatibility with previous context implementations.",
"reasoning": "This task represents the most complex phase of the context implementation, requiring a sophisticated class design, optimization algorithms, and integration with multiple systems. The 5 existing subtasks cover the core implementation areas, but the complexity is high due to the need for intelligent context prioritization, token management, and performance monitoring."
},
{
"taskId": 40,
"taskTitle": "Implement 'plan' Command for Task Implementation Planning",
"complexityScore": 5,
"recommendedSubtasks": 4,
"expansionPrompt": "The current 4 subtasks for implementing the 'plan' command appear well-structured. Consider if any additional subtasks are needed for testing, documentation, or integration with existing task management workflows.",
"reasoning": "This task involves creating a new command that leverages AI to generate implementation plans. The existing 4 subtasks cover the main implementation areas (retrieving task content, generating plans with AI, formatting in XML, and error handling). The complexity is moderate as it builds on existing patterns for task updates but requires careful AI integration."
},
{
"taskId": 41,
"taskTitle": "Implement Visual Task Dependency Graph in Terminal",
"complexityScore": 8,
"recommendedSubtasks": 10,
"expansionPrompt": "The current 10 subtasks for implementing the visual task dependency graph appear comprehensive. Consider if any additional subtasks are needed for performance optimization with large graphs or additional visualization options.",
"reasoning": "This task involves creating a sophisticated visualization system for terminal display, which is inherently complex due to layout algorithms, ASCII/Unicode rendering, and handling complex dependency relationships. The 10 existing subtasks cover all major aspects of implementation, from CLI interface to accessibility features."
},
{
"taskId": 42,
"taskTitle": "Implement MCP-to-MCP Communication Protocol",
"complexityScore": 9,
"recommendedSubtasks": 8,
"expansionPrompt": "The current 8 subtasks for implementing the MCP-to-MCP communication protocol appear well-structured. Consider if any additional subtasks are needed for security hardening, performance optimization, or comprehensive documentation.",
"reasoning": "This task involves designing and implementing a complex communication protocol between different MCP tools and servers. It requires sophisticated adapter patterns, client-server architecture, and handling of multiple operational modes. The complexity is very high due to the need for standardization, security, and backward compatibility."
},
{
"taskId": 44,
"taskTitle": "Implement Task Automation with Webhooks and Event Triggers",
"complexityScore": 8,
"recommendedSubtasks": 7,
"expansionPrompt": "The current 7 subtasks for implementing task automation with webhooks appear comprehensive. Consider if any additional subtasks are needed for security testing, rate limiting implementation, or webhook monitoring tools.",
"reasoning": "This task involves creating a sophisticated event system with webhooks for integration with external services. The complexity is high due to the need for secure authentication, reliable delivery mechanisms, and handling of various webhook formats and protocols. The existing subtasks cover the main implementation areas but security and monitoring could be emphasized more."
},
{
"taskId": 45,
"taskTitle": "Implement GitHub Issue Import Feature",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the GitHub issue import feature appear well-structured. Consider if any additional subtasks are needed for handling GitHub API rate limiting, caching, or supporting additional issue metadata.",
"reasoning": "This task involves integrating with the GitHub API to import issues as tasks. The complexity is moderate as it requires API authentication, data mapping, and error handling. The existing 5 subtasks cover the main implementation areas from design to end-to-end implementation."
},
{
"taskId": 46,
"taskTitle": "Implement ICE Analysis Command for Task Prioritization",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the ICE analysis command appear comprehensive. Consider if any additional subtasks are needed for visualization of ICE scores or integration with other prioritization methods.",
"reasoning": "This task involves creating an AI-powered analysis system for task prioritization using the ICE methodology. The complexity is high due to the need for sophisticated scoring algorithms, AI integration, and report generation. The existing subtasks cover the main implementation areas from algorithm design to integration with existing systems."
},
{
"taskId": 47,
"taskTitle": "Enhance Task Suggestion Actions Card Workflow",
"complexityScore": 6,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for enhancing the task suggestion actions card workflow appear well-structured. Consider if any additional subtasks are needed for user testing, accessibility improvements, or performance optimization.",
"reasoning": "This task involves redesigning the UI workflow for task expansion and management. The complexity is moderate as it requires careful UX design and state management but builds on existing components. The 6 existing subtasks cover the main implementation areas from design to testing."
},
{
"taskId": 48,
"taskTitle": "Refactor Prompts into Centralized Structure",
"complexityScore": 4,
"recommendedSubtasks": 3,
"expansionPrompt": "The current 3 subtasks for refactoring prompts into a centralized structure appear appropriate. Consider if any additional subtasks are needed for prompt versioning, documentation, or testing.",
"reasoning": "This task involves a straightforward refactoring to improve code organization. The complexity is relatively low as it primarily involves moving code rather than creating new functionality. The 3 existing subtasks cover the main implementation areas from directory structure to integration."
},
{
"taskId": 49,
"taskTitle": "Implement Code Quality Analysis Command",
"complexityScore": 8,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for implementing the code quality analysis command appear comprehensive. Consider if any additional subtasks are needed for performance optimization with large codebases or integration with existing code quality tools.",
"reasoning": "This task involves creating a sophisticated code analysis system with pattern recognition, best practice verification, and AI-powered recommendations. The complexity is high due to the need for code parsing, complex analysis algorithms, and integration with AI services. The existing subtasks cover the main implementation areas from algorithm design to user interface."
},
{
"taskId": 50,
"taskTitle": "Implement Test Coverage Tracking System by Task",
"complexityScore": 9,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the test coverage tracking system appear well-structured. Consider if any additional subtasks are needed for integration with CI/CD systems, performance optimization, or visualization tools.",
"reasoning": "This task involves creating a complex system that maps test coverage to specific tasks and subtasks. The complexity is very high due to the need for sophisticated data structures, integration with coverage tools, and AI-powered test generation. The existing subtasks are comprehensive and cover the main implementation areas from data structure design to AI integration."
},
{
"taskId": 51,
"taskTitle": "Implement Perplexity Research Command",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the Perplexity research command appear comprehensive. Consider if any additional subtasks are needed for caching optimization, result formatting, or integration with other research tools.",
"reasoning": "This task involves creating a new command that integrates with the Perplexity AI API for research. The complexity is moderate as it requires API integration, context extraction, and result formatting. The 5 existing subtasks cover the main implementation areas from API client to caching system."
},
{
"taskId": 52,
"taskTitle": "Implement Task Suggestion Command for CLI",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the task suggestion command appear well-structured. Consider if any additional subtasks are needed for suggestion quality evaluation, user feedback collection, or integration with existing task workflows.",
"reasoning": "This task involves creating a new CLI command that generates contextually relevant task suggestions using AI. The complexity is moderate as it requires AI integration, context collection, and interactive CLI interfaces. The existing subtasks cover the main implementation areas from data collection to user interface."
},
{
"taskId": 53,
"taskTitle": "Implement Subtask Suggestion Feature for Parent Tasks",
"complexityScore": 6,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for implementing the subtask suggestion feature appear comprehensive. Consider if any additional subtasks are needed for suggestion quality metrics, user feedback collection, or performance optimization.",
"reasoning": "This task involves creating a feature that suggests contextually relevant subtasks for parent tasks. The complexity is moderate as it builds on existing task management systems but requires sophisticated AI integration and context analysis. The 6 existing subtasks cover the main implementation areas from validation to testing."
},
{
"taskId": 55,
"taskTitle": "Implement Positional Arguments Support for CLI Commands",
"complexityScore": 5,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing positional arguments support appear well-structured. Consider if any additional subtasks are needed for backward compatibility testing, documentation updates, or user experience improvements.",
"reasoning": "This task involves modifying the command parsing logic to support positional arguments alongside the existing flag-based syntax. The complexity is moderate as it requires careful handling of different argument styles and edge cases. The 5 existing subtasks cover the main implementation areas from analysis to documentation."
},
{
"taskId": 57,
"taskTitle": "Enhance Task-Master CLI User Experience and Interface",
"complexityScore": 7,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for enhancing the CLI user experience appear comprehensive. Consider if any additional subtasks are needed for accessibility testing, internationalization, or performance optimization.",
"reasoning": "This task involves a significant overhaul of the CLI interface to improve user experience. The complexity is high due to the breadth of changes (logging, visual elements, interactive components, etc.) and the need for consistent design across all commands. The 6 existing subtasks cover the main implementation areas from log management to help systems."
},
{
"taskId": 60,
"taskTitle": "Implement Mentor System with Round-Table Discussion Feature",
"complexityScore": 8,
"recommendedSubtasks": 7,
"expansionPrompt": "The current 7 subtasks for implementing the mentor system appear well-structured. Consider if any additional subtasks are needed for mentor personality consistency, discussion quality evaluation, or performance optimization with multiple mentors.",
"reasoning": "This task involves creating a sophisticated mentor simulation system with round-table discussions. The complexity is high due to the need for personality simulation, complex LLM integration, and structured discussion management. The 7 existing subtasks cover the main implementation areas from architecture to testing."
},
{
"taskId": 62,
"taskTitle": "Add --simple Flag to Update Commands for Direct Text Input",
"complexityScore": 4,
"recommendedSubtasks": 8,
"expansionPrompt": "The current 8 subtasks for implementing the --simple flag appear comprehensive. Consider if any additional subtasks are needed for user experience testing or documentation updates.",
"reasoning": "This task involves adding a simple flag option to bypass AI processing for updates. The complexity is relatively low as it primarily involves modifying existing command handlers and adding a flag. The 8 existing subtasks are very detailed and cover all aspects of implementation from command parsing to testing."
},
{
"taskId": 63,
"taskTitle": "Add pnpm Support for the Taskmaster Package",
"complexityScore": 5,
"recommendedSubtasks": 8,
"expansionPrompt": "The current 8 subtasks for adding pnpm support appear comprehensive. Consider if any additional subtasks are needed for CI/CD integration, performance comparison, or documentation updates.",
"reasoning": "This task involves ensuring the package works correctly with pnpm as an alternative package manager. The complexity is moderate as it requires careful testing of installation processes and scripts across different environments. The 8 existing subtasks cover all major aspects from documentation to binary verification."
},
{
"taskId": 64,
"taskTitle": "Add Yarn Support for Taskmaster Installation",
"complexityScore": 5,
"recommendedSubtasks": 9,
"expansionPrompt": "The current 9 subtasks for adding Yarn support appear comprehensive. Consider if any additional subtasks are needed for performance testing, CI/CD integration, or compatibility with different Yarn versions.",
"reasoning": "This task involves ensuring the package works correctly with Yarn as an alternative package manager. The complexity is moderate as it requires careful testing of installation processes and scripts across different environments. The 9 existing subtasks are very detailed and cover all aspects from configuration to testing."
},
{
"taskId": 65,
"taskTitle": "Add Bun Support for Taskmaster Installation",
"complexityScore": 6,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for adding Bun support appear well-structured. Consider if any additional subtasks are needed for handling Bun-specific issues, performance testing, or documentation updates.",
"reasoning": "This task involves adding support for the newer Bun package manager. The complexity is slightly higher than the other package manager tasks due to Bun's differences from Node.js and potential compatibility issues. The 6 existing subtasks cover the main implementation areas from research to documentation."
},
{
"taskId": 67,
"taskTitle": "Add CLI JSON output and Cursor keybindings integration",
"complexityScore": 5,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing JSON output and Cursor keybindings appear well-structured. Consider if any additional subtasks are needed for testing across different operating systems, documentation updates, or user experience improvements.",
"reasoning": "This task involves two distinct features: adding JSON output to CLI commands and creating a keybindings installation command. The complexity is moderate as it requires careful handling of different output formats and OS-specific file paths. The 5 existing subtasks cover the main implementation areas for both features."
},
{
"taskId": 68,
"taskTitle": "Ability to create tasks without parsing PRD",
"complexityScore": 3,
"recommendedSubtasks": 2,
"expansionPrompt": "The current 2 subtasks for implementing task creation without PRD appear appropriate. Consider if any additional subtasks are needed for validation, error handling, or integration with existing task management workflows.",
"reasoning": "This task involves a relatively simple modification to allow task creation without requiring a PRD document. The complexity is low as it primarily involves creating a form interface and saving functionality. The 2 existing subtasks cover the main implementation areas of UI design and data saving."
},
{
"taskId": 72,
"taskTitle": "Implement PDF Generation for Project Progress and Dependency Overview",
"complexityScore": 7,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for implementing PDF generation appear comprehensive. Consider if any additional subtasks are needed for handling large projects, additional visualization options, or integration with existing reporting tools.",
"reasoning": "This task involves creating a feature to generate PDF reports of project progress and dependency visualization. The complexity is high due to the need for PDF generation, data collection, and visualization integration. The 6 existing subtasks cover the main implementation areas from library selection to export options."
},
{
"taskId": 75,
"taskTitle": "Integrate Google Search Grounding for Research Role",
"complexityScore": 5,
"recommendedSubtasks": 4,
"expansionPrompt": "The current 4 subtasks for integrating Google Search Grounding appear well-structured. Consider if any additional subtasks are needed for testing with different query types, error handling, or performance optimization.",
"reasoning": "This task involves updating the AI service layer to enable Google Search Grounding for research roles. The complexity is moderate as it requires careful integration with the existing AI service architecture and conditional logic. The 4 existing subtasks cover the main implementation areas from service layer modification to testing."
},
{
"taskId": 76,
"taskTitle": "Develop E2E Test Framework for Taskmaster MCP Server (FastMCP over stdio)",
"complexityScore": 8,
"recommendedSubtasks": 7,
"expansionPrompt": "The current 7 subtasks for developing the E2E test framework appear comprehensive. Consider if any additional subtasks are needed for test result reporting, CI/CD integration, or performance benchmarking.",
"reasoning": "This task involves creating a sophisticated end-to-end testing framework for the MCP server. The complexity is high due to the need for subprocess management, protocol handling, and robust test case definition. The 7 existing subtasks cover the main implementation areas from architecture to documentation."
},
{
"taskId": 77,
"taskTitle": "Implement AI Usage Telemetry for Taskmaster (with external analytics endpoint)",
"complexityScore": 7,
"recommendedSubtasks": 18,
"expansionPrompt": "The current 18 subtasks for implementing AI usage telemetry appear very comprehensive. Consider if any additional subtasks are needed for security hardening, privacy compliance, or user feedback collection.",
"reasoning": "This task involves creating a telemetry system to track AI usage metrics. The complexity is high due to the need for secure data transmission, comprehensive data collection, and integration across multiple commands. The 18 existing subtasks are extremely detailed and cover all aspects of implementation from core utility to provider-specific updates."
},
{
"taskId": 80,
"taskTitle": "Implement Unique User ID Generation and Storage During Installation",
"complexityScore": 4,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing unique user ID generation appear well-structured. Consider if any additional subtasks are needed for privacy compliance, security auditing, or integration with the telemetry system.",
"reasoning": "This task involves generating and storing a unique user identifier during installation. The complexity is relatively low as it primarily involves UUID generation and configuration file management. The 5 existing subtasks cover the main implementation areas from script structure to documentation."
},
{
"taskId": 81,
"taskTitle": "Task #81: Implement Comprehensive Local Telemetry System with Future Server Integration Capability",
"complexityScore": 8,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for implementing the comprehensive local telemetry system appear well-structured. Consider if any additional subtasks are needed for data migration, storage optimization, or visualization tools.",
"reasoning": "This task involves expanding the telemetry system to capture additional metrics and implement local storage with future server integration capability. The complexity is high due to the breadth of data collection, storage requirements, and privacy considerations. The 6 existing subtasks cover the main implementation areas from data collection to user-facing benefits."
},
{
"taskId": 82,
"taskTitle": "Update supported-models.json with token limit fields",
"complexityScore": 3,
"recommendedSubtasks": 1,
"expansionPrompt": "This task appears straightforward enough to be implemented without further subtasks. Focus on researching accurate token limit values for each model and ensuring backward compatibility.",
"reasoning": "This task involves a simple update to the supported-models.json file to include new token limit fields. The complexity is low as it primarily involves research and data entry. No subtasks are necessary as the task is well-defined and focused."
},
{
"taskId": 83,
"taskTitle": "Update config-manager.js defaults and getters",
"complexityScore": 4,
"recommendedSubtasks": 1,
"expansionPrompt": "This task appears straightforward enough to be implemented without further subtasks. Focus on updating the DEFAULTS object and related getter functions while maintaining backward compatibility.",
"reasoning": "This task involves updating the config-manager.js module to replace maxTokens with more specific token limit fields. The complexity is relatively low as it primarily involves modifying existing code rather than creating new functionality. No subtasks are necessary as the task is well-defined and focused."
},
{
"taskId": 84,
"taskTitle": "Implement token counting utility",
"complexityScore": 5,
"recommendedSubtasks": 1,
"expansionPrompt": "This task appears well-defined enough to be implemented without further subtasks. Focus on implementing accurate token counting for different models and proper fallback mechanisms.",
"reasoning": "This task involves creating a utility function to count tokens for different AI models. The complexity is moderate as it requires integration with the tiktoken library and handling different tokenization schemes. No subtasks are necessary as the task is well-defined and focused."
},
{
"taskId": 69,
"taskTitle": "Enhance Analyze Complexity for Specific Task IDs",
"complexityScore": 7,
"recommendedSubtasks": 6,
"expansionPrompt": "Break down the task 'Enhance Analyze Complexity for Specific Task IDs' into 6 subtasks focusing on: 1) Core logic modification to accept ID parameters, 2) Report merging functionality, 3) CLI interface updates, 4) MCP tool integration, 5) Documentation updates, and 6) Comprehensive testing across all components.",
"reasoning": "This task involves modifying existing functionality across multiple components (core logic, CLI, MCP) with complex logic for filtering tasks and merging reports. The implementation requires careful handling of different parameter combinations and edge cases. The task has interdependent components that need to work together seamlessly, and the report merging functionality adds significant complexity."
},
{
"taskId": 70,
"taskTitle": "Implement 'diagram' command for Mermaid diagram generation",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the 'diagram' command implementation into 5 subtasks: 1) Command interface and parameter handling, 2) Task data extraction and transformation to Mermaid syntax, 3) Diagram rendering with status color coding, 4) Output formatting and file export functionality, and 5) Error handling and edge case management.",
"reasoning": "This task requires implementing a new feature rather than modifying existing code, which reduces complexity from integration challenges. However, it involves working with visualization logic, dependency mapping, and multiple output formats. The color coding based on status and handling of dependency relationships adds moderate complexity. The task is well-defined but requires careful attention to diagram formatting and error handling."
},
{
"taskId": 85,
"taskTitle": "Update ai-services-unified.js for dynamic token limits",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the update of ai-services-unified.js for dynamic token limits into subtasks such as: (1) Import and integrate the token counting utility, (2) Refactor _unifiedServiceRunner to calculate and enforce dynamic token limits, (3) Update error handling for token limit violations, (4) Add and verify logging for token usage, (5) Write and execute tests for various prompt and model scenarios.",
"reasoning": "This task involves significant code changes to a core function, integration of a new utility, dynamic logic for multiple models, and robust error handling. It also requires comprehensive testing for edge cases and integration, making it moderately complex and best managed by splitting into focused subtasks."
},
{
"taskId": 87,
"taskTitle": "Implement validation and error handling",
"complexityScore": 5,
"recommendedSubtasks": 4,
"expansionPrompt": "Decompose this task into: (1) Add validation logic for model and config loading, (2) Implement error handling and fallback mechanisms, (3) Enhance logging and reporting for token usage, (4) Develop helper functions for configuration suggestions and improvements.",
"reasoning": "This task is primarily about adding validation, error handling, and logging. While important for robustness, the logic is straightforward and can be modularized into a few clear subtasks."
},
{
"taskId": 89,
"taskTitle": "Introduce Prioritize Command with Enhanced Priority Levels",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "Expand this task into: (1) Implement the prioritize command with all required flags and shorthands, (2) Update CLI output and help documentation for new priority levels, (3) Ensure backward compatibility with existing commands, (4) Add error handling for invalid inputs, (5) Write and run tests for all command scenarios.",
"reasoning": "This CLI feature requires command parsing, updating internal logic for new priority levels, documentation, and robust error handling. The complexity is moderate due to the need for backward compatibility and comprehensive testing."
},
{
"taskId": 90,
"taskTitle": "Implement Subtask Progress Analyzer and Reporting System",
"complexityScore": 8,
"recommendedSubtasks": 6,
"expansionPrompt": "Break down the analyzer implementation into: (1) Design and implement progress tracking logic, (2) Develop status validation and issue detection, (3) Build the reporting system with multiple output formats, (4) Integrate analyzer with the existing task management system, (5) Optimize for performance and scalability, (6) Write unit, integration, and performance tests.",
"reasoning": "This is a complex, multi-faceted feature involving data analysis, reporting, integration, and performance optimization. It touches many parts of the system and requires careful design, making it one of the most complex tasks in the list."
},
{
"taskId": 91,
"taskTitle": "Implement Move Command for Tasks and Subtasks",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Expand this task into: (1) Implement move logic for tasks and subtasks, (2) Handle edge cases (invalid ids, non-existent parents, circular dependencies), (3) Update CLI to support move command with flags, (4) Ensure data integrity and update relationships, (5) Write and execute tests for various move scenarios.",
"reasoning": "Moving tasks and subtasks requires careful handling of hierarchical data, edge cases, and data integrity. The command must be robust and user-friendly, necessitating multiple focused subtasks for safe implementation."
},
{
"taskId": 92,
"taskTitle": "Add Global Joke Flag to All CLI Commands",
"complexityScore": 8,
"recommendedSubtasks": 7,
"expansionPrompt": "Break down the implementation of the global --joke flag into the following subtasks: (1) Update CLI foundation to support global flags, (2) Develop the joke-service module with joke management and category support, (3) Integrate joke output into existing output utilities, (4) Update all CLI commands for joke flag compatibility, (5) Add configuration options for joke categories and custom jokes, (6) Implement comprehensive testing (flag recognition, output, content, integration, performance, regression), (7) Update documentation and usage examples.",
"reasoning": "This task requires changes across the CLI foundation, output utilities, all command modules, and configuration management. It introduces a new service module, global flag handling, and output logic that must not interfere with existing features (including JSON output). The need for robust testing and backward compatibility further increases complexity. The scope spans multiple code areas and requires careful integration, justifying a high complexity score and a detailed subtask breakdown to manage risk and ensure maintainability.[2][3][5]"
},
{
"taskId": 94,
"taskTitle": "Implement Standalone 'research' CLI Command for AI-Powered Queries",
"complexityScore": 7,
"recommendedSubtasks": 6,
"expansionPrompt": "Break down the implementation of the 'research' CLI command into logical subtasks covering command registration, parameter handling, context gathering, AI service integration, output formatting, and documentation.",
"reasoning": "This task has moderate to high complexity (7/10) due to multiple interconnected components: CLI argument parsing, integration with AI services, context gathering from various sources, and output formatting with different modes. The cyclomatic complexity would be significant with multiple decision paths for handling different flags and options. The task requires understanding existing patterns and extending the codebase in a consistent manner, suggesting the need for careful decomposition into manageable subtasks."
},
{
"taskId": 86,
"taskTitle": "Implement GitHub Issue Export Feature",
"complexityScore": 9,
"recommendedSubtasks": 10,
"expansionPrompt": "Break down the implementation of the GitHub Issue Export Feature into detailed subtasks covering: command structure and CLI integration, GitHub API client development, authentication and error handling, task-to-issue mapping logic, content formatting and markdown conversion, bidirectional linking and metadata management, extensible architecture and adapter interfaces, configuration and settings management, documentation, and comprehensive testing (unit, integration, edge cases, performance).",
"reasoning": "This task involves designing and implementing a robust, extensible export system with deep integration into GitHub, including bidirectional workflows, complex data mapping, error handling, and support for future platforms. The requirements span CLI design, API integration, content transformation, metadata management, extensibility, configuration, and extensive testing. The breadth and depth of these requirements, along with the need for maintainability and future extensibility, place this task at a high complexity level. Breaking it into at least 10 subtasks will ensure each major component and concern is addressed systematically, reducing risk and improving quality."
"expansionPrompt": "Expand task 24 'Implement AI-Powered Test Generation Command' into 6 subtasks, focusing on: 1) Command structure implementation, 2) AI prompt engineering for test generation, 3) Test file generation and output, 4) Framework-specific template implementation, 5) MCP tool integration, and 6) Documentation and help system integration. Include detailed implementation steps, dependencies, and testing approaches for each subtask.",
"reasoning": "This task has high complexity due to several challenging aspects: 1) AI integration requiring sophisticated prompt engineering, 2) Test generation across multiple frameworks, 3) File system operations with proper error handling, 4) MCP tool integration, 5) Complex configuration requirements, and 6) Framework-specific template generation. The task already has 5 subtasks but could benefit from reorganization based on the updated implementation details in the info blocks, particularly around framework support and configuration."
}
]
}
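Task 84 above calls for a token counting utility built on the tiktoken library with fallback behavior. As a rough illustration only (not the project's implementation; the model-to-encoding mapping and the character-based fallback heuristic here are assumptions), such a utility could look like:

```typescript
import { encoding_for_model, get_encoding, type TiktokenModel } from 'tiktoken';

// Count tokens for a given model, falling back to a generic encoding or a
// rough character-based estimate when the model is unknown to tiktoken.
export function countTokens(text: string, model?: string): number {
	try {
		const enc = model
			? encoding_for_model(model as TiktokenModel)
			: get_encoding('cl100k_base');
		try {
			return enc.encode(text).length;
		} finally {
			enc.free(); // tiktoken encodings are WASM-backed and must be freed
		}
	} catch {
		// Unknown model or missing encoding data: assume roughly 4 characters per token
		return Math.ceil(text.length / 4);
	}
}
```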

View File

@@ -0,0 +1,77 @@
{
"meta": {
"generatedAt": "2025-08-06T12:39:03.250Z",
"tasksAnalyzed": 8,
"totalTasks": 11,
"analysisCount": 8,
"thresholdScore": 5,
"projectName": "Taskmaster",
"usedResearch": false
},
"complexityAnalysis": [
{
"taskId": 118,
"taskTitle": "Create AI Provider Base Architecture",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the implementation of BaseProvider abstract TypeScript class into subtasks focusing on: 1) Converting existing JavaScript base-provider.js to TypeScript with proper interface definitions, 2) Implementing the Template Method pattern with abstract methods, 3) Adding comprehensive error handling and retry logic with exponential backoff, 4) Creating proper TypeScript types for all method signatures and options, 5) Setting up comprehensive unit tests with MockProvider. Consider that the existing codebase uses JavaScript ES modules and Vercel AI SDK, so the TypeScript implementation needs to maintain compatibility while adding type safety.",
"reasoning": "This task requires significant architectural work including converting existing JavaScript code to TypeScript, creating new interfaces, implementing design patterns, and ensuring backward compatibility. The existing base-provider.js already implements a sophisticated provider pattern using Vercel AI SDK, so the TypeScript conversion needs careful consideration of type definitions and maintaining existing functionality."
},
{
"taskId": 119,
"taskTitle": "Implement Provider Factory with Dynamic Imports",
"complexityScore": 5,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the Provider Factory implementation into: 1) Creating the ProviderFactory class structure with proper TypeScript typing, 2) Implementing the switch statement for provider selection logic, 3) Adding dynamic imports for each provider to enable tree-shaking, 4) Handling provider instantiation with configuration passing, 5) Implementing comprehensive error handling for module loading failures. Note that the existing codebase already has a provider selection mechanism in the JavaScript files, so ensure the factory pattern integrates smoothly with existing infrastructure.",
"reasoning": "This is a moderate complexity task that involves creating a factory pattern with dynamic imports. The existing codebase already has provider management logic, so the main complexity is in creating a clean TypeScript implementation with proper dynamic imports while maintaining compatibility with the existing JavaScript module system."
},
{
"taskId": 120,
"taskTitle": "Implement Anthropic Provider",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "Implement the AnthropicProvider class in stages: 1) Set up the class structure extending BaseProvider with proper TypeScript imports and type definitions, 2) Implement constructor with Anthropic SDK client initialization and configuration handling, 3) Implement generateCompletion method with proper message format transformation and error handling, 4) Add token calculation methods and utility functions (getName, getModel, getDefaultModel), 5) Implement comprehensive error handling with custom error wrapping and type exports. The existing anthropic.js provider can serve as a reference but needs to be reimplemented to extend the new TypeScript BaseProvider.",
"reasoning": "This task involves integrating with an external SDK (@anthropic-ai/sdk) and implementing all abstract methods from BaseProvider. The existing JavaScript implementation provides a good reference, but the TypeScript version needs proper type definitions, error handling, and must work with the new abstract base class architecture."
},
{
"taskId": 121,
"taskTitle": "Create Prompt Builder and Task Parser",
"complexityScore": 8,
"recommendedSubtasks": 5,
"expansionPrompt": "Implement PromptBuilder and TaskParser with focus on: 1) Creating PromptBuilder class with template methods for building structured prompts with JSON format instructions, 2) Implementing TaskParser class structure with dependency injection of IAIProvider and IConfiguration, 3) Implementing parsePRD method with file reading, prompt generation, and AI provider integration, 4) Adding task enrichment logic with metadata, validation, and structure verification, 5) Implementing comprehensive error handling for all failure scenarios including file I/O, AI provider errors, and JSON parsing. The existing parse-prd.js provides complex logic that needs to be reimplemented with proper TypeScript types and cleaner architecture.",
"reasoning": "This is a complex task that involves multiple components working together: file I/O, AI provider integration, JSON parsing, and data validation. The existing parse-prd.js implementation is quite sophisticated with Zod schemas and complex task processing logic that needs to be reimplemented in TypeScript with proper separation of concerns."
},
{
"taskId": 122,
"taskTitle": "Implement Configuration Management",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "Create ConfigManager implementation focusing on: 1) Setting up Zod validation schema that matches the IConfiguration interface structure, 2) Implementing ConfigManager constructor with default values merging and storage initialization, 3) Creating validate method with Zod schema parsing and user-friendly error transformation, 4) Implementing type-safe get method using TypeScript generics and keyof operator, 5) Adding getAll method and ensuring proper immutability and module exports. The existing config-manager.js has complex configuration loading logic that can inform the TypeScript implementation but needs cleaner architecture.",
"reasoning": "This task involves creating a configuration management system with validation using Zod. The existing JavaScript config-manager.js is quite complex with multiple configuration sources, defaults, and validation logic. The TypeScript version needs to provide a cleaner API while maintaining the flexibility of the current system."
},
{
"taskId": 123,
"taskTitle": "Create Utility Functions and Error Handling",
"complexityScore": 4,
"recommendedSubtasks": 5,
"expansionPrompt": "Implement utilities and error handling in stages: 1) Create ID generation module with generateTaskId and generateSubtaskId functions using proper random generation, 2) Implement base TaskMasterError class extending Error with proper TypeScript typing, 3) Add error sanitization methods to prevent sensitive data exposure in production, 4) Implement development-only logging with environment detection, 5) Create specialized error subclasses (FileNotFoundError, ParseError, ValidationError, APIError) with appropriate error codes and formatting.",
"reasoning": "This is a relatively straightforward task involving utility functions and error class hierarchies. The main complexity is in ensuring proper error sanitization for production use and creating a well-structured error hierarchy that can be used throughout the application."
},
{
"taskId": 124,
"taskTitle": "Implement TaskMasterCore Facade",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Build TaskMasterCore facade implementation: 1) Create class structure with proper TypeScript imports and type definitions for all subsystem interfaces, 2) Implement initialize method for lazy loading AI provider and parser instances based on configuration, 3) Create parsePRD method that coordinates parser, AI provider, and storage subsystems, 4) Implement getTasks and other facade methods for task retrieval and management, 5) Create createTaskMaster factory function and set up all module exports including type re-exports. Ensure proper ESM compatibility with .js extensions in imports.",
"reasoning": "This is a complex integration task that brings together all the other components into a cohesive facade. It requires understanding of the facade pattern, proper dependency management, lazy initialization, and careful module export structure for the public API."
},
{
"taskId": 125,
"taskTitle": "Create Placeholder Providers and Complete Testing",
"complexityScore": 5,
"recommendedSubtasks": 5,
"expansionPrompt": "Complete the implementation with placeholders and testing: 1) Create OpenAIProvider placeholder class extending BaseProvider with 'not yet implemented' errors, 2) Create GoogleProvider placeholder class with similar structure, 3) Implement MockProvider in tests/mocks directory with configurable responses and behavior simulation, 4) Write comprehensive unit tests for TaskParser covering all methods and edge cases, 5) Create integration tests for the complete parse-prd workflow ensuring 80% code coverage. Follow kebab-case naming convention for test files.",
"reasoning": "This task involves creating placeholder implementations and a comprehensive test suite. While the placeholder providers are simple, creating a good MockProvider and comprehensive tests requires understanding the entire system architecture and ensuring all edge cases are covered."
}
]
}
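The analysis of task 118 above calls for retry logic with exponential backoff in the provider base class. A minimal, generic sketch of that pattern (illustrative only; the attempt count, delays, and error filter are assumptions, not the project's actual values):

```typescript
interface RetryOptions {
	attempts?: number;
	baseDelayMs?: number;
	isRetryable?: (error: unknown) => boolean;
}

// Retry an async operation with exponential backoff and a little jitter.
async function withRetry<T>(
	operation: () => Promise<T>,
	{ attempts = 3, baseDelayMs = 500, isRetryable = () => true }: RetryOptions = {}
): Promise<T> {
	let lastError: unknown;
	for (let attempt = 0; attempt < attempts; attempt++) {
		try {
			return await operation();
		} catch (error) {
			lastError = error;
			// Give up on the last attempt or when the error is not worth retrying
			if (attempt === attempts - 1 || !isRetryable(error)) break;
			// 500ms, 1s, 2s, ... plus jitter to avoid thundering herds
			const delay = baseDelayMs * 2 ** attempt + Math.random() * 100;
			await new Promise((resolve) => setTimeout(resolve, delay));
		}
	}
	throw lastError;
}
```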

View File

@@ -0,0 +1,77 @@
{
"meta": {
"generatedAt": "2025-08-06T12:15:01.327Z",
"tasksAnalyzed": 8,
"totalTasks": 11,
"analysisCount": 8,
"thresholdScore": 5,
"projectName": "Taskmaster",
"usedResearch": false
},
"complexityAnalysis": [
{
"taskId": 118,
"taskTitle": "Create AI Provider Base Architecture",
"complexityScore": 4,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the conversion of base-provider.js to TypeScript BaseProvider class: 1) Convert to TypeScript and define IAIProvider interface, 2) Implement abstract class with core properties, 3) Define abstract methods and Template Method pattern, 4) Add retry logic with exponential backoff, 5) Implement validation and logging. Focus on maintaining compatibility with existing provider pattern while adding type safety.",
"reasoning": "The codebase already has a well-established BaseAIProvider class in JavaScript. Converting to TypeScript mainly involves adding type definitions and ensuring the existing pattern is preserved. The complexity is moderate because the pattern is already proven in the codebase."
},
{
"taskId": 119,
"taskTitle": "Implement Provider Factory with Dynamic Imports",
"complexityScore": 3,
"recommendedSubtasks": 5,
"expansionPrompt": "Create ProviderFactory implementation: 1) Set up class structure and types, 2) Implement provider selection switch statement, 3) Add dynamic imports for tree-shaking, 4) Handle provider instantiation with config, 5) Add comprehensive error handling. The existing PROVIDERS registry pattern should guide the implementation.",
"reasoning": "The codebase already uses a dual registry pattern (static PROVIDERS and dynamic ProviderRegistry). Creating a factory is straightforward as the provider registration patterns are well-established. Dynamic imports are already used in the codebase."
},
{
"taskId": 120,
"taskTitle": "Implement Anthropic Provider",
"complexityScore": 3,
"recommendedSubtasks": 5,
"expansionPrompt": "Implement AnthropicProvider following existing patterns: 1) Create class structure with imports, 2) Implement constructor and client initialization, 3) Add generateCompletion with Claude API integration, 4) Implement token calculation and utility methods, 5) Add error handling and exports. Use the existing anthropic.js provider as reference.",
"reasoning": "AnthropicProvider already exists in the codebase with full implementation. This task essentially involves adapting the existing implementation to match the new TypeScript architecture, making it relatively straightforward."
},
{
"taskId": 121,
"taskTitle": "Create Prompt Builder and Task Parser",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "Build prompt system and parser: 1) Create PromptBuilder with template methods, 2) Implement TaskParser with dependency injection, 3) Add parsePRD core logic with file reading, 4) Implement task enrichment and metadata, 5) Add comprehensive error handling. Leverage the existing prompt management system in src/prompts/.",
"reasoning": "While the codebase has a sophisticated prompt management system, creating a new PromptBuilder and TaskParser requires understanding the existing prompt templates, JSON schema validation, and integration with the AI provider system. The task involves significant new code."
},
{
"taskId": 122,
"taskTitle": "Implement Configuration Management",
"complexityScore": 5,
"recommendedSubtasks": 5,
"expansionPrompt": "Create ConfigManager with validation: 1) Define Zod schema for IConfiguration, 2) Implement constructor with defaults, 3) Add validate method with error handling, 4) Create type-safe get method with generics, 5) Implement getAll and finalize exports. Reference existing config-manager.js for patterns.",
"reasoning": "The codebase has an existing config-manager.js with sophisticated configuration handling. Adding Zod validation and TypeScript generics adds complexity, but the existing patterns provide a solid foundation."
},
{
"taskId": 123,
"taskTitle": "Create Utility Functions and Error Handling",
"complexityScore": 2,
"recommendedSubtasks": 5,
"expansionPrompt": "Implement utilities and error handling: 1) Create ID generation module with unique formats, 2) Build TaskMasterError base class, 3) Add error sanitization for security, 4) Implement development-only logging, 5) Create specialized error subclasses. Keep implementation simple and focused.",
"reasoning": "This is a straightforward utility implementation task. The codebase already has error handling patterns, and ID generation is a simple algorithmic task. The main work is creating clean, reusable utilities."
},
{
"taskId": 124,
"taskTitle": "Implement TaskMasterCore Facade",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Create main facade class: 1) Set up TaskMasterCore structure with imports, 2) Implement lazy initialization logic, 3) Add parsePRD coordination method, 4) Implement getTasks and other facade methods, 5) Create factory function and exports. This ties together all other components into a cohesive API.",
"reasoning": "This is the most complex task as it requires understanding and integrating all other components. The facade must coordinate between configuration, providers, storage, and parsing while maintaining a clean API. It's the architectural keystone of the system."
},
{
"taskId": 125,
"taskTitle": "Create Placeholder Providers and Complete Testing",
"complexityScore": 5,
"recommendedSubtasks": 5,
"expansionPrompt": "Implement testing infrastructure: 1) Create OpenAIProvider placeholder, 2) Create GoogleProvider placeholder, 3) Build MockProvider for testing, 4) Write TaskParser unit tests, 5) Create integration tests for parse-prd flow. Follow the existing test patterns in tests/ directory.",
"reasoning": "While creating placeholder providers is simple, the testing infrastructure requires understanding Jest with ES modules, mocking patterns, and comprehensive test coverage. The existing test structure provides good examples to follow."
}
]
}

View File

@@ -1,6 +1,6 @@
{
"currentTag": "master",
"lastSwitched": "2025-07-22T13:32:03.558Z",
"lastSwitched": "2025-09-12T22:25:27.535Z",
"branchTagMapping": {
"v017-adds": "v017-adds",
"next": "next"

View File

@@ -0,0 +1,34 @@
# Task ID: 1
# Title: Create start command class structure
# Status: pending
# Dependencies: None
# Priority: high
# Description: Create the basic structure for the start command following the Commander class pattern
# Details:
Create a new file `apps/cli/src/commands/start.command.ts` based on the existing list.command.ts pattern. Implement the command class with proper command registration, description, and argument handling for the task_id parameter. The class should extend the shared BaseCommand class and implement the required methods.
Example structure:
```typescript
import { Command } from 'commander';
import { BaseCommand } from './base.command';

export class StartCommand extends BaseCommand {
	public register(program: Command): void {
		program
			.command('start')
			.alias('tm start')
			.description('Start implementing a task using claude-code')
			.argument('<task_id>', 'ID of the task to start')
			.action(async (taskId: string) => {
				await this.execute(taskId);
			});
	}

	public async execute(taskId: string): Promise<void> {
		// Implementation will be added in subsequent tasks
	}
}
```
# Test Strategy:
Verify the command registers correctly by running the CLI with --help and checking that the start command appears with proper description and arguments. Test the basic structure by ensuring the command can be invoked without errors.
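A sketch of such a registration check (assuming Jest and a no-argument StartCommand constructor; adjust the import path to the actual file locations):

```typescript
import { Command } from 'commander';
import { StartCommand } from '../src/commands/start.command';

describe('start command registration', () => {
	it('registers a start command with the task_id argument', () => {
		const program = new Command();
		new StartCommand().register(program);

		// commander exposes registered subcommands on program.commands
		const start = program.commands.find((cmd) => cmd.name() === 'start');
		expect(start).toBeDefined();
		expect(start?.description()).toContain('claude-code');
	});
});
```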

View File

@@ -0,0 +1,26 @@
# Task ID: 2
# Title: Register start command in CLI
# Status: pending
# Dependencies: 7
# Priority: high
# Description: Register the start command in the CLI application
# Details:
Update the CLI application to register the new start command. This involves importing the StartCommand class and adding it to the commands array in the CLI initialization.
In `apps/cli/src/index.ts` or the appropriate file where commands are registered:
```typescript
import { StartCommand } from './commands/start.command';

// Add StartCommand to the commands array
const commands = [
	// ... existing commands
	new StartCommand(),
];

// Register all commands
commands.forEach((command) => command.register(program));
```
# Test Strategy:
Verify the command is correctly registered by running the CLI with --help and checking that the start command appears in the list of available commands.

View File

@@ -0,0 +1,32 @@
# Task ID: 3
# Title: Create standardized prompt builder
# Status: pending
# Dependencies: 1
# Priority: medium
# Description: Implement a function to build the standardized prompt for claude-code based on the task details
# Details:
Create a function in the StartCommand class that builds the standardized prompt according to the template provided in the PRD. The prompt should include instructions for Claude to first run `tm show <task_id>` to get task details, and then implement the required changes.
```typescript
private buildPrompt(taskId: string): string {
return `You are an AI coding assistant with access to this repository's codebase.
First, run this command to get the task details:
tm show ${taskId}
Then implement the task with these requirements:
- Make the SMALLEST number of code changes possible
- Follow ALL existing patterns in the codebase (you have access to analyze the code)
- Do NOT over-engineer the solution
- Use existing files/functions/patterns wherever possible
- When complete, print: COMPLETED: <brief summary of changes>
Begin by running tm show ${taskId} to understand what needs to be implemented.`;
}
```
<info added on 2025-09-12T02:40:01.812Z>
The prompt builder function will handle task context retrieval by instructing Claude to use the task-master show command. This approach ensures Claude has access to all necessary task details before implementation begins. The command syntax "tm show ${taskId}" embedded in the prompt will direct Claude to first gather the complete task context, including description, requirements, and any existing implementation details, before proceeding with code changes.
</info added on 2025-09-12T02:40:01.812Z>
# Test Strategy:
Verify the prompt is correctly formatted by calling the function with a sample task ID and checking that the output matches the expected template with the task ID properly inserted.
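For example (a rough Jest sketch; the `as any` cast is only needed if buildPrompt stays private):

```typescript
import { StartCommand } from '../src/commands/start.command';

it('inserts the task id into the standardized prompt', () => {
	const prompt = (new StartCommand() as any).buildPrompt('42');

	// The prompt should tell Claude to fetch the task and report completion
	expect(prompt).toContain('tm show 42');
	expect(prompt).toContain('COMPLETED:');
});
```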

View File

@@ -0,0 +1,36 @@
# Task ID: 4
# Title: Implement claude-code executor
# Status: pending
# Dependencies: 3
# Priority: high
# Description: Add functionality to execute the claude-code command with the built prompt
# Details:
Implement the functionality to execute the claude command with the built prompt. This should use Node.js `child_process.execSync()` so the CLI waits for the Claude session to complete while streaming its output to the terminal via `stdio: 'inherit'`.
```typescript
import { execSync } from 'child_process';

// Inside the StartCommand class; called from execute() after task validation
private async executeClaude(prompt: string): Promise<void> {
	console.log('Starting claude-code to implement the task...');
	try {
		// Escape double quotes so the prompt survives shell quoting
		const claudeCommand = `claude "${prompt.replace(/"/g, '\\"')}"`;
		// execSync blocks until the interactive claude session completes
		execSync(claudeCommand, { stdio: 'inherit' });
		console.log('Claude session completed.');
	} catch (error) {
		console.error('Error executing claude-code:', (error as Error).message);
		process.exit(1);
	}
}
```
Then call this method from the execute method after building the prompt.
# Test Strategy:
Test by running the command with a valid task ID and verifying that the claude command is executed with the correct prompt. Check that the command handles errors appropriately if claude-code is not available.
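One way to exercise this without launching a real Claude session is to mock `child_process` (a Jest sketch, assuming a CommonJS-style Jest setup where `jest.mock` hoisting applies and that `executeClaude` uses the top-level `execSync` import shown above):

```typescript
import { execSync } from 'child_process';
import { StartCommand } from '../src/commands/start.command';

jest.mock('child_process', () => ({ execSync: jest.fn() }));

it('passes the escaped prompt to the claude CLI', async () => {
	await (new StartCommand() as any).executeClaude('implement task 42');

	expect(execSync).toHaveBeenCalledWith('claude "implement task 42"', {
		stdio: 'inherit'
	});
});
```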

View File

@@ -0,0 +1,49 @@
# Task ID: 7
# Title: Integrate execution flow in start command
# Status: pending
# Dependencies: 3, 4
# Priority: high
# Description: Connect all the components to implement the complete execution flow for the start command
# Details:
Update the execute method in the StartCommand class to integrate all the components and implement the complete execution flow as described in the PRD:
1. Validate task exists
2. Build standardized prompt
3. Execute claude-code
4. Check git status for changes
5. Auto-mark task as done if changes detected
```typescript
public async execute(taskId: string): Promise<void> {
	// Validate task exists
	const core = await createTaskMasterCore();
	const task = await core.tasks.getById(parseInt(taskId, 10));
	if (!task) {
		console.error(`Task with ID ${taskId} not found`);
		process.exit(1);
	}

	// Build prompt
	const prompt = this.buildPrompt(taskId);

	// Execute claude-code
	await this.executeClaude(prompt);

	// Check git status
	const changedFiles = await this.checkGitChanges();
	if (changedFiles.length > 0) {
		console.log('\nChanges detected in the following files:');
		changedFiles.forEach((file) => console.log(`- ${file}`));

		// Auto-mark task as done
		await this.markTaskAsDone(taskId);
		console.log(`\nTask ${taskId} completed successfully and marked as done.`);
	} else {
		console.warn('\nNo changes detected after claude-code execution. Task not marked as done.');
	}
}
```
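The flow above also relies on `checkGitChanges` and `markTaskAsDone`, which are not defined in this task. A minimal sketch of the git check, shown as a standalone function using `git status --porcelain` (adapting it into the class and wiring `markTaskAsDone` to the actual status-update API is left to the task itself):

```typescript
import { execSync } from 'child_process';

// Returns the paths of files that git currently reports as changed.
export function checkGitChanges(): string[] {
	const output = execSync('git status --porcelain', { encoding: 'utf8' });
	return output
		.split('\n')
		.filter((line) => line.trim().length > 0)
		// porcelain lines look like " M path/to/file"; strip the 2-char status and space
		.map((line) => line.slice(3));
}
```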
# Test Strategy:
Test the complete execution flow by running the start command with a valid task ID and verifying that all steps are executed correctly. Test with both scenarios: when changes are detected and when no changes are detected.

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,511 @@
<rpg-method>
# Repository Planning Graph (RPG) Method - PRD Template
This template teaches you (AI or human) how to create structured, dependency-aware PRDs using the RPG methodology from Microsoft Research. The key insight: separate WHAT (functional) from HOW (structural), then connect them with explicit dependencies.
## Core Principles
1. **Dual-Semantics**: Think functional (capabilities) AND structural (code organization) separately, then map them
2. **Explicit Dependencies**: Never assume - always state what depends on what
3. **Topological Order**: Build foundation first, then layers on top
4. **Progressive Refinement**: Start broad, refine iteratively
## How to Use This Template
- Follow the instructions in each `<instruction>` block
- Look at `<example>` blocks to see good vs bad patterns
- Fill in the content sections with your project details
- The AI reading this will learn the RPG method by following along
- Task Master will parse the resulting PRD into dependency-aware tasks
## Recommended Tools for Creating PRDs
When using this template to **create** a PRD (not parse it), use **code-context-aware AI assistants** for best results:
**Why?** The AI needs to understand your existing codebase to make good architectural decisions about modules, dependencies, and integration points.
**Recommended tools:**
- **Claude Code** (claude-code CLI) - Best for structured reasoning and large contexts
- **Cursor/Windsurf** - IDE integration with full codebase context
- **Gemini CLI** (gemini-cli) - Massive context window for large codebases
- **Codex/Grok CLI** - Strong code generation with context awareness
**Note:** Once your PRD is created, `task-master parse-prd` works with any configured AI model - it just needs to read the PRD text itself, not your codebase.
</rpg-method>
---
<overview>
<instruction>
Start with the problem, not the solution. Be specific about:
- What pain point exists?
- Who experiences it?
- Why don't existing solutions work?
- What does success look like (measurable outcomes)?
Keep this section focused - don't jump into implementation details yet.
</instruction>
## Problem Statement
[Describe the core problem. Be concrete about user pain points.]
## Target Users
[Define personas, their workflows, and what they're trying to achieve.]
## Success Metrics
[Quantifiable outcomes. Examples: "80% task completion via autopilot", "< 5% manual intervention rate"]
</overview>
---
<functional-decomposition>
<instruction>
Now think about CAPABILITIES (what the system DOES), not code structure yet.
Step 1: Identify high-level capability domains
- Think: "What major things does this system do?"
- Examples: Data Management, Core Processing, Presentation Layer
Step 2: For each capability, enumerate specific features
- Use explore-exploit strategy:
* Exploit: What features are REQUIRED for core value?
* Explore: What features make this domain COMPLETE?
Step 3: For each feature, define:
- Description: What it does in one sentence
- Inputs: What data/context it needs
- Outputs: What it produces/returns
- Behavior: Key logic or transformations
<example type="good">
Capability: Data Validation
Feature: Schema validation
- Description: Validate JSON payloads against defined schemas
- Inputs: JSON object, schema definition
- Outputs: Validation result (pass/fail) + error details
- Behavior: Iterate fields, check types, enforce constraints
Feature: Business rule validation
- Description: Apply domain-specific validation rules
- Inputs: Validated data object, rule set
- Outputs: Boolean + list of violated rules
- Behavior: Execute rules sequentially, short-circuit on failure
</example>
<example type="bad">
Capability: validation.js
(Problem: This is a FILE, not a CAPABILITY. Mixing structure into functional thinking.)
Capability: Validation
Feature: Make sure data is good
(Problem: Too vague. No inputs/outputs. Not actionable.)
</example>
</instruction>
## Capability Tree
### Capability: [Name]
[Brief description of what this capability domain covers]
#### Feature: [Name]
- **Description**: [One sentence]
- **Inputs**: [What it needs]
- **Outputs**: [What it produces]
- **Behavior**: [Key logic]
#### Feature: [Name]
- **Description**:
- **Inputs**:
- **Outputs**:
- **Behavior**:
### Capability: [Name]
...
</functional-decomposition>
---
<structural-decomposition>
<instruction>
NOW think about code organization. Map capabilities to actual file/folder structure.
Rules:
1. Each capability maps to a module (folder or file)
2. Features within a capability map to functions/classes
3. Use clear module boundaries - each module has ONE responsibility
4. Define what each module exports (public interface)
The goal: Create a clear mapping between "what it does" (functional) and "where it lives" (structural).
<example type="good">
Capability: Data Validation
→ Maps to: src/validation/
├── schema-validator.js (Schema validation feature)
├── rule-validator.js (Business rule validation feature)
└── index.js (Public exports)
Exports:
- validateSchema(data, schema)
- validateRules(data, rules)
</example>
<example type="bad">
Capability: Data Validation
→ Maps to: src/utils.js
(Problem: "utils" is not a clear module boundary. Where do I find validation logic?)
Capability: Data Validation
→ Maps to: src/validation/everything.js
(Problem: One giant file. Features should map to separate files for maintainability.)
</example>
</instruction>
## Repository Structure
```
project-root/
├── src/
│ ├── [module-name]/ # Maps to: [Capability Name]
│ │ ├── [file].js # Maps to: [Feature Name]
│ │ └── index.js # Public exports
│ └── [module-name]/
├── tests/
└── docs/
```
## Module Definitions
### Module: [Name]
- **Maps to capability**: [Capability from functional decomposition]
- **Responsibility**: [Single clear purpose]
- **File structure**:
```
module-name/
├── feature1.js
├── feature2.js
└── index.js
```
- **Exports**:
- `functionName()` - [what it does]
- `ClassName` - [what it does]
</structural-decomposition>
---
<dependency-graph>
<instruction>
This is THE CRITICAL SECTION for Task Master parsing.
Define explicit dependencies between modules. This creates the topological order for task execution.
Rules:
1. List modules in dependency order (foundation first)
2. For each module, state what it depends on
3. Foundation modules should have NO dependencies
4. Every non-foundation module should depend on at least one other module
5. Think: "What must EXIST before I can build this module?"
<example type="good">
Foundation Layer (no dependencies):
- error-handling: No dependencies
- config-manager: No dependencies
- base-types: No dependencies
Data Layer:
- schema-validator: Depends on [base-types, error-handling]
- data-ingestion: Depends on [schema-validator, config-manager]
Core Layer:
- algorithm-engine: Depends on [base-types, error-handling]
- pipeline-orchestrator: Depends on [algorithm-engine, data-ingestion]
</example>
<example type="bad">
- validation: Depends on API
- API: Depends on validation
(Problem: Circular dependency. This will cause build/runtime issues.)
- user-auth: Depends on everything
(Problem: Too many dependencies. Should be more focused.)
</example>
</instruction>
## Dependency Chain
### Foundation Layer (Phase 0)
No dependencies - these are built first.
- **[Module Name]**: [What it provides]
- **[Module Name]**: [What it provides]
### [Layer Name] (Phase 1)
- **[Module Name]**: Depends on [[module-from-phase-0], [module-from-phase-0]]
- **[Module Name]**: Depends on [[module-from-phase-0]]
### [Layer Name] (Phase 2)
- **[Module Name]**: Depends on [[module-from-phase-1], [module-from-foundation]]
[Continue building up layers...]
</dependency-graph>
---
<implementation-roadmap>
<instruction>
Turn the dependency graph into concrete development phases.
Each phase should:
1. Have clear entry criteria (what must exist before starting)
2. Contain tasks that can be parallelized (no inter-dependencies within phase)
3. Have clear exit criteria (how do we know phase is complete?)
4. Build toward something USABLE (not just infrastructure)
Phase ordering follows topological sort of dependency graph.
<example type="good">
Phase 0: Foundation
Entry: Clean repository
Tasks:
- Implement error handling utilities
- Create base type definitions
- Setup configuration system
Exit: Other modules can import foundation without errors
Phase 1: Data Layer
Entry: Phase 0 complete
Tasks:
- Implement schema validator (uses: base types, error handling)
- Build data ingestion pipeline (uses: validator, config)
Exit: End-to-end data flow from input to validated output
</example>
<example type="bad">
Phase 1: Build Everything
Tasks:
- API
- Database
- UI
- Tests
(Problem: No clear focus. Too broad. Dependencies not considered.)
</example>
</instruction>
## Development Phases
### Phase 0: [Foundation Name]
**Goal**: [What foundational capability this establishes]
**Entry Criteria**: [What must be true before starting]
**Tasks**:
- [ ] [Task name] (depends on: [none or list])
- Acceptance criteria: [How we know it's done]
- Test strategy: [What tests prove it works]
- [ ] [Task name] (depends on: [none or list])
**Exit Criteria**: [Observable outcome that proves phase complete]
**Delivers**: [What can users/developers do after this phase?]
---
### Phase 1: [Layer Name]
**Goal**:
**Entry Criteria**: Phase 0 complete
**Tasks**:
- [ ] [Task name] (depends on: [[tasks-from-phase-0]])
- [ ] [Task name] (depends on: [[tasks-from-phase-0]])
**Exit Criteria**:
**Delivers**:
---
[Continue with more phases...]
</implementation-roadmap>
---
<test-strategy>
<instruction>
Define how testing will be integrated throughout development (TDD approach).
Specify:
1. Test pyramid ratios (unit vs integration vs e2e)
2. Coverage requirements
3. Critical test scenarios
4. Test generation guidelines for Surgical Test Generator
This section guides the AI when generating tests during the RED phase of TDD.
<example type="good">
Critical Test Scenarios for Data Validation module:
- Happy path: Valid data passes all checks
- Edge cases: Empty strings, null values, boundary numbers
- Error cases: Invalid types, missing required fields
- Integration: Validator works with ingestion pipeline
</example>
</instruction>
## Test Pyramid
```
         /\
        /E2E\          ← [X]% (End-to-end, slow, comprehensive)
      /------\
    /Integration\      ← [Y]% (Module interactions)
   /------------\
   / Unit Tests \      ← [Z]% (Fast, isolated, deterministic)
 /----------------\
```
## Coverage Requirements
- Line coverage: [X]% minimum
- Branch coverage: [X]% minimum
- Function coverage: [X]% minimum
- Statement coverage: [X]% minimum
## Critical Test Scenarios
### [Module/Feature Name]
**Happy path**:
- [Scenario description]
- Expected: [What should happen]
**Edge cases**:
- [Scenario description]
- Expected: [What should happen]
**Error cases**:
- [Scenario description]
- Expected: [How system handles failure]
**Integration points**:
- [What interactions to test]
- Expected: [End-to-end behavior]
## Test Generation Guidelines
[Specific instructions for Surgical Test Generator about what to focus on, what patterns to follow, project-specific test conventions]
</test-strategy>
---
<architecture>
<instruction>
Describe technical architecture, data models, and key design decisions.
Keep this section AFTER functional/structural decomposition - implementation details come after understanding structure.
</instruction>
## System Components
[Major architectural pieces and their responsibilities]
## Data Models
[Core data structures, schemas, database design]
## Technology Stack
[Languages, frameworks, key libraries]
**Decision: [Technology/Pattern]**
- **Rationale**: [Why chosen]
- **Trade-offs**: [What we're giving up]
- **Alternatives considered**: [What else we looked at]
</architecture>
---
<risks>
<instruction>
Identify risks that could derail development and how to mitigate them.
Categories:
- Technical risks (complexity, unknowns)
- Dependency risks (blocking issues)
- Scope risks (creep, underestimation)
</instruction>
## Technical Risks
**Risk**: [Description]
- **Impact**: [High/Medium/Low - effect on project]
- **Likelihood**: [High/Medium/Low]
- **Mitigation**: [How to address]
- **Fallback**: [Plan B if mitigation fails]
## Dependency Risks
[External dependencies, blocking issues]
## Scope Risks
[Scope creep, underestimation, unclear requirements]
</risks>
---
<appendix>
## References
[Papers, documentation, similar systems]
## Glossary
[Domain-specific terms]
## Open Questions
[Things to resolve during development]
</appendix>
---
<task-master-integration>
# How Task Master Uses This PRD
When you run `task-master parse-prd <file>.txt`, the parser:
1. **Extracts capabilities** → Main tasks
- Each `### Capability:` becomes a top-level task
2. **Extracts features** → Subtasks
- Each `#### Feature:` becomes a subtask under its capability
3. **Parses dependencies** → Task dependencies
- `Depends on: [X, Y]` sets task.dependencies = ["X", "Y"]
4. **Orders by phases** → Task priorities
- Phase 0 tasks = highest priority
- Phase N tasks = lower priority, properly sequenced
5. **Uses test strategy** → Test generation context
- Feeds test scenarios to Surgical Test Generator during implementation
**Result**: A dependency-aware task graph that can be executed in topological order.
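To make "topological order" concrete, here is a tiny illustrative walk (not Task Master's internal code; the `Task` shape is deliberately simplified) that repeatedly picks tasks whose dependencies are already satisfied:

```typescript
interface Task {
	id: string;
	dependencies: string[];
}

// Kahn-style ordering: emit tasks whose dependencies have all been emitted.
function topologicalOrder(tasks: Task[]): string[] {
	const remaining = new Map(
		tasks.map((t): [string, Set<string>] => [t.id, new Set(t.dependencies)])
	);
	const order: string[] = [];
	while (remaining.size > 0) {
		const ready = [...remaining]
			.filter(([, deps]) => deps.size === 0)
			.map(([id]) => id);
		if (ready.length === 0) {
			throw new Error('Circular dependency detected');
		}
		for (const id of ready) {
			order.push(id);
			remaining.delete(id);
			for (const deps of remaining.values()) {
				deps.delete(id);
			}
		}
	}
	return order;
}
```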
## Why RPG Structure Matters
Traditional flat PRDs lead to:
- ❌ Unclear task dependencies
- ❌ Arbitrary task ordering
- ❌ Circular dependencies discovered late
- ❌ Poorly scoped tasks
RPG-structured PRDs provide:
- ✅ Explicit dependency chains
- ✅ Topological execution order
- ✅ Clear module boundaries
- ✅ Validated task graph before implementation
## Tips for Best Results
1. **Spend time on dependency graph** - This is the most valuable section for Task Master
2. **Keep features atomic** - Each feature should be independently testable
3. **Progressive refinement** - Start broad, use `task-master expand` to break down complex tasks
4. **Use research mode** - `task-master parse-prd --research` leverages AI for better task generation
</task-master-integration>

15
.vscode/settings.json vendored
View File

@@ -10,5 +10,18 @@
},
"json.format.enable": true,
"json.validate.enable": true
"json.validate.enable": true,
"typescript.tsdk": "node_modules/typescript/lib",
"[typescript]": {
"editor.defaultFormatter": "biomejs.biome"
},
"[typescriptreact]": {
"editor.defaultFormatter": "biomejs.biome"
},
"[javascript]": {
"editor.defaultFormatter": "biomejs.biome"
},
"[json]": {
"editor.defaultFormatter": "biomejs.biome"
}
}

File diff suppressed because it is too large

View File

@@ -3,3 +3,29 @@
## Task Master AI Instructions
**Import Task Master's development workflow commands and guidelines, treat as if import is in the main CLAUDE.md file.**
@./.taskmaster/CLAUDE.md
## Test Guidelines
### Synchronous Tests
- **NEVER use async/await in test functions** unless testing actual asynchronous operations
- Use synchronous top-level imports instead of dynamic `await import()`
- Test bodies should be synchronous whenever possible
- Example:
```javascript
// ✅ CORRECT - Synchronous imports
import { MyClass } from '../src/my-class.js';

it('should verify behavior', () => {
	expect(new MyClass().property).toBe(value);
});

// ❌ INCORRECT - Async imports
it('should verify behavior', async () => {
	const { MyClass } = await import('../src/my-class.js');
	expect(new MyClass().property).toBe(value);
});
```
## Changeset Guidelines
- When creating changesets, remember that they are user-facing: instead of describing code specifics, describe what the end user gains or what is fixed by the change (see the example below).
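
For instance, a user-facing changeset might read (the package name is taken from this repo; the wording is only an example):

```md
---
"task-master-ai": patch
---

The new `start` command now marks a task as done only when file changes are detected after the Claude session.
```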

140
CLAUDE_CODE_PLUGIN.md Normal file
View File

@@ -0,0 +1,140 @@
# Taskmaster AI - Claude Code Marketplace
This repository includes a Claude Code plugin marketplace in `.claude-plugin/marketplace.json`.
## Installation
### From GitHub (Public Repository)
Once this repository is pushed to GitHub, users can install with:
```bash
# Add the marketplace
/plugin marketplace add eyaltoledano/claude-task-master
# Install the plugin
/plugin install taskmaster@taskmaster
```
### Local Development/Testing
```bash
# From the project root directory
cd /path/to/claude-task-master
# Build the plugin first
cd packages/claude-code-plugin
npm run build
cd ../..
# In Claude Code
/plugin marketplace add .
/plugin install taskmaster@taskmaster
```
## Marketplace Structure
```
claude-task-master/
├── .claude-plugin/
│ └── marketplace.json # Marketplace manifest (at repo root)
├── packages/claude-code-plugin/
│ ├── src/build.ts # Build tooling
│ └── [generated plugin files]
└── assets/claude/ # Plugin source files
├── commands/
└── agents/
```
## Available Plugins
### taskmaster
AI-powered task management system for ambitious development workflows.
**Features:**
- 49 slash commands for comprehensive task management
- 3 specialized AI agents (orchestrator, executor, checker)
- MCP server integration
- Complexity analysis and auto-expansion
- Dependency management and validation
- Automated workflow capabilities
**Quick Start:**
```bash
/tm:init
/tm:parse-prd
/tm:next
```
## For Contributors
### Adding New Plugins
To add more plugins to this marketplace:
1. **Update marketplace.json**:
```json
{
	"plugins": [
		{
			"name": "new-plugin",
			"source": "./path/to/plugin",
			"description": "Plugin description",
			"version": "1.0.0"
		}
	]
}
```
2. **Commit and push** the changes
3. **Users update** with: `/plugin marketplace update taskmaster`
### Marketplace Versioning
The marketplace version is tracked in `.claude-plugin/marketplace.json`:
```json
{
	"metadata": {
		"version": "1.0.0"
	}
}
```
Increment the version when adding or updating plugins.
## Team Configuration
Organizations can auto-install this marketplace for all team members by adding to `.claude/settings.json`:
```json
{
	"extraKnownMarketplaces": {
		"task-master": {
			"source": {
				"source": "github",
				"repo": "eyaltoledano/claude-task-master"
			}
		}
	},
	"enabledPlugins": {
		"taskmaster": {
			"marketplace": "taskmaster"
		}
	}
}
```
Team members who trust the repository folder will automatically get the marketplace and plugins installed.
## Documentation
- [Claude Code Plugin Docs](https://docs.claude.com/en/docs/claude-code/plugins)
- [Marketplace Documentation](https://docs.claude.com/en/docs/claude-code/plugin-marketplaces)

View File

@@ -1,14 +1,39 @@
# Task Master [![GitHub stars](https://img.shields.io/github/stars/eyaltoledano/claude-task-master?style=social)](https://github.com/eyaltoledano/claude-task-master/stargazers)
<a name="readme-top"></a>
[![CI](https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml/badge.svg)](https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml) [![npm version](https://badge.fury.io/js/task-master-ai.svg)](https://badge.fury.io/js/task-master-ai) [![Discord](https://dcbadge.limes.pink/api/server/https://discord.gg/taskmasterai?style=flat)](https://discord.gg/taskmasterai) [![License: MIT with Commons Clause](https://img.shields.io/badge/license-MIT%20with%20Commons%20Clause-blue.svg)](LICENSE)
<div align='center'>
<a href="https://trendshift.io/repositories/13971" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13971" alt="eyaltoledano%2Fclaude-task-master | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>
[![NPM Downloads](https://img.shields.io/npm/d18m/task-master-ai?style=flat)](https://www.npmjs.com/package/task-master-ai) [![NPM Downloads](https://img.shields.io/npm/dm/task-master-ai?style=flat)](https://www.npmjs.com/package/task-master-ai) [![NPM Downloads](https://img.shields.io/npm/dw/task-master-ai?style=flat)](https://www.npmjs.com/package/task-master-ai)
<p align="center">
<a href="https://task-master.dev"><img src="./images/logo.png?raw=true" alt="Taskmaster logo"></a>
</p>
## By [@eyaltoledano](https://x.com/eyaltoledano), [@RalphEcom](https://x.com/RalphEcom) & [@jasonzhou1993](https://x.com/jasonzhou1993)
<p align="center">
<b>Taskmaster</b>: A task management system for AI-driven development, designed to work seamlessly with any AI chat.
</p>
<p align="center">
<a href="https://discord.gg/taskmasterai" target="_blank"><img src="https://dcbadge.limes.pink/api/server/https://discord.gg/taskmasterai?style=flat" alt="Discord"></a> |
<a href="https://docs.task-master.dev" target="_blank">Docs</a>
</p>
<p align="center">
<a href="https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml"><img src="https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
<a href="https://github.com/eyaltoledano/claude-task-master/stargazers"><img src="https://img.shields.io/github/stars/eyaltoledano/claude-task-master?style=social" alt="GitHub stars"></a>
<a href="https://badge.fury.io/js/task-master-ai"><img src="https://badge.fury.io/js/task-master-ai.svg" alt="npm version"></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT%20with%20Commons%20Clause-blue.svg" alt="License"></a>
</p>
<p align="center">
<a href="https://www.npmjs.com/package/task-master-ai"><img src="https://img.shields.io/npm/d18m/task-master-ai?style=flat" alt="NPM Downloads"></a>
<a href="https://www.npmjs.com/package/task-master-ai"><img src="https://img.shields.io/npm/dm/task-master-ai?style=flat" alt="NPM Downloads"></a>
<a href="https://www.npmjs.com/package/task-master-ai"><img src="https://img.shields.io/npm/dw/task-master-ai?style=flat" alt="NPM Downloads"></a>
</p>
## By [@eyaltoledano](https://x.com/eyaltoledano) & [@RalphEcom](https://x.com/RalphEcom)
[![Twitter Follow](https://img.shields.io/twitter/follow/eyaltoledano)](https://x.com/eyaltoledano)
[![Twitter Follow](https://img.shields.io/twitter/follow/RalphEcom)](https://x.com/RalphEcom)
[![Twitter Follow](https://img.shields.io/twitter/follow/jasonzhou1993)](https://x.com/jasonzhou1993)
A task management system for AI-driven development with Claude, designed to work seamlessly with Cursor AI.
@@ -31,10 +56,23 @@ The following documentation is also available in the `docs` directory:
#### Quick Install for Cursor 1.0+ (One-Click)
[![Add task-master-ai MCP server to Cursor](https://cursor.com/deeplink/mcp-install-dark.svg)](https://cursor.com/install-mcp?name=task-master-ai&config=eyJjb21tYW5kIjoibnB4IC15IC0tcGFja2FnZT10YXNrLW1hc3Rlci1haSB0YXNrLW1hc3Rlci1haSIsImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIkdST1FfQVBJX0tFWSI6IllPVVJfR1JPUV9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUlfQVBJX0tFWSI6IllPVVJfQVpVUkVfS0VZX0hFUkUiLCJPTExBTUFfQVBJX0tFWSI6IllPVVJfT0xMQU1BX0FQSV9LRVlfSEVSRSJ9fQ%3D%3D)
[![Add task-master-ai MCP server to Cursor](https://cursor.com/deeplink/mcp-install-dark.svg)](https://cursor.com/en/install-mcp?name=task-master-ai&config=eyJjb21tYW5kIjoibnB4IC15IC0tcGFja2FnZT10YXNrLW1hc3Rlci1haSB0YXNrLW1hc3Rlci1haSIsImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIkdST1FfQVBJX0tFWSI6IllPVVJfR1JPUV9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUlfQVBJX0tFWSI6IllPVVJfQVpVUkVfS0VZX0hFUkUiLCJPTExBTUFfQVBJX0tFWSI6IllPVVJfT0xMQU1BX0FQSV9LRVlfSEVSRSJ9fQ%3D%3D)
> **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.
#### Claude Code Quick Install
For Claude Code users:
```bash
claude mcp add taskmaster-ai -- npx -y task-master-ai
```
Don't forget to add your API keys to the configuration (a placeholder `.env` sketch follows this list):
- in the root `.env` of your project
- in the `env` section of your MCP config for `taskmaster-ai`
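A minimal `.env` sketch for the project root might look like this; the key names mirror those used in the MCP `env` examples later in this README, and the values are placeholders you must replace with your actual API keys:
```bash
# .env (project root). Placeholder values only; replace with your real keys.
ANTHROPIC_API_KEY=YOUR_ANTHROPIC_API_KEY_HERE
PERPLEXITY_API_KEY=YOUR_PERPLEXITY_API_KEY_HERE
# Add keys for any other providers you plan to use.
```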
## Requirements
Taskmaster uses AI across several commands, and those commands require a separate API key. You can use a variety of models from different AI providers, provided you add your API keys. For example, if you want to use Claude 3.7, you'll need an Anthropic API key.
@@ -50,8 +88,9 @@ At least one (1) of the following is required:
- xAI API Key (for research or main model)
- OpenRouter API Key (for research or main model)
- Claude Code (no API key required - requires Claude Code CLI)
- Codex CLI (OAuth via ChatGPT subscription - requires Codex CLI)
Using the research model is optional but highly recommended. You will need at least ONE API key (unless using Claude Code). Adding all API keys enables you to seamlessly switch between model providers at will.
Using the research model is optional but highly recommended. You will need at least ONE API key (unless using Claude Code or Codex CLI with OAuth). Adding all API keys enables you to seamlessly switch between model providers at will.
## Quick Start
@@ -67,17 +106,18 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
| | Project | `<project_folder>/.cursor/mcp.json` | `<project_folder>\.cursor\mcp.json` | `mcpServers` |
| **Windsurf** | Global | `~/.codeium/windsurf/mcp_config.json` | `%USERPROFILE%\.codeium\windsurf\mcp_config.json` | `mcpServers` |
| **VS Code** | Project | `<project_folder>/.vscode/mcp.json` | `<project_folder>\.vscode\mcp.json` | `servers` |
| **Q CLI** | Global | `~/.aws/amazonq/mcp.json` | | `mcpServers` |
##### Manual Configuration
###### Cursor & Windsurf (`mcpServers`)
###### Cursor & Windsurf & Q Developer CLI (`mcpServers`)
```json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"args": ["-y", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
@@ -97,7 +137,7 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
> 🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use.
> **Note**: If you see `0 tools enabled` in the MCP settings, try removing the `--package=task-master-ai` flag from `args`.
> **Note**: If you see `0 tools enabled` in the MCP settings, restart your editor and check that your API keys are correctly configured.
###### VSCode (`servers` + `type`)
@@ -106,7 +146,7 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
"servers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"args": ["-y", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
@@ -230,6 +270,11 @@ task-master show 1,3,5
# Research fresh information with project context
task-master research "What are the latest best practices for JWT authentication?"
# Move tasks between tags (cross-tag movement)
task-master move --from=5 --from-tag=backlog --to-tag=in-progress
task-master move --from=5,6,7 --from-tag=backlog --to-tag=done --with-dependencies
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --ignore-dependencies
# Generate task files
task-master generate
@@ -265,6 +310,12 @@ cd claude-task-master
node scripts/init.js
```
## Join Our Team
<a href="https://tryhamster.com" target="_blank">
<img src="./images/hamster-hiring.png" alt="Join Hamster's founding team" />
</a>
## Contributors
<a href="https://github.com/eyaltoledano/claude-task-master/graphs/contributors">

41
apps/cli/CHANGELOG.md Normal file
View File

@@ -0,0 +1,41 @@
# @tm/cli
## null
### Patch Changes
- Updated dependencies []:
- @tm/core@null
## null
### Patch Changes
- Updated dependencies []:
- @tm/core@null
## null
### Patch Changes
- Updated dependencies []:
- @tm/core@null
## 0.27.0
### Patch Changes
- Updated dependencies []:
- @tm/core@0.26.1
## 0.27.0-rc.0
### Minor Changes
- [#1213](https://github.com/eyaltoledano/claude-task-master/pull/1213) [`137ef36`](https://github.com/eyaltoledano/claude-task-master/commit/137ef362789a9cdfdb1925e35e0438c1fa6c69ee) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - testing this stuff out to see how the release candidate works with monorepo
## 1.1.0-rc.0
### Minor Changes
- [#1213](https://github.com/eyaltoledano/claude-task-master/pull/1213) [`cd90b4d`](https://github.com/eyaltoledano/claude-task-master/commit/cd90b4d65fc2f04bdad9fb73aba320b58a124240) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - testing this stuff out to see how the release candidate works with monorepo

53
apps/cli/package.json Normal file
View File

@@ -0,0 +1,53 @@
{
"name": "@tm/cli",
"description": "Task Master CLI - Command line interface for task management",
"type": "module",
"private": true,
"main": "./dist/index.js",
"types": "./src/index.ts",
"exports": {
".": "./src/index.ts"
},
"files": ["dist", "README.md"],
"scripts": {
"typecheck": "tsc --noEmit",
"lint": "biome check src",
"format": "biome format --write src",
"test": "vitest run",
"test:watch": "vitest",
"test:coverage": "vitest run --coverage",
"test:unit": "vitest run -t unit",
"test:integration": "vitest run -t integration",
"test:e2e": "vitest run --dir tests/e2e",
"test:ci": "vitest run --coverage --reporter=dot"
},
"dependencies": {
"@tm/core": "*",
"boxen": "^8.0.1",
"chalk": "5.6.2",
"cli-table3": "^0.6.5",
"commander": "^12.1.0",
"inquirer": "^12.5.0",
"ora": "^8.2.0"
},
"devDependencies": {
"@biomejs/biome": "^1.9.4",
"@types/inquirer": "^9.0.3",
"@types/node": "^22.10.5",
"tsx": "^4.20.4",
"typescript": "^5.9.2",
"vitest": "^2.1.8"
},
"engines": {
"node": ">=18.0.0"
},
"keywords": ["task-master", "cli", "task-management", "productivity"],
"author": "",
"license": "MIT",
"typesVersions": {
"*": {
"*": ["src/*"]
}
},
"version": ""
}

View File

@@ -0,0 +1,255 @@
/**
* @fileoverview Centralized Command Registry
* Provides a single location for registering all CLI commands
*/
import { Command } from 'commander';
// Import all commands
import { ListTasksCommand } from './commands/list.command.js';
import { ShowCommand } from './commands/show.command.js';
import { AuthCommand } from './commands/auth.command.js';
import { ContextCommand } from './commands/context.command.js';
import { StartCommand } from './commands/start.command.js';
import { SetStatusCommand } from './commands/set-status.command.js';
import { ExportCommand } from './commands/export.command.js';
/**
* Command metadata for registration
*/
export interface CommandMetadata {
name: string;
description: string;
commandClass: typeof Command;
category?: 'task' | 'auth' | 'utility' | 'development';
}
/**
* Registry of all available commands
*/
export class CommandRegistry {
/**
* All available commands with their metadata
*/
private static commands: CommandMetadata[] = [
// Task Management Commands
{
name: 'list',
description: 'List all tasks with filtering and status overview',
commandClass: ListTasksCommand as any,
category: 'task'
},
{
name: 'show',
description: 'Display detailed information about a specific task',
commandClass: ShowCommand as any,
category: 'task'
},
{
name: 'start',
description: 'Start working on a task with claude-code',
commandClass: StartCommand as any,
category: 'task'
},
{
name: 'set-status',
description: 'Update the status of one or more tasks',
commandClass: SetStatusCommand as any,
category: 'task'
},
{
name: 'export',
description: 'Export tasks to external systems',
commandClass: ExportCommand as any,
category: 'task'
},
// Authentication & Context Commands
{
name: 'auth',
description: 'Manage authentication with tryhamster.com',
commandClass: AuthCommand as any,
category: 'auth'
},
{
name: 'context',
description: 'Manage workspace context (organization/brief)',
commandClass: ContextCommand as any,
category: 'auth'
}
];
/**
* Register all commands on a program instance
* @param program - Commander program to register commands on
*/
static registerAll(program: Command): void {
for (const cmd of this.commands) {
this.registerCommand(program, cmd);
}
}
/**
* Register specific commands by category
* @param program - Commander program to register commands on
* @param category - Category of commands to register
*/
static registerByCategory(
program: Command,
category: 'task' | 'auth' | 'utility' | 'development'
): void {
const categoryCommands = this.commands.filter(
(cmd) => cmd.category === category
);
for (const cmd of categoryCommands) {
this.registerCommand(program, cmd);
}
}
/**
* Register a single command by name
* @param program - Commander program to register the command on
* @param name - Name of the command to register
*/
static registerByName(program: Command, name: string): void {
const cmd = this.commands.find((c) => c.name === name);
if (cmd) {
this.registerCommand(program, cmd);
} else {
throw new Error(`Command '${name}' not found in registry`);
}
}
/**
* Register a single command
* @param program - Commander program to register the command on
* @param metadata - Command metadata
*/
private static registerCommand(
program: Command,
metadata: CommandMetadata
): void {
const CommandClass = metadata.commandClass as any;
// Use the static registration method that all commands have
if (CommandClass.registerOn) {
CommandClass.registerOn(program);
} else if (CommandClass.register) {
CommandClass.register(program);
} else {
// Fallback to creating instance and adding
const instance = new CommandClass();
program.addCommand(instance);
}
}
/**
* Get all registered command names
*/
static getCommandNames(): string[] {
return this.commands.map((cmd) => cmd.name);
}
/**
* Get commands by category
*/
static getCommandsByCategory(
category: 'task' | 'auth' | 'utility' | 'development'
): CommandMetadata[] {
return this.commands.filter((cmd) => cmd.category === category);
}
/**
* Add a new command to the registry
* @param metadata - Command metadata to add
*/
static addCommand(metadata: CommandMetadata): void {
// Check if command already exists
if (this.commands.some((cmd) => cmd.name === metadata.name)) {
throw new Error(`Command '${metadata.name}' already exists in registry`);
}
this.commands.push(metadata);
}
/**
* Remove a command from the registry
* @param name - Name of the command to remove
*/
static removeCommand(name: string): boolean {
const index = this.commands.findIndex((cmd) => cmd.name === name);
if (index >= 0) {
this.commands.splice(index, 1);
return true;
}
return false;
}
/**
* Get command metadata by name
* @param name - Name of the command
*/
static getCommand(name: string): CommandMetadata | undefined {
return this.commands.find((cmd) => cmd.name === name);
}
/**
* Check if a command exists
* @param name - Name of the command
*/
static hasCommand(name: string): boolean {
return this.commands.some((cmd) => cmd.name === name);
}
/**
* Get a formatted list of all commands for display
*/
static getFormattedCommandList(): string {
const categories = {
task: 'Task Management',
auth: 'Authentication & Context',
utility: 'Utilities',
development: 'Development'
};
let output = '';
for (const [category, title] of Object.entries(categories)) {
const cmds = this.getCommandsByCategory(
category as keyof typeof categories
);
if (cmds.length > 0) {
output += `\n${title}:\n`;
for (const cmd of cmds) {
output += ` ${cmd.name.padEnd(20)} ${cmd.description}\n`;
}
}
}
return output;
}
}
/**
* Convenience function to register all CLI commands
* @param program - Commander program instance
*/
export function registerAllCommands(program: Command): void {
CommandRegistry.registerAll(program);
}
/**
* Convenience function to register commands by category
* @param program - Commander program instance
* @param category - Category to register
*/
export function registerCommandsByCategory(
program: Command,
category: 'task' | 'auth' | 'utility' | 'development'
): void {
CommandRegistry.registerByCategory(program, category);
}
// Export the registry for direct access if needed
export default CommandRegistry;

View File

@@ -0,0 +1,513 @@
/**
* @fileoverview Auth command using Commander's native class pattern
* Extends Commander.Command for better integration with the framework
*/
import { Command } from 'commander';
import chalk from 'chalk';
import inquirer from 'inquirer';
import ora, { type Ora } from 'ora';
import open from 'open';
import {
AuthManager,
AuthenticationError,
type AuthCredentials
} from '@tm/core/auth';
import * as ui from '../utils/ui.js';
/**
* Result type from auth command
*/
export interface AuthResult {
success: boolean;
action: 'login' | 'logout' | 'status' | 'refresh';
credentials?: AuthCredentials;
message?: string;
}
/**
* AuthCommand extending Commander's Command class
* This is a thin presentation layer over @tm/core's AuthManager
*/
export class AuthCommand extends Command {
private authManager: AuthManager;
private lastResult?: AuthResult;
constructor(name?: string) {
super(name || 'auth');
// Initialize auth manager
this.authManager = AuthManager.getInstance();
// Configure the command with subcommands
this.description('Manage authentication with tryhamster.com');
// Add subcommands
this.addLoginCommand();
this.addLogoutCommand();
this.addStatusCommand();
this.addRefreshCommand();
// Default action shows help
this.action(() => {
this.help();
});
}
/**
* Add login subcommand
*/
private addLoginCommand(): void {
this.command('login')
.description('Authenticate with tryhamster.com')
.action(async () => {
await this.executeLogin();
});
}
/**
* Add logout subcommand
*/
private addLogoutCommand(): void {
this.command('logout')
.description('Logout and clear credentials')
.action(async () => {
await this.executeLogout();
});
}
/**
* Add status subcommand
*/
private addStatusCommand(): void {
this.command('status')
.description('Display authentication status')
.action(async () => {
await this.executeStatus();
});
}
/**
* Add refresh subcommand
*/
private addRefreshCommand(): void {
this.command('refresh')
.description('Refresh authentication token')
.action(async () => {
await this.executeRefresh();
});
}
/**
* Execute login command
*/
private async executeLogin(): Promise<void> {
try {
const result = await this.performInteractiveAuth();
this.setLastResult(result);
if (!result.success) {
process.exit(1);
}
// Exit cleanly after successful authentication
// Small delay to ensure all output is flushed
setTimeout(() => {
process.exit(0);
}, 100);
} catch (error: any) {
this.handleError(error);
process.exit(1);
}
}
/**
* Execute logout command
*/
private async executeLogout(): Promise<void> {
try {
const result = await this.performLogout();
this.setLastResult(result);
if (!result.success) {
process.exit(1);
}
} catch (error: any) {
this.handleError(error);
process.exit(1);
}
}
/**
* Execute status command
*/
private async executeStatus(): Promise<void> {
try {
const result = await this.displayStatus();
this.setLastResult(result);
} catch (error: any) {
this.handleError(error);
process.exit(1);
}
}
/**
* Execute refresh command
*/
private async executeRefresh(): Promise<void> {
try {
const result = await this.refreshToken();
this.setLastResult(result);
if (!result.success) {
process.exit(1);
}
} catch (error: any) {
this.handleError(error);
process.exit(1);
}
}
/**
* Display authentication status
*/
private async displayStatus(): Promise<AuthResult> {
const credentials = await this.authManager.getCredentials();
console.log(chalk.cyan('\n🔐 Authentication Status\n'));
if (credentials) {
console.log(chalk.green('✓ Authenticated'));
console.log(chalk.gray(` Email: ${credentials.email || 'N/A'}`));
console.log(chalk.gray(` User ID: ${credentials.userId}`));
console.log(
chalk.gray(` Token Type: ${credentials.tokenType || 'standard'}`)
);
if (credentials.expiresAt) {
const expiresAt = new Date(credentials.expiresAt);
const now = new Date();
const timeRemaining = expiresAt.getTime() - now.getTime();
const hoursRemaining = Math.floor(timeRemaining / (1000 * 60 * 60));
const minutesRemaining = Math.floor(timeRemaining / (1000 * 60));
if (timeRemaining > 0) {
// Token is still valid
if (hoursRemaining > 0) {
console.log(
chalk.gray(
` Expires at: ${expiresAt.toLocaleString()} (${hoursRemaining} hours remaining)`
)
);
} else {
console.log(
chalk.gray(
` Expires at: ${expiresAt.toLocaleString()} (${minutesRemaining} minutes remaining)`
)
);
}
} else {
// Token has expired
console.log(
chalk.yellow(` Expired at: ${expiresAt.toLocaleString()}`)
);
}
} else {
console.log(chalk.gray(' Expires: Never (API key)'));
}
console.log(
chalk.gray(` Saved: ${new Date(credentials.savedAt).toLocaleString()}`)
);
return {
success: true,
action: 'status',
credentials,
message: 'Authenticated'
};
} else {
console.log(chalk.yellow('✗ Not authenticated'));
console.log(
chalk.gray('\n Run "task-master auth login" to authenticate')
);
return {
success: false,
action: 'status',
message: 'Not authenticated'
};
}
}
/**
* Perform logout
*/
private async performLogout(): Promise<AuthResult> {
try {
await this.authManager.logout();
ui.displaySuccess('Successfully logged out');
return {
success: true,
action: 'logout',
message: 'Successfully logged out'
};
} catch (error) {
const message = `Failed to logout: ${(error as Error).message}`;
ui.displayError(message);
return {
success: false,
action: 'logout',
message
};
}
}
/**
* Refresh authentication token
*/
private async refreshToken(): Promise<AuthResult> {
const spinner = ora('Refreshing authentication token...').start();
try {
const credentials = await this.authManager.refreshToken();
spinner.succeed('Token refreshed successfully');
console.log(
chalk.gray(
` New expiration: ${credentials.expiresAt ? new Date(credentials.expiresAt).toLocaleString() : 'Never'}`
)
);
return {
success: true,
action: 'refresh',
credentials,
message: 'Token refreshed successfully'
};
} catch (error) {
spinner.fail('Failed to refresh token');
if ((error as AuthenticationError).code === 'NO_REFRESH_TOKEN') {
ui.displayWarning(
'No refresh token available. Please re-authenticate.'
);
} else {
ui.displayError(`Refresh failed: ${(error as Error).message}`);
}
return {
success: false,
action: 'refresh',
message: `Failed to refresh: ${(error as Error).message}`
};
}
}
/**
* Perform interactive authentication
*/
private async performInteractiveAuth(): Promise<AuthResult> {
ui.displayBanner('Task Master Authentication');
// Check if already authenticated
if (this.authManager.isAuthenticated()) {
const { continueAuth } = await inquirer.prompt([
{
type: 'confirm',
name: 'continueAuth',
message:
'You are already authenticated. Do you want to re-authenticate?',
default: false
}
]);
if (!continueAuth) {
const credentials = await this.authManager.getCredentials();
ui.displaySuccess('Using existing authentication');
if (credentials) {
console.log(chalk.gray(` Email: ${credentials.email || 'N/A'}`));
console.log(chalk.gray(` User ID: ${credentials.userId}`));
}
return {
success: true,
action: 'login',
credentials: credentials || undefined,
message: 'Using existing authentication'
};
}
}
try {
// Direct browser authentication - no menu needed
const credentials = await this.authenticateWithBrowser();
ui.displaySuccess('Authentication successful!');
console.log(
chalk.gray(` Logged in as: ${credentials.email || credentials.userId}`)
);
return {
success: true,
action: 'login',
credentials,
message: 'Authentication successful'
};
} catch (error) {
this.handleAuthError(error as AuthenticationError);
return {
success: false,
action: 'login',
message: `Authentication failed: ${(error as Error).message}`
};
}
}
/**
* Authenticate with browser using OAuth 2.0 with PKCE
*/
private async authenticateWithBrowser(): Promise<AuthCredentials> {
let authSpinner: Ora | null = null;
try {
// Use AuthManager's new unified OAuth flow method with callbacks
const credentials = await this.authManager.authenticateWithOAuth({
// Callback to handle browser opening
openBrowser: async (authUrl) => {
await open(authUrl);
},
timeout: 5 * 60 * 1000, // 5 minutes
// Callback when auth URL is ready
onAuthUrl: (authUrl) => {
// Display authentication instructions
console.log(chalk.blue.bold('\n🔐 Browser Authentication\n'));
console.log(chalk.white(' Opening your browser to authenticate...'));
console.log(chalk.gray(" If the browser doesn't open, visit:"));
console.log(chalk.cyan.underline(` ${authUrl}\n`));
},
// Callback when waiting for authentication
onWaitingForAuth: () => {
authSpinner = ora({
text: 'Waiting for authentication...',
spinner: 'dots'
}).start();
},
// Callback on success
onSuccess: () => {
if (authSpinner) {
authSpinner.succeed('Authentication successful!');
}
},
// Callback on error
onError: () => {
if (authSpinner) {
authSpinner.fail('Authentication failed');
}
}
});
return credentials;
} catch (error) {
throw error;
}
}
/**
* Handle authentication errors
*/
private handleAuthError(error: AuthenticationError): void {
console.error(chalk.red(`\n✗ ${error.message}`));
switch (error.code) {
case 'NETWORK_ERROR':
ui.displayWarning(
'Please check your internet connection and try again.'
);
break;
case 'INVALID_CREDENTIALS':
ui.displayWarning('Please check your credentials and try again.');
break;
case 'AUTH_EXPIRED':
ui.displayWarning(
'Your session has expired. Please authenticate again.'
);
break;
default:
if (process.env.DEBUG) {
console.error(chalk.gray(error.stack || ''));
}
}
}
/**
* Handle general errors
*/
private handleError(error: any): void {
if (error instanceof AuthenticationError) {
this.handleAuthError(error);
} else {
const msg = error?.getSanitizedDetails?.() ?? {
message: error?.message ?? String(error)
};
console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
if (error.stack && process.env.DEBUG) {
console.error(chalk.gray(error.stack));
}
}
}
/**
* Set the last result for programmatic access
*/
private setLastResult(result: AuthResult): void {
this.lastResult = result;
}
/**
* Get the last result (for programmatic usage)
*/
getLastResult(): AuthResult | undefined {
return this.lastResult;
}
/**
* Get current authentication status (for programmatic usage)
*/
isAuthenticated(): boolean {
return this.authManager.isAuthenticated();
}
/**
* Get current credentials (for programmatic usage)
*/
getCredentials(): Promise<AuthCredentials | null> {
return this.authManager.getCredentials();
}
/**
* Clean up resources
*/
async cleanup(): Promise<void> {
// No resources to clean up for auth command
// But keeping method for consistency with other commands
}
/**
* Register this command on an existing program
*/
static register(program: Command, name?: string): AuthCommand {
const authCommand = new AuthCommand(name);
program.addCommand(authCommand);
return authCommand;
}
}

View File

@@ -0,0 +1,704 @@
/**
* @fileoverview Context command for managing org/brief selection
* Provides a clean interface for workspace context management
*/
import { Command } from 'commander';
import chalk from 'chalk';
import inquirer from 'inquirer';
import ora, { Ora } from 'ora';
import {
AuthManager,
AuthenticationError,
type UserContext
} from '@tm/core/auth';
import * as ui from '../utils/ui.js';
/**
* Result type from context command
*/
export interface ContextResult {
success: boolean;
action: 'show' | 'select-org' | 'select-brief' | 'clear' | 'set';
context?: UserContext;
message?: string;
}
/**
* ContextCommand extending Commander's Command class
* Manages user's workspace context (org/brief selection)
*/
export class ContextCommand extends Command {
private authManager: AuthManager;
private lastResult?: ContextResult;
constructor(name?: string) {
super(name || 'context');
// Initialize auth manager
this.authManager = AuthManager.getInstance();
// Configure the command
this.description(
'Manage workspace context (organization and brief selection)'
);
// Add subcommands
this.addOrgCommand();
this.addBriefCommand();
this.addClearCommand();
this.addSetCommand();
// Accept optional positional argument for brief ID or Hamster URL
this.argument('[briefOrUrl]', 'Brief ID or Hamster brief URL');
// Default action: if an argument is provided, resolve and set context; else show
this.action(async (briefOrUrl?: string) => {
if (briefOrUrl && briefOrUrl.trim().length > 0) {
await this.executeSetFromBriefInput(briefOrUrl.trim());
return;
}
await this.executeShow();
});
}
/**
* Add org selection subcommand
*/
private addOrgCommand(): void {
this.command('org')
.description('Select an organization')
.action(async () => {
await this.executeSelectOrg();
});
}
/**
* Add brief selection subcommand
*/
private addBriefCommand(): void {
this.command('brief')
.description('Select a brief within the current organization')
.action(async () => {
await this.executeSelectBrief();
});
}
/**
* Add clear subcommand
*/
private addClearCommand(): void {
this.command('clear')
.description('Clear all context selections')
.action(async () => {
await this.executeClear();
});
}
/**
* Add set subcommand for direct context setting
*/
private addSetCommand(): void {
this.command('set')
.description('Set context directly')
.option('--org <id>', 'Organization ID')
.option('--org-name <name>', 'Organization name')
.option('--brief <id>', 'Brief ID')
.option('--brief-name <name>', 'Brief name')
.action(async (options) => {
await this.executeSet(options);
});
}
/**
* Execute show current context
*/
private async executeShow(): Promise<void> {
try {
const result = await this.displayContext();
this.setLastResult(result);
} catch (error: any) {
this.handleError(error);
process.exit(1);
}
}
/**
* Display current context
*/
private async displayContext(): Promise<ContextResult> {
// Check authentication first
if (!this.authManager.isAuthenticated()) {
console.log(chalk.yellow('✗ Not authenticated'));
console.log(chalk.gray('\n Run "tm auth login" to authenticate first'));
return {
success: false,
action: 'show',
message: 'Not authenticated'
};
}
const context = await this.authManager.getContext();
console.log(chalk.cyan('\n🌍 Workspace Context\n'));
if (context && (context.orgId || context.briefId)) {
if (context.orgName || context.orgId) {
console.log(chalk.green('✓ Organization'));
if (context.orgName) {
console.log(chalk.white(` ${context.orgName}`));
}
if (context.orgId) {
console.log(chalk.gray(` ID: ${context.orgId}`));
}
}
if (context.briefName || context.briefId) {
console.log(chalk.green('\n✓ Brief'));
if (context.briefName) {
console.log(chalk.white(` ${context.briefName}`));
}
if (context.briefId) {
console.log(chalk.gray(` ID: ${context.briefId}`));
}
}
if (context.updatedAt) {
console.log(
chalk.gray(
`\n Last updated: ${new Date(context.updatedAt).toLocaleString()}`
)
);
}
return {
success: true,
action: 'show',
context,
message: 'Context loaded'
};
} else {
console.log(chalk.yellow('✗ No context selected'));
console.log(
chalk.gray('\n Run "tm context org" to select an organization')
);
console.log(chalk.gray(' Run "tm context brief" to select a brief'));
return {
success: true,
action: 'show',
message: 'No context selected'
};
}
}
/**
* Execute org selection
*/
private async executeSelectOrg(): Promise<void> {
try {
// Check authentication
if (!this.authManager.isAuthenticated()) {
ui.displayError('Not authenticated. Run "tm auth login" first.');
process.exit(1);
}
const result = await this.selectOrganization();
this.setLastResult(result);
if (!result.success) {
process.exit(1);
}
} catch (error: any) {
this.handleError(error);
process.exit(1);
}
}
/**
* Select an organization interactively
*/
private async selectOrganization(): Promise<ContextResult> {
const spinner = ora('Fetching organizations...').start();
try {
// Fetch organizations from API
const organizations = await this.authManager.getOrganizations();
spinner.stop();
if (organizations.length === 0) {
ui.displayWarning('No organizations available');
return {
success: false,
action: 'select-org',
message: 'No organizations available'
};
}
// Prompt for selection
const { selectedOrg } = await inquirer.prompt([
{
type: 'list',
name: 'selectedOrg',
message: 'Select an organization:',
choices: organizations.map((org) => ({
name: org.name,
value: org
}))
}
]);
// Update context
await this.authManager.updateContext({
orgId: selectedOrg.id,
orgName: selectedOrg.name,
// Clear brief when changing org
briefId: undefined,
briefName: undefined
});
ui.displaySuccess(`Selected organization: ${selectedOrg.name}`);
return {
success: true,
action: 'select-org',
context: (await this.authManager.getContext()) || undefined,
message: `Selected organization: ${selectedOrg.name}`
};
} catch (error) {
spinner.fail('Failed to fetch organizations');
throw error;
}
}
/**
* Execute brief selection
*/
private async executeSelectBrief(): Promise<void> {
try {
// Check authentication
if (!this.authManager.isAuthenticated()) {
ui.displayError('Not authenticated. Run "tm auth login" first.');
process.exit(1);
}
// Check if org is selected
const context = await this.authManager.getContext();
if (!context?.orgId) {
ui.displayError(
'No organization selected. Run "tm context org" first.'
);
process.exit(1);
}
const result = await this.selectBrief(context.orgId);
this.setLastResult(result);
if (!result.success) {
process.exit(1);
}
} catch (error: any) {
this.handleError(error);
process.exit(1);
}
}
/**
* Select a brief within the current organization
*/
private async selectBrief(orgId: string): Promise<ContextResult> {
const spinner = ora('Fetching briefs...').start();
try {
// Fetch briefs from API
const briefs = await this.authManager.getBriefs(orgId);
spinner.stop();
if (briefs.length === 0) {
ui.displayWarning('No briefs available in this organization');
return {
success: false,
action: 'select-brief',
message: 'No briefs available'
};
}
// Prompt for selection
const { selectedBrief } = await inquirer.prompt([
{
type: 'list',
name: 'selectedBrief',
message: 'Select a brief:',
choices: [
{ name: '(No brief - organization level)', value: null },
...briefs.map((brief) => ({
name: `Brief ${brief.id} (${new Date(brief.createdAt).toLocaleDateString()})`,
value: brief
}))
]
}
]);
if (selectedBrief) {
// Update context with brief
const briefName = `Brief ${selectedBrief.id.slice(0, 8)}`;
await this.authManager.updateContext({
briefId: selectedBrief.id,
briefName: briefName
});
ui.displaySuccess(`Selected brief: ${briefName}`);
return {
success: true,
action: 'select-brief',
context: (await this.authManager.getContext()) || undefined,
message: `Selected brief: ${selectedBrief.name}`
};
} else {
// Clear brief selection
await this.authManager.updateContext({
briefId: undefined,
briefName: undefined
});
ui.displaySuccess('Cleared brief selection (organization level)');
return {
success: true,
action: 'select-brief',
context: (await this.authManager.getContext()) || undefined,
message: 'Cleared brief selection'
};
}
} catch (error) {
spinner.fail('Failed to fetch briefs');
throw error;
}
}
/**
* Execute clear context
*/
private async executeClear(): Promise<void> {
try {
// Check authentication
if (!this.authManager.isAuthenticated()) {
ui.displayError('Not authenticated. Run "tm auth login" first.');
process.exit(1);
}
const result = await this.clearContext();
this.setLastResult(result);
if (!result.success) {
process.exit(1);
}
} catch (error: any) {
this.handleError(error);
process.exit(1);
}
}
/**
* Clear all context selections
*/
private async clearContext(): Promise<ContextResult> {
try {
await this.authManager.clearContext();
ui.displaySuccess('Context cleared');
return {
success: true,
action: 'clear',
message: 'Context cleared'
};
} catch (error) {
ui.displayError(`Failed to clear context: ${(error as Error).message}`);
return {
success: false,
action: 'clear',
message: `Failed to clear context: ${(error as Error).message}`
};
}
}
/**
* Execute set context with options
*/
private async executeSet(options: any): Promise<void> {
try {
// Check authentication
if (!this.authManager.isAuthenticated()) {
ui.displayError('Not authenticated. Run "tm auth login" first.');
process.exit(1);
}
const result = await this.setContext(options);
this.setLastResult(result);
if (!result.success) {
process.exit(1);
}
} catch (error: any) {
this.handleError(error);
process.exit(1);
}
}
/**
* Execute setting context from a brief ID or Hamster URL
*/
private async executeSetFromBriefInput(briefOrUrl: string): Promise<void> {
let spinner: Ora | undefined;
try {
// Check authentication
if (!this.authManager.isAuthenticated()) {
ui.displayError('Not authenticated. Run "tm auth login" first.');
process.exit(1);
}
spinner = ora('Resolving brief...');
spinner.start();
// Extract brief ID
const briefId = this.extractBriefId(briefOrUrl);
if (!briefId) {
spinner.fail('Could not extract a brief ID from the provided input');
ui.displayError(
`Provide a valid brief ID or a Hamster brief URL, e.g. https://${process.env.TM_PUBLIC_BASE_DOMAIN}/home/hamster/briefs/<id>`
);
process.exit(1);
}
// Fetch brief and resolve its organization
const brief = await this.authManager.getBrief(briefId);
if (!brief) {
spinner.fail('Brief not found or you do not have access');
process.exit(1);
}
// Fetch org to get a friendly name (optional)
let orgName: string | undefined;
try {
const org = await this.authManager.getOrganization(brief.accountId);
orgName = org?.name;
} catch {
// Non-fatal if org lookup fails
}
// Update context: set org and brief
const briefName = `Brief ${brief.id.slice(0, 8)}`;
await this.authManager.updateContext({
orgId: brief.accountId,
orgName,
briefId: brief.id,
briefName
});
spinner.succeed('Context set from brief');
console.log(
chalk.gray(
` Organization: ${orgName || brief.accountId}\n Brief: ${briefName}`
)
);
this.setLastResult({
success: true,
action: 'set',
context: (await this.authManager.getContext()) || undefined,
message: 'Context set from brief'
});
} catch (error: any) {
try {
if (spinner?.isSpinning) spinner.stop();
} catch {}
this.handleError(error);
process.exit(1);
}
}
/**
* Extract a brief ID from raw input (ID or Hamster URL)
*/
private extractBriefId(input: string): string | null {
const raw = input?.trim() ?? '';
if (!raw) return null;
const parseUrl = (s: string): URL | null => {
try {
return new URL(s);
} catch {}
try {
return new URL(`https://${s}`);
} catch {}
return null;
};
const fromParts = (path: string): string | null => {
const parts = path.split('/').filter(Boolean);
const briefsIdx = parts.lastIndexOf('briefs');
const candidate =
briefsIdx >= 0 && parts.length > briefsIdx + 1
? parts[briefsIdx + 1]
: parts[parts.length - 1];
return candidate?.trim() || null;
};
// 1) URL (absolute or schemeless)
const url = parseUrl(raw);
if (url) {
const qId = url.searchParams.get('id') || url.searchParams.get('briefId');
const candidate = (qId || fromParts(url.pathname)) ?? null;
if (candidate) {
// Light sanity check; let API be the final validator
if (this.isLikelyId(candidate) || candidate.length >= 8)
return candidate;
}
}
// 2) Looks like a path without scheme
if (raw.includes('/')) {
const candidate = fromParts(raw);
if (candidate && (this.isLikelyId(candidate) || candidate.length >= 8)) {
return candidate;
}
}
// 3) Fallback: raw token
return raw;
}
/**
* Heuristic to check if a string looks like a brief ID (UUID-like)
*/
private isLikelyId(value: string): boolean {
const uuidRegex =
/^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$/;
const ulidRegex = /^[0-9A-HJKMNP-TV-Z]{26}$/i; // ULID
const slugRegex = /^[A-Za-z0-9_-]{16,}$/; // general token
return (
uuidRegex.test(value) || ulidRegex.test(value) || slugRegex.test(value)
);
}
/**
* Set context directly from options
*/
private async setContext(options: any): Promise<ContextResult> {
try {
const context: Partial<UserContext> = {};
if (options.org) {
context.orgId = options.org;
}
if (options.orgName) {
context.orgName = options.orgName;
}
if (options.brief) {
context.briefId = options.brief;
}
if (options.briefName) {
context.briefName = options.briefName;
}
if (Object.keys(context).length === 0) {
ui.displayWarning('No context options provided');
return {
success: false,
action: 'set',
message: 'No context options provided'
};
}
await this.authManager.updateContext(context);
ui.displaySuccess('Context updated');
// Display what was set
if (context.orgName || context.orgId) {
console.log(
chalk.gray(` Organization: ${context.orgName || context.orgId}`)
);
}
if (context.briefName || context.briefId) {
console.log(
chalk.gray(` Brief: ${context.briefName || context.briefId}`)
);
}
return {
success: true,
action: 'set',
context: (await this.authManager.getContext()) || undefined,
message: 'Context updated'
};
} catch (error) {
ui.displayError(`Failed to set context: ${(error as Error).message}`);
return {
success: false,
action: 'set',
message: `Failed to set context: ${(error as Error).message}`
};
}
}
/**
* Handle errors
*/
private handleError(error: any): void {
if (error instanceof AuthenticationError) {
console.error(chalk.red(`\n✗ ${error.message}`));
if (error.code === 'NOT_AUTHENTICATED') {
ui.displayWarning('Please authenticate first: tm auth login');
}
} else {
const msg = error?.message ?? String(error);
console.error(chalk.red(`Error: ${msg}`));
if (error.stack && process.env.DEBUG) {
console.error(chalk.gray(error.stack));
}
}
}
/**
* Set the last result for programmatic access
*/
private setLastResult(result: ContextResult): void {
this.lastResult = result;
}
/**
* Get the last result (for programmatic usage)
*/
getLastResult(): ContextResult | undefined {
return this.lastResult;
}
/**
* Get current context (for programmatic usage)
*/
getContext(): Promise<UserContext | null> {
return this.authManager.getContext();
}
/**
* Clean up resources
*/
async cleanup(): Promise<void> {
// No resources to clean up for context command
}
/**
* Register this command on an existing program
*/
static register(program: Command, name?: string): ContextCommand {
const contextCommand = new ContextCommand(name);
program.addCommand(contextCommand);
return contextCommand;
}
}

View File

@@ -0,0 +1,379 @@
/**
* @fileoverview Export command for exporting tasks to external systems
* Provides functionality to export tasks to Hamster briefs
*/
import { Command } from 'commander';
import chalk from 'chalk';
import inquirer from 'inquirer';
import ora, { Ora } from 'ora';
import {
AuthManager,
AuthenticationError,
type UserContext
} from '@tm/core/auth';
import { TaskMasterCore, type ExportResult } from '@tm/core';
import * as ui from '../utils/ui.js';
/**
* Result type from export command
*/
export interface ExportCommandResult {
success: boolean;
action: 'export' | 'validate' | 'cancelled';
result?: ExportResult;
message?: string;
}
/**
* ExportCommand extending Commander's Command class
* Handles task export to external systems
*/
export class ExportCommand extends Command {
private authManager: AuthManager;
private taskMasterCore?: TaskMasterCore;
private lastResult?: ExportCommandResult;
constructor(name?: string) {
super(name || 'export');
// Initialize auth manager
this.authManager = AuthManager.getInstance();
// Configure the command
this.description('Export tasks to external systems (e.g., Hamster briefs)');
// Add options
this.option('--org <id>', 'Organization ID to export to');
this.option('--brief <id>', 'Brief ID to export tasks to');
this.option('--tag <tag>', 'Export tasks from a specific tag');
this.option(
'--status <status>',
'Filter tasks by status (pending, in-progress, done, etc.)'
);
this.option('--exclude-subtasks', 'Exclude subtasks from export');
this.option('-y, --yes', 'Skip confirmation prompt');
// Accept optional positional argument for brief ID or Hamster URL
this.argument('[briefOrUrl]', 'Brief ID or Hamster brief URL');
// Default action
this.action(async (briefOrUrl?: string, options?: any) => {
await this.executeExport(briefOrUrl, options);
});
}
/**
* Initialize the TaskMasterCore
*/
private async initializeServices(): Promise<void> {
if (this.taskMasterCore) {
return;
}
try {
// Initialize TaskMasterCore
this.taskMasterCore = await TaskMasterCore.create({
projectPath: process.cwd()
});
} catch (error) {
throw new Error(
`Failed to initialize services: ${(error as Error).message}`
);
}
}
/**
* Execute the export command
*/
private async executeExport(
briefOrUrl?: string,
options?: any
): Promise<void> {
let spinner: Ora | undefined;
try {
// Check authentication
if (!this.authManager.isAuthenticated()) {
ui.displayError('Not authenticated. Run "tm auth login" first.');
process.exit(1);
}
// Initialize services
await this.initializeServices();
// Get current context
const context = await this.authManager.getContext();
// Determine org and brief IDs
let orgId = options?.org || context?.orgId;
let briefId = options?.brief || briefOrUrl || context?.briefId;
// If a URL/ID was provided as argument, resolve it
if (briefOrUrl && !options?.brief) {
spinner = ora('Resolving brief...').start();
const resolvedBrief = await this.resolveBriefInput(briefOrUrl);
if (resolvedBrief) {
briefId = resolvedBrief.briefId;
orgId = resolvedBrief.orgId;
spinner.succeed('Brief resolved');
} else {
spinner.fail('Could not resolve brief');
process.exit(1);
}
}
// Validate we have necessary IDs
if (!orgId) {
ui.displayError(
'No organization selected. Run "tm context org" or use --org flag.'
);
process.exit(1);
}
if (!briefId) {
ui.displayError(
'No brief specified. Run "tm context brief", provide a brief ID/URL, or use --brief flag.'
);
process.exit(1);
}
// Confirm export if not auto-confirmed
if (!options?.yes) {
const confirmed = await this.confirmExport(orgId, briefId, context);
if (!confirmed) {
ui.displayWarning('Export cancelled');
this.lastResult = {
success: false,
action: 'cancelled',
message: 'User cancelled export'
};
process.exit(0);
}
}
// Perform export
spinner = ora('Exporting tasks...').start();
const exportResult = await this.taskMasterCore!.exportTasks({
orgId,
briefId,
tag: options?.tag,
status: options?.status,
excludeSubtasks: options?.excludeSubtasks || false
});
if (exportResult.success) {
spinner.succeed(
`Successfully exported ${exportResult.taskCount} task(s) to brief`
);
// Display summary
console.log(chalk.cyan('\n📤 Export Summary\n'));
console.log(chalk.white(` Organization: ${orgId}`));
console.log(chalk.white(` Brief: ${briefId}`));
console.log(chalk.white(` Tasks exported: ${exportResult.taskCount}`));
if (options?.tag) {
console.log(chalk.gray(` Tag: ${options.tag}`));
}
if (options?.status) {
console.log(chalk.gray(` Status filter: ${options.status}`));
}
if (exportResult.message) {
console.log(chalk.gray(`\n ${exportResult.message}`));
}
} else {
spinner.fail('Export failed');
if (exportResult.error) {
console.error(chalk.red(`\n✗ ${exportResult.error.message}`));
}
}
this.lastResult = {
success: exportResult.success,
action: 'export',
result: exportResult
};
} catch (error: any) {
if (spinner?.isSpinning) spinner.fail('Export failed');
this.handleError(error);
process.exit(1);
}
}
/**
* Resolve brief input to get brief and org IDs
*/
private async resolveBriefInput(
briefOrUrl: string
): Promise<{ briefId: string; orgId: string } | null> {
try {
// Extract brief ID from input
const briefId = this.extractBriefId(briefOrUrl);
if (!briefId) {
return null;
}
// Fetch brief to get organization
const brief = await this.authManager.getBrief(briefId);
if (!brief) {
ui.displayError('Brief not found or you do not have access');
return null;
}
return {
briefId: brief.id,
orgId: brief.accountId
};
} catch (error) {
console.error(chalk.red(`Failed to resolve brief: ${error}`));
return null;
}
}
/**
* Extract a brief ID from raw input (ID or URL)
*/
private extractBriefId(input: string): string | null {
const raw = input?.trim() ?? '';
if (!raw) return null;
const parseUrl = (s: string): URL | null => {
try {
return new URL(s);
} catch {}
try {
return new URL(`https://${s}`);
} catch {}
return null;
};
const fromParts = (path: string): string | null => {
const parts = path.split('/').filter(Boolean);
const briefsIdx = parts.lastIndexOf('briefs');
const candidate =
briefsIdx >= 0 && parts.length > briefsIdx + 1
? parts[briefsIdx + 1]
: parts[parts.length - 1];
return candidate?.trim() || null;
};
// Try URL parsing
const url = parseUrl(raw);
if (url) {
const qId = url.searchParams.get('id') || url.searchParams.get('briefId');
const candidate = (qId || fromParts(url.pathname)) ?? null;
if (candidate) {
if (this.isLikelyId(candidate) || candidate.length >= 8) {
return candidate;
}
}
}
// Check if it looks like a path
if (raw.includes('/')) {
const candidate = fromParts(raw);
if (candidate && (this.isLikelyId(candidate) || candidate.length >= 8)) {
return candidate;
}
}
// Return raw if it looks like an ID
return raw;
}
/**
* Check if a string looks like a brief ID
*/
private isLikelyId(value: string): boolean {
const uuidRegex =
/^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$/;
const ulidRegex = /^[0-9A-HJKMNP-TV-Z]{26}$/i;
const slugRegex = /^[A-Za-z0-9_-]{16,}$/;
return (
uuidRegex.test(value) || ulidRegex.test(value) || slugRegex.test(value)
);
}
/**
* Confirm export with the user
*/
private async confirmExport(
orgId: string,
briefId: string,
context: UserContext | null
): Promise<boolean> {
console.log(chalk.cyan('\n📤 Export Tasks\n'));
// Show org name if available
if (context?.orgName) {
console.log(chalk.white(` Organization: ${context.orgName}`));
console.log(chalk.gray(` ID: ${orgId}`));
} else {
console.log(chalk.white(` Organization ID: ${orgId}`));
}
// Show brief info
if (context?.briefName) {
console.log(chalk.white(`\n Brief: ${context.briefName}`));
console.log(chalk.gray(` ID: ${briefId}`));
} else {
console.log(chalk.white(`\n Brief ID: ${briefId}`));
}
const { confirmed } = await inquirer.prompt([
{
type: 'confirm',
name: 'confirmed',
message: 'Do you want to proceed with export?',
default: true
}
]);
return confirmed;
}
/**
* Handle errors
*/
private handleError(error: any): void {
if (error instanceof AuthenticationError) {
console.error(chalk.red(`\n✗ ${error.message}`));
if (error.code === 'NOT_AUTHENTICATED') {
ui.displayWarning('Please authenticate first: tm auth login');
}
} else {
const msg = error?.message ?? String(error);
console.error(chalk.red(`Error: ${msg}`));
if (error.stack && process.env.DEBUG) {
console.error(chalk.gray(error.stack));
}
}
}
/**
* Get the last export result (useful for testing)
*/
public getLastResult(): ExportCommandResult | undefined {
return this.lastResult;
}
/**
* Clean up resources
*/
async cleanup(): Promise<void> {
// No resources to clean up
}
/**
* Register this command on an existing program
*/
static register(program: Command, name?: string): ExportCommand {
const exportCommand = new ExportCommand(name);
program.addCommand(exportCommand);
return exportCommand;
}
}

View File

@@ -0,0 +1,484 @@
/**
* @fileoverview ListTasks command using Commander's native class pattern
* Extends Commander.Command for better integration with the framework
*/
import { Command } from 'commander';
import chalk from 'chalk';
import {
createTaskMasterCore,
type Task,
type TaskStatus,
type TaskMasterCore,
TASK_STATUSES,
OUTPUT_FORMATS,
STATUS_ICONS,
type OutputFormat
} from '@tm/core';
import type { StorageType } from '@tm/core/types';
import * as ui from '../utils/ui.js';
import {
displayHeader,
displayDashboards,
calculateTaskStatistics,
calculateSubtaskStatistics,
calculateDependencyStatistics,
getPriorityBreakdown,
displayRecommendedNextTask,
getTaskDescription,
displaySuggestedNextSteps,
type NextTaskInfo
} from '../ui/index.js';
/**
* Options interface for the list command
*/
export interface ListCommandOptions {
status?: string;
tag?: string;
withSubtasks?: boolean;
format?: OutputFormat;
silent?: boolean;
project?: string;
}
/**
* Result type from list command
*/
export interface ListTasksResult {
tasks: Task[];
total: number;
filtered: number;
tag?: string;
storageType: Exclude<StorageType, 'auto'>;
}
/**
* ListTasksCommand extending Commander's Command class
* This is a thin presentation layer over @tm/core
*/
export class ListTasksCommand extends Command {
private tmCore?: TaskMasterCore;
private lastResult?: ListTasksResult;
constructor(name?: string) {
super(name || 'list');
// Configure the command
this.description('List tasks with optional filtering')
.alias('ls')
.option('-s, --status <status>', 'Filter by status (comma-separated)')
.option('-t, --tag <tag>', 'Filter by tag')
.option('--with-subtasks', 'Include subtasks in the output')
.option(
'-f, --format <format>',
'Output format (text, json, compact)',
'text'
)
.option('--silent', 'Suppress output (useful for programmatic usage)')
.option('-p, --project <path>', 'Project root directory', process.cwd())
.action(async (options: ListCommandOptions) => {
await this.executeCommand(options);
});
}
/**
* Execute the list command
*/
private async executeCommand(options: ListCommandOptions): Promise<void> {
try {
// Validate options
if (!this.validateOptions(options)) {
process.exit(1);
}
// Initialize tm-core
await this.initializeCore(options.project || process.cwd());
// Get tasks from core
const result = await this.getTasks(options);
// Store result for programmatic access
this.setLastResult(result);
// Display results
if (!options.silent) {
this.displayResults(result, options);
}
} catch (error: any) {
const msg = error?.getSanitizedDetails?.() ?? {
message: error?.message ?? String(error)
};
console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
if (error.stack && process.env.DEBUG) {
console.error(chalk.gray(error.stack));
}
process.exit(1);
}
}
/**
* Validate command options
*/
private validateOptions(options: ListCommandOptions): boolean {
// Validate format
if (
options.format &&
!OUTPUT_FORMATS.includes(options.format as OutputFormat)
) {
console.error(chalk.red(`Invalid format: ${options.format}`));
console.error(chalk.gray(`Valid formats: ${OUTPUT_FORMATS.join(', ')}`));
return false;
}
// Validate status
if (options.status) {
const statuses = options.status.split(',').map((s: string) => s.trim());
for (const status of statuses) {
if (status !== 'all' && !TASK_STATUSES.includes(status as TaskStatus)) {
console.error(chalk.red(`Invalid status: ${status}`));
console.error(
chalk.gray(`Valid statuses: ${TASK_STATUSES.join(', ')}`)
);
return false;
}
}
}
return true;
}
/**
* Initialize TaskMasterCore
*/
private async initializeCore(projectRoot: string): Promise<void> {
if (!this.tmCore) {
this.tmCore = await createTaskMasterCore({ projectPath: projectRoot });
}
}
/**
* Get tasks from tm-core
*/
private async getTasks(
options: ListCommandOptions
): Promise<ListTasksResult> {
if (!this.tmCore) {
throw new Error('TaskMasterCore not initialized');
}
// Build filter
const filter =
options.status && options.status !== 'all'
? {
status: options.status
.split(',')
.map((s: string) => s.trim() as TaskStatus)
}
: undefined;
// Call tm-core
const result = await this.tmCore.getTaskList({
tag: options.tag,
filter,
includeSubtasks: options.withSubtasks
});
return result as ListTasksResult;
}
/**
* Display results based on format
*/
private displayResults(
result: ListTasksResult,
options: ListCommandOptions
): void {
const format = (options.format || 'text') as OutputFormat | 'text';
switch (format) {
case 'json':
this.displayJson(result);
break;
case 'compact':
this.displayCompact(result.tasks, options.withSubtasks);
break;
case 'text':
default:
this.displayText(result, options.withSubtasks);
break;
}
}
/**
* Display in JSON format
*/
private displayJson(data: ListTasksResult): void {
console.log(
JSON.stringify(
{
tasks: data.tasks,
metadata: {
total: data.total,
filtered: data.filtered,
tag: data.tag,
storageType: data.storageType
}
},
null,
2
)
);
}
/**
* Display in compact format
*/
private displayCompact(tasks: Task[], withSubtasks?: boolean): void {
tasks.forEach((task) => {
const icon = STATUS_ICONS[task.status];
console.log(`${chalk.cyan(task.id)} ${icon} ${task.title}`);
if (withSubtasks && task.subtasks?.length) {
task.subtasks.forEach((subtask) => {
const subIcon = STATUS_ICONS[subtask.status];
console.log(
` ${chalk.gray(String(subtask.id))} ${subIcon} ${chalk.gray(subtask.title)}`
);
});
}
});
}
/**
* Display in text format with tables
*/
private displayText(data: ListTasksResult, withSubtasks?: boolean): void {
const { tasks, tag } = data;
// Get file path for display
const filePath = this.tmCore ? `.taskmaster/tasks/tasks.json` : undefined;
// Display header without banner (banner already shown by main CLI)
displayHeader({
tag: tag || 'master',
filePath: filePath
});
// No tasks message
if (tasks.length === 0) {
ui.displayWarning('No tasks found matching the criteria.');
return;
}
// Calculate statistics
const taskStats = calculateTaskStatistics(tasks);
const subtaskStats = calculateSubtaskStatistics(tasks);
const depStats = calculateDependencyStatistics(tasks);
const priorityBreakdown = getPriorityBreakdown(tasks);
// Find next task following the same logic as findNextTask
const nextTaskInfo = this.findNextTask(tasks);
// Get the full task object with complexity data already included
const nextTask = nextTaskInfo
? tasks.find((t) => String(t.id) === String(nextTaskInfo.id))
: undefined;
// Display dashboard boxes (nextTask already has complexity from storage enrichment)
displayDashboards(
taskStats,
subtaskStats,
priorityBreakdown,
depStats,
nextTask
);
// Task table
console.log(
ui.createTaskTable(tasks, {
showSubtasks: withSubtasks,
showDependencies: true,
showComplexity: true // Enable complexity column
})
);
// Display recommended next task section immediately after table
if (nextTask) {
const description = getTaskDescription(nextTask);
displayRecommendedNextTask({
id: nextTask.id,
title: nextTask.title,
priority: nextTask.priority,
status: nextTask.status,
dependencies: nextTask.dependencies,
description,
complexity: nextTask.complexity as number | undefined
});
} else {
displayRecommendedNextTask(undefined);
}
// Display suggested next steps at the end
displaySuggestedNextSteps();
}
/**
* Set the last result for programmatic access
*/
private setLastResult(result: ListTasksResult): void {
this.lastResult = result;
}
/**
* Find the next task to work on
* Implements the same logic as scripts/modules/task-manager/find-next-task.js
*/
private findNextTask(tasks: Task[]): NextTaskInfo | undefined {
const priorityValues: Record<string, number> = {
critical: 4,
high: 3,
medium: 2,
low: 1
};
// Build set of completed task IDs (including subtasks)
const completedIds = new Set<string>();
tasks.forEach((t) => {
if (t.status === 'done' || t.status === 'completed') {
completedIds.add(String(t.id));
}
if (t.subtasks) {
t.subtasks.forEach((st) => {
if (st.status === 'done' || st.status === 'completed') {
completedIds.add(`${t.id}.${st.id}`);
}
});
}
});
// First, look for eligible subtasks in in-progress parent tasks
const candidateSubtasks: NextTaskInfo[] = [];
tasks
.filter(
(t) => t.status === 'in-progress' && t.subtasks && t.subtasks.length > 0
)
.forEach((parent) => {
parent.subtasks!.forEach((st) => {
const stStatus = (st.status || 'pending').toLowerCase();
if (stStatus !== 'pending' && stStatus !== 'in-progress') return;
// Check if dependencies are satisfied
const fullDeps =
st.dependencies?.map((d) => {
// Handle both numeric and string IDs
if (typeof d === 'string' && d.includes('.')) {
return d;
}
return `${parent.id}.${d}`;
}) ?? [];
const depsSatisfied =
fullDeps.length === 0 ||
fullDeps.every((depId) => completedIds.has(String(depId)));
if (depsSatisfied) {
candidateSubtasks.push({
id: `${parent.id}.${st.id}`,
title: st.title || `Subtask ${st.id}`,
priority: st.priority || parent.priority || 'medium',
dependencies: fullDeps.map((d) => String(d))
});
}
});
});
if (candidateSubtasks.length > 0) {
// Sort by priority, then by dependencies count, then by ID
candidateSubtasks.sort((a, b) => {
const pa = priorityValues[a.priority || 'medium'] ?? 2;
const pb = priorityValues[b.priority || 'medium'] ?? 2;
if (pb !== pa) return pb - pa;
const depCountA = a.dependencies?.length || 0;
const depCountB = b.dependencies?.length || 0;
if (depCountA !== depCountB) return depCountA - depCountB;
return String(a.id).localeCompare(String(b.id));
});
return candidateSubtasks[0];
}
// Fall back to finding eligible top-level tasks
const eligibleTasks = tasks.filter((task) => {
// Skip non-eligible statuses
const status = (task.status || 'pending').toLowerCase();
if (status !== 'pending' && status !== 'in-progress') return false;
// Check dependencies
const deps = task.dependencies || [];
const depsSatisfied =
deps.length === 0 ||
deps.every((depId) => completedIds.has(String(depId)));
return depsSatisfied;
});
if (eligibleTasks.length === 0) return undefined;
// Sort eligible tasks
eligibleTasks.sort((a, b) => {
// Priority (higher first)
const pa = priorityValues[a.priority || 'medium'] ?? 2;
const pb = priorityValues[b.priority || 'medium'] ?? 2;
if (pb !== pa) return pb - pa;
// Dependencies count (fewer first)
const depCountA = a.dependencies?.length || 0;
const depCountB = b.dependencies?.length || 0;
if (depCountA !== depCountB) return depCountA - depCountB;
// ID (lower first)
return Number(a.id) - Number(b.id);
});
const nextTask = eligibleTasks[0];
return {
id: nextTask.id,
title: nextTask.title,
priority: nextTask.priority,
dependencies: nextTask.dependencies?.map((d) => String(d))
};
}
/**
* Get the last result (for programmatic usage)
*/
getLastResult(): ListTasksResult | undefined {
return this.lastResult;
}
/**
* Clean up resources
*/
async cleanup(): Promise<void> {
if (this.tmCore) {
await this.tmCore.close();
this.tmCore = undefined;
}
}
/**
* Register this command on an existing program
*/
static register(program: Command, name?: string): ListTasksCommand {
const listCommand = new ListTasksCommand(name);
program.addCommand(listCommand);
return listCommand;
}
}
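For reference, a minimal sketch of how this class is consumed programmatically through the register, getLastResult, and cleanup methods above. The @tm/cli import path and the --format/--with-subtasks flag spellings are assumptions (the constructor that defines the options is not part of this hunk).

import { Command } from 'commander';
import { ListTasksCommand } from '@tm/cli';

async function main() {
  const program = new Command();
  // Pass the name explicitly since the default name lives in the constructor
  const list = ListTasksCommand.register(program, 'list');

  // Flag spellings assumed from the ListCommandOptions fields used above
  await program.parseAsync(['node', 'tm', 'list', '--format', 'compact', '--with-subtasks']);

  const result = list.getLastResult();
  console.log(`Listed ${result?.tasks.length ?? 0} of ${result?.total ?? 0} tasks (${result?.storageType})`);
  await list.cleanup();
}

main();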


@@ -0,0 +1,304 @@
/**
* @fileoverview SetStatusCommand using Commander's native class pattern
* Extends Commander.Command for better integration with the framework
*/
import { Command } from 'commander';
import chalk from 'chalk';
import boxen from 'boxen';
import {
createTaskMasterCore,
type TaskMasterCore,
type TaskStatus
} from '@tm/core';
import type { StorageType } from '@tm/core/types';
/**
* Valid task status values for validation
*/
const VALID_TASK_STATUSES: TaskStatus[] = [
'pending',
'in-progress',
'done',
'deferred',
'cancelled',
'blocked',
'review'
];
/**
* Options interface for the set-status command
*/
export interface SetStatusCommandOptions {
id?: string;
status?: TaskStatus;
format?: 'text' | 'json';
silent?: boolean;
project?: string;
}
/**
* Result type from set-status command
*/
export interface SetStatusResult {
success: boolean;
updatedTasks: Array<{
taskId: string;
oldStatus: TaskStatus;
newStatus: TaskStatus;
}>;
storageType: Exclude<StorageType, 'auto'>;
}
/**
* SetStatusCommand extending Commander's Command class
* This is a thin presentation layer over @tm/core
*/
export class SetStatusCommand extends Command {
private tmCore?: TaskMasterCore;
private lastResult?: SetStatusResult;
constructor(name?: string) {
super(name || 'set-status');
// Configure the command
this.description('Update the status of one or more tasks')
.requiredOption(
'-i, --id <id>',
'Task ID(s) to update (comma-separated for multiple, supports subtasks like 5.2)'
)
.requiredOption(
'-s, --status <status>',
`New status (${VALID_TASK_STATUSES.join(', ')})`
)
.option('-f, --format <format>', 'Output format (text, json)', 'text')
.option('--silent', 'Suppress output (useful for programmatic usage)')
.option('-p, --project <path>', 'Project root directory', process.cwd())
.action(async (options: SetStatusCommandOptions) => {
await this.executeCommand(options);
});
}
/**
* Execute the set-status command
*/
private async executeCommand(
options: SetStatusCommandOptions
): Promise<void> {
try {
// Validate required options
if (!options.id) {
console.error(chalk.red('Error: Task ID is required. Use -i or --id'));
process.exit(1);
}
if (!options.status) {
console.error(
chalk.red('Error: Status is required. Use -s or --status')
);
process.exit(1);
}
// Validate status
if (!VALID_TASK_STATUSES.includes(options.status)) {
console.error(
chalk.red(
`Error: Invalid status "${options.status}". Valid options: ${VALID_TASK_STATUSES.join(', ')}`
)
);
process.exit(1);
}
// Initialize TaskMaster core
this.tmCore = await createTaskMasterCore({
projectPath: options.project || process.cwd()
});
// Parse task IDs (handle comma-separated values)
const taskIds = options.id.split(',').map((id) => id.trim());
// Update each task
const updatedTasks: Array<{
taskId: string;
oldStatus: TaskStatus;
newStatus: TaskStatus;
}> = [];
for (const taskId of taskIds) {
try {
const result = await this.tmCore.updateTaskStatus(
taskId,
options.status
);
updatedTasks.push({
taskId: result.taskId,
oldStatus: result.oldStatus,
newStatus: result.newStatus
});
} catch (error) {
const errorMessage =
error instanceof Error ? error.message : String(error);
if (!options.silent) {
console.error(
chalk.red(`Failed to update task ${taskId}: ${errorMessage}`)
);
}
if (options.format === 'json') {
console.log(
JSON.stringify({
success: false,
error: errorMessage,
taskId,
timestamp: new Date().toISOString()
})
);
}
process.exit(1);
}
}
// Store result for potential reuse
this.lastResult = {
success: true,
updatedTasks,
storageType: this.tmCore.getStorageType() as Exclude<
StorageType,
'auto'
>
};
// Display results
this.displayResults(this.lastResult, options);
} catch (error) {
const errorMessage =
error instanceof Error ? error.message : 'Unknown error occurred';
if (!options.silent) {
console.error(chalk.red(`Error: ${errorMessage}`));
}
if (options.format === 'json') {
console.log(JSON.stringify({ success: false, error: errorMessage }));
}
process.exit(1);
} finally {
// Clean up resources
if (this.tmCore) {
await this.tmCore.close();
}
}
}
/**
* Display results based on format
*/
private displayResults(
result: SetStatusResult,
options: SetStatusCommandOptions
): void {
const format = options.format || 'text';
switch (format) {
case 'json':
console.log(JSON.stringify(result, null, 2));
break;
case 'text':
default:
if (!options.silent) {
this.displayTextResults(result);
}
break;
}
}
/**
* Display results in text format
*/
private displayTextResults(result: SetStatusResult): void {
if (result.updatedTasks.length === 1) {
// Single task update
const update = result.updatedTasks[0];
console.log(
boxen(
chalk.white.bold(`✅ Successfully updated task ${update.taskId}`) +
'\n\n' +
`${chalk.blue('From:')} ${this.getStatusDisplay(update.oldStatus)}\n` +
`${chalk.blue('To:')} ${this.getStatusDisplay(update.newStatus)}`,
{
padding: 1,
borderColor: 'green',
borderStyle: 'round',
margin: { top: 1 }
}
)
);
} else {
// Multiple task updates
console.log(
boxen(
chalk.white.bold(
`✅ Successfully updated ${result.updatedTasks.length} tasks`
) +
'\n\n' +
result.updatedTasks
.map(
(update) =>
`${chalk.cyan(update.taskId)}: ${this.getStatusDisplay(update.oldStatus)} → ${this.getStatusDisplay(update.newStatus)}`
)
.join('\n'),
{
padding: 1,
borderColor: 'green',
borderStyle: 'round',
margin: { top: 1 }
}
)
);
}
}
/**
* Get colored status display
*/
private getStatusDisplay(status: TaskStatus): string {
const statusColors: Record<TaskStatus, (text: string) => string> = {
pending: chalk.yellow,
'in-progress': chalk.blue,
done: chalk.green,
deferred: chalk.gray,
cancelled: chalk.red,
blocked: chalk.red,
review: chalk.magenta,
completed: chalk.green
};
const colorFn = statusColors[status] || chalk.white;
return colorFn(status);
}
/**
* Get the last command result (useful for testing or chaining)
*/
getLastResult(): SetStatusResult | undefined {
return this.lastResult;
}
/**
* Register this command on an existing program
*/
static register(program: Command, name?: string): SetStatusCommand {
const setStatusCommand = new SetStatusCommand(name);
program.addCommand(setStatusCommand);
return setStatusCommand;
}
}
/**
* Factory function to create and configure the set-status command
*/
export function createSetStatusCommand(): SetStatusCommand {
return new SetStatusCommand();
}
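A minimal usage sketch combining the registration API with getLastResult(), assuming the package is consumed as @tm/cli; the flag spellings match the option definitions above.

import { Command } from 'commander';
import { SetStatusCommand } from '@tm/cli';

async function main() {
  const program = new Command();
  const setStatus = SetStatusCommand.register(program);

  // Mark a task and a subtask as done; --silent suppresses the boxen output
  await program.parseAsync([
    'node', 'tm', 'set-status', '--id', '5,5.2', '--status', 'done', '--silent'
  ]);

  const result = setStatus.getLastResult();
  for (const update of result?.updatedTasks ?? []) {
    console.log(`${update.taskId}: ${update.oldStatus} -> ${update.newStatus}`);
  }
}

main();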


@@ -0,0 +1,332 @@
/**
* @fileoverview ShowCommand using Commander's native class pattern
* Extends Commander.Command for better integration with the framework
*/
import { Command } from 'commander';
import chalk from 'chalk';
import boxen from 'boxen';
import { createTaskMasterCore, type Task, type TaskMasterCore } from '@tm/core';
import type { StorageType } from '@tm/core/types';
import * as ui from '../utils/ui.js';
import { displayTaskDetails } from '../ui/components/task-detail.component.js';
/**
* Options interface for the show command
*/
export interface ShowCommandOptions {
id?: string;
status?: string;
format?: 'text' | 'json';
silent?: boolean;
project?: string;
}
/**
* Result type from show command
*/
export interface ShowTaskResult {
task: Task | null;
found: boolean;
storageType: Exclude<StorageType, 'auto'>;
}
/**
* Result type for multiple tasks
*/
export interface ShowMultipleTasksResult {
tasks: Task[];
notFound: string[];
storageType: Exclude<StorageType, 'auto'>;
}
/**
* ShowCommand extending Commander's Command class
* This is a thin presentation layer over @tm/core
*/
export class ShowCommand extends Command {
private tmCore?: TaskMasterCore;
private lastResult?: ShowTaskResult | ShowMultipleTasksResult;
constructor(name?: string) {
super(name || 'show');
// Configure the command
this.description('Display detailed information about one or more tasks')
.argument('[id]', 'Task ID(s) to show (comma-separated for multiple)')
.option(
'-i, --id <id>',
'Task ID(s) to show (comma-separated for multiple)'
)
.option('-s, --status <status>', 'Filter subtasks by status')
.option('-f, --format <format>', 'Output format (text, json)', 'text')
.option('--silent', 'Suppress output (useful for programmatic usage)')
.option('-p, --project <path>', 'Project root directory', process.cwd())
.action(
async (taskId: string | undefined, options: ShowCommandOptions) => {
await this.executeCommand(taskId, options);
}
);
}
/**
* Execute the show command
*/
private async executeCommand(
taskId: string | undefined,
options: ShowCommandOptions
): Promise<void> {
try {
// Validate options
if (!this.validateOptions(options)) {
process.exit(1);
}
// Initialize tm-core
await this.initializeCore(options.project || process.cwd());
// Get the task ID from argument or option
const idArg = taskId || options.id;
if (!idArg) {
console.error(chalk.red('Error: Please provide a task ID'));
process.exit(1);
}
// Check if multiple IDs are provided (comma-separated)
const taskIds = idArg
.split(',')
.map((id) => id.trim())
.filter((id) => id.length > 0);
// Get tasks from core
const result =
taskIds.length > 1
? await this.getMultipleTasks(taskIds, options)
: await this.getSingleTask(taskIds[0], options);
// Store result for programmatic access
this.setLastResult(result);
// Display results
if (!options.silent) {
this.displayResults(result, options);
}
} catch (error: any) {
const msg = error?.getSanitizedDetails?.() ?? {
message: error?.message ?? String(error)
};
console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
if (error.stack && process.env.DEBUG) {
console.error(chalk.gray(error.stack));
}
process.exit(1);
}
}
/**
* Validate command options
*/
private validateOptions(options: ShowCommandOptions): boolean {
// Validate format
if (options.format && !['text', 'json'].includes(options.format)) {
console.error(chalk.red(`Invalid format: ${options.format}`));
console.error(chalk.gray(`Valid formats: text, json`));
return false;
}
return true;
}
/**
* Initialize TaskMasterCore
*/
private async initializeCore(projectRoot: string): Promise<void> {
if (!this.tmCore) {
this.tmCore = await createTaskMasterCore({ projectPath: projectRoot });
}
}
/**
* Get a single task from tm-core
*/
private async getSingleTask(
taskId: string,
_options: ShowCommandOptions
): Promise<ShowTaskResult> {
if (!this.tmCore) {
throw new Error('TaskMasterCore not initialized');
}
// Get the task
const task = await this.tmCore.getTask(taskId);
// Get storage type
const storageType = this.tmCore.getStorageType();
return {
task,
found: task !== null,
storageType: storageType as Exclude<StorageType, 'auto'>
};
}
/**
* Get multiple tasks from tm-core
*/
private async getMultipleTasks(
taskIds: string[],
_options: ShowCommandOptions
): Promise<ShowMultipleTasksResult> {
if (!this.tmCore) {
throw new Error('TaskMasterCore not initialized');
}
const tasks: Task[] = [];
const notFound: string[] = [];
// Get each task individually
for (const taskId of taskIds) {
const task = await this.tmCore.getTask(taskId);
if (task) {
tasks.push(task);
} else {
notFound.push(taskId);
}
}
// Get storage type
const storageType = this.tmCore.getStorageType();
return {
tasks,
notFound,
storageType: storageType as Exclude<StorageType, 'auto'>
};
}
/**
* Display results based on format
*/
private displayResults(
result: ShowTaskResult | ShowMultipleTasksResult,
options: ShowCommandOptions
): void {
const format = options.format || 'text';
switch (format) {
case 'json':
this.displayJson(result);
break;
case 'text':
default:
if ('task' in result) {
// Single task result
this.displaySingleTask(result, options);
} else {
// Multiple tasks result
this.displayMultipleTasks(result, options);
}
break;
}
}
/**
* Display in JSON format
*/
private displayJson(result: ShowTaskResult | ShowMultipleTasksResult): void {
console.log(JSON.stringify(result, null, 2));
}
/**
* Display a single task in text format
*/
private displaySingleTask(
result: ShowTaskResult,
options: ShowCommandOptions
): void {
if (!result.found || !result.task) {
console.log(
boxen(chalk.yellow(`Task not found!`), {
padding: { top: 0, bottom: 0, left: 1, right: 1 },
borderColor: 'yellow',
borderStyle: 'round',
margin: { top: 1 }
})
);
return;
}
// Use the global task details display function
displayTaskDetails(result.task, {
statusFilter: options.status,
showSuggestedActions: true
});
}
/**
* Display multiple tasks in text format
*/
private displayMultipleTasks(
result: ShowMultipleTasksResult,
_options: ShowCommandOptions
): void {
// Header
ui.displayBanner(`Tasks (${result.tasks.length} found)`);
if (result.notFound.length > 0) {
console.log(chalk.yellow(`\n⚠ Not found: ${result.notFound.join(', ')}`));
}
if (result.tasks.length === 0) {
ui.displayWarning('No tasks found matching the criteria.');
return;
}
// Task table
console.log(chalk.blue.bold(`\n📋 Tasks:\n`));
console.log(
ui.createTaskTable(result.tasks, {
showSubtasks: true,
showDependencies: true
})
);
console.log(`\n${chalk.gray('Storage: ' + result.storageType)}`);
}
/**
* Set the last result for programmatic access
*/
private setLastResult(
result: ShowTaskResult | ShowMultipleTasksResult
): void {
this.lastResult = result;
}
/**
* Get the last result (for programmatic usage)
*/
getLastResult(): ShowTaskResult | ShowMultipleTasksResult | undefined {
return this.lastResult;
}
/**
* Clean up resources
*/
async cleanup(): Promise<void> {
if (this.tmCore) {
await this.tmCore.close();
this.tmCore = undefined;
}
}
/**
* Register this command on an existing program
*/
static register(program: Command, name?: string): ShowCommand {
const showCommand = new ShowCommand(name);
program.addCommand(showCommand);
return showCommand;
}
}
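A short usage sketch, assuming the package is consumed as @tm/cli. Comma-separated IDs switch the command into the multi-task code path, and the result shape can be discriminated the same way displayResults does with its 'task' in result check.

import { Command } from 'commander';
import { ShowCommand } from '@tm/cli';

async function main() {
  const program = new Command();
  const show = ShowCommand.register(program);

  // --silent keeps stdout clean; the result is still stored for programmatic access
  await program.parseAsync(['node', 'tm', 'show', '1,2,3', '--silent']);

  const result = show.getLastResult();
  if (result && 'tasks' in result) {
    console.log(`Found ${result.tasks.length} tasks, ${result.notFound.length} missing`);
  }
  await show.cleanup();
}

main();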


@@ -0,0 +1,503 @@
/**
* @fileoverview StartCommand using Commander's native class pattern
* Extends Commander.Command for better integration with the framework
* This is a thin presentation layer over @tm/core's TaskExecutionService
*/
import { Command } from 'commander';
import chalk from 'chalk';
import boxen from 'boxen';
import ora, { type Ora } from 'ora';
import { spawn } from 'child_process';
import {
createTaskMasterCore,
type TaskMasterCore,
type StartTaskResult as CoreStartTaskResult
} from '@tm/core';
import { displayTaskDetails } from '../ui/components/task-detail.component.js';
import * as ui from '../utils/ui.js';
/**
* CLI-specific options interface for the start command
*/
export interface StartCommandOptions {
id?: string;
format?: 'text' | 'json';
project?: string;
dryRun?: boolean;
force?: boolean;
statusUpdate?: boolean; // Commander sets this to false when --no-status-update is passed
}
/**
* CLI-specific result type from start command
* Extends the core result with CLI-specific display information
*/
export interface StartCommandResult extends CoreStartTaskResult {
storageType?: string;
}
/**
* StartCommand extending Commander's Command class
* This is a thin presentation layer over @tm/core's TaskExecutionService
*/
export class StartCommand extends Command {
private tmCore?: TaskMasterCore;
private lastResult?: StartCommandResult;
constructor(name?: string) {
super(name || 'start');
// Configure the command
this.description(
'Start working on a task by launching claude-code with context'
)
.argument('[id]', 'Task ID to start working on')
.option('-i, --id <id>', 'Task ID to start working on')
.option('-f, --format <format>', 'Output format (text, json)', 'text')
.option('-p, --project <path>', 'Project root directory', process.cwd())
.option(
'--dry-run',
'Show what would be executed without launching claude-code'
)
.option(
'--force',
'Force start even if another task is already in-progress'
)
.option(
'--no-status-update',
'Do not automatically update task status to in-progress'
)
.action(
async (taskId: string | undefined, options: StartCommandOptions) => {
await this.executeCommand(taskId, options);
}
);
}
/**
* Execute the start command
*/
private async executeCommand(
taskId: string | undefined,
options: StartCommandOptions
): Promise<void> {
let spinner: Ora | null = null;
try {
// Validate options
if (!this.validateOptions(options)) {
process.exit(1);
}
// Initialize tm-core with spinner
spinner = ora('Initializing Task Master...').start();
await this.initializeCore(options.project || process.cwd());
spinner.succeed('Task Master initialized');
// Get the task ID from argument or option, or find next available task
const idArg = taskId || options.id || null;
let targetTaskId = idArg;
if (!targetTaskId) {
spinner = ora('Finding next available task...').start();
targetTaskId = await this.performGetNextTask();
if (targetTaskId) {
spinner.succeed(`Found next task: #${targetTaskId}`);
} else {
spinner.fail('No available tasks found');
}
}
if (!targetTaskId) {
ui.displayError('No task ID provided and no available tasks found');
process.exit(1);
}
// Show pre-launch message (no spinner needed, it's just display)
if (!options.dryRun) {
await this.showPreLaunchMessage(targetTaskId);
}
// Use tm-core's startTask method with spinner
spinner = ora('Preparing task execution...').start();
const coreResult = await this.performStartTask(targetTaskId, options);
if (coreResult.started) {
spinner.succeed(
options.dryRun
? 'Dry run completed'
: 'Task prepared - launching Claude...'
);
} else {
spinner.fail('Task execution failed');
}
// Execute command if we have one and it's not a dry run
if (!options.dryRun && coreResult.command) {
// The preparation spinner has already finished at this point; print a blank
// line so Claude Code has room when it takes over the terminal
if (spinner && !spinner.isSpinning) {
console.log();
}
await this.executeChildProcess(coreResult.command);
}
// Convert core result to CLI result with storage type
const result: StartCommandResult = {
...coreResult,
storageType: this.tmCore?.getStorageType()
};
// Store result for programmatic access
this.setLastResult(result);
// Display results (only for dry run or if execution failed)
if (options.dryRun || !coreResult.started) {
this.displayResults(result, options);
}
} catch (error: any) {
if (spinner) {
spinner.fail('Operation failed');
}
this.handleError(error);
process.exit(1);
}
}
/**
* Validate command options
*/
private validateOptions(options: StartCommandOptions): boolean {
// Validate format
if (options.format && !['text', 'json'].includes(options.format)) {
console.error(chalk.red(`Invalid format: ${options.format}`));
console.error(chalk.gray(`Valid formats: text, json`));
return false;
}
return true;
}
/**
* Initialize TaskMasterCore
*/
private async initializeCore(projectRoot: string): Promise<void> {
if (!this.tmCore) {
this.tmCore = await createTaskMasterCore({ projectPath: projectRoot });
}
}
/**
* Get the next available task using tm-core
*/
private async performGetNextTask(): Promise<string | null> {
if (!this.tmCore) {
throw new Error('TaskMasterCore not initialized');
}
return this.tmCore.getNextAvailableTask();
}
/**
* Show pre-launch message using tm-core data
*/
private async showPreLaunchMessage(targetTaskId: string): Promise<void> {
if (!this.tmCore) return;
const { task, subtask, subtaskId } =
await this.tmCore.getTaskWithSubtask(targetTaskId);
if (task) {
const workItemText = subtask
? `Subtask #${task.id}.${subtaskId} - ${subtask.title}`
: `Task #${task.id} - ${task.title}`;
console.log(
chalk.green('🚀 Starting: ') + chalk.white.bold(workItemText)
);
console.log(chalk.gray('Launching Claude Code...'));
console.log(); // Empty line
}
}
/**
* Perform start task using tm-core business logic
*/
private async performStartTask(
targetTaskId: string,
options: StartCommandOptions
): Promise<CoreStartTaskResult> {
if (!this.tmCore) {
throw new Error('TaskMasterCore not initialized');
}
// Show spinner for status update if enabled
let statusSpinner: Ora | null = null;
if (options.statusUpdate !== false && !options.dryRun) {
statusSpinner = ora('Updating task status to in-progress...').start();
}
// Get execution command from tm-core (instead of executing directly)
const result = await this.tmCore.startTask(targetTaskId, {
dryRun: options.dryRun,
force: options.force,
updateStatus: options.statusUpdate !== false
});
if (statusSpinner) {
if (result.started) {
statusSpinner.succeed('Task status updated');
} else {
statusSpinner.warn('Task status update skipped');
}
}
if (!result) {
throw new Error('Failed to start task - core result is undefined');
}
// Don't execute here - let the main executeCommand method handle it
return result;
}
/**
* Execute the child process directly in the main thread for better process control
*/
private async executeChildProcess(command: {
executable: string;
args: string[];
cwd: string;
}): Promise<void> {
return new Promise((resolve, reject) => {
// Don't show the full command with args as it can be very long
console.log(chalk.green('🚀 Launching Claude Code...'));
console.log(); // Add space before Claude takes over
const childProcess = spawn(command.executable, command.args, {
cwd: command.cwd,
stdio: 'inherit', // Inherit stdio from parent process
shell: false
});
childProcess.on('close', (code) => {
if (code === 0) {
resolve();
} else {
reject(new Error(`Process exited with code ${code}`));
}
});
childProcess.on('error', (error) => {
reject(new Error(`Failed to spawn process: ${error.message}`));
});
// Handle process termination signals gracefully
const cleanup = () => {
if (childProcess && !childProcess.killed) {
childProcess.kill('SIGTERM');
}
};
process.on('SIGINT', cleanup);
process.on('SIGTERM', cleanup);
process.on('exit', cleanup);
});
}
/**
* Display results based on format
*/
private displayResults(
result: StartCommandResult,
options: StartCommandOptions
): void {
const format = options.format || 'text';
switch (format) {
case 'json':
this.displayJson(result);
break;
case 'text':
default:
this.displayTextResult(result, options);
break;
}
}
/**
* Display in JSON format
*/
private displayJson(result: StartCommandResult): void {
console.log(JSON.stringify(result, null, 2));
}
/**
* Display result in text format
*/
private displayTextResult(
result: StartCommandResult,
options: StartCommandOptions
): void {
if (!result.found || !result.task) {
console.log(
boxen(chalk.yellow(`Task not found!`), {
padding: { top: 0, bottom: 0, left: 1, right: 1 },
borderColor: 'yellow',
borderStyle: 'round',
margin: { top: 1 }
})
);
return;
}
const task = result.task;
if (options.dryRun) {
// For dry run, show full details since Claude Code won't be launched
let headerText = `Dry Run: Starting Task #${task.id} - ${task.title}`;
// If working on a specific subtask, highlight it in the header
if (result.subtask && result.subtaskId) {
headerText = `Dry Run: Starting Subtask #${task.id}.${result.subtaskId} - ${result.subtask.title}`;
}
displayTaskDetails(task, {
customHeader: headerText,
headerColor: 'yellow'
});
// Show claude-code prompt
if (result.executionOutput) {
console.log(); // Empty line for spacing
console.log(
boxen(
chalk.white.bold('Claude-Code Prompt:') +
'\n\n' +
result.executionOutput,
{
padding: 1,
borderStyle: 'round',
borderColor: 'cyan',
width: process.stdout.columns * 0.95 || 100
}
)
);
}
console.log(); // Empty line for spacing
console.log(
boxen(
chalk.yellow(
'🔍 Dry run - claude-code would be launched with the above prompt'
),
{
padding: { top: 0, bottom: 0, left: 1, right: 1 },
borderColor: 'yellow',
borderStyle: 'round'
}
)
);
} else {
// For actual execution, show minimal info since Claude Code will clear the terminal
if (result.started) {
// Determine what was worked on - task or subtask
let workItemText = `Task: #${task.id} - ${task.title}`;
let statusTarget = task.id;
if (result.subtask && result.subtaskId) {
workItemText = `Subtask: #${task.id}.${result.subtaskId} - ${result.subtask.title}`;
statusTarget = `${task.id}.${result.subtaskId}`;
}
// Post-execution message (shown after Claude Code exits)
console.log(
boxen(
chalk.green.bold('🎉 Task Session Complete!') +
'\n\n' +
chalk.white(workItemText) +
'\n\n' +
chalk.cyan('Next steps:') +
'\n' +
`• Run ${chalk.yellow('tm show ' + task.id)} to review task details\n` +
`• Run ${chalk.yellow('tm set-status --id=' + statusTarget + ' --status=done')} when complete\n` +
`• Run ${chalk.yellow('tm next')} to find the next available task\n` +
`• Run ${chalk.yellow('tm start')} to begin the next task`,
{
padding: 1,
borderStyle: 'round',
borderColor: 'green',
width: process.stdout.columns * 0.95 || 100,
margin: { top: 1 }
}
)
);
} else {
// Error case
console.log(
boxen(
chalk.red(
'❌ Failed to launch claude-code' +
(result.error ? `\nError: ${result.error}` : '')
),
{
padding: { top: 0, bottom: 0, left: 1, right: 1 },
borderColor: 'red',
borderStyle: 'round'
}
)
);
}
}
console.log(`\n${chalk.gray('Storage: ' + result.storageType)}`);
}
/**
* Handle general errors
*/
private handleError(error: any): void {
const msg = error?.getSanitizedDetails?.() ?? {
message: error?.message ?? String(error)
};
console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
// Show stack trace in development mode or when DEBUG is set
const isDevelopment = process.env.NODE_ENV !== 'production';
if ((isDevelopment || process.env.DEBUG) && error.stack) {
console.error(chalk.gray(error.stack));
}
}
/**
* Set the last result for programmatic access
*/
private setLastResult(result: StartCommandResult): void {
this.lastResult = result;
}
/**
* Get the last result (for programmatic usage)
*/
getLastResult(): StartCommandResult | undefined {
return this.lastResult;
}
/**
* Clean up resources
*/
async cleanup(): Promise<void> {
if (this.tmCore) {
await this.tmCore.close();
this.tmCore = undefined;
}
}
/**
* Register this command on an existing program
*/
static register(program: Command, name?: string): StartCommand {
const startCommand = new StartCommand(name);
program.addCommand(startCommand);
return startCommand;
}
}
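A dry-run usage sketch, assuming the package is consumed as @tm/cli. With --dry-run the command prints the claude-code prompt instead of spawning the process, which keeps the example side-effect free.

import { Command } from 'commander';
import { StartCommand } from '@tm/cli';

async function main() {
  const program = new Command();
  const start = StartCommand.register(program);

  // Prepare task 42 without launching claude-code or touching its status
  await program.parseAsync([
    'node', 'tm', 'start', '42', '--dry-run', '--no-status-update'
  ]);

  const result = start.getLastResult();
  console.log(result?.started ? 'Task prepared' : `Not started: ${result?.error ?? 'unknown'}`);
  await start.cleanup();
}

main();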

apps/cli/src/index.ts (new file, 40 lines)

@@ -0,0 +1,40 @@
/**
* @fileoverview Main entry point for @tm/cli package
* Exports all public APIs for the CLI presentation layer
*/
// Commands
export { ListTasksCommand } from './commands/list.command.js';
export { ShowCommand } from './commands/show.command.js';
export { AuthCommand } from './commands/auth.command.js';
export { ContextCommand } from './commands/context.command.js';
export { StartCommand } from './commands/start.command.js';
export { SetStatusCommand } from './commands/set-status.command.js';
export { ExportCommand } from './commands/export.command.js';
// Command Registry
export {
CommandRegistry,
registerAllCommands,
registerCommandsByCategory,
type CommandMetadata
} from './command-registry.js';
// UI utilities (for other commands to use)
export * as ui from './utils/ui.js';
// Auto-update utilities
export {
checkForUpdate,
performAutoUpdate,
displayUpgradeNotification,
compareVersions
} from './utils/auto-update.js';
// Re-export commonly used types from tm-core
export type {
Task,
TaskStatus,
TaskPriority,
TaskMasterCore
} from '@tm/core';
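A sketch of a host entry point consuming these exports. registerAllCommands is re-exported from command-registry.js, whose signature is not part of this diff, so the call below assumes it simply takes a Commander program.

import { Command } from 'commander';
import { registerAllCommands } from '@tm/cli';

async function main() {
  const program = new Command().name('task-master').description('Task Master CLI');

  // Assumed signature: wires every exported command class onto the program
  registerAllCommands(program);

  await program.parseAsync(process.argv);
}

main();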


@@ -0,0 +1,568 @@
/**
* @fileoverview Dashboard components for Task Master CLI
* Displays project statistics and dependency information
*/
import chalk from 'chalk';
import boxen from 'boxen';
import type { Task, TaskPriority } from '@tm/core/types';
import { getComplexityWithColor } from '../../utils/ui.js';
/**
* Statistics for task collection
*/
export interface TaskStatistics {
total: number;
done: number;
inProgress: number;
pending: number;
blocked: number;
deferred: number;
cancelled: number;
review?: number;
completionPercentage: number;
}
/**
* Statistics for dependencies
*/
export interface DependencyStatistics {
tasksWithNoDeps: number;
tasksReadyToWork: number;
tasksBlockedByDeps: number;
mostDependedOnTaskId?: number;
mostDependedOnCount?: number;
avgDependenciesPerTask: number;
}
/**
* Next task information
*/
export interface NextTaskInfo {
id: string | number;
title: string;
priority?: TaskPriority;
dependencies?: (string | number)[];
complexity?: number | string;
}
/**
* Status breakdown for progress bars
*/
export interface StatusBreakdown {
'in-progress'?: number;
pending?: number;
blocked?: number;
deferred?: number;
cancelled?: number;
review?: number;
}
/**
* Create a progress bar with color-coded status segments
*/
function createProgressBar(
completionPercentage: number,
width: number = 30,
statusBreakdown?: StatusBreakdown
): string {
// If no breakdown provided, use simple green bar
if (!statusBreakdown) {
const filled = Math.round((completionPercentage / 100) * width);
const empty = width - filled;
return chalk.green('█').repeat(filled) + chalk.gray('░').repeat(empty);
}
// Build the bar with different colored sections
// Order matches the status display: Done, Cancelled, Deferred, In Progress, Review, Pending, Blocked
let bar = '';
let charsUsed = 0;
// 1. Green filled blocks for completed tasks (done)
const completedChars = Math.round((completionPercentage / 100) * width);
if (completedChars > 0) {
bar += chalk.green('█').repeat(completedChars);
charsUsed += completedChars;
}
// 2. Gray filled blocks for cancelled (won't be done)
if (statusBreakdown.cancelled && charsUsed < width) {
const cancelledChars = Math.round(
(statusBreakdown.cancelled / 100) * width
);
const actualChars = Math.min(cancelledChars, width - charsUsed);
if (actualChars > 0) {
bar += chalk.gray('█').repeat(actualChars);
charsUsed += actualChars;
}
}
// 3. Gray filled blocks for deferred (won't be done now)
if (statusBreakdown.deferred && charsUsed < width) {
const deferredChars = Math.round((statusBreakdown.deferred / 100) * width);
const actualChars = Math.min(deferredChars, width - charsUsed);
if (actualChars > 0) {
bar += chalk.gray('█').repeat(actualChars);
charsUsed += actualChars;
}
}
// 4. Blue filled blocks for in-progress (actively working)
if (statusBreakdown['in-progress'] && charsUsed < width) {
const inProgressChars = Math.round(
(statusBreakdown['in-progress'] / 100) * width
);
const actualChars = Math.min(inProgressChars, width - charsUsed);
if (actualChars > 0) {
bar += chalk.blue('█').repeat(actualChars);
charsUsed += actualChars;
}
}
// 5. Magenta empty blocks for review (almost done)
if (statusBreakdown.review && charsUsed < width) {
const reviewChars = Math.round((statusBreakdown.review / 100) * width);
const actualChars = Math.min(reviewChars, width - charsUsed);
if (actualChars > 0) {
bar += chalk.magenta('░').repeat(actualChars);
charsUsed += actualChars;
}
}
// 6. Yellow empty blocks for pending (ready to start)
if (statusBreakdown.pending && charsUsed < width) {
const pendingChars = Math.round((statusBreakdown.pending / 100) * width);
const actualChars = Math.min(pendingChars, width - charsUsed);
if (actualChars > 0) {
bar += chalk.yellow('░').repeat(actualChars);
charsUsed += actualChars;
}
}
// 7. Red empty blocks for blocked (can't start yet)
if (statusBreakdown.blocked && charsUsed < width) {
const blockedChars = Math.round((statusBreakdown.blocked / 100) * width);
const actualChars = Math.min(blockedChars, width - charsUsed);
if (actualChars > 0) {
bar += chalk.red('░').repeat(actualChars);
charsUsed += actualChars;
}
}
// Fill any remaining space with yellow empty blocks (rounding leftovers count as pending)
if (charsUsed < width) {
bar += chalk.yellow('░').repeat(width - charsUsed);
}
return bar;
}
/**
* Calculate task statistics from a list of tasks
*/
export function calculateTaskStatistics(tasks: Task[]): TaskStatistics {
const stats: TaskStatistics = {
total: tasks.length,
done: 0,
inProgress: 0,
pending: 0,
blocked: 0,
deferred: 0,
cancelled: 0,
review: 0,
completionPercentage: 0
};
tasks.forEach((task) => {
switch (task.status) {
case 'done':
stats.done++;
break;
case 'in-progress':
stats.inProgress++;
break;
case 'pending':
stats.pending++;
break;
case 'blocked':
stats.blocked++;
break;
case 'deferred':
stats.deferred++;
break;
case 'cancelled':
stats.cancelled++;
break;
case 'review':
stats.review = (stats.review || 0) + 1;
break;
}
});
stats.completionPercentage =
stats.total > 0 ? Math.round((stats.done / stats.total) * 100) : 0;
return stats;
}
/**
* Calculate subtask statistics from tasks
*/
export function calculateSubtaskStatistics(tasks: Task[]): TaskStatistics {
const stats: TaskStatistics = {
total: 0,
done: 0,
inProgress: 0,
pending: 0,
blocked: 0,
deferred: 0,
cancelled: 0,
review: 0,
completionPercentage: 0
};
tasks.forEach((task) => {
if (task.subtasks && task.subtasks.length > 0) {
task.subtasks.forEach((subtask) => {
stats.total++;
switch (subtask.status) {
case 'done':
stats.done++;
break;
case 'in-progress':
stats.inProgress++;
break;
case 'pending':
stats.pending++;
break;
case 'blocked':
stats.blocked++;
break;
case 'deferred':
stats.deferred++;
break;
case 'cancelled':
stats.cancelled++;
break;
case 'review':
stats.review = (stats.review || 0) + 1;
break;
}
});
}
});
stats.completionPercentage =
stats.total > 0 ? Math.round((stats.done / stats.total) * 100) : 0;
return stats;
}
/**
* Calculate dependency statistics
*/
export function calculateDependencyStatistics(
tasks: Task[]
): DependencyStatistics {
const completedTaskIds = new Set(
tasks.filter((t) => t.status === 'done').map((t) => t.id)
);
const tasksWithNoDeps = tasks.filter(
(t) =>
t.status !== 'done' && (!t.dependencies || t.dependencies.length === 0)
).length;
const tasksWithAllDepsSatisfied = tasks.filter(
(t) =>
t.status !== 'done' &&
t.dependencies &&
t.dependencies.length > 0 &&
t.dependencies.every((depId) => completedTaskIds.has(depId))
).length;
const tasksBlockedByDeps = tasks.filter(
(t) =>
t.status !== 'done' &&
t.dependencies &&
t.dependencies.length > 0 &&
!t.dependencies.every((depId) => completedTaskIds.has(depId))
).length;
// Calculate most depended-on task
const dependencyCount: Record<string, number> = {};
tasks.forEach((task) => {
if (task.dependencies && task.dependencies.length > 0) {
task.dependencies.forEach((depId) => {
const key = String(depId);
dependencyCount[key] = (dependencyCount[key] || 0) + 1;
});
}
});
let mostDependedOnTaskId: number | undefined;
let mostDependedOnCount = 0;
for (const [taskId, count] of Object.entries(dependencyCount)) {
if (count > mostDependedOnCount) {
mostDependedOnCount = count;
mostDependedOnTaskId = parseInt(taskId);
}
}
// Calculate average dependencies
const totalDependencies = tasks.reduce(
(sum, task) => sum + (task.dependencies ? task.dependencies.length : 0),
0
);
const avgDependenciesPerTask =
tasks.length > 0 ? totalDependencies / tasks.length : 0;
return {
tasksWithNoDeps,
tasksReadyToWork: tasksWithNoDeps + tasksWithAllDepsSatisfied,
tasksBlockedByDeps,
mostDependedOnTaskId,
mostDependedOnCount,
avgDependenciesPerTask
};
}
/**
* Get priority counts
*/
export function getPriorityBreakdown(
tasks: Task[]
): Record<TaskPriority, number> {
const breakdown: Record<TaskPriority, number> = {
critical: 0,
high: 0,
medium: 0,
low: 0
};
tasks.forEach((task) => {
const priority = task.priority || 'medium';
breakdown[priority]++;
});
return breakdown;
}
/**
* Calculate status breakdown as percentages
*/
function calculateStatusBreakdown(stats: TaskStatistics): StatusBreakdown {
if (stats.total === 0) return {};
return {
'in-progress': (stats.inProgress / stats.total) * 100,
pending: (stats.pending / stats.total) * 100,
blocked: (stats.blocked / stats.total) * 100,
deferred: (stats.deferred / stats.total) * 100,
cancelled: (stats.cancelled / stats.total) * 100,
review: ((stats.review || 0) / stats.total) * 100
};
}
/**
* Format status counts in the correct order with colors
* @param stats - The statistics object containing counts
* @param isSubtask - Whether this is for subtasks (affects "Done" vs "Completed" label)
*/
function formatStatusLine(
stats: TaskStatistics,
isSubtask: boolean = false
): string {
const parts: string[] = [];
// Order: Done, Cancelled, Deferred, In Progress, Review, Pending, Blocked
if (isSubtask) {
parts.push(`Completed: ${chalk.green(`${stats.done}/${stats.total}`)}`);
} else {
parts.push(`Done: ${chalk.green(stats.done)}`);
}
parts.push(`Cancelled: ${chalk.gray(stats.cancelled)}`);
parts.push(`Deferred: ${chalk.gray(stats.deferred)}`);
// Add line break for second row
const firstLine = parts.join(' ');
parts.length = 0;
parts.push(`In Progress: ${chalk.blue(stats.inProgress)}`);
parts.push(`Review: ${chalk.magenta(stats.review || 0)}`);
parts.push(`Pending: ${chalk.yellow(stats.pending)}`);
parts.push(`Blocked: ${chalk.red(stats.blocked)}`);
const secondLine = parts.join(' ');
return firstLine + '\n' + secondLine;
}
/**
* Display the project dashboard box
*/
export function displayProjectDashboard(
taskStats: TaskStatistics,
subtaskStats: TaskStatistics,
priorityBreakdown: Record<TaskPriority, number>
): string {
// Calculate status breakdowns using the helper function
const taskStatusBreakdown = calculateStatusBreakdown(taskStats);
const subtaskStatusBreakdown = calculateStatusBreakdown(subtaskStats);
// Create progress bars with the breakdowns
const taskProgressBar = createProgressBar(
taskStats.completionPercentage,
30,
taskStatusBreakdown
);
const subtaskProgressBar = createProgressBar(
subtaskStats.completionPercentage,
30,
subtaskStatusBreakdown
);
const taskPercentage = `${taskStats.completionPercentage}% ${taskStats.done}/${taskStats.total}`;
const subtaskPercentage = `${subtaskStats.completionPercentage}% ${subtaskStats.done}/${subtaskStats.total}`;
const content =
chalk.white.bold('Project Dashboard') +
'\n' +
`Tasks Progress: ${taskProgressBar} ${chalk.yellow(taskPercentage)}\n` +
formatStatusLine(taskStats, false) +
'\n\n' +
`Subtasks Progress: ${subtaskProgressBar} ${chalk.cyan(subtaskPercentage)}\n` +
formatStatusLine(subtaskStats, true) +
'\n\n' +
chalk.cyan.bold('Priority Breakdown:') +
'\n' +
`${chalk.red('•')} ${chalk.white('High priority:')} ${priorityBreakdown.high}\n` +
`${chalk.yellow('•')} ${chalk.white('Medium priority:')} ${priorityBreakdown.medium}\n` +
`${chalk.green('•')} ${chalk.white('Low priority:')} ${priorityBreakdown.low}`;
return content;
}
/**
* Display the dependency dashboard box
*/
export function displayDependencyDashboard(
depStats: DependencyStatistics,
nextTask?: NextTaskInfo
): string {
const content =
chalk.white.bold('Dependency Status & Next Task') +
'\n' +
chalk.cyan.bold('Dependency Metrics:') +
'\n' +
`${chalk.green('•')} ${chalk.white('Tasks with no dependencies:')} ${depStats.tasksWithNoDeps}\n` +
`${chalk.green('•')} ${chalk.white('Tasks ready to work on:')} ${depStats.tasksReadyToWork}\n` +
`${chalk.yellow('•')} ${chalk.white('Tasks blocked by dependencies:')} ${depStats.tasksBlockedByDeps}\n` +
`${chalk.magenta('•')} ${chalk.white('Most depended-on task:')} ${
depStats.mostDependedOnTaskId
? chalk.cyan(
`#${depStats.mostDependedOnTaskId} (${depStats.mostDependedOnCount} dependents)`
)
: chalk.gray('None')
}\n` +
`${chalk.blue('•')} ${chalk.white('Avg dependencies per task:')} ${depStats.avgDependenciesPerTask.toFixed(1)}\n\n` +
chalk.cyan.bold('Next Task to Work On:') +
'\n' +
`ID: ${nextTask ? chalk.cyan(String(nextTask.id)) : chalk.gray('N/A')} - ${
nextTask
? chalk.white.bold(nextTask.title)
: chalk.yellow('No task available')
}\n` +
`Priority: ${nextTask?.priority || chalk.gray('N/A')} Dependencies: ${
nextTask?.dependencies?.length
? chalk.cyan(nextTask.dependencies.join(', '))
: chalk.gray('None')
}\n` +
`Complexity: ${nextTask?.complexity !== undefined ? getComplexityWithColor(nextTask.complexity) : chalk.gray('N/A')}`;
return content;
}
/**
* Display dashboard boxes side by side or stacked
*/
export function displayDashboards(
taskStats: TaskStatistics,
subtaskStats: TaskStatistics,
priorityBreakdown: Record<TaskPriority, number>,
depStats: DependencyStatistics,
nextTask?: NextTaskInfo
): void {
const projectDashboardContent = displayProjectDashboard(
taskStats,
subtaskStats,
priorityBreakdown
);
const dependencyDashboardContent = displayDependencyDashboard(
depStats,
nextTask
);
// Get terminal width
const terminalWidth = process.stdout.columns || 80;
const minDashboardWidth = 50;
const minDependencyWidth = 50;
const totalMinWidth = minDashboardWidth + minDependencyWidth + 4;
// If terminal is wide enough, show side by side
if (terminalWidth >= totalMinWidth) {
const halfWidth = Math.floor(terminalWidth / 2);
const boxContentWidth = halfWidth - 4;
const dashboardBox = boxen(projectDashboardContent, {
padding: 1,
borderColor: 'blue',
borderStyle: 'round',
width: boxContentWidth,
dimBorder: false
});
const dependencyBox = boxen(dependencyDashboardContent, {
padding: 1,
borderColor: 'magenta',
borderStyle: 'round',
width: boxContentWidth,
dimBorder: false
});
// Create side-by-side layout
const dashboardLines = dashboardBox.split('\n');
const dependencyLines = dependencyBox.split('\n');
const maxHeight = Math.max(dashboardLines.length, dependencyLines.length);
const combinedLines = [];
for (let i = 0; i < maxHeight; i++) {
const dashLine = i < dashboardLines.length ? dashboardLines[i] : '';
const depLine = i < dependencyLines.length ? dependencyLines[i] : '';
const paddedDashLine = dashLine.padEnd(halfWidth, ' ');
combinedLines.push(paddedDashLine + depLine);
}
console.log(combinedLines.join('\n'));
} else {
// Show stacked vertically
const dashboardBox = boxen(projectDashboardContent, {
padding: 1,
borderColor: 'blue',
borderStyle: 'round',
margin: { top: 0, bottom: 1 }
});
const dependencyBox = boxen(dependencyDashboardContent, {
padding: 1,
borderColor: 'magenta',
borderStyle: 'round',
margin: { top: 0, bottom: 1 }
});
console.log(dashboardBox);
console.log(dependencyBox);
}
}
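A small, self-contained sketch of how these helpers compose. The sample tasks only carry the fields the calculations above read; the cast stands in for the full Task shape defined in @tm/core.

import type { Task } from '@tm/core/types';
import {
  calculateTaskStatistics,
  calculateSubtaskStatistics,
  calculateDependencyStatistics,
  getPriorityBreakdown,
  displayDashboards
} from './dashboard.component.js';

const tasks = [
  { id: 1, title: 'Set up repo', status: 'done', priority: 'high', dependencies: [] },
  {
    id: 2,
    title: 'Build list command',
    status: 'in-progress',
    priority: 'medium',
    dependencies: [1],
    subtasks: [{ id: 1, title: 'Render table', status: 'pending' }]
  }
] as unknown as Task[];

displayDashboards(
  calculateTaskStatistics(tasks),
  calculateSubtaskStatistics(tasks),
  getPriorityBreakdown(tasks),
  calculateDependencyStatistics(tasks),
  { id: 2, title: 'Build list command', priority: 'medium', dependencies: [1] }
);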


@@ -0,0 +1,45 @@
/**
* @fileoverview Task Master header component
* Displays the banner, version, project info, and file path
*/
import chalk from 'chalk';
/**
* Header configuration options
*/
export interface HeaderOptions {
title?: string;
tag?: string;
filePath?: string;
}
/**
* Display the Task Master header with project info
*/
export function displayHeader(options: HeaderOptions = {}): void {
const { filePath, tag } = options;
// Display tag and file path info
if (tag) {
let tagInfo = '';
if (tag && tag !== 'master') {
tagInfo = `🏷 tag: ${chalk.cyan(tag)}`;
} else {
tagInfo = `🏷 tag: ${chalk.cyan('master')}`;
}
console.log(tagInfo);
if (filePath) {
// Convert to absolute path if it's relative
const absolutePath = filePath.startsWith('/')
? filePath
: `${process.cwd()}/${filePath}`;
console.log(`Listing tasks from: ${chalk.dim(absolutePath)}`);
}
console.log(); // Empty line for spacing
}
}
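A usage sketch: with a non-master tag the tag name is highlighted, and a relative file path is resolved against the current working directory as shown above.

import { displayHeader } from './header.component.js';

// Prints the tag line plus the resolved tasks file path
displayHeader({ tag: 'feature-auth', filePath: '.taskmaster/tasks/tasks.json' });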


@@ -0,0 +1,9 @@
/**
* @fileoverview UI components exports
*/
export * from './header.component.js';
export * from './dashboard.component.js';
export * from './next-task.component.js';
export * from './suggested-steps.component.js';
export * from './task-detail.component.js';


@@ -0,0 +1,141 @@
/**
* @fileoverview Next task recommendation component
* Displays detailed information about the recommended next task
*/
import chalk from 'chalk';
import boxen from 'boxen';
import type { Task } from '@tm/core/types';
import { getComplexityWithColor } from '../../utils/ui.js';
/**
* Next task display options
*/
export interface NextTaskDisplayOptions {
id: string | number;
title: string;
priority?: string;
status?: string;
dependencies?: (string | number)[];
description?: string;
complexity?: number;
}
/**
* Display the recommended next task section
*/
export function displayRecommendedNextTask(
task: NextTaskDisplayOptions | undefined
): void {
if (!task) {
// If no task available, show a message
console.log(
boxen(
chalk.yellow(
'No tasks available to work on. All tasks are either completed, blocked by dependencies, or in progress.'
),
{
padding: 1,
borderStyle: 'round',
borderColor: 'yellow',
title: '⚠ NO TASKS AVAILABLE ⚠',
titleAlignment: 'center'
}
)
);
return;
}
// Build the content for the next task box
const content = [];
// Task header with ID and title
content.push(
`🔥 ${chalk.hex('#FF8800').bold('Next Task to Work On:')} ${chalk.yellow(`#${task.id}`)}${chalk.hex('#FF8800').bold(` - ${task.title}`)}`
);
content.push('');
// Priority and Status line
const statusLine = [];
if (task.priority) {
const priorityColor =
task.priority === 'high'
? chalk.red
: task.priority === 'medium'
? chalk.yellow
: chalk.gray;
statusLine.push(`Priority: ${priorityColor.bold(task.priority)}`);
}
if (task.status) {
const statusDisplay =
task.status === 'pending'
? chalk.yellow('○ pending')
: task.status === 'in-progress'
? chalk.blue('▶ in-progress')
: chalk.gray(task.status);
statusLine.push(`Status: ${statusDisplay}`);
}
content.push(statusLine.join(' '));
// Dependencies
const depsDisplay =
!task.dependencies || task.dependencies.length === 0
? chalk.gray('None')
: chalk.cyan(task.dependencies.join(', '));
content.push(`Dependencies: ${depsDisplay}`);
// Complexity with color and label
if (typeof task.complexity === 'number') {
content.push(`Complexity: ${getComplexityWithColor(task.complexity)}`);
}
// Description if available
if (task.description) {
content.push('');
content.push(`Description: ${chalk.white(task.description)}`);
}
// Action commands
content.push('');
content.push(
`${chalk.cyan('Start working:')} ${chalk.yellow(`task-master set-status --id=${task.id} --status=in-progress`)}`
);
content.push(
`${chalk.cyan('View details:')} ${chalk.yellow(`task-master show ${task.id}`)}`
);
// Display in a styled box with orange border
console.log(
boxen(content.join('\n'), {
padding: 1,
margin: { top: 1, bottom: 1 },
borderStyle: 'round',
borderColor: '#FFA500', // Orange color
title: chalk.hex('#FFA500')('⚡ RECOMMENDED NEXT TASK ⚡'),
titleAlignment: 'center',
width: process.stdout.columns * 0.97 || 100, // fall back when stdout is not a TTY
fullscreen: false
})
);
}
/**
* Get task description from the full task object
*/
export function getTaskDescription(task: Task): string | undefined {
// Try to get description from the task
// This could be from task.description or the first line of task.details
if ('description' in task && task.description) {
return task.description as string;
}
if ('details' in task && task.details) {
// Take first sentence or line from details
const details = task.details as string;
const firstLine = details.split('\n')[0];
const firstSentence = firstLine.split('.')[0];
return firstSentence;
}
return undefined;
}
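A usage sketch of the component with a hand-written NextTaskDisplayOptions object; passing undefined instead renders the "NO TASKS AVAILABLE" box.

import { displayRecommendedNextTask } from './next-task.component.js';

displayRecommendedNextTask({
  id: '7.2',
  title: 'Wire up storage adapter',
  priority: 'high',
  status: 'pending',
  dependencies: ['7.1'],
  description: 'Connect the list command to the api-storage backend.',
  complexity: 6
});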


@@ -0,0 +1,31 @@
/**
* @fileoverview Suggested next steps component
* Displays helpful command suggestions at the end of the list
*/
import chalk from 'chalk';
import boxen from 'boxen';
/**
* Display suggested next steps section
*/
export function displaySuggestedNextSteps(): void {
const steps = [
`${chalk.cyan('1.')} Run ${chalk.yellow('task-master next')} to see what to work on next`,
`${chalk.cyan('2.')} Run ${chalk.yellow('task-master expand --id=<id>')} to break down a task into subtasks`,
`${chalk.cyan('3.')} Run ${chalk.yellow('task-master set-status --id=<id> --status=done')} to mark a task as complete`
];
console.log(
boxen(
chalk.white.bold('Suggested Next Steps:') + '\n\n' + steps.join('\n'),
{
padding: 1,
margin: { top: 0, bottom: 1 },
borderStyle: 'round',
borderColor: 'gray',
width: process.stdout.columns * 0.97 || 100 // fall back when stdout is not a TTY
}
)
);
}
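The component takes no arguments; callers such as the list command simply append it after the task table.

import { displaySuggestedNextSteps } from './suggested-steps.component.js';

// Rendered as the final box of the list output
displaySuggestedNextSteps();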


@@ -0,0 +1,340 @@
/**
* @fileoverview Task detail component for show command
* Displays detailed task information in a structured format
*/
import chalk from 'chalk';
import boxen from 'boxen';
import Table from 'cli-table3';
import { marked, MarkedExtension } from 'marked';
import { markedTerminal } from 'marked-terminal';
import type { Task } from '@tm/core/types';
import {
getStatusWithColor,
getPriorityWithColor,
getComplexityWithColor
} from '../../utils/ui.js';
// Configure marked to use terminal renderer with subtle colors
marked.use(
markedTerminal({
// More subtle colors that match the overall design
code: (code: string) => {
// Custom code block handler to preserve formatting
return code
.split('\n')
.map((line) => ' ' + chalk.cyan(line))
.join('\n');
},
blockquote: chalk.gray.italic,
html: chalk.gray,
heading: chalk.white.bold, // White bold for headings
hr: chalk.gray,
listitem: chalk.white, // White for list items
paragraph: chalk.white, // White for paragraphs (default text color)
strong: chalk.white.bold, // White bold for strong text
em: chalk.white.italic, // White italic for emphasis
codespan: chalk.cyan, // Cyan for inline code (no background)
del: chalk.dim.strikethrough,
link: chalk.blue,
href: chalk.blue.underline,
// Add more explicit code block handling
showSectionPrefix: false,
unescape: true,
emoji: false,
// Try to preserve whitespace in code blocks
tab: 4,
width: 120
}) as MarkedExtension
);
// Also set marked options to preserve whitespace
marked.setOptions({
breaks: true,
gfm: true
});
/**
* Display the task header with tag
*/
export function displayTaskHeader(
taskId: string | number,
title: string
): void {
// Display task header box
console.log(
boxen(chalk.white.bold(`Task: #${taskId} - ${title}`), {
padding: { top: 0, bottom: 0, left: 1, right: 1 },
borderColor: 'blue',
borderStyle: 'round'
})
);
}
/**
* Display task properties in a table format
*/
export function displayTaskProperties(task: Task): void {
const terminalWidth = process.stdout.columns * 0.95 || 100;
// Create table for task properties - simple 2-column layout
const table = new Table({
head: [],
style: {
head: [],
border: ['grey']
},
colWidths: [
Math.floor(terminalWidth * 0.2),
Math.floor(terminalWidth * 0.8)
],
wordWrap: true
});
const deps =
task.dependencies && task.dependencies.length > 0
? task.dependencies.map((d) => String(d)).join(', ')
: 'None';
// Build the left column (labels) and right column (values)
const labels = [
chalk.cyan('ID:'),
chalk.cyan('Title:'),
chalk.cyan('Status:'),
chalk.cyan('Priority:'),
chalk.cyan('Dependencies:'),
chalk.cyan('Complexity:'),
chalk.cyan('Description:')
].join('\n');
const values = [
String(task.id),
task.title,
getStatusWithColor(task.status),
getPriorityWithColor(task.priority),
deps,
typeof task.complexity === 'number'
? getComplexityWithColor(task.complexity)
: chalk.gray('N/A'),
task.description || ''
].join('\n');
table.push([labels, values]);
console.log(table.toString());
}
/**
* Display implementation details in a box
*/
export function displayImplementationDetails(details: string): void {
// Handle all escaped characters properly
const cleanDetails = details
.replace(/\\n/g, '\n') // Convert \n to actual newlines
.replace(/\\t/g, '\t') // Convert \t to actual tabs
.replace(/\\"/g, '"') // Convert \" to actual quotes
.replace(/\\\\/g, '\\'); // Convert \\ to single backslash
const terminalWidth = process.stdout.columns * 0.95 || 100;
// Parse markdown to terminal-friendly format
const markdownResult = marked(cleanDetails);
const formattedDetails =
typeof markdownResult === 'string' ? markdownResult.trim() : cleanDetails; // Fallback to original if Promise
console.log(
boxen(
chalk.white.bold('Implementation Details:') + '\n\n' + formattedDetails,
{
padding: 1,
borderStyle: 'round',
borderColor: 'cyan', // Changed to cyan to match the original
width: terminalWidth // Fixed width to match the original
}
)
);
}
/**
* Display test strategy in a box
*/
export function displayTestStrategy(testStrategy: string): void {
// Handle all escaped characters properly (same as implementation details)
const cleanStrategy = testStrategy
.replace(/\\n/g, '\n') // Convert \n to actual newlines
.replace(/\\t/g, '\t') // Convert \t to actual tabs
.replace(/\\"/g, '"') // Convert \" to actual quotes
.replace(/\\\\/g, '\\'); // Convert \\ to single backslash
const terminalWidth = process.stdout.columns * 0.95 || 100;
// Parse markdown to terminal-friendly format (same as implementation details)
const markdownResult = marked(cleanStrategy);
const formattedStrategy =
typeof markdownResult === 'string' ? markdownResult.trim() : cleanStrategy; // Fallback to original if Promise
console.log(
boxen(chalk.white.bold('Test Strategy:') + '\n\n' + formattedStrategy, {
padding: 1,
borderStyle: 'round',
borderColor: 'cyan', // Changed to cyan to match implementation details
width: terminalWidth
})
);
}
/**
* Display subtasks in a table format
*/
export function displaySubtasks(
subtasks: Array<{
id: string | number;
title: string;
status: any;
description?: string;
dependencies?: string[];
}>
): void {
const terminalWidth = process.stdout.columns * 0.95 || 100;
// Display subtasks header
console.log(
boxen(chalk.magenta.bold('Subtasks'), {
padding: { top: 0, bottom: 0, left: 1, right: 1 },
borderColor: 'magenta',
borderStyle: 'round',
margin: { top: 1, bottom: 0 }
})
);
// Create subtasks table
const table = new Table({
head: [
chalk.magenta.bold('ID'),
chalk.magenta.bold('Status'),
chalk.magenta.bold('Title'),
chalk.magenta.bold('Deps')
],
style: {
head: [],
border: ['grey']
},
colWidths: [
Math.floor(terminalWidth * 0.1),
Math.floor(terminalWidth * 0.15),
Math.floor(terminalWidth * 0.6),
Math.floor(terminalWidth * 0.15)
],
wordWrap: true
});
subtasks.forEach((subtask) => {
const subtaskId = String(subtask.id);
// Format dependencies
const deps =
subtask.dependencies && subtask.dependencies.length > 0
? subtask.dependencies.join(', ')
: 'None';
table.push([
subtaskId,
getStatusWithColor(subtask.status),
subtask.title,
deps
]);
});
console.log(table.toString());
}
/**
* Display suggested actions
*/
export function displaySuggestedActions(taskId: string | number): void {
console.log(
boxen(
chalk.white.bold('Suggested Actions:') +
'\n\n' +
`${chalk.cyan('1.')} Run ${chalk.yellow(`task-master set-status --id=${taskId} --status=in-progress`)} to start working\n` +
`${chalk.cyan('2.')} Run ${chalk.yellow(`task-master expand --id=${taskId}`)} to break down into subtasks\n` +
`${chalk.cyan('3.')} Run ${chalk.yellow(`task-master update-task --id=${taskId} --prompt="..."`)} to update details`,
{
padding: 1,
margin: { top: 1 },
borderStyle: 'round',
borderColor: 'green',
width: process.stdout.columns * 0.95 || 100
}
)
);
}
/**
* Display complete task details - used by both show and start commands
*/
export function displayTaskDetails(
task: Task,
options?: {
statusFilter?: string;
showSuggestedActions?: boolean;
customHeader?: string;
headerColor?: string;
}
): void {
const {
statusFilter,
showSuggestedActions = false,
customHeader,
headerColor = 'blue'
} = options || {};
// Display header - either custom or default
if (customHeader) {
console.log(
boxen(chalk.white.bold(customHeader), {
padding: { top: 0, bottom: 0, left: 1, right: 1 },
borderColor: headerColor,
borderStyle: 'round',
margin: { top: 1 }
})
);
} else {
displayTaskHeader(task.id, task.title);
}
// Display task properties in table format
displayTaskProperties(task);
// Display implementation details if available
if (task.details) {
console.log(); // Empty line for spacing
displayImplementationDetails(task.details);
}
// Display test strategy if available
if ('testStrategy' in task && task.testStrategy) {
console.log(); // Empty line for spacing
displayTestStrategy(task.testStrategy as string);
}
// Display subtasks if available
if (task.subtasks && task.subtasks.length > 0) {
// Filter subtasks by status if provided
const filteredSubtasks = statusFilter
? task.subtasks.filter((sub) => sub.status === statusFilter)
: task.subtasks;
if (filteredSubtasks.length === 0 && statusFilter) {
console.log(); // Empty line for spacing
console.log(chalk.gray(` No subtasks with status '${statusFilter}'`));
} else if (filteredSubtasks.length > 0) {
console.log(); // Empty line for spacing
displaySubtasks(filteredSubtasks);
}
}
// Display suggested actions if requested
if (showSuggestedActions) {
console.log(); // Empty line for spacing
displaySuggestedActions(task.id);
}
}
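// Usage sketch (illustrative only): how displayTaskDetails might be invoked by a
// show/start command. The task literal below is hypothetical and is cast because
// the full Task type from '@tm/core/types' likely requires more fields than shown.
const exampleTask = {
	id: 3,
	title: 'Implement auth refresh token',
	status: 'in-progress',
	priority: 'high',
	dependencies: [1, 2],
	description: 'Refresh expired access tokens transparently.',
	details: 'Reuse the existing HTTP client.\nRetry once on a 401 response.',
	subtasks: []
} as unknown as Task;

displayTaskDetails(exampleTask, {
	showSuggestedActions: true,
	customHeader: 'Next Task To Work On',
	headerColor: 'green'
});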

apps/cli/src/ui/index.ts

@@ -0,0 +1,9 @@
/**
* @fileoverview Main UI exports
*/
// Export all components
export * from './components/index.js';
// Re-export existing UI utilities
export * from '../utils/ui.js';


@@ -0,0 +1,377 @@
/**
* @fileoverview Auto-update utilities for task-master-ai CLI
*/
import { spawn } from 'child_process';
import https from 'https';
import chalk from 'chalk';
import ora from 'ora';
import boxen from 'boxen';
export interface UpdateInfo {
currentVersion: string;
latestVersion: string;
needsUpdate: boolean;
highlights?: string[];
}
/**
* Get current version from build-time injected environment variable
*/
function getCurrentVersion(): string {
// Version is injected at build time via TM_PUBLIC_VERSION
const version = process.env.TM_PUBLIC_VERSION;
if (version && version !== 'unknown') {
return version;
}
// Fallback for development or if injection failed
console.warn('Could not read version from TM_PUBLIC_VERSION, using fallback');
return '0.0.0';
}
/**
* Compare semantic versions with proper pre-release handling
* @param v1 - First version
* @param v2 - Second version
* @returns -1 if v1 < v2, 0 if v1 = v2, 1 if v1 > v2
*/
export function compareVersions(v1: string, v2: string): number {
const toParts = (v: string) => {
const [core, pre = ''] = v.split('-', 2);
const nums = core.split('.').map((n) => Number.parseInt(n, 10) || 0);
return { nums, pre };
};
const a = toParts(v1);
const b = toParts(v2);
const len = Math.max(a.nums.length, b.nums.length);
// Compare numeric parts
for (let i = 0; i < len; i++) {
const d = (a.nums[i] || 0) - (b.nums[i] || 0);
if (d !== 0) return d < 0 ? -1 : 1;
}
// Handle pre-release comparison
if (a.pre && !b.pre) return -1; // prerelease < release
if (!a.pre && b.pre) return 1; // release > prerelease
if (a.pre === b.pre) return 0; // same or both empty
return a.pre < b.pre ? -1 : 1; // basic prerelease tie-break
}
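// Illustrative results for the ordering above (a sketch, not a test suite):
//   compareVersions('0.27.3', '0.28.0')         -> -1  (older < newer)
//   compareVersions('1.2', '1.2.0')             ->  0  (missing parts count as 0)
//   compareVersions('1.0.0-rc.1', '1.0.0')      -> -1  (prerelease < release)
//   compareVersions('1.0.0-rc.1', '1.0.0-rc.2') -> -1  (lexicographic tie-break)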
/**
* Fetch CHANGELOG.md from GitHub and extract highlights for a specific version
*/
async function fetchChangelogHighlights(version: string): Promise<string[]> {
return new Promise((resolve) => {
const options = {
hostname: 'raw.githubusercontent.com',
path: '/eyaltoledano/claude-task-master/main/CHANGELOG.md',
method: 'GET',
headers: {
'User-Agent': `task-master-ai/${version}`
}
};
const req = https.request(options, (res) => {
let data = '';
res.on('data', (chunk) => {
data += chunk;
});
res.on('end', () => {
try {
if (res.statusCode !== 200) {
resolve([]);
return;
}
const highlights = parseChangelogHighlights(data, version);
resolve(highlights);
} catch (error) {
resolve([]);
}
});
});
req.on('error', () => {
resolve([]);
});
req.setTimeout(3000, () => {
req.destroy();
resolve([]);
});
req.end();
});
}
/**
* Parse changelog markdown to extract Minor Changes for a specific version
* @internal - Exported for testing purposes only
*/
export function parseChangelogHighlights(
changelog: string,
version: string
): string[] {
try {
// Validate version format (basic semver pattern) to prevent ReDoS
if (!/^\d+\.\d+\.\d+(-[a-zA-Z0-9.-]+)?$/.test(version)) {
return [];
}
// Find the version section
const versionRegex = new RegExp(
`## ${version.replace(/\./g, '\\.')}\\s*\\n`,
'i'
);
const versionMatch = changelog.match(versionRegex);
if (!versionMatch) {
return [];
}
// Extract content from this version to the next version heading
const startIdx = versionMatch.index! + versionMatch[0].length;
const nextVersionIdx = changelog.indexOf('\n## ', startIdx);
const versionContent =
nextVersionIdx > 0
? changelog.slice(startIdx, nextVersionIdx)
: changelog.slice(startIdx);
// Find Minor Changes section
const minorChangesMatch = versionContent.match(
/### Minor Changes\s*\n([\s\S]*?)(?=\n###|\n##|$)/i
);
if (!minorChangesMatch) {
return [];
}
const minorChangesContent = minorChangesMatch[1];
const highlights: string[] = [];
// Extract all bullet points (lines starting with -)
// Format: - [#PR](...) Thanks [@author]! - Description
const bulletRegex = /^-\s+\[#\d+\][^\n]*?!\s+-\s+(.+?)$/gm;
let match;
while ((match = bulletRegex.exec(minorChangesContent)) !== null) {
const desc = match[1].trim();
highlights.push(desc);
}
return highlights;
} catch (error) {
return [];
}
}
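// Illustrative input/output for the parser above, using hypothetical changelog text
// (real CHANGELOG entries may differ in wording):
//
//   ## 1.2.3
//
//   ### Minor Changes
//
//   - [#100](https://example.com/pr/100) Thanks [@someone]! - Add a sample feature
//   - [#101](https://example.com/pr/101) Thanks [@someone]! - Improve another thing
//
// parseChangelogHighlights(changelogText, '1.2.3')
//   -> ['Add a sample feature', 'Improve another thing']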
/**
* Check for newer version of task-master-ai
*/
export async function checkForUpdate(
currentVersionOverride?: string
): Promise<UpdateInfo> {
const currentVersion = currentVersionOverride || getCurrentVersion();
return new Promise((resolve) => {
const options = {
hostname: 'registry.npmjs.org',
path: '/task-master-ai',
method: 'GET',
headers: {
Accept: 'application/vnd.npm.install-v1+json',
'User-Agent': `task-master-ai/${currentVersion}`
}
};
const req = https.request(options, (res) => {
let data = '';
res.on('data', (chunk) => {
data += chunk;
});
res.on('end', async () => {
try {
if (res.statusCode !== 200)
throw new Error(`npm registry status ${res.statusCode}`);
const npmData = JSON.parse(data);
const latestVersion = npmData['dist-tags']?.latest || currentVersion;
const needsUpdate =
compareVersions(currentVersion, latestVersion) < 0;
// Fetch highlights if update is needed
let highlights: string[] | undefined;
if (needsUpdate) {
highlights = await fetchChangelogHighlights(latestVersion);
}
resolve({
currentVersion,
latestVersion,
needsUpdate,
highlights
});
} catch (error) {
resolve({
currentVersion,
latestVersion: currentVersion,
needsUpdate: false
});
}
});
});
req.on('error', () => {
resolve({
currentVersion,
latestVersion: currentVersion,
needsUpdate: false
});
});
req.setTimeout(3000, () => {
req.destroy();
resolve({
currentVersion,
latestVersion: currentVersion,
needsUpdate: false
});
});
req.end();
});
}
/**
* Display upgrade notification message
*/
export function displayUpgradeNotification(
currentVersion: string,
latestVersion: string,
highlights?: string[]
) {
let content = `${chalk.blue.bold('Update Available!')} ${chalk.dim(currentVersion)} → ${chalk.green(latestVersion)}`;
if (highlights && highlights.length > 0) {
content += '\n\n' + chalk.bold("What's New:");
for (const highlight of highlights) {
content += '\n' + chalk.cyan('• ') + highlight;
}
content += '\n\n' + 'Auto-updating to the latest version...';
} else {
content +=
'\n\n' +
'Auto-updating to the latest version with new features and bug fixes...';
}
const message = boxen(content, {
padding: 1,
margin: { top: 1, bottom: 1 },
borderColor: 'yellow',
borderStyle: 'round'
});
console.log(message);
}
/**
* Automatically update task-master-ai to the latest version
*/
export async function performAutoUpdate(
latestVersion: string
): Promise<boolean> {
if (
process.env.TASKMASTER_SKIP_AUTO_UPDATE === '1' ||
process.env.CI ||
process.env.NODE_ENV === 'test'
) {
const reason =
process.env.TASKMASTER_SKIP_AUTO_UPDATE === '1'
? 'TASKMASTER_SKIP_AUTO_UPDATE=1'
: process.env.CI
? 'CI environment'
: 'NODE_ENV=test';
console.log(chalk.dim(`Skipping auto-update (${reason})`));
return false;
}
const spinner = ora({
text: chalk.blue(
`Updating task-master-ai to version ${chalk.green(latestVersion)}`
),
spinner: 'dots',
color: 'blue'
}).start();
return new Promise((resolve) => {
const updateProcess = spawn(
'npm',
[
'install',
'-g',
`task-master-ai@${latestVersion}`,
'--no-fund',
'--no-audit',
'--loglevel=warn'
],
{
stdio: ['ignore', 'pipe', 'pipe']
}
);
let errorOutput = '';
updateProcess.stdout.on('data', () => {
// Update spinner text with progress
spinner.text = chalk.blue(
`Installing task-master-ai@${latestVersion}...`
);
});
updateProcess.stderr.on('data', (data) => {
errorOutput += data.toString();
});
updateProcess.on('close', (code) => {
if (code === 0) {
spinner.succeed(
chalk.green(
`Successfully updated to version ${chalk.bold(latestVersion)}`
)
);
console.log(
chalk.dim('Please restart your command to use the new version.')
);
resolve(true);
} else {
spinner.fail(chalk.red('Auto-update failed'));
console.log(
chalk.cyan(
`Please run manually: npm install -g task-master-ai@${latestVersion}`
)
);
if (errorOutput) {
console.log(chalk.dim(`Error: ${errorOutput.trim()}`));
}
resolve(false);
}
});
updateProcess.on('error', (error) => {
spinner.fail(chalk.red('Auto-update failed'));
console.log(chalk.red('Error:'), error.message);
console.log(
chalk.cyan(
`Please run manually: npm install -g task-master-ai@${latestVersion}`
)
);
resolve(false);
});
});
}
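// Wiring sketch (illustrative only): one way a CLI entry point could combine the
// helpers above. The function name is hypothetical, and flag handling plus error
// reporting are intentionally omitted.
export async function runAutoUpdateCheck(): Promise<void> {
	const info = await checkForUpdate();
	if (!info.needsUpdate) return;
	displayUpgradeNotification(
		info.currentVersion,
		info.latestVersion,
		info.highlights
	);
	await performAutoUpdate(info.latestVersion);
}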

apps/cli/src/utils/ui.ts

@@ -0,0 +1,393 @@
/**
* @fileoverview UI utilities for Task Master CLI
* Provides formatting, display, and visual components for the command line interface
*/
import chalk from 'chalk';
import boxen from 'boxen';
import Table from 'cli-table3';
import type { Task, TaskStatus, TaskPriority } from '@tm/core/types';
/**
* Get colored status display with ASCII icons (matches scripts/modules/ui.js style)
*/
export function getStatusWithColor(
status: TaskStatus,
forTable: boolean = false
): string {
const statusConfig = {
done: {
color: chalk.green,
icon: '✓',
tableIcon: '✓'
},
pending: {
color: chalk.yellow,
icon: '○',
tableIcon: '○'
},
'in-progress': {
color: chalk.hex('#FFA500'),
icon: '▶',
tableIcon: '▶'
},
deferred: {
color: chalk.gray,
icon: 'x',
tableIcon: 'x'
},
review: {
color: chalk.magenta,
icon: '?',
tableIcon: '?'
},
cancelled: {
color: chalk.gray,
icon: 'x',
tableIcon: 'x'
},
blocked: {
color: chalk.red,
icon: '!',
tableIcon: '!'
},
completed: {
color: chalk.green,
icon: '✓',
tableIcon: '✓'
}
};
const config = statusConfig[status] || {
color: chalk.red,
icon: 'X',
tableIcon: 'X'
};
const icon = forTable ? config.tableIcon : config.icon;
return config.color(`${icon} ${status}`);
}
/**
* Get colored priority display
*/
export function getPriorityWithColor(priority: TaskPriority): string {
const priorityColors: Record<TaskPriority, (text: string) => string> = {
critical: chalk.red.bold,
high: chalk.red,
medium: chalk.yellow,
low: chalk.gray
};
const colorFn = priorityColors[priority] || chalk.white;
return colorFn(priority);
}
/**
* Get complexity color and label based on score thresholds
*/
function getComplexityLevel(score: number): {
color: (text: string) => string;
label: string;
} {
if (score >= 7) {
return { color: chalk.hex('#CC0000'), label: 'High' };
} else if (score >= 4) {
return { color: chalk.hex('#FF8800'), label: 'Medium' };
} else {
return { color: chalk.green, label: 'Low' };
}
}
/**
* Get colored complexity display with dot indicator (simple format)
*/
export function getComplexityWithColor(complexity: number | string): string {
const score =
typeof complexity === 'string' ? parseInt(complexity, 10) : complexity;
if (isNaN(score)) {
return chalk.gray('N/A');
}
const { color } = getComplexityLevel(score);
return color(`${score}`);
}
/**
* Get colored complexity display with /10 format (for dashboards)
*/
export function getComplexityWithScore(complexity: number | undefined): string {
if (typeof complexity !== 'number') {
return chalk.gray('N/A');
}
const { color, label } = getComplexityLevel(complexity);
return color(`${complexity}/10 (${label})`);
}
/**
* Truncate text to specified length
*/
export function truncate(text: string, maxLength: number): string {
if (text.length <= maxLength) {
return text;
}
return text.substring(0, maxLength - 3) + '...';
}
/**
* Create a progress bar
*/
export function createProgressBar(
completed: number,
total: number,
width: number = 30
): string {
if (total === 0) {
return chalk.gray('No tasks');
}
const percentage = Math.round((completed / total) * 100);
const filled = Math.round((completed / total) * width);
const empty = width - filled;
const bar = chalk.green('█').repeat(filled) + chalk.gray('░').repeat(empty);
return `${bar} ${chalk.cyan(`${percentage}%`)} (${completed}/${total})`;
}
/**
* Display a fancy banner
*/
export function displayBanner(title: string = 'Task Master'): void {
console.log(
boxen(chalk.white.bold(title), {
padding: 1,
margin: { top: 1, bottom: 1 },
borderStyle: 'round',
borderColor: 'blue',
textAlignment: 'center'
})
);
}
/**
* Display an error message (matches scripts/modules/ui.js style)
*/
export function displayError(message: string, details?: string): void {
console.error(
boxen(
chalk.red.bold('X Error: ') +
chalk.white(message) +
(details ? '\n\n' + chalk.gray(details) : ''),
{
padding: 1,
borderStyle: 'round',
borderColor: 'red'
}
)
);
}
/**
* Display a success message
*/
export function displaySuccess(message: string): void {
console.log(
boxen(
chalk.green.bold(String.fromCharCode(8730) + ' ') + chalk.white(message),
{
padding: 1,
borderStyle: 'round',
borderColor: 'green'
}
)
);
}
/**
* Display a warning message
*/
export function displayWarning(message: string): void {
console.log(
boxen(chalk.yellow.bold('⚠ ') + chalk.white(message), {
padding: 1,
borderStyle: 'round',
borderColor: 'yellow'
})
);
}
/**
* Display info message
*/
export function displayInfo(message: string): void {
console.log(
boxen(chalk.blue.bold('i ') + chalk.white(message), {
padding: 1,
borderStyle: 'round',
borderColor: 'blue'
})
);
}
/**
* Format dependencies with their status
*/
export function formatDependenciesWithStatus(
dependencies: string[] | number[],
tasks: Task[]
): string {
if (!dependencies || dependencies.length === 0) {
return chalk.gray('none');
}
const taskMap = new Map(tasks.map((t) => [t.id.toString(), t]));
return dependencies
.map((depId) => {
const task = taskMap.get(depId.toString());
if (!task) {
return chalk.red(`${depId} (not found)`);
}
const statusIcon =
task.status === 'done'
? '✓'
: task.status === 'in-progress'
? '►'
: '○';
return `${depId}${statusIcon}`;
})
.join(', ');
}
/**
* Create a task table for display
*/
export function createTaskTable(
tasks: Task[],
options?: {
showSubtasks?: boolean;
showComplexity?: boolean;
showDependencies?: boolean;
}
): string {
const {
showSubtasks = false,
showComplexity = false,
showDependencies = true
} = options || {};
// Calculate dynamic column widths based on terminal width
const terminalWidth = process.stdout.columns * 0.9 || 100;
// Adjust column widths to better match the original layout
const baseColWidths = showComplexity
? [
Math.floor(terminalWidth * 0.1),
Math.floor(terminalWidth * 0.4),
Math.floor(terminalWidth * 0.15),
Math.floor(terminalWidth * 0.1),
Math.floor(terminalWidth * 0.2),
Math.floor(terminalWidth * 0.1)
] // ID, Title, Status, Priority, Dependencies, Complexity
: [
Math.floor(terminalWidth * 0.08),
Math.floor(terminalWidth * 0.4),
Math.floor(terminalWidth * 0.18),
Math.floor(terminalWidth * 0.12),
Math.floor(terminalWidth * 0.2)
]; // ID, Title, Status, Priority, Dependencies
const headers = [
chalk.blue.bold('ID'),
chalk.blue.bold('Title'),
chalk.blue.bold('Status'),
chalk.blue.bold('Priority')
];
const colWidths = baseColWidths.slice(0, 4);
if (showDependencies) {
headers.push(chalk.blue.bold('Dependencies'));
colWidths.push(baseColWidths[4]);
}
if (showComplexity) {
headers.push(chalk.blue.bold('Complexity'));
colWidths.push(baseColWidths[5] || 12);
}
const table = new Table({
head: headers,
style: { head: [], border: [] },
colWidths,
wordWrap: true
});
tasks.forEach((task) => {
const row: string[] = [
chalk.cyan(task.id.toString()),
truncate(task.title, colWidths[1] - 3),
getStatusWithColor(task.status, true), // Use table version
getPriorityWithColor(task.priority)
];
if (showDependencies) {
// For table display, show simple format without status icons
if (!task.dependencies || task.dependencies.length === 0) {
row.push(chalk.gray('None'));
} else {
row.push(
chalk.cyan(task.dependencies.map((d) => String(d)).join(', '))
);
}
}
if (showComplexity) {
// Show complexity score from report if available
if (typeof task.complexity === 'number') {
row.push(getComplexityWithColor(task.complexity));
} else {
row.push(chalk.gray('N/A'));
}
}
table.push(row);
// Add subtasks if requested
if (showSubtasks && task.subtasks && task.subtasks.length > 0) {
task.subtasks.forEach((subtask) => {
const subRow: string[] = [
chalk.gray(` └─ ${subtask.id}`),
chalk.gray(truncate(subtask.title, colWidths[1] - 6)),
chalk.gray(getStatusWithColor(subtask.status, true)),
chalk.gray(subtask.priority || 'medium')
];
if (showDependencies) {
subRow.push(
chalk.gray(
subtask.dependencies && subtask.dependencies.length > 0
? subtask.dependencies.map((dep) => String(dep)).join(', ')
: 'None'
)
);
}
if (showComplexity) {
const complexityDisplay =
typeof subtask.complexity === 'number'
? getComplexityWithColor(subtask.complexity)
: '--';
subRow.push(chalk.gray(complexityDisplay));
}
table.push(subRow);
});
}
});
return table.toString();
}
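// Usage sketch (illustrative only): rendering a small table. The task literals are
// hypothetical and cast because the full Task type from '@tm/core/types' likely
// requires more fields than shown here.
const demoTasks = [
	{ id: 1, title: 'Parse PRD', status: 'done', priority: 'high', dependencies: [] },
	{ id: 2, title: 'Generate tasks', status: 'pending', priority: 'medium', dependencies: [1] }
] as unknown as Task[];

console.log(
	createTaskTable(demoTasks, { showDependencies: true, showComplexity: true })
);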

apps/cli/tsconfig.json

@@ -0,0 +1,36 @@
{
"compilerOptions": {
"target": "ES2022",
"module": "NodeNext",
"lib": ["ES2022"],
"declaration": true,
"declarationMap": true,
"sourceMap": true,
"outDir": "./dist",
"baseUrl": ".",
"rootDir": "./src",
"strict": true,
"noImplicitAny": true,
"strictNullChecks": true,
"strictFunctionTypes": true,
"strictBindCallApply": true,
"strictPropertyInitialization": true,
"noImplicitThis": true,
"alwaysStrict": true,
"noUnusedLocals": true,
"noUnusedParameters": true,
"noImplicitReturns": true,
"noFallthroughCasesInSwitch": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true,
"moduleResolution": "NodeNext",
"moduleDetection": "force",
"types": ["node"],
"resolveJsonModule": true,
"isolatedModules": true,
"allowImportingTsExtensions": false
},
"include": ["src/**/*"],
"exclude": ["node_modules", "dist", "tests", "**/*.test.ts", "**/*.spec.ts"]
}

apps/docs/CHANGELOG.md

@@ -0,0 +1,13 @@
# docs
## 0.0.6
## 0.0.5
## 0.0.4
## 0.0.3
## 0.0.2
## 0.0.1

apps/docs/README.md

@@ -0,0 +1,24 @@
# Task Master Documentation
Welcome to the Task Master documentation. This documentation site provides comprehensive guides for getting started with Task Master.
## Getting Started
- [Quick Start Guide](/getting-started/quick-start) - Complete setup and first-time usage guide
- [Requirements](/getting-started/quick-start/requirements) - What you need to get started
- [Installation](/getting-started/quick-start/installation) - How to install Task Master
## Core Capabilities
- [MCP Tools](/capabilities/mcp) - Model Context Protocol integration
- [CLI Commands](/capabilities/cli-root-commands) - Command line interface reference
- [Task Structure](/capabilities/task-structure) - Understanding tasks and subtasks
## Best Practices
- [Advanced Configuration](/best-practices/configuration-advanced) - Detailed configuration options
- [Advanced Tasks](/best-practices/advanced-tasks) - Working with complex task structures
## Need More Help?
If you can't find what you're looking for in these docs, please check the root README.md or visit our [GitHub repository](https://github.com/eyaltoledano/claude-task-master).


@@ -0,0 +1,114 @@
---
title: "Installation(2)"
description: "This guide walks you through setting up Task Master in your development environment."
---
## Initial Setup
<Tip>
MCP (Model Context Protocol) provides the easiest way to get started with Task Master directly in your editor.
</Tip>
<AccordionGroup>
<Accordion title="Option 1: Using MCP (Recommended)" icon="sparkles">
<Steps>
<Step title="Add the MCP config to your editor">
<Link href="https://cursor.sh">Cursor</Link> is recommended, but Task Master works with other editors as well
```json
{
"mcpServers": {
"taskmaster-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"MODEL": "claude-3-7-sonnet-20250219",
"PERPLEXITY_MODEL": "sonar-pro",
"MAX_TOKENS": 128000,
"TEMPERATURE": 0.2,
"DEFAULT_SUBTASKS": 5,
"DEFAULT_PRIORITY": "medium"
}
}
}
}
```
</Step>
<Step title="Enable the MCP in your editor settings">
</Step>
<Step title="Prompt the AI to initialize Task Master">
> "Can you please initialize taskmaster-ai into my project?"
**The AI will:**
1. Create the necessary project structure
2. Set up initial configuration files
3. Guide you through the rest of the process
Once set up, place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`) and **use natural language commands** to interact with Task Master:
> "Can you parse my PRD at scripts/prd.txt?"
>
> "What's the next task I should work on?"
>
> "Can you help me implement task 3?"
</Step>
</Steps>
</Accordion>
<Accordion title="Option 2: Manual Installation">
If you prefer to use the command line interface directly:
<Steps>
<Step title="Install">
<CodeGroup>
```bash Global
npm install -g task-master-ai
```
```bash Local
npm install task-master-ai
```
</CodeGroup>
</Step>
<Step title="Initialize a new project">
<CodeGroup>
```bash Global
task-master init
```
```bash Local
npx task-master-init
```
</CodeGroup>
</Step>
</Steps>
This will prompt you for project details and set up a new project with the necessary files and structure.
</Accordion>
</AccordionGroup>
## Common Commands
<Tip>
After setting up Task Master, you can use these commands (either via AI prompts or CLI)
</Tip>
```bash
# Parse a PRD and generate tasks
task-master parse-prd your-prd.txt
# List all tasks
task-master list
# Show the next task to work on
task-master next
# Generate task files
task-master generate
```


@@ -0,0 +1,263 @@
---
title: "AI Client Utilities for MCP Tools"
description: "This document provides examples of how to use the new AI client utilities with AsyncOperationManager in MCP tools."
---
## Examples
<AccordionGroup>
<Accordion title="Basic Usage with Direct Functions">
```javascript
// In your direct function implementation:
import {
getAnthropicClientForMCP,
getModelConfig,
handleClaudeError
} from '../utils/ai-client-utils.js';
export async function someAiOperationDirect(args, log, context) {
try {
// Initialize Anthropic client with session from context
const client = getAnthropicClientForMCP(context.session, log);
// Get model configuration with defaults or session overrides
const modelConfig = getModelConfig(context.session);
// Make API call with proper error handling
try {
const response = await client.messages.create({
model: modelConfig.model,
max_tokens: modelConfig.maxTokens,
temperature: modelConfig.temperature,
messages: [{ role: 'user', content: 'Your prompt here' }]
});
return {
success: true,
data: response
};
} catch (apiError) {
// Use helper to get user-friendly error message
const friendlyMessage = handleClaudeError(apiError);
return {
success: false,
error: {
code: 'AI_API_ERROR',
message: friendlyMessage
}
};
}
} catch (error) {
// Handle client initialization errors
return {
success: false,
error: {
code: 'AI_CLIENT_ERROR',
message: error.message
}
};
}
}
```
</Accordion>
<Accordion title="Integration with AsyncOperationManager">
```javascript
// In your MCP tool implementation:
import {
AsyncOperationManager,
StatusCodes
} from '../../utils/async-operation-manager.js';
import { someAiOperationDirect } from '../../core/direct-functions/some-ai-operation.js';
export async function someAiOperation(args, context) {
const { session, mcpLog } = context;
const log = mcpLog || console;
try {
// Create operation description
const operationDescription = `AI operation: ${args.someParam}`;
// Start async operation
const operation = AsyncOperationManager.createOperation(
operationDescription,
async (reportProgress) => {
try {
// Initial progress report
reportProgress({
progress: 0,
status: 'Starting AI operation...'
});
// Call direct function with session and progress reporting
const result = await someAiOperationDirect(args, log, {
reportProgress,
mcpLog: log,
session
});
// Final progress update
reportProgress({
progress: 100,
status: result.success ? 'Operation completed' : 'Operation failed',
result: result.data,
error: result.error
});
return result;
} catch (error) {
// Handle errors in the operation
reportProgress({
progress: 100,
status: 'Operation failed',
error: {
message: error.message,
code: error.code || 'OPERATION_FAILED'
}
});
throw error;
}
}
);
// Return immediate response with operation ID
return {
status: StatusCodes.ACCEPTED,
body: {
success: true,
message: 'Operation started',
operationId: operation.id
}
};
} catch (error) {
// Handle errors in the MCP tool
log.error(`Error in someAiOperation: ${error.message}`);
return {
status: StatusCodes.INTERNAL_SERVER_ERROR,
body: {
success: false,
error: {
code: 'OPERATION_FAILED',
message: error.message
}
}
};
}
}
```
</Accordion>
<Accordion title="Using Research Capabilities with Perplexity">
```javascript
// In your direct function:
import {
getPerplexityClientForMCP,
getBestAvailableAIModel
} from '../utils/ai-client-utils.js';
export async function researchOperationDirect(args, log, context) {
try {
// Get the best AI model for this operation based on needs
const { type, client } = await getBestAvailableAIModel(
context.session,
{ requiresResearch: true },
log
);
// Report which model we're using
if (context.reportProgress) {
await context.reportProgress({
progress: 10,
status: `Using ${type} model for research...`
});
}
// Make API call based on the model type
if (type === 'perplexity') {
// Call Perplexity
const response = await client.chat.completions.create({
model: context.session?.env?.PERPLEXITY_MODEL || 'sonar-medium-online',
messages: [{ role: 'user', content: args.researchQuery }],
temperature: 0.1
});
return {
success: true,
data: response.choices[0].message.content
};
} else {
// Call Claude as fallback
// (Implementation depends on specific needs)
// ...
}
} catch (error) {
// Handle errors
return {
success: false,
error: {
code: 'RESEARCH_ERROR',
message: error.message
}
};
}
}
```
</Accordion>
<Accordion title="Model Configuration Override">
```javascript
// In your direct function:
import { getModelConfig } from '../utils/ai-client-utils.js';
// Using custom defaults for a specific operation
const operationDefaults = {
model: 'claude-3-haiku-20240307', // Faster, smaller model
maxTokens: 1000, // Lower token limit
temperature: 0.2 // Lower temperature for more deterministic output
};
// Get model config with operation-specific defaults
const modelConfig = getModelConfig(context.session, operationDefaults);
// Now use modelConfig in your API calls
const response = await client.messages.create({
model: modelConfig.model,
max_tokens: modelConfig.maxTokens,
temperature: modelConfig.temperature
// Other parameters...
});
```
</Accordion>
</AccordionGroup>
## Best Practices
<AccordionGroup>
<Accordion title="Error Handling">
- Always use try/catch blocks around both client initialization and API calls
- Use `handleClaudeError` to provide user-friendly error messages
- Return standardized error objects with code and message (see the combined sketch after this accordion group)
</Accordion>
<Accordion title="Progress Reporting">
- Report progress at key points (starting, processing, completing)
- Include meaningful status messages
- Include error details in progress reports when failures occur
</Accordion>
<Accordion title="Session Handling">
- Always pass the session from the context to the AI client getters
- Use `getModelConfig` to respect user settings from session
</Accordion>
<Accordion title="Model Selection">
- Use `getBestAvailableAIModel` when you need to select between different models
- Set `requiresResearch: true` when you need Perplexity capabilities
</Accordion>
<Accordion title="AsyncOperationManager Integration">
- Create descriptive operation names
- Handle all errors within the operation function
- Return standardized results from direct functions
- Return immediate responses with operation IDs
</Accordion>
</AccordionGroup>
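Taken together, these practices suggest a direct function shaped roughly like the sketch below. It reuses the utilities shown in the examples above (`getAnthropicClientForMCP`, `getModelConfig`, `handleClaudeError`); the function name, parameters, and prompt are hypothetical, so treat this as an outline rather than a drop-in implementation.
```typescript
// Minimal sketch combining the practices above; names and prompt are illustrative.
import {
	getAnthropicClientForMCP,
	getModelConfig,
	handleClaudeError
} from '../utils/ai-client-utils.js';

export async function summarizeTaskDirect(
	args: { text: string },
	log: any,
	context: any
) {
	try {
		// Session handling: pass the session from context to the client getters.
		const client = getAnthropicClientForMCP(context.session, log);
		const modelConfig = getModelConfig(context.session);

		// Progress reporting: report at key points with meaningful status.
		await context.reportProgress?.({ progress: 0, status: 'Starting summary...' });

		try {
			const response = await client.messages.create({
				model: modelConfig.model,
				max_tokens: modelConfig.maxTokens,
				temperature: modelConfig.temperature,
				messages: [{ role: 'user', content: `Summarize: ${args.text}` }]
			});
			await context.reportProgress?.({ progress: 100, status: 'Summary completed' });
			// Standardized success result.
			return { success: true, data: response };
		} catch (apiError) {
			// Error handling: translate API errors into user-friendly messages.
			const friendlyMessage = handleClaudeError(apiError);
			await context.reportProgress?.({
				progress: 100,
				status: 'Summary failed',
				error: { code: 'AI_API_ERROR', message: friendlyMessage }
			});
			return {
				success: false,
				error: { code: 'AI_API_ERROR', message: friendlyMessage }
			};
		}
	} catch (error: any) {
		// Client initialization errors use the same standardized error shape.
		return {
			success: false,
			error: { code: 'AI_CLIENT_ERROR', message: error.message }
		};
	}
}
```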

Some files were not shown because too many files have changed in this diff.