# Compare commits: `v0.10.1...ralph/feat`

802 commits between `v0.10.1` and `ralph/feat`.
The 802 commits in this range run from `943356221c` (newest) to `7fef5ab488` (oldest); only the abbreviated SHA column of the per-commit table survived capture (the Author and Date columns are empty), so the table is omitted here.
**`.changeset/config.json`** (modified)

```diff
@@ -2,13 +2,16 @@
   "$schema": "https://unpkg.com/@changesets/config@3.1.1/schema.json",
   "changelog": [
     "@changesets/changelog-github",
-    { "repo": "eyaltoledano/claude-task-master" }
+    {
+      "repo": "eyaltoledano/claude-task-master"
+    }
   ],
   "commit": false,
   "fixed": [],
-  "linked": [],
   "access": "public",
   "baseBranch": "main",
-  "updateInternalDependencies": "patch",
-  "ignore": []
+  "ignore": [
+    "docs",
+    "@tm/claude-code-plugin"
+  ]
 }
```
**`.changeset/dirty-hairs-know.md`** (new file, 5 lines)

```md
---
"task-master-ai": patch
---

Improve auth token refresh flow
```
**`.changeset/fix-parent-directory-traversal.md`** (new file, 7 lines)

```md
---
"task-master-ai": patch
---

Enable Task Master commands to traverse parent directories to find project root from nested paths

Fixes #1301
```
**`.changeset/fix-warning-box-alignment.md`** (new file, 5 lines)

```md
---
"@tm/cli": patch
---

Fix warning message box width to match dashboard box width for consistent UI alignment
```
**`.changeset/light-owls-stay.md`** (new file, 35 lines)

````md
---
"task-master-ai": minor
---

Add configurable MCP tool loading to optimize LLM context usage

You can now control which Task Master MCP tools are loaded by setting the `TASK_MASTER_TOOLS` environment variable in your MCP configuration. This helps reduce context usage for LLMs by only loading the tools you need.

**Configuration Options:**

- `all` (default): Load all 36 tools
- `core` or `lean`: Load only 7 essential tools for daily development
  - Includes: `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
- `standard`: Load 15 commonly used tools (all core tools plus 8 more)
  - Additional tools: `initialize_project`, `analyze_project_complexity`, `expand_all`, `add_subtask`, `remove_task`, `generate`, `add_task`, `complexity_report`
- Custom list: Comma-separated tool names (e.g., `get_tasks,next_task,set_task_status`)

**Example .mcp.json configuration:**

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "TASK_MASTER_TOOLS": "standard",
        "ANTHROPIC_API_KEY": "your_key_here"
      }
    }
  }
}
```

For complete details on all available tools, configuration examples, and usage guidelines, see the [MCP Tools documentation](https://docs.task-master.dev/capabilities/mcp#configurable-tool-loading).
````
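The changeset above names the presets and their tool counts but not the resolution logic. As a rough sketch (the function name and structure are illustrative assumptions, not the package's actual implementation; only the preset contents come from the changeset text), resolving `TASK_MASTER_TOOLS` to a concrete tool list might look like:

```javascript
// Hypothetical sketch of TASK_MASTER_TOOLS resolution. Preset contents
// follow the changeset text; names and shape are illustrative only.
const CORE_TOOLS = [
  'get_tasks', 'next_task', 'get_task', 'set_task_status',
  'update_subtask', 'parse_prd', 'expand_task'
];
const STANDARD_TOOLS = [
  ...CORE_TOOLS,
  'initialize_project', 'analyze_project_complexity', 'expand_all',
  'add_subtask', 'remove_task', 'generate', 'add_task', 'complexity_report'
];

function resolveToolList(envValue, allTools) {
  const value = (envValue ?? 'all').trim().toLowerCase();
  if (value === '' || value === 'all') return allTools;        // default: everything
  if (value === 'core' || value === 'lean') return CORE_TOOLS; // 7 essentials
  if (value === 'standard') return STANDARD_TOOLS;             // 15 common tools
  // Anything else is treated as a comma-separated custom list.
  return value.split(',').map((name) => name.trim()).filter(Boolean);
}
```

A custom value such as `"get_tasks, next_task"` would resolve to exactly those two tools, which is the mechanism by which context usage shrinks: the MCP server simply never registers the rest.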
**`.changeset/metal-rocks-help.md`** (new file, 5 lines)

```md
---
"task-master-ai": minor
---

Improve next command to work with remote
```
**`.changeset/open-tips-notice.md`** (new file, 5 lines)

```md
---
"task-master-ai": minor
---

Add 4.5 haiku and sonnet to supported models for claude-code and anthropic ai providers
```
**`.claude-plugin/marketplace.json`** (new file, 32 lines)

```json
{
  "name": "taskmaster",
  "owner": {
    "name": "Hamster",
    "email": "ralph@tryhamster.com"
  },
  "metadata": {
    "description": "Official marketplace for Taskmaster AI - AI-powered task management for ambitious development",
    "version": "1.0.0"
  },
  "plugins": [
    {
      "name": "taskmaster",
      "source": "./packages/claude-code-plugin",
      "description": "AI-powered task management system for ambitious development workflows with intelligent orchestration, complexity analysis, and automated coordination",
      "author": {
        "name": "Hamster"
      },
      "homepage": "https://github.com/eyaltoledano/claude-task-master",
      "repository": "https://github.com/eyaltoledano/claude-task-master",
      "keywords": [
        "task-management",
        "ai",
        "workflow",
        "orchestration",
        "automation",
        "mcp"
      ],
      "category": "productivity"
    }
  ]
}
```
**`.claude/TM_COMMANDS_GUIDE.md`** (new file, 147 lines)

````md
# Task Master Commands for Claude Code

Complete guide to using Task Master through Claude Code's slash commands.

## Overview

All Task Master functionality is available through the `/project:tm/` namespace with natural language support and intelligent features.

## Quick Start

```bash
# Install Task Master
/project:tm/setup/quick-install

# Initialize project
/project:tm/init/quick

# Parse requirements
/project:tm/parse-prd requirements.md

# Start working
/project:tm/next
```

## Command Structure

Commands are organized hierarchically to match Task Master's CLI:

- Main commands at `/project:tm/[command]`
- Subcommands for specific operations `/project:tm/[command]/[subcommand]`
- Natural language arguments accepted throughout

## Complete Command Reference

### Setup & Configuration

- `/project:tm/setup/install` - Full installation guide
- `/project:tm/setup/quick-install` - One-line install
- `/project:tm/init` - Initialize project
- `/project:tm/init/quick` - Quick init with -y
- `/project:tm/models` - View AI config
- `/project:tm/models/setup` - Configure AI

### Task Generation

- `/project:tm/parse-prd` - Generate from PRD
- `/project:tm/parse-prd/with-research` - Enhanced parsing
- `/project:tm/generate` - Create task files

### Task Management

- `/project:tm/list` - List with natural language filters
- `/project:tm/list/with-subtasks` - Hierarchical view
- `/project:tm/list/by-status <status>` - Filter by status
- `/project:tm/show <id>` - Task details
- `/project:tm/add-task` - Create task
- `/project:tm/update` - Update tasks
- `/project:tm/remove-task` - Delete task

### Status Management

- `/project:tm/set-status/to-pending <id>`
- `/project:tm/set-status/to-in-progress <id>`
- `/project:tm/set-status/to-done <id>`
- `/project:tm/set-status/to-review <id>`
- `/project:tm/set-status/to-deferred <id>`
- `/project:tm/set-status/to-cancelled <id>`

### Task Analysis

- `/project:tm/analyze-complexity` - AI analysis
- `/project:tm/complexity-report` - View report
- `/project:tm/expand <id>` - Break down task
- `/project:tm/expand/all` - Expand all complex

### Dependencies

- `/project:tm/add-dependency` - Add dependency
- `/project:tm/remove-dependency` - Remove dependency
- `/project:tm/validate-dependencies` - Check issues
- `/project:tm/fix-dependencies` - Auto-fix

### Workflows

- `/project:tm/workflows/smart-flow` - Adaptive workflows
- `/project:tm/workflows/pipeline` - Chain commands
- `/project:tm/workflows/auto-implement` - AI implementation

### Utilities

- `/project:tm/status` - Project dashboard
- `/project:tm/next` - Next task recommendation
- `/project:tm/utils/analyze` - Project analysis
- `/project:tm/learn` - Interactive help

## Key Features

### Natural Language Support

All commands understand natural language:

```
/project:tm/list pending high priority
/project:tm/update mark 23 as done
/project:tm/add-task implement OAuth login
```

### Smart Context

Commands analyze project state and provide intelligent suggestions based on:

- Current task status
- Dependencies
- Team patterns
- Project phase

### Visual Enhancements

- Progress bars and indicators
- Status badges
- Organized displays
- Clear hierarchies

## Common Workflows

### Daily Development

```
/project:tm/workflows/smart-flow morning
/project:tm/next
/project:tm/set-status/to-in-progress <id>
/project:tm/set-status/to-done <id>
```

### Task Breakdown

```
/project:tm/show <id>
/project:tm/expand <id>
/project:tm/list/with-subtasks
```

### Sprint Planning

```
/project:tm/analyze-complexity
/project:tm/workflows/pipeline init → expand/all → status
```

## Migration from Old Commands

| Old | New |
|-----|-----|
| `/project:task-master:list` | `/project:tm/list` |
| `/project:task-master:complete` | `/project:tm/set-status/to-done` |
| `/project:workflows:auto-implement` | `/project:tm/workflows/auto-implement` |

## Tips

1. Use `/project:tm/` + Tab for command discovery
2. Natural language is supported everywhere
3. Commands provide smart defaults
4. Chain commands for automation
5. Check `/project:tm/learn` for interactive help
````
**`.claude/commands/dedupe.md`** (new file, 38 lines)

```md
---
allowed-tools: Bash(gh issue view:*), Bash(gh search:*), Bash(gh issue list:*), Bash(gh api:*), Bash(gh issue comment:*)
description: Find duplicate GitHub issues
---

Find up to 3 likely duplicate issues for a given GitHub issue.

To do this, follow these steps precisely:

1. Use an agent to check if the Github issue (a) is closed, (b) does not need to be deduped (eg. because it is broad product feedback without a specific solution, or positive feedback), or (c) already has a duplicates comment that you made earlier. If so, do not proceed.
2. Use an agent to view a Github issue, and ask the agent to return a summary of the issue
3. Then, launch 5 parallel agents to search Github for duplicates of this issue, using diverse keywords and search approaches, using the summary from #2
4. Next, feed the results from #2 and #3 into another agent, so that it can filter out false positives that are likely not actually duplicates of the original issue. If there are no duplicates remaining, do not proceed.
5. Finally, comment back on the issue with a list of up to three duplicate issues (or zero, if there are no likely duplicates)

Notes (be sure to tell this to your agents, too):

- Use `gh` to interact with Github, rather than web fetch
- Do not use other tools, beyond `gh` (eg. don't use other MCP servers, file edit, etc.)
- Make a todo list first
- For your comment, follow the following format precisely (assuming for this example that you found 3 suspected duplicates):

---

Found 3 possible duplicate issues:

1. <link to issue>
2. <link to issue>
3. <link to issue>

This issue will be automatically closed as a duplicate in 3 days.

- If your issue is a duplicate, please close it and 👍 the existing issue instead
- To prevent auto-closure, add a comment or 👎 this comment

🤖 Generated with \[Task Master Bot\]

---
```
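The dedupe command pins down an exact comment format for step 5. As an illustrative helper (this function is not part of the repo; it only renders the format the command text specifies), the comment body for a list of suspected duplicate links could be built like this:

```javascript
// Hypothetical helper that renders the dedupe command's required comment
// format for up to three suspected duplicate issue links.
function formatDedupeComment(links) {
  const top = links.slice(0, 3); // the command caps the list at three issues
  const header = `Found ${top.length} possible duplicate issue${top.length === 1 ? '' : 's'}:`;
  if (top.length === 0) return header;
  const list = top.map((link, i) => `${i + 1}. ${link}`).join('\n');
  return [
    header,
    '',
    list,
    '',
    'This issue will be automatically closed as a duplicate in 3 days.',
  ].join('\n');
}
```

The actual posting step would hand this string to `gh issue comment <number> --body ...`, which is the only interaction channel the command's frontmatter allows.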
**`.coderabbit.yaml`** (new file, 10 lines)

```yaml
reviews:
  profile: assertive
  poem: false
  auto_review:
    base_branches:
      - rc
      - beta
      - alpha
      - production
      - next
```
**MCP server configuration** (modified; the filename header did not survive capture)

```diff
@@ -1,10 +1,21 @@
 {
   "mcpServers": {
-    "taskmaster-ai": {
+    "task-master-ai": {
       "command": "node",
-      "args": [
-        "./mcp-server/server.js"
-      ]
+      "args": ["./dist/mcp-server.js"],
+      "env": {
+        "ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
+        "PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
+        "OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
+        "GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
+        "GROQ_API_KEY": "GROQ_API_KEY_HERE",
+        "XAI_API_KEY": "XAI_API_KEY_HERE",
+        "OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
+        "MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
+        "AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
+        "OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE",
+        "GITHUB_API_KEY": "GITHUB_API_KEY_HERE"
+      }
     }
   }
 }
```
155
.cursor/rules/ai_providers.mdc
Normal file
155
.cursor/rules/ai_providers.mdc
Normal file
@@ -0,0 +1,155 @@
|
|||||||
|
---
|
||||||
|
description: Guidelines for managing Task Master AI providers and models.
|
||||||
|
globs:
|
||||||
|
alwaysApply: false
|
||||||
|
---
|
||||||
|
# Task Master AI Provider Management
|
||||||
|
|
||||||
|
This rule guides AI assistants on how to view, configure, and interact with the different AI providers and models supported by Task Master. For internal implementation details of the service layer, see [`ai_services.mdc`](mdc:.cursor/rules/ai_services.mdc).
|
||||||
|
|
||||||
|
- **Primary Interaction:**
  - Use the `models` MCP tool or the `task-master models` CLI command to manage AI configurations. See [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc) for detailed command/tool usage.

- **Configuration Roles:**
  - Task Master uses three roles for AI models:
    - `main`: Primary model for general tasks (generation, updates).
    - `research`: Model used when the `--research` flag or `research: true` parameter is used (typically models with web access or specialized knowledge).
    - `fallback`: Model used if the primary (`main`) model fails.
  - Each role is configured with a specific `provider:modelId` pair (e.g., `openai:gpt-4o`).

- **Viewing Configuration & Available Models:**
  - To see the current model assignments for each role and list all models available for assignment:
    - **MCP Tool:** `models` (call with no arguments or `listAvailableModels: true`)
    - **CLI Command:** `task-master models`
  - The output will show currently assigned models and a list of others, prefixed with their provider (e.g., `google:gemini-2.5-pro-exp-03-25`).

- **Setting Models for Roles:**
  - To assign a model to a role:
    - **MCP Tool:** `models` with `setMain`, `setResearch`, or `setFallback` parameters.
    - **CLI Command:** `task-master models` with `--set-main`, `--set-research`, or `--set-fallback` flags.
  - **Crucially:** When providing the model ID to *set*, **DO NOT include the `provider:` prefix**. Use only the model ID itself.
    - ✅ **DO:** `models(setMain='gpt-4o')` or `task-master models --set-main=gpt-4o`
    - ❌ **DON'T:** `models(setMain='openai:gpt-4o')` or `task-master models --set-main=openai:gpt-4o`
  - The tool/command will automatically determine the provider based on the model ID.

- **Setting Custom Models (Ollama/OpenRouter):**
  - To set a model ID not in the internal list for Ollama or OpenRouter:
    - **MCP Tool:** Use `models` with `set<Role>` and **also** `ollama: true` or `openrouter: true`.
      - Example: `models(setMain='my-custom-ollama-model', ollama=true)`
      - Example: `models(setMain='some-openrouter-model', openrouter=true)`
    - **CLI Command:** Use `task-master models` with `--set-<role>` and **also** `--ollama` or `--openrouter`.
      - Example: `task-master models --set-main=my-custom-ollama-model --ollama`
      - Example: `task-master models --set-main=some-openrouter-model --openrouter`
  - **Interactive Setup:** Use `task-master models --setup` and select the `Ollama (Enter Custom ID)` or `OpenRouter (Enter Custom ID)` options.
  - **OpenRouter Validation:** When setting a custom OpenRouter model, Taskmaster attempts to validate the ID against the live OpenRouter API.
  - **Ollama:** No live validation occurs for custom Ollama models; ensure the model is available on your Ollama server.

- **Supported Providers & Required API Keys:**
  - Task Master integrates with various providers via the Vercel AI SDK.
  - **API keys are essential** for most providers and must be configured correctly.
  - **Key Locations** (See [`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc) - Configuration Management):
    - **MCP/Cursor:** Set keys in the `env` section of `.cursor/mcp.json`.
    - **CLI:** Set keys in a `.env` file in the project root.
  - **Provider List & Keys:**
    - **`anthropic`**: Requires `ANTHROPIC_API_KEY`.
    - **`google`**: Requires `GOOGLE_API_KEY`.
    - **`openai`**: Requires `OPENAI_API_KEY`.
    - **`perplexity`**: Requires `PERPLEXITY_API_KEY`.
    - **`xai`**: Requires `XAI_API_KEY`.
    - **`mistral`**: Requires `MISTRAL_API_KEY`.
    - **`azure`**: Requires `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_ENDPOINT`.
    - **`openrouter`**: Requires `OPENROUTER_API_KEY`.
    - **`ollama`**: Typically requires no API key (`OLLAMA_API_KEY` is not currently supported) but uses `OLLAMA_BASE_URL` (default: `http://localhost:11434/api`). *Check your specific setup.*

- **Troubleshooting:**
  - If AI commands fail (especially in MCP context):
    1. **Verify API Key:** Ensure the correct API key for the *selected provider* (check `models` output) exists in the appropriate location (`.cursor/mcp.json` env or `.env`).
    2. **Check Model ID:** Ensure the model ID set for the role is valid (use the `models` tool with `listAvailableModels: true`, or run `task-master models`).
    3. **Provider Status:** Check the status of the external AI provider's service.
    4. **Restart MCP:** If changes were made to configuration or provider code, restart the MCP server.

## Adding a New AI Provider (Vercel AI SDK Method)

Follow these steps to integrate a new AI provider that has an official Vercel AI SDK adapter (`@ai-sdk/<provider>`):

1. **Install Dependency:**
   - Install the provider-specific package:

     ```bash
     npm install @ai-sdk/<provider-name>
     ```

2. **Create Provider Module:**
   - Create a new file in `src/ai-providers/` named `<provider-name>.js`.
   - Use existing modules (`openai.js`, `anthropic.js`, etc.) as a template.
   - **Import:**
     - Import the provider's `create<ProviderName>` function from `@ai-sdk/<provider-name>`.
     - Import `generateText`, `streamText`, `generateObject` from the core `ai` package.
     - Import the `log` utility from `../../scripts/modules/utils.js`.
   - **Implement Core Functions:**
     - `generate<ProviderName>Text(params)`:
       - Accepts `params` (apiKey, modelId, messages, etc.).
       - Instantiate the client: `const client = create<ProviderName>({ apiKey });`
       - Call `generateText({ model: client(modelId), ... })`.
       - Return `result.text`.
       - Include basic validation and try/catch error handling.
     - `stream<ProviderName>Text(params)`:
       - Similar structure to `generateText`.
       - Call `streamText({ model: client(modelId), ... })`.
       - Return the full stream result object.
       - Include basic validation and try/catch.
     - `generate<ProviderName>Object(params)`:
       - Similar structure.
       - Call `generateObject({ model: client(modelId), schema, messages, ... })`.
       - Return `result.object`.
       - Include basic validation and try/catch.
   - **Export Functions:** Export the three implemented functions (`generate<ProviderName>Text`, `stream<ProviderName>Text`, `generate<ProviderName>Object`).

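   The module shape described above can be sketched for a hypothetical `acme` provider. The SDK imports are stubbed so the sketch stays self-contained; names like `generateAcmeText` simply follow the rule's naming pattern and are not real Task Master code:

   ```javascript
   // Hypothetical sketch of src/ai-providers/acme.js for an assumed "acme" provider.
   // The real module would import createAcme from '@ai-sdk/acme' and generateText
   // from 'ai'; both are stubbed here so the sketch is self-contained.
   const createAcme = ({ apiKey }) => (modelId) => ({ apiKey, modelId });
   const generateText = async ({ model, messages }) => ({
     text: `[${model.modelId}] ${messages[messages.length - 1].content}`
   });

   // Export this from the real module alongside streamAcmeText / generateAcmeObject.
   async function generateAcmeText({ apiKey, modelId, messages }) {
     if (!apiKey) throw new Error('Acme API key is required.');
     if (!modelId) throw new Error('Acme model ID is required.');
     try {
       const client = createAcme({ apiKey });
       const result = await generateText({ model: client(modelId), messages });
       return result.text;
     } catch (error) {
       // The real module would log via the shared `log` utility before rethrowing.
       throw new Error(`Acme generateText failed: ${error.message}`);
     }
   }
   ```

   The `stream<ProviderName>Text` and `generate<ProviderName>Object` variants follow the same instantiate/call/return pattern.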
3. **Integrate with Unified Service:**
   - Open `scripts/modules/ai-services-unified.js`.
   - **Import:** Add `import * as <providerName> from '../../src/ai-providers/<provider-name>.js';`
   - **Map:** Add an entry to the `PROVIDER_FUNCTIONS` map:

     ```javascript
     '<provider-name>': {
       generateText: <providerName>.generate<ProviderName>Text,
       streamText: <providerName>.stream<ProviderName>Text,
       generateObject: <providerName>.generate<ProviderName>Object
     },
     ```

4. **Update Configuration Management:**
   - Open `scripts/modules/config-manager.js`.
   - **`MODEL_MAP`:** Add the new `<provider-name>` key to the `MODEL_MAP` loaded from `supported-models.json` (or ensure the loading handles new providers dynamically if `supported-models.json` is updated first).
   - **`VALID_PROVIDERS`:** Ensure the new `<provider-name>` is included in the `VALID_PROVIDERS` array (this should happen automatically if it is derived from `MODEL_MAP` keys).
   - **API Key Handling:**
     - Update the `keyMap` in `_resolveApiKey` and `isApiKeySet` with the correct environment variable name (e.g., `PROVIDER_API_KEY`).
     - Add a case to the `switch` statement in `getMcpApiKeyStatus` for the new provider, checking the corresponding key in `mcp.json` against its placeholder value.
   - **Ollama Exception:** If adding Ollama or another provider *not* requiring an API key, add a specific check at the beginning of `isApiKeySet` and `getMcpApiKeyStatus` to return `true` immediately for that provider.

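   As a rough illustration of the key-handling changes, the map and placeholder check might look like this (a sketch with assumed names, not the actual `config-manager.js` source):

   ```javascript
   // Hedged sketch: provider -> env var name map, plus a key check that treats
   // placeholder values as unset and exempts Ollama (no key required).
   const keyMap = {
     anthropic: 'ANTHROPIC_API_KEY',
     openai: 'OPENAI_API_KEY',
     acme: 'ACME_API_KEY' // entry added for the hypothetical new provider
   };

   function isApiKeySet(providerName, env) {
     if (providerName === 'ollama') return true; // Ollama exception: no API key needed
     const value = env[keyMap[providerName]] || '';
     return value !== '' && !value.endsWith('_API_KEY_HERE'); // ignore placeholders
   }
   ```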
5. **Update Supported Models List:**
   - Edit `scripts/modules/supported-models.json`.
   - Add a new key for the `<provider-name>`.
   - Add an array of model objects under the provider key, each including:
     - `id`: The specific model identifier (e.g., `claude-3-opus-20240229`).
     - `name`: A user-friendly name (optional).
     - `swe_score`, `cost_per_1m_tokens`: (Optional) Performance/cost data if available.
     - `allowed_roles`: An array of roles (`"main"`, `"research"`, `"fallback"`) the model is suitable for.
     - `max_tokens`: (Optional but recommended) The maximum token limit for the model.

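   For illustration, a hypothetical entry for an assumed `acme` provider could look like this (all values are placeholders, not real model data):

   ```json
   "acme": [
     {
       "id": "acme-large-latest",
       "name": "Acme Large",
       "swe_score": null,
       "cost_per_1m_tokens": null,
       "allowed_roles": ["main", "fallback"],
       "max_tokens": 128000
     }
   ]
   ```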
6. **Update Environment Examples:**
   - Add the new `PROVIDER_API_KEY` to `.env.example`.
   - Add the new `PROVIDER_API_KEY` with its placeholder (`YOUR_PROVIDER_API_KEY_HERE`) to the `env` section for `taskmaster-ai` in `.cursor/mcp.json.example` (if it exists) or update the setup instructions.

7. **Add Unit Tests:**
   - Create `tests/unit/ai-providers/<provider-name>.test.js`.
   - Mock the `@ai-sdk/<provider-name>` module and the core `ai` module functions (`generateText`, `streamText`, `generateObject`).
   - Write tests for each exported function (`generate<ProviderName>Text`, etc.) to verify:
     - Correct client instantiation.
     - Correct parameters passed to the mocked Vercel AI SDK functions.
     - Correct handling of results.
     - Error handling (missing API key, SDK errors).

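   A minimal sketch of such a test, using Node's built-in `node:test` runner in place of the project's actual test framework, with the provider function mocked inline:

   ```javascript
   // Hedged sketch: generateAcmeText stands in for the real
   // generate<ProviderName>Text import with its SDK calls already mocked.
   const test = require('node:test');
   const assert = require('node:assert');

   async function generateAcmeText({ apiKey, modelId }) {
     if (!apiKey) throw new Error('Acme API key is required.');
     return `ok:${modelId}`; // mocked SDK result
   }

   test('rejects when the API key is missing', async () => {
     await assert.rejects(() => generateAcmeText({ modelId: 'acme-1' }), /API key/);
   });

   test('returns the mocked SDK text on success', async () => {
     assert.strictEqual(
       await generateAcmeText({ apiKey: 'key', modelId: 'acme-1' }),
       'ok:acme-1'
     );
   });
   ```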
8. **Documentation:**
   - Update any relevant documentation (such as `README.md` or other rules) that mentions supported providers or configuration.

*(Note: For providers **without** an official Vercel AI SDK adapter, the process involves directly using the provider's own SDK or API within the `src/ai-providers/<provider-name>.js` module and manually constructing responses compatible with the unified service layer, which is significantly more complex.)*

102
.cursor/rules/ai_services.mdc
Normal file
@@ -0,0 +1,102 @@
---
description: Guidelines for interacting with the unified AI service layer.
globs: scripts/modules/ai-services-unified.js, scripts/modules/task-manager/*.js, scripts/modules/commands.js
---

# AI Services Layer Guidelines

This document outlines the architecture and usage patterns for interacting with Large Language Models (LLMs) via Task Master's unified AI service layer (`ai-services-unified.js`). The goal is to centralize configuration, provider selection, API key management, fallback logic, and error handling.

**Core Components:**

* **Configuration (`.taskmasterconfig` & [`config-manager.js`](mdc:scripts/modules/config-manager.js)):**
  * Defines the AI provider and model ID for different **roles** (`main`, `research`, `fallback`).
  * Stores parameters like `maxTokens` and `temperature` per role.
  * Managed via the `task-master models --setup` CLI command.
  * [`config-manager.js`](mdc:scripts/modules/config-manager.js) provides **getters** (e.g., `getMainProvider()`, `getParametersForRole()`) to access these settings. Core logic should **only** use these getters for *non-AI-related application logic* (e.g., `getDefaultSubtasks`); the unified service fetches the necessary AI parameters internally based on the `role`.
  * **API keys** are **NOT** stored here; they are resolved via `resolveEnvVariable` (in [`utils.js`](mdc:scripts/modules/utils.js)) from `.env` (for CLI) or the MCP `session.env` object (for MCP calls). See [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc) and [`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc).

* **Unified Service (`ai-services-unified.js`):**
  * Exports the primary interaction functions: `generateTextService` and `generateObjectService`. (Note: `streamTextService` exists but has known reliability issues with some providers/payloads.)
  * Contains the core `_unifiedServiceRunner` logic.
  * Internally uses `config-manager.js` getters to determine the provider/model/parameters based on the requested `role`.
  * Implements the **fallback sequence** (e.g., main -> fallback -> research) if the primary provider/model fails.
  * Constructs the `messages` array required by the Vercel AI SDK.
  * Implements **retry logic** for specific API errors (`_attemptProviderCallWithRetries`).
  * Resolves API keys automatically via `_resolveApiKey` (using `resolveEnvVariable`).
  * Maps requests to the correct provider implementation (in `src/ai-providers/`) via `PROVIDER_FUNCTIONS`.
  * Returns a structured object containing the primary AI result (`mainResult`) and telemetry data (`telemetryData`). See [`telemetry.mdc`](mdc:.cursor/rules/telemetry.mdc) for details on how this telemetry data is propagated and handled.

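The fallback sequence can be sketched as follows (a simplified illustration with assumed names; the real `_unifiedServiceRunner` also handles per-provider retries, telemetry, and key resolution):

```javascript
// Hedged sketch of the role fallback idea only: try each configured role in
// order, remember the last failure, and rethrow it if every role fails.
const FALLBACK_SEQUENCE = ['main', 'fallback', 'research'];

async function runWithFallback(roleFns, prompt) {
  let lastError = new Error('No roles configured.');
  for (const role of FALLBACK_SEQUENCE) {
    const fn = roleFns[role];
    if (!fn) continue; // role not configured; try the next one
    try {
      return await fn(prompt);
    } catch (error) {
      lastError = error; // remember the failure and fall through
    }
  }
  throw lastError;
}
```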
* **Provider Implementations (`src/ai-providers/*.js`):**
  * Contain provider-specific wrappers around the Vercel AI SDK functions (`generateText`, `generateObject`).

**Usage Pattern (from Core Logic like `task-manager/*.js`):**

1. **Import Service:** Import `generateTextService` or `generateObjectService` from `../ai-services-unified.js`.

   ```javascript
   // Preferred for most tasks (especially with complex JSON)
   import { generateTextService } from '../ai-services-unified.js';

   // Use if structured output is reliable for the specific use case
   // import { generateObjectService } from '../ai-services-unified.js';
   ```

2. **Prepare Parameters:** Construct the parameters object for the service call.
   * `role`: **Required.** `'main'`, `'research'`, or `'fallback'`. Determines the initial provider/model/parameters used by the unified service.
   * `session`: **Required if called from MCP context.** Pass the `session` object received by the direct function wrapper. The unified service uses `session.env` to find API keys.
   * `systemPrompt`: Your system instruction string.
   * `prompt`: The user message string (can be long, include stringified data, etc.).
   * (For `generateObjectService` only): `schema` (a Zod schema) and `objectName`.

3. **Call Service:** Use `await` to call the service function.

   ```javascript
   // Example using generateTextService (most common)
   try {
     const resultText = await generateTextService({
       role: useResearch ? 'research' : 'main', // Determine role based on logic
       session: context.session, // Pass session from context object
       systemPrompt: "You are...",
       prompt: userMessageContent
     });
     // Process the raw text response (e.g., parse JSON, use directly)
     // ...
   } catch (error) {
     // Handle errors thrown by the unified service (if all fallbacks/retries fail)
     report('error', `Unified AI service call failed: ${error.message}`);
     throw error;
   }

   // Example using generateObjectService (use cautiously)
   try {
     const resultObject = await generateObjectService({
       role: 'main',
       session: context.session,
       schema: myZodSchema,
       objectName: 'myDataObject',
       systemPrompt: "You are...",
       prompt: userMessageContent
     });
     // resultObject is already a validated JS object
     // ...
   } catch (error) {
     report('error', `Unified AI service call failed: ${error.message}`);
     throw error;
   }
   ```

4. **Handle Results/Errors:** Process the returned text/object or handle errors thrown by the unified service layer.

**Key Implementation Rules & Gotchas:**

* ✅ **DO**: Centralize **all** LLM calls through `generateTextService` or `generateObjectService`.
* ✅ **DO**: Determine the appropriate `role` (`main`, `research`, `fallback`) in your core logic and pass it to the service.
* ✅ **DO**: Pass the `session` object (received in the `context` parameter, especially from direct function wrappers) to the service call when in MCP context.
* ✅ **DO**: Ensure API keys are correctly configured in `.env` (for CLI) or `.cursor/mcp.json` (for MCP).
* ✅ **DO**: Ensure `.taskmasterconfig` exists and has valid provider/model IDs for the roles you intend to use (manage via `task-master models --setup`).
* ✅ **DO**: Use `generateTextService` and implement robust manual JSON parsing (with Zod validation *after* parsing) when structured output is needed, as `generateObjectService` has shown unreliability with some providers/schemas.
* ❌ **DON'T**: Import or call anything from the old `ai-services.js`, `ai-client-factory.js`, or `ai-client-utils.js` files.
* ❌ **DON'T**: Initialize AI clients (Anthropic, Perplexity, etc.) directly within core logic (`task-manager/`) or MCP direct functions.
* ❌ **DON'T**: Fetch AI-specific parameters (model ID, max tokens, temperature) using `config-manager.js` getters *for the AI call*; pass the `role` instead.
* ❌ **DON'T**: Implement fallback or retry logic outside `ai-services-unified.js`.
* ❌ **DON'T**: Handle API key resolution outside the service layer (it uses `utils.js` internally).
* ⚠️ **generateObjectService Caution**: Be aware of potential reliability issues with `generateObjectService` across different providers and complex schemas. Prefer `generateTextService` + manual parsing as a more robust alternative for structured data needs.

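The recommended `generateTextService` + manual parsing pattern can be sketched like this (a minimal illustration; a hand-rolled shape check stands in for a real Zod schema):

```javascript
// Hedged sketch: parse raw model text as JSON, then validate the shape
// *after* parsing, as the rule above recommends.
function parseTaskResponse(rawText) {
  // Models often wrap JSON in prose or markdown fences; grab the outermost object.
  const start = rawText.indexOf('{');
  const end = rawText.lastIndexOf('}');
  if (start === -1 || end <= start) throw new Error('No JSON object found in response.');
  const parsed = JSON.parse(rawText.slice(start, end + 1)); // throws on malformed JSON
  if (typeof parsed.id !== 'number' || typeof parsed.title !== 'string') {
    throw new Error('Response JSON does not match the expected task shape.');
  }
  return parsed;
}
```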
@@ -3,7 +3,6 @@ description: Describes the high-level architecture of the Task Master CLI applic
globs: scripts/modules/*.js
alwaysApply: false
---

# Application Architecture Overview

- **Modular Structure**: The Task Master CLI is built using a modular architecture, with distinct modules responsible for different aspects of the application. This promotes separation of concerns, maintainability, and testability.

@@ -14,114 +13,213 @@ alwaysApply: false
  - **Purpose**: Defines and registers all CLI commands using Commander.js.
  - **Responsibilities** (See also: [`commands.mdc`](mdc:.cursor/rules/commands.mdc)):
    - Parses command-line arguments and options.
    - Invokes appropriate core logic functions from `scripts/modules/`.
    - Handles user input/output for CLI.
    - Implements CLI-specific validation.

- **[`task-manager.js`](mdc:scripts/modules/task-manager.js) & `task-manager/` directory: Task Data & Core Logic**
  - **Purpose**: Contains core functions for task data manipulation (CRUD), AI interactions, and related logic.
  - **Responsibilities**:
    - Reading/writing `tasks.json` with tagged task lists support.
    - Implementing functions for task CRUD, parsing PRDs, expanding tasks, updating status, etc.
    - **Tagged Task Lists**: Handles task organization across multiple contexts (tags) like "master", branch names, or project phases.
    - **Tag Resolution**: Provides backward compatibility by resolving the tagged format to the legacy format transparently.
    - **Delegating AI interactions** to the `ai-services-unified.js` layer.
    - Accessing non-AI configuration via `config-manager.js` getters.
  - **Key Files**: Individual files within `scripts/modules/task-manager/` handle specific actions (e.g., `add-task.js`, `expand-task.js`).

- **[`dependency-manager.js`](mdc:scripts/modules/dependency-manager.js): Dependency Management**
  - **Purpose**: Manages task dependencies.
  - **Responsibilities**: Add/remove/validate/fix dependencies across tagged task contexts.

- **[`ui.js`](mdc:scripts/modules/ui.js): User Interface Components**
  - **Purpose**: Handles CLI output formatting (tables, colors, boxes, spinners).
  - **Responsibilities**: Displaying tasks, reports, progress, suggestions, and migration notices for tagged systems.

- **[`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js): Unified AI Service Layer**
  - **Purpose**: Centralized interface for all LLM interactions using the Vercel AI SDK.
  - **Responsibilities** (See also: [`ai_services.mdc`](mdc:.cursor/rules/ai_services.mdc)):
    - Exports `generateTextService`, `generateObjectService`.
    - Handles provider/model selection based on `role` and `.taskmasterconfig`.
    - Resolves API keys (from `.env` or `session.env`).
    - Implements fallback and retry logic.
    - Orchestrates calls to provider-specific implementations (`src/ai-providers/`).
    - Telemetry data generated by the AI service layer is propagated upwards through core logic, direct functions, and MCP tools. See [`telemetry.mdc`](mdc:.cursor/rules/telemetry.mdc) for the detailed integration pattern.

- **[`src/ai-providers/*.js`](mdc:src/ai-providers/): Provider-Specific Implementations**
  - **Purpose**: Provider-specific wrappers for Vercel AI SDK functions.
  - **Responsibilities**: Interact directly with Vercel AI SDK adapters.

- **[`config-manager.js`](mdc:scripts/modules/config-manager.js): Configuration Management**
  - **Purpose**: Loads, validates, and provides access to configuration.
  - **Responsibilities** (See also: [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc)):
    - Reads and merges `.taskmasterconfig` with defaults.
    - Provides getters (e.g., `getMainProvider`, `getLogLevel`, `getDefaultSubtasks`) for accessing settings.
    - **Tag Configuration**: Manages `global.defaultTag` and the `tags` section for tag system settings.
    - **Note**: Does **not** store or directly handle API keys (keys live in `.env` or the MCP `session.env`).

- **[`utils.js`](mdc:scripts/modules/utils.js): Core Utility Functions**
  - **Purpose**: Low-level, reusable CLI utilities.
  - **Responsibilities** (See also: [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc)):
    - Logging (`log` function), file I/O (`readJSON`, `writeJSON`), string utils (`truncate`).
    - Task utils (`findTaskById`), dependency utils (`findCycles`).
    - API key resolution (`resolveEnvVariable`).
    - Silent mode control (`enableSilentMode`, `disableSilentMode`).
    - **Tagged Task Lists**: Silent migration system, tag resolution, current tag management.
    - **Migration System**: `performCompleteTagMigration`, `migrateConfigJson`, `createStateJson`.

- **[`mcp-server/`](mdc:mcp-server/): MCP Server Integration**
  - **Purpose**: Provides the MCP interface using FastMCP.
  - **Responsibilities** (See also: [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc)):
    - Registers tools (`mcp-server/src/tools/*.js`). Tool `execute` methods **should be wrapped** with the `withNormalizedProjectRoot` HOF (from `tools/utils.js`) to ensure consistent path handling.
    - The HOF provides a normalized `args.projectRoot` to the `execute` method.
    - Tool `execute` methods call **direct function wrappers** (`mcp-server/src/core/direct-functions/*.js`), passing the normalized `projectRoot` and other args.
    - Direct functions use path utilities (`mcp-server/src/core/utils/`) to resolve paths based on the `projectRoot` from the session.
    - Direct functions implement silent mode, logger wrappers, and call core logic functions from `scripts/modules/`.
    - **Tagged Task Lists**: MCP tools fully support the tagged format with complete tag management capabilities.
    - Manages MCP caching and response formatting.

- **[`init.js`](mdc:scripts/init.js): Project Initialization Logic**
  - **Purpose**: Sets up a new Task Master project structure.
  - **Responsibilities**: Creates directories, copies templates, manages `package.json`, sets up `.cursor/mcp.json`, and initializes `state.json` for the tagged system.
## Tagged Task Lists System Architecture

**Data Structure**: Task Master now uses a tagged task lists system where the `tasks.json` file contains multiple named task lists as top-level keys:

```json
{
  "master": {
    "tasks": [/* standard task objects */]
  },
  "feature-branch": {
    "tasks": [/* separate task context */]
  }
}
```
**Key Components:**

- **Silent Migration**: Automatically transforms the legacy `{"tasks": [...]}` format to the tagged format `{"master": {"tasks": [...]}}` on first read
- **Tag Resolution Layer**: Provides 100% backward compatibility by intercepting the tagged format and returning the legacy format to existing code
- **Configuration Integration**: `global.defaultTag` and the `tags` section in `config.json` manage tag system settings
- **State Management**: `.taskmaster/state.json` tracks the current tag, migration status, and tag-branch mappings
- **Migration Notice**: User-friendly notification system for a seamless migration experience

**Backward Compatibility**: All existing CLI commands and MCP tools continue to work unchanged. The tag resolution layer ensures that existing code receives the expected legacy format while the underlying storage uses the new tagged structure.
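The resolution step can be sketched as follows; the function and field names here are illustrative only (the real layer lives in the core modules and handles more cases, such as migration notices):

```javascript
// Hypothetical sketch of the tag resolution layer: legacy callers always
// receive { tasks: [...] }, whether storage uses the legacy or the
// tagged shape. Not the actual implementation.
function resolveTasks(raw, currentTag = 'master') {
  if (Array.isArray(raw.tasks)) {
    return raw; // already in legacy format, pass through
  }
  const tagged = raw[currentTag];
  return tagged ? { tasks: tagged.tasks } : { tasks: [] };
}

const taggedFile = { master: { tasks: [{ id: 1 }] }, 'feature-branch': { tasks: [] } };
console.log(resolveTasks(taggedFile).tasks.length); // → 1
console.log(resolveTasks({ tasks: [{ id: 1 }, { id: 2 }] }).tasks.length); // → 2
```

Because the shape check is structural (is `tasks` a top-level array?), callers never need to know which on-disk format is in use.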
- **Data Flow and Module Dependencies (Updated)**:
  - **CLI**: `bin/task-master.js` -> `scripts/dev.js` (loads `.env`) -> `scripts/modules/commands.js` -> Core Logic (`scripts/modules/*`) -> **Tag Resolution Layer** -> Unified AI Service (`ai-services-unified.js`) -> Provider Adapters -> LLM API.
  - **MCP**: External Tool -> `mcp-server/server.js` -> Tool (`mcp-server/src/tools/*`) -> Direct Function (`mcp-server/src/core/direct-functions/*`) -> Core Logic (`scripts/modules/*`) -> **Tag Resolution Layer** -> Unified AI Service (`ai-services-unified.js`) -> Provider Adapters -> LLM API.
  - **Configuration**: Core logic needing non-AI settings calls `config-manager.js` getters (passing `session.env` via `explicitRoot` if from MCP). The Unified AI Service internally calls `config-manager.js` getters (using `role`) for AI params and `utils.js` (`resolveEnvVariable` with `session.env`) for API keys.

## Silent Mode Implementation Pattern in MCP Direct Functions

Direct functions (the `*Direct` functions in `mcp-server/src/core/direct-functions/`) need to carefully implement silent mode to prevent console logs from interfering with the structured JSON responses required by MCP. This involves both using `enableSilentMode`/`disableSilentMode` around core function calls AND passing the MCP logger via the standard wrapper pattern (see [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc)). Here's the standard pattern for correct implementation:
1. **Import Silent Mode Utilities**:

   ```javascript
   import { enableSilentMode, disableSilentMode, isSilentMode } from '../../../../scripts/modules/utils.js';
   ```
2. **Parameter Matching with Core Functions**:
   - ✅ **DO**: Ensure direct function parameters match the core function parameters
   - ✅ **DO**: Check the original core function signature before implementing
   - ❌ **DON'T**: Add parameters to direct functions that don't exist in core functions

   ```javascript
   // Example: Core function signature
   // async function expandTask(tasksPath, taskId, numSubtasks, useResearch, additionalContext, options)

   // Direct function implementation - extract only parameters that exist in core
   export async function expandTaskDirect(args, log, context = {}) {
     // Extract parameters that match the core function
     const taskId = parseInt(args.id, 10);
     const numSubtasks = args.num ? parseInt(args.num, 10) : undefined;
     const useResearch = args.research === true;
     const additionalContext = args.prompt || '';

     // Later pass these parameters in the correct order to the core function
     const result = await expandTask(
       tasksPath,
       taskId,
       numSubtasks,
       useResearch,
       additionalContext,
       { mcpLog: log, session: context.session }
     );
   }
   ```
3. **Checking Silent Mode State**:
   - ✅ **DO**: Always use the `isSilentMode()` function to check current status
   - ❌ **DON'T**: Directly access the global `silentMode` variable or `global.silentMode`

   ```javascript
   // CORRECT: Use the function to check current state
   if (!isSilentMode()) {
     // Only create a loading indicator if not in silent mode
     loadingIndicator = startLoadingIndicator('Processing...');
   }

   // INCORRECT: Don't access global variables directly
   if (!silentMode) { // ❌ WRONG
     loadingIndicator = startLoadingIndicator('Processing...');
   }
   ```
4. **Wrapping Core Function Calls**:
   - ✅ **DO**: Use a try/finally block pattern to ensure silent mode is always restored
   - ✅ **DO**: Enable silent mode before calling core functions that produce console output
   - ✅ **DO**: Disable silent mode in a finally block so it runs even if errors occur
   - ❌ **DON'T**: Enable silent mode without ensuring it gets disabled

   ```javascript
   export async function someDirectFunction(args, log) {
     try {
       // Argument preparation
       const tasksPath = findTasksJsonPath(args, log);
       const someArg = args.someArg;

       // Enable silent mode to prevent console logs
       enableSilentMode();

       try {
         // Call core function which might produce console output
         const result = await someCoreFunction(tasksPath, someArg);

         // Return standardized result object
         return {
           success: true,
           data: result,
           fromCache: false
         };
       } finally {
         // ALWAYS disable silent mode in finally block
         disableSilentMode();
       }
     } catch (error) {
       // Standard error handling
       log.error(`Error in direct function: ${error.message}`);
       return {
         success: false,
         error: { code: 'OPERATION_ERROR', message: error.message },
         fromCache: false
       };
     }
   }
   ```
5. **Mixed Parameter and Global Silent Mode Handling**:
   - For functions that need to handle both a passed `silentMode` parameter and check global state:

   ```javascript
   // Check both the function parameter and global state
   const isSilent = options.silentMode || (typeof options.silentMode === 'undefined' && isSilentMode());

   if (!isSilent) {
     console.log('Operation starting...');
   }
   ```
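Run standalone, the precedence works out as follows; the `isSilentMode` stub below is only a stand-in for the real `utils.js` helper:

```javascript
// Stand-in for the utils.js helper, for illustration only.
function isSilentMode() {
  return Boolean(global.silentMode);
}

// An explicit options.silentMode wins; only when it is undefined
// do we fall back to the global isSilentMode() check.
function resolveSilent(options = {}) {
  return options.silentMode ||
    (typeof options.silentMode === 'undefined' && isSilentMode());
}

global.silentMode = true;
console.log(resolveSilent({})); // → true (falls back to global)
console.log(resolveSilent({ silentMode: false })); // → false (explicit value wins)
```

The `typeof` check matters: a plain `options.silentMode || isSilentMode()` would incorrectly ignore an explicit `silentMode: false`.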
By following these patterns consistently, direct functions will properly manage console output suppression while ensuring that silent mode is always properly reset, even when errors occur. This creates a more robust system that helps prevent unexpected silent mode states that could cause logging problems in subsequent operations.

- **Testing Architecture**:
  - **Integration Tests**: Located in `tests/integration/`, test interactions between modules
  - **End-to-End Tests**: Located in `tests/e2e/`, test complete workflows from a user perspective
  - **Test Fixtures**: Located in `tests/fixtures/`, provide reusable test data
  - **Tagged System Tests**: Test migration, tag resolution, and multi-context functionality

- **Module Design for Testability**:
  - **Explicit Dependencies**: Functions accept their dependencies as parameters rather than using globals
  - **Clear Module Interfaces**: Each module has well-defined exports that can be mocked in tests
  - **Callback Isolation**: Callbacks are defined as separate functions for easier testing
  - **Stateless Design**: Modules avoid maintaining internal state where possible
  - **Tag Resolution Testing**: Test both tagged and legacy format handling
- **Mock Integration Patterns**:
  - **External Libraries**: Libraries like `fs`, `commander`, and `@anthropic-ai/sdk` are mocked at module level
  - **Internal Modules**: Application modules are mocked with appropriate spy functions
  - **Testing Function Callbacks**: Callbacks are extracted from mock call arguments and tested in isolation
  - **UI Elements**: Output functions from `ui.js` are mocked to verify display calls
  - **Tagged Data Mocking**: Test both legacy and tagged task data structures

- **Testing Flow**:
  - Module dependencies are mocked (following Jest's hoisting behavior)
  - Spy functions are set up on module methods
  - Tests call the functions under test and verify behavior
  - Mocks are reset between test cases to maintain isolation
  - Tagged system behavior is tested for both migration and normal operation
- **Benefits of this Architecture**:
  - **Mocking Support**: The clear dependency boundaries make mocking straightforward
  - **Test Isolation**: Each component can be tested without affecting others
  - **Callback Testing**: Function callbacks can be extracted and tested independently
  - **Multi-Context Testing**: The tagged system enables testing different task contexts independently
  - **Reusability**: Utility functions and UI components can be reused across different parts of the application.
  - **Scalability**: New features can be added as new modules or by extending existing ones without significantly impacting other parts of the application.
  - **Multi-Context Support**: Tagged task lists enable working across different contexts (branches, environments, phases) without conflicts.
  - **Backward Compatibility**: Seamless migration and tag resolution ensure existing workflows continue unchanged.
  - **Clarity**: The modular structure provides a clear separation of concerns, making the codebase easier to navigate and understand for developers.

This architectural overview should help AI models understand the structure and organization of the Task Master CLI codebase, enabling them to more effectively assist with code generation, modification, and understanding.
## Implementing MCP Support for a Command

Follow these steps to add MCP support for an existing Task Master command (see [`new_features.mdc`](mdc:.cursor/rules/new_features.mdc) for more detail):

1. **Ensure Core Logic Exists**: Verify the core functionality is implemented and exported from the relevant module in `scripts/modules/`.

2. **Create a Direct Function File in `mcp-server/src/core/direct-functions/`**:
   - Create a new file (e.g., `your-command.js`) using **kebab-case** naming.
   - Import the necessary core functions, **`findTasksJsonPath` from `../utils/path-utils.js`**, and the **silent mode utilities**.
   - Implement `async function yourCommandDirect(args, log)` using **camelCase** with a `Direct` suffix:
     - **Path Resolution**: Obtain the tasks file path using `const tasksPath = findTasksJsonPath(args, log);`. This relies on `args.projectRoot` being provided.
     - Parse other `args` and perform necessary validation.
     - **Implement Silent Mode**: Wrap core function calls with `enableSilentMode()` and `disableSilentMode()`.
     - Implement caching with `getCachedOrExecute` if applicable.
     - Call the core logic.
     - Return `{ success: true/false, data/error, fromCache: boolean }`.
   - Export the wrapper function.
   - **Note**: Tag-aware MCP tools are fully implemented with complete tag management support.

3. **Update `task-master-core.js` with Import/Export**: Add imports/exports for the new `*Direct` function.

4. **Create an MCP Tool (`mcp-server/src/tools/`)**:
   - Create a new file (e.g., `your-command.js`) using **kebab-case**.
   - Import `zod`, `handleApiResult`, **`getProjectRootFromSession`**, and your `yourCommandDirect` function.
   - Implement `registerYourCommandTool(server)`.
   - **Define parameters, making `projectRoot` optional**: `projectRoot: z.string().optional().describe(...)`.
   - Consider whether the operation should run in the background using `AsyncOperationManager`.
   - Implement the standard `execute` method:
     - Get `rootFolder` using `getProjectRootFromSession` (with a fallback to `args.projectRoot`).
     - Call `yourCommandDirect({ ...args, projectRoot: rootFolder }, log)` or use `asyncOperationManager.addOperation`.
     - Pass the result to `handleApiResult`.

5. **Register the Tool**: Import and call `registerYourCommandTool` in `mcp-server/src/tools/index.js`.

6. **Update `mcp.json`**: Add the new tool definition.
## Project Initialization

The `initialize_project` command provides a way to set up a new Task Master project:

- **CLI Command**: `task-master init`
- **MCP Tool**: `initialize_project`
- **Functionality**:
  - Creates the necessary directories and files for a new project
  - Sets up `tasks.json` with the tagged structure and initial task files
  - Configures project metadata (name, description, version)
  - Initializes `state.json` for the tag system
  - Handles shell alias creation if requested
  - Works in both interactive and non-interactive modes
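As a rough illustration (field names here are assumptions based on the descriptions above, not the exact schema), the initialized `.taskmaster/state.json` might look like:

```json
{
  "currentTag": "master",
  "migrationComplete": true,
  "branchTagMapping": {
    "feature-branch": "feature-branch"
  }
}
```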
`.cursor/rules/changeset.mdc` (new file):
---
description: Guidelines for using Changesets (npm run changeset) to manage versioning and changelogs.
alwaysApply: true
---

# Changesets Workflow Guidelines

Changesets is used to manage package versioning and generate accurate `CHANGELOG.md` files automatically. It's crucial to use it correctly after making meaningful changes that affect the package from an external perspective or that significantly impact internal development workflow documented elsewhere.

## When to Run Changeset

- Run `npm run changeset` (or `npx changeset add`) **after** you have staged (`git add .`) a logical set of changes that should be communicated in the next release's `CHANGELOG.md`.
- This typically includes:
  - **New Features** (Backward-compatible additions)
  - **Bug Fixes** (Fixes to existing functionality)
  - **Breaking Changes** (Changes that are not backward-compatible)
  - **Performance Improvements** (Enhancements to speed or resource usage)
  - **Significant Refactoring** (Major code restructuring, even if external behavior is unchanged, as it might affect stability or maintainability) - *such as reorganizing the MCP server's direct function implementations into separate files*
  - **User-Facing Documentation Updates** (Changes to the README, usage guides, public API docs)
  - **Dependency Updates** (Especially if they fix known issues or introduce significant changes)
  - **Build/Tooling Changes** (If they affect how consumers might build or interact with the package)
- **Every Pull Request** containing one or more of the above change types **should include a changeset file**.
## What NOT to Add a Changeset For

Avoid creating changesets for changes that have **no impact or relevance to external consumers** of the `task-master` package or to contributors following **public-facing documentation**. Examples include:

- **Internal Documentation Updates**: Changes *only* to files within `.cursor/rules/` that solely guide internal development practices for this specific repository.
- **Trivial Chores**: Very minor code cleanup, adding comments that don't clarify behavior, typo fixes in non-user-facing code or internal docs.
- **Non-Impactful Test Updates**: Minor refactoring of tests, adding tests for existing functionality without fixing bugs.
- **Local Configuration Changes**: Updates to personal editor settings, local `.env` files, etc.

**Rule of Thumb**: If a user installing or using the `task-master` package wouldn't care about the change, or if a contributor following the main README wouldn't need to know about it for their workflow, you likely don't need a changeset.
## How to Run and What It Asks

1. **Run the command**:
   ```bash
   npm run changeset
   # or
   npx changeset add
   ```
2. **Select Packages**: It will prompt you to select the package(s) affected by your changes using the arrow keys and spacebar. If this is not a monorepo, select the main package.
3. **Select Bump Type**: Choose the appropriate semantic version bump for **each** selected package:
   * **`Major`**: For **breaking changes**. Use sparingly.
   * **`Minor`**: For **new features**.
   * **`Patch`**: For **bug fixes**, performance improvements, **user-facing documentation changes**, significant refactoring, relevant dependency updates, or impactful build/tooling changes.
4. **Enter Summary**: Provide a concise summary of the changes **for the `CHANGELOG.md`**.
   * **Purpose**: This message is user-facing and explains *what* changed in the release.
   * **Format**: Use the imperative mood (e.g., "Add feature X", "Fix bug Y", "Update README setup instructions"). Keep it brief, typically a single line.
   * **Audience**: Think about users installing/updating the package or developers consuming its public API/CLI.
   * **Not a Git Commit Message**: This summary is *different* from your detailed Git commit message.
## Changeset Summary vs. Git Commit Message

- **Changeset Summary**:
  - **Audience**: Users/consumers of the package (who read `CHANGELOG.md`).
  - **Purpose**: Briefly describe *what* changed in the released version that is relevant to them.
  - **Format**: Concise, imperative mood; a single line is usually sufficient.
  - **Example**: `Fix dependency resolution bug in 'next' command.`
- **Git Commit Message**:
  - **Audience**: Developers browsing the Git history of *this* repository.
  - **Purpose**: Explain *why* the change was made, the context, and the implementation details (can include internal context).
  - **Format**: Follows commit conventions (e.g., Conventional Commits); can be multi-line with a subject and body.
  - **Example**:
    ```
    fix(deps): Correct dependency lookup in 'next' command

    The logic previously failed to account for subtask dependencies when
    determining the next available task. This commit refactors the
    dependency check in `findNextTask` within `task-manager.js` to
    correctly traverse both direct and subtask dependencies. Added
    unit tests to cover this specific scenario.
    ```
- ✅ **DO**: Provide *both* a concise changeset summary (when appropriate) *and* a detailed Git commit message.
- ❌ **DON'T**: Use your detailed Git commit message body as the changeset summary.
- ❌ **DON'T**: Skip running `changeset` for user-relevant changes just because you wrote a good commit message.
## The `.changeset` File

- Running the command creates a unique markdown file in the `.changeset/` directory (e.g., `.changeset/random-name.md`).
- This file contains the bump type information and the summary you provided.
- **This file MUST be staged and committed** along with your relevant code changes.
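For illustration, a generated changeset file looks roughly like this (the package name and summary are examples, not taken from this repository):

```
---
'task-master': patch
---

Fix dependency resolution bug in 'next' command.
```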
## Standard Workflow Sequence (When a Changeset is Needed)

1. Make your code or relevant documentation changes.
2. Stage your changes: `git add .`
3. Run changeset: `npm run changeset`
   * Select package(s).
   * Select bump type (`Patch`, `Minor`, `Major`).
   * Enter the **concise summary** for the changelog.
4. Stage the generated changeset file: `git add .changeset/*.md`
5. Commit all staged changes (code + changeset file) using your **detailed Git commit message**:
   ```bash
   git commit -m "feat(module): Add new feature X..."
   ```
## Release Process (Context)

- The generated `.changeset/*.md` files are consumed later during the release process.
- Commands like `changeset version` read these files, update `package.json` versions, update the `CHANGELOG.md`, and delete the individual changeset files.
- Commands like `changeset publish` then publish the new versions to npm.

Following this workflow ensures that versioning is consistent and changelogs are automatically and accurately generated based on the contributions made.
# Command-Line Interface Implementation Guidelines

**Note on Interaction Method:**

While this document details the implementation of Task Master's **CLI commands**, the **preferred method for interacting with Task Master in integrated environments (like Cursor) is through the MCP server tools**.

- **Use MCP Tools First**: Always prefer the MCP tools (e.g., `get_tasks`, `add_task`) when interacting programmatically or via an integrated tool. They offer better performance, structured data, and richer error handling. See [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc) for a comprehensive list of MCP tools and their corresponding CLI commands.
- **CLI as Fallback/User Interface**: The `task-master` CLI commands described here are primarily intended for:
  - Direct user interaction in the terminal.
  - A fallback mechanism if the MCP server is unavailable or a specific functionality is not exposed via an MCP tool.
- **Implementation Context**: This document (`commands.mdc`) focuses on the standards for *implementing* the CLI commands using Commander.js within the [`commands.js`](mdc:scripts/modules/commands.js) module.

## Command Structure Standards

- **Basic Command Template**:

  ```javascript
  programInstance
    .command('command-name')
    .description('Clear, concise description of what the command does')
    .option('-o, --option <value>', 'Option description', 'default value')
    .option('--long-option <value>', 'Option description')
    .action(async (options) => {
      // Command implementation
    });
  ```
- **Command Handler Organization**:
  - ✅ DO: Keep action handlers concise and focused
  - ✅ DO: Extract core functionality to appropriate modules
  - ✅ DO: Have the action handler import and call the relevant functions from core modules, like `task-manager.js` or `init.js`, passing the parsed `options`.
  - ✅ DO: Perform basic parameter validation, such as checking for required options, within the action handler or at the start of the called core function.
  - ❌ DON'T: Implement business logic in command handlers

## Best Practices for Removal/Delete Commands

When implementing commands that delete or remove data (like `remove-task` or `remove-subtask`), follow these specific guidelines:
- **Confirmation Prompts**:
|
||||||
|
- ✅ **DO**: Include a confirmation prompt by default for destructive operations
|
||||||
|
- ✅ **DO**: Provide a `--yes` or `-y` flag to skip confirmation, useful for scripting or automation
|
||||||
|
- ✅ **DO**: Show what will be deleted in the confirmation message
|
||||||
|
- ❌ **DON'T**: Perform destructive operations without user confirmation unless explicitly overridden
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
// ✅ DO: Include confirmation for destructive operations
|
||||||
|
programInstance
|
||||||
|
.command('remove-task')
|
||||||
|
.description('Remove a task or subtask permanently')
|
||||||
|
.option('-i, --id <id>', 'ID of the task to remove')
|
||||||
|
.option('-y, --yes', 'Skip confirmation prompt', false)
|
||||||
|
.action(async (options) => {
|
||||||
|
// Validation code...
|
||||||
|
|
||||||
|
if (!options.yes) {
|
||||||
|
const confirm = await inquirer.prompt([{
|
||||||
|
type: 'confirm',
|
||||||
|
name: 'proceed',
|
||||||
|
message: `Are you sure you want to permanently delete task ${taskId}? This cannot be undone.`,
|
||||||
|
default: false
|
||||||
|
}]);
|
||||||
|
|
||||||
|
if (!confirm.proceed) {
|
||||||
|
console.log(chalk.yellow('Operation cancelled.'));
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Proceed with removal...
|
||||||
|
});
|
||||||
|
```
|

- **File Path Handling**:
  - ✅ **DO**: Use `path.join()` to construct file paths
  - ✅ **DO**: Follow established naming conventions for tasks, like `task_001.txt`
  - ✅ **DO**: Check if files exist before attempting to delete them
  - ✅ **DO**: Handle file deletion errors gracefully
  - ❌ **DON'T**: Construct paths with string concatenation

```javascript
// ✅ DO: Properly construct file paths
const taskFilePath = path.join(
  path.dirname(tasksPath),
  `task_${taskId.toString().padStart(3, '0')}.txt`
);

// ✅ DO: Check existence before deletion
if (fs.existsSync(taskFilePath)) {
  try {
    fs.unlinkSync(taskFilePath);
    console.log(chalk.green(`Task file deleted: ${taskFilePath}`));
  } catch (error) {
    console.warn(chalk.yellow(`Could not delete task file: ${error.message}`));
  }
}
```

- **Clean Up References**:
  - ✅ **DO**: Clean up references to the deleted item in other parts of the data
  - ✅ **DO**: Handle both direct and indirect references
  - ✅ **DO**: Explain what related data is being updated
  - ❌ **DON'T**: Leave dangling references

```javascript
// ✅ DO: Clean up references when deleting items
console.log(chalk.blue('Cleaning up task dependencies...'));
let referencesRemoved = 0;

// Update dependencies in other tasks
data.tasks.forEach(task => {
  if (task.dependencies && task.dependencies.includes(taskId)) {
    task.dependencies = task.dependencies.filter(depId => depId !== taskId);
    referencesRemoved++;
  }
});

if (referencesRemoved > 0) {
  console.log(chalk.green(`Removed ${referencesRemoved} references to task ${taskId} from other tasks`));
}
```

- **Task File Regeneration**:
  - ✅ **DO**: Regenerate task files after destructive operations
  - ✅ **DO**: Pass all required parameters to generation functions
  - ✅ **DO**: Provide an option to skip regeneration if needed
  - ❌ **DON'T**: Assume default parameters will work

```javascript
// ✅ DO: Properly regenerate files after deletion
if (!options.skipGenerate) {
  console.log(chalk.blue('Regenerating task files...'));
  try {
    // Note both parameters are explicitly provided
    await generateTaskFiles(tasksPath, path.dirname(tasksPath));
    console.log(chalk.green('Task files regenerated successfully'));
  } catch (error) {
    console.warn(chalk.yellow(`Warning: Could not regenerate task files: ${error.message}`));
  }
}
```

- **Alternative Suggestions**:
  - ✅ **DO**: Suggest non-destructive alternatives when appropriate
  - ✅ **DO**: Explain the difference between deletion and status changes
  - ✅ **DO**: Include examples of alternative commands

```javascript
// ✅ DO: Suggest alternatives for destructive operations
console.log(chalk.yellow('Note: If you just want to exclude this task from active work, consider:'));
console.log(chalk.cyan(`  task-master set-status --id='${taskId}' --status='cancelled'`));
console.log(chalk.cyan(`  task-master set-status --id='${taskId}' --status='deferred'`));
console.log('This preserves the task and its history for reference.');
```

## Option Naming Conventions

- **Command Names**:
@@ -35,10 +166,10 @@ alwaysApply: false
  - ✅ DO: Use descriptive, action-oriented names

- **Option Names**:
  - ✅ DO: Use kebab-case for long-form option names, like `--output-format`
  - ✅ DO: Provide single-letter shortcuts when appropriate, like `-f, --file`
  - ✅ DO: Use consistent option names across similar commands
  - ❌ DON'T: Use different names for the same concept, such as `--file` in one command and `--path` in another

```javascript
// ✅ DO: Use consistent option naming
@@ -50,7 +181,7 @@ alwaysApply: false
.option('-p, --path <dir>', 'Output directory') // Should be --output
```

> **Note**: Although options are defined with kebab-case, like `--num-tasks`, Commander.js stores them internally as camelCase properties. Access them in code as `options.numTasks`, not `options['num-tasks']`.
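The kebab-case-to-camelCase mapping that Commander applies can be illustrated with a tiny stand-alone conversion. This is a sketch of the naming rule only, not Commander's actual internals:

```javascript
// Sketch of the kebab-case → camelCase option-name mapping.
function camelcase(flag) {
  // 'num-tasks' -> 'numTasks'
  return flag
    .split('-')
    .reduce((str, word) => str + word[0].toUpperCase() + word.slice(1));
}

console.log(camelcase('num-tasks'));     // numTasks
console.log(camelcase('output-format')); // outputFormat
```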

- **Boolean Flag Conventions**:
  - ✅ DO: Use positive flags with `--skip-` prefix for disabling behavior
@@ -79,7 +210,7 @@ alwaysApply: false
- **Required Parameters**:
  - ✅ DO: Check that required parameters are provided
  - ✅ DO: Provide clear error messages when parameters are missing
  - ✅ DO: Use early returns with `process.exit(1)` for validation failures

```javascript
// ✅ DO: Validate required parameters early
@@ -90,7 +221,7 @@ alwaysApply: false
```

- **Parameter Type Conversion**:
  - ✅ DO: Convert string inputs to appropriate types, such as numbers or booleans
  - ✅ DO: Handle conversion errors gracefully

```javascript
@@ -123,7 +254,7 @@ alwaysApply: false
const taskId = parseInt(options.id, 10);
if (isNaN(taskId) || taskId <= 0) {
  console.error(chalk.red(`Error: Invalid task ID: ${options.id}. Task ID must be a positive integer.`));
  console.log(chalk.yellow("Usage example: task-master update-task --id='23' --prompt='Update with new information.\\nEnsure proper error handling.'"));
  process.exit(1);
}

@@ -169,8 +300,8 @@ alwaysApply: false
(dependencies.length > 0 ? chalk.white(`Dependencies: ${dependencies.join(', ')}`) + '\n' : '') +
'\n' +
chalk.white.bold('Next Steps:') + '\n' +
chalk.cyan(`1. Run ${chalk.yellow(`task-master show '${parentId}'`)} to see the parent task with all subtasks`) + '\n' +
chalk.cyan(`2. Run ${chalk.yellow(`task-master set-status --id='${parentId}.${subtask.id}' --status='in-progress'`)} to start working on it`),
{ padding: 1, borderColor: 'green', borderStyle: 'round', margin: { top: 1 } }
));
```
@@ -198,6 +329,60 @@ alwaysApply: false
};
```

## Context-Aware Command Pattern

For AI-powered commands that benefit from project context, follow the research command pattern:

- **Context Integration**:
  - ✅ DO: Use `ContextGatherer` utility for multi-source context extraction
  - ✅ DO: Support task IDs, file paths, custom context, and project tree
  - ✅ DO: Implement fuzzy search for automatic task discovery
  - ✅ DO: Display detailed token breakdown for transparency

```javascript
// ✅ DO: Follow this pattern for context-aware commands
programInstance
  .command('research')
  .description('Perform AI-powered research queries with project context')
  .argument('<prompt>', 'Research prompt to investigate')
  .option('-i, --id <ids>', 'Comma-separated task/subtask IDs to include as context')
  .option('-f, --files <paths>', 'Comma-separated file paths to include as context')
  .option('-c, --context <text>', 'Additional custom context')
  .option('--tree', 'Include project file tree structure')
  .option('-d, --detail <level>', 'Output detail level: low, medium, high', 'medium')
  .action(async (prompt, options) => {
    // 1. Parameter validation and parsing
    const taskIds = options.id ? parseTaskIds(options.id) : [];
    const filePaths = options.files ? parseFilePaths(options.files) : [];

    // 2. Initialize context gatherer
    const projectRoot = findProjectRoot() || '.';
    const gatherer = new ContextGatherer(projectRoot, tasksPath);

    // 3. Auto-discover relevant tasks if none specified
    if (taskIds.length === 0) {
      const fuzzySearch = new FuzzyTaskSearch(tasksData.tasks, 'research');
      const discoveredIds = fuzzySearch.getTaskIds(
        fuzzySearch.findRelevantTasks(prompt)
      );
      taskIds.push(...discoveredIds);
    }

    // 4. Gather context with token breakdown
    const contextResult = await gatherer.gather({
      tasks: taskIds,
      files: filePaths,
      customContext: options.context,
      includeProjectTree: options.tree, // Commander stores --tree as options.tree
      format: 'research',
      includeTokenCounts: true
    });

    // 5. Display token breakdown and execute AI call
    // Implementation continues...
  });
```

## Error Handling

- **Exception Management**:
@@ -245,7 +430,7 @@ alwaysApply: false
'  --option1 <value>    Description of option1 (required)\n' +
'  --option2 <value>    Description of option2\n\n' +
chalk.cyan('Examples:') + '\n' +
'  task-master command --option1=\'value1\' --option2=\'value2\'',
{ padding: 1, borderColor: 'blue', borderStyle: 'round' }
));
}
@@ -261,9 +446,9 @@ alwaysApply: false
process.on('uncaughtException', (err) => {
  // Handle Commander-specific errors
  if (err.code === 'commander.unknownOption') {
    const option = err.message.match(/'([^']+)'/)?.[1]; // Safely extract option name
    console.error(chalk.red(`Error: Unknown option '${option}'`));
    console.error(chalk.yellow("Run 'task-master <command> --help' to see available options"));
    process.exit(1);
  }

@@ -288,7 +473,7 @@ alwaysApply: false
// Provide more helpful error messages for common issues
if (error.message.includes('task') && error.message.includes('not found')) {
  console.log(chalk.yellow('\nTo fix this issue:'));
  console.log('  1. Run \'task-master list\' to see all available task IDs');
  console.log('  2. Use a valid task ID with the --id parameter');
} else if (error.message.includes('API key')) {
  console.log(chalk.yellow('\nThis error is related to API keys. Check your environment variables.'));
@@ -333,12 +518,12 @@ alwaysApply: false
.option('-f, --file <path>', 'Path to the tasks file', 'tasks/tasks.json')
.option('-p, --parent <id>', 'ID of the parent task (required)')
.option('-i, --task-id <id>', 'Existing task ID to convert to subtask')
.option('-t, --title <title>', 'Title for the new subtask, required if not converting')
.option('-d, --description <description>', 'Description for the new subtask, optional')
.option('--details <details>', 'Implementation details for the new subtask, optional')
.option('--dependencies <ids>', 'Comma-separated list of subtask IDs this subtask depends on')
.option('--status <status>', 'Initial status for the subtask', 'pending')
.option('--generate', 'Regenerate task files after adding subtask')
.action(async (options) => {
  // Validate required parameters
  if (!options.parent) {
@@ -358,9 +543,9 @@ alwaysApply: false
.command('remove-subtask')
.description('Remove a subtask from its parent task, optionally converting it to a standalone task')
.option('-f, --file <path>', 'Path to the tasks file', 'tasks/tasks.json')
.option('-i, --id <id>', 'ID of the subtask to remove in format parentId.subtaskId, required')
.option('-c, --convert', 'Convert the subtask to a standalone task instead of deleting')
.option('--generate', 'Regenerate task files after removing subtask')
.action(async (options) => {
  // Implementation with detailed error handling
})
@@ -382,7 +567,8 @@ alwaysApply: false
// ✅ DO: Implement version checking function
async function checkForUpdate() {
  // Implementation details...
  // Example return structure:
  return { currentVersion, latestVersion, updateAvailable };
}

// ✅ DO: Implement semantic version comparison
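A minimal comparison can be sketched as numeric comparison of dotted segments. This is an illustrative sketch under the assumption of plain `x.y.z` versions; the real implementation may use a semver library instead:

```javascript
// Compare two semantic version strings numerically, segment by segment.
// Returns -1, 0, or 1. Illustrative sketch, not the project's actual code.
function compareVersions(a, b) {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const x = pa[i] || 0;
    const y = pb[i] || 0;
    if (x < y) return -1;
    if (x > y) return 1;
  }
  return 0;
}

console.log(compareVersions('0.10.1', '0.9.9')); // 1 (10 > 9 numerically, not lexically)
```

Numeric comparison matters here: a naive string compare would rank `0.9.9` above `0.10.1`.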
@@ -422,7 +608,7 @@ alwaysApply: false

// After command execution, check if an update is available
const updateInfo = await updateCheckPromise;
if (updateInfo.updateAvailable) {
  displayUpgradeNotification(updateInfo.currentVersion, updateInfo.latestVersion);
}
} catch (error) {
@@ -432,3 +618,45 @@ alwaysApply: false
```

Refer to [`commands.js`](mdc:scripts/modules/commands.js) for implementation examples and [`new_features.mdc`](mdc:.cursor/rules/new_features.mdc) for integration guidelines.

// Helper function to show add-subtask command help
function showAddSubtaskHelp() {
  console.log(boxen(
    chalk.white.bold('Add Subtask Command Help') + '\n\n' +
    chalk.cyan('Usage:') + '\n' +
    `  task-master add-subtask --parent=<id> [options]\n\n` +
    chalk.cyan('Options:') + '\n' +
    '  -p, --parent <id>         Parent task ID (required)\n' +
    '  -i, --task-id <id>        Existing task ID to convert to subtask\n' +
    '  -t, --title <title>       Title for the new subtask\n' +
    '  -d, --description <text>  Description for the new subtask\n' +
    '  --details <text>          Implementation details for the new subtask\n' +
    '  --dependencies <ids>      Comma-separated list of dependency IDs\n' +
    '  -s, --status <status>     Status for the new subtask (default: "pending")\n' +
    '  -f, --file <file>         Path to the tasks file (default: "tasks/tasks.json")\n' +
    '  --generate                Regenerate task files after adding subtask\n\n' +
    chalk.cyan('Examples:') + '\n' +
    '  task-master add-subtask --parent=\'5\' --task-id=\'8\'\n' +
    '  task-master add-subtask -p \'5\' -t \'Implement login UI\' -d \'Create the login form\'\n' +
    '  task-master add-subtask -p \'5\' -t \'Handle API Errors\' --details "Handle 401 Unauthorized.\\nHandle 500 Server Error." --generate',
    { padding: 1, borderColor: 'blue', borderStyle: 'round' }
  ));
}

// Helper function to show remove-subtask command help
function showRemoveSubtaskHelp() {
  console.log(boxen(
    chalk.white.bold('Remove Subtask Command Help') + '\n\n' +
    chalk.cyan('Usage:') + '\n' +
    `  task-master remove-subtask --id=<parentId.subtaskId> [options]\n\n` +
    chalk.cyan('Options:') + '\n' +
    '  -i, --id <id>      Subtask ID(s) to remove in format "parentId.subtaskId" (can be comma-separated, required)\n' +
    '  -c, --convert      Convert the subtask to a standalone task instead of deleting it\n' +
    '  -f, --file <file>  Path to the tasks file (default: "tasks/tasks.json")\n' +
    '  --generate         Regenerate task files after removing subtask\n\n' +
    chalk.cyan('Examples:') + '\n' +
    '  task-master remove-subtask --id=\'5.2\'\n' +
    '  task-master remove-subtask --id=\'5.2,6.3,7.1\'\n' +
    '  task-master remove-subtask --id=\'5.2\' --convert',
    { padding: 1, borderColor: 'blue', borderStyle: 'round' }
  ));
}

.cursor/rules/context_gathering.mdc (new file, 268 lines)
@@ -0,0 +1,268 @@
|
|||||||
|
---
|
||||||
|
description: Standardized patterns for gathering and processing context from multiple sources in Task Master commands, particularly for AI-powered features.
|
||||||
|
globs:
|
||||||
|
alwaysApply: false
|
||||||
|
---
|
||||||
|
# Context Gathering Patterns and Utilities
|
||||||
|
|
||||||
|
This document outlines the standardized patterns for gathering and processing context from multiple sources in Task Master commands, particularly for AI-powered features.
|
||||||
|
|
||||||
|
## Core Context Gathering Utility
|
||||||
|
|
||||||
|
The `ContextGatherer` class (`scripts/modules/utils/contextGatherer.js`) provides a centralized, reusable utility for extracting context from multiple sources:
|
||||||
|
|
||||||
|
### **Key Features**
|
||||||
|
- **Multi-source Context**: Tasks, files, custom text, project file tree
|
||||||
|
- **Token Counting**: Detailed breakdown using `gpt-tokens` library
|
||||||
|
- **Format Support**: Different output formats (research, chat, system-prompt)
|
||||||
|
- **Error Handling**: Graceful handling of missing files, invalid task IDs
|
||||||
|
- **Performance**: File size limits, depth limits for tree generation
|
||||||
|
|
||||||
|
### **Usage Pattern**
|
||||||
|
```javascript
|
||||||
|
import { ContextGatherer } from '../utils/contextGatherer.js';
|
||||||
|
|
||||||
|
// Initialize with project paths
|
||||||
|
const gatherer = new ContextGatherer(projectRoot, tasksPath);
|
||||||
|
|
||||||
|
// Gather context with detailed token breakdown
|
||||||
|
const result = await gatherer.gather({
|
||||||
|
tasks: ['15', '16.2'], // Task and subtask IDs
|
||||||
|
files: ['src/api.js', 'README.md'], // File paths
|
||||||
|
customContext: 'Additional context text',
|
||||||
|
includeProjectTree: true, // Include file tree
|
||||||
|
format: 'research', // Output format
|
||||||
|
includeTokenCounts: true // Get detailed token breakdown
|
||||||
|
});
|
||||||
|
|
||||||
|
// Access results
|
||||||
|
const contextString = result.context;
|
||||||
|
const tokenBreakdown = result.tokenBreakdown;
|
||||||
|
```
|
||||||
|
|
||||||
|
### **Token Breakdown Structure**
|
||||||
|
```javascript
|
||||||
|
{
|
||||||
|
customContext: { tokens: 150, characters: 800 },
|
||||||
|
tasks: [
|
||||||
|
{ id: '15', type: 'task', title: 'Task Title', tokens: 245, characters: 1200 },
|
||||||
|
{ id: '16.2', type: 'subtask', title: 'Subtask Title', tokens: 180, characters: 900 }
|
||||||
|
],
|
||||||
|
files: [
|
||||||
|
{ path: 'src/api.js', tokens: 890, characters: 4500, size: '4.5 KB' }
|
||||||
|
],
|
||||||
|
projectTree: { tokens: 320, characters: 1600 },
|
||||||
|
total: { tokens: 1785, characters: 8000 }
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Fuzzy Search Integration
|
||||||
|
|
||||||
|
The `FuzzyTaskSearch` class (`scripts/modules/utils/fuzzyTaskSearch.js`) provides intelligent task discovery:
|
||||||
|
|
||||||
|
### **Key Features**
|
||||||
|
- **Semantic Matching**: Uses Fuse.js for similarity scoring
|
||||||
|
- **Purpose Categories**: Pattern-based task categorization
|
||||||
|
- **Relevance Scoring**: High/medium/low relevance thresholds
|
||||||
|
- **Context-Aware**: Different search configurations for different use cases
|
||||||
|
|
||||||
|
### **Usage Pattern**
|
||||||
|
```javascript
|
||||||
|
import { FuzzyTaskSearch } from '../utils/fuzzyTaskSearch.js';
|
||||||
|
|
||||||
|
// Initialize with tasks data and context
|
||||||
|
const fuzzySearch = new FuzzyTaskSearch(tasksData.tasks, 'research');
|
||||||
|
|
||||||
|
// Find relevant tasks
|
||||||
|
const searchResults = fuzzySearch.findRelevantTasks(query, {
|
||||||
|
maxResults: 8,
|
||||||
|
includeRecent: true,
|
||||||
|
includeCategoryMatches: true
|
||||||
|
});
|
||||||
|
|
||||||
|
// Get task IDs for context gathering
|
||||||
|
const taskIds = fuzzySearch.getTaskIds(searchResults);
|
||||||
|
```
|
||||||
|
|
||||||
|
## Implementation Patterns for Commands
|
||||||
|
|
||||||
|
### **1. Context-Aware Command Structure**
|
||||||
|
```javascript
|
||||||
|
// In command action handler
|
||||||
|
async function commandAction(prompt, options) {
|
||||||
|
// 1. Parameter validation and parsing
|
||||||
|
const taskIds = options.id ? parseTaskIds(options.id) : [];
|
||||||
|
const filePaths = options.files ? parseFilePaths(options.files) : [];
|
||||||
|
|
||||||
|
// 2. Initialize context gatherer
|
||||||
|
const projectRoot = findProjectRoot() || '.';
|
||||||
|
const tasksPath = path.join(projectRoot, 'tasks', 'tasks.json');
|
||||||
|
const gatherer = new ContextGatherer(projectRoot, tasksPath);
|
||||||
|
|
||||||
|
// 3. Auto-discover relevant tasks if none specified
|
||||||
|
if (taskIds.length === 0) {
|
||||||
|
const fuzzySearch = new FuzzyTaskSearch(tasksData.tasks, 'research');
|
||||||
|
const discoveredIds = fuzzySearch.getTaskIds(
|
||||||
|
fuzzySearch.findRelevantTasks(prompt)
|
||||||
|
);
|
||||||
|
taskIds.push(...discoveredIds);
|
||||||
|
}
|
||||||
|
|
||||||
|
// 4. Gather context with token breakdown
|
||||||
|
const contextResult = await gatherer.gather({
|
||||||
|
tasks: taskIds,
|
||||||
|
files: filePaths,
|
||||||
|
customContext: options.context,
|
||||||
|
includeProjectTree: options.projectTree,
|
||||||
|
format: 'research',
|
||||||
|
includeTokenCounts: true
|
||||||
|
});
|
||||||
|
|
||||||
|
// 5. Display token breakdown (for CLI)
|
||||||
|
if (outputFormat === 'text') {
|
||||||
|
displayDetailedTokenBreakdown(contextResult.tokenBreakdown);
|
||||||
|
}
|
||||||
|
|
||||||
|
// 6. Use context in AI call
|
||||||
|
const aiResult = await generateTextService(role, session, systemPrompt, userPrompt);
|
||||||
|
|
||||||
|
// 7. Display results with enhanced formatting
|
||||||
|
displayResults(aiResult, contextResult.tokenBreakdown);
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### **2. Token Display Pattern**
|
||||||
|
```javascript
|
||||||
|
function displayDetailedTokenBreakdown(tokenBreakdown, systemTokens, userTokens) {
|
||||||
|
const sections = [];
|
||||||
|
|
||||||
|
// Build context breakdown
|
||||||
|
if (tokenBreakdown.tasks?.length > 0) {
|
||||||
|
const taskDetails = tokenBreakdown.tasks.map(task =>
|
||||||
|
`${task.type === 'subtask' ? ' ' : ''}${task.id}: ${task.tokens.toLocaleString()}`
|
||||||
|
).join('\n');
|
||||||
|
sections.push(`Tasks (${tokenBreakdown.tasks.reduce((sum, t) => sum + t.tokens, 0).toLocaleString()}):\n${taskDetails}`);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (tokenBreakdown.files?.length > 0) {
|
||||||
|
const fileDetails = tokenBreakdown.files.map(file =>
|
||||||
|
` ${file.path}: ${file.tokens.toLocaleString()} (${file.size})`
|
||||||
|
).join('\n');
|
||||||
|
sections.push(`Files (${tokenBreakdown.files.reduce((sum, f) => sum + f.tokens, 0).toLocaleString()}):\n${fileDetails}`);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Add prompts breakdown
|
||||||
|
sections.push(`Prompts: system ${systemTokens.toLocaleString()}, user ${userTokens.toLocaleString()}`);
|
||||||
|
|
||||||
|
// Display in clean box
|
||||||
|
const content = sections.join('\n\n');
|
||||||
|
console.log(boxen(content, {
|
||||||
|
title: chalk.cyan('Token Usage'),
|
||||||
|
padding: { top: 1, bottom: 1, left: 2, right: 2 },
|
||||||
|
borderStyle: 'round',
|
||||||
|
borderColor: 'cyan'
|
||||||
|
}));
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### **3. Enhanced Result Display Pattern**
|
||||||
|
```javascript
|
||||||
|
function displayResults(result, query, detailLevel, tokenBreakdown) {
|
||||||
|
// Header with query info
|
||||||
|
const header = boxen(
|
||||||
|
chalk.green.bold('Research Results') + '\n\n' +
|
||||||
|
chalk.gray('Query: ') + chalk.white(query) + '\n' +
|
||||||
|
chalk.gray('Detail Level: ') + chalk.cyan(detailLevel),
|
||||||
|
{
|
||||||
|
padding: { top: 1, bottom: 1, left: 2, right: 2 },
|
||||||
|
margin: { top: 1, bottom: 0 },
|
||||||
|
borderStyle: 'round',
|
||||||
|
borderColor: 'green'
|
||||||
|
}
|
||||||
|
);
|
||||||
|
console.log(header);
|
||||||
|
|
||||||
|
// Process and highlight code blocks
|
||||||
|
const processedResult = processCodeBlocks(result);
|
||||||
|
|
||||||
|
// Main content in clean box
|
||||||
|
const contentBox = boxen(processedResult, {
|
||||||
|
padding: { top: 1, bottom: 1, left: 2, right: 2 },
|
||||||
|
margin: { top: 0, bottom: 1 },
|
||||||
|
borderStyle: 'single',
|
||||||
|
borderColor: 'gray'
|
||||||
|
});
|
||||||
|
console.log(contentBox);
|
||||||
|
|
||||||
|
console.log(chalk.green('✓ Research complete'));
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Code Block Enhancement
|
||||||
|
|
||||||
|
### **Syntax Highlighting Pattern**
|
||||||
|
```javascript
|
||||||
|
import { highlight } from 'cli-highlight';
|
||||||
|
|
||||||
|
function processCodeBlocks(text) {
|
||||||
|
return text.replace(/```(\w+)?\n([\s\S]*?)```/g, (match, language, code) => {
|
||||||
|
try {
|
||||||
|
const highlighted = highlight(code.trim(), {
|
||||||
|
language: language || 'javascript',
|
||||||
|
theme: 'default'
|
||||||
|
});
|
||||||
|
return `\n${highlighted}\n`;
|
||||||
|
} catch (error) {
|
||||||
|
return `\n${code.trim()}\n`;
|
||||||
|
}
|
||||||
|
});
|
||||||
|
}
|
||||||
|
```

## Integration Guidelines

### **When to Use Context Gathering**

- ✅ **DO**: Use for AI-powered commands that benefit from project context
- ✅ **DO**: Use when users might want to reference specific tasks or files
- ✅ **DO**: Use for research, analysis, or generation commands
- ❌ **DON'T**: Use for simple CRUD operations that don't need AI context

### **Performance Considerations**

- ✅ **DO**: Set reasonable file size limits (50KB default)
- ✅ **DO**: Limit project tree depth (3-5 levels)
- ✅ **DO**: Provide token counts to help users understand context size
- ✅ **DO**: Allow users to control what context is included

### **Error Handling**

- ✅ **DO**: Gracefully handle missing files with warnings
- ✅ **DO**: Validate task IDs and provide helpful error messages
- ✅ **DO**: Continue processing even if some context sources fail
- ✅ **DO**: Provide fallback behavior when context gathering fails
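
The continue-on-failure guidance above can be sketched as a loop that collects warnings instead of throwing. This is a hypothetical illustration; `gatherAll` and `loadSource` are made-up names, not the module's real API:

```javascript
// Hypothetical illustration of continue-on-failure context gathering.
// gatherAll and loadSource are made-up names, not the module's real API.
function gatherAll(sources, loadSource) {
  const context = [];
  const warnings = [];
  for (const source of sources) {
    try {
      context.push(loadSource(source));
    } catch (error) {
      // Record the failure and keep going so one bad source doesn't abort the run
      warnings.push(`Skipped ${source}: ${error.message}`);
    }
  }
  return { context, warnings };
}
```

The caller can then surface `warnings` to the user while still returning whatever context was successfully gathered.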

### **Future Command Integration**

Commands that should consider adopting this pattern:

- `analyze-complexity` - Could benefit from file context
- `expand-task` - Could use related task context
- `update-task` - Could reference similar tasks for consistency
- `add-task` - Could use project context for better task generation

## Export Patterns

### **Context Gatherer Module**

```javascript
export {
  ContextGatherer,
  createContextGatherer // Factory function
};
```

### **Fuzzy Search Module**

```javascript
export {
  FuzzyTaskSearch,
  PURPOSE_CATEGORIES,
  RELEVANCE_THRESHOLDS
};
```

This context gathering system provides a foundation for building more intelligent, context-aware commands that can leverage project knowledge to provide better AI-powered assistance.

@@ -1,345 +1,424 @@
|
|||||||
---
|
---
|
||||||
description: Guide for using meta-development script (scripts/dev.js) to manage task-driven development workflows
|
description: Guide for using Taskmaster to manage task-driven development workflows
|
||||||
globs: **/*
|
globs: **/*
|
||||||
alwaysApply: true
|
alwaysApply: true
|
||||||
---
|
---
|
||||||
|
|
||||||
- **Global CLI Commands**
|
# Taskmaster Development Workflow
|
||||||
- Task Master now provides a global CLI through the `task-master` command (See [`commands.mdc`](mdc:.cursor/rules/commands.mdc) for details)
|
|
||||||
- All functionality from `scripts/dev.js` is available through this interface
|
|
||||||
- Install globally with `npm install -g claude-task-master` or use locally via `npx`
|
|
||||||
- Use `task-master <command>` instead of `node scripts/dev.js <command>`
|
|
||||||
- Examples:
|
|
||||||
- `task-master list`
|
|
||||||
- `task-master next`
|
|
||||||
- `task-master expand --id=3`
|
|
||||||
- All commands accept the same options as their script equivalents
|
|
||||||
- The CLI (`task-master`) is the **primary** way for users to interact with the application.
|
|
||||||
|
|
||||||
- **Development Workflow Process**
|
This guide outlines the standard process for using Taskmaster to manage software development projects. It is written as a set of instructions for you, the AI agent.
|
||||||
- Start new projects by running `task-master init` or `node scripts/dev.js parse-prd --input=<prd-file.txt>` to generate initial tasks.json
|
|
||||||
- Begin coding sessions with `task-master list` to see current tasks, status, and IDs
|
|
||||||
- Analyze task complexity with `task-master analyze-complexity --research` before breaking down tasks
|
|
||||||
- Select tasks based on dependencies (all marked 'done'), priority level, and ID order
|
|
||||||
- Clarify tasks by checking task files in tasks/ directory or asking for user input
|
|
||||||
- View specific task details using `task-master show <id>` to understand implementation requirements
|
|
||||||
- Break down complex tasks using `task-master expand --id=<id>` with appropriate flags
|
|
||||||
- Clear existing subtasks if needed using `task-master clear-subtasks --id=<id>` before regenerating
|
|
||||||
- Implement code following task details, dependencies, and project standards
|
|
||||||
- Verify tasks according to test strategies before marking as complete
|
|
||||||
- Mark completed tasks with `task-master set-status --id=<id> --status=done`
|
|
||||||
- Update dependent tasks when implementation differs from original plan
|
|
||||||
- Generate task files with `task-master generate` after updating tasks.json
|
|
||||||
- Maintain valid dependency structure with `task-master fix-dependencies` when needed
|
|
||||||
- Respect dependency chains and task priorities when selecting work
|
|
||||||
- **MCP Server**: For integrations (like Cursor), interact via the MCP server which prefers direct function calls. Restart the MCP server if core logic in `scripts/modules` changes. See [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc).
|
|
||||||
- Report progress regularly using the list command
|
|
||||||
|
|
||||||
- **Task Complexity Analysis**
|
- **Your Default Stance**: For most projects, the user can work directly within the `master` task context. Your initial actions should operate on this default context unless a clear pattern for multi-context work emerges.
|
||||||
- Run `node scripts/dev.js analyze-complexity --research` for comprehensive analysis
|
- **Your Goal**: Your role is to elevate the user's workflow by intelligently introducing advanced features like **Tagged Task Lists** when you detect the appropriate context. Do not force tags on the user; suggest them as a helpful solution to a specific need.
|
||||||
- Review complexity report in scripts/task-complexity-report.json
|
|
||||||
- Or use `node scripts/dev.js complexity-report` for a formatted, readable version of the report
|
|
||||||
- Focus on tasks with highest complexity scores (8-10) for detailed breakdown
|
|
||||||
- Use analysis results to determine appropriate subtask allocation
|
|
||||||
- Note that reports are automatically used by the expand command
|
|
||||||
|
|
||||||
- **Task Breakdown Process**
|
## The Basic Loop
|
||||||
- For tasks with complexity analysis, use `node scripts/dev.js expand --id=<id>`
|
The fundamental development cycle you will facilitate is:
|
||||||
- Otherwise use `node scripts/dev.js expand --id=<id> --subtasks=<number>`
|
1. **`list`**: Show the user what needs to be done.
|
||||||
- Add `--research` flag to leverage Perplexity AI for research-backed expansion
|
2. **`next`**: Help the user decide what to work on.
|
||||||
- Use `--prompt="<context>"` to provide additional context when needed
|
3. **`show <id>`**: Provide details for a specific task.
|
||||||
- Review and adjust generated subtasks as necessary
|
4. **`expand <id>`**: Break down a complex task into smaller, manageable subtasks.
|
||||||
- Use `--all` flag to expand multiple pending tasks at once
|
5. **Implement**: The user writes the code and tests.
|
||||||
- If subtasks need regeneration, clear them first with `clear-subtasks` command (See Command Reference below)
|
6. **`update-subtask`**: Log progress and findings on behalf of the user.
|
||||||
|
7. **`set-status`**: Mark tasks and subtasks as `done` as work is completed.
|
||||||
|
8. **Repeat**.
|
||||||
|
|
||||||
- **Implementation Drift Handling**
|
All your standard command executions should operate on the user's current task context, which defaults to `master`.
|
||||||
- When implementation differs significantly from planned approach
|
|
||||||
- When future tasks need modification due to current implementation choices
|
|
||||||
- When new dependencies or requirements emerge
|
|
||||||
- Call `node scripts/dev.js update --from=<futureTaskId> --prompt="<explanation>"` to update tasks.json
|
|
||||||
|
|
||||||
- **Task Status Management**
|
---
|
||||||
- Use 'pending' for tasks ready to be worked on
|
|
||||||
- Use 'done' for completed and verified tasks
|
|
||||||
- Use 'deferred' for postponed tasks
|
|
||||||
- Add custom status values as needed for project-specific workflows
|
|
||||||
|
|
||||||
- **Task File Format Reference**
|
## Standard Development Workflow Process
|
||||||
```
|
|
||||||
# Task ID: <id>
|
|
||||||
# Title: <title>
|
|
||||||
# Status: <status>
|
|
||||||
# Dependencies: <comma-separated list of dependency IDs>
|
|
||||||
# Priority: <priority>
|
|
||||||
# Description: <brief description>
|
|
||||||
# Details:
|
|
||||||
<detailed implementation notes>
|
|
||||||
|
|
||||||
# Test Strategy:
|
### Simple Workflow (Default Starting Point)
|
||||||
<verification approach>
|
|
||||||
```
|
|
||||||
|
|
||||||
- **Command Reference: parse-prd**
|
For new projects or when users are getting started, operate within the `master` tag context:
|
||||||
- CLI Syntax: `task-master parse-prd --input=<prd-file.txt>`
|
|
||||||
- Description: Parses a PRD document and generates a `tasks.json` file with structured tasks
|
|
||||||
- Parameters:
|
|
||||||
- `--input=<file>`: Path to the PRD text file (default: sample-prd.txt)
|
|
||||||
- Example: `task-master parse-prd --input=requirements.txt`
|
|
||||||
- Notes: Will overwrite existing tasks.json file. Use with caution.
|
|
||||||
|
|
||||||
- **Command Reference: update**
|
- Start new projects by running `initialize_project` tool / `task-master init` or `parse_prd` / `task-master parse-prd --input='<prd-file.txt>'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to generate initial tasks.json with tagged structure
|
||||||
- CLI Syntax: `task-master update --from=<id> --prompt="<prompt>"`
|
- Configure rule sets during initialization with `--rules` flag (e.g., `task-master init --rules cursor,windsurf`) or manage them later with `task-master rules add/remove` commands
|
||||||
- Description: Updates tasks with ID >= specified ID based on the provided prompt
|
- Begin coding sessions with `get_tasks` / `task-master list` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to see current tasks, status, and IDs
|
||||||
- Parameters:
|
- Determine the next task to work on using `next_task` / `task-master next` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
|
||||||
- `--from=<id>`: Task ID from which to start updating (required)
|
- Analyze task complexity with `analyze_project_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) before breaking down tasks
|
||||||
- `--prompt="<text>"`: Explanation of changes or new context (required)
|
- Review complexity report using `complexity_report` / `task-master complexity-report` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
|
||||||
- Example: `task-master update --from=4 --prompt="Now we are using Express instead of Fastify."`
|
- Select tasks based on dependencies (all marked 'done'), priority level, and ID order
|
||||||
- Notes: Only updates tasks not marked as 'done'. Completed tasks remain unchanged.
|
- View specific task details using `get_task` / `task-master show <id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to understand implementation requirements
|
||||||
|
- Break down complex tasks using `expand_task` / `task-master expand --id=<id> --force --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) with appropriate flags like `--force` (to replace existing subtasks) and `--research`
|
||||||
|
- Implement code following task details, dependencies, and project standards
|
||||||
|
- Mark completed tasks with `set_task_status` / `task-master set-status --id=<id> --status=done` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
|
||||||
|
- Update dependent tasks when implementation differs from original plan using `update` / `task-master update --from=<id> --prompt="..."` or `update_task` / `task-master update-task --id=<id> --prompt="..."` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
|
||||||
|
|
||||||
- **Command Reference: update-task**
|
---
|
||||||
- CLI Syntax: `task-master update-task --id=<id> --prompt="<prompt>"`
|
|
||||||
- Description: Updates a single task by ID with new information
|
|
||||||
- Parameters:
|
|
||||||
- `--id=<id>`: ID of the task to update (required)
|
|
||||||
- `--prompt="<text>"`: New information or context to update the task (required)
|
|
||||||
- `--research`: Use Perplexity AI for research-backed updates
|
|
||||||
- Example: `task-master update-task --id=5 --prompt="Use JWT for authentication instead of sessions."`
|
|
||||||
- Notes: Only updates tasks not marked as 'done'. Preserves completed subtasks.
|
|
||||||
|
|
||||||
- **Command Reference: update-subtask**
|
## Leveling Up: Agent-Led Multi-Context Workflows
|
||||||
- CLI Syntax: `task-master update-subtask --id=<id> --prompt="<prompt>"`
|
|
||||||
- Description: Appends additional information to a specific subtask without replacing existing content
|
|
||||||
- Parameters:
|
|
||||||
- `--id=<id>`: ID of the subtask to update in format "parentId.subtaskId" (required)
|
|
||||||
- `--prompt="<text>"`: Information to add to the subtask (required)
|
|
||||||
- `--research`: Use Perplexity AI for research-backed updates
|
|
||||||
- Example: `task-master update-subtask --id=5.2 --prompt="Add details about API rate limiting."`
|
|
||||||
- Notes:
|
|
||||||
- Appends new information to subtask details with timestamp
|
|
||||||
- Does not replace existing content, only adds to it
|
|
||||||
- Uses XML-like tags to clearly mark added information
|
|
||||||
- Will not update subtasks marked as 'done' or 'completed'
|
|
||||||
|
|
||||||
- **Command Reference: generate**
|
While the basic workflow is powerful, your primary opportunity to add value is by identifying when to introduce **Tagged Task Lists**. These patterns are your tools for creating a more organized and efficient development environment for the user, especially if you detect agentic or parallel development happening across the same session.
|
||||||
- CLI Syntax: `task-master generate`
|
|
||||||
- Description: Generates individual task files in tasks/ directory based on tasks.json
|
|
||||||
- Parameters:
|
|
||||||
- `--file=<path>, -f`: Use alternative tasks.json file (default: 'tasks/tasks.json')
|
|
||||||
- `--output=<dir>, -o`: Output directory (default: 'tasks')
|
|
||||||
- Example: `task-master generate`
|
|
||||||
- Notes: Overwrites existing task files. Creates tasks/ directory if needed.
|
|
||||||
|
|
||||||
- **Command Reference: set-status**
|
**Critical Principle**: Most users should never see a difference in their experience. Only introduce advanced workflows when you detect clear indicators that the project has evolved beyond simple task management.
|
||||||
- CLI Syntax: `task-master set-status --id=<id> --status=<status>`
|
|
||||||
- Description: Updates the status of a specific task in tasks.json
|
|
||||||
- Parameters:
|
|
||||||
- `--id=<id>`: ID of the task to update (required)
|
|
||||||
- `--status=<status>`: New status value (required)
|
|
||||||
- Example: `task-master set-status --id=3 --status=done`
|
|
||||||
- Notes: Common values are 'done', 'pending', and 'deferred', but any string is accepted.
|
|
||||||
|
|
||||||
- **Command Reference: list**
|
### When to Introduce Tags: Your Decision Patterns
|
||||||
- CLI Syntax: `task-master list`
|
|
||||||
- Description: Lists all tasks in tasks.json with IDs, titles, and status
|
|
||||||
- Parameters:
|
|
||||||
- `--status=<status>, -s`: Filter by status
|
|
||||||
- `--with-subtasks`: Show subtasks for each task
|
|
||||||
- `--file=<path>, -f`: Use alternative tasks.json file (default: 'tasks/tasks.json')
|
|
||||||
- Example: `task-master list`
|
|
||||||
- Notes: Provides quick overview of project progress. Use at start of sessions.
|
|
||||||
|
|
||||||
- **Command Reference: expand**
|
Here are the patterns to look for. When you detect one, you should propose the corresponding workflow to the user.
|
||||||
- CLI Syntax: `task-master expand --id=<id> [--num=<number>] [--research] [--prompt="<context>"]`
|
|
||||||
- Description: Expands a task with subtasks for detailed implementation
|
|
||||||
- Parameters:
|
|
||||||
- `--id=<id>`: ID of task to expand (required unless using --all)
|
|
||||||
- `--all`: Expand all pending tasks, prioritized by complexity
|
|
||||||
- `--num=<number>`: Number of subtasks to generate (default: from complexity report)
|
|
||||||
- `--research`: Use Perplexity AI for research-backed generation
|
|
||||||
- `--prompt="<text>"`: Additional context for subtask generation
|
|
||||||
- `--force`: Regenerate subtasks even for tasks that already have them
|
|
||||||
- Example: `task-master expand --id=3 --num=5 --research --prompt="Focus on security aspects"`
|
|
||||||
- Notes: Uses complexity report recommendations if available.
|
|
||||||
|
|
||||||
- **Command Reference: analyze-complexity**
|
#### Pattern 1: Simple Git Feature Branching
|
||||||
- CLI Syntax: `task-master analyze-complexity [options]`
|
This is the most common and direct use case for tags.
|
||||||
- Description: Analyzes task complexity and generates expansion recommendations
|
|
||||||
- Parameters:
|
|
||||||
- `--output=<file>, -o`: Output file path (default: scripts/task-complexity-report.json)
|
|
||||||
- `--model=<model>, -m`: Override LLM model to use
|
|
||||||
- `--threshold=<number>, -t`: Minimum score for expansion recommendation (default: 5)
|
|
||||||
- `--file=<path>, -f`: Use alternative tasks.json file
|
|
||||||
- `--research, -r`: Use Perplexity AI for research-backed analysis
|
|
||||||
- Example: `task-master analyze-complexity --research`
|
|
||||||
- Notes: Report includes complexity scores, recommended subtasks, and tailored prompts.
|
|
||||||
|
|
||||||
- **Command Reference: clear-subtasks**
|
- **Trigger**: The user creates a new git branch (e.g., `git checkout -b feature/user-auth`).
|
||||||
- CLI Syntax: `task-master clear-subtasks --id=<id>`
|
- **Your Action**: Propose creating a new tag that mirrors the branch name to isolate the feature's tasks from `master`.
|
||||||
- Description: Removes subtasks from specified tasks to allow regeneration
|
- **Your Suggested Prompt**: *"I see you've created a new branch named 'feature/user-auth'. To keep all related tasks neatly organized and separate from your main list, I can create a corresponding task tag for you. This helps prevent merge conflicts in your `tasks.json` file later. Shall I create the 'feature-user-auth' tag?"*
|
||||||
- Parameters:
|
- **Tool to Use**: `task-master add-tag --from-branch`
|
||||||
- `--id=<id>`: ID or comma-separated IDs of tasks to clear subtasks from
|
|
||||||
- `--all`: Clear subtasks from all tasks
|
|
||||||
- Examples:
|
|
||||||
- `task-master clear-subtasks --id=3`
|
|
||||||
- `task-master clear-subtasks --id=1,2,3`
|
|
||||||
- `task-master clear-subtasks --all`
|
|
||||||
- Notes:
|
|
||||||
- Task files are automatically regenerated after clearing subtasks
|
|
||||||
- Can be combined with expand command to immediately generate new subtasks
|
|
||||||
- Works with both parent tasks and individual subtasks
|
|
||||||
|
|
||||||
- **Task Structure Fields**
|
#### Pattern 2: Team Collaboration
|
||||||
- **id**: Unique identifier for the task (Example: `1`)
|
- **Trigger**: The user mentions working with teammates (e.g., "My teammate Alice is handling the database schema," or "I need to review Bob's work on the API.").
|
||||||
- **title**: Brief, descriptive title (Example: `"Initialize Repo"`)
|
- **Your Action**: Suggest creating a separate tag for the user's work to prevent conflicts with shared master context.
|
||||||
- **description**: Concise summary of what the task involves (Example: `"Create a new repository, set up initial structure."`)
|
- **Your Suggested Prompt**: *"Since you're working with Alice, I can create a separate task context for your work to avoid conflicts. This way, Alice can continue working with the master list while you have your own isolated context. When you're ready to merge your work, we can coordinate the tasks back to master. Shall I create a tag for your current work?"*
|
||||||
- **status**: Current state of the task (Example: `"pending"`, `"done"`, `"deferred"`)
|
- **Tool to Use**: `task-master add-tag my-work --copy-from-current --description="My tasks while collaborating with Alice"`
|
||||||
- **dependencies**: IDs of prerequisite tasks (Example: `[1, 2]`)
|
|
||||||
|
#### Pattern 3: Experiments or Risky Refactors
|
||||||
|
- **Trigger**: The user wants to try something that might not be kept (e.g., "I want to experiment with switching our state management library," or "Let's refactor the old API module, but I want to keep the current tasks as a reference.").
|
||||||
|
- **Your Action**: Propose creating a sandboxed tag for the experimental work.
|
||||||
|
- **Your Suggested Prompt**: *"This sounds like a great experiment. To keep these new tasks separate from our main plan, I can create a temporary 'experiment-zustand' tag for this work. If we decide not to proceed, we can simply delete the tag without affecting the main task list. Sound good?"*
|
||||||
|
- **Tool to Use**: `task-master add-tag experiment-zustand --description="Exploring Zustand migration"`
|
||||||
|
|
||||||
|
#### Pattern 4: Large Feature Initiatives (PRD-Driven)
|
||||||
|
This is a more structured approach for significant new features or epics.
|
||||||
|
|
||||||
|
- **Trigger**: The user describes a large, multi-step feature that would benefit from a formal plan.
|
||||||
|
- **Your Action**: Propose a comprehensive, PRD-driven workflow.
|
||||||
|
- **Your Suggested Prompt**: *"This sounds like a significant new feature. To manage this effectively, I suggest we create a dedicated task context for it. Here's the plan: I'll create a new tag called 'feature-xyz', then we can draft a Product Requirements Document (PRD) together to scope the work. Once the PRD is ready, I'll automatically generate all the necessary tasks within that new tag. How does that sound?"*
|
||||||
|
- **Your Implementation Flow**:
|
||||||
|
1. **Create an empty tag**: `task-master add-tag feature-xyz --description "Tasks for the new XYZ feature"`. You can also start by creating a git branch if applicable, and then create the tag from that branch.
|
||||||
|
2. **Collaborate & Create PRD**: Work with the user to create a detailed PRD file (e.g., `.taskmaster/docs/feature-xyz-prd.txt`).
|
||||||
|
3. **Parse PRD into the new tag**: `task-master parse-prd .taskmaster/docs/feature-xyz-prd.txt --tag feature-xyz`
|
||||||
|
4. **Prepare the new task list**: Follow up by suggesting `analyze-complexity` and `expand-all` for the newly created tasks within the `feature-xyz` tag.
|
||||||
|
|
||||||
|
#### Pattern 5: Version-Based Development
|
||||||
|
Tailor your approach based on the project maturity indicated by tag names.
|
||||||
|
|
||||||
|
- **Prototype/MVP Tags** (`prototype`, `mvp`, `poc`, `v0.x`):
|
||||||
|
- **Your Approach**: Focus on speed and functionality over perfection
|
||||||
|
- **Task Generation**: Create tasks that emphasize "get it working" over "get it perfect"
|
||||||
|
- **Complexity Level**: Lower complexity, fewer subtasks, more direct implementation paths
|
||||||
|
- **Research Prompts**: Include context like "This is a prototype - prioritize speed and basic functionality over optimization"
|
||||||
|
- **Example Prompt Addition**: *"Since this is for the MVP, I'll focus on tasks that get core functionality working quickly rather than over-engineering."*
|
||||||
|
|
||||||
|
- **Production/Mature Tags** (`v1.0+`, `production`, `stable`):
|
||||||
|
- **Your Approach**: Emphasize robustness, testing, and maintainability
|
||||||
|
- **Task Generation**: Include comprehensive error handling, testing, documentation, and optimization
|
||||||
|
- **Complexity Level**: Higher complexity, more detailed subtasks, thorough implementation paths
|
||||||
|
- **Research Prompts**: Include context like "This is for production - prioritize reliability, performance, and maintainability"
|
||||||
|
- **Example Prompt Addition**: *"Since this is for production, I'll ensure tasks include proper error handling, testing, and documentation."*
|
||||||
|
|
||||||
|
### Advanced Workflow (Tag-Based & PRD-Driven)
|
||||||
|
|
||||||
|
**When to Transition**: Recognize when the project has evolved (or has initiated a project which existing code) beyond simple task management. Look for these indicators:
|
||||||
|
- User mentions teammates or collaboration needs
|
||||||
|
- Project has grown to 15+ tasks with mixed priorities
|
||||||
|
- User creates feature branches or mentions major initiatives
|
||||||
|
- User initializes Taskmaster on an existing, complex codebase
|
||||||
|
- User describes large features that would benefit from dedicated planning
|
||||||
|
|
||||||
|
**Your Role in Transition**: Guide the user to a more sophisticated workflow that leverages tags for organization and PRDs for comprehensive planning.
|
||||||
|
|
||||||
|
#### Master List Strategy (High-Value Focus)
|
||||||
|
Once you transition to tag-based workflows, the `master` tag should ideally contain only:
|
||||||
|
- **High-level deliverables** that provide significant business value
|
||||||
|
- **Major milestones** and epic-level features
|
||||||
|
- **Critical infrastructure** work that affects the entire project
|
||||||
|
- **Release-blocking** items
|
||||||
|
|
||||||
|
**What NOT to put in master**:
|
||||||
|
- Detailed implementation subtasks (these go in feature-specific tags' parent tasks)
|
||||||
|
- Refactoring work (create dedicated tags like `refactor-auth`)
|
||||||
|
- Experimental features (use `experiment-*` tags)
|
||||||
|
- Team member-specific tasks (use person-specific tags)
|
||||||
|
|
||||||
|
#### PRD-Driven Feature Development
|
||||||
|
|
||||||
|
**For New Major Features**:
|
||||||
|
1. **Identify the Initiative**: When user describes a significant feature
|
||||||
|
2. **Create Dedicated Tag**: `add_tag feature-[name] --description="[Feature description]"`
|
||||||
|
3. **Collaborative PRD Creation**: Work with user to create comprehensive PRD in `.taskmaster/docs/feature-[name]-prd.txt`
|
||||||
|
4. **Parse & Prepare**:
|
||||||
|
- `parse_prd .taskmaster/docs/feature-[name]-prd.txt --tag=feature-[name]`
|
||||||
|
- `analyze_project_complexity --tag=feature-[name] --research`
|
||||||
|
- `expand_all --tag=feature-[name] --research`
|
||||||
|
5. **Add Master Reference**: Create a high-level task in `master` that references the feature tag
|
||||||
|
|
||||||
|
**For Existing Codebase Analysis**:
|
||||||
|
When users initialize Taskmaster on existing projects:
|
||||||
|
1. **Codebase Discovery**: Use your native tools for producing deep context about the code base. You may use `research` tool with `--tree` and `--files` to collect up to date information using the existing architecture as context.
|
||||||
|
2. **Collaborative Assessment**: Work with user to identify improvement areas, technical debt, or new features
|
||||||
|
3. **Strategic PRD Creation**: Co-author PRDs that include:
|
||||||
|
- Current state analysis (based on your codebase research)
|
||||||
|
- Proposed improvements or new features
|
||||||
|
- Implementation strategy considering existing code
|
||||||
|
4. **Tag-Based Organization**: Parse PRDs into appropriate tags (`refactor-api`, `feature-dashboard`, `tech-debt`, etc.)
|
||||||
|
5. **Master List Curation**: Keep only the most valuable initiatives in master
|
||||||
|
|
||||||
|
The parse-prd's `--append` flag enables the user to parse multiple PRDs within tags or across tags. PRDs should be focused and the number of tasks they are parsed into should be strategically chosen relative to the PRD's complexity and level of detail.
|
||||||
|
|
||||||
|
### Workflow Transition Examples
|
||||||
|
|
||||||
|
**Example 1: Simple → Team-Based**
|
||||||
|
```
|
||||||
|
User: "Alice is going to help with the API work"
|
||||||
|
Your Response: "Great! To avoid conflicts, I'll create a separate task context for your work. Alice can continue with the master list while you work in your own context. When you're ready to merge, we can coordinate the tasks back together."
|
||||||
|
Action: add_tag my-api-work --copy-from-current --description="My API tasks while collaborating with Alice"
|
||||||
|
```
|
||||||
|
|
||||||
|
**Example 2: Simple → PRD-Driven**
|
||||||
|
```
|
||||||
|
User: "I want to add a complete user dashboard with analytics, user management, and reporting"
|
||||||
|
Your Response: "This sounds like a major feature that would benefit from detailed planning. Let me create a dedicated context for this work and we can draft a PRD together to ensure we capture all requirements."
|
||||||
|
Actions:
|
||||||
|
1. add_tag feature-dashboard --description="User dashboard with analytics and management"
|
||||||
|
2. Collaborate on PRD creation
|
||||||
|
3. parse_prd dashboard-prd.txt --tag=feature-dashboard
|
||||||
|
4. Add high-level "User Dashboard" task to master
|
||||||
|
```
|
||||||
|
|
||||||
|
**Example 3: Existing Project → Strategic Planning**
|
||||||
|
```
|
||||||
|
User: "I just initialized Taskmaster on my existing React app. It's getting messy and I want to improve it."
|
||||||
|
Your Response: "Let me research your codebase to understand the current architecture, then we can create a strategic plan for improvements."
|
||||||
|
Actions:
|
||||||
|
1. research "Current React app architecture and improvement opportunities" --tree --files=src/
|
||||||
|
2. Collaborate on improvement PRD based on findings
|
||||||
|
3. Create tags for different improvement areas (refactor-components, improve-state-management, etc.)
|
||||||
|
4. Keep only major improvement initiatives in master
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Primary Interaction: MCP Server vs. CLI
|
||||||
|
|
||||||
|
Taskmaster offers two primary ways to interact:
|
||||||
|
|
||||||
|
1. **MCP Server (Recommended for Integrated Tools)**:
|
||||||
|
- For AI agents and integrated development environments (like Cursor), interacting via the **MCP server is the preferred method**.
|
||||||
|
- The MCP server exposes Taskmaster functionality through a set of tools (e.g., `get_tasks`, `add_subtask`).
|
||||||
|
- This method offers better performance, structured data exchange, and richer error handling compared to CLI parsing.
|
||||||
|
- Refer to [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc) for details on the MCP architecture and available tools.
|
||||||
|
- A comprehensive list and description of MCP tools and their corresponding CLI commands can be found in [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc).
|
||||||
|
- **Restart the MCP server** if core logic in `scripts/modules` or MCP tool/direct function definitions change.
|
||||||
|
- **Note**: MCP tools fully support tagged task lists with complete tag management capabilities.
|
||||||
|
|
||||||
|
2. **`task-master` CLI (For Users & Fallback)**:
|
||||||
|
- The global `task-master` command provides a user-friendly interface for direct terminal interaction.
|
||||||
|
- It can also serve as a fallback if the MCP server is inaccessible or a specific function isn't exposed via MCP.
|
||||||
|
- Install globally with `npm install -g task-master-ai` or use locally via `npx task-master-ai ...`.
|
||||||
|
- The CLI commands often mirror the MCP tools (e.g., `task-master list` corresponds to `get_tasks`).
|
||||||
|
- Refer to [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc) for a detailed command reference.
|
||||||
|
- **Tagged Task Lists**: CLI fully supports the new tagged system with seamless migration.
|
||||||
|
|
||||||
|
## How the Tag System Works (For Your Reference)

- **Data Structure**: Tasks are organized into separate contexts (tags) like "master", "feature-branch", or "v2.0".
- **Silent Migration**: Existing projects automatically migrate to use a "master" tag with zero disruption.
- **Context Isolation**: Tasks in different tags are completely separate. Changes in one tag do not affect any other tag.
- **Manual Control**: The user is always in control. There is no automatic switching. You facilitate switching by using `use-tag <name>`.
- **Full CLI & MCP Support**: All tag management commands are available through both the CLI and MCP tools for you to use. Refer to [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc) for a full command list.

---

## Task Complexity Analysis

- Run `analyze_project_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) for comprehensive analysis.
- Review the complexity report via `complexity_report` / `task-master complexity-report` for a formatted, readable version.
- Focus on tasks with the highest complexity scores (8-10) for detailed breakdown.
- Use the analysis results to determine appropriate subtask allocation.
- Note that reports are automatically used by the `expand_task` tool/command.

## Task Breakdown Process

- Use `expand_task` / `task-master expand --id=<id>`. It automatically uses the complexity report if found; otherwise it generates a default number of subtasks.
- Use `--num=<number>` to specify an explicit number of subtasks, overriding defaults or complexity report recommendations.
- Add the `--research` flag to leverage Perplexity AI for research-backed expansion.
- Add the `--force` flag to clear existing subtasks before generating new ones (the default is to append).
- Use `--prompt="<context>"` to provide additional context when needed.
- Review and adjust generated subtasks as necessary.
- Use the `expand_all` tool or `task-master expand --all` to expand multiple pending tasks at once, respecting flags like `--force` and `--research`.
- If subtasks need complete replacement (regardless of the `--force` flag on `expand`), clear them first with `clear_subtasks` / `task-master clear-subtasks --id=<id>`.

## Implementation Drift Handling

- When implementation differs significantly from the planned approach
- When future tasks need modification due to current implementation choices
- When new dependencies or requirements emerge
- Use `update` / `task-master update --from=<futureTaskId> --prompt='<explanation>\nUpdate context...' --research` to update multiple future tasks.
- Use `update_task` / `task-master update-task --id=<taskId> --prompt='<explanation>\nUpdate context...' --research` to update a single specific task.

## Task Status Management

- Use 'pending' for tasks ready to be worked on
- Use 'done' for completed and verified tasks
- Use 'deferred' for postponed tasks
- Add custom status values as needed for project-specific workflows

## Task Structure Fields

- **id**: Unique identifier for the task (Example: `1`, `1.1`)
- **title**: Brief, descriptive title (Example: `"Initialize Repo"`)
- **description**: Concise summary of what the task involves (Example: `"Create a new repository, set up initial structure."`)
- **status**: Current state of the task (Example: `"pending"`, `"done"`, `"deferred"`)
- **dependencies**: IDs of prerequisite tasks (Example: `[1, 2.1]`)
  - Dependencies are displayed with status indicators (✅ for completed, ⏱️ for pending)
  - This helps quickly identify which prerequisite tasks are blocking work
- **priority**: Importance level (Example: `"high"`, `"medium"`, `"low"`)
- **details**: In-depth implementation instructions (Example: `"Use GitHub client ID/secret, handle callback, set session token."`)
- **testStrategy**: Verification approach (Example: `"Deploy and call endpoint to confirm 'Hello World' response."`)
- **subtasks**: List of smaller, more specific tasks (Example: `[{"id": 1, "title": "Configure OAuth", ...}]`)
- Refer to task structure details (previously linked to `tasks.mdc`).
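
Putting the fields above together, a single task entry might look like the following sketch. All values are illustrative, drawn from the examples in the field list rather than from a real project:

```json
{
  "id": 1,
  "title": "Initialize Repo",
  "description": "Create a new repository, set up initial structure.",
  "status": "pending",
  "dependencies": [],
  "priority": "high",
  "details": "Use GitHub client ID/secret, handle callback, set session token.",
  "testStrategy": "Deploy and call endpoint to confirm 'Hello World' response.",
  "subtasks": [
    { "id": 1, "title": "Configure OAuth", "status": "pending" }
  ]
}
```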

## Configuration Management (Updated)

Taskmaster configuration is managed through three main mechanisms:

1. **`.taskmaster/config.json` File (Primary):**

    * Located in the project root directory.
    * Stores most configuration settings: AI model selections (main, research, fallback), parameters (max tokens, temperature), logging level, default subtasks/priority, project name, etc.
    * **Tagged System Settings**: Includes `global.defaultTag` (defaults to "master") and a `tags` section for tag management configuration.
    * **Managed via the `task-master models --setup` command.** Do not edit manually unless you know what you are doing.
    * **View or set specific models via the `task-master models` command or the `models` MCP tool.**
    * Created automatically when you run `task-master models --setup` for the first time or during tagged system migration.
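
As a rough illustration of the kinds of settings stored there, a sketch of the file might look like this. The key names are assumptions; the actual schema is owned by `task-master models --setup` and may differ:

```json
{
  "models": {
    "main": "...",
    "research": "...",
    "fallback": "..."
  },
  "global": {
    "logLevel": "info",
    "defaultSubtasks": 3,
    "defaultPriority": "medium",
    "defaultTag": "master"
  }
}
```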

2. **Environment Variables (`.env` / `mcp.json`):**

    * Used **only** for sensitive API keys and specific endpoint URLs.
    * Place API keys (one per provider) in a `.env` file in the project root for CLI usage.
    * For MCP/Cursor integration, configure these keys in the `env` section of `.cursor/mcp.json`.
    * Available keys/variables: See `assets/env.example` or the Configuration section in the command reference (previously linked to `taskmaster.mdc`).
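
For MCP/Cursor, the key placement described above looks roughly like the sketch below. The server entry and the specific keys shown are illustrative; see `assets/env.example` for the actual variable list:

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "ANTHROPIC_API_KEY": "sk-ant-...",
        "PERPLEXITY_API_KEY": "pplx-..."
      }
    }
  }
}
```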

3. **`.taskmaster/state.json` File (Tagged System State):**

    * Tracks the current tag context and migration status.
    * Automatically created during tagged system migration.
    * Contains: `currentTag`, `lastSwitched`, `migrationNoticeShown`.
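
Based on the fields listed above, `state.json` is a small file along these lines (values illustrative):

```json
{
  "currentTag": "master",
  "lastSwitched": "2025-06-01T12:00:00.000Z",
  "migrationNoticeShown": true
}
```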

**Important:** Non-API-key settings (like model selections, `MAX_TOKENS`, `TASKMASTER_LOG_LEVEL`) are **no longer configured via environment variables**. Use the `task-master models` command (or `--setup` for interactive configuration) or the `models` MCP tool.

**If AI commands FAIL in MCP,** verify that the API key for the selected provider is present in the `env` section of `.cursor/mcp.json`.

**If AI commands FAIL in CLI,** verify that the API key for the selected provider is present in the `.env` file in the root of the project.

## Rules Management

Taskmaster supports multiple AI coding assistant rule sets that can be configured during project initialization or managed afterward:

- **Available Profiles**: Claude Code, Cline, Codex, Cursor, Roo Code, Trae, Windsurf (claude, cline, codex, cursor, roo, trae, windsurf)
- **During Initialization**: Use `task-master init --rules cursor,windsurf` to specify which rule sets to include
- **After Initialization**: Use `task-master rules add <profiles>` or `task-master rules remove <profiles>` to manage rule sets
- **Interactive Setup**: Use `task-master rules setup` to launch an interactive prompt for selecting rule profiles
- **Default Behavior**: If no `--rules` flag is specified during initialization, all available rule profiles are included
- **Rule Structure**: Each profile creates its own directory (e.g., `.cursor/rules`, `.roo/rules`) with appropriate configuration files

## Determining the Next Task

- Run `next_task` / `task-master next` to show the next task to work on.
- The command identifies tasks with all dependencies satisfied.
- Tasks are prioritized by priority level, dependency count, and ID.
- The command shows comprehensive task information, including:
  - Basic task details and description
  - Implementation details
  - Subtasks (if they exist)
  - Contextual suggested actions
- Recommended before starting any new development work.
- Respects your project's dependency structure.
- Ensures tasks are completed in the appropriate sequence.
- Provides ready-to-use commands for common task actions.

## Viewing Specific Task Details

- Run `get_task` / `task-master show <id>` to view a specific task.
- Use dot notation for subtasks: `task-master show 1.2` (shows subtask 2 of task 1).
- Displays comprehensive information similar to the `next` command, but for a specific task.
- For parent tasks, shows all subtasks and their current status.
- For subtasks, shows parent task information and relationship.
- Provides contextual suggested actions appropriate for the specific task.
- Useful for examining task details before implementation or checking status.

## Managing Task Dependencies

- Use `add_dependency` / `task-master add-dependency --id=<id> --depends-on=<id>` to add a dependency.
- Use `remove_dependency` / `task-master remove-dependency --id=<id> --depends-on=<id>` to remove a dependency.
- The system prevents circular dependencies and duplicate dependency entries.
- Dependencies are checked for existence before being added or removed.
- Task files are automatically regenerated after dependency changes.
- Dependencies are visualized with status indicators in task listings and files.

## Task Reorganization

- Use `move_task` / `task-master move --from=<id> --to=<id>` to move tasks or subtasks within the hierarchy.
- This command supports several use cases:
  - Moving a standalone task to become a subtask (e.g., `--from=5 --to=7`)
  - Moving a subtask to become a standalone task (e.g., `--from=5.2 --to=7`)
  - Moving a subtask to a different parent (e.g., `--from=5.2 --to=7.3`)
  - Reordering subtasks within the same parent (e.g., `--from=5.2 --to=5.4`)
  - Moving a task to a new, non-existent ID position (e.g., `--from=5 --to=25`)
  - Moving multiple tasks at once using comma-separated IDs (e.g., `--from=10,11,12 --to=16,17,18`)
- The system includes validation to prevent data loss:
  - Allows moving to non-existent IDs by creating placeholder tasks
  - Prevents moving to existing task IDs that have content (to avoid overwriting)
  - Validates that source tasks exist before attempting to move them
- The system maintains proper parent-child relationships and dependency integrity.
- Task files are automatically regenerated after the move operation.
- This provides greater flexibility in organizing and refining your task structure as project understanding evolves.
- This is especially useful for resolving merge conflicts that arise when teams create tasks on separate branches: move your tasks to new IDs and keep theirs.

## Iterative Subtask Implementation

Once a task has been broken down into subtasks using `expand_task` or similar methods, follow this iterative process for implementation:
1. **Understand the Goal (Preparation):**

    * Use `get_task` / `task-master show <subtaskId>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to thoroughly understand the specific goals and requirements of the subtask.

2. **Initial Exploration & Planning (Iteration 1):**

    * This is the first attempt at creating a concrete implementation plan.
    * Explore the codebase to identify the precise files, functions, and even specific lines of code that will need modification.
    * Determine the intended code changes (diffs) and their locations.
    * Gather *all* relevant details from this exploration phase.

3. **Log the Plan:**

    * Run `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<detailed plan>'`.
    * Provide the *complete and detailed* findings from the exploration phase in the prompt. Include file paths, line numbers, proposed diffs, reasoning, and any potential challenges identified. Do not omit details. The goal is to create a rich, timestamped log within the subtask's `details`.

4. **Verify the Plan:**

    * Run `get_task` / `task-master show <subtaskId>` again to confirm that the detailed implementation plan has been successfully appended to the subtask's details.

5. **Begin Implementation:**

    * Set the subtask status using `set_task_status` / `task-master set-status --id=<subtaskId> --status=in-progress`.
    * Start coding based on the logged plan.

6. **Refine and Log Progress (Iteration 2+):**

    * As implementation progresses, you will encounter challenges, discover nuances, or confirm successful approaches.
    * **Before appending new information**: Briefly review the *existing* details logged in the subtask (using `get_task` or recalling from context) to ensure the update adds fresh insights and avoids redundancy.
    * **Regularly** use `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<update details>\n- What worked...\n- What didn't work...'` to append new findings.
    * **Crucially, log:**
        * What worked ("fundamental truths" discovered).
        * What didn't work and why (to avoid repeating mistakes).
        * Specific code snippets or configurations that were successful.
        * Decisions made, especially if confirmed with user input.
        * Any deviations from the initial plan and the reasoning.
    * The objective is to continuously enrich the subtask's details, creating a log of the implementation journey that helps the AI (and human developers) learn, adapt, and avoid repeating errors.

7. **Review & Update Rules (Post-Implementation):**

    * Once the implementation for the subtask is functionally complete, review all code changes and the relevant chat history.
    * Identify any new or modified code patterns, conventions, or best practices established during the implementation.
    * Create new or update existing rules following internal guidelines (previously linked to `cursor_rules.mdc` and `self_improve.mdc`).

8. **Mark Task Complete:**

    * After verifying the implementation and updating any necessary rules, mark the subtask as completed: `set_task_status` / `task-master set-status --id=<subtaskId> --status=done`.

9. **Commit Changes (If using Git):**

    * Stage the relevant code changes and any updated/new rule files (`git add .`).
    * Craft a comprehensive Git commit message summarizing the work done for the subtask, including both code implementation and any rule adjustments.
    * Execute the commit command directly in the terminal (e.g., `git commit -m 'feat(module): Implement feature X for subtask <subtaskId>\n\n- Details about changes...\n- Updated rule Y for pattern Z'`).
    * Consider whether a Changeset is needed according to internal versioning guidelines (previously linked to `changeset.mdc`). If so, run `npm run changeset`, stage the generated file, and amend the commit or create a new one.

10. **Proceed to Next Subtask:**

    * Identify the next subtask (e.g., using `next_task` / `task-master next`).

## Code Analysis & Refactoring Techniques

- **Top-Level Function Search**:
  - Useful for understanding module structure or planning refactors.
  - Use grep/ripgrep to find exported functions/constants:
    `rg "export (async function|function|const) \w+"` or similar patterns.
  - Can help compare functions between files during migrations or identify potential naming conflicts.
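
As a quick, self-contained illustration of that search (using `grep -E` as a portable stand-in for `rg`, against a throwaway file at a hypothetical path):

```shell
# Create a throwaway module to search (illustrative content only).
cat > /tmp/sample_module.js <<'EOF'
export function getTasks() {}
export const addSubtask = () => {};
function internalHelper() {}
export async function expandTask() {}
EOF

# Find top-level exported functions/constants, mirroring the rg pattern above.
grep -E "export (async function|function|const) \w+" /tmp/sample_module.js
```

The three `export` lines are printed; `internalHelper` is skipped because it is not exported.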

---

*This workflow provides a general guideline. Adapt it based on your specific project needs and team practices.*

`.cursor/rules/git_workflow.mdc` (new file, 404 lines):

---
description: Git workflow integrated with Task Master for feature development and collaboration
globs: "**/*"
alwaysApply: true
---

# Git Workflow with Task Master Integration

## **Branch Strategy**

### **Main Branch Protection**

- **main** branch contains production-ready code
- All feature development happens on task-specific branches
- Direct commits to main are prohibited
- All changes merge via Pull Requests

### **Task Branch Naming**

```bash
# ✅ DO: Use consistent task branch naming
task-001    # For Task 1
task-004    # For Task 4
task-015    # For Task 15

# ❌ DON'T: Use inconsistent naming
feature/user-auth
fix-database-issue
random-branch-name
```
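
The convention above is easy to enforce mechanically. A minimal sketch, assuming the three-digit `task-NNN` form shown (the helper name is hypothetical, not part of Task Master):

```shell
# Hypothetical helper: accept only the task-NNN branch naming convention.
is_task_branch() {
  echo "$1" | grep -Eq '^task-[0-9]{3}$'
}

is_task_branch "task-004" && echo "ok: task-004"
is_task_branch "feature/user-auth" || echo "rejected: feature/user-auth"
```

A check like this could run in a pre-push hook to catch inconsistent names before they reach the remote.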
## **Tagged Task Lists Integration**

Task Master's **tagged task lists system** provides significant benefits for Git workflows:

### **Multi-Context Development**

- **Branch-Specific Tasks**: Each branch can have its own task context using tags
- **Merge Conflict Prevention**: Tasks in different tags are completely isolated
- **Context Switching**: Seamlessly switch between different development contexts
- **Parallel Development**: Multiple team members can work on separate task contexts

### **Migration and Compatibility**

- **Seamless Migration**: Existing projects automatically migrate to use a "master" tag
- **Zero Disruption**: All existing Git workflows continue unchanged
- **Backward Compatibility**: Legacy projects work exactly as before

### **Manual Git Integration**

- **Manual Tag Creation**: Use the `--from-branch` option to create tags from the current git branch
- **Manual Context Switching**: Explicitly switch tag contexts as needed for different branches
- **Simplified Integration**: Focused on manual control rather than automatic workflows

## **Workflow Overview**

```mermaid
flowchart TD
    A[Start: On main branch] --> B[Pull latest changes]
    B --> C[Create task branch<br/>git checkout -b task-XXX]
    C --> D[Set task status: in-progress]
    D --> E[Get task context & expand if needed<br/>Tasks automatically use current tag]
    E --> F[Identify next subtask]

    F --> G[Set subtask: in-progress]
    G --> H[Research & collect context<br/>update_subtask with findings]
    H --> I[Implement subtask]
    I --> J[Update subtask with completion]
    J --> K[Set subtask: done]
    K --> L[Git commit subtask]

    L --> M{More subtasks?}
    M -->|Yes| F
    M -->|No| N[Run final tests]

    N --> O[Commit tests if added]
    O --> P[Push task branch]
    P --> Q[Create Pull Request]
    Q --> R[Human review & merge]
    R --> S[Switch to main & pull]
    S --> T[Delete task branch]
    T --> U[Ready for next task]

    style A fill:#e1f5fe
    style C fill:#f3e5f5
    style G fill:#fff3e0
    style L fill:#e8f5e8
    style Q fill:#fce4ec
    style R fill:#f1f8e9
    style U fill:#e1f5fe
```

## **Complete Task Development Workflow**

### **Phase 1: Task Preparation**

```bash
# 1. Ensure you're on main branch and pull latest
git checkout main
git pull origin main

# 2. Check current branch status
git branch  # Verify you're on main

# 3. Create task-specific branch
git checkout -b task-004   # For Task 4

# 4. Set task status in Task Master (tasks automatically use current tag context)
# Use: set_task_status tool or `task-master set-status --id=4 --status=in-progress`
```

### **Phase 2: Task Analysis & Planning**

```bash
# 5. Get task context and expand if needed (uses current tag automatically)
# Use: get_task tool or `task-master show 4`
# Use: expand_task tool or `task-master expand --id=4 --research --force` (if complex)

# 6. Identify next subtask to work on
# Use: next_task tool or `task-master next`
```

### **Phase 3: Subtask Implementation Loop**

For each subtask, follow this pattern:

```bash
# 7. Mark subtask as in-progress
# Use: set_task_status tool or `task-master set-status --id=4.1 --status=in-progress`

# 8. Gather context and research (if needed)
# Use: update_subtask tool with research flag or:
# `task-master update-subtask --id=4.1 --prompt="Research findings..." --research`

# 9. Collect code context through AI exploration
# Document findings in the subtask using update_subtask

# 10. Implement the subtask
# Write code, tests, documentation

# 11. Update subtask with completion details
# Use: update_subtask tool or:
# `task-master update-subtask --id=4.1 --prompt="Implementation complete..."`

# 12. Mark subtask as done
# Use: set_task_status tool or `task-master set-status --id=4.1 --status=done`

# 13. Commit the subtask implementation
git add .
git commit -m "feat(task-4): Complete subtask 4.1 - [Subtask Title]

- Implementation details
- Key changes made
- Any important notes

Subtask 4.1: [Brief description of what was accomplished]

Relates to Task 4: [Main task title]"
```

### **Phase 4: Task Completion**

```bash
# 14. When all subtasks are complete, run final testing
# Create test file if needed, ensure all tests pass
npm test   # or jest, or manual testing

# 15. If tests were added/modified, commit them
git add .
git commit -m "test(task-4): Add comprehensive tests for Task 4

- Unit tests for core functionality
- Integration tests for API endpoints
- All tests passing

Task 4: [Main task title] - Testing complete"

# 16. Push the task branch
git push origin task-004

# 17. Create Pull Request
# Title: "Task 4: [Task Title]"
# Description should include:
# - Task overview
# - Subtasks completed
# - Testing approach
# - Any breaking changes or considerations
```

### **Phase 5: PR Merge & Cleanup**

```bash
# 18. Human reviews and merges PR into main

# 19. Switch back to main and pull merged changes
git checkout main
git pull origin main

# 20. Delete the feature branch (optional cleanup)
git branch -d task-004
git push origin --delete task-004
```

## **Commit Message Standards**

### **Subtask Commits**

```bash
# ✅ DO: Consistent subtask commit format
git commit -m "feat(task-4): Complete subtask 4.1 - Initialize Express server

- Set up Express.js with TypeScript configuration
- Added CORS and body parsing middleware
- Implemented health check endpoints
- Basic error handling middleware

Subtask 4.1: Initialize project with npm and install dependencies
Relates to Task 4: Setup Express.js Server Project"

# ❌ DON'T: Vague or inconsistent commits
git commit -m "fixed stuff"
git commit -m "working on task"
```

### **Test Commits**

```bash
# ✅ DO: Separate test commits when substantial
git commit -m "test(task-4): Add comprehensive tests for Express server setup

- Unit tests for middleware configuration
- Integration tests for health check endpoints
- Mock tests for database connection
- All tests passing with 95% coverage

Task 4: Setup Express.js Server Project - Testing complete"
```

### **Commit Type Prefixes**

- `feat(task-X):` - New feature implementation
- `fix(task-X):` - Bug fixes
- `test(task-X):` - Test additions/modifications
- `docs(task-X):` - Documentation updates
- `refactor(task-X):` - Code refactoring
- `chore(task-X):` - Build/tooling changes

## **Task Master Commands Integration**

### **Essential Commands for Git Workflow**

```bash
# Task management (uses current tag context automatically)
task-master show <id>                  # Get task/subtask details
task-master next                       # Find next task to work on
task-master set-status --id=<id> --status=<status>
task-master update-subtask --id=<id> --prompt="..." --research

# Task expansion (for complex tasks)
task-master expand --id=<id> --research --force

# Progress tracking
task-master list                       # View all tasks and status
task-master list --status=in-progress  # View active tasks
```

### **MCP Tool Equivalents**

When using Cursor or other MCP-integrated tools:

- `get_task` instead of `task-master show`
- `next_task` instead of `task-master next`
- `set_task_status` instead of `task-master set-status`
- `update_subtask` instead of `task-master update-subtask`

## **Branch Management Rules**

### **Branch Protection**

```bash
# ✅ DO: Always work on task branches
git checkout -b task-005
# Make changes
git commit -m "..."
git push origin task-005

# ❌ DON'T: Commit directly to main
git checkout main
git commit -m "..."   # NEVER do this
```

### **Keeping Branches Updated**

```bash
# ✅ DO: Regularly sync with main (for long-running tasks)
git checkout task-005
git fetch origin
git rebase origin/main   # or merge if preferred

# Resolve any conflicts and continue
```

## **Pull Request Guidelines**

### **PR Title Format**

```
Task <ID>: <Task Title>

Examples:
Task 4: Setup Express.js Server Project
Task 7: Implement User Authentication
Task 12: Add Stripe Payment Integration
```

### **PR Description Template**

```markdown
## Task Overview
Brief description of the main task objective.

## Subtasks Completed
- [x] 4.1: Initialize project with npm and install dependencies
- [x] 4.2: Configure TypeScript, ESLint and Prettier
- [x] 4.3: Create basic Express app with middleware and health check route

## Implementation Details
- Key architectural decisions made
- Important code changes
- Any deviations from original plan

## Testing
- [ ] Unit tests added/updated
- [ ] Integration tests passing
- [ ] Manual testing completed

## Breaking Changes
List any breaking changes or migration requirements.

## Related Tasks
Mention any dependent tasks or follow-up work needed.
```

## **Conflict Resolution**

### **Task Conflicts with Tagged System**

```bash
# With tagged task lists, merge conflicts are significantly reduced:
# 1. Different branches can use different tag contexts
# 2. Tasks in separate tags are completely isolated
# 3. Use Task Master's move functionality to reorganize if needed

# Manual git integration available:
# - Use `task-master add-tag --from-branch` to create tags from current branch
# - Manually switch contexts with `task-master use-tag <name>`
# - Simple, predictable workflow without automatic behavior
```

### **Code Conflicts**

```bash
# Standard Git conflict resolution
git fetch origin
git rebase origin/main
# Resolve conflicts in files
git add .
git rebase --continue
```

## **Emergency Procedures**

### **Hotfixes**

```bash
# For urgent production fixes:
git checkout main
git pull origin main
git checkout -b hotfix-urgent-issue

# Make minimal fix
git commit -m "hotfix: Fix critical production issue

- Specific fix description
- Minimal impact change
- Requires immediate deployment"

git push origin hotfix-urgent-issue
# Create emergency PR for immediate review
```

### **Task Abandonment**

```bash
# If task needs to be abandoned or significantly changed:
# 1. Update task status
task-master set-status --id=<id> --status=cancelled

# 2. Clean up branch
git checkout main
git branch -D task-<id>
git push origin --delete task-<id>

# 3. Document reasoning in task
task-master update-task --id=<id> --prompt="Task cancelled due to..."
```

## **Tagged System Benefits for Git Workflows**

### **Multi-Team Development**

- **Isolated Contexts**: Different teams can work on separate tag contexts without conflicts
- **Feature Branches**: Each feature branch can have its own task context
- **Release Management**: Separate tags for different release versions or environments

### **Merge Conflict Prevention**

- **Context Separation**: Tasks in different tags don't interfere with each other
- **Clean Merges**: Reduced likelihood of task-related merge conflicts
- **Parallel Development**: Multiple developers can work simultaneously without task conflicts

### **Manual Git Integration**

- **Branch-Based Tag Creation**: Use the `--from-branch` option to create tags from the current git branch
- **Manual Context Management**: Explicitly switch tag contexts as needed
- **Predictable Workflow**: Simple, manual control without automatic behavior

---

**References:**

- [Task Master Workflow](mdc:.cursor/rules/dev_workflow.mdc)
- [Architecture Guidelines](mdc:.cursor/rules/architecture.mdc)
- [Task Master Commands](mdc:.cursor/rules/taskmaster.mdc)

`.cursor/rules/glossary.mdc` (new file, 26 lines)

---
description: Glossary of other Cursor rules
globs: **/*
alwaysApply: true
---

# Glossary of Task Master Cursor Rules

This file provides a quick reference to the purpose of each rule file located in the `.cursor/rules` directory.

- **[`architecture.mdc`](mdc:.cursor/rules/architecture.mdc)**: Describes the high-level architecture of the Task Master CLI application, including the new tagged task lists system.
- **[`changeset.mdc`](mdc:.cursor/rules/changeset.mdc)**: Guidelines for using Changesets (npm run changeset) to manage versioning and changelogs.
- **[`commands.mdc`](mdc:.cursor/rules/commands.mdc)**: Guidelines for implementing CLI commands using Commander.js.
- **[`cursor_rules.mdc`](mdc:.cursor/rules/cursor_rules.mdc)**: Guidelines for creating and maintaining Cursor rules to ensure consistency and effectiveness.
- **[`dependencies.mdc`](mdc:.cursor/rules/dependencies.mdc)**: Guidelines for managing task dependencies and relationships across tagged task contexts.
- **[`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc)**: Guide for using Task Master to manage task-driven development workflows with tagged task lists support.
- **[`glossary.mdc`](mdc:.cursor/rules/glossary.mdc)**: This file; provides a glossary of other Cursor rules.
- **[`mcp.mdc`](mdc:.cursor/rules/mcp.mdc)**: Guidelines for implementing and interacting with the Task Master MCP Server.
- **[`new_features.mdc`](mdc:.cursor/rules/new_features.mdc)**: Guidelines for integrating new features into the Task Master CLI with tagged system considerations.
- **[`self_improve.mdc`](mdc:.cursor/rules/self_improve.mdc)**: Guidelines for continuously improving Cursor rules based on emerging code patterns and best practices.
- **[`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)**: Comprehensive reference for Taskmaster MCP tools and CLI commands with tagged task lists information.
- **[`tasks.mdc`](mdc:.cursor/rules/tasks.mdc)**: Guidelines for implementing task management operations with tagged task lists system support.
- **[`tests.mdc`](mdc:.cursor/rules/tests.mdc)**: Guidelines for implementing and maintaining tests for Task Master CLI.
- **[`ui.mdc`](mdc:.cursor/rules/ui.mdc)**: Guidelines for implementing and maintaining user interface components.
- **[`utilities.mdc`](mdc:.cursor/rules/utilities.mdc)**: Guidelines for implementing utility functions including tagged task lists utilities.
- **[`telemetry.mdc`](mdc:.cursor/rules/telemetry.mdc)**: Guidelines for integrating AI usage telemetry across Task Master.

`.cursor/rules/mcp.mdc` (modified)

---
description: Guidelines for implementing and interacting with the Task Master MCP Server
globs: mcp-server/src/**/*, scripts/modules/**/*
alwaysApply: false
---

# Task Master MCP Server Guidelines

This document outlines the architecture and implementation patterns for the Task Master Model Context Protocol (MCP) server, designed for integration with tools like Cursor.

The MCP server acts as a bridge between external tools (like Cursor) and the core Task Master CLI logic. It leverages FastMCP for the server framework.

- **Flow**: `External Tool (Cursor)` <-> `FastMCP Server` <-> `MCP Tools` (`mcp-server/src/tools/*.js`) <-> `Core Logic Wrappers` (`mcp-server/src/core/direct-functions/*.js`, exported via `task-master-core.js`) <-> `Core Modules` (`scripts/modules/*.js`)
- **Goal**: Provide a performant and reliable way for external tools to interact with Task Master functionality without directly invoking the CLI for every operation.

## Direct Function Implementation Best Practices

When implementing a new direct function in `mcp-server/src/core/direct-functions/`, follow these critical guidelines:

1. **Verify Function Dependencies**:
   - ✅ **DO**: Check that all helper functions your direct function needs are properly exported from their source modules
   - ✅ **DO**: Import these dependencies explicitly at the top of your file
   - ❌ **DON'T**: Assume helper functions like `findTaskById` or `taskExists` are automatically available
   - **Example**:
     ```javascript
     // At top of direct-function file
     import { removeTask, findTaskById, taskExists } from '../../../../scripts/modules/task-manager.js';
     ```

2. **Parameter Verification and Completeness**:
   - ✅ **DO**: Verify the signature of core functions you're calling and ensure all required parameters are provided
   - ✅ **DO**: Pass explicit values for required parameters rather than relying on defaults
   - ✅ **DO**: Double-check parameter order against the function definition
   - ❌ **DON'T**: Omit parameters assuming they have default values
   - **Example**:
     ```javascript
     // Correct parameter handling in a direct function
     async function generateTaskFilesDirect(args, log) {
       const tasksPath = findTasksJsonPath(args, log);
       const outputDir = args.output || path.dirname(tasksPath);

       try {
         // Pass all required parameters
         const result = await generateTaskFiles(tasksPath, outputDir);
         return { success: true, data: result, fromCache: false };
       } catch (error) {
         // Error handling...
       }
     }
     ```

3. **Consistent File Path Handling**:
   - ✅ **DO**: Use `path.join()` instead of string concatenation for file paths
   - ✅ **DO**: Follow established file naming conventions (`task_001.txt`, not `1.md`)
   - ✅ **DO**: Use `path.dirname()` and other path utilities for manipulating paths
   - ✅ **DO**: When paths relate to task files, follow the standard format: `task_${id.toString().padStart(3, '0')}.txt`
   - ❌ **DON'T**: Create custom file path handling logic that diverges from established patterns
   - **Example**:
     ```javascript
     // Correct file path handling
     const taskFilePath = path.join(
       path.dirname(tasksPath),
       `task_${taskId.toString().padStart(3, '0')}.txt`
     );
     ```

4. **Comprehensive Error Handling**:
   - ✅ **DO**: Wrap core function calls *and AI calls* in try/catch blocks
   - ✅ **DO**: Log errors with appropriate severity and context
   - ✅ **DO**: Return standardized error objects with code and message (`{ success: false, error: { code: '...', message: '...' } }`)
   - ✅ **DO**: Handle file system errors, AI client errors, AI processing errors, and core function errors distinctly, with appropriate codes
   - **Example**:
     ```javascript
     try {
       // Core function call or AI logic
     } catch (error) {
       log.error(`Failed to execute direct function logic: ${error.message}`);
       return {
         success: false,
         error: {
           code: error.code || 'DIRECT_FUNCTION_ERROR', // Use specific codes like AI_CLIENT_ERROR, etc.
           message: error.message,
           details: error.stack // Optional: include stack in debug mode
         },
         fromCache: false // Ensure this is included if applicable
       };
     }
     ```

5. **Handling Logging Context (`mcpLog`)**:
   - **Requirement**: Core functions (like those in `task-manager.js`) may accept an `options` object containing an optional `mcpLog` property. If provided, the core function expects this object to have methods like `mcpLog.info(...)` and `mcpLog.error(...)`.
   - **Solution: The Logger Wrapper Pattern**: When calling a core function from a direct function, pass the `log` object provided by FastMCP *wrapped* in the standard `logWrapper` object. This ensures the core function receives a logger with the expected method structure.
     ```javascript
     // Standard logWrapper pattern within a direct function
     const logWrapper = {
       info: (message, ...args) => log.info(message, ...args),
       warn: (message, ...args) => log.warn(message, ...args),
       error: (message, ...args) => log.error(message, ...args),
       debug: (message, ...args) => log.debug && log.debug(message, ...args),
       success: (message, ...args) => log.info(message, ...args)
     };

     // ... later, when calling the core function ...
     await coreFunction(
       // ... other arguments ...
       {
         mcpLog: logWrapper, // Pass the wrapper object
         session // Also pass session if needed by core logic or AI service
       },
       'json' // Pass 'json' output format if supported by core function
     );
     ```
   - **JSON Output**: Passing `mcpLog` (via the wrapper) often triggers the core function to use a JSON-friendly output format, suppressing spinners/boxes.
   - ✅ **DO**: Implement this pattern in direct functions calling core functions that might use `mcpLog`.

6. **Silent Mode Implementation**:
   - ✅ **DO**: Import the silent mode utilities: `import { enableSilentMode, disableSilentMode, isSilentMode } from '../../../../scripts/modules/utils.js';`
   - ✅ **DO**: Wrap core function calls *within direct functions* using `enableSilentMode()` / `disableSilentMode()` in a `try/finally` block if the core function might produce console output (spinners, boxes, direct `console.log`) that isn't reliably controlled by passing `{ mcpLog }` or an `outputFormat` parameter
   - ✅ **DO**: Always disable silent mode in the `finally` block
   - ❌ **DON'T**: Wrap calls to the unified AI service (`generateTextService`, `generateObjectService`) in silent mode; their logging is handled internally
   - **Example (direct function guaranteeing silence and using the log wrapper)**:
     ```javascript
     export async function coreWrapperDirect(args, log, context = {}) {
       const { session } = context;
       const tasksPath = findTasksJsonPath(args, log);
       const logWrapper = { /* ... */ };

       enableSilentMode(); // Ensure silence for direct console output
       try {
         const result = await coreFunction(
           tasksPath,
           args.param1,
           { mcpLog: logWrapper, session }, // Pass context
           'json' // Request JSON format if supported
         );
         return { success: true, data: result };
       } catch (error) {
         log.error(`Error: ${error.message}`);
         return { success: false, error: { /* ... */ } };
       } finally {
         disableSilentMode(); // Critical: always disable in finally
       }
     }
     ```

7. **Debugging MCP/Core Logic Interaction**:
   - ✅ **DO**: If an MCP tool fails with unclear errors (like JSON parsing failures), run the equivalent `task-master` CLI command in the terminal. The CLI often provides more detailed error messages originating from the core logic (e.g., `ReferenceError`, stack traces) that are obscured by the MCP layer.

## Tool Definition and Execution

### Tool Structure

MCP tools must follow a specific structure to properly interact with the FastMCP framework:

```javascript
server.addTool({
  name: "tool_name", // Use snake_case for tool names
  description: "Description of what the tool does",
  parameters: z.object({
    // Define parameters using Zod
    param1: z.string().describe("Parameter description"),
    param2: z.number().optional().describe("Optional parameter description"),
    // IMPORTANT: For file operations, always include these optional parameters
    file: z.string().optional().describe("Path to the tasks file"),
    projectRoot: z.string().optional().describe("Root directory of the project (typically derived from session)")
  }),

  // The execute function is the core of the tool implementation
  execute: async (args, context) => {
    // Implementation goes here
    // Return the response in the appropriate format
  }
});
```

### Execute Function Signature

The `execute` function receives validated arguments and the FastMCP context:

```javascript
// Destructured signature (recommended)
execute: async (args, { log, session }) => {
  // Tool implementation
}
```

- **args**: Validated parameters.
- **context**: Contains `{ log, session }` from FastMCP (`reportProgress` has been removed).

### Standard Tool Execution Pattern with Path Normalization (Updated)

To ensure consistent handling of project paths across different client environments (Windows, macOS, Linux, WSL) and input formats (e.g., `file:///...`, URI-encoded paths), all MCP tool `execute` methods that require access to the project root **MUST** be wrapped with the `withNormalizedProjectRoot` Higher-Order Function (HOF).

This HOF, defined in [`mcp-server/src/tools/utils.js`](mdc:mcp-server/src/tools/utils.js), performs the following before calling the tool's core logic:

1. **Determines the Raw Root**: It prioritizes `args.projectRoot` if provided by the client; otherwise it calls `getRawProjectRootFromSession` to extract the path from the session.
2. **Normalizes the Path**: It uses the `normalizeProjectRoot` helper to decode URIs, strip `file://` prefixes, fix potential Windows drive letter prefixes (e.g., `/C:/`), convert backslashes (`\`) to forward slashes (`/`), and resolve the path to an absolute path suitable for the server's OS.
3. **Injects the Normalized Path**: It updates the `args` object by replacing the original `projectRoot` (or adding it) with the normalized, absolute path.
4. **Executes the Original Logic**: It calls the original `execute` function body, passing the updated `args` object.

**Implementation Example:**

```javascript
// In mcp-server/src/tools/your-tool.js
import {
  handleApiResult,
  createErrorResponse,
  withNormalizedProjectRoot // <<< Import HOF
} from './utils.js';
import { yourDirectFunction } from '../core/task-master-core.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js'; // If needed

export function registerYourTool(server) {
  server.addTool({
    name: "your_tool",
    description: "...",
    parameters: z.object({
      // ... other parameters ...
      projectRoot: z.string().optional().describe('...') // projectRoot is optional here; the HOF handles the fallback
    }),
    // Wrap the entire execute function
    execute: withNormalizedProjectRoot(async (args, { log, session }) => {
      // args.projectRoot is now guaranteed to be normalized and absolute
      const { projectRoot } = args; // destructure other args as needed

      try {
        log.info(`Executing your_tool with normalized root: ${projectRoot}`);

        // Resolve paths using the normalized projectRoot
        const tasksPath = findTasksJsonPath({ projectRoot, file: args.file }, log);

        // Call the direct function, passing the normalized projectRoot if it needs it
        const result = await yourDirectFunction(
          {
            // ... other args ...
            projectRoot
          },
          log,
          { session }
        );

        return handleApiResult(result, log);
      } catch (error) {
        log.error(`Error in your_tool: ${error.message}`);
        return createErrorResponse(error.message);
      }
    }) // End HOF wrap
  });
}
```

By using this HOF, the core logic within the `execute` method and any downstream functions (like `findTasksJsonPath` or direct functions) can reliably expect `args.projectRoot` to be a clean, absolute path suitable for the server environment.
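The normalization steps described above can be sketched as a small standalone function. This is an illustrative stand-in, not the actual `normalizeProjectRoot` helper from `mcp-server/src/tools/utils.js`; the real implementation may differ in ordering and edge-case handling.

```javascript
import path from 'path';

// Illustrative sketch of the normalization sequence described above:
// decode, strip file://, unify slashes, fix drive prefixes, resolve.
function normalizeProjectRootSketch(rawRoot) {
  if (!rawRoot) return rawRoot;
  let root = decodeURIComponent(rawRoot); // decode URI-encoded segments (%20 etc.)
  root = root.replace(/^file:\/\//, ''); // strip a file:// prefix
  root = root.replace(/\\/g, '/'); // backslashes -> forward slashes
  if (/^\/[A-Za-z]:\//.test(root)) {
    root = root.slice(1); // "/C:/Users/..." -> "C:/Users/..."
  }
  return path.resolve(root); // absolute path for the server's OS
}
```

For example, on a POSIX server `normalizeProjectRootSketch('file:///home/user/my%20project')` yields `/home/user/my project`.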

### Project Initialization Tool

The `initialize_project` tool allows integrated clients like Cursor to set up a new Task Master project:

```javascript
// In initialize-project.js
import { z } from "zod";
import { initializeProjectDirect } from "../core/task-master-core.js";
import { handleApiResult, createErrorResponse } from "./utils.js";

export function registerInitializeProjectTool(server) {
  server.addTool({
    name: "initialize_project",
    description: "Initialize a new Task Master project",
    parameters: z.object({
      projectName: z.string().optional().describe("The name for the new project"),
      projectDescription: z.string().optional().describe("A brief description"),
      projectVersion: z.string().optional().describe("Initial version (e.g., '0.1.0')"),
      authorName: z.string().optional().describe("The author's name"),
      skipInstall: z.boolean().optional().describe("Skip installing dependencies"),
      addAliases: z.boolean().optional().describe("Add shell aliases"),
      yes: z.boolean().optional().describe("Skip prompts and use defaults")
    }),
    execute: async (args, { log }) => {
      try {
        // Since we're initializing, we don't need the project root
        const result = await initializeProjectDirect(args, log);
        return handleApiResult(result, log, 'Error initializing project');
      } catch (error) {
        log.error(`Error in initialize_project: ${error.message}`);
        return createErrorResponse(`Failed to initialize project: ${error.message}`);
      }
    }
  });
}
```

### Logging Convention

The `log` object (destructured from `context`) provides standardized logging methods. Use it within both the `execute` method and the `*Direct` functions. **If progress indication is needed within a direct function, use `log.info()` instead of `reportProgress`.**

```javascript
// Proper logging usage
log.info(`Starting ${toolName} with parameters: ${JSON.stringify(sanitizedArgs)}`);
log.debug("Detailed operation info", { data });
log.warn("Potential issue detected");
log.error(`Error occurred: ${error.message}`, { stack: error.stack });
log.info('Progress: 50% - AI call initiated...'); // Example progress logging
```

## Session Usage Convention

The `session` object (destructured from `context`) contains authenticated session data and client information.

- **Authentication**: Access user-specific data (`session.userId`, etc.) if authentication is implemented.
- **Project Root**: The primary use in Task Master is accessing `session.roots` to determine the client's project root directory via the `getProjectRootFromSession` utility (from [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js)). See the Standard Tool Execution Pattern above.
- **Environment Variables**: The `session.env` object provides access to environment variables set in the MCP client configuration (e.g., `.cursor/mcp.json`). This is the **primary mechanism** for the unified AI service layer (`ai-services-unified.js`) to securely access **API keys** when called from MCP context.
- **Capabilities**: Can be used to check client capabilities (`session.clientCapabilities`).
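To make the `session.env` convention concrete, here is a minimal, hypothetical helper showing how a service layer might resolve an API key by preferring the MCP session's environment and falling back to the server process environment for CLI contexts. The function name and the exact fallback order are illustrative assumptions, not the confirmed behavior of `ai-services-unified.js`.

```javascript
// Hypothetical sketch: prefer keys from the MCP session environment
// (populated from .cursor/mcp.json) and fall back to process.env for
// CLI usage. The resolution order here is an assumption.
function resolveApiKey(keyName, session) {
  const fromSession = session?.env?.[keyName];
  if (fromSession) return fromSession;
  return process.env[keyName] ?? null;
}
```

For example, a direct function that received `{ session }` in its context could call `resolveApiKey('ANTHROPIC_API_KEY', session)` before invoking an AI service.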

## Direct Function Wrappers (`*Direct`)

These functions, located in `mcp-server/src/core/direct-functions/`, form the core logic execution layer for MCP tools.

- **Purpose**: Bridge MCP tools and core Task Master modules (`scripts/modules/*`). Handle AI interactions if applicable.
- **Responsibilities**:
  - Receive `args` (including `projectRoot`), `log`, and optionally a `{ session }` context.
  - Find `tasks.json` using `findTasksJsonPath`.
  - Validate arguments.
  - **Implement caching (if applicable)**: Use `getCachedOrExecute`.
  - **Call core logic**: Invoke the relevant function from `scripts/modules/*`.
    - Pass `outputFormat: 'json'` if applicable.
    - Wrap with `enableSilentMode`/`disableSilentMode` if needed.
    - Pass the `{ mcpLog: logWrapper, session }` context if the core logic needs it.
  - Handle errors.
  - Return a standardized result object.
- ❌ **DON'T**: Call `reportProgress`.
- ❌ **DON'T**: Initialize AI clients or call AI services directly.
|
||||||
|
|
||||||
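The responsibilities above can be sketched as a minimal wrapper. The helpers here are stand-ins for the real Task Master utilities, so the shape is runnable in isolation:

```javascript
// Sketch of a *Direct wrapper. findTasksJsonPath, the silent-mode utilities,
// and the core listTasks call are stubbed; in the real codebase they are
// imported from their respective modules.
const findTasksJsonPath = (args) => `${args.projectRoot}/tasks/tasks.json`; // stub
let silent = false;
const enableSilentMode = () => { silent = true; };
const disableSilentMode = () => { silent = false; };
const listTasks = async () => [{ id: 1, title: 'Example task', status: 'pending' }]; // stub

async function listTasksDirect(args, log, context = {}) {
  try {
    const tasksPath = findTasksJsonPath(args, log); // resolve tasks.json from projectRoot
    enableSilentMode(); // suppress core-module console output
    try {
      const data = await listTasks(tasksPath, args.status, { outputFormat: 'json' });
      return { success: true, data, fromCache: false };
    } finally {
      disableSilentMode(); // always restore normal logging
    }
  } catch (error) {
    log.error(`listTasksDirect failed: ${error.message}`);
    return {
      success: false,
      error: { code: 'CORE_FUNCTION_ERROR', message: error.message },
      fromCache: false
    };
  }
}
```

Note the `finally` block: silent mode must be disabled even when the core call throws, or subsequent CLI output disappears.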
## Key Principles

- **Prefer Direct Function Calls**: MCP tools should always call `*Direct` wrappers instead of `executeTaskMasterCommand`.
- **Standardized Execution Flow**: Follow the pattern: MCP Tool -> `getProjectRootFromSession` -> `*Direct` Function -> Core Logic / AI Logic.
- **Path Resolution via Direct Functions**: The `*Direct` function is responsible for finding the exact `tasks.json` path using `findTasksJsonPath`, relying on the `projectRoot` passed in `args`.
- **AI Logic in Core Modules**: AI interactions (prompt building, calling the unified service) reside within the core logic functions (`scripts/modules/*`), not the direct functions.
- **Silent Mode in Direct Functions**: Wrap *core function* calls (from `scripts/modules`) with `enableSilentMode()` and `disableSilentMode()` if they produce console output not handled by `outputFormat`. Do not wrap AI calls.
- **Selective Async Processing**: Use `AsyncOperationManager` in the *MCP Tool layer* for operations involving multiple steps or long waits beyond a single AI call (e.g., file processing + AI call + file writing). Simple AI calls handled entirely within the `*Direct` function (like `addTaskDirect`) may not need it at the tool layer.
- **No `reportProgress` in Direct Functions**: Do not pass or use `reportProgress` within `*Direct` functions. Use `log.info()` for internal progress, or report progress from the `AsyncOperationManager` callback in the MCP tool layer.
- **Output Formatting**: Ensure core functions called by `*Direct` functions can suppress CLI output, ideally via an `outputFormat` parameter.
- **Project Initialization**: Use the `initialize_project` tool for setting up new projects in integrated environments.
- **Centralized Utilities**: Use helpers from `mcp-server/src/tools/utils.js`, `mcp-server/src/core/utils/path-utils.js`, and `mcp-server/src/core/utils/ai-client-utils.js`. See [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc).
- **Caching in Direct Functions**: Caching logic resides *within* the `*Direct` functions using `getCachedOrExecute`.

## Resources and Resource Templates

Resources provide LLMs with static or dynamic data without executing tools.

- **Implementation**: Use the `@mcp.resource()` decorator pattern or `server.addResource`/`server.addResourceTemplate` in `mcp-server/src/core/resources/`.
- **Registration**: Register resources during server initialization in [`mcp-server/src/index.js`](mdc:mcp-server/src/index.js).
- **Best Practices**: Organize resources, validate parameters, use consistent URIs, and handle errors. See [`fastmcp-core.txt`](docs/fastmcp-core.txt) for underlying SDK details.

## Implementing MCP Support for a Command

Follow these steps to add MCP support for an existing Task Master command (see [`new_features.mdc`](mdc:.cursor/rules/new_features.mdc) for more detail):

1. **Ensure Core Logic Exists**: Verify the core functionality is implemented and exported from the relevant module in `scripts/modules/`. Ensure the core function can suppress console output (e.g., via an `outputFormat` parameter).

2. **Create Direct Function File in `mcp-server/src/core/direct-functions/`**:
   - Create a new file (e.g., `your-command.js`) using **kebab-case** naming.
   - Import the necessary core functions, `findTasksJsonPath`, the silent mode utilities, and potentially AI client/prompt utilities.
   - Implement `async function yourCommandDirect(args, log, context = {})` using **camelCase** with a `Direct` suffix. **Remember `context` should only contain `{ session }` if needed (for AI keys/config).**
   - **Path Resolution**: Obtain `tasksPath` using `findTasksJsonPath(args, log)`.
   - Parse the other `args` and perform necessary validation.
   - **Handle AI (if applicable)**: Initialize clients using `get*ClientForMCP(session, log)`, build prompts, call the AI, and parse the response. Handle AI-specific errors.
   - **Implement Caching (if applicable)**: Use `getCachedOrExecute`.
   - **Call Core Logic**:
     - Wrap with `enableSilentMode`/`disableSilentMode` if necessary.
     - Pass `outputFormat: 'json'` (or similar) if applicable.
     - Handle errors from the core function.
   - Format the return as `{ success: true/false, data/error, fromCache?: boolean }`.
   - ❌ **DON'T**: Call `reportProgress`.
   - Export the wrapper function.

3. **Update `task-master-core.js` with Import/Export**: Import and re-export your `*Direct` function and add it to the `directFunctions` map.

4. **Create MCP Tool (`mcp-server/src/tools/`)**:
   - Create a new file (e.g., `your-command.js`) using **kebab-case**.
   - Import `zod`, `handleApiResult`, `createErrorResponse`, `getProjectRootFromSession`, and your `yourCommandDirect` function. Import `AsyncOperationManager` if needed.
   - Implement `registerYourCommandTool(server)`.
   - Define the tool `name` using **snake_case** (e.g., `your_command`).
   - Define the `parameters` using `zod`. Include `projectRoot: z.string().optional()`.
   - Implement the `async execute(args, { log, session })` method (omitting `reportProgress` from destructuring).
   - Get `rootFolder` using `getProjectRootFromSession(session, log)`.
   - **Determine Execution Strategy**:
     - **If using `AsyncOperationManager`**: Create the operation, call the `*Direct` function from within the async task callback (passing `log` and `{ session }`), report progress *from the callback*, and return the initial `ACCEPTED` response.
     - **If calling the `*Direct` function synchronously** (like `add-task`): Call `await yourCommandDirect({ ...args, projectRoot }, log, { session });`. Handle the result with `handleApiResult`.
     - ❌ **DON'T**: Pass `reportProgress` down to the direct function in either case.

5. **Register Tool**: Import and call `registerYourCommandTool` in `mcp-server/src/tools/index.js`.

6. **Update `mcp.json`**: Add the new tool definition to the `tools` array in `.cursor/mcp.json`.

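The tool-creation steps above can be sketched end to end. Here `zod`, the server, and the Task Master helpers are all stubbed so the shape is runnable in isolation; in the real codebase each comes from its own module:

```javascript
// Sketch of an MCP tool whose execute() delegates to a *Direct wrapper.
// Everything below marked "stub" / "stand-in" is illustrative only.
const z = { string: () => ({ optional: () => 'string?' }) }; // stand-in for zod
const getProjectRootFromSession = (session) => session?.roots?.[0] ?? process.cwd(); // stub
const handleApiResult = (result, log) =>
  result.success
    ? { content: [{ type: 'text', text: JSON.stringify(result.data) }] }
    : { isError: true, content: [{ type: 'text', text: result.error.message }] }; // stub
const yourCommandDirect = async (args) =>
  ({ success: true, data: { ok: true, root: args.projectRoot }, fromCache: false }); // stub

function registerYourCommandTool(server) {
  server.addTool({
    name: 'your_command', // snake_case tool name
    description: 'Example command exposed over MCP',
    parameters: { projectRoot: z.string().optional() },
    async execute(args, { log, session }) { // note: reportProgress is not destructured
      const projectRoot = getProjectRootFromSession(session, log);
      const result = await yourCommandDirect({ ...args, projectRoot }, log, { session });
      return handleApiResult(result, log);
    }
  });
}
```

The key invariant is the data flow: session resolves the project root, the `*Direct` wrapper does the work, and `handleApiResult` produces the final MCP response.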
## Handling Responses

- MCP tools should return the object generated by `handleApiResult`.
- `handleApiResult` uses `createContentResponse` or `createErrorResponse` internally.
- `handleApiResult` also uses `processMCPResponseData` by default to filter potentially large fields (`details`, `testStrategy`) from task data. Provide a custom processor function to `handleApiResult` if different filtering is needed.
- The final JSON response sent to the MCP client includes the `fromCache` boolean flag (obtained from the `*Direct` function's result) alongside the actual data (e.g., `{ "fromCache": true, "data": { ... } }` or `{ "fromCache": false, "data": { ... } }`).

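The default filtering behavior can be sketched as follows. The field names (`details`, `testStrategy`) come from the text above; the function body is an illustrative stand-in, not the exact library code:

```javascript
// Sketch of processMCPResponseData-style filtering: strip large fields from
// task objects (and their subtasks) before they reach the MCP client.
function processMCPResponseData(data, fieldsToRemove = ['details', 'testStrategy']) {
  const stripTask = (task) => {
    const cleaned = { ...task };
    for (const field of fieldsToRemove) delete cleaned[field]; // drop large fields
    if (Array.isArray(cleaned.subtasks)) cleaned.subtasks = cleaned.subtasks.map(stripTask);
    return cleaned;
  };
  if (Array.isArray(data?.tasks)) return { ...data, tasks: data.tasks.map(stripTask) };
  if (data?.id) return stripTask(data); // single task object
  return data; // non-task payloads pass through unchanged
}
```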
## Parameter Type Handling

- **Prefer Direct Function Calls**: For optimal performance and error handling, MCP tools should utilize the direct function wrappers defined in [`task-master-core.js`](mdc:mcp-server/src/core/task-master-core.js). These wrappers call the underlying logic from the core modules (e.g., [`task-manager.js`](mdc:scripts/modules/task-manager.js)).
- **Standard Tool Execution Pattern**: The `execute` method within each MCP tool (in `mcp-server/src/tools/*.js`) should:
  1. Call the corresponding `*Direct` function wrapper (e.g., `listTasksDirect`) from [`task-master-core.js`](mdc:mcp-server/src/core/task-master-core.js), passing the necessary arguments and the logger.
  2. Receive the result object (typically `{ success, data/error, fromCache }`).
  3. Pass this result object to the `handleApiResult` utility (from [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js)) for standardized response formatting and error handling.
  4. Return the formatted response object provided by `handleApiResult`.
- **CLI Execution as Fallback**: The `executeTaskMasterCommand` utility in [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js) allows executing commands via the CLI (`task-master ...`). This should **only** be used as a fallback if a direct function wrapper is not yet implemented or if a specific command intrinsically requires CLI execution.
- **Centralized Utilities** (see also [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc)):
  - Use `findTasksJsonPath` (in [`task-master-core.js`](mdc:mcp-server/src/core/task-master-core.js)) *within direct function wrappers* to locate the `tasks.json` file consistently.
  - The file [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js) contains essential helpers for MCP tool implementation:
    - `getProjectRoot`: Normalizes project paths.
    - `handleApiResult`: Takes the raw result from a `*Direct` function and formats it into a standard MCP success or error response, automatically handling data processing via `processMCPResponseData`. This is called by the tool's `execute` method.
    - `createContentResponse`/`createErrorResponse`: Used by `handleApiResult` to format successful/error MCP responses.
    - `processMCPResponseData`: Filters/cleans data (e.g., removing `details`, `testStrategy`) before it is sent in the MCP response. Called by `handleApiResult`.
    - `getCachedOrExecute`: **Used inside `*Direct` functions** to implement caching logic.
    - `executeTaskMasterCommand`: Fallback for executing CLI commands.
- **Caching**: To improve performance for frequently called read operations (like `listTasks`, `showTask`, `nextTask`), a caching layer using `lru-cache` is implemented.
  - **Caching logic resides *within* the direct function wrappers** in [`task-master-core.js`](mdc:mcp-server/src/core/task-master-core.js) using the `getCachedOrExecute` utility from [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js).
  - Generate unique cache keys based on the function arguments that define a distinct call (e.g., file path, filters).
  - The `getCachedOrExecute` utility handles checking the cache, executing the core logic function on a cache miss, storing the result, and returning the data along with a `fromCache` flag.
  - Cache statistics can be monitored using the `cacheStats` MCP tool (implemented via `getCacheStatsDirect`).
  - **Caching should generally be applied only to read-only operations** that don't modify the `tasks.json` state. Commands like `set-status`, `add-task`, `update-task`, `parse-prd`, and `add-dependency` should *not* be cached, as they change the underlying data.

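The caching flow described above can be sketched with a `Map`-backed stand-in for the `lru-cache` layer. The option names (`cacheKey`, `actionFn`, `log`) follow the usage described in the text; the body is illustrative:

```javascript
// Minimal sketch of a getCachedOrExecute-style helper. The real utility is
// backed by lru-cache with size/TTL limits; this Map version shows only the
// check -> execute -> store -> fromCache flow.
const cache = new Map();

async function getCachedOrExecute({ cacheKey, actionFn, log }) {
  if (cache.has(cacheKey)) {
    return { ...cache.get(cacheKey), fromCache: true }; // cache hit
  }
  const result = await actionFn(); // cache miss: run the core logic
  if (result.success) cache.set(cacheKey, result); // only cache successful results
  return { ...result, fromCache: false };
}
```

Because the key encodes the distinguishing arguments (e.g., `` `listTasks:${tasksPath}:${statusFilter}` ``), two calls with different filters never collide.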
**MCP Tool Implementation Checklist**:

1. **Core Logic Verification**:
   - [ ] Confirm the core function is properly exported from its module (e.g., `task-manager.js`)
   - [ ] Identify all required parameters and their types

2. **Direct Function Wrapper**:
   - [ ] Create the `*Direct` function in the appropriate file in `mcp-server/src/core/direct-functions/`
   - [ ] Import the silent mode utilities and apply them around core function calls
   - [ ] Handle all parameter validations and type conversions
   - [ ] Implement path resolution for relative paths
   - [ ] Add appropriate error handling with standardized error codes
   - [ ] Add to the imports/exports in `task-master-core.js`

3. **MCP Tool Implementation**:
   - [ ] Create a new file in `mcp-server/src/tools/` with kebab-case naming
   - [ ] Define a zod schema for all parameters
   - [ ] Implement the `execute` method following the standard pattern
   - [ ] Consider using `AsyncOperationManager` for long-running operations
   - [ ] Register the tool in `mcp-server/src/tools/index.js`

4. **Testing**:
   - [ ] Write unit tests for the direct function wrapper
   - [ ] Write integration tests for the MCP tool

## Standard Error Codes

- Use consistent error codes across direct function wrappers:
  - `INPUT_VALIDATION_ERROR`: For missing or invalid required parameters
  - `FILE_NOT_FOUND_ERROR`: For file system path issues
  - `CORE_FUNCTION_ERROR`: For errors thrown by the core function
  - `UNEXPECTED_ERROR`: For all other unexpected errors

- **Error Object Structure**:

  ```javascript
  {
    success: false,
    error: {
      code: 'ERROR_CODE',
      message: 'Human-readable error message'
    },
    fromCache: false
  }
  ```

- **MCP Tool Logging Pattern**:
  - ✅ DO: Log the start of execution with arguments (sanitized if sensitive)
  - ✅ DO: Log successful completion with a result summary
  - ✅ DO: Log all error conditions with appropriate log levels
  - ✅ DO: Include the cache status in result logs
  - ❌ DON'T: Log entire large data structures or sensitive information

- The MCP server integrates with Task Master core functions through three layers:
  1. Tool Definitions (`mcp-server/src/tools/*.js`) - Define parameters and validation
  2. Direct Functions (`mcp-server/src/core/direct-functions/*.js`) - Handle core logic integration
  3. Core Functions (`scripts/modules/*.js`) - Implement the actual functionality

- This layered approach provides:
  - Clear separation of concerns
  - Consistent parameter validation
  - Centralized error handling
  - Performance optimization through caching (for read operations)
  - Standardized response formatting

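When direct functions pass `{ mcpLog: logWrapper, session }` into core logic, the core functions expect a logger with named level methods. A sketch of that logger-wrapper pattern, with the method list illustrative of the idea rather than the exact required shape:

```javascript
// Sketch: adapt the FastMCP `log` object into the mcpLog shape core functions
// consume. The 'success' level (common in CLI output) is mapped onto info.
function createLogWrapper(log) {
  return {
    info: (msg, ...args) => log.info(msg, ...args),
    warn: (msg, ...args) => log.warn(msg, ...args),
    error: (msg, ...args) => log.error(msg, ...args),
    debug: (msg, ...args) => log.debug && log.debug(msg, ...args),
    success: (msg, ...args) => log.info(msg, ...args) // no MCP 'success' level
  };
}
```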
## MCP Naming Conventions

- **Files and Directories**:
  - ✅ DO: Use **kebab-case** for all file names: `list-tasks.js`, `set-task-status.js`
  - ✅ DO: Use a consistent directory structure: `mcp-server/src/tools/` for tool definitions, `mcp-server/src/core/direct-functions/` for direct function implementations

- **JavaScript Functions**:
  - ✅ DO: Use **camelCase** with a `Direct` suffix for direct function implementations: `listTasksDirect`, `setTaskStatusDirect`
  - ✅ DO: Use **camelCase** with a `Tool` suffix for tool registration functions: `registerListTasksTool`, `registerSetTaskStatusTool`
  - ✅ DO: Use consistent action function naming inside direct functions: `coreActionFn` or a similar descriptive name

- **MCP Tool Names**:
  - ✅ DO: Use **snake_case** for tool names exposed to MCP clients: `list_tasks`, `set_task_status`, `parse_prd_document`
  - ✅ DO: Include the core action in the tool name without redundant words: use `list_tasks` instead of `list_all_tasks`

- **Examples**:
  - File: `list-tasks.js`
  - Direct Function: `listTasksDirect`
  - Tool Registration: `registerListTasksTool`
  - MCP Tool Name: `list_tasks`

- **Mapping**:
  - The `directFunctions` map in `task-master-core.js` maps the core function name (in camelCase) to its direct implementation:

  ```javascript
  export const directFunctions = {
    list: listTasksDirect,
    setStatus: setTaskStatusDirect,
    // Add more functions as implemented
  };
  ```

## Telemetry Integration

- Direct functions calling core logic that involves AI should receive and pass through `telemetryData` within their successful `data` payload. See [`telemetry.mdc`](mdc:.cursor/rules/telemetry.mdc) for the standard pattern.
- MCP tools use `handleApiResult`, which ensures the `data` object (potentially including `telemetryData`) from the direct function is correctly included in the final response.

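The pass-through described above can be sketched as follows. The core call and its return shape are stubbed for illustration; the actual field contents are defined by the telemetry pattern in `telemetry.mdc`:

```javascript
// Sketch: a direct function forwarding AI telemetry inside its data payload.
// updateTaskCore is a stub standing in for the real core module call.
const updateTaskCore = async () =>
  ({ updatedTask: { id: 1 }, telemetryData: { totalCost: 0.01 } }); // stub

async function updateTaskDirect(args, log, context = {}) {
  const { updatedTask, telemetryData } = await updateTaskCore(args, { session: context.session });
  // Include telemetryData alongside the primary result so handleApiResult
  // forwards it to the MCP client untouched.
  return { success: true, data: { task: updatedTask, telemetryData }, fromCache: false };
}
```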
---
description: Guidelines for integrating new features into the Task Master CLI
globs: scripts/modules/*.js
alwaysApply: false
---

# Task Master Feature Integration Guidelines

## Feature Placement Decision Process

The standard pattern for adding a feature follows this workflow:

1. **Core Logic**: Implement the business logic in the appropriate module (e.g., [`task-manager.js`](mdc:scripts/modules/task-manager.js)).
2. **Context Gathering (If Applicable)**:
   - For AI-powered commands that benefit from project context, use the standardized context gathering patterns from [`context_gathering.mdc`](mdc:.cursor/rules/context_gathering.mdc).
   - Import the `ContextGatherer` and `FuzzyTaskSearch` utilities for reusable context extraction.
   - Support multiple context types: tasks, files, custom text, and the project tree.
   - Implement a detailed token breakdown display for transparency.
3. **AI Integration (If Applicable)**:
   - Import the necessary service functions (e.g., `generateTextService`, `streamTextService`) from [`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js).
   - Prepare the parameters (`role`, `session`, `systemPrompt`, `prompt`).
   - Call the service function.
   - Handle the response (direct text or stream object).
   - **Important**: Prefer `generateTextService` for calls sending large context (like stringified JSON) where incremental display is not needed. See [`ai_services.mdc`](mdc:.cursor/rules/ai_services.mdc) for detailed usage patterns and cautions.
4. **UI Components**: Add any display functions to [`ui.js`](mdc:scripts/modules/ui.js) following [`ui.mdc`](mdc:.cursor/rules/ui.mdc). Consider enhanced formatting with syntax highlighting for code blocks.
5. **Command Integration**: Add the CLI command to [`commands.js`](mdc:scripts/modules/commands.js) following [`commands.mdc`](mdc:.cursor/rules/commands.mdc).
6. **Testing**: Write tests for all components of the feature (following [`tests.mdc`](mdc:.cursor/rules/tests.mdc)).
7. **Configuration**: Update configuration settings or add new ones in [`config-manager.js`](mdc:scripts/modules/config-manager.js) and ensure the getters/setters are appropriate. Update the documentation in [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc) and [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc). Update the `.taskmasterconfig` structure if needed.
8. **Documentation**: Update help text and documentation in [`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc) and [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc).

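The AI Integration step above can be sketched as a single call. `generateTextService` is stubbed here so the call shape is runnable; its real parameter names (`role`, `session`, `systemPrompt`, `prompt`) come from the workflow text, while the `mainResult`/`telemetryData` return fields are assumptions for illustration:

```javascript
// Sketch of calling the unified AI service from a core-logic function.
// The stub echoes the prompt; the real service dispatches to the configured
// provider/model for the requested role.
const generateTextService = async ({ role, session, systemPrompt, prompt }) =>
  ({ mainResult: `echo: ${prompt}`, telemetryData: { role } }); // stub

async function summarizeTasks(tasksJson, session) {
  const { mainResult, telemetryData } = await generateTextService({
    role: 'main', // which configured model role to use
    session, // MCP session (API keys via session.env) or undefined in CLI context
    systemPrompt: 'You are a helpful project assistant.',
    prompt: `Summarize these tasks:\n${JSON.stringify(tasksJson)}`
  });
  return { summary: mainResult, telemetryData };
}
```

Note that the core function, not the direct function, owns this call, consistent with the "AI Logic in Core Modules" principle.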
## Critical Checklist for New Features

- **Comprehensive Function Exports**:
  - ✅ **DO**: Export **all core functions, helper functions (like `generateSubtaskPrompt`), and utility methods** needed by your new function or command from their respective modules.
  - ✅ **DO**: **Explicitly review the module's `export { ... }` block** at the bottom of the file to ensure every required dependency (even seemingly minor helpers like `findTaskById`, `taskExists`, specific prompt generators, AI call handlers, etc.) is included.
  - ❌ **DON'T**: Assume internal functions are already exported - **always verify**. A missing export will cause runtime errors (e.g., `ReferenceError: generateSubtaskPrompt is not defined`).
  - **Example**: If implementing a feature that checks task existence, ensure the helper function is in the exports:

  ```javascript
  // At the bottom of your module file:
  export {
    // ... existing exports ...
    yourNewFunction,
    taskExists, // Helper function used by yourNewFunction
    findTaskById, // Helper function used by yourNewFunction
    generateSubtaskPrompt, // Helper needed by expand/add features
    getSubtasksFromAI, // Helper needed by expand/add features
  };
  ```

- **Parameter Completeness and Matching**:
  - ✅ **DO**: Pass all required parameters to the functions you call within your implementation
  - ✅ **DO**: Check function signatures before implementing calls to them
  - ✅ **DO**: Verify that direct function parameters match their core function counterparts
  - ✅ **DO**: When implementing a direct function for MCP, ensure it only accepts parameters that exist in the core function
  - ✅ **DO**: Verify the expected *internal structure* of complex object parameters (like the `mcpLog` object; see mcp.mdc for the required logger wrapper pattern)
  - ❌ **DON'T**: Add parameters to direct functions that don't exist in core functions
  - ❌ **DON'T**: Assume default parameter values will handle missing arguments
  - ❌ **DON'T**: Assume object parameters will work without verifying their required internal structure or methods
  - **Example**: When calling file generation, pass all required parameters:

  ```javascript
  // ✅ DO: Pass all required parameters
  await generateTaskFiles(tasksPath, path.dirname(tasksPath));

  // ❌ DON'T: Omit required parameters
  await generateTaskFiles(tasksPath); // Error - missing outputDir parameter
  ```

  **Example**: Properly match direct function parameters to the core function:

  ```javascript
  // Core function signature
  async function expandTask(tasksPath, taskId, numSubtasks, useResearch = false, additionalContext = '', options = {}) {
    // Implementation...
  }

  // ✅ DO: Match direct function parameters to the core function
  export async function expandTaskDirect(args, log, context = {}) {
    // Extract only parameters that exist in the core function
    const taskId = parseInt(args.id, 10);
    const numSubtasks = args.num ? parseInt(args.num, 10) : undefined;
    const useResearch = args.research === true;
    const additionalContext = args.prompt || '';

    // Call the core function with matched parameters
    const result = await expandTask(
      tasksPath,
      taskId,
      numSubtasks,
      useResearch,
      additionalContext,
      { mcpLog: log, session: context.session }
    );

    // Return the result
    return { success: true, data: result, fromCache: false };
  }

  // ❌ DON'T: Use parameters that don't exist in the core function
  export async function expandTaskDirect(args, log, context = {}) {
    // DON'T extract parameters that don't exist in the core function!
    const force = args.force === true; // ❌ WRONG - 'force' doesn't exist in the core function

    // DON'T pass non-existent parameters to core functions
    const result = await expandTask(
      tasksPath,
      args.id,
      args.num,
      args.research,
      args.prompt,
      force, // ❌ WRONG - this parameter doesn't exist in the core function
      { mcpLog: log }
    );
  }
  ```

- **Consistent File Path Handling**:
  - ✅ DO: Use consistent file naming conventions: `task_${id.toString().padStart(3, '0')}.txt`
  - ✅ DO: Use `path.join()` for composing file paths
  - ✅ DO: Use appropriate file extensions (.txt for tasks, .json for data)
  - ❌ DON'T: Hardcode path separators or inconsistent file extensions
  - **Example**: Creating file paths for tasks:

```javascript
// ✅ DO: Use consistent file naming and path.join
const taskFileName = path.join(
  path.dirname(tasksPath),
  `task_${taskId.toString().padStart(3, '0')}.txt`
);

// ❌ DON'T: Use inconsistent naming or string concatenation
const taskFileName = path.dirname(tasksPath) + '/' + taskId + '.md';
```

- **Error Handling and Reporting**:
  - ✅ DO: Use structured error objects with code and message properties
  - ✅ DO: Include clear error messages identifying the specific problem
  - ✅ DO: Handle both function-specific errors and potential file system errors
  - ✅ DO: Log errors at appropriate severity levels
  - **Example**: Structured error handling in core functions:

```javascript
try {
  // Implementation...
} catch (error) {
  log('error', `Error removing task: ${error.message}`);
  throw {
    code: 'REMOVE_TASK_ERROR',
    message: error.message,
    details: error.stack
  };
}
```

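The throw/catch shape above can be exercised end to end. A minimal self-contained sketch (the `removeTask` stub and its failure are hypothetical):

```javascript
// Hypothetical core operation that fails and surfaces a structured error
function removeTask(taskId) {
  try {
    throw new Error(`Task ${taskId} not found`); // simulated failure
  } catch (error) {
    // Re-throw as a structured error object with code/message/details
    throw {
      code: 'REMOVE_TASK_ERROR',
      message: error.message,
      details: error.stack
    };
  }
}

// Callers can branch on the stable error code rather than the message text
try {
  removeTask(42);
} catch (err) {
  console.log(err.code, '-', err.message);
}
```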
- **Silent Mode Implementation**:
  - ✅ **DO**: Import all silent mode utilities together:
    ```javascript
    import { enableSilentMode, disableSilentMode, isSilentMode } from '../../../../scripts/modules/utils.js';
    ```
  - ✅ **DO**: Always use the `isSilentMode()` function to check global silent mode status; never reference global variables.
  - ✅ **DO**: Wrap core function calls **within direct functions** using `enableSilentMode()` and `disableSilentMode()` in a `try/finally` block if the core function might produce console output (like banners, spinners, or direct `console.log` calls) that isn't reliably controlled by an `outputFormat` parameter.
    ```javascript
    // Direct Function Example:
    try {
      // Prefer passing 'json' if the core function reliably handles it
      const result = await coreFunction(...args, 'json');
      // OR, if outputFormat is not enough/unreliable:
      // enableSilentMode(); // Enable *before* the call
      // const result = await coreFunction(...args);
      // disableSilentMode(); // Disable *after* the call (typically in finally)

      return { success: true, data: result };
    } catch (error) {
      log.error(`Error: ${error.message}`);
      return { success: false, error: { message: error.message } };
    } finally {
      // If you used enable/disable, ensure disable is called here
      // disableSilentMode();
    }
    ```
  - ✅ **DO**: Core functions themselves *should* ideally check `outputFormat === 'text'` before displaying UI elements (banners, spinners, boxes) and use internal logging (`log`/`report`) that respects silent mode. The `enable/disableSilentMode` wrapper in the direct function is a safety net.
  - ✅ **DO**: Handle mixed parameter/global silent mode correctly for functions accepting both (less common now; prefer `outputFormat`):
    ```javascript
    // Check both the passed parameter and global silent mode
    const isSilent = silentMode || (typeof silentMode === 'undefined' && isSilentMode());
    ```
  - ❌ **DON'T**: Forget to disable silent mode in a `finally` block if you enabled it.
  - ❌ **DON'T**: Access the global `silentMode` flag directly.

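The enable/disable/check trio behaves like a module-level flag plus a `try/finally` wrapper. A minimal self-contained sketch of the pattern (a stand-in, not the actual `utils.js` implementation):

```javascript
// Minimal stand-in for the silent mode utilities in scripts/modules/utils.js
let silentMode = false;
const enableSilentMode = () => { silentMode = true; };
const disableSilentMode = () => { silentMode = false; };
const isSilentMode = () => silentMode;

// A logger that respects silent mode, as core functions should
function log(message) {
  if (!isSilentMode()) console.log(message);
}

// Direct-function wrapper pattern: enable before the call, disable in finally
function runQuietly(coreFn) {
  enableSilentMode();
  try {
    return coreFn();
  } finally {
    disableSilentMode(); // always restored, even if coreFn throws
  }
}

const result = runQuietly(() => {
  log('this banner is suppressed');
  return 'data';
});
```

The `finally` block is what guarantees the DON'T above never happens: silent mode is restored even when the wrapped call throws.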
- **Debugging Strategy**:
  - ✅ **DO**: If an MCP tool fails with vague errors (e.g., JSON parsing issues like `Unexpected token ... is not valid JSON`), **try running the equivalent CLI command directly in the terminal** (e.g., `task-master expand --all`). CLI output often provides much more specific error messages (like missing function definitions or stack traces from the core logic) that pinpoint the root cause.
  - ❌ **DON'T**: Rely solely on MCP logs if the error is unclear; use the CLI as a complementary debugging tool for core logic issues.

- **Telemetry Integration**: Ensure AI calls correctly handle and propagate `telemetryData` as described in [`telemetry.mdc`](mdc:.cursor/rules/telemetry.mdc).

```javascript
// 1. CORE LOGIC: Add function to appropriate module (example in task-manager.js)
// ...
```

```javascript
// 2. AI Integration: Add import and use necessary service functions
import { generateTextService } from './ai-services-unified.js';

// Example usage:
async function handleAIInteraction() {
  const role = 'user';
  const session = 'exampleSession';
  const systemPrompt = 'You are a helpful assistant.';
  const prompt = 'What is the capital of France?';

  const result = await generateTextService(role, session, systemPrompt, prompt);
  console.log(result);
}

// Export from the module
export {
  // ... existing exports ...
  handleAIInteraction,
};
```

```javascript
// 3. UI COMPONENTS: Add display function to ui.js
/**
 * Display archive operation results
 * @param {string} archivePath - Path to the archive file
 */
// ...
```

```javascript
// 4. COMMAND INTEGRATION: Add to commands.js
import { archiveTasks } from './task-manager.js';
import { displayArchiveResults } from './ui.js';
// ...
```

For each new feature:

1. Add help text to the command definition
2. Update [`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc) with command reference
3. Consider updating [`architecture.mdc`](mdc:.cursor/rules/architecture.mdc) if the feature significantly changes module responsibilities.

Follow the existing command reference format:

## Adding MCP Server Support for Commands

Integrating Task Master commands with the MCP server (for use by tools like Cursor) follows a specific pattern distinct from the CLI command implementation, prioritizing performance and reliability.

- **Goal**: Leverage direct function calls to core logic, avoiding CLI overhead.
- **Reference**: See [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc) for full details.

**MCP Integration Workflow**:

1. **Core Logic**: Ensure the command's core logic exists and is exported from the appropriate module (e.g., [`task-manager.js`](mdc:scripts/modules/task-manager.js)).
2. **Direct Function Wrapper (`mcp-server/src/core/direct-functions/`)**:
   - Create a new file (e.g., `your-command.js`) in `mcp-server/src/core/direct-functions/` using **kebab-case** naming.
   - Import the core logic function, necessary MCP utilities like **`findTasksJsonPath` from `../utils/path-utils.js`**, and **silent mode utilities**: `import { enableSilentMode, disableSilentMode } from '../../../../scripts/modules/utils.js';`
   - Implement an `async function yourCommandDirect(args, log)` using **camelCase** with a `Direct` suffix.
   - **Path Finding**: Inside this function, obtain the `tasksPath` by calling `const tasksPath = findTasksJsonPath(args, log);`. This relies on `args.projectRoot` (derived from the session) being passed correctly.
   - Perform validation on the other arguments received in `args`.
   - **Implement Silent Mode**: Wrap core function calls with `enableSilentMode()` and `disableSilentMode()` to prevent logs from interfering with JSON responses.
   - **If Caching**: Implement caching using `getCachedOrExecute` from `../../tools/utils.js`.
   - **If Not Caching**: Directly call the core logic function within a try/catch block.
   - Format the return as `{ success: true/false, data/error, fromCache: boolean }`.
   - Export the wrapper function.
3. **Update `task-master-core.js` with Import/Export**: Import and re-export your `*Direct` function and add it to the `directFunctions` map.
4. **Create MCP Tool (`mcp-server/src/tools/`)**:
   - Create a new file (e.g., `your-command.js`) using **kebab-case**.
   - Import `zod`, `handleApiResult`, the **`withNormalizedProjectRoot` HOF**, and your `yourCommandDirect` function.
   - Implement `registerYourCommandTool(server)`.
   - **Define parameters**: Make `projectRoot` optional (`z.string().optional().describe(...)`), as the HOF handles the fallback.
   - Consider whether this operation should run in the background using `AsyncOperationManager`.
   - Implement the standard `execute` method **wrapped with `withNormalizedProjectRoot`**:

   ```javascript
   execute: withNormalizedProjectRoot(async (args, { log, session }) => {
     // args.projectRoot is now normalized
     const { projectRoot /*, other args */ } = args;
     // ... resolve tasks path if needed using normalized projectRoot ...
     const result = await yourCommandDirect(
       { /* other args */ projectRoot /* if needed by direct func */ },
       log,
       { session }
     );
     return handleApiResult(result, log);
   })
   ```

5. **Register Tool**: Import and call `registerYourCommandTool` in [`mcp-server/src/tools/index.js`](mdc:mcp-server/src/tools/index.js).
6. **Update `mcp.json`**: Add the new tool definition to the `tools` array in `.cursor/mcp.json`.

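Step 3's wiring in `task-master-core.js` amounts to importing each wrapper, re-exporting it, and registering it in a lookup map. A hedged sketch where the wrapper bodies and map entries are illustrative stand-ins (in the real file they are imported from `./direct-functions/*.js`):

```javascript
// Hypothetical direct-function wrappers (normally imported, not defined inline)
async function listTasksDirect(args, log) {
  return { success: true, data: { tasks: [] }, fromCache: false };
}
async function yourCommandDirect(args, log) {
  return { success: true, data: { done: true }, fromCache: false };
}

// Map tool-facing names to their direct-function implementations
const directFunctions = new Map([
  ['listTasks', listTasksDirect],
  ['yourCommand', yourCommandDirect] // <- new entry added in step 3
]);

// Callers can then dispatch by name
const fn = directFunctions.get('yourCommand');
```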
## Implementing Background Operations

For long-running operations that should not block the client, use the AsyncOperationManager:

1. **Identify Background-Appropriate Operations**:
   - ✅ **DO**: Use async operations for CPU-intensive tasks like task expansion or PRD parsing
   - ✅ **DO**: Consider async operations for tasks that may take more than 1-2 seconds
   - ❌ **DON'T**: Use async operations for quick read/status operations
   - ❌ **DON'T**: Use async operations when immediate feedback is critical

2. **Use AsyncOperationManager in MCP Tools**:

```javascript
import { asyncOperationManager } from '../core/utils/async-manager.js';

// In execute method:
const operationId = asyncOperationManager.addOperation(
  expandTaskDirect, // The direct function to run in background
  { ...args, projectRoot: rootFolder }, // Args to pass to the function
  { log, reportProgress, session } // Context to preserve for the operation
);

// Return immediate response with operation ID
return createContentResponse({
  message: "Operation started successfully",
  operationId,
  status: "pending"
});
```

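The `addOperation` contract above (return an id immediately, run the direct function in the background, track status) can be sketched in a few lines. This is an illustrative stand-in, not the real `async-manager.js`:

```javascript
// Minimal AsyncOperationManager sketch: id generation + background execution
class AsyncOperationManager {
  constructor() {
    this.operations = new Map();
    this.counter = 0;
  }

  addOperation(directFn, args, context) {
    const operationId = `op-${++this.counter}`;
    this.operations.set(operationId, { status: 'pending', result: null });

    // Run in the background; the caller gets the id back synchronously
    Promise.resolve()
      .then(() => directFn(args, context.log))
      .then((result) => this.operations.set(operationId, { status: 'completed', result }))
      .catch((error) => this.operations.set(operationId, { status: 'failed', error }));

    return operationId;
  }

  getStatus(operationId) {
    return this.operations.get(operationId);
  }
}

const manager = new AsyncOperationManager();
const opId = manager.addOperation(
  async () => ({ success: true }),
  {},
  { log: console }
);
// The id comes back immediately while the operation is still pending
```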
3. **Implement Progress Reporting**:
   - ✅ **DO**: Use the reportProgress function in direct functions:

   ```javascript
   // In your direct function:
   if (reportProgress) {
     await reportProgress({ progress: 50 }); // 50% complete
   }
   ```

   - AsyncOperationManager will forward progress updates to the client

4. **Check Operation Status**:
   - Implement a way for clients to check status using the `get_operation_status` MCP tool
   - Return appropriate status codes and messages

## Project Initialization

When implementing project initialization commands:

1. **Support Programmatic Initialization**:
   - ✅ **DO**: Design initialization to work with both CLI and MCP
   - ✅ **DO**: Support non-interactive modes with sensible defaults
   - ✅ **DO**: Handle project metadata like name, description, version
   - ✅ **DO**: Create necessary files and directories

2. **In MCP Tool Implementation**:

```javascript
// In initialize-project.js MCP tool:
import { z } from "zod";
import { initializeProjectDirect } from "../core/task-master-core.js";
import { handleApiResult, createErrorResponse } from "./utils.js"; // used by execute below

export function registerInitializeProjectTool(server) {
  server.addTool({
    name: "initialize_project",
    description: "Initialize a new Task Master project",
    parameters: z.object({
      projectName: z.string().optional().describe("The name for the new project"),
      projectDescription: z.string().optional().describe("A brief description"),
      projectVersion: z.string().optional().describe("Initial version (e.g., '0.1.0')"),
      // Add other parameters as needed
    }),
    execute: async (args, { log, reportProgress, session }) => {
      try {
        // No need for a project root since we're creating a new project
        const result = await initializeProjectDirect(args, log);
        return handleApiResult(result, log, 'Error initializing project');
      } catch (error) {
        log.error(`Error in initialize_project: ${error.message}`);
        return createErrorResponse(`Failed to initialize project: ${error.message}`);
      }
    }
  });
}
```

## Feature Planning

- **Core Logic First**:
  - ✅ DO: Implement core logic in `scripts/modules/` before CLI or MCP interfaces
  - ✅ DO: Consider tagged task lists system compatibility from the start
  - ✅ DO: Design functions to work with both legacy and tagged data formats
  - ✅ DO: Use tag resolution functions (`getTasksForTag`, `setTasksForTag`) for task data access
  - ❌ DON'T: Directly manipulate the tagged data structure in new features

```javascript
// ✅ DO: Design tagged-aware core functions
async function newFeatureCore(tasksPath, featureParams, options = {}) {
  const tasksData = readJSON(tasksPath);
  const currentTag = getCurrentTag() || 'master';
  const tasks = getTasksForTag(tasksData, currentTag);

  // Perform feature logic on the tasks array
  const result = performFeatureLogic(tasks, featureParams);

  // Save back using tag resolution
  setTasksForTag(tasksData, currentTag, tasks);
  writeJSON(tasksPath, tasksData);

  return result;
}
```

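The tag resolution helpers used above can be modeled as plain accessors over the tagged structure. A simplified sketch (the real implementations in `scripts/modules/` also handle migration and validation):

```javascript
// Tagged tasks.json shape: { [tagName]: { tasks: [...] } }
function getTasksForTag(tasksData, tag) {
  return tasksData[tag]?.tasks ?? [];
}

function setTasksForTag(tasksData, tag, tasks) {
  tasksData[tag] = { ...(tasksData[tag] ?? {}), tasks };
  return tasksData;
}

const data = { master: { tasks: [{ id: 1, title: 'Setup' }] } };
const tasks = getTasksForTag(data, 'master');
setTasksForTag(data, 'feature', [{ id: 1, title: 'Branch work' }]);
```

Because all access goes through these two functions, feature code never needs to know whether a tag already exists in the file.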
- **Backward Compatibility**:
  - ✅ DO: Ensure new features work seamlessly with existing projects
  - ✅ DO: Test with both legacy and tagged task data formats
  - ✅ DO: Support silent migration during feature usage
  - ❌ DON'T: Break existing workflows when adding tagged system features

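Legacy detection usually reduces to checking for a top-level `tasks` array. A hedged sketch of silent migration (the real migration logic is more involved; `ensureTaggedFormat` is an illustrative name):

```javascript
// Legacy format: { tasks: [...] }   Tagged format: { master: { tasks: [...] } }
function ensureTaggedFormat(tasksData) {
  if (Array.isArray(tasksData.tasks)) {
    // Silently migrate legacy data under the default 'master' tag
    return { master: { tasks: tasksData.tasks } };
  }
  return tasksData; // already tagged
}

const legacy = { tasks: [{ id: 1 }] };
const migrated = ensureTaggedFormat(legacy);
const alreadyTagged = ensureTaggedFormat({ master: { tasks: [] } });
```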
## CLI Command Implementation

- **Command Structure**:
  - ✅ DO: Follow the established pattern in [`commands.js`](mdc:scripts/modules/commands.js)
  - ✅ DO: Use Commander.js for argument parsing
  - ✅ DO: Include comprehensive help text and examples
  - ✅ DO: Support tagged task context awareness

```javascript
// ✅ DO: Implement CLI commands with tagged system awareness
program
  .command('new-feature')
  .description('Description of the new feature with tagged task lists support')
  .option('-t, --tag <tag>', 'Specify tag context (defaults to current tag)')
  .option('-p, --param <value>', 'Feature-specific parameter')
  .option('--force', 'Force operation without confirmation')
  .action(async (options) => {
    try {
      const projectRoot = findProjectRoot();
      if (!projectRoot) {
        console.error('Not in a Task Master project directory');
        process.exit(1);
      }

      // Use the specified tag or the current tag
      const targetTag = options.tag || getCurrentTag() || 'master';

      const result = await newFeatureCore(
        path.join(projectRoot, '.taskmaster', 'tasks', 'tasks.json'),
        { param: options.param },
        {
          force: options.force,
          targetTag: targetTag,
          outputFormat: 'text'
        }
      );

      console.log('Feature executed successfully');
    } catch (error) {
      console.error(`Error: ${error.message}`);
      process.exit(1);
    }
  });
```

- **Error Handling**:
  - ✅ DO: Provide clear error messages for common failures
  - ✅ DO: Handle tagged system migration errors gracefully
  - ✅ DO: Include suggestions for resolution when possible
  - ✅ DO: Exit with appropriate codes for scripting

## MCP Tool Implementation

- **Direct Function Pattern**:
  - ✅ DO: Create direct function wrappers in `mcp-server/src/core/direct-functions/`
  - ✅ DO: Follow silent mode patterns to prevent console output interference
  - ✅ DO: Use `findTasksJsonPath` for consistent path resolution
  - ✅ DO: Ensure tagged system compatibility

```javascript
// ✅ DO: Implement MCP direct functions with tagged awareness
export async function newFeatureDirect(args, log, context = {}) {
  try {
    const tasksPath = findTasksJsonPath(args, log);

    // Enable silent mode for clean MCP responses
    enableSilentMode();

    try {
      const result = await newFeatureCore(
        tasksPath,
        { param: args.param },
        {
          force: args.force,
          targetTag: args.tag || 'master', // Support tag specification
          mcpLog: log,
          session: context.session,
          outputFormat: 'json'
        }
      );

      return {
        success: true,
        data: result,
        fromCache: false
      };
    } finally {
      disableSilentMode();
    }
  } catch (error) {
    log.error(`Error in newFeatureDirect: ${error.message}`);
    return {
      success: false,
      error: { code: 'FEATURE_ERROR', message: error.message },
      fromCache: false
    };
  }
}
```

- **Tool Registration**:
  - ✅ DO: Create tool definitions in `mcp-server/src/tools/`
  - ✅ DO: Use Zod for parameter validation
  - ✅ DO: Include an optional tag parameter for multi-context support
  - ✅ DO: Follow established naming conventions

```javascript
// ✅ DO: Register MCP tools with tagged system support
export function registerNewFeatureTool(server) {
  server.addTool({
    name: "new_feature",
    description: "Description of the new feature with tagged task lists support",
    parameters: z.object({
      param: z.string().describe("Feature-specific parameter"),
      tag: z.string().optional().describe("Target tag context (defaults to current tag)"),
      force: z.boolean().optional().describe("Force operation without confirmation"),
      projectRoot: z.string().optional().describe("Project root directory")
    }),
    execute: withNormalizedProjectRoot(async (args, { log, session }) => {
      try {
        const result = await newFeatureDirect(
          { ...args, projectRoot: args.projectRoot },
          log,
          { session }
        );
        return handleApiResult(result, log);
      } catch (error) {
        return handleApiResult({
          success: false,
          error: { code: 'EXECUTION_ERROR', message: error.message }
        }, log);
      }
    })
  });
}
```

## Testing Strategy

- **Unit Tests**:
  - ✅ DO: Test core logic independently with both data formats
  - ✅ DO: Mock file system operations appropriately
  - ✅ DO: Test tag resolution behavior
  - ✅ DO: Verify migration compatibility

```javascript
// ✅ DO: Test new features with tagged system awareness
describe('newFeature', () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });

  it('should work with legacy task format', async () => {
    const legacyData = { tasks: [/* test data */] };
    fs.readFileSync.mockReturnValue(JSON.stringify(legacyData));

    const result = await newFeatureCore('/test/tasks.json', { param: 'test' });

    expect(result).toBeDefined();
    // Test legacy format handling
  });

  it('should work with tagged task format', async () => {
    const taggedData = {
      master: { tasks: [/* test data */] },
      feature: { tasks: [/* test data */] }
    };
    fs.readFileSync.mockReturnValue(JSON.stringify(taggedData));

    const result = await newFeatureCore('/test/tasks.json', { param: 'test' });

    expect(result).toBeDefined();
    // Test tagged format handling
  });

  it('should handle tag migration during feature usage', async () => {
    const legacyData = { tasks: [/* test data */] };
    fs.readFileSync.mockReturnValue(JSON.stringify(legacyData));

    await newFeatureCore('/test/tasks.json', { param: 'test' });

    // Verify migration occurred
    expect(fs.writeFileSync).toHaveBeenCalledWith(
      '/test/tasks.json',
      expect.stringContaining('"master"')
    );
  });
});
```

- **Integration Tests**:
  - ✅ DO: Test CLI and MCP interfaces with real task data
  - ✅ DO: Verify end-to-end workflows across tag contexts
  - ✅ DO: Test error scenarios and recovery

## Documentation Updates

- **Rule Updates**:
  - ✅ DO: Update relevant `.cursor/rules/*.mdc` files
  - ✅ DO: Include tagged system considerations in architecture docs
  - ✅ DO: Add examples showing multi-context usage
  - ✅ DO: Update workflow documentation as needed

- **User Documentation**:
  - ✅ DO: Add feature documentation to the `/docs` folder
  - ✅ DO: Include tagged system usage examples
  - ✅ DO: Update command reference documentation
  - ✅ DO: Provide migration notes if relevant

## Migration Considerations

- **Silent Migration Support**:
  - ✅ DO: Ensure new features trigger migration when needed
  - ✅ DO: Handle migration errors gracefully in feature code
  - ✅ DO: Test feature behavior with pre-migration projects
  - ❌ DON'T: Assume projects are already migrated

- **Tag Context Handling**:
  - ✅ DO: Default to the current tag when not specified
  - ✅ DO: Support explicit tag selection in advanced features
  - ✅ DO: Validate tag existence before operations
  - ✅ DO: Provide clear messaging about tag context

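Validating tag existence before an operation can be a small guard that also produces the clear messaging called for above. A sketch (the `TAG_NOT_FOUND` code and helper name are illustrative):

```javascript
// Guard: fail fast with a clear message when the requested tag is absent
function assertTagExists(tasksData, tag) {
  if (!Object.prototype.hasOwnProperty.call(tasksData, tag)) {
    const available = Object.keys(tasksData).join(', ');
    throw {
      code: 'TAG_NOT_FOUND',
      message: `Tag '${tag}' does not exist. Available tags: ${available}`
    };
  }
}

const data = { master: { tasks: [] }, feature: { tasks: [] } };
assertTagExists(data, 'feature'); // passes silently

let error;
try {
  assertTagExists(data, 'hotfix');
} catch (e) {
  error = e;
}
```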
## Performance Considerations

- **Efficient Tag Operations**:
  - ✅ DO: Minimize file I/O operations per feature execution
  - ✅ DO: Cache tag resolution results when appropriate
  - ✅ DO: Use streaming for large task datasets
  - ❌ DON'T: Load all tags when only one is needed

- **Memory Management**:
  - ✅ DO: Process large task lists efficiently
  - ✅ DO: Clean up temporary data structures
  - ✅ DO: Avoid keeping all tag data in memory simultaneously

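Caching tag resolution results, as suggested above, can be as simple as memoizing reads per `(path, tag)` pair. A sketch where `readTasksFile` stands in for the real file read and the cache-key scheme is illustrative:

```javascript
// Cache tag resolution per (path, tag) to avoid repeated file reads
const tagCache = new Map();
let reads = 0;

function readTasksFile(tasksPath) {
  reads += 1; // stands in for fs.readFileSync + JSON.parse
  return { master: { tasks: [{ id: 1 }] } };
}

function getTasksForTagCached(tasksPath, tag) {
  const cacheKey = `${tasksPath}::${tag}`;
  if (!tagCache.has(cacheKey)) {
    const data = readTasksFile(tasksPath);
    tagCache.set(cacheKey, data[tag]?.tasks ?? []);
  }
  return tagCache.get(cacheKey);
}

getTasksForTagCached('/p/tasks.json', 'master');
getTasksForTagCached('/p/tasks.json', 'master'); // served from cache
```

A real implementation would also invalidate the cache on writes; this sketch only shows the read path.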
## Deployment and Versioning

- **Changesets**:
  - ✅ DO: Create appropriate changesets for new features
  - ✅ DO: Use semantic versioning (minor for new features)
  - ✅ DO: Include tagged system information in release notes
  - ✅ DO: Document breaking changes, if any

- **Feature Flags**:
  - ✅ DO: Consider feature flags for experimental functionality
  - ✅ DO: Ensure tagged system features work with flags
  - ✅ DO: Provide clear documentation about flag usage

By following these guidelines, new features will integrate smoothly with the Task Master ecosystem while supporting the enhanced tagged task lists system for multi-context development workflows.

- Update references to external docs
- Maintain links between related rules
- Document breaking changes

Follow [cursor_rules.mdc](mdc:.cursor/rules/cursor_rules.mdc) for proper rule formatting and structure.

---
description:
globs: scripts/modules/*
alwaysApply: false
---

# Tagged Task Lists Command Patterns

This document outlines the standardized patterns that **ALL** Task Master commands must follow to properly support the tagged task lists system.

## Core Principles

- **Every command** that reads or writes tasks.json must be tag-aware
- **Consistent tag resolution** across all commands using `getCurrentTag(projectRoot)`
- **Proper context passing** to core functions with `{ projectRoot, tag }`
- **Standardized CLI options** with a `--tag <tag>` flag

## Required Imports

All command files must import `getCurrentTag`:

```javascript
// ✅ DO: Import getCurrentTag in commands.js
import {
  log,
  readJSON,
  writeJSON,
  findProjectRoot,
  getCurrentTag
} from './utils.js';

// ✅ DO: Import getCurrentTag in task-manager files
import {
  readJSON,
  writeJSON,
  getCurrentTag
} from '../utils.js';
```

## CLI Command Pattern

Every CLI command that operates on tasks must follow this exact pattern:

```javascript
// ✅ DO: Standard tag-aware CLI command pattern
programInstance
  .command('command-name')
  .description('Command description')
  .option('-f, --file <file>', 'Path to the tasks file', TASKMASTER_TASKS_FILE)
  .option('--tag <tag>', 'Specify tag context for task operations') // REQUIRED
  .action(async (options) => {
    // 1. Find project root
    const projectRoot = findProjectRoot();
    if (!projectRoot) {
      console.error(chalk.red('Error: Could not find project root.'));
      process.exit(1);
    }

    // 2. Resolve tag using standard pattern
    const tag = options.tag || getCurrentTag(projectRoot) || 'master';

    // 3. Call core function with proper context
    await coreFunction(
      tasksPath,
      // ... other parameters ...
      { projectRoot, tag } // REQUIRED context object
    );
  });
```
## Core Function Pattern

All core functions in `scripts/modules/task-manager/` must follow this pattern:

```javascript
// ✅ DO: Standard tag-aware core function pattern
async function coreFunction(
  tasksPath,
  // ... other parameters ...
  context = {} // REQUIRED context parameter
) {
  const { projectRoot, tag } = context;

  // Use tag-aware readJSON/writeJSON
  const data = readJSON(tasksPath, projectRoot, tag);

  // ... function logic ...

  writeJSON(tasksPath, data, projectRoot, tag);
}
```
## Tag Resolution Priority

The tag resolution follows this exact priority order:

1. **Explicit `--tag` flag**: `options.tag`
2. **Current active tag**: `getCurrentTag(projectRoot)`
3. **Default fallback**: `'master'`

```javascript
// ✅ DO: Standard tag resolution pattern
const tag = options.tag || getCurrentTag(projectRoot) || 'master';
```
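The fallback chain can be exercised directly; `getCurrentTag` is stubbed here, since the real one reads the active tag from Task Master's state:

```javascript
// Stub: the real getCurrentTag reads the active tag from Task Master's state file.
function getCurrentTag(projectRoot) {
  return projectRoot === '/repo-with-active-tag' ? 'feature-x' : null;
}

function resolveTag(options, projectRoot) {
  return options.tag || getCurrentTag(projectRoot) || 'master';
}

console.log(resolveTag({ tag: 'explicit' }, '/repo-with-active-tag')); // explicit — the flag wins
console.log(resolveTag({}, '/repo-with-active-tag'));                  // feature-x — active tag
console.log(resolveTag({}, '/plain-repo'));                            // master — default fallback
```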
## Commands Requiring Updates

### High Priority (Core Task Operations)
- [x] `add-task` - ✅ Fixed
- [x] `list` - ✅ Fixed
- [x] `update-task` - ✅ Fixed
- [x] `update-subtask` - ✅ Fixed
- [x] `set-status` - ✅ Already correct
- [x] `remove-task` - ✅ Already correct
- [x] `remove-subtask` - ✅ Fixed
- [x] `add-subtask` - ✅ Already correct
- [x] `clear-subtasks` - ✅ Fixed
- [x] `move-task` - ✅ Already correct

### Medium Priority (Analysis & Expansion)
- [x] `expand` - ✅ Fixed
- [x] `next` - ✅ Fixed
- [ ] `show` (get-task) - Needs checking
- [ ] `analyze-complexity` - Needs checking
- [x] `generate` - ✅ Fixed

### Lower Priority (Utilities)
- [ ] `research` - Needs checking
- [ ] `complexity-report` - Needs checking
- [x] `validate-dependencies` - ✅ Fixed
- [x] `fix-dependencies` - ✅ Fixed
- [x] `add-dependency` - ✅ Fixed
- [x] `remove-dependency` - ✅ Fixed
## MCP Integration Pattern

MCP direct functions must also follow the tag-aware pattern:

```javascript
// ✅ DO: Tag-aware MCP direct function
export async function coreActionDirect(args, log, context = {}) {
  const { session } = context;
  const { projectRoot, tag } = args; // MCP passes these in args

  try {
    const result = await coreAction(
      tasksPath,
      // ... other parameters ...
      { projectRoot, tag, session, mcpLog: logWrapper }
    );

    return { success: true, data: result };
  } catch (error) {
    return { success: false, error: { code: 'ERROR_CODE', message: error.message } };
  }
}
```
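The success/error envelope can be demonstrated with a stubbed, synchronous `coreAction` (the stub and error code are illustrative; real direct functions are async and live in `scripts/modules/task-manager/`):

```javascript
// Stubbed, synchronous stand-in for a core function.
function coreAction(tasksPath, context) {
  if (!context.tag) throw new Error('tag missing from context');
  return { tag: context.tag };
}

// Same success/error envelope as the direct functions above.
function coreActionDirect(args) {
  const { projectRoot, tag } = args; // MCP passes these in args
  try {
    const result = coreAction('tasks/tasks.json', { projectRoot, tag });
    return { success: true, data: result };
  } catch (error) {
    return { success: false, error: { code: 'ERROR_CODE', message: error.message } };
  }
}

const ok = coreActionDirect({ projectRoot: '/repo', tag: 'feature' });
const bad = coreActionDirect({ projectRoot: '/repo' });
console.log(ok.success);  // true
console.log(bad.success); // false
```

Direct functions never throw to the MCP layer; every outcome is reduced to the `{ success, data | error }` shape.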
## File Generation Tag-Aware Naming

The `generate` command must use tag-aware file naming:

```javascript
// ✅ DO: Tag-aware file naming
const taskFileName = targetTag === 'master'
  ? `task_${task.id.toString().padStart(3, '0')}.txt`
  : `task_${task.id.toString().padStart(3, '0')}_${targetTag}.txt`;
```

**Examples:**
- Master tag: `task_001.txt`, `task_002.txt`
- Other tags: `task_001_feature.txt`, `task_002_feature.txt`
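Wrapped as a small function (the helper name is illustrative), the naming rule reproduces the examples above:

```javascript
// Tag-aware task file naming, as a reusable helper.
function taskFileName(task, targetTag) {
  const padded = task.id.toString().padStart(3, '0');
  return targetTag === 'master'
    ? `task_${padded}.txt`
    : `task_${padded}_${targetTag}.txt`;
}

console.log(taskFileName({ id: 1 }, 'master'));  // task_001.txt
console.log(taskFileName({ id: 2 }, 'feature')); // task_002_feature.txt
```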
## Common Anti-Patterns

```javascript
// ❌ DON'T: Missing getCurrentTag import
import { readJSON, writeJSON } from '../utils.js'; // Missing getCurrentTag

// ❌ DON'T: Hard-coded tag resolution
const tag = options.tag || 'master'; // Missing getCurrentTag

// ❌ DON'T: Missing --tag option
.option('-f, --file <file>', 'Path to tasks file') // Missing --tag option

// ❌ DON'T: Missing context parameter
await coreFunction(tasksPath, param1, param2); // Missing { projectRoot, tag }

// ❌ DON'T: Incorrect readJSON/writeJSON calls
const data = readJSON(tasksPath); // Missing projectRoot and tag
writeJSON(tasksPath, data); // Missing projectRoot and tag
```
## Validation Checklist

For each command, verify:

- [ ] Imports `getCurrentTag` from utils.js
- [ ] Has `--tag <tag>` CLI option
- [ ] Uses standard tag resolution: `options.tag || getCurrentTag(projectRoot) || 'master'`
- [ ] Finds `projectRoot` with error handling
- [ ] Passes `{ projectRoot, tag }` context to core functions
- [ ] Core functions accept and use context parameter
- [ ] Uses `readJSON(tasksPath, projectRoot, tag)` and `writeJSON(tasksPath, data, projectRoot, tag)`
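Parts of this checklist can be machine-checked. A rough sketch that scans a command's source text for two of the items (a real audit would read the actual files under `scripts/modules/` rather than a string):

```javascript
// Illustrative static check for two of the checklist items.
function auditCommandSource(src) {
  return {
    importsGetCurrentTag: /import\s*\{[^}]*\bgetCurrentTag\b[^}]*\}/.test(src),
    hasTagOption: src.includes('--tag <tag>')
  };
}

const compliant = `
import { readJSON, writeJSON, getCurrentTag } from './utils.js';
programInstance.option('--tag <tag>', 'Specify tag context for task operations');
`;
const nonCompliant = `import { readJSON } from './utils.js';`;

const good = auditCommandSource(compliant);
const bad = auditCommandSource(nonCompliant);
console.log(good); // both checks true
console.log(bad);  // both checks false
```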
## Testing Tag Resolution

Test each command with:

```bash
# Test with explicit tag
node bin/task-master command-name --tag test-tag

# Test with active tag (should use current active tag)
node bin/task-master use-tag test-tag
node bin/task-master command-name

# Test with master tag (default)
node bin/task-master use-tag master
node bin/task-master command-name
```
## Migration Strategy

1. **Audit Phase**: Systematically check each command against the checklist
2. **Fix Phase**: Apply the standard patterns to non-compliant commands
3. **Test Phase**: Verify tag resolution works correctly
4. **Document Phase**: Update command documentation with tag support

This ensures consistent, predictable behavior across all Task Master commands and prevents tag deletion bugs.
.cursor/rules/taskmaster.mdc (new file, 559 lines)
@@ -0,0 +1,559 @@
---
description: Comprehensive reference for Taskmaster MCP tools and CLI commands.
globs: **/*
alwaysApply: true
---

# Taskmaster Tool & Command Reference

This document provides a detailed reference for interacting with Taskmaster, covering both the recommended MCP tools, suitable for integrations like Cursor, and the corresponding `task-master` CLI commands, designed for direct user interaction or fallback.

**Note:** For interacting with Taskmaster programmatically or via integrated tools, using the **MCP tools is strongly recommended** due to better performance, structured data, and error handling. The CLI commands serve as a user-friendly alternative and fallback.

**Important:** Several MCP tools involve AI processing... The AI-powered tools include `parse_prd`, `analyze_project_complexity`, `update_subtask`, `update_task`, `update`, `expand_all`, `expand_task`, and `add_task`.

**🏷️ Tagged Task Lists System:** Task Master now supports **tagged task lists** for multi-context task management. This allows you to maintain separate, isolated lists of tasks for different features, branches, or experiments. Existing projects are seamlessly migrated to use a default "master" tag. Most commands now support a `--tag <name>` flag to specify which context to operate on. If omitted, commands use the currently active tag.

---
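Under this system, tasks.json nests each context's task list beneath its tag name; a minimal sketch of the shape (fields beyond the tag keys and `tasks` arrays are illustrative):

```json
{
  "master": {
    "tasks": [{ "id": 1, "title": "Set up project", "status": "pending" }]
  },
  "new-feature": {
    "tasks": []
  }
}
```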
## Initialization & Setup

### 1. Initialize Project (`init`)

* **MCP Tool:** `initialize_project`
* **CLI Command:** `task-master init [options]`
* **Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project.`
* **Key CLI Options:**
  * `--name <name>`: `Set the name for your project in Taskmaster's configuration.`
  * `--description <text>`: `Provide a brief description for your project.`
  * `--version <version>`: `Set the initial version for your project, e.g., '0.1.0'.`
  * `--no-git`: `Skip initializing a Git repository entirely.`
  * `-y, --yes`: `Initialize Taskmaster quickly using default settings without interactive prompts.`
* **Usage:** Run this once at the beginning of a new project.
* **MCP Variant Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project by running the 'task-master init' command.`
* **Key MCP Parameters/Options:**
  * `projectName`: `Set the name for your project.` (CLI: `--name <name>`)
  * `projectDescription`: `Provide a brief description for your project.` (CLI: `--description <text>`)
  * `projectVersion`: `Set the initial version for your project, e.g., '0.1.0'.` (CLI: `--version <version>`)
  * `authorName`: `Author name.` (CLI: `--author <author>`)
  * `skipInstall`: `Skip installing dependencies. Default is false.` (CLI: `--skip-install`)
  * `addAliases`: `Add shell aliases tm and taskmaster. Default is false.` (CLI: `--aliases`)
  * `noGit`: `Skip initializing a Git repository entirely. Default is false.` (CLI: `--no-git`)
  * `yes`: `Skip prompts and use defaults/provided arguments. Default is false.` (CLI: `-y, --yes`)
* **Usage:** Run this once at the beginning of a new project, typically via an integrated tool like Cursor. Operates on the current working directory of the MCP server.
* **Important:** Once complete, you *MUST* parse a PRD in order to generate tasks. There will be no task files until then. The next step after initializing should be to create a PRD using the example PRD in `.taskmaster/templates/example_prd.txt`.
* **Tagging:** Use the `--tag` option to parse the PRD into a specific, non-default tag context. If the tag doesn't exist, it will be created automatically. Example: `task-master parse-prd spec.txt --tag=new-feature`.
### 2. Parse PRD (`parse_prd`)

* **MCP Tool:** `parse_prd`
* **CLI Command:** `task-master parse-prd [file] [options]`
* **Description:** `Parse a Product Requirements Document (PRD) or text file with Taskmaster to automatically generate an initial set of tasks in tasks.json.`
* **Key Parameters/Options:**
  * `input`: `Path to your PRD or requirements text file that Taskmaster should parse for tasks.` (CLI: `[file]` positional or `-i, --input <file>`)
  * `output`: `Specify where Taskmaster should save the generated 'tasks.json' file. Defaults to '.taskmaster/tasks/tasks.json'.` (CLI: `-o, --output <file>`)
  * `numTasks`: `Approximate number of top-level tasks Taskmaster should aim to generate from the document.` (CLI: `-n, --num-tasks <number>`)
  * `force`: `Use this to allow Taskmaster to overwrite an existing 'tasks.json' without asking for confirmation.` (CLI: `-f, --force`)
* **Usage:** Useful for bootstrapping a project from an existing requirements document.
* **Notes:** Task Master will strictly adhere to any specific requirements mentioned in the PRD, such as libraries, database schemas, frameworks, and tech stacks, while filling in any gaps where the PRD isn't fully specified. Tasks are designed to provide the most direct implementation path while avoiding over-engineering.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. If the user does not have a PRD, suggest discussing their idea and then use the example PRD in `.taskmaster/templates/example_prd.txt` as a template for creating the PRD based on their idea, for use with `parse-prd`.

---
## AI Model Configuration

### 2. Manage Models (`models`)

* **MCP Tool:** `models`
* **CLI Command:** `task-master models [options]`
* **Description:** `View the current AI model configuration or set specific models for different roles (main, research, fallback). Allows setting custom model IDs for Ollama and OpenRouter.`
* **Key MCP Parameters/Options:**
  * `setMain <model_id>`: `Set the primary model ID for task generation/updates.` (CLI: `--set-main <model_id>`)
  * `setResearch <model_id>`: `Set the model ID for research-backed operations.` (CLI: `--set-research <model_id>`)
  * `setFallback <model_id>`: `Set the model ID to use if the primary fails.` (CLI: `--set-fallback <model_id>`)
  * `ollama <boolean>`: `Indicates the set model ID is a custom Ollama model.` (CLI: `--ollama`)
  * `openrouter <boolean>`: `Indicates the set model ID is a custom OpenRouter model.` (CLI: `--openrouter`)
  * `listAvailableModels <boolean>`: `If true, lists available models not currently assigned to a role.` (CLI: No direct equivalent; the CLI lists available models automatically)
  * `projectRoot <string>`: `Optional. Absolute path to the project root directory.` (CLI: Determined automatically)
* **Key CLI Options:**
  * `--set-main <model_id>`: `Set the primary model.`
  * `--set-research <model_id>`: `Set the research model.`
  * `--set-fallback <model_id>`: `Set the fallback model.`
  * `--ollama`: `Specify that the provided model ID is for Ollama (use with --set-*).`
  * `--openrouter`: `Specify that the provided model ID is for OpenRouter (use with --set-*). Validates against the OpenRouter API.`
  * `--bedrock`: `Specify that the provided model ID is for AWS Bedrock (use with --set-*).`
  * `--setup`: `Run interactive setup to configure models, including custom Ollama/OpenRouter IDs.`
* **Usage (MCP):** Call without set flags to get the current configuration. Use `setMain`, `setResearch`, or `setFallback` with a valid model ID to update the configuration. Use `listAvailableModels: true` to get a list of unassigned models. To set a custom model, provide the model ID and set `ollama: true` or `openrouter: true`.
* **Usage (CLI):** Run without flags to view the current configuration and available models. Use set flags to update specific roles. Use `--setup` for guided configuration, including custom models. To set a custom model via flags, use `--set-<role>=<model_id>` along with either `--ollama` or `--openrouter`.
* **Notes:** Configuration is stored in `.taskmaster/config.json` in the project root. This command/tool modifies that file. Use `listAvailableModels` or `task-master models` to see internally supported models. OpenRouter custom models are validated against their live API. Ollama custom models are not validated live.
* **API note:** API keys for the selected AI providers (based on their model) need to exist in the `mcp.json` file to be accessible in MCP context. For the CLI, the API keys must be present in the local `.env` file.
* **Model costs:** The costs in supported models are expressed in dollars. An input/output value of 3 is $3.00; a value of 0.8 is $0.80.
* **Warning:** DO NOT MANUALLY EDIT THE `.taskmaster/config.json` FILE. Use the included commands, in either MCP or CLI form, as needed. Always prioritize MCP tools when available and use the CLI as a fallback.

---
## Task Listing & Viewing

### 3. Get Tasks (`get_tasks`)

* **MCP Tool:** `get_tasks`
* **CLI Command:** `task-master list [options]`
* **Description:** `List your Taskmaster tasks, optionally filtering by status and showing subtasks.`
* **Key Parameters/Options:**
  * `status`: `Show only Taskmaster tasks matching this status (or multiple statuses, comma-separated), e.g., 'pending' or 'done,in-progress'.` (CLI: `-s, --status <status>`)
  * `withSubtasks`: `Include subtasks indented under their parent tasks in the list.` (CLI: `--with-subtasks`)
  * `tag`: `Specify which tag context to list tasks from. Defaults to the current active tag.` (CLI: `--tag <name>`)
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Get an overview of the project status, often used at the start of a work session.
### 4. Get Next Task (`next_task`)

* **MCP Tool:** `next_task`
* **CLI Command:** `task-master next [options]`
* **Description:** `Ask Taskmaster to show the next available task you can work on, based on status and completed dependencies.`
* **Key Parameters/Options:**
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
  * `tag`: `Specify which tag context to use. Defaults to the current active tag.` (CLI: `--tag <name>`)
* **Usage:** Identify what to work on next according to the plan.
### 5. Get Task Details (`get_task`)

* **MCP Tool:** `get_task`
* **CLI Command:** `task-master show [id] [options]`
* **Description:** `Display detailed information for one or more specific Taskmaster tasks or subtasks by ID.`
* **Key Parameters/Options:**
  * `id`: `Required. The ID of the Taskmaster task (e.g., '15'), subtask (e.g., '15.2'), or a comma-separated list of IDs ('1,5,10.2') you want to view.` (CLI: `[id]` positional or `-i, --id <id>`)
  * `tag`: `Specify which tag context to get the task(s) from. Defaults to the current active tag.` (CLI: `--tag <name>`)
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Understand the full details for a specific task. When multiple IDs are provided, a summary table is shown.
* **CRITICAL INFORMATION:** If you need to collect information from multiple tasks, use comma-separated IDs (e.g., '1,2,3') to receive an array of tasks. Do not fetch tasks one at a time when you need several; that is wasteful.

---
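A caller batching lookups would split the ID list before passing it along; a trivial sketch (the tool does its own parsing internally):

```javascript
// Split a comma-separated ID string into trimmed task/subtask IDs;
// dotted subtask IDs such as '10.2' pass through unchanged.
function parseIdList(ids) {
  return ids.split(',').map((id) => id.trim()).filter(Boolean);
}

console.log(parseIdList('1, 5,10.2')); // [ '1', '5', '10.2' ]
```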
## Task Creation & Modification

### 6. Add Task (`add_task`)

* **MCP Tool:** `add_task`
* **CLI Command:** `task-master add-task [options]`
* **Description:** `Add a new task to Taskmaster by describing it; AI will structure it.`
* **Key Parameters/Options:**
  * `prompt`: `Required. Describe the new task you want Taskmaster to create, e.g., "Implement user authentication using JWT".` (CLI: `-p, --prompt <text>`)
  * `dependencies`: `Specify the IDs of any Taskmaster tasks that must be completed before this new one can start, e.g., '12,14'.` (CLI: `-d, --dependencies <ids>`)
  * `priority`: `Set the priority for the new task: 'high', 'medium', or 'low'. Default is 'medium'.` (CLI: `--priority <priority>`)
  * `research`: `Enable Taskmaster to use the research role for potentially more informed task creation.` (CLI: `-r, --research`)
  * `tag`: `Specify which tag context to add the task to. Defaults to the current active tag.` (CLI: `--tag <name>`)
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Quickly add newly identified tasks during development.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
### 7. Add Subtask (`add_subtask`)

* **MCP Tool:** `add_subtask`
* **CLI Command:** `task-master add-subtask [options]`
* **Description:** `Add a new subtask to a Taskmaster parent task, or convert an existing task into a subtask.`
* **Key Parameters/Options:**
  * `id` / `parent`: `Required. The ID of the Taskmaster task that will be the parent.` (MCP: `id`, CLI: `-p, --parent <id>`)
  * `taskId`: `Use this if you want to convert an existing top-level Taskmaster task into a subtask of the specified parent.` (CLI: `-i, --task-id <id>`)
  * `title`: `Required if not using taskId. The title for the new subtask Taskmaster should create.` (CLI: `-t, --title <title>`)
  * `description`: `A brief description for the new subtask.` (CLI: `-d, --description <text>`)
  * `details`: `Provide implementation notes or details for the new subtask.` (CLI: `--details <text>`)
  * `dependencies`: `Specify IDs of other tasks or subtasks, e.g., '15' or '16.1', that must be done before this new subtask.` (CLI: `--dependencies <ids>`)
  * `status`: `Set the initial status for the new subtask. Default is 'pending'.` (CLI: `-s, --status <status>`)
  * `generate`: `Enable Taskmaster to regenerate markdown task files after adding the subtask.` (CLI: `--generate`)
  * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Break down tasks manually or reorganize existing tasks.
### 8. Update Tasks (`update`)

* **MCP Tool:** `update`
* **CLI Command:** `task-master update [options]`
* **Description:** `Update multiple upcoming tasks in Taskmaster based on new context or changes, starting from a specific task ID.`
* **Key Parameters/Options:**
  * `from`: `Required. The ID of the first task Taskmaster should update. All tasks with this ID or higher that are not 'done' will be considered.` (CLI: `--from <id>`)
  * `prompt`: `Required. Explain the change or new context for Taskmaster to apply to the tasks, e.g., "We are now using React Query instead of Redux Toolkit for data fetching".` (CLI: `-p, --prompt <text>`)
  * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
  * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Handle significant implementation changes or pivots that affect multiple future tasks. Example CLI: `task-master update --from='18' --prompt='Switching to React Query.\nNeed to refactor data fetching...'`
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
### 9. Update Task (`update_task`)

* **MCP Tool:** `update_task`
* **CLI Command:** `task-master update-task [options]`
* **Description:** `Modify a specific Taskmaster task by ID, incorporating new information or changes. By default, this replaces the existing task details.`
* **Key Parameters/Options:**
  * `id`: `Required. The specific ID of the Taskmaster task, e.g., '15', you want to update.` (CLI: `-i, --id <id>`)
  * `prompt`: `Required. Explain the specific changes or provide the new information Taskmaster should incorporate into this task.` (CLI: `-p, --prompt <text>`)
  * `append`: `If true, appends the prompt content to the task's details with a timestamp, rather than replacing them. Behaves like update-subtask.` (CLI: `--append`)
  * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
  * `tag`: `Specify which tag context the task belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Refine a specific task based on new understanding. Use `--append` to log progress without creating subtasks.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
### 10. Update Subtask (`update_subtask`)

* **MCP Tool:** `update_subtask`
* **CLI Command:** `task-master update-subtask [options]`
* **Description:** `Append timestamped notes or details to a specific Taskmaster subtask without overwriting existing content. Intended for iterative implementation logging.`
* **Key Parameters/Options:**
  * `id`: `Required. The ID of the Taskmaster subtask, e.g., '5.2', to update with new information.` (CLI: `-i, --id <id>`)
  * `prompt`: `Required. The information, findings, or progress notes to append to the subtask's details with a timestamp.` (CLI: `-p, --prompt <text>`)
  * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
  * `tag`: `Specify which tag context the subtask belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Log implementation progress, findings, and discoveries during subtask development. Each update is timestamped and appended to preserve the implementation journey.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
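The append behavior can be sketched as a pure function (the helper name and note wrapper are illustrative, not Taskmaster's exact storage format):

```javascript
// Append a timestamped note to a subtask's details without overwriting them.
function appendNote(subtask, note, now = new Date()) {
  const entry = `[${now.toISOString()}] ${note}`;
  return {
    ...subtask,
    details: subtask.details ? `${subtask.details}\n${entry}` : entry
  };
}

const updated = appendNote(
  { id: '5.2', details: 'Initial plan' },
  'Chose fetch over axios',
  new Date('2024-01-01T00:00:00Z')
);
console.log(updated.details);
// Initial plan
// [2024-01-01T00:00:00.000Z] Chose fetch over axios
```

Because existing `details` are preserved and each entry carries a timestamp, the subtask accumulates an ordered log of the implementation journey.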
### 11. Set Task Status (`set_task_status`)

* **MCP Tool:** `set_task_status`
* **CLI Command:** `task-master set-status [options]`
* **Description:** `Update the status of one or more Taskmaster tasks or subtasks, e.g., 'pending', 'in-progress', 'done'.`
* **Key Parameters/Options:**
  * `id`: `Required. The ID(s) of the Taskmaster task(s) or subtask(s), e.g., '15', '15.2', or '16,17.1', to update.` (CLI: `-i, --id <id>`)
  * `status`: `Required. The new status to set, e.g., 'done', 'pending', 'in-progress', 'review', 'cancelled'.` (CLI: `-s, --status <status>`)
  * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Mark progress as tasks move through the development cycle.
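A client-side sketch of validating and splitting these arguments (the status list is the one mentioned across this document; the CLI may accept others):

```javascript
// Statuses named in this reference; the CLI may accept others.
const KNOWN_STATUSES = new Set([
  'pending', 'in-progress', 'done', 'review', 'cancelled', 'deferred', 'blocked'
]);

// set-status takes comma-separated IDs, so one call can update several tasks.
function parseStatusArgs(idArg, statusArg) {
  if (!KNOWN_STATUSES.has(statusArg)) {
    throw new Error(`Unknown status: ${statusArg}`);
  }
  return { ids: idArg.split(',').map((s) => s.trim()), status: statusArg };
}

const parsed = parseStatusArgs('16,17.1', 'done');
console.log(parsed.ids);    // [ '16', '17.1' ]
console.log(parsed.status); // done
```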
### 12. Remove Task (`remove_task`)

* **MCP Tool:** `remove_task`
* **CLI Command:** `task-master remove-task [options]`
* **Description:** `Permanently remove a task or subtask from the Taskmaster tasks list.`
* **Key Parameters/Options:**
  * `id`: `Required. The ID of the Taskmaster task, e.g., '5', or subtask, e.g., '5.2', to permanently remove.` (CLI: `-i, --id <id>`)
  * `yes`: `Skip the confirmation prompt and immediately delete the task.` (CLI: `-y, --yes`)
  * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Permanently delete tasks or subtasks that are no longer needed in the project.
* **Notes:** Use with caution, as this operation cannot be undone. Consider using a 'blocked', 'cancelled', or 'deferred' status instead if you just want to exclude a task from active planning but keep it for reference. The command automatically cleans up dependency references in other tasks.

---
## Task Structure & Breakdown
|
||||||
|
|
||||||
|
### 13. Expand Task (`expand_task`)

* **MCP Tool:** `expand_task`
* **CLI Command:** `task-master expand [options]`
* **Description:** `Use Taskmaster's AI to break down a complex task into smaller, manageable subtasks. Appends subtasks by default.`
* **Key Parameters/Options:**
    * `id`: `The ID of the specific Taskmaster task you want to break down into subtasks.` (CLI: `-i, --id <id>`)
    * `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create. Uses complexity analysis/defaults otherwise.` (CLI: `-n, --num <number>`)
    * `research`: `Enable Taskmaster to use the research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`)
    * `prompt`: `Optional: Provide extra context or specific instructions to Taskmaster for generating the subtasks.` (CLI: `-p, --prompt <text>`)
    * `force`: `Optional: If true, clear existing subtasks before generating new ones. Default is false (append).` (CLI: `--force`)
    * `tag`: `Specify which tag context the task belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Generate a detailed implementation plan for a complex task before starting coding. Automatically uses complexity report recommendations if available and `num` is not specified.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.

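The `num` fallback described above (explicit value, else complexity-report recommendation, else a default) can be sketched as follows; the report shape and helper name are illustrative, not Taskmaster's actual internals:

```javascript
// Hypothetical sketch: how a tool like expand_task might pick a subtask count
// when `num` is not supplied, falling back to a complexity-report recommendation.
function pickSubtaskCount(report, taskId, explicitNum, defaultNum = 5) {
  if (explicitNum) return explicitNum; // user-supplied --num wins
  const analysis = (report?.complexityAnalysis || []).find(a => a.taskId === taskId);
  return analysis?.recommendedSubtasks ?? defaultNum;
}

const report = {
  complexityAnalysis: [{ taskId: 7, complexityScore: 8, recommendedSubtasks: 6 }]
};
console.log(pickSubtaskCount(report, 7, null)); // 6 (from the report)
console.log(pickSubtaskCount(report, 3, null)); // 5 (default; task 3 not analyzed)
```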
### 14. Expand All Tasks (`expand_all`)

* **MCP Tool:** `expand_all`
* **CLI Command:** `task-master expand --all [options]` (Note: CLI uses the `expand` command with the `--all` flag)
* **Description:** `Tell Taskmaster to automatically expand all eligible pending/in-progress tasks based on complexity analysis or defaults. Appends subtasks by default.`
* **Key Parameters/Options:**
    * `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create per task.` (CLI: `-n, --num <number>`)
    * `research`: `Enable research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`)
    * `prompt`: `Optional: Provide extra context for Taskmaster to apply generally during expansion.` (CLI: `-p, --prompt <text>`)
    * `force`: `Optional: If true, clear existing subtasks before generating new ones for each eligible task. Default is false (append).` (CLI: `--force`)
    * `tag`: `Specify which tag context to expand. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Useful after initial task generation or complexity analysis to break down multiple tasks at once.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.

### 15. Clear Subtasks (`clear_subtasks`)

* **MCP Tool:** `clear_subtasks`
* **CLI Command:** `task-master clear-subtasks [options]`
* **Description:** `Remove all subtasks from one or more specified Taskmaster parent tasks.`
* **Key Parameters/Options:**
    * `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using 'all'.` (CLI: `-i, --id <ids>`)
    * `all`: `Tell Taskmaster to remove subtasks from all parent tasks.` (CLI: `--all`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Used before regenerating subtasks with `expand_task` if the previous breakdown needs replacement.

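A minimal sketch of what clearing subtasks for a comma-separated ID list might look like, assuming a simplified task shape (hypothetical helper, not the real implementation):

```javascript
// Illustrative only: clear subtasks for the parent IDs named in '16,18'-style input.
function clearSubtasks(tasks, idList) {
  const ids = idList.split(',').map(s => parseInt(s.trim(), 10));
  for (const task of tasks) {
    if (ids.includes(task.id)) task.subtasks = [];
  }
  return tasks;
}

const tasks = [
  { id: 16, subtasks: [{ id: 1 }] },
  { id: 17, subtasks: [{ id: 1 }] },
  { id: 18, subtasks: [{ id: 1 }, { id: 2 }] }
];
clearSubtasks(tasks, '16,18'); // tasks 16 and 18 lose their subtasks; 17 is untouched
```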
### 16. Remove Subtask (`remove_subtask`)

* **MCP Tool:** `remove_subtask`
* **CLI Command:** `task-master remove-subtask [options]`
* **Description:** `Remove a subtask from its Taskmaster parent, optionally converting it into a standalone task.`
* **Key Parameters/Options:**
    * `id`: `Required. The ID(s) of the Taskmaster subtask(s) to remove, e.g., '15.2' or '16.1,16.3'.` (CLI: `-i, --id <id>`)
    * `convert`: `If used, Taskmaster will turn the subtask into a regular top-level task instead of deleting it.` (CLI: `-c, --convert`)
    * `generate`: `Enable Taskmaster to regenerate markdown task files after removing the subtask.` (CLI: `--generate`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Delete unnecessary subtasks or promote a subtask to a top-level task.

### 17. Move Task (`move_task`)

* **MCP Tool:** `move_task`
* **CLI Command:** `task-master move [options]`
* **Description:** `Move a task or subtask to a new position within the task hierarchy.`
* **Key Parameters/Options:**
    * `from`: `Required. ID of the task/subtask to move (e.g., "5" or "5.2"). Can be comma-separated for multiple tasks.` (CLI: `--from <id>`)
    * `to`: `Required. ID of the destination (e.g., "7" or "7.3"). Must match the number of source IDs if comma-separated.` (CLI: `--to <id>`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Reorganize tasks by moving them within the hierarchy. Supports various scenarios like:
    * Moving a task to become a subtask
    * Moving a subtask to become a standalone task
    * Moving a subtask to a different parent
    * Reordering subtasks within the same parent
    * Moving a task to a new, non-existent ID (automatically creates placeholders)
    * Moving multiple tasks at once with comma-separated IDs
* **Validation Features:**
    * Allows moving tasks to non-existent destination IDs (creates placeholder tasks)
    * Prevents moving to existing task IDs that already have content (to avoid overwriting)
    * Validates that source tasks exist before attempting to move them
    * Maintains proper parent-child relationships
* **Example CLI:** `task-master move --from=5.2 --to=7.3` to move subtask 5.2 to become subtask 7.3.
* **Example Multi-Move:** `task-master move --from=10,11,12 --to=16,17,18` to move multiple tasks to new positions.
* **Common Use:** Resolving merge conflicts in tasks.json when multiple team members create tasks on different branches.

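The multi-move rule above (source and destination lists must match in length) can be illustrated with a small hypothetical parser:

```javascript
// Sketch of multi-move validation: --from and --to must pair up one-to-one.
function parseMovePairs(from, to) {
  const src = from.split(',').map(s => s.trim());
  const dst = to.split(',').map(s => s.trim());
  if (src.length !== dst.length) {
    throw new Error('--from and --to must contain the same number of IDs');
  }
  return src.map((s, i) => ({ from: s, to: dst[i] }));
}

console.log(parseMovePairs('10,11,12', '16,17,18'));
// [{ from: '10', to: '16' }, { from: '11', to: '17' }, { from: '12', to: '18' }]
```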
---

## Dependency Management

### 18. Add Dependency (`add_dependency`)

* **MCP Tool:** `add_dependency`
* **CLI Command:** `task-master add-dependency [options]`
* **Description:** `Define a dependency in Taskmaster, making one task a prerequisite for another.`
* **Key Parameters/Options:**
    * `id`: `Required. The ID of the Taskmaster task that will depend on another.` (CLI: `-i, --id <id>`)
    * `dependsOn`: `Required. The ID of the Taskmaster task that must be completed first, the prerequisite.` (CLI: `-d, --depends-on <id>`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <path>`)
* **Usage:** Establish the correct order of execution between tasks.

### 19. Remove Dependency (`remove_dependency`)

* **MCP Tool:** `remove_dependency`
* **CLI Command:** `task-master remove-dependency [options]`
* **Description:** `Remove a dependency relationship between two Taskmaster tasks.`
* **Key Parameters/Options:**
    * `id`: `Required. The ID of the Taskmaster task you want to remove a prerequisite from.` (CLI: `-i, --id <id>`)
    * `dependsOn`: `Required. The ID of the Taskmaster task that should no longer be a prerequisite.` (CLI: `-d, --depends-on <id>`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Update task relationships when the order of execution changes.

### 20. Validate Dependencies (`validate_dependencies`)

* **MCP Tool:** `validate_dependencies`
* **CLI Command:** `task-master validate-dependencies [options]`
* **Description:** `Check your Taskmaster tasks for dependency issues (like circular references or links to non-existent tasks) without making changes.`
* **Key Parameters/Options:**
    * `tag`: `Specify which tag context to validate. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Audit the integrity of your task dependencies.

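The checks described above (links to non-existent tasks, circular references) could look roughly like this; `validateDependencies` here is an illustrative sketch, not the actual Taskmaster code:

```javascript
// Hedged sketch: flag missing dependency targets and circular dependency chains.
function validateDependencies(tasks) {
  const ids = new Set(tasks.map(t => t.id));
  const issues = [];
  for (const t of tasks) {
    for (const dep of t.dependencies || []) {
      if (!ids.has(dep)) issues.push({ taskId: t.id, issue: `missing dependency ${dep}` });
    }
  }
  // DFS cycle detection over the dependency graph
  const visiting = new Set(), done = new Set();
  const byId = new Map(tasks.map(t => [t.id, t]));
  function visit(id) {
    if (done.has(id)) return;
    if (visiting.has(id)) { issues.push({ taskId: id, issue: 'circular dependency' }); return; }
    visiting.add(id);
    for (const dep of byId.get(id)?.dependencies || []) visit(dep);
    visiting.delete(id);
    done.add(id);
  }
  for (const t of tasks) visit(t.id);
  return issues;
}

const issues = validateDependencies([
  { id: 1, dependencies: [2] },
  { id: 2, dependencies: [1, 99] } // cycles with 1, and task 99 does not exist
]);
console.log(issues); // one "missing dependency" issue and one "circular dependency" issue
```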
### 21. Fix Dependencies (`fix_dependencies`)

* **MCP Tool:** `fix_dependencies`
* **CLI Command:** `task-master fix-dependencies [options]`
* **Description:** `Automatically fix dependency issues (like circular references or links to non-existent tasks) in your Taskmaster tasks.`
* **Key Parameters/Options:**
    * `tag`: `Specify which tag context to fix dependencies in. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Clean up dependency errors automatically.

---

## Analysis & Reporting

### 22. Analyze Project Complexity (`analyze_project_complexity`)

* **MCP Tool:** `analyze_project_complexity`
* **CLI Command:** `task-master analyze-complexity [options]`
* **Description:** `Have Taskmaster analyze your tasks to determine their complexity and suggest which ones need to be broken down further.`
* **Key Parameters/Options:**
    * `output`: `Where to save the complexity analysis report. Default is '.taskmaster/reports/task-complexity-report.json' (or '..._tagname.json' if a tag is used).` (CLI: `-o, --output <file>`)
    * `threshold`: `The minimum complexity score (1-10) that should trigger a recommendation to expand a task.` (CLI: `-t, --threshold <number>`)
    * `research`: `Enable research role for more accurate complexity analysis. Requires appropriate API key.` (CLI: `-r, --research`)
    * `tag`: `Specify which tag context to analyze. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Used before breaking down tasks to identify which ones need the most attention.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.

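The `threshold` option above can be illustrated with a hedged sketch of filtering a complexity report (the report shape is an assumption):

```javascript
// Illustrative: list task IDs whose complexity score meets the --threshold value.
function tasksNeedingExpansion(report, threshold = 5) {
  return (report.complexityAnalysis || [])
    .filter(a => a.complexityScore >= threshold)
    .map(a => a.taskId);
}

const report = { complexityAnalysis: [
  { taskId: 1, complexityScore: 3 },
  { taskId: 2, complexityScore: 7 },
  { taskId: 3, complexityScore: 9 }
] };
console.log(tasksNeedingExpansion(report, 5)); // [2, 3]
```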
### 23. View Complexity Report (`complexity_report`)

* **MCP Tool:** `complexity_report`
* **CLI Command:** `task-master complexity-report [options]`
* **Description:** `Display the task complexity analysis report in a readable format.`
* **Key Parameters/Options:**
    * `tag`: `Specify which tag context to show the report for. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to the complexity report (default: '.taskmaster/reports/task-complexity-report.json').` (CLI: `-f, --file <file>`)
* **Usage:** Review and understand the complexity analysis results after running analyze-complexity.

---

## File Management

### 24. Generate Task Files (`generate`)

* **MCP Tool:** `generate`
* **CLI Command:** `task-master generate [options]`
* **Description:** `Create or update individual Markdown files for each task based on your tasks.json.`
* **Key Parameters/Options:**
    * `output`: `The directory where Taskmaster should save the task files (default: in a 'tasks' directory).` (CLI: `-o, --output <directory>`)
    * `tag`: `Specify which tag context to generate files for. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Run this after making changes to tasks.json to keep individual task files up to date. This command is now manual and no longer runs automatically.

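As a rough illustration of the per-task Markdown files this command produces, here is a hypothetical renderer; the actual file format Taskmaster emits may differ:

```javascript
// Illustrative only: the general shape of a generated per-task Markdown file.
function renderTaskFile(task) {
  let content = `# Task ID: ${task.id}\n`;
  content += `# Title: ${task.title}\n`;
  content += `# Status: ${task.status || 'pending'}\n`;
  const deps = (task.dependencies || []).join(', ');
  content += `# Dependencies: ${deps || 'None'}\n`;
  return content;
}

console.log(renderTaskFile({ id: 5, title: 'Add auth', dependencies: [2, 3] }));
```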
---

## AI-Powered Research

### 25. Research (`research`)

* **MCP Tool:** `research`
* **CLI Command:** `task-master research [options]`
* **Description:** `Perform AI-powered research queries with project context to get fresh, up-to-date information beyond the AI's knowledge cutoff.`
* **Key Parameters/Options:**
    * `query`: `Required. Research query/prompt (e.g., "What are the latest best practices for React Query v5?").` (CLI: `[query]` positional or `-q, --query <text>`)
    * `taskIds`: `Comma-separated list of task/subtask IDs from the current tag context (e.g., "15,16.2,17").` (CLI: `-i, --id <ids>`)
    * `filePaths`: `Comma-separated list of file paths for context (e.g., "src/api.js,docs/readme.md").` (CLI: `-f, --files <paths>`)
    * `customContext`: `Additional custom context text to include in the research.` (CLI: `-c, --context <text>`)
    * `includeProjectTree`: `Include project file tree structure in context (default: false).` (CLI: `--tree`)
    * `detailLevel`: `Detail level for the research response: 'low', 'medium', 'high' (default: medium).` (CLI: `--detail <level>`)
    * `saveTo`: `Task or subtask ID (e.g., "15", "15.2") to automatically save the research conversation to.` (CLI: `--save-to <id>`)
    * `saveFile`: `If true, saves the research conversation to a markdown file in '.taskmaster/docs/research/'.` (CLI: `--save-file`)
    * `noFollowup`: `Disables the interactive follow-up question menu in the CLI.` (CLI: `--no-followup`)
    * `tag`: `Specify which tag context to use for task-based context gathering. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `projectRoot`: `The directory of the project. Must be an absolute path.` (CLI: Determined automatically)
* **Usage:** **This is a POWERFUL tool that agents should use FREQUENTLY** to:
    * Get fresh information beyond knowledge cutoff dates
    * Research latest best practices, library updates, security patches
    * Find implementation examples for specific technologies
    * Validate approaches against current industry standards
    * Get contextual advice based on project files and tasks
* **When to Consider Using Research:**
    * **Before implementing any task** - Research current best practices
    * **When encountering new technologies** - Get up-to-date implementation guidance (libraries, APIs, etc.)
    * **For security-related tasks** - Find latest security recommendations
    * **When updating dependencies** - Research breaking changes and migration guides
    * **For performance optimization** - Get current performance best practices
    * **When debugging complex issues** - Research known solutions and workarounds
* **Research + Action Pattern:**
    * Use `research` to gather fresh information
    * Use `update_subtask` to commit findings with timestamps
    * Use `update_task` to incorporate research into task details
    * Use `add_task` with research flag for informed task creation
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. The research provides FRESH data beyond the AI's training cutoff, making it invaluable for current best practices and recent developments.

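A hedged sketch of how the parameters above might be assembled into a research request payload; the field names are assumptions for illustration, not the actual MCP schema:

```javascript
// Hypothetical: normalize research options into a single request object.
function buildResearchRequest({ query, taskIds = [], filePaths = [], detailLevel = 'medium' }) {
  if (!query) throw new Error('query is required');
  return {
    query,
    taskIds: taskIds.length ? taskIds.join(',') : undefined,   // "15,16.2" style
    filePaths: filePaths.length ? filePaths.join(',') : undefined,
    detailLevel
  };
}

console.log(buildResearchRequest({
  query: 'What are the latest best practices for React Query v5?',
  taskIds: [15, '16.2'],
  detailLevel: 'high'
}));
```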
---

## Tag Management

This new suite of commands allows you to manage different task contexts (tags).

### 26. List Tags (`tags`)

* **MCP Tool:** `list_tags`
* **CLI Command:** `task-master tags [options]`
* **Description:** `List all available tags with task counts, completion status, and other metadata.`
* **Key Parameters/Options:**
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
    * `--show-metadata`: `Include detailed metadata in the output (e.g., creation date, description).` (CLI: `--show-metadata`)

### 27. Add Tag (`add_tag`)

* **MCP Tool:** `add_tag`
* **CLI Command:** `task-master add-tag <tagName> [options]`
* **Description:** `Create a new, empty tag context, or copy tasks from another tag.`
* **Key Parameters/Options:**
    * `tagName`: `Name of the new tag to create (alphanumeric, hyphens, underscores).` (CLI: `<tagName>` positional)
    * `--from-branch`: `Creates a tag with a name derived from the current git branch, ignoring the <tagName> argument.` (CLI: `--from-branch`)
    * `--copy-from-current`: `Copy tasks from the currently active tag to the new tag.` (CLI: `--copy-from-current`)
    * `--copy-from <tag>`: `Copy tasks from a specific source tag to the new tag.` (CLI: `--copy-from <tag>`)
    * `--description <text>`: `Provide an optional description for the new tag.` (CLI: `-d, --description <text>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)

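The `--from-branch` behavior above implies sanitizing a git branch name into a valid tag name (alphanumeric, hyphens, underscores); a hypothetical version:

```javascript
// Hypothetical sketch: derive a valid tag name from a git branch name.
function tagNameFromBranch(branch) {
  return branch
    .replace(/[^a-zA-Z0-9_-]+/g, '-') // collapse runs of invalid characters
    .replace(/^-+|-+$/g, '')          // trim leading/trailing hyphens
    .toLowerCase();
}

console.log(tagNameFromBranch('feature/User Auth!')); // 'feature-user-auth'
```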
### 28. Delete Tag (`delete_tag`)

* **MCP Tool:** `delete_tag`
* **CLI Command:** `task-master delete-tag <tagName> [options]`
* **Description:** `Permanently delete a tag and all of its associated tasks.`
* **Key Parameters/Options:**
    * `tagName`: `Name of the tag to delete.` (CLI: `<tagName>` positional)
    * `--yes`: `Skip the confirmation prompt.` (CLI: `-y, --yes`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)

### 29. Use Tag (`use_tag`)

* **MCP Tool:** `use_tag`
* **CLI Command:** `task-master use-tag <tagName>`
* **Description:** `Switch your active task context to a different tag.`
* **Key Parameters/Options:**
    * `tagName`: `Name of the tag to switch to.` (CLI: `<tagName>` positional)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)

### 30. Rename Tag (`rename_tag`)

* **MCP Tool:** `rename_tag`
* **CLI Command:** `task-master rename-tag <oldName> <newName>`
* **Description:** `Rename an existing tag.`
* **Key Parameters/Options:**
    * `oldName`: `The current name of the tag.` (CLI: `<oldName>` positional)
    * `newName`: `The new name for the tag.` (CLI: `<newName>` positional)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)

### 31. Copy Tag (`copy_tag`)

* **MCP Tool:** `copy_tag`
* **CLI Command:** `task-master copy-tag <sourceName> <targetName> [options]`
* **Description:** `Copy an entire tag context, including all its tasks and metadata, to a new tag.`
* **Key Parameters/Options:**
    * `sourceName`: `Name of the tag to copy from.` (CLI: `<sourceName>` positional)
    * `targetName`: `Name of the new tag to create.` (CLI: `<targetName>` positional)
    * `--description <text>`: `Optional description for the new tag.` (CLI: `-d, --description <text>`)

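A minimal sketch of copying a tag context under the tagged `tasks.json` shape described later in this document (assumed structure; `structuredClone` requires Node 17+):

```javascript
// Illustrative only: deep-copy one tag's tasks into a new tag entry.
function copyTag(data, sourceName, targetName, description) {
  if (!data[sourceName]) throw new Error(`Source tag "${sourceName}" not found`);
  if (data[targetName]) throw new Error(`Target tag "${targetName}" already exists`);
  data[targetName] = {
    tasks: structuredClone(data[sourceName].tasks), // independent copy
    metadata: { description, created: new Date().toISOString() }
  };
  return data;
}

const data = { master: { tasks: [{ id: 1, title: 'Setup' }] } };
copyTag(data, 'master', 'feature-branch', 'Work for the feature branch');
```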
---

## Miscellaneous

### 32. Sync Readme (`sync-readme`) -- experimental

* **MCP Tool:** N/A
* **CLI Command:** `task-master sync-readme [options]`
* **Description:** `Export your task list to your project's README.md file, useful for showcasing progress.`
* **Key Parameters/Options:**
    * `status`: `Filter tasks by status (e.g., 'pending', 'done').` (CLI: `-s, --status <status>`)
    * `withSubtasks`: `Include subtasks in the export.` (CLI: `--with-subtasks`)
    * `tag`: `Specify which tag context to export from. Defaults to the current active tag.` (CLI: `--tag <name>`)

---

## Environment Variables Configuration (Updated)

Taskmaster primarily uses the **`.taskmaster/config.json`** file (in project root) for configuration (models, parameters, logging level, etc.), managed via `task-master models --setup`.

Environment variables are used **only** for sensitive API keys related to AI providers and specific overrides like the Ollama base URL:

* **API Keys (Required for corresponding provider):**
    * `ANTHROPIC_API_KEY`
    * `PERPLEXITY_API_KEY`
    * `OPENAI_API_KEY`
    * `GOOGLE_API_KEY`
    * `MISTRAL_API_KEY`
    * `AZURE_OPENAI_API_KEY` (Requires `AZURE_OPENAI_ENDPOINT` too)
    * `OPENROUTER_API_KEY`
    * `XAI_API_KEY`
    * `OLLAMA_API_KEY` (Requires `OLLAMA_BASE_URL` too)
* **Endpoints (Optional/Provider Specific inside .taskmaster/config.json):**
    * `AZURE_OPENAI_ENDPOINT`
    * `OLLAMA_BASE_URL` (Default: `http://localhost:11434/api`)

**Set API keys** in your **`.env`** file in the project root (for CLI use) or within the `env` section of your **`.cursor/mcp.json`** file (for MCP/Cursor integration). All other settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in `.taskmaster/config.json` via the `task-master models` command or the `models` MCP tool.

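A small example of checking which of the provider keys above are actually set in the environment (pure illustration, not part of Taskmaster):

```javascript
// Report which provider API keys have non-empty values in the given environment.
const PROVIDER_KEYS = [
  'ANTHROPIC_API_KEY', 'PERPLEXITY_API_KEY', 'OPENAI_API_KEY',
  'GOOGLE_API_KEY', 'MISTRAL_API_KEY', 'AZURE_OPENAI_API_KEY',
  'OPENROUTER_API_KEY', 'XAI_API_KEY', 'OLLAMA_API_KEY'
];
function availableProviders(env = process.env) {
  return PROVIDER_KEYS.filter(key => Boolean(env[key]));
}

console.log(availableProviders({ ANTHROPIC_API_KEY: 'sk-test', OPENAI_API_KEY: '' }));
// [ 'ANTHROPIC_API_KEY' ] — empty values are treated as unset
```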
---

For details on how these commands fit into the development process, see the [Development Workflow Guide](mdc:.cursor/rules/dev_workflow.mdc).

````diff
@@ -3,9 +3,19 @@ description: Guidelines for implementing task management operations
 globs: scripts/modules/task-manager.js
 alwaysApply: false
 ---
 
 # Task Management Guidelines
 
+## Tagged Task Lists System
+
+Task Master now uses a **tagged task lists system** for multi-context task management:
+
+- **Data Structure**: Tasks are organized into separate contexts (tags) within `tasks.json`
+  - **Legacy Format**: `{"tasks": [...]}`
+  - **Tagged Format**: `{"master": {"tasks": [...]}, "feature-branch": {"tasks": [...]}}`
+- **Silent Migration**: Legacy format automatically converts to tagged format on first use
+- **Tag Resolution**: Core functions receive legacy format for 100% backward compatibility
+- **Default Tag**: "master" is used for all existing and new tasks unless otherwise specified
+
 ## Task Structure Standards
 
 - **Core Task Properties**:
````

````diff
@@ -28,6 +38,25 @@ alwaysApply: false
 };
 ```
 
+- **Tagged Data Structure**:
+  - ✅ DO: Access tasks through tag resolution layer
+  - ✅ DO: Use `getTasksForTag(data, tagName)` to retrieve tasks for a specific tag
+  - ✅ DO: Use `setTasksForTag(data, tagName, tasks)` to update tasks for a specific tag
+  - ❌ DON'T: Directly manipulate the tagged structure in core functions
+
+```javascript
+// ✅ DO: Use tag resolution functions
+const tasksData = readJSON(tasksPath);
+const currentTag = getCurrentTag() || 'master';
+const tasks = getTasksForTag(tasksData, currentTag);
+
+// Manipulate tasks as normal...
+
+// Save back to the tagged structure
+setTasksForTag(tasksData, currentTag, tasks);
+writeJSON(tasksPath, tasksData);
+```
+
 - **Subtask Structure**:
   - ✅ DO: Use consistent properties across subtasks
   - ✅ DO: Maintain simple numeric IDs within parent tasks
````

````diff
@@ -48,53 +77,56 @@ alwaysApply: false
 ## Task Creation and Parsing
 
 - **ID Management**:
-  - ✅ DO: Assign unique sequential IDs to tasks
-  - ✅ DO: Calculate the next ID based on existing tasks
-  - ❌ DON'T: Hardcode or reuse IDs
+  - ✅ DO: Assign unique sequential IDs to tasks within each tag context
+  - ✅ DO: Calculate the next ID based on existing tasks in the current tag
+  - ❌ DON'T: Hardcode or reuse IDs within the same tag
 
 ```javascript
-// ✅ DO: Calculate the next available ID
-const highestId = Math.max(...data.tasks.map(t => t.id));
+// ✅ DO: Calculate the next available ID within the current tag
+const tasksData = readJSON(tasksPath);
+const currentTag = getCurrentTag() || 'master';
+const tasks = getTasksForTag(tasksData, currentTag);
+const highestId = Math.max(...tasks.map(t => t.id));
 const nextTaskId = highestId + 1;
 ```
 
 - **PRD Parsing**:
   - ✅ DO: Extract tasks from PRD documents using AI
+  - ✅ DO: Create tasks in the current tag context (defaults to "master")
   - ✅ DO: Provide clear prompts to guide AI task generation
   - ✅ DO: Validate and clean up AI-generated tasks
 
 ```javascript
-// ✅ DO: Validate AI responses
-try {
-  // Parse the JSON response
-  taskData = JSON.parse(jsonContent);
-
-  // Check that we have the required fields
-  if (!taskData.title || !taskData.description) {
-    throw new Error("Missing required fields in the generated task");
-  }
-} catch (error) {
-  log('error', "Failed to parse AI's response as valid task JSON:", error);
-  process.exit(1);
-}
+// ✅ DO: Parse into current tag context
+const tasksData = readJSON(tasksPath) || {};
+const currentTag = getCurrentTag() || 'master';
+
+// Parse tasks and add to current tag
+const newTasks = await parseTasksFromPRD(prdContent);
+setTasksForTag(tasksData, currentTag, newTasks);
+writeJSON(tasksPath, tasksData);
 ```
 
 ## Task Updates and Modifications
 
 - **Status Management**:
-  - ✅ DO: Provide functions for updating task status
+  - ✅ DO: Provide functions for updating task status within current tag context
   - ✅ DO: Handle both individual tasks and subtasks
   - ✅ DO: Consider subtask status when updating parent tasks
 
 ```javascript
-// ✅ DO: Handle status updates for both tasks and subtasks
+// ✅ DO: Handle status updates within tagged context
 async function setTaskStatus(tasksPath, taskIdInput, newStatus) {
+  const tasksData = readJSON(tasksPath);
+  const currentTag = getCurrentTag() || 'master';
+  const tasks = getTasksForTag(tasksData, currentTag);
+
   // Check if it's a subtask (e.g., "1.2")
   if (taskIdInput.includes('.')) {
     const [parentId, subtaskId] = taskIdInput.split('.').map(id => parseInt(id, 10));
 
     // Find the parent task and subtask
-    const parentTask = data.tasks.find(t => t.id === parentId);
+    const parentTask = tasks.find(t => t.id === parentId);
     const subtask = parentTask.subtasks.find(st => st.id === subtaskId);
 
     // Update subtask status
````

````diff
@@ -109,7 +141,7 @@ alwaysApply: false
     }
   } else {
     // Handle regular task
-    const task = data.tasks.find(t => t.id === parseInt(taskIdInput, 10));
+    const task = tasks.find(t => t.id === parseInt(taskIdInput, 10));
     task.status = newStatus;
 
     // If marking as done, also mark subtasks
````

````diff
@@ -119,16 +151,24 @@ alwaysApply: false
       });
     }
   }
+
+  // Save updated tasks back to tagged structure
+  setTasksForTag(tasksData, currentTag, tasks);
+  writeJSON(tasksPath, tasksData);
 }
 ```
 
 - **Task Expansion**:
-  - ✅ DO: Use AI to generate detailed subtasks
+  - ✅ DO: Use AI to generate detailed subtasks within current tag context
   - ✅ DO: Consider complexity analysis for subtask counts
   - ✅ DO: Ensure proper IDs for newly created subtasks
 
 ```javascript
 // ✅ DO: Generate appropriate subtasks based on complexity
+const tasksData = readJSON(tasksPath);
+const currentTag = getCurrentTag() || 'master';
+const tasks = getTasksForTag(tasksData, currentTag);
+
 if (taskAnalysis) {
   log('info', `Found complexity analysis for task ${taskId}: Score ${taskAnalysis.complexityScore}/10`);
````

````diff
@@ -138,6 +178,11 @@ alwaysApply: false
     log('info', `Using recommended number of subtasks: ${numSubtasks}`);
   }
 }
+
+// Generate subtasks and save back
+// ... subtask generation logic ...
+setTasksForTag(tasksData, currentTag, tasks);
+writeJSON(tasksPath, tasksData);
 ```
 
 ## Task File Generation
````

@@ -155,67 +200,65 @@ alwaysApply: false
|
|||||||
|
|
||||||
```javascript
// Format dependencies with their status
if (task.dependencies && task.dependencies.length > 0) {
  content += `# Dependencies: ${formatDependenciesWithStatus(task.dependencies, tasks)}\n`;
} else {
  content += '# Dependencies: None\n';
}
```

- **Tagged Context Awareness**:
  - ✅ DO: Generate task files from current tag context
  - ✅ DO: Include tag information in generated files
  - ❌ DON'T: Mix tasks from different tags in file generation

```javascript
// ✅ DO: Generate files for current tag context
async function generateTaskFiles(tasksPath, outputDir) {
  const tasksData = readJSON(tasksPath);
  const currentTag = getCurrentTag() || 'master';
  const tasks = getTasksForTag(tasksData, currentTag);

  // Add tag context to file header
  let content = `# Tag Context: ${currentTag}\n`;
  content += `# Task ID: ${task.id}\n`;
  // ... rest of file generation
}
```

## Task Listing and Display

- **Filtering and Organization**:
  - ✅ DO: Allow filtering tasks by status within current tag context
  - ✅ DO: Handle subtask display in lists
  - ✅ DO: Use consistent table formats

```javascript
// ✅ DO: Implement clear filtering within tag context
const tasksData = readJSON(tasksPath);
const currentTag = getCurrentTag() || 'master';
const tasks = getTasksForTag(tasksData, currentTag);

// Filter tasks by status if specified
const filteredTasks = statusFilter
  ? tasks.filter(task =>
      task.status && task.status.toLowerCase() === statusFilter.toLowerCase())
  : tasks;
```

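As a self-contained illustration of the case-insensitive status filter, the following sketch uses a made-up `tasks` array in place of the result of `getTasksForTag()`:

```javascript
// Hypothetical data standing in for the array returned by getTasksForTag()
const tasks = [
  { id: 1, title: 'Setup', status: 'done' },
  { id: 2, title: 'API', status: 'Pending' }, // mixed case on purpose
  { id: 3, title: 'UI' } // no status field at all
];

// Same filtering expression as above, wrapped in a function for reuse
function filterByStatus(tasks, statusFilter) {
  return statusFilter
    ? tasks.filter(task =>
        task.status && task.status.toLowerCase() === statusFilter.toLowerCase())
    : tasks;
}

console.log(filterByStatus(tasks, 'pending').map(t => t.id)); // [ 2 ]
console.log(filterByStatus(tasks, null).length); // 3
```

Note that tasks with no `status` field are excluded by any filter, and a missing filter passes everything through unchanged.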
- **Progress Tracking**:
  - ✅ DO: Calculate and display completion statistics for current tag
  - ✅ DO: Track both task and subtask completion
  - ✅ DO: Use visual progress indicators

```javascript
// ✅ DO: Track and display progress within tag context
const tasksData = readJSON(tasksPath);
const currentTag = getCurrentTag() || 'master';
const tasks = getTasksForTag(tasksData, currentTag);

// Calculate completion statistics
const totalTasks = tasks.length;
const completedTasks = tasks.filter(task =>
  task.status === 'done' || task.status === 'completed').length;
const completionPercentage = totalTasks > 0 ? (completedTasks / totalTasks) * 100 : 0;

let totalSubtasks = 0;
let completedSubtasks = 0;

tasks.forEach(task => {
  if (task.subtasks && task.subtasks.length > 0) {
    totalSubtasks += task.subtasks.length;
    completedSubtasks += task.subtasks.filter(st =>
      // ... (completion check elided in the diff) ...
  }
});
```

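A runnable sketch of those statistics on made-up data follows; the subtask completion check is elided in the diff above, so this sketch assumes it mirrors the task-level `done`/`completed` test:

```javascript
// Hypothetical tag-resolved task list for illustration
const tasks = [
  { id: 1, status: 'done',
    subtasks: [{ id: '1.1', status: 'done' }, { id: '1.2', status: 'pending' }] },
  { id: 2, status: 'completed' },
  { id: 3, status: 'pending' },
  { id: 4, status: 'in-progress' }
];

const totalTasks = tasks.length;
const completedTasks = tasks.filter(t =>
  t.status === 'done' || t.status === 'completed').length;
const completionPercentage = totalTasks > 0 ? (completedTasks / totalTasks) * 100 : 0;

let totalSubtasks = 0;
let completedSubtasks = 0;
tasks.forEach(task => {
  if (task.subtasks && task.subtasks.length > 0) {
    totalSubtasks += task.subtasks.length;
    // Assumed completion predicate (the diff elides the real one)
    completedSubtasks += task.subtasks.filter(st =>
      st.status === 'done' || st.status === 'completed').length;
  }
});

console.log(completionPercentage, totalSubtasks, completedSubtasks); // 50 2 1
```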
## Migration and Compatibility

- **Silent Migration Handling**:
  - ✅ DO: Implement silent migration in `readJSON()` function
  - ✅ DO: Detect legacy format and convert automatically
  - ✅ DO: Preserve all existing task data during migration

```javascript
// ✅ DO: Handle silent migration (implemented in utils.js)
function readJSON(filepath) {
  let data = JSON.parse(fs.readFileSync(filepath, 'utf8'));

  // Silent migration for tasks.json files
  if (data.tasks && Array.isArray(data.tasks) && !data.master && isTasksFile) {
    const migratedData = {
      master: {
        tasks: data.tasks
      }
    };
    writeJSON(filepath, migratedData);
    data = migratedData;
  }

  return data;
}
```

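The migration logic above can be exercised in isolation with a pure-function sketch (file I/O and the `isTasksFile` check omitted; this is an illustration, not the real `utils.js` implementation):

```javascript
// Pure-function sketch of the legacy -> tagged migration
function migrateIfLegacy(data) {
  if (data.tasks && Array.isArray(data.tasks) && !data.master) {
    // Wrap the legacy flat task array under the default 'master' tag
    return { master: { tasks: data.tasks } };
  }
  return data; // already tagged (or not a tasks file): untouched
}

const legacy = { tasks: [{ id: 1, title: 'Setup project' }] };
const migrated = migrateIfLegacy(legacy);
console.log(migrated.master.tasks.length); // 1

const tagged = { master: { tasks: [] } };
console.log(migrateIfLegacy(tagged) === tagged); // true
```

All existing task objects are carried over by reference, so no task data is lost during the wrap.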
- **Tag Resolution**:
  - ✅ DO: Use tag resolution functions to maintain backward compatibility
  - ✅ DO: Return legacy format to core functions
  - ❌ DON'T: Expose tagged structure to existing core logic

```javascript
// ✅ DO: Use tag resolution layer
function getTasksForTag(data, tagName) {
  if (data.tasks && Array.isArray(data.tasks)) {
    // Legacy format - return as-is
    return data.tasks;
  }

  if (data[tagName] && data[tagName].tasks) {
    // Tagged format - return tasks for specified tag
    return data[tagName].tasks;
  }

  return [];
}
```

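To make the resolution behavior concrete, here is the same function exercised against both data shapes (the sample objects are invented for illustration):

```javascript
// Tag resolution layer, as above
function getTasksForTag(data, tagName) {
  if (data.tasks && Array.isArray(data.tasks)) {
    return data.tasks; // legacy flat format
  }
  if (data[tagName] && data[tagName].tasks) {
    return data[tagName].tasks; // tagged format
  }
  return []; // unknown tag or shape
}

const legacy = { tasks: [{ id: 1 }] };
const tagged = {
  master: { tasks: [{ id: 1 }, { id: 2 }] },
  'feature-x': { tasks: [{ id: 3 }] }
};

console.log(getTasksForTag(legacy, 'master').length);    // 1
console.log(getTasksForTag(tagged, 'feature-x').length); // 1
console.log(getTasksForTag(tagged, 'missing').length);   // 0
```

Because callers always receive a plain task array, core logic never needs to know which on-disk format was read.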
.cursor/rules/telemetry.mdc (new file, 228 lines)

---
description: Guidelines for integrating AI usage telemetry across Task Master.
globs: scripts/modules/**/*.js,mcp-server/src/**/*.js
alwaysApply: true
---

# AI Usage Telemetry Integration

This document outlines the standard pattern for capturing, propagating, and handling AI usage telemetry data (cost, tokens, model, etc.) across the Task Master stack. This ensures consistent telemetry for both CLI and MCP interactions.

## Overview

Telemetry data is generated within the unified AI service layer ([`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js)) and then passed upwards through the calling functions.

- **Data Source**: [`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js) (specifically its `generateTextService`, `generateObjectService`, etc.) returns an object like `{ mainResult: AI_CALL_OUTPUT, telemetryData: TELEMETRY_OBJECT }`.
- **`telemetryData` Object Structure**:

  ```json
  {
    "timestamp": "ISO_STRING_DATE",
    "userId": "USER_ID_FROM_CONFIG",
    "commandName": "invoking_command_or_tool_name",
    "modelUsed": "ai_model_id",
    "providerName": "ai_provider_name",
    "inputTokens": NUMBER,
    "outputTokens": NUMBER,
    "totalTokens": NUMBER,
    "totalCost": NUMBER, // e.g., 0.012414
    "currency": "USD" // e.g., "USD"
  }
  ```
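A small validation helper can make the field expectations concrete. This is a hypothetical sketch, not part of Task Master, and it assumes `totalTokens` is the sum of `inputTokens` and `outputTokens`:

```javascript
// Hypothetical shape check for a telemetryData object
function isValidTelemetry(t) {
  return (
    typeof t.timestamp === 'string' &&
    typeof t.commandName === 'string' &&
    Number.isFinite(t.inputTokens) &&
    Number.isFinite(t.outputTokens) &&
    // Assumption: total is input + output
    t.totalTokens === t.inputTokens + t.outputTokens &&
    typeof t.totalCost === 'number' &&
    typeof t.currency === 'string'
  );
}

const sample = {
  timestamp: new Date().toISOString(),
  userId: 'user-123',
  commandName: 'add-task',
  modelUsed: 'some-model-id',
  providerName: 'some-provider',
  inputTokens: 1000,
  outputTokens: 500,
  totalTokens: 1500,
  totalCost: 0.012414,
  currency: 'USD'
};

console.log(isValidTelemetry(sample)); // true
```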

## Integration Pattern by Layer

The key principle is that each layer receives telemetry data from the layer below it (if applicable) and passes it to the layer above it, or handles it for display in the case of the CLI.

### 1. Core Logic Functions (e.g., in `scripts/modules/task-manager/`)

Functions in this layer that invoke AI services are responsible for handling the `telemetryData` they receive from [`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js).

- **Actions**:
  1. Call the appropriate AI service function (e.g., `generateObjectService`).
     - Pass `commandName` (e.g., `add-task`, `expand-task`) and `outputType` (e.g., `cli` or `mcp`) in the `params` object to the AI service. The `outputType` can be derived from context (e.g., presence of `mcpLog`).
  2. The AI service returns an object, e.g., `aiServiceResponse = { mainResult: {/*AI output*/}, telemetryData: {/*telemetry data*/} }`.
  3. Extract `aiServiceResponse.mainResult` for the core processing.
  4. **Must return an object that includes `aiServiceResponse.telemetryData`**.
     Example: `return { operationSpecificData: /*...*/, telemetryData: aiServiceResponse.telemetryData };`

- **CLI Output Handling (If Applicable)**:
  - If the core function also handles CLI output (e.g., it has an `outputFormat` parameter that can be `'text'` or `'cli'`):
    1. Check if `outputFormat === 'text'` (or `'cli'`).
    2. If so, and if `aiServiceResponse.telemetryData` is available, call `displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli')` from [`scripts/modules/ui.js`](mdc:scripts/modules/ui.js).
  - This ensures telemetry is displayed directly to CLI users after the main command output.

- **Example Snippet (Core Logic in `scripts/modules/task-manager/someAiAction.js`)**:

  ```javascript
  import { generateObjectService } from '../ai-services-unified.js';
  import { displayAiUsageSummary } from '../ui.js';

  async function performAiRelatedAction(params, context, outputFormat = 'text') {
    const { commandNameFromContext, /* other context vars */ } = context;
    let aiServiceResponse = null;

    try {
      aiServiceResponse = await generateObjectService({
        // ... other parameters for AI service ...
        commandName: commandNameFromContext || 'default-action-name',
        outputType: context.mcpLog ? 'mcp' : 'cli' // Derive outputType
      });

      const usefulAiOutput = aiServiceResponse.mainResult.object;
      // ... do work with usefulAiOutput ...

      if (outputFormat === 'text' && aiServiceResponse.telemetryData) {
        displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli');
      }

      return {
        actionData: /* results of processing */,
        telemetryData: aiServiceResponse.telemetryData
      };
    } catch (error) {
      // ... handle error ...
      throw error;
    }
  }
  ```

### 2. Direct Function Wrappers (in `mcp-server/src/core/direct-functions/`)

These functions adapt core logic for the MCP server, ensuring structured responses.

- **Actions**:
  1. Call the corresponding core logic function.
     - Pass necessary context (e.g., `session`, `mcpLog`, `projectRoot`).
     - Provide the `commandName` (typically derived from the MCP tool name) and `outputType: 'mcp'` in the context object passed to the core function.
     - If the core function supports an `outputFormat` parameter, pass `'json'` to suppress CLI-specific UI.
  2. The core logic function returns an object (e.g., `coreResult = { actionData: ..., telemetryData: ... }`).
  3. Include `coreResult.telemetryData` as a field within the `data` object of the successful response returned by the direct function.

- **Example Snippet (Direct Function `someAiActionDirect.js`)**:

  ```javascript
  import { performAiRelatedAction } from '../../../../scripts/modules/task-manager/someAiAction.js'; // Core function
  import { createLogWrapper } from '../../tools/utils.js'; // MCP Log wrapper

  export async function someAiActionDirect(args, log, context = {}) {
    const { session } = context;
    // ... prepare arguments for core function from args, including args.projectRoot ...

    try {
      const coreResult = await performAiRelatedAction(
        { /* parameters for core function */ },
        { // Context for core function
          session,
          mcpLog: createLogWrapper(log),
          projectRoot: args.projectRoot,
          commandNameFromContext: 'mcp_tool_some_ai_action', // Example command name
          outputType: 'mcp'
        },
        'json' // Request 'json' output format from core function
      );

      return {
        success: true,
        data: {
          operationSpecificData: coreResult.actionData,
          telemetryData: coreResult.telemetryData // Pass telemetry through
        }
      };
    } catch (error) {
      // ... error handling, return { success: false, error: ... } ...
    }
  }
  ```

### 3. MCP Tools (in `mcp-server/src/tools/`)

These are the exposed endpoints for MCP clients.

- **Actions**:
  1. Call the corresponding direct function wrapper.
  2. The direct function returns an object structured like `{ success: true, data: { operationSpecificData: ..., telemetryData: ... } }` (or an error object).
  3. Pass this entire result object to `handleApiResult(result, log)` from [`mcp-server/src/tools/utils.js`](mdc:mcp-server/src/tools/utils.js).
  4. `handleApiResult` ensures that the `data` field from the direct function's response (which correctly includes `telemetryData`) is part of the final MCP response.

- **Example Snippet (MCP Tool `some_ai_action.js`)**:

  ```javascript
  import { someAiActionDirect } from '../core/task-master-core.js';
  import { handleApiResult, withNormalizedProjectRoot } from './utils.js';
  // ... zod for parameters ...

  export function registerSomeAiActionTool(server) {
    server.addTool({
      name: "some_ai_action",
      // ... description, parameters ...
      execute: withNormalizedProjectRoot(async (args, { log, session }) => {
        try {
          const resultFromDirectFunction = await someAiActionDirect(
            { /* args including projectRoot */ },
            log,
            { session }
          );
          return handleApiResult(resultFromDirectFunction, log); // This passes the nested telemetryData through
        } catch (error) {
          // ... error handling ...
        }
      })
    });
  }
  ```

### 4. CLI Commands (`scripts/modules/commands.js`)

These define the command-line interface.

- **Actions**:
  1. Call the appropriate core logic function.
  2. Pass `outputFormat: 'text'` (or ensure the core function defaults to text-based output for CLI).
  3. The core logic function (as per Section 1) is responsible for calling `displayAiUsageSummary` if telemetry data is available and it's in CLI mode.
  4. The command action itself **should not** call `displayAiUsageSummary` if the core logic function already handles this. This avoids duplicate display.

- **Example Snippet (CLI Command in `commands.js`)**:

  ```javascript
  // In scripts/modules/commands.js
  import { performAiRelatedAction } from './task-manager/someAiAction.js'; // Core function

  programInstance
    .command('some-cli-ai-action')
    // ... .option() ...
    .action(async (options) => {
      try {
        const projectRoot = findProjectRoot() || '.'; // Example root finding
        // ... prepare parameters for core function from command options ...
        await performAiRelatedAction(
          { /* parameters for core function */ },
          { // Context for core function
            projectRoot,
            commandNameFromContext: 'some-cli-ai-action',
            outputType: 'cli'
          },
          'text' // Explicitly request text output format for CLI
        );
        // Core function handles displayAiUsageSummary internally for 'text' outputFormat
      } catch (error) {
        // ... error handling ...
      }
    });
  ```

## Summary Flow

The telemetry data flows as follows:

1. **[`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js)**: Generates `telemetryData` and returns `{ mainResult, telemetryData }`.
2. **Core Logic Function**:
   * Receives `{ mainResult, telemetryData }`.
   * Uses `mainResult`.
   * If CLI (`outputFormat: 'text'`), calls `displayAiUsageSummary(telemetryData)`.
   * Returns `{ operationSpecificData, telemetryData }`.
3. **Direct Function Wrapper**:
   * Receives `{ operationSpecificData, telemetryData }` from core logic.
   * Returns `{ success: true, data: { operationSpecificData, telemetryData } }`.
4. **MCP Tool**:
   * Receives direct function response.
   * `handleApiResult` ensures the final MCP response to the client is `{ success: true, data: { operationSpecificData, telemetryData } }`.
5. **CLI Command**:
   * Calls core logic with `outputFormat: 'text'`. Display is handled by core logic.

This pattern ensures telemetry is captured and appropriately handled/exposed across all interaction modes.

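The layer-to-layer propagation can be sketched with plain functions standing in for each layer; the names here are illustrative stand-ins, not the real Task Master APIs:

```javascript
// Layer 1: stand-in for the unified AI service
async function fakeAiService() {
  return {
    mainResult: { object: { title: 'New task' } },
    telemetryData: { totalTokens: 1500, totalCost: 0.01 }
  };
}

// Layer 2: core logic uses mainResult, forwards telemetryData
async function coreLogic() {
  const { mainResult, telemetryData } = await fakeAiService();
  return { operationSpecificData: mainResult.object, telemetryData };
}

// Layer 3: direct function wraps the core result in { success, data }
async function directFunction() {
  const coreResult = await coreLogic();
  return { success: true, data: coreResult };
}

directFunction().then(res => {
  // telemetryData survives every hop unchanged
  console.log(res.success, res.data.telemetryData.totalTokens); // true 1500
});
```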
.cursor/rules/test_workflow.mdc (new file, 803 lines)

---
description:
globs:
alwaysApply: true
---

# Test Workflow & Development Process

## **Initial Testing Framework Setup**

Before implementing the TDD workflow, ensure your project has a proper testing framework configured. This section covers setup for different technology stacks.

### **Detecting Project Type & Framework Needs**

**AI Agent Assessment Checklist:**
1. **Language Detection**: Check for `package.json` (Node.js/JavaScript), `requirements.txt` (Python), `Cargo.toml` (Rust), etc.
2. **Existing Tests**: Look for test files (`.test.`, `.spec.`, `_test.`) or test directories
3. **Framework Detection**: Check for existing test runners in dependencies
4. **Project Structure**: Analyze directory structure for testing patterns

### **JavaScript/Node.js Projects (Jest Setup)**

#### **Prerequisites Check**

```bash
# Verify Node.js project
ls package.json # Should exist

# Check for existing testing setup
ls jest.config.js jest.config.ts # Check for Jest config
grep -E "(jest|vitest|mocha)" package.json # Check for test runners
```

#### **Jest Installation & Configuration**

**Step 1: Install Dependencies**

```bash
# Core Jest dependencies
npm install --save-dev jest

# TypeScript support (if using TypeScript)
npm install --save-dev ts-jest @types/jest

# Additional useful packages
npm install --save-dev supertest @types/supertest # For API testing
npm install --save-dev jest-watch-typeahead # Enhanced watch mode
```

**Step 2: Create Jest Configuration**

Create `jest.config.js` with the following production-ready configuration:

```javascript
/** @type {import('jest').Config} */
module.exports = {
  // Use ts-jest preset for TypeScript support
  preset: 'ts-jest',

  // Test environment
  testEnvironment: 'node',

  // Roots for test discovery
  roots: ['<rootDir>/src', '<rootDir>/tests'],

  // Test file patterns
  testMatch: ['**/__tests__/**/*.ts', '**/?(*.)+(spec|test).ts'],

  // Transform files
  transform: {
    '^.+\\.ts$': [
      'ts-jest',
      {
        tsconfig: {
          target: 'es2020',
          module: 'commonjs',
          esModuleInterop: true,
          allowSyntheticDefaultImports: true,
          skipLibCheck: true,
          strict: false,
          noImplicitAny: false,
        },
      },
    ],
    '^.+\\.js$': [
      'ts-jest',
      {
        useESM: false,
        tsconfig: {
          target: 'es2020',
          module: 'commonjs',
          esModuleInterop: true,
          allowSyntheticDefaultImports: true,
          allowJs: true,
        },
      },
    ],
  },

  // Module file extensions
  moduleFileExtensions: ['ts', 'tsx', 'js', 'jsx', 'json', 'node'],

  // Transform ignore patterns - adjust for ES modules
  transformIgnorePatterns: ['node_modules/(?!(your-es-module-deps|.*\\.mjs$))'],

  // Coverage configuration
  collectCoverage: true,
  coverageDirectory: 'coverage',
  coverageReporters: [
    'text', // Console output
    'text-summary', // Brief summary
    'lcov', // For IDE integration
    'html', // Detailed HTML report
  ],

  // Files to collect coverage from
  collectCoverageFrom: [
    'src/**/*.ts',
    '!src/**/*.d.ts',
    '!src/**/*.test.ts',
    '!src/**/index.ts', // Often just exports
    '!src/generated/**', // Generated code
    '!src/config/database.ts', // Database config (tested via integration)
  ],

  // Coverage thresholds - TaskMaster standards
  coverageThreshold: {
    global: {
      branches: 70,
      functions: 80,
      lines: 80,
      statements: 80,
    },
    // Higher standards for critical business logic
    './src/utils/': {
      branches: 85,
      functions: 90,
      lines: 90,
      statements: 90,
    },
    './src/middleware/': {
      branches: 80,
      functions: 85,
      lines: 85,
      statements: 85,
    },
  },

  // Setup files
  setupFilesAfterEnv: ['<rootDir>/tests/setup.ts'],

  // Global teardown to prevent worker process leaks
  globalTeardown: '<rootDir>/tests/teardown.ts',

  // Module path mapping (if needed)
  moduleNameMapper: {
    '^@/(.*)$': '<rootDir>/src/$1',
  },

  // Clear mocks between tests
  clearMocks: true,

  // Restore mocks after each test
  restoreMocks: true,

  // Global test timeout
  testTimeout: 10000,

  // Projects for different test types
  projects: [
    // Unit tests - for pure functions only
    {
      displayName: 'unit',
      testMatch: ['<rootDir>/src/**/*.test.ts'],
      testPathIgnorePatterns: ['.*\\.integration\\.test\\.ts$', '/tests/'],
      preset: 'ts-jest',
      testEnvironment: 'node',
      collectCoverageFrom: [
        'src/**/*.ts',
        '!src/**/*.d.ts',
        '!src/**/*.test.ts',
        '!src/**/*.integration.test.ts',
      ],
      coverageThreshold: {
        global: {
          branches: 70,
          functions: 80,
          lines: 80,
          statements: 80,
        },
      },
    },
    // Integration tests - real database/services
    {
      displayName: 'integration',
      testMatch: [
        '<rootDir>/src/**/*.integration.test.ts',
        '<rootDir>/tests/integration/**/*.test.ts',
      ],
      preset: 'ts-jest',
      testEnvironment: 'node',
      setupFilesAfterEnv: ['<rootDir>/tests/setup/integration.ts'],
      testTimeout: 10000,
    },
    // E2E tests - full workflows
    {
      displayName: 'e2e',
      testMatch: ['<rootDir>/tests/e2e/**/*.test.ts'],
      preset: 'ts-jest',
      testEnvironment: 'node',
      setupFilesAfterEnv: ['<rootDir>/tests/setup/e2e.ts'],
      testTimeout: 30000,
    },
  ],

  // Verbose output for better debugging
  verbose: true,

  // Run projects sequentially to avoid conflicts
  maxWorkers: 1,

  // Enable watch mode plugins
  watchPlugins: ['jest-watch-typeahead/filename', 'jest-watch-typeahead/testname'],
};
```

**Step 3: Update package.json Scripts**

Add these scripts to your `package.json`:

```json
{
  "scripts": {
    "test": "jest",
    "test:watch": "jest --watch",
    "test:coverage": "jest --coverage",
    "test:unit": "jest --selectProjects unit",
    "test:integration": "jest --selectProjects integration",
    "test:e2e": "jest --selectProjects e2e",
    "test:ci": "jest --ci --coverage --watchAll=false"
  }
}
```

**Step 4: Create Test Setup Files**

Create essential test setup files:

```typescript
// tests/setup.ts - Global setup
import { jest } from '@jest/globals';

// Global test configuration
beforeAll(() => {
  // Set test timeout
  jest.setTimeout(10000);
});

afterEach(() => {
  // Clean up mocks after each test
  jest.clearAllMocks();
});
```

```typescript
// tests/setup/integration.ts - Integration test setup
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

beforeAll(async () => {
  // Connect to test database
  await prisma.$connect();
});

afterAll(async () => {
  // Cleanup and disconnect
  await prisma.$disconnect();
});

beforeEach(async () => {
  // Clean test data before each test
  // Add your cleanup logic here
});
```

```typescript
// tests/teardown.ts - Global teardown
export default async () => {
  // Global cleanup after all tests
  console.log('Global test teardown complete');
};
```

**Step 5: Create Initial Test Structure**

```bash
# Create test directories (also creates tests/fixtures for sample data)
mkdir -p tests/{setup,fixtures,unit,integration,e2e}
mkdir -p tests/unit/src/{utils,services,middleware}
```

|
|
||||||
|
### **Generic Testing Framework Setup (Any Language)**
|
||||||
|
|
||||||
|
#### **Framework Selection Guide**
|
||||||
|
|
||||||
|
**Python Projects:**
|
||||||
|
- **pytest**: Recommended for most Python projects
|
||||||
|
- **unittest**: Built-in, suitable for simple projects
|
||||||
|
- **Coverage**: Use `coverage.py` for code coverage
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Python setup example
|
||||||
|
pip install pytest pytest-cov
|
||||||
|
echo "[tool:pytest]" > pytest.ini
|
||||||
|
echo "testpaths = tests" >> pytest.ini
|
||||||
|
echo "addopts = --cov=src --cov-report=html --cov-report=term" >> pytest.ini
|
||||||
|
```
|
||||||
|
|
||||||
|
**Go Projects:**
|
||||||
|
- **Built-in testing**: Use Go's built-in `testing` package
|
||||||
|
- **Coverage**: Built-in with `go test -cover`
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Go setup example
|
||||||
|
go mod init your-project
|
||||||
|
mkdir -p tests
|
||||||
|
# Tests are typically *_test.go files alongside source
|
||||||
|
```
|
||||||
|
|
||||||
|
**Rust Projects:**
|
||||||
|
- **Built-in testing**: Use Rust's built-in test framework
|
||||||
|
- **cargo-tarpaulin**: For coverage analysis
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Rust setup example
|
||||||
|
cargo new your-project
|
||||||
|
cd your-project
|
||||||
|
cargo install cargo-tarpaulin # For coverage
|
||||||
|
```
|
||||||
|
|
||||||
|
**Java Projects:**
|
||||||
|
- **JUnit 5**: Modern testing framework
|
||||||
|
- **Maven/Gradle**: Build tools with testing integration
|
||||||
|
|
||||||
|
```xml
|
||||||
|
<!-- Maven pom.xml example -->
|
||||||
|
<dependency>
|
||||||
|
<groupId>org.junit.jupiter</groupId>
|
||||||
|
<artifactId>junit-jupiter</artifactId>
|
||||||
|
<version>5.9.2</version>
|
||||||
|
<scope>test</scope>
|
||||||
|
</dependency>
|
||||||
|
```
|
||||||
|
|
||||||
|
#### **Universal Testing Principles**
|
||||||
|
|
||||||
|
**Coverage Standards (Adapt to Your Language):**
|
||||||
|
- **Global Minimum**: 70-80% line coverage
|
||||||
|
- **Critical Code**: 85-90% coverage
|
||||||
|
- **New Features**: Must meet or exceed standards
|
||||||
|
- **Legacy Code**: Gradual improvement strategy
|
||||||
|
|
||||||
|
**Test Organization:**
|
||||||
|
- **Unit Tests**: Fast, isolated, no external dependencies
|
||||||
|
- **Integration Tests**: Test component interactions
|
||||||
|
- **E2E Tests**: Test complete user workflows
|
||||||
|
- **Performance Tests**: Load and stress testing (if applicable)
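
This unit/integration/e2e split maps directly onto Jest's `projects` option, which is what makes the `--selectProjects` scripts shown earlier work. A minimal sketch, assuming the `tests/unit`, `tests/integration`, and `tests/e2e` layout above (paths are illustrative, adjust to your project):

```javascript
// jest.config.js - hypothetical project split matching the test:unit/integration/e2e scripts
const config = {
  projects: [
    {
      displayName: 'unit',
      testMatch: ['<rootDir>/tests/unit/**/*.test.ts'],
    },
    {
      displayName: 'integration',
      testMatch: ['<rootDir>/tests/integration/**/*.test.ts'],
    },
    {
      displayName: 'e2e',
      testMatch: ['<rootDir>/tests/e2e/**/*.test.ts'],
    },
  ],
};

module.exports = config;
```

With this in place, `jest --selectProjects unit` runs only the project whose `displayName` is `unit`.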

**Naming Conventions:**
- **Test Files**: `*.test.*`, `*_test.*`, or language-specific patterns
- **Test Functions**: Descriptive names (e.g., `should_return_error_for_invalid_input`)
- **Test Directories**: Organized by test type and mirroring source structure

#### **TaskMaster Integration for Any Framework**

**Document Testing Setup in Subtasks:**
```bash
# Update subtask with testing framework setup
task-master update-subtask --id=X.Y --prompt="Testing framework setup:
- Installed [Framework Name] with coverage support
- Configured [Coverage Tool] with thresholds: 80% lines, 70% branches
- Created test directory structure: unit/, integration/, e2e/
- Added test scripts to build configuration
- All setup tests passing"
```

**Testing Framework Verification:**
```bash
# Verify setup works
[test-command]      # e.g., npm test, pytest, go test, cargo test

# Check coverage reporting
[coverage-command]  # e.g., npm run test:coverage

# Update task with verification
task-master update-subtask --id=X.Y --prompt="Testing framework verified:
- Sample tests running successfully
- Coverage reporting functional
- CI/CD integration ready
- Ready to begin TDD workflow"
```

## **Test-Driven Development (TDD) Integration**

### **Core TDD Cycle with Jest**
```bash
# 1. Start development with watch mode
npm run test:watch

# 2. Write failing test first
# Create test file: src/utils/newFeature.test.ts
# Write test that describes expected behavior

# 3. Implement minimum code to make test pass
# 4. Refactor while keeping tests green
# 5. Add edge cases and error scenarios
```

### **TDD Workflow Per Subtask**
```bash
# When starting a new subtask:
task-master set-status --id=4.1 --status=in-progress

# Begin TDD cycle:
npm run test:watch  # Keep running during development

# Document TDD progress in subtask:
task-master update-subtask --id=4.1 --prompt="TDD Progress:
- Written 3 failing tests for core functionality
- Implemented basic feature, tests now passing
- Adding edge case tests for error handling"

# Complete subtask with test summary:
task-master update-subtask --id=4.1 --prompt="Implementation complete:
- Feature implemented with 8 unit tests
- Coverage: 95% statements, 88% branches
- All tests passing, TDD cycle complete"
```

## **Testing Commands & Usage**

### **Development Commands**
```bash
# Primary development command - use during coding
npm run test:watch                               # Watch mode with Jest
npm run test:watch -- --testNamePattern="auth"   # Watch specific tests

# Targeted testing during development
npm run test:unit                                # Run only unit tests
npm run test:unit -- --coverage                  # Unit tests with coverage

# Integration testing when APIs are ready
npm run test:integration                         # Run integration tests
npm run test:integration -- --detectOpenHandles  # Debug hanging tests

# End-to-end testing for workflows
npm run test:e2e                                 # Run E2E tests
npm run test:e2e -- --testTimeout=30000          # Extended timeout for E2E
```

### **Quality Assurance Commands**
```bash
# Full test suite with coverage (before commits)
npm run test:coverage                # Complete coverage analysis

# All tests (CI/CD pipeline)
npm test                             # Run all test projects

# Specific test file execution
npm test -- auth.test.ts             # Run specific test file
npm test -- --testNamePattern="should handle errors"  # Run specific tests
```

## **Test Implementation Patterns**

### **Unit Test Development**
```typescript
// ✅ DO: Follow established patterns from auth.test.ts
describe('FeatureName', () => {
  beforeEach(() => {
    jest.clearAllMocks();
    // Setup mocks with proper typing
  });

  describe('functionName', () => {
    it('should handle normal case', () => {
      // Test implementation with specific assertions
    });

    it('should throw error for invalid input', async () => {
      // Error scenario testing
      await expect(functionName(invalidInput))
        .rejects.toThrow('Specific error message');
    });
  });
});
```

### **Integration Test Development**
```typescript
// ✅ DO: Use supertest for API endpoint testing
import request from 'supertest';
import { app } from '../../src/app';

describe('POST /api/auth/register', () => {
  beforeEach(async () => {
    await integrationTestUtils.cleanupTestData();
  });

  it('should register user successfully', async () => {
    const userData = createTestUser();

    const response = await request(app)
      .post('/api/auth/register')
      .send(userData)
      .expect(201);

    expect(response.body).toMatchObject({
      id: expect.any(String),
      email: userData.email
    });

    // Verify database state
    const user = await prisma.user.findUnique({
      where: { email: userData.email }
    });
    expect(user).toBeTruthy();
  });
});
```

### **E2E Test Development**
```typescript
// ✅ DO: Test complete user workflows
describe('User Authentication Flow', () => {
  it('should complete registration → login → protected access', async () => {
    // Step 1: Register
    const userData = createTestUser();
    await request(app)
      .post('/api/auth/register')
      .send(userData)
      .expect(201);

    // Step 2: Login
    const loginResponse = await request(app)
      .post('/api/auth/login')
      .send({ email: userData.email, password: userData.password })
      .expect(200);

    const { token } = loginResponse.body;

    // Step 3: Access protected resource
    await request(app)
      .get('/api/profile')
      .set('Authorization', `Bearer ${token}`)
      .expect(200);
  }, 30000); // Extended timeout for E2E
});
```

## **Mocking & Test Utilities**

### **Established Mocking Patterns**
```typescript
// ✅ DO: Use established bcrypt mocking pattern
jest.mock('bcrypt');
import bcrypt from 'bcrypt';
const mockHash = bcrypt.hash as jest.MockedFunction<typeof bcrypt.hash>;
const mockCompare = bcrypt.compare as jest.MockedFunction<typeof bcrypt.compare>;

// ✅ DO: Use Prisma mocking for unit tests
jest.mock('@prisma/client', () => ({
  PrismaClient: jest.fn().mockImplementation(() => ({
    user: {
      create: jest.fn(),
      findUnique: jest.fn(),
    },
    $connect: jest.fn(),
    $disconnect: jest.fn(),
  })),
}));
```

### **Test Fixtures Usage**
```typescript
// ✅ DO: Use centralized test fixtures
import { createTestUser, adminUser, invalidUser } from '../fixtures/users';

describe('User Service', () => {
  it('should handle admin user creation', async () => {
    const userData = createTestUser(adminUser);
    // Test implementation
  });

  it('should reject invalid user data', async () => {
    const userData = createTestUser(invalidUser);
    // Error testing
  });
});
```

## **Coverage Standards & Monitoring**

### **Coverage Thresholds**
- **Global Standards**: 80% lines/functions, 70% branches
- **Critical Code**: 90% utils, 85% middleware
- **New Features**: Must meet or exceed global thresholds
- **Legacy Code**: Gradual improvement with each change
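
These thresholds can be enforced by Jest itself via the `coverageThreshold` config option, so `npm run test:coverage` fails when coverage drops below the standard. A sketch to merge into `jest.config.js` (the per-directory paths are assumptions based on the structure above):

```javascript
// jest.config.js excerpt - enforce the coverage standards listed above
const coverageThreshold = {
  global: {
    lines: 80,
    functions: 80,
    branches: 70,
    statements: 80,
  },
  // Stricter per-path thresholds for critical code (paths are hypothetical)
  './src/utils/': { lines: 90 },
  './src/middleware/': { lines: 85 },
};
```

Jest treats keys starting with `./` as path-specific thresholds, applied independently of the `global` block.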

### **Coverage Reporting & Analysis**
```bash
# Generate coverage reports
npm run test:coverage

# View detailed HTML report
open coverage/lcov-report/index.html

# Coverage files generated:
# - coverage/lcov-report/index.html   # Detailed HTML report
# - coverage/lcov.info                # LCOV format for IDE integration
# - coverage/coverage-final.json      # JSON format for tooling
```

### **Coverage Quality Checks**
```typescript
// ✅ DO: Test all code paths
describe('validateInput', () => {
  it('should return true for valid input', () => {
    expect(validateInput('valid')).toBe(true);
  });

  it('should return false for various invalid inputs', () => {
    expect(validateInput('')).toBe(false);         // Empty string
    expect(validateInput(null)).toBe(false);       // Null value
    expect(validateInput(undefined)).toBe(false);  // Undefined
  });

  it('should throw for unexpected input types', () => {
    expect(() => validateInput(123)).toThrow('Invalid input type');
  });
});
```

## **Testing During Development Phases**

### **Feature Development Phase**
```bash
# 1. Start feature development
task-master set-status --id=X.Y --status=in-progress

# 2. Begin TDD cycle
npm run test:watch

# 3. Document test progress in subtask
task-master update-subtask --id=X.Y --prompt="Test development:
- Created test file with 5 failing tests
- Implemented core functionality
- Tests passing, adding error scenarios"

# 4. Verify coverage before completion
npm run test:coverage

# 5. Update subtask with final test status
task-master update-subtask --id=X.Y --prompt="Testing complete:
- 12 unit tests with full coverage
- All edge cases and error scenarios covered
- Ready for integration testing"
```

### **Integration Testing Phase**
```bash
# After API endpoints are implemented
npm run test:integration

# Update integration test templates
# Replace placeholder tests with real endpoint calls

# Document integration test results
task-master update-subtask --id=X.Y --prompt="Integration tests:
- Updated auth endpoint tests
- Database integration verified
- All HTTP status codes and responses tested"
```

### **Pre-Commit Testing Phase**
```bash
# Before committing code
npm run test:coverage     # Verify all tests pass with coverage
npm run test:unit         # Quick unit test verification
npm run test:integration  # Integration test verification (if applicable)

# Commit pattern for test updates
git add tests/ src/**/*.test.ts
git commit -m "test(task-X): Add comprehensive tests for Feature Y

- Unit tests with 95% coverage (exceeds 90% threshold)
- Integration tests for API endpoints
- Test fixtures for data generation
- Proper mocking patterns established

Task X: Feature Y - Testing complete"
```

## **Error Handling & Debugging**

### **Test Debugging Techniques**
```typescript
// ✅ DO: Use test utilities for debugging
import { testUtils } from '../setup';

it('should debug complex operation', () => {
  testUtils.withConsole(() => {
    // Console output visible only for this test
    console.log('Debug info:', complexData);
    service.complexOperation();
  });
});

// ✅ DO: Use proper async debugging
it('should handle async operations', async () => {
  const promise = service.asyncOperation();

  // Test intermediate state
  expect(service.isProcessing()).toBe(true);

  const result = await promise;
  expect(result).toBe('expected');
  expect(service.isProcessing()).toBe(false);
});
```

### **Common Test Issues & Solutions**
```bash
# Hanging tests (common with database connections)
npm run test:integration -- --detectOpenHandles

# Memory leaks in tests
npm run test:unit -- --logHeapUsage

# Slow tests identification
npm run test:coverage -- --verbose

# Mock not working properly
# Check: mock is declared before imports
# Check: jest.clearAllMocks() in beforeEach
# Check: TypeScript typing is correct
```

## **Continuous Integration**

### **CI/CD Pipeline Testing**
```yaml
# Example GitHub Actions integration
- name: Run tests
  run: |
    npm ci
    npm run test:coverage

- name: Upload coverage reports
  uses: codecov/codecov-action@v3
  with:
    file: ./coverage/lcov.info
```

### **Pre-commit Hooks**
```bash
# Setup pre-commit testing (recommended)
# In package.json scripts:
"pre-commit": "npm run test:unit && npm run test:integration"

# Husky integration example:
npx husky add .husky/pre-commit "npm run test:unit"
```

## **Test Maintenance & Evolution**

### **Adding Tests for New Features**
1. **Create test file** alongside source code or in `tests/unit/`
2. **Follow established patterns** from `src/utils/auth.test.ts`
3. **Use existing fixtures** from `tests/fixtures/`
4. **Apply proper mocking** patterns for dependencies
5. **Meet coverage thresholds** for the module

### **Updating Integration/E2E Tests**
1. **Update templates** in `tests/integration/` when APIs change
2. **Modify E2E workflows** in `tests/e2e/` for new user journeys
3. **Update test fixtures** for new data requirements
4. **Maintain database cleanup** utilities

### **Test Performance Optimization**
- **Parallel execution**: Jest runs tests in parallel by default
- **Test isolation**: Use proper setup/teardown for independence
- **Mock optimization**: Mock heavy dependencies appropriately
- **Database efficiency**: Use transaction rollbacks where possible

---

**Key References:**
- [Testing Standards](mdc:.cursor/rules/tests.mdc)
- [Git Workflow](mdc:.cursor/rules/git_workflow.mdc)
- [Development Workflow](mdc:.cursor/rules/dev_workflow.mdc)
- [Jest Configuration](mdc:jest.config.js)

# Testing Guidelines for Task Master CLI

*Note:* Never use asynchronous operations in tests. Always set up mocks properly, based on the way the tested functions are defined and used. Do not arbitrarily create tests; base them on the low-level details and execution of the underlying code being tested.

## Test Organization Structure

- **Unit Tests** (See [`architecture.mdc`](mdc:.cursor/rules/architecture.mdc) for module breakdown)

});
```

## Commander.js Command Testing Best Practices

When testing CLI commands built with Commander.js, several special considerations apply to avoid common pitfalls:

- **Direct Action Handler Testing**
  - ✅ **DO**: Test the command action handlers directly rather than trying to mock the entire Commander.js chain
  - ✅ **DO**: Create simplified test-specific implementations of command handlers that match the original behavior
  - ✅ **DO**: Explicitly handle all options, including defaults and shorthand flags (e.g., `-p` for `--prompt`)
  - ✅ **DO**: Include null/undefined checks in test implementations for parameters that might be optional
  - ✅ **DO**: Use fixtures from `tests/fixtures/` for consistent sample data across tests

```javascript
// ✅ DO: Create a simplified test version of the command handler
const testAddTaskAction = async (options) => {
  options = options || {}; // Ensure options aren't undefined

  // Validate parameters
  const isManualCreation = options.title && options.description;
  const prompt = options.prompt || options.p; // Handle shorthand flags

  if (!prompt && !isManualCreation) {
    throw new Error('Expected error message');
  }

  // Call the mocked task manager
  return mockTaskManager.addTask(/* parameters */);
};

test('should handle required parameters correctly', async () => {
  // Call the test implementation directly
  await expect(async () => {
    await testAddTaskAction({ file: 'tasks.json' });
  }).rejects.toThrow('Expected error message');
});
```

- **Commander Chain Mocking (If Necessary)**
  - ✅ **DO**: Mock ALL chainable methods (`option`, `argument`, `action`, `on`, etc.)
  - ✅ **DO**: Return `this` (or the mock object) from all chainable method mocks
  - ✅ **DO**: Remember to mock not only the initial object but also all objects returned by methods
  - ✅ **DO**: Implement a mechanism to capture the action handler for direct testing

```javascript
// If you must mock the Commander.js chain:
const mockCommand = {
  command: jest.fn().mockReturnThis(),
  description: jest.fn().mockReturnThis(),
  option: jest.fn().mockReturnThis(),
  argument: jest.fn().mockReturnThis(), // Don't forget this one
  action: jest.fn(fn => {
    actionHandler = fn; // Capture the handler for testing
    return mockCommand;
  }),
  on: jest.fn().mockReturnThis() // Don't forget this one
};
```

- **Parameter Handling**
  - ✅ **DO**: Check for both main flag and shorthand flags (e.g., `prompt` and `p`)
  - ✅ **DO**: Handle parameters like Commander would (comma-separated lists, etc.)
  - ✅ **DO**: Set proper default values as defined in the command
  - ✅ **DO**: Validate that required parameters are actually required in tests

```javascript
// Parse dependencies like Commander would
const dependencies = options.dependencies
  ? options.dependencies.split(',').map(id => id.trim())
  : [];
```

- **Environment and Session Handling**
  - ✅ **DO**: Properly mock session objects when required by functions
  - ✅ **DO**: Reset environment variables between tests if modified
  - ✅ **DO**: Use a consistent pattern for environment-dependent tests

```javascript
// Session parameter mock pattern
const sessionMock = { session: process.env };

// In test:
expect(mockAddTask).toHaveBeenCalledWith(
  expect.any(String),
  'Test prompt',
  [],
  'medium',
  sessionMock,
  false,
  null,
  null
);
```

- **Common Pitfalls to Avoid**
  - ❌ **DON'T**: Try to use the real action implementation without proper mocking
  - ❌ **DON'T**: Mock Commander partially - either mock it completely or test the action directly
  - ❌ **DON'T**: Forget to handle optional parameters that may be undefined
  - ❌ **DON'T**: Neglect to test shorthand flag functionality (e.g., `-p`, `-r`)
  - ❌ **DON'T**: Create circular dependencies in your test mocks
  - ❌ **DON'T**: Access variables before initialization in your test implementations
  - ❌ **DON'T**: Include actual command execution in unit tests
  - ❌ **DON'T**: Overwrite the same file path in multiple tests

```javascript
// ❌ DON'T: Create circular references in mocks
const badMock = {
  method: jest.fn().mockImplementation(() => badMock.method())
};

// ❌ DON'T: Access uninitialized variables
const badImplementation = () => {
  const result = uninitialized;
  let uninitialized = 'value';
  return result;
};
```
## Jest Module Mocking Best Practices
|
## Jest Module Mocking Best Practices
|
||||||
|
|
||||||
- **Mock Hoisting Behavior**
|
- **Mock Hoisting Behavior**
|
||||||
@@ -165,107 +283,97 @@ When testing ES modules (`"type": "module"` in package.json), traditional mockin
|
|||||||
- Imported functions may not use your mocked dependencies even with proper jest.mock() setup
|
- Imported functions may not use your mocked dependencies even with proper jest.mock() setup
|
||||||
- ES module exports are read-only properties (cannot be reassigned during tests)
|
- ES module exports are read-only properties (cannot be reassigned during tests)
|
||||||
|
|
||||||
- **Mocking Entire Modules**
|
- **Mocking Modules Statically Imported**
|
||||||
```javascript
|
- For modules imported with standard `import` statements at the top level:
|
||||||
// Mock the entire module with custom implementation
|
- Use `jest.mock('path/to/module', factory)` **before** any imports.
|
||||||
jest.mock('../../scripts/modules/task-manager.js', () => {
|
- Jest hoists these mocks.
|
||||||
// Get original implementation for functions you want to preserve
|
- Ensure the factory function returns the mocked structure correctly.
|
||||||
const originalModule = jest.requireActual('../../scripts/modules/task-manager.js');
|
|
||||||
|
|
||||||
// Return mix of original and mocked functionality
|
- **Mocking Dependencies for Dynamically Imported Modules**
|
||||||
return {
|
- **Problem**: Standard `jest.mock()` often fails for dependencies of modules loaded later using dynamic `import('path/to/module')`. The mocks aren't applied correctly when the dynamic import resolves.
|
||||||
...originalModule,
|
- **Solution**: Use `jest.unstable_mockModule(modulePath, factory)` **before** the dynamic `import()` call.
|
||||||
generateTaskFiles: jest.fn() // Replace specific functions
|
```javascript
|
||||||
};
|
// 1. Define mock function instances
|
||||||
|
const mockExistsSync = jest.fn();
|
||||||
|
const mockReadFileSync = jest.fn();
|
||||||
|
// ... other mocks
|
||||||
|
|
||||||
|
// 2. Mock the dependency module *before* the dynamic import
|
||||||
|
jest.unstable_mockModule('fs', () => ({
|
||||||
|
__esModule: true, // Important for ES module mocks
|
||||||
|
// Mock named exports
|
||||||
|
existsSync: mockExistsSync,
|
||||||
|
readFileSync: mockReadFileSync,
|
||||||
|
// Mock default export if necessary
|
||||||
|
// default: { ... }
|
||||||
|
}));
|
||||||
|
|
||||||
|
// 3. Dynamically import the module under test (e.g., in beforeAll or test case)
|
||||||
|
let moduleUnderTest;
|
||||||
|
beforeAll(async () => {
|
||||||
|
// Ensure mocks are reset if needed before import
|
||||||
|
mockExistsSync.mockReset();
|
||||||
|
mockReadFileSync.mockReset();
|
||||||
|
// ... reset other mocks ...
|
||||||
|
|
||||||
|
// Import *after* unstable_mockModule is called
|
||||||
|
moduleUnderTest = await import('../../scripts/modules/module-using-fs.js');
|
||||||
});
|
});
|
||||||
|
|
||||||
// Import after mocks
|
// 4. Now tests can use moduleUnderTest, and its 'fs' calls will hit the mocks
|
||||||
import * as taskManager from '../../scripts/modules/task-manager.js';
|
test('should use mocked fs.readFileSync', () => {
|
||||||
|
mockReadFileSync.mockReturnValue('mock data');
|
||||||
|
moduleUnderTest.readFileAndProcess();
|
||||||
|
expect(mockReadFileSync).toHaveBeenCalled();
|
||||||
|
// ... other assertions
|
||||||
|
});
|
||||||
|
```
|
||||||
|
- ✅ **DO**: Call `jest.unstable_mockModule()` before `await import()`.
|
||||||
|
- ✅ **DO**: Include `__esModule: true` in the mock factory for ES modules.
|
||||||
|
- ✅ **DO**: Mock named and default exports as needed within the factory.
|
||||||
|
- ✅ **DO**: Reset mock functions (`mockFn.mockReset()`) before the dynamic import if they might have been called previously.
|
||||||
|
|
||||||
// Now you can use the mock directly
|
- **Mocking Entire Modules (Static Import)**
|
||||||
const { generateTaskFiles } = taskManager;
|
```javascript
|
||||||
|
// Mock the entire module with custom implementation for static imports
|
||||||
|
// ... (existing example remains valid) ...
|
||||||
```
|
```
|
||||||
|
|
||||||
- **Direct Implementation Testing**
|
- **Direct Implementation Testing**
|
||||||
- Instead of calling the actual function which may have module-scope reference issues:
|
- Instead of calling the actual function which may have module-scope reference issues:
|
||||||
```javascript
|
```javascript
|
||||||
test('should perform expected actions', () => {
|
// ... (existing example remains valid) ...
|
||||||
// Setup mocks for this specific test
|
|
||||||
mockReadJSON.mockImplementationOnce(() => sampleData);
|
|
||||||
|
|
||||||
// Manually simulate the function's behavior
|
|
||||||
const data = mockReadJSON('path/file.json');
|
|
||||||
mockValidateAndFixDependencies(data, 'path/file.json');
|
|
||||||
|
|
||||||
// Skip calling the actual function and verify mocks directly
|
|
||||||
expect(mockReadJSON).toHaveBeenCalledWith('path/file.json');
|
|
||||||
expect(mockValidateAndFixDependencies).toHaveBeenCalledWith(data, 'path/file.json');
|
|
||||||
});
|
|
||||||
```
|
```
|
||||||
|
|
||||||
- **Avoiding Module Property Assignment**
|
- **Avoiding Module Property Assignment**
|
||||||
```javascript
|
```javascript
|
||||||
// ❌ DON'T: This causes "Cannot assign to read only property" errors
|
// ... (existing example remains valid) ...
|
||||||
const utils = await import('../../scripts/modules/utils.js');
|
|
||||||
utils.readJSON = mockReadJSON; // Error: read-only property
|
|
||||||
|
|
||||||
// ✅ DO: Use the module factory pattern in jest.mock()
|
|
||||||
jest.mock('../../scripts/modules/utils.js', () => ({
|
|
||||||
readJSON: mockReadJSONFunc,
|
|
||||||
writeJSON: mockWriteJSONFunc
|
|
||||||
}));
|
|
||||||
```
|
```
|
||||||
|
|
||||||
- **Handling Mock Verification Failures**

  - If a verification like `expect(mockFn).toHaveBeenCalled()` fails:
    1. Check that your mock setup (`jest.mock` or `jest.unstable_mockModule`) is correctly placed **before** imports (static or dynamic).
    2. Ensure you're using the right mock instance and that it is properly passed to the module.
    3. Verify your test invokes behavior that *should* call the mock.
    4. Use `jest.clearAllMocks()` or a specific `mockFn.mockReset()` in `beforeEach` to prevent state leakage between tests.
    5. **Check Console Assertions**: If verifying `console.log`, `console.warn`, or `console.error` calls, ensure your assertion matches the *actual* arguments passed. If the code logs a single formatted string, assert against that single string (using `expect.stringContaining` or an exact match), not multiple `expect.stringContaining` arguments.

       ```javascript
       // Example: the code logs console.error(`Error: ${message}. Details: ${details}`)

       // ❌ DON'T: Assert multiple arguments if only one is logged
       // expect(console.error).toHaveBeenCalledWith(
       //   expect.stringContaining('Error:'),
       //   expect.stringContaining('Details:')
       // );

       // ✅ DO: Assert the single string argument
       expect(console.error).toHaveBeenCalledWith(
         expect.stringContaining('Error: Specific message. Details: More details')
       );
       // or, for an exact match:
       expect(console.error).toHaveBeenCalledWith(
         'Error: Specific message. Details: More details'
       );
       ```

    6. Consider implementing a simpler test that *only* verifies the mock behavior in isolation.

- **Full Example Pattern**

  ```javascript
  // 1. Define mock implementations
  const mockReadJSON = jest.fn();
  const mockValidateAndFixDependencies = jest.fn();

  // 2. Mock modules
  jest.mock('../../scripts/modules/utils.js', () => ({
    readJSON: mockReadJSON,
    // Include other functions as needed
  }));

  jest.mock('../../scripts/modules/dependency-manager.js', () => ({
    validateAndFixDependencies: mockValidateAndFixDependencies
  }));

  // 3. Import after mocks
  import * as taskManager from '../../scripts/modules/task-manager.js';

  describe('generateTaskFiles function', () => {
    beforeEach(() => {
      jest.clearAllMocks();
    });

    test('should generate task files', () => {
      // 4. Setup test-specific mock behavior
      const sampleData = { tasks: [{ id: 1, title: 'Test' }] };
      mockReadJSON.mockReturnValueOnce(sampleData);

      // 5. Create a direct implementation test
      // Instead of calling: taskManager.generateTaskFiles('path', 'dir')

      // Simulate reading data
      const data = mockReadJSON('path');
      expect(mockReadJSON).toHaveBeenCalledWith('path');

      // Simulate other operations the function would perform
      mockValidateAndFixDependencies(data, 'path');
      expect(mockValidateAndFixDependencies).toHaveBeenCalledWith(data, 'path');
    });
  });
  ```
## Mocking Guidelines
## Testing AI Service Integrations

- **DO NOT import real AI service clients**
  - ❌ DON'T: Import actual AI clients from their libraries
  - ✅ DO: Create fully mocked versions that return predictable responses

  ```javascript
  // ❌ DON'T: Import and instantiate real AI clients
  import { Anthropic } from '@anthropic-ai/sdk';
  const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

  // ✅ DO: Mock the entire module with controlled behavior
  jest.mock('@anthropic-ai/sdk', () => ({
    Anthropic: jest.fn().mockImplementation(() => ({
      messages: {
        create: jest.fn().mockResolvedValue({
          content: [{ type: 'text', text: 'Mocked AI response' }]
        })
      }
    }))
  }));
  ```

- **DO NOT rely on environment variables for API keys**
  - ❌ DON'T: Assume environment variables are set in tests
  - ✅ DO: Set mock environment variables in test setup

  ```javascript
  // In tests/setup.js or at the top of the test file
  process.env.ANTHROPIC_API_KEY = 'test-mock-api-key-for-tests';
  process.env.PERPLEXITY_API_KEY = 'test-mock-perplexity-key-for-tests';
  ```

- **DO NOT use real AI client initialization logic**
  - ❌ DON'T: Use code that attempts to initialize or validate real AI clients
  - ✅ DO: Create test-specific paths that bypass client initialization

  ```javascript
  // ❌ DON'T: Test functions that require valid AI client initialization
  // This will fail without proper API keys or network access
  test('should use AI client', async () => {
    const result = await functionThatInitializesAIClient();
    expect(result).toBeDefined();
  });

  // ✅ DO: Test with bypassed initialization or manual task paths
  test('should handle manual task creation without AI', () => {
    // Using a path that doesn't require AI client initialization
    const result = addTaskDirect({
      title: 'Manual Task',
      description: 'Test Description'
    }, mockLogger);

    expect(result.success).toBe(true);
  });
  ```

## Testing Asynchronous Code

- **DO NOT rely on real asynchronous operations in tests**
  - ❌ DON'T: Use real async/await or Promise resolution in tests
  - ✅ DO: Make all mocks return synchronous values when possible

  ```javascript
  // ❌ DON'T: Use real async functions that might fail unpredictably
  test('should handle async operation', async () => {
    const result = await realAsyncFunction(); // Can time out or fail for external reasons
    expect(result).toBe(expectedValue);
  });

  // ✅ DO: Make async operations synchronous in tests
  test('should handle operation', () => {
    mockAsyncFunction.mockReturnValue({ success: true, data: 'test' });
    const result = functionUnderTest();
    expect(result).toEqual({ success: true, data: 'test' });
  });
  ```

- **DO NOT test exact error messages**
  - ❌ DON'T: Assert on exact error message text that might change
  - ✅ DO: Test for error presence and general properties

  ```javascript
  // ❌ DON'T: Test for exact error message text
  expect(result.error).toBe('Could not connect to API: Network error');

  // ✅ DO: Test for general error properties or message patterns
  expect(result.success).toBe(false);
  expect(result.error).toContain('Could not connect');
  // Or even better:
  expect(result).toMatchObject({
    success: false,
    error: expect.stringContaining('connect')
  });
  ```
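The synchronous-mock idea above can also be demonstrated outside Jest. The sketch below uses a hand-rolled stub and dependency injection (`mockFetchTasks`, `loadSummary`, and the injection style are illustrative assumptions, not part of the codebase):

```javascript
// Hand-rolled synchronous stub standing in for an async dependency
const mockFetchTasks = () => ({ success: true, data: ['task-1'] });

// The function under test receives its dependency via injection,
// so the test path never awaits anything and cannot time out
function loadSummary(fetchTasks = mockFetchTasks) {
  const result = fetchTasks();
  return result.success ? `${result.data.length} task(s)` : 'error';
}

console.log(loadSummary()); // '1 task(s)'
console.log(loadSummary(() => ({ success: false }))); // 'error'
```

Because the stub is synchronous, assertions run deterministically with no event-loop involvement.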
## Reliable Testing Techniques

- **Create Simplified Test Functions**
  ```javascript
  // Complex function with file operations (hard to test)
  const setTaskStatus = async (taskId, newStatus) => {
    const tasksPath = 'tasks/tasks.json';
    const data = await readJSON(tasksPath);
    // [implementation]
    await writeJSON(tasksPath, data);
    return { success: true };
  };

  // Test-friendly version (easier to test)
  const updateTaskStatus = (tasks, taskId, newStatus) => {
    // Pure logic without side effects
    const updatedTasks = [...tasks];
    const taskIndex = findTaskById(updatedTasks, taskId);
    if (taskIndex === -1) return { success: false, error: 'Task not found' };
    updatedTasks[taskIndex].status = newStatus;
    return { success: true, tasks: updatedTasks };
  };
  ```
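A pure status-update helper in the spirit of the test-friendly version above can be verified with plain assertions and no mocking. This is a minimal sketch; `findTaskById` is a hypothetical helper assumed to return the task's index (or -1):

```javascript
// Hypothetical helper: returns the matching task's index, or -1
const findTaskById = (tasks, taskId) =>
  tasks.findIndex((t) => String(t.id) === String(taskId));

const updateTaskStatus = (tasks, taskId, newStatus) => {
  const updatedTasks = tasks.map((t) => ({ ...t })); // copy so the input stays untouched
  const taskIndex = findTaskById(updatedTasks, taskId);
  if (taskIndex === -1) return { success: false, error: 'Task not found' };
  updatedTasks[taskIndex].status = newStatus;
  return { success: true, tasks: updatedTasks };
};

// Because the function is pure, assertions need no mocks or file I/O:
const sample = [{ id: 1, status: 'pending' }, { id: 2, status: 'pending' }];
const result = updateTaskStatus(sample, '2', 'done');
console.log(result.success); // true
console.log(result.tasks[1].status); // 'done'
console.log(sample[1].status); // 'pending' — the input is not mutated
```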
- **Avoid Real File System Operations**
  - Never write to real files during tests
  - Create test-specific versions of file operation functions
  - Mock all file system operations, including read, write, exists, etc.
  - Verify function behavior using the in-memory data structures

  ```javascript
  // Mock file operations
  const mockReadJSON = jest.fn();
  const mockWriteJSON = jest.fn();

  jest.mock('../../scripts/modules/utils.js', () => ({
    readJSON: mockReadJSON,
    writeJSON: mockWriteJSON,
  }));

  test('should update task status correctly', () => {
    // Setup mock data
    const testData = JSON.parse(JSON.stringify(sampleTasks));
    mockReadJSON.mockReturnValue(testData);

    // Call the function that would normally modify files
    const result = testSetTaskStatus(testData, '1', 'done');

    // Assert on the in-memory data structure
    expect(result.tasks[0].status).toBe('done');
  });
  ```

- **Data Isolation Between Tests**
  - Always create fresh copies of test data for each test
  - Use `JSON.parse(JSON.stringify(original))` for deep cloning
  - Reset all mocks before each test with `jest.clearAllMocks()`
  - Avoid state that persists between tests

  ```javascript
  beforeEach(() => {
    jest.clearAllMocks();
    // Deep clone the test data
    testTasksData = JSON.parse(JSON.stringify(sampleTasks));
  });
  ```

- **Test All Path Variations**
  - Regular tasks and subtasks
  - Single items and multiple items
  - Success paths and error paths
  - Edge cases (empty data, invalid inputs, etc.)

  ```javascript
  // Multiple test cases covering different scenarios
  test('should update regular task status', () => {
    /* test implementation */
  });

  test('should update subtask status', () => {
    /* test implementation */
  });

  test('should update multiple tasks when given comma-separated IDs', () => {
    /* test implementation */
  });

  test('should throw error for non-existent task ID', () => {
    /* test implementation */
  });
  ```

- **Stabilize Tests With Predictable Input/Output**
  - Use consistent, predictable test fixtures
  - Avoid random values or time-dependent data
  - Make tests deterministic for reliable CI/CD
  - Control all variables that might affect test outcomes

  ```javascript
  // Use a specific known date instead of the current date
  const fixedDate = new Date('2023-01-01T12:00:00Z');
  jest.spyOn(global, 'Date').mockImplementation(() => fixedDate);
  ```

See [tests/README.md](mdc:tests/README.md) for more details on the testing approach.

Refer to [jest.config.js](mdc:jest.config.js) for Jest configuration options.
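The deep-clone isolation rule above can be demonstrated without Jest. A minimal sketch in plain Node, with a hypothetical `sampleTasks` fixture:

```javascript
// Hypothetical shared fixture
const sampleTasks = { tasks: [{ id: 1, status: 'pending' }] };

// A fresh deep copy per test prevents cross-test mutation leaks
const cloneFixture = () => JSON.parse(JSON.stringify(sampleTasks));

const testA = cloneFixture();
testA.tasks[0].status = 'done'; // mutate freely inside "test A"

const testB = cloneFixture(); // "test B" still sees pristine data
console.log(testB.tasks[0].status); // 'pending'
console.log(sampleTasks.tasks[0].status); // 'pending' — the fixture itself is untouched
```

A shallow copy (`{ ...sampleTasks }`) would not give this guarantee, because the nested `tasks` array would still be shared.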
## Variable Hoisting and Module Initialization Issues

When testing ES modules or working with complex module imports, you may encounter variable hoisting and initialization issues. These can be particularly tricky to debug and often appear as "Cannot access 'X' before initialization" errors.

- **Understanding Module Initialization Order**
  - ✅ **DO**: Declare and initialize global variables at the top of modules
  - ✅ **DO**: Use proper function declarations to avoid hoisting issues
  - ✅ **DO**: Initialize variables before they are referenced, especially in imported modules
  - ✅ **DO**: Be aware that imports are hoisted to the top of the file

  ```javascript
  // ✅ DO: Define global state variables at the top of the module
  let silentMode = false; // Declare and initialize first

  const CONFIG = { /* configuration */ };

  function isSilentMode() {
    return silentMode; // Reference the variable after it's initialized
  }

  function log(level, message) {
    if (isSilentMode()) return; // Use the function instead of accessing the variable directly
    // ...
  }
  ```

- **Testing Modules with Initialization-Dependent Functions**
  - ✅ **DO**: Create test-specific implementations that initialize all variables correctly
  - ✅ **DO**: Use factory functions in mocks to ensure proper initialization order
  - ✅ **DO**: Be careful with how you mock or stub functions that depend on module state

  ```javascript
  // ✅ DO: Test-specific implementation that avoids initialization issues
  const testLog = (level, ...args) => {
    // Local implementation with proper initialization
    const isSilent = false; // Explicit initialization
    if (isSilent) return;
    // Test implementation...
  };
  ```

- **Common Hoisting-Related Errors to Avoid**
  - ❌ **DON'T**: Reference variables before their declaration in module scope
  - ❌ **DON'T**: Create circular dependencies between modules
  - ❌ **DON'T**: Rely on variable initialization order across module boundaries
  - ❌ **DON'T**: Define functions that use hoisted variables before they're initialized

  ```javascript
  // ❌ DON'T: Create reference-before-initialization patterns
  function badFunction() {
    if (silentMode) { /* ... */ } // ReferenceError if called before silentMode is initialized
  }

  let silentMode = false;

  // ❌ DON'T: Create cross-module references that depend on initialization order
  // module-a.js
  import { getSetting } from './module-b.js';
  export const config = { value: getSetting() };

  // module-b.js
  import { config } from './module-a.js';
  export function getSetting() {
    return config.value; // Circular dependency causing initialization issues
  }
  ```

- **Dynamic Imports as a Solution**
  - ✅ **DO**: Use dynamic imports (`import()`) to avoid initialization order issues
  - ✅ **DO**: Structure modules to avoid circular dependencies that cause initialization issues
  - ✅ **DO**: Consider factory functions for modules with complex state

  ```javascript
  // ✅ DO: Use dynamic imports to avoid initialization issues
  async function getTaskManager() {
    return import('./task-manager.js');
  }

  async function someFunction() {
    const taskManager = await getTaskManager();
    return taskManager.someMethod();
  }
  ```

- **Testing Approach for Modules with Initialization Issues**
  - ✅ **DO**: Create self-contained test implementations rather than using real implementations
  - ✅ **DO**: Mock dependencies at module boundaries instead of trying to mock deep dependencies
  - ✅ **DO**: Isolate module-specific state in tests

  ```javascript
  // ✅ DO: Create an isolated test implementation instead of reusing module code
  test('should log messages when not in silent mode', () => {
    // Local test implementation instead of importing from the module
    const testLog = (level, message) => {
      if (false) return; // Always non-silent for this test
      mockConsole(level, message);
    };

    testLog('info', 'test message');
    expect(mockConsole).toHaveBeenCalledWith('info', 'test message');
  });
  ```
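The "reference before initialization" failure mode above can be reproduced in a few lines of plain Node. This sketch (illustrative names) shows the temporal dead zone for a top-level `let` binding:

```javascript
// Function declarations are hoisted, so defining this first is legal...
function readFlag() {
  return silentMode;
}

let tdzError = null;
try {
  readFlag(); // ...but calling it before `silentMode` is initialized throws (TDZ)
} catch (e) {
  tdzError = e;
}
console.log(tdzError instanceof ReferenceError); // true

let silentMode = false;
console.log(readFlag()); // false — safe once initialization has run
```

This is exactly why the guidelines say to declare and initialize module state at the top: the same function is safe or broken depending purely on when it is first called relative to initialization.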
## Enhanced Display Patterns
### **Token Breakdown Display**

- Use detailed, granular token breakdowns for AI-powered commands
- Display context sources with individual token counts
- Show both token count and character count for transparency

```javascript
// ✅ DO: Display a detailed token breakdown
function displayDetailedTokenBreakdown(tokenBreakdown, systemTokens, userTokens) {
  const sections = [];

  if (tokenBreakdown.tasks?.length > 0) {
    const taskDetails = tokenBreakdown.tasks.map(task =>
      `${task.type === 'subtask' ? '  ' : ''}${task.id}: ${task.tokens.toLocaleString()}`
    ).join('\n');
    sections.push(`Tasks (${tokenBreakdown.tasks.reduce((sum, t) => sum + t.tokens, 0).toLocaleString()}):\n${taskDetails}`);
  }

  const content = sections.join('\n\n');
  console.log(boxen(content, {
    title: chalk.cyan('Token Usage'),
    padding: { top: 1, bottom: 1, left: 2, right: 2 },
    borderStyle: 'round',
    borderColor: 'cyan'
  }));
}
```

### **Code Block Syntax Highlighting**

- Use the `cli-highlight` library for syntax highlighting in terminal output
- Process code blocks in AI responses for better readability

```javascript
// ✅ DO: Enhance code blocks with syntax highlighting
import { highlight } from 'cli-highlight';

function processCodeBlocks(text) {
  return text.replace(/```(\w+)?\n([\s\S]*?)```/g, (match, language, code) => {
    try {
      const highlighted = highlight(code.trim(), {
        language: language || 'javascript',
        theme: 'default'
      });
      return `\n${highlighted}\n`;
    } catch (error) {
      return `\n${code.trim()}\n`;
    }
  });
}
```

### **Multi-Section Result Display**

- Use separate boxes for headers, content, and metadata
- Maintain consistent styling across different result types

```javascript
// ✅ DO: Use a structured result display
function displayResults(result, query, detailLevel) {
  // Header with query info
  const header = boxen(
    chalk.green.bold('Research Results') + '\n\n' +
    chalk.gray('Query: ') + chalk.white(query) + '\n' +
    chalk.gray('Detail Level: ') + chalk.cyan(detailLevel),
    {
      padding: { top: 1, bottom: 1, left: 2, right: 2 },
      margin: { top: 1, bottom: 0 },
      borderStyle: 'round',
      borderColor: 'green'
    }
  );
  console.log(header);

  // Process and display the main content
  const processedResult = processCodeBlocks(result);
  const contentBox = boxen(processedResult, {
    padding: { top: 1, bottom: 1, left: 2, right: 2 },
    margin: { top: 0, bottom: 1 },
    borderStyle: 'single',
    borderColor: 'gray'
  });
  console.log(contentBox);

  console.log(chalk.green('✓ Operation complete'));
}
```

Refer to [`ui.js`](mdc:scripts/modules/ui.js) for implementation examples, [`context_gathering.mdc`](mdc:.cursor/rules/context_gathering.mdc) for context display patterns, and [`new_features.mdc`](mdc:.cursor/rules/new_features.mdc) for integration guidelines.
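The code-fence regex used by `processCodeBlocks` can be exercised without `cli-highlight`. In this sketch the highlighting step is stubbed out (the `<hl>` wrapper is purely illustrative) so the transformation runs standalone:

```javascript
// Stub in place of cli-highlight's highlight() so the sketch runs standalone
const highlight = (code) => `<hl>${code}</hl>`;

function processCodeBlocks(text) {
  return text.replace(/```(\w+)?\n([\s\S]*?)```/g, (match, language, code) => {
    try {
      return `\n${highlight(code.trim(), { language: language || 'javascript' })}\n`;
    } catch (error) {
      return `\n${code.trim()}\n`; // fall back to the raw code on failure
    }
  });
}

const input = 'Before\n```js\nconst x = 1;\n```\nAfter';
console.log(processCodeBlocks(input));
// → 'Before\n\n<hl>const x = 1;</hl>\n\nAfter'
```

Note that the lazy `([\s\S]*?)` group is what keeps multiple fences in one response from being merged into a single match.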
**`.env.example`**
```bash
# API Keys (Required for using in any role i.e. main/research/fallback -- see `task-master models`)
ANTHROPIC_API_KEY=YOUR_ANTHROPIC_KEY_HERE
PERPLEXITY_API_KEY=YOUR_PERPLEXITY_KEY_HERE
OPENAI_API_KEY=YOUR_OPENAI_KEY_HERE
GOOGLE_API_KEY=YOUR_GOOGLE_KEY_HERE
MISTRAL_API_KEY=YOUR_MISTRAL_KEY_HERE
GROQ_API_KEY=YOUR_GROQ_KEY_HERE
OPENROUTER_API_KEY=YOUR_OPENROUTER_KEY_HERE
XAI_API_KEY=YOUR_XAI_KEY_HERE
AZURE_OPENAI_API_KEY=YOUR_AZURE_KEY_HERE
OLLAMA_API_KEY=YOUR_OLLAMA_API_KEY_HERE

# Google Vertex AI Configuration
VERTEX_PROJECT_ID=your-gcp-project-id
VERTEX_LOCATION=us-central1
# Optional: Path to service account credentials JSON file (alternative to API key)
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-credentials.json
```
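A small Node sketch (a hypothetical helper, not part of the repo) can check which of the keys above are actually configured, treating the `YOUR_…` placeholders as unset:

```javascript
// Hypothetical helper: report which provider keys from .env are still missing.
const REQUIRED_KEYS = [
  'ANTHROPIC_API_KEY',
  'PERPLEXITY_API_KEY',
  'OPENAI_API_KEY'
];

function missingKeys(env = process.env) {
  return REQUIRED_KEYS.filter(
    (k) => !env[k] || env[k].startsWith('YOUR_') // placeholder values count as unset
  );
}

console.log(missingKeys({ ANTHROPIC_API_KEY: 'sk-ant-xxx' }));
// → ['PERPLEXITY_API_KEY', 'OPENAI_API_KEY']
```

The set of required keys depends on which roles are configured via `task-master models`; the three listed here are only an example.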
**`.github/ISSUE_TEMPLATE/bug_report.md`** (new file)
```markdown
---
name: Bug report
about: Create a report to help us improve
title: 'bug: '
labels: bug
assignees: ''
---

### Description

Detailed description of the problem, including steps to reproduce the issue.

### Steps to Reproduce

1. Step-by-step instructions to reproduce the issue
2. Include command examples or UI interactions

### Expected Behavior

Describe clearly what the expected outcome or behavior should be.

### Actual Behavior

Describe clearly what the actual outcome or behavior is.

### Screenshots or Logs

Provide screenshots, logs, or error messages if applicable.

### Environment

- Task Master version:
- Node.js version:
- Operating system:
- IDE (if applicable):

### Additional Context

Any additional information or context that might help diagnose the issue.
```
**`.github/ISSUE_TEMPLATE/enhancements---feature-requests.md`** (new file)
````markdown
---
name: Enhancements & feature requests
about: Suggest an idea for this project
title: 'feat: '
labels: enhancement
assignees: ''
---

> "Direct quote or clear summary of user request or need or user story."

### Motivation

Detailed explanation of why this feature is important. Describe the problem it solves or the benefit it provides.

### Proposed Solution

Clearly describe the proposed feature, including:

- High-level overview of the feature
- Relevant technologies or integrations
- How it fits into the existing workflow or architecture

### High-Level Workflow

1. Step-by-step description of how the feature will be implemented
2. Include necessary intermediate milestones

### Key Elements

- Bullet-point list of technical or UX/UI enhancements
- Mention specific integrations or APIs
- Highlight changes needed in existing data models or commands

### Example Workflow

Provide a clear, concrete example demonstrating the feature:

```shell
$ task-master [action]
→ Expected response/output
```

### Implementation Considerations

- Dependencies on external components or APIs
- Backward compatibility requirements
- Potential performance impacts or resource usage

### Out of Scope (Future Considerations)

Clearly list any features or improvements not included but relevant for future iterations.
````
**`.github/ISSUE_TEMPLATE/feedback.md`** (new file)
```markdown
---
name: Feedback
about: Give us specific feedback on the product/approach/tech
title: 'feedback: '
labels: feedback
assignees: ''
---

### Feedback Summary

Provide a clear summary or direct quote from user feedback.

### User Context

Explain the user's context or scenario in which this feedback was provided.

### User Impact

Describe how this feedback affects the user experience or workflow.

### Suggestions

Provide any initial thoughts, potential solutions, or improvements based on the feedback.

### Relevant Screenshots or Examples

Attach screenshots, logs, or examples that illustrate the feedback.

### Additional Notes

Any additional context or related information.
```
**`.github/PULL_REQUEST_TEMPLATE.md`** (new file)
````markdown
# What type of PR is this?
<!-- Check one -->

- [ ] 🐛 Bug fix
- [ ] ✨ Feature
- [ ] 🔌 Integration
- [ ] 📝 Docs
- [ ] 🧹 Refactor
- [ ] Other:

## Description
<!-- What does this PR do? -->

## Related Issues
<!-- Link issues: Fixes #123 -->

## How to Test This
<!-- Quick steps to verify the changes work -->

```bash
# Example commands or steps
```

**Expected result:**
<!-- What should happen? -->

## Contributor Checklist

- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check` (or `npm run format` to fix)
- [ ] Addressed CodeRabbit comments (if any)
- [ ] Linked related issues (if any)
- [ ] Manually tested the changes

## Changelog Entry
<!-- One line describing the change for users -->
<!-- Example: "Added Kiro IDE integration with automatic task status updates" -->

---

### For Maintainers

- [ ] PR title follows conventional commits
- [ ] Target branch correct
- [ ] Labels added
- [ ] Milestone assigned (if applicable)
````
**`.github/PULL_REQUEST_TEMPLATE/bugfix.md`** (new file)
````markdown
## 🐛 Bug Fix

### 🔍 Bug Description
<!-- Describe the bug -->

### 🔗 Related Issues
<!-- Fixes #123 -->

### ✨ Solution
<!-- How does this PR fix the bug? -->

## How to Test

### Steps that caused the bug:
1.
2.

**Before fix:**
**After fix:**

### Quick verification:
```bash
# Commands to verify the fix
```

## Contributor Checklist
- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check`
- [ ] Addressed CodeRabbit comments
- [ ] Added unit tests (if applicable)
- [ ] Manually verified the fix works

---

### For Maintainers
- [ ] Root cause identified
- [ ] Fix doesn't introduce new issues
- [ ] CI passes
````
**`.github/PULL_REQUEST_TEMPLATE/config.yml`** (new file)
```yaml
blank_issues_enabled: false
contact_links:
  - name: 🐛 Bug Fix
    url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=bugfix.md
    about: Fix a bug in Task Master
  - name: ✨ New Feature
    url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=feature.md
    about: Add a new feature to Task Master
  - name: 🔌 New Integration
    url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=integration.md
    about: Add support for a new tool, IDE, or platform
```
49 .github/PULL_REQUEST_TEMPLATE/feature.md vendored Normal file
@@ -0,0 +1,49 @@
## ✨ New Feature

### 📋 Feature Description

<!-- Brief description -->

### 🎯 Problem Statement

<!-- What problem does this feature solve? Why is it needed? -->

### 💡 Solution

<!-- How does this feature solve the problem? What's the approach? -->

### 🔗 Related Issues

<!-- Link related issues: Fixes #123, Part of #456 -->

## How to Use It

### Quick Start

```bash
# Basic usage example
```

### Example

<!-- Show a real use case -->

```bash
# Practical example
```

**What you should see:**

<!-- Expected behavior -->

## Contributor Checklist

- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check`
- [ ] Addressed CodeRabbit comments
- [ ] Added tests for new functionality
- [ ] Manually tested in CLI mode
- [ ] Manually tested in MCP mode (if applicable)

## Changelog Entry

<!-- One-liner for release notes -->

---

### For Maintainers

- [ ] Feature aligns with project vision
- [ ] CI passes
- [ ] Changeset file exists
53 .github/PULL_REQUEST_TEMPLATE/integration.md vendored Normal file
@@ -0,0 +1,53 @@
# 🔌 New Integration

## What tool/IDE is being integrated?

<!-- Name and brief description -->

## What can users do with it?

<!-- Key benefits -->

## How to Enable

### Setup

```bash
task-master rules add [name]
# Any other setup steps
```

### Example Usage

<!-- Show it in action -->

```bash
# Real example
```

### Natural Language Hooks (if applicable)

```
"When tests pass, mark task as done"
# Other examples
```

## Contributor Checklist

- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check`
- [ ] Addressed CodeRabbit comments
- [ ] Integration fully tested with target tool/IDE
- [ ] Error scenarios tested
- [ ] Added integration tests
- [ ] Documentation includes setup guide
- [ ] Examples are working and clear

---

## For Maintainers

- [ ] Integration stability verified
- [ ] Documentation comprehensive
- [ ] Examples working
259 .github/scripts/auto-close-duplicates.mjs vendored Normal file
@@ -0,0 +1,259 @@
#!/usr/bin/env node

async function githubRequest(endpoint, token, method = 'GET', body) {
	const response = await fetch(`https://api.github.com${endpoint}`, {
		method,
		headers: {
			Authorization: `Bearer ${token}`,
			Accept: 'application/vnd.github.v3+json',
			'User-Agent': 'auto-close-duplicates-script',
			...(body && { 'Content-Type': 'application/json' })
		},
		...(body && { body: JSON.stringify(body) })
	});

	if (!response.ok) {
		throw new Error(
			`GitHub API request failed: ${response.status} ${response.statusText}`
		);
	}

	return response.json();
}

function extractDuplicateIssueNumber(commentBody) {
	const match = commentBody.match(/#(\d+)/);
	return match ? parseInt(match[1], 10) : null;
}

async function closeIssueAsDuplicate(
	owner,
	repo,
	issueNumber,
	duplicateOfNumber,
	token
) {
	await githubRequest(
		`/repos/${owner}/${repo}/issues/${issueNumber}`,
		token,
		'PATCH',
		{
			state: 'closed',
			state_reason: 'not_planned',
			labels: ['duplicate']
		}
	);

	await githubRequest(
		`/repos/${owner}/${repo}/issues/${issueNumber}/comments`,
		token,
		'POST',
		{
			body: `This issue has been automatically closed as a duplicate of #${duplicateOfNumber}.

If this is incorrect, please re-open this issue or create a new one.

🤖 Generated with [Task Master Bot]`
		}
	);
}

async function autoCloseDuplicates() {
	console.log('[DEBUG] Starting auto-close duplicates script');

	const token = process.env.GITHUB_TOKEN;
	if (!token) {
		throw new Error('GITHUB_TOKEN environment variable is required');
	}
	console.log('[DEBUG] GitHub token found');

	const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
	const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
	console.log(`[DEBUG] Repository: ${owner}/${repo}`);

	const threeDaysAgo = new Date();
	threeDaysAgo.setDate(threeDaysAgo.getDate() - 3);
	console.log(
		`[DEBUG] Checking for duplicate comments older than: ${threeDaysAgo.toISOString()}`
	);

	console.log('[DEBUG] Fetching open issues created more than 3 days ago...');
	const allIssues = [];
	let page = 1;
	const perPage = 100;

	const MAX_PAGES = 50; // Increase limit for larger repos

	while (true) {
		// Sort oldest-first so that once we hit an issue newer than the cutoff,
		// every remaining page is newer too and we can stop paginating
		const pageIssues = await githubRequest(
			`/repos/${owner}/${repo}/issues?state=open&per_page=${perPage}&page=${page}&sort=created&direction=asc`,
			token
		);

		if (pageIssues.length === 0) break;

		// The issues endpoint also returns pull requests; skip those, and keep
		// only issues created more than 3 days ago
		const oldEnoughIssues = pageIssues.filter(
			(issue) =>
				!issue.pull_request && new Date(issue.created_at) <= threeDaysAgo
		);

		allIssues.push(...oldEnoughIssues);

		// If this page contains an issue newer than the cutoff, all later
		// pages are newer still — nothing more to collect
		if (pageIssues.some((issue) => new Date(issue.created_at) > threeDaysAgo)) {
			break;
		}

		page++;

		// Safety limit to avoid infinite loops
		if (page > MAX_PAGES) {
			console.log(`[WARNING] Reached maximum page limit of ${MAX_PAGES}`);
			break;
		}
	}

	const issues = allIssues;
	console.log(`[DEBUG] Found ${issues.length} open issues`);

	let processedCount = 0;
	let candidateCount = 0;

	for (const issue of issues) {
		processedCount++;
		console.log(
			`[DEBUG] Processing issue #${issue.number} (${processedCount}/${issues.length}): ${issue.title}`
		);

		console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
		const comments = await githubRequest(
			`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
			token
		);
		console.log(
			`[DEBUG] Issue #${issue.number} has ${comments.length} comments`
		);

		const dupeComments = comments.filter(
			(comment) =>
				comment.body.includes('Found') &&
				comment.body.includes('possible duplicate') &&
				comment.user.type === 'Bot'
		);
		console.log(
			`[DEBUG] Issue #${issue.number} has ${dupeComments.length} duplicate detection comments`
		);

		if (dupeComments.length === 0) {
			console.log(
				`[DEBUG] Issue #${issue.number} - no duplicate comments found, skipping`
			);
			continue;
		}

		const lastDupeComment = dupeComments[dupeComments.length - 1];
		const dupeCommentDate = new Date(lastDupeComment.created_at);
		console.log(
			`[DEBUG] Issue #${issue.number} - most recent duplicate comment from: ${dupeCommentDate.toISOString()}`
		);

		if (dupeCommentDate > threeDaysAgo) {
			console.log(
				`[DEBUG] Issue #${issue.number} - duplicate comment is too recent, skipping`
			);
			continue;
		}
		console.log(
			`[DEBUG] Issue #${issue.number} - duplicate comment is old enough (${Math.floor(
				(Date.now() - dupeCommentDate.getTime()) / (1000 * 60 * 60 * 24)
			)} days)`
		);

		const commentsAfterDupe = comments.filter(
			(comment) => new Date(comment.created_at) > dupeCommentDate
		);
		console.log(
			`[DEBUG] Issue #${issue.number} - ${commentsAfterDupe.length} comments after duplicate detection`
		);

		if (commentsAfterDupe.length > 0) {
			console.log(
				`[DEBUG] Issue #${issue.number} - has activity after duplicate comment, skipping`
			);
			continue;
		}

		console.log(
			`[DEBUG] Issue #${issue.number} - checking reactions on duplicate comment...`
		);
		const reactions = await githubRequest(
			`/repos/${owner}/${repo}/issues/comments/${lastDupeComment.id}/reactions`,
			token
		);
		console.log(
			`[DEBUG] Issue #${issue.number} - duplicate comment has ${reactions.length} reactions`
		);

		const authorThumbsDown = reactions.some(
			(reaction) =>
				reaction.user.id === issue.user.id && reaction.content === '-1'
		);
		console.log(
			`[DEBUG] Issue #${issue.number} - author thumbs down reaction: ${authorThumbsDown}`
		);

		if (authorThumbsDown) {
			console.log(
				`[DEBUG] Issue #${issue.number} - author disagreed with duplicate detection, skipping`
			);
			continue;
		}

		const duplicateIssueNumber = extractDuplicateIssueNumber(
			lastDupeComment.body
		);
		if (!duplicateIssueNumber) {
			console.log(
				`[DEBUG] Issue #${issue.number} - could not extract duplicate issue number from comment, skipping`
			);
			continue;
		}

		candidateCount++;
		const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;

		try {
			console.log(
				`[INFO] Auto-closing issue #${issue.number} as duplicate of #${duplicateIssueNumber}: ${issueUrl}`
			);
			await closeIssueAsDuplicate(
				owner,
				repo,
				issue.number,
				duplicateIssueNumber,
				token
			);
			console.log(
				`[SUCCESS] Successfully closed issue #${issue.number} as duplicate of #${duplicateIssueNumber}`
			);
		} catch (error) {
			console.error(
				`[ERROR] Failed to close issue #${issue.number} as duplicate: ${error}`
			);
		}
	}

	console.log(
		`[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates for auto-close`
	);
}

autoCloseDuplicates().catch(console.error);
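The auto-close decision hinges on pulling the referenced issue number out of the bot's comment. A quick sanity check of that helper — the function mirrors `extractDuplicateIssueNumber` as defined in the script, and the sample comment text is made up for illustration:

```javascript
// Mirrors extractDuplicateIssueNumber from auto-close-duplicates.mjs:
// the first "#<digits>" token in the comment body is taken as the duplicate's number
function extractDuplicateIssueNumber(commentBody) {
	const match = commentBody.match(/#(\d+)/);
	return match ? parseInt(match[1], 10) : null;
}

// Hypothetical bot comment for illustration
const comment = 'Found 2 possible duplicates of this issue: #123, #456';
console.log(extractDuplicateIssueNumber(comment)); // → 123 (only the first reference is used)
console.log(extractDuplicateIssueNumber('No references here')); // → null
```

Note that when the bot lists several candidates, only the first `#N` reference wins — a deliberate simplification in the script.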
178 .github/scripts/backfill-duplicate-comments.mjs vendored Normal file
@@ -0,0 +1,178 @@
#!/usr/bin/env node

async function githubRequest(endpoint, token, method = 'GET', body) {
	const response = await fetch(`https://api.github.com${endpoint}`, {
		method,
		headers: {
			Authorization: `Bearer ${token}`,
			Accept: 'application/vnd.github.v3+json',
			'User-Agent': 'backfill-duplicate-comments-script',
			...(body && { 'Content-Type': 'application/json' })
		},
		...(body && { body: JSON.stringify(body) })
	});

	if (!response.ok) {
		throw new Error(
			`GitHub API request failed: ${response.status} ${response.statusText}`
		);
	}

	return response.json();
}

async function triggerDedupeWorkflow(
	owner,
	repo,
	issueNumber,
	token,
	dryRun = true
) {
	if (dryRun) {
		console.log(
			`[DRY RUN] Would trigger dedupe workflow for issue #${issueNumber}`
		);
		return;
	}

	await githubRequest(
		`/repos/${owner}/${repo}/actions/workflows/claude-dedupe-issues.yml/dispatches`,
		token,
		'POST',
		{
			ref: 'main',
			inputs: {
				issue_number: issueNumber.toString()
			}
		}
	);
}

async function backfillDuplicateComments() {
	console.log('[DEBUG] Starting backfill duplicate comments script');

	const token = process.env.GITHUB_TOKEN;
	if (!token) {
		throw new Error(`GITHUB_TOKEN environment variable is required

Usage:
  node .github/scripts/backfill-duplicate-comments.mjs

Environment Variables:
  GITHUB_TOKEN - GitHub personal access token with repo and actions permissions (required)
  DRY_RUN - Set to "false" to actually trigger workflows (default: true for safety)
  DAYS_BACK - How many days back to look for old issues (default: 90)`);
	}
	console.log('[DEBUG] GitHub token found');

	const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
	const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
	const dryRun = process.env.DRY_RUN !== 'false';
	const daysBack = parseInt(process.env.DAYS_BACK || '90', 10);

	console.log(`[DEBUG] Repository: ${owner}/${repo}`);
	console.log(`[DEBUG] Dry run mode: ${dryRun}`);
	console.log(`[DEBUG] Looking back ${daysBack} days`);

	const cutoffDate = new Date();
	cutoffDate.setDate(cutoffDate.getDate() - daysBack);

	// Note: the `since` parameter filters by last-updated time, not creation time
	console.log(
		`[DEBUG] Fetching issues updated since ${cutoffDate.toISOString()}...`
	);
	const allIssues = [];
	let page = 1;
	const perPage = 100;

	while (true) {
		const pageIssues = await githubRequest(
			`/repos/${owner}/${repo}/issues?state=all&per_page=${perPage}&page=${page}&since=${cutoffDate.toISOString()}`,
			token
		);

		if (pageIssues.length === 0) break;

		// The issues endpoint also returns pull requests; skip those
		allIssues.push(...pageIssues.filter((issue) => !issue.pull_request));
		page++;

		// Safety limit to avoid infinite loops
		if (page > 100) {
			console.log('[DEBUG] Reached page limit, stopping pagination');
			break;
		}
	}

	console.log(
		`[DEBUG] Found ${allIssues.length} issues from the last ${daysBack} days`
	);

	let processedCount = 0;
	let candidateCount = 0;
	let triggeredCount = 0;

	for (const issue of allIssues) {
		processedCount++;
		console.log(
			`[DEBUG] Processing issue #${issue.number} (${processedCount}/${allIssues.length}): ${issue.title}`
		);

		console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
		const comments = await githubRequest(
			`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
			token
		);
		console.log(
			`[DEBUG] Issue #${issue.number} has ${comments.length} comments`
		);

		// Look for existing duplicate detection comments (from the dedupe bot)
		const dupeDetectionComments = comments.filter(
			(comment) =>
				comment.body.includes('Found') &&
				comment.body.includes('possible duplicate') &&
				comment.user.type === 'Bot'
		);

		console.log(
			`[DEBUG] Issue #${issue.number} has ${dupeDetectionComments.length} duplicate detection comments`
		);

		// Skip if there's already a duplicate detection comment
		if (dupeDetectionComments.length > 0) {
			console.log(
				`[DEBUG] Issue #${issue.number} already has duplicate detection comment, skipping`
			);
			continue;
		}

		candidateCount++;
		const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;

		try {
			console.log(
				`[INFO] ${dryRun ? '[DRY RUN] ' : ''}Triggering dedupe workflow for issue #${issue.number}: ${issueUrl}`
			);
			await triggerDedupeWorkflow(owner, repo, issue.number, token, dryRun);

			if (!dryRun) {
				console.log(
					`[SUCCESS] Successfully triggered dedupe workflow for issue #${issue.number}`
				);
			}
			triggeredCount++;
		} catch (error) {
			console.error(
				`[ERROR] Failed to trigger workflow for issue #${issue.number}: ${error}`
			);
		}

		// Add a delay between workflow triggers to avoid overwhelming the system
		await new Promise((resolve) => setTimeout(resolve, 1000));
	}

	console.log(
		`[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates without duplicate comments, ${dryRun ? 'would trigger' : 'triggered'} ${triggeredCount} workflows`
	);
}

backfillDuplicateComments().catch(console.error);
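The script's environment handling is easy to exercise in isolation. A small sketch of the same parsing rules — the `parseOptions` wrapper is illustrative, not part of the script, but the two expressions inside it are copied from it:

```javascript
// Same env-var parsing rules as backfill-duplicate-comments.mjs, wrapped for illustration
function parseOptions(env) {
	return {
		// dry-run stays on unless DRY_RUN is exactly the string "false"
		dryRun: env.DRY_RUN !== 'false',
		// how many days back to look, defaulting to 90
		daysBack: parseInt(env.DAYS_BACK || '90', 10)
	};
}

console.log(parseOptions({})); // → { dryRun: true, daysBack: 90 }
console.log(parseOptions({ DRY_RUN: 'false', DAYS_BACK: '30' })); // → { dryRun: false, daysBack: 30 }
```

Because any value other than the literal string `"false"` keeps dry-run on, setting `DRY_RUN=0` or `DRY_RUN=no` still leaves the script in its safe mode.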
102 .github/scripts/check-pre-release-mode.mjs vendored Executable file
@@ -0,0 +1,102 @@
#!/usr/bin/env node
import { readFileSync, existsSync } from 'node:fs';
import { join, dirname, resolve } from 'node:path';
import { fileURLToPath } from 'node:url';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

// Get context from command line argument or environment
const context = process.argv[2] || process.env.GITHUB_WORKFLOW || 'manual';

function findRootDir(startDir) {
	let currentDir = resolve(startDir);
	while (currentDir !== '/') {
		if (existsSync(join(currentDir, 'package.json'))) {
			try {
				const pkg = JSON.parse(
					readFileSync(join(currentDir, 'package.json'), 'utf8')
				);
				if (pkg.name === 'task-master-ai' || pkg.repository) {
					return currentDir;
				}
			} catch {}
		}
		currentDir = dirname(currentDir);
	}
	throw new Error('Could not find root directory');
}

function checkPreReleaseMode() {
	console.log('🔍 Checking if branch is in pre-release mode...');

	const rootDir = findRootDir(__dirname);
	const preJsonPath = join(rootDir, '.changeset', 'pre.json');

	// Check if pre.json exists
	if (!existsSync(preJsonPath)) {
		console.log('✅ Not in active pre-release mode - safe to proceed');
		process.exit(0);
	}

	try {
		// Read and parse pre.json
		const preJsonContent = readFileSync(preJsonPath, 'utf8');
		const preJson = JSON.parse(preJsonContent);

		// Check if we're in active pre-release mode
		if (preJson.mode === 'pre') {
			console.error('❌ ERROR: This branch is in active pre-release mode!');
			console.error('');

			// Provide context-specific error messages
			if (context === 'Release Check' || context === 'pull_request') {
				console.error(
					'Pre-release mode must be exited before merging to main.'
				);
				console.error('');
				console.error(
					'To fix this, run the following commands in your branch:'
				);
				console.error('  npx changeset pre exit');
				console.error('  git add -u');
				console.error('  git commit -m "chore: exit pre-release mode"');
				console.error('  git push');
				console.error('');
				console.error('Then update this pull request.');
			} else if (context === 'Release' || context === 'main') {
				console.error(
					'Pre-release mode should only be used on feature branches, not main.'
				);
				console.error('');
				console.error('To fix this, run the following commands locally:');
				console.error('  npx changeset pre exit');
				console.error('  git add -u');
				console.error('  git commit -m "chore: exit pre-release mode"');
				console.error('  git push origin main');
				console.error('');
				console.error('Then re-run this workflow.');
			} else {
				console.error('Pre-release mode must be exited before proceeding.');
				console.error('');
				console.error('To fix this, run the following commands:');
				console.error('  npx changeset pre exit');
				console.error('  git add -u');
				console.error('  git commit -m "chore: exit pre-release mode"');
				console.error('  git push');
			}

			process.exit(1);
		}

		console.log('✅ Not in active pre-release mode - safe to proceed');
		process.exit(0);
	} catch (error) {
		console.error(`❌ ERROR: Unable to parse .changeset/pre.json – aborting.`);
		console.error(`Error details: ${error.message}`);
		process.exit(1);
	}
}

// Run the check
checkPreReleaseMode();
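The whole check above reduces to one question: does `.changeset/pre.json` exist with `mode` set to `'pre'`? A minimal sketch of the two states — the object shapes follow the changesets pre-release file format, but the sample values here are illustrative:

```javascript
// Illustrative pre.json contents in the two states the script distinguishes
const activePre = { mode: 'pre', tag: 'rc', initialVersions: {}, changesets: [] };
const exitedPre = { ...activePre, mode: 'exit' }; // what `npx changeset pre exit` leaves behind

// The script only fails the check while mode === 'pre'
const shouldBlock = (preJson) => preJson.mode === 'pre';
console.log(shouldBlock(activePre)); // → true
console.log(shouldBlock(exitedPre)); // → false
```

This is why `npx changeset pre exit` (plus a commit) is sufficient to unblock the workflow: it rewrites `mode` rather than deleting the file.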
157 .github/scripts/parse-metrics.mjs vendored Normal file
@@ -0,0 +1,157 @@
#!/usr/bin/env node

import { readFileSync, existsSync, writeFileSync } from 'fs';

function parseMetricsTable(content, metricName) {
	const lines = content.split('\n');

	for (let i = 0; i < lines.length; i++) {
		const line = lines[i].trim();
		// Match a markdown table row like: | Metric Name | value | ...
		const safeName = metricName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
		const re = new RegExp(`^\\|\\s*${safeName}\\s*\\|\\s*([^|]+)\\|?`);
		const match = line.match(re);
		if (match) {
			return match[1].trim() || 'N/A';
		}
	}
	return 'N/A';
}

function parseCountMetric(content, metricName) {
	const result = parseMetricsTable(content, metricName);
	// Extract number from string, handling commas and spaces
	const numberMatch = result.toString().match(/[\d,]+/);
	if (numberMatch) {
		const number = parseInt(numberMatch[0].replace(/,/g, ''), 10);
		return isNaN(number) ? 0 : number;
	}
	return 0;
}

function main() {
	const metrics = {
		issues_created: 0,
		issues_closed: 0,
		prs_created: 0,
		prs_merged: 0,
		issue_avg_first_response: 'N/A',
		issue_avg_time_to_close: 'N/A',
		pr_avg_first_response: 'N/A',
		pr_avg_merge_time: 'N/A'
	};

	// Parse issue metrics
	if (existsSync('issue_metrics.md')) {
		console.log('📄 Found issue_metrics.md, parsing...');
		const issueContent = readFileSync('issue_metrics.md', 'utf8');

		metrics.issues_created = parseCountMetric(
			issueContent,
			'Total number of items created'
		);
		metrics.issues_closed = parseCountMetric(
			issueContent,
			'Number of items closed'
		);
		metrics.issue_avg_first_response = parseMetricsTable(
			issueContent,
			'Time to first response'
		);
		metrics.issue_avg_time_to_close = parseMetricsTable(
			issueContent,
			'Time to close'
		);
	} else {
		console.warn('[parse-metrics] issue_metrics.md not found; using defaults.');
	}

	// Parse PR created metrics
	if (existsSync('pr_created_metrics.md')) {
		console.log('📄 Found pr_created_metrics.md, parsing...');
		const prCreatedContent = readFileSync('pr_created_metrics.md', 'utf8');

		metrics.prs_created = parseCountMetric(
			prCreatedContent,
			'Total number of items created'
		);
		metrics.pr_avg_first_response = parseMetricsTable(
			prCreatedContent,
			'Time to first response'
		);
	} else {
		console.warn(
			'[parse-metrics] pr_created_metrics.md not found; using defaults.'
		);
	}

	// Parse PR merged metrics (for more accurate merge data)
	if (existsSync('pr_merged_metrics.md')) {
		console.log('📄 Found pr_merged_metrics.md, parsing...');
		const prMergedContent = readFileSync('pr_merged_metrics.md', 'utf8');

		metrics.prs_merged = parseCountMetric(
			prMergedContent,
			'Total number of items created'
		);
		// For merged PRs, "Time to close" is actually time to merge
		metrics.pr_avg_merge_time = parseMetricsTable(
			prMergedContent,
			'Time to close'
		);
	} else {
		console.warn(
			'[parse-metrics] pr_merged_metrics.md not found; falling back to pr_metrics.md.'
		);
		// Fallback: try old pr_metrics.md if it exists
		if (existsSync('pr_metrics.md')) {
			console.log('📄 Falling back to pr_metrics.md...');
			const prContent = readFileSync('pr_metrics.md', 'utf8');

			const mergedCount = parseCountMetric(prContent, 'Number of items merged');
			metrics.prs_merged =
				mergedCount || parseCountMetric(prContent, 'Number of items closed');

			const maybeMergeTime = parseMetricsTable(
				prContent,
				'Average time to merge'
			);
			metrics.pr_avg_merge_time =
				maybeMergeTime !== 'N/A'
					? maybeMergeTime
					: parseMetricsTable(prContent, 'Time to close');
		} else {
			console.warn('[parse-metrics] pr_metrics.md not found; using defaults.');
		}
	}

	// Output for GitHub Actions
	const output = Object.entries(metrics)
		.map(([key, value]) => `${key}=${value}`)
		.join('\n');

	// Always output to stdout for debugging
	console.log('\n=== FINAL METRICS ===');
	Object.entries(metrics).forEach(([key, value]) => {
		console.log(`${key}: ${value}`);
	});

	// Write to GITHUB_OUTPUT if in GitHub Actions
	if (process.env.GITHUB_OUTPUT) {
		try {
			writeFileSync(process.env.GITHUB_OUTPUT, output + '\n', { flag: 'a' });
			console.log(
				`\nSuccessfully wrote metrics to ${process.env.GITHUB_OUTPUT}`
			);
		} catch (error) {
			console.error(`Failed to write to GITHUB_OUTPUT: ${error.message}`);
			process.exit(1);
		}
	} else {
		console.log(
			'\nNo GITHUB_OUTPUT environment variable found, skipping file write'
		);
	}
}

main();
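The table parser is the heart of the script above; it can be exercised against a small markdown fragment. The function body mirrors `parseMetricsTable` from the script, while the sample report table is made up for illustration:

```javascript
// Mirrors parseMetricsTable from parse-metrics.mjs: scans for a markdown table
// row whose first cell is the metric name and returns the second cell, trimmed
function parseMetricsTable(content, metricName) {
	const lines = content.split('\n');
	for (let i = 0; i < lines.length; i++) {
		const line = lines[i].trim();
		const safeName = metricName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
		const re = new RegExp(`^\\|\\s*${safeName}\\s*\\|\\s*([^|]+)\\|?`);
		const match = line.match(re);
		if (match) {
			return match[1].trim() || 'N/A';
		}
	}
	return 'N/A';
}

// Illustrative report fragment
const report = [
	'| Metric | Average |',
	'| --- | --- |',
	'| Time to first response | 4:32:11 |',
	'| Time to close | 1 day, 2:15:00 |'
].join('\n');

console.log(parseMetricsTable(report, 'Time to first response')); // → "4:32:11"
console.log(parseMetricsTable(report, 'Missing metric')); // → "N/A"
```

Escaping the metric name before building the `RegExp` is what lets metric names containing `(`, `)`, or `|` be looked up safely.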
30 .github/scripts/release.mjs vendored Executable file
@@ -0,0 +1,30 @@
#!/usr/bin/env node
import { existsSync, unlinkSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { findRootDir, runCommand } from './utils.mjs';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

const rootDir = findRootDir(__dirname);

console.log('🚀 Starting release process...');

// Double-check we're not in pre-release mode (safety net)
const preJsonPath = join(rootDir, '.changeset', 'pre.json');
if (existsSync(preJsonPath)) {
	console.log('⚠️ Warning: pre.json still exists. Removing it...');
	unlinkSync(preJsonPath);
}

// Check if the extension version has changed and tag it
// This prevents changeset from trying to publish the private package
runCommand('node', [join(__dirname, 'tag-extension.mjs')]);

// Run changeset publish for npm packages
runCommand('npx', ['changeset', 'publish']);

console.log('✅ Release process completed!');

// The extension tag (if created) will trigger the extension-release workflow
33 .github/scripts/tag-extension.mjs vendored Executable file
@@ -0,0 +1,33 @@
#!/usr/bin/env node
import assert from 'node:assert/strict';
import { readFileSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { findRootDir, createAndPushTag } from './utils.mjs';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

const rootDir = findRootDir(__dirname);

// Read the extension's package.json
const extensionDir = join(rootDir, 'apps', 'extension');
const pkgPath = join(extensionDir, 'package.json');

let pkg;
try {
	const pkgContent = readFileSync(pkgPath, 'utf8');
	pkg = JSON.parse(pkgContent);
} catch (error) {
	console.error('Failed to read package.json:', error.message);
	process.exit(1);
}

// Ensure we have required fields
assert(pkg.name, 'package.json must have a name field');
assert(pkg.version, 'package.json must have a version field');

const tag = `${pkg.name}@${pkg.version}`;

// Create and push the tag if it doesn't exist
createAndPushTag(tag);
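The tag this script pushes is simply `name@version` taken from the extension's package.json. A minimal sketch of that format (the package name and version below are hypothetical, for illustration only):

```javascript
// Hypothetical package.json fields -- not the real extension manifest
const pkg = { name: 'my-extension', version: '1.2.3' };

// Same tag format tag-extension.mjs uses: <name>@<version>
const tag = `${pkg.name}@${pkg.version}`;
console.log(tag); // my-extension@1.2.3
```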
88
.github/scripts/utils.mjs
vendored
Executable file
@@ -0,0 +1,88 @@
#!/usr/bin/env node
import { spawnSync } from 'node:child_process';
import { readFileSync } from 'node:fs';
import { join, dirname, resolve } from 'node:path';

// Find the root directory by looking for package.json with task-master-ai
export function findRootDir(startDir) {
	let currentDir = resolve(startDir);
	while (currentDir !== '/') {
		const pkgPath = join(currentDir, 'package.json');
		try {
			const pkg = JSON.parse(readFileSync(pkgPath, 'utf8'));
			if (pkg.name === 'task-master-ai' || pkg.repository) {
				return currentDir;
			}
		} catch {}
		currentDir = dirname(currentDir);
	}
	throw new Error('Could not find root directory');
}

// Run a command with proper error handling
export function runCommand(command, args = [], options = {}) {
	console.log(`Running: ${command} ${args.join(' ')}`);
	const result = spawnSync(command, args, {
		encoding: 'utf8',
		stdio: 'inherit',
		...options
	});

	if (result.status !== 0) {
		console.error(`Command failed with exit code ${result.status}`);
		process.exit(result.status);
	}

	return result;
}

// Get package version from a package.json file
export function getPackageVersion(packagePath) {
	try {
		const pkg = JSON.parse(readFileSync(packagePath, 'utf8'));
		return pkg.version;
	} catch (error) {
		console.error(
			`Failed to read package version from ${packagePath}:`,
			error.message
		);
		process.exit(1);
	}
}

// Check if a git tag exists on remote
export function tagExistsOnRemote(tag, remote = 'origin') {
	const result = spawnSync('git', ['ls-remote', remote, tag], {
		encoding: 'utf8'
	});

	return result.status === 0 && result.stdout.trim() !== '';
}

// Create and push a git tag if it doesn't exist
export function createAndPushTag(tag, remote = 'origin') {
	// Check if tag already exists
	if (tagExistsOnRemote(tag, remote)) {
		console.log(`Tag ${tag} already exists on remote, skipping`);
		return false;
	}

	console.log(`Creating new tag: ${tag}`);

	// Create the tag locally
	const tagResult = spawnSync('git', ['tag', tag]);
	if (tagResult.status !== 0) {
		console.error('Failed to create tag:', tagResult.error || tagResult.stderr);
		process.exit(1);
	}

	// Push the tag to remote
	const pushResult = spawnSync('git', ['push', remote, tag]);
	if (pushResult.status !== 0) {
		console.error('Failed to push tag:', pushResult.error || pushResult.stderr);
		process.exit(1);
	}

	console.log(`✅ Successfully created and pushed tag: ${tag}`);
	return true;
}
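`findRootDir` walks upward from the script's directory until it finds a package.json naming `task-master-ai` (or carrying a `repository` field). A rough sketch of the same walk against an in-memory map, with hypothetical paths, shows the traversal order without touching the real filesystem:

```javascript
// In-memory sketch of the upward package.json walk in findRootDir.
// Paths and file contents here are hypothetical.
const files = new Map([
  ['/repo/package.json', '{"name":"task-master-ai"}']
]);

function findRootDirSketch(startDir) {
  let dir = startDir;
  while (dir !== '/') {
    const raw = files.get(dir + '/package.json');
    if (raw) {
      const pkg = JSON.parse(raw);
      // Same stop condition as utils.mjs
      if (pkg.name === 'task-master-ai' || pkg.repository) return dir;
    }
    // Step up one path segment (dirname equivalent for this sketch)
    dir = dir.slice(0, dir.lastIndexOf('/')) || '/';
  }
  throw new Error('Could not find root directory');
}

console.log(findRootDirSketch('/repo/.github/scripts')); // /repo
```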
31
.github/workflows/auto-close-duplicates.yml
vendored
Normal file
@@ -0,0 +1,31 @@
name: Auto-close duplicate issues
# description: Auto-closes issues that are duplicates of existing issues

on:
  schedule:
    - cron: "0 9 * * *" # Runs daily at 9 AM UTC
  workflow_dispatch:

jobs:
  auto-close-duplicates:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write # Need write permission to close issues and add comments

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Auto-close duplicate issues
        run: node .github/scripts/auto-close-duplicates.mjs
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
          GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
46
.github/workflows/backfill-duplicate-comments.yml
vendored
Normal file
@@ -0,0 +1,46 @@
name: Backfill Duplicate Comments
# description: Triggers duplicate detection for old issues that don't have duplicate comments

on:
  workflow_dispatch:
    inputs:
      days_back:
        description: "How many days back to look for old issues"
        required: false
        default: "90"
        type: string
      dry_run:
        description: "Dry run mode (true to only log what would be done)"
        required: false
        default: "true"
        type: choice
        options:
          - "true"
          - "false"

jobs:
  backfill-duplicate-comments:
    runs-on: ubuntu-latest
    timeout-minutes: 30
    permissions:
      contents: read
      issues: read
      actions: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Backfill duplicate comments
        run: node .github/scripts/backfill-duplicate-comments.mjs
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
          GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
          DAYS_BACK: ${{ inputs.days_back }}
          DRY_RUN: ${{ inputs.dry_run }}
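Note that `workflow_dispatch` inputs reach the script as strings, so `dry_run` arrives in `process.env.DRY_RUN` as `"true"` or `"false"`, never a boolean. A minimal sketch of how a script might parse it (this helper is hypothetical; backfill-duplicate-comments.mjs itself is not shown in this diff):

```javascript
// Hypothetical helper: workflow_dispatch inputs arrive as strings via env,
// so the comparison must be against the string 'false', not a boolean.
function isDryRun(env) {
  return env.DRY_RUN !== 'false'; // default to the safe mode when unset
}

console.log(isDryRun({ DRY_RUN: 'true' }));  // true
console.log(isDryRun({}));                   // true (safe default, matches the input default "true")
console.log(isDryRun({ DRY_RUN: 'false' })); // false
```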
145
.github/workflows/ci.yml
vendored
Normal file
@@ -0,0 +1,145 @@
name: CI

on:
  push:
    branches:
      - main
      - next
  pull_request:
  workflow_dispatch:

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

permissions:
  contents: read

env:
  DO_NOT_TRACK: 1
  NODE_ENV: development

jobs:
  # Fast checks that can run in parallel
  format-check:
    name: Format Check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2

      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: "npm"

      - name: Install dependencies
        run: npm install --frozen-lockfile --prefer-offline
        timeout-minutes: 5

      - name: Format Check
        run: npm run format-check
        env:
          FORCE_COLOR: 1

  typecheck:
    name: Typecheck
    timeout-minutes: 10
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2

      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: "npm"

      - name: Install dependencies
        run: npm install --frozen-lockfile --prefer-offline
        timeout-minutes: 5

      - name: Typecheck
        run: npm run turbo:typecheck
        env:
          FORCE_COLOR: 1

  # Build job to ensure everything compiles
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2

      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: "npm"

      - name: Install dependencies
        run: npm install --frozen-lockfile --prefer-offline
        timeout-minutes: 5

      - name: Build
        run: npm run turbo:build
        env:
          NODE_ENV: production
          FORCE_COLOR: 1
          TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
          TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
          TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}

      - name: Upload build artifacts
        uses: actions/upload-artifact@v4
        with:
          name: build-artifacts
          path: dist/
          retention-days: 1

  test:
    name: Test
    timeout-minutes: 15
    runs-on: ubuntu-latest
    needs: [format-check, typecheck, build]
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2

      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: "npm"

      - name: Install dependencies
        run: npm install --frozen-lockfile --prefer-offline
        timeout-minutes: 5

      - name: Download build artifacts
        uses: actions/download-artifact@v4
        with:
          name: build-artifacts
          path: dist/

      - name: Run Tests
        run: |
          npm run test:coverage -- --coverageThreshold '{"global":{"branches":0,"functions":0,"lines":0,"statements":0}}' --detectOpenHandles --forceExit
        env:
          NODE_ENV: test
          CI: true
          FORCE_COLOR: 1

      - name: Upload Test Results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: test-results
          path: |
            test-results
            coverage
            junit.xml
          retention-days: 30
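The `--coverageThreshold` flag in the test step passes an inline JSON object. Its shape is worth noting: every minimum is zero, so coverage is collected and reported but can never fail the run. Parsing the exact string from the workflow confirms the structure:

```javascript
// The inline --coverageThreshold JSON from the CI test step, verbatim.
// All minimums are 0, so coverage is reported but never fails the build.
const threshold = JSON.parse(
  '{"global":{"branches":0,"functions":0,"lines":0,"statements":0}}'
);

console.log(threshold.global.lines); // 0
```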
81
.github/workflows/claude-dedupe-issues.yml
vendored
Normal file
@@ -0,0 +1,81 @@
name: Claude Issue Dedupe
# description: Automatically dedupe GitHub issues using Claude Code

on:
  issues:
    types: [opened]
  workflow_dispatch:
    inputs:
      issue_number:
        description: "Issue number to process for duplicate detection"
        required: true
        type: string

jobs:
  claude-dedupe-issues:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Run Claude Code slash command
        uses: anthropics/claude-code-base-action@beta
        with:
          prompt: "/dedupe ${{ github.repository }}/issues/${{ github.event.issue.number || inputs.issue_number }}"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_env: |
            GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Log duplicate comment event to Statsig
        if: always()
        env:
          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
        run: |
          ISSUE_NUMBER=${{ github.event.issue.number || inputs.issue_number }}
          REPO=${{ github.repository }}

          if [ -z "$STATSIG_API_KEY" ]; then
            echo "STATSIG_API_KEY not found, skipping Statsig logging"
            exit 0
          fi

          # Prepare the event payload
          EVENT_PAYLOAD=$(jq -n \
            --arg issue_number "$ISSUE_NUMBER" \
            --arg repo "$REPO" \
            --arg triggered_by "${{ github.event_name }}" \
            '{
              events: [{
                eventName: "github_duplicate_comment_added",
                value: 1,
                metadata: {
                  repository: $repo,
                  issue_number: ($issue_number | tonumber),
                  triggered_by: $triggered_by,
                  workflow_run_id: "${{ github.run_id }}"
                },
                time: (now | floor | tostring)
              }]
            }')

          # Send to Statsig API
          echo "Logging duplicate comment event to Statsig for issue #${ISSUE_NUMBER}"

          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
            -H "Content-Type: application/json" \
            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
            -d "$EVENT_PAYLOAD")

          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
          BODY=$(echo "$RESPONSE" | head -n-1)

          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
            echo "Successfully logged duplicate comment event for issue #${ISSUE_NUMBER}"
          else
            echo "Failed to log duplicate comment event for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
          fi
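The `jq -n` invocation in the Statsig step builds a single-event payload. A rough JavaScript equivalent (field names taken from the workflow's jq program; the argument values below are illustrative) makes the resulting shape easier to see:

```javascript
// Rough JS equivalent of the jq payload built in the Statsig logging step.
// Field names come from the workflow; argument values are illustrative.
function buildStatsigPayload(issueNumber, repo, triggeredBy, runId) {
  return {
    events: [
      {
        eventName: 'github_duplicate_comment_added',
        value: 1,
        metadata: {
          repository: repo,
          issue_number: Number(issueNumber), // jq: ($issue_number | tonumber)
          triggered_by: triggeredBy,
          workflow_run_id: runId
        },
        time: String(Math.floor(Date.now() / 1000)) // jq: (now | floor | tostring)
      }
    ]
  };
}

const payload = buildStatsigPayload('42', 'owner/repo', 'issues', '123456');
console.log(JSON.stringify(payload, null, 2));
```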
57
.github/workflows/claude-docs-trigger.yml
vendored
Normal file
@@ -0,0 +1,57 @@
name: Trigger Claude Documentation Update

on:
  push:
    branches:
      - next
    paths-ignore:
      - "apps/docs/**"
      - "*.md"
      - ".github/workflows/**"

jobs:
  trigger-docs-update:
    # Only run if changes were merged (not direct pushes from bots)
    if: github.actor != 'github-actions[bot]' && github.actor != 'dependabot[bot]'
    runs-on: ubuntu-latest
    permissions:
      contents: read
      actions: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 2 # Need previous commit for comparison

      - name: Get changed files
        id: changed-files
        run: |
          echo "Changed files in this push:"
          git diff --name-only HEAD^ HEAD | tee changed_files.txt

          # Store changed files for Claude to analyze (escaped for JSON)
          CHANGED_FILES=$(git diff --name-only HEAD^ HEAD | jq -Rs .)
          echo "changed_files=$CHANGED_FILES" >> $GITHUB_OUTPUT

          # Get the commit message (escaped for JSON)
          COMMIT_MSG=$(git log -1 --pretty=%B | jq -Rs .)
          echo "commit_message=$COMMIT_MSG" >> $GITHUB_OUTPUT

          # Get diff for documentation context (escaped for JSON)
          COMMIT_DIFF=$(git diff HEAD^ HEAD --stat | jq -Rs .)
          echo "commit_diff=$COMMIT_DIFF" >> $GITHUB_OUTPUT

          # Get commit SHA
          echo "commit_sha=${{ github.sha }}" >> $GITHUB_OUTPUT

      - name: Trigger Claude workflow
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          # Trigger the Claude docs updater workflow with the change information
          gh workflow run claude-docs-updater.yml \
            --ref next \
            -f commit_sha="${{ steps.changed-files.outputs.commit_sha }}" \
            -f commit_message=${{ steps.changed-files.outputs.commit_message }} \
            -f changed_files=${{ steps.changed-files.outputs.changed_files }} \
            -f commit_diff=${{ steps.changed-files.outputs.commit_diff }}
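The `jq -Rs .` pipes in the "Get changed files" step exist to wrap multi-line git output as a single JSON string before it is handed to `gh workflow run -f`. In JavaScript terms the transformation is essentially `JSON.stringify` applied to the raw text (the commit message below is illustrative):

```javascript
// What `jq -Rs .` does to multi-line command output, expressed as
// JSON.stringify. The commit message here is illustrative.
const commitMessage = 'feat: add extension tagging\n\nLonger body here';
const escaped = JSON.stringify(commitMessage);
console.log(escaped);
```

The escaped form is a single line, which is what lets it survive being passed through `$GITHUB_OUTPUT` and a shell command line intact.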
145
.github/workflows/claude-docs-updater.yml
vendored
Normal file
@@ -0,0 +1,145 @@
name: Claude Documentation Updater

on:
  workflow_dispatch:
    inputs:
      commit_sha:
        description: 'The commit SHA that triggered this update'
        required: true
        type: string
      commit_message:
        description: 'The commit message'
        required: true
        type: string
      changed_files:
        description: 'List of changed files'
        required: true
        type: string
      commit_diff:
        description: 'Diff summary of changes'
        required: true
        type: string

jobs:
  update-docs:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
      issues: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          ref: next
          fetch-depth: 0 # Need full history to checkout specific commit

      - name: Create docs update branch
        id: create-branch
        run: |
          BRANCH_NAME="docs/auto-update-$(date +%Y%m%d-%H%M%S)"
          git checkout -b $BRANCH_NAME
          echo "branch_name=$BRANCH_NAME" >> $GITHUB_OUTPUT

      - name: Run Claude Code to Update Documentation
        uses: anthropics/claude-code-action@beta
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          timeout_minutes: "30"
          mode: "agent"
          github_token: ${{ secrets.GITHUB_TOKEN }}
          experimental_allowed_domains: |
            .anthropic.com
            .github.com
            api.github.com
            .githubusercontent.com
            registry.npmjs.org
            .task-master.dev
          base_branch: "next"
          direct_prompt: |
            You are a documentation specialist. Analyze the recent changes pushed to the 'next' branch and update the documentation accordingly.

            Recent changes:
            - Commit: ${{ inputs.commit_message }}
            - Changed files:
            ${{ inputs.changed_files }}

            - Changes summary:
            ${{ inputs.commit_diff }}

            Your task:
            1. Analyze the changes to understand what functionality was added, modified, or removed
            2. Check if these changes require documentation updates in apps/docs/
            3. If documentation updates are needed:
               - Update relevant documentation files in apps/docs/
               - Ensure examples are updated if APIs changed
               - Update any configuration documentation if config options changed
               - Add new documentation pages if new features were added
               - Update the changelog or release notes if applicable
            4. If no documentation updates are needed, skip creating changes

            Guidelines:
            - Focus only on user-facing changes that need documentation
            - Keep documentation clear, concise, and helpful
            - Include code examples where appropriate
            - Maintain consistent documentation style with existing docs
            - Don't document internal implementation details unless they affect users
            - Update navigation/menu files if new pages are added

            Only make changes if the documentation truly needs updating based on the code changes.

      - name: Check if changes were made
        id: check-changes
        run: |
          if git diff --quiet; then
            echo "has_changes=false" >> $GITHUB_OUTPUT
          else
            echo "has_changes=true" >> $GITHUB_OUTPUT
            git add -A
            git config --local user.email "github-actions[bot]@users.noreply.github.com"
            git config --local user.name "github-actions[bot]"
            git commit -m "docs: auto-update documentation based on changes in next branch

          This PR was automatically generated to update documentation based on recent changes.

          Original commit: ${{ inputs.commit_message }}

          Co-authored-by: Claude <claude-assistant@anthropic.com>"
          fi

      - name: Push changes and create PR
        if: steps.check-changes.outputs.has_changes == 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          git push origin ${{ steps.create-branch.outputs.branch_name }}

          # Create PR using GitHub CLI
          gh pr create \
            --title "docs: update documentation for recent changes" \
            --body "## 📚 Documentation Update

          This PR automatically updates documentation based on recent changes merged to the \`next\` branch.

          ### Original Changes
          **Commit:** ${{ inputs.commit_sha }}
          **Message:** ${{ inputs.commit_message }}

          ### Changed Files in Original Commit
          \`\`\`
          ${{ inputs.changed_files }}
          \`\`\`

          ### Documentation Updates
          This PR includes documentation updates to reflect the changes above. Please review to ensure:
          - [ ] Documentation accurately reflects the changes
          - [ ] Examples are correct and working
          - [ ] No important details are missing
          - [ ] Style is consistent with existing documentation

          ---
          *This PR was automatically generated by Claude Code GitHub Action*" \
            --base next \
            --head ${{ steps.create-branch.outputs.branch_name }} \
            --label "documentation" \
            --label "automated"
107
.github/workflows/claude-issue-triage.yml
vendored
Normal file
@@ -0,0 +1,107 @@
name: Claude Issue Triage
# description: Automatically triage GitHub issues using Claude Code

on:
  issues:
    types: [opened]

jobs:
  triage-issue:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Create triage prompt
        run: |
          mkdir -p /tmp/claude-prompts
          cat > /tmp/claude-prompts/triage-prompt.txt << 'EOF'
          You're an issue triage assistant for GitHub issues. Your task is to analyze the issue and select appropriate labels from the provided list.

          IMPORTANT: Don't post any comments or messages to the issue. Your only action should be to apply labels.

          Issue Information:
          - REPO: ${{ github.repository }}
          - ISSUE_NUMBER: ${{ github.event.issue.number }}

          TASK OVERVIEW:

          1. First, fetch the list of labels available in this repository by running: `gh label list`. Run exactly this command with nothing else.

          2. Next, use the GitHub tools to get context about the issue:
             - You have access to these tools:
               - mcp__github__get_issue: Use this to retrieve the current issue's details including title, description, and existing labels
               - mcp__github__get_issue_comments: Use this to read any discussion or additional context provided in the comments
               - mcp__github__update_issue: Use this to apply labels to the issue (do not use this for commenting)
               - mcp__github__search_issues: Use this to find similar issues that might provide context for proper categorization and to identify potential duplicate issues
               - mcp__github__list_issues: Use this to understand patterns in how other issues are labeled
             - Start by using mcp__github__get_issue to get the issue details

          3. Analyze the issue content, considering:
             - The issue title and description
             - The type of issue (bug report, feature request, question, etc.)
             - Technical areas mentioned
             - Severity or priority indicators
             - User impact
             - Components affected

          4. Select appropriate labels from the available labels list provided above:
             - Choose labels that accurately reflect the issue's nature
             - Be specific but comprehensive
             - Select priority labels if you can determine urgency (high-priority, med-priority, or low-priority)
             - Consider platform labels (android, ios) if applicable
             - If you find similar issues using mcp__github__search_issues, consider using a "duplicate" label if appropriate. Only do so if the issue is a duplicate of another OPEN issue.

          5. Apply the selected labels:
             - Use mcp__github__update_issue to apply your selected labels
             - DO NOT post any comments explaining your decision
             - DO NOT communicate directly with users
             - If no labels are clearly applicable, do not apply any labels

          IMPORTANT GUIDELINES:
          - Be thorough in your analysis
          - Only select labels from the provided list above
          - DO NOT post any comments to the issue
          - Your ONLY action should be to apply labels using mcp__github__update_issue
          - It's okay to not add any labels if none are clearly applicable
          EOF

      - name: Setup GitHub MCP Server
        run: |
          mkdir -p /tmp/mcp-config
          cat > /tmp/mcp-config/mcp-servers.json << 'EOF'
          {
            "mcpServers": {
              "github": {
                "command": "docker",
                "args": [
                  "run",
                  "-i",
                  "--rm",
                  "-e",
                  "GITHUB_PERSONAL_ACCESS_TOKEN",
                  "ghcr.io/github/github-mcp-server:sha-7aced2b"
                ],
                "env": {
                  "GITHUB_PERSONAL_ACCESS_TOKEN": "${{ secrets.GITHUB_TOKEN }}"
                }
              }
            }
          }
          EOF

      - name: Run Claude Code for Issue Triage
        uses: anthropics/claude-code-base-action@beta
        with:
          prompt_file: /tmp/claude-prompts/triage-prompt.txt
          allowed_tools: "Bash(gh label list),mcp__github__get_issue,mcp__github__get_issue_comments,mcp__github__update_issue,mcp__github__search_issues,mcp__github__list_issues"
          timeout_minutes: "5"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          mcp_config: /tmp/mcp-config/mcp-servers.json
          claude_env: |
            GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
36
.github/workflows/claude.yml
vendored
Normal file
@@ -0,0 +1,36 @@
name: Claude Code

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  issues:
    types: [opened, assigned]
  pull_request_review:
    types: [submitted]

jobs:
  claude:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
      (github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
      issues: read
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
        with:
          fetch-depth: 1

      - name: Run Claude Code
        id: claude
        uses: anthropics/claude-code-action@beta
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
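The `if:` expression in this workflow gates all four event types on an `@claude` mention in the relevant field. A rough JavaScript restatement of that predicate (payload shapes abbreviated to just the fields the expression reads) makes the branching explicit:

```javascript
// Rough restatement of the workflow's `if:` expression.
// Payload shapes are abbreviated to the fields the expression reads.
function shouldTriggerClaude(eventName, payload) {
  const mentions = (s) => typeof s === 'string' && s.includes('@claude');
  switch (eventName) {
    case 'issue_comment':
    case 'pull_request_review_comment':
      return mentions(payload.comment && payload.comment.body);
    case 'pull_request_review':
      return mentions(payload.review && payload.review.body);
    case 'issues':
      return (
        mentions(payload.issue && payload.issue.body) ||
        mentions(payload.issue && payload.issue.title)
      );
    default:
      return false;
  }
}

console.log(shouldTriggerClaude('issues', { issue: { title: '@claude please look' } })); // true
```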
140
.github/workflows/extension-ci.yml
vendored
Normal file
@@ -0,0 +1,140 @@
name: Extension CI

on:
  push:
    branches:
      - main
      - next
    paths:
      - 'apps/extension/**'
      - '.github/workflows/extension-ci.yml'
  pull_request:
    branches:
      - main
      - next
    paths:
      - 'apps/extension/**'
      - '.github/workflows/extension-ci.yml'

permissions:
  contents: read

jobs:
  setup:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Cache node_modules
        uses: actions/cache@v4
        with:
          path: |
            node_modules
            */*/node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install Monorepo Dependencies
        run: npm ci
        timeout-minutes: 5

  typecheck:
    needs: setup
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Restore node_modules
        uses: actions/cache@v4
        with:
          path: |
            node_modules
            */*/node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install if cache miss
        run: npm ci
        timeout-minutes: 3

      - name: Type Check Extension
        working-directory: apps/extension
        run: npm run check-types
        env:
          FORCE_COLOR: 1

  build:
    needs: setup
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Restore node_modules
        uses: actions/cache@v4
        with:
          path: |
            node_modules
            */*/node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install if cache miss
        run: npm ci
        timeout-minutes: 3

      - name: Build Extension
        working-directory: apps/extension
        run: npm run build
        env:
          FORCE_COLOR: 1

      - name: Package Extension
        working-directory: apps/extension
        run: npm run package
        env:
          FORCE_COLOR: 1

      - name: Verify Package Contents
        working-directory: apps/extension
        run: |
          echo "Checking vsix-build contents..."
          ls -la vsix-build/
          echo "Checking dist contents..."
          ls -la vsix-build/dist/
          echo "Checking package.json exists..."
          test -f vsix-build/package.json

      - name: Create VSIX Package (Test)
        working-directory: apps/extension/vsix-build
        run: npx vsce package --no-dependencies
        env:
          FORCE_COLOR: 1

      - name: Upload Extension Artifact
        uses: actions/upload-artifact@v4
        with:
          name: extension-package
          path: |
            apps/extension/vsix-build/*.vsix
            apps/extension/dist/
          retention-days: 30
.github/workflows/extension-release.yml (vendored, new file, 110 lines)
@@ -0,0 +1,110 @@
name: Extension Release

on:
  push:
    tags:
      - "extension@*"

permissions:
  contents: write

concurrency: extension-release-${{ github.ref }}

jobs:
  publish-extension:
    runs-on: ubuntu-latest
    environment: extension-release
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Cache node_modules
        uses: actions/cache@v4
        with:
          path: |
            node_modules
            */*/node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install Monorepo Dependencies
        run: npm ci
        timeout-minutes: 5

      - name: Type Check Extension
        working-directory: apps/extension
        run: npm run check-types
        env:
          FORCE_COLOR: 1

      - name: Build Extension
        working-directory: apps/extension
        run: npm run build
        env:
          FORCE_COLOR: 1

      - name: Package Extension
        working-directory: apps/extension
        run: npm run package
        env:
          FORCE_COLOR: 1

      - name: Create VSIX Package
        working-directory: apps/extension/vsix-build
        run: npx vsce package --no-dependencies
        env:
          FORCE_COLOR: 1

      - name: Get VSIX filename
        id: vsix-info
        working-directory: apps/extension/vsix-build
        run: |
          VSIX_FILE=$(find . -maxdepth 1 -name "*.vsix" -type f | head -n1 | xargs basename)
          if [ -z "$VSIX_FILE" ]; then
            echo "Error: No VSIX file found"
            exit 1
          fi
          echo "vsix-filename=$VSIX_FILE" >> "$GITHUB_OUTPUT"
          echo "Found VSIX: $VSIX_FILE"

      - name: Publish to VS Code Marketplace
        working-directory: apps/extension/vsix-build
        run: npx vsce publish --packagePath "${{ steps.vsix-info.outputs.vsix-filename }}"
        env:
          VSCE_PAT: ${{ secrets.VSCE_PAT }}
          FORCE_COLOR: 1

      - name: Install Open VSX CLI
        run: npm install -g ovsx

      - name: Publish to Open VSX Registry
        working-directory: apps/extension/vsix-build
        run: ovsx publish "${{ steps.vsix-info.outputs.vsix-filename }}"
        env:
          OVSX_PAT: ${{ secrets.OVSX_PAT }}
          FORCE_COLOR: 1

      - name: Upload Build Artifacts
        uses: actions/upload-artifact@v4
        with:
          name: extension-release-${{ github.ref_name }}
          path: |
            apps/extension/vsix-build/*.vsix
            apps/extension/dist/
          retention-days: 90

  notify-success:
    needs: publish-extension
    if: success()
    runs-on: ubuntu-latest
    steps:
      - name: Success Notification
        run: |
          echo "🎉 Extension ${{ github.ref_name }} successfully published!"
          echo "📦 Available on VS Code Marketplace"
          echo "🌍 Available on Open VSX Registry"
          echo "🏷️ GitHub release created: ${{ github.ref_name }}"
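The "Get VSIX filename" step in the release workflow above locates the first packaged `.vsix` in the build directory and fails the job if none exists. As a hedged sketch (the scratch directory and the `extension-1.2.3.vsix` filename below are made up for illustration, not taken from the repo), the same lookup can be exercised locally:

```shell
# Sketch of the workflow's VSIX lookup, run against a throwaway directory
# instead of apps/extension/vsix-build. The .vsix name is hypothetical.
set -e
demo_dir=$(mktemp -d)
cd "$demo_dir"
touch extension-1.2.3.vsix

# Same pipeline as the workflow step: first .vsix at the top level.
VSIX_FILE=$(find . -maxdepth 1 -name "*.vsix" -type f | head -n1 | xargs basename)
if [ -z "$VSIX_FILE" ]; then
  echo "Error: No VSIX file found"
  exit 1
fi
echo "Found VSIX: $VSIX_FILE"
```

The `-maxdepth 1` keeps nested build artifacts out of the match, and the empty-string guard turns a missing package into a hard failure before the publish steps run.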
.github/workflows/log-issue-events.yml (vendored, new file, 176 lines)
@@ -0,0 +1,176 @@
name: Log GitHub Issue Events

on:
  issues:
    types: [opened, closed]

jobs:
  log-issue-created:
    if: github.event.action == 'opened'
    runs-on: ubuntu-latest
    timeout-minutes: 5
    permissions:
      contents: read
      issues: read

    steps:
      - name: Log issue creation to Statsig
        env:
          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
        run: |
          ISSUE_NUMBER=${{ github.event.issue.number }}
          REPO=${{ github.repository }}
          ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
          AUTHOR="${{ github.event.issue.user.login }}"
          CREATED_AT="${{ github.event.issue.created_at }}"

          if [ -z "$STATSIG_API_KEY" ]; then
            echo "STATSIG_API_KEY not found, skipping Statsig logging"
            exit 0
          fi

          # Prepare the event payload
          EVENT_PAYLOAD=$(jq -n \
            --arg issue_number "$ISSUE_NUMBER" \
            --arg repo "$REPO" \
            --arg title "$ISSUE_TITLE" \
            --arg author "$AUTHOR" \
            --arg created_at "$CREATED_AT" \
            '{
              events: [{
                eventName: "github_issue_created",
                value: 1,
                metadata: {
                  repository: $repo,
                  issue_number: ($issue_number | tonumber),
                  issue_title: $title,
                  issue_author: $author,
                  created_at: $created_at
                },
                time: (now | floor | tostring)
              }]
            }')

          # Send to Statsig API
          echo "Logging issue creation to Statsig for issue #${ISSUE_NUMBER}"

          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
            -H "Content-Type: application/json" \
            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
            -d "$EVENT_PAYLOAD")

          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
          BODY=$(echo "$RESPONSE" | head -n-1)

          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
            echo "Successfully logged issue creation for issue #${ISSUE_NUMBER}"
          else
            echo "Failed to log issue creation for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
          fi

  log-issue-closed:
    if: github.event.action == 'closed'
    runs-on: ubuntu-latest
    timeout-minutes: 5
    permissions:
      contents: read
      issues: read

    steps:
      - name: Log issue closure to Statsig
        env:
          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          ISSUE_NUMBER=${{ github.event.issue.number }}
          REPO=${{ github.repository }}
          ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
          CLOSED_BY="${{ github.event.issue.closed_by.login }}"
          CLOSED_AT="${{ github.event.issue.closed_at }}"
          STATE_REASON="${{ github.event.issue.state_reason }}"

          if [ -z "$STATSIG_API_KEY" ]; then
            echo "STATSIG_API_KEY not found, skipping Statsig logging"
            exit 0
          fi

          # Get additional issue data via GitHub API
          echo "Fetching additional issue data for #${ISSUE_NUMBER}"
          ISSUE_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
            -H "Accept: application/vnd.github.v3+json" \
            "https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}")

          COMMENTS_COUNT=$(echo "$ISSUE_DATA" | jq -r '.comments')

          # Get reactions data
          REACTIONS_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
            -H "Accept: application/vnd.github.v3+json" \
            "https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}/reactions")

          REACTIONS_COUNT=$(echo "$REACTIONS_DATA" | jq '. | length')

          # Check if issue was closed automatically (by checking if closed_by is a bot)
          CLOSED_AUTOMATICALLY="false"
          if [[ "$CLOSED_BY" == *"[bot]"* ]]; then
            CLOSED_AUTOMATICALLY="true"
          fi

          # Check if closed as duplicate by state_reason
          CLOSED_AS_DUPLICATE="false"
          if [ "$STATE_REASON" = "duplicate" ]; then
            CLOSED_AS_DUPLICATE="true"
          fi

          # Prepare the event payload
          EVENT_PAYLOAD=$(jq -n \
            --arg issue_number "$ISSUE_NUMBER" \
            --arg repo "$REPO" \
            --arg title "$ISSUE_TITLE" \
            --arg closed_by "$CLOSED_BY" \
            --arg closed_at "$CLOSED_AT" \
            --arg state_reason "$STATE_REASON" \
            --arg comments_count "$COMMENTS_COUNT" \
            --arg reactions_count "$REACTIONS_COUNT" \
            --arg closed_automatically "$CLOSED_AUTOMATICALLY" \
            --arg closed_as_duplicate "$CLOSED_AS_DUPLICATE" \
            '{
              events: [{
                eventName: "github_issue_closed",
                value: 1,
                metadata: {
                  repository: $repo,
                  issue_number: ($issue_number | tonumber),
                  issue_title: $title,
                  closed_by: $closed_by,
                  closed_at: $closed_at,
                  state_reason: $state_reason,
                  comments_count: ($comments_count | tonumber),
                  reactions_count: ($reactions_count | tonumber),
                  closed_automatically: ($closed_automatically | test("true")),
                  closed_as_duplicate: ($closed_as_duplicate | test("true"))
                },
                time: (now | floor | tostring)
              }]
            }')

          # Send to Statsig API
          echo "Logging issue closure to Statsig for issue #${ISSUE_NUMBER}"

          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
            -H "Content-Type: application/json" \
            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
            -d "$EVENT_PAYLOAD")

          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
          BODY=$(echo "$RESPONSE" | head -n-1)

          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
            echo "Successfully logged issue closure for issue #${ISSUE_NUMBER}"
            echo "Closed by: $CLOSED_BY"
            echo "Comments: $COMMENTS_COUNT"
            echo "Reactions: $REACTIONS_COUNT"
            echo "Closed automatically: $CLOSED_AUTOMATICALLY"
            echo "Closed as duplicate: $CLOSED_AS_DUPLICATE"
          else
            echo "Failed to log issue closure for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
          fi
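Both jobs in the issue-events workflow above build their Statsig payload with `jq -n`, passing every field as a string and coercing numerics inside the filter. A minimal sketch of that payload construction in isolation (the issue number, repo, and title below are fabricated sample values, and this sends nothing to Statsig):

```shell
# Hedged sketch of the jq payload construction used by the workflow's
# "Log issue creation" step; sample values are hypothetical.
EVENT_PAYLOAD=$(jq -n \
  --arg issue_number "42" \
  --arg repo "owner/repo" \
  --arg title "Example issue" \
  '{
    events: [{
      eventName: "github_issue_created",
      value: 1,
      metadata: {
        repository: $repo,
        # --arg always binds a string; tonumber restores the numeric type.
        issue_number: ($issue_number | tonumber),
        issue_title: $title
      },
      time: (now | floor | tostring)
    }]
  }')
echo "$EVENT_PAYLOAD"
```

Passing everything through `--arg` (rather than interpolating into the filter) keeps quotes and backslashes in issue titles from breaking the JSON.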
.github/workflows/pre-release.yml (vendored, new file, 95 lines)
@@ -0,0 +1,95 @@
name: Pre-Release (RC)

on:
  workflow_dispatch: # Allows manual triggering from GitHub UI/API

concurrency: pre-release-${{ github.ref_name }}
jobs:
  rc:
    runs-on: ubuntu-latest
    # Only allow pre-releases on non-main branches
    if: github.ref != 'refs/heads/main'
    environment: extension-release
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: "npm"

      - name: Cache node_modules
        uses: actions/cache@v4
        with:
          path: |
            node_modules
            */*/node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install dependencies
        run: npm ci
        timeout-minutes: 2

      - name: Enter RC mode (if not already in RC mode)
        run: |
          # Check if we're in pre-release mode with the "rc" tag
          if [ -f .changeset/pre.json ]; then
            MODE=$(jq -r '.mode' .changeset/pre.json 2>/dev/null || echo '')
            TAG=$(jq -r '.tag' .changeset/pre.json 2>/dev/null || echo '')

            if [ "$MODE" = "exit" ]; then
              echo "Pre-release mode is in 'exit' state, re-entering RC mode..."
              npx changeset pre enter rc
            elif [ "$MODE" = "pre" ] && [ "$TAG" != "rc" ]; then
              echo "In pre-release mode but with wrong tag ($TAG), switching to RC..."
              npx changeset pre exit
              npx changeset pre enter rc
            elif [ "$MODE" = "pre" ] && [ "$TAG" = "rc" ]; then
              echo "Already in RC pre-release mode"
            else
              echo "Unknown mode state: $MODE, entering RC mode..."
              npx changeset pre enter rc
            fi
          else
            echo "No pre.json found, entering RC mode..."
            npx changeset pre enter rc
          fi

      - name: Version RC packages
        run: npx changeset version
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}

      - name: Run format
        run: npm run format
        env:
          FORCE_COLOR: 1

      - name: Build packages
        run: npm run turbo:build
        env:
          NODE_ENV: production
          FORCE_COLOR: 1
          TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
          TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
          TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}

      - name: Create Release Candidate Pull Request or Publish Release Candidate to npm
        uses: changesets/action@v1
        with:
          publish: npx changeset publish
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}

      - name: Commit & Push changes
        uses: actions-js/push@master
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          branch: ${{ github.ref }}
          message: "chore: rc version bump"
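The "Enter RC mode" step above keys its branching off the `mode` and `tag` fields of Changesets' `.changeset/pre.json`. A small sketch of just that detection logic, run against a fabricated `pre.json` in a temp directory (the file contents are a made-up example of the "already in RC mode" state, and no `changeset` commands are invoked):

```shell
# Hedged sketch of the workflow's RC-mode detection against a fabricated
# .changeset/pre.json; the real step reads the repo's own file.
set -e
demo=$(mktemp -d)
mkdir -p "$demo/.changeset"
printf '{"mode":"pre","tag":"rc"}\n' > "$demo/.changeset/pre.json"

# Same extraction the workflow uses, defaulting to empty on parse failure.
MODE=$(jq -r '.mode' "$demo/.changeset/pre.json" 2>/dev/null || echo '')
TAG=$(jq -r '.tag' "$demo/.changeset/pre.json" 2>/dev/null || echo '')

if [ "$MODE" = "pre" ] && [ "$TAG" = "rc" ]; then
  echo "Already in RC pre-release mode"
fi
```

The `|| echo ''` fallback means a corrupt or missing field falls through to the workflow's "unknown mode" branch instead of aborting the job.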
.github/workflows/release-check.yml (vendored, new file, 21 lines)
@@ -0,0 +1,21 @@
name: Release Check

on:
  pull_request:
    branches:
      - main

concurrency:
  group: release-check-${{ github.head_ref }}
  cancel-in-progress: true

jobs:
  check-release-mode:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Check release mode
        run: node ./.github/scripts/check-pre-release-mode.mjs "pull_request"
.github/workflows/release.yml (vendored, 37 lines changed)
@@ -3,7 +3,14 @@ on:
   push:
     branches:
       - main
+      - next
+
+concurrency: ${{ github.workflow }}-${{ github.ref }}
+
+permissions:
+  contents: write
+  pull-requests: write
+  id-token: write
 
 jobs:
   release:
     runs-on: ubuntu-latest
@@ -15,14 +22,38 @@ jobs:
       - uses: actions/setup-node@v4
         with:
           node-version: 20
+          cache: "npm"
+
+      - name: Cache node_modules
+        uses: actions/cache@v4
+        with:
+          path: |
+            node_modules
+            */*/node_modules
+          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
+          restore-keys: |
+            ${{ runner.os }}-node-
 
       - name: Install Dependencies
-        run: npm install
+        run: npm ci
+        timeout-minutes: 2
+
+      - name: Check pre-release mode
+        run: node ./.github/scripts/check-pre-release-mode.mjs "main"
+
+      - name: Build packages
+        run: npm run turbo:build
+        env:
+          NODE_ENV: production
+          FORCE_COLOR: 1
+          TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
+          TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
+          TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
 
       - name: Create Release Pull Request or Publish to npm
         uses: changesets/action@v1
         with:
-          publish: npm run release
+          publish: node ./.github/scripts/release.mjs
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
.github/workflows/update-models-md.yml (vendored, new file, 40 lines)
@@ -0,0 +1,40 @@
name: Update models.md from supported-models.json

on:
  push:
    branches:
      - main
      - next
    paths:
      - 'scripts/modules/supported-models.json'
      - 'docs/scripts/models-json-to-markdown.js'

jobs:
  update_markdown:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Run transformation script
        run: node docs/scripts/models-json-to-markdown.js

      - name: Format Markdown with Prettier
        run: npx prettier --write docs/models.md

      - name: Stage docs/models.md
        run: git add docs/models.md

      - name: Commit & Push docs/models.md
        uses: actions-js/push@master
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          branch: ${{ github.ref_name }}
          message: 'docs: Auto-update and format models.md'
          author_name: 'github-actions[bot]'
          author_email: 'github-actions[bot]@users.noreply.github.com'
.github/workflows/weekly-metrics-discord.yml (vendored, new file, 108 lines)
@@ -0,0 +1,108 @@
name: Weekly Metrics to Discord
# description: Sends weekly metrics summary to Discord channel

on:
  schedule:
    - cron: "0 9 * * 1" # Every Monday at 9 AM
  workflow_dispatch:

permissions:
  contents: read
  issues: read
  pull-requests: read

jobs:
  weekly-metrics:
    runs-on: ubuntu-latest
    env:
      DISCORD_WEBHOOK: ${{ secrets.DISCORD_METRICS_WEBHOOK }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Get dates for last 14 days
        run: |
          set -Eeuo pipefail
          # Last 14 days
          first_day=$(date -d "14 days ago" +%Y-%m-%d)
          last_day=$(date +%Y-%m-%d)

          echo "first_day=$first_day" >> $GITHUB_ENV
          echo "last_day=$last_day" >> $GITHUB_ENV
          echo "week_of=$(date -d '7 days ago' +'Week of %B %d, %Y')" >> $GITHUB_ENV
          echo "date_range=Past 14 days ($first_day to $last_day)" >> $GITHUB_ENV

      - name: Generate issue metrics
        uses: github/issue-metrics@v3
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SEARCH_QUERY: "repo:${{ github.repository }} is:issue created:${{ env.first_day }}..${{ env.last_day }}"
          HIDE_TIME_TO_ANSWER: true
          HIDE_LABEL_METRICS: false
          OUTPUT_FILE: issue_metrics.md

      - name: Generate PR created metrics
        uses: github/issue-metrics@v3
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SEARCH_QUERY: "repo:${{ github.repository }} is:pr created:${{ env.first_day }}..${{ env.last_day }}"
          OUTPUT_FILE: pr_created_metrics.md

      - name: Generate PR merged metrics
        uses: github/issue-metrics@v3
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SEARCH_QUERY: "repo:${{ github.repository }} is:pr is:merged merged:${{ env.first_day }}..${{ env.last_day }}"
          OUTPUT_FILE: pr_merged_metrics.md

      - name: Debug generated metrics
        run: |
          set -Eeuo pipefail
          echo "Listing markdown files in workspace:"
          ls -la *.md || true
          for f in issue_metrics.md pr_created_metrics.md pr_merged_metrics.md; do
            if [ -f "$f" ]; then
              echo "== $f (first 10 lines) =="
              head -n 10 "$f"
            else
              echo "Missing $f"
            fi
          done

      - name: Parse metrics
        id: metrics
        run: node .github/scripts/parse-metrics.mjs

      - name: Send to Discord
        uses: sarisia/actions-status-discord@v1
        if: env.DISCORD_WEBHOOK != ''
        with:
          webhook: ${{ env.DISCORD_WEBHOOK }}
          status: Success
          title: "📊 Weekly Metrics Report"
          description: |
            **${{ env.week_of }}**
            *${{ env.date_range }}*

            **🎯 Issues**
            • Created: ${{ steps.metrics.outputs.issues_created }}
            • Closed: ${{ steps.metrics.outputs.issues_closed }}
            • Avg Response Time: ${{ steps.metrics.outputs.issue_avg_first_response }}
            • Avg Time to Close: ${{ steps.metrics.outputs.issue_avg_time_to_close }}

            **🔀 Pull Requests**
            • Created: ${{ steps.metrics.outputs.prs_created }}
            • Merged: ${{ steps.metrics.outputs.prs_merged }}
            • Avg Response Time: ${{ steps.metrics.outputs.pr_avg_first_response }}
            • Avg Time to Merge: ${{ steps.metrics.outputs.pr_avg_merge_time }}

            **📈 Visual Analytics**
            https://repobeats.axiom.co/api/embed/b439f28f0ab5bd7a2da19505355693cd2c55bfd4.svg
          color: 0x58AFFF
          username: Task Master Metrics Bot
          avatar_url: https://raw.githubusercontent.com/eyaltoledano/claude-task-master/main/images/logo.png
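The "Get dates for last 14 days" step in the metrics workflow above computes the search window with GNU `date` (available on `ubuntu-latest`; the `-d` flag is GNU-specific and would need `gdate` on macOS). The same computation in isolation, printing instead of writing to `$GITHUB_ENV`:

```shell
# Sketch of the workflow's 14-day window computation (GNU date assumed).
set -Eeuo pipefail
first_day=$(date -d "14 days ago" +%Y-%m-%d)
last_day=$(date +%Y-%m-%d)
date_range="Past 14 days ($first_day to $last_day)"
echo "$date_range"
```

The `first_day..last_day` pair is what the three `github/issue-metrics` steps splice into their `created:`/`merged:` search qualifiers.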
.gitignore (vendored, 43 lines changed)
@@ -9,6 +9,9 @@ jspm_packages/
 .env.test.local
 .env.production.local
 
+# Cursor configuration -- might have ENV variables. Included by default
+# .cursor/mcp.json
+
 # Logs
 logs
 *.log
@@ -18,9 +21,24 @@ yarn-error.log*
 lerna-debug.log*
 
 # Coverage directory used by tools like istanbul
-coverage
+coverage/
 *.lcov
+
+# Jest cache
+.jest/
+
+# Test temporary files and directories
+tests/temp/
+tests/e2e/_runs/
+tests/e2e/log/
+tests/**/*.log
+tests/**/coverage/
+
+# Test database files (if any)
+tests/**/*.db
+tests/**/*.sqlite
+tests/**/*.sqlite3
 
 # Optional npm cache directory
 .npm
 
@@ -56,3 +74,26 @@ dist
 *.debug
 init-debug.log
 dev-debug.log
+
+# NPMRC
+.npmrc
+
+# Added by Task Master AI
+# Editor directories and files
+.idea
+.vscode
+*.suo
+*.ntvs*
+*.njsproj
+*.sln
+*.sw?
+
+# VS Code extension test files
+.vscode-test/
+apps/extension/.vscode-test/
+
+# apps/extension
+apps/extension/vsix-build/
+
+# turbo
+.turbo
.kiro/hooks/tm-code-change-task-tracker.kiro.hook (new file, 23 lines)
@@ -0,0 +1,23 @@
{
  "enabled": true,
  "name": "[TM] Code Change Task Tracker",
  "description": "Track implementation progress by monitoring code changes",
  "version": "1",
  "when": {
    "type": "fileEdited",
    "patterns": [
      "**/*.{js,ts,jsx,tsx,py,go,rs,java,cpp,c,h,hpp,cs,rb,php,swift,kt,scala,clj}",
      "!**/node_modules/**",
      "!**/vendor/**",
      "!**/.git/**",
      "!**/build/**",
      "!**/dist/**",
      "!**/target/**",
      "!**/__pycache__/**"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "I just saved a source code file. Please:\n\n1. Check what task is currently 'in-progress' using 'tm list --status=in-progress'\n2. Look at the file I saved and summarize what was changed (considering the programming language and context)\n3. Update the task's notes with: 'tm update-subtask --id=<task_id> --prompt=\"Implemented: <summary_of_changes> in <file_path>\"'\n4. If the changes seem to complete the task based on its description, ask if I want to mark it as done"
  }
}
.kiro/hooks/tm-complexity-analyzer.kiro.hook (new file, 16 lines)
@@ -0,0 +1,16 @@
{
  "enabled": false,
  "name": "[TM] Complexity Analyzer",
  "description": "Analyze task complexity when new tasks are added",
  "version": "1",
  "when": {
    "type": "fileEdited",
    "patterns": [
      ".taskmaster/tasks/tasks.json"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "New tasks were added to tasks.json. For each new task:\n\n1. Run 'tm analyze-complexity --id=<task_id>'\n2. If complexity score is > 7, automatically expand it: 'tm expand --id=<task_id> --num=5'\n3. Show the complexity analysis results\n4. Suggest task dependencies based on the expanded subtasks"
  }
}
.kiro/hooks/tm-daily-standup-assistant.kiro.hook (new file, 13 lines)
@@ -0,0 +1,13 @@
{
  "enabled": true,
  "name": "[TM] Daily Standup Assistant",
  "description": "Morning workflow summary and task selection",
  "version": "1",
  "when": {
    "type": "userTriggered"
  },
  "then": {
    "type": "askAgent",
    "prompt": "Good morning! Please provide my daily standup summary:\n\n1. Run 'tm list --status=done' and show tasks completed in the last 24 hours\n2. Run 'tm list --status=in-progress' to show current work\n3. Run 'tm next' to suggest the highest priority task to start\n4. Show the dependency graph for upcoming work\n5. Ask which task I'd like to focus on today"
  }
}
.kiro/hooks/tm-git-commit-task-linker.kiro.hook (new file, 13 lines)
@@ -0,0 +1,13 @@
{
  "enabled": true,
  "name": "[TM] Git Commit Task Linker",
  "description": "Link commits to tasks for traceability",
  "version": "1",
  "when": {
    "type": "manual"
  },
  "then": {
    "type": "askAgent",
    "prompt": "I'm about to commit code. Please:\n\n1. Run 'git diff --staged' to see what's being committed\n2. Analyze the changes and suggest which tasks they relate to\n3. Generate a commit message in format: 'feat(task-<id>): <description>'\n4. Update the relevant tasks with a note about this commit\n5. Show the proposed commit message for approval"
  }
}
.kiro/hooks/tm-pr-readiness-checker.kiro.hook (new file, 13 lines)
@@ -0,0 +1,13 @@
{
  "enabled": true,
  "name": "[TM] PR Readiness Checker",
  "description": "Validate tasks before creating a pull request",
  "version": "1",
  "when": {
    "type": "manual"
  },
  "then": {
    "type": "askAgent",
    "prompt": "I'm about to create a PR. Please:\n\n1. List all tasks marked as 'done' in this branch\n2. For each done task, verify:\n - All subtasks are also done\n - Test files exist for new functionality\n - No TODO comments remain related to the task\n3. Generate a PR description listing completed tasks\n4. Suggest a PR title based on the main tasks completed"
  }
}
.kiro/hooks/tm-task-dependency-auto-progression.kiro.hook (new file, 17 lines)
@@ -0,0 +1,17 @@
{
  "enabled": true,
  "name": "[TM] Task Dependency Auto-Progression",
  "description": "Automatically progress tasks when dependencies are completed",
  "version": "1",
  "when": {
    "type": "fileEdited",
    "patterns": [
      ".taskmaster/tasks/tasks.json",
      ".taskmaster/tasks/*.json"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "Check the tasks.json file for any tasks that just changed status to 'done'. For each completed task:\n\n1. Find all tasks that depend on it\n2. Check if those dependent tasks now have all their dependencies satisfied\n3. If a task has all dependencies met and is still 'pending', use the command 'tm set-status --id=<task_id> --status=in-progress' to start it\n4. Show me which tasks were auto-started and why"
  }
}
23
.kiro/hooks/tm-test-success-task-completer.kiro.hook
Normal file
@@ -0,0 +1,23 @@
{
  "enabled": true,
  "name": "[TM] Test Success Task Completer",
  "description": "Mark tasks as done when their tests pass",
  "version": "1",
  "when": {
    "type": "fileEdited",
    "patterns": [
      "**/*test*.{js,ts,jsx,tsx,py,go,java,rb,php,rs,cpp,cs}",
      "**/*spec*.{js,ts,jsx,tsx,rb}",
      "**/test_*.py",
      "**/*_test.go",
      "**/*Test.java",
      "**/*Tests.cs",
      "!**/node_modules/**",
      "!**/vendor/**"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "A test file was just saved. Please:\n\n1. Identify the test framework/language and run the appropriate test command for this file (npm test, pytest, go test, cargo test, dotnet test, mvn test, etc.)\n2. If all tests pass, check which tasks mention this functionality\n3. For any matching tasks that are 'in-progress', ask if the passing tests mean the task is complete\n4. If confirmed, mark the task as done with 'tm set-status --id=<task_id> --status=done'"
  }
}
19
.kiro/settings/mcp.json
Normal file
@@ -0,0 +1,19 @@
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
        "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
        "OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
        "GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
        "XAI_API_KEY": "YOUR_XAI_KEY_HERE",
        "OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
        "MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
        "AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE",
        "OLLAMA_API_KEY": "YOUR_OLLAMA_API_KEY_HERE"
      }
    }
  }
}
422
.kiro/steering/dev_workflow.md
Normal file
@@ -0,0 +1,422 @@
---
inclusion: always
---

# Taskmaster Development Workflow

This guide outlines the standard process for using Taskmaster to manage software development projects. It is written as a set of instructions for you, the AI agent.

- **Your Default Stance**: For most projects, the user can work directly within the `master` task context. Your initial actions should operate on this default context unless a clear pattern for multi-context work emerges.
- **Your Goal**: Your role is to elevate the user's workflow by intelligently introducing advanced features like **Tagged Task Lists** when you detect the appropriate context. Do not force tags on the user; suggest them as a helpful solution to a specific need.

## The Basic Loop
The fundamental development cycle you will facilitate is:
1. **`list`**: Show the user what needs to be done.
2. **`next`**: Help the user decide what to work on.
3. **`show <id>`**: Provide details for a specific task.
4. **`expand <id>`**: Break down a complex task into smaller, manageable subtasks.
5. **Implement**: The user writes the code and tests.
6. **`update-subtask`**: Log progress and findings on behalf of the user.
7. **`set-status`**: Mark tasks and subtasks as `done` as work is completed.
8. **Repeat**.

All your standard command executions should operate on the user's current task context, which defaults to `master`.
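
As an illustrative sketch, one pass through this loop might look like the following command sequence (the task ID `3` and subtask ID `3.1` are hypothetical):

```
task-master list                                      # 1. see what needs to be done
task-master next                                      # 2. pick the next eligible task
task-master show 3                                    # 3. review its details
task-master expand --id=3                             # 4. break it into subtasks
# ... the user implements code and tests ...
task-master update-subtask --id=3.1 --prompt="..."    # 6. log progress and findings
task-master set-status --id=3.1 --status=done         # 7. mark completed work
```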

---

## Standard Development Workflow Process

### Simple Workflow (Default Starting Point)

For new projects or when users are getting started, operate within the `master` tag context:

- Start new projects by running the `initialize_project` tool / `task-master init` or `parse_prd` / `task-master parse-prd --input='<prd-file.txt>'` (see @`taskmaster.md`) to generate an initial tasks.json with tagged structure
- Configure rule sets during initialization with the `--rules` flag (e.g., `task-master init --rules kiro,windsurf`) or manage them later with the `task-master rules add/remove` commands
- Begin coding sessions with `get_tasks` / `task-master list` (see @`taskmaster.md`) to see current tasks, status, and IDs
- Determine the next task to work on using `next_task` / `task-master next` (see @`taskmaster.md`)
- Analyze task complexity with `analyze_project_complexity` / `task-master analyze-complexity --research` (see @`taskmaster.md`) before breaking down tasks
- Review the complexity report using `complexity_report` / `task-master complexity-report` (see @`taskmaster.md`)
- Select tasks based on dependencies (all marked 'done'), priority level, and ID order
- View specific task details using `get_task` / `task-master show <id>` (see @`taskmaster.md`) to understand implementation requirements
- Break down complex tasks using `expand_task` / `task-master expand --id=<id> --force --research` (see @`taskmaster.md`) with appropriate flags like `--force` (to replace existing subtasks) and `--research`
- Implement code following task details, dependencies, and project standards
- Mark completed tasks with `set_task_status` / `task-master set-status --id=<id> --status=done` (see @`taskmaster.md`)
- Update dependent tasks when implementation differs from the original plan using `update` / `task-master update --from=<id> --prompt="..."` or `update_task` / `task-master update-task --id=<id> --prompt="..."` (see @`taskmaster.md`)

---

## Leveling Up: Agent-Led Multi-Context Workflows

While the basic workflow is powerful, your primary opportunity to add value is by identifying when to introduce **Tagged Task Lists**. These patterns are your tools for creating a more organized and efficient development environment for the user, especially if you detect agentic or parallel development happening within the same session.

**Critical Principle**: Most users should never see a difference in their experience. Only introduce advanced workflows when you detect clear indicators that the project has evolved beyond simple task management.

### When to Introduce Tags: Your Decision Patterns

Here are the patterns to look for. When you detect one, propose the corresponding workflow to the user.

#### Pattern 1: Simple Git Feature Branching
This is the most common and direct use case for tags.

- **Trigger**: The user creates a new git branch (e.g., `git checkout -b feature/user-auth`).
- **Your Action**: Propose creating a new tag that mirrors the branch name to isolate the feature's tasks from `master`.
- **Your Suggested Prompt**: *"I see you've created a new branch named 'feature/user-auth'. To keep all related tasks neatly organized and separate from your main list, I can create a corresponding task tag for you. This helps prevent merge conflicts in your `tasks.json` file later. Shall I create the 'feature-user-auth' tag?"*
- **Tool to Use**: `task-master add-tag --from-branch`
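
For instance, the detection-and-response flow above reduces to two commands (the branch name is the hypothetical one from the prompt):

```
git checkout -b feature/user-auth   # user creates the branch
task-master add-tag --from-branch   # you create the matching 'feature-user-auth' tag
```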

#### Pattern 2: Team Collaboration
- **Trigger**: The user mentions working with teammates (e.g., "My teammate Alice is handling the database schema," or "I need to review Bob's work on the API.").
- **Your Action**: Suggest creating a separate tag for the user's work to prevent conflicts with the shared master context.
- **Your Suggested Prompt**: *"Since you're working with Alice, I can create a separate task context for your work to avoid conflicts. This way, Alice can continue working with the master list while you have your own isolated context. When you're ready to merge your work, we can coordinate the tasks back to master. Shall I create a tag for your current work?"*
- **Tool to Use**: `task-master add-tag my-work --copy-from-current --description="My tasks while collaborating with Alice"`

#### Pattern 3: Experiments or Risky Refactors
- **Trigger**: The user wants to try something that might not be kept (e.g., "I want to experiment with switching our state management library," or "Let's refactor the old API module, but I want to keep the current tasks as a reference.").
- **Your Action**: Propose creating a sandboxed tag for the experimental work.
- **Your Suggested Prompt**: *"This sounds like a great experiment. To keep these new tasks separate from our main plan, I can create a temporary 'experiment-zustand' tag for this work. If we decide not to proceed, we can simply delete the tag without affecting the main task list. Sound good?"*
- **Tool to Use**: `task-master add-tag experiment-zustand --description="Exploring Zustand migration"`

#### Pattern 4: Large Feature Initiatives (PRD-Driven)
This is a more structured approach for significant new features or epics.

- **Trigger**: The user describes a large, multi-step feature that would benefit from a formal plan.
- **Your Action**: Propose a comprehensive, PRD-driven workflow.
- **Your Suggested Prompt**: *"This sounds like a significant new feature. To manage this effectively, I suggest we create a dedicated task context for it. Here's the plan: I'll create a new tag called 'feature-xyz', then we can draft a Product Requirements Document (PRD) together to scope the work. Once the PRD is ready, I'll automatically generate all the necessary tasks within that new tag. How does that sound?"*
- **Your Implementation Flow**:
  1. **Create an empty tag**: `task-master add-tag feature-xyz --description "Tasks for the new XYZ feature"`. You can also start by creating a git branch if applicable, and then create the tag from that branch.
  2. **Collaborate & Create PRD**: Work with the user to create a detailed PRD file (e.g., `.taskmaster/docs/feature-xyz-prd.txt`).
  3. **Parse PRD into the new tag**: `task-master parse-prd .taskmaster/docs/feature-xyz-prd.txt --tag feature-xyz`
  4. **Prepare the new task list**: Follow up by suggesting `analyze-complexity` and `expand-all` for the newly created tasks within the `feature-xyz` tag.

#### Pattern 5: Version-Based Development
Tailor your approach based on the project maturity indicated by tag names.

- **Prototype/MVP Tags** (`prototype`, `mvp`, `poc`, `v0.x`):
  - **Your Approach**: Focus on speed and functionality over perfection
  - **Task Generation**: Create tasks that emphasize "get it working" over "get it perfect"
  - **Complexity Level**: Lower complexity, fewer subtasks, more direct implementation paths
  - **Research Prompts**: Include context like "This is a prototype - prioritize speed and basic functionality over optimization"
  - **Example Prompt Addition**: *"Since this is for the MVP, I'll focus on tasks that get core functionality working quickly rather than over-engineering."*

- **Production/Mature Tags** (`v1.0+`, `production`, `stable`):
  - **Your Approach**: Emphasize robustness, testing, and maintainability
  - **Task Generation**: Include comprehensive error handling, testing, documentation, and optimization
  - **Complexity Level**: Higher complexity, more detailed subtasks, thorough implementation paths
  - **Research Prompts**: Include context like "This is for production - prioritize reliability, performance, and maintainability"
  - **Example Prompt Addition**: *"Since this is for production, I'll ensure tasks include proper error handling, testing, and documentation."*

### Advanced Workflow (Tag-Based & PRD-Driven)

**When to Transition**: Recognize when the project has evolved beyond simple task management (or when Taskmaster has been initialized on a project with existing code). Look for these indicators:
- User mentions teammates or collaboration needs
- Project has grown to 15+ tasks with mixed priorities
- User creates feature branches or mentions major initiatives
- User initializes Taskmaster on an existing, complex codebase
- User describes large features that would benefit from dedicated planning

**Your Role in Transition**: Guide the user to a more sophisticated workflow that leverages tags for organization and PRDs for comprehensive planning.

#### Master List Strategy (High-Value Focus)
Once you transition to tag-based workflows, the `master` tag should ideally contain only:
- **High-level deliverables** that provide significant business value
- **Major milestones** and epic-level features
- **Critical infrastructure** work that affects the entire project
- **Release-blocking** items

**What NOT to put in master**:
- Detailed implementation subtasks (these go in feature-specific tags' parent tasks)
- Refactoring work (create dedicated tags like `refactor-auth`)
- Experimental features (use `experiment-*` tags)
- Team member-specific tasks (use person-specific tags)

#### PRD-Driven Feature Development

**For New Major Features**:
1. **Identify the Initiative**: When the user describes a significant feature
2. **Create Dedicated Tag**: `add_tag feature-[name] --description="[Feature description]"`
3. **Collaborative PRD Creation**: Work with the user to create a comprehensive PRD in `.taskmaster/docs/feature-[name]-prd.txt`
4. **Parse & Prepare**:
   - `parse_prd .taskmaster/docs/feature-[name]-prd.txt --tag=feature-[name]`
   - `analyze_project_complexity --tag=feature-[name] --research`
   - `expand_all --tag=feature-[name] --research`
5. **Add Master Reference**: Create a high-level task in `master` that references the feature tag

**For Existing Codebase Analysis**:
When users initialize Taskmaster on existing projects:
1. **Codebase Discovery**: Use your native tools to build deep context about the codebase. You may use the `research` tool with `--tree` and `--files` to collect up-to-date information, using the existing architecture as context.
2. **Collaborative Assessment**: Work with the user to identify improvement areas, technical debt, or new features
3. **Strategic PRD Creation**: Co-author PRDs that include:
   - Current state analysis (based on your codebase research)
   - Proposed improvements or new features
   - Implementation strategy considering existing code
4. **Tag-Based Organization**: Parse PRDs into appropriate tags (`refactor-api`, `feature-dashboard`, `tech-debt`, etc.)
5. **Master List Curation**: Keep only the most valuable initiatives in master

The `parse-prd` command's `--append` flag lets the user parse multiple PRDs within a tag or across tags. PRDs should be focused, and the number of tasks generated from each should be chosen strategically relative to the PRD's complexity and level of detail.

### Workflow Transition Examples

**Example 1: Simple → Team-Based**
```
User: "Alice is going to help with the API work"
Your Response: "Great! To avoid conflicts, I'll create a separate task context for your work. Alice can continue with the master list while you work in your own context. When you're ready to merge, we can coordinate the tasks back together."
Action: add_tag my-api-work --copy-from-current --description="My API tasks while collaborating with Alice"
```

**Example 2: Simple → PRD-Driven**
```
User: "I want to add a complete user dashboard with analytics, user management, and reporting"
Your Response: "This sounds like a major feature that would benefit from detailed planning. Let me create a dedicated context for this work and we can draft a PRD together to ensure we capture all requirements."
Actions:
1. add_tag feature-dashboard --description="User dashboard with analytics and management"
2. Collaborate on PRD creation
3. parse_prd dashboard-prd.txt --tag=feature-dashboard
4. Add high-level "User Dashboard" task to master
```

**Example 3: Existing Project → Strategic Planning**
```
User: "I just initialized Taskmaster on my existing React app. It's getting messy and I want to improve it."
Your Response: "Let me research your codebase to understand the current architecture, then we can create a strategic plan for improvements."
Actions:
1. research "Current React app architecture and improvement opportunities" --tree --files=src/
2. Collaborate on improvement PRD based on findings
3. Create tags for different improvement areas (refactor-components, improve-state-management, etc.)
4. Keep only major improvement initiatives in master
```

---

## Primary Interaction: MCP Server vs. CLI

Taskmaster offers two primary ways to interact:

1. **MCP Server (Recommended for Integrated Tools)**:
   - For AI agents and integrated development environments (like Kiro), interacting via the **MCP server is the preferred method**.
   - The MCP server exposes Taskmaster functionality through a set of tools (e.g., `get_tasks`, `add_subtask`).
   - This method offers better performance, structured data exchange, and richer error handling compared to CLI parsing.
   - Refer to @`mcp.md` for details on the MCP architecture and available tools.
   - A comprehensive list and description of MCP tools and their corresponding CLI commands can be found in @`taskmaster.md`.
   - **Restart the MCP server** if core logic in `scripts/modules` or MCP tool/direct function definitions change.
   - **Note**: MCP tools fully support tagged task lists with complete tag management capabilities.

2. **`task-master` CLI (For Users & Fallback)**:
   - The global `task-master` command provides a user-friendly interface for direct terminal interaction.
   - It can also serve as a fallback if the MCP server is inaccessible or a specific function isn't exposed via MCP.
   - Install globally with `npm install -g task-master-ai` or use locally via `npx task-master-ai ...`.
   - The CLI commands often mirror the MCP tools (e.g., `task-master list` corresponds to `get_tasks`).
   - Refer to @`taskmaster.md` for a detailed command reference.
   - **Tagged Task Lists**: The CLI fully supports the new tagged system with seamless migration.

## How the Tag System Works (For Your Reference)

- **Data Structure**: Tasks are organized into separate contexts (tags) like "master", "feature-branch", or "v2.0".
- **Silent Migration**: Existing projects automatically migrate to use a "master" tag with zero disruption.
- **Context Isolation**: Tasks in different tags are completely separate. Changes in one tag do not affect any other tag.
- **Manual Control**: The user is always in control. There is no automatic switching. You facilitate switching by using `use-tag <name>`.
- **Full CLI & MCP Support**: All tag management commands are available through both the CLI and MCP tools for you to use. Refer to @`taskmaster.md` for a full command list.

---

## Task Complexity Analysis

- Run `analyze_project_complexity` / `task-master analyze-complexity --research` (see @`taskmaster.md`) for comprehensive analysis
- Review the complexity report via `complexity_report` / `task-master complexity-report` (see @`taskmaster.md`) for a formatted, readable version.
- Focus on tasks with the highest complexity scores (8-10) for detailed breakdown
- Use analysis results to determine appropriate subtask allocation
- Note that reports are automatically used by the `expand_task` tool/command

## Task Breakdown Process

- Use `expand_task` / `task-master expand --id=<id>`. It automatically uses the complexity report if found; otherwise it generates a default number of subtasks.
- Use `--num=<number>` to specify an explicit number of subtasks, overriding defaults or complexity report recommendations.
- Add the `--research` flag to leverage Perplexity AI for research-backed expansion.
- Add the `--force` flag to clear existing subtasks before generating new ones (the default is to append).
- Use `--prompt="<context>"` to provide additional context when needed.
- Review and adjust generated subtasks as necessary.
- Use the `expand_all` tool or `task-master expand --all` to expand multiple pending tasks at once, respecting flags like `--force` and `--research`.
- If subtasks need complete replacement (regardless of the `--force` flag on `expand`), clear them first with `clear_subtasks` / `task-master clear-subtasks --id=<id>`.

## Implementation Drift Handling

- When implementation differs significantly from the planned approach
- When future tasks need modification due to current implementation choices
- When new dependencies or requirements emerge
- Use `update` / `task-master update --from=<futureTaskId> --prompt='<explanation>\nUpdate context...' --research` to update multiple future tasks.
- Use `update_task` / `task-master update-task --id=<taskId> --prompt='<explanation>\nUpdate context...' --research` to update a single specific task.

## Task Status Management

- Use 'pending' for tasks ready to be worked on
- Use 'done' for completed and verified tasks
- Use 'deferred' for postponed tasks
- Add custom status values as needed for project-specific workflows

## Task Structure Fields

- **id**: Unique identifier for the task (Example: `1`, `1.1`)
- **title**: Brief, descriptive title (Example: `"Initialize Repo"`)
- **description**: Concise summary of what the task involves (Example: `"Create a new repository, set up initial structure."`)
- **status**: Current state of the task (Example: `"pending"`, `"done"`, `"deferred"`)
- **dependencies**: IDs of prerequisite tasks (Example: `[1, 2.1]`)
  - Dependencies are displayed with status indicators (✅ for completed, ⏱️ for pending)
  - This helps quickly identify which prerequisite tasks are blocking work
- **priority**: Importance level (Example: `"high"`, `"medium"`, `"low"`)
- **details**: In-depth implementation instructions (Example: `"Use GitHub client ID/secret, handle callback, set session token."`)
- **testStrategy**: Verification approach (Example: `"Deploy and call endpoint to confirm 'Hello World' response."`)
- **subtasks**: List of smaller, more specific tasks (Example: `[{"id": 1, "title": "Configure OAuth", ...}]`)
- Refer to task structure details (previously linked to `tasks.md`).
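
Assembled from the field examples above, a single task entry might look like the following sketch (illustrative only, not a canonical schema):

```
{
  "id": 1,
  "title": "Initialize Repo",
  "description": "Create a new repository, set up initial structure.",
  "status": "pending",
  "dependencies": [],
  "priority": "high",
  "details": "Use GitHub client ID/secret, handle callback, set session token.",
  "testStrategy": "Deploy and call endpoint to confirm 'Hello World' response.",
  "subtasks": [{"id": 1, "title": "Configure OAuth", "status": "pending"}]
}
```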

## Configuration Management (Updated)

Taskmaster configuration is managed through three main mechanisms:

1. **`.taskmaster/config.json` File (Primary):**
   * Located in the project root directory.
   * Stores most configuration settings: AI model selections (main, research, fallback), parameters (max tokens, temperature), logging level, default subtasks/priority, project name, etc.
   * **Tagged System Settings**: Includes `global.defaultTag` (defaults to "master") and a `tags` section for tag management configuration.
   * **Managed via the `task-master models --setup` command.** Do not edit manually unless you know what you are doing.
   * **View/Set specific models via the `task-master models` command or the `models` MCP tool.**
   * Created automatically when you run `task-master models --setup` for the first time or during tagged system migration.
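
   As a rough sketch of the shape this file can take (everything beyond `global.defaultTag` and the main/research/fallback model roles described above is an assumption; prefer `task-master models --setup` over hand-editing):

   ```
   {
     "models": {
       "main": { "provider": "...", "modelId": "..." },
       "research": { "provider": "...", "modelId": "..." },
       "fallback": { "provider": "...", "modelId": "..." }
     },
     "global": {
       "defaultTag": "master",
       "logLevel": "info",
       "projectName": "..."
     }
   }
   ```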

2. **Environment Variables (`.env` / `mcp.json`):**
   * Used **only** for sensitive API keys and specific endpoint URLs.
   * Place API keys (one per provider) in a `.env` file in the project root for CLI usage.
   * For MCP/Kiro integration, configure these keys in the `env` section of `.kiro/mcp.json`.
   * Available keys/variables: See `assets/env.example` or the Configuration section in the command reference (previously linked to `taskmaster.md`).

3. **`.taskmaster/state.json` File (Tagged System State):**
   * Tracks current tag context and migration status.
   * Automatically created during tagged system migration.
   * Contains: `currentTag`, `lastSwitched`, `migrationNoticeShown`.
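
   Given those three fields, a minimal `state.json` might look like this sketch (the timestamp format is an assumption):

   ```
   {
     "currentTag": "master",
     "lastSwitched": "2024-01-01T00:00:00.000Z",
     "migrationNoticeShown": true
   }
   ```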

**Important:** Non-API-key settings (like model selections, `MAX_TOKENS`, `TASKMASTER_LOG_LEVEL`) are **no longer configured via environment variables**. Use the `task-master models` command (or `--setup` for interactive configuration) or the `models` MCP tool.
**If AI commands FAIL in MCP**, verify that the API key for the selected provider is present in the `env` section of `.kiro/mcp.json`.
**If AI commands FAIL in CLI**, verify that the API key for the selected provider is present in the `.env` file in the root of the project.

## Rules Management

Taskmaster supports multiple AI coding assistant rule sets that can be configured during project initialization or managed afterward:

- **Available Profiles**: Claude Code, Cline, Codex, Kiro, Roo Code, Trae, Windsurf (claude, cline, codex, kiro, roo, trae, windsurf)
- **During Initialization**: Use `task-master init --rules kiro,windsurf` to specify which rule sets to include
- **After Initialization**: Use `task-master rules add <profiles>` or `task-master rules remove <profiles>` to manage rule sets
- **Interactive Setup**: Use `task-master rules setup` to launch an interactive prompt for selecting rule profiles
- **Default Behavior**: If no `--rules` flag is specified during initialization, all available rule profiles are included
- **Rule Structure**: Each profile creates its own directory (e.g., `.kiro/steering`, `.roo/rules`) with appropriate configuration files

## Determining the Next Task

- Run `next_task` / `task-master next` to show the next task to work on.
- The command identifies tasks with all dependencies satisfied
- Tasks are prioritized by priority level, dependency count, and ID
- The command shows comprehensive task information including:
  - Basic task details and description
  - Implementation details
  - Subtasks (if they exist)
  - Contextual suggested actions
- Recommended before starting any new development work
- Respects your project's dependency structure
- Ensures tasks are completed in the appropriate sequence
- Provides ready-to-use commands for common task actions

## Viewing Specific Task Details

- Run `get_task` / `task-master show <id>` to view a specific task.
- Use dot notation for subtasks: `task-master show 1.2` (shows subtask 2 of task 1)
- Displays comprehensive information similar to the next command, but for a specific task
- For parent tasks, shows all subtasks and their current status
- For subtasks, shows parent task information and relationship
- Provides contextual suggested actions appropriate for the specific task
- Useful for examining task details before implementation or checking status

## Managing Task Dependencies

- Use `add_dependency` / `task-master add-dependency --id=<id> --depends-on=<id>` to add a dependency.
- Use `remove_dependency` / `task-master remove-dependency --id=<id> --depends-on=<id>` to remove a dependency.
- The system prevents circular dependencies and duplicate dependency entries
- Dependencies are checked for existence before being added or removed
- Task files are automatically regenerated after dependency changes
- Dependencies are visualized with status indicators in task listings and files

## Task Reorganization

- Use `move_task` / `task-master move --from=<id> --to=<id>` to move tasks or subtasks within the hierarchy
- This command supports several use cases:
  - Moving a standalone task to become a subtask (e.g., `--from=5 --to=7`)
  - Moving a subtask to become a standalone task (e.g., `--from=5.2 --to=7`)
  - Moving a subtask to a different parent (e.g., `--from=5.2 --to=7.3`)
  - Reordering subtasks within the same parent (e.g., `--from=5.2 --to=5.4`)
  - Moving a task to a new, non-existent ID position (e.g., `--from=5 --to=25`)
  - Moving multiple tasks at once using comma-separated IDs (e.g., `--from=10,11,12 --to=16,17,18`)
- The system includes validation to prevent data loss:
  - Allows moving to non-existent IDs by creating placeholder tasks
  - Prevents moving to existing task IDs that have content (to avoid overwriting)
  - Validates that source tasks exist before attempting to move them
- The system maintains proper parent-child relationships and dependency integrity
- Task files are automatically regenerated after the move operation
- This provides greater flexibility in organizing and refining your task structure as project understanding evolves
- This is especially useful for resolving merge conflicts that arise when teams create tasks on separate branches: move your tasks to new IDs and keep theirs.
|
||||||
|
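The validation rules above boil down to two lookups before any move happens. A rough sketch, assuming a simplified flat ID-to-task map rather than the real `tasks.json` schema:

```python
def validate_move(tasks, from_id, to_id):
    """Check a move per the rules above: the source must exist, and an
    existing destination with content is rejected; a brand-new ID is fine.

    `tasks` maps ID strings like "5" or "5.2" to task dicts.
    """
    if from_id not in tasks:
        raise ValueError(f"source task {from_id} does not exist")
    destination = tasks.get(to_id)
    if destination is not None and destination.get("title"):
        raise ValueError(f"destination {to_id} already has content")
    return True  # safe; a placeholder would be created for a new ID

tasks = {"5": {"title": "Auth"}, "7": {"title": ""}}
print(validate_move(tasks, "5", "25"))  # True: new ID -> placeholder
print(validate_move(tasks, "5", "7"))   # True: ID 7 exists but is empty
```

Treating "has content" as "has a non-empty title" is an illustrative simplification; the real tool's notion of an occupied ID may be broader.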
## Iterative Subtask Implementation

Once a task has been broken down into subtasks using `expand_task` or similar methods, follow this iterative process for implementation:

1. **Understand the Goal (Preparation):**
   * Use `get_task` / `task-master show <subtaskId>` (see @`taskmaster.md`) to thoroughly understand the specific goals and requirements of the subtask.

2. **Initial Exploration & Planning (Iteration 1):**
   * This is the first attempt at creating a concrete implementation plan.
   * Explore the codebase to identify the precise files, functions, and even specific lines of code that will need modification.
   * Determine the intended code changes (diffs) and their locations.
   * Gather *all* relevant details from this exploration phase.

3. **Log the Plan:**
   * Run `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<detailed plan>'`.
   * Provide the *complete and detailed* findings from the exploration phase in the prompt. Include file paths, line numbers, proposed diffs, reasoning, and any potential challenges identified. Do not omit details. The goal is to create a rich, timestamped log within the subtask's `details`.

4. **Verify the Plan:**
   * Run `get_task` / `task-master show <subtaskId>` again to confirm that the detailed implementation plan has been successfully appended to the subtask's details.

5. **Begin Implementation:**
   * Set the subtask status using `set_task_status` / `task-master set-status --id=<subtaskId> --status=in-progress`.
   * Start coding based on the logged plan.

6. **Refine and Log Progress (Iteration 2+):**
   * As implementation progresses, you will encounter challenges, discover nuances, or confirm successful approaches.
   * **Before appending new information:** Briefly review the *existing* details logged in the subtask (using `get_task` or recalling from context) to ensure the update adds fresh insights and avoids redundancy.
   * **Regularly** use `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt="<update details>\n- What worked...\n- What didn't work..."` to append new findings (double quotes keep the apostrophe in "didn't" from breaking the shell).
   * **Crucially, log:**
     * What worked ("fundamental truths" discovered).
     * What didn't work and why (to avoid repeating mistakes).
     * Specific code snippets or configurations that were successful.
     * Decisions made, especially if confirmed with user input.
     * Any deviations from the initial plan and the reasoning.
   * The objective is to continuously enrich the subtask's details, creating a log of the implementation journey that helps the AI (and human developers) learn, adapt, and avoid repeating errors.

7. **Review & Update Rules (Post-Implementation):**
   * Once the implementation for the subtask is functionally complete, review all code changes and the relevant chat history.
   * Identify any new or modified code patterns, conventions, or best practices established during the implementation.
   * Create new or update existing rules following internal guidelines (previously linked to `cursor_rules.md` and `self_improve.md`).

8. **Mark Task Complete:**
   * After verifying the implementation and updating any necessary rules, mark the subtask as completed: `set_task_status` / `task-master set-status --id=<subtaskId> --status=done`.

9. **Commit Changes (If using Git):**
   * Stage the relevant code changes and any updated/new rule files (`git add .`).
   * Craft a comprehensive Git commit message summarizing the work done for the subtask, including both code implementation and any rule adjustments.
   * Execute the commit command directly in the terminal (e.g., `git commit -m 'feat(module): Implement feature X for subtask <subtaskId>\n\n- Details about changes...\n- Updated rule Y for pattern Z'`).
   * Consider whether a Changeset is needed according to internal versioning guidelines (previously linked to `changeset.md`). If so, run `npm run changeset`, stage the generated file, and amend the commit or create a new one.

10. **Proceed to Next Subtask:**
    * Identify the next subtask (e.g., using `next_task` / `task-master next`).
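Conceptually, the repeated `update_subtask` calls in steps 3 and 6 just append timestamped entries to the subtask's `details` field, never overwriting earlier ones. A toy illustration of that contract (the field name and entry format here are assumptions, not Taskmaster's exact storage):

```python
from datetime import datetime, timezone

def append_log(subtask, note):
    """Append a timestamped note to a subtask's details, never overwriting."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
    entry = f"\n\n[{stamp}]\n{note.strip()}"
    subtask["details"] = subtask.get("details", "") + entry
    return subtask

sub = {"id": "5.2", "details": "Initial plan: touch auth.ts lines 40-60."}
append_log(sub, "- What worked: token refresh\n- What didn't: cookie path")
print(sub["details"])
```

Because every entry is additive, the log preserves the full implementation journey, which is exactly what step 6 relies on when reviewing existing details before appending more.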
## Code Analysis & Refactoring Techniques

- **Top-Level Function Search**:
  - Useful for understanding module structure or planning refactors.
  - Use grep/ripgrep to find exported functions/constants:
    `rg "export (async function|function|const) \w+"` or similar patterns.
  - Can help compare functions between files during migrations or identify potential naming conflicts.
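Where ripgrep isn't available, the same pattern works with any regex engine. A quick Python equivalent, scanning a source string instead of files (the sample TypeScript names are made up for illustration):

```python
import re

# Mirrors the ripgrep pattern above: exported functions/constants,
# capturing just the identifier.
EXPORT_RE = re.compile(r"export (?:async function|function|const) (\w+)")

source = """
export function parseTasks(raw) {}
export const DEFAULT_TAG = 'master';
export async function loadConfig() {}
"""
print(EXPORT_RE.findall(source))  # ['parseTasks', 'DEFAULT_TAG', 'loadConfig']
```

Running the same expression over two files and diffing the resulting name lists is a cheap way to spot the migration gaps and naming conflicts mentioned above.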
---

*This workflow provides a general guideline. Adapt it based on your specific project needs and team practices.*
51
.kiro/steering/kiro_rules.md
Normal file
@@ -0,0 +1,51 @@
---
inclusion: always
---

- **Required Rule Structure:**

  ```markdown
  ---
  description: Clear, one-line description of what the rule enforces
  globs: path/to/files/*.ext, other/path/**/*
  alwaysApply: boolean
  ---

  - **Main Points in Bold**
    - Sub-points with details
    - Examples and explanations
  ```
- **File References:**
  - Use `[filename](mdc:path/to/file)` to reference files
  - Example: [prisma.md](.kiro/steering/prisma.md) for rule references
  - Example: [schema.prisma](mdc:prisma/schema.prisma) for code references

- **Code Examples:**
  - Use language-specific code blocks

  ```typescript
  // ✅ DO: Show good examples
  const goodExample = true;

  // ❌ DON'T: Show anti-patterns
  const badExample = false;
  ```

- **Rule Content Guidelines:**
  - Start with a high-level overview
  - Include specific, actionable requirements
  - Show examples of correct implementation
  - Reference existing code when possible
  - Keep rules DRY by referencing other rules

- **Rule Maintenance:**
  - Update rules when new patterns emerge
  - Add examples from the actual codebase
  - Remove outdated patterns
  - Cross-reference related rules

- **Best Practices:**
  - Use bullet points for clarity
  - Keep descriptions concise
  - Include both DO and DON'T examples
  - Reference actual code over theoretical examples
  - Use consistent formatting across rules
70
.kiro/steering/self_improve.md
Normal file
@@ -0,0 +1,70 @@
---
inclusion: always
---

- **Rule Improvement Triggers:**
  - New code patterns not covered by existing rules
  - Repeated similar implementations across files
  - Common error patterns that could be prevented
  - New libraries or tools being used consistently
  - Emerging best practices in the codebase

- **Analysis Process:**
  - Compare new code with existing rules
  - Identify patterns that should be standardized
  - Look for references to external documentation
  - Check for consistent error handling patterns
  - Monitor test patterns and coverage

- **Rule Updates:**
  - **Add New Rules When:**
    - A new technology/pattern is used in 3+ files
    - Common bugs could be prevented by a rule
    - Code reviews repeatedly mention the same feedback
    - New security or performance patterns emerge
  - **Modify Existing Rules When:**
    - Better examples exist in the codebase
    - Additional edge cases are discovered
    - Related rules have been updated
    - Implementation details have changed

- **Example Pattern Recognition:**

  ```typescript
  // If you see repeated patterns like:
  const data = await prisma.user.findMany({
    select: { id: true, email: true },
    where: { status: 'ACTIVE' }
  });

  // Consider adding to [prisma.md](.kiro/steering/prisma.md):
  // - Standard select fields
  // - Common where conditions
  // - Performance optimization patterns
  ```

- **Rule Quality Checks:**
  - Rules should be actionable and specific
  - Examples should come from actual code
  - References should be up to date
  - Patterns should be consistently enforced

- **Continuous Improvement:**
  - Monitor code review comments
  - Track common development questions
  - Update rules after major refactors
  - Add links to relevant documentation
  - Cross-reference related rules

- **Rule Deprecation:**
  - Mark outdated patterns as deprecated
  - Remove rules that no longer apply
  - Update references to deprecated rules
  - Document migration paths for old patterns

- **Documentation Updates:**
  - Keep examples synchronized with code
  - Update references to external docs
  - Maintain links between related rules
  - Document breaking changes

Follow [kiro_rules.md](.kiro/steering/kiro_rules.md) for proper rule formatting and structure.
556
.kiro/steering/taskmaster.md
Normal file
@@ -0,0 +1,556 @@
---
inclusion: always
---

# Taskmaster Tool & Command Reference

This document provides a detailed reference for interacting with Taskmaster, covering both the recommended MCP tools (suitable for integrations like Kiro) and the corresponding `task-master` CLI commands, designed for direct user interaction or as a fallback.

**Note:** For interacting with Taskmaster programmatically or via integrated tools, using the **MCP tools is strongly recommended** due to better performance, structured data, and error handling. The CLI commands serve as a user-friendly alternative and fallback.

**Important:** Several MCP tools involve AI processing... The AI-powered tools include `parse_prd`, `analyze_project_complexity`, `update_subtask`, `update_task`, `update`, `expand_all`, `expand_task`, and `add_task`.

**🏷️ Tagged Task Lists System:** Task Master now supports **tagged task lists** for multi-context task management. This allows you to maintain separate, isolated lists of tasks for different features, branches, or experiments. Existing projects are seamlessly migrated to use a default "master" tag. Most commands now support a `--tag <name>` flag to specify which context to operate on. If omitted, commands use the currently active tag.

---
## Initialization & Setup

### 1. Initialize Project (`init`)

* **MCP Tool:** `initialize_project`
* **CLI Command:** `task-master init [options]`
* **Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project.`
* **Key CLI Options:**
  * `--name <name>`: `Set the name for your project in Taskmaster's configuration.`
  * `--description <text>`: `Provide a brief description for your project.`
  * `--version <version>`: `Set the initial version for your project, e.g., '0.1.0'.`
  * `-y, --yes`: `Initialize Taskmaster quickly using default settings without interactive prompts.`
* **Usage:** Run this once at the beginning of a new project.
* **MCP Variant Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project by running the 'task-master init' command.`
* **Key MCP Parameters/Options:**
  * `projectName`: `Set the name for your project.` (CLI: `--name <name>`)
  * `projectDescription`: `Provide a brief description for your project.` (CLI: `--description <text>`)
  * `projectVersion`: `Set the initial version for your project, e.g., '0.1.0'.` (CLI: `--version <version>`)
  * `authorName`: `Author name.` (CLI: `--author <author>`)
  * `skipInstall`: `Skip installing dependencies. Default is false.` (CLI: `--skip-install`)
  * `addAliases`: `Add shell aliases tm and taskmaster. Default is false.` (CLI: `--aliases`)
  * `yes`: `Skip prompts and use defaults/provided arguments. Default is false.` (CLI: `-y, --yes`)
* **Usage:** Run this once at the beginning of a new project, typically via an integrated tool like Kiro. Operates on the current working directory of the MCP server.
* **Important:** Once initialization is complete, you *MUST* parse a PRD in order to generate tasks; there will be no task files until then. The next step after initializing should be to create a PRD using the example PRD in `.taskmaster/templates/example_prd.txt`.
* **Tagging:** Use the `--tag` option to parse the PRD into a specific, non-default tag context. If the tag doesn't exist, it will be created automatically. Example: `task-master parse-prd spec.txt --tag=new-feature`.
### 2. Parse PRD (`parse_prd`)

* **MCP Tool:** `parse_prd`
* **CLI Command:** `task-master parse-prd [file] [options]`
* **Description:** `Parse a Product Requirements Document (PRD) or text file with Taskmaster to automatically generate an initial set of tasks in tasks.json.`
* **Key Parameters/Options:**
  * `input`: `Path to your PRD or requirements text file that Taskmaster should parse for tasks.` (CLI: `[file]` positional or `-i, --input <file>`)
  * `output`: `Specify where Taskmaster should save the generated 'tasks.json' file. Defaults to '.taskmaster/tasks/tasks.json'.` (CLI: `-o, --output <file>`)
  * `numTasks`: `Approximate number of top-level tasks Taskmaster should aim to generate from the document.` (CLI: `-n, --num-tasks <number>`)
  * `force`: `Use this to allow Taskmaster to overwrite an existing 'tasks.json' without asking for confirmation.` (CLI: `-f, --force`)
* **Usage:** Useful for bootstrapping a project from an existing requirements document.
* **Notes:** Task Master will strictly adhere to any specific requirements mentioned in the PRD, such as libraries, database schemas, frameworks, tech stacks, etc., while filling in any gaps where the PRD isn't fully specified. Tasks are designed to provide the most direct implementation path while avoiding over-engineering.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. If the user does not have a PRD, suggest discussing their idea and then use the example PRD in `.taskmaster/templates/example_prd.txt` as a template for creating the PRD based on their idea, for use with `parse-prd`.

---

## AI Model Configuration
### 2. Manage Models (`models`)

* **MCP Tool:** `models`
* **CLI Command:** `task-master models [options]`
* **Description:** `View the current AI model configuration or set specific models for different roles (main, research, fallback). Allows setting custom model IDs for Ollama and OpenRouter.`
* **Key MCP Parameters/Options:**
  * `setMain <model_id>`: `Set the primary model ID for task generation/updates.` (CLI: `--set-main <model_id>`)
  * `setResearch <model_id>`: `Set the model ID for research-backed operations.` (CLI: `--set-research <model_id>`)
  * `setFallback <model_id>`: `Set the model ID to use if the primary fails.` (CLI: `--set-fallback <model_id>`)
  * `ollama <boolean>`: `Indicates the set model ID is a custom Ollama model.` (CLI: `--ollama`)
  * `openrouter <boolean>`: `Indicates the set model ID is a custom OpenRouter model.` (CLI: `--openrouter`)
  * `listAvailableModels <boolean>`: `If true, lists available models not currently assigned to a role.` (CLI: No direct equivalent; the CLI lists available models automatically)
  * `projectRoot <string>`: `Optional. Absolute path to the project root directory.` (CLI: Determined automatically)
* **Key CLI Options:**
  * `--set-main <model_id>`: `Set the primary model.`
  * `--set-research <model_id>`: `Set the research model.`
  * `--set-fallback <model_id>`: `Set the fallback model.`
  * `--ollama`: `Specify that the provided model ID is for Ollama (use with --set-*).`
  * `--openrouter`: `Specify that the provided model ID is for OpenRouter (use with --set-*). Validates against the OpenRouter API.`
  * `--bedrock`: `Specify that the provided model ID is for AWS Bedrock (use with --set-*).`
  * `--setup`: `Run interactive setup to configure models, including custom Ollama/OpenRouter IDs.`
* **Usage (MCP):** Call without set flags to get the current configuration. Use `setMain`, `setResearch`, or `setFallback` with a valid model ID to update the configuration. Use `listAvailableModels: true` to get a list of unassigned models. To set a custom model, provide the model ID and set `ollama: true` or `openrouter: true`.
* **Usage (CLI):** Run without flags to view the current configuration and available models. Use set flags to update specific roles. Use `--setup` for guided configuration, including custom models. To set a custom model via flags, use `--set-<role>=<model_id>` along with either `--ollama` or `--openrouter`.
* **Notes:** Configuration is stored in `.taskmaster/config.json` in the project root. This command/tool modifies that file. Use `listAvailableModels` or `task-master models` to see internally supported models. OpenRouter custom models are validated against their live API; Ollama custom models are not validated live.
* **API note:** API keys for the selected AI providers (based on their model) need to exist in the `mcp.json` file to be accessible in the MCP context, and must be present in the local `.env` file for the CLI to be able to read them.
* **Model costs:** The costs for supported models are expressed in dollars: an input/output value of 3 is $3.00, and a value of 0.8 is $0.80.
* **Warning:** DO NOT MANUALLY EDIT THE `.taskmaster/config.json` FILE. Use the included commands, in either MCP or CLI form, as needed. Always prioritize MCP tools when available and use the CLI as a fallback.
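Since `.taskmaster/config.json` must never be edited by hand, reading it is the only safe direct interaction. A read-only inspection sketch — the schema and values shown here are assumptions for illustration only; treat `task-master models` as the authoritative view:

```python
import json

# Hypothetical layout: {"models": {"main": {...}, "research": {...}, "fallback": {...}}}
config_text = '{"models": {"main": {"provider": "anthropic", "modelId": "some-model-id"}}}'
config = json.loads(config_text)

# Report each of the three roles described above, configured or not.
for role in ("main", "research", "fallback"):
    entry = config.get("models", {}).get(role)
    if entry:
        print(f"{role}: {entry['provider']} / {entry['modelId']}")
    else:
        print(f"{role}: not configured")
```

In a real project you would `json.load` the file itself; the point is simply that inspection is safe while mutation should always go through the `models` command.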
---

## Task Listing & Viewing
### 3. Get Tasks (`get_tasks`)

* **MCP Tool:** `get_tasks`
* **CLI Command:** `task-master list [options]`
* **Description:** `List your Taskmaster tasks, optionally filtering by status and showing subtasks.`
* **Key Parameters/Options:**
  * `status`: `Show only Taskmaster tasks matching this status (or multiple statuses, comma-separated), e.g., 'pending' or 'done,in-progress'.` (CLI: `-s, --status <status>`)
  * `withSubtasks`: `Include subtasks indented under their parent tasks in the list.` (CLI: `--with-subtasks`)
  * `tag`: `Specify which tag context to list tasks from. Defaults to the current active tag.` (CLI: `--tag <name>`)
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Get an overview of the project status, often used at the start of a work session.
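The comma-separated `status` filter above behaves like a simple set-membership test over the task list; roughly:

```python
def filter_by_status(tasks, status_arg):
    """Mimic `--status done,in-progress`: keep tasks whose status is listed."""
    wanted = {s.strip() for s in status_arg.split(",")}
    return [t for t in tasks if t["status"] in wanted]

tasks = [
    {"id": 1, "status": "done"},
    {"id": 2, "status": "pending"},
    {"id": 3, "status": "in-progress"},
]
print([t["id"] for t in filter_by_status(tasks, "done,in-progress")])  # [1, 3]
```

This is only a mental model of the flag's semantics, not the tool's code; stripping whitespace around each status keeps `done, in-progress` and `done,in-progress` equivalent.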
### 4. Get Next Task (`next_task`)

* **MCP Tool:** `next_task`
* **CLI Command:** `task-master next [options]`
* **Description:** `Ask Taskmaster to show the next available task you can work on, based on status and completed dependencies.`
* **Key Parameters/Options:**
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
  * `tag`: `Specify which tag context to use. Defaults to the current active tag.` (CLI: `--tag <name>`)
* **Usage:** Identify what to work on next according to the plan.
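The selection rule described above — a pending task whose dependencies are all complete — can be sketched as follows. This is a simplified model of the stated behavior; the real tool may also weigh factors such as priority when breaking ties:

```python
def next_task(tasks):
    """Pick the first pending task whose dependencies are all done."""
    done = {t["id"] for t in tasks if t["status"] == "done"}
    for task in tasks:
        if task["status"] == "pending" and all(d in done for d in task["dependencies"]):
            return task["id"]
    return None  # nothing is currently unblocked

tasks = [
    {"id": 1, "status": "done", "dependencies": []},
    {"id": 2, "status": "pending", "dependencies": [1]},
    {"id": 3, "status": "pending", "dependencies": [2]},
]
print(next_task(tasks))  # 2
```

Task 3 is skipped because its dependency (task 2) is not yet done, which is exactly why marking statuses promptly keeps `next` useful.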
### 5. Get Task Details (`get_task`)

* **MCP Tool:** `get_task`
* **CLI Command:** `task-master show [id] [options]`
* **Description:** `Display detailed information for one or more specific Taskmaster tasks or subtasks by ID.`
* **Key Parameters/Options:**
  * `id`: `Required. The ID of the Taskmaster task (e.g., '15'), subtask (e.g., '15.2'), or a comma-separated list of IDs ('1,5,10.2') you want to view.` (CLI: `[id]` positional or `-i, --id <id>`)
  * `tag`: `Specify which tag context to get the task(s) from. Defaults to the current active tag.` (CLI: `--tag <name>`)
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Understand the full details for a specific task. When multiple IDs are provided, a summary table is shown.
* **CRITICAL INFORMATION:** If you need to collect information from multiple tasks, use comma-separated IDs (e.g., `1,2,3`) to receive an array of tasks. Do not needlessly fetch tasks one at a time when you need many; that is wasteful.

---

## Task Creation & Modification
### 6. Add Task (`add_task`)

* **MCP Tool:** `add_task`
* **CLI Command:** `task-master add-task [options]`
* **Description:** `Add a new task to Taskmaster by describing it; AI will structure it.`
* **Key Parameters/Options:**
  * `prompt`: `Required. Describe the new task you want Taskmaster to create, e.g., "Implement user authentication using JWT".` (CLI: `-p, --prompt <text>`)
  * `dependencies`: `Specify the IDs of any Taskmaster tasks that must be completed before this new one can start, e.g., '12,14'.` (CLI: `-d, --dependencies <ids>`)
  * `priority`: `Set the priority for the new task: 'high', 'medium', or 'low'. Default is 'medium'.` (CLI: `--priority <priority>`)
  * `research`: `Enable Taskmaster to use the research role for potentially more informed task creation.` (CLI: `-r, --research`)
  * `tag`: `Specify which tag context to add the task to. Defaults to the current active tag.` (CLI: `--tag <name>`)
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Quickly add newly identified tasks during development.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
### 7. Add Subtask (`add_subtask`)

* **MCP Tool:** `add_subtask`
* **CLI Command:** `task-master add-subtask [options]`
* **Description:** `Add a new subtask to a Taskmaster parent task, or convert an existing task into a subtask.`
* **Key Parameters/Options:**
  * `id` / `parent`: `Required. The ID of the Taskmaster task that will be the parent.` (MCP: `id`, CLI: `-p, --parent <id>`)
  * `taskId`: `Use this if you want to convert an existing top-level Taskmaster task into a subtask of the specified parent.` (CLI: `-i, --task-id <id>`)
  * `title`: `Required if not using taskId. The title for the new subtask Taskmaster should create.` (CLI: `-t, --title <title>`)
  * `description`: `A brief description for the new subtask.` (CLI: `-d, --description <text>`)
  * `details`: `Provide implementation notes or details for the new subtask.` (CLI: `--details <text>`)
  * `dependencies`: `Specify IDs of other tasks or subtasks, e.g., '15' or '16.1', that must be done before this new subtask.` (CLI: `--dependencies <ids>`)
  * `status`: `Set the initial status for the new subtask. Default is 'pending'.` (CLI: `-s, --status <status>`)
  * `generate`: `Enable Taskmaster to regenerate markdown task files after adding the subtask.` (CLI: `--generate`)
  * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Break down tasks manually or reorganize existing tasks.
### 8. Update Tasks (`update`)

* **MCP Tool:** `update`
* **CLI Command:** `task-master update [options]`
* **Description:** `Update multiple upcoming tasks in Taskmaster based on new context or changes, starting from a specific task ID.`
* **Key Parameters/Options:**
  * `from`: `Required. The ID of the first task Taskmaster should update. All tasks with this ID or higher that are not 'done' will be considered.` (CLI: `--from <id>`)
  * `prompt`: `Required. Explain the change or new context for Taskmaster to apply to the tasks, e.g., "We are now using React Query instead of Redux Toolkit for data fetching".` (CLI: `-p, --prompt <text>`)
  * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
  * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Handle significant implementation changes or pivots that affect multiple future tasks. Example CLI: `task-master update --from='18' --prompt='Switching to React Query.\nNeed to refactor data fetching...'`
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
### 9. Update Task (`update_task`)

* **MCP Tool:** `update_task`
* **CLI Command:** `task-master update-task [options]`
* **Description:** `Modify a specific Taskmaster task by ID, incorporating new information or changes. By default, this replaces the existing task details.`
* **Key Parameters/Options:**
  * `id`: `Required. The specific ID of the Taskmaster task, e.g., '15', you want to update.` (CLI: `-i, --id <id>`)
  * `prompt`: `Required. Explain the specific changes or provide the new information Taskmaster should incorporate into this task.` (CLI: `-p, --prompt <text>`)
  * `append`: `If true, appends the prompt content to the task's details with a timestamp, rather than replacing them. Behaves like update-subtask.` (CLI: `--append`)
  * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
  * `tag`: `Specify which tag context the task belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Refine a specific task based on new understanding. Use `--append` to log progress without creating subtasks.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
### 10. Update Subtask (`update_subtask`)

* **MCP Tool:** `update_subtask`
* **CLI Command:** `task-master update-subtask [options]`
* **Description:** `Append timestamped notes or details to a specific Taskmaster subtask without overwriting existing content. Intended for iterative implementation logging.`
* **Key Parameters/Options:**
  * `id`: `Required. The ID of the Taskmaster subtask, e.g., '5.2', to update with new information.` (CLI: `-i, --id <id>`)
  * `prompt`: `Required. The information, findings, or progress notes to append to the subtask's details with a timestamp.` (CLI: `-p, --prompt <text>`)
  * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
  * `tag`: `Specify which tag context the subtask belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Log implementation progress, findings, and discoveries during subtask development. Each update is timestamped and appended to preserve the implementation journey.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
### 11. Set Task Status (`set_task_status`)

* **MCP Tool:** `set_task_status`
* **CLI Command:** `task-master set-status [options]`
* **Description:** `Update the status of one or more Taskmaster tasks or subtasks, e.g., 'pending', 'in-progress', 'done'.`
* **Key Parameters/Options:**
  * `id`: `Required. The ID(s) of the Taskmaster task(s) or subtask(s), e.g., '15', '15.2', or '16,17.1', to update.` (CLI: `-i, --id <id>`)
  * `status`: `Required. The new status to set, e.g., 'done', 'pending', 'in-progress', 'review', 'cancelled'.` (CLI: `-s, --status <status>`)
  * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Mark progress as tasks move through the development cycle.
### 12. Remove Task (`remove_task`)

* **MCP Tool:** `remove_task`
* **CLI Command:** `task-master remove-task [options]`
* **Description:** `Permanently remove a task or subtask from the Taskmaster tasks list.`
* **Key Parameters/Options:**
  * `id`: `Required. The ID of the Taskmaster task, e.g., '5', or subtask, e.g., '5.2', to permanently remove.` (CLI: `-i, --id <id>`)
  * `yes`: `Skip the confirmation prompt and immediately delete the task.` (CLI: `-y, --yes`)
  * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Permanently delete tasks or subtasks that are no longer needed in the project.
* **Notes:** Use with caution, as this operation cannot be undone. Consider using a 'blocked', 'cancelled', or 'deferred' status instead if you just want to exclude a task from active planning but keep it for reference. The command automatically cleans up dependency references in other tasks.

---

## Task Structure & Breakdown
### 13. Expand Task (`expand_task`)
|
||||||
|
|
||||||
|
* **MCP Tool:** `expand_task`
|
||||||
|
* **CLI Command:** `task-master expand [options]`
|
||||||
|
* **Description:** `Use Taskmaster's AI to break down a complex task into smaller, manageable subtasks. Appends subtasks by default.`
|
||||||
|
* **Key Parameters/Options:**
|
||||||
|
* `id`: `The ID of the specific Taskmaster task you want to break down into subtasks.` (CLI: `-i, --id <id>`)
|
||||||
|
* `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create. Uses complexity analysis/defaults otherwise.` (CLI: `-n, --num <number>`)
|
||||||
|
* `research`: `Enable Taskmaster to use the research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`)
|
||||||
|
* `prompt`: `Optional: Provide extra context or specific instructions to Taskmaster for generating the subtasks.` (CLI: `-p, --prompt <text>`)
|
||||||
|
* `force`: `Optional: If true, clear existing subtasks before generating new ones. Default is false (append).` (CLI: `--force`)
|
||||||
|
* `tag`: `Specify which tag context the task belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||||
|
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||||
|
* **Usage:** Generate a detailed implementation plan for a complex task before starting coding. Automatically uses complexity report recommendations if available and `num` is not specified.
|
||||||
|
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
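As a sketch, an MCP expansion request combining the parameters above might look like this (only the parameter names come from this reference; the call envelope is assumed):

```json
{
  "name": "expand_task",
  "arguments": {
    "id": "8",
    "num": 5,
    "research": true,
    "prompt": "Focus on error handling and test coverage"
  }
}
```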
### 14. Expand All Tasks (`expand_all`)

* **MCP Tool:** `expand_all`
* **CLI Command:** `task-master expand --all [options]` (Note: the CLI uses the `expand` command with the `--all` flag)
* **Description:** `Tell Taskmaster to automatically expand all eligible pending/in-progress tasks based on complexity analysis or defaults. Appends subtasks by default.`
* **Key Parameters/Options:**
    * `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create per task.` (CLI: `-n, --num <number>`)
    * `research`: `Enable research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`)
    * `prompt`: `Optional: Provide extra context for Taskmaster to apply generally during expansion.` (CLI: `-p, --prompt <text>`)
    * `force`: `Optional: If true, clear existing subtasks before generating new ones for each eligible task. Default is false (append).` (CLI: `--force`)
    * `tag`: `Specify which tag context to expand. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Useful after initial task generation or complexity analysis to break down multiple tasks at once.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.

### 15. Clear Subtasks (`clear_subtasks`)

* **MCP Tool:** `clear_subtasks`
* **CLI Command:** `task-master clear-subtasks [options]`
* **Description:** `Remove all subtasks from one or more specified Taskmaster parent tasks.`
* **Key Parameters/Options:**
    * `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using 'all'.` (CLI: `-i, --id <ids>`)
    * `all`: `Tell Taskmaster to remove subtasks from all parent tasks.` (CLI: `--all`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Used before regenerating subtasks with `expand_task` if the previous breakdown needs replacement.

### 16. Remove Subtask (`remove_subtask`)

* **MCP Tool:** `remove_subtask`
* **CLI Command:** `task-master remove-subtask [options]`
* **Description:** `Remove a subtask from its Taskmaster parent, optionally converting it into a standalone task.`
* **Key Parameters/Options:**
    * `id`: `Required. The ID(s) of the Taskmaster subtask(s) to remove, e.g., '15.2' or '16.1,16.3'.` (CLI: `-i, --id <id>`)
    * `convert`: `If used, Taskmaster will turn the subtask into a regular top-level task instead of deleting it.` (CLI: `-c, --convert`)
    * `generate`: `Enable Taskmaster to regenerate markdown task files after removing the subtask.` (CLI: `--generate`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Delete unnecessary subtasks or promote a subtask to a top-level task.

### 17. Move Task (`move_task`)

* **MCP Tool:** `move_task`
* **CLI Command:** `task-master move [options]`
* **Description:** `Move a task or subtask to a new position within the task hierarchy.`
* **Key Parameters/Options:**
    * `from`: `Required. ID of the task/subtask to move (e.g., "5" or "5.2"). Can be comma-separated for multiple tasks.` (CLI: `--from <id>`)
    * `to`: `Required. ID of the destination (e.g., "7" or "7.3"). Must match the number of source IDs if comma-separated.` (CLI: `--to <id>`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Reorganize tasks by moving them within the hierarchy. Supports various scenarios like:
    * Moving a task to become a subtask
    * Moving a subtask to become a standalone task
    * Moving a subtask to a different parent
    * Reordering subtasks within the same parent
    * Moving a task to a new, non-existent ID (automatically creates placeholders)
    * Moving multiple tasks at once with comma-separated IDs
* **Validation Features:**
    * Allows moving tasks to non-existent destination IDs (creates placeholder tasks)
    * Prevents moving to existing task IDs that already have content (to avoid overwriting)
    * Validates that source tasks exist before attempting to move them
    * Maintains proper parent-child relationships
* **Example CLI:** `task-master move --from=5.2 --to=7.3` to move subtask 5.2 to become subtask 7.3.
* **Example Multi-Move:** `task-master move --from=10,11,12 --to=16,17,18` to move multiple tasks to new positions.
* **Common Use:** Resolving merge conflicts in tasks.json when multiple team members create tasks on different branches.

---

## Dependency Management

### 18. Add Dependency (`add_dependency`)

* **MCP Tool:** `add_dependency`
* **CLI Command:** `task-master add-dependency [options]`
* **Description:** `Define a dependency in Taskmaster, making one task a prerequisite for another.`
* **Key Parameters/Options:**
    * `id`: `Required. The ID of the Taskmaster task that will depend on another.` (CLI: `-i, --id <id>`)
    * `dependsOn`: `Required. The ID of the Taskmaster task that must be completed first, the prerequisite.` (CLI: `-d, --depends-on <id>`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Establish the correct order of execution between tasks.
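A minimal MCP call sketch making task 22 wait on task 21 (parameter names as documented above; the call envelope is an assumption):

```json
{
  "name": "add_dependency",
  "arguments": {
    "id": "22",
    "dependsOn": "21"
  }
}
```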
### 19. Remove Dependency (`remove_dependency`)

* **MCP Tool:** `remove_dependency`
* **CLI Command:** `task-master remove-dependency [options]`
* **Description:** `Remove a dependency relationship between two Taskmaster tasks.`
* **Key Parameters/Options:**
    * `id`: `Required. The ID of the Taskmaster task you want to remove a prerequisite from.` (CLI: `-i, --id <id>`)
    * `dependsOn`: `Required. The ID of the Taskmaster task that should no longer be a prerequisite.` (CLI: `-d, --depends-on <id>`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Update task relationships when the order of execution changes.

### 20. Validate Dependencies (`validate_dependencies`)

* **MCP Tool:** `validate_dependencies`
* **CLI Command:** `task-master validate-dependencies [options]`
* **Description:** `Check your Taskmaster tasks for dependency issues (like circular references or links to non-existent tasks) without making changes.`
* **Key Parameters/Options:**
    * `tag`: `Specify which tag context to validate. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Audit the integrity of your task dependencies.

### 21. Fix Dependencies (`fix_dependencies`)

* **MCP Tool:** `fix_dependencies`
* **CLI Command:** `task-master fix-dependencies [options]`
* **Description:** `Automatically fix dependency issues (like circular references or links to non-existent tasks) in your Taskmaster tasks.`
* **Key Parameters/Options:**
    * `tag`: `Specify which tag context to fix dependencies in. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Clean up dependency errors automatically.

---

## Analysis & Reporting

### 22. Analyze Project Complexity (`analyze_project_complexity`)

* **MCP Tool:** `analyze_project_complexity`
* **CLI Command:** `task-master analyze-complexity [options]`
* **Description:** `Have Taskmaster analyze your tasks to determine their complexity and suggest which ones need to be broken down further.`
* **Key Parameters/Options:**
    * `output`: `Where to save the complexity analysis report. Default is '.taskmaster/reports/task-complexity-report.json' (or '..._tagname.json' if a tag is used).` (CLI: `-o, --output <file>`)
    * `threshold`: `The minimum complexity score (1-10) that should trigger a recommendation to expand a task.` (CLI: `-t, --threshold <number>`)
    * `research`: `Enable research role for more accurate complexity analysis. Requires appropriate API key.` (CLI: `-r, --research`)
    * `tag`: `Specify which tag context to analyze. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Used before breaking down tasks to identify which ones need the most attention.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
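An analysis run with a custom threshold could be sketched as the following MCP call (parameter names as documented above; the envelope is an assumption):

```json
{
  "name": "analyze_project_complexity",
  "arguments": {
    "threshold": 6,
    "research": true,
    "output": ".taskmaster/reports/task-complexity-report.json"
  }
}
```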
### 23. View Complexity Report (`complexity_report`)

* **MCP Tool:** `complexity_report`
* **CLI Command:** `task-master complexity-report [options]`
* **Description:** `Display the task complexity analysis report in a readable format.`
* **Key Parameters/Options:**
    * `tag`: `Specify which tag context to show the report for. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to the complexity report (default: '.taskmaster/reports/task-complexity-report.json').` (CLI: `-f, --file <file>`)
* **Usage:** Review and understand the complexity analysis results after running analyze-complexity.

---

## File Management

### 24. Generate Task Files (`generate`)

* **MCP Tool:** `generate`
* **CLI Command:** `task-master generate [options]`
* **Description:** `Create or update individual Markdown files for each task based on your tasks.json.`
* **Key Parameters/Options:**
    * `output`: `The directory where Taskmaster should save the task files (default: in a 'tasks' directory).` (CLI: `-o, --output <directory>`)
    * `tag`: `Specify which tag context to generate files for. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Run this after making changes to tasks.json to keep individual task files up to date. This command is now manual and no longer runs automatically.

---

## AI-Powered Research

### 25. Research (`research`)

* **MCP Tool:** `research`
* **CLI Command:** `task-master research [options]`
* **Description:** `Perform AI-powered research queries with project context to get fresh, up-to-date information beyond the AI's knowledge cutoff.`
* **Key Parameters/Options:**
    * `query`: `Required. Research query/prompt (e.g., "What are the latest best practices for React Query v5?").` (CLI: `[query]` positional or `-q, --query <text>`)
    * `taskIds`: `Comma-separated list of task/subtask IDs from the current tag context (e.g., "15,16.2,17").` (CLI: `-i, --id <ids>`)
    * `filePaths`: `Comma-separated list of file paths for context (e.g., "src/api.js,docs/readme.md").` (CLI: `-f, --files <paths>`)
    * `customContext`: `Additional custom context text to include in the research.` (CLI: `-c, --context <text>`)
    * `includeProjectTree`: `Include project file tree structure in context (default: false).` (CLI: `--tree`)
    * `detailLevel`: `Detail level for the research response: 'low', 'medium', 'high' (default: medium).` (CLI: `--detail <level>`)
    * `saveTo`: `Task or subtask ID (e.g., "15", "15.2") to automatically save the research conversation to.` (CLI: `--save-to <id>`)
    * `saveFile`: `If true, saves the research conversation to a markdown file in '.taskmaster/docs/research/'.` (CLI: `--save-file`)
    * `noFollowup`: `Disables the interactive follow-up question menu in the CLI.` (CLI: `--no-followup`)
    * `tag`: `Specify which tag context to use for task-based context gathering. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `projectRoot`: `The directory of the project. Must be an absolute path.` (CLI: Determined automatically)
* **Usage:** **This is a POWERFUL tool that agents should use FREQUENTLY** to:
    * Get fresh information beyond knowledge cutoff dates
    * Research latest best practices, library updates, security patches
    * Find implementation examples for specific technologies
    * Validate approaches against current industry standards
    * Get contextual advice based on project files and tasks
* **When to Consider Using Research:**
    * **Before implementing any task** - Research current best practices
    * **When encountering new technologies** - Get up-to-date implementation guidance (libraries, APIs, etc.)
    * **For security-related tasks** - Find latest security recommendations
    * **When updating dependencies** - Research breaking changes and migration guides
    * **For performance optimization** - Get current performance best practices
    * **When debugging complex issues** - Research known solutions and workarounds
* **Research + Action Pattern:**
    * Use `research` to gather fresh information
    * Use `update_subtask` to commit findings with timestamps
    * Use `update_task` to incorporate research into task details
    * Use `add_task` with research flag for informed task creation
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. The research provides FRESH data beyond the AI's training cutoff, making it invaluable for current best practices and recent developments.
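Putting the parameters together, a context-rich research call could be sketched like this (parameter names as documented above; the envelope and the `projectRoot` value are illustrative assumptions):

```json
{
  "name": "research",
  "arguments": {
    "query": "What are the latest best practices for React Query v5?",
    "taskIds": "15,16.2",
    "filePaths": "src/api.js",
    "detailLevel": "medium",
    "saveTo": "15.2",
    "projectRoot": "/absolute/path/to/project"
  }
}
```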
---

## Tag Management

This new suite of commands allows you to manage different task contexts (tags).

### 26. List Tags (`tags`)

* **MCP Tool:** `list_tags`
* **CLI Command:** `task-master tags [options]`
* **Description:** `List all available tags with task counts, completion status, and other metadata.`
* **Key Parameters/Options:**
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
    * `--show-metadata`: `Include detailed metadata in the output (e.g., creation date, description).` (CLI: `--show-metadata`)

### 27. Add Tag (`add_tag`)

* **MCP Tool:** `add_tag`
* **CLI Command:** `task-master add-tag <tagName> [options]`
* **Description:** `Create a new, empty tag context, or copy tasks from another tag.`
* **Key Parameters/Options:**
    * `tagName`: `Name of the new tag to create (alphanumeric, hyphens, underscores).` (CLI: `<tagName>` positional)
    * `--from-branch`: `Creates a tag with a name derived from the current git branch, ignoring the <tagName> argument.` (CLI: `--from-branch`)
    * `--copy-from-current`: `Copy tasks from the currently active tag to the new tag.` (CLI: `--copy-from-current`)
    * `--copy-from <tag>`: `Copy tasks from a specific source tag to the new tag.` (CLI: `--copy-from <tag>`)
    * `--description <text>`: `Provide an optional description for the new tag.` (CLI: `-d, --description <text>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
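A sketch of an MCP `add_tag` call that copies the active tag into a new feature context. Only `tagName` is documented as an MCP parameter name above; the camelCase renderings of the other flags and the call envelope are assumptions:

```json
{
  "name": "add_tag",
  "arguments": {
    "tagName": "feature-auth",
    "copyFromCurrent": true,
    "description": "Tasks for the auth feature branch"
  }
}
```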
### 28. Delete Tag (`delete_tag`)

* **MCP Tool:** `delete_tag`
* **CLI Command:** `task-master delete-tag <tagName> [options]`
* **Description:** `Permanently delete a tag and all of its associated tasks.`
* **Key Parameters/Options:**
    * `tagName`: `Name of the tag to delete.` (CLI: `<tagName>` positional)
    * `--yes`: `Skip the confirmation prompt.` (CLI: `-y, --yes`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)

### 29. Use Tag (`use_tag`)

* **MCP Tool:** `use_tag`
* **CLI Command:** `task-master use-tag <tagName>`
* **Description:** `Switch your active task context to a different tag.`
* **Key Parameters/Options:**
    * `tagName`: `Name of the tag to switch to.` (CLI: `<tagName>` positional)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)

### 30. Rename Tag (`rename_tag`)

* **MCP Tool:** `rename_tag`
* **CLI Command:** `task-master rename-tag <oldName> <newName>`
* **Description:** `Rename an existing tag.`
* **Key Parameters/Options:**
    * `oldName`: `The current name of the tag.` (CLI: `<oldName>` positional)
    * `newName`: `The new name for the tag.` (CLI: `<newName>` positional)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)

### 31. Copy Tag (`copy_tag`)

* **MCP Tool:** `copy_tag`
* **CLI Command:** `task-master copy-tag <sourceName> <targetName> [options]`
* **Description:** `Copy an entire tag context, including all its tasks and metadata, to a new tag.`
* **Key Parameters/Options:**
    * `sourceName`: `Name of the tag to copy from.` (CLI: `<sourceName>` positional)
    * `targetName`: `Name of the new tag to create.` (CLI: `<targetName>` positional)
    * `--description <text>`: `Optional description for the new tag.` (CLI: `-d, --description <text>`)

---

## Miscellaneous

### 32. Sync Readme (`sync-readme`) -- experimental

* **MCP Tool:** N/A
* **CLI Command:** `task-master sync-readme [options]`
* **Description:** `Export your task list to your project's README.md file, useful for showcasing progress.`
* **Key Parameters/Options:**
    * `status`: `Filter tasks by status (e.g., 'pending', 'done').` (CLI: `-s, --status <status>`)
    * `withSubtasks`: `Include subtasks in the export.` (CLI: `--with-subtasks`)
    * `tag`: `Specify which tag context to export from. Defaults to the current active tag.` (CLI: `--tag <name>`)

---

## Environment Variables Configuration (Updated)

Taskmaster primarily uses the **`.taskmaster/config.json`** file (in project root) for configuration (models, parameters, logging level, etc.), managed via `task-master models --setup`.

Environment variables are used **only** for sensitive API keys related to AI providers and specific overrides like the Ollama base URL:

* **API Keys (Required for corresponding provider):**
    * `ANTHROPIC_API_KEY`
    * `PERPLEXITY_API_KEY`
    * `OPENAI_API_KEY`
    * `GOOGLE_API_KEY`
    * `MISTRAL_API_KEY`
    * `AZURE_OPENAI_API_KEY` (Requires `AZURE_OPENAI_ENDPOINT` too)
    * `OPENROUTER_API_KEY`
    * `XAI_API_KEY`
    * `OLLAMA_API_KEY` (Requires `OLLAMA_BASE_URL` too)
* **Endpoints (optional / provider-specific; may also be set in `.taskmaster/config.json`):**
    * `AZURE_OPENAI_ENDPOINT`
    * `OLLAMA_BASE_URL` (Default: `http://localhost:11434/api`)

**Set API keys** in your **`.env`** file in the project root (for CLI use) or within the `env` section of your **`.kiro/mcp.json`** file (for MCP/Kiro integration). All other settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in `.taskmaster/config.json` via the `task-master models` command or the `models` MCP tool.
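For CLI use, a minimal `.env` sketch looks like the following (placeholder values; include only the keys for the providers you actually use):

```
ANTHROPIC_API_KEY=your_anthropic_key_here
PERPLEXITY_API_KEY=your_perplexity_key_here
```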
---

For details on how these commands fit into the development process, see the [dev_workflow.md](.kiro/steering/dev_workflow.md).

**New file:** `.kiro/steering/taskmaster_hooks_workflow.md` (+59 lines)
---
inclusion: always
---

# Taskmaster Hook-Driven Workflow

## Core Principle: Hooks Automate Task Management

When working with Taskmaster in Kiro, **avoid manually marking tasks as done**. The hook system automatically handles task completion based on:

- **Test Success**: `[TM] Test Success Task Completer` detects passing tests and prompts for task completion
- **Code Changes**: `[TM] Code Change Task Tracker` monitors implementation progress
- **Dependency Chains**: `[TM] Task Dependency Auto-Progression` auto-starts dependent tasks

## AI Assistant Workflow

Follow this pattern when implementing features:

1. **Implement First**: Write code, create tests, make changes
2. **Save Frequently**: Hooks trigger on file saves to track progress automatically
3. **Let Hooks Decide**: Allow hooks to detect completion rather than manually setting status
4. **Respond to Prompts**: Confirm when hooks suggest task completion

## Key Rules for AI Assistants

- **Never use `tm set-status --status=done`** unless hooks fail to detect completion
- **Always write tests** - they provide the most reliable completion signal
- **Save files after implementation** - this triggers progress tracking
- **Trust hook suggestions** - if no completion prompt appears, more work may be needed

## Automatic Behaviors

The hook system provides:

- **Progress Logging**: Implementation details automatically added to task notes
- **Evidence-Based Completion**: Tasks marked done only when criteria are met
- **Dependency Management**: Next tasks auto-started when dependencies complete
- **Natural Flow**: Focus on coding, not task management overhead

## Manual Override Cases

Only manually set task status for:

- Documentation-only tasks
- Tasks without testable outcomes
- Emergency fixes without proper test coverage

Use `tm set-status` sparingly - prefer hook-driven completion.

## Implementation Pattern

```
1. Implement feature → Save file
2. Write tests → Save test file
3. Tests pass → Hook prompts completion
4. Confirm completion → Next task auto-starts
```

This workflow ensures proper task tracking while maintaining development flow.
**New file:** `.manypkg.json` (+6 lines)

```json
{
	"$schema": "https://unpkg.com/@manypkg/get-packages@1.1.3/schema.json",
	"defaultBranch": "main",
	"ignoredRules": ["ROOT_HAS_DEPENDENCIES", "INTERNAL_MISMATCH"],
	"ignoredPackages": ["@tm/core", "@tm/cli", "@tm/build-config"]
}
```
**New file:** `.mcp.json` (+9 lines)

```json
{
	"mcpServers": {
		"task-master-ai": {
			"type": "stdio",
			"command": "npx",
			"args": ["-y", "task-master-ai"]
		}
	}
}
```
**New file:** `.taskmaster/CLAUDE.md` (+417 lines)

# Task Master AI - Agent Integration Guide

## Essential Commands

### Core Workflow Commands

```bash
# Project Setup
task-master init                                   # Initialize Task Master in current project
task-master parse-prd .taskmaster/docs/prd.txt     # Generate tasks from PRD document
task-master models --setup                         # Configure AI models interactively

# Daily Development Workflow
task-master list                                   # Show all tasks with status
task-master next                                   # Get next available task to work on
task-master show <id>                              # View detailed task information (e.g., task-master show 1.2)
task-master set-status --id=<id> --status=done     # Mark task complete

# Task Management
task-master add-task --prompt="description" --research   # Add new task with AI assistance
task-master expand --id=<id> --research --force          # Break task into subtasks
task-master update-task --id=<id> --prompt="changes"     # Update specific task
task-master update --from=<id> --prompt="changes"        # Update multiple tasks from ID onwards
task-master update-subtask --id=<id> --prompt="notes"    # Add implementation notes to subtask

# Analysis & Planning
task-master analyze-complexity --research          # Analyze task complexity
task-master complexity-report                      # View complexity analysis
task-master expand --all --research                # Expand all eligible tasks

# Dependencies & Organization
task-master add-dependency --id=<id> --depends-on=<id>   # Add task dependency
task-master move --from=<id> --to=<id>                   # Reorganize task hierarchy
task-master validate-dependencies                        # Check for dependency issues
task-master generate                                     # Update task markdown files (usually auto-called)
```
## Key Files & Project Structure

### Core Files

- `.taskmaster/tasks/tasks.json` - Main task data file (auto-managed)
- `.taskmaster/config.json` - AI model configuration (use `task-master models` to modify)
- `.taskmaster/docs/prd.txt` - Product Requirements Document for parsing
- `.taskmaster/tasks/*.txt` - Individual task files (auto-generated from tasks.json)
- `.env` - API keys for CLI usage

### Claude Code Integration Files

- `CLAUDE.md` - Auto-loaded context for Claude Code (this file)
- `.claude/settings.json` - Claude Code tool allowlist and preferences
- `.claude/commands/` - Custom slash commands for repeated workflows
- `.mcp.json` - MCP server configuration (project-specific)

### Directory Structure

```
project/
├── .taskmaster/
│   ├── tasks/              # Task files directory
│   │   ├── tasks.json      # Main task database
│   │   ├── task-1.md       # Individual task files
│   │   └── task-2.md
│   ├── docs/               # Documentation directory
│   │   ├── prd.txt         # Product requirements
│   ├── reports/            # Analysis reports directory
│   │   └── task-complexity-report.json
│   ├── templates/          # Template files
│   │   └── example_prd.txt # Example PRD template
│   └── config.json         # AI models & settings
├── .claude/
│   ├── settings.json       # Claude Code configuration
│   └── commands/           # Custom slash commands
├── .env                    # API keys
├── .mcp.json               # MCP configuration
└── CLAUDE.md               # This file - auto-loaded by Claude Code
```
|
||||||
|
|
||||||
|
## MCP Integration
|
||||||
|
|
||||||
|
Task Master provides an MCP server that Claude Code can connect to. Configure in `.mcp.json`:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"mcpServers": {
|
||||||
|
"task-master-ai": {
|
||||||
|
"command": "npx",
|
||||||
|
"args": ["-y", "task-master-ai"],
|
||||||
|
"env": {
|
||||||
|
"ANTHROPIC_API_KEY": "your_key_here",
|
||||||
|
"PERPLEXITY_API_KEY": "your_key_here",
|
||||||
|
"OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
|
||||||
|
"GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
|
||||||
|
"XAI_API_KEY": "XAI_API_KEY_HERE",
|
||||||
|
"OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
|
||||||
|
"MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
|
||||||
|
"AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
|
||||||
|
"OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Essential MCP Tools
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
help; // = shows available taskmaster commands
|
||||||
|
// Project setup
|
||||||
|
initialize_project; // = task-master init
|
||||||
|
parse_prd; // = task-master parse-prd
|
||||||
|
|
||||||
|
// Daily workflow
|
||||||
|
get_tasks; // = task-master list
|
||||||
|
next_task; // = task-master next
|
||||||
|
get_task; // = task-master show <id>
|
||||||
|
set_task_status; // = task-master set-status
|
||||||
|
|
||||||
|
// Task management
|
||||||
|
add_task; // = task-master add-task
|
||||||
|
expand_task; // = task-master expand
|
||||||
|
update_task; // = task-master update-task
|
||||||
|
update_subtask; // = task-master update-subtask
|
||||||
|
update; // = task-master update
|
||||||
|
|
||||||
|
// Analysis
|
||||||
|
analyze_project_complexity; // = task-master analyze-complexity
|
||||||
|
complexity_report; // = task-master complexity-report
|
||||||
|
```
|
||||||
|
|
||||||
|
## Claude Code Workflow Integration
|
||||||
|
|
||||||
|
### Standard Development Workflow
|
||||||
|
|
||||||
|
#### 1. Project Initialization
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Initialize Task Master
|
||||||
|
task-master init
|
||||||
|
|
||||||
|
# Create or obtain PRD, then parse it
|
||||||
|
task-master parse-prd .taskmaster/docs/prd.txt
|
||||||
|
|
||||||
|
# Analyze complexity and expand tasks
|
||||||
|
task-master analyze-complexity --research
|
||||||
|
task-master expand --all --research
|
||||||
|
```
|
||||||
|
|
||||||
|
If tasks already exist, another PRD can be parsed (with new information only!) using parse-prd with --append flag. This will add the generated tasks to the existing list of tasks..

#### 2. Daily Development Loop

```bash
# Start each session
task-master next # Find next available task
task-master show <id> # Review task details

# During implementation, log code context into the tasks and subtasks
task-master update-subtask --id=<id> --prompt="implementation notes..."

# Complete tasks
task-master set-status --id=<id> --status=done
```

#### 3. Multi-Claude Workflows

For complex projects, use multiple Claude Code sessions:

```bash
# Terminal 1: Main implementation
cd project && claude

# Terminal 2: Testing and validation
cd project-test-worktree && claude

# Terminal 3: Documentation updates
cd project-docs-worktree && claude
```

### Custom Slash Commands

Create `.claude/commands/taskmaster-next.md`:

```markdown
Find the next available Task Master task and show its details.

Steps:

1. Run `task-master next` to get the next task
2. If a task is available, run `task-master show <id>` for full details
3. Provide a summary of what needs to be implemented
4. Suggest the first implementation step
```

Create `.claude/commands/taskmaster-complete.md`:

```markdown
Complete a Task Master task: $ARGUMENTS

Steps:

1. Review the current task with `task-master show $ARGUMENTS`
2. Verify all implementation is complete
3. Run any tests related to this task
4. Mark as complete: `task-master set-status --id=$ARGUMENTS --status=done`
5. Show the next available task with `task-master next`
```

## Tool Allowlist Recommendations

Add to `.claude/settings.json`:

```json
{
	"allowedTools": [
		"Edit",
		"Bash(task-master *)",
		"Bash(git commit:*)",
		"Bash(git add:*)",
		"Bash(npm run *)",
		"mcp__task_master_ai__*"
	]
}
```

## Configuration & Setup

### API Keys Required

At least **one** of these API keys must be configured:

- `ANTHROPIC_API_KEY` (Claude models) - **Recommended**
- `PERPLEXITY_API_KEY` (Research features) - **Highly recommended**
- `OPENAI_API_KEY` (GPT models)
- `GOOGLE_API_KEY` (Gemini models)
- `MISTRAL_API_KEY` (Mistral models)
- `OPENROUTER_API_KEY` (Multiple models)
- `XAI_API_KEY` (Grok models)

An API key is required for any provider used in any of the three roles (main, research, fallback) defined by the `models` command.

### Model Configuration

```bash
# Interactive setup (recommended)
task-master models --setup

# Set specific models
task-master models --set-main claude-3-5-sonnet-20241022
task-master models --set-research perplexity-llama-3.1-sonar-large-128k-online
task-master models --set-fallback gpt-4o-mini
```

## Task Structure & IDs

### Task ID Format

- Main tasks: `1`, `2`, `3`, etc.
- Subtasks: `1.1`, `1.2`, `2.1`, etc.
- Sub-subtasks: `1.1.1`, `1.1.2`, etc.
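
Because IDs nest with dots, parent/child relationships can be derived mechanically. A couple of illustrative helpers (hypothetical, not part of Task Master's API):

```javascript
// Hypothetical helpers for working with dotted task IDs.
function parentId(id) {
	const parts = id.split('.');
	return parts.length > 1 ? parts.slice(0, -1).join('.') : null;
}

function depth(id) {
	// 1 = main task, 2 = subtask, 3 = sub-subtask
	return id.split('.').length;
}
```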

### Task Status Values

- `pending` - Ready to work on
- `in-progress` - Currently being worked on
- `done` - Completed and verified
- `deferred` - Postponed
- `cancelled` - No longer needed
- `blocked` - Waiting on external factors

### Task Fields

```json
{
	"id": "1.2",
	"title": "Implement user authentication",
	"description": "Set up JWT-based auth system",
	"status": "pending",
	"priority": "high",
	"dependencies": ["1.1"],
	"details": "Use bcrypt for hashing, JWT for tokens...",
	"testStrategy": "Unit tests for auth functions, integration tests for login flow",
	"subtasks": []
}
```
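
Given fields like these, the selection rule behind `task-master next` — a pending task whose dependencies are all done — can be pictured as follows (a simplified sketch; the priority ordering here is an assumption, and the real command applies more nuanced rules):

```javascript
// Simplified sketch of next-task selection: among pending tasks whose
// dependencies are all done, prefer the highest priority (assumed ordering).
const rank = { high: 0, medium: 1, low: 2 };

function nextTask(tasks) {
	const done = new Set(
		tasks.filter((t) => t.status === 'done').map((t) => t.id)
	);
	return tasks
		.filter((t) => t.status === 'pending')
		.filter((t) => (t.dependencies ?? []).every((d) => done.has(d)))
		.sort((a, b) => (rank[a.priority] ?? 1) - (rank[b.priority] ?? 1))[0];
}
```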

## Claude Code Best Practices with Task Master

### Context Management

- Use `/clear` between different tasks to maintain focus
- This CLAUDE.md file is automatically loaded for context
- Use `task-master show <id>` to pull specific task context when needed

### Iterative Implementation

1. `task-master show <subtask-id>` - Understand requirements
2. Explore codebase and plan implementation
3. `task-master update-subtask --id=<id> --prompt="detailed plan"` - Log plan
4. `task-master set-status --id=<id> --status=in-progress` - Start work
5. Implement code following logged plan
6. `task-master update-subtask --id=<id> --prompt="what worked/didn't work"` - Log progress
7. `task-master set-status --id=<id> --status=done` - Complete task

### Complex Workflows with Checklists

For large migrations or multi-step processes:

1. Create a markdown PRD file describing the new changes: `touch task-migration-checklist.md` (PRDs can be `.txt` or `.md`)
2. Parse the new PRD with `task-master parse-prd --append` (also available via MCP)
3. Expand the newly generated tasks into subtasks. Consider running `analyze-complexity` with the correct `--from` and `--to` IDs (the new IDs) to identify the ideal subtask count for each task, then expand them.
4. Work through items systematically, checking them off as completed
5. Use `task-master update-subtask` to log progress on each task/subtask, and to update or research them before or during implementation if you get stuck

### Git Integration

Task Master works well with the `gh` CLI:

```bash
# Create PR for completed task
gh pr create --title "Complete task 1.2: User authentication" --body "Implements JWT auth system as specified in task 1.2"

# Reference task in commits
git commit -m "feat: implement JWT auth (task 1.2)"
```

### Parallel Development with Git Worktrees

```bash
# Create worktrees for parallel task development
git worktree add ../project-auth feature/auth-system
git worktree add ../project-api feature/api-refactor

# Run Claude Code in each worktree
cd ../project-auth && claude # Terminal 1: Auth work
cd ../project-api && claude # Terminal 2: API work
```

## Troubleshooting

### AI Commands Failing

```bash
# Check API keys are configured
cat .env # For CLI usage

# Verify model configuration
task-master models

# Test with a different model
task-master models --set-fallback gpt-4o-mini
```

### MCP Connection Issues

- Check `.mcp.json` configuration
- Verify Node.js installation
- Use the `--mcp-debug` flag when starting Claude Code
- Use the CLI as a fallback if MCP is unavailable

### Task File Sync Issues

```bash
# Regenerate task files from tasks.json
task-master generate

# Fix dependency issues
task-master fix-dependencies
```

DO NOT RE-INITIALIZE. That will not do anything beyond re-adding the same Taskmaster core files.

## Important Notes

### AI-Powered Operations

These commands make AI calls and may take up to a minute:

- `parse_prd` / `task-master parse-prd`
- `analyze_project_complexity` / `task-master analyze-complexity`
- `expand_task` / `task-master expand`
- `expand_all` / `task-master expand --all`
- `add_task` / `task-master add-task`
- `update` / `task-master update`
- `update_task` / `task-master update-task`
- `update_subtask` / `task-master update-subtask`

### File Management

- Never manually edit `tasks.json` - use commands instead
- Never manually edit `.taskmaster/config.json` - use `task-master models`
- Task markdown files in `tasks/` are auto-generated
- If `tasks.json` was changed by hand anyway, run `task-master generate` to resync the task files

### Claude Code Session Management

- Use `/clear` frequently to maintain focused context
- Create custom slash commands for repeated Task Master workflows
- Configure the tool allowlist to streamline permissions
- Use headless mode for automation: `claude -p "task-master next"`

### Multi-Task Updates

- Use `update --from=<id>` to update multiple future tasks
- Use `update-task --id=<id>` for single task updates
- Use `update-subtask --id=<id>` for implementation logging

### Research Mode

- Add the `--research` flag for research-based AI enhancement
- Requires a research-model API key such as Perplexity (`PERPLEXITY_API_KEY`) in the environment
- Provides more informed task creation and updates
- Recommended for complex technical tasks

---

_This guide ensures Claude Code has immediate access to Task Master's essential functionality for agentic development workflows._

.taskmaster/config.json (new file, 43 lines)
@@ -0,0 +1,43 @@
{
	"models": {
		"main": {
			"provider": "anthropic",
			"modelId": "claude-sonnet-4-20250514",
			"maxTokens": 64000,
			"temperature": 0.2
		},
		"research": {
			"provider": "perplexity",
			"modelId": "sonar",
			"maxTokens": 8700,
			"temperature": 0.1
		},
		"fallback": {
			"provider": "anthropic",
			"modelId": "claude-3-7-sonnet-20250219",
			"maxTokens": 120000,
			"temperature": 0.2
		}
	},
	"global": {
		"logLevel": "info",
		"debug": false,
		"defaultNumTasks": 10,
		"defaultSubtasks": 5,
		"defaultPriority": "medium",
		"projectName": "Taskmaster",
		"ollamaBaseURL": "http://localhost:11434/api",
		"bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com",
		"responseLanguage": "English",
		"enableCodebaseAnalysis": true,
		"userId": "1234567890",
		"azureBaseURL": "https://your-endpoint.azure.com/",
		"defaultTag": "master"
	},
	"claudeCode": {},
	"grokCli": {
		"timeout": 120000,
		"workingDirectory": null,
		"defaultModel": "grok-4-latest"
	}
}

.taskmaster/docs/MIGRATION-ROADMAP.md (new file, 188 lines)
@@ -0,0 +1,188 @@

# Task Master Migration Roadmap

## Overview

Gradual migration from a scripts-based architecture to a clean monorepo with separated concerns.

## Architecture Vision

```
┌─────────────────────────────────────────────────┐
│                 User Interfaces                 │
├──────────┬──────────┬──────────┬────────────────┤
│  @tm/cli │  @tm/mcp │  @tm/ext │    @tm/web     │
│  (CLI)   │  (MCP)   │ (VSCode) │    (Future)    │
└──────────┴──────────┴──────────┴────────────────┘
                        │
                        ▼
             ┌──────────────────────┐
             │       @tm/core       │
             │   (Business Logic)   │
             └──────────────────────┘
```

## Migration Phases

### Phase 1: Core Extraction ✅ (In Progress)

**Goal**: Move all business logic to @tm/core

- [x] Create @tm/core package structure
- [x] Move types and interfaces
- [x] Implement TaskMasterCore facade
- [x] Move storage adapters
- [x] Move task services
- [ ] Move AI providers
- [ ] Move parser logic
- [ ] Complete test coverage

### Phase 2: CLI Package Creation 🚧 (Started)

**Goal**: Create @tm/cli as a thin presentation layer

- [x] Create @tm/cli package structure
- [x] Implement Command interface pattern
- [x] Create CommandRegistry
- [x] Build legacy bridge/adapter
- [x] Migrate list-tasks command
- [ ] Migrate remaining commands one by one
- [ ] Remove UI logic from core

### Phase 3: Transitional Integration

**Goal**: Use new packages in existing scripts without breaking changes

```javascript
// scripts/modules/commands.js gradually adopts new commands
import { ListTasksCommand } from '@tm/cli';

const listCommand = new ListTasksCommand();

// Old interface remains the same
programInstance
	.command('list')
	.action(async (options) => {
		// Use new command internally
		const result = await listCommand.execute(convertOptions(options));
	});
```

### Phase 4: MCP Package

**Goal**: Separate the MCP server into its own package

- [ ] Create @tm/mcp package
- [ ] Move MCP server code
- [ ] Use @tm/core for all logic
- [ ] MCP becomes a thin RPC layer

### Phase 5: Complete Migration

**Goal**: Remove old scripts; pure monorepo

- [ ] All commands migrated to @tm/cli
- [ ] Remove scripts/modules/task-manager/*
- [ ] Remove scripts/modules/commands.js
- [ ] Update bin/task-master.js to use @tm/cli
- [ ] Clean up dependencies

## Current Transitional Strategy

### 1. Adapter Pattern (commands-adapter.js)

```javascript
// Checks if the new CLI is available and uses it;
// falls back to the legacy implementation if not.
export async function listTasksAdapter(...args) {
	if (cliAvailable) {
		return useNewImplementation(...args);
	}
	return useLegacyImplementation(...args);
}
```
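
Filled out as a runnable toy, with the availability probe and both implementations as stand-ins for the real ones:

```javascript
// Toy version of the adapter above; `cliAvailable` and both
// implementations are stand-ins, not the real commands-adapter.js code.
const cliAvailable = true; // real code would probe for @tm/cli at startup

async function useNewImplementation(...args) {
	return { source: 'new', args };
}

async function useLegacyImplementation(...args) {
	return { source: 'legacy', args };
}

async function listTasksAdapter(...args) {
	if (cliAvailable) {
		return useNewImplementation(...args);
	}
	return useLegacyImplementation(...args);
}
```

Because call sites only ever see `listTasksAdapter`, flipping `cliAvailable` (or deleting the legacy branch later) requires no changes elsewhere.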

### 2. Command Bridge Pattern

```javascript
// Allows new commands to work in old code
const bridge = new CommandBridge(new ListTasksCommand());
const data = await bridge.run(legacyOptions); // Legacy style
const result = await bridge.execute(newOptions); // New style
```
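
A minimal bridge of the shape the snippet implies could look like this (an assumed sketch — the real class lives in @tm/cli and its option mapping may differ):

```javascript
// Minimal stand-in for the CommandBridge idea: wrap a new-style command so
// legacy call sites keep their old calling convention. The option mapping
// below (status -> statusFilter) is purely illustrative.
class CommandBridge {
	constructor(command) {
		this.command = command;
	}

	// New style: pass options straight through.
	execute(options) {
		return this.command.execute(options);
	}

	// Legacy style: translate old option names before delegating.
	run(legacyOptions) {
		return this.command.execute({ statusFilter: legacyOptions.status });
	}
}
```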

### 3. Gradual File Migration

Instead of big-bang refactoring:

1. Create the new implementation in @tm/cli
2. Add an adapter in commands-adapter.js
3. Update commands.js to use the adapter
4. Test that both paths work
5. Remove the adapter once everything is migrated

## Benefits of This Approach

1. **No Breaking Changes**: The existing CLI continues to work
2. **Incremental PRs**: Each command can be migrated separately
3. **Parallel Development**: New features can use the new architecture
4. **Easy Rollback**: The new implementation can be disabled if issues arise
5. **Clear Separation**: Business logic (core) vs presentation (cli/mcp/etc.)

## Example PR Sequence

### PR 1: Core Package Setup ✅

- Create @tm/core
- Move types and interfaces
- Basic TaskMasterCore implementation

### PR 2: CLI Package Foundation ✅

- Create @tm/cli
- Command interface and registry
- Legacy bridge utilities

### PR 3: First Command Migration

- Migrate list-tasks to the new system
- Add adapter in scripts
- Test both implementations

### PR 4-N: Migrate Commands One by One

- Each PR migrates 1-2 related commands
- Small, reviewable changes
- Continuous delivery

### Final PR: Cleanup

- Remove legacy implementations
- Remove adapters
- Update documentation

## Testing Strategy

### Dual Testing During Migration

```javascript
describe('List Tasks', () => {
	it('works with legacy implementation', async () => {
		// Force legacy
		const result = await legacyListTasks(...);
		expect(result).toBeDefined();
	});

	it('works with new implementation', async () => {
		// Force new
		const command = new ListTasksCommand();
		const result = await command.execute(...);
		expect(result.success).toBe(true);
	});

	it('adapter chooses correctly', async () => {
		// Let adapter decide
		const result = await listTasksAdapter(...);
		expect(result).toBeDefined();
	});
});
```

## Success Metrics

- [ ] All commands migrated without breaking changes
- [ ] Test coverage maintained or improved
- [ ] Performance maintained or improved
- [ ] Cleaner, more maintainable codebase
- [ ] Easy to add new interfaces (web, desktop, etc.)

## Notes for Contributors

1. **Keep PRs Small**: Migrate one command at a time
2. **Test Both Paths**: Ensure legacy and new both work
3. **Document Changes**: Update this roadmap as you go
4. **Communicate**: Discuss in PRs if the architecture needs adjustment

This is a living document - update it as the migration progresses!
@@ -21,16 +21,18 @@ In an AI-driven development process—particularly with tools like [Cursor](http
|
|||||||
The script can be configured through environment variables in a `.env` file at the root of the project:
|
The script can be configured through environment variables in a `.env` file at the root of the project:
|
||||||
|
|
||||||
### Required Configuration
|
### Required Configuration
|
||||||
|
|
||||||
- `ANTHROPIC_API_KEY`: Your Anthropic API key for Claude
|
- `ANTHROPIC_API_KEY`: Your Anthropic API key for Claude
|
||||||
|
|
||||||
### Optional Configuration
|
### Optional Configuration
|
||||||
|
|
||||||
- `MODEL`: Specify which Claude model to use (default: "claude-3-7-sonnet-20250219")
|
- `MODEL`: Specify which Claude model to use (default: "claude-3-7-sonnet-20250219")
|
||||||
- `MAX_TOKENS`: Maximum tokens for model responses (default: 4000)
|
- `MAX_TOKENS`: Maximum tokens for model responses (default: 4000)
|
||||||
- `TEMPERATURE`: Temperature for model responses (default: 0.7)
|
- `TEMPERATURE`: Temperature for model responses (default: 0.7)
|
||||||
- `PERPLEXITY_API_KEY`: Your Perplexity API key for research-backed subtask generation
|
- `PERPLEXITY_API_KEY`: Your Perplexity API key for research-backed subtask generation
|
||||||
- `PERPLEXITY_MODEL`: Specify which Perplexity model to use (default: "sonar-medium-online")
|
- `PERPLEXITY_MODEL`: Specify which Perplexity model to use (default: "sonar-medium-online")
|
||||||
- `DEBUG`: Enable debug logging (default: false)
|
- `DEBUG`: Enable debug logging (default: false)
|
||||||
- `LOG_LEVEL`: Log level - debug, info, warn, error (default: info)
|
- `TASKMASTER_LOG_LEVEL`: Log level - debug, info, warn, error (default: info)
|
||||||
- `DEFAULT_SUBTASKS`: Default number of subtasks when expanding (default: 3)
|
- `DEFAULT_SUBTASKS`: Default number of subtasks when expanding (default: 3)
|
||||||
- `DEFAULT_PRIORITY`: Default priority for generated tasks (default: medium)
|
- `DEFAULT_PRIORITY`: Default priority for generated tasks (default: medium)
|
||||||
- `PROJECT_NAME`: Override default project name in tasks.json
|
- `PROJECT_NAME`: Override default project name in tasks.json
|
||||||
@@ -39,6 +41,7 @@ The script can be configured through environment variables in a `.env` file at t
|
|||||||
## How It Works
|
## How It Works
|
||||||
|
|
||||||
1. **`tasks.json`**:
|
1. **`tasks.json`**:
|
||||||
|
|
||||||
- A JSON file at the project root containing an array of tasks (each with `id`, `title`, `description`, `status`, etc.).
|
- A JSON file at the project root containing an array of tasks (each with `id`, `title`, `description`, `status`, etc.).
|
||||||
- The `meta` field can store additional info like the project's name, version, or reference to the PRD.
|
- The `meta` field can store additional info like the project's name, version, or reference to the PRD.
|
||||||
- Tasks can have `subtasks` for more detailed implementation steps.
|
- Tasks can have `subtasks` for more detailed implementation steps.
|
||||||
@@ -102,6 +105,7 @@ node scripts/dev.js update --file=custom-tasks.json --from=5 --prompt="Change da
|
|||||||
```
|
```
|
||||||
|
|
||||||
Notes:
|
Notes:
|
||||||
|
|
||||||
- The `--prompt` parameter is required and should explain the changes or new context
|
- The `--prompt` parameter is required and should explain the changes or new context
|
||||||
- Only tasks that aren't marked as 'done' will be updated
|
- Only tasks that aren't marked as 'done' will be updated
|
||||||
- Tasks with ID >= the specified --from value will be updated
|
- Tasks with ID >= the specified --from value will be updated
|
||||||
@@ -120,6 +124,7 @@ node scripts/dev.js update-task --id=4 --prompt="Use JWT for authentication" --r
|
|||||||
```
|
```
|
||||||
|
|
||||||
This command:
|
This command:
|
||||||
|
|
||||||
- Updates only the specified task rather than a range of tasks
|
- Updates only the specified task rather than a range of tasks
|
||||||
- Provides detailed validation with helpful error messages
|
- Provides detailed validation with helpful error messages
|
||||||
- Checks for required API keys when using research mode
|
- Checks for required API keys when using research mode
|
||||||
@@ -146,6 +151,7 @@ node scripts/dev.js set-status --id=1,2,3 --status=done
|
|||||||
```
|
```
|
||||||
|
|
||||||
Notes:
|
Notes:
|
||||||
|
|
||||||
- When marking a parent task as "done", all of its subtasks will automatically be marked as "done" as well
|
- When marking a parent task as "done", all of its subtasks will automatically be marked as "done" as well
|
||||||
- Common status values are 'done', 'pending', and 'deferred', but any string is accepted
|
- Common status values are 'done', 'pending', and 'deferred', but any string is accepted
|
||||||
- You can specify multiple task IDs by separating them with commas
|
- You can specify multiple task IDs by separating them with commas
|
||||||
@@ -195,6 +201,7 @@ node scripts/dev.js clear-subtasks --all
|
|||||||
```
|
```
|
||||||
|
|
||||||
Notes:
|
Notes:
|
||||||
|
|
||||||
- After clearing subtasks, task files are automatically regenerated
|
- After clearing subtasks, task files are automatically regenerated
|
||||||
- This is useful when you want to regenerate subtasks with a different approach
|
- This is useful when you want to regenerate subtasks with a different approach
|
||||||
- Can be combined with the `expand` command to immediately generate new subtasks
|
- Can be combined with the `expand` command to immediately generate new subtasks
|
||||||
@@ -210,6 +217,7 @@ The script integrates with two AI services:
|
|||||||
The Perplexity integration uses the OpenAI client to connect to Perplexity's API, which provides enhanced research capabilities for generating more informed subtasks. If the Perplexity API is unavailable or encounters an error, the script will automatically fall back to using Anthropic's Claude.
|
The Perplexity integration uses the OpenAI client to connect to Perplexity's API, which provides enhanced research capabilities for generating more informed subtasks. If the Perplexity API is unavailable or encounters an error, the script will automatically fall back to using Anthropic's Claude.
|
||||||
|
|
||||||
To use the Perplexity integration:
|
To use the Perplexity integration:
|
||||||
|
|
||||||
1. Obtain a Perplexity API key
|
1. Obtain a Perplexity API key
|
||||||
2. Add `PERPLEXITY_API_KEY` to your `.env` file
|
2. Add `PERPLEXITY_API_KEY` to your `.env` file
|
||||||
3. Optionally specify `PERPLEXITY_MODEL` in your `.env` file (default: "sonar-medium-online")
|
3. Optionally specify `PERPLEXITY_MODEL` in your `.env` file (default: "sonar-medium-online")
|
||||||
@@ -217,7 +225,8 @@ To use the Perplexity integration:
|
|||||||
|
|
||||||
## Logging
|
## Logging
|
||||||
|
|
||||||
The script supports different logging levels controlled by the `LOG_LEVEL` environment variable:
|
The script supports different logging levels controlled by the `TASKMASTER_LOG_LEVEL` environment variable:
|
||||||
|
|
||||||
- `debug`: Detailed information, typically useful for troubleshooting
|
- `debug`: Detailed information, typically useful for troubleshooting
|
||||||
- `info`: Confirmation that things are working as expected (default)
|
- `info`: Confirmation that things are working as expected (default)
|
||||||
- `warn`: Warning messages that don't prevent execution
|
- `warn`: Warning messages that don't prevent execution
|
||||||
@@ -240,17 +249,20 @@ node scripts/dev.js remove-dependency --id=<id> --depends-on=<id>
|
|||||||
These commands:
|
These commands:
|
||||||
|
|
||||||
1. **Allow precise dependency management**:
|
1. **Allow precise dependency management**:
|
||||||
|
|
||||||
- Add dependencies between tasks with automatic validation
|
- Add dependencies between tasks with automatic validation
|
||||||
- Remove dependencies when they're no longer needed
|
- Remove dependencies when they're no longer needed
|
||||||
- Update task files automatically after changes
|
- Update task files automatically after changes
|
||||||
|
|
||||||
2. **Include validation checks**:
|
2. **Include validation checks**:
|
||||||
|
|
||||||
- Prevent circular dependencies (a task depending on itself)
|
- Prevent circular dependencies (a task depending on itself)
|
||||||
- Prevent duplicate dependencies
|
- Prevent duplicate dependencies
|
||||||
- Verify that both tasks exist before adding/removing dependencies
|
- Verify that both tasks exist before adding/removing dependencies
|
||||||
- Check if dependencies exist before attempting to remove them
|
- Check if dependencies exist before attempting to remove them
|
||||||
|
|
||||||
3. **Provide clear feedback**:
|
3. **Provide clear feedback**:
|
||||||
|
|
||||||
- Success messages confirm when dependencies are added/removed
|
- Success messages confirm when dependencies are added/removed
|
||||||
- Error messages explain why operations failed (if applicable)
|
- Error messages explain why operations failed (if applicable)
|
||||||
|
|
||||||
@@ -275,6 +287,7 @@ node scripts/dev.js validate-dependencies --file=custom-tasks.json
|
|||||||
```
|
```
|
||||||
|
|
||||||
This command:
|
This command:
|
||||||
|
|
||||||
- Scans all tasks and subtasks for non-existent dependencies
|
- Scans all tasks and subtasks for non-existent dependencies
|
||||||
- Identifies potential self-dependencies (tasks referencing themselves)
|
- Identifies potential self-dependencies (tasks referencing themselves)
|
||||||
- Reports all found issues without modifying files
|
- Reports all found issues without modifying files
|
||||||
@@ -296,6 +309,7 @@ node scripts/dev.js fix-dependencies --file=custom-tasks.json
```

This command:

1. **Validates all dependencies** across tasks and subtasks
2. **Automatically removes**:
   - References to non-existent tasks and subtasks
@@ -333,6 +347,7 @@ node scripts/dev.js analyze-complexity --research
```

Notes:

- The command uses Claude to analyze each task's complexity (or Perplexity with the --research flag)
- Tasks are scored on a scale of 1-10
- Each task receives a recommended number of subtasks based on the DEFAULT_SUBTASKS configuration
@@ -357,12 +372,14 @@ node scripts/dev.js expand --id=8 --num=5 --prompt="Custom prompt"
```

When a complexity report exists:

- The `expand` command will use the recommended subtask count from the report (unless overridden)
- It will use the tailored expansion prompt from the report (unless a custom prompt is provided)
- When using `--all`, tasks are sorted by complexity score (highest first)
- The `--research` flag is preserved from the complexity analysis to expansion

The output report structure is:

```json
{
  "meta": {
@@ -381,7 +398,7 @@ The output report structure is:
      "expansionPrompt": "Create subtasks that handle detecting...",
      "reasoning": "This task requires sophisticated logic...",
      "expansionCommand": "node scripts/dev.js expand --id=8 --num=6 --prompt=\"Create subtasks...\" --research"
    }
    // More tasks sorted by complexity score (highest first)
  ]
}
@@ -457,16 +474,19 @@ This command is particularly useful when you need to examine a specific task in
The script now includes improved error handling throughout all commands:

1. **Detailed Validation**:

   - Required parameters (like task IDs and prompts) are validated early
   - File existence is checked with customized errors for common scenarios
   - Parameter type conversion is handled with clear error messages

2. **Contextual Error Messages**:

   - Task not found errors include suggestions to run the list command
   - API key errors include reminders to check environment variables
   - Invalid ID format errors show the expected format

3. **Command-Specific Help Displays**:

   - When validation fails, detailed help for the specific command is shown
   - Help displays include usage examples and parameter descriptions
   - Formatted in clear, color-coded boxes with examples
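The contextual-error pattern described above can be sketched as a small helper. This is illustrative only; the actual script's wording, error kinds, and helper names may differ:

```javascript
// Hedged sketch of contextual error messages: each error kind pairs the
// failure with a suggested next step, as described in the section above.
function contextualError(kind, detail) {
  switch (kind) {
    case "task-not-found":
      return `Task ${detail} not found. Run 'node scripts/dev.js list' to see available task IDs.`;
    case "missing-api-key":
      return `Missing ${detail}. Check that it is set in your environment variables (.env).`;
    case "invalid-id":
      return `Invalid task ID '${detail}'. Expected a number like 5 or a subtask ID like 5.2.`;
    default:
      return `Error: ${detail}`;
  }
}
```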
@@ -481,11 +501,13 @@ The script now includes improved error handling throughout all commands:
The script now automatically checks for updates without slowing down execution:

1. **Background Version Checking**:

   - Non-blocking version checks run in the background while commands execute
   - Actual command execution isn't delayed by version checking
   - Update notifications appear after command completion

2. **Update Notifications**:

   - When a newer version is available, a notification is displayed
   - Notifications include the current version, latest version, and update command
   - Formatted in an attention-grabbing box with clear instructions
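The non-blocking pattern above boils down to starting the check without awaiting it, and only inspecting the result after the command finishes. A minimal sketch, with `fetchLatestVersion` stubbed (the real script presumably queries the npm registry):

```javascript
// Compare two "x.y.z" version strings; positive when a is newer than b.
function compareSemver(a, b) {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if ((pa[i] || 0) !== (pb[i] || 0)) return (pa[i] || 0) - (pb[i] || 0);
  }
  return 0;
}

async function fetchLatestVersion() {
  return "0.11.0"; // stub; a real check would query the npm registry
}

async function main(currentVersion) {
  // Start the check but do NOT await it yet, so the command runs first.
  const latestPromise = fetchLatestVersion().catch(() => null);

  // ... actual command execution would happen here ...

  // Only after the command completes do we look at the result.
  const latest = await latestPromise;
  if (latest && compareSemver(latest, currentVersion) > 0) {
    return `Update available: ${currentVersion} -> ${latest}`;
  }
  return null;
}
```

Because the promise is created before the command runs and awaited after, the network round-trip overlaps with the command instead of delaying it.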
@@ -516,6 +538,7 @@ node scripts/dev.js add-subtask --parent=5 --title="Login API route" --skip-gene
```

Key features:

- Create new subtasks with detailed properties or convert existing tasks
- Define dependencies between subtasks
- Set custom status for new subtasks
@@ -538,6 +561,7 @@ node scripts/dev.js remove-subtask --id=5.2 --skip-generate
```

Key features:

- Remove subtasks individually or in batches
- Optionally convert subtasks to standalone tasks
- Control whether task files are regenerated
91
.taskmaster/docs/prd-tm-start.txt
Normal file
@@ -0,0 +1,91 @@
<context>
# Overview
Add a new CLI command: `task-master start <task_id>` (alias: `tm start <task_id>`). This command hard-codes `claude-code` as the executor, fetches task details, builds a standardized prompt, runs claude-code, shows the result, checks for git changes, and auto-marks the task as done if successful.

We follow the Commander class pattern and reuse task retrieval from the `show` command flow. Extremely minimal for a 1-hour hackathon timeline.

# Core Features
- `start` command (Commander class style)
- Hard-coded executor: `claude-code`
- Standardized prompt designed for minimal changes following existing patterns
- Shows claude-code output (no streaming)
- Git status check for success detection
- Auto-mark task done if successful

# User Experience
```
task-master start 12
```
1) Fetches Task #12 details
2) Builds standardized prompt with task context
3) Runs claude-code with the prompt
4) Shows output
5) Checks git status for changes
6) Auto-marks task done if changes detected
</context>

<PRD>
# Technical Architecture

- Command pattern:
  - Create `apps/cli/src/commands/start.command.ts` modeled on [list.command.ts](mdc:apps/cli/src/commands/list.command.ts) and task lookup from [show.command.ts](mdc:apps/cli/src/commands/show.command.ts)

- Task retrieval:
  - Use `@tm/core` via `createTaskMasterCore` to get the task by ID
  - Extract: id, title, description, details

- Executor (ultra-simple approach):
  - Execute the `claude "full prompt here"` command directly
  - The prompt tells Claude to first run `tm show <task_id>` to get task details
  - Then tells Claude to implement the code changes
  - This opens the Claude CLI interface naturally in the current terminal
  - No subprocess management needed - just execute the command

- Execution flow:
  1) Validate `<task_id>` exists; exit with an error if not
  2) Build a standardized prompt that includes instructions to run `tm show <task_id>`
  3) Execute the `claude "prompt"` command directly in the terminal
  4) Claude CLI opens, runs `tm show`, then implements changes
  5) After the Claude session ends, run `git status --porcelain` to detect changes
  6) If changes are detected, auto-run `task-master set-status --id=<task_id> --status=done`

- Success criteria:
  - Success = exit code 0 AND git shows modified/created files
  - Print changed file paths; warn if no changes detected
# Development Roadmap

MVP (ship in ~1 hour):
1) Implement `start.command.ts` (Commander class), parse `<task_id>`
2) Validate the task exists via tm-core
3) Build a prompt that tells Claude to run `tm show <task_id>` then implement
4) Execute the `claude "prompt"` command, then check git status and auto-mark done

# Risks and Mitigations
- Executor availability: Error clearly if the `claude-code` provider fails
- False success: The git-change heuristic is acceptable for a hackathon MVP

# Appendix

**Standardized Prompt Template:**

```
You are an AI coding assistant with access to this repository's codebase.

First, run this command to get the task details:
tm show <task_id>

Then implement the task with these requirements:
- Make the SMALLEST number of code changes possible
- Follow ALL existing patterns in the codebase (you have access to analyze the code)
- Do NOT over-engineer the solution
- Use existing files/functions/patterns wherever possible
- When complete, print: COMPLETED: <brief summary of changes>

Begin by running tm show <task_id> to understand what needs to be implemented.
```

**Key References:**
- [list.command.ts](mdc:apps/cli/src/commands/list.command.ts) - Command structure
- [show.command.ts](mdc:apps/cli/src/commands/show.command.ts) - Task validation
- Node.js `child_process.exec()` - For executing the `claude "prompt"` command
</PRD>
@@ -0,0 +1,128 @@
---
title: Research Session
query: "how can i improve the scope-up and scope-down command"
date: 6/14/2025
time: 4:35:03 PM
timestamp: 2025-06-14T20:35:03.231Z
exchanges: 2
---

# Research Session

## Initial Query

**Question:** how can i improve the scope-up and scope-down command

**Response:**

## Improving the `scope-up` and `scope-down` Commands

**Clarify Command Purpose and User Experience**

- Ensure the commands have clear, intuitive behavior. For example, `scope-up` should consistently broaden the context or task view, while `scope-down` should reliably narrow it.
- Provide immediate, user-friendly feedback after each command, indicating what changed (e.g., "Now viewing parent task and all subtasks" or "Focused on subtask: X").
- Document the commands with examples in your `docs/command-reference.md` and `docs/examples.md` to help users understand their effects and use cases.

**Enhance Context Awareness**

- Make the commands context-sensitive: if the user is already at the top or bottom of the scope hierarchy, return a helpful message instead of failing silently.
- Allow optional arguments to specify the exact level or target (e.g., `scope-up 2` to move up two levels, or `scope-down <subtask-id>` to jump directly to a subtask).

**Robust Error Handling and Validation**

- Implement checks to prevent invalid operations, such as attempting to scope up from the root or scope down when no subtasks exist.
- Provide clear error messages and suggestions for next steps, improving user trust and reducing confusion[1][5].

**Integrate with Task and Tag Systems**

- Ensure `scope-up` and `scope-down` interact smoothly with your tagging and task hierarchy features. For example, scoping up from a subtask should show the parent task and its context, while scoping down should allow selection from available subtasks or related tags.
- Consider supporting batch operations, such as scoping down into all subtasks and displaying them in a summarized view.

**Improve Performance and Synchronization**

- Optimize the underlying logic to minimize delays, especially in large task trees. Use asynchronous operations where possible to keep the UI responsive.
- If the commands trigger data loading or context switching, ensure synchronization so that subsequent commands operate on the correct, updated context[2][4].

**Testing and Stakeholder Feedback**

- Add or expand unit and integration tests for these commands (see `tests/unit/commands.test.js` and `integration/cli/commands.test.js`) to cover edge cases and ensure reliability.
- Gather feedback from users and stakeholders to identify pain points and desired enhancements, then iterate on the command design and implementation[1][5].

**Example Improvements Table**

| Area           | Current Issue              | Suggested Improvement                        |
|----------------|----------------------------|----------------------------------------------|
| User Feedback  | Minimal or unclear output  | Add explicit, contextual messages            |
| Error Handling | Silent failures            | Informative errors and recovery suggestions  |
| Flexibility    | Only single-level movement | Support multi-level and targeted navigation  |
| Integration    | Limited context awareness  | Sync with tags, parent/child relationships   |
| Performance    | Slow on large hierarchies  | Optimize data access and UI updates          |

By focusing on these areas, you can make `scope-up` and `scope-down` more robust, user-friendly, and aligned with best practices in scope management and command design[1][5].

---

## Follow-up 1

**Question:** how might i do this relative to the tag system?

**Response:**

## Integrating Scope-Up and Scope-Down with the Tag System

**Conceptual Mapping**

- In your project, *scope-up* and *scope-down* commands currently operate on the task hierarchy, broadening or narrowing the context by moving between parent and child tasks.
- The tag system provides a parallel, orthogonal way to group and filter tasks—tags can represent categories, priorities, or any custom grouping, independent of strict parent/child relationships.

**How to Make Scope Commands Tag-Aware**

- **Scoping by Tag**: Allow users to "scope-down" into a tag, viewing only tasks associated with that tag. Conversely, "scope-up" could move from a specific tag context back to the broader task list or to a parent tag grouping if you support tag hierarchies.
- **Tag-Scoped Views**: When a tag is active as the current scope, all navigation and operations (listing, editing, etc.) are limited to tasks with that tag. This is similar to how some systems let you set the "Tag Scope" to filter navigation and search to only tagged items[1][5].
- **Combining Hierarchy and Tags**: Enable compound scoping, such as "all subtasks of Task X that also have Tag Y." This can be implemented by intersecting the results of hierarchical and tag-based queries.

**Implementation Strategies**

- **Scope Stack or Context Object**: Maintain a stack or context object that tracks both the current task scope and any active tag filters. Each scope change (up/down or tag in/out) updates this context.
- **Commands**:
  - `scope-down --tag <tag>`: Narrows the view to tasks with the specified tag.
  - `scope-up --tag`: Removes the tag filter, broadening the view.
  - `scope-down <subtask-id> --tag <tag>`: Narrows to a specific subtask *and* applies a tag filter.
- **Tag Hierarchies (Optional)**: If your tag system supports parent/child tags or tag groups, allow "scope-up" to move from a specific tag to its parent tag or group, and "scope-down" to move into a child tag[4][5].
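The "scope stack or context object" strategy above can be sketched as follows. This is a hedged illustration, not the project's actual API; the task shape (`parentId`, `tags`) and function names are assumptions:

```javascript
// Illustrative scope context: tracks the current task scope plus an
// optional tag filter, and intersects both when listing visible tasks.
function createScopeContext(tasks) {
  const ctx = { taskId: null, tag: null };
  return {
    scopeDown(taskId, tag = null) {
      if (taskId !== undefined && taskId !== null) ctx.taskId = taskId;
      if (tag) ctx.tag = tag;
    },
    scopeUpTag() {
      ctx.tag = null; // remove the tag filter, keep the task scope
    },
    visible() {
      return tasks.filter(
        (t) =>
          (ctx.taskId === null || t.parentId === ctx.taskId) &&
          (ctx.tag === null || (t.tags || []).includes(ctx.tag))
      );
    },
  };
}
```

For example, `scopeDown(1, "urgent")` corresponds to `scope-down 1 --tag urgent`, and `scopeUpTag()` to `scope-up --tag`.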
**Example User Flows**

- *Scoping Down by Tag*:
  User runs `scope-down --tag urgent` → Only tasks tagged "urgent" are shown.
- *Combining Task and Tag*:
  User runs `scope-down <task-id> --tag frontend` → Only subtasks of `<task-id>` that are tagged "frontend" are shown.
- *Scoping Up from Tag*:
  User runs `scope-up --tag` → Removes the tag filter, returning to the previous broader context.

**Technical Considerations**

- **Efficient Filtering**: Use indexed queries or in-memory filtering to efficiently combine tag and hierarchy filters, especially for large task sets.
- **Feedback and UI**: Always display the current scope context (e.g., "Viewing tasks tagged 'urgent' under 'Feature X'") so users understand what they're seeing.
- **Extensibility**: Design the scope system so new types of scopes (e.g., by status, assignee) can be added in the future, not just tags and hierarchy.

**Relevant Precedents**

- Tag-based file systems and management tools often implement "scopes" as dynamic groups defined by tag criteria, allowing users to build up complex views by combining tags and other filters[5].
- Some systems allow scoping into tag management itself, where you can create, edit, or view tags and their associated objects, which could be mirrored in your CLI or UI[4].

**Summary Table**

| Command Example                  | Resulting Scope                               |
|----------------------------------|-----------------------------------------------|
| `scope-down --tag bug`           | All tasks tagged "bug"                        |
| `scope-down <task-id> --tag api` | Subtasks of `<task-id>` tagged "api"          |
| `scope-up --tag`                 | Removes tag filter, returns to previous scope |
| `scope-down --tag frontend`      | All tasks tagged "frontend"                   |

By integrating tag-based scoping, you make your scope commands more flexible and powerful, supporting both hierarchical and ad-hoc organizational strategies[1][5].

---

*Generated by Task Master Research Command*
*Timestamp: 2025-06-14T20:35:03.231Z*
@@ -0,0 +1,158 @@
---
title: Research Session
query: "should i be using any specific libraries for this?"
date: 6/14/2025
time: 5:20:00 PM
timestamp: 2025-06-14T21:20:00.561Z
exchanges: 2
---

# Research Session

## Initial Query

**Question:** should i be using any specific libraries for this?

**Response:**

HELL YEAH PARTNER

For implementing a GitHub Issue export feature with bidirectional linking in your project, you absolutely should use a dedicated library for interacting with the GitHub API. This will save you a ton of time, reduce boilerplate, and help you avoid common pitfalls with authentication, pagination, and error handling.

## Recommended Library: PyGithub

- **PyGithub** is the most widely used and well-documented Python library for accessing the GitHub REST API v3. It allows you to manage repositories, issues, user profiles, and more, all with a clean, object-oriented interface[1][2][3][5].
- It supports all the operations you'll need for your export feature, including creating issues, updating them, and managing references for bidirectional linking.

### Key Features

- **Authentication**: Easily authenticate with a personal access token.
- **Issue Management**: Create, update, and link issues programmatically.
- **Repository Access**: Fetch and manipulate repository data.
- **User/Org Support**: Works for both user and organization repos.
- **Pagination & Rate Limiting**: Handles these for you.

### Example Usage

```python
from github import Github, Auth

# Authenticate with a personal access token
auth = Auth.Token("your_github_token")
g = Github(auth=auth)

# Get the repo
repo = g.get_repo("owner/repo_name")

# Create a new issue
issue = repo.create_issue(
    title="Exported from Task Master",
    body="This issue was created by Task Master. [Task ID: 123]"
)

# Add a comment or update the issue to include a backlink
issue.create_comment("Linked to Task Master task #123")
```

[3][5]

## Why Not Use Raw Requests?

- While you *can* use the `requests` library to call the GitHub API directly[4], PyGithub abstracts away authentication, error handling, and pagination, making your code much cleaner and less error-prone[2][5].

## Integration Tips

- Store your GitHub token securely (e.g., in environment variables).
- Use PyGithub's objects to fetch, create, and update issues.
- For bidirectional linking, store the GitHub issue URL in your task data and the Task Master task ID in the GitHub issue body or as a label/comment.
- PyGithub is actively maintained and widely used, so you'll find plenty of support and documentation.

## Installation

```bash
pip install PyGithub
```

[3]

## Summary Table

| Library  | Language | API Coverage  | Ease of Use | Maintained | Docs/Support |
|----------|----------|---------------|-------------|------------|--------------|
| PyGithub | Python   | Full          | High        | Yes        | Excellent    |
| requests | Python   | Full (manual) | Medium      | Yes        | Good         |

**Use PyGithub for all GitHub API interactions in your export/import features.** It's the standard for Python projects and will make your implementation faster, safer, and more maintainable[1][2][3][5].

---

## Follow-up 1

**Question:** we don't want to use py github because we have a node project

**Response:**

HELL YEAH PARTNER

For a Node.js project, you should use a Node-native library to interact with the GitHub API instead of PyGithub, which is Python-only. The industry standard for Node.js is **Octokit**, the official GitHub SDK for JavaScript/TypeScript[1][2].

## Why Use Octokit?

- **Official & Complete**: Octokit is maintained by GitHub and covers all REST and GraphQL API endpoints, including issues, repositories, users, and more[1][2].
- **Universal**: Works in Node.js, browsers, and Deno environments[2].
- **Typed & Tested**: Comes with TypeScript declarations and 100% test coverage[2].
- **Authentication**: Supports personal access tokens, GitHub Apps, OAuth, and more.
- **Pagination & Error Handling**: Handles pagination and common error scenarios out of the box.
- **Extendable**: Supports plugins and custom hooks for advanced use cases[2].

## How to Use Octokit for GitHub Issue Export

Install Octokit in your Node.js project:

```bash
npm install @octokit/rest
```

Basic usage for creating and linking issues (wrapped in an async function, since `await` cannot appear at the top level of a CommonJS module):

```js
const { Octokit } = require("@octokit/rest");

// Authenticate with a personal access token
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

async function exportTask() {
  // Create a new issue
  const { data: issue } = await octokit.issues.create({
    owner: "owner",
    repo: "repo_name",
    title: "Exported from Task Master",
    body: "This issue was created by Task Master. [Task ID: 123]"
  });

  // Add a comment for bidirectional linking
  await octokit.issues.createComment({
    owner: "owner",
    repo: "repo_name",
    issue_number: issue.number,
    body: "Linked to Task Master task #123"
  });
}
```

## Alternatives

- **node-github**: An older library, less maintained than Octokit, but still functional for basic API access[5].
- **Raw HTTP Requests**: You can use `node-fetch` or `axios` to call the GitHub API directly, but you'll need to handle authentication, pagination, and error handling yourself[3]. Octokit abstracts all of this for you.

## Summary Table

| Library     | Language | API Coverage  | Maintained | TypeScript | Docs/Support |
|-------------|----------|---------------|------------|------------|--------------|
| Octokit     | Node.js  | Full          | Yes        | Yes        | Excellent    |
| node-github | Node.js  | Full          | No         | No         | Limited      |
| Raw HTTP    | Node.js  | Full (manual) | N/A        | N/A        | N/A          |

**Recommendation:** Use Octokit for all GitHub API interactions in your Node.js project. It's the official, most robust, and best-supported option for your use case[1][2].

---

*Generated by Task Master Research Command*
*Timestamp: 2025-06-14T21:20:00.561Z*
@@ -0,0 +1,76 @@
---
title: Research Session
query: "test save functionality"
date: 6/14/2025
time: 5:50:23 PM
timestamp: 2025-06-14T21:50:23.797Z
exchanges: 1
---

# Research Session

## Initial Query

**Question:** test save functionality

**Response:**

## Testing Save Functionality in a Task Management System

**Core Steps for Testing Save Functionality**

- Create or update a task in the system using the CLI or UI.
- Trigger the save operation (e.g., by issuing a command or clicking a save button).
- Verify that the data is correctly written to the persistent storage (e.g., `tasks.json`).
- Reload or re-query the data to ensure the saved changes persist and are accurately reflected.
- Validate that all required fields and relationships (such as dependencies and subtasks) are preserved and correctly structured.

**Key Test Scenarios**

- **Basic Save:** Add a new task and confirm it appears in `tasks.json` with all fields populated.
- **Update Save:** Modify an existing task (e.g., change status, description, or priority) and verify the changes are saved.
- **Subtask and Dependency Preservation:** Update a parent task and ensure that completed subtasks and dependency links remain intact, especially after using commands like `updateTask`.
- **Batch Operations:** Perform bulk updates or additions and confirm all affected tasks are saved correctly, with no data loss or corruption.
- **Validation:** Attempt to save invalid data and ensure the system rejects it or provides meaningful error messages, confirming validation logic is enforced.

**Recommended Practices**

- Use automated tests to simulate save operations for various scenarios, including edge cases (e.g., circular dependencies, missing required fields).
- After each save, reload the data and compare it to the expected state to ensure integrity.
- Log all save operations and errors for traceability and debugging.
- Regularly back up the `tasks.json` file to prevent data loss during testing and development.

**Relevant Features from Industry Standards**

- Task management systems typically provide robust save and update mechanisms, ensuring that all task attributes (including dependencies, subtasks, and comments) are consistently maintained[1][2][5].
- Validation and error handling are critical to prevent corrupt or incomplete saves, especially when handling complex relationships or batch operations[1][5].
- User feedback (such as success messages or error alerts) is important for confirming save actions and guiding users in case of issues[1][2].

**Example Test Case**

1. Create a task with subtasks and dependencies.
2. Mark some subtasks as complete.
3. Use the `updateTask` command to modify the parent task.
4. Save and reload the data.
5. Assert that:
   - The parent task reflects the updates.
   - Completed subtasks remain marked as complete.
   - Dependencies are unchanged and valid.
|
**Summary Table: Save Functionality Test Aspects**

| Aspect | What to Test | Expected Outcome |
|-----------------------|---------------------------------------------------|-----------------------------------------|
| Basic Save | Add/update task, save, reload | Data persists and matches expectations |
| Subtask Preservation | Update parent, check subtask status | Completed subtasks remain complete |
| Dependency Handling | Save with dependencies, reload | Dependencies are intact and valid |
| Batch Operations | Bulk save/update, reload | All changes persist, no data loss |
| Validation | Save invalid data | Error is raised, data not corrupted |

These steps and scenarios ensure that the save functionality is robust, reliable, and meets the requirements of a modern task management system[1][2][5].

---

*Generated by Task Master Research Command*

*Timestamp: 2025-06-14T21:50:23.797Z*
@@ -0,0 +1,81 @@

---
title: Research Session
query: "test the fix for duplicate saves - final test"
date: 6/14/2025
time: 6:10:00 PM
timestamp: 2025-06-14T22:10:00.394Z
exchanges: 1
---

# Research Session

## Initial Query

**Question:** test the fix for duplicate saves - final test

**Response:**

## Steps to Test the Fix for Duplicate Saves

**1. Prepare a Clean Test Environment**

- Ensure the `tasks.json` file and any related data stores are in a known, clean state with no pre-existing duplicate entries.
- Back up the current `tasks.json` in case rollback is needed.

**2. Define Test Scenarios**

- Attempt to save a new task with unique data.
- Attempt to save a task with the same ID or unique identifier as an existing task (simulate a duplicate).
- Attempt to save a task with the same title or content but a different ID (to test if the fix is based on ID or content).
- Simultaneously trigger multiple save operations (if the system supports concurrency) to test race conditions.

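The scenarios above mostly reduce to checking that no two entries share the duplicate key. A small helper like the following could drive those checks; `findDuplicateIds` is hypothetical, and the task shape is illustrative rather than Task Master's actual schema.

```javascript
// Hypothetical helper: detect duplicate task IDs before writing tasks.json.
function findDuplicateIds(tasks) {
  const seen = new Set();
  const dupes = new Set();
  for (const task of tasks) {
    if (seen.has(task.id)) dupes.add(task.id);
    seen.add(task.id);
  }
  return [...dupes];
}

// Scenario: the same ID saved twice is flagged; unique IDs pass clean.
const tasks = [
  { id: 1, title: 'Set up CI' },
  { id: 2, title: 'Write docs' },
  { id: 1, title: 'Set up CI' }, // duplicate save
];
console.log(findDuplicateIds(tasks)); // → [ 1 ]
```

The same pattern works for title-based detection by keying the `Set` on a normalized title (trimmed, lowercased) instead of `id`, which also covers the whitespace and case-sensitivity edge cases below.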
**3. Execute Manual and Automated Tests**

- Use the application's UI or API to create and save tasks as per the scenarios above.
- For each save operation, check the `tasks.json` file or the data store to verify:
   - No duplicate entries are created.
   - Existing tasks are not overwritten unless intended (e.g., for updates).
   - The system returns appropriate error messages or warnings when a duplicate save is attempted.

**4. Validate System Behavior**

- Confirm that after all test operations, each task in `tasks.json` is unique according to the defined criteria (ID, title, or other unique fields).
- If the system is designed to merge duplicates, verify that merging occurs as expected.
- If the system is designed to reject duplicates, ensure that the rejection is clear and no duplicate is present.

**5. Test Edge Cases**

- Attempt to save tasks with minor variations (e.g., whitespace differences, case sensitivity) to ensure the duplicate detection logic is robust.
- Test with large numbers of tasks to ensure performance and correctness at scale.

**6. Review Logs and Error Handling**

- Check application logs for any errors or warnings related to duplicate saves.
- Ensure that error handling is user-friendly and provides actionable feedback.

**7. Regression Testing**

- Run the full suite of task operations (create, update, delete) to ensure the fix for duplicate saves does not introduce new issues elsewhere in the system.

## Additional Best Practices

- Communicate test results with the team to confirm the fix is effective and to prevent future regressions[1][2][5].
- Consider implementing automated duplicate detection as part of the save operation to prevent future issues[2][3].
- Document the test cases and results for future reference and audits.

## Example Test Case Table

| Test Scenario | Expected Result | Actual Result | Pass/Fail |
|--------------------------------------|-------------------------------|---------------|-----------|
| Save unique task | Task saved | | |
| Save duplicate task (same ID) | Duplicate rejected/merged | | |
| Save duplicate task (same title) | Duplicate rejected/merged | | |
| Simultaneous saves (race condition) | Only one unique task present | | |
| Save with minor variations | No false positives/negatives | | |

Fill in the "Actual Result" and "Pass/Fail" columns during testing.

## Action Items

- Complete all test scenarios above.
- Document any issues found and retest after fixes.
- Confirm with stakeholders before closing the issue.

---

*Generated by Task Master Research Command*

*Timestamp: 2025-06-14T22:10:00.394Z*
@@ -0,0 +1,153 @@

---
title: Research Session
query: "do we need to add new commands or can we just weaponize the gh cli tool"
date: 2025-08-01
time: 2:27:52 p.m.
timestamp: 2025-08-01T11:27:52.249Z
exchanges: 3
---

# Research Session

## Initial Query

**Question:** do we need to add new commands or can we just weaponize the gh cli tool

**Response:**

For the implementation of the `task-master git` command suite to automate git workflows (Task 97), it is possible to leverage the existing **GitHub CLI (`gh`) tool** rather than creating entirely new git commands from scratch. The GitHub CLI provides a rich set of commands that cover many GitHub-related operations such as branch management, pull request handling, workflow triggering, and release management, which can be scripted and integrated into your CLI tool[1][2][3][4].

### Key Points Supporting Weaponizing GitHub CLI

- **Comprehensive GitHub Operations**: The `gh` CLI supports creating branches, managing pull requests, releases, and tags, and even triggering and monitoring GitHub Actions workflows. This aligns well with the requirements of automated branch management, commit generation, and PR description generation in Task 97[1][3].
- **Scripting and Automation Friendly**: GitHub CLI commands can be scripted and integrated into custom workflows or CLI tools. You can create aliases or wrap `gh` commands inside your own CLI commands to provide a seamless user experience while relying on the battle-tested GitHub CLI for underlying operations[1][3].
- **Authentication and Permissions**: `gh` uses environment variables like `GH_TOKEN` or `GITHUB_TOKEN` for authentication, which can be securely managed in CI/CD pipelines or local environments. This simplifies integration without needing to implement custom authentication logic[3][4].
- **Extensibility**: If some specific git workflow features are not directly supported by `gh`, you can extend it with custom scripts or combine it with native git commands. This hybrid approach reduces the need to reinvent core git operations while allowing customization[1].

### Considerations for Your Project

- Since Task 97 requires **automated branch management** (branch creation, validation, switching, cleanup), **intelligent commit generation**, and **PR description generation**, you can implement these by invoking `gh` commands programmatically within your `task-master git` commands.
- For example, branch creation and switching can be done with native git commands wrapped in your CLI, while PR creation and management can be handled via `gh pr create`, `gh pr merge`, etc.
- You may still need to implement some **custom logic** for commit type detection, branch name validation, and PR description generation based on your task metadata, but the underlying git and GitHub operations can be delegated to `gh`.
- This approach aligns with your existing CLI foundation (Task 2) and leverages a well-supported tool, reducing development effort and increasing reliability.

### Summary

You **do not necessarily need to add entirely new git commands** for Task 97. Instead, you can **weaponize the GitHub CLI (`gh`) tool** by wrapping and scripting its commands within your `task-master git` command suite. This will allow you to automate git workflows efficiently, maintain consistency, and leverage GitHub’s API capabilities without reinventing the wheel[1][2][3][4].

If there are gaps in `gh` functionality for your specific workflow needs, you can supplement with native git commands or custom scripts, but the core git and GitHub operations should be handled by `gh` for best results.

---

## Follow-up 1

**Question:** how does this relate to git worktrees? i hear that they get super complicated when trying to implement them.

**Response:**

Git worktrees allow you to have multiple working directories attached to the same Git repository, each checked out to different branches simultaneously. This enables parallel development workflows without the overhead of cloning the repository multiple times or constantly switching branches in a single working directory[2][3].

However, **implementing or managing git worktrees can get complicated** for several reasons:

- **Complexity for users and tooling:** Managing multiple active worktrees requires careful tracking of which directory corresponds to which branch. This can be confusing and error-prone, especially for novices or when many worktrees are in use simultaneously. Mistakes like committing to the wrong branch or working in the wrong directory are common risks[1].
- **Maintenance overhead:** Worktrees accumulate over time and need to be pruned or cleaned up regularly (`git worktree prune`) to avoid clutter and potential confusion. This adds an extra maintenance step to the workflow[1].
- **Syncing and update challenges:** Each worktree is a separate checkout and can become outdated if not regularly updated from the main repository. This can lead to merge conflicts or redundant work if branches diverge significantly[1].
- **Tooling and integration limitations:** Not all Git tools, scripts, or GUIs fully support worktrees or are optimized for them. This can cause inconsistencies or require additional custom tooling to manage worktrees effectively[1].
- **Performance considerations:** Managing multiple worktrees simultaneously can have performance implications, especially in very large repositories, due to the overhead of maintaining multiple checkouts and metadata[1].

In the context of your project, which involves automating git workflows and possibly wrapping or extending git/GitHub CLI commands, these complexities imply:

- If you consider supporting git worktrees in your `task-master git` command suite, you will need to handle the complexity of managing multiple directories and branches carefully.
- You might need to implement additional logic or tooling to track and manage worktrees, avoid user confusion, and ensure synchronization between them.
- Since your project already plans to leverage the GitHub CLI (`gh`) and native git commands, you could integrate worktree commands (`git worktree add`, `git worktree remove`, `git worktree list`) but should be prepared for the added complexity in workflow management and user documentation.
- Custom wrappers or managers (like the one described in [3]) can help reduce friction by automating worktree creation, switching, and cleanup, which might be a useful approach to consider.

Overall, **git worktrees provide powerful parallel branch management but introduce complexity in implementation and user experience** that must be carefully managed through tooling, automation, and clear documentation[1][3][4].

---

**Summary Table of Git Worktree Challenges and Considerations**

| Aspect | Description | Implication for Project |
|-------------------------|------------------------------------------------------|----------------------------------------------------------|
| User Complexity | Confusing multiple directories and branches | Need clear UI/UX and safeguards |
| Maintenance Overhead | Requires pruning and cleanup of stale worktrees | Automate cleanup or provide commands |
| Syncing Issues | Worktrees can become outdated or diverge | Implement update checks and sync workflows |
| Tooling Support | Limited support in some Git tools and GUIs | May require custom tooling or wrappers |
| Performance | Overhead in large repos with many worktrees | Monitor and optimize performance |
| Workflow Automation | Manual worktree management is tedious | Build custom managers or wrappers to simplify usage |

---

Given your project’s focus on automating git workflows and leveraging existing CLI tools, **implementing git worktree support is feasible but will require careful design to handle these complexities effectively**. You may want to start with basic worktree commands and incrementally add automation and safeguards as needed.

---

## Follow-up 2

**Question:** since we already have a way to have separated task lists per branch, couldn't we do the same thing for worktrees? each taskmaster list (tag) would have its own worktree based on the name of the tag?

**Response:**

Yes, you can apply a similar approach used for separated task lists per branch to git worktrees by associating each taskmaster list (tag) with its own git worktree named after the tag. This means each tag would correspond to a dedicated worktree directory checked out to a branch or commit related to that tag, enabling isolated parallel work environments for each task list.

### How This Maps to Git Worktrees

- **Git worktrees** allow multiple working directories linked to the same repository, each checked out to a different branch or commit[1][2]. This lets you work on multiple branches simultaneously without switching contexts in a single directory.
- When you create a worktree with `git worktree add <path> [<branch>]`, Git creates a new linked working tree at `<path>`, optionally checking out a specified branch or creating a new one[1][2]. Naming the worktree directory after the tag (taskmaster list) is straightforward and semantically clear.
- This setup mirrors your existing branch-based task list separation but extends it to physical directories, which can improve workflow isolation and reduce context switching.

### Benefits for Your Project

- **Isolated environments per task list/tag:** Each tag’s worktree can have its own checked-out branch, allowing independent development, testing, or review without interference.
- **Parallel workflows:** Developers can switch between worktrees (directories) instead of switching branches in one directory, reducing errors and improving productivity.
- **Automation potential:** Your `task-master git` commands can automate creating, switching, and cleaning up these worktrees based on tags, similar to how you manage branches and task lists.

### Challenges and Considerations

- **Complexity in management:** Git worktrees add complexity in tracking multiple directories and ensuring users operate in the correct one[1]. Your CLI should provide clear commands and feedback to avoid confusion.
- **Maintenance overhead:** Worktrees need pruning and cleanup (`git worktree prune` or `git worktree remove`) to avoid clutter[1]. Automating cleanup in your tool would be beneficial.
- **Syncing and updates:** Each worktree is a separate checkout and can become outdated. Your automation should include commands to update or rebase worktrees as needed.
- **Tooling support:** Not all Git tools or scripts fully support worktrees, so your CLI wrappers should handle edge cases and provide a consistent experience[1][3].

### Practical Implementation Ideas

- When a new tag (taskmaster list) is created, run `git worktree add` to create a new worktree directory named after the tag.
- Store metadata linking tags to worktree paths in your taskmaster configuration to track and manage them.
- Provide commands like `task-master worktree list`, `task-master worktree add <tag>`, and `task-master worktree remove <tag>` to manage these worktrees.
- Integrate with the GitHub CLI (`gh`) and native git commands to automate branch creation, PR handling, and syncing within each worktree.
- Consider drawing inspiration from existing tools like [wt](https://github.com/taecontrol/wt) that automate worktree environment setup and teardown[3].

### Summary

Using git worktrees per taskmaster tag is a feasible and potentially powerful extension of your current branch-based task list separation. It offers isolated, parallel working directories aligned with your tags, improving workflow clarity and reducing branch-switching overhead. However, it requires careful tooling and automation to manage complexity, maintenance, and synchronization effectively. Given your project’s focus on automating git workflows and leveraging CLI tools, implementing this feature incrementally, with robust user feedback and cleanup commands, would be a practical approach.

---

*Generated by Task Master Research Command*

*Timestamp: 2025-08-01T11:27:52.249Z*
471
.taskmaster/docs/task-template-importing-prd.txt
Normal file
@@ -0,0 +1,471 @@

# Task Template Importing System - Product Requirements Document

<context>

# Overview

The Task Template Importing system enables seamless integration of external task templates into the Task Master CLI through automatic file discovery. This system allows users to drop task template files into the tasks directory and immediately access them as new tag contexts without manual import commands or configuration. The solution addresses the need for multi-project task management, team collaboration through shared templates, and clean separation between permanent tasks and temporary project contexts.

# Core Features

## Silent Task Template Discovery
- **What it does**: Automatically scans for `tasks_*.json` files in the tasks directory during tag operations
- **Why it's important**: Eliminates friction in adding new task contexts and enables zero-configuration workflow
- **How it works**: File pattern matching extracts tag names from filenames and validates against internal tag keys

## External Tag Resolution System
- **What it does**: Provides fallback mechanism to external files when tags are not found in main tasks.json
- **Why it's important**: Maintains clean separation between core tasks and project-specific templates
- **How it works**: Tag resolution logic checks external files as secondary source while preserving main file precedence

## Read-Only External Tag Access
- **What it does**: Allows viewing and switching to external tags while preventing modifications
- **Why it's important**: Protects template integrity and prevents accidental changes to shared templates
- **How it works**: All task modifications route to main tasks.json regardless of current tag context

## Tag Precedence Management
- **What it does**: Ensures main tasks.json tags override external files with same tag names
- **Why it's important**: Prevents conflicts and maintains data integrity
- **How it works**: Priority system where main file tags take precedence over external file tags

# User Experience

## User Personas
- **Solo Developer**: Manages multiple projects with different task contexts
- **Team Lead**: Shares standardized task templates across team members
- **Project Manager**: Organizes tasks by project phases or feature branches

## Key User Flows

### Template Addition Flow
1. User receives or creates a `tasks_projectname.json` file
2. User drops file into `.taskmaster/tasks/` directory
3. Tag becomes immediately available via `task-master use-tag projectname`
4. User can list, view, and switch to external tag without configuration

### Template Usage Flow
1. User runs `task-master tags` to see available tags including external ones
2. External tags display with `(imported)` indicator
3. User switches to external tag with `task-master use-tag projectname`
4. User can view tasks but modifications are routed to main tasks.json

## UI/UX Considerations
- External tags clearly marked with `(imported)` suffix in listings
- Visual indicators distinguish between main and external tags
- Error messages guide users when external files are malformed
- Read-only warnings when attempting to modify external tag contexts

</context>

<PRD>

# Technical Architecture

## System Components

1. **External File Discovery Engine**
   - File pattern scanner for `tasks_*.json` files
   - Tag name extraction from filenames using regex
   - Dynamic tag registry combining main and external sources
   - Error handling for malformed external files

2. **Enhanced Tag Resolution System**
   - Fallback mechanism to external files when tags not found in main tasks.json
   - Precedence management ensuring main file tags override external files
   - Read-only access enforcement for external tags
   - Tag metadata preservation during discovery operations

3. **Silent Discovery Integration**
   - Automatic scanning during tag-related operations
   - Seamless integration with existing tag management functions
   - Zero-configuration workflow requiring no manual import commands
   - Dynamic tag availability without restart requirements

## Data Models

### External Task File Structure
```json
{
  "meta": {
    "projectName": "External Project Name",
    "version": "1.0.0",
    "templateSource": "external",
    "createdAt": "ISO-8601 timestamp"
  },
  "tags": {
    "projectname": {
      "meta": {
        "name": "Project Name",
        "description": "Project description",
        "createdAt": "ISO-8601 timestamp"
      },
      "tasks": [
        // Array of task objects
      ]
    },
    "master": {
      // This section is ignored to prevent conflicts
    }
  }
}
```

### Enhanced Tag Registry Model
```json
{
  "mainTags": [
    {
      "name": "master",
      "source": "main",
      "taskCount": 150,
      "isActive": true
    }
  ],
  "externalTags": [
    {
      "name": "projectname",
      "source": "external",
      "filename": "tasks_projectname.json",
      "taskCount": 25,
      "isReadOnly": true
    }
  ]
}
```

## APIs and Integrations
1. **File System Discovery API**
   - Directory scanning with pattern matching
   - JSON file validation and parsing
   - Error handling for corrupted or malformed files
   - File modification time tracking for cache invalidation

2. **Enhanced Tag Management API**
   - `scanForExternalTaskFiles(projectRoot)` - Discover external template files
   - `getExternalTagsFromFiles(projectRoot)` - Extract tag names from external files
   - `readExternalTagData(projectRoot, tagName)` - Read specific external tag data
   - `getAvailableTags(projectRoot)` - Combined main and external tag listing

3. **Tag Resolution Enhancement**
   - Modified `readJSON()` with external file fallback
   - Enhanced `tags()` function with external tag display
   - Updated `useTag()` function supporting external tag switching
   - Read-only enforcement for external tag operations

## Infrastructure Requirements
1. **File System Access**
   - Read permissions for tasks directory
   - JSON parsing capabilities
   - Pattern matching and regex support
   - Error handling for file system operations

2. **Backward Compatibility**
   - Existing tag operations continue unchanged
   - Main tasks.json structure preserved
   - No breaking changes to current workflows
   - Graceful degradation when external files unavailable

# Development Roadmap

## Phase 1: Core External File Discovery (Foundation)
1. **External File Scanner Implementation**
   - Create `scanForExternalTaskFiles()` function in utils.js
   - Implement file pattern matching for `tasks_*.json` files
   - Add error handling for file system access issues
   - Test with various filename patterns and edge cases

2. **Tag Name Extraction System**
   - Implement `getExternalTagsFromFiles()` function
   - Create regex pattern for extracting tag names from filenames
   - Add validation to ensure tag names match internal tag key format
   - Handle special characters and invalid filename patterns

3. **External Tag Data Reader**
   - Create `readExternalTagData()` function
   - Implement JSON parsing with error handling
   - Add validation for required tag structure
   - Ignore 'master' key in external files to prevent conflicts

## Phase 2: Tag Resolution Enhancement (Core Integration)
1. **Enhanced Tag Registry**
   - Implement `getAvailableTags()` function combining main and external sources
   - Create tag metadata structure including source information
   - Add deduplication logic prioritizing main tags over external
   - Implement caching mechanism for performance optimization

2. **Modified readJSON Function**
   - Add external file fallback when tag not found in main tasks.json
   - Maintain precedence rule: main tasks.json overrides external files
   - Preserve existing error handling and validation patterns
   - Ensure read-only access for external tags

3. **Tag Listing Enhancement**
   - Update `tags()` function to display external tags with `(imported)` indicator
   - Show external tag metadata and task counts
   - Maintain current tag highlighting and sorting functionality
   - Add visual distinction between main and external tags

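The deduplication and precedence rules from Phase 2 can be illustrated with a small merge helper. The tag objects mirror the Enhanced Tag Registry model above; the in-memory merge function itself is a sketch, not the planned `getAvailableTags()` signature (which takes a `projectRoot`).

```javascript
// Sketch: merge main and external tags, with main-file tags taking precedence.
function mergeTagSources(mainTags, externalTags) {
  const byName = new Map();
  // Insert external tags first so main tags overwrite same-name duplicates.
  for (const tag of externalTags) {
    byName.set(tag.name, { ...tag, source: 'external', isReadOnly: true });
  }
  for (const tag of mainTags) {
    byName.set(tag.name, { ...tag, source: 'main' });
  }
  return [...byName.values()];
}

const merged = mergeTagSources(
  [{ name: 'master', taskCount: 150 }],
  [{ name: 'master', taskCount: 10 }, { name: 'projectname', taskCount: 25 }],
);
console.log(merged.map((t) => `${t.name}:${t.source}`));
// → [ 'master:main', 'projectname:external' ]
```

The same precedence check is what the modified `readJSON()` fallback would apply: resolve the tag from the main file first, and only consult external files when that lookup fails.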
## Phase 3: User Interface Integration (User Experience)
1. **Tag Switching Enhancement**
   - Update `useTag()` function to support external tag switching
   - Add read-only warnings when switching to external tags
   - Update state.json with external tag context information
   - Maintain current tag switching behavior for main tags

2. **Error Handling and User Feedback**
   - Implement comprehensive error messages for malformed external files
   - Add user guidance for proper external file structure
   - Create warnings for read-only operations on external tags
   - Ensure graceful degradation when external files are corrupted

3. **Documentation and Help Integration**
   - Update command help text to include external tag information
   - Add examples of external file structure and usage
   - Create troubleshooting guide for common external file issues
   - Document file naming conventions and best practices

## Phase 4: Advanced Features and Optimization (Enhancement)
1. **Performance Optimization**
   - Implement file modification time caching
   - Add lazy loading for external tag data
   - Optimize file scanning for directories with many files
   - Create efficient tag resolution caching mechanism

2. **Advanced External File Features**
   - Support for nested external file directories
   - Batch external file validation and reporting
   - External file metadata display and management
   - Integration with version control ignore patterns

3. **Team Collaboration Features**
   - Shared external file validation
   - External file conflict detection and resolution
   - Team template sharing guidelines and documentation
   - Integration with git workflows for template management

# Logical Dependency Chain

## Foundation Layer (Must Be Built First)
1. **External File Scanner**
   - Core requirement for all other functionality
   - Provides the discovery mechanism for external template files
   - Must handle file system access and pattern matching reliably

2. **Tag Name Extraction**
   - Depends on file scanner functionality
   - Required for identifying available external tags
   - Must validate tag names against internal format requirements

3. **External Tag Data Reader**
   - Depends on tag name extraction
   - Provides access to external tag content
   - Must handle JSON parsing and validation safely

## Integration Layer (Builds on Foundation)
4. **Enhanced Tag Registry**
   - Depends on all foundation components
   - Combines main and external tag sources
   - Required for unified tag management across the system

5. **Modified readJSON Function**
   - Depends on enhanced tag registry
   - Provides fallback mechanism for tag resolution
   - Critical for maintaining backward compatibility

6. **Tag Listing Enhancement**
   - Depends on enhanced tag registry
   - Provides user visibility into external tags
   - Required for user discovery of available templates

## User Experience Layer (Completes the Feature)
|
||||||
|
7. **Tag Switching Enhancement**
|
||||||
|
- Depends on modified readJSON and tag listing
|
||||||
|
- Enables user interaction with external tags
|
||||||
|
- Must enforce read-only access properly
|
||||||
|
|
||||||
|
8. **Error Handling and User Feedback**
|
||||||
|
- Can be developed in parallel with other UX components
|
||||||
|
- Enhances reliability and user experience
|
||||||
|
- Should be integrated throughout development process
|
||||||
|
|
||||||
|
9. **Documentation and Help Integration**
|
||||||
|
- Should be developed alongside implementation
|
||||||
|
- Required for user adoption and proper usage
|
||||||
|
- Can be completed in parallel with advanced features
|
||||||
|
|
||||||
|
## Optimization Layer (Performance and Advanced Features)
|
||||||
|
10. **Performance Optimization**
|
||||||
|
- Can be developed after core functionality is stable
|
||||||
|
- Improves user experience with large numbers of external files
|
||||||
|
- Not blocking for initial release
|
||||||
|
|
||||||
|
11. **Advanced External File Features**
|
||||||
|
- Can be developed independently after core features
|
||||||
|
- Enhances power user workflows
|
||||||
|
- Optional for initial release
|
||||||
|
|
||||||
|
12. **Team Collaboration Features**
|
||||||
|
- Depends on stable core functionality
|
||||||
|
- Enhances team workflows and template sharing
|
||||||
|
- Can be prioritized based on user feedback
|
||||||
|
|
||||||
|
# Risks and Mitigations

## Technical Challenges

### File System Performance

**Risk**: Scanning for external files on every tag operation could impact performance with large directories.

**Mitigation**:

- Implement file modification time caching to avoid unnecessary rescans
- Use lazy loading for external tag data - only read when accessed
- Add configurable limits on the number of external files to scan
- Optimize file pattern matching with efficient regex patterns

### External File Corruption

**Risk**: Malformed or corrupted external JSON files could break tag operations.

**Mitigation**:

- Implement robust JSON parsing with comprehensive error handling
- Add file validation before attempting to parse external files
- Gracefully skip corrupted files and continue with valid ones
- Provide clear error messages guiding users to fix malformed files

### Tag Name Conflicts

**Risk**: External files might contain tag names that conflict with main tasks.json tags.

**Mitigation**:

- Implement strict precedence rule: main tasks.json always overrides external files
- Add warnings when external tags are ignored due to conflicts
- Document naming conventions to avoid common conflicts
- Provide validation tools to check for potential conflicts
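
The precedence rule can be sketched as a two-pass merge in which main tags are applied last. This is an illustrative sketch; `buildTagRegistry` and its return shape are assumptions, not the actual implementation:

```typescript
type TagRegistry = Record<string, { source: 'main' | 'external' }>;

// Merge external tags first, then let main tasks.json entries override them.
// Shadowed external tags are collected so callers can emit the warning.
function buildTagRegistry(
  mainTags: string[],
  externalTags: string[]
): { registry: TagRegistry; ignored: string[] } {
  const registry: TagRegistry = {};
  const ignored: string[] = [];
  for (const tag of externalTags) {
    registry[tag] = { source: 'external' };
  }
  for (const tag of mainTags) {
    if (registry[tag]) ignored.push(tag); // external copy is shadowed
    registry[tag] = { source: 'main' };
  }
  return { registry, ignored };
}
```

Because the main pass overwrites whatever the external pass inserted, the `ignored` list doubles as the source for the conflict warnings described above.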

## MVP Definition

### Core Feature Scope

**Risk**: Including too many advanced features could delay the core functionality.

**Mitigation**:

- Define the MVP as basic external file discovery + tag switching
- Focus on the silent discovery mechanism as the primary value proposition
- Defer advanced features like nested directories and batch operations
- Ensure each phase delivers complete, usable functionality

### User Experience Complexity

**Risk**: The read-only nature of external tags might confuse users.

**Mitigation**:

- Provide clear visual indicators for external tags in all interfaces
- Add explicit warnings when users attempt to modify external tag contexts
- Document the read-only behavior and its rationale clearly
- Consider a future enhancement for external tag modification workflows

### Backward Compatibility

**Risk**: Changes to tag resolution logic might break existing workflows.

**Mitigation**:

- Maintain existing tag operations unchanged for main tasks.json
- Add external file support as an enhancement, not a replacement
- Test thoroughly with existing task structures and workflows
- Provide a migration path if any breaking changes are necessary

## Resource Constraints

### Development Complexity

**Risk**: Integration with the existing tag management system could be complex.

**Mitigation**:

- Phase implementation to minimize the risk of breaking existing functionality
- Create a comprehensive test suite covering both main and external tag scenarios
- Use feature flags to enable/disable external file support during development
- Implement thorough error handling to prevent system failures

### File System Dependencies

**Risk**: Different operating systems might handle file operations differently.

**Mitigation**:

- Use Node.js built-in file system APIs for cross-platform compatibility
- Test on multiple operating systems (Windows, macOS, Linux)
- Handle file path separators and naming conventions properly
- Add fallback mechanisms for file system access issues

### User Adoption

**Risk**: Users might not understand or adopt the external file template system.

**Mitigation**:

- Create clear documentation with practical examples
- Provide sample external template files for common use cases
- Integrate help and guidance directly into the CLI interface
- Gather user feedback early and iterate on the user experience

# Appendix

## External File Naming Convention

### Filename Pattern

- **Format**: `tasks_[tagname].json`
- **Examples**: `tasks_feature-auth.json`, `tasks_v2-migration.json`, `tasks_project-alpha.json`
- **Validation**: Tag name must match internal tag key format (alphanumeric, hyphens, underscores)
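
A sketch of that validation in code (the regex constant and `extractTagName` helper are illustrative, not the shipped implementation):

```typescript
// Matches tasks_[tagname].json where tagname is alphanumeric plus hyphens/underscores.
const EXTERNAL_FILE_PATTERN = /^tasks_([a-zA-Z0-9_-]+)\.json$/;

// Returns the tag name for a valid external filename, or null for
// anything that does not match the convention (including tasks.json itself).
function extractTagName(filename: string): string | null {
  const match = EXTERNAL_FILE_PATTERN.exec(filename);
  return match ? match[1] : null;
}
```

Anchoring the pattern at both ends ensures that the main `tasks.json` and files with invalid characters in the tag portion are rejected rather than partially matched.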

### File Structure Requirements

```json
{
  "meta": {
    "projectName": "Required: Human-readable project name",
    "version": "Optional: Template version",
    "templateSource": "Optional: Source identifier",
    "createdAt": "Optional: ISO-8601 timestamp"
  },
  "tags": {
    "[tagname]": {
      "meta": {
        "name": "Required: Tag display name",
        "description": "Optional: Tag description",
        "createdAt": "Optional: ISO-8601 timestamp"
      },
      "tasks": [
        // Required: Array of task objects following standard task structure
      ]
    }
  }
}
```

## Implementation Functions Specification

### Core Discovery Functions

```javascript
// Scan tasks directory for external template files
function scanForExternalTaskFiles(projectRoot) {
  // Returns: Array of external file paths
}

// Extract tag names from external filenames
function getExternalTagsFromFiles(projectRoot) {
  // Returns: Array of external tag names
}

// Read specific external tag data
function readExternalTagData(projectRoot, tagName) {
  // Returns: Tag data object or null if not found
}

// Get combined main and external tags
function getAvailableTags(projectRoot) {
  // Returns: Combined tag registry with metadata
}
```

### Integration Points

```javascript
// Enhanced readJSON with external fallback
function readJSON(projectRoot, tag = null) {
  // Modified to check external files when tag not found in main
}

// Enhanced tags listing with external indicators
function tags(projectRoot, options = {}) {
  // Modified to display external tags with (imported) suffix
}

// Enhanced tag switching with external support
function useTag(projectRoot, tagName) {
  // Modified to support switching to external tags (read-only)
}
```

## Error Handling Specifications

### File System Errors

- **ENOENT**: External file not found - gracefully skip and continue
- **EACCES**: Permission denied - warn user and continue with available files
- **EISDIR**: Directory instead of file - skip and continue scanning

### JSON Parsing Errors

- **SyntaxError**: Malformed JSON - skip file and log warning with filename
- **Missing required fields**: Skip file and provide specific error message
- **Invalid tag structure**: Skip file and guide user to correct format

### Tag Conflict Resolution

- **Duplicate tag names**: Main tasks.json takes precedence, log warning
- **Invalid tag names**: Skip external file and provide naming guidance
- **Master key in external**: Ignore master key, process other tags normally
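
The file system and parsing rules above boil down to "never let one bad file break the scan". A minimal sketch of that behavior (the function name and exact logging are assumptions):

```typescript
import { readFileSync } from 'node:fs';

// Returns parsed JSON, or null when the file is missing, unreadable,
// a directory, or malformed -- callers skip null results and continue.
function tryReadExternalFile(path: string): unknown | null {
  let raw: string;
  try {
    raw = readFileSync(path, 'utf8');
  } catch {
    return null; // ENOENT / EACCES / EISDIR: skip and continue scanning
  }
  try {
    return JSON.parse(raw);
  } catch {
    console.warn(`Skipping malformed external file: ${path}`);
    return null;
  }
}
```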

</PRD>

.taskmaster/docs/test-prd.txt (new file, 8 lines)
@@ -0,0 +1,8 @@

Simple Todo App PRD

Create a basic todo list application with the following features:

1. Add new todos
2. Mark todos as complete
3. Delete todos

That's it. Keep it simple.

.taskmaster/docs/tm-core-phase-1.txt (new file, 343 lines)
@@ -0,0 +1,343 @@
# Product Requirements Document: tm-core Package - Parse PRD Feature

## Project Overview

Create a TypeScript package named `tm-core` at `packages/tm-core` that implements parse-prd functionality using class-based architecture similar to the existing AI providers pattern.

## Design Patterns & Architecture

### Patterns to Apply

1. **Factory Pattern**: Use for `ProviderFactory` to create AI provider instances
2. **Strategy Pattern**: Use for `IAIProvider` implementations and `IStorage` implementations
3. **Facade Pattern**: Use for `TaskMasterCore` as the main API entry point
4. **Template Method Pattern**: Use for `BaseProvider` abstract class
5. **Dependency Injection**: Use throughout for testability (pass dependencies via constructor)
6. **Repository Pattern**: Use for `FileStorage` to abstract data persistence

### Naming Conventions

- **Files**: kebab-case (e.g., `task-parser.ts`, `file-storage.ts`)
- **Classes**: PascalCase (e.g., `TaskParser`, `FileStorage`)
- **Interfaces**: PascalCase with 'I' prefix (e.g., `IStorage`, `IAIProvider`)
- **Methods**: camelCase (e.g., `parsePRD`, `loadTasks`)
- **Constants**: UPPER_SNAKE_CASE (e.g., `DEFAULT_MODEL`)
- **Type aliases**: PascalCase (e.g., `TaskStatus`, `ParseOptions`)

## Exact Folder Structure Required

```
packages/tm-core/
├── src/
│   ├── index.ts
│   ├── types/
│   │   └── index.ts
│   ├── interfaces/
│   │   ├── index.ts                   # Barrel export
│   │   ├── storage.interface.ts
│   │   ├── ai-provider.interface.ts
│   │   └── configuration.interface.ts
│   ├── tasks/
│   │   ├── index.ts                   # Barrel export
│   │   └── task-parser.ts
│   ├── ai/
│   │   ├── index.ts                   # Barrel export
│   │   ├── base-provider.ts
│   │   ├── provider-factory.ts
│   │   ├── prompt-builder.ts
│   │   └── providers/
│   │       ├── index.ts               # Barrel export
│   │       ├── anthropic-provider.ts
│   │       ├── openai-provider.ts
│   │       └── google-provider.ts
│   ├── storage/
│   │   ├── index.ts                   # Barrel export
│   │   └── file-storage.ts
│   ├── config/
│   │   ├── index.ts                   # Barrel export
│   │   └── config-manager.ts
│   ├── utils/
│   │   ├── index.ts                   # Barrel export
│   │   └── id-generator.ts
│   └── errors/
│       ├── index.ts                   # Barrel export
│       └── task-master-error.ts
├── tests/
│   ├── task-parser.test.ts
│   ├── integration/
│   │   └── parse-prd.test.ts
│   └── mocks/
│       └── mock-provider.ts
├── package.json
├── tsconfig.json
├── tsup.config.js
└── jest.config.js
```

## Specific Implementation Requirements

### 1. Create types/index.ts

Define these exact TypeScript interfaces:

- `Task` interface with fields: id, title, description, status, priority, complexity, dependencies, subtasks, metadata, createdAt, updatedAt, source
- `Subtask` interface with fields: id, title, description, completed
- `TaskMetadata` interface with fields: parsedFrom, aiProvider, version, tags (optional)
- Type literals: `TaskStatus` = 'pending' | 'in-progress' | 'completed' | 'blocked'
- Type literals: `TaskPriority` = 'low' | 'medium' | 'high' | 'critical'
- Type literals: `TaskComplexity` = 'simple' | 'moderate' | 'complex'
- `ParseOptions` interface with fields: dryRun (optional), additionalContext (optional), tag (optional), maxTasks (optional)
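
A sketch of what `types/index.ts` could look like under these requirements (the PRD lists field names only, so the field types shown are assumptions):

```typescript
export type TaskStatus = 'pending' | 'in-progress' | 'completed' | 'blocked';
export type TaskPriority = 'low' | 'medium' | 'high' | 'critical';
export type TaskComplexity = 'simple' | 'moderate' | 'complex';

export interface Subtask {
  id: string;
  title: string;
  description: string;
  completed: boolean;
}

export interface TaskMetadata {
  parsedFrom: string;
  aiProvider: string;
  version: string;
  tags?: string[];
}

export interface Task {
  id: string;
  title: string;
  description: string;
  status: TaskStatus;
  priority: TaskPriority;
  complexity: TaskComplexity;
  dependencies: string[];
  subtasks: Subtask[];
  metadata: TaskMetadata;
  createdAt: string; // ISO-8601 timestamps assumed
  updatedAt: string;
  source: string;
}

export interface ParseOptions {
  dryRun?: boolean;
  additionalContext?: string;
  tag?: string;
  maxTasks?: number;
}
```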

### 2. Create interfaces/storage.interface.ts

Define `IStorage` interface with these exact methods:

- `loadTasks(tag?: string): Promise<Task[]>`
- `saveTasks(tasks: Task[], tag?: string): Promise<void>`
- `appendTasks(tasks: Task[], tag?: string): Promise<void>`
- `updateTask(id: string, task: Partial<Task>, tag?: string): Promise<void>`
- `deleteTask(id: string, tag?: string): Promise<void>`
- `exists(tag?: string): Promise<boolean>`

### 3. Create interfaces/ai-provider.interface.ts

Define `IAIProvider` interface with these exact methods:

- `generateCompletion(prompt: string, options?: AIOptions): Promise<string>`
- `calculateTokens(text: string): number`
- `getName(): string`
- `getModel(): string`

Define `AIOptions` interface with fields: temperature (optional), maxTokens (optional), systemPrompt (optional)

### 4. Create interfaces/configuration.interface.ts

Define `IConfiguration` interface with fields:

- `projectPath: string`
- `aiProvider: string`
- `apiKey?: string`
- `aiOptions?: AIOptions`
- `mainModel?: string`
- `researchModel?: string`
- `fallbackModel?: string`
- `tasksPath?: string`
- `enableTags?: boolean`

### 5. Create tasks/task-parser.ts

Create class `TaskParser` with:

- Constructor accepting `aiProvider: IAIProvider` and `config: IConfiguration`
- Private property `promptBuilder: PromptBuilder`
- Public method `parsePRD(prdPath: string, options: ParseOptions = {}): Promise<Task[]>`
- Private method `readPRD(prdPath: string): Promise<string>`
- Private method `extractTasks(aiResponse: string): Partial<Task>[]`
- Private method `enrichTasks(rawTasks: Partial<Task>[], prdPath: string): Task[]`
- Apply **Dependency Injection** pattern via constructor
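
The dependency-injection shape can be shown in miniature (a simplified stand-in: the provider interface is reduced to one method, and the real file-reading, prompt-building, and enrichment steps are omitted):

```typescript
// Minimal stand-in for IAIProvider, inlined so the sketch is self-contained.
interface MiniProvider {
  generateCompletion(prompt: string): Promise<string>;
}

class TaskParserSketch {
  // Dependency Injection: the provider arrives via the constructor,
  // so tests can pass a mock instead of a live AI client.
  constructor(private aiProvider: MiniProvider) {}

  async parsePRD(prdContent: string): Promise<{ title: string }[]> {
    const response = await this.aiProvider.generateCompletion(
      `Extract tasks as a JSON array of {"title": string}: ${prdContent}`
    );
    return this.extractTasks(response);
  }

  private extractTasks(aiResponse: string): { title: string }[] {
    return JSON.parse(aiResponse);
  }
}
```

This is exactly what the constructor-injection requirement buys: the MockProvider planned under Testing Requirements can be dropped in without touching the parser.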

### 6. Create ai/base-provider.ts

Copy existing base-provider.js and convert to TypeScript abstract class:

- Abstract class `BaseProvider` implementing `IAIProvider`
- Protected properties: `apiKey: string`, `model: string`
- Constructor accepting `apiKey: string` and `options: { model?: string }`
- Abstract methods matching IAIProvider interface
- Abstract method `getDefaultModel(): string`
- Apply **Template Method** pattern for common provider logic
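
The Template Method idea in miniature (class names carry a `Sketch` suffix to signal this is illustrative, not the real base class; the shared step shown is a simple prompt check):

```typescript
abstract class BaseProviderSketch {
  protected model: string;

  constructor(protected apiKey: string, options: { model?: string } = {}) {
    // Fall back to the subclass-supplied default model.
    this.model = options.model ?? this.getDefaultModel();
  }

  abstract getDefaultModel(): string;
  protected abstract callApi(prompt: string): Promise<string>;

  // Template method: shared validation lives here once; subclasses
  // only fill in the provider-specific callApi step.
  async generateCompletion(prompt: string): Promise<string> {
    if (!prompt.trim()) throw new Error('Empty prompt');
    return this.callApi(prompt);
  }
}

class EchoProviderSketch extends BaseProviderSketch {
  getDefaultModel(): string {
    return 'echo-1';
  }
  protected async callApi(prompt: string): Promise<string> {
    return `echo:${prompt}`;
  }
}
```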

### 7. Create ai/provider-factory.ts

Create class `ProviderFactory` with:

- Static method `create(config: { provider: string; apiKey?: string; model?: string }): Promise<IAIProvider>`
- Switch statement for providers: 'anthropic', 'openai', 'google'
- Dynamic imports for each provider
- Throw error for unknown providers
- Apply **Factory** pattern for creating provider instances

Example implementation structure:

```typescript
switch (provider.toLowerCase()) {
  case 'anthropic': {
    // Braces give the lexical declaration its own block scope.
    const { AnthropicProvider } = await import('./providers/anthropic-provider.js');
    return new AnthropicProvider(apiKey, { model });
  }
  default:
    throw new Error(`Unknown provider: ${provider}`);
}
```

### 8. Create ai/providers/anthropic-provider.ts

Create class `AnthropicProvider` extending `BaseProvider`:

- Import Anthropic SDK: `import { Anthropic } from '@anthropic-ai/sdk'`
- Private property `client: Anthropic`
- Implement all abstract methods from BaseProvider
- Default model: 'claude-3-sonnet-20240229'
- Handle API errors and wrap with meaningful messages

### 9. Create ai/providers/openai-provider.ts (placeholder)

Create class `OpenAIProvider` extending `BaseProvider`:

- Import OpenAI SDK when implemented
- For now, throw error: "OpenAI provider not yet implemented"

### 10. Create ai/providers/google-provider.ts (placeholder)

Create class `GoogleProvider` extending `BaseProvider`:

- Import Google Generative AI SDK when implemented
- For now, throw error: "Google provider not yet implemented"

### 11. Create ai/prompt-builder.ts

Create class `PromptBuilder` with:

- Method `buildParsePrompt(prdContent: string, options: ParseOptions = {}): string`
- Method `buildExpandPrompt(task: string, context?: string): string`
- Use template literals for prompt construction
- Include specific JSON format instructions in prompts
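
One possible shape for `buildParsePrompt` (the exact prompt wording is an assumption; only the template-literal approach and the explicit JSON format instruction come from the spec):

```typescript
class PromptBuilderSketch {
  // Template literal keeps the JSON format contract visible in one place.
  buildParsePrompt(prdContent: string, options: { maxTasks?: number } = {}): string {
    const limit = options.maxTasks ? `Generate at most ${options.maxTasks} tasks.` : '';
    return `You are a task planner. ${limit}
Respond ONLY with a JSON array of objects: {"title": string, "description": string}.

PRD:
${prdContent}`;
  }
}
```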

### 12. Create storage/file-storage.ts

Create class `FileStorage` implementing `IStorage`:

- Private property `basePath: string` set to `{projectPath}/.taskmaster`
- Constructor accepting `projectPath: string`
- Private method `getTasksPath(tag?: string): string` returning correct path based on tag
- Private method `ensureDirectory(dir: string): Promise<void>`
- Implement all IStorage methods
- Handle ENOENT errors by returning empty arrays
- Use JSON format with structure: `{ tasks: Task[], metadata: { version: string, lastModified: string } }`
- Apply **Repository** pattern for data access abstraction
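
The tag-based path resolution might look like this (the exact layout under `.taskmaster` is an assumption, modeled on the `tasks_[tagname].json` convention used elsewhere in these docs):

```typescript
import { join } from 'node:path';

// Assumed layout: untagged tasks in tasks/tasks.json, tagged task lists
// in tasks/tasks_{tag}.json, all under the .taskmaster base directory.
function getTasksPath(projectPath: string, tag?: string): string {
  const basePath = join(projectPath, '.taskmaster');
  const file = tag ? `tasks_${tag}.json` : 'tasks.json';
  return join(basePath, 'tasks', file);
}
```

Using `node:path`'s `join` keeps separators correct across operating systems, which is one of the cross-platform risks called out in the earlier PRD.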

### 13. Create config/config-manager.ts

Create class `ConfigManager`:

- Private property `config: IConfiguration`
- Constructor accepting `options: Partial<IConfiguration>`
- Use Zod for validation with schema matching IConfiguration
- Method `get<K extends keyof IConfiguration>(key: K): IConfiguration[K]`
- Method `getAll(): IConfiguration`
- Method `validate(): boolean`
- Default values: projectPath = process.cwd(), aiProvider = 'anthropic', enableTags = true
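
A sketch of the defaulting and the typed `get` accessor (a trimmed-down config and a plain defaulting step stand in for the full Zod schema so the example has no dependencies; the real class should validate with Zod as specified):

```typescript
interface MiniConfig {
  projectPath: string;
  aiProvider: string;
  enableTags: boolean;
}

class ConfigManagerSketch {
  private config: MiniConfig;

  constructor(options: Partial<MiniConfig> = {}) {
    // Defaults from the spec: cwd, 'anthropic', tags enabled.
    this.config = {
      projectPath: options.projectPath ?? process.cwd(),
      aiProvider: options.aiProvider ?? 'anthropic',
      enableTags: options.enableTags ?? true,
    };
  }

  // Typed accessor: the return type follows the requested key.
  get<K extends keyof MiniConfig>(key: K): MiniConfig[K] {
    return this.config[key];
  }

  getAll(): MiniConfig {
    return { ...this.config };
  }
}
```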

### 14. Create utils/id-generator.ts

Export functions:

- `generateTaskId(index: number = 0): string` returning format `task_{timestamp}_{index}_{random}`
- `generateSubtaskId(parentId: string, index: number = 0): string` returning format `{parentId}_sub_{index}_{random}`
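
A direct implementation of those two formats might be (the base-36 random-suffix scheme is an assumption; the spec fixes only the overall shape of the IDs):

```typescript
// task_{timestamp}_{index}_{random}
export function generateTaskId(index: number = 0): string {
  const random = Math.random().toString(36).slice(2, 8);
  return `task_${Date.now()}_${index}_${random}`;
}

// {parentId}_sub_{index}_{random}
export function generateSubtaskId(parentId: string, index: number = 0): string {
  const random = Math.random().toString(36).slice(2, 8);
  return `${parentId}_sub_${index}_${random}`;
}
```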

### 15. Create src/index.ts

Create main class `TaskMasterCore`:

- Private properties: `config: ConfigManager`, `storage: IStorage`, `aiProvider?: IAIProvider`, `parser?: TaskParser`
- Constructor accepting `options: Partial<IConfiguration>`
- Method `initialize(): Promise<void>` for lazy loading
- Method `parsePRD(prdPath: string, options: ParseOptions = {}): Promise<Task[]>`
- Method `getTasks(tag?: string): Promise<Task[]>`
- Apply **Facade** pattern to provide simple API over complex subsystems

Export:

- Class `TaskMasterCore`
- Function `createTaskMaster(options: Partial<IConfiguration>): TaskMasterCore`
- All types from './types'
- All interfaces from './interfaces/*'

Import statements should use kebab-case:

```typescript
import { TaskParser } from './tasks/task-parser';
import { FileStorage } from './storage/file-storage';
import { ConfigManager } from './config/config-manager';
import { ProviderFactory } from './ai/provider-factory';
```
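
The Facade plus lazy initialization can be shown in miniature (the real subsystems are replaced by a single stub so the sketch is self-contained; names with a `Sketch`/`Stub` suffix are illustrative):

```typescript
// Stub standing in for the ConfigManager / FileStorage / TaskParser subsystems.
class StubStorage {
  private tasks: string[] = [];
  async saveTasks(tasks: string[]): Promise<void> {
    this.tasks = tasks;
  }
  async loadTasks(): Promise<string[]> {
    return this.tasks;
  }
}

class TaskMasterCoreSketch {
  private storage?: StubStorage;

  // Lazy loading: heavy subsystems are created on first use, not in the constructor.
  private async initialize(): Promise<StubStorage> {
    if (!this.storage) this.storage = new StubStorage();
    return this.storage;
  }

  // Facade: callers see two simple methods, not the subsystem wiring.
  async addTasks(tasks: string[]): Promise<void> {
    const storage = await this.initialize();
    await storage.saveTasks(tasks);
  }

  async getTasks(): Promise<string[]> {
    const storage = await this.initialize();
    return storage.loadTasks();
  }
}

// Convenience factory mirroring the exported createTaskMaster().
function createTaskMaster(): TaskMasterCoreSketch {
  return new TaskMasterCoreSketch();
}
```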

### 16. Configure package.json

Create package.json with:

- name: "@task-master/core"
- version: "0.1.0"
- type: "module"
- main: "./dist/index.js"
- module: "./dist/index.mjs"
- types: "./dist/index.d.ts"
- exports map for proper ESM/CJS support
- scripts: build (tsup), dev (tsup --watch), test (jest), typecheck (tsc --noEmit)
- dependencies: zod@^3.23.8
- peerDependencies: @anthropic-ai/sdk, openai, @google/generative-ai
- devDependencies: typescript, tsup, jest, ts-jest, @types/node, @types/jest

### 17. Configure TypeScript

Create tsconfig.json with:

- target: "ES2022"
- module: "ESNext"
- strict: true (with all strict flags enabled)
- declaration: true
- outDir: "./dist"
- rootDir: "./src"

### 18. Configure tsup

Create tsup.config.js with:

- entry: ['src/index.ts']
- format: ['cjs', 'esm']
- dts: true
- sourcemap: true
- clean: true
- external: AI provider SDKs

### 19. Configure Jest

Create jest.config.js with:

- preset: 'ts-jest'
- testEnvironment: 'node'
- Coverage threshold: 80% for all metrics

## Build Process

1. Use tsup to compile TypeScript to both CommonJS and ESM
2. Generate .d.ts files for TypeScript consumers
3. Output to dist/ directory
4. Ensure tree-shaking works properly

## Testing Requirements

- Create unit tests for TaskParser in tests/task-parser.test.ts
- Create MockProvider class in tests/mocks/mock-provider.ts for testing without API calls
- Test error scenarios (file not found, invalid JSON, etc.)
- Create integration test in tests/integration/parse-prd.test.ts
- Follow kebab-case naming for all test files

## Success Criteria

- TypeScript compilation with zero errors
- No use of 'any' type
- All interfaces properly exported
- Compatible with existing tasks.json format
- Feature flag support via USE_TM_CORE environment variable

## Import/Export Conventions

- Use named exports for all classes and interfaces
- Use barrel exports (index.ts) in each directory
- Import types/interfaces with type-only imports: `import type { Task } from '../types'`
- Group imports in order: Node built-ins, external packages, internal packages, relative imports
- Use .js extension in import paths for ESM compatibility

## Error Handling Patterns

- Create custom error classes in `src/errors/` directory
- All public methods should catch and wrap errors with context
- Use error codes for different error types (e.g., 'FILE_NOT_FOUND', 'PARSE_ERROR')
- Never expose internal implementation details in error messages
- Log errors to console.error only in development mode
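
A sketch of an error class with codes and wrapped context, following the rules above (the `wrapError` helper and its message format are illustrative; only the `TaskMasterError` name and the code-based approach come from the spec):

```typescript
// Error codes keep handling logic switchable without string-matching messages.
export class TaskMasterError extends Error {
  constructor(
    message: string,
    public readonly code: string,
    public readonly details?: unknown
  ) {
    super(message);
    this.name = 'TaskMasterError';
  }
}

// Wrap a caught error with context, but keep the original (possibly
// internal) message out of the user-facing one.
function wrapError(err: unknown, code: string, context: string): TaskMasterError {
  return new TaskMasterError(`${context} failed`, code, err);
}
```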

## Barrel Exports Content

### interfaces/index.ts

```typescript
export type { IStorage } from './storage.interface';
export type { IAIProvider, AIOptions } from './ai-provider.interface';
export type { IConfiguration } from './configuration.interface';
```

### tasks/index.ts

```typescript
export { TaskParser } from './task-parser';
```

### ai/index.ts

```typescript
export { BaseProvider } from './base-provider';
export { ProviderFactory } from './provider-factory';
export { PromptBuilder } from './prompt-builder';
```

### ai/providers/index.ts

```typescript
export { AnthropicProvider } from './anthropic-provider';
export { OpenAIProvider } from './openai-provider';
export { GoogleProvider } from './google-provider';
```

### storage/index.ts

```typescript
export { FileStorage } from './file-storage';
```

### config/index.ts

```typescript
export { ConfigManager } from './config-manager';
```

### utils/index.ts

```typescript
export { generateTaskId, generateSubtaskId } from './id-generator';
```

### errors/index.ts

```typescript
export { TaskMasterError } from './task-master-error';
```

.taskmaster/reports/task-complexity-report.json (new file, 21 lines)
@@ -0,0 +1,21 @@
{
  "meta": {
    "generatedAt": "2025-08-02T14:28:59.851Z",
    "tasksAnalyzed": 1,
    "totalTasks": 93,
    "analysisCount": 1,
    "thresholdScore": 5,
    "projectName": "Taskmaster",
    "usedResearch": false
  },
  "complexityAnalysis": [
    {
      "taskId": 24,
      "taskTitle": "Implement AI-Powered Test Generation Command",
      "complexityScore": 8,
      "recommendedSubtasks": 6,
      "expansionPrompt": "Expand task 24 'Implement AI-Powered Test Generation Command' into 6 subtasks, focusing on: 1) Command structure implementation, 2) AI prompt engineering for test generation, 3) Test file generation and output, 4) Framework-specific template implementation, 5) MCP tool integration, and 6) Documentation and help system integration. Include detailed implementation steps, dependencies, and testing approaches for each subtask.",
      "reasoning": "This task has high complexity due to several challenging aspects: 1) AI integration requiring sophisticated prompt engineering, 2) Test generation across multiple frameworks, 3) File system operations with proper error handling, 4) MCP tool integration, 5) Complex configuration requirements, and 6) Framework-specific template generation. The task already has 5 subtasks but could benefit from reorganization based on the updated implementation details in the info blocks, particularly around framework support and configuration."
    }
  ]
}
@@ -0,0 +1,93 @@
{
  "meta": {
    "generatedAt": "2025-07-22T09:41:10.517Z",
    "tasksAnalyzed": 10,
    "totalTasks": 10,
    "analysisCount": 10,
    "thresholdScore": 5,
    "projectName": "Taskmaster",
    "usedResearch": false
  },
  "complexityAnalysis": [
    {
      "taskId": 1,
      "taskTitle": "Implement Task Integration Layer (TIL) Core",
      "complexityScore": 8,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Break down the TIL Core implementation into distinct components: hook registration system, task lifecycle management, event coordination, state persistence layer, and configuration validation. Each subtask should focus on a specific architectural component with clear interfaces and testable boundaries.",
      "reasoning": "This is a foundational component with multiple complex subsystems including event-driven architecture, API integration, state management, and configuration validation. The existing 5 subtasks are well-structured and appropriately sized."
    },
    {
      "taskId": 2,
      "taskTitle": "Develop Dependency Monitor with Taskmaster MCP Integration",
      "complexityScore": 7,
      "recommendedSubtasks": 4,
      "expansionPrompt": "Divide the dependency monitor into: dependency graph data structure implementation, circular dependency detection algorithm, Taskmaster MCP integration layer, and real-time notification system. Focus on performance optimization for large graphs and efficient caching strategies.",
      "reasoning": "Complex graph algorithms and real-time monitoring require careful implementation. The task involves sophisticated data structures, algorithm design, and API integration with performance constraints."
    },
    {
      "taskId": 3,
      "taskTitle": "Build Execution Manager with Priority Queue and Parallel Execution",
      "complexityScore": 8,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Structure the execution manager into: priority queue implementation, resource conflict detection system, parallel execution coordinator, timeout and cancellation handler, and execution history persistence layer. Each component should handle specific aspects of concurrent task management.",
      "reasoning": "Managing concurrent execution with resource conflicts, priority scheduling, and persistence is highly complex. Requires careful synchronization, error handling, and performance optimization."
    },
    {
      "taskId": 4,
      "taskTitle": "Implement Safety Manager with Configurable Constraints and Emergency Controls",
      "complexityScore": 7,
      "recommendedSubtasks": 4,
      "expansionPrompt": "Break down into: constraint validation engine, emergency control system (stop/pause), user approval workflow implementation, and safety monitoring/audit logging. Each subtask should address specific safety aspects with fail-safe mechanisms.",
      "reasoning": "Safety systems require careful design with multiple fail-safes. The task involves validation logic, real-time controls, workflow management, and comprehensive logging."
    },
    {
      "taskId": 5,
      "taskTitle": "Develop Event-Based Hook Processor",
      "complexityScore": 6,
      "recommendedSubtasks": 4,
      "expansionPrompt": "Organize into: file system event integration, Git/VCS event listeners, build system event connectors, and event filtering/debouncing mechanism. Focus on modular event source integration with configurable processing pipelines.",
      "reasoning": "While conceptually straightforward, integrating multiple event sources with proper filtering and performance optimization requires careful implementation. Each event source has unique characteristics."
    },
    {
      "taskId": 6,
      "taskTitle": "Implement Prompt-Based Hook Processor with AI Integration",
      "complexityScore": 7,
      "recommendedSubtasks": 4,
      "expansionPrompt": "Divide into: prompt interception mechanism, NLP-based task suggestion engine, context injection system, and conversation-based status updater. Each component should handle specific aspects of AI conversation integration.",
      "reasoning": "AI integration with prompt analysis and dynamic context injection is complex. Requires understanding of conversation flow, relevance scoring, and seamless integration with existing systems."
    },
    {
      "taskId": 7,
      "taskTitle": "Create Update-Based Hook Processor for Automatic Progress Tracking",
      "complexityScore": 6,
      "recommendedSubtasks": 4,
      "expansionPrompt": "Structure as: code change monitor, acceptance criteria validator, dependency update propagator, and conflict detection/resolution system. Focus on accurate progress tracking and automated validation logic.",
|
"reasoning": "Automatic progress tracking requires integration with version control and intelligent analysis of code changes. Conflict detection and dependency propagation add complexity."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"taskId": 8,
|
||||||
|
"taskTitle": "Develop Real-Time Automation Dashboard and User Controls",
|
||||||
|
"complexityScore": 7,
|
||||||
|
"recommendedSubtasks": 5,
|
||||||
|
"expansionPrompt": "Break down into: WebSocket real-time communication layer, interactive dependency graph visualization, task queue and status displays, user control interfaces, and analytics/charting components. Each UI component should be modular and reusable.",
|
||||||
|
"reasoning": "Building a responsive real-time dashboard with complex visualizations and interactive controls is challenging. Requires careful state management, performance optimization, and user experience design."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"taskId": 9,
|
||||||
|
"taskTitle": "Integrate Kiro IDE and Taskmaster MCP with Core Services",
|
||||||
|
"complexityScore": 8,
|
||||||
|
"recommendedSubtasks": 4,
|
||||||
|
"expansionPrompt": "Organize into: KiroHookAdapter implementation, TaskmasterMCPAdapter development, error handling and retry logic layer, and IDE UI component integration. Focus on robust adapter patterns and comprehensive error recovery.",
|
||||||
|
"reasoning": "End-to-end integration of multiple systems with different architectures is highly complex. Requires careful adapter design, extensive error handling, and thorough testing across all integration points."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"taskId": 10,
|
||||||
|
"taskTitle": "Implement Configuration Management and Safety Profiles",
|
||||||
|
"complexityScore": 6,
|
||||||
|
"recommendedSubtasks": 4,
|
||||||
|
"expansionPrompt": "Divide into: visual configuration editor UI, JSON Schema validation engine, import/export functionality, and version control integration. Each component should provide intuitive configuration management with robust validation.",
|
||||||
|
"reasoning": "While technically less complex than core systems, building an intuitive configuration editor with validation, versioning, and import/export requires careful UI/UX design and robust data handling."
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
53 .taskmaster/reports/task-complexity-report_test-prd-tag.json Normal file
@@ -0,0 +1,53 @@
{
  "meta": {
    "generatedAt": "2025-06-13T06:52:00.611Z",
    "tasksAnalyzed": 5,
    "totalTasks": 5,
    "analysisCount": 5,
    "thresholdScore": 5,
    "projectName": "Taskmaster",
    "usedResearch": true
  },
  "complexityAnalysis": [
    {
      "taskId": 1,
      "taskTitle": "Setup Project Repository and Node.js Environment",
      "complexityScore": 4,
      "recommendedSubtasks": 6,
      "expansionPrompt": "Break down the setup process into subtasks such as initializing npm, creating directory structure, installing dependencies, configuring package.json, adding configuration files, and setting up the main entry point.",
      "reasoning": "This task involves several standard setup steps that are well-defined and sequential, with low algorithmic complexity but moderate procedural detail. Each step is independent and can be assigned as a subtask, making the overall complexity moderate."
    },
    {
      "taskId": 2,
      "taskTitle": "Implement Core Functionality and CLI Interface",
      "complexityScore": 7,
      "recommendedSubtasks": 7,
      "expansionPrompt": "Expand into subtasks for implementing main logic, designing CLI commands, creating the CLI entry point, integrating business logic, adding error handling, formatting output, and ensuring CLI executability.",
      "reasoning": "This task requires both application logic and user interface (CLI) development, including error handling and integration. The need to coordinate between core logic and CLI, plus ensuring usability, increases complexity and warrants detailed subtasking."
    },
    {
      "taskId": 3,
      "taskTitle": "Implement Testing Suite and Validation",
      "complexityScore": 6,
      "recommendedSubtasks": 6,
      "expansionPrompt": "Divide into subtasks for configuring Jest, writing unit tests, writing integration tests, testing CLI commands, setting up coverage reporting, and preparing test fixtures/mocks.",
      "reasoning": "Comprehensive testing involves multiple types of tests and configuration steps. While each is straightforward, the breadth of coverage and need for automation and validation increases the overall complexity."
    },
    {
      "taskId": 4,
      "taskTitle": "Setup Node.js Project with CLI Interface",
      "complexityScore": 5,
      "recommendedSubtasks": 7,
      "expansionPrompt": "Break down into subtasks for npm initialization, package.json setup, directory structure creation, dependency installation, CLI entry point creation, package.json bin configuration, and CLI executability.",
      "reasoning": "This task combines project setup with initial CLI implementation. While each step is standard, the integration of CLI elements adds a layer of complexity beyond a basic setup."
    },
    {
      "taskId": 5,
      "taskTitle": "Implement Core Functionality with Testing",
      "complexityScore": 8,
      "recommendedSubtasks": 8,
      "expansionPrompt": "Expand into subtasks for implementing each feature (A, B, C), setting up the testing framework, writing tests for each feature, integrating CLI with core logic, and adding coverage reporting.",
      "reasoning": "This task requires simultaneous development of multiple features, integration with CLI, and comprehensive testing. The coordination and depth required for both implementation and validation make it the most complex among the listed tasks."
    }
  ]
}
@@ -0,0 +1,77 @@
{
  "meta": {
    "generatedAt": "2025-08-06T12:39:03.250Z",
    "tasksAnalyzed": 8,
    "totalTasks": 11,
    "analysisCount": 8,
    "thresholdScore": 5,
    "projectName": "Taskmaster",
    "usedResearch": false
  },
  "complexityAnalysis": [
    {
      "taskId": 118,
      "taskTitle": "Create AI Provider Base Architecture",
      "complexityScore": 7,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Break down the implementation of BaseProvider abstract TypeScript class into subtasks focusing on: 1) Converting existing JavaScript base-provider.js to TypeScript with proper interface definitions, 2) Implementing the Template Method pattern with abstract methods, 3) Adding comprehensive error handling and retry logic with exponential backoff, 4) Creating proper TypeScript types for all method signatures and options, 5) Setting up comprehensive unit tests with MockProvider. Consider that the existing codebase uses JavaScript ES modules and Vercel AI SDK, so the TypeScript implementation needs to maintain compatibility while adding type safety.",
      "reasoning": "This task requires significant architectural work including converting existing JavaScript code to TypeScript, creating new interfaces, implementing design patterns, and ensuring backward compatibility. The existing base-provider.js already implements a sophisticated provider pattern using Vercel AI SDK, so the TypeScript conversion needs careful consideration of type definitions and maintaining existing functionality."
    },
    {
      "taskId": 119,
      "taskTitle": "Implement Provider Factory with Dynamic Imports",
      "complexityScore": 5,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Break down the Provider Factory implementation into: 1) Creating the ProviderFactory class structure with proper TypeScript typing, 2) Implementing the switch statement for provider selection logic, 3) Adding dynamic imports for each provider to enable tree-shaking, 4) Handling provider instantiation with configuration passing, 5) Implementing comprehensive error handling for module loading failures. Note that the existing codebase already has a provider selection mechanism in the JavaScript files, so ensure the factory pattern integrates smoothly with existing infrastructure.",
      "reasoning": "This is a moderate complexity task that involves creating a factory pattern with dynamic imports. The existing codebase already has provider management logic, so the main complexity is in creating a clean TypeScript implementation with proper dynamic imports while maintaining compatibility with the existing JavaScript module system."
    },
    {
      "taskId": 120,
      "taskTitle": "Implement Anthropic Provider",
      "complexityScore": 6,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Implement the AnthropicProvider class in stages: 1) Set up the class structure extending BaseProvider with proper TypeScript imports and type definitions, 2) Implement constructor with Anthropic SDK client initialization and configuration handling, 3) Implement generateCompletion method with proper message format transformation and error handling, 4) Add token calculation methods and utility functions (getName, getModel, getDefaultModel), 5) Implement comprehensive error handling with custom error wrapping and type exports. The existing anthropic.js provider can serve as a reference but needs to be reimplemented to extend the new TypeScript BaseProvider.",
      "reasoning": "This task involves integrating with an external SDK (@anthropic-ai/sdk) and implementing all abstract methods from BaseProvider. The existing JavaScript implementation provides a good reference, but the TypeScript version needs proper type definitions, error handling, and must work with the new abstract base class architecture."
    },
    {
      "taskId": 121,
      "taskTitle": "Create Prompt Builder and Task Parser",
      "complexityScore": 8,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Implement PromptBuilder and TaskParser with focus on: 1) Creating PromptBuilder class with template methods for building structured prompts with JSON format instructions, 2) Implementing TaskParser class structure with dependency injection of IAIProvider and IConfiguration, 3) Implementing parsePRD method with file reading, prompt generation, and AI provider integration, 4) Adding task enrichment logic with metadata, validation, and structure verification, 5) Implementing comprehensive error handling for all failure scenarios including file I/O, AI provider errors, and JSON parsing. The existing parse-prd.js provides complex logic that needs to be reimplemented with proper TypeScript types and cleaner architecture.",
      "reasoning": "This is a complex task that involves multiple components working together: file I/O, AI provider integration, JSON parsing, and data validation. The existing parse-prd.js implementation is quite sophisticated with Zod schemas and complex task processing logic that needs to be reimplemented in TypeScript with proper separation of concerns."
    },
    {
      "taskId": 122,
      "taskTitle": "Implement Configuration Management",
      "complexityScore": 6,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Create ConfigManager implementation focusing on: 1) Setting up Zod validation schema that matches the IConfiguration interface structure, 2) Implementing ConfigManager constructor with default values merging and storage initialization, 3) Creating validate method with Zod schema parsing and user-friendly error transformation, 4) Implementing type-safe get method using TypeScript generics and keyof operator, 5) Adding getAll method and ensuring proper immutability and module exports. The existing config-manager.js has complex configuration loading logic that can inform the TypeScript implementation but needs cleaner architecture.",
      "reasoning": "This task involves creating a configuration management system with validation using Zod. The existing JavaScript config-manager.js is quite complex with multiple configuration sources, defaults, and validation logic. The TypeScript version needs to provide a cleaner API while maintaining the flexibility of the current system."
    },
    {
      "taskId": 123,
      "taskTitle": "Create Utility Functions and Error Handling",
      "complexityScore": 4,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Implement utilities and error handling in stages: 1) Create ID generation module with generateTaskId and generateSubtaskId functions using proper random generation, 2) Implement base TaskMasterError class extending Error with proper TypeScript typing, 3) Add error sanitization methods to prevent sensitive data exposure in production, 4) Implement development-only logging with environment detection, 5) Create specialized error subclasses (FileNotFoundError, ParseError, ValidationError, APIError) with appropriate error codes and formatting.",
      "reasoning": "This is a relatively straightforward task involving utility functions and error class hierarchies. The main complexity is in ensuring proper error sanitization for production use and creating a well-structured error hierarchy that can be used throughout the application."
    },
    {
      "taskId": 124,
      "taskTitle": "Implement TaskMasterCore Facade",
      "complexityScore": 7,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Build TaskMasterCore facade implementation: 1) Create class structure with proper TypeScript imports and type definitions for all subsystem interfaces, 2) Implement initialize method for lazy loading AI provider and parser instances based on configuration, 3) Create parsePRD method that coordinates parser, AI provider, and storage subsystems, 4) Implement getTasks and other facade methods for task retrieval and management, 5) Create createTaskMaster factory function and set up all module exports including type re-exports. Ensure proper ESM compatibility with .js extensions in imports.",
      "reasoning": "This is a complex integration task that brings together all the other components into a cohesive facade. It requires understanding of the facade pattern, proper dependency management, lazy initialization, and careful module export structure for the public API."
    },
    {
      "taskId": 125,
      "taskTitle": "Create Placeholder Providers and Complete Testing",
      "complexityScore": 5,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Complete the implementation with placeholders and testing: 1) Create OpenAIProvider placeholder class extending BaseProvider with 'not yet implemented' errors, 2) Create GoogleProvider placeholder class with similar structure, 3) Implement MockProvider in tests/mocks directory with configurable responses and behavior simulation, 4) Write comprehensive unit tests for TaskParser covering all methods and edge cases, 5) Create integration tests for the complete parse-prd workflow ensuring 80% code coverage. Follow kebab-case naming convention for test files.",
      "reasoning": "This task involves creating placeholder implementations and a comprehensive test suite. While the placeholder providers are simple, creating a good MockProvider and comprehensive tests requires understanding the entire system architecture and ensuring all edge cases are covered."
    }
  ]
}
77 .taskmaster/reports/tm-core-complexity.json Normal file
@@ -0,0 +1,77 @@
{
  "meta": {
    "generatedAt": "2025-08-06T12:15:01.327Z",
    "tasksAnalyzed": 8,
    "totalTasks": 11,
    "analysisCount": 8,
    "thresholdScore": 5,
    "projectName": "Taskmaster",
    "usedResearch": false
  },
  "complexityAnalysis": [
    {
      "taskId": 118,
      "taskTitle": "Create AI Provider Base Architecture",
      "complexityScore": 4,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Break down the conversion of base-provider.js to TypeScript BaseProvider class: 1) Convert to TypeScript and define IAIProvider interface, 2) Implement abstract class with core properties, 3) Define abstract methods and Template Method pattern, 4) Add retry logic with exponential backoff, 5) Implement validation and logging. Focus on maintaining compatibility with existing provider pattern while adding type safety.",
      "reasoning": "The codebase already has a well-established BaseAIProvider class in JavaScript. Converting to TypeScript mainly involves adding type definitions and ensuring the existing pattern is preserved. The complexity is moderate because the pattern is already proven in the codebase."
    },
    {
      "taskId": 119,
      "taskTitle": "Implement Provider Factory with Dynamic Imports",
      "complexityScore": 3,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Create ProviderFactory implementation: 1) Set up class structure and types, 2) Implement provider selection switch statement, 3) Add dynamic imports for tree-shaking, 4) Handle provider instantiation with config, 5) Add comprehensive error handling. The existing PROVIDERS registry pattern should guide the implementation.",
      "reasoning": "The codebase already uses a dual registry pattern (static PROVIDERS and dynamic ProviderRegistry). Creating a factory is straightforward as the provider registration patterns are well-established. Dynamic imports are already used in the codebase."
    },
    {
      "taskId": 120,
      "taskTitle": "Implement Anthropic Provider",
      "complexityScore": 3,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Implement AnthropicProvider following existing patterns: 1) Create class structure with imports, 2) Implement constructor and client initialization, 3) Add generateCompletion with Claude API integration, 4) Implement token calculation and utility methods, 5) Add error handling and exports. Use the existing anthropic.js provider as reference.",
      "reasoning": "AnthropicProvider already exists in the codebase with full implementation. This task essentially involves adapting the existing implementation to match the new TypeScript architecture, making it relatively straightforward."
    },
    {
      "taskId": 121,
      "taskTitle": "Create Prompt Builder and Task Parser",
      "complexityScore": 6,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Build prompt system and parser: 1) Create PromptBuilder with template methods, 2) Implement TaskParser with dependency injection, 3) Add parsePRD core logic with file reading, 4) Implement task enrichment and metadata, 5) Add comprehensive error handling. Leverage the existing prompt management system in src/prompts/.",
      "reasoning": "While the codebase has a sophisticated prompt management system, creating a new PromptBuilder and TaskParser requires understanding the existing prompt templates, JSON schema validation, and integration with the AI provider system. The task involves significant new code."
    },
    {
      "taskId": 122,
      "taskTitle": "Implement Configuration Management",
      "complexityScore": 5,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Create ConfigManager with validation: 1) Define Zod schema for IConfiguration, 2) Implement constructor with defaults, 3) Add validate method with error handling, 4) Create type-safe get method with generics, 5) Implement getAll and finalize exports. Reference existing config-manager.js for patterns.",
      "reasoning": "The codebase has an existing config-manager.js with sophisticated configuration handling. Adding Zod validation and TypeScript generics adds complexity, but the existing patterns provide a solid foundation."
    },
    {
      "taskId": 123,
      "taskTitle": "Create Utility Functions and Error Handling",
      "complexityScore": 2,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Implement utilities and error handling: 1) Create ID generation module with unique formats, 2) Build TaskMasterError base class, 3) Add error sanitization for security, 4) Implement development-only logging, 5) Create specialized error subclasses. Keep implementation simple and focused.",
      "reasoning": "This is a straightforward utility implementation task. The codebase already has error handling patterns, and ID generation is a simple algorithmic task. The main work is creating clean, reusable utilities."
    },
    {
      "taskId": 124,
      "taskTitle": "Implement TaskMasterCore Facade",
      "complexityScore": 7,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Create main facade class: 1) Set up TaskMasterCore structure with imports, 2) Implement lazy initialization logic, 3) Add parsePRD coordination method, 4) Implement getTasks and other facade methods, 5) Create factory function and exports. This ties together all other components into a cohesive API.",
      "reasoning": "This is the most complex task as it requires understanding and integrating all other components. The facade must coordinate between configuration, providers, storage, and parsing while maintaining a clean API. It's the architectural keystone of the system."
    },
    {
      "taskId": 125,
      "taskTitle": "Create Placeholder Providers and Complete Testing",
      "complexityScore": 5,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Implement testing infrastructure: 1) Create OpenAIProvider placeholder, 2) Create GoogleProvider placeholder, 3) Build MockProvider for testing, 4) Write TaskParser unit tests, 5) Create integration tests for parse-prd flow. Follow the existing test patterns in tests/ directory.",
      "reasoning": "While creating placeholder providers is simple, the testing infrastructure requires understanding Jest with ES modules, mocking patterns, and comprehensive test coverage. The existing test structure provides good examples to follow."
    }
  ]
}
Some files were not shown because too many files have changed in this diff.