* refactor(context): Standardize tag and projectRoot handling across all task tools
  This commit unifies context management by adopting a boundary-first resolution strategy. All task-scoped tools now resolve `tag` and `projectRoot` at their entry point and forward these values to the underlying direct functions. This approach centralizes context logic, ensuring consistent behavior and enhanced flexibility in multi-tag environments.

* fix(tag): Clean up tag handling in task functions and sync process
  This commit refines the handling of the `tag` parameter across multiple functions, ensuring consistent context management. The `tag` is now passed more efficiently in `listTasksDirect`, `setTaskStatusDirect`, and `syncTasksToReadme`, improving clarity and reducing redundancy. Additionally, a TODO comment has been added in `sync-readme.js` to address future tag support enhancements.

* feat(tag): Implement Boundary-First Tag Resolution for consistent tag handling
  This commit introduces Boundary-First Tag Resolution in the task manager, ensuring consistent and deterministic tag handling across CLI and MCP. This change resolves potential race conditions and improves the reliability of tag-specific operations. Additionally, the `expandTask` function has been updated to use the resolved tag when writing JSON, enhancing data integrity during task updates.

* chore(biome): formatting

* fix(expand-task): Update writeJSON call to use tag instead of resolvedTag

* fix(commands): Enhance complexity report path resolution and task initialization
  - Added a `resolveComplexityReportPath` function to streamline output path generation based on tag context and user-defined output.
  - Improved clarity and maintainability of command handling by centralizing path resolution logic.

* Fix: unknown currentTag

* fix(task-manager): Update generateTaskFiles calls to include tag and projectRoot parameters
  This commit modifies the `moveTask` and `updateSubtaskById` functions to pass the `tag` and `projectRoot` parameters to the `generateTaskFiles` function. This ensures that task files are generated with the correct context when requested, enhancing consistency in task management operations.

* fix(commands): Refactor tag handling and complexity report path resolution
  This commit updates the `registerCommands` function to use `taskMaster.getCurrentTag()` for consistent tag retrieval across command actions. It also enhances the initialization of `TaskMaster` by passing the tag directly, improving clarity and maintainability. The complexity report path resolution is streamlined to ensure correct file naming based on the current tag context.

* fix(task-master): Update complexity report path expectations in tests
  This commit modifies the `initTaskMaster` test to expect a valid string for the complexity report path, ensuring it matches the expected file naming convention. This improves test reliability by verifying the correct output format when the path is generated.

* fix(set-task-status): Enhance logging and tag resolution in task status updates
  This commit improves the logging output in the `registerSetTaskStatusTool` function to include the tag context when setting task statuses. It also resolves the tag via the `resolveTag` utility, ensuring the correct tag is used when updating task statuses. Additionally, the `setTaskStatus` function is modified to remove the tag parameter from the `readJSON` and `writeJSON` calls, streamlining data handling.

* fix(commands, expand-task, task-manager): Add complexity report option and enhance path handling
  This commit introduces a new `--complexity-report` option in the `registerCommands` function, allowing users to specify a custom path for the complexity report. The `expandTask` function is updated to accept the `complexityReportPath` from the context, ensuring it is used correctly during task expansion. Additionally, the `setTaskStatus` function now includes the `tag` parameter in the `readJSON` and `writeJSON` calls, improving task status updates with proper context. The `initTaskMaster` function is also modified to create parent directories for output paths, making file handling more robust.

* fix(expand-task): Add complexityReportPath to context for task expansion tests
  This commit updates the test for the `expandTask` function by adding the `complexityReportPath` to the context object, ensuring the complexity report path is correctly used in the test and aligning with recent enhancements to complexity report handling in the task manager.

* chore: implement suggested changes

* fix(parse-prd): Clarify tag parameter description for task organization
  Updated the documentation for the `tag` parameter in the `parse-prd.js` file to clarify its purpose: organizing tasks into separate task lists.

* Fix inconsistent tag resolution pattern.

* fix: Enhance complexity report path handling with tag support
  This commit updates various functions to incorporate the `tag` parameter when resolving complexity report paths. The `expandTaskDirect`, `resolveComplexityReportPath`, and related tools now use the current tag context, improving consistency in task management. Additionally, the complexity report path is now correctly passed through the context in the `expand-task` and `set-task-status` tools, ensuring accurate report retrieval based on the active tag.

* Updated the JSDoc for the `tag` parameter in the `show-task.js` file.

* Remove redundant comment on tag parameter in readJSON call

* Remove unused import for getTagAwareFilePath

* Add missed complexityReportPath to args for task expansion

* fix(tests): Enhance research tests with tag-aware functionality
  This commit updates `research.test.js` to improve testing of the `performResearch` function with tag-aware functionality. Key changes include mocking `findProjectRoot` to return a valid path, enhancing the `ContextGatherer` and `FuzzyTaskSearch` mocks, and adding comprehensive tests for tag parameter handling. The tests now cover passing different tag values (provided, undefined, or null) and validate the integration of tags in task discovery and context gathering.

* Remove unused import for

* fix: Refactor complexity report path handling and improve argument destructuring
  This commit enhances the `expandTaskDirect` function by improving the destructuring of arguments for better readability. It also updates `analyze.js` and `analyze-task-complexity.js` to use the new `resolveComplexityReportOutputPath` function, ensuring tag-aware resolution of output paths. Logging has been added to clarify which report path is being used.

* test: Add complexity report tag isolation tests and improve path handling
  This commit introduces a new test file for complexity report tag isolation, ensuring that different tags maintain separate complexity reports. It enhances the existing tests in `analyze-task-complexity.test.js` by updating expectations to use `expect.stringContaining` for file paths, improving robustness against path changes. The new tests cover various scenarios, including path resolution and report generation for both master and feature tags, ensuring no cross-tag contamination occurs.

* Update scripts/modules/task-manager/list-tasks.js
  Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update scripts/modules/task-manager/list-tasks.js
  Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* test(complexity-report): Fix tag slugification in filename expectations
  - Update mocks to use slugifyTagForFilePath for cross-platform compatibility
  - Replace raw tag values with slugified versions in expected filenames
  - Fix test expecting 'feature/user-auth-v2' to expect 'feature-user-auth-v2'
  - Align tests with the actual filename generation logic that sanitizes special chars

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
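The slugification fix in the last commit is easiest to see as code. Below is a minimal bash sketch of the mapping the updated tests expect; the helper name mirrors the JS `slugifyTagForFilePath`, and the exact report-path pattern is illustrative rather than the codebase's verbatim convention:

```bash
#!/bin/bash
# Minimal sketch: slugify a tag for use in a complexity-report filename.
# Assumptions (illustrative): any character outside [A-Za-z0-9_-] becomes '-',
# and non-default tags get a suffixed report name.
slugify_tag_for_file_path() {
  printf '%s\n' "$1" | sed 's/[^A-Za-z0-9_-]/-/g'
}

report_path_for_tag() {
  local tag="$1"
  if [ "$tag" = "master" ]; then
    echo ".taskmaster/reports/task-complexity-report.json"
  else
    echo ".taskmaster/reports/task-complexity-report_$(slugify_tag_for_file_path "$tag").json"
  fi
}

report_path_for_tag "feature/user-auth-v2"
# -> .taskmaster/reports/task-complexity-report_feature-user-auth-v2.json
```

Keeping one report file per slugified tag is what gives the tag isolation the new tests check for: two tags can never clobber each other's reports because they never share a filename.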
#!/bin/bash

# Treat unset variables as an error when substituting.
set -u
# Prevent errors in pipelines from being masked.
set -o pipefail
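# (Example: with pipefail, 'false | tee out.log' exits 1 instead of tee's 0.
# The main execution block below is itself piped to tee, and its failures are
# additionally recovered via ${PIPESTATUS[0]} at the end of the script.)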

# --- Default Settings ---
run_verification_test=true

# --- Argument Parsing ---
# Simple loop to check for the skip flag.
# Note: this needs to happen *before* the main block piped to tee if we want
# the decision logged early; handling it here keeps the main block simple.
processed_args=()
while [[ $# -gt 0 ]]; do
  case "$1" in
    --skip-verification)
      run_verification_test=false
      echo "[INFO] Argument '--skip-verification' detected. Fallback verification will be skipped."
      shift # Consume the flag
      ;;
    --analyze-log)
      # Keep the analyze-log flag handling separate for now.
      # It exits early, so it doesn't conflict with the main run flags.
      processed_args+=("$1")
      if [[ $# -gt 1 ]]; then
        processed_args+=("$2")
        shift 2
      else
        shift 1
      fi
      ;;
    *)
      # Unknown argument: pass it along in case --analyze-log needs it later
      processed_args+=("$1")
      shift
      ;;
  esac
done
# Restore processed arguments ONLY if the array is not empty
if [ ${#processed_args[@]} -gt 0 ]; then
  set -- "${processed_args[@]}"
fi
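# (Example: for './run_e2e.sh --analyze-log foo.log', processed_args ends up
# as (--analyze-log foo.log), and 'set --' restores them as $1/$2 for the
# analysis-only check further down.)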

# --- Configuration ---
# Assumes script is run from the project root (claude-task-master)
TASKMASTER_SOURCE_DIR="." # Current directory is the source
# Base directory for test runs, relative to project root
BASE_TEST_DIR="$TASKMASTER_SOURCE_DIR/tests/e2e/_runs"
# Log directory, relative to project root
LOG_DIR="$TASKMASTER_SOURCE_DIR/tests/e2e/log"
# Path to the sample PRD, relative to project root
SAMPLE_PRD_SOURCE="$TASKMASTER_SOURCE_DIR/tests/fixtures/sample-prd.txt"
# Path to the main .env file in the source directory
MAIN_ENV_FILE="$TASKMASTER_SOURCE_DIR/.env"
# ---

# <<< Source the helper script >>>
# shellcheck source=tests/e2e/e2e_helpers.sh
source "$TASKMASTER_SOURCE_DIR/tests/e2e/e2e_helpers.sh"

# ==========================================
# >>> Global Helper Functions Defined in run_e2e.sh <<<
# --- Helper Functions (Define globally before export) ---
_format_duration() {
  local total_seconds=$1
  local minutes=$((total_seconds / 60))
  local seconds=$((total_seconds % 60))
  printf "%dm%02ds" "$minutes" "$seconds"
}
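# (Example: _format_duration 125 prints "2m05s".)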

# Note: This relies on 'overall_start_time' being set globally before the function is called
_get_elapsed_time_for_log() {
  local current_time
  current_time=$(date +%s)
  # Use overall_start_time here, as start_time_for_helpers might not be relevant globally
  local elapsed_seconds
  elapsed_seconds=$((current_time - overall_start_time))
  _format_duration "$elapsed_seconds"
}

log_info() {
  echo "[INFO] [$(_get_elapsed_time_for_log)] $(date +"%Y-%m-%d %H:%M:%S") $1"
}

log_success() {
  echo "[SUCCESS] [$(_get_elapsed_time_for_log)] $(date +"%Y-%m-%d %H:%M:%S") $1"
}

log_error() {
  echo "[ERROR] [$(_get_elapsed_time_for_log)] $(date +"%Y-%m-%d %H:%M:%S") $1" >&2
}

log_step() {
  test_step_count=$((test_step_count + 1))
  echo ""
  echo "============================================="
  echo " STEP ${test_step_count}: [$(_get_elapsed_time_for_log)] $(date +"%Y-%m-%d %H:%M:%S") $1"
  echo "============================================="
}
# ==========================================

# <<< Export helper functions for subshells >>>
export -f log_info log_success log_error log_step _format_duration _get_elapsed_time_for_log extract_and_sum_cost
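# ('export -f' publishes these functions to child bash processes, e.g. the
# fallback verification script invoked later in the run, which would not
# otherwise see shell functions defined here.)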

# --- Argument Parsing for Analysis-Only Mode ---
# This remains the same, as it exits early if matched
if [ "$#" -ge 1 ] && [ "$1" == "--analyze-log" ]; then
  LOG_TO_ANALYZE=""
  # Check if a log file path was provided as the second argument
  if [ "$#" -ge 2 ] && [ -n "$2" ]; then
    LOG_TO_ANALYZE="$2"
    echo "[INFO] Using specified log file for analysis: $LOG_TO_ANALYZE"
  else
    echo "[INFO] Log file not specified. Attempting to find the latest log..."
    # Find the latest log file in the LOG_DIR
    # Ensure LOG_DIR is absolute for ls to work correctly regardless of PWD
    ABS_LOG_DIR="$(cd "$TASKMASTER_SOURCE_DIR/$LOG_DIR" && pwd)"
    LATEST_LOG=$(ls -t "$ABS_LOG_DIR"/e2e_run_*.log 2>/dev/null | head -n 1)

    if [ -z "$LATEST_LOG" ]; then
      echo "[ERROR] No log files found matching 'e2e_run_*.log' in $ABS_LOG_DIR. Cannot analyze." >&2
      exit 1
    fi
    LOG_TO_ANALYZE="$LATEST_LOG"
    echo "[INFO] Found latest log file: $LOG_TO_ANALYZE"
  fi

  # Ensure the log path is absolute (it should be if found by ls, but double-check)
  if [[ "$LOG_TO_ANALYZE" != /* ]]; then
    LOG_TO_ANALYZE="$(pwd)/$LOG_TO_ANALYZE" # Fallback if relative path somehow occurred
  fi
  echo "[INFO] Running in analysis-only mode for log: $LOG_TO_ANALYZE"

  # --- Derive TEST_RUN_DIR from log file path ---
  # Extract timestamp like YYYYMMDD_HHMMSS from e2e_run_YYYYMMDD_HHMMSS.log
  log_basename=$(basename "$LOG_TO_ANALYZE")
  # Ensure the sed command matches the .log suffix correctly
  timestamp_match=$(echo "$log_basename" | sed -n 's/^e2e_run_\([0-9]\{8\}_[0-9]\{6\}\)\.log$/\1/p')
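  # (Example: 'e2e_run_20250101_120000.log' yields '20250101_120000'; any
  # other basename yields an empty match and is rejected just below.)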

  if [ -z "$timestamp_match" ]; then
    echo "[ERROR] Could not extract timestamp from log file name: $log_basename" >&2
    echo "[ERROR] Expected format: e2e_run_YYYYMMDD_HHMMSS.log" >&2
    exit 1
  fi

  # Construct the expected run directory path relative to project root
  EXPECTED_RUN_DIR="$TASKMASTER_SOURCE_DIR/tests/e2e/_runs/run_$timestamp_match"
  # Make it absolute
  EXPECTED_RUN_DIR_ABS="$(cd "$TASKMASTER_SOURCE_DIR" && pwd)/tests/e2e/_runs/run_$timestamp_match"

  if [ ! -d "$EXPECTED_RUN_DIR_ABS" ]; then
    echo "[ERROR] Corresponding test run directory not found: $EXPECTED_RUN_DIR_ABS" >&2
    exit 1
  fi

  # Save original dir before changing
  ORIGINAL_DIR=$(pwd)

  echo "[INFO] Changing directory to $EXPECTED_RUN_DIR_ABS for analysis context..."
  cd "$EXPECTED_RUN_DIR_ABS"

  # Call the analysis function (sourced from helpers)
  echo "[INFO] Calling analyze_log_with_llm function..."
  analyze_log_with_llm "$LOG_TO_ANALYZE" "$(cd "$ORIGINAL_DIR/$TASKMASTER_SOURCE_DIR" && pwd)" # Pass absolute project root
  ANALYSIS_EXIT_CODE=$?

  # Return to original directory
  cd "$ORIGINAL_DIR"
  exit $ANALYSIS_EXIT_CODE
fi
# --- End Analysis-Only Mode Logic ---

# --- Normal Execution Starts Here (if not in analysis-only mode) ---

# --- Test State Variables ---
# Note: These are mainly for step numbering within the log now, not for final summary
test_step_count=0
start_time_for_helpers=0 # Separate start time for helper functions inside the pipe
total_e2e_cost="0.0" # Initialize total E2E cost
# ---

# --- Log File Setup ---
# Create the log directory if it doesn't exist
mkdir -p "$LOG_DIR"
# Define timestamped log file path
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
# <<< Use pwd to create an absolute path AND add .log extension >>>
LOG_FILE="$(pwd)/$LOG_DIR/e2e_run_${TIMESTAMP}.log"

# Define and create the test run directory *before* the main pipe
mkdir -p "$BASE_TEST_DIR" # Ensure base exists first
TEST_RUN_DIR="$BASE_TEST_DIR/run_$TIMESTAMP"
mkdir -p "$TEST_RUN_DIR"

# Echo starting message to the original terminal BEFORE the main piped block
echo "Starting E2E test. Output will be shown here and saved to: $LOG_FILE"
echo "Running from directory: $(pwd)"
echo "--- Starting E2E Run ---" # Separator before piped output starts

# Record start time for overall duration *before* the pipe
overall_start_time=$(date +%s)

# <<< DEFINE ORIGINAL_DIR GLOBALLY HERE >>>
ORIGINAL_DIR=$(pwd)

# ==========================================
# >>> MOVE FUNCTION DEFINITION HERE <<<
# --- Helper Functions (Define globally) ---
_format_duration() {
  local total_seconds=$1
  local minutes=$((total_seconds / 60))
  local seconds=$((total_seconds % 60))
  printf "%dm%02ds" "$minutes" "$seconds"
}

# Note: This relies on 'overall_start_time' being set globally before the function is called
_get_elapsed_time_for_log() {
  local current_time=$(date +%s)
  # Use overall_start_time here, as start_time_for_helpers might not be relevant globally
  local elapsed_seconds=$((current_time - overall_start_time))
  _format_duration "$elapsed_seconds"
}

log_info() {
  echo "[INFO] [$(_get_elapsed_time_for_log)] $(date +"%Y-%m-%d %H:%M:%S") $1"
}

log_success() {
  echo "[SUCCESS] [$(_get_elapsed_time_for_log)] $(date +"%Y-%m-%d %H:%M:%S") $1"
}

log_error() {
  echo "[ERROR] [$(_get_elapsed_time_for_log)] $(date +"%Y-%m-%d %H:%M:%S") $1" >&2
}

log_step() {
  test_step_count=$((test_step_count + 1))
  echo ""
  echo "============================================="
  echo " STEP ${test_step_count}: [$(_get_elapsed_time_for_log)] $(date +"%Y-%m-%d %H:%M:%S") $1"
  echo "============================================="
}

# ==========================================

# --- Main Execution Block (Piped to tee) ---
# Wrap the main part of the script in braces and pipe its output (stdout and stderr) to tee
{
  # Note: Helper functions are now defined globally above,
  # but we still need start_time_for_helpers if any logging functions
  # called *inside* this block depend on it. If not, it can be removed.
  start_time_for_helpers=$(date +%s) # Keep if needed by helpers called inside this block

  # Log the verification decision
  if [ "$run_verification_test" = true ]; then
    log_info "Fallback verification test will be run as part of this E2E test."
  else
    log_info "Fallback verification test will be SKIPPED (--skip-verification flag detected)."
  fi

  # --- Dependency Checks ---
  log_step "Checking for dependencies (jq, bc)"
  if ! command -v jq &> /dev/null; then
    log_error "Dependency 'jq' is not installed or not found in PATH. Please install jq (e.g., 'brew install jq' or 'sudo apt-get install jq')."
    exit 1
  fi
  if ! command -v bc &> /dev/null; then
    log_error "Dependency 'bc' is not installed (needed for cost calculation). Please install bc (e.g., 'brew install bc' or 'sudo apt-get install bc')."
    exit 1
  fi
  log_success "Dependencies 'jq' and 'bc' found."

  # --- Test Setup (Output to tee) ---
  log_step "Setting up test environment"

  log_step "Creating global npm link for task-master-ai"
  if npm link; then
    log_success "Global link created/updated."
  else
    log_error "Failed to run 'npm link'. Check permissions or output for details."
    exit 1
  fi

  log_info "Ensured base test directory exists: $BASE_TEST_DIR"

  log_info "Using test run directory (created earlier): $TEST_RUN_DIR"

  # Check if source .env file exists
  if [ ! -f "$MAIN_ENV_FILE" ]; then
    log_error "Source .env file not found at $MAIN_ENV_FILE. Cannot proceed with API-dependent tests."
    exit 1
  fi
  log_info "Source .env file found at $MAIN_ENV_FILE."

  # Check if sample PRD exists
  if [ ! -f "$SAMPLE_PRD_SOURCE" ]; then
    log_error "Sample PRD not found at $SAMPLE_PRD_SOURCE. Please check path."
    exit 1
  fi

  log_info "Copying sample PRD to test directory..."
  cp "$SAMPLE_PRD_SOURCE" "$TEST_RUN_DIR/prd.txt"
  if [ ! -f "$TEST_RUN_DIR/prd.txt" ]; then
    log_error "Failed to copy sample PRD to $TEST_RUN_DIR."
    exit 1
  fi
  log_success "Sample PRD copied."

  # ORIGINAL_DIR=$(pwd) # Save original dir # <<< REMOVED FROM HERE
  cd "$TEST_RUN_DIR"
  log_info "Changed directory to $(pwd)"

  # === Copy .env file BEFORE init ===
  log_step "Copying source .env file for API keys"
  if cp "$ORIGINAL_DIR/.env" ".env"; then
    log_success ".env file copied successfully."
  else
    log_error "Failed to copy .env file from $ORIGINAL_DIR/.env"
    exit 1
  fi
  # ========================================

  # --- Test Execution (Output to tee) ---

  log_step "Linking task-master-ai package locally"
  npm link task-master-ai
  log_success "Package linked locally."

  log_step "Initializing Task Master project (non-interactive)"
  task-master init -y --name="E2E Test $TIMESTAMP" --description="Automated E2E test run"
  if [ ! -f ".taskmaster/config.json" ]; then
    log_error "Initialization failed: .taskmaster/config.json not found."
    exit 1
  fi
  log_success "Project initialized."

  log_step "Parsing PRD"
  cmd_output_prd=$(task-master parse-prd ./prd.txt --force 2>&1)
  exit_status_prd=$?
  echo "$cmd_output_prd"
  extract_and_sum_cost "$cmd_output_prd"
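  # (extract_and_sum_cost is sourced from e2e_helpers.sh: it extracts the AI
  # cost from the captured command output and adds it to the running
  # total_e2e_cost, which is reported at the end of the run.)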

  if [ $exit_status_prd -ne 0 ] || [ ! -s ".taskmaster/tasks/tasks.json" ]; then
    log_error "Parsing PRD failed: .taskmaster/tasks/tasks.json not found or is empty. Exit status: $exit_status_prd"
    exit 1
  else
    log_success "PRD parsed successfully."
  fi

  log_step "Analyzing complexity (AI with Research)"
  cmd_output_analyze=$(task-master analyze-complexity --research --output complexity_results.json 2>&1)
  exit_status_analyze=$?
  echo "$cmd_output_analyze"
  extract_and_sum_cost "$cmd_output_analyze"
  if [ $exit_status_analyze -ne 0 ] || [ ! -f "complexity_results.json" ]; then
    log_error "Complexity analysis failed: complexity_results.json not found. Exit status: $exit_status_analyze"
    exit 1
  else
    log_success "Complexity analysis saved to complexity_results.json"
  fi

  log_step "Generating complexity report"
  task-master complexity-report --file complexity_results.json > complexity_report_formatted.log
  log_success "Formatted complexity report saved to complexity_report_formatted.log"

  log_step "Expanding Task 1 (to ensure subtask 1.1 exists)"
  cmd_output_expand1=$(task-master expand --id=1 --cr complexity_results.json 2>&1)
  exit_status_expand1=$?
  echo "$cmd_output_expand1"
  extract_and_sum_cost "$cmd_output_expand1"
  if [ $exit_status_expand1 -ne 0 ]; then
    log_error "Expanding Task 1 failed. Exit status: $exit_status_expand1"
  else
    log_success "Attempted to expand Task 1."
  fi

  log_step "Setting status for Subtask 1.1 (assuming it exists)"
  task-master set-status --id=1.1 --status=done
  log_success "Attempted to set status for Subtask 1.1 to 'done'."

  log_step "Listing tasks again (after changes)"
  task-master list --with-subtasks > task_list_after_changes.log
  log_success "Task list after changes saved to task_list_after_changes.log"

  # === Start New Test Section: Tag-Aware Expand Testing ===
  log_step "Creating additional tag for expand testing"
  task-master add-tag feature-expand --description="Tag for testing expand command with tag preservation"
  log_success "Created feature-expand tag."

  log_step "Adding task to feature-expand tag"
  task-master add-task --tag=feature-expand --prompt="Test task for tag-aware expansion" --priority=medium
  # Get the new task ID dynamically
  new_expand_task_id=$(jq -r '.["feature-expand"].tasks[-1].id' .taskmaster/tasks/tasks.json)
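  # (tasks.json is keyed by tag, e.g. {"feature-expand":{"tasks":[...]}};
  # '.tasks[-1].id' reads the id of the most recently appended task in that tag.)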

  log_success "Added task $new_expand_task_id to feature-expand tag."

  log_step "Verifying tags exist before expand test"
  task-master tags > tags_before_expand.log
  tag_count_before=$(jq 'keys | length' .taskmaster/tasks/tasks.json)
  log_success "Tag count before expand: $tag_count_before"

  log_step "Expanding task in feature-expand tag (testing tag corruption fix)"
  cmd_output_expand_tagged=$(task-master expand --tag=feature-expand --id="$new_expand_task_id" 2>&1)
  exit_status_expand_tagged=$?
  echo "$cmd_output_expand_tagged"
  extract_and_sum_cost "$cmd_output_expand_tagged"
  if [ $exit_status_expand_tagged -ne 0 ]; then
    log_error "Tagged expand failed. Exit status: $exit_status_expand_tagged"
  else
    log_success "Tagged expand completed."
  fi

  log_step "Verifying tag preservation after expand"
  task-master tags > tags_after_expand.log
  tag_count_after=$(jq 'keys | length' .taskmaster/tasks/tasks.json)

  if [ "$tag_count_before" -eq "$tag_count_after" ]; then
    log_success "Tag count preserved: $tag_count_after (no corruption detected)"
  else
    log_error "Tag corruption detected! Before: $tag_count_before, After: $tag_count_after"
  fi

  log_step "Verifying master tag still exists and has tasks"
  master_task_count=$(jq -r '.master.tasks | length' .taskmaster/tasks/tasks.json 2>/dev/null || echo "0")
  if [ "$master_task_count" -gt "0" ]; then
    log_success "Master tag preserved with $master_task_count tasks"
  else
    log_error "Master tag corrupted or empty after tagged expand"
  fi

  log_step "Verifying feature-expand tag has expanded subtasks"
  expanded_subtask_count=$(jq -r ".\"feature-expand\".tasks[] | select(.id == $new_expand_task_id) | .subtasks | length" .taskmaster/tasks/tasks.json 2>/dev/null || echo "0")
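  # (select(.id == N) narrows to the task expanded above; '.subtasks | length'
  # counts its subtasks, and '|| echo "0"' covers a missing task or a missing
  # subtasks array.)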

  if [ "$expanded_subtask_count" -gt "0" ]; then
    log_success "Expand successful: $expanded_subtask_count subtasks created in feature-expand tag"
  else
    log_error "Expand failed: No subtasks found in feature-expand tag"
  fi

  log_step "Testing force expand with tag preservation"
  cmd_output_force_expand=$(task-master expand --tag=feature-expand --id="$new_expand_task_id" --force 2>&1)
  exit_status_force_expand=$?
  echo "$cmd_output_force_expand"
  extract_and_sum_cost "$cmd_output_force_expand"

  # Verify tags still preserved after force expand
  tag_count_after_force=$(jq 'keys | length' .taskmaster/tasks/tasks.json)
  if [ "$tag_count_before" -eq "$tag_count_after_force" ]; then
    log_success "Force expand preserved all tags"
  else
    log_error "Force expand caused tag corruption"
  fi

  log_step "Testing expand --all with tag preservation"
  # Add another task to feature-expand for expand-all testing
  task-master add-task --tag=feature-expand --prompt="Second task for expand-all testing" --priority=low
  second_expand_task_id=$(jq -r '.["feature-expand"].tasks[-1].id' .taskmaster/tasks/tasks.json)

  cmd_output_expand_all=$(task-master expand --tag=feature-expand --all 2>&1)
  exit_status_expand_all=$?
  echo "$cmd_output_expand_all"
  extract_and_sum_cost "$cmd_output_expand_all"

  # Verify tags preserved after expand-all
  tag_count_after_all=$(jq 'keys | length' .taskmaster/tasks/tasks.json)
  if [ "$tag_count_before" -eq "$tag_count_after_all" ]; then
    log_success "Expand --all preserved all tags"
  else
    log_error "Expand --all caused tag corruption"
  fi

  log_success "Completed expand --all tag preservation test."

  # === End New Test Section: Tag-Aware Expand Testing ===

  # === Test Model Commands ===
  log_step "Checking initial model configuration"
  task-master models > models_initial_config.log
  log_success "Initial model config saved to models_initial_config.log"

  log_step "Setting main model"
  task-master models --set-main claude-3-7-sonnet-20250219
  log_success "Set main model."

  log_step "Setting research model"
  task-master models --set-research sonar-pro
  log_success "Set research model."

  log_step "Setting fallback model"
  task-master models --set-fallback claude-3-5-sonnet-20241022
  log_success "Set fallback model."

  log_step "Checking final model configuration"
  task-master models > models_final_config.log
  log_success "Final model config saved to models_final_config.log"

  log_step "Resetting main model to default (Claude Sonnet) before provider tests"
  task-master models --set-main claude-3-7-sonnet-20250219
  log_success "Main model reset to claude-3-7-sonnet-20250219."

  # === End Model Commands Test ===

  # === Fallback Model generateObjectService Verification ===
  if [ "$run_verification_test" = true ]; then
    log_step "Starting Fallback Model (generateObjectService) Verification (Calls separate script)"
    verification_script_path="$ORIGINAL_DIR/tests/e2e/run_fallback_verification.sh"

    if [ -x "$verification_script_path" ]; then
      log_info "--- Executing Fallback Verification Script: $verification_script_path ---"
      verification_output=$("$verification_script_path" "$(pwd)" 2>&1)
      verification_exit_code=$?
      echo "$verification_output"
      extract_and_sum_cost "$verification_output"

      log_info "--- Finished Fallback Verification Script Execution (Exit Code: $verification_exit_code) ---"

      # Log success/failure based on captured exit code
      if [ $verification_exit_code -eq 0 ]; then
        log_success "Fallback verification script reported success."
      else
        log_error "Fallback verification script reported FAILURE (Exit Code: $verification_exit_code)."
      fi
    else
      log_error "Fallback verification script not found or not executable at $verification_script_path. Skipping verification."
    fi
  else
    log_info "Skipping Fallback Verification test as requested by flag."
  fi
  # === END Verification Section ===

  # === Multi-Provider Add-Task Test (Keep as is) ===
  log_step "Starting Multi-Provider Add-Task Test Sequence"

  # Define providers, models, and flags.
  # Array order matters: providers[i] corresponds to models[i] and flags[i].
  declare -a providers=("anthropic" "openai" "google" "perplexity" "xai" "openrouter")
  declare -a models=(
    "claude-3-7-sonnet-20250219"
    "gpt-4o"
    "gemini-2.5-pro-preview-05-06"
    "sonar-pro" # Note: This is research-only, add-task might fail if not using research model
    "grok-3"
    "anthropic/claude-3.7-sonnet" # OpenRouter uses Claude 3.7
  )
  # Flags: Add provider-specific flags here, e.g., --openrouter. Use empty string if none.
  declare -a flags=("" "" "" "" "" "--openrouter")

  # Consistent prompt for all providers
  add_task_prompt="Create a task to implement user authentication using OAuth 2.0 with Google as the provider. Include steps for registering the app, handling the callback, and storing user sessions."
  log_info "Using consistent prompt for add-task tests: \"$add_task_prompt\""
  echo "--- Multi-Provider Add Task Summary ---" > provider_add_task_summary.log # Initialize summary log

  for i in "${!providers[@]}"; do
    provider="${providers[$i]}"
    model="${models[$i]}"
    flag="${flags[$i]}"

    log_step "Testing Add-Task with Provider: $provider (Model: $model)"

    # 1. Set the main model for this provider
    log_info "Setting main model to $model for $provider ${flag:+using flag $flag}..."
    set_model_cmd="task-master models --set-main \"$model\" $flag"
    echo "Executing: $set_model_cmd"
    if eval "$set_model_cmd"; then
      log_success "Successfully set main model for $provider."
    else
      log_error "Failed to set main model for $provider. Skipping add-task for this provider."
      # Optionally save failure info here if needed for LLM analysis
      echo "Provider $provider set-main FAILED" >> provider_add_task_summary.log
      continue # Skip to the next provider
    fi

    # 2. Run add-task
    log_info "Running add-task with prompt..."
    add_task_output_file="add_task_raw_output_${provider}_${model//\//_}.log" # Sanitize model ID for the filename
    # Run add-task and capture ALL output (stdout & stderr) to a file AND a variable
    add_task_cmd_output=$(task-master add-task --prompt "$add_task_prompt" 2>&1 | tee "$add_task_output_file")
    add_task_exit_code=${PIPESTATUS[0]}
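    # (${PIPESTATUS[0]} captures the exit status of 'task-master add-task'
    # rather than of 'tee', which normally exits 0; using plain $? here would
    # mask add-task failures.)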

    # 3. Check for success and extract task ID
    new_task_id=""
    extract_and_sum_cost "$add_task_cmd_output"
    if [ $add_task_exit_code -eq 0 ] && (echo "$add_task_cmd_output" | grep -q "✓ Added new task #" || echo "$add_task_cmd_output" | grep -q "✅ New task created successfully:" || echo "$add_task_cmd_output" | grep -q "Task [0-9]\+ Created Successfully"); then
      new_task_id=$(echo "$add_task_cmd_output" | grep -o -E "(Task |#)[0-9.]+" | grep -o -E "[0-9.]+" | head -n 1)
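      # (Example: output like '✓ Added new task #13' or 'Task 13 Created
      # Successfully' yields '13'; 'head -n 1' keeps only the first match.)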

      if [ -n "$new_task_id" ]; then
        log_success "Add-task succeeded for $provider. New task ID: $new_task_id"
        echo "Provider $provider add-task SUCCESS (ID: $new_task_id)" >> provider_add_task_summary.log
      else
        # Succeeded but couldn't parse ID: treat as warning/anomaly
        log_error "Add-task command succeeded for $provider, but failed to extract task ID from output."
        echo "Provider $provider add-task SUCCESS (ID extraction FAILED)" >> provider_add_task_summary.log
        new_task_id="UNKNOWN_ID_EXTRACTION_FAILED"
      fi
    else
      log_error "Add-task command failed for $provider (Exit Code: $add_task_exit_code). See $add_task_output_file for details."
      echo "Provider $provider add-task FAILED (Exit Code: $add_task_exit_code)" >> provider_add_task_summary.log
      new_task_id="FAILED"
    fi

    # 4. Run task show if ID was obtained (even if extraction failed, use placeholder)
    if [ "$new_task_id" != "FAILED" ] && [ "$new_task_id" != "UNKNOWN_ID_EXTRACTION_FAILED" ]; then
      log_info "Running task show for new task ID: $new_task_id"
      show_output_file="add_task_show_output_${provider}_id_${new_task_id}.log"
      if task-master show "$new_task_id" > "$show_output_file"; then
        log_success "Task show output saved to $show_output_file"
      else
        log_error "task show command failed for ID $new_task_id. Check log."
        # Still keep the file, it might contain error output
      fi
    elif [ "$new_task_id" == "UNKNOWN_ID_EXTRACTION_FAILED" ]; then
      log_info "Skipping task show for $provider due to ID extraction failure."
    else
      log_info "Skipping task show for $provider due to add-task failure."
    fi

  done # End of provider loop

  log_step "Finished Multi-Provider Add-Task Test Sequence"
  echo "Provider add-task summary log available at: provider_add_task_summary.log"
  # === End Multi-Provider Add-Task Test ===

  log_step "Listing tasks again (after multi-add)"
  task-master list --with-subtasks > task_list_after_multi_add.log
  log_success "Task list after multi-add saved to task_list_after_multi_add.log"

  # === Resume Core Task Commands Test ===
  log_step "Listing tasks (for core tests)"
  task-master list > task_list_core_test_start.log
  log_success "Core test initial task list saved."

  log_step "Getting next task"
  task-master next > next_task_core_test.log
  log_success "Core test next task saved."

  log_step "Showing Task 1 details"
  task-master show 1 > task_1_details_core_test.log
  log_success "Task 1 details saved."

  log_step "Adding dependency (Task 2 depends on Task 1)"
  task-master add-dependency --id=2 --depends-on=1
  log_success "Added dependency 2->1."

  log_step "Validating dependencies (after add)"
  task-master validate-dependencies > validate_dependencies_after_add_core.log
  log_success "Dependency validation after add saved."

  log_step "Removing dependency (Task 2 depends on Task 1)"
  task-master remove-dependency --id=2 --depends-on=1
  log_success "Removed dependency 2->1."

  log_step "Fixing dependencies (should be no-op now)"
  task-master fix-dependencies > fix_dependencies_output_core.log
  log_success "Fix dependencies attempted."

  # === Start New Test Section: Validate/Fix Bad Dependencies ===

  log_step "Intentionally adding non-existent dependency (1 -> 999)"
  task-master add-dependency --id=1 --depends-on=999 || log_error "Failed to add non-existent dependency (unexpected)"
  # Don't exit even if the above fails, the goal is to test validation
  log_success "Attempted to add dependency 1 -> 999."

  log_step "Validating dependencies (expecting non-existent error)"
  task-master validate-dependencies > validate_deps_non_existent.log 2>&1 || true # Allow command to fail without exiting script
  if grep -q "Non-existent dependency ID: 999" validate_deps_non_existent.log; then
    log_success "Validation correctly identified non-existent dependency 999."
  else
    log_error "Validation DID NOT report non-existent dependency 999 as expected. Check validate_deps_non_existent.log"
  fi

  log_step "Fixing dependencies (should remove 1 -> 999)"
  task-master fix-dependencies > fix_deps_after_non_existent.log
  log_success "Attempted to fix dependencies."

  log_step "Validating dependencies (after fix)"
  task-master validate-dependencies > validate_deps_after_fix_non_existent.log 2>&1 || true # Allow potential failure
  if grep -q "Non-existent dependency ID: 999" validate_deps_after_fix_non_existent.log; then
    log_error "Validation STILL reports non-existent dependency 999 after fix. Check logs."
  else
    log_success "Validation shows non-existent dependency 999 was removed."
  fi

  log_step "Intentionally adding circular dependency (4 -> 5 -> 4)"
  task-master add-dependency --id=4 --depends-on=5 || log_error "Failed to add dependency 4->5"
  task-master add-dependency --id=5 --depends-on=4 || log_error "Failed to add dependency 5->4"
  log_success "Attempted to add dependencies 4 -> 5 and 5 -> 4."

  log_step "Validating dependencies (expecting circular error)"
  task-master validate-dependencies > validate_deps_circular.log 2>&1 || true # Allow command to fail
  # Note: Adjust the grep pattern based on the EXACT error message from validate-dependencies
  if grep -q -E "Circular dependency detected involving task IDs: (4, 5|5, 4)" validate_deps_circular.log; then
    log_success "Validation correctly identified circular dependency between 4 and 5."
  else
    log_error "Validation DID NOT report circular dependency 4<->5 as expected. Check validate_deps_circular.log"
  fi

  log_step "Fixing dependencies (should remove one side of 4 <-> 5)"
  task-master fix-dependencies > fix_deps_after_circular.log
  log_success "Attempted to fix dependencies."

  log_step "Validating dependencies (after fix circular)"
  task-master validate-dependencies > validate_deps_after_fix_circular.log 2>&1 || true # Allow potential failure
  if grep -q -E "Circular dependency detected involving task IDs: (4, 5|5, 4)" validate_deps_after_fix_circular.log; then
    log_error "Validation STILL reports circular dependency 4<->5 after fix. Check logs."
  else
    log_success "Validation shows circular dependency 4<->5 was resolved."
  fi

  # === End New Test Section ===

  # Find the next available task ID dynamically instead of hardcoding 11, 12.
  # Assumes tasks are added sequentially and no core tasks have been removed yet.
  last_task_id=$(jq '[.master.tasks[].id] | max' .taskmaster/tasks/tasks.json)
  manual_task_id=$((last_task_id + 1))
  ai_task_id=$((manual_task_id + 1))
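  # (Example: if the highest master-tag id is 12, the manual task is expected
  # as 13 and the AI task as 14; both are removed again in the multi-ID
  # remove-task step later.)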

  log_step "Adding Task $manual_task_id (Manual)"
  task-master add-task --title="Manual E2E Task" --description="Add basic health check endpoint" --priority=low --dependencies=3 # Depends on backend setup
  log_success "Added Task $manual_task_id manually."

  log_step "Adding Task $ai_task_id (AI)"
  cmd_output_add_ai=$(task-master add-task --prompt="Implement basic UI styling using CSS variables for colors and spacing" --priority=medium --dependencies=1 2>&1)
  exit_status_add_ai=$?
  echo "$cmd_output_add_ai"
  extract_and_sum_cost "$cmd_output_add_ai"
  if [ $exit_status_add_ai -ne 0 ]; then
    log_error "Adding AI Task $ai_task_id failed. Exit status: $exit_status_add_ai"
  else
    log_success "Added Task $ai_task_id via AI prompt."
  fi

  log_step "Updating Task 3 (update-task AI)"
  cmd_output_update_task3=$(task-master update-task --id=3 --prompt="Update backend server setup: Ensure CORS is configured to allow requests from the frontend origin." 2>&1)
  exit_status_update_task3=$?
  echo "$cmd_output_update_task3"
  extract_and_sum_cost "$cmd_output_update_task3"
  if [ $exit_status_update_task3 -ne 0 ]; then
    log_error "Updating Task 3 failed. Exit status: $exit_status_update_task3"
  else
    log_success "Attempted update for Task 3."
  fi

  log_step "Updating Tasks from Task 5 (update AI)"
  cmd_output_update_from5=$(task-master update --from=5 --prompt="Refactor the backend storage module to use a simple JSON file (storage.json) instead of an in-memory object for persistence. Update relevant tasks." 2>&1)
  exit_status_update_from5=$?
  echo "$cmd_output_update_from5"
  extract_and_sum_cost "$cmd_output_update_from5"
  if [ $exit_status_update_from5 -ne 0 ]; then
    log_error "Updating from Task 5 failed. Exit status: $exit_status_update_from5"
  else
    log_success "Attempted update from Task 5 onwards."
  fi

  log_step "Expanding Task 8 (AI)"
  cmd_output_expand8=$(task-master expand --id=8 2>&1)
  exit_status_expand8=$?
  echo "$cmd_output_expand8"
  extract_and_sum_cost "$cmd_output_expand8"
  if [ $exit_status_expand8 -ne 0 ]; then
    log_error "Expanding Task 8 failed. Exit status: $exit_status_expand8"
  else
    log_success "Attempted to expand Task 8."
  fi

  log_step "Updating Subtask 8.1 (update-subtask AI)"
  cmd_output_update_subtask81=$(task-master update-subtask --id=8.1 --prompt="Implementation note: Remember to handle potential API errors and display a user-friendly message." 2>&1)
  exit_status_update_subtask81=$?
  echo "$cmd_output_update_subtask81"
  extract_and_sum_cost "$cmd_output_update_subtask81"
  if [ $exit_status_update_subtask81 -ne 0 ]; then
    log_error "Updating Subtask 8.1 failed. Exit status: $exit_status_update_subtask81"
  else
    log_success "Attempted update for Subtask 8.1."
  fi

  # Add a couple more subtasks for multi-remove test
  log_step 'Adding subtasks to Task 2 (for multi-remove test)'
  task-master add-subtask --parent=2 --title="Subtask 2.1 for removal"
  task-master add-subtask --parent=2 --title="Subtask 2.2 for removal"
  log_success "Added subtasks 2.1 and 2.2."

  log_step "Removing Subtasks 2.1 and 2.2 (multi-ID)"
  task-master remove-subtask --id=2.1,2.2
  log_success "Removed subtasks 2.1 and 2.2."

  log_step "Setting status for Task 1 to done"
  task-master set-status --id=1 --status=done
  log_success "Set status for Task 1 to done."

  log_step "Getting next task (after status change)"
  task-master next > next_task_after_change_core.log
  log_success "Next task after change saved."

  # === Start New Test Section: List Filtering ===
  log_step "Listing tasks filtered by status 'done'"
  task-master list --status=done > task_list_status_done.log
  log_success "Filtered list saved to task_list_status_done.log (Manual/LLM check recommended)"
  # Optional assertion: Check if Task 1 ID exists and Task 2 ID does NOT
  # if grep -q "^1\." task_list_status_done.log && ! grep -q "^2\." task_list_status_done.log; then
  #   log_success "Basic check passed: Task 1 found, Task 2 not found in 'done' list."
  # else
  #   log_error "Basic check failed for list --status=done."
  # fi
  # === End New Test Section ===

  log_step "Clearing subtasks from Task 8"
  task-master clear-subtasks --id=8
  log_success "Attempted to clear subtasks from Task 8."

  log_step "Removing Tasks $manual_task_id and $ai_task_id (multi-ID)"
  # Remove the tasks we added earlier
  task-master remove-task --id="$manual_task_id,$ai_task_id" -y
  log_success "Removed tasks $manual_task_id and $ai_task_id."

  # === Start New Test Section: Subtasks & Dependencies ===

  log_step "Expanding Task 2 (to ensure multiple tasks have subtasks)"
  task-master expand --id=2 # Expand task 2: Backend setup
  log_success "Attempted to expand Task 2."

  log_step "Listing tasks with subtasks (Before Clear All)"
  task-master list --with-subtasks > task_list_before_clear_all.log
  log_success "Task list before clear-all saved."

  log_step "Clearing ALL subtasks"
  task-master clear-subtasks --all
  log_success "Attempted to clear all subtasks."

  log_step "Listing tasks with subtasks (After Clear All)"
  task-master list --with-subtasks > task_list_after_clear_all.log
  log_success "Task list after clear-all saved. (Manual/LLM check recommended to verify subtasks removed)"

  log_step "Expanding Task 3 again (to have subtasks for next test)"
  task-master expand --id=3
  log_success "Attempted to expand Task 3."
  # Verify 3.1 exists
  if ! jq -e '.master.tasks[] | select(.id == 3) | .subtasks[] | select(.id == 1)' .taskmaster/tasks/tasks.json > /dev/null; then
    log_error "Subtask 3.1 not found in tasks.json after expanding Task 3."
    exit 1
  fi

  log_step "Adding dependency: Task 4 depends on Subtask 3.1"
  task-master add-dependency --id=4 --depends-on=3.1
  log_success "Added dependency 4 -> 3.1."

  log_step "Showing Task 4 details (after adding subtask dependency)"
  task-master show 4 > task_4_details_after_dep_add.log
  log_success "Task 4 details saved. (Manual/LLM check recommended for dependency [3.1])"

  log_step "Removing dependency: Task 4 depends on Subtask 3.1"
  task-master remove-dependency --id=4 --depends-on=3.1
  log_success "Removed dependency 4 -> 3.1."

  log_step "Showing Task 4 details (after removing subtask dependency)"
  task-master show 4 > task_4_details_after_dep_remove.log
  log_success "Task 4 details saved. (Manual/LLM check recommended to verify dependency removed)"

  # === End New Test Section ===

  log_step "Generating task files (final)"
  task-master generate
  log_success "Generated task files."
  # === End Core Task Commands Test ===

  # === AI Commands (Re-test some after changes) ===
  log_step "Analyzing complexity (AI with Research - Final Check)"
  cmd_output_analyze_final=$(task-master analyze-complexity --research --output complexity_results_final.json 2>&1)
  exit_status_analyze_final=$?
  echo "$cmd_output_analyze_final"
  extract_and_sum_cost "$cmd_output_analyze_final"
  if [ $exit_status_analyze_final -ne 0 ] || [ ! -f "complexity_results_final.json" ]; then
    log_error "Final Complexity analysis failed. Exit status: $exit_status_analyze_final. File found: $(test -f complexity_results_final.json && echo true || echo false)"
    exit 1 # Critical for subsequent report step
  else
    log_success "Final Complexity analysis command executed and file created."
  fi

  log_step "Generating complexity report (Non-AI - Final Check)"
  task-master complexity-report --file complexity_results_final.json > complexity_report_formatted_final.log
  log_success "Final Formatted complexity report saved."
  # === End AI Commands Re-test ===

  log_step "Listing tasks again (final)"
  task-master list --with-subtasks > task_list_final.log
  log_success "Final task list saved to task_list_final.log"

  # --- Test Completion (Output to tee) ---
  log_step "E2E Test Steps Completed"
  echo ""
  ABS_TEST_RUN_DIR="$(pwd)"
  echo "Test artifacts and logs are located in: $ABS_TEST_RUN_DIR"
  echo "Key artifact files (within above dir):"
  ls -1 # List files in the current directory
  echo ""
  echo "Full script log also available at: $LOG_FILE (relative to project root)"

  # Optional: cd back to original directory
  # cd "$ORIGINAL_DIR"

# End of the main execution block brace
} 2>&1 | tee "$LOG_FILE"

# --- Final Terminal Message ---
EXIT_CODE=${PIPESTATUS[0]}
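# (${PIPESTATUS[0]} is the exit status of the braced main block, the first
# element of the '{ ... } | tee' pipeline; plain $? would report tee's status.)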

overall_end_time=$(date +%s)
total_elapsed_seconds=$((overall_end_time - overall_start_time))

# Format total duration
total_minutes=$((total_elapsed_seconds / 60))
total_sec_rem=$((total_elapsed_seconds % 60))
formatted_total_time=$(printf "%dm%02ds" "$total_minutes" "$total_sec_rem")

# Count steps and successes from the log file *after* the pipe finishes
# Use grep -c for counting lines matching the pattern
# Corrected pattern to match ' STEP X:' format
final_step_count=$(grep -c '^[[:space:]]\+STEP [0-9]\+:' "$LOG_FILE" || true)
final_success_count=$(grep -c '\[SUCCESS\]' "$LOG_FILE" || true) # Count lines containing [SUCCESS]
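# (grep -c prints 0 but exits 1 when nothing matches; '|| true' keeps that
# non-zero status from being treated as a failure of these substitutions.)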

echo "--- E2E Run Summary ---"
echo "Log File: $LOG_FILE"
echo "Total Elapsed Time: ${formatted_total_time}"
echo "Total Steps Executed: ${final_step_count}" # Use count from log

if [ $EXIT_CODE -eq 0 ]; then
  echo "Status: SUCCESS"
  # Use counts from log file
  echo "Successful Steps: ${final_success_count}/${final_step_count}"
else
  echo "Status: FAILED"
  # Use count from log file for total steps attempted
  echo "Failure likely occurred during/after Step: ${final_step_count}"
  # Use count from log file for successes before failure
  echo "Successful Steps Before Failure: ${final_success_count}"
  echo "Please check the log file '$LOG_FILE' for error details."
fi
echo "-------------------------"

# --- Attempt LLM Analysis ---
# Run this *after* the main execution block and tee pipe finish writing the log file
if [ -d "$TEST_RUN_DIR" ]; then
  # Define absolute path to source dir if not already defined (though it should be by setup)
  TASKMASTER_SOURCE_DIR_ABS=${TASKMASTER_SOURCE_DIR_ABS:-$(cd "$ORIGINAL_DIR/$TASKMASTER_SOURCE_DIR" && pwd)}

  cd "$TEST_RUN_DIR"
  # Pass the absolute source directory path
  analyze_log_with_llm "$LOG_FILE" "$TASKMASTER_SOURCE_DIR_ABS"
  ANALYSIS_EXIT_CODE=$? # Capture the exit code of the analysis function
  # Optional: cd back again if needed
  cd "$ORIGINAL_DIR" # Ensure we change back to the original directory
else
  formatted_duration_for_error=$(_format_duration "$total_elapsed_seconds")
  echo "[ERROR] [$formatted_duration_for_error] $(date +"%Y-%m-%d %H:%M:%S") Test run directory $TEST_RUN_DIR not found. Cannot perform LLM analysis." >&2
fi

# Final cost formatting
formatted_total_e2e_cost=$(printf "%.6f" "$total_e2e_cost")
echo "Total E2E AI Cost: $formatted_total_e2e_cost USD"

exit $EXIT_CODE