Test Orchestrator
Overview
Coordinate parallel test execution across multiple test suites, frameworks, and environments. Manage test splitting, worker allocation, result aggregation, and intelligent retry strategies.
Prerequisites
- Test runner with parallel execution support (Jest, Vitest, pytest-xdist, Playwright, or JUnit 5)
- CI/CD platform configured (GitHub Actions, GitLab CI, CircleCI, or Jenkins)
- Test suite with consistent pass rates (flaky tests identified and tagged)
- Sufficient CI runner resources for parallel worker count
- Test result reporting tool (JUnit XML, Allure, or equivalent)
Instructions
- Analyze the existing test suite using Grep and Glob to catalog all test files, their framework, approximate run time, and dependency requirements.
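The cataloging pass can be sketched as a small script. The glob patterns and framework labels below are assumptions to adapt to the repository's actual conventions:

```python
from collections import Counter
from pathlib import Path

# Hypothetical per-framework file patterns; adjust to the repo's layout.
PATTERNS = {
    "jest/vitest": "**/*.test.[jt]s",
    "pytest": "**/test_*.py",
    "playwright": "**/*.spec.ts",
}

def catalog(root: str = ".") -> dict:
    """Count candidate test files per framework pattern under root."""
    counts = Counter()
    for framework, pattern in PATTERNS.items():
        counts[framework] = len(list(Path(root).glob(pattern)))
    return dict(counts)
```

Recording per-file run times on top of this (e.g. from a previous CI run's timing report) gives the data needed for the tier classification and sharding steps below.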
- Classify tests into execution tiers:
- Tier 1 (Fast): Unit tests with no I/O -- target under 30 seconds total.
- Tier 2 (Medium): Integration tests requiring local services -- target under 3 minutes.
- Tier 3 (Slow): E2E and browser tests -- target under 10 minutes.
- Configure parallel execution for each tier:
- Split unit tests across N workers using `jest --shard=i/N` or `pytest -n auto`.
- Shard E2E tests by test file using Playwright's `--shard=i/N` or Cypress parallelization.
- Assign heavier integration tests to dedicated workers with more resources.
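When per-file durations are available, a greedy longest-processing-time split balances shard wall-clock time better than positional sharding. This is a sketch of that strategy, not any framework's built-in `--shard` behavior (which typically splits by test order rather than measured duration):

```python
import heapq

def shard_by_duration(durations: dict, num_shards: int) -> list:
    """Greedy LPT split: assign each test file (longest first) to the
    currently least-loaded shard to balance total shard run time."""
    shards = [[] for _ in range(num_shards)]
    heap = [(0.0, i) for i in range(num_shards)]  # (total_seconds, shard_index)
    heapq.heapify(heap)
    for name, secs in sorted(durations.items(), key=lambda kv: -kv[1]):
        load, idx = heapq.heappop(heap)
        shards[idx].append(name)
        heapq.heappush(heap, (load + secs, idx))
    return shards

# Example: 22s of tests split across 2 workers at ~11s each.
print(shard_by_duration({"a": 10, "b": 9, "c": 2, "d": 1}, 2))
```

The resulting per-shard file lists can be passed to the runner explicitly (e.g. as positional arguments) instead of relying on index-based sharding.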
- Create a CI pipeline configuration that runs tiers in parallel:
- Tier 1 and Tier 2 run concurrently on separate jobs.
- Tier 3 runs after a fast pre-check gate passes.
- Each tier reports results to a unified collection step.
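Shard matrices for the pipeline can be generated rather than hand-written. The snippet below emits JSON suitable for a GitHub Actions `strategy.matrix` include list; the `shard`/`total` key names are an assumed convention to match against flags like `--shard=${{ matrix.shard }}/${{ matrix.total }}`:

```python
import json

def shard_matrix(num_shards: int) -> str:
    """Emit a JSON matrix describing shards 1..num_shards of num_shards."""
    matrix = {"include": [{"shard": i, "total": num_shards}
                          for i in range(1, num_shards + 1)]}
    return json.dumps(matrix)

print(shard_matrix(3))
```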
- Implement intelligent retry logic for flaky tests:
- Tag known flaky tests with `@flaky` or an equivalent marker.
- Retry failed tests up to 2 times before marking as failed.
- Track flaky test frequency in a log file for triage.
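In practice the runner's native mechanism should be preferred where it exists (e.g. pytest-rerunfailures' `--reruns 2`, Jest's `jest.retryTimes`, or Playwright's `retries` option). As a minimal sketch of the tracking logic, assuming each test can be re-invoked as a callable:

```python
import json
import time

def run_with_retries(run_test, test_id, max_retries=2,
                     log_path="flaky_tests.log"):
    """Invoke run_test() until it passes or retries are exhausted.
    A pass after one or more failures is logged as a flaky occurrence."""
    attempts = 0
    while True:
        attempts += 1
        if run_test():
            if attempts > 1:  # failed first, then passed: flaky
                with open(log_path, "a") as f:
                    f.write(json.dumps({"test": test_id,
                                        "attempts": attempts,
                                        "ts": time.time()}) + "\n")
            return True
        if attempts > max_retries:
            return False  # genuinely failing, not flaky
```

The append-only JSONL log gives a per-test flake frequency that can be aggregated periodically for triage.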
- Aggregate results from all parallel workers into a single report:
- Merge JUnit XML files from each shard.
- Calculate total pass/fail/skip counts and execution time.
- Identify the slowest tests for optimization targets.
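The merge step needs only the standard library. This sketch assumes each shard wrote a JUnit XML file to a hypothetical `results/` directory:

```python
import glob
import xml.etree.ElementTree as ET

def merge_junit(pattern="results/*.xml"):
    """Merge per-shard JUnit XML files into one <testsuites> tree and
    summarize pass/fail/skip counts plus the slowest test cases."""
    merged = ET.Element("testsuites")
    cases = []
    for path in glob.glob(pattern):
        root = ET.parse(path).getroot()
        # Files may use either <testsuite> or a <testsuites> wrapper.
        suites = [root] if root.tag == "testsuite" else list(root)
        for suite in suites:
            merged.append(suite)
            cases.extend(suite.iter("testcase"))
    failed = sum(1 for c in cases if c.find("failure") is not None
                 or c.find("error") is not None)
    skipped = sum(1 for c in cases if c.find("skipped") is not None)
    slowest = sorted(cases, key=lambda c: float(c.get("time", 0)),
                     reverse=True)[:10]
    summary = {
        "total": len(cases),
        "passed": len(cases) - failed - skipped,
        "failed": failed,
        "skipped": skipped,
        "slowest": [(c.get("classname"), c.get("name"),
                     float(c.get("time", 0))) for c in slowest],
    }
    return merged, summary
```

The `slowest` list feeds the optimization-target step directly; the merged tree can be serialized with `ET.ElementTree(merged).write(...)` for the unified report.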
- Write the orchestration configuration to the project's CI config file and validate it with a dry run.
Output
- CI pipeline configuration file (`.github/workflows/test.yml`, `.gitlab-ci.yml`, or equivalent)
- Test sharding configuration with worker count and split strategy
- Merged test result report in JUnit XML or JSON format
- Execution timeline showing parallel job durations and bottlenecks
- Flaky test inventory with retry counts and failure patterns