This document describes the testing infrastructure and coverage goals for the GitHub Agentic Workflow Firewall project.
The project uses Jest as the testing framework with TypeScript support via ts-jest and Babel for ESM module transformation. All tests are located in the src/ directory alongside their corresponding source files using the .test.ts suffix.
The test infrastructure is configured to handle ESM-only dependencies (like chalk 5.x, execa 9.x, commander 14.x) through:
- `babel.config.js`: Transforms ESM syntax to CommonJS for Jest compatibility
- `jest.config.js`: Includes `transformIgnorePatterns` to transform ESM packages in `node_modules`
- `babel-jest`: Handles JavaScript module transformation
- `@babel/preset-env`: Targets the current Node.js version for optimal transformation
This configuration allows the project to:
- Use modern ESM-only npm packages in tests
- Mock ESM modules with Jest's standard mocking API
- Maintain compatibility with the existing TypeScript + CommonJS codebase
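As a sketch of how these pieces fit together, the relevant `jest.config.js` entries might look like the following. The package names come from the list above; everything else (preset, test match pattern) is an assumption, not a copy of the project's actual config:

```javascript
// jest.config.js (illustrative sketch, not the project's actual file)
module.exports = {
  preset: 'ts-jest',
  testEnvironment: 'node',
  testMatch: ['**/src/**/*.test.ts'],
  transform: {
    // Let babel-jest (with @babel/preset-env) rewrite ESM syntax to CommonJS
    '^.+\\.js$': 'babel-jest',
  },
  // Jest skips node_modules by default; carve out the ESM-only packages
  // so they are transformed too.
  transformIgnorePatterns: ['/node_modules/(?!(chalk|execa|commander)/)'],
};
```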
```bash
# Run all tests
npm test

# Run tests in watch mode (auto-rerun on file changes)
npm run test:watch

# Run tests with coverage report
npm run test:coverage

# Run a specific test file
npm test -- src/logger.test.ts

# Run tests matching a pattern
npm test -- --testNamePattern="should log debug messages"
```

```bash
# Build TypeScript to JavaScript
npm run build

# Run linter
npm run lint

# Clean build artifacts
npm run clean
```

The project generates comprehensive coverage reports in multiple formats:
After running `npm run test:coverage`, coverage reports are available in the `coverage/` directory:
- HTML Report: Open `coverage/index.html` in a browser for an interactive view
- Terminal: A coverage summary is displayed in the console after the test run
- LCOV: `coverage/lcov.info` for integration with CI/CD tools
- JSON: `coverage/coverage-summary.json` for programmatic access
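Reading the JSON report takes only a few lines of TypeScript. This is a hedged sketch: it assumes the standard Istanbul `json-summary` shape (`total.lines.pct`, etc.), and `overallLinePct` is a hypothetical helper, not part of the project:

```typescript
// Extract the overall line-coverage percentage from a json-summary document.
interface CoverageMetric {
  total: number;
  covered: number;
  pct: number;
}
interface CoverageSummary {
  total: { lines: CoverageMetric; statements: CoverageMetric };
}

function overallLinePct(json: string): number {
  const summary = JSON.parse(json) as CoverageSummary;
  return summary.total.lines.pct;
}

// In a real script you would read the file instead:
//   overallLinePct(fs.readFileSync('coverage/coverage-summary.json', 'utf8'))
const sample = JSON.stringify({
  total: {
    lines: { total: 100, covered: 38, pct: 38.31 },
    statements: { total: 100, covered: 38, pct: 38.39 },
  },
});
console.log(overallLinePct(sample)); // → 38.31
```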
The project includes automated test coverage reporting via GitHub Actions (.github/workflows/test-coverage.yml):
- Automatic PR Comments: Coverage reports are automatically posted as comments on pull requests
- GitHub Actions Summary: Each workflow run includes a coverage summary in the job output
- Coverage Artifacts: Full coverage reports are uploaded as artifacts for 30 days
- Update Strategy: Existing coverage comments are updated on subsequent pushes to avoid comment spam
The coverage workflow runs on:
- All pull requests to `main`
- All pushes to `main`

Required permissions:
- `contents: read` - To check out the repository
- `pull-requests: write` - To post/update PR comments
- `checks: write` - To update check status
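In the workflow file these permissions would be declared roughly as follows (a sketch of standard GitHub Actions syntax, not a copy of `test-coverage.yml`):

```yaml
# Top of .github/workflows/test-coverage.yml (illustrative)
permissions:
  contents: read        # check out the repository
  pull-requests: write  # post/update PR comments
  checks: write         # update check status
```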
The project maintains the following minimum coverage thresholds (configured in jest.config.js):
| Metric | Threshold |
|---|---|
| Statements | 38% |
| Branches | 30% |
| Functions | 35% |
| Lines | 38% |
Tests will fail if coverage drops below these thresholds.
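These minimums are enforced through Jest's standard `coverageThreshold` option; a minimal sketch of the relevant `jest.config.js` fragment:

```javascript
// jest.config.js (fragment) - global minimums; Jest fails the run if
// any metric drops below its threshold.
module.exports = {
  coverageThreshold: {
    global: {
      statements: 38,
      branches: 30,
      functions: 35,
      lines: 38,
    },
  },
};
```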
As of the latest update:
| File | Statements | Branches | Functions | Lines | Status |
|---|---|---|---|---|---|
| cli-workflow.ts | 100% | 100% | 100% | 100% | ✅ |
| squid-config.ts | 100% | 100% | 100% | 100% | ✅ |
| logger.ts | 100% | 100% | 100% | 100% | ✅ |
| host-iptables.ts | 83.63% | 55.55% | 100% | 83.63% | |
| docker-manager.ts | 18% | 22.22% | 4% | 17.15% | ❌ |
| cli.ts | 0% | 0% | 0% | 0% | ❌ |
| Overall | 38.39% | 31.78% | 37.03% | 38.31% | |
- ✅ Excellent (>80%): Functions and modules with high coverage
- ⚠️ Good (50-80%): Acceptable coverage, improvement recommended
- ❌ Needs Improvement (<50%): Priority areas for adding tests
- Tests are colocated with source files
- Test files use the `.test.ts` extension
- Example: `logger.ts` → `logger.test.ts`
Tests follow this structure:

```typescript
import { functionToTest } from './module';

describe('module name', () => {
  describe('function or class name', () => {
    it('should do something specific', () => {
      // Test implementation
      expect(result).toBe(expected);
    });
  });
});
```

The project uses Jest's built-in mocking capabilities:
```typescript
// Mock external dependencies
jest.mock('execa');

// Mock console output in tests
const consoleErrorSpy = jest.spyOn(console, 'error').mockImplementation();

// Mock chalk for cleaner test output
jest.mock('chalk', () => ({
  gray: jest.fn((text) => text),
  blue: jest.fn((text) => text),
  // ... other colors
}));
```

- Test Behavior, Not Implementation: Focus on what the code does, not how it does it
- Clear Test Names: Use descriptive test names that explain the expected behavior
- Arrange-Act-Assert: Structure tests in three clear sections
- Test Edge Cases: Include tests for boundary conditions and error cases
- Mock External Dependencies: Isolate the unit under test from external systems
- Clean Up: Use `beforeEach` and `afterEach` to reset state between tests
```typescript
import { logger } from './logger';

describe('logger', () => {
  let consoleErrorSpy: jest.SpyInstance;

  beforeEach(() => {
    consoleErrorSpy = jest.spyOn(console, 'error').mockImplementation();
    logger.setLevel('info');
  });

  afterEach(() => {
    consoleErrorSpy.mockRestore();
  });

  it('should log info messages when level is info', () => {
    logger.info('test message');
    expect(consoleErrorSpy).toHaveBeenCalledWith('[INFO] test message');
  });
});
```

The project runs tests automatically on:
- Pull request creation and updates
- Pushes to main branch
- Scheduled daily runs
CI checks include:
- Linting with ESLint
- TypeScript compilation
- Full test suite execution
- Coverage report generation
- Coverage threshold validation
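These checks map onto workflow steps along roughly these lines (step ordering and action versions are assumptions, not the project's actual workflow file):

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-node@v4
  - run: npm ci
  - run: npm run lint          # Linting with ESLint
  - run: npm run build         # TypeScript compilation
  - run: npm run test:coverage # Tests + coverage + threshold validation
```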
The project uses a multi-level parallelization strategy to optimize CI time:
- Unit tests run in parallel using Jest's worker pool
- Configuration: `maxWorkers: '50%'` uses half of the available CPU cores
- Tests are isolated and have no shared state, making them safe to parallelize
- Run time: ~6-8 seconds for 549 tests
- Integration tests run in separate GitHub Actions runners using a matrix strategy
- Each test file runs in its own isolated environment to avoid Docker conflicts
- Test files run in parallel across multiple runners:
  - `basic-firewall.test.ts` - Core firewall functionality
  - `robustness.test.ts` - Comprehensive edge cases
  - `volume-mounts.test.ts` - Volume mount functionality
  - `container-workdir.test.ts` - Working directory configuration
  - `no-docker.test.ts` - Docker-in-Docker removal verification
Why Integration Tests Can't Use Jest Workers:
- Integration tests use Docker containers that share network resources
- Running multiple tests simultaneously would cause:
- Docker network subnet pool exhaustion
- Container name conflicts
- Port binding conflicts
- The `maxWorkers: 1` setting in `jest.integration.config.js` ensures sequential execution within each runner
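A sketch of what `jest.integration.config.js` might contain; only `maxWorkers: 1` is stated in this document, the other fields are assumptions:

```javascript
// jest.integration.config.js (illustrative sketch)
module.exports = {
  preset: 'ts-jest',
  testEnvironment: 'node',
  testMatch: ['**/integration/**/*.test.ts'], // location is an assumption
  maxWorkers: 1, // run integration tests sequentially within each runner
  testTimeout: 120000, // generous timeout for Docker setup (assumption)
};
```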
Benefits of Matrix Strategy:
- Each test file runs on a dedicated runner (full isolation)
- All test files run in parallel (reduces wall-clock time)
- `fail-fast: false` ensures all tests complete even if one fails
- Individual test artifacts are captured for failed tests
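The matrix fan-out described above corresponds to a strategy block along these lines (a sketch of standard GitHub Actions syntax; the job and variable names are assumptions):

```yaml
jobs:
  integration:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false   # let every matrix entry finish even if one fails
      matrix:
        test_file:
          - basic-firewall.test.ts
          - robustness.test.ts
          - volume-mounts.test.ts
          - container-workdir.test.ts
          - no-docker.test.ts
    steps:
      - uses: actions/checkout@v4
      - run: npx jest --config jest.integration.config.js ${{ matrix.test_file }}
```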
```bash
# Run tests with Node debugger
node --inspect-brk node_modules/.bin/jest --runInBand

# Run tests with increased timeout
npm test -- --testTimeout=10000
```

- Test Timeout: Increase the timeout for slow tests with `jest.setTimeout(10000)` in the test file
- Mock Not Working: Ensure mocks are defined before imports using `jest.mock()`
- Async Tests Failing: Make sure to `await` async operations and use the `done()` callback if needed
- Coverage Not Generated: Check that files match the patterns in `collectCoverageFrom` in jest.config.js
The project includes the following test files:
- `cli-workflow.test.ts`: Tests for the main workflow orchestration
- `cli.test.ts`: Tests for CLI argument parsing and command execution
- `docker-manager.test.ts`: Tests for Docker container management
- `host-iptables.test.ts`: Tests for iptables firewall configuration
- `jest-esm-config.test.ts`: Tests for Jest ESM configuration and module transformation
- `logger.test.ts`: Tests for logging functionality
- `squid-config.test.ts`: Tests for Squid proxy configuration generation
To improve coverage in low-coverage areas:
- `docker-manager.ts` (Current: 18%)
  - Add tests for container lifecycle functions
  - Test error handling paths
  - Mock Docker API interactions
- `cli.ts` (Current: 0%)
  - Test the CLI entry point with various argument combinations
  - Test error handling and validation
  - Mock subprocess execution
- `host-iptables.ts` (Current: 83.63%)
  - Test remaining edge cases
  - Add tests for error conditions
  - Test cleanup scenarios