JUnit-inspired testing framework for Pike. Provides structured test discovery, rich assertions, parameterized tests, tag-based filtering, and multiple output formats.
- Pike 8.0.1116 or later
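To verify your installed version:

```sh
pike --version
```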
Write a test file:

```pike
import PUnit;
inherit PUnit.TestCase;

constant test_tags = ([
    "test_addition": ({"math", "core"}),
]);

void test_addition() {
    assert_equal(2, 1 + 1);
}

void test_comparison() {
    assert_gt(10, 5);
    assert_lte(5, 5);
}
```

Run it:

```sh
pike -M . run_tests.pike tests/
```

Output:

```
..
Results: 2 passed (0.0ms)
```
| Assertion | Description |
|---|---|
| `assert_equal(expected, actual)` | Structural equality (with diff for arrays/mappings) |
| `assert_not_equal(expected, actual)` | Inequality |
| `assert_same(expected, actual)` | Identity check (same object) |
| `assert_not_same(expected, actual)` | Different identity |
| `assert_true(val)` / `assert_false(val)` | Boolean check |
| `assert_null(val)` / `assert_not_null(val)` | Zero check |
| `assert_undefined(val)` | UNDEFINED check |
| `assert_gt(a, b)` / `assert_lt(a, b)` | Ordered comparison |
| `assert_gte(a, b)` / `assert_lte(a, b)` | Inclusive bounds |
| `assert_contains(needle, haystack)` | Membership (array, mapping, string) |
| `assert_match(pattern, str)` | Regexp match |
| `assert_approx_equal(a, b, tolerance)` | Floating-point comparison |
| `assert_type(type_name, val)` | Runtime type check |
| `assert_throws(error_type, fn)` | Exception expected |
| `assert_throws_fn(fn)` | Any exception expected |
| `assert_no_throw(fn)` | No exception expected |
| `assert_fail(msg)` | Unconditional failure |
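A short test exercising a few of these (a sketch using only assertions from the table above):

```pike
import PUnit;
inherit PUnit.TestCase;

void test_assorted() {
    assert_contains("ell", "hello");                  // substring membership
    assert_approx_equal(0.3, 0.1 + 0.2, 1e-9);        // float tolerance
    assert_throws_fn(lambda() { error("boom\n"); });  // any exception
}
```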
By default, assertion failures report locations inferred from the backtrace. For exact file:line reporting, include the macro header:
```pike
#include <PUnit.pmod/macros.h>

import PUnit;
inherit PUnit.TestCase;

void test_example() {
    assert_equal(2, 1 + 1); // failure shows this exact line
}
```

The header redefines all assertion functions as preprocessor macros that inject `__FILE__` and `__LINE__`.
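Conceptually, the header applies a pattern like the following (a hypothetical illustration; the actual names in `macros.h` may differ):

```pike
// Hypothetical sketch: route each assertion through a
// location-aware variant that receives the call site.
#define assert_equal(e, a) assert_equal_at((e), (a), __FILE__, __LINE__)
```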
Define `test_data` as a mapping from method names to arrays of row data. Each row is passed as a mapping argument:

```pike
constant test_data = ([
    "test_add": ({
        ([ "a": 1,   "b": 1,   "expected": 2 ]),
        ([ "a": -1,  "b": 1,   "expected": 0 ]),
        ([ "a": 100, "b": 200, "expected": 300 ]),
    }),
]);

void test_add(mapping p) {
    assert_equal(p->expected, p->a + p->b);
}
```

The runner expands these into individual tests named `test_add[0]`, `test_add[1]`, and `test_add[2]`. Each row runs independently and reports its own pass/fail status.
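Since the expanded rows are ordinary test names, you can target them with the runner's glob filter (assuming the glob is matched against the expanded names):

```sh
# Run only the expanded rows of test_add
pike -M . run_tests.pike --filter='test_add*' tests/
```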
Assign tags via the `test_tags` constant:

```pike
constant test_tags = ([
    "test_addition": ({"math", "core"}),
    "test_slow_operation": ({"slow"}),
]);
```

Or use inline tags directly in method names with double-underscore suffixes:

```pike
void test_add__math__fast() {
    // Automatically tagged with "math" and "fast"
    assert_true(1);
}
```

Inline tags are merged with any explicit `test_tags` entries for the base method name.
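For example, combining both mechanisms (per the merge rule above, the base name here is `test_add`):

```pike
constant test_tags = ([
    "test_add": ({"core"}),
]);

// Effective tags: "core" (from test_tags) plus "math" and "fast" (inline)
void test_add__math__fast() {
    assert_equal(2, 1 + 1);
}
```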
Run filtered subsets:

```sh
# Only tests tagged "math"
pike -M . run_tests.pike --tag=math tests/

# Exclude "slow" tests
pike -M . run_tests.pike --exclude-tag=slow tests/

# Combine: run "core" tests but not "slow" ones
pike -M . run_tests.pike -t core -e slow tests/
```

Skip individual tests by listing them in the `skip_tests` multiset:

```pike
constant skip_tests = (< "test_not_ready" >);

void test_not_ready() {
    assert_fail("This should not run");
}
```

Skipped tests are reported separately and do not count as failures.
Override lifecycle hooks in your test class. All four are optional:
| Hook | Scope | When it runs |
|---|---|---|
| `setup_class()` | Once per class | Before any test method in the class |
| `teardown_class()` | Once per class | After all test methods in the class |
| `setup()` | Per test | Before each test method |
| `teardown()` | Per test | After each test method (even on failure) |
```pike
import PUnit;
inherit PUnit.TestCase;

protected object db;

void setup_class() {
    // Expensive one-time setup: open database connection
    db = Database.Connection("test://localhost");
}

void teardown_class() {
    // One-time cleanup
    db->close();
    db = 0;
}

void setup() {
    // Per-test setup: reset state before each test
    db->execute("BEGIN");
}

void teardown() {
    // Per-test cleanup: runs even if setup or the test threw
    db->execute("ROLLBACK");
}
```

`setup_class()` and `teardown_class()` run once for the entire class. `setup()` and `teardown()` run around each individual test method.
List test names without running them:

```sh
# Names only
pike -M . run_tests.pike --list tests/

# Names with tags
pike -M . run_tests.pike --list=verbose tests/
```

Validate configuration correctness (catch typos in `test_tags`, `skip_tests`, or `test_data` keys):

```sh
pike -M . run_tests.pike --strict tests/
```

Without `--strict`, mismatches produce warnings. With `--strict`, they become errors that cause a non-zero exit code.
The default dot reporter prints one character per test:

```
....S..F.
Results: 8 passed, 1 failed, 1 skipped (12.3ms)
```

Verbose output:

```sh
pike -M . run_tests.pike -v tests/
```

```
[ ExampleTests ] (5 tests)
  OK   ExampleTests::test_addition (0.1ms)
  OK   ExampleTests::test_subtraction (0.0ms)
  SKIP ExampleTests::test_slow (skipped)
  FAIL ExampleTests::test_broken (0.0ms)
       Expected: 42
       Actual: 0
       at tests/ExampleTests.pike:15
Results: 3 passed, 1 failed, 1 skipped (0.1ms)
```

TAP v13 output:

```sh
pike -M . run_tests.pike --tap tests/
```

```
TAP version 13
ok 1 - ExampleTests::test_addition
ok 2 - ExampleTests::test_subtraction
ok 3 - ExampleTests::test_slow # SKIP skipped
not ok 4 - ExampleTests::test_broken
  ---
  message: "Expected 42, got 0"
  severity: fail
  location: "tests/ExampleTests.pike:15"
  ...
1..4
```

JUnit XML:

```sh
pike -M . run_tests.pike --junit=report.xml tests/
```

Writes a JUnit-compatible XML report suitable for CI systems.
Set a per-test timeout to prevent hanging tests from blocking CI:

```sh
pike -M . run_tests.pike --timeout=10 tests/
```

Each test method gets N seconds to complete. Timed-out tests are reported as errors with the message "Test timed out after Ns". Without `--timeout`, tests run without a time limit.
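For instance, a test that blocks indefinitely (a sketch; `sleep()` is Pike's standard blocking sleep) would be reported as an error under `--timeout=10`:

```pike
void test_hangs_forever() {
    // Blocks far past the limit; with --timeout=10 the runner
    // reports: Test timed out after 10s
    sleep(3600);
}
```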
Run tests in random order to detect hidden inter-test dependencies:

```sh
# Random order (auto-generated seed printed to stderr)
pike -M . run_tests.pike --randomize tests/

# Reproducible random order
pike -M . run_tests.pike --randomize --seed=42 tests/
```

The seed is printed to stderr so you can reproduce failures. Without `--seed`, a seed is generated from the current time.
```
pike -M . run_tests.pike [options] <directories...>

Options:
  -v, --verbose          Show each test name with status
  -t, --tag=TAG          Run only tests with this tag (repeatable)
  -e, --exclude-tag=TAG  Skip tests with this tag (repeatable)
  -f, --filter=GLOB      Run only test methods matching glob
  -s, --stop-on-failure  Stop after first failure
      --list             List test names without running
      --list=verbose     List test names with tags
      --strict           Treat validation warnings as errors
      --no-color         Disable ANSI colors
      --timeout=N        Per-test timeout in seconds
      --randomize        Run tests in random order
      --seed=N           Random seed for --randomize (reproducible)
      --junit=FILE       Write JUnit XML report to FILE
      --tap              Output TAP v13 to stdout
  -h, --help             Show this help
```

Exit code: 0 if all pass, 1 if any failure.
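Flags combine freely; for example:

```sh
# Verbose run of "core" tests, excluding "slow", stopping at the first failure
pike -M . run_tests.pike -v -t core -e slow -s tests/
```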
```
PUnit.pmod/
  Assertions.pmod       Assertion functions
  Colors.pmod           ANSI color helpers
  DotReporter.pike      Dot-matrix output
  Error.pmod            Error formatting and location extraction
  JUnitReporter.pike    JUnit XML output
  macros.h              Preprocessor macros for exact locations
  module.pmod           Module entry point (re-exports Assertions)
  Reporter.pike         Base reporter interface
  TAPReporter.pike      TAP v13 output
  TestCase.pike         Base test class with setup/teardown
  TestResult.pike       Per-test result container
  TestRunner.pike       CLI harness, file discovery, compilation
  TestSuite.pike        Suite runner, parameterization, tag filtering
  VerboseReporter.pike  Human-readable verbose output
run_tests.pike          CLI entry point
tests/                  Example tests
```
MIT