PixelProbe uses pytest with unit, integration, and performance tests.
```
tests/
├── conftest.py                  # Shared fixtures and test configuration
├── test_media_checker.py        # Core media checking functionality tests
├── unit/                        # Unit tests for individual components
│   ├── test_scan_service.py
│   ├── test_stats_service.py
│   ├── test_export_service.py
│   ├── test_maintenance_service.py
│   └── test_repositories.py
├── integration/                 # API endpoint integration tests
│   ├── test_scan_routes.py
│   ├── test_stats_routes.py
│   ├── test_admin_routes.py
│   └── test_maintenance_routes.py
├── performance/                 # Performance and benchmark tests
│   └── test_scan_performance.py
└── fixtures/                    # Test data and media samples
    ├── corrupted/               # Known corrupted media files
    └── valid/                   # Valid media files for testing
```
```bash
# Run all tests
pytest

# Run with verbose output
pytest -v

# Run a specific test file
pytest tests/test_media_checker.py

# Run a specific test
pytest tests/test_media_checker.py::test_video_corruption_detection

# Run tests matching a pattern
pytest -k "corruption"

# Run with coverage report
pytest --cov=pixelprobe --cov-report=html

# Run only unit tests
pytest tests/unit/

# Run only integration tests
pytest tests/integration/

# Run only benchmark tests
pytest --benchmark-only
```

```bash
# Generate coverage report
pytest --cov=pixelprobe --cov-report=term-missing

# Generate HTML coverage report (then open htmlcov/index.html in a browser)
pytest --cov=pixelprobe --cov-report=html
```

Coverage requirements:
- Minimum 80% overall coverage
- 90% coverage for critical paths (scan_service, media_checker)
- 100% coverage for security modules

Unit tests validate individual components in isolation using mocks and fixtures. They cover:
- Services: Business logic without database/filesystem dependencies
- Repositories: Data access patterns with mocked database
- Utilities: Helper functions, validators, decorators
- Models: Database model methods and properties
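For the repository layer, database access patterns can be exercised against a mocked session, so no real database is touched. The `ScanResultRepository` and `ScanResult` names below are illustrative stand-ins, not PixelProbe's actual classes:

```python
from unittest.mock import MagicMock

class ScanResult:
    """Placeholder model, for illustration only."""

class ScanResultRepository:
    """Hypothetical repository wrapping a SQLAlchemy-style session."""
    def __init__(self, session):
        self.session = session

    def count_corrupted(self):
        return self.session.query(ScanResult).filter_by(is_corrupted=True).count()

def test_repository_counts_corrupted_with_mocked_session():
    # A MagicMock session records the query chain instead of hitting a database
    session = MagicMock()
    session.query.return_value.filter_by.return_value.count.return_value = 3

    repo = ScanResultRepository(session)

    assert repo.count_corrupted() == 3
    session.query.return_value.filter_by.assert_called_once_with(is_corrupted=True)

test_repository_counts_corrupted_with_mocked_session()
```

Because the session is a mock, the test also verifies *how* the repository queried, not just the result.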
```python
from unittest.mock import patch

def test_scan_service_discovery(scan_service, mock_media_files):
    """Test file discovery logic"""
    with patch('os.scandir', return_value=mock_media_files):
        files = scan_service.discover_media_files(['/test'])
        assert len(files) == 3
        assert all(f.endswith(('.mp4', '.jpg')) for f in files)
```

Integration tests validate API endpoints and full request/response cycles. They cover:
- API Endpoints: All routes with various input scenarios
- Authentication: Access control and permissions
- Database Integration: Real database operations
- Error Handling: 4xx/5xx responses and error messages
```python
def test_scan_endpoint(client, db):
    """Test full scan workflow via API"""
    response = client.post('/api/scan-all',
                           json={'directories': ['/media']})
    assert response.status_code == 200
    assert response.json['status'] == 'started'

    # Verify database state
    scan = ScanState.query.first()
    assert scan.phase == 'discovering'
```

Performance tests ensure operations meet speed requirements. They measure:
- File Discovery: Speed of finding files in large directories
- Hash Calculation: Throughput for different file sizes
- Database Operations: Query performance with large datasets
- API Response Times: Endpoint latency under load
```python
@pytest.mark.benchmark
def test_file_discovery_performance(benchmark, large_directory):
    """Benchmark file discovery for 10k files"""
    result = benchmark(discover_media_files, [large_directory])
    assert benchmark.stats['mean'] < 1.0  # Must complete in under 1 second
```

Shared fixtures live in `conftest.py`:

```python
import pytest

from pixelprobe import create_app          # project import paths are illustrative
from pixelprobe.models import db as _db    # aliased so the fixture name `db` stays free

@pytest.fixture
def app():
    """Create test Flask application"""
    return create_app(testing=True)

@pytest.fixture
def db(app):
    """Create test database"""
    with app.app_context():
        _db.create_all()
        yield _db
        _db.drop_all()

@pytest.fixture
def scan_service(db):
    """Create ScanService instance"""
    return ScanService()

@pytest.fixture
def mock_scan_result():
    """Create mock scan result"""
    return ScanResult(
        file_path='/test/video.mp4',
        is_corrupted=False,
        file_size=1024000,
        file_type='video/mp4'
    )
```

Sample media files are exposed through path fixtures:

```python
@pytest.fixture
def corrupted_video():
    """Provide path to a corrupted video file"""
    return 'tests/fixtures/corrupted/broken_video.mp4'

@pytest.fixture
def valid_image():
    """Provide path to a valid image file"""
    return 'tests/fixtures/valid/good_image.jpg'
```

Testing best practices:
- Each test should be independent
- Use fixtures for setup/teardown
- Mock external dependencies
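pytest's built-in `tmp_path` fixture is a simple way to keep tests independent: each test receives a fresh temporary directory, so nothing leaks between runs. A minimal sketch (the report file and its format are made up for illustration):

```python
def test_scan_writes_report_in_isolation(tmp_path):
    # tmp_path is a pathlib.Path to a directory unique to this test invocation
    report = tmp_path / "report.json"
    report.write_text('{"corrupted": 0}')

    assert report.exists()
    assert report.read_text() == '{"corrupted": 0}'
```

pytest cleans up old temporary directories automatically, so no teardown code is needed.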
Use descriptive test names:

```python
# Good
def test_scan_service_handles_missing_directory(): ...
def test_api_returns_404_for_invalid_file(): ...

# Bad
def test_scan(): ...
def test_error(): ...
```

Follow the Arrange-Act-Assert pattern:

```python
def test_mark_file_as_good(scan_service, corrupted_file):
    # Arrange
    scan_result = scan_service.scan_file(corrupted_file)

    # Act
    updated = scan_service.mark_as_good(scan_result.id)

    # Assert
    assert updated.marked_as_good is True
    assert updated.is_corrupted is False
```

Always test edge cases:
- Empty inputs
- Invalid data types
- Boundary values
- Concurrent operations
- Error conditions
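Edge cases like these map naturally onto `pytest.mark.parametrize`, which runs one test body against many inputs. `is_valid_media_path` below is a toy validator written for this sketch, not a PixelProbe function:

```python
import pytest

def is_valid_media_path(path):
    """Toy validator used only to illustrate edge-case coverage."""
    return isinstance(path, str) and path.lower().endswith(('.mp4', '.jpg'))

@pytest.mark.parametrize("path,expected", [
    ("", False),             # empty input
    (None, False),           # invalid data type
    ("movie.MP4", True),     # boundary: uppercase extension
    ("notes.txt", False),    # unsupported extension
])
def test_is_valid_media_path(path, expected):
    assert is_valid_media_path(path) is expected
```

Each tuple becomes its own test case in the report, so a failing edge case is immediately identifiable.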
```python
import subprocess
from unittest.mock import patch

# Mock external services instead of invoking the real binaries
@patch('subprocess.run')
def test_ffmpeg_error_handling(mock_run):
    mock_run.side_effect = subprocess.CalledProcessError(1, 'ffmpeg')
    result = check_video_corruption('/test.mp4')
    assert result['is_corrupted'] is True
```

```bash
# Create a corrupted video for testing
dd if=/dev/urandom of=tests/fixtures/corrupted/broken.mp4 bs=1024 count=100

# Create a valid but small video
ffmpeg -f lavfi -i testsrc=duration=1:size=320x240:rate=30 \
       -f lavfi -i sine=frequency=1000:duration=1 \
       -pix_fmt yuv420p tests/fixtures/valid/small.mp4
```

Tests use a PostgreSQL test database that's created fresh for each test run:
- No persistence between tests
- Identical schema to production
- Configured via the `TEST_DATABASE_URL` environment variable
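A conftest helper could resolve that variable with a fallback; the default URL and helper name below are assumptions for illustration, not PixelProbe's actual configuration:

```python
import os

# Assumed default; adjust to your local PostgreSQL setup
DEFAULT_TEST_DB_URL = "postgresql://localhost/pixelprobe_test"

def get_test_database_url():
    """Return the test database URL, preferring TEST_DATABASE_URL if set."""
    return os.environ.get("TEST_DATABASE_URL", DEFAULT_TEST_DB_URL)
```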
```yaml
name: Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y ffmpeg imagemagick
          pip install -r requirements.txt
          pip install -r requirements-test.txt
      - name: Run tests
        run: pytest --cov=pixelprobe --cov-report=xml
      - name: Upload coverage
        uses: codecov/codecov-action@v2
```

```bash
# Run a single test with output
pytest -s -v tests/test_media_checker.py::test_specific_case

# Drop into the debugger on failure
pytest --pdb tests/failing_test.py

# Show local variables on failure
pytest -l
```

Common issues:
- Import Errors: Ensure PYTHONPATH includes the project root
- Database Errors: Check fixtures are properly scoped
- Async Issues: Use pytest-asyncio for async tests
- File Not Found: Use absolute paths in fixtures
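The PYTHONPATH issue in particular can be solved declaratively. A minimal `pytest.ini` sketch (the `pythonpath` option requires pytest 7.0+; this is an assumption, not PixelProbe's shipped config):

```ini
[pytest]
testpaths = tests
pythonpath = .
```

With `pythonpath = .`, pytest adds the project root to `sys.path` so `import pixelprobe` resolves without environment tweaks.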
When adding new features:
- Write tests first (TDD approach)
- Cover happy path and error cases
- Add integration test for new endpoints
- Update fixtures if needed
- Run full suite before committing
Example for new feature:
```python
# 1. Unit test for the service logic
def test_new_feature_service_logic(scan_service):
    result = scan_service.new_feature(param='value')
    assert result.status == 'success'

# 2. Integration test for the API
def test_new_feature_endpoint(client):
    response = client.post('/api/new-feature', json={'param': 'value'})
    assert response.status_code == 200

# 3. Error case test
def test_new_feature_invalid_input(client):
    response = client.post('/api/new-feature', json={})
    assert response.status_code == 400
```

Current test coverage goals:
- Overall: 80% minimum
- Core modules: 90% minimum
- API routes: 85% minimum
- Security modules: 100% required
Run `pytest --cov` to check current coverage.
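The 80% floor can also be enforced automatically so the suite fails when coverage regresses. A minimal `.coveragerc` sketch (assuming standard coverage.py options; PixelProbe may not ship this file):

```ini
[report]
fail_under = 80
show_missing = True
```

Equivalently, `pytest --cov=pixelprobe --cov-fail-under=80` fails the run from the command line when coverage drops below the threshold.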