This guide covers running SecAI OS services locally for development and testing, without building a full OS image.
- Go 1.22+ for building Go services (registry, tool-firewall, airlock)
- Python 3.11+ for running Python services (quarantine, UI, search-mediator)
- pip for Python dependency management
- git for version control
- make (optional, for convenience targets)
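The version minimums above can be sanity-checked with a small helper. This is a sketch for illustration; `version_ok` is not part of the repo.

```python
def version_ok(version: str, minimum: tuple) -> bool:
    """Compare a dotted version string against a (major, minor) minimum."""
    major, minor = (int(p) for p in version.split(".")[:2])
    return (major, minor) >= minimum


print(version_ok("1.22.5", (1, 22)))   # Go toolchain new enough -> True
print(version_ok("3.10.12", (3, 11)))  # Python too old -> False
```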
```bash
git clone https://github.com/SecAI-Hub/SecAI_OS.git
cd SecAI_OS
```

Each Go service lives in its own directory under `services/`.
```bash
cd services/registry
go build -o registry .
./registry
```

The registry listens on port 8470. It creates a manifest file at the configured path (`./manifest.yaml` by default in dev mode).
```bash
cd services/tool-firewall
go build -o tool-firewall .
./tool-firewall
```

The tool firewall listens on port 8475.
```bash
cd services/airlock
go build -o airlock .
./airlock
```

The airlock listens on port 8490. It is disabled by default; set `AIRLOCK_ENABLED=true` to activate it in dev mode.
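Boolean environment flags like `AIRLOCK_ENABLED` are typically parsed along these lines. This is an illustrative sketch; the exact set of accepted truthy values is an assumption, so check the airlock source for the real parsing.

```python
import os


def env_flag(name: str, default: bool = False) -> bool:
    """Interpret an environment variable as a boolean feature flag.

    The accepted truthy values here ("1", "true", "yes", "on") are an
    assumption for illustration, not necessarily what the airlock uses.
    """
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in {"1", "true", "yes", "on"}


os.environ["AIRLOCK_ENABLED"] = "true"
print(env_flag("AIRLOCK_ENABLED"))  # True
```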
```bash
pip install -r services/quarantine/requirements.txt
pip install -r services/ui/requirements.txt
pip install -r services/search-mediator/requirements.txt
pip install -r tests/requirements.txt
```

Start the UI:

```bash
cd services/ui
python app.py
```

The UI listens on port 8480. Open http://localhost:8480 in a browser.
The quarantine pipeline runs as a watcher service that monitors the quarantine directory:
```bash
cd services/quarantine
python watcher.py
```

Start the search mediator:

```bash
cd services/search-mediator
python app.py
```

The search mediator listens on port 8485. Full functionality requires a running SearXNG instance and Tor.
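With several services running, a quick TCP probe confirms which ones are listening on their documented dev ports. This is a hypothetical helper for convenience, not something shipped with the repo; the port map matches the services above.

```python
import socket

# Dev-mode ports documented above
DEV_PORTS = {
    "registry": 8470,
    "tool-firewall": 8475,
    "ui": 8480,
    "search-mediator": 8485,
    "airlock": 8490,
}


def is_listening(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something accepts TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0


for name, port in DEV_PORTS.items():
    status = "up" if is_listening(port) else "down"
    print(f"{name:>16} :{port} {status}")
```

Note that the quarantine watcher does not appear here: it monitors a directory rather than listening on a port.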
```bash
# Registry tests
cd services/registry && go test -v ./...

# Tool Firewall tests
cd services/tool-firewall && go test -v ./...

# Airlock tests
cd services/airlock && go test -v ./...
```

```bash
# All Python tests
cd tests
python -m pytest -v

# Specific test suites
python -m pytest test_pipeline.py -v
python -m pytest test_ui.py -v
python -m pytest test_memory_protection.py -v
python -m pytest test_differential_privacy.py -v
python -m pytest test_traffic_analysis.py -v
```

Services look for configuration files in the following order:
1. The path specified by an environment variable (e.g., `SECAI_POLICY_PATH`)
2. `./policy.yaml` in the current working directory
3. `/etc/secure-ai/policy/policy.yaml` (the production path, unlikely to exist in dev)
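The lookup order above can be sketched as follows. This is illustrative only; `find_policy_path` is not part of the services, and the real resolution logic lives in each service's own code.

```python
import os


def find_policy_path():
    """Return the first existing policy file in the documented lookup order,
    or None if no candidate exists. Sketch of the documented behavior."""
    candidates = [
        os.environ.get("SECAI_POLICY_PATH"),   # 1. explicit override
        "./policy.yaml",                       # 2. current working directory
        "/etc/secure-ai/policy/policy.yaml",   # 3. production path
    ]
    for path in candidates:
        if path and os.path.isfile(path):
            return path
    return None
```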
For development, copy the default policy file:
```bash
cp files/system/etc/secure-ai/policy/policy.yaml ./policy.yaml
```

Edit `policy.yaml` to adjust settings for your dev environment.
When running services directly (outside of the full OS image), the following security features are not active:
- Systemd sandboxing: ProtectSystem, ProtectHome, PrivateTmp, NoNewPrivileges, and other systemd hardening directives only apply when services run under systemd.
- nftables firewall: Network rules are not applied in dev mode. Services can make arbitrary network connections.
- Seccomp-BPF filters: System call filtering requires the systemd service units.
- Landlock LSM: Filesystem access restrictions require the systemd service units.
- Encrypted vault: The LUKS encrypted volume is not present in dev mode. Models are stored in plain directories.
- Read-only root: The immutable filesystem is a property of the OS image, not the services.
Dev mode is for development and testing only. Do not use dev mode for processing sensitive data or running untrusted models.
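A service could detect that it is running outside systemd (and therefore without any of the sandboxing above) by checking for `INVOCATION_ID`, which systemd sets for units it launches. This is a sketch of one possible safeguard, not something the services necessarily implement.

```python
import os


def running_under_systemd() -> bool:
    """systemd sets INVOCATION_ID for units it starts; a dev shell does not."""
    return "INVOCATION_ID" in os.environ


if not running_under_systemd():
    print("WARNING: dev mode detected - systemd sandboxing is NOT active")
```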
If you do not have llama.cpp installed or do not need actual inference:
- The UI, registry, and tool firewall can run independently.
- Chat and generation endpoints will return errors without an inference worker.
- Model management (import, quarantine, promote) works without inference.
To set up llama-server for local inference:
```bash
# Build llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j$(nproc)

# Start the server with a model
./llama-server -m /path/to/model.gguf --port 8081
```
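Once llama-server is up, it exposes a `/health` endpoint that can be probed to confirm it is ready. Below is a minimal, hypothetical check assuming the dev port used above; it is not part of the repo.

```python
import urllib.request


def llama_server_healthy(base_url: str = "http://localhost:8081") -> bool:
    """Return True if llama-server answers 200 on its /health endpoint."""
    try:
        with urllib.request.urlopen(base_url + "/health", timeout=2) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, HTTP errors, etc.
        return False
```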