Multi Node Launcher is the deployment and operations repository for r1setup and the ratio1.multi_node_launcher Ansible collection. Its purpose is to let an operator configure remote nodes, deploy the Ratio1 edge-node stack with Ansible, Docker, and systemd, and then operate those nodes through a single CLI instead of a collection of one-off scripts.
This repository exists to solve three related problems:
- bootstrap a local control machine with a usable `r1setup` command
- manage one or more remote Linux nodes through a consistent inventory-driven workflow
- package the underlying deployment logic as an Ansible collection that can be built and published independently
In practice, the repo contains:
- a root installer, `install.sh`, that installs the CLI entrypoint
- the Ansible collection under `mnl_factory`
- the main operator CLI in `mnl_factory/scripts/r1setup`
Network install:

```bash
curl -sSL https://raw.githubusercontent.com/Ratio1/r1setup/refs/heads/main/install.sh | bash
```

Local install from a checked-out repo:

```bash
bash install.sh
```

Start the CLI:

```bash
r1setup
```

Configure and deploy via the CLI:

```bash
r1setup
```

Manual Ansible workflow:
```bash
cd mnl_factory
ansible-galaxy collection install -r requirements.yml
ansible-playbook -i inventory/hosts.yml playbooks/site.yml
```

Build the collection locally:
```bash
cd mnl_factory
ansible-galaxy collection build --force
```

Run the CLI test suite:
```bash
cd mnl_factory/scripts
python3 test_r1setup.py
```

The CLI covers:

- node configuration and inventory management
- deployment flows for Docker, NVIDIA GPU support, and final service setup
- node status and information commands
- service customization
- SSH key management:
  - key installation and migration from password auth
  - extra public key installation
  - key-auth validation
  - optional SSH password-auth disable after successful verification
See `mnl_factory/scripts/README_r1setup.md` for CLI-specific operator guidance.
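The key-auth validation step above can be approximated with a non-interactive SSH probe. Here is a minimal sketch of the idea, not the CLI's actual implementation; the helper names are illustrative and the flags are standard OpenSSH options:

```python
# Sketch: build a non-interactive SSH probe that succeeds only when
# key-based authentication works (BatchMode disables password prompts).
# Illustrative only; this is not the r1setup CLI's actual code.
import subprocess


def build_probe(host: str, user: str, key_path: str) -> list:
    """Return an ssh command that fails fast unless key auth succeeds."""
    return [
        "ssh",
        "-o", "BatchMode=yes",      # never fall back to password prompts
        "-o", "ConnectTimeout=5",   # fail quickly on unreachable hosts
        "-i", key_path,
        f"{user}@{host}",
        "true",                     # trivial remote command
    ]


def key_auth_works(host: str, user: str, key_path: str) -> bool:
    """True if the probe exits 0, i.e. the key was accepted."""
    result = subprocess.run(build_probe(host, user, key_path),
                            capture_output=True)
    return result.returncode == 0
```

Only after a probe like this succeeds is it reasonable to consider disabling password authentication on the remote sshd.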
CLI-managed configuration lives under the current user’s home:
- `~/.ratio1/r1_setup/`: CLI state, local configs, active config metadata, local virtualenv
- `~/.ratio1/ansible_config/`: installed Ansible collection, `ansible.cfg`, collection path
Manual inventory example:

```yaml
all:
  children:
    gpu_nodes:
      hosts:
        node-a:
          ansible_host: 192.168.1.100
          ansible_user: root
          ansible_ssh_private_key_file: ~/.ssh/id_ed25519
```

Collection configuration surfaces:
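Because the CLI already depends on `pyyaml`, an inventory in this shape can be inspected programmatically. A minimal sketch, assuming the `all:/children:/hosts:` layout shown above (the helper name is illustrative):

```python
# Sketch: list hosts and their connection addresses from an Ansible
# inventory in the all:/children:/hosts: layout shown above.
import yaml

INVENTORY = """
all:
  children:
    gpu_nodes:
      hosts:
        node-a:
          ansible_host: 192.168.1.100
          ansible_user: root
"""


def hosts_by_group(inventory_text: str) -> dict:
    """Map group name -> {host name: ansible_host}."""
    data = yaml.safe_load(inventory_text)
    out = {}
    for group, body in data["all"]["children"].items():
        out[group] = {
            name: host_vars.get("ansible_host", name)
            for name, host_vars in (body.get("hosts") or {}).items()
        }
    return out

print(hosts_by_group(INVENTORY))
# {'gpu_nodes': {'node-a': '192.168.1.100'}}
```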
After installation:
- `r1setup` is symlinked to `/usr/local/bin/r1setup`
- scripts are stored under `~/.ratio1/r1_setup`
- the Ansible collection is installed under `~/.ratio1/ansible_config/collections`
After building the collection:
- `ansible-galaxy collection build --force` produces a `*.tar.gz` artifact in `mnl_factory/`
After a CLI release:
- GitHub release assets include the repository archive plus `r1setup`, `ver.py`, and `update.py`
Test connectivity manually:
```bash
cd mnl_factory
ansible all -i inventory/hosts.yml -m ping
```

Run the deploy playbook manually:
```bash
cd mnl_factory
ansible-playbook -i inventory/hosts.yml playbooks/site.yml
```

Run targeted SSH tests:
```bash
cd mnl_factory/scripts
python3 -m unittest tests.test_ssh_key_manager
```

- `r1setup` command missing after install:
  - rerun `install.sh`; on Linux the symlink step needs `sudo` for `/usr/local/bin`
- CLI release workflow did not trigger:
  - the automatic release workflow watches `mnl_factory/scripts/ver.py`
  - it only proceeds when `__VER__` changes
  - it also requires the fallback `CLI_VERSION` inside `mnl_factory/scripts/r1setup` to match
- Ansible Galaxy publish workflow did not trigger:
  - the publish workflow watches `mnl_factory/galaxy.yml`
  - it only proceeds when the `version` field changes
- SSH hardening concern:
  - disabling password authentication changes the remote machine’s sshd policy
  - test this on disposable hosts first and keep recovery keys outside the target machine
- GPU driver install issues:
  - verify Secure Boot is disabled
  - verify the host actually exposes NVIDIA hardware to the OS
  - inspect the Ansible output from the `nvidia_gpu` role for package/install failures
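The release gate described above (the `__VER__` / `CLI_VERSION` match) can be checked locally before pushing. A hedged sketch: the regexes assume conventional `__VER__ = "x.y.z"` and `CLI_VERSION = "x.y.z"` assignment forms, which may differ from the actual files:

```python
# Sketch: verify that the version in ver.py matches the fallback
# CLI_VERSION embedded in the r1setup script, mirroring the release
# workflow's precondition. The assignment formats are assumptions.
import re
from typing import Optional


def extract(text: str, name: str) -> Optional[str]:
    """Pull a quoted version string assigned to `name`, if present."""
    m = re.search(rf'{name}\s*=\s*[\'"]([^\'"]+)[\'"]', text)
    return m.group(1) if m else None


def versions_match(ver_py: str, r1setup_src: str) -> bool:
    """True when ver.py's __VER__ equals the CLI_VERSION fallback."""
    ver = extract(ver_py, "__VER__")
    fallback = extract(r1setup_src, "CLI_VERSION")
    return ver is not None and ver == fallback

# Example with inline stand-ins for the two files:
print(versions_match('__VER__ = "2.3.1"', 'CLI_VERSION = "2.3.1"'))  # True
```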
The repository is split between an end-user bootstrap layer and an Ansible collection:
- root bootstrap:
  - `install.sh` downloads CLI scripts and installs the `r1setup` command
- CLI layer:
  - `mnl_factory/scripts/r1setup` is the main interactive application
  - `mnl_factory/scripts/ver.py` is the CLI version source of truth
  - `mnl_factory/scripts/update.py` supports CLI update behavior
- collection layer:
  - `mnl_factory/playbooks` contains operational playbooks
  - `mnl_factory/roles` contains Docker, GPU, prerequisites, and setup roles
  - `mnl_factory/galaxy.yml` defines collection metadata
- `mnl_factory/scripts`: CLI logic, prerequisite/bootstrap scripts, tests
- `mnl_factory/playbooks`: deploy, service, node-info, SSH-key-management, and SSH hardening actions
- `mnl_factory/roles`: reusable Ansible roles
- `docs`: dated design and operational notes
- `.github/workflows`: CLI release and collection publish automation
Local machine prerequisites installed by mnl_factory/scripts/1_prerequisites.sh:
- Python 3 and a local virtualenv
- Ansible
- `ssh`, `ssh-keygen`, `openssl`, `sshpass`
- Python packages: `pyyaml`, `typing_extensions`, `certifi`
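The command-line tool checks performed by `1_prerequisites.sh` can be mirrored with a quick PATH lookup. A minimal sketch; the tool list comes from the section above and the helper name is illustrative:

```python
# Sketch: report which of the expected command-line tools are missing
# from PATH, similar in spirit to the checks in 1_prerequisites.sh.
import shutil

REQUIRED_TOOLS = ["ssh", "ssh-keygen", "openssl", "sshpass"]


def missing_tools(names: list) -> list:
    """Return the subset of `names` not found on PATH."""
    return [n for n in names if shutil.which(n) is None]


if __name__ == "__main__":
    missing = missing_tools(REQUIRED_TOOLS)
    if missing:
        print("missing prerequisites:", ", ".join(missing))
    else:
        print("all prerequisites present")
```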
Collection dependencies from mnl_factory/requirements.yml:
- `community.docker`
- `community.general`
- `ansible.posix`
Primary test commands:
```bash
cd mnl_factory/scripts
python3 test_r1setup.py
python3 -m unittest discover tests
python3 -m py_compile r1setup
```

The modular test package lives in `mnl_factory/scripts/tests`. The compatibility runner in `mnl_factory/scripts/test_r1setup.py` simply discovers and runs that suite.
- Password-based node configs may temporarily store SSH credentials in the managed inventory until migrated to SSH keys.
- SSH hardening is intentionally separated from SSH key migration so inventory auth changes and remote sshd policy changes are not conflated.
- The repository currently has strong unit and CLI regression coverage, but not a built-in disposable-host integration harness for end-to-end SSH daemon testing.
- `mnl_factory/build.sh` is a local helper and is not the CI-safe publishing path; GitHub Actions uses dedicated workflows instead.
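The first note above — password credentials living in the managed inventory until key migration — implies a simple transform: drop `ansible_ssh_pass` and record a key file instead. A hedged sketch; the variable names follow standard Ansible conventions, and the actual CLI logic may differ:

```python
# Sketch: migrate one host's inventory vars from password auth to key
# auth by removing ansible_ssh_pass and pointing at a private key.
# Standard Ansible variable names assumed; this is not the CLI's code.
def migrate_to_key_auth(host_vars: dict, key_path: str) -> dict:
    """Return a copy of host_vars with password auth replaced by key auth."""
    migrated = dict(host_vars)
    migrated.pop("ansible_ssh_pass", None)   # stop storing the password
    migrated["ansible_ssh_private_key_file"] = key_path
    return migrated

before = {"ansible_host": "192.168.1.100",
          "ansible_user": "root",
          "ansible_ssh_pass": "hunter2"}
after = migrate_to_key_auth(before, "~/.ssh/id_ed25519")
print(after)
# {'ansible_host': '192.168.1.100', 'ansible_user': 'root',
#  'ansible_ssh_private_key_file': '~/.ssh/id_ed25519'}
```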
- CLI release workflow: `.github/workflows/release.yml`
  - triggered by changes to `mnl_factory/scripts/ver.py`
- Ansible Galaxy publish workflow: `.github/workflows/publish-ansible-galaxy.yml`
  - triggered by changes to `mnl_factory/galaxy.yml`
- Ansible documentation, “Installing collections”: https://docs.ansible.com/ansible/latest/collections_guide/collections_installing.html
- Ansible documentation, “Developing collections / distributing collections”: https://docs.ansible.com/ansible/latest/dev_guide/developing_collections_distributing.html
- Ansible documentation, `ansible.posix.authorized_key`: https://docs.ansible.com/ansible/latest/collections/ansible/posix/authorized_key_module.html
- GitHub documentation, “Workflow syntax for GitHub Actions”: https://docs.github.com/actions/using-workflows/workflow-syntax-for-github-actions
- OpenBSD manual page, `ssh-keygen(1)`: https://man.openbsd.org/ssh-keygen
- Andrei Damian
- Vitalii Toderian
MIT
This repository can modify remote SSH configuration, install Docker and NVIDIA-related packages, and deploy a long-running systemd-managed container workload. Validate changes on disposable infrastructure before rolling them into production.