Docs: move {contributing, disk, mesh, backups} into guides

This commit is contained in:
Johannes Kirschbauer
2025-05-18 18:22:32 +02:00
parent f6239b1471
commit 7ff62958e6
9 changed files with 8 additions and 8 deletions

docs/site/guides/backups.md Normal file

@@ -0,0 +1,167 @@
# Introduction to Backups
When you're managing your own services, creating regular backups is crucial to ensure your data's safety.
This guide introduces you to Clan's built-in backup functionalities.
Clan supports backing up your data to both local storage devices (like USB drives) and remote servers, using well-known tools like borgbackup and rsnapshot.
We might add more options in the future, but for now, let's dive into how you can secure your data.
## Backing Up Locally with Localbackup
Localbackup lets you back up your data onto physical storage devices connected to your computer,
such as USB hard drives or network-attached storage. It uses a tool called rsnapshot for this purpose.
### Setting Up Localbackup
1. **Identify Your Backup Device:**
First, figure out which device you'll use for backups. You can see all connected devices by running this command in your terminal:
```bash
lsblk --output NAME,PTUUID,FSTYPE,SIZE,MOUNTPOINT
```
Look for the device you intend to use for backups and note its details.
2. **Configure Your Backup Device:**
Once you've identified your device, you'll need to add it to your configuration.
Here's an example NixOS configuration for a device located at `/dev/sda2` with an `ext4` filesystem:
```nix
{
fileSystems."/mnt/hdd" = {
device = "/dev/sda2";
fsType = "ext4";
options = [ "defaults" "noauto" ];
};
}
```
Replace `/dev/sda2` with your device and `/mnt/hdd` with your preferred mount point.
3. **Set Backup Targets:** Next, define where on your device you'd like the backups to be stored:
```nix
{
clan.localbackup.targets.hdd = {
directory = "/mnt/hdd/backup";
mountpoint = "/mnt/hdd";
};
}
```
Change `/mnt/hdd` to the actual mount point you're using.
4. **Create Backups:** To create a backup, run:
```bash
clan backups create mymachine
```
This command saves snapshots of your data onto the backup device.
5. **Listing Backups:** To see available backups, run:
```bash
clan backups list mymachine
```
## Remote Backups with Borgbackup
### Overview of Borgbackup
Borgbackup splits the backup process into two parts: a backup client that sends data to a backup server, and the server itself, which stores the backups.
### Setting Up the Borgbackup Client
1. **Specify Backup Server:**
Start by indicating where your backup data should be sent. Replace `hostname` with your server's address:
```nix
{
clan.borgbackup.destinations = {
myhostname = {
repo = "borg@backuphost:/var/lib/borgbackup/myhostname";
};
};
}
```
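Multiple destinations can coexist; below is a hypothetical sketch of a second, off-site destination, assuming it accepts the same `repo` format:
```nix
{
  clan.borgbackup.destinations = {
    # hypothetical off-site server; any borg-style repo path is assumed to work here
    offsite = {
      repo = "borg@offsite.example.com:/var/lib/borgbackup/myhostname";
    };
  };
}
```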
2. **Select Folders to Back Up:**
Decide which folders you want to back up. For example, to back up your home and root directories:
```nix
{ clan.core.state.userdata.folders = [ "/home" "/root" ]; }
```
3. **Generate Backup Credentials:**
Run `clan facts generate <yourmachine>` to prepare your machine for backup, creating necessary SSH keys and credentials.
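For example, assuming your machine is called `myhostname` as above:
```bash
clan facts generate myhostname
```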
### Setting Up the Borgbackup Server
1. **Configure Backup Repository:**
On the server where backups will be stored, enable the SSH daemon and set up a repository for each client:
```nix
{
services.borgbackup.repos.myhostname = {
path = "/var/lib/borgbackup/myhostname";
authorizedKeys = [
(builtins.readFile (config.clan.core.settings.directory + "/machines/myhostname/facts/borgbackup.ssh.pub"))
];
};
}
```
Ensure the path to the public key is correct.
2. **Update Your Systems:** Apply your changes by running `clan machines update` for both the server and the client.
### Managing Backups
- **Scheduled Backups:**
Backups are automatically performed nightly. To check the next scheduled backup, use:
```bash
systemctl list-timers | grep -E 'NEXT|borg'
```
- **Listing Backups:** To see available backups, run:
```bash
clan backups list mymachine
```
- **Manual Backups:** You can also initiate a backup manually:
```bash
clan backups create mymachine
```
- **Restoring Backups:** To restore a backup whose `NAME` was shown by the list command:
```bash
clan backups restore [MACHINE] [PROVIDER] [NAME]
```
Example (Restoring a machine called `client` with the backup provider `borgbackup`):
```bash
clan backups restore client borgbackup [NAME]
```
The `backups` command is service-aware and accepts an optional `--service` flag.
To only restore the service called `zerotier` on a machine called `controller` through the backup provider `borgbackup` use the following command:
```bash
clan backups restore client borgbackup [NAME] --service zerotier
```


@@ -0,0 +1 @@
../../../CONTRIBUTING.md


@@ -0,0 +1,167 @@
Here are some methods for debugging and testing the clan-cli.
## Using a Development Branch
To streamline your development process, we suggest not installing `clan-cli`. Instead, clone the `clan-core` repository and add `clan-core/pkgs/clan-cli/bin` to your PATH to use the checked-out version directly.
!!! Note
After cloning, navigate to `clan-core/pkgs/clan-cli` and execute `direnv allow` to activate the devshell. This will set up a symlink to nixpkgs at a specific location; without it, `clan-cli` won't function correctly.
With this setup, you can easily use [breakpoint()](https://docs.python.org/3/library/pdb.html) to inspect the application's internal state as needed.
This approach is feasible because `clan-cli` only requires a Python interpreter and has no other dependencies.
```nix
pkgs.mkShell {
packages = [
pkgs.python3
];
shellHook = ''
export GIT_ROOT="$(git rev-parse --show-toplevel)"
export PATH=$PATH:~/Projects/clan-core/pkgs/clan-cli/bin
'';
}
```
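With this setup in place, a `breakpoint()` call drops you into pdb whenever that code path runs; here is a minimal standalone sketch of the mechanism:
```python
def compute(x: int) -> int:
    breakpoint()  # execution pauses here; inspect `x` at the pdb prompt
    return x * 2

print(compute(21))
```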
## The Debug Flag
You can enhance your debugging process with the `--debug` flag in the `clan` command. When you add this flag to any command, it displays all subprocess commands initiated by `clan` in a readable format, along with the source code position that triggered them. This feature makes it easier to understand and trace what's happening under the hood.
```bash
$ clan machines list --debug 1
Debug log activated
nix \
--extra-experimental-features 'nix-command flakes' \
eval \
--show-trace --json \
--print-build-logs '/home/qubasa/Projects/qubasas-clan#clanInternals.machines.x86_64-linux' \
--apply builtins.attrNames \
--json
Caller: ~/Projects/clan-core/pkgs/clan-cli/clan_cli/machines/list.py:96::list_nixos_machines
warning: Git tree '/home/qubasa/Projects/qubasas-clan' is dirty
demo
gchq-local
wintux
```
## VSCode
If you're using VSCode, it has a handy feature that makes paths to source code files clickable in the integrated terminal. Combined with the previously mentioned techniques, this allows you to open a Clan in VSCode, execute a command like `clan machines list --debug`, and receive a printed path to the code that initiates the subprocess. With the `Ctrl` key (or `Cmd` on macOS) and a mouse click, you can jump directly to the corresponding line in the code file and add a `breakpoint()` function to it, to inspect the internal state.
## Finding Print Messages
To identify where a specific print message comes from, you can enable a helpful feature. Simply set the environment variable `export TRACE_PRINT=1`. When you run commands with `--debug` mode, each print message will include information about its source location.
If you need more details, you can expand the stack trace information that appears with each print by setting the environment variable `export TRACE_DEPTH=3`.
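A typical session could then look like this (using `clan machines list` as an arbitrary command):
```bash
export TRACE_PRINT=1
export TRACE_DEPTH=3  # optional: show more stack context per print
clan machines list --debug
```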
## Analyzing Performance
To understand what's causing slow performance, set the environment variable `export CLAN_CLI_PERF=1`. When you complete a clan command, you'll see a summary of various performance metrics, helping you identify what's taking up time.
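For example:
```bash
export CLAN_CLI_PERF=1
clan machines list
```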
## See all possible packages and tests
To quickly show all possible packages and tests, execute:
```bash
nix flake show
```
Under `checks` you will find all tests that are executed in our CI. Under `packages` you will find all our projects.
```
git+file:///home/lhebendanz/Projects/clan-core
├───apps
│ └───x86_64-linux
│ ├───install-vm: app
│ └───install-vm-nogui: app
├───checks
│ └───x86_64-linux
│ ├───borgbackup omitted (use '--all-systems' to show)
│ ├───check-for-breakpoints omitted (use '--all-systems' to show)
│ ├───clan-dep-age omitted (use '--all-systems' to show)
│ ├───clan-dep-bash omitted (use '--all-systems' to show)
│ ├───clan-dep-e2fsprogs omitted (use '--all-systems' to show)
│ ├───clan-dep-fakeroot omitted (use '--all-systems' to show)
│ ├───clan-dep-git omitted (use '--all-systems' to show)
│ ├───clan-dep-nix omitted (use '--all-systems' to show)
│ ├───clan-dep-openssh omitted (use '--all-systems' to show)
│ ├───"clan-dep-python3.11-mypy" omitted (use '--all-systems' to show)
├───packages
│ └───x86_64-linux
│ ├───clan-cli omitted (use '--all-systems' to show)
│ ├───clan-cli-docs omitted (use '--all-systems' to show)
│ ├───clan-ts-api omitted (use '--all-systems' to show)
│ ├───clan-app omitted (use '--all-systems' to show)
│ ├───default omitted (use '--all-systems' to show)
│ ├───deploy-docs omitted (use '--all-systems' to show)
│ ├───docs omitted (use '--all-systems' to show)
│ ├───editor omitted (use '--all-systems' to show)
└───templates
├───default: template: Initialize a new clan flake
└───new-clan: template: Initialize a new clan flake
```
You can execute each test separately by following the tree path, for example: `nix run .#checks.x86_64-linux.clan-pytest -L`.
## Test Locally in Devshell with Breakpoints
To test the cli locally in a development environment and set breakpoints for debugging, follow these steps:
1. Run the following command to execute your tests and allow for debugging with breakpoints:
```bash
cd ./pkgs/clan-cli
pytest -n0 -s --maxfail=1 ./tests/test_nameofthetest.py
```
You can place `breakpoint()` in your Python code where you want to trigger a breakpoint for debugging.
## Test Locally in a Nix Sandbox
To run tests in a Nix sandbox, you have two options depending on whether your test functions have been marked as impure or not:
### Running Tests Marked as Impure
If your test functions need to execute `nix build` and have been marked as impure because you can't execute `nix build` inside a Nix sandbox, use the following command:
```bash
nix run .#impure-checks -L
```
This command will run the impure test functions.
### Running Pure Tests
For test functions that have not been marked as impure and don't require executing `nix build`, you can use the following command:
```bash
nix build .#checks.x86_64-linux.clan-pytest --rebuild
```
This command will run all pure test functions.
### Inspecting the Nix Sandbox
If you need to inspect the Nix sandbox while running tests, follow these steps:
1. Insert an endless sleep into your test code where you want to pause the execution. For example:
```python
import time
time.sleep(3600) # Sleep for one hour
```
2. Use `cntr` and `psgrep` to attach to the Nix sandbox. This allows you to interactively debug your code while it's paused. For example:
```bash
psgrep <your_python_process_name>
cntr attach <container id, container name or process id>
```
Alternatively, you can use the [nix breakpoint hook](https://nixos.org/manual/nixpkgs/stable/#breakpointhook).


@@ -0,0 +1,316 @@
# Testing your contributions
Each feature added to clan should be tested extensively via automated tests.
This document covers different methods of automated testing, including creating, running and debugging such tests.
In order to test the behavior of clan, different testing frameworks are used depending on the concern:
- NixOS VM tests: for high level integration
- NixOS container tests: for high level integration
- Python tests via pytest: for unit tests and integration tests
- Nix eval tests: for nix functions, libraries, modules, etc.
## NixOS VM Tests
The [NixOS VM Testing Framework](https://nixos.org/manual/nixos/stable/index.html#sec-nixos-tests) is used to create high level integration tests, by running one or more VMs generated from a specified config. Commands can be executed on the booted machine(s) to verify a deployment of a service works as expected. All machines within a test are connected by a virtual network. Internet access is not available.
### When to use VM tests
- testing that a service defined through a clan module works as expected after deployment
- testing clan-cli subcommands which require accessing a remote machine
### When not to use VM tests
NixOS VM Tests are slow and expensive. They should only be used for testing high level integration of components.
VM tests should be avoided wherever it is possible to implement a cheaper unit test instead.
- testing detailed behavior of a certain clan-cli command -> use unit testing via pytest instead
- regression testing -> add a unit test
### Finding examples for VM tests
Existing nixos vm tests in clan-core can be found by using ripgrep:
```shellSession
rg self.clanLib.test.baseTest
```
### Locating definitions of failing VM tests
All nixos vm tests in clan are exported as individual flake outputs under `checks.x86_64-linux.{test-attr-name}`.
If a test fails in CI:
- look for the job name of the test near the top of the CI job page, for example `gitea:clan/clan-core#checks.x86_64-linux.borgbackup/1242`
- in this case `checks.x86_64-linux.borgbackup` is the attribute path
- note the last element of that attribute path, in this case `borgbackup`
- search for the attribute name inside the `/checks` directory via ripgrep
example: locating the vm test named `borgbackup`:
```shellSession
$ rg "borgbackup =" ./checks
./checks/flake-module.nix
41:    borgbackup = self.clanLib.test.baseTest ./borgbackup nixosTestArgs;
```
-> the location of that test is `/checks/flake-module.nix` line `41`.
### Adding VM tests
Create a nixos test module under `/checks/{name}/default.nix` and import it in `/checks/flake-module.nix`.
### Running VM tests
```shellSession
nix build .#checks.x86_64-linux.{test-attr-name}
```
(replace `{test-attr-name}` with the name of the test)
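For example, to run the `borgbackup` test located above:
```shellSession
nix build .#checks.x86_64-linux.borgbackup -L
```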
### Debugging VM tests
The following techniques can be used to debug a VM test:
#### Print Statements
Locate the definition (see above) and add print statements, for example `print(client.succeed("systemctl --failed"))`, then re-run the test via `nix build` (see above)
#### Interactive Shell
- Execute the VM test outside the Nix sandbox via the following command:
`nix run .#checks.x86_64-linux.{test-attr-name}.driver -- --interactive`
- Then run the commands in the machines manually, like for example:
```python3
start_all()
machine1.succeed("echo hello")
```
#### Breakpoints
To get an interactive shell at a specific line in the VM test script, add a `breakpoint()` call before the line to debug, then run the test outside of the sandbox via:
`nix run .#checks.x86_64-linux.{test-attr-name}.driver`
## NixOS Container Tests
These are very similar to NixOS VM tests in that they also run virtualized NixOS machines, but they use containers instead of VMs, which are much cheaper to launch.
As of now, the container test driver is a downstream development in clan-core.
Everything stated in the NixOS VM tests section above applies here, with some limitations.
Limitations:
- Cannot run in interactive mode; however, while the container test runs, it logs an `nsenter` command that can be used to log into each of the containers.
- setuid binaries don't work
### Where to find examples for NixOS container tests
Existing nixos container tests in clan-core can be found by using ripgrep:
```shellSession
rg self.clanLib.test.containerTest
```
## Python tests via pytest
Since the clan cli is written in Python, the `pytest` framework is used to define unit tests and integration tests in Python.
Due to their superior efficiency, these tests are preferred wherever a full NixOS machine is not required.
### When to use python tests
- writing unit tests for python functions and modules, or bugfixes of such
- all integration tests that do not require building or running a nixos machine
- impure integration tests that require internet access (very rare, try to avoid)
### When not to use python tests
- integration tests that require building or running a nixos machine (use NixOS VM or container tests instead)
- testing behavior of a nix function or library (use nix eval tests instead)
### Finding examples of python tests
Existing python tests in clan-core can be found by using ripgrep:
```shellSession
rg "import pytest"
```
### Locating definitions of failing python tests
If any python test fails in the CI pipeline, an error message like this can be found at the end of the log:
```
...
FAILED tests/test_machines_cli.py::test_machine_delete - clan_lib.errors.ClanError: Template 'new-machine' not in 'inputs.clan-core
...
```
In this case the test is defined in the file `/tests/test_machines_cli.py` via the test function `test_machine_delete`.
### Adding python tests
If a specific python module is tested, the test should be located near the tested module in a subdirectory called `./tests`.
If the test is not clearly related to a specific module, put it in the top-level `./tests` directory of the tested python package. For `clan-cli` this would be `/pkgs/clan-cli/clan_cli/tests`.
All filenames must be prefixed with `test_` and test functions prefixed with `test_` for pytest to discover them.
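A minimal sketch of such a test file (hypothetical name and contents):
```python
# /pkgs/clan-cli/clan_cli/tests/test_example.py (hypothetical)
import pytest


def test_addition() -> None:
    assert 1 + 1 == 2


def test_division_by_zero() -> None:
    with pytest.raises(ZeroDivisionError):
        1 / 0
```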
### Running python tests
#### Running all python tests
To run all python tests which are executed in the CI pipeline locally, use this `nix build` command
```shellSession
nix build .#checks.x86_64-linux.clan-pytest-{with,without}-core
```
#### Running a specific python test
To run a specific python test outside the nix sandbox:
1. Enter the development environment of the python package, by either:
- Having direnv enabled and entering the directory of the package (e.g. `/pkgs/clan-cli`)
- Or using the command `select-shell {package}` in the top-level dev shell of clan-core (e.g. `select-shell clan-cli`)
2. Execute the test via pytest:
`pytest ./path/to/test_file.py::test_function_name -s -n0`
The flags `-s -n0` forward all stdout/stderr output to the terminal and allow debugging interactively via `breakpoint()`.
### Debugging python tests
To debug a specific python test, find its definition (see above) and make sure to enter the correct dev environment for that python package.
Modify the test and add `breakpoint()` statements to it.
Execute the test using the flags `-sn0` in order to get an interactive shell at the breakpoint:
```shellSession
pytest ./path/to/test_file.py::test_function_name -sn0
```
## Nix Eval Tests
### When to use nix eval tests
Nix eval tests are good for testing any nix logic, including
- nix functions
- nix libraries
- modules for the nixos module system
### When not to use nix eval tests
- tests that require building nix derivations (except some very cheap ones)
- tests that require running programs written in other languages
- tests that require building or running nixos machines
### Finding examples of nix eval tests
Existing nix eval tests can be found via this ripgrep command:
```shellSession
rg "nix-unit --eval-store"
```
### Locating definitions of failing nix eval tests
Failing nix eval tests look like this:
```shellSession
> ✅ test_attrsOf_attrsOf_submodule
> ✅ test_attrsOf_submodule
> ❌ test_default
> /build/nix-8-2/expected.nix --- Nix
> 1 { foo = { bar = { __prio = 1500; }; } 1 { foo = { bar = { __prio = 1501; }; }
> . ; } . ; }
>
>
> ✅ test_no_default
> ✅ test_submodule
> ✅ test_submoduleWith
> ✅ test_submodule_with_merging
>
> 😢 6/7 successful
> error: Tests failed
```
To locate the definition, find the flake attribute name of the failing test near the top of the CI job page, for example `gitea:clan/clan-core#checks.x86_64-linux.lib-values-eval/1242`.
In this case `lib-values-eval` is the attribute we are looking for.
Find the attribute via ripgrep:
```shellSession
$ rg "lib-values-eval ="
lib/values/flake-module.nix
21: lib-values-eval = pkgs.runCommand "tests" { nativeBuildInputs = [ pkgs.nix-unit ]; } ''
```
In this case the test is defined in the file `lib/values/flake-module.nix` on line 21.
### Adding nix eval tests
In clan core, the following pattern is usually followed:
- tests are put in a `test.nix` file
- a CI Job is exposed via a `flake-module.nix`
- that `flake-module.nix` is imported via the `flake.nix` at the root of the project
For example see `/lib/values/{test.nix,flake-module.nix}`.
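For illustration, a minimal `test.nix` might look like this (hypothetical tests, following the `expr`/`expected` convention explained in the debugging section below):
```nix
# test.nix (hypothetical minimal example)
{ ... }:
{
  test_addition = {
    expr = 1 + 1;
    expected = 2;
  };
  test_attrset_merge = {
    expr = { a = 1; } // { b = 2; };
    expected = { a = 1; b = 2; };
  };
}
```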
### Running nix eval tests
Since all nix eval tests are exposed via the flake outputs, they can be run via `nix build`:
```shellSession
nix build .#checks.x86_64-linux.{test-attr-name}
```
For quicker iteration times, instead of `nix build` use the `nix-unit` command available in the dev environment.
Example:
```shellSession
nix-unit --flake .#legacyPackages.x86_64-linux.{test-attr-name}
```
### Debugging nix eval tests
Follow the instructions above to find the definition of the test, then use one of the following techniques:
#### Print debugging
Add `lib.trace` or `lib.traceVal` statements in order to print some variables during evaluation.
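A standalone sketch that can be evaluated with `nix eval --impure -f <file>` (illustrative values):
```nix
let
  lib = (import <nixpkgs> { }).lib;
  value = { bar = 42; };
in
# prints "trace: inspecting value" and then the value itself during evaluation
lib.trace "inspecting value" (lib.traceVal value)
```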
#### Nix repl
Use `nix repl` to inspect the test.
Each test consists of an `expr` (expression) and an `expected` field. `nix-unit` simply checks whether `expr == expected` and prints the diff if that's not the case.
`nix repl` can be used to inspect `expr` manually, or any other variables that you choose to expose.
Example:
```shellSession
$ nix repl
Nix 2.25.5
Type :? for help.
nix-repl> tests = import ./lib/values/test.nix {}
nix-repl> tests
{
test_attrsOf_attrsOf_submodule = { ... };
test_attrsOf_submodule = { ... };
test_default = { ... };
test_no_default = { ... };
test_submodule = { ... };
test_submoduleWith = { ... };
test_submodule_with_merging = { ... };
}
nix-repl> tests.test_default.expr
{
foo = { ... };
}
```


@@ -0,0 +1,170 @@
This guide provides an example setup for a single-disk ZFS system with native encryption, accessible for decryption remotely.
!!! Warning
This configuration only applies to `systemd-boot` enabled systems and **requires** UEFI booting.
Replace the highlighted lines with your own disk-id.
You can find out your disk-id by executing:
```bash
lsblk --output NAME,ID-LINK,FSTYPE,SIZE,MOUNTPOINT
```
=== "**Single Disk**"
Below is the configuration for `disko.nix`
```nix hl_lines="13 53"
--8<-- "docs/code-examples/disko-single-disk.nix"
```
=== "**Raid 1**"
Below is the configuration for `disko.nix`
```nix hl_lines="13 53 54"
--8<-- "docs/code-examples/disko-raid.nix"
```
Below is the configuration for `initrd.nix`.
Replace `<yourkey>` with your ssh public key.
Replace the `kernelModules` entry with the ethernet driver module used on your target machine.
```nix hl_lines="18 29"
{config, pkgs, ...}:
{
boot.initrd.systemd = {
enable = true;
};
# uncomment this if you want to be asked for the decryption password on login
#users.root.shell = "/bin/systemd-tty-ask-password-agent";
boot.initrd.network = {
enable = true;
ssh = {
enable = true;
port = 7172;
authorizedKeys = [ "<yourkey>" ];
hostKeys = [
"/var/lib/initrd_host_ed25519_key"
"/var/lib/initrd_host_rsa_key"
];
};
};
boot.initrd.availableKernelModules = [
"xhci_pci"
];
# Find out the required network card driver by running `lspci -k` on the target machine
boot.initrd.kernelModules = [ "r8169" ];
}
```
### Step 1: Copying SSH Public Key
Before starting the installation process, ensure that the SSH public key is copied to the NixOS installer.
1. Copy your public SSH key to the installer, if it has not been copied already:
```bash
ssh-copy-id -o PreferredAuthentications=password -o PubkeyAuthentication=no root@nixos-installer.local
```
### Step 1.5: Prepare Secret Key and Partition Disks
1. Access the installer using SSH:
```bash
ssh root@nixos-installer.local
```
2. Create a `secret.key` file in `/tmp` using `nano` or another text editor:
```bash
nano /tmp/secret.key
```
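Alternatively, you can write the passphrase non-interactively (illustrative value; choose your own):
```bash
# -n avoids a trailing newline ending up in the key file
echo -n "my-super-secret-passphrase" > /tmp/secret.key
```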
3. Discard the old disk partition data:
```bash
blkdiscard /dev/disk/by-id/<installdisk>
```
4. Run `clan machines install`, executing only the kexec and disko phases:
```bash
clan machines install gchq-local --target-host root@nixos-installer --phases kexec,disko
```
### Step 2: ZFS Pool Import and System Installation
1. SSH into the installer once again:
```bash
ssh root@nixos-installer.local
```
2. Run the following command on the remote installation environment:
```bash
zfs set keylocation=prompt zroot/root
```
3. Disconnect from the SSH session:
```bash
CTRL+D
```
4. Locally generate ssh host keys. You only need to generate keys for the algorithms listed in `hostKeys` above.
```bash
ssh-keygen -q -N "" -t ed25519 -f ./initrd_host_ed25519_key
ssh-keygen -q -N "" -t rsa -b 4096 -f ./initrd_host_rsa_key
```
5. Securely copy your local initrd ssh host keys to the installer's `/mnt` directory:
```bash
scp ./initrd_host* root@nixos-installer.local:/mnt/var/lib/
```
6. Install NixOS to the mounted partitions:
```bash
clan machines install gchq-local --target-host root@nixos-installer --phases install
```
7. After the installation process, unmount `/mnt/boot`, change the ZFS mountpoints and unmount all the ZFS volumes by exporting the zpool:
```bash
umount /mnt/boot
cd /
zfs set -u mountpoint=/ zroot/root/nixos
zfs set -u mountpoint=/tmp zroot/root/tmp
zfs set -u mountpoint=/home zroot/root/home
zpool export zroot
```
8. Perform a reboot of the machine and remove the USB installer.
### Step 3: Accessing the Initial Ramdisk (initrd) Environment
1. SSH into the initrd environment using the `initrd_host_rsa_key` and the configured port:
```bash
ssh -p 7172 root@192.168.178.141
```
2. Run the `systemd-tty-ask-password-agent` utility to query a password:
```bash
systemd-tty-ask-password-agent
```
After completing these steps, your NixOS should be successfully installed and ready for use.
**Note:** Replace `root@nixos-installer.local` and `192.168.178.141` with the appropriate user and IP address for your setup.


@@ -0,0 +1,155 @@
This guide provides detailed instructions for configuring
[ZeroTier VPN](https://zerotier.com) within Clan. Follow the
outlined steps to set up a machine as a VPN controller (`<CONTROLLER>`) and to
include a new machine into the VPN.
## Concept
By default all machines within one clan are connected via a chosen network technology.
```{.no-copy}
Clan
Node A
<-> (zerotier / mycelium / ...)
Node B
```
If you select multiple network technologies at the same time, e.g. zerotier + yggdrasil,
you must choose one of them as the primary network; machines are always connected via the primary network.
This guide shows you how to configure `zerotier` either through `NixOS Options` directly, or Clan's `Inventory` System.
=== "**Inventory**"
## 1. Choose the Controller
The controller is the initial entry point for new machines into the VPN.
It signs the IDs of new machines.
Once IDs are signed, the controller's continuous operation is not essential.
A good choice for the controller is nevertheless a machine that can always be reached, so that new peers can be added to the network at any time.
For the purpose of this guide we have two machines:
- The `controller` machine, which will be the zerotier controller.
- The `new_machine` machine, which is the machine we want to add to the vpn network.
## 2. Configure the Inventory
```nix
clan.inventory = {
services.zerotier.default = {
roles.controller.machines = [
"controller"
];
roles.peer.machines = [
"new_machine"
];
};
};
```
## 3. Apply the Configuration
Update the `controller` machine:
```bash
clan machines update controller
```
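Then update the peer machine as well, so it receives its side of the configuration (using the machine name from step 2):
```bash
clan machines update new_machine
```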
=== "**NixOS Options**"
## 1. Set-Up the VPN Controller
The VPN controller is initially essential for providing configuration to new
peers. Once addresses are allocated, the controller's continuous operation is not essential.
1. **Designate a Machine**: Label a machine as the VPN controller in the clan,
referred to as `<CONTROLLER>` henceforth in this guide.
2. **Add Configuration**: Input the following configuration to the NixOS
configuration of the controller machine:
```nix
clan.core.networking.zerotier.controller = {
enable = true;
public = true;
};
```
3. **Update the Controller Machine**: Execute the following:
```bash
clan machines update <CONTROLLER>
```
Your machine is now operational as the VPN controller.
## 2. Add Machines to the VPN
To introduce a new machine to the VPN, adhere to the following steps:
1. **Update Configuration**: On the new machine, incorporate the following to its
configuration, substituting `<CONTROLLER>` with the controller machine name:
```nix
{ config, ... }: {
clan.core.networking.zerotier.networkId = builtins.readFile ../../vars/per-machine/<CONTROLLER>/zerotier/zerotier-network-id/value;
}
```
1. **Update the New Machine**: Execute:
```bash
$ clan machines update <NEW_MACHINE>
```
Replace `<NEW_MACHINE>` with the designated new machine name.
!!! Note "For Private Networks"
1. **Retrieve Zerotier Metadata**
=== "From the repo"
**Retrieve the ZeroTier IP**: In the clan repo, execute:
```console
$ clan facts list <NEW_MACHINE> | jq -r '.["zerotier-ip"]'
```
The returned address is the Zerotier IP address of the machine.
=== "On the new machine"
**Retrieve the ZeroTier ID**: On the `new_machine`, execute:
```bash
$ sudo zerotier-cli info
```
Example Output:
```{.console, .no-copy}
200 info d2c71971db 1.12.1 OFFLINE
```
where `d2c71971db` is the ZeroTier ID.
2. **Authorize the New Machine on the Controller**: On the controller machine,
execute:
=== "with ZerotierIP"
```bash
$ sudo zerotier-members allow --member-ip <IP>
```
Substitute `<IP>` with the ZeroTier IP obtained previously.
=== "with ZerotierID"
```bash
$ sudo zerotier-members allow <ID>
```
Substitute `<ID>` with the ZeroTier ID obtained previously.
2. **Verify Connection**: On the `new_machine`, re-execute:
```bash
$ sudo zerotier-cli info
```
The status should now be "ONLINE":
```{.console, .no-copy}
200 info d2c71971db 1.12.1 ONLINE
```
!!! success "Congratulations!"
The new machine is now part of the VPN, and the ZeroTier
configuration on NixOS within the Clan project is complete.
## Further
Currently you can only use **ZeroTier** as the networking technology, because it is the first network stack we aim to support.
In the future we plan to add additional network technologies such as tinc, headscale/tailscale, yggdrasil, and mycelium.
We chose ZeroTier because in our tests it was a straightforward solution to bootstrap:
it allows you to self-host a controller, and the controller doesn't need to be globally reachable,
which made it a good fit for starting the project.