docs: unify documentation

Strictly enforce diataxis
Use resource driven approach
Can extend later to add 'developer' link index page
This commit is contained in:
Johannes Kirschbauer
2025-07-24 16:51:57 +02:00
parent 59105bd1da
commit caaf9dc4f3
30 changed files with 174 additions and 594 deletions

View File

@@ -138,7 +138,7 @@ You can use services exposed by Clans core module library, `clan-core`.
You can also author your own `clanService` modules.
🔗 Learn how to write your own service: [Authoring a clanService](../developer/extensions/clanServices/index.md)
🔗 Learn how to write your own service: [Authoring a service](../guides/services/community.md)
You might expose your service module from your flake — this makes it easy for other people to also use your module in their clan.
@@ -154,6 +154,6 @@ You might expose your service module from your flake — this makes it easy for
## What's Next?
* [Author your own clanService →](../developer/extensions/clanServices/index.md)
* [Author your own clanService →](../guides/services/community.md)
* [Migrate from clanModules →](../guides/migrations/migrate-inventory-services.md)
<!-- TODO: * [Understand the architecture →](../explanation/clan-architecture.md) -->

View File

@@ -0,0 +1 @@
../../../CONTRIBUTING.md

View File

@@ -0,0 +1,195 @@
Here are some methods for debugging and testing `clan-cli`.
## Using a Development Branch
To streamline your development process, I suggest not installing `clan-cli`. Instead, clone the `clan-core` repository and add `clan-core/pkgs/clan-cli/bin` to your PATH to use the checked-out version directly.
!!! Note
After cloning, navigate to `clan-core/pkgs/clan-cli` and execute `direnv allow` to activate the devshell. This will set up a symlink to nixpkgs at a specific location; without it, `clan-cli` won't function correctly.
With this setup, you can easily use [breakpoint()](https://docs.python.org/3/library/pdb.html) to inspect the application's internal state as needed.
This approach is feasible because `clan-cli` only requires a Python interpreter and has no other dependencies.
```nix
pkgs.mkShell {
packages = [
pkgs.python3
];
shellHook = ''
export GIT_ROOT="$(git rev-parse --show-toplevel)"
export PATH=$PATH:~/Projects/clan-core/pkgs/clan-cli/bin
'';
}
```
## The Debug Flag
You can enhance your debugging process with the `--debug` flag in the `clan` command. When you add this flag to any command, it displays all subprocess commands initiated by `clan` in a readable format, along with the source code position that triggered them. This feature makes it easier to understand and trace what's happening under the hood.
```bash
$ clan machines list --debug 1
Debug log activated
nix \
--extra-experimental-features 'nix-command flakes' \
eval \
--show-trace --json \
--print-build-logs '/home/qubasa/Projects/qubasas-clan#clanInternals.machines.x86_64-linux' \
--apply builtins.attrNames \
--json
Caller: ~/Projects/clan-core/pkgs/clan-cli/clan_cli/machines/list.py:96::list_nixos_machines
warning: Git tree '/home/qubasa/Projects/qubasas-clan' is dirty
demo
gchq-local
wintux
```
## VSCode
If you're using VSCode, it has a handy feature that makes paths to source code files clickable in the integrated terminal. Combined with the techniques mentioned above, this allows you to open a clan in VSCode, execute a command like `clan machines list --debug`, and receive a printed path to the code that initiates the subprocess. With the `Ctrl` key (or `Cmd` on macOS) and a mouse click, you can jump directly to the corresponding line in the code file and add a `breakpoint()` call there to inspect the internal state.
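For example, a typical round trip (assuming `clan` is on your PATH as set up above) looks like this:
```bash
# run any clan command with --debug inside the VSCode integrated terminal
clan machines list --debug
# Ctrl+Click (Cmd+Click on macOS) the printed "Caller: .../list.py:<line>" path,
# add a breakpoint() call at that line, then re-run the command to drop into pdb
clan machines list --debug
```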
## Finding Print Messages
To trace the origin of print messages in `clan-cli`, you can enable special debugging features using environment variables:
- Set `TRACE_PRINT=1` to include the source location with each print message:
```bash
export TRACE_PRINT=1
```
When running commands with `--debug`, every print will show where it was triggered in the code.
- To see a deeper stack trace for each print, set `TRACE_DEPTH` to the desired number of stack frames (e.g., 3):
```bash
export TRACE_DEPTH=3
```
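Combining both variables with the `--debug` flag results in a session like this:
```bash
export TRACE_PRINT=1
export TRACE_DEPTH=3
# every print now shows its source location plus three stack frames
clan machines list --debug
```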
### Additional Debug Logging
You can enable more detailed logging for specific components by setting these environment variables:
- `CLAN_DEBUG_NIX_SELECTORS=1` — verbose logs for flake.select operations
- `CLAN_DEBUG_NIX_PREFETCH=1` — verbose logs for flake.prefetch operations
- `CLAN_DEBUG_COMMANDS=1` — print the diffed environment of executed commands
Example:
```bash
export CLAN_DEBUG_NIX_SELECTORS=1
export CLAN_DEBUG_NIX_PREFETCH=1
export CLAN_DEBUG_COMMANDS=1
```
These options help you pinpoint the source and context of print messages and debug logs during development.
## Analyzing Performance
To understand what's causing slow performance, set the environment variable `CLAN_CLI_PERF=1`. When a clan command completes, you'll see a summary of various performance metrics, helping you identify what's taking up time.
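For example:
```bash
export CLAN_CLI_PERF=1
# after the command finishes, a summary of performance metrics is printed
clan machines list
```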
## See all possible packages and tests
To quickly show all possible packages and tests, execute:
```bash
nix flake show
```
Under `checks` you will find all tests that are executed in our CI. Under `packages` you will find all our projects.
```
git+file:///home/lhebendanz/Projects/clan-core
├───apps
│ └───x86_64-linux
│ ├───install-vm: app
│ └───install-vm-nogui: app
├───checks
│ └───x86_64-linux
│ ├───borgbackup omitted (use '--all-systems' to show)
│ ├───check-for-breakpoints omitted (use '--all-systems' to show)
│ ├───clan-dep-age omitted (use '--all-systems' to show)
│ ├───clan-dep-bash omitted (use '--all-systems' to show)
│ ├───clan-dep-e2fsprogs omitted (use '--all-systems' to show)
│ ├───clan-dep-fakeroot omitted (use '--all-systems' to show)
│ ├───clan-dep-git omitted (use '--all-systems' to show)
│ ├───clan-dep-nix omitted (use '--all-systems' to show)
│ ├───clan-dep-openssh omitted (use '--all-systems' to show)
│ ├───"clan-dep-python3.11-mypy" omitted (use '--all-systems' to show)
├───packages
│ └───x86_64-linux
│ ├───clan-cli omitted (use '--all-systems' to show)
│ ├───clan-cli-docs omitted (use '--all-systems' to show)
│ ├───clan-ts-api omitted (use '--all-systems' to show)
│ ├───clan-app omitted (use '--all-systems' to show)
│ ├───default omitted (use '--all-systems' to show)
│ ├───deploy-docs omitted (use '--all-systems' to show)
│ ├───docs omitted (use '--all-systems' to show)
│ ├───editor omitted (use '--all-systems' to show)
└───templates
├───default: template: Initialize a new clan flake
└───default: template: Initialize a new clan flake
```
You can execute every test separately by following the tree path, for example `nix run .#checks.x86_64-linux.clan-pytest -L`.
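For instance, to build just one of the checks listed in the tree above:
```bash
nix build .#checks.x86_64-linux.clan-dep-git -L
```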
## Test Locally in Devshell with Breakpoints
To test the CLI locally in a development environment and set breakpoints for debugging, follow these steps:
1. Run the following command to execute your tests and allow for debugging with breakpoints:
```bash
cd ./pkgs/clan-cli
pytest -n0 -s --maxfail=1 ./tests/test_nameofthetest.py
```
You can place `breakpoint()` in your Python code where you want to trigger a breakpoint for debugging.
## Test Locally in a Nix Sandbox
To run tests in a Nix sandbox, you have two options depending on whether your test functions have been marked as impure or not:
### Running Tests Marked as Impure
If your test functions need to execute `nix build` and have been marked as impure because you can't execute `nix build` inside a Nix sandbox, use the following command:
```bash
nix run .#impure-checks -L
```
This command will run the impure test functions.
### Running Pure Tests
For test functions that have not been marked as impure and don't require executing `nix build`, you can use the following command:
```bash
nix build .#checks.x86_64-linux.clan-pytest --rebuild
```
This command will run all pure test functions.
### Inspecting the Nix Sandbox
If you need to inspect the Nix sandbox while running tests, follow these steps:
1. Insert an endless sleep into your test code where you want to pause the execution. For example:
```python
import time
time.sleep(3600) # Sleep for one hour
```
2. Use `cntr` and `psgrep` to attach to the Nix sandbox. This allows you to interactively debug your code while it's paused. For example:
```bash
psgrep <your_python_process_name>
cntr attach <container id, container name or process id>
```
Alternatively, you can use the [nix breakpoint hook](https://nixos.org/manual/nixpkgs/stable/#breakpointhook).

View File

@@ -0,0 +1,316 @@
# Testing your contributions
Each feature added to clan should be tested extensively via automated tests.
This document covers different methods of automated testing, including creating, running and debugging such tests.
In order to test the behavior of clan, different testing frameworks are used depending on the concern:
- NixOS VM tests: for high level integration
- NixOS container tests: for high level integration
- Python tests via pytest: for unit tests and integration tests
- Nix eval tests: for nix functions, libraries, modules, etc.
## NixOS VM Tests
The [NixOS VM Testing Framework](https://nixos.org/manual/nixos/stable/index.html#sec-nixos-tests) is used to create high level integration tests, by running one or more VMs generated from a specified config. Commands can be executed on the booted machine(s) to verify a deployment of a service works as expected. All machines within a test are connected by a virtual network. Internet access is not available.
### When to use VM tests
- testing that a service defined through a clan module works as expected after deployment
- testing clan-cli subcommands which require accessing a remote machine
### When not to use VM tests
NixOS VM Tests are slow and expensive. They should only be used for testing high level integration of components.
VM tests should be avoided wherever it is possible to implement a cheaper unit test instead.
- testing detailed behavior of a certain clan-cli command -> use unit testing via pytest instead
- regression testing -> add a unit test
### Finding examples for VM tests
Existing NixOS VM tests in clan-core can be found using `ripgrep`:
```shellSession
rg self.clanLib.test.baseTest
```
### Locating definitions of failing VM tests
All NixOS VM tests in clan are exported as individual flake outputs under `checks.x86_64-linux.{test-attr-name}`.
If a test fails in CI:
- look for the job name of the test near the top of the CI Job page, for example `gitea:clan/clan-core#checks.x86_64-linux.borgbackup/1242`
- in this case `checks.x86_64-linux.borgbackup` is the attribute path
- note the last element of that attribute path, in this case `borgbackup`
- search for the attribute name inside the `/checks` directory via ripgrep
Example: locating the VM test named `borgbackup`:
```shellSession
$ rg "borgbackup =" ./checks
./checks/flake-module.nix
41:        borgbackup = self.clanLib.test.baseTest ./borgbackup nixosTestArgs;
```
-> the location of that test is `/checks/flake-module.nix` line `41`.
### Adding VM tests
Create a nixos test module under `/checks/{name}/default.nix` and import it in `/checks/flake-module.nix`.
### Running VM tests
```shellSession
nix build .#checks.x86_64-linux.{test-attr-name}
```
(replace `{test-attr-name}` with the name of the test)
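For example, for the `borgbackup` test located above:
```shellSession
nix build .#checks.x86_64-linux.borgbackup
```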
### Debugging VM tests
The following techniques can be used to debug a VM test:
#### Print Statements
Locate the definition (see above) and add print statements, for example `print(client.succeed("systemctl --failed"))`, then re-run the test via `nix build` (see above).
#### Interactive Shell
- Execute the VM test outside the Nix sandbox via the following command:
`nix run .#checks.x86_64-linux.{test-attr-name}.driver -- --interactive`
- Then run commands on the machines manually, for example:
```python3
start_all()
machine1.succeed("echo hello")
```
#### Breakpoints
To get an interactive shell at a specific line in the VM test script, add a `breakpoint()` call before the line to debug, then run the test outside of the sandbox via:
`nix run .#checks.x86_64-linux.{test-attr-name}.driver`
## NixOS Container Tests
These are very similar to NixOS VM tests in that they also run virtualized NixOS machines, but they use containers instead of VMs, which are much cheaper to launch.
As of now, the container test driver is a downstream development in clan-core.
Basically everything stated under the NixOS VM tests section applies here, with some limitations.
Limitations:
- Cannot run in interactive mode; however, while the container test runs, it logs an `nsenter` command that can be used to log into each of the containers (see the example below).
- setuid binaries don't work
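The exact `nsenter` invocation is printed in the test log while the containers are running; a hypothetical invocation could look like this:
```shellSession
# illustrative sketch: copy the actual command printed by the container test driver
nsenter --target <container-init-pid> --mount --uts --ipc --net --pid -- /bin/sh
```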
### Where to find examples for NixOS container tests
Existing NixOS container tests in clan-core can be found by using `ripgrep`:
```shellSession
rg self.clanLib.test.containerTest
```
## Python tests via pytest
Since the Clan CLI is written in Python, the `pytest` framework is used to define unit tests and integration tests in Python.
Due to their superior efficiency, these tests should be preferred over NixOS VM or container tests wherever possible.
### When to use python tests
- writing unit tests for python functions and modules, or bugfixes of such
- all integration tests that do not require building or running a NixOS machine
- impure integration tests that require internet access (very rare, try to avoid)
### When not to use python tests
- integration tests that require building or running a NixOS machine (use NixOS VM or container tests instead)
- testing behavior of a nix function or library (use nix eval tests instead)
### Finding examples of python tests
Existing python tests in clan-core can be found by using `ripgrep`:
```shellSession
rg "import pytest"
```
### Locating definitions of failing python tests
If any python test fails in the CI pipeline, an error message like this can be found at the end of the log:
```
...
FAILED tests/test_machines_cli.py::test_machine_delete - clan_lib.errors.ClanError: Template 'new-machine' not in 'inputs.clan-core
...
```
In this case the test is defined in the file `/tests/test_machines_cli.py` via the test function `test_machine_delete`.
### Adding python tests
If a specific python module is tested, the test should be located near the tested module in a subdirectory called `./tests`.
If the test is not clearly related to a specific module, put it in the top-level `./tests` directory of the tested python package. For `clan-cli` this would be `/pkgs/clan-cli/clan_cli/tests`.
All filenames must be prefixed with `test_` and test functions prefixed with `test_` for pytest to discover them.
### Running python tests
#### Running all python tests
To locally run all python tests that are executed in the CI pipeline, use this `nix build` command:
```shellSession
nix build .#checks.x86_64-linux.clan-pytest-{with,without}-core
```
#### Running a specific python test
To run a specific python test outside the nix sandbox
1. Enter the development environment of the python package, by either:
- Having direnv enabled and entering the directory of the package (e.g. `/pkgs/clan-cli`)
- Or using the command `select-shell {package}` in the top-level dev shell of clan-core (e.g. `select-shell clan-cli`)
2. Execute the test via pytest:
`pytest ./path/to/test_file.py::test_function_name -s -n0`
The flags `-s -n0` forward all stdout/stderr output to the terminal and make it possible to debug interactively via `breakpoint()`.
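For example, to run the `test_machine_delete` test mentioned above from inside the `clan-cli` dev shell (the path shown is illustrative):
```shellSession
pytest ./clan_cli/tests/test_machines_cli.py::test_machine_delete -s -n0
```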
### Debugging python tests
To debug a specific python test, find its definition (see above) and make sure to enter the correct dev environment for that python package.
Modify the test and add `breakpoint()` statements to it.
Execute the test using the flags `-sn0` in order to get an interactive shell at the breakpoint:
```shellSession
pytest ./path/to/test_file.py::test_function_name -sn0
```
## Nix Eval Tests
### When to use nix eval tests
Nix eval tests are good for testing any nix logic, including
- nix functions
- nix libraries
- modules for the NixOS module system
### When not to use nix eval tests
- tests that require building nix derivations (except some very cheap ones)
- tests that require running programs written in other languages
- tests that require building or running NixOS machines
### Finding examples of nix eval tests
Existing nix eval tests can be found via this `ripgrep` command:
```shellSession
rg "nix-unit --eval-store"
```
### Locating definitions of failing nix eval tests
Failing nix eval tests look like this:
```shellSession
> ✅ test_attrsOf_attrsOf_submodule
> ✅ test_attrsOf_submodule
> ❌ test_default
> /build/nix-8-2/expected.nix --- Nix
> 1 { foo = { bar = { __prio = 1500; }; } 1 { foo = { bar = { __prio = 1501; }; }
> . ; } . ; }
>
>
> ✅ test_no_default
> ✅ test_submodule
> ✅ test_submoduleWith
> ✅ test_submodule_with_merging
>
> 😢 6/7 successful
> error: Tests failed
```
To locate the definition, find the flake attribute name of the failing test near the top of the CI Job page, like for example `gitea:clan/clan-core#checks.x86_64-linux.eval-lib-values/1242`.
In this case `eval-lib-values` is the attribute we are looking for.
Find the attribute via ripgrep:
```shellSession
$ rg "eval-lib-values ="
lib/values/flake-module.nix
21: eval-lib-values = pkgs.runCommand "tests" { nativeBuildInputs = [ pkgs.nix-unit ]; } ''
grmpf@grmpf-nix ~/p/c/clan-core (test-docs)>
```
In this case the test is defined in the file `lib/values/flake-module.nix` at line 21.
### Adding nix eval tests
In clan core, the following pattern is usually followed:
- tests are put in a `test.nix` file
- a CI Job is exposed via a `flake-module.nix`
- that `flake-module.nix` is imported via the `flake.nix` at the root of the project
For example see `/lib/values/{test.nix,flake-module.nix}`.
### Running nix eval tests
Since all nix eval tests are exposed via the flake outputs, they can be run via `nix build`:
```shellSession
nix build .#checks.x86_64-linux.{test-attr-name}
```
For quicker iteration times, instead of `nix build` use the `nix-unit` command available in the dev environment.
Example:
```shellSession
nix-unit --flake .#legacyPackages.x86_64-linux.{test-attr-name}
```
### Debugging nix eval tests
Follow the instructions above to find the definition of the test, then use one of the following techniques:
#### Print debugging
Add `lib.trace` or `lib.traceVal` statements in order to print variables during evaluation.
#### Nix repl
Use `nix repl` to evaluate and inspect the test.
Each test consists of an `expr` (expression) and an `expected` field. `nix-unit` simply checks if `expr == expected` and prints the diff if that's not the case.
`nix repl` can be used to inspect an `expr` manually, or any other variables that you choose to expose.
Example:
```shellSession
$ nix repl
Nix 2.25.5
Type :? for help.
nix-repl> tests = import ./lib/values/test.nix {}
nix-repl> tests
{
test_attrsOf_attrsOf_submodule = { ... };
test_attrsOf_submodule = { ... };
test_default = { ... };
test_no_default = { ... };
test_submodule = { ... };
test_submoduleWith = { ... };
test_submodule_with_merging = { ... };
}
nix-repl> tests.test_default.expr
{
foo = { ... };
}
```

View File

@@ -0,0 +1,94 @@
!!! Danger ":fontawesome-solid-road-barrier: Under Construction :fontawesome-solid-road-barrier:"
Currently under construction; use with caution.
:fontawesome-solid-road-barrier: :fontawesome-solid-road-barrier: :fontawesome-solid-road-barrier:
## Structure
A disk template consists of exactly two files:
- `default.nix`
- `README.md`
```sh
└── single-disk
├── default.nix
└── README.md
```
## `default.nix`
Placeholders are filled with machine-specific values when a template is used for a machine.
The user can choose any valid options from the hardware report.
The file itself is then copied to `machines/{machineName}/disko.nix` and will be automatically loaded by the machine.
`single-disk/default.nix`
```nix
{
disko.devices = {
disk = {
main = {
device = "{{mainDisk}}";
...
};
};
};
}
```
## Placeholders
Each template must declare the options of its placeholders, which depend on the hardware report.
`api/disk.py`
```py
templates: dict[str, dict[str, Callable[[dict[str, Any]], Placeholder]]] = {
"single-disk": {
# Placeholders
"mainDisk": lambda hw_report: Placeholder(
label="Main disk", options=hw_main_disk_options(hw_report), required=True
),
}
}
```
Introducing new local or global placeholders requires contributing to clan-core `api/disks.py`.
### Predefined placeholders
Some placeholders provide predefined functionality
- `uuid`: In most cases we recommend adding a unique id to all disks. This prevents the system from accidentally booting from, e.g., hot-plugged devices.
```nix
disko.devices = {
disk = {
main = {
name = "main-{{uuid}}";
...
};
};
};
```
## Readme
The README frontmatter must use the same format as the clan module frontmatter.
```markdown
---
description = "Simple disk schema for single disk setups"
---
# Single disk
Use this schema for simple setups where ....
```
The format and fields of this file are not finalized yet. We might change them once the feature is fully implemented.

View File

@@ -27,7 +27,7 @@ inputs = {
## Import the Clan flake-parts Module
After updating your flake inputs, the next step is to import the Clan flake-parts module. This will make the [Clan options](../reference/nix-api/clan.md) available within `mkFlake`.
After updating your flake inputs, the next step is to import the Clan flake-parts module. This will make the [Clan options](../options.md) available within `mkFlake`.
```nix
{

View File

@@ -6,7 +6,7 @@ Machines can be added using the following methods
- Editing machines/`machine_name`/configuration.nix (automatically included if it exists)
- `clan machines create` (imperative)
See the complete [list](../../guides/more-machines.md#automatic-registration) of auto-loaded files.
See the complete [list](../../concepts/autoincludes.md) of auto-loaded files.
## Create a machine

View File

@@ -41,7 +41,7 @@ To learn more: [Guide about clanService](../clanServices.md)
```
1. See [reference/clanServices](../../reference/clanServices/index.md) for all available services and how to configure them.
Or read [authoring/clanServices](../../developer/extensions/clanServices/index.md) if you want to bring your own
Or read [authoring/clanServices](../../guides/services/community.md) if you want to bring your own
2. Replace `__YOUR_CONTROLLER_` with the *name* of your machine.

View File

@@ -1,123 +0,0 @@
`Inventory` is an abstract service layer for consistently configuring distributed services across machine boundaries.
## Concept
Its concept is slightly different from what NixOS veterans might be used to. The inventory is a service definition on a higher level, not a machine configuration. This allows you to define a consistent and coherent service.
The inventory logic will automatically derive the modules and configurations to enable on each machine in your `clan` based on its `role`. This makes it easy to set up distributed `services` such as backups, networking, traditional cloud services, or peer-to-peer based applications.
The following tutorial will walk through setting up a Backup service where the terms `Service` and `Role` will become more clear.
See also: [Inventory API Documentation](../reference/nix-api/inventory.md)
!!! example "Experimental status"
The inventory implementation is not considered stable yet.
We are actively soliciting feedback from users.
Stabilizing the API is a priority.
## Prerequisites
- [x] [Add multiple machines](./more-machines.md) to your Clan.
## Services
The inventory defines `services`. Membership of `machines` is defined via `roles` exclusively.
See each [modules documentation](../reference/clanModules/index.md) for its available roles.
### Adding services to machines
A service can be added to one or multiple machines via `Roles`. Clan's `Role` interface provides sane defaults for a module; this allows the module author to reduce the configuration overhead to a minimum.
Each service can still be customized and configured according to the module's options.
- Per instance configuration via `services.<serviceName>.<instanceName>.config`
- Per role configuration via `services.<serviceName>.<instanceName>.roles.<roleName>.config`
- Per machine configuration via `services.<serviceName>.<instanceName>.machines.<machineName>.config`
### Setting up the Backup Service
!!! Example "Borgbackup Example"
To configure a service, it needs to be added to a machine.
It is required to assign the service (`borgbackup`) an arbitrary instance name (`instance_1`).
See also: [Multiple Service Instances](#multiple-service-instances)
```{.nix hl_lines="6-7"}
clan-core.lib.clan {
inventory = {
services = {
borgbackup.instance_1 = {
# Machines can be added here.
roles.client.machines = [ "jon" ];
roles.server.machines = [ "backup_server" ];
};
};
};
}
```
### Scaling the Backup
The inventory allows machines to set tags.
It is possible to add services to multiple machines via tags, as shown below.
!!! Example "Tags Example"
```{.nix hl_lines="5 8 14"}
clan-core.lib.clan {
inventory = {
machines = {
"jon" = {
tags = [ "backup" ];
};
"sara" = {
tags = [ "backup" ];
};
# ...
};
services = {
borgbackup.instance_1 = {
roles.client.tags = [ "backup" ];
roles.server.machines = [ "backup_server" ];
};
};
};
}
```
### Multiple Service Instances
!!! danger "Important"
Not all modules implement support for multiple instances yet.
Multiple-instance usage can add complexity; refer to each module's documentation for intended usage.
!!! Example
In this example `backup_server` has the roles `client` and `server` in different instances.
```{.nix hl_lines="11 14"}
clan-core.lib.clan {
inventory = {
machines = {
"jon" = {};
"backup_server" = {};
"backup_backup_server" = {}
};
services = {
borgbackup.instance_1 = {
roles.client.machines = [ "jon" ];
roles.server.machines = [ "backup_server" ];
};
borgbackup.instance_2 = {
roles.client.machines = [ "backup_server" ];
roles.server.machines = [ "backup_backup_server" ];
};
};
};
}
```

View File

@@ -7,7 +7,7 @@ This guide explains how to manage macOS machines using Clan.
Currently, Clan supports the following features for macOS:
- `clan machines update` for existing [nix-darwin](https://github.com/nix-darwin/nix-darwin) installations
- Support for [vars](../guides/vars-backend.md)
- Support for [vars](../concepts/generators.md)
## Add Your Machine to Your Clan Flake

View File

@@ -1,7 +1,7 @@
# Migrating from using `clanModules` to `clanServices`
**Audience**: This is a guide for **people using `clanModules`**.
If you are a **module author** and need to migrate your modules please consult our **new** [clanServices authoring guide](../../developer/extensions/clanServices/index.md)
If you are a **module author** and need to migrate your modules please consult our **new** [clanServices authoring guide](../../guides/services/community.md)
## What's Changing?
@@ -329,6 +329,6 @@ instances = {
## Further reference
* [Authoring a 'clan.service' module](../../developer/extensions/clanServices/index.md)
* [Inventory Concept](../../concepts/inventory.md)
* [Authoring a 'clan.service' module](../../guides/services/community.md)
* [ClanServices](../clanServices.md)
* [Inventory Reference](../../reference/nix-api/inventory.md)

View File

@@ -3,7 +3,7 @@
For a high level overview about `vars` see our [blog post](https://clan.lol/blog/vars/).
This guide will help you migrate your modules that still use our [`facts`](../../guides/secrets.md) backend
to the [`vars`](../../guides/vars-backend.md) backend.
to the [`vars`](../../concepts/generators.md) backend.
The `vars` [module](../../reference/clan.core/vars.md) and the clan [command](../../reference/cli/vars.md) work in tandem; they should ideally be kept in sync.

View File

@@ -1,50 +0,0 @@
Clan has two general methods of adding machines:
- **Automatic**: Detects every folder in the `machines` folder.
- **Declarative**: Explicit declarations in Nix.
## Automatic registration
Every folder `machines/{machineName}` will be registered automatically as a Clan machine.
!!! info "Automatically loaded files"
The following files are loaded automatically for each Clan machine:
- [x] `machines/{machineName}/configuration.nix`
- [x] `machines/{machineName}/hardware-configuration.nix`
- [x] `machines/{machineName}/facter.json`: automatically configured; for further information see [nixos-facter](https://clan.lol/blog/nixos-facter/)
- [x] `machines/{machineName}/disko.nix`: automatically loaded; for further information see the [disko docs](https://github.com/nix-community/disko/blob/master/docs/quickstart.md).
## Manual declaration
Machines can be added via [`clan.inventory.machines`](../guides/inventory.md) or in `clan.machines`, which allows for defining NixOS options.
=== "**Individual Machine Configuration**"
```{.nix}
clan-core.lib.clan {
machines = {
"jon" = {
# Any valid nixos config
};
};
}
```
=== "**Inventory Configuration**"
```{.nix}
clan-core.lib.clan {
inventory = {
machines = {
"jon" = {
# Inventory can set tags and other metadata
tags = [ "zone1" ];
deploy.targetHost = "root@jon";
};
};
};
}
```

View File

@@ -1,5 +1,5 @@
This article provides an overview of the underlying secrets system which is used by [Vars](../guides/vars-backend.md).
Under most circumstances you should use [Vars](../guides/vars-backend.md) directly instead.
This article provides an overview of the underlying secrets system which is used by [Vars](../concepts/generators.md).
Under most circumstances you should use [Vars](../concepts/generators.md) directly instead.
Consider using `clan secrets` only for managing admin users and groups, as well as a debugging tool.

View File

@@ -0,0 +1,271 @@
# Authoring a 'clan.service' module
!!! Tip
This is the successor format to the older [clanModules](../../reference/clanModules/index.md)
While some features might still be missing, we recommend adopting this format early and giving feedback.
## Service Module Specification
This section explains how to author a clan service module.
We discussed the initial architecture in [01-clan-service-modules](../../decisions/01-ClanModules.md) and decided to rework the format.
For the full specification and current state see: **[Service Author Reference](../../reference/clanServices/clan-service-author-interface.md)**
### A Minimal module
First of all we need to register our module into the `clan.modules` attribute. Make sure to choose a unique name so the module doesn't collide with any of the core modules.
While not required, we recommend prefixing your module attribute name, i.e. `@hsjobeki/customNetworking`.
If you export the module from your flake, other people will be able to import it and use it within their clan.
```nix title="flake.nix"
# ...
outputs = inputs: inputs.flake-parts.lib.mkFlake { inherit inputs; } ({
imports = [ inputs.clan-core.flakeModules.default ];
# ...
# Sometimes this attribute set is defined in clan.nix
clan = {
# If needed: Exporting the module for other people
modules."@hsjobeki/customNetworking" = import ./service-modules/networking.nix;
# We could also inline the complete module spec here
# For example
# {...}: { _class = "clan.service"; ... };
};
})
```
The imported module file must fulfill at least the following requirements:
- Be an actual module, i.e. either an attribute set or a function that returns an attribute set.
- Required: `_class = "clan.service"`
- Required: `manifest.name = "<name of the provided service>"`
```nix title="/service-modules/networking.nix"
{
_class = "clan.service";
manifest.name = "zerotier-networking";
# ...
}
```
For more attributes see: **[Service Author Reference](../../reference/clanServices/clan-service-author-interface.md)**
### Adding functionality to the module
While the very minimal module is valid in itself, it has no way of adding machines to it, because it doesn't specify any roles.
The next logical step is to think about the interactions between the machines and define *roles* for them.
Here is a short guide with some conventions:
- [ ] If they all have the same relation to each other, `peer` is commonly used. `peers` can often talk to each other directly.
- [ ] Often machines don't necessarily have a direct relation to each other and there is one elevated machine in the middle, classically known as `client-server`. `clients` are less likely to talk directly to each other than `peers`.
- [ ] If your machines don't have any relation or interaction with each other, you should reconsider whether the desired functionality is really a multi-host service.
```nix title="/service-modules/networking.nix"
{
_class = "clan.service";
manifest.name = "zerotier-networking";
# Define what roles exist
roles.peer = {};
roles.controller = {};
# ...
}
```
Next we need to define the settings and the behavior of these distinct roles.
```nix title="/service-modules/networking.nix"
{
_class = "clan.service";
manifest.name = "zerotier-networking";
# Define what roles exist
roles.peer = {
interface = {
# These options can be set via 'roles.peer.settings'
options.ipRanges = mkOption { type = listOf str; };
};
# Maps over all instances and produces one result per instance.
perInstance = { instanceName, settings, machine, roles, ... }: {
# Analog to 'perSystem' of flake-parts.
# For every instance of this service we will add a nixosModule to each peer machine
nixosModule = { config, ... }: {
# Interaction examples what you could do here:
# - Get some settings of this machine
# settings.ipRanges
#
# - Get all controller names:
# allControllerNames = lib.attrNames roles.controller.machines
#
# - Get all roles of the machine:
# machine.roles
#
# - Get the settings that were applied to a specific controller machine:
# roles.controller.machines.jon.settings
#
# Add one systemd service for every instance
systemd.services.zerotier-client-${instanceName} = {
# ... depend on the '.config' and 'perInstance arguments'
};
};
}
};
roles.controller = {
interface = {
# These options can be set via 'roles.controller.settings'
options.dynamicIp.enable = mkOption { type = bool; };
};
perInstance = { ... }: {};
};
# Maps over all machines and produces one result per machine.
perMachine = { instances, machine, ... }: {
# Analog to 'perSystem' of flake-parts.
# For every machine of this service we will add exactly one nixosModule to a machine
nixosModule = { config, ... }: {
# Interaction examples what you could do here:
# - Get the name of this machine
# machine.name
#
# - Get all roles of this machine across all instances:
# machine.roles
#
# - Get the settings of a specific instance of a specific machine
# instances.foo.roles.peer.machines.jon.settings
#
# Globally enable something
networking.enable = true;
};
};
# ...
}
```
## Using values from a NixOS machine inside the module
!!! Example "Experimental Status"
This feature is experimental and should be used with care.
Sometimes a settings value depends on something within a machine's `config`.
Since the `interface` is defined in a completely machine-agnostic way, values from a machine cannot be set in the traditional way.
The following example shows how to create a local instance of machine-specific settings.
```nix title="someservice.nix"
{
# Maps over all instances and produces one result per instance.
perInstance = { instanceName, extendSettings, machine, roles, ... }: {
nixosModule = { config, lib, ... }:
let
# Create new settings with
# 'ipRanges' defaulting to 'config.network.ip.range' from this machine
# This only works if there is no 'default' already.
localSettings = extendSettings {
ipRanges = lib.mkDefault config.network.ip.range;
};
in
{
# ...
};
};
}
```
!!! Danger
`localSettings` is a local value; other machines cannot access it.
Calling `extendSettings` does not change the original `settings`. This means that if a different machine accesses e.g. `roles.client.settings`, it will *NOT* contain your changes.
Exposing the changed settings to other machines would come with a huge performance penalty, which is why we don't offer it.
## Passing `self` or `pkgs` to the module
Passing any dependencies must be done manually.
We found the following two best practices:
1. Using `lib.importApply`
2. Using a wrapper module
Both have pros and cons. Overall, using `importApply` is the easier approach, but it can be more limiting in some cases.
### Using `importApply`
Using [importApply](https://github.com/NixOS/nixpkgs/pull/230588) is essentially the same as `import file` followed by a function application, but it preserves the error location.
Imagine your module looks like this:
```nix title="messaging.nix"
{ self }:
{ ... }: # This line is optional
{
_class = "clan.service";
manifest.name = "messaging"
# ...
}
```
To import the module use `importApply`
```nix title="flake.nix"
# ...
outputs = inputs: flake-parts.lib.mkFlake { inherit inputs; } ({self, lib, ...}: {
imports = [ inputs.clan-core.flakeModules.default ];
# ...
# Sometimes this attribute set is defined in clan.nix
clan = {
# Register the module
modules."@hsjobeki/messaging" = lib.importApply ./service-modules/messaging.nix { inherit self; };
};
})
```
### Using a wrapper module
```nix title="messaging.nix"
{ config, ... }:
{
_class = "clan.service";
manifest.name = "messaging"
# ...
# config.myClan
}
```
Then wrap the module and forward the variable `self` from the outer context into the module:
```nix title="flake.nix"
# ...
outputs = inputs: flake-parts.lib.mkFlake { inherit inputs; } ({self, lib, ...}: {
imports = [ inputs.clan-core.flakeModules.default ];
# ...
# Sometimes this attribute set is defined in clan.nix
clan = {
# Register the module
modules."@hsjobeki/messaging" = {
# Create an option 'myClan' and assign it to 'self'
options.myClan = lib.mkOption {
default = self;
};
imports = [ ./service-modules/messaging.nix ];
};
};
})
```
The benefit of this approach is that downstream users can override the value of `myClan` by using `mkForce` or other priority modifiers.
---
## Further
- [Reference Documentation for Service Authors](../../reference/clanServices/clan-service-author-interface.md)
- [Migration Guide from ClanModules to ClanServices](../../guides/migrations/migrate-inventory-services.md)
- [Decision that lead to ClanServices](../../decisions/01-ClanModules.md)

View File

@@ -1,151 +0,0 @@
!!! Note
Vars is the new secret backend that will soon replace the Facts backend.
Defining a linux user's password via the nixos configuration previously required running `mkpasswd ...` and then copying the hash back into the nix configuration.
In this example, we will guide you through automating that interaction using clan `vars`.
For a more general explanation of what clan vars are and how they work, see the intro of the [Reference Documentation for vars](../reference/clan.core/vars.md)
This guide assumes
- Clan is set up already (see [Getting Started](../guides/getting-started/index.md))
- a machine has been added to the clan (see [Adding Machines](./more-machines.md))
This section will walk you through the following steps:
1. declare a `generator` in the machine's nixos configuration
2. inspect the status via the Clan CLI
3. generate the vars
4. observe the changes
5. update the machine
6. share the root password between machines
7. change the password
## Declare the generator
In this example, a `vars` `generator` is used to:
- prompt the user for the password
- run the required `mkpasswd` command to generate the hash
- store the hash in a file
- expose the file path to the nixos configuration
Create a new nix file `root-password.nix` with the following content and import it into your `configuration.nix`
```nix
{config, pkgs, ...}: {
clan.core.vars.generators.root-password = {
# prompt the user for a password
# (`password-input` being an arbitrary name)
prompts.password-input.description = "the root user's password";
prompts.password-input.type = "hidden";
# don't store the prompted password itself
prompts.password-input.persist = false;
# define an output file for storing the hash
files.password-hash.secret = false;
# define the logic for generating the hash
script = ''
cat $prompts/password-input | mkpasswd -m sha-512 > $out/password-hash
'';
# the tools required by the script
runtimeInputs = [ pkgs.mkpasswd ];
};
# ensure users are immutable (otherwise the following config might be ignored)
users.mutableUsers = false;
# set the root password to the file containing the hash
users.users.root.hashedPasswordFile =
# clan will make sure, this path exists
config.clan.core.vars.generators.root-password.files.password-hash.path;
}
```
## Inspect the status
Executing `clan vars list`, you should see the following:
```shellSession
$ clan vars list my_machine
root-password/password-hash: <not set>
```
...indicating that the value `password-hash` for the generator `root-password` is not set yet.
## Generate the values
This step is not strictly necessary, as deploying the machine via `clan machines update` would trigger the generator as well.
To run the generator, execute `clan vars generate` for your machine
```shellSession
$ clan vars generate my_machine
Enter the value for root-password/password-input (hidden):
```
After entering the value, the updated status is reported:
```shellSession
Updated var root-password/password-hash
old: <not set>
new: $6$RMats/YMeypFtcYX$DUi...
```
## Observe the changes
With the last step, a new file was created in your repository:
`vars/per-machine/my-machine/root-password/password-hash/value`
If the repository is a git repository, a commit was created automatically:
```shellSession
$ git log -n1
commit ... (HEAD -> master)
Author: ...
Date: ...
Update vars via generator root-password for machine grmpf-nix
```
## Update the machine
```shell
clan machines update my_machine
```
## Share root password between machines
If we just imported the `root-password.nix` from above into more machines, clan would ask for a new password for each additional machine.
If the root password instead should only be entered once and shared across all machines, the generator defined above needs to be declared as `shared`, by adding `share = true` to it:
```nix
{config, pkgs, ...}: {
clan.core.vars.generators.root-password = {
share = true;
# ...
};
}
```
Importing that shared generator into each machine will ensure that the password is only asked for once, when the first machine gets updated, and is then re-used for all subsequent machines.
## Change the root password
Changing the password can be done via this command.
Replace `my-machine` with your machine.
If the password is shared, just pick any machine that has the generator declared.
```shellSession
$ clan vars generate my-machine --generator root-password --regenerate
...
Enter the value for root-password/password-input (hidden):
Input received. Processing...
...
Updated var root-password/password-hash
old: $6$tb27m6EOdff.X9TM$19N...
new: $6$OyoQtDVzeemgh8EQ$zRK...
```
## Further Reading
- [Reference Documentation for `clan.core.vars` NixOS options](../reference/clan.core/vars.md)
- [Reference Documentation for the `clan vars` CLI command](../reference/cli/vars.md)