docs: Move developer guides into the Developer section

nix fmt

address davhau review
This commit is contained in:
Qubasa
2025-07-23 15:54:30 +07:00
parent 5b59cfbc34
commit 5c7e6b3830
16 changed files with 104 additions and 103 deletions


@@ -0,0 +1,7 @@
---
template: options.html
hide:
- navigation
- toc
---
<redoc src="/openapi.json" />


@@ -0,0 +1 @@
../../../CONTRIBUTING.md


@@ -0,0 +1,195 @@
Here are some methods for debugging and testing the clan-cli.
## Using a Development Branch
To streamline your development process, I suggest not installing `clan-cli`. Instead, clone the `clan-core` repository and add `clan-core/pkgs/clan-cli/bin` to your PATH to use the checked-out version directly.
!!! Note
After cloning, navigate to `clan-core/pkgs/clan-cli` and execute `direnv allow` to activate the devshell. This will set up a symlink to nixpkgs at a specific location; without it, `clan-cli` won't function correctly.
With this setup, you can easily use [breakpoint()](https://docs.python.org/3/library/pdb.html) to inspect the application's internal state as needed.
This approach is feasible because `clan-cli` only requires a Python interpreter and has no other dependencies.
```nix
pkgs.mkShell {
packages = [
pkgs.python3
];
shellHook = ''
export GIT_ROOT="$(git rev-parse --show-toplevel)"
export PATH=$PATH:~/Projects/clan-core/pkgs/clan-cli/bin
'';
}
```
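With the devshell active, you can drop a `breakpoint()` into any code path you want to inspect. As a rough sketch (the signature and body below are only illustrative; the function name mirrors the debug output shown in the next section):
```python
# pkgs/clan-cli/clan_cli/machines/list.py (illustrative sketch)
def list_nixos_machines(flake_url: str) -> list[str]:
    machines: list[str] = []  # the real function builds this list from the flake
    breakpoint()  # execution pauses here; inspect state in the pdb prompt
    return machines
```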
## The Debug Flag
You can enhance your debugging process with the `--debug` flag in the `clan` command. When you add this flag to any command, it displays all subprocess commands initiated by `clan` in a readable format, along with the source code position that triggered them. This feature makes it easier to understand and trace what's happening under the hood.
```bash
$ clan machines list --debug
Debug log activated
nix \
--extra-experimental-features 'nix-command flakes' \
eval \
--show-trace --json \
--print-build-logs '/home/qubasa/Projects/qubasas-clan#clanInternals.machines.x86_64-linux' \
--apply builtins.attrNames \
--json
Caller: ~/Projects/clan-core/pkgs/clan-cli/clan_cli/machines/list.py:96::list_nixos_machines
warning: Git tree '/home/qubasa/Projects/qubasas-clan' is dirty
demo
gchq-local
wintux
```
## VSCode
If you're using VSCode, it has a handy feature that makes paths to source code files clickable in the integrated terminal. Combined with the techniques mentioned above, this allows you to open a clan in VSCode, execute a command like `clan machines list --debug`, and get a printed path to the code that initiates the subprocess. With the `Ctrl` key (or `Cmd` on macOS) and a mouse click, you can jump directly to the corresponding line in the code file and add a `breakpoint()` there to inspect the internal state.
## Finding Print Messages
To trace the origin of print messages in `clan-cli`, you can enable special debugging features using environment variables:
- Set `TRACE_PRINT=1` to include the source location with each print message:
```bash
export TRACE_PRINT=1
```
When running commands with `--debug`, every print will show where it was triggered in the code.
- To see a deeper stack trace for each print, set `TRACE_DEPTH` to the desired number of stack frames (e.g., 3):
```bash
export TRACE_DEPTH=3
```
### Additional Debug Logging
You can enable more detailed logging for specific components by setting these environment variables:
- `CLAN_DEBUG_NIX_SELECTORS=1` — verbose logs for flake.select operations
- `CLAN_DEBUG_NIX_PREFETCH=1` — verbose logs for flake.prefetch operations
- `CLAN_DEBUG_COMMANDS=1` — print the diffed environment of executed commands
Example:
```bash
export CLAN_DEBUG_NIX_SELECTORS=1
export CLAN_DEBUG_NIX_PREFETCH=1
export CLAN_DEBUG_COMMANDS=1
```
These options help you pinpoint the source and context of print messages and debug logs during development.
## Analyzing Performance
To understand what's causing slow performance, set the environment variable `export CLAN_CLI_PERF=1`. When you complete a clan command, you'll see a summary of various performance metrics, helping you identify what's taking up time.
## See all possible packages and tests
To quickly show all available packages and tests, execute:
```bash
nix flake show
```
Under `checks` you will find all tests that are executed in our CI. Under `packages` you will find all our projects.
```
git+file:///home/lhebendanz/Projects/clan-core
├───apps
│ └───x86_64-linux
│ ├───install-vm: app
│ └───install-vm-nogui: app
├───checks
│ └───x86_64-linux
│ ├───borgbackup omitted (use '--all-systems' to show)
│ ├───check-for-breakpoints omitted (use '--all-systems' to show)
│ ├───clan-dep-age omitted (use '--all-systems' to show)
│ ├───clan-dep-bash omitted (use '--all-systems' to show)
│ ├───clan-dep-e2fsprogs omitted (use '--all-systems' to show)
│ ├───clan-dep-fakeroot omitted (use '--all-systems' to show)
│ ├───clan-dep-git omitted (use '--all-systems' to show)
│ ├───clan-dep-nix omitted (use '--all-systems' to show)
│ ├───clan-dep-openssh omitted (use '--all-systems' to show)
│ ├───"clan-dep-python3.11-mypy" omitted (use '--all-systems' to show)
├───packages
│ └───x86_64-linux
│ ├───clan-cli omitted (use '--all-systems' to show)
│ ├───clan-cli-docs omitted (use '--all-systems' to show)
│ ├───clan-ts-api omitted (use '--all-systems' to show)
│ ├───clan-app omitted (use '--all-systems' to show)
│ ├───default omitted (use '--all-systems' to show)
│ ├───deploy-docs omitted (use '--all-systems' to show)
│ ├───docs omitted (use '--all-systems' to show)
│ ├───editor omitted (use '--all-systems' to show)
└───templates
    └───default: template: Initialize a new clan flake
```
You can execute every test separately by following the tree path, for example: `nix run .#checks.x86_64-linux.clan-pytest -L`.
## Test Locally in Devshell with Breakpoints
To test the CLI locally in a development environment and set breakpoints for debugging, follow these steps:
1. Run the following command to execute your tests and allow for debugging with breakpoints:
```bash
cd ./pkgs/clan-cli
pytest -n0 -s --maxfail=1 ./tests/test_nameofthetest.py
```
You can place `breakpoint()` in your Python code where you want to trigger a breakpoint for debugging.
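For example, a breakpoint placed inside a hypothetical test pauses execution right before the assertion (the file name matches the command above, the test body is made up):
```python
# tests/test_nameofthetest.py (hypothetical test body)
def test_something() -> None:
    result = {"name": "demo"}
    breakpoint()  # with `-s -n0`, pytest drops into a pdb prompt here
    assert result["name"] == "demo"
```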
## Test Locally in a Nix Sandbox
To run tests in a Nix sandbox, you have two options depending on whether your test functions have been marked as impure or not:
### Running Tests Marked as Impure
If your test functions need to execute `nix build` and have been marked as impure because you can't execute `nix build` inside a Nix sandbox, use the following command:
```bash
nix run .#impure-checks -L
```
This command will run the impure test functions.
### Running Pure Tests
For test functions that have not been marked as impure and don't require executing `nix build`, you can use the following command:
```bash
nix build .#checks.x86_64-linux.clan-pytest --rebuild
```
This command will run all pure test functions.
### Inspecting the Nix Sandbox
If you need to inspect the Nix sandbox while running tests, follow these steps:
1. Insert a long sleep into your test code where you want to pause the execution. For example:
```python
import time
time.sleep(3600) # Sleep for one hour
```
2. Use `cntr` and `psgrep` to attach to the Nix sandbox. This allows you to interactively debug your code while it's paused. For example:
```bash
psgrep <your_python_process_name>
cntr attach <container id, container name or process id>
```
Alternatively, you can use the [nix breakpoint hook](https://nixos.org/manual/nixpkgs/stable/#breakpointhook).


@@ -0,0 +1,316 @@
# Testing your contributions
Each feature added to clan should be tested extensively via automated tests.
This document covers different methods of automated testing, including creating, running and debugging such tests.
In order to test the behavior of clan, different testing frameworks are used depending on the concern:
- NixOS VM tests: for high level integration
- NixOS container tests: for high level integration
- Python tests via pytest: for unit tests and integration tests
- Nix eval tests: for nix functions, libraries, modules, etc.
## NixOS VM Tests
The [NixOS VM Testing Framework](https://nixos.org/manual/nixos/stable/index.html#sec-nixos-tests) is used to create high level integration tests, by running one or more VMs generated from a specified config. Commands can be executed on the booted machine(s) to verify a deployment of a service works as expected. All machines within a test are connected by a virtual network. Internet access is not available.
### When to use VM tests
- testing that a service defined through a clan module works as expected after deployment
- testing clan-cli subcommands which require accessing a remote machine
### When not to use VM tests
NixOS VM Tests are slow and expensive. They should only be used for testing high level integration of components.
VM tests should be avoided wherever it is possible to implement a cheaper unit test instead.
- testing detailed behavior of a certain clan-cli command -> use unit testing via pytest instead
- regression testing -> add a unit test
### Finding examples for VM tests
Existing NixOS VM tests in clan-core can be found by using `ripgrep`:
```shellSession
rg self.clanLib.test.baseTest
```
### Locating definitions of failing VM tests
All NixOS VM tests in clan are exported as individual flake outputs under `checks.x86_64-linux.{test-attr-name}`.
If a test fails in CI:
- look for the job name of the test near the top of the CI job page, for example `gitea:clan/clan-core#checks.x86_64-linux.borgbackup/1242`
- in this case `checks.x86_64-linux.borgbackup` is the attribute path
- note the last element of that attribute path, in this case `borgbackup`
- search for the attribute name inside the `/checks` directory via ripgrep
Example: locating the VM test named `borgbackup`:
```shellSession
$ rg "borgbackup =" ./checks
./checks/flake-module.nix
41:    borgbackup = self.clanLib.test.baseTest ./borgbackup nixosTestArgs;
```
-> the location of that test is `/checks/flake-module.nix` line `41`.
### Adding vm tests
Create a nixos test module under `/checks/{name}/default.nix` and import it in `/checks/flake-module.nix`.
### Running VM tests
```shellSession
nix build .#checks.x86_64-linux.{test-attr-name}
```
(replace `{test-attr-name}` with the name of the test)
### Debugging VM tests
The following techniques can be used to debug a VM test:
#### Print Statements
Locate the definition (see above) and add print statements, for example `print(client.succeed("systemctl --failed"))`, then re-run the test via `nix build` (see above).
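A sketch of what such a print statement looks like inside the test script (the machine name and the inspected unit are placeholders):
```python3
# Inside the testScript of the NixOS test (names are placeholders)
start_all()

# Print the list of failed systemd units of the machine into the test log
print(client.succeed("systemctl --failed"))

# Print the journal of a specific unit for closer inspection
print(client.succeed("journalctl -u some-service.service --no-pager"))
```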
#### Interactive Shell
- Execute the VM test outside of the Nix sandbox via the following command:
`nix run .#checks.x86_64-linux.{test-attr-name}.driver -- --interactive`
- Then run the commands in the machines manually, for example:
```python3
start_all()
machine1.succeed("echo hello")
```
#### Breakpoints
To get an interactive shell at a specific line in the VM test script, add a `breakpoint()` call before the line to debug, then run the test outside of the sandbox via:
`nix run .#checks.x86_64-linux.{test-attr-name}.driver`
## NixOS Container Tests
These are very similar to NixOS VM tests in that they also run virtualized NixOS machines, but they use containers instead of VMs, which are much cheaper to launch.
As of now the container test driver is a downstream development in clan-core.
Basically everything stated in the NixOS VM tests section above applies here, with some limitations.
Limitations:
- Cannot run in interactive mode; however, while the container test runs, it logs an `nsenter` command that can be used to log into each of the containers.
- setuid binaries don't work
### Where to find examples for NixOS container tests
Existing NixOS container tests in clan-core can be found by using `ripgrep`:
```shellSession
rg self.clanLib.test.containerTest
```
## Python tests via pytest
Since the Clan CLI is written in Python, the `pytest` framework is used to define unit tests and integration tests in Python.
Due to their superior efficiency, these tests should be preferred over VM and container tests whenever possible.
### When to use python tests
- writing unit tests for python functions and modules, or bugfixes of such
- all integration tests that do not require building or running a nixos machine
- impure integration tests that require internet access (very rare, try to avoid)
### When not to use python tests
- integration tests that require building or running a nixos machine (use NixOS VM or container tests instead)
- testing behavior of a nix function or library (use nix eval tests instead)
### Finding examples of python tests
Existing python tests in clan-core can be found by using `ripgrep`:
```shellSession
rg "import pytest"
```
### Locating definitions of failing python tests
If any python test fails in the CI pipeline, an error message like this can be found at the end of the log:
```
...
FAILED tests/test_machines_cli.py::test_machine_delete - clan_lib.errors.ClanError: Template 'new-machine' not in 'inputs.clan-core
...
```
In this case the test is defined in the file `/tests/test_machines_cli.py` via the test function `test_machine_delete`.
### Adding python tests
If a specific python module is tested, the test should be located near the tested module in a subdirectory called `./tests`.
If the test is not clearly related to a specific module, put it in the top-level `./tests` directory of the tested python package. For `clan-cli` this would be `/pkgs/clan-cli/clan_cli/tests`.
All filenames must be prefixed with `test_` and test functions prefixed with `test_` for pytest to discover them.
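A minimal example following these conventions might look like this (the file path and the assertion are made up for illustration):
```python
# pkgs/clan-cli/clan_cli/tests/test_example.py (hypothetical)
def test_example_naming() -> None:
    # Discovered by pytest because both the file name and the
    # function name are prefixed with `test_`
    assert "clan".upper() == "CLAN"
```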
### Running python tests
#### Running all python tests
To run all python tests which are executed in the CI pipeline locally, use this `nix build` command:
```shellSession
nix build .#checks.x86_64-linux.clan-pytest-{with,without}-core
```
#### Running a specific python test
To run a specific python test outside the Nix sandbox:
1. Enter the development environment of the python package, by either:
- Having direnv enabled and entering the directory of the package (e.g. `/pkgs/clan-cli`)
- Or using the command `select-shell {package}` in the top-level dev shell of clan-core (e.g. `select-shell clan-cli`)
2. Execute the test via pytest by issuing:
`pytest ./path/to/test_file.py::test_function_name -s -n0`
The flags `-s -n0` are useful to forward all stdout/stderr output to the terminal and to debug interactively via `breakpoint()`.
### Debugging python tests
To debug a specific python test, find its definition (see above) and make sure to enter the correct dev environment for that python package.
Modify the test and add `breakpoint()` statements to it.
Execute the test using the flags `-sn0` in order to get an interactive shell at the breakpoint:
```shellSession
pytest ./path/to/test_file.py::test_function_name -sn0
```
## Nix Eval Tests
### When to use nix eval tests
Nix eval tests are good for testing any nix logic, including
- nix functions
- nix libraries
- modules for the NixOS module system
### When not to use nix eval tests
- tests that require building nix derivations (except some very cheap ones)
- tests that require running programs written in other languages
- tests that require building or running NixOS machines
### Finding examples of nix eval tests
Existing nix eval tests can be found via this `ripgrep` command:
```shellSession
rg "nix-unit --eval-store"
```
### Locating definitions of failing nix eval tests
Failing nix eval tests look like this:
```shellSession
> ✅ test_attrsOf_attrsOf_submodule
> ✅ test_attrsOf_submodule
> ❌ test_default
> /build/nix-8-2/expected.nix --- Nix
> 1 { foo = { bar = { __prio = 1500; }; } 1 { foo = { bar = { __prio = 1501; }; }
> . ; } . ; }
>
>
> ✅ test_no_default
> ✅ test_submodule
> ✅ test_submoduleWith
> ✅ test_submodule_with_merging
>
> 😢 6/7 successful
> error: Tests failed
```
To locate the definition, find the flake attribute name of the failing test near the top of the CI Job page, like for example `gitea:clan/clan-core#checks.x86_64-linux.eval-lib-values/1242`.
In this case `eval-lib-values` is the attribute we are looking for.
Find the attribute via ripgrep:
```shellSession
$ rg "eval-lib-values ="
lib/values/flake-module.nix
21: eval-lib-values = pkgs.runCommand "tests" { nativeBuildInputs = [ pkgs.nix-unit ]; } ''
grmpf@grmpf-nix ~/p/c/clan-core (test-docs)>
```
In this case the test is defined in the file `lib/values/flake-module.nix` on line 21.
### Adding nix eval tests
In clan core, the following pattern is usually followed:
- tests are put in a `test.nix` file
- a CI Job is exposed via a `flake-module.nix`
- that `flake-module.nix` is imported via the `flake.nix` at the root of the project
For example see `/lib/values/{test.nix,flake-module.nix}`.
### Running nix eval tests
Since all nix eval tests are exposed via the flake outputs, they can be run via `nix build`:
```shellSession
nix build .#checks.x86_64-linux.{test-attr-name}
```
For quicker iteration times, instead of `nix build` use the `nix-unit` command available in the dev environment.
Example:
```shellSession
nix-unit --flake .#legacyPackages.x86_64-linux.{test-attr-name}
```
### Debugging nix eval tests
Follow the instructions above to find the definition of the test, then use one of the following techniques:
#### Print debugging
Add `lib.trace` or `lib.traceVal` statements in order to print some variables during evaluation.
#### Nix repl
Use `nix repl` to evaluate and inspect the test.
Each test consists of an `expr` (expression) and an `expected` field. `nix-unit` simply checks if `expr == expected` and prints the diff if that's not the case.
`nix repl` can be used to inspect an `expr` manually, or any other variables that you choose to expose.
Example:
```shellSession
$ nix repl
Nix 2.25.5
Type :? for help.
nix-repl> tests = import ./lib/values/test.nix {}
nix-repl> tests
{
test_attrsOf_attrsOf_submodule = { ... };
test_attrsOf_submodule = { ... };
test_default = { ... };
test_no_default = { ... };
test_submodule = { ... };
test_submoduleWith = { ... };
test_submodule_with_merging = { ... };
}
nix-repl> tests.test_default.expr
{
foo = { ... };
}
```


@@ -0,0 +1,229 @@
# Authoring a clanModule
!!! Danger "Will get deprecated soon"
Please consider twice creating new modules in this format
[`clan.service` module](../clanServices/index.md) will be the new standard soon.
This guide walks you through authoring your first module, explaining which conventions must be followed so that others have an enjoyable experience and the module can be used with minimal effort.
!!! Tip
External ClanModules can be ad-hoc loaded via [`clan.inventory.modules`](../../../reference/nix-api/inventory.md#inventory.modules)
## Bootstrapping the `clanModule`
A ClanModule is a specific subset of a [NixOS Module](https://nix.dev/tutorials/module-system/index.html), but it has some constraints and might be used via the [Inventory](../../../guides/inventory.md) interface.
In fact a `ClanModule` can be thought of as a layer of abstraction on top of NixOS and/or other ClanModules. It may configure sane defaults and provide an ergonomic interface that is easy to use, and it can also be used via a UI that is currently under development.
Because ClanModules should be configurable via `json`/`API`, their entire interface (`options`) must be serializable.
!!! Tip
A ClanModule's interface can be checked by running the JSON schema converter as follows:
`nix build .#legacyPackages.x86_64-linux.schemas.inventory`
If the build succeeds, the module is compatible.
## Directory structure
Each module SHOULD be a directory of the following format:
```sh
# Example: borgbackup
clanModules/borgbackup
├── README.md
└── roles
├── client.nix
└── server.nix
```
!!! Tip
`README.md` is always required. See section [Readme](#readme) for further details.
The `roles` folder is strictly required for `features = [ "inventory" ]`.
## Registering the module
=== "User module"
If the module should be loaded ad-hoc, it can be made available in any project via the [`clan.inventory.modules`](../../../reference/nix-api/inventory.md#inventory.modules) attribute.
```nix title="flake.nix"
# ...
# Sometimes this attribute set is defined in clan.nix
clan-core.lib.clan {
# 1. Add the module to the available clanModules with inventory support
inventory.modules = {
custom-module = ./modules/my_module;
};
# 2. Use the module in the inventory
inventory.services = {
custom-module.instance_1 = {
roles.default.machines = [ "machineA" ];
};
};
};
```
=== "Upstream module"
If the module will be contributed to [`clan-core`](https://git.clan.lol/clan-core), the clanModule must be registered within the `clanModules` attribute in `clan-core`:
```nix title="clanModules/flake-module.nix"
--8<-- "clanModules/flake-module.nix:0:5"
# Register our new module here
# ...
```
## Readme
The `README.md` is a required file for all modules. It MUST contain frontmatter in [`toml`](https://toml.io) format.
```markdown
---
description = "Module A"
---
This is the example module that does xyz.
```
See the [Full Frontmatter reference](../../../reference/clanModules/frontmatter/index.md) for further details and all supported attributes.
## Roles
If the module declares to implement `features = [ "inventory" ]` then it MUST contain a roles directory.
Each `.nix` file in the `roles` directory is added as a role to the inventory service.
Other files can also be placed alongside the `.nix` files.
```sh
└── roles
├── client.nix
└── server.nix
```
This adds the roles `client` and `server`.
??? Tip "Good to know"
Sometimes a `ClanModule` should be usable both via clan's `inventory` concept and natively as a NixOS module.
> In the long term, we want most modules to implement support for the inventory,
> but we are also aware that there are certain low-level modules that always serve as a backend for other higher-level `clanModules` with inventory support.
> These modules may not want to implement inventory interfaces as they are always used directly by other modules.
This can be achieved by placing an additional `default.nix` into the root of the ClanModules directory as shown:
```sh
# ModuleA
├── README.md
├── default.nix
└── roles
└── default.nix
```
```nix title="default.nix"
{...}:{
imports = [ ./roles/default.nix ];
}
```
By using this pattern, the module (`moduleA`) can be imported into any regular NixOS module via:
```nix
{...}:{
imports = [ clanModules.moduleA ];
}
```
## Adding configuration options
While we recommend keeping the interface as minimal as possible and deriving all required information from the `roles` model, it can sometimes be necessary or convenient to expose customization options beyond `roles`.
The following shows how to add options to your module.
**It is important to understand that every module has its own namespace where it should declare options**
**`clan.{moduleName}`**
???+ Example
The following example shows how to register options in the module interface
and how it can be set via the inventory
```nix title="/default.nix"
custom-module = ./modules/custom-module;
```
Since the module is called `custom-module` all of its exposed options should be added to `options.clan.custom-module.*...*`
```nix title="custom-module/roles/default.nix"
{ lib, ... }:
{
options = {
clan.custom-module.foo = lib.mkOption {
type = lib.types.str;
default = "bar";
};
};
}
```
Once the module is [registered](#registering-the-module), configuration can be set as follows.
```nix title="flake.nix"
# Sometimes this attribute set is defined in clan.nix
clan-core.lib.clan {
inventory.services = {
custom-module.instance_1 = {
roles.default.machines = [ "machineA" ];
roles.default.config = {
# All configuration here is scoped to `clan.custom-module`
foo = "foobar";
};
};
};
}
```
## Organizing the ClanModule
Each `{role}.nix` is included in a machine if the machine is declared to have that role.
For example:
```nix
roles.client.machines = ["MachineA"];
```
Then `roles/client.nix` will be added to the machine `MachineA`.
This behavior makes it possible to split the interface and common code paths when using multiple roles.
In the concrete example of `borgbackup` this allows a `server` to declare a different interface than the corresponding `client`.
The client offers a configuration option to exclude certain local directories from being backed up:
```nix title="roles/client.nix"
# Example client interface
options.clan.borgbackup.exclude = ...
```
The server doesn't offer any configuration options, because everything is set up automatically.
```nix title="roles/server.nix"
# Example server interface
options.clan.borgbackup = {};
```
Assuming that there is a common code path or a common interface between `server` and `client`, this can be structured as:
```nix title="roles/server.nix, roles/client.nix"
{...}: {
# ...
imports = [ ../common.nix ];
}
```


@@ -0,0 +1,271 @@
# Authoring a 'clan.service' module
!!! Tip
This is the successor format to the older [clanModules](../clanModules/index.md).
While some features might still be missing, we recommend adopting this format early and giving feedback.
## Service Module Specification
This section explains how to author a clan service module.
We discussed the initial architecture in [01-clan-service-modules](../../../decisions/01-ClanModules.md) and decided to rework the format.
For the full specification and current state see: **[Service Author Reference](../../../reference/clanServices/clan-service-author-interface.md)**
### A Minimal module
First of all we need to register our module into the `clan.modules` attribute. Make sure to choose a unique name so the module doesn't collide with any of the core modules.
While not required, we recommend prefixing your module attribute name, e.g. `@hsjobeki/customNetworking`.
If you export the module from your flake, other people will be able to import it and use it within their clan.
```nix title="flake.nix"
# ...
outputs = inputs: inputs.flake-parts.lib.mkFlake { inherit inputs; } ({
imports = [ inputs.clan-core.flakeModules.default ];
# ...
# Sometimes this attribute set is defined in clan.nix
clan = {
# If needed: Exporting the module for other people
modules."@hsjobeki/customNetworking" = import ./service-modules/networking.nix;
# We could also inline the complete module spec here
# For example
# {...}: { _class = "clan.service"; ... };
};
})
```
The imported module file must fulfill at least the following requirements:
- Be an actual module, i.e. either an attribute set or a function that returns an attribute set.
- Required: `_class = "clan.service"`
- Required: `manifest.name = "<name of the provided service>"`
```nix title="/service-modules/networking.nix"
{
_class = "clan.service";
manifest.name = "zerotier-networking";
# ...
}
```
For more attributes see: **[Service Author Reference](../../../reference/clanServices/clan-service-author-interface.md)**
### Adding functionality to the module
While this minimal module is valid in itself, it has no way of adding any machines, because it doesn't specify any roles.
The next logical step is to think about the interactions between the machines and define *roles* for them.
Here is a short guide with some conventions:
- If all machines have the same relation to each other, `peer` is commonly used. `peers` can often talk to each other directly.
- Often machines don't have a direct relation to each other and there is one elevated machine in the middle, classically known as `client-server`. `clients` are less likely to talk directly to each other than `peers`.
- If your machines don't have any relation or interaction with each other, you should reconsider whether the desired functionality is really a multi-host service.
```nix title="/service-modules/networking.nix"
{
_class = "clan.service";
manifest.name = "zerotier-networking";
# Define what roles exist
roles.peer = {};
roles.controller = {};
# ...
}
```
Next we need to define the settings and the behavior of these distinct roles.
```nix title="/service-modules/networking.nix"
{ lib, ... }:
{
_class = "clan.service";
manifest.name = "zerotier-networking";
# Define what roles exist
roles.peer = {
interface = {
# These options can be set via 'roles.peer.settings'
options.ipRanges = lib.mkOption { type = lib.types.listOf lib.types.str; };
};
# Maps over all instances and produces one result per instance.
perInstance = { instanceName, settings, machine, roles, ... }: {
# Analog to 'perSystem' of flake-parts.
# For every instance of this service we will add a nixosModule to a client-machine
nixosModule = { config, ... }: {
# Interaction examples what you could do here:
# - Get some settings of this machine
# settings.ipRanges
#
# - Get all controller names:
# allControllerNames = lib.attrNames roles.controller.machines
#
# - Get all roles of the machine:
# machine.roles
#
# - Get the settings that were applied to a specific controller machine:
# roles.controller.machines.jon.settings
#
# Add one systemd service for every instance
systemd.services."zerotier-client-${instanceName}" = {
# ... depend on the '.config' and 'perInstance arguments'
};
};
}
};
roles.controller = {
interface = {
# These options can be set via 'roles.controller.settings'
options.dynamicIp.enable = lib.mkOption { type = lib.types.bool; };
};
perInstance = { ... }: {};
};
# Maps over all machines and produces one result per machine.
perMachine = { instances, machine, ... }: {
# Analog to 'perSystem' of flake-parts.
# For every machine of this service we will add exactly one nixosModule to a machine
nixosModule = { config, ... }: {
# Interaction examples what you could do here:
# - Get the name of this machine
# machine.name
#
# - Get all roles of this machine across all instances:
# machine.roles
#
# - Get the settings of a specific instance of a specific machine
# instances.foo.roles.peer.machines.jon.settings
#
# Globally enable something
networking.enable = true;
};
};
# ...
}
```
## Using values from a NixOS machine inside the module
!!! Example "Experimental Status"
This feature is experimental and should be used with care.
Sometimes a settings value depends on something within a machine's `config`.
Since the `interface` is defined in a completely machine-agnostic way, values from a machine's `config` cannot be set in the traditional way.
The following example shows how to create a local instance of machine specific settings.
```nix title="someservice.nix"
{
# Maps over all instances and produces one result per instance.
perInstance = { instanceName, extendSettings, machine, roles, ... }: {
nixosModule = { config, lib, ... }:
let
# Create new settings with
# 'ipRanges' defaulting to 'config.network.ip.range' from this machine
# This only works if there is no 'default' already.
localSettings = extendSettings {
ipRanges = lib.mkDefault config.network.ip.range;
};
in
{
# ...
};
};
}
```
!!! Danger
`localSettings` is a local attribute. Other machines cannot access it.
Calling `extendSettings` does not change the original `settings`. This means that if a different machine tries to access e.g. `roles.client.settings`, it will *NOT* contain your changes.
Exposing the changed settings to other machines would come with a huge performance penalty, which is why we don't offer it.
## Passing `self` or `pkgs` to the module
Passing dependencies to the module must be done manually.
We found the following two best practices:
1. Using `lib.importApply`
2. Using a wrapper module
Both have pros and cons. Overall, `importApply` is the simpler approach, but it can be more limiting.
### Using `importApply`
Using [importApply](https://github.com/NixOS/nixpkgs/pull/230588) is essentially the same as `import file` followed by a function application, but it preserves the error location.
Imagine your module looks like this:
```nix title="messaging.nix"
{ self }:
{ ... }: # This line is optional
{
_class = "clan.service";
manifest.name = "messaging"
# ...
}
```
To import the module use `importApply`
```nix title="flake.nix"
# ...
outputs = inputs: inputs.flake-parts.lib.mkFlake { inherit inputs; } ({ self, lib, ... }: {
imports = [ inputs.clan-core.flakeModules.default ];
# ...
# Sometimes this attribute set is defined in clan.nix
clan = {
# Register the module
modules."@hsjobeki/messaging" = lib.importApply ./service-modules/messaging.nix { inherit self; };
};
})
```
### Using a wrapper module
```nix title="messaging.nix"
{ config, ... }:
{
_class = "clan.service";
manifest.name = "messaging"
# ...
# config.myClan
}
```
Then wrap the module and forward the variable `self` from the outer context into the module:
```nix title="flake.nix"
# ...
outputs = inputs: inputs.flake-parts.lib.mkFlake { inherit inputs; } ({ self, lib, ... }: {
imports = [ inputs.clan-core.flakeModules.default ];
# ...
# Sometimes this attribute set is defined in clan.nix
clan = {
# Register the module
modules."@hsjobeki/messaging" = {
# Create an option 'myClan' and assign it to 'self'
options.myClan = lib.mkOption {
default = self;
};
imports = [./service-modules/messaging.nix ];
};
};
})
```
The benefit of this approach is that downstream users can override the value of `myClan` by using `mkForce` or other priority modifiers.
---
## Further
- [Reference Documentation for Service Authors](../../../reference/clanServices/clan-service-author-interface.md)
- [Migration Guide from ClanModules to ClanServices](../../../guides/migrations/migrate-inventory-services.md)
- [Decision that lead to ClanServices](../../../decisions/01-ClanModules.md)


@@ -0,0 +1,94 @@
!!! Danger ":fontawesome-solid-road-barrier: Under Construction :fontawesome-solid-road-barrier:"
Currently under construction, use with caution.
:fontawesome-solid-road-barrier: :fontawesome-solid-road-barrier: :fontawesome-solid-road-barrier:
## Structure
A disk template consists of exactly two files:
- `default.nix`
- `README.md`
```sh
└── single-disk
├── default.nix
└── README.md
```
## `default.nix`
Placeholders are filled with their machine-specific options when a template is used for a machine.
The user can choose any valid options from the hardware report.
The file itself is then copied to `machines/{machineName}/disko.nix` and will be automatically loaded by the machine.
`single-disk/default.nix`
```
{
disko.devices = {
disk = {
main = {
device = "{{mainDisk}}";
...
};
};
};
}
```
## Placeholders
Each template must declare the options of its placeholders depending on the hardware-report.
`api/disk.py`
```py
templates: dict[str, dict[str, Callable[[dict[str, Any]], Placeholder]]] = {
"single-disk": {
# Placeholders
"mainDisk": lambda hw_report: Placeholder(
label="Main disk", options=hw_main_disk_options(hw_report), required=True
),
}
}
```
Introducing new local or global placeholders requires contributing to clan-core `api/disks.py`.
### Predefined placeholders
Some placeholders provide predefined functionality
- `uuid`: In most cases we recommend adding a unique id to all disks. This prevents the system from accidentally booting from e.g. hot-plugged devices.
```
disko.devices = {
disk = {
main = {
name = "main-{{uuid}}";
...
}
}
}
```
## Readme
The readme frontmatter must be in the same format as the modules' frontmatter.
```markdown
---
description = "Simple disk schema for single disk setups"
---
# Single disk
Use this schema for simple setups where ....
```
The format and fields of this file are not finalized yet and might change once fully implemented.


@@ -0,0 +1,25 @@
# Developer Documentation
!!! Danger
This documentation is **not** intended for external users. It may contain low-level details and internal-only interfaces.
Welcome to the internal developer documentation.
This section is intended for contributors, engineers, and internal stakeholders working directly with our system, tooling, and APIs. It provides a technical overview of core components, internal APIs, conventions, and patterns that support the platform.
Our goal is to make the internal workings of the system **transparent, discoverable, and consistent** — helping you contribute confidently, troubleshoot effectively, and build faster.
## What's Here?
!!! note "docs migration ongoing"
- [ ] **API Reference**: 🚧🚧🚧 Detailed documentation of internal API functions, inputs, and expected outputs. 🚧🚧🚧
- [ ] **System Concepts**: Architectural overviews and domain-specific guides.
- [ ] **Development Guides**: How to test, extend, or integrate with key components.
- [ ] **Design Notes**: Rationales behind major design decisions or patterns.
## Who is This For?
* Developers contributing to the platform
* Engineers debugging or extending internal systems
* Anyone needing to understand **how** and **why** things work under the hood