Compare commits


1 Commit

Author: Johannes Kirschbauer
SHA1: 5859eeac5a
Message: vm_manager: remove in favor of clan-app
Date: 2025-08-13 20:23:05 +02:00
1249 changed files with 24797 additions and 52877 deletions

View File

@@ -1,12 +0,0 @@
## Description of the change
<!-- Brief summary of the change if not already clear from the title -->
## Checklist
- [ ] Updated Documentation
- [ ] Added tests
- [ ] Doesn't affect backwards compatibility - or check the next points
- [ ] Add the breaking change and migration details to docs/release-notes.md
- !!! Review from another person is required *BEFORE* merge !!!
- [ ] Add introduction of major feature to docs/release-notes.md

View File

@@ -17,4 +17,4 @@ jobs:
- name: Build clan-app for x86_64-darwin
run: |
nix build .#packages.x86_64-darwin.clan-app --log-format bar-with-logs
nix build .#packages.x86_64-darwin.clan-app --system x86_64-darwin --log-format bar-with-logs

View File

@@ -0,0 +1,9 @@
name: checks
on:
pull_request:
jobs:
checks-impure:
runs-on: nix
steps:
- uses: actions/checkout@v4
- run: nix run .#impure-checks

View File

@@ -8,6 +8,6 @@ jobs:
runs-on: nix
steps:
- uses: actions/checkout@v4
- run: nix run --print-build-logs .#deploy-docs
- run: nix run .#deploy-docs
env:
SSH_HOMEPAGE_KEY: ${{ secrets.SSH_HOMEPAGE_KEY }}

View File

@@ -21,9 +21,5 @@ jobs:
# Exclude private flakes and update-clan-core checks flake
exclude-patterns: "checks/impure/flake.nix"
auto-merge: true
git-author-name: "clan-bot"
git-committer-name: "clan-bot"
git-author-email: "clan-bot@clan.lol"
git-committer-email: "clan-bot@clan.lol"
gitea-token: ${{ secrets.CI_BOT_TOKEN }}
github-token: ${{ secrets.CI_BOT_GITHUB_TOKEN }}

View File

@@ -10,7 +10,7 @@ jobs:
if: github.repository_owner == 'clan-lol'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/create-github-app-token@v2

.gitignore vendored
View File

@@ -39,6 +39,7 @@ select
# Generated files
pkgs/clan-app/ui/api/API.json
pkgs/clan-app/ui/api/API.ts
pkgs/clan-app/ui/api/Inventory.ts
pkgs/clan-app/ui/api/modules_schemas.json
pkgs/clan-app/ui/api/schema.json
pkgs/clan-app/ui/.fonts
@@ -52,5 +53,3 @@ pkgs/clan-app/ui/.fonts
*.gif
*.mp4
*.mkv
.jj

View File

@@ -1,22 +0,0 @@
clanServices/.* @pinpox @kenji
lib/test/container-test-driver/.* @DavHau @mic92
lib/inventory/.* @hsjobeki
lib/inventoryClass/.* @hsjobeki
modules/.* @hsjobeki
pkgs/clan-app/ui/.* @hsjobeki @brianmcgee
pkgs/clan-app/clan_app/.* @qubasa @hsjobeki
pkgs/clan-cli/clan_cli/.* @lassulus @mic92 @kenji
pkgs/clan-cli/clan_cli/(secrets|vars)/.* @DavHau @lassulus
pkgs/clan-cli/clan_lib/log_machines/.* @Qubasa
pkgs/clan-cli/clan_lib/ssh/.* @Qubasa @Mic92 @lassulus
pkgs/clan-cli/clan_lib/tags/.* @hsjobeki
pkgs/clan-cli/clan_lib/persist/.* @hsjobeki
pkgs/clan-cli/clan_lib/flake/.* @lassulus
pkgs/clan-cli/api.py @hsjobeki
pkgs/clan-cli/openapi.py @hsjobeki

CONTRIBUTING.md Normal file
View File

@@ -0,0 +1,4 @@
# Contributing to Clan
<!-- Local file: docs/CONTRIBUTING.md -->
Go to the Contributing guide at https://docs.clan.lol/guides/contributing/CONTRIBUTING

View File

@@ -1,4 +1,4 @@
Copyright 2023-2025 Clan contributors
Copyright 2023-2024 Clan contributors
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in

PLAN.md
View File

@@ -1,105 +0,0 @@
Title: Add nix-darwin Support to Clan Services (clan.service)
Summary
- Extend clan services so authors can ship a darwinModule alongside nixosModule.
- Wire service results into darwin machines the same way we already do for NixOS.
- Keep full backward compatibility: existing services that only export nixosModule continue to work unchanged.
Goals
- Service authors can return perInstance/perMachine darwinModule similarly to nixosModule.
- Darwin machines import the correct aggregated service module outputs.
- Documentation describes the new result attribute and authoring pattern.
Non-Goals (initial phase)
- No rework of service settings schema or UI beyond documenting darwinModule.
- No OS-specific extraModules handling (we will keep extraModules affecting only nixos aggregation initially to avoid breaking existing users).
- No sweeping updates of all services; we'll add a concrete example (users) and leave others to be migrated incrementally.
Design Overview
- Service result attributes gain darwinModule in both roles.<name>.perInstance and perMachine results.
- The service aggregator composes both nixosModule and darwinModule per machine.
- The machine wiring picks the correct module based on the machine's class (nixos vs darwin).
Changes By File (with anchors)
- lib/inventory/distributed-service/service-module.nix
- Add darwinModule to per-instance return type next to nixosModule.
- Where: lib/inventory/distributed-service/service-module.nix:536 (options.nixosModule = mkOption { … })
- Action: Add sibling options.darwinModule = mkOption { type = types.deferredModule; default = { }; description = "A single nix-darwin module for the instance."; }.
- Add darwinModule to per-machine return type next to nixosModule.
- Where: lib/inventory/distributed-service/service-module.nix:666 (options.nixosModule = mkOption { … })
- Action: Add sibling options.darwinModule = mkOption { type = types.deferredModule; default = { }; description = "A single nix-darwin module for the machine."; }.
- Compose darwinModule per (role, instance, machine) similarly to nixosModule.
- Where: lib/inventory/distributed-service/service-module.nix:878–893 (wrapper that builds nixosModule = { imports = [ instanceRes.nixosModule ] ++ extraModules … }); see the sketch after this list.
- Action: Build darwinModule = { imports = [ instanceRes.darwinModule ]; }.
Note: Do NOT include roles.*.extraModules here for darwin initially to avoid importing nixos-specific modules into darwin eval.
- Aggregate darwinModules in final result.
- Where: lib/inventory/distributed-service/service-module.nix:958–993 (instanceResults builder and final nixosModule = { imports = [ machineResult.nixosModule ] ++ instanceResults.nixosModules; })
- Actions:
- Track instanceResults.darwinModules in parallel to instanceResults.nixosModules.
- Add final darwinModule = { imports = [ machineResult.darwinModule ] ++ instanceResults.darwinModules; }.
- modules/clan/distributed-services.nix
- Feed the right service module to each machine based on machineClass.
- Where: modules/clan/distributed-services.nix:147152
- Current: machineImports = fold over services, collecting serviceModule.result.final.${machineName}.nixosModule
- Change: If inventory.machines.${machineName}.machineClass == "darwin" then collect .darwinModule else .nixosModule.
- modules/clan/module.nix
- Ensure machineImports are included for both nixos and darwin machines.
- Where: modules/clan/module.nix:195 (currently ++ lib.optionals (_class == "nixos") (v.machineImports or [ ]))
- Change: Include machineImports for darwin as well (or remove the conditional and always append v.machineImports).
- docs/site/decisions/01-Clan-Modules.md
- Document darwinModule as a result attribute.
- Where: docs/site/decisions/01-Clan-Modules.md:129–146 (Result attributes and perMachine text mentioning only nixosModule)
- Change: Add “darwinModule” to the Result attributes list and examples, mirroring nixosModule.
- Example service update: clanServices/users/default.nix
- Add perInstance.darwinModule and perMachine.darwinModule mirroring nixos behavior where feasible.
- Where: clanServices/users/default.nix:28–90 (roles.default.perInstance.nixosModule), 148–153 (perMachine.nixosModule)
- Change: Provide minimal darwinModule that sets users.users.<name> (and any safe, cross-platform bits). If some nixos-only settings (e.g., systemd hooks) exist, keep them nixos-only.
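A minimal sketch pulling the pieces above together (names like instanceRes, machineResult, and instanceResults follow the plan's anchors; the surrounding code is elided):

```nix
# Sketch: sibling option next to options.nixosModule,
# added in both roles.<name>.perInstance and perMachine result types.
options.darwinModule = mkOption {
  type = types.deferredModule;
  default = { };
  description = "A single nix-darwin module for the instance.";
};

# Sketch: per-(role, instance, machine) wrapper, next to the nixosModule wrapper.
# Deliberately no roles.*.extraModules here initially (see note above).
darwinModule = {
  imports = [ instanceRes.darwinModule ];
};

# Sketch: final per-machine aggregation, parallel to nixosModule.
darwinModule = {
  imports = [ machineResult.darwinModule ] ++ instanceResults.darwinModules;
};
```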
Implementation Steps
1) Service API extensions
- Add options.darwinModule to roles.*.perInstance and perMachine (see anchors above).
- Keep defaults to {} so services can omit it safely.
2) Aggregation logic
- result.allRoles: emit darwinModule wrapper from instanceRes.darwinModule.
- result.final:
- Collect instanceResults.darwinModules alongside instanceResults.nixosModules.
- Produce final darwinModule with [ machineResult.darwinModule ] ++ instanceResults.darwinModules.
- Leave exports logic unchanged.
3) Machine wiring
- modules/clan/distributed-services.nix: choose .darwinModule vs .nixosModule based on inventory.machines.<name>.machineClass (sketched after these steps).
- modules/clan/module.nix: include v.machineImports for both OS classes.
4) Example migration (users)
- Add darwinModule in clanServices/users/default.nix.
- Validate that users service evaluates for a darwin machine and does not reference nixos-specific options.
5) Documentation
- Update ADR docs to mention darwinModule in Result attributes and examples.
- Add a short “Authoring for Darwin” snippet showing perInstance/perMachine returning both modules.
6) Tests and verification
- Unit-level: extend lib/inventory/distributed-service/tests to assert presence of result.final.<machine>.darwinModule when perInstance/perMachine return it.
- Integration-level: evaluate a sample darwin machine (e.g., inventory.json has test-darwin-machine) and assert clan.darwinModules.<machine> includes the aggregated module.
- Sanity: ensure existing nixos-only services still evaluate unchanged.
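A hedged sketch of the machine wiring (step 3) and the "Authoring for Darwin" pattern (step 5); the fold shape and placeholder names are assumptions, only the attribute paths come from the anchors above:

```nix
# Sketch: modules/clan/distributed-services.nix — pick the module by machineClass.
machineImports = lib.foldlAttrs (
  acc: _serviceName: serviceModule:
  acc
  ++ [
    (
      if inventory.machines.${machineName}.machineClass == "darwin" then
        serviceModule.result.final.${machineName}.darwinModule
      else
        serviceModule.result.final.${machineName}.nixosModule
    )
  ]
) [ ] services;

# Sketch: a service role returning both modules.
roles.default.perInstance =
  { ... }:
  {
    nixosModule = {
      # nixos-only pieces (e.g. systemd hooks) stay here
    };
    darwinModule = {
      # safe, cross-platform bits, e.g. users.users.<name>
    };
  };
```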
Backward Compatibility
- Existing services that only return nixosModule continue to work.
- Darwin machines won't import service modules until services provide darwinModule, avoiding accidental breakage.
- extraModules remain applied only to nixos aggregation initially to prevent nixos-only modules from breaking darwin evaluation. We can add OS-specific extraModules in a follow-up (e.g., roles.*.extraModulesDarwin).
Acceptance Criteria
- Services can return darwinModule in perInstance/perMachine without errors.
- Darwin machines import aggregated darwinModule outputs from all participating services.
- nixos behavior remains unchanged for existing services.
- Documentation updated to reflect the new attribute and example.
Rollout Notes
- Start by updating clanServices/users as a working example.
- Encourage service authors to add darwinModule incrementally; no global migration is required.

View File

@@ -8,7 +8,7 @@ Our mission is simple: to democratize computing by providing tools that empower
## Features of Clan
- **Full-Stack System Deployment:** Utilize Clan's toolkit alongside Nix's reliability to build and manage systems effortlessly.
- **Full-Stack System Deployment:** Utilize Clan’s toolkit alongside Nix's reliability to build and manage systems effortlessly.
- **Overlay Networks:** Secure, private communication channels between devices.
- **Virtual Machine Integration:** Seamless operation of VM applications within the main operating system.
- **Robust Backup Management:** Long-term, self-hosted data preservation.
@@ -30,7 +30,7 @@ In the Clan ecosystem, security is paramount. Learn how to handle secrets effect
The Clan project thrives on community contributions. We welcome everyone to contribute and collaborate:
- **Contribution Guidelines**: Make a meaningful impact by following the steps in [contributing](https://docs.clan.lol/guides/contributing/CONTRIBUTING/)<!-- [contributing.md](docs/CONTRIBUTING.md) -->.
- **Contribution Guidelines**: Make a meaningful impact by following the steps in [contributing](https://docs.clan.lol/contributing/contributing/)<!-- [contributing.md](docs/CONTRIBUTING.md) -->.
## Join the revolution

View File

@@ -0,0 +1,51 @@
(
{ ... }:
{
name = "borgbackup";
nodes.machine =
{ self, pkgs, ... }:
{
imports = [
self.clanModules.borgbackup
self.nixosModules.clanCore
{
services.openssh.enable = true;
services.borgbackup.repos.testrepo = {
authorizedKeys = [ (builtins.readFile ../assets/ssh/pubkey) ];
};
}
{
clan.core.settings.directory = ./.;
clan.core.state.testState.folders = [ "/etc/state" ];
environment.etc.state.text = "hello world";
systemd.tmpfiles.settings."vmsecrets" = {
"/etc/secrets/borgbackup/borgbackup.ssh" = {
C.argument = "${../assets/ssh/privkey}";
z = {
mode = "0400";
user = "root";
};
};
"/etc/secrets/borgbackup/borgbackup.repokey" = {
C.argument = builtins.toString (pkgs.writeText "repokey" "repokey12345");
z = {
mode = "0400";
user = "root";
};
};
};
# clan.core.facts.secretStore = "vm";
clan.core.vars.settings.secretStore = "vm";
clan.borgbackup.destinations.test.repo = "borg@localhost:.";
}
];
};
testScript = ''
start_all()
machine.systemctl("start --wait borgbackup-job-test.service")
assert "machine-test" in machine.succeed("BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=yes /run/current-system/sw/bin/borg-job-test list")
'';
}
)

View File

@@ -0,0 +1,6 @@
{ fetchgit }:
fetchgit {
url = "https://git.clan.lol/clan/clan-core.git";
rev = "5d884cecc2585a29b6a3596681839d081b4de192";
sha256 = "09is1afmncamavb2q88qac37vmsijxzsy1iz1vr6gsyjq2rixaxc";
}

View File

@@ -12,6 +12,7 @@ let
elem
filter
filterAttrs
flip
genAttrs
hasPrefix
pathExists
@@ -19,23 +20,33 @@ let
nixosLib = import (self.inputs.nixpkgs + "/nixos/lib") { };
in
{
imports = filter pathExists [
./devshell/flake-module.nix
./flash/flake-module.nix
./installation/flake-module.nix
./update/flake-module.nix
./morph/flake-module.nix
./nixos-documentation/flake-module.nix
./dont-depend-on-repo-root.nix
# clan core submodule tests
../nixosModules/clanCore/machine-id/tests/flake-module.nix
../nixosModules/clanCore/postgresql/tests/flake-module.nix
../nixosModules/clanCore/state-version/tests/flake-module.nix
];
imports =
let
clanCoreModulesDir = ../nixosModules/clanCore;
getClanCoreTestModules =
let
moduleNames = attrNames (builtins.readDir clanCoreModulesDir);
testPaths = map (
moduleName: clanCoreModulesDir + "/${moduleName}/tests/flake-module.nix"
) moduleNames;
in
filter pathExists testPaths;
in
getClanCoreTestModules
++ filter pathExists [
./devshell/flake-module.nix
./flash/flake-module.nix
./impure/flake-module.nix
./installation/flake-module.nix
./update/flake-module.nix
./morph/flake-module.nix
./nixos-documentation/flake-module.nix
./dont-depend-on-repo-root.nix
];
flake.check = genAttrs [ "x86_64-linux" "aarch64-darwin" ] (
system:
let
checks = filterAttrs (
checks = flip filterAttrs self.checks.${system} (
name: _check:
!(hasPrefix "nixos-test-" name)
&& !(hasPrefix "nixos-" name)
@@ -47,7 +58,7 @@ in
"clan-core-for-checks"
"clan-deps"
])
) self.checks.${system};
);
in
inputs.nixpkgs.legacyPackages.${system}.runCommand "fast-flake-checks-${system}"
{ passthru.checks = checks; }
@@ -82,17 +93,18 @@ in
# Base Tests
nixos-test-secrets = self.clanLib.test.baseTest ./secrets nixosTestArgs;
nixos-test-borgbackup-legacy = self.clanLib.test.baseTest ./borgbackup-legacy nixosTestArgs;
nixos-test-wayland-proxy-virtwl = self.clanLib.test.baseTest ./wayland-proxy-virtwl nixosTestArgs;
# Container Tests
nixos-test-container = self.clanLib.test.containerTest ./container nixosTestArgs;
nixos-systemd-abstraction = self.clanLib.test.containerTest ./systemd-abstraction nixosTestArgs;
nixos-llm-test = self.clanLib.test.containerTest ./llm nixosTestArgs;
nixos-test-zt-tcp-relay = self.clanLib.test.containerTest ./zt-tcp-relay nixosTestArgs;
nixos-test-matrix-synapse = self.clanLib.test.containerTest ./matrix-synapse nixosTestArgs;
nixos-test-user-firewall-iptables = self.clanLib.test.containerTest ./user-firewall/iptables.nix nixosTestArgs;
nixos-test-user-firewall-nftables = self.clanLib.test.containerTest ./user-firewall/nftables.nix nixosTestArgs;
nixos-test-extra-python-packages = self.clanLib.test.containerTest ./test-extra-python-packages nixosTestArgs;
service-dummy-test = import ./service-dummy-test nixosTestArgs;
wireguard = import ./wireguard nixosTestArgs;
service-dummy-test-from-flake = import ./service-dummy-test-from-flake nixosTestArgs;
};
@@ -102,8 +114,6 @@ in
"dont-depend-on-repo-root"
];
# Temporary workaround: Filter out docs package and devshell for aarch64-darwin due to CI builder hangs
# TODO: Remove this filter once macOS CI builder is updated
flakeOutputs =
lib.mapAttrs' (
name: config: lib.nameValuePair "nixos-${name}" config.config.system.build.toplevel
@@ -111,18 +121,8 @@ in
// lib.mapAttrs' (
name: config: lib.nameValuePair "darwin-${name}" config.config.system.build.toplevel
) (self.darwinConfigurations or { })
// lib.mapAttrs' (n: lib.nameValuePair "package-${n}") (
if system == "aarch64-darwin" then
lib.filterAttrs (n: _: n != "docs" && n != "deploy-docs" && n != "option-search") packagesToBuild
else
packagesToBuild
)
// lib.mapAttrs' (n: lib.nameValuePair "devShell-${n}") (
if system == "aarch64-darwin" then
lib.filterAttrs (n: _: n != "docs") self'.devShells
else
self'.devShells
)
// lib.mapAttrs' (n: lib.nameValuePair "package-${n}") packagesToBuild
// lib.mapAttrs' (n: lib.nameValuePair "devShell-${n}") self'.devShells
// lib.mapAttrs' (name: config: lib.nameValuePair "home-manager-${name}" config.activation-script) (
self'.legacyPackages.homeConfigurations or { }
);
@@ -130,6 +130,33 @@ in
nixosTests
// flakeOutputs
// {
# TODO: Automatically provide this check to downstream users to check their modules
clan-modules-json-compatible =
let
allSchemas = lib.mapAttrs (
_n: m:
let
schema =
(self.clanLib.evalService {
modules = [ m ];
prefix = [
"checks"
system
];
}).config.result.api.schema;
in
schema
) self.clan.modules;
in
pkgs.runCommand "combined-result"
{
schemaFile = builtins.toFile "schemas.json" (builtins.toJSON allSchemas);
}
''
mkdir -p $out
cat $schemaFile > $out/allSchemas.json
'';
clan-core-for-checks = pkgs.runCommand "clan-core-for-checks" { } ''
cp -r ${privateInputs.clan-core-for-checks} $out
chmod -R +w $out

View File

@@ -13,6 +13,8 @@
fileSystems."/".device = lib.mkDefault "/dev/vda";
boot.loader.grub.device = lib.mkDefault "/dev/vda";
# We need to use `mkForce` because we inherit from `test-install-machine`
# which currently hardcodes `nixpkgs.hostPlatform`
nixpkgs.hostPlatform = lib.mkForce system;
imports = [ self.nixosModules.test-flash-machine ];
@@ -26,24 +28,10 @@
{
imports = [ self.nixosModules.test-install-machine-without-system ];
# We don't want our system to define any `vars` generators as these can't
# be generated as the flake is inside `/nix/store`.
clan.core.settings.state-version.enable = false;
clan.core.vars.generators.test = lib.mkForce { };
disko.devices.disk.main.preCreateHook = lib.mkForce "";
# Every option here should match the options set through `clan flash write`
# if you get a mass rebuild on the disko derivation, this means you need to
# adjust something here. Also make sure that the injected json in clan flash write
# is up to date.
i18n.defaultLocale = "de_DE.UTF-8";
console.keyMap = "de";
services.xserver.xkb.layout = "de";
users.users.root.openssh.authorizedKeys.keys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIRWUusawhlIorx7VFeQJHmMkhl9X3QpnvOdhnV/bQNG root@target\n"
];
};
};
perSystem =
@@ -56,55 +44,49 @@
dependencies = [
pkgs.disko
pkgs.buildPackages.xorg.lndir
pkgs.glibcLocales
pkgs.kbd.out
self.nixosConfigurations."test-flash-machine-${pkgs.stdenv.hostPlatform.system}".pkgs.perlPackages.ConfigIniFiles
self.nixosConfigurations."test-flash-machine-${pkgs.stdenv.hostPlatform.system}".pkgs.perlPackages.FileSlurp
pkgs.bubblewrap
self.nixosConfigurations."test-flash-machine-${pkgs.hostPlatform.system}".pkgs.perlPackages.ConfigIniFiles
self.nixosConfigurations."test-flash-machine-${pkgs.hostPlatform.system}".pkgs.perlPackages.FileSlurp
self.nixosConfigurations."test-flash-machine-${pkgs.stdenv.hostPlatform.system}".config.system.build.toplevel
self.nixosConfigurations."test-flash-machine-${pkgs.stdenv.hostPlatform.system}".config.system.build.diskoScript
self.nixosConfigurations."test-flash-machine-${pkgs.stdenv.hostPlatform.system}".config.system.build.diskoScript.drvPath
self.nixosConfigurations."test-flash-machine-${pkgs.hostPlatform.system}".config.system.build.toplevel
self.nixosConfigurations."test-flash-machine-${pkgs.hostPlatform.system}".config.system.build.diskoScript
self.nixosConfigurations."test-flash-machine-${pkgs.hostPlatform.system}".config.system.build.diskoScript.drvPath
]
++ builtins.map (i: i.outPath) (builtins.attrValues self.inputs);
closureInfo = pkgs.closureInfo { rootPaths = dependencies; };
in
{
# Skip flash test on aarch64-linux for now as it's too slow
checks =
lib.optionalAttrs (pkgs.stdenv.isLinux && pkgs.stdenv.hostPlatform.system != "aarch64-linux")
{
nixos-test-flash = self.clanLib.test.baseTest {
name = "flash";
nodes.target = {
virtualisation.emptyDiskImages = [ 4096 ];
virtualisation.memorySize = 4096;
checks = pkgs.lib.mkIf pkgs.stdenv.isLinux {
nixos-test-flash = self.clanLib.test.baseTest {
name = "flash";
nodes.target = {
virtualisation.emptyDiskImages = [ 4096 ];
virtualisation.memorySize = 4096;
virtualisation.useNixStoreImage = true;
virtualisation.writableStore = true;
virtualisation.useNixStoreImage = true;
virtualisation.writableStore = true;
environment.systemPackages = [ self.packages.${pkgs.system}.clan-cli ];
environment.etc."install-closure".source = "${closureInfo}/store-paths";
environment.systemPackages = [ self.packages.${pkgs.system}.clan-cli ];
environment.etc."install-closure".source = "${closureInfo}/store-paths";
nix.settings = {
substituters = lib.mkForce [ ];
hashed-mirrors = null;
connect-timeout = lib.mkForce 3;
flake-registry = "";
experimental-features = [
"nix-command"
"flakes"
];
};
};
testScript = ''
start_all()
machine.succeed("echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIRWUusawhlIorx7VFeQJHmMkhl9X3QpnvOdhnV/bQNG root@target' > ./test_id_ed25519.pub")
# Some distros like to automount disks with spaces
machine.succeed('mkdir -p "/mnt/with spaces" && mkfs.ext4 /dev/vdc && mount /dev/vdc "/mnt/with spaces"')
machine.succeed("clan flash write --ssh-pubkey ./test_id_ed25519.pub --keymap de --language de_DE.UTF-8 --debug --flake ${self.checks.x86_64-linux.clan-core-for-checks} --yes --disk main /dev/vdc test-flash-machine-${pkgs.stdenv.hostPlatform.system}")
'';
} { inherit pkgs self; };
nix.settings = {
substituters = lib.mkForce [ ];
hashed-mirrors = null;
connect-timeout = lib.mkForce 3;
flake-registry = pkgs.writeText "flake-registry" ''{"flakes":[],"version":2}'';
experimental-features = [
"nix-command"
"flakes"
];
};
};
testScript = ''
start_all()
# Some distros like to automount disks with spaces
machine.succeed('mkdir -p "/mnt/with spaces" && mkfs.ext4 /dev/vdc && mount /dev/vdc "/mnt/with spaces"')
machine.succeed("clan flash write --debug --flake ${self.checks.x86_64-linux.clan-core-for-checks} --yes --disk main /dev/vdc test-flash-machine-${pkgs.hostPlatform.system}")
'';
} { inherit pkgs self; };
};
};
}

View File

@@ -0,0 +1,51 @@
{
perSystem =
{
pkgs,
lib,
self',
...
}:
{
# a script that executes all other checks
packages.impure-checks = pkgs.writeShellScriptBin "impure-checks" ''
#!${pkgs.bash}/bin/bash
set -euo pipefail
unset CLAN_DIR
export PATH="${
lib.makeBinPath (
[
pkgs.gitMinimal
pkgs.nix
pkgs.coreutils
pkgs.rsync # needed to have rsync installed on the dummy ssh server
]
++ self'.packages.clan-cli-full.runtimeDependencies
)
}"
ROOT=$(git rev-parse --show-toplevel)
cd "$ROOT/pkgs/clan-cli"
# Set up custom git configuration for tests
export GIT_CONFIG_GLOBAL=$(mktemp)
git config --file "$GIT_CONFIG_GLOBAL" user.name "Test User"
git config --file "$GIT_CONFIG_GLOBAL" user.email "test@example.com"
export GIT_CONFIG_SYSTEM=/dev/null
# this disables dynamic dependency loading in clan-cli
export CLAN_NO_DYNAMIC_DEPS=1
jobs=$(nproc)
# Spawning workers in pytest is relatively slow, so we limit the number of jobs to 13
# (current number of impure tests)
jobs="$((jobs > 13 ? 13 : jobs))"
nix develop "$ROOT#clan-cli" -c bash -c "TMPDIR=/tmp python -m pytest -n $jobs -m impure ./clan_cli $@"
# Clean up temporary git config
rm -f "$GIT_CONFIG_GLOBAL"
'';
};
}

View File

@@ -1,8 +1,8 @@
{
config,
self,
lib,
privateInputs,
...
}:
{
@@ -14,38 +14,31 @@
# you can get a new one by adding
# client.fail("cat test-flake/machines/test-install-machine/facter.json >&2")
# to the installation test.
clan.machines = {
test-install-machine-without-system = {
clan.machines.test-install-machine-without-system = {
fileSystems."/".device = lib.mkDefault "/dev/vda";
boot.loader.grub.device = lib.mkDefault "/dev/vda";
imports = [ self.nixosModules.test-install-machine-without-system ];
};
clan.machines.test-install-machine-with-system =
{ pkgs, ... }:
{
# https://git.clan.lol/clan/test-fixtures
facter.reportPath = builtins.fetchurl {
url = "https://git.clan.lol/clan/test-fixtures/raw/commit/4a2bc56d886578124b05060d3fb7eddc38c019f8/nixos-vm-facter-json/${pkgs.hostPlatform.system}.json";
sha256 =
{
aarch64-linux = "sha256:1rlfymk03rmfkm2qgrc8l5kj5i20srx79n1y1h4nzlpwaz0j7hh2";
x86_64-linux = "sha256:16myh0ll2gdwsiwkjw5ba4dl23ppwbsanxx214863j7nvzx42pws";
}
.${pkgs.hostPlatform.system};
};
fileSystems."/".device = lib.mkDefault "/dev/vda";
boot.loader.grub.device = lib.mkDefault "/dev/vda";
imports = [
self.nixosModules.test-install-machine-without-system
];
imports = [ self.nixosModules.test-install-machine-without-system ];
};
}
// (lib.listToAttrs (
lib.map (
system:
lib.nameValuePair "test-install-machine-${system}" {
imports = [
self.nixosModules.test-install-machine-without-system
(
if privateInputs ? test-fixtures then
{
facter.reportPath = privateInputs.test-fixtures + /nixos-vm-facter-json/${system}.json;
}
else
{ nixpkgs.hostPlatform = system; }
)
];
fileSystems."/".device = lib.mkDefault "/dev/vda";
boot.loader.grub.device = lib.mkDefault "/dev/vda";
}
) (lib.filter (lib.hasSuffix "linux") config.systems)
));
flake.nixosModules = {
test-install-machine-without-system =
{ lib, modulesPath, ... }:
@@ -160,9 +153,9 @@
closureInfo = pkgs.closureInfo {
rootPaths = [
privateInputs.clan-core-for-checks
self.nixosConfigurations."test-install-machine-${pkgs.stdenv.hostPlatform.system}".config.system.build.toplevel
self.nixosConfigurations."test-install-machine-${pkgs.stdenv.hostPlatform.system}".config.system.build.initialRamdisk
self.nixosConfigurations."test-install-machine-${pkgs.stdenv.hostPlatform.system}".config.system.build.diskoScript
self.clanInternals.machines.${pkgs.hostPlatform.system}.test-install-machine-with-system.config.system.build.toplevel
self.clanInternals.machines.${pkgs.hostPlatform.system}.test-install-machine-with-system.config.system.build.initialRamdisk
self.clanInternals.machines.${pkgs.hostPlatform.system}.test-install-machine-with-system.config.system.build.diskoScript
pkgs.stdenv.drvPath
pkgs.bash.drvPath
pkgs.buildPackages.xorg.lndir
@@ -215,7 +208,7 @@
# Prepare test flake and Nix store
flake_dir = prepare_test_flake(
temp_dir,
"${self.checks.${pkgs.stdenv.hostPlatform.system}.clan-core-for-checks}",
"${self.checks.x86_64-linux.clan-core-for-checks}",
"${closureInfo}"
)
@@ -226,22 +219,6 @@
"${../assets/ssh/privkey}"
)
# Run clan install from host using port forwarding
clan_cmd = [
"${self.packages.${pkgs.system}.clan-cli-full}/bin/clan",
"machines",
"init-hardware-config",
"--debug",
"--flake", str(flake_dir),
"--yes", "test-install-machine-without-system",
"--host-key-check", "none",
"--target-host", f"nonrootuser@localhost:{ssh_conn.host_port}",
"-i", ssh_conn.ssh_key,
"--option", "store", os.environ['CLAN_TEST_STORE']
]
subprocess.run(clan_cmd, check=True)
# Run clan install from host using port forwarding
clan_cmd = [
"${self.packages.${pkgs.system}.clan-cli-full}/bin/clan",
@@ -255,7 +232,6 @@
"-i", ssh_conn.ssh_key,
"--option", "store", os.environ['CLAN_TEST_STORE'],
"--update-hardware-config", "nixos-facter",
"--no-persist-state",
]
subprocess.run(clan_cmd, check=True)
@@ -265,7 +241,7 @@
target.shutdown()
except BrokenPipeError:
# qemu has already exited
target.connected = False
pass
# Create a new machine instance that boots from the installed system
installed_machine = create_test_machine(target, "${pkgs.qemu_test}", name="after_install")
@@ -296,10 +272,10 @@
# Prepare test flake and Nix store
flake_dir = prepare_test_flake(
temp_dir,
"${self.checks.${pkgs.stdenv.hostPlatform.system}.clan-core-for-checks}",
"${self.checks.x86_64-linux.clan-core-for-checks}",
"${closureInfo}"
)
# Set up SSH connection
ssh_conn = setup_ssh_connection(
target,
@@ -325,8 +301,7 @@
"test-install-machine-without-system",
"-i", ssh_conn.ssh_key,
"--option", "store", os.environ['CLAN_TEST_STORE'],
"--target-host", f"nonrootuser@localhost:{ssh_conn.host_port}",
"--yes"
f"nonrootuser@localhost:{ssh_conn.host_port}"
]
result = subprocess.run(clan_cmd, capture_output=True, cwd=flake_dir)
@@ -350,9 +325,7 @@
"test-install-machine-without-system",
"-i", ssh_conn.ssh_key,
"--option", "store", os.environ['CLAN_TEST_STORE'],
"--target-host",
f"nonrootuser@localhost:{ssh_conn.host_port}",
"--yes"
f"nonrootuser@localhost:{ssh_conn.host_port}"
]
result = subprocess.run(clan_cmd, capture_output=True, cwd=flake_dir)

View File

@@ -15,6 +15,7 @@ let
networking.useNetworkd = true;
services.openssh.enable = true;
services.openssh.settings.UseDns = false;
services.openssh.settings.PasswordAuthentication = false;
system.nixos.variant_id = "installer";
environment.systemPackages = [
pkgs.nixos-facter
@@ -146,11 +147,28 @@ let
];
doCheck = false;
};
# Common closure info
closureInfo = pkgs.closureInfo {
rootPaths = [
self.checks.x86_64-linux.clan-core-for-checks
self.clanInternals.machines.${pkgs.hostPlatform.system}.test-install-machine-with-system.config.system.build.toplevel
self.clanInternals.machines.${pkgs.hostPlatform.system}.test-install-machine-with-system.config.system.build.initialRamdisk
self.clanInternals.machines.${pkgs.hostPlatform.system}.test-install-machine-with-system.config.system.build.diskoScript
self.clanInternals.machines.${pkgs.hostPlatform.system}.test-install-machine-with-system.config.system.clan.deployment.file
pkgs.stdenv.drvPath
pkgs.bash.drvPath
pkgs.buildPackages.xorg.lndir
]
++ builtins.map (i: i.outPath) (builtins.attrValues self.inputs);
};
in
{
inherit
target
baseTestMachine
nixosTestLib
closureInfo
;
}

View File

@@ -1,82 +0,0 @@
{ self, pkgs, ... }:
let
cli = self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli-full;
ollama-model = pkgs.callPackage ./qwen3-4b-instruct.nix { };
in
{
name = "llm";
nodes = {
peer1 =
{ pkgs, ... }:
{
users.users.text-user = {
isNormalUser = true;
linger = true;
uid = 1000;
extraGroups = [ "systemd-journal" ];
};
# Set environment variables for user systemd
environment.extraInit = ''
if [ "$(id -u)" = "1000" ]; then
export XDG_RUNTIME_DIR="/run/user/1000"
export DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/1000/bus"
ollama_dir="$HOME/.ollama"
mkdir -p "$ollama_dir"
ln -sf ${ollama-model}/models "$ollama_dir"/models
fi
'';
# Enable PAM for user systemd sessions
security.pam.services.systemd-user = {
startSession = true;
# Workaround for containers - use pam_permit to avoid helper binary issues
text = pkgs.lib.mkForce ''
account required pam_permit.so
session required pam_permit.so
session required pam_env.so conffile=/etc/pam/environment readenv=0
session required ${pkgs.systemd}/lib/security/pam_systemd.so
'';
};
environment.systemPackages = [
cli
pkgs.ollama
(cli.pythonRuntime.withPackages (
ps: with ps; [
pytest
pytest-xdist
(cli.pythonRuntime.pkgs.toPythonModule cli)
self.legacyPackages.${pkgs.stdenv.hostPlatform.system}.nixosTestLib
]
))
];
};
};
testScript =
{ ... }:
''
start_all()
peer1.wait_for_unit("multi-user.target")
peer1.wait_for_unit("user@1000.service")
# Fix user journal permissions so text-user can read their own logs
peer1.succeed("chown text-user:systemd-journal /var/log/journal/*/user-1000.journal*")
peer1.succeed("chmod 640 /var/log/journal/*/user-1000.journal*")
# the -o adopts="" is needed to overwrite any args coming from pyproject.toml
# -p no:cacheprovider disables pytest's cacheprovider which tries to write to the nix store in this case
cmd = "su - text-user -c 'pytest -s -n0 -m service_runner -p no:cacheprovider -o addopts="" ${cli.passthru.sourceWithTests}/clan_lib/llm'"
print("Running tests with command: " + cmd)
# Run tests as text-user (environment variables are set automatically)
peer1.succeed(cmd)
'';
}

View File

@@ -1,70 +0,0 @@
{ pkgs }:
let
# Got them from https://github.com/Gholamrezadar/ollama-direct-downloader
# Download manifest
manifest = pkgs.fetchurl {
url = "https://registry.ollama.ai/v2/library/qwen3/manifests/4b-instruct";
# You'll need to calculate this hash - run the derivation once and it will tell you the correct hash
hash = "sha256-Dtze80WT6sGqK+nH0GxDLc+BlFrcpeyi8nZiwY8Wi6A=";
};
# Download blobs
blob1 = pkgs.fetchurl {
url = "https://registry.ollama.ai/v2/library/qwen3/blobs/sha256:b72accf9724e93698c57cbd3b1af2d3341b3d05ec2089d86d273d97964853cd2";
hash = "sha256-tyrM+XJOk2mMV8vTsa8tM0Gz0F7CCJ2G0nPZeWSFPNI=";
};
blob2 = pkgs.fetchurl {
url = "https://registry.ollama.ai/v2/library/qwen3/blobs/sha256:85e4a5b7b8ef0e48af0e8658f5aaab9c2324c76c1641493f4d1e25fce54b18b9";
hash = "sha256-heSlt7jvDkivDoZY9aqrnCMkx2wWQUk/TR4l/OVLGLk=";
};
blob3 = pkgs.fetchurl {
url = "https://registry.ollama.ai/v2/library/qwen3/blobs/sha256:eade0a07cac7712787bbce23d12f9306adb4781d873d1df6e16f7840fa37afec";
hash = "sha256-6t4KB8rHcSeHu84j0S+TBq20eB2HPR324W94QPo3r+w=";
};
blob4 = pkgs.fetchurl {
url = "https://registry.ollama.ai/v2/library/qwen3/blobs/sha256:d18a5cc71b84bc4af394a31116bd3932b42241de70c77d2b76d69a314ec8aa12";
hash = "sha256-0YpcxxuEvErzlKMRFr05MrQiQd5wx30rdtaaMU7IqhI=";
};
blob5 = pkgs.fetchurl {
url = "https://registry.ollama.ai/v2/library/qwen3/blobs/sha256:0914c7781e001948488d937994217538375b4fd8c1466c5e7a625221abd3ea7a";
hash = "sha256-CRTHeB4AGUhIjZN5lCF1ODdbT9jBRmxeemJSIavT6no=";
};
in
pkgs.stdenv.mkDerivation {
pname = "ollama-qwen3-4b-instruct";
version = "1.0";
dontUnpack = true;
buildPhase = ''
mkdir -p $out/models/manifests/registry.ollama.ai/library/qwen3
mkdir -p $out/models/blobs
# Copy manifest
cp ${manifest} $out/models/manifests/registry.ollama.ai/library/qwen3/4b-instruct
# Copy blobs with correct names
cp ${blob1} $out/models/blobs/sha256-b72accf9724e93698c57cbd3b1af2d3341b3d05ec2089d86d273d97964853cd2
cp ${blob2} $out/models/blobs/sha256-85e4a5b7b8ef0e48af0e8658f5aaab9c2324c76c1641493f4d1e25fce54b18b9
cp ${blob3} $out/models/blobs/sha256-eade0a07cac7712787bbce23d12f9306adb4781d873d1df6e16f7840fa37afec
cp ${blob4} $out/models/blobs/sha256-d18a5cc71b84bc4af394a31116bd3932b42241de70c77d2b76d69a314ec8aa12
cp ${blob5} $out/models/blobs/sha256-0914c7781e001948488d937994217538375b4fd8c1466c5e7a625221abd3ea7a
'';
installPhase = ''
# buildPhase already created everything in $out
:
'';
meta = with pkgs.lib; {
description = "Qwen3 4B Instruct model for Ollama";
license = "apache-2.0";
platforms = platforms.all;
};
}

View File

@@ -0,0 +1,83 @@
(
{ pkgs, ... }:
{
name = "matrix-synapse";
nodes.machine =
{
config,
self,
lib,
...
}:
{
imports = [
self.clanModules.matrix-synapse
self.nixosModules.clanCore
{
clan.core.settings.directory = ./.;
services.nginx.virtualHosts."matrix.clan.test" = {
enableACME = lib.mkForce false;
forceSSL = lib.mkForce false;
};
clan.nginx.acme.email = "admins@clan.lol";
clan.matrix-synapse = {
server_tld = "clan.test";
app_domain = "matrix.clan.test";
};
clan.matrix-synapse.users.admin.admin = true;
clan.matrix-synapse.users.someuser = { };
clan.core.facts.secretStore = "vm";
clan.core.vars.settings.secretStore = "vm";
clan.core.vars.settings.publicStore = "in_repo";
# because we use systemd-tmpfiles to copy the secrets, we need a separate systemd-tmpfiles call to provision them.
boot.postBootCommands = "${config.systemd.package}/bin/systemd-tmpfiles --create /etc/tmpfiles.d/00-vmsecrets.conf";
systemd.tmpfiles.settings."00-vmsecrets" = {
# run before 00-nixos.conf
"/etc/secrets" = {
d.mode = "0700";
z.mode = "0700";
};
"/etc/secrets/matrix-synapse/synapse-registration_shared_secret" = {
f.argument = "supersecret";
z = {
mode = "0400";
user = "root";
};
};
"/etc/secrets/matrix-password-admin/matrix-password-admin" = {
f.argument = "matrix-password1";
z = {
mode = "0400";
user = "root";
};
};
"/etc/secrets/matrix-password-someuser/matrix-password-someuser" = {
f.argument = "matrix-password2";
z = {
mode = "0400";
user = "root";
};
};
};
}
];
};
testScript = ''
start_all()
machine.wait_for_unit("matrix-synapse")
machine.succeed("${pkgs.netcat}/bin/nc -z -v ::1 8008")
machine.wait_until_succeeds("${pkgs.curl}/bin/curl -Ssf -L http://localhost/_matrix/static/ -H 'Host: matrix.clan.test'")
machine.systemctl("restart matrix-synapse >&2") # check if user creation is idempotent
machine.execute("journalctl -u matrix-synapse --no-pager >&2")
machine.wait_for_unit("matrix-synapse")
machine.succeed("${pkgs.netcat}/bin/nc -z -v ::1 8008")
machine.succeed("${pkgs.curl}/bin/curl -Ssf -L http://localhost/_matrix/static/ -H 'Host: matrix.clan.test'")
'';
}
)

View File

@@ -0,0 +1 @@
registration_shared_secret: supersecret

View File

@@ -16,6 +16,7 @@ nixosLib.runTest (
# This tests the compatibility of the inventory
# With the test framework
# - legacy-modules
# - clan.service modules
name = "service-dummy-test-from-flake";
@@ -29,34 +30,35 @@ nixosLib.runTest (
{ nodes, ... }:
''
import subprocess
import tempfile
from nixos_test_lib.nix_setup import setup_nix_in_nix
from nixos_test_lib.nix_setup import setup_nix_in_nix # type: ignore[import-untyped]
with tempfile.TemporaryDirectory() as temp_dir:
setup_nix_in_nix(temp_dir, None) # No closure info for this test
setup_nix_in_nix(None) # No closure info for this test
start_all()
admin1.wait_for_unit("multi-user.target")
peer1.wait_for_unit("multi-user.target")
start_all()
admin1.wait_for_unit("multi-user.target")
peer1.wait_for_unit("multi-user.target")
# Provided by the legacy module
print(admin1.succeed("systemctl status dummy-service"))
print(peer1.succeed("systemctl status dummy-service"))
# peer1 should have the 'hello' file
peer1.succeed("cat ${nodes.peer1.clan.core.vars.generators.new-service.files.not-a-secret.path}")
# peer1 should have the 'hello' file
peer1.succeed("cat ${nodes.peer1.clan.core.vars.generators.new-service.files.not-a-secret.path}")
ls_out = peer1.succeed("ls -la ${nodes.peer1.clan.core.vars.generators.new-service.files.a-secret.path}")
# Check that the file is owned by 'nobody'
assert "nobody" in ls_out, f"File is not owned by 'nobody': {ls_out}"
# Check that the file is in the 'users' group
assert "users" in ls_out, f"File is not in the 'users' group: {ls_out}"
# Check that the file is in the '0644' mode
assert "-rw-r--r--" in ls_out, f"File is not in the '0644' mode: {ls_out}"
ls_out = peer1.succeed("ls -la ${nodes.peer1.clan.core.vars.generators.new-service.files.a-secret.path}")
# Check that the file is owned by 'nobody'
assert "nobody" in ls_out, f"File is not owned by 'nobody': {ls_out}"
# Check that the file is in the 'users' group
assert "users" in ls_out, f"File is not in the 'users' group: {ls_out}"
# Check that the file is in the '0644' mode
assert "-rw-r--r--" in ls_out, f"File is not in the '0644' mode: {ls_out}"
# Run clan command
result = subprocess.run(
["${
clan-core.packages.${hostPkgs.system}.clan-cli
}/bin/clan", "machines", "list", "--flake", "${config.clan.test.flakeForSandbox}"],
check=True
)
# Run clan command
result = subprocess.run(
["${
clan-core.packages.${hostPkgs.system}.clan-cli
}/bin/clan", "machines", "list", "--flake", "${config.clan.test.flakeForSandbox}"],
check=True
)
'';
}
)

View File

@@ -15,6 +15,12 @@
meta.name = "foo";
machines.peer1 = { };
machines.admin1 = { };
services = {
legacy-module.default = {
roles.peer.machines = [ "peer1" ];
roles.admin.machines = [ "admin1" ];
};
};
instances."test" = {
module.name = "new-service";
@@ -22,15 +28,15 @@
roles.peer.machines.peer1 = { };
};
modules = {
legacy-module = ./legacy-module;
};
};
modules.new-service = {
_class = "clan.service";
manifest.name = "new-service";
manifest.readme = "Just a sample readme to not trigger the warning.";
roles.peer = {
description = "A peer that uses the new-service to generate some files.";
};
roles.peer = { };
perMachine = {
nixosModule = {
# This should be generated by:

View File

@@ -0,0 +1,10 @@
---
description = "Set up dummy-module"
categories = ["System"]
features = [ "inventory" ]
[constraints]
roles.admin.min = 1
roles.admin.max = 1
---

View File

@@ -0,0 +1,5 @@
{
imports = [
../shared.nix
];
}

View File

@@ -0,0 +1,5 @@
{
imports = [
../shared.nix
];
}

View File

@@ -0,0 +1,34 @@
{ config, ... }:
{
systemd.services.dummy-service = {
enable = true;
description = "Dummy service";
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
};
script = ''
generated_password_path="${config.clan.core.vars.generators.dummy-generator.files.generated-password.path}"
if [ ! -f "$generated_password_path" ]; then
echo "Generated password file not found: $generated_password_path"
exit 1
fi
host_id_path="${config.clan.core.vars.generators.dummy-generator.files.host-id.path}"
if [ ! -e "$host_id_path" ]; then
echo "Host ID file not found: $host_id_path"
exit 1
fi
'';
};
# TODO: add a prompt and make it work in the test framework
clan.core.vars.generators.dummy-generator = {
files.host-id.secret = false;
files.generated-password.secret = true;
script = ''
echo $RANDOM > "$out"/host-id
echo $RANDOM > "$out"/generated-password
'';
};
}

View File

@@ -15,6 +15,7 @@ nixosLib.runTest (
# This tests the compatibility of the inventory
# With the test framework
# - legacy-modules
# - clan.service modules
name = "service-dummy-test";
@@ -23,6 +24,12 @@ nixosLib.runTest (
inventory = {
machines.peer1 = { };
machines.admin1 = { };
services = {
legacy-module.default = {
roles.peer.machines = [ "peer1" ];
roles.admin.machines = [ "admin1" ];
};
};
instances."test" = {
module.name = "new-service";
@@ -30,14 +37,14 @@ nixosLib.runTest (
roles.peer.machines.peer1 = { };
};
modules = {
legacy-module = ./legacy-module;
};
};
modules.new-service = {
_class = "clan.service";
manifest.name = "new-service";
manifest.readme = "Just a sample readme to not trigger the warning.";
roles.peer = {
description = "A peer that uses the new-service to generate some files.";
};
roles.peer = { };
perMachine = {
nixosModule = {
# This should be generated by:
@@ -71,6 +78,9 @@ nixosLib.runTest (
start_all()
admin1.wait_for_unit("multi-user.target")
peer1.wait_for_unit("multi-user.target")
# Provided by the legacy module
print(admin1.succeed("systemctl status dummy-service"))
print(peer1.succeed("systemctl status dummy-service"))
# peer1 should have the 'hello' file
peer1.succeed("cat ${nodes.peer1.clan.core.vars.generators.new-service.files.not-a-secret.path}")

View File

@@ -1,67 +0,0 @@
{ self, pkgs, ... }:
let
cli = self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli-full;
in
{
name = "systemd-abstraction";
nodes = {
peer1 = {
users.users.text-user = {
isNormalUser = true;
linger = true;
uid = 1000;
extraGroups = [ "systemd-journal" ];
};
# Set environment variables for user systemd
environment.extraInit = ''
if [ "$(id -u)" = "1000" ]; then
export XDG_RUNTIME_DIR="/run/user/1000"
export DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/1000/bus"
fi
'';
# Enable PAM for user systemd sessions
security.pam.services.systemd-user = {
startSession = true;
# Workaround for containers - use pam_permit to avoid helper binary issues
text = pkgs.lib.mkForce ''
account required pam_permit.so
session required pam_permit.so
session required pam_env.so conffile=/etc/pam/environment readenv=0
session required ${pkgs.systemd}/lib/security/pam_systemd.so
'';
};
environment.systemPackages = [
cli
(cli.pythonRuntime.withPackages (
ps: with ps; [
pytest
pytest-xdist
]
))
];
};
};
testScript =
{ ... }:
''
start_all()
peer1.wait_for_unit("multi-user.target")
peer1.wait_for_unit("user@1000.service")
# Fix user journal permissions so text-user can read their own logs
peer1.succeed("chown text-user:systemd-journal /var/log/journal/*/user-1000.journal*")
peer1.succeed("chmod 640 /var/log/journal/*/user-1000.journal*")
# Run tests as text-user (environment variables are set automatically)
peer1.succeed("su - text-user -c 'pytest -p no:cacheprovider -o addopts="" -s -n0 ${cli.passthru.sourceWithTests}/clan_lib/service_runner'")
'';
}

View File

@@ -1,26 +0,0 @@
(
{ ... }:
{
name = "test-extra-python-packages";
extraPythonPackages = ps: [ ps.numpy ];
nodes.machine =
{ ... }:
{
networking.hostName = "machine";
};
testScript = ''
import numpy as np
start_all()
machine.wait_for_unit("multi-user.target")
# Test availability of numpy
arr = np.array([1, 2, 3])
print(f"Numpy array: {arr}")
assert len(arr) == 3
'';
}
)

View File

@@ -67,15 +67,6 @@
];
};
nix.settings = {
flake-registry = "";
# required for setting the `flake-registry`
experimental-features = [
"nix-command"
"flakes"
];
};
# Define the mounts that exist in the container to prevent them from being stopped
fileSystems = {
"/" = {
@@ -115,13 +106,12 @@
let
closureInfo = pkgs.closureInfo {
rootPaths = [
self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli
self.checks.${pkgs.stdenv.hostPlatform.system}.clan-core-for-checks
self.clanInternals.machines.${pkgs.stdenv.hostPlatform.system}.test-update-machine.config.system.build.toplevel
self.packages.${pkgs.system}.clan-cli
self.checks.${pkgs.system}.clan-core-for-checks
self.clanInternals.machines.${pkgs.hostPlatform.system}.test-update-machine.config.system.build.toplevel
pkgs.stdenv.drvPath
pkgs.bash.drvPath
pkgs.buildPackages.xorg.lndir
pkgs.bubblewrap
]
++ builtins.map (i: i.outPath) (builtins.attrValues self.inputs);
};
@@ -132,7 +122,7 @@
imports = [ self.nixosModules.test-update-machine ];
};
extraPythonPackages = _p: [
self.legacyPackages.${pkgs.stdenv.hostPlatform.system}.nixosTestLib
self.legacyPackages.${pkgs.system}.nixosTestLib
];
testScript = ''
@@ -154,7 +144,7 @@
# Prepare test flake and Nix store
flake_dir = prepare_test_flake(
temp_dir,
"${self.checks.${pkgs.stdenv.hostPlatform.system}.clan-core-for-checks}",
"${self.checks.x86_64-linux.clan-core-for-checks}",
"${closureInfo}"
)
(flake_dir / ".clan-flake").write_text("") # Ensure .clan-flake exists
@@ -221,13 +211,12 @@
[
"${pkgs.nix}/bin/nix",
"copy",
"--from",
f"{temp_dir}/store",
"--to",
"ssh://root@192.168.1.1",
"--no-check-sigs",
f"${self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli}",
f"${self.packages.${pkgs.system}.clan-cli}",
"--extra-experimental-features", "nix-command flakes",
"--from", f"{os.environ["TMPDIR"]}/store"
],
check=True,
env={
@@ -242,7 +231,7 @@
"-o", "UserKnownHostsFile=/dev/null",
"-o", "StrictHostKeyChecking=no",
f"root@192.168.1.1",
"${self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli}/bin/clan",
"${self.packages.${pkgs.system}.clan-cli}/bin/clan",
"machines",
"update",
"--debug",
@@ -270,7 +259,7 @@
# Run clan update command
subprocess.run([
"${self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli-full}/bin/clan",
"${self.packages.${pkgs.system}.clan-cli-full}/bin/clan",
"machines",
"update",
"--debug",
@@ -297,7 +286,7 @@
# Run clan update command with --build-host
subprocess.run([
"${self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli-full}/bin/clan",
"${self.packages.${pkgs.system}.clan-cli-full}/bin/clan",
"machines",
"update",
"--debug",

View File

@@ -0,0 +1,115 @@
{
pkgs,
nixosLib,
clan-core,
lib,
...
}:
nixosLib.runTest (
{ ... }:
let
machines = [
"controller1"
"controller2"
"peer1"
"peer2"
"peer3"
];
in
{
imports = [
clan-core.modules.nixosTest.clanTest
];
hostPkgs = pkgs;
name = "wireguard";
clan = {
directory = ./.;
modules."@clan/wireguard" = import ../../clanServices/wireguard/default.nix;
inventory = {
machines = lib.genAttrs machines (_: { });
instances = {
/*
  wg-test-one topology:
    controller2   controller1
    peer2         peer1   peer3
*/
wg-test-one = {
module.name = "@clan/wireguard";
module.input = "self";
roles.controller.machines."controller1".settings = {
endpoint = "192.168.1.1";
};
roles.controller.machines."controller2".settings = {
endpoint = "192.168.1.2";
};
roles.peer.machines = {
peer1.settings.controller = "controller1";
peer2.settings.controller = "controller2";
peer3.settings.controller = "controller1";
};
};
# TODO: Will this actually work with conflicting ports? Can we re-use interfaces?
#wg-test-two = {
# module.name = "@clan/wireguard";
# roles.controller.machines."controller1".settings = {
# endpoint = "192.168.1.1";
# port = 51922;
# };
# roles.peer.machines = {
# peer1 = { };
# };
#};
};
};
};
testScript = ''
start_all()
# Show all addresses
machines = [peer1, peer2, peer3, controller1, controller2]
for m in machines:
m.systemctl("start network-online.target")
for m in machines:
m.wait_for_unit("network-online.target")
m.wait_for_unit("systemd-networkd.service")
print("\n\n" + "="*60)
print("STARTING PING TESTS")
print("="*60)
for m1 in machines:
for m2 in machines:
if m1 != m2:
print(f"\n--- Pinging from {m1.name} to {m2.name}.wg-test-one ---")
m1.wait_until_succeeds(f"ping -c1 {m2.name}.wg-test-one >&2")
'';
}
)

View File

@@ -0,0 +1,24 @@
(
{ pkgs, ... }:
{
name = "zt-tcp-relay";
nodes.machine =
{ self, ... }:
{
imports = [
self.nixosModules.clanCore
self.clanModules.zt-tcp-relay
{
clan.core.settings.directory = ./.;
}
];
};
testScript = ''
start_all()
machine.wait_for_unit("zt-tcp-relay.service")
out = machine.succeed("${pkgs.netcat}/bin/nc -z -v localhost 4443")
print(out)
'';
}
)

View File

@@ -0,0 +1,5 @@
---
description = "Convenient Administration for the Clan App"
categories = ["Utility"]
features = [ "inventory", "deprecated" ]
---

View File

@@ -0,0 +1,3 @@
{
imports = [ ./roles/default.nix ];
}

View File

@@ -0,0 +1,30 @@
{ lib, config, ... }:
{
options.clan.admin = {
allowedKeys = lib.mkOption {
default = { };
type = lib.types.attrsOf lib.types.str;
description = "The allowed public keys for ssh access to the admin user";
example = {
"key_1" = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD...";
};
};
};
# Bad practice.
# Should we add 'clanModules' to specialArgs?
imports = [
../../sshd
../../root-password
];
config = {
warnings = [
"The clan.admin module is deprecated and will be removed on 2025-07-15.
Please migrate to user-maintained configuration or the new equivalent clan services
(https://docs.clan.lol/reference/clanServices)."
];
users.users.root.openssh.authorizedKeys.keys = builtins.attrValues config.clan.admin.allowedKeys;
};
}

View File

@@ -0,0 +1,8 @@
---
description = "Set up automatic upgrades"
categories = ["System"]
features = [ "inventory", "deprecated" ]
---
Whether to periodically upgrade NixOS to the latest version. If enabled, a
systemd timer will run `nixos-rebuild switch --upgrade` once a day.
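For example, a minimal usage sketch of the option this module defines (the flake reference is hypothetical):

```nix
{
  # Hypothetical flake reference; point it at the flake that defines your system.
  clan.auto-upgrade.flake = "github:example-org/my-clan";
}
```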

View File

@@ -0,0 +1,32 @@
{
config,
lib,
...
}:
let
cfg = config.clan.auto-upgrade;
in
{
options.clan.auto-upgrade = {
flake = lib.mkOption {
type = lib.types.str;
description = "Flake reference";
};
};
config = {
warnings = [
"The clan.auto-upgrade module is deprecated and will be removed on 2025-07-15.
Please migrate to user-maintained configuration or the new equivalent clan services
(https://docs.clan.lol/reference/clanServices)."
];
system.autoUpgrade = {
inherit (cfg) flake;
enable = true;
dates = "02:00";
randomizedDelaySec = "45min";
};
};
}

View File

@@ -0,0 +1,16 @@
---
description = "Statically configure borgbackup with sane defaults."
---
!!! Danger "Deprecated"
Use [borgbackup](borgbackup.md) instead.
Don't use borgbackup-static through [inventory](../../concepts/inventory.md).
This module implements the `borgbackup` backend and provides sane defaults
for backup management through `borgbackup` for members of the clan.
Configure target machines that backups should be sent to through `targets`.
Configure which machines should be backed up either through `includeMachines`,
which exclusively backs up the listed machines, or through `excludeMachines`,
which backs up every machine except the excluded ones.
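A minimal usage sketch of the options described above (machine names are hypothetical):

```nix
{
  clan.borgbackup-static = {
    # Machines that receive and store the backups.
    targets = [ "backup-host" ];
    # Back up only these machines ...
    includeMachines = [
      "laptop"
      "server1"
    ];
    # ... or back up every machine except these
    # (mutually exclusive with includeMachines):
    # excludeMachines = [ "backup-host" ];
  };
}
```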

View File

@@ -0,0 +1,104 @@
{ lib, config, ... }:
let
dir = config.clan.core.settings.directory;
machineDir = dir + "/machines/";
in
{
imports = [ ../borgbackup ];
options.clan.borgbackup-static = {
excludeMachines = lib.mkOption {
type = lib.types.listOf lib.types.str;
example = lib.literalExpression "[ config.clan.core.settings.machine.name ]";
default = [ ];
description = ''
Machines that should not be backed up.
Mutually exclusive with includeMachines.
If this is not empty, every machine in the clan except the listed ones will be backed up by this module.
If includeMachines is set, only the included machines will be backed up.
'';
};
includeMachines = lib.mkOption {
type = lib.types.listOf lib.types.str;
example = lib.literalExpression "[ config.clan.core.settings.machine.name ]";
default = [ ];
description = ''
Machines that should be backed up.
Mutually exclusive with excludeMachines.
'';
};
targets = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [ ];
description = ''
Machines that should act as target machines for backups.
'';
};
};
config.services.borgbackup.repos =
let
machines = builtins.readDir machineDir;
borgbackupIpMachinePath = machines: machineDir + machines + "/facts/borgbackup.ssh.pub";
filteredMachines =
if ((builtins.length config.clan.borgbackup-static.includeMachines) != 0) then
lib.filterAttrs (name: _: (lib.elem name config.clan.borgbackup-static.includeMachines)) machines
else
lib.filterAttrs (name: _: !(lib.elem name config.clan.borgbackup-static.excludeMachines)) machines;
machinesMaybeKey = lib.mapAttrsToList (
machine: _:
let
fullPath = borgbackupIpMachinePath machine;
in
if builtins.pathExists fullPath then machine else null
) filteredMachines;
machinesWithKey = lib.filter (x: x != null) machinesMaybeKey;
hosts = builtins.map (machine: {
name = machine;
value = {
path = "/var/lib/borgbackup/${machine}";
authorizedKeys = [ (builtins.readFile (borgbackupIpMachinePath machine)) ];
};
}) machinesWithKey;
in
lib.mkIf
(builtins.any (
target: target == config.clan.core.settings.machine.name
) config.clan.borgbackup-static.targets)
(if (builtins.listToAttrs hosts) != null then builtins.listToAttrs hosts else { });
config.clan.borgbackup.destinations =
let
destinations = builtins.map (d: {
name = d;
value = {
repo = "borg@${d}:/var/lib/borgbackup/${config.clan.core.settings.machine.name}";
};
}) config.clan.borgbackup-static.targets;
in
lib.mkIf (builtins.any (
target: target == config.clan.core.settings.machine.name
) config.clan.borgbackup-static.includeMachines) (builtins.listToAttrs destinations);
config.assertions = [
{
assertion =
!(
((builtins.length config.clan.borgbackup-static.excludeMachines) != 0)
&& ((builtins.length config.clan.borgbackup-static.includeMachines) != 0)
);
message = ''
The options:
config.clan.borgbackup-static.excludeMachines = [${builtins.toString config.clan.borgbackup-static.excludeMachines}]
and
config.clan.borgbackup-static.includeMachines = [${builtins.toString config.clan.borgbackup-static.includeMachines}]
are mutually exclusive.
Use excludeMachines to exclude certain machines and back up the other clan machines.
Use includeMachines to back up only certain machines.
'';
}
];
config.warnings = lib.optional (
builtins.length config.clan.borgbackup-static.targets > 0
) "The borgbackup-static module is deprecated use the service via the inventory interface instead.";
}

View File

@@ -0,0 +1,14 @@
---
description = "Efficient, deduplicating backup program with optional compression and secure encryption."
categories = ["System"]
features = [ "inventory", "deprecated" ]
---
BorgBackup (short: Borg) gives you:
- Space efficient storage of backups.
- Secure, authenticated encryption.
- Compression: lz4, zstd, zlib, lzma or none.
- Mountable backups with FUSE.
- Easy installation on multiple platforms: Linux, macOS, BSD, …
- Free software (BSD license).
- Backed by a large and active open-source community.

View File

@@ -0,0 +1,6 @@
# Don't import this file.
# It is only here for backwards compatibility.
# Don't author new modules with this file.
{
imports = [ ./roles/client.nix ];
}

View File

@@ -0,0 +1,210 @@
{
config,
lib,
pkgs,
...
}:
let
# Instances might be empty, if the module is not used via the inventory
instances = config.clan.inventory.services.borgbackup or { };
# roles = { ${role_name} :: { machines :: [string] } }
allServers = lib.foldlAttrs (
acc: _instanceName: instanceConfig:
acc
++ (
if builtins.elem machineName instanceConfig.roles.client.machines then
instanceConfig.roles.server.machines
else
[ ]
)
) [ ] instances;
machineName = config.clan.core.settings.machine.name;
cfg = config.clan.borgbackup;
preBackupScript = ''
declare -A preCommandErrors
${lib.concatMapStringsSep "\n" (
state:
lib.optionalString (state.preBackupCommand != null) ''
echo "Running pre-backup command for ${state.name}"
if ! /run/current-system/sw/bin/${state.preBackupCommand}; then
preCommandErrors["${state.name}"]=1
fi
''
) (lib.attrValues config.clan.core.state)}
if [[ ''${#preCommandErrors[@]} -gt 0 ]]; then
echo "pre-backup commands failed for the following services:"
for state in "''${!preCommandErrors[@]}"; do
echo " $state"
done
exit 1
fi
'';
in
{
options.clan.borgbackup.destinations = lib.mkOption {
type = lib.types.attrsOf (
lib.types.submodule (
{ name, ... }:
{
options = {
name = lib.mkOption {
type = lib.types.strMatching "^[a-zA-Z0-9._-]+$";
default = name;
description = "the name of the backup job";
};
repo = lib.mkOption {
type = lib.types.str;
description = "the borgbackup repository to backup to";
};
rsh = lib.mkOption {
type = lib.types.str;
default = "ssh -i ${
config.clan.core.vars.generators.borgbackup.files."borgbackup.ssh".path
} -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=Yes";
defaultText = "ssh -i \${config.clan.core.vars.generators.borgbackup.files.\"borgbackup.ssh\".path} -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null";
description = "the rsh to use for the backup";
};
};
}
)
);
default = { };
description = ''
Destinations where this machine should be backed up to.
'';
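# A hedged illustration of the expected shape; hostname and repo path are
# hypothetical:
example = lib.literalExpression ''
  {
    offsite.repo = "borg@offsite.example.com:/var/lib/borgbackup/my-machine";
  }
'';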
};
options.clan.borgbackup.exclude = lib.mkOption {
type = lib.types.listOf lib.types.str;
example = [ "*.pyc" ];
default = [ ];
description = ''
Directories/Files to exclude from the backup.
Use * as a wildcard.
'';
};
config = {
warnings = [
"The clan.borgbackup module is deprecated and will be removed on 2025-07-15.
Please migrate to user-maintained configuration or the new equivalent clan services
(https://docs.clan.lol/reference/clanServices)."
];
# Destinations
clan.borgbackup.destinations =
let
destinations = builtins.map (serverName: {
name = serverName;
value = {
repo = "borg@${serverName}:/var/lib/borgbackup/${machineName}";
};
}) allServers;
in
(builtins.listToAttrs destinations);
# Derived from the destinations
systemd.services = lib.mapAttrs' (
_: dest:
lib.nameValuePair "borgbackup-job-${dest.name}" {
# since borgbackup mounts the system read-only, we need to run in an
# ExecStartPre script, so we can generate additional files.
serviceConfig.ExecStartPre = [
''+${pkgs.writeShellScript "borgbackup-job-${dest.name}-pre-backup-commands" preBackupScript}''
];
}
) cfg.destinations;
services.borgbackup.jobs = lib.mapAttrs (_: dest: {
paths = lib.unique (
lib.flatten (map (state: state.folders) (lib.attrValues config.clan.core.state))
);
exclude = cfg.exclude;
repo = dest.repo;
environment.BORG_RSH = dest.rsh;
compression = "auto,zstd";
startAt = "*-*-* 01:00:00";
persistentTimer = true;
encryption = {
mode = "repokey";
passCommand = "cat ${config.clan.core.vars.generators.borgbackup.files."borgbackup.repokey".path}";
};
prune.keep = {
within = "1d"; # Keep all archives from the last day
daily = 7;
weekly = 4;
monthly = 0;
};
}) cfg.destinations;
environment.systemPackages = [
(pkgs.writeShellApplication {
name = "borgbackup-create";
runtimeInputs = [ config.systemd.package ];
text = ''
${lib.concatMapStringsSep "\n" (dest: ''
systemctl start borgbackup-job-${dest.name}
'') (lib.attrValues cfg.destinations)}
'';
})
(pkgs.writeShellApplication {
name = "borgbackup-list";
runtimeInputs = [ pkgs.jq ];
text = ''
(${
lib.concatMapStringsSep "\n" (
dest:
# we need `yes` here to skip the changed-URL verification prompt
''echo y | /run/current-system/sw/bin/borg-job-${dest.name} list --json | jq '[.archives[] | {"name": ("${dest.name}::${dest.repo}::" + .name)}]' ''
) (lib.attrValues cfg.destinations)
}) | jq -s 'add // []'
'';
})
(pkgs.writeShellApplication {
name = "borgbackup-restore";
runtimeInputs = [ pkgs.gawk ];
text = ''
cd /
IFS=':' read -ra FOLDER <<< "''${FOLDERS-}"
job_name=$(echo "$NAME" | awk -F'::' '{print $1}')
backup_name=''${NAME#"$job_name"::}
if [[ ! -x /run/current-system/sw/bin/borg-job-"$job_name" ]]; then
echo "borg-job-$job_name not found: Backup name is invalid" >&2
exit 1
fi
echo y | /run/current-system/sw/bin/borg-job-"$job_name" extract "$backup_name" "''${FOLDER[@]}"
'';
})
];
clan.core.vars.generators.borgbackup = {
files."borgbackup.ssh.pub".secret = false;
files."borgbackup.ssh" = { };
files."borgbackup.repokey" = { };
migrateFact = "borgbackup";
runtimeInputs = [
pkgs.coreutils
pkgs.openssh
pkgs.xkcdpass
];
script = ''
ssh-keygen -t ed25519 -N "" -C "" -f "$out"/borgbackup.ssh
xkcdpass -n 4 -d - > "$out"/borgbackup.repokey
'';
};
clan.core.backups.providers.borgbackup = {
list = "borgbackup-list";
create = "borgbackup-create";
restore = "borgbackup-restore";
};
};
}
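
A machine consuming this (deprecated) client module only needs to set the options above. A hedged sketch (hostname, repo path, and exclude patterns are illustrative):

{
  clan.borgbackup.exclude = [
    "*.pyc"
    "*/.cache/*"
  ];
  # an extra destination besides the ones derived from the inventory;
  # the hostname is hypothetical
  clan.borgbackup.destinations.offsite.repo =
    "borg@offsite.example.com:/var/lib/borgbackup/my-machine";
}

The borgbackup-create, borgbackup-list, and borgbackup-restore wrappers defined above then operate on all configured destinations.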

View File

@@ -0,0 +1,63 @@
{ config, lib, ... }:
let
dir = config.clan.core.settings.directory;
machineDir = dir + "/vars/per-machine/";
machineName = config.clan.core.settings.machine.name;
# Instances might be empty if the module is not used via the inventory
#
# Type: { ${instanceName} :: { roles :: Roles } }
# Roles :: { ${role_name} :: { machines :: [string] } }
instances = config.clan.inventory.services.borgbackup or { };
allClients = lib.foldlAttrs (
acc: _instanceName: instanceConfig:
acc
++ (
if (builtins.elem machineName instanceConfig.roles.server.machines) then
instanceConfig.roles.client.machines
else
[ ]
)
) [ ] instances;
in
{
options = {
clan.borgbackup.directory = lib.mkOption {
type = lib.types.str;
default = "/var/lib/borgbackup";
description = ''
The directory where the borgbackup repositories are stored.
'';
};
};
config.services.borgbackup.repos =
let
borgbackupIpMachinePath = machine: machineDir + machine + "/borgbackup/borgbackup.ssh.pub/value";
machinesMaybeKey = builtins.map (
machine:
let
fullPath = borgbackupIpMachinePath machine;
in
if builtins.pathExists fullPath then
machine
else
lib.warn ''
Machine ${machine} does not have a borgbackup key at ${fullPath},
run `clan vars generate ${machine}` to generate it.
'' null
) allClients;
machinesWithKey = lib.filter (x: x != null) machinesMaybeKey;
hosts = builtins.map (machine: {
name = machine;
value = {
path = "${config.clan.borgbackup.directory}/${machine}";
authorizedKeys = [ (builtins.readFile (borgbackupIpMachinePath machine)) ];
};
}) machinesWithKey;
in
builtins.listToAttrs hosts; # listToAttrs of an empty list already yields { }
}
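
On the server side the only knob is the repository directory. A hedged sketch (the path is illustrative):

{
  clan.borgbackup.directory = "/srv/borgbackup";
}

Each client machine then gets a restricted repository at <directory>/<machine>, authorized via the SSH public key generated by clan vars.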

View File

@@ -0,0 +1,10 @@
---
description = "Set up data-mesher"
categories = ["System"]
features = [ "inventory" ]
[constraints]
roles.admin.min = 1
roles.admin.max = 1
---

View File

@@ -0,0 +1,19 @@
lib: {
machines =
config:
let
instanceNames = builtins.attrNames config.clan.inventory.services.data-mesher;
instanceName = builtins.head instanceNames;
dataMesherInstances = config.clan.inventory.services.data-mesher.${instanceName};
uniqueStrings = list: builtins.attrNames (builtins.groupBy lib.id list);
in
rec {
admins = dataMesherInstances.roles.admin.machines or [ ];
signers = dataMesherInstances.roles.signer.machines or [ ];
peers = dataMesherInstances.roles.peer.machines or [ ];
bootstrap = uniqueStrings (admins ++ signers);
};
}
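
A hedged sketch of how a module can consume this helper, mirroring the role modules below (import path and binding names are only illustrative):

{ config, lib, ... }:
let
  dmLib = import ./lib.nix lib;
  # machines grouped by role, derived from the inventory
  inherit (dmLib.machines config) admins signers peers bootstrap;
in
{
  # e.g. derive bootstrap node addresses from `bootstrap`
}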

View File

@@ -0,0 +1,58 @@
{ lib, config, ... }:
let
cfg = config.clan.data-mesher;
dmLib = import ../lib.nix lib;
in
{
imports = [
../shared.nix
];
options.clan.data-mesher = {
network = {
tld = lib.mkOption {
type = lib.types.str;
default = (config.networking.domain or "clan");
description = "Top level domain to use for the network";
};
hostTTL = lib.mkOption {
type = lib.types.str;
default = "672h"; # 28 days
example = "24h";
description = "The TTL for hosts in the network, in the form of a Go time.Duration";
};
};
};
config = {
warnings = [
"The clan.admin module is deprecated and will be removed on 2025-07-15.
Please migrate to user-maintained configuration or the new equivalent clan services
(https://docs.clan.lol/reference/clanServices)."
];
services.data-mesher.initNetwork =
let
# for a given machine, read its public key and remove any newlines
readHostKey =
machine:
let
path = "${config.clan.core.settings.directory}/vars/per-machine/${machine}/data-mesher-host-key/public_key/value";
in
builtins.elemAt (lib.splitString "\n" (builtins.readFile path)) 1;
in
{
enable = true;
keyPath = config.clan.core.vars.generators.data-mesher-network-key.files.private_key.path;
tld = cfg.network.tld;
hostTTL = cfg.network.hostTTL;
# admin and signer host public keys
signingKeys = builtins.map readHostKey (dmLib.machines config).bootstrap;
};
};
}
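
A hedged sketch of the admin-role settings defined above (values are illustrative):

{
  clan.data-mesher.network = {
    tld = "clan";
    hostTTL = "168h"; # 7 days
  };
}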

View File

@@ -0,0 +1,5 @@
{
imports = [
../shared.nix
];
}

View File

@@ -0,0 +1,5 @@
{
imports = [
../shared.nix
];
}

View File

@@ -0,0 +1,152 @@
{
config,
lib,
...
}:
let
cfg = config.clan.data-mesher;
dmLib = import ./lib.nix lib;
# the default bootstrap nodes are any machines with the admin or signer role;
# we iterate through those machines, determining an IP address for each based
# on its VPN (currently only zerotier is supported)
defaultBootstrapNodes = builtins.foldl' (
urls: name:
let
ipPath = "${config.clan.core.settings.directory}/vars/per-machine/${name}/zerotier/zerotier-ip/value";
in
if builtins.pathExists ipPath then
let
ip = builtins.readFile ipPath;
in
urls ++ [ "[${ip}]:${builtins.toString cfg.network.port}" ]
else
urls
) [ ] (dmLib.machines config).bootstrap;
in
{
options.clan.data-mesher = {
bootstrapNodes = lib.mkOption {
type = lib.types.nullOr (lib.types.listOf lib.types.str);
default = null;
description = ''
A list of bootstrap nodes that act as an initial gateway when joining
the cluster.
'';
example = [
"192.168.1.1:7946"
"192.168.1.2:7946"
];
};
network = {
interface = lib.mkOption {
type = lib.types.str;
description = ''
The interface over which cluster communication should be performed.
All the IP addresses associated with this interface will be part of
our host claim, including both IPv4 and IPv6.
This should be set to an internal/VPN interface.
'';
example = "tailscale0";
};
port = lib.mkOption {
type = lib.types.port;
default = 7946;
description = ''
Port to listen on for cluster communication.
'';
};
};
};
config = {
services.data-mesher = {
enable = true;
openFirewall = true;
settings = {
log_level = "warn";
state_dir = "/var/lib/data-mesher";
# read network id from vars
network.id = config.clan.core.vars.generators.data-mesher-network-key.files.public_key.value;
host = {
names = [ config.networking.hostName ];
key_path = config.clan.core.vars.generators.data-mesher-host-key.files.private_key.path;
};
cluster = {
port = cfg.network.port;
join_interval = "30s";
push_pull_interval = "30s";
interface = cfg.network.interface;
bootstrap_nodes = if cfg.bootstrapNodes == null then defaultBootstrapNodes else cfg.bootstrapNodes;
};
http.port = 7331;
http.interface = "lo";
};
};
# Generate host key.
clan.core.vars.generators.data-mesher-host-key = {
files =
let
owner = config.users.users.data-mesher.name;
in
{
private_key = {
inherit owner;
};
public_key.secret = false;
};
runtimeInputs = [
config.services.data-mesher.package
];
script = ''
data-mesher generate keypair \
--public-key-path "$out"/public_key \
--private-key-path "$out"/private_key
'';
};
clan.core.vars.generators.data-mesher-network-key = {
# generated once per clan
share = true;
files =
let
owner = config.users.users.data-mesher.name;
in
{
private_key = {
inherit owner;
};
public_key.secret = false;
};
runtimeInputs = [
config.services.data-mesher.package
];
script = ''
data-mesher generate keypair \
--public-key-path "$out"/public_key \
--private-key-path "$out"/private_key
'';
};
};
}
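
For reference, a hedged per-machine sketch of the options defined above (interface name and addresses are illustrative):

{
  clan.data-mesher.network.interface = "zt0";
  # optional: override the bootstrap nodes derived from the inventory
  clan.data-mesher.bootstrapNodes = [
    "192.168.1.1:7946"
    "192.168.1.2:7946"
  ];
}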

View File

@@ -0,0 +1,17 @@
---
description = "Email-based instant messaging for Desktop."
categories = ["Social"]
features = [ "inventory", "deprecated" ]
---
!!! info
This module will automatically configure an email server on the machine to handle email messaging seamlessly.
## Features
- [x] **Email-based**: Uses any email account as its backend.
- [x] **End-to-End Encryption**: Supports Autocrypt to automatically encrypt messages.
- [x] **No Phone Number Required**: Uses your email address instead of a phone number.
- [x] **Cross-Platform**: Available on desktop and mobile platforms.
- [x] **Automatic Server Setup**: Includes your own DeltaChat server for enhanced control and privacy.
- [ ] **Bake a cake**: This module cannot cake a bake.

View File

@@ -0,0 +1,3 @@
{
imports = [ ./roles/default.nix ];
}

Some files were not shown because too many files have changed in this diff.