Compare commits

..

43 Commits

Author SHA1 Message Date
pinpox
3f8e26c87c Deprecate clan.auto-upgrade module with warning
Add a deprecation warning that is shown when the module is imported; the module is scheduled for removal on 2025-07-15. Users should migrate to using the system.autoUpgrade NixOS option directly.
2025-06-18 10:51:23 +02:00
pinpox
64f9e6f655 remove auto-upgrade service 2025-06-18 10:44:19 +02:00
renovate[bot]
90d3de3514 chore(deps): update data-mesher digest to cb75111 2025-06-17 19:21:50 +00:00
Mic92
63e741ed20 Merge pull request 'Introduce flake parts module for clan nixos tests' (#4000) from speed-up-ci into main
Reviewed-on: https://git.clan.lol/clan/clan-core/pulls/4000
2025-06-17 19:20:08 +00:00
Jörg Thalheim
a260083919 fix(vars-check): include generator scripts in test closure
The vars-check test was failing because it only included the
runtimeInputs of generators but not the actual generator scripts
themselves. This caused the test to fail when trying to execute
generators that reference local files (like generate.py).

Added allGeneratorScripts to the closureInfo to ensure all generator
scripts and their dependencies are available in the test environment.
2025-06-17 21:09:59 +02:00
Jörg Thalheim
80a0f66809 no longer make test derivation depend on vars-check
this triggered more builds than necessary.
2025-06-17 21:09:59 +02:00
Jörg Thalheim
c03fda1b84 zerotier: migrate to clan.nixosTests module 2025-06-17 21:09:59 +02:00
Jörg Thalheim
be760704eb wifi: migrate to clan.nixosTests module 2025-06-17 20:39:06 +02:00
Jörg Thalheim
9cefd70bf8 users: migrate to clan.nixosTests module 2025-06-17 20:39:06 +02:00
Jörg Thalheim
d31c9d1537 trusted-nix-caches: migrate to clan.nixosTests module 2025-06-17 20:38:31 +02:00
Jörg Thalheim
8e2fc1056f state-version: migrate to clan.nixosTests module 2025-06-17 20:38:31 +02:00
Jörg Thalheim
41513e6a70 sshd: migrate to clan.nixosTests module 2025-06-17 19:32:04 +02:00
Jörg Thalheim
e5d6d6e7f9 packages: migrate to clan.nixosTests module 2025-06-17 19:31:09 +02:00
Jörg Thalheim
b2a587021f mycelium: migrate to clan.nixosTests module 2025-06-17 19:30:21 +02:00
Jörg Thalheim
509b18647c localsend: migrate to clan.nixosTests module 2025-06-17 19:29:08 +02:00
Jörg Thalheim
3535350cb6 hello-world: migrate to clan.nixosTests module 2025-06-17 19:28:12 +02:00
Jörg Thalheim
4459899fb6 heisenbridge: migrate to clan.nixosTests module 2025-06-17 19:27:15 +02:00
Jörg Thalheim
a6f0f27f02 garage: migrate to clan.nixosTests module 2025-06-17 19:26:25 +02:00
Jörg Thalheim
88e935f7c9 ergochat: migrate to clan.nixosTests module 2025-06-17 19:24:09 +02:00
Jörg Thalheim
12cdc279e8 deltachat: make test more robust with wait_until_succeeds
Use wait_until_succeeds for the first network check to ensure the
service is fully ready before testing connectivity.
2025-06-17 19:18:04 +02:00
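The retry behaviour this commit relies on can be sketched as a small polling helper. This is a hypothetical re-implementation of the idea, not the NixOS test driver's actual code: `wait_until_succeeds` keeps re-running a check until it passes or a timeout elapses, which absorbs the race between service startup and the first connectivity check.

```python
import time

def wait_until_succeeds(check, timeout=30.0, interval=0.5):
    """Retry `check` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            raise TimeoutError("check did not succeed in time")
        time.sleep(interval)

# A check that only passes on the third attempt, standing in for a
# service that needs a moment to come up.
attempts = {"n": 0}
def flaky_check():
    attempts["n"] += 1
    return attempts["n"] >= 3

assert wait_until_succeeds(flaky_check, timeout=5.0, interval=0.01)
```

A plain `succeed` would be the equivalent of calling `flaky_check` exactly once and failing on the first False.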
Jörg Thalheim
e9cded4fd8 deltachat: migrate to clan.nixosTests module 2025-06-17 19:13:25 +02:00
Jörg Thalheim
065c697e0b borgbackup: migrate to clan.nixosTests module 2025-06-17 19:04:47 +02:00
Jörg Thalheim
268a95f2e4 clan-nixos-test: pass clan-core to test nodes via module args
This allows tests that need access to clan-core (e.g. for clan-cli or
dependencies) to use it within their node configurations.
2025-06-17 19:04:47 +02:00
Jörg Thalheim
3a1b2aede8 admin: migrate to clan.nixosTests module 2025-06-17 19:04:47 +02:00
Jörg Thalheim
29b2c51391 clan-nixos-test: add individual vars-checks back
The consolidated vars-check was too slow to eval. Individual vars-checks allow for better parallelization.
2025-06-17 18:49:16 +02:00
Jörg Thalheim
28d3cee649 introduce flake parts module for clan nixos tests 2025-06-17 18:38:52 +02:00
Mic92
9518fb660b Merge pull request 'fix: correctly check existence of CLAN_TEST_STORE paths in cache' (#3999) from fix-clan-test-store-caching into main
Reviewed-on: https://git.clan.lol/clan/clan-core/pulls/3999
2025-06-17 15:38:13 +00:00
Jörg Thalheim
d9c97fcb10 fix: correctly check existence of CLAN_TEST_STORE paths in cache
The flake cache was only checking existence for paths starting with
NIX_STORE_DIR (defaulting to /nix/store), but not for paths in the
test store when CLAN_TEST_STORE is set. This caused the cache to
return stale references to paths that had been garbage collected.

This fix updates the is_cached method to also check for paths in
the test store, preventing stale cache hits during tests.
2025-06-17 17:21:06 +02:00
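The described fix amounts to treating a cached store path as valid only if it exists in whichever store is actually in use. A minimal sketch with a hypothetical helper (not the actual clan-cli code): paths under the normal store directory are re-rooted into the test store when one is configured, mirroring how CLAN_TEST_STORE redirection works.

```python
import os
import tempfile

def path_exists_in_store(path, nix_store_dir="/nix/store", test_store=None):
    """Check a store path's existence, honoring an optional test store.

    Without the test-store branch, only the real /nix/store would be
    consulted, so cached references to garbage-collected test-store
    paths would still look valid.
    """
    if test_store and path.startswith(nix_store_dir):
        # Re-root /nix/store/... inside the test store.
        return os.path.exists(os.path.join(test_store, path.lstrip("/")))
    return os.path.exists(path)

# Demo with a throwaway test store.
with tempfile.TemporaryDirectory() as store:
    os.makedirs(os.path.join(store, "nix/store/abc-hello"))
    hit = path_exists_in_store("/nix/store/abc-hello", test_store=store)
    miss = path_exists_in_store("/nix/store/gone", test_store=store)

assert hit and not miss
```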
Mic92
9064848d86 Merge pull request 'machines: fix remote-program for darwin nix copy' (#3993) from darwin-remote-program into main
Reviewed-on: https://git.clan.lol/clan/clan-core/pulls/3993
2025-06-17 14:45:43 +00:00
Jörg Thalheim
5dbe44bb43 machines: fix remote-program for darwin nix copy
macOS doesn't provide a proper login shell for SSH sessions, so nix is
not in $PATH because /etc/profile is never sourced.
This restores the remote-program parameter that was accidentally
removed in commit cff5d61f26.
2025-06-17 16:30:04 +02:00
Mic92
578b620e68 Merge pull request 'garage: make test more reliable' (#3997) from garage-test-fix into main
Reviewed-on: https://git.clan.lol/clan/clan-core/pulls/3997
2025-06-17 14:26:48 +00:00
Jörg Thalheim
733fe41b4e garage: make test more reliable 2025-06-17 16:10:38 +02:00
Mic92
d4d37ad4ff Merge pull request 'add run-vm-test-offline package for offline VM testing' (#3994) from run-vm-test-offline into main
Reviewed-on: https://git.clan.lol/clan/clan-core/pulls/3994
2025-06-17 13:20:19 +00:00
kenji
12247c705a Merge pull request 'chore(deps): lock file maintenance' (#3975) from renovate/lock-file-maintenance into main
Reviewed-on: https://git.clan.lol/clan/clan-core/pulls/3975
2025-06-17 12:47:57 +00:00
renovate[bot]
21ca1ed152 chore(deps): lock file maintenance 2025-06-17 12:47:57 +00:00
Mic92
64e5d40de5 Merge pull request 'clan-lib: Make Remote overridable over function arguments' (#3969) from Qubasa/clan-core:nix_transform_host_options into main
Reviewed-on: https://git.clan.lol/clan/clan-core/pulls/3969
2025-06-17 12:47:00 +00:00
kenji
8a2bd8c03c Merge pull request 'agit: Add latest commit information to comment' (#3990) from kenji/agit into main
Reviewed-on: https://git.clan.lol/clan/clan-core/pulls/3990
2025-06-17 12:43:22 +00:00
Jörg Thalheim
f7f6b22f92 add run-vm-test-offline package for offline VM testing
This package allows running NixOS VM tests in an offline environment
using network namespace isolation. It builds the test driver and runs
it with unshare to ensure no network access.
2025-06-17 14:41:12 +02:00
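Network-namespace isolation via unshare, as described above, can be sketched like this. The function names here are illustrative, not the package's actual interface; it assumes Linux with util-linux's `unshare` available, which supports `--net` (fresh network namespace) and `--map-root-user` (unprivileged user namespace).

```python
import subprocess

def offline_argv(cmd):
    """Build an argv that runs `cmd` in a fresh network namespace via
    util-linux's unshare, so the command sees no network interfaces.
    (Sketch of the idea; the real package wraps the built NixOS test
    driver this way.)"""
    return ["unshare", "--net", "--map-root-user", *cmd]

def run_offline(cmd):
    # Anything inside the namespace that tries to reach the network
    # fails immediately, proving the test needs no network access.
    return subprocess.run(offline_argv(cmd), capture_output=True, text=True)

argv = offline_argv(["nixos-test-driver"])
```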
Qubasa
2c57c35517 clan-lib: Refactor remote host handling to function parameters
This refactoring improves the separation of concerns by moving remote host creation logic from the Machine class to the calling functions, making the code more flexible and testable.
2025-06-17 14:04:22 +02:00
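The refactoring pattern described — moving remote-host construction out of the class and into the callers — is plain dependency injection. A minimal sketch with hypothetical stand-in classes (not clan-lib's real API): callers build the `Remote` and pass it in, so tests can substitute their own instead of relying on what `Machine` would construct internally.

```python
from dataclasses import dataclass

@dataclass
class Remote:
    """Hypothetical stand-in for a remote-host description."""
    address: str
    user: str = "root"

@dataclass
class Machine:
    name: str

# Before: Machine built its own Remote internally, hard-wiring host
# resolution. After: the caller supplies it as a function argument,
# making the code overridable and easier to test.
def deploy(machine: Machine, host: Remote) -> str:
    return f"deploying {machine.name} via {host.user}@{host.address}"

message = deploy(Machine("server"), Remote(address="192.0.2.1"))
```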
Qubasa
75bfed044b clan-app: Fix UI errors 2025-06-17 13:53:43 +02:00
Qubasa
344259aa56 genmoon.py: Fix type error 2025-06-17 13:53:43 +02:00
Qubasa
fa4160dda1 clan-lib: Make Remote overridable over function arguments 2025-06-17 13:53:43 +02:00
a-kenji
7c871cdeb2 agit: Add latest commit information to comment
Add latest commit information to the editor comments.
That way we can easily adjust the PR based on the latest commit.
2025-06-17 13:50:36 +02:00
66 changed files with 1395 additions and 1487 deletions

View File

@@ -112,6 +112,9 @@ in
         cp ${../flake.lock} $out/flake.lock
       '';
     };
+    packages = lib.optionalAttrs (pkgs.stdenv.isLinux) {
+      run-vm-test-offline = pkgs.callPackage ../pkgs/run-vm-test-offline { };
+    };
     legacyPackages = {
       nixosTests =
         let

View File

@@ -4,5 +4,7 @@ categories = ["System"]
 features = [ "inventory", "deprecated" ]
 ---
+**⚠️ DEPRECATED: This module is deprecated and will be removed on 2025-07-15. Please migrate to using the system.autoUpgrade NixOS option directly.**
+
 Whether to periodically upgrade NixOS to the latest version. If enabled, a
 systemd timer will run `nixos-rebuild switch --upgrade` once a day.

View File

@@ -14,7 +14,9 @@ in
     };
   };
   config = {
-    system.autoUpgrade = {
+    warnings = [ "The clan.auto-upgrade module is deprecated and will be removed on 2025-07-15. Please migrate to using the system.autoUpgrade NixOS option directly." ];
+    system.autoUpgrade = lib.mkIf (cfg ? flake) {
       inherit (cfg) flake;
       enable = true;
       dates = "02:00";

View File

@@ -1,17 +1,18 @@
-{ lib, self, ... }:
+{ lib, ... }:
+let
+  module = lib.modules.importApply ./default.nix { };
+in
 {
   clan.modules = {
-    admin = lib.modules.importApply ./default.nix { };
+    admin = module;
   };
   perSystem =
-    { pkgs, ... }:
+    { ... }:
     {
-      checks = lib.optionalAttrs (pkgs.stdenv.isLinux) {
-        admin = import ./tests/vm/default.nix {
-          inherit pkgs;
-          clan-core = self;
-          nixosLib = import (self.inputs.nixpkgs + "/nixos/lib") { };
-        };
-      };
+      clan.nixosTests.admin = {
+        imports = [ ./tests/vm/default.nix ];
+        clan.modules."@clan/admin" = module;
+      };
     };
 }

View File

@@ -1,62 +1,45 @@
-{
-  pkgs,
-  nixosLib,
-  clan-core,
-  ...
-}:
 let
   public-key = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII6zj7ubTg6z/aDwRNwvM/WlQdUocMprQ8E92NWxl6t+ test@test";
 in
-nixosLib.runTest (
-  { ... }:
-  {
-    imports = [
-      clan-core.modules.nixosVmTest.clanTest
-    ];
-
-    hostPkgs = pkgs;
+{
   name = "admin";

   clan = {
     directory = ./.;
-    modules."@clan/admin" = ../../default.nix;
     inventory = {
       machines.client = { };
       machines.server = { };

       instances = {
         ssh-test-one = {
           module.name = "@clan/admin";
           roles.default.machines."server".settings = {
             allowedKeys.testkey = public-key;
           };
         };
       };
     };
   };

   nodes = {
     client.environment.etc.private-test-key.source = ./private-test-key;

     server = {
       services.openssh.enable = true;
     };
   };

   testScript = ''
     start_all()

     machines = [client, server]
     for m in machines:
         m.systemctl("start network-online.target")

     for m in machines:
         m.wait_for_unit("network-online.target")

     client.succeed(f"ssh -F /dev/null -i /etc/private-test-key -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o BatchMode=yes root@server true &>/dev/null")
   '';
-  }
-)
+}

View File

@@ -1,33 +0,0 @@
-{ ... }:
-{
-  _class = "clan.service";
-  manifest.name = "clan-core/auto-upgrade";
-  manifest.description = "Automatic system upgrade for the Clan App";
-  manifest.categories = [ "System" ];
-  roles.default = {
-    interface =
-      { lib, ... }:
-      {
-        options.flake = lib.mkOption {
-          type = lib.types.str;
-          description = "Flake reference";
-        };
-      };
-    perInstance =
-      { settings, ... }:
-      {
-        nixosModule =
-          { ... }:
-          {
-            system.autoUpgrade = {
-              inherit (settings) flake;
-              enable = true;
-              dates = "02:00";
-              randomizedDelaySec = "45min";
-            };
-          };
-      };
-  };
-}

View File

@@ -1,6 +0,0 @@
-{ lib, ... }:
-{
-  clan.modules = {
-    auto-upgrade = lib.modules.importApply ./default.nix { };
-  };
-}

View File

@@ -1,17 +1,18 @@
-{ lib, self, ... }:
+{ lib, ... }:
+let
+  module = lib.modules.importApply ./default.nix { };
+in
 {
   clan.modules = {
-    borgbackup = lib.modules.importApply ./default.nix { };
+    borgbackup = module;
   };
   perSystem =
-    { pkgs, ... }:
+    { ... }:
     {
-      checks = lib.optionalAttrs (pkgs.stdenv.isLinux) {
-        borgbackup = import ./tests/vm/default.nix {
-          inherit pkgs;
-          clan-core = self;
-          nixosLib = import (self.inputs.nixpkgs + "/nixos/lib") { };
-        };
-      };
+      clan.nixosTests.borgbackup = {
+        imports = [ ./tests/vm/default.nix ];
+        clan.modules."@clan/borgbackup" = module;
+      };
     };
 }

View File

@@ -1,118 +1,112 @@
 {
-  module,
   pkgs,
-  nixosLib,
-  clan-core,
   ...
 }:
-nixosLib.runTest (
-  { ... }:
-  {
-    imports = [
-      clan-core.modules.nixosVmTest.clanTest
-    ];
-
-    hostPkgs = pkgs;
+{
   name = "borgbackup";

   clan = {
     directory = ./.;
     test.useContainers = true;
-    modules."@clan/borgbackup" = ../../default.nix;
     inventory = {
       machines.clientone = { };
       machines.serverone = { };

       instances = {
         borgone = {
           module.name = "@clan/borgbackup";

           roles.client.machines."clientone" = { };
           roles.server.machines."serverone".settings.directory = "/tmp/borg-test";
         };
       };
     };
   };

   nodes = {
     serverone = {
       services.openssh.enable = true;
       # Needed so PAM doesn't see the user as locked
       users.users.borg.password = "borg";
     };

     clientone =
       {
         config,
         pkgs,
+        clan-core,
         ...
       }:
       let
         dependencies = [
           clan-core
           pkgs.stdenv.drvPath
         ] ++ builtins.map (i: i.outPath) (builtins.attrValues clan-core.inputs);
         closureInfo = pkgs.closureInfo { rootPaths = dependencies; };
       in
       {
         services.openssh.enable = true;
         users.users.root.openssh.authorizedKeys.keyFiles = [ ../../../../checks/assets/ssh/pubkey ];
         clan.core.networking.targetHost = config.networking.hostName;
         environment.systemPackages = [ clan-core.packages.${pkgs.system}.clan-cli ];
         environment.etc.install-closure.source = "${closureInfo}/store-paths";

         nix.settings = {
           substituters = pkgs.lib.mkForce [ ];
           hashed-mirrors = null;
           connect-timeout = pkgs.lib.mkForce 3;
           flake-registry = pkgs.writeText "flake-registry" ''{"flakes":[],"version":2}'';
         };
         system.extraDependencies = dependencies;
         clan.core.state.test-backups.folders = [ "/var/test-backups" ];
       };
   };

   testScript = ''
     import json
     start_all()

     machines = [clientone, serverone]

     for m in machines:
         m.systemctl("start network-online.target")

     for m in machines:
         m.wait_for_unit("network-online.target")

     # dummy data
     clientone.succeed("mkdir -p /var/test-backups /var/test-service")
     clientone.succeed("echo testing > /var/test-backups/somefile")

     clientone.succeed("${pkgs.coreutils}/bin/install -Dm 600 ${../../../../checks/assets/ssh/privkey} /root/.ssh/id_ed25519")
     clientone.succeed("${pkgs.coreutils}/bin/touch /root/.ssh/known_hosts")
     clientone.wait_until_succeeds("timeout 2 ssh -o StrictHostKeyChecking=accept-new localhost hostname")
     clientone.wait_until_succeeds("timeout 2 ssh -o StrictHostKeyChecking=accept-new $(hostname) hostname")

     # create
     clientone.succeed("borgbackup-create >&2")
     clientone.wait_until_succeeds("! systemctl is-active borgbackup-job-serverone >&2")

     # list
     backup_id = json.loads(clientone.succeed("borg-job-serverone list --json"))["archives"][0]["archive"]
     out = clientone.succeed("borgbackup-list").strip()
     print(out)
     assert backup_id in out, f"backup {backup_id} not found in {out}"

     # borgbackup restore
     clientone.succeed("rm -f /var/test-backups/somefile")
     clientone.succeed(f"NAME='serverone::borg@serverone:.::{backup_id}' borgbackup-restore >&2")
     assert clientone.succeed("cat /var/test-backups/somefile").strip() == "testing", "restore failed"
   '';
-  }
-)
+}

View File

@@ -1,17 +1,18 @@
-{ lib, self, ... }:
+{ lib, ... }:
+let
+  module = lib.modules.importApply ./default.nix { };
+in
 {
   clan.modules = {
-    deltachat = lib.modules.importApply ./default.nix { };
+    deltachat = module;
   };
   perSystem =
-    { pkgs, ... }:
+    { ... }:
     {
-      checks = lib.optionalAttrs (pkgs.stdenv.isLinux) {
-        deltachat = import ./tests/vm/default.nix {
-          inherit pkgs;
-          clan-core = self;
-          nixosLib = import (self.inputs.nixpkgs + "/nixos/lib") { };
-        };
-      };
+      clan.nixosTests.deltachat = {
+        imports = [ ./tests/vm/default.nix ];
+        clan.modules."@clan/deltachat" = module;
+      };
     };
 }

View File

@@ -1,50 +1,39 @@
 {
-  module,
   pkgs,
-  nixosLib,
-  clan-core,
   ...
 }:
-nixosLib.runTest (
-  { ... }:
-  {
-    imports = [
-      clan-core.modules.nixosVmTest.clanTest
-    ];
-
-    hostPkgs = pkgs;
+{
   name = "deltachat";

   clan = {
     directory = ./.;
-    modules."@clan/deltachat" = ../../default.nix;
     inventory = {
       machines.server = { };

       instances = {
         deltachat-test = {
           module.name = "@clan/deltachat";
           roles.default.machines."server".settings = { };
         };
       };
     };
   };

   nodes = {
     server = { };
   };

   testScript = ''
     start_all()

     server.wait_for_unit("maddy")

     # imap
-    server.succeed("${pkgs.netcat}/bin/nc -z -v ::1 143")
+    server.wait_until_succeeds("${pkgs.netcat}/bin/nc -z -v ::1 143")

     # smtp submission
     server.succeed("${pkgs.netcat}/bin/nc -z -v ::1 587")

     # smtp
     server.succeed("${pkgs.netcat}/bin/nc -z -v ::1 25")
   '';
-  }
-)
+}

View File

@@ -1,17 +1,18 @@
-{ lib, self, ... }:
+{ lib, ... }:
+let
+  module = lib.modules.importApply ./default.nix { };
+in
 {
   clan.modules = {
-    ergochat = lib.modules.importApply ./default.nix { };
+    ergochat = module;
   };
   perSystem =
-    { pkgs, ... }:
+    { ... }:
     {
-      checks = lib.optionalAttrs (pkgs.stdenv.isLinux) {
-        ergochat = import ./tests/vm/default.nix {
-          inherit pkgs;
-          clan-core = self;
-          nixosLib = import (self.inputs.nixpkgs + "/nixos/lib") { };
-        };
-      };
+      clan.nixosTests.ergochat = {
+        imports = [ ./tests/vm/default.nix ];
+        clan.modules."@clan/ergochat" = module;
+      };
     };
 }

View File

@@ -1,51 +1,41 @@
 {
-  module,
   pkgs,
-  nixosLib,
-  clan-core,
   ...
 }:
-nixosLib.runTest (
-  { ... }:
-  {
-    imports = [
-      clan-core.modules.nixosVmTest.clanTest
-    ];
-
-    hostPkgs = pkgs;
+{
   name = "ergochat";

   clan = {
     directory = ./.;
-    modules."@clan/ergochat" = ../../default.nix;
     inventory = {
       machines.server = { };

       instances = {
         ergochat-test = {
           module.name = "@clan/ergochat";
           roles.default.machines."server".settings = { };
         };
       };
     };
   };

   nodes = {
     server = { };
   };

   testScript = ''
     start_all()

     server.wait_for_unit("ergochat")

     # Check that ergochat is running
     server.succeed("systemctl status ergochat")

     # Check that the data directory exists
     server.succeed("test -d /var/lib/ergo")

     # Check that the server is listening on the correct ports
     server.succeed("${pkgs.netcat}/bin/nc -z -v ::1 6667")
   '';
-  }
-)
+}

View File

@@ -1,18 +1,19 @@
-{ lib, self, ... }:
+{ lib, ... }:
+let
+  module = lib.modules.importApply ./default.nix { };
+in
 {
   clan.modules = {
-    garage = lib.modules.importApply ./default.nix { };
+    garage = module;
   };
   perSystem =
-    { pkgs, ... }:
+    { ... }:
     {
-      checks = lib.optionalAttrs (pkgs.stdenv.isLinux) {
-        garage = import ./tests/vm/default.nix {
-          inherit pkgs;
-          clan-core = self;
-          nixosLib = import (self.inputs.nixpkgs + "/nixos/lib") { };
-        };
-      };
+      clan.nixosTests.garage = {
+        imports = [ ./tests/vm/default.nix ];
+        clan.modules."@clan/garage" = module;
+      };
     };
 }

View File

@@ -1,87 +1,76 @@
 {
-  module,
   pkgs,
-  nixosLib,
-  clan-core,
   ...
 }:
-nixosLib.runTest (
-  { ... }:
-  {
-    imports = [
-      clan-core.modules.nixosVmTest.clanTest
-    ];
-
-    hostPkgs = pkgs;
+{
   name = "garage";

   clan = {
     directory = ./.;
-    modules."@clan/garage" = ../../default.nix;
     inventory = {
       machines.server = { };

       instances = {
         garage-test = {
           module.name = "@clan/garage";
           roles.default.machines."server".settings = { };
         };
       };
     };
   };

   nodes = {
     server = {
       services.garage = {
         enable = true;
         package = pkgs.garage;
         settings = {
           metadata_dir = "/var/lib/garage/meta";
           data_dir = "/var/lib/garage/data";
           db_engine = "sqlite";

           replication_factor = 1;

           rpc_bind_addr = "127.0.0.1:3901";

           s3_api = {
             api_bind_addr = "127.0.0.1:3900";
             s3_region = "garage";
             root_domain = ".s3.garage";
           };
           s3_web = {
             bind_addr = "127.0.0.1:3902";
             root_domain = ".web.garage";
           };
           admin = {
             api_bind_addr = "127.0.0.1:3903";
           };
         };
       };
     };
   };

   testScript = ''
     start_all()

     server.wait_for_unit("network-online.target")
     server.wait_for_unit("garage")

     # Check that garage is running
     server.succeed("systemctl status garage")

     # Check that the data directories exist
     server.succeed("test -d /var/lib/garage/meta")
     server.succeed("test -d /var/lib/garage/data")

     # Check that the ports are open to confirm that garage is running
-    server.succeed("${pkgs.netcat}/bin/nc -z -v 127.0.0.1 3901")
+    server.wait_until_succeeds("${pkgs.netcat}/bin/nc -z -v 127.0.0.1 3901")
     server.succeed("${pkgs.netcat}/bin/nc -z -v 127.0.0.1 3900")
     server.succeed("${pkgs.netcat}/bin/nc -z -v 127.0.0.1 3902")
     server.succeed("${pkgs.netcat}/bin/nc -z -v 127.0.0.1 3903")
   '';
-  }
-)
+}

View File

@@ -1,17 +1,18 @@
-{ lib, self, ... }:
+{ lib, ... }:
+let
+  module = lib.modules.importApply ./default.nix { };
+in
 {
   clan.modules = {
-    heisenbridge = lib.modules.importApply ./default.nix { };
+    heisenbridge = module;
   };
   perSystem =
-    { pkgs, ... }:
+    { ... }:
     {
-      checks = lib.optionalAttrs (pkgs.stdenv.isLinux) {
-        heisenbridge = import ./tests/vm/default.nix {
-          inherit pkgs;
-          clan-core = self;
-          nixosLib = import (self.inputs.nixpkgs + "/nixos/lib") { };
-        };
-      };
+      clan.nixosTests.heisenbridge = {
+        imports = [ ./tests/vm/default.nix ];
+        clan.modules."@clan/heisenbridge" = module;
+      };
     };
 }

View File

@@ -1,65 +1,53 @@
 {
-  module,
   pkgs,
-  nixosLib,
-  clan-core,
   ...
 }:
-nixosLib.runTest (
-  { ... }:
-  {
-    imports = [
-      clan-core.modules.nixosVmTest.clanTest
-    ];
-
-    hostPkgs = pkgs;
+{
   name = "heisenbridge";

   clan = {
     directory = ./.;
-    modules."@clan/heisenbridge" = ../../default.nix;
     inventory = {
       machines.server = { };

       instances = {
         heisenbridge-test = {
           module.name = "@clan/heisenbridge";
           roles.default.machines."server".settings = {
             homeserver = "http://127.0.0.1:8008";
           };
         };
       };
     };
   };

   nodes = {
     server = {
       # Setup a minimal matrix-synapse to test with
       services.matrix-synapse = {
         enable = true;
         settings.server_name = "example.com";
         settings.database = {
           name = "sqlite3";
         };
       };
     };
   };

   testScript = ''
     start_all()

     server.wait_for_unit("matrix-synapse")
     server.wait_for_unit("heisenbridge")

     # Check that heisenbridge is running
     server.succeed("systemctl status heisenbridge")

     # Wait for the bridge to initialize
     server.wait_until_succeeds("journalctl -u heisenbridge | grep -q 'bridge is now running'")

     # Check that heisenbridge is listening on the default port
     server.succeed("${pkgs.netcat}/bin/nc -z -v 127.0.0.1 9898")
   '';
-  }
-)
+}

View File

@@ -14,7 +14,7 @@ in
     hello-world = module;
   };
   perSystem =
-    { pkgs, ... }:
+    { ... }:
     let
       # Module that contains the tests
       # This module adds:
@@ -41,15 +41,10 @@ in
       2. To run the test
       nix build .#checks.x86_64-linux.hello-service
       */
-      checks =
-        # Currently we don't support nixos-integration tests on darwin
-        lib.optionalAttrs (pkgs.stdenv.isLinux) {
-          hello-service = import ./tests/vm/default.nix {
-            inherit module;
-            inherit self inputs pkgs;
-            nixosLib = import (self.inputs.nixpkgs + "/nixos/lib") { };
-            clan-core = self;
-          };
-        };
+      clan.nixosTests.hello-service = {
+        imports = [ ./tests/vm/default.nix ];
+
+        clan.modules.hello-service = module;
+      };
     };
 }

View File

@@ -1,44 +1,29 @@
-{
-  pkgs,
-  nixosLib,
-  clan-core,
-  module,
-  ...
-}:
-nixosLib.runTest (
-  { ... }:
-  {
-    imports = [
-      clan-core.modules.nixosVmTest.clanTest
-    ];
-
-    hostPkgs = pkgs;
+{
   name = "hello-service";

   clan = {
     directory = ./.;
-    modules = {
-      hello-service = module;
-    };
     inventory = {
       machines.peer1 = { };

       instances."test" = {
         module.name = "hello-service";
         roles.peer.machines.peer1 = { };
       };
     };
   };

   testScript =
     { nodes, ... }:
     ''
       start_all()

       # peer1 should have the 'hello' file
       value = peer1.succeed("cat ${nodes.peer1.clan.core.vars.generators.hello.files.hello.path}")
       assert value.strip() == "Hello world from peer1", value
     '';
-  }
-)
+}

View File

@@ -1,18 +1,19 @@
-{ lib, self, ... }:
+{ lib, ... }:
+let
+  module = lib.modules.importApply ./default.nix { };
+in
 {
   clan.modules = {
-    localsend = lib.modules.importApply ./default.nix { };
+    localsend = module;
   };
   perSystem =
-    { pkgs, ... }:
+    { ... }:
     {
-      checks = lib.optionalAttrs (pkgs.stdenv.isLinux) {
-        localsend = import ./tests/vm/default.nix {
-          inherit pkgs;
-          clan-core = self;
-          nixosLib = import (self.inputs.nixpkgs + "/nixos/lib") { };
-        };
-      };
+      clan.nixosTests.localsend = {
+        imports = [ ./tests/vm/default.nix ];
+        clan.modules."@clan/localsend" = module;
+      };
     };
 }

View File

@@ -1,51 +1,38 @@
-{
-  pkgs,
-  nixosLib,
-  clan-core,
-  module,
-  ...
-}:
-nixosLib.runTest (
-  { ... }:
-  {
-    imports = [
-      clan-core.modules.nixosVmTest.clanTest
-    ];
-
-    hostPkgs = pkgs;
+{
   name = "localsend";

   clan = {
     directory = ./.;
-    modules."@clan/localsend" = ../../default.nix;
     inventory = {
       machines.server = { };

       instances = {
         localsend-test = {
           module.name = "@clan/localsend";
           roles.default.machines."server".settings = {
             displayName = "Test Instance";
             ipv4Addr = "192.168.56.2/24";
           };
         };
       };
     };
   };

   nodes = {
     server = { };
   };

   testScript = ''
     start_all()

     # Check that the localsend wrapper script is available
     server.succeed("command -v localsend")

     # Verify the 09-zerotier network is configured with the specified IP address
     server.succeed("grep -q 'Address=192.168.56.2/24' /etc/systemd/network/09-zerotier.network")
   '';
-  }
-)
+}

View File

@@ -1,17 +1,18 @@
-{ lib, self, ... }:
+{ lib, ... }:
+let
+  module = lib.modules.importApply ./default.nix { };
+in
 {
   clan.modules = {
-    mycelium = lib.modules.importApply ./default.nix { };
+    mycelium = module;
   };
   perSystem =
-    { pkgs, ... }:
+    { ... }:
     {
-      checks = lib.optionalAttrs (pkgs.stdenv.isLinux) {
-        mycelium = import ./tests/vm/default.nix {
-          inherit pkgs;
-          clan-core = self;
-          nixosLib = import (self.inputs.nixpkgs + "/nixos/lib") { };
-        };
-      };
+      clan.nixosTests.mycelium = {
+        imports = [ ./tests/vm/default.nix ];
+        clan.modules."@clan/mycelium" = module;
+      };
     };
 }

View File

@@ -1,53 +1,42 @@
{
module,
pkgs,
nixosLib,
clan-core,
...
}:
nixosLib.runTest (
{ ... }:
{
imports = [
clan-core.modules.nixosVmTest.clanTest
];
{
name = "mycelium";
hostPkgs = pkgs;
clan = {
name = "mycelium";
test.useContainers = false;
directory = ./.;
inventory = {
machines.server = { };
clan = {
test.useContainers = false;
directory = ./.;
modules."@clan/mycelium" = ../../default.nix;
inventory = {
machines.server = { };
instances = {
mycelium-test = {
module.name = "@clan/mycelium";
roles.peer.machines."server".settings = {
openFirewall = true;
addHostedPublicNodes = true;
};
};
};
};
};
nodes = {
server = { };
};
testScript = ''
start_all()
# Check that mycelium service is running
server.wait_for_unit("mycelium")
server.succeed("systemctl status mycelium")
# Check that mycelium is listening on its default port
server.wait_until_succeeds("${pkgs.iproute2}/bin/ss -tulpn | grep -q 'mycelium'", 10)
'';
}
)


@@ -1,18 +1,19 @@
{ lib, self, ... }:
{ lib, ... }:
let
module = lib.modules.importApply ./default.nix { };
in
{
clan.modules = {
packages = lib.modules.importApply ./default.nix { };
packages = module;
};
perSystem =
{ pkgs, ... }:
{ ... }:
{
checks = lib.optionalAttrs (pkgs.stdenv.isLinux) {
packages = import ./tests/vm/default.nix {
inherit pkgs;
clan-core = self;
nixosLib = import (self.inputs.nixpkgs + "/nixos/lib") { };
};
clan.nixosTests.packages = {
imports = [ ./tests/vm/default.nix ];
clan.modules."@clan/packages" = module;
};
};


@@ -1,41 +1,28 @@
{
pkgs,
nixosLib,
clan-core,
module,
...
}:
{
name = "packages";
nixosLib.runTest (
{ ... }:
{
imports = [
clan-core.modules.nixosVmTest.clanTest
];
clan = {
directory = ./.;
inventory = {
machines.server = { };
hostPkgs = pkgs;
name = "packages";
clan = {
directory = ./.;
modules."@clan/packages" = ../../default.nix;
inventory = {
machines.server = { };
instances.default = {
module.name = "@clan/packages";
roles.default.machines."server".settings = {
packages = [ "cbonsai" ];
};
};
};
};
nodes.server = { };
testScript = ''
start_all()
server.succeed("cbonsai")
'';
}
)


@@ -1,18 +1,19 @@
{ lib, self, ... }:
{ lib, ... }:
let
module = lib.modules.importApply ./default.nix { };
in
{
clan.modules = {
sshd = lib.modules.importApply ./default.nix { };
sshd = module;
};
perSystem =
{ pkgs, ... }:
{ ... }:
{
checks = lib.optionalAttrs (pkgs.stdenv.isLinux) {
sshd = import ./tests/vm/default.nix {
inherit pkgs;
clan-core = self;
nixosLib = import (self.inputs.nixpkgs + "/nixos/lib") { };
};
clan.nixosTests.sshd = {
imports = [ ./tests/vm/default.nix ];
clan.modules."@clan/sshd" = module;
};
};


@@ -1,62 +1,50 @@
{
module,
pkgs,
nixosLib,
clan-core,
...
}:
{
name = "sshd";
nixosLib.runTest (
{ ... }:
{
imports = [
clan-core.modules.nixosVmTest.clanTest
];
clan = {
directory = ./.;
inventory = {
machines.server = { };
machines.client = { };
hostPkgs = pkgs;
name = "sshd";
clan = {
directory = ./.;
modules."@clan/sshd" = ../../default.nix;
inventory = {
machines.server = { };
machines.client = { };
instances = {
sshd-test = {
module.name = "@clan/sshd";
roles.server.machines."server".settings = {
certificate.searchDomains = [ "example.com" ];
hostKeys.rsa.enable = true;
};
roles.client.machines."client".settings = {
certificate.searchDomains = [ "example.com" ];
};
};
};
};
};
nodes = {
server = { };
client = { };
};
testScript = ''
start_all()
# Check that sshd port is open on the server
server.succeed("${pkgs.netcat}/bin/nc -z -v 127.0.0.1 22")
# Check that /etc/ssh/ssh_known_hosts contains the required CA string on the server
server.succeed("grep '^@cert-authority ssh-ca,\*.example.com ssh-ed25519 ' /etc/ssh/ssh_known_hosts")
# Check that server contains a line starting with 'localhost,server ssh-ed25519'
server.succeed("grep '^localhost,server ssh-ed25519 ' /etc/ssh/ssh_known_hosts")
# Check that /etc/ssh/ssh_known_hosts contains the required CA string on the client
client.succeed("grep '^.cert-authority ssh-ca.*example.com ssh-ed25519 ' /etc/ssh/ssh_known_hosts")
'';
}
)


@@ -1,19 +1,16 @@
{ lib, self, ... }:
{ lib, ... }:
let
module = lib.modules.importApply ./default.nix { };
in
{
clan.modules = {
state-version = lib.modules.importApply ./default.nix { };
};
clan.modules.state-version = module;
perSystem =
{ pkgs, ... }:
{ ... }:
{
checks = lib.optionalAttrs (pkgs.stdenv.isLinux) {
state-version = import ./tests/vm/default.nix {
inherit pkgs;
clan-core = self;
nixosLib = import (self.inputs.nixpkgs + "/nixos/lib") { };
};
clan.nixosTests.state-version = {
imports = [ ./tests/vm/default.nix ];
clan.modules."@clan/state-version" = module;
};
};
}


@@ -1,37 +1,20 @@
{
pkgs,
nixosLib,
clan-core,
...
}:
name = "state-version";
nixosLib.runTest (
{ ... }:
{
imports = [
clan-core.modules.nixosVmTest.clanTest
];
hostPkgs = pkgs;
name = "state-version";
clan = {
directory = ./.;
modules."@clan/state-version" = ../../default.nix;
inventory = {
machines.server = { };
instances.default = {
module.name = "@clan/state-version";
roles.default.machines."server" = { };
};
};
};
nodes.server = { };
testScript = ''
start_all()
'';
}
)


@@ -1,17 +1,16 @@
{ lib, self, ... }:
{ lib, ... }:
let
module = lib.modules.importApply ./default.nix { };
in
{
clan.modules = {
trusted-nix-caches = lib.modules.importApply ./default.nix { };
};
clan.modules.trusted-nix-caches = module;
perSystem =
{ pkgs, ... }:
{ ... }:
{
checks = lib.optionalAttrs (pkgs.stdenv.isLinux) {
trusted-nix-caches = import ./tests/vm/default.nix {
inherit pkgs;
clan-core = self;
nixosLib = import (self.inputs.nixpkgs + "/nixos/lib") { };
};
clan.nixosTests.trusted-nix-caches = {
imports = [ ./tests/vm/default.nix ];
clan.modules."@clan/trusted-nix-caches" = module;
};
};
}


@@ -1,40 +1,24 @@
{
pkgs,
nixosLib,
clan-core,
...
}:
nixosLib.runTest (
{ ... }:
{
imports = [
clan-core.modules.nixosVmTest.clanTest
];
name = "trusted-nix-caches";
hostPkgs = pkgs;
clan = {
directory = ./.;
inventory = {
machines.server = { };
name = "trusted-nix-caches";
clan = {
directory = ./.;
modules."@clan/trusted-nix-caches" = ../../default.nix;
inventory = {
machines.server = { };
instances = {
trusted-nix-caches = {
module.name = "@clan/trusted-nix-caches";
roles.default.machines."server" = { };
};
};
};
};
nodes.server = { };
testScript = ''
start_all()
server.succeed("grep -q 'cache.clan.lol' /etc/nix/nix.conf")
'';
}
)


@@ -1,18 +1,16 @@
{ lib, self, ... }:
{ lib, ... }:
let
module = lib.modules.importApply ./default.nix { };
in
{
clan.modules = {
users = lib.modules.importApply ./default.nix { };
};
clan.modules.users = module;
perSystem =
{ pkgs, ... }:
{ ... }:
{
checks = lib.optionalAttrs (pkgs.stdenv.isLinux) {
users = import ./tests/vm/default.nix {
inherit pkgs;
clan-core = self;
nixosLib = import (self.inputs.nixpkgs + "/nixos/lib") { };
};
clan.nixosTests.users = {
imports = [ ./tests/vm/default.nix ];
clan.modules."@clan/users" = module;
};
};
}


@@ -1,67 +1,50 @@
{
pkgs,
nixosLib,
clan-core,
...
}:
name = "users";
nixosLib.runTest (
{ ... }:
{
imports = [
clan-core.modules.nixosVmTest.clanTest
];
clan = {
directory = ./.;
inventory = {
machines.server = { };
hostPkgs = pkgs;
name = "users";
clan = {
directory = ./.;
modules."@clan/users" = ../../default.nix;
inventory = {
machines.server = { };
instances = {
root-password-test = {
module.name = "@clan/users";
roles.default.machines."server".settings = {
user = "root";
prompt = false;
};
user-password-test = {
module.name = "@clan/users";
roles.default.machines."server".settings = {
user = "testuser";
prompt = false;
};
};
};
};
};
nodes = {
server = {
users.users.testuser.group = "testuser";
users.groups.testuser = { };
users.users.testuser.isNormalUser = true;
};
};
testScript = ''
start_all()
server.wait_for_unit("multi-user.target")
# Check that the testuser account exists
server.succeed("id testuser")
# Try to log in as the user using the generated password
# TODO: fix
# password = server.succeed("cat /run/clan/vars/user-password/user-password").strip()
# server.succeed(f"echo '{password}' | su - testuser -c 'echo Login successful'")
'';
}
)


@@ -1,6 +1,5 @@
{
self,
inputs,
lib,
...
}:
@@ -10,28 +9,14 @@ let
};
in
{
clan.modules = {
wifi = module;
};
clan.modules.wifi = module;
perSystem =
{ pkgs, ... }:
{ ... }:
{
/**
1. Prepare the test vars
nix run .#generate-test-vars -- clanServices/hello-world/tests/vm hello-service
2. To run the test
nix build .#checks.x86_64-linux.hello-service
*/
checks =
# Currently we don't support nixos-integration tests on darwin
lib.optionalAttrs (pkgs.stdenv.isLinux) {
wifi-service = import ./tests/vm/default.nix {
inherit module;
inherit inputs pkgs;
clan-core = self;
nixosLib = import (self.inputs.nixpkgs + "/nixos/lib") { };
};
};
clan.nixosTests.wifi = {
imports = [ ./tests/vm/default.nix ];
clan.modules."@clan/wifi" = module;
};
};
}


@@ -1,46 +1,29 @@
{
pkgs,
nixosLib,
clan-core,
module,
...
}:
nixosLib.runTest (
{ ... }:
{
imports = [
clan-core.modules.nixosVmTest.clanTest
];
name = "wifi";
hostPkgs = pkgs;
clan = {
directory = ./.;
test.useContainers = false;
inventory = {
name = "wifi-service";
machines.test = { };
clan = {
directory = ./.;
test.useContainers = false;
modules."@clan/wifi" = module;
inventory = {
instances = {
wg-test-one = {
module.name = "@clan/wifi";
machines.test = { };
roles.default.machines = {
test.settings.networks.one = { };
};
};
};
};
};
testScript = ''
start_all()
test.wait_for_unit("NetworkManager.service")
psk = test.succeed("cat /run/NetworkManager/system-connections/one.nmconnection")
assert "password-eins" in psk, "Password is incorrect"
'';
}
)


@@ -8,9 +8,7 @@ let
module = lib.modules.importApply ./default.nix { };
in
{
clan.modules = {
zerotier = module;
};
clan.modules.zerotier = module;
perSystem =
{ ... }:
let
@@ -28,11 +26,11 @@ in
imports = [
unit-test-module
];
# zerotier = import ./tests/vm/default.nix {
# inherit module;
# inherit inputs pkgs;
# clan-core = self;
# nixosLib = import (self.inputs.nixpkgs + "/nixos/lib") { };
# };
clan.nixosTests.zerotier = {
imports = [ ./tests/vm/default.nix ];
clan.modules.zerotier = module;
};
};
}


@@ -1,43 +1,27 @@
{
pkgs,
nixosLib,
clan-core,
module,
...
}:
nixosLib.runTest (
{ ... }:
{
imports = [
clan-core.modules.nixosVmTest.clanTest
];
name = "zerotier";
hostPkgs = pkgs;
clan = {
directory = ./.;
inventory = {
name = "zerotier";
machines.jon = { };
machines.sara = { };
machines.bam = { };
clan = {
directory = ./.;
modules."zerotier" = module;
inventory = {
instances = {
"zerotier" = {
module.name = "zerotier";
roles.peer.tags.all = { };
roles.controller.machines.bam = { };
};
roles.moon.machines = { };
};
};
};
};
# This is not an actual vm test, this is a workaround to
# generate the needed vars for the eval test.
testScript = '''';
}
)
testScript = "";
}


@@ -1,6 +0,0 @@
[
{
"publickey": "age13ahclyps97532zt2sfta5zrfx976d3r2jmctj8d36vj9x5v5ffqq304fqf",
"type": "age"
}
]


@@ -1,15 +0,0 @@
{
"data": "ENC[AES256_GCM,data:AGYme1x1pE7SVk6HowmIYMN3EHNaZglW97geihpDCkKqArq/zD2IHxbgo8OtXmaNws16i0R6LehWJTL21fVmnAEA9GNZQOE/Y4Q=,iv:Kc3bDcOwJmxHnnlBweUbqDE77VVFZFelEGpmpfBSct8=,tag:m4kzx3nOtexD91kisQafFw==,type:str]",
"sops": {
"age": [
{
"recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBTc2Q5NTY1ejl5ODhSOXhv\nVUFrb0xvblErWEY1R0k3UXNBQk5Ja1MwaERVCmdISk1RSGFUL2FRMWlPSFdERjB6\nalltcHZLd21XOVFuaExSRUNQc1VmdjAKLS0tIGg0ZGdvbm9wbC9Jd255cHNmVWxP\nWStOQS9EQW9WQUtLZVp5SDBmM1ByaEEKzviyWc0yLbDMwk/CHhTwntrjA5LX44Wu\nNdlsQG/yfRaqRL1TKZztT9RnX0293gOEZFvoYZasEJJAIeBoZvN6VQ==\n-----END AGE ENCRYPTED FILE-----\n"
}
],
"lastmodified": "2025-05-29T13:14:51Z",
"mac": "ENC[AES256_GCM,data:uCk2e5aFHZhttLkIdvDU3KARN7PiHKLtXsqxmuLkZP903XhDTCuj1GH6S0C9UN5LftlaVjCEaqlgx68cCNwTc9bTUnhSdVVjMWy0gjxKZ1Y25YzOMlEmOAk/TZqUvnMn/cUL8KOeBnymPbAeqLm8yATjwsyx5+GrFrIVxwGQzUA=,iv:UMX2Ik0xlcljMZyBhjOpvYcsJCC5Wb6d/rgbTFb+6oM=,tag:HH05tFDzOcRrQ8TTXxrDyw==,type:str]",
"unencrypted_suffix": "_unencrypted",
"version": "3.10.2"
}
}

View File

@@ -1 +0,0 @@
../../../users/admin


@@ -1 +0,0 @@
../../../../../sops/machines/test


@@ -1,19 +0,0 @@
{
"data": "ENC[AES256_GCM,data:iNOb,iv:24+bKY5u61JYsvLHV8TIUBVmJPV1aX/BJr//c7le68o=,tag:ANCOrzvnukvqyKGf+L8gFQ==,type:str]",
"sops": {
"age": [
{
"recipient": "age13ahclyps97532zt2sfta5zrfx976d3r2jmctj8d36vj9x5v5ffqq304fqf",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBxN2EwVHN3SENVTjdjZGRi\nQmJOWlNGYmpmM1BnZnpYWGhaSlRaUVJIODFRCkhhMUhyZzVWWk53SDBwSVBVZGVY\nVUpMTm9qWTIzc3VwdGJHcUVWVzFlV0UKLS0tIDBBVXdlS1FFbzNPSnlZWWtEaDJi\nK215OWQvMVRCRUZyQjFZckJFbHBZeDQK2cqgDnGM5uIm834dbQ3bi3nQA5nPq6Bf\n0+sezXuY55GdFS6OxIgI5/KcitHzDE0WHOvklIGDCSysoXIQ3QXanA==\n-----END AGE ENCRYPTED FILE-----\n"
},
{
"recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSA0NDB5SVcrU0V6akYwbDlv\na1BuSm5XbjYwN2ZkZWtIcnhBVHBTWGFxd24wCnZTVGlPRm5uZEd3QXYwdFRMS09K\nWWw5N2RJZ3d4N0VDMWZmM2lkYVM4VncKLS0tIGplTDVka1VoUVdXMU9VS3hYSlZ1\nRjZGL25hQWxHWEx3OXdQamJiNG9KaDgKk94uXPuCE/M4Hz/7hVKJPHuzQfbOQi/9\nVfR2i17Hjcq08l68Xzn+DllQEAFdts2fS96Pu4FFKfiLK7INl/fUOg==\n-----END AGE ENCRYPTED FILE-----\n"
}
],
"lastmodified": "2025-05-29T13:15:02Z",
"mac": "ENC[AES256_GCM,data:4beXC5ONY5RLChluoVkklpDnaf/KCjlUzpQkFVSp7vauQmMKeTK40xqfvY5d+64u/OKRTIdc38KQTwhZ0pYzOv1LcJOWbHrGu7XadlALKgyUqKOZy03G2O8y0IF6t/LUK8TaNFnNvNteFsfD36/+wkRaxPJe7MKXGqPhWf6RC78=,iv:FR/PQUZqL3HnyVbW+H1QlZMmgFxA5juSb88wuatIlHM=,tag:parvZw3y9ZHieZ8pmUjCZQ==,type:str]",
"unencrypted_suffix": "_unencrypted",
"version": "3.10.2"
}
}

View File

@@ -1 +0,0 @@
../../../../../sops/users/admin


@@ -1 +0,0 @@
../../../../../sops/machines/test


@@ -1,19 +0,0 @@
{
"data": "ENC[AES256_GCM,data:HHWyM9d6StpKc6uTxg==,iv:blDyfL/xSThCt+dhxeR5eOLa11OsIkbe+w4ReLBv754=,tag:qGHcDXS4DWdUIXUvtLc5XQ==,type:str]",
"sops": {
"age": [
{
"recipient": "age13ahclyps97532zt2sfta5zrfx976d3r2jmctj8d36vj9x5v5ffqq304fqf",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBPdkQyYnQ1UzlCWEFtdnJh\nMWlBK0RGcENTMmRITWM5SSs2Mkt2N0ZKdm5VClNTS0NuR05OVHY3QkFLZWt6bTUx\nMzJLc2Vib1ZUbW1VM0lhYXFFeEhOaEEKLS0tIHVoODVOK3BUU2JDZkJkN2I2Wm1L\nMWM0TUNQazljZS9uWXRKRFlxWmd0clUKg1YhJoRea05c24hCuZKYvqyvjuu965KD\nr4GLtyqQ6wt9sn50Rzx5cAY/Ac684DNFJVZ1RwG1NTB2kmXcVP8SJA==\n-----END AGE ENCRYPTED FILE-----\n"
},
{
"recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBoZTA5QXpsOXR3L2FKcnJD\neUxzNVp3M2VQMFFaUUxwNXQ4UTlXa01rR0IwCjkyU2hmdlVYbWY4WUpVK0J1ZC9Q\nRjVkYWlGTlh1MFY3R3FxMEZHODZXMmcKLS0tIFV3bGdvUEtnT21wRWJveEQwdTBV\nbGFUUExBZWR1enQ0c0l0dUY3TnErM3cKutl5cv8dSlpQA7SXUYWJq1M0yLmko/Bx\nUvxxGGLQaK0Mp81Z5mOsjNhcVQrY160AyVnWJ0z39cqOJq9PpXRP+A==\n-----END AGE ENCRYPTED FILE-----\n"
}
],
"lastmodified": "2025-05-29T13:15:02Z",
"mac": "ENC[AES256_GCM,data:Y2FFQevNHSJrEtCmGHQXcpfyof0v2IF8ey79g7EfGj13An4ylhvogsVjRtfMkQvKD5GZykswZgmh+PmKUIzRoc+cvnMLu0iBzleYv+KzpYqtvUpdK0+NQn/4cKOoafajwNV7EuCQh+SkJgSGjNSbMs8xtIb4q9DmJyTcTbG0JQ4=,iv:xmA/cEhl/J0Z+8QR2GFiGWRw4aH/C4HmO+Qd4e25utw=,tag:/hG5S/EmRt8CjAy8DfBoqg==,type:str]",
"unencrypted_suffix": "_unencrypted",
"version": "3.10.2"
}
}


@@ -1 +0,0 @@
../../../../../sops/users/admin


@@ -83,7 +83,6 @@ nav:
- Services:
- Overview: reference/clanServices/index.md
- reference/clanServices/admin.md
- reference/clanServices/auto-upgrade.md
- reference/clanServices/borgbackup.md
- reference/clanServices/deltachat.md
- reference/clanServices/emergency-access.md

flake.lock generated

@@ -16,11 +16,11 @@
]
},
"locked": {
"lastModified": 1749836431,
"narHash": "sha256-TygKDHK5tAzl9vPh0HcCzWQ5leNh1fCratxkHLFFaRA=",
"rev": "b7cc9ee5989081fe7f11959b647792f267319441",
"lastModified": 1750183842,
"narHash": "sha256-znYkJ+9GUNQCmFtEhGvMRZPRP3fdGmbiuTyyrJRKUGA=",
"rev": "cb75111e4c99c7a960cfdd0d743f75663e36cbfa",
"type": "tarball",
"url": "https://git.clan.lol/api/v1/repos/clan/data-mesher/archive/b7cc9ee5989081fe7f11959b647792f267319441.tar.gz"
"url": "https://git.clan.lol/api/v1/repos/clan/data-mesher/archive/cb75111e4c99c7a960cfdd0d743f75663e36cbfa.tar.gz"
},
"original": {
"type": "tarball",
@@ -118,10 +118,10 @@
"nixpkgs": {
"locked": {
"lastModified": 315532800,
"narHash": "sha256-lvaKRckKFtUXIQTiK1e/0JnZRlEm/y0olZwbkl8ArrY=",
"rev": "08fcb0dcb59df0344652b38ea6326a2d8271baff",
"narHash": "sha256-VgDAFPxHNhCfC7rI5I5wFqdiVJBH43zUefVo8hwo7cI=",
"rev": "41da1e3ea8e23e094e5e3eeb1e6b830468a7399e",
"type": "tarball",
"url": "https://releases.nixos.org/nixpkgs/nixpkgs-25.11pre812400.08fcb0dcb59d/nixexprs.tar.xz"
"url": "https://releases.nixos.org/nixpkgs/nixpkgs-25.11pre814815.41da1e3ea8e2/nixexprs.tar.xz"
},
"original": {
"type": "tarball",


@@ -71,6 +71,7 @@
./flakeModules/demo_iso.nix
./lib/filter-clan-core/flake-module.nix
./lib/flake-module.nix
./lib/flake-parts/clan-nixos-test.nix
./nixosModules/clanCore/vars/flake-module.nix
./nixosModules/flake-module.nix
./pkgs/flake-module.nix


@@ -99,10 +99,18 @@ in
machine:
flip mapAttrsToList machine.clan.core.vars.generators (_name: generator: generator.runtimeInputs);
generatorScripts =
machine:
flip mapAttrsToList machine.clan.core.vars.generators (_name: generator: generator.finalScript);
generatorRuntimeInputs = unique (
flatten (flip mapAttrsToList config.nodes (_machineName: machine: inputsForMachine machine))
);
allGeneratorScripts = unique (
flatten (flip mapAttrsToList config.nodes (_machineName: machine: generatorScripts machine))
);
vars-check =
hostPkgs.runCommand "update-vars-check-${testName}"
{
@@ -114,16 +122,19 @@ in
hostPkgs.bubblewrap
];
closureInfo = hostPkgs.closureInfo {
rootPaths = generatorRuntimeInputs ++ [
hostPkgs.bash
hostPkgs.coreutils
hostPkgs.jq.dev
hostPkgs.stdenv
hostPkgs.stdenvNoCC
hostPkgs.shellcheck-minimal
hostPkgs.age
hostPkgs.sops
];
rootPaths =
generatorRuntimeInputs
++ allGeneratorScripts
++ [
hostPkgs.bash
hostPkgs.coreutils
hostPkgs.jq.dev
hostPkgs.stdenv
hostPkgs.stdenvNoCC
hostPkgs.shellcheck-minimal
hostPkgs.age
hostPkgs.sops
];
};
}
''
@@ -277,8 +288,6 @@ in
# Harder to handle advanced setups (like TPM, LUKS, or LVM-on-LUKS) but not needed since we are in a test
# No systemd journal logs from initrd.
boot.initrd.systemd.enable = false;
# make the test depend on its vars-check derivation
environment.variables.CLAN_VARS_CHECK = "${vars-check}";
}
);


@@ -0,0 +1,94 @@
{
lib,
flake-parts-lib,
self,
inputs,
...
}:
let
inherit (lib)
mkOption
types
;
inherit (flake-parts-lib)
mkPerSystemOption
;
nixosLib = import (inputs.nixpkgs + "/nixos/lib") { };
in
{
options = {
perSystem = mkPerSystemOption (
{ config, pkgs, ... }:
let
cfg = config.clan.nixosTests;
in
{
options.clan.nixosTests = mkOption {
description = "Clan NixOS tests configuration";
type = types.attrsOf types.unspecified;
default = { };
};
config.checks = lib.optionalAttrs (pkgs.stdenv.isLinux) (
let
# Build all individual vars-check derivations
varsChecks = lib.mapAttrs' (
name: testModule:
lib.nameValuePair "vars-check-${name}" (
let
test = nixosLib.runTest (
{ ... }:
{
imports = [
self.modules.nixosVmTest.clanTest
testModule
];
hostPkgs = pkgs;
defaults = {
imports = [
{
_module.args.clan-core = self;
}
];
};
}
);
in
test.config.result.vars-check
)
) cfg;
in
lib.mkMerge [
# Add the VM tests as checks
(lib.mapAttrs (
_name: testModule:
nixosLib.runTest (
{ ... }:
{
imports = [
self.modules.nixosVmTest.clanTest
testModule
];
hostPkgs = pkgs;
defaults = {
imports = [
{
_module.args.clan-core = self;
}
];
};
}
)
) cfg)
varsChecks
]
);
}
);
};
}

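The new flake-parts module above fans one attrset of test modules out into two checks per test: the VM test itself and a companion `vars-check-<name>` derivation, merged with `lib.mkMerge`. The fan-out logic, sketched in Python (names and tuple payloads here are illustrative, not the real derivations):

```python
def build_checks(tests: dict) -> dict:
    # For each named test module, emit the VM test under its own name
    # plus a companion "vars-check-<name>" entry, mirroring the
    # flake-parts module's merge of the two attribute sets.
    checks = {}
    for name, module in tests.items():
        checks[name] = ("vm-test", module)
        checks[f"vars-check-{name}"] = ("vars-check", module)
    return checks
```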

@@ -11,11 +11,11 @@ def main() -> None:
if len(sys.argv) != 4:
print("Usage: genmoon.py <moon.json> <endpoint.json> <moons.d>")
sys.exit(1)
moon_json = sys.argv[1]
moon_json_path = sys.argv[1]
endpoint_config = sys.argv[2]
moons_d = sys.argv[3]
moon_json = json.loads(Path(moon_json).read_text())
moon_json = json.loads(Path(moon_json_path).read_text())
moon_json["roots"][0]["stableEndpoints"] = json.loads(
Path(endpoint_config).read_text()
)

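The genmoon.py hunk above fixes a variable-shadowing hazard: `moon_json` was first the file path and then rebound to the parsed dict. A minimal sketch of the corrected flow (the function name is a placeholder, not the script's actual structure):

```python
import json
from pathlib import Path

def build_moon(moon_json_path: str, endpoint_config: str) -> dict:
    # Keep the path and the parsed document under separate names, so the
    # path stays available for later reads or error messages.
    moon_json = json.loads(Path(moon_json_path).read_text())
    moon_json["roots"][0]["stableEndpoints"] = json.loads(
        Path(endpoint_config).read_text()
    )
    return moon_json
```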

@@ -48,6 +48,8 @@ def get_latest_commit_info() -> tuple[str, str]:
def open_editor_for_pr() -> tuple[str, str]:
"""Open editor to get PR title and description. First line is title, rest is description."""
commit_title, commit_body = get_latest_commit_info()
with tempfile.NamedTemporaryFile(
mode="w+", suffix="COMMIT_EDITMSG", delete=False
) as temp_file:
@@ -57,6 +59,15 @@ def open_editor_for_pr() -> tuple[str, str]:
temp_file.write("# The first line will be used as the PR title.\n")
temp_file.write("# Everything else will be used as the PR description.\n")
temp_file.write("#\n")
temp_file.write("# Current commit information:\n")
temp_file.write("#\n")
if commit_title:
temp_file.write(f"# {commit_title}\n")
temp_file.write("#\n")
if commit_body:
for line in commit_body.split("\n"):
temp_file.write(f"# {line}\n")
temp_file.write("#\n")
temp_file.flush()
temp_file_path = temp_file.name
@@ -129,7 +140,7 @@ def create_agit_push(
push_cmd.extend(["-o", f"title={title}"])
if description:
escaped_desc = description.replace('"', '\\"')
escaped_desc = description.rstrip("\n").replace('"', '\\"')
push_cmd.extend(["-o", f"description={escaped_desc}"])
if force_push:

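The last hunk trims trailing newlines before escaping, so the editor's final newline no longer leaks into the `description=` push option. The transformation in isolation (a sketch, not the full script):

```python
def escape_description(description: str) -> str:
    # Drop trailing newlines left by the editor, then escape embedded
    # double quotes so the value survives the push-option quoting.
    return description.rstrip("\n").replace('"', '\\"')
```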

@@ -47,6 +47,27 @@ export const MachineListItem = (props: MachineListItemProps) => {
);
return;
}
const target_host = await callApi("get_host", {
field: "targetHost",
flake: { identifier: active_clan },
name: name,
}).promise;
if (target_host.status == "error") {
console.error("No target host found for the machine");
return;
}
if (target_host.data === null) {
console.error("No target host found for the machine");
return;
}
if (!target_host.data!.data) {
console.error("No target host found for the machine");
return;
}
setInstalling(true);
await callApi("install_machine", {
opts: {
@@ -55,15 +76,14 @@ export const MachineListItem = (props: MachineListItemProps) => {
flake: {
identifier: active_clan,
},
override_target_host: info?.deploy.targetHost,
},
no_reboot: true,
debug: true,
nix_options: [],
password: null,
},
}).promise;
setInstalling(false);
target_host: target_host.data!.data,
}).promise.finally(() => setInstalling(false));
};
const handleUpdate = async () => {
@@ -83,14 +103,53 @@ export const MachineListItem = (props: MachineListItemProps) => {
return;
}
setUpdating(true);
const target_host = await callApi("get_host", {
field: "targetHost",
flake: { identifier: active_clan },
name: name,
}).promise;
if (target_host.status == "error") {
console.error("No target host found for the machine");
return;
}
if (target_host.data === null) {
console.error("No target host found for the machine");
return;
}
if (!target_host.data!.data) {
console.error("No target host found for the machine");
return;
}
const build_host = await callApi("get_host", {
field: "buildHost",
flake: { identifier: active_clan },
name: name,
}).promise;
if (build_host.status == "error") {
console.error("No build host found for the machine");
return;
}
if (build_host.data === null) {
console.error("No build host found for the machine");
return;
}
await callApi("deploy_machine", {
machine: {
name: name,
flake: {
identifier: active_clan,
},
override_target_host: info?.deploy.targetHost,
},
target_host: target_host.data!.data,
build_host: build_host.data?.data || null,
}).promise;
setUpdating(false);

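The three-step check repeated after every `get_host` call (error status, null payload, null inner value) collapses into a single guard. Sketched in Python for brevity — the real code is TypeScript, and the response shape is assumed from the calls above:

```python
from typing import Optional

def unwrap_host(response: dict) -> Optional[str]:
    # Mirrors the three checks repeated in the UI code: an "error"
    # status, a null payload, or a null inner value all mean "no host".
    if response.get("status") == "error":
        return None
    data = response.get("data")
    if not data or not data.get("data"):
        return None
    return data["data"]
```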

@@ -135,6 +135,27 @@ const InstallMachine = (props: InstallMachineProps) => {
setProgressText("Installing machine ... (2/5)");
const target_host = await callApi("get_host", {
field: "targetHost",
flake: { identifier: curr_uri },
name: props.name,
}).promise;
if (target_host.status == "error") {
console.error("No target host found for the machine");
return;
}
if (target_host.data === null) {
console.error("No target host found for the machine");
return;
}
if (!target_host.data!.data) {
console.error("No target host found for the machine");
return;
}
const installPromise = callApi("install_machine", {
opts: {
machine: {
@@ -142,11 +163,11 @@ const InstallMachine = (props: InstallMachineProps) => {
flake: {
identifier: curr_uri,
},
override_target_host: target,
private_key: values.sshKey?.name,
},
password: "",
},
target_host: target_host.data!.data,
});
// Next step
@@ -480,6 +501,49 @@ const MachineForm = (props: MachineDetailsProps) => {
const target = targetHost();
const active_clan = activeClanURI();
if (!active_clan) {
console.error("No active clan selected");
return;
}
const target_host = await callApi("get_host", {
field: "targetHost",
flake: { identifier: active_clan },
name: machine,
}).promise;
if (target_host.status == "error") {
console.error("No target host found for the machine");
return;
}
if (target_host.data === null) {
console.error("No target host found for the machine");
return;
}
if (!target_host.data!.data) {
console.error("No target host found for the machine");
return;
}
const build_host = await callApi("get_host", {
field: "buildHost",
flake: { identifier: active_clan },
name: machine,
}).promise;
if (build_host.status == "error") {
console.error("No build host found for the machine");
return;
}
if (build_host.data === null) {
console.error("No build host found for the machine");
return;
}
setIsUpdating(true);
const r = await callApi("deploy_machine", {
machine: {
@@ -487,8 +551,9 @@ const MachineForm = (props: MachineDetailsProps) => {
flake: {
identifier: curr_uri,
},
override_target_host: target,
},
target_host: target_host.data!.data,
build_host: build_host.data!.data,
}).promise;
};


@@ -90,11 +90,37 @@ export const HWStep = (props: StepProps<HardwareValues>) => {
return;
}
const active_clan = activeClanURI();
if (!active_clan) {
console.error("No active clan selected");
return;
}
const target_host = await callApi("get_host", {
field: "targetHost",
flake: { identifier: active_clan },
name: props.machine_id,
}).promise;
if (target_host.status == "error") {
console.error("No target host found for the machine");
return;
}
if (target_host.data === null) {
console.error("No target host found for the machine");
return;
}
if (!target_host.data!.data) {
console.error("No target host found for the machine");
return;
}
const r = await callApi("generate_machine_hardware_info", {
opts: {
machine: {
name: props.machine_id,
override_target_host: target,
private_key: sshFile?.name,
flake: {
identifier: curr_uri,
@@ -102,6 +128,7 @@ export const HWStep = (props: StepProps<HardwareValues>) => {
},
backend: "nixos-facter",
},
target_host: target_host.data!.data,
});
// TODO: refresh the machine details


@@ -12,6 +12,7 @@ from clan_lib.errors import ClanCmdError, ClanError
from clan_lib.git import commit_file
from clan_lib.machines.machines import Machine
from clan_lib.nix import nix_config, nix_eval
from clan_lib.ssh.remote import HostKeyCheck, Remote
from clan_cli.completions import add_dynamic_completer, complete_machines
@@ -82,7 +83,9 @@ class HardwareGenerateOptions:
@API.register
def generate_machine_hardware_info(opts: HardwareGenerateOptions) -> HardwareConfig:
def generate_machine_hardware_info(
opts: HardwareGenerateOptions, target_host: Remote
) -> HardwareConfig:
"""
Generate hardware information for a machine
and place the resulting *.nix file in the machine's directory.
@@ -103,9 +106,7 @@ def generate_machine_hardware_info(opts: HardwareGenerateOptions) -> HardwareCon
"--show-hardware-config",
]
host = opts.machine.target_host()
with host.ssh_control_master() as ssh, ssh.become_root() as sudo_ssh:
with target_host.ssh_control_master() as ssh, ssh.become_root() as sudo_ssh:
out = sudo_ssh.run(config_command, opts=RunOpts(check=False))
if out.returncode != 0:
if "nixos-facter" in out.stderr and "not found" in out.stderr:
@@ -117,7 +118,7 @@ def generate_machine_hardware_info(opts: HardwareGenerateOptions) -> HardwareCon
raise ClanError(msg)
machine.error(str(out))
msg = f"Failed to inspect {opts.machine}. Address: {host.target}"
msg = f"Failed to inspect {opts.machine}. Address: {target_host.target}"
raise ClanError(msg)
backup_file = None
@@ -157,17 +158,28 @@ def generate_machine_hardware_info(opts: HardwareGenerateOptions) -> HardwareCon
def update_hardware_config_command(args: argparse.Namespace) -> None:
host_key_check = HostKeyCheck.from_str(args.host_key_check)
machine = Machine(
flake=args.flake,
name=args.machine,
override_target_host=args.target_host,
host_key_check=host_key_check,
)
opts = HardwareGenerateOptions(
machine=machine,
password=args.password,
backend=HardwareConfig(args.backend),
)
generate_machine_hardware_info(opts)
if args.target_host:
target_host = Remote.from_deployment_address(
machine_name=machine.name,
address=args.target_host,
host_key_check=host_key_check,
)
else:
target_host = machine.target_host()
generate_machine_hardware_info(opts, target_host)
def register_update_hardware_config(parser: argparse.ArgumentParser) -> None:
@@ -184,6 +196,12 @@ def register_update_hardware_config(parser: argparse.ArgumentParser) -> None:
nargs="?",
help="ssh address to install to in the form of user@host:2222",
)
parser.add_argument(
"--host-key-check",
choices=["strict", "ask", "tofu", "none"],
default="ask",
help="Host key (.ssh/known_hosts) check mode.",
)
parser.add_argument(
"--password",
help="Pre-provided password; otherwise the CLI will prompt if needed.",
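For context, the `--host-key-check` choices added above are fed to `HostKeyCheck.from_str`. A toy mirror of that mapping (enum shape assumed for illustration, not the real `clan_lib` definition):

```python
from enum import Enum

class HostKeyCheckSketch(Enum):
    """Toy stand-in for clan_lib's HostKeyCheck, mirroring the argparse choices."""
    STRICT = "strict"
    ASK = "ask"
    TOFU = "tofu"
    NONE = "none"

    @classmethod
    def from_str(cls, value: str) -> "HostKeyCheckSketch":
        # Enum lookup by value raises ValueError on unknown modes
        return cls(value)
```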


@@ -12,6 +12,7 @@ from clan_lib.cmd import Log, RunOpts, run
from clan_lib.errors import ClanError
from clan_lib.machines.machines import Machine
from clan_lib.nix import nix_shell
from clan_lib.ssh.remote import HostKeyCheck, Remote
from clan_cli.completions import (
add_dynamic_completer,
@@ -48,7 +49,7 @@ class InstallOptions:
@API.register
def install_machine(opts: InstallOptions) -> None:
def install_machine(opts: InstallOptions, target_host: Remote) -> None:
machine = opts.machine
machine.debug(f"installing {machine.name}")
@@ -56,7 +57,6 @@ def install_machine(opts: InstallOptions) -> None:
generate_facts([machine])
generate_vars([machine])
host = machine.target_host()
with (
TemporaryDirectory(prefix="nixos-install-") as _base_directory,
):
@@ -127,8 +127,8 @@ def install_machine(opts: InstallOptions) -> None:
if opts.build_on:
cmd += ["--build-on", opts.build_on.value]
if host.port:
cmd += ["--ssh-port", str(host.port)]
if target_host.port:
cmd += ["--ssh-port", str(target_host.port)]
if opts.kexec:
cmd += ["--kexec", opts.kexec]
@@ -138,7 +138,7 @@ def install_machine(opts: InstallOptions) -> None:
# Add nix options to nixos-anywhere
cmd.extend(opts.nix_options)
cmd.append(host.target)
cmd.append(target_host.target)
if opts.use_tor:
# nix copy does not support tor socks proxy
# cmd.append("--ssh-option")
@@ -162,7 +162,7 @@ def install_command(args: argparse.Namespace) -> None:
try:
# If the caller did not specify a target_host via args.target_host,
# find a suitable target_host that is reachable
target_host = args.target_host
target_host_str = args.target_host
deploy_info: DeployInfo | None = ssh_command_parse(args)
use_tor = False
@@ -170,9 +170,9 @@ def install_command(args: argparse.Namespace) -> None:
host = find_reachable_host(deploy_info)
if host is None:
use_tor = True
target_host = deploy_info.tor.target
target_host_str = deploy_info.tor.target
else:
target_host = host.target
target_host_str = host.target
if args.password:
password = args.password
@@ -181,12 +181,20 @@ def install_command(args: argparse.Namespace) -> None:
else:
password = None
machine = Machine(
name=args.machine,
flake=args.flake,
nix_options=args.option,
override_target_host=target_host,
machine = Machine(name=args.machine, flake=args.flake, nix_options=args.option)
host_key_check = (
HostKeyCheck.from_str(args.host_key_check)
if args.host_key_check
else HostKeyCheck.ASK
)
if target_host_str is not None:
target_host = Remote.from_deployment_address(
machine_name=machine.name,
address=target_host_str,
host_key_check=host_key_check,
)
else:
target_host = machine.target_host().with_data(host_key_check=host_key_check)
if machine._class_ == "darwin":
msg = "Installing macOS machines is not yet supported"
@@ -197,9 +205,7 @@ def install_command(args: argparse.Namespace) -> None:
raise ClanError(msg)
if not args.yes:
ask = input(
f"Install {args.machine} to {machine.target_host().target}? [y/N] "
)
ask = input(f"Install {args.machine} to {target_host.target}? [y/N] ")
if ask != "y":
return None
@@ -217,6 +223,7 @@ def install_command(args: argparse.Namespace) -> None:
identity_file=args.identity_file,
use_tor=use_tor,
),
target_host=target_host,
)
except KeyboardInterrupt:
log.warning("Interrupted by user")


@@ -55,6 +55,12 @@ def upload_sources(machine: Machine, ssh: Remote) -> str:
is_local_input(node) for node in flake_data["locks"]["nodes"].values()
)
# Construct the remote URL with proper parameters for Darwin
remote_url = f"ssh://{ssh.target}"
# macOS sshd does not start a proper login shell, so /etc/profile is never sourced and nix is missing from $PATH
if machine._class_ == "darwin":
remote_url += "?remote-program=bash -lc 'exec nix-daemon --stdio'"
if not has_path_inputs:
# Just copy the flake to the remote machine; we can substitute the other inputs there.
path = flake_data["path"]
@@ -62,7 +68,7 @@ def upload_sources(machine: Machine, ssh: Remote) -> str:
[
"copy",
"--to",
f"ssh://{ssh.target}",
remote_url,
"--no-check-sigs",
path,
]
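The Darwin special case above can be isolated as a small helper. This sketch (with a hypothetical `remote_store_url` name, not part of clan_cli) shows the URL that ends up being passed to `nix copy --to`:

```python
def remote_store_url(target: str, is_darwin: bool) -> str:
    """Hypothetical helper mirroring the URL construction above."""
    url = f"ssh://{target}"
    if is_darwin:
        # macOS ssh sessions lack a login shell, so nix-daemon must be
        # started via `bash -lc` to get /etc/profile sourced.
        url += "?remote-program=bash -lc 'exec nix-daemon --stdio'"
    return url
```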
@@ -84,7 +90,7 @@ def upload_sources(machine: Machine, ssh: Remote) -> str:
"flake",
"archive",
"--to",
f"ssh://{ssh.target}",
remote_url,
"--json",
flake_url,
]
@@ -104,10 +110,12 @@ def upload_sources(machine: Machine, ssh: Remote) -> str:
@API.register
def deploy_machine(machine: Machine) -> None:
def deploy_machine(
machine: Machine, target_host: Remote, build_host: Remote | None
) -> None:
with ExitStack() as stack:
target_host = stack.enter_context(machine.target_host().ssh_control_master())
build_host = machine.build_host()
target_host = stack.enter_context(target_host.ssh_control_master())
if build_host is not None:
build_host = stack.enter_context(build_host.ssh_control_master())
@@ -198,24 +206,6 @@ def deploy_machine(machine: Machine) -> None:
)
def deploy_machines(machines: list[Machine]) -> None:
"""
Deploy to all hosts in parallel
"""
with AsyncRuntime() as runtime:
for machine in machines:
runtime.async_run(
AsyncOpts(
tid=machine.name, async_ctx=AsyncContext(prefix=machine.name)
),
deploy_machine,
machine,
)
runtime.join_all()
runtime.check_all()
def update_command(args: argparse.Namespace) -> None:
try:
if args.flake is None:
@@ -228,21 +218,19 @@ def update_command(args: argparse.Namespace) -> None:
args.machines if args.machines else list_full_machines(args.flake).keys()
)
if args.target_host is not None and len(args.machines) > 1:
msg = "Target host can only be set for a single machine"
raise ClanError(msg)
for machine_name in selected_machines:
machine = Machine(
name=machine_name,
flake=args.flake,
nix_options=args.option,
override_target_host=args.target_host,
override_build_host=args.build_host,
host_key_check=HostKeyCheck.from_str(args.host_key_check),
)
machines.append(machine)
if args.target_host is not None and len(machines) > 1:
msg = "Target host can only be set for a single machine"
raise ClanError(msg)
def filter_machine(m: Machine) -> bool:
if m.deployment.get("requireExplicitUpdate", False):
return False
@@ -285,8 +273,30 @@ def update_command(args: argparse.Namespace) -> None:
f"clanInternals.machines.{system}.{{{','.join(machine_names)}}}.config.system.clan.deployment.file",
]
)
# Run the deployment
deploy_machines(machines_to_update)
host_key_check = HostKeyCheck.from_str(args.host_key_check)
with AsyncRuntime() as runtime:
for machine in machines:
if args.target_host:
target_host = Remote.from_deployment_address(
machine_name=machine.name,
address=args.target_host,
host_key_check=host_key_check,
)
else:
target_host = machine.target_host()
runtime.async_run(
AsyncOpts(
tid=machine.name,
async_ctx=AsyncContext(prefix=machine.name),
),
deploy_machine,
machine=machine,
target_host=target_host,
build_host=machine.build_host(),
)
runtime.join_all()
runtime.check_all()
except KeyboardInterrupt:
log.warning("Interrupted by user")
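The inlined parallel update loop above can be sketched with stdlib concurrency primitives instead of clan's `AsyncRuntime` (names here are hypothetical, for illustration only):

```python
from concurrent.futures import ThreadPoolExecutor

def deploy_all(machines, deploy):
    """Fan out one deploy task per machine, join them all, then surface
    failures -- the same shape as runtime.async_run/join_all/check_all."""
    with ThreadPoolExecutor() as pool:
        futures = {machine: pool.submit(deploy, machine) for machine in machines}
    # result() re-raises any worker exception, mirroring runtime.check_all()
    return {machine: future.result() for machine, future in futures.items()}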


@@ -9,7 +9,6 @@ from tempfile import NamedTemporaryFile
from typing import Any
from clan_lib.errors import ClanError
from clan_lib.nix import nix_config
log = logging.getLogger(__name__)
@@ -320,7 +319,9 @@ class FlakeCacheEntry:
# strings need to be checked if they are store paths
# if they are, we store them as a dict with the outPath key
# this is to mirror nix behavior, where the outPath of an attrset is used if no further key is specified
elif isinstance(value, str) and self._is_store_path(value):
elif isinstance(value, str) and value.startswith(
os.environ.get("NIX_STORE_DIR", "/nix/store")
):
assert selectors == []
self.value = {"outPath": FlakeCacheEntry(value)}
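The store-path wrapping above can be sketched in isolation (helper name hypothetical): a string that lives in the nix store is cached as a dict so that selecting nothing falls back to its `outPath`, mirroring nix's attrset behavior.

```python
import os

def wrap_store_string(value: str) -> object:
    """If value is a store path, wrap it like an attrset with an outPath key."""
    store_dir = os.environ.get("NIX_STORE_DIR", "/nix/store")
    if value.startswith(store_dir):
        return {"outPath": value}
    return value
```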
@@ -334,68 +335,20 @@ class FlakeCacheEntry:
msg = f"Cannot insert {value} into cache, already have {self.value}"
raise TypeError(msg)
def _is_store_path(self, value: str) -> bool:
"""Check if a string is a nix store path."""
# A store path is any path that has "store" as one of its parent directories
# and contains a hash-prefixed name after it
path_parts = Path(value).parts
try:
store_idx = path_parts.index("store")
except ValueError:
return False
# Check if there's at least one more component after "store"
if store_idx + 1 < len(path_parts):
# Check if the component after store looks like a nix store item
# (starts with a hash)
store_item = path_parts[store_idx + 1]
# Basic check: nix store items typically start with a hash
return len(store_item) > 32 and "-" in store_item
return False
def _normalize_store_path(self, store_path: str) -> Path | None:
"""
Normalize a store path to use the current NIX_STORE_DIR.
Returns None if the path is not a valid store path.
"""
# Extract the store item (hash-name) from the path
path_parts = Path(store_path).parts
# Find the index of "store" in the path
try:
store_idx = path_parts.index("store")
except ValueError:
return None
if store_idx + 1 < len(path_parts):
store_item = path_parts[store_idx + 1]
# Get the current store path
# Check if we're using a test store first
test_store = os.environ.get("CLAN_TEST_STORE")
if test_store:
# In test mode, the store is at CLAN_TEST_STORE/nix/store
current_store = str(Path(test_store) / "nix" / "store")
else:
# Otherwise use nix config
config = nix_config()
current_store = config.get("store", "/nix/store")
return Path(current_store) / store_item if current_store else None
return None
def is_cached(self, selectors: list[Selector]) -> bool:
selector: Selector
# for store paths we have to check whether they still exist; otherwise they must be rebuilt and are thus not cached
if isinstance(self.value, str) and self._is_store_path(self.value):
normalized_path = self._normalize_store_path(self.value)
if normalized_path:
return normalized_path.exists()
return False
if isinstance(self.value, str):
# Check if it's a regular nix store path
nix_store_dir = os.environ.get("NIX_STORE_DIR", "/nix/store")
if self.value.startswith(nix_store_dir):
return Path(self.value).exists()
# Check if it's a test store path
test_store = os.environ.get("CLAN_TEST_STORE")
if test_store and self.value.startswith(test_store):
return Path(self.value).exists()
# if self.value is not a dict but more selectors are requested, we assume we are cached; the select function will raise the error
if isinstance(self.value, str | float | int | None):
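Taken together, the replacement existence check above reduces to this standalone sketch (function name hypothetical): a cached string only stays valid if the store path behind it still exists, whether it lives under `NIX_STORE_DIR` or under a `CLAN_TEST_STORE`.

```python
import os
from pathlib import Path

def store_value_exists(value: str) -> bool:
    """Return False when a cached store path has been garbage collected."""
    nix_store_dir = os.environ.get("NIX_STORE_DIR", "/nix/store")
    if value.startswith(nix_store_dir):
        return Path(value).exists()
    test_store = os.environ.get("CLAN_TEST_STORE")
    if test_store and value.startswith(test_store):
        return Path(value).exists()
    # non-store strings carry no on-disk dependency, so they remain cached
    return True
```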


@@ -0,0 +1,297 @@
import contextlib
import subprocess
from pathlib import Path
from sys import platform
from unittest.mock import patch
import pytest
from clan_cli.tests.fixtures_flakes import ClanFlake
from clan_lib.flake.flake import Flake, FlakeCache, FlakeCacheEntry, parse_selector
@pytest.mark.with_core
def test_flake_caching(flake: ClanFlake) -> None:
m1 = flake.machines["machine1"]
m1["nixpkgs"]["hostPlatform"] = "x86_64-linux"
flake.machines["machine2"] = m1.copy()
flake.machines["machine3"] = m1.copy()
flake.refresh()
flake_ = Flake(str(flake.path))
hostnames = flake_.select("nixosConfigurations.*.config.networking.hostName")
assert hostnames == {
"machine1": "machine1",
"machine2": "machine2",
"machine3": "machine3",
}
@pytest.mark.with_core
def test_cache_persistance(flake: ClanFlake) -> None:
m1 = flake.machines["machine1"]
m1["nixpkgs"]["hostPlatform"] = "x86_64-linux"
flake.refresh()
flake1 = Flake(str(flake.path))
flake2 = Flake(str(flake.path))
flake1.invalidate_cache()
flake2.invalidate_cache()
assert isinstance(flake1._cache, FlakeCache) # noqa: SLF001
assert isinstance(flake2._cache, FlakeCache) # noqa: SLF001
assert not flake1._cache.is_cached( # noqa: SLF001
"nixosConfigurations.*.config.networking.hostName"
)
flake1.select("nixosConfigurations.*.config.networking.hostName")
flake1.select("nixosConfigurations.*.config.networking.{hostName,hostId}")
flake2.invalidate_cache()
assert flake2._cache.is_cached( # noqa: SLF001
"nixosConfigurations.*.config.networking.{hostName,hostId}"
)
def test_insert_and_iscached() -> None:
test_cache = FlakeCacheEntry()
selectors = parse_selector("x.y.z")
test_cache.insert("x", selectors)
assert test_cache["x"]["y"]["z"].value == "x"
assert test_cache.is_cached(selectors)
assert not test_cache.is_cached(parse_selector("x.y"))
assert test_cache.is_cached(parse_selector("x.y.z.1"))
assert not test_cache.is_cached(parse_selector("x.*.z"))
assert test_cache.is_cached(parse_selector("x.{y}.z"))
assert test_cache.is_cached(parse_selector("x.?y.z"))
assert not test_cache.is_cached(parse_selector("x.?z.z"))
test_cache = FlakeCacheEntry()
selectors = parse_selector("x.*.z")
test_cache.insert({"y": "x"}, selectors)
assert test_cache["x"]["y"]["z"].value == "x"
assert test_cache.is_cached(selectors)
assert not test_cache.is_cached(parse_selector("x.y"))
assert not test_cache.is_cached(parse_selector("x.y.x"))
assert test_cache.is_cached(parse_selector("x.y.z.1"))
assert test_cache.is_cached(parse_selector("x.{y}.z"))
assert test_cache.is_cached(parse_selector("x.{y,z}.z"))
assert test_cache.is_cached(parse_selector("x.{y,?z}.z"))
assert test_cache.is_cached(parse_selector("x.?y.z"))
assert test_cache.is_cached(parse_selector("x.?z.z"))
test_cache = FlakeCacheEntry()
selectors = parse_selector("x.{y}.z")
test_cache.insert({"y": "x"}, selectors)
assert test_cache["x"]["y"]["z"].value == "x"
assert test_cache.is_cached(selectors)
assert not test_cache.is_cached(parse_selector("x.y"))
assert test_cache.is_cached(parse_selector("x.y.z.1"))
assert not test_cache.is_cached(parse_selector("x.*.z"))
assert test_cache.is_cached(parse_selector("x.{y}.z"))
assert test_cache.is_cached(parse_selector("x.?y.z"))
assert not test_cache.is_cached(parse_selector("x.?z.z"))
test_cache = FlakeCacheEntry()
selectors = parse_selector("x.?y.z")
test_cache.insert({"y": "x"}, selectors)
assert test_cache["x"]["y"]["z"].value == "x"
assert test_cache.is_cached(selectors)
assert not test_cache.is_cached(parse_selector("x.y"))
assert test_cache.is_cached(parse_selector("x.y.z.1"))
assert not test_cache.is_cached(parse_selector("x.*.z"))
assert test_cache.is_cached(parse_selector("x.{y}.z"))
assert test_cache.is_cached(parse_selector("x.?y.z"))
assert not test_cache.is_cached(parse_selector("x.?z.z"))
test_cache = FlakeCacheEntry()
selectors = parse_selector("x.?y.z")
test_cache.insert({}, selectors)
assert test_cache["x"]["y"].exists is False
assert test_cache.is_cached(selectors)
assert test_cache.is_cached(parse_selector("x.y"))
assert test_cache.is_cached(parse_selector("x.y.z.1"))
assert test_cache.is_cached(parse_selector("x.?y.z.1"))
assert not test_cache.is_cached(parse_selector("x.*.z"))
assert test_cache.is_cached(parse_selector("x.{y}.z"))
assert test_cache.is_cached(parse_selector("x.?y.abc"))
assert not test_cache.is_cached(parse_selector("x.?z.z"))
test_cache = FlakeCacheEntry()
selectors = parse_selector("x.{y,z}.z")
test_cache.insert({"y": 1, "z": 2}, selectors)
assert test_cache["x"]["y"]["z"].value == 1
assert test_cache["x"]["z"]["z"].value == 2
assert test_cache.is_cached(selectors)
assert not test_cache.is_cached(parse_selector("x.y"))
assert test_cache.is_cached(parse_selector("x.y.z.1"))
assert not test_cache.is_cached(parse_selector("x.*.z"))
assert test_cache.is_cached(parse_selector("x.{y}.z"))
assert not test_cache.is_cached(parse_selector("x.?y.abc"))
assert test_cache.is_cached(parse_selector("x.?z.z"))
test_cache = FlakeCacheEntry()
selectors = parse_selector("x.y")
test_cache.insert(1, selectors)
selectors = parse_selector("x.z")
test_cache.insert(2, selectors)
assert test_cache["x"]["y"].value == 1
assert test_cache["x"]["z"].value == 2
assert test_cache.is_cached(parse_selector("x.y"))
assert test_cache.is_cached(parse_selector("x.y.z.1"))
assert not test_cache.is_cached(parse_selector("x.*.z"))
assert test_cache.is_cached(parse_selector("x.{y}.z"))
assert test_cache.is_cached(parse_selector("x.?y.abc"))
assert test_cache.is_cached(parse_selector("x.?z.z"))
assert not test_cache.is_cached(parse_selector("x.?x.z"))
test_cache = FlakeCacheEntry()
selectors = parse_selector("x.y.z")
test_cache.insert({"a": {"b": {"c": 1}}}, selectors)
assert test_cache.is_cached(selectors)
assert test_cache.is_cached(parse_selector("x.y.z.a.b.c"))
assert test_cache.is_cached(parse_selector("x.y.z.a.b"))
assert test_cache.is_cached(parse_selector("x.y.z.a"))
assert test_cache.is_cached(parse_selector("x.y.z"))
assert not test_cache.is_cached(parse_selector("x.y"))
assert not test_cache.is_cached(parse_selector("x"))
assert test_cache.is_cached(parse_selector("x.y.z.xxx"))
test_cache = FlakeCacheEntry()
selectors = parse_selector("x.y")
test_cache.insert(1, selectors)
with pytest.raises(TypeError):
test_cache.insert(2, selectors)
assert test_cache["x"]["y"].value == 1
def test_cache_is_cached_with_clan_test_store(
tmp_path: Path, monkeypatch: pytest.MonkeyPatch
) -> None:
"""Test that is_cached correctly handles CLAN_TEST_STORE paths.
This is a regression test for the bug where cached store paths are not
checked for existence when CLAN_TEST_STORE is set, because the cache
only checks existence for paths starting with NIX_STORE_DIR (/nix/store).
"""
# Create a temporary store
test_store = tmp_path / "test-store"
test_store.mkdir()
# Set CLAN_TEST_STORE environment variable
monkeypatch.setenv("CLAN_TEST_STORE", str(test_store))
# Ensure NIX_STORE_DIR is not set (typical scenario)
monkeypatch.delenv("NIX_STORE_DIR", raising=False)
# Create a fake store path in the test store
fake_store_path = test_store / "abc123-test-output"
fake_store_path.write_text("test content")
# Create a cache entry
cache = FlakeCacheEntry()
# Insert a store path into the cache
selectors = parse_selector("testOutput")
cache.insert(str(fake_store_path), selectors)
# Verify the path is cached and exists
assert cache.is_cached(selectors), "Path should be cached"
assert Path(cache.select(selectors)).exists(), "Path should exist"
# Now delete the path to simulate garbage collection
fake_store_path.unlink()
assert not fake_store_path.exists(), "Path should be deleted"
# After the fix: is_cached correctly returns False when the path doesn't exist
# even for test store paths
is_cached_result = cache.is_cached(selectors)
assert not is_cached_result, "Cache correctly checks existence of test store paths"
# For comparison, let's test with a /nix/store path
cache2 = FlakeCacheEntry()
nix_store_path = "/nix/store/fake-path-that-doesnt-exist"
cache2.insert(nix_store_path, selectors)
# This should return False because the path doesn't exist
assert not cache2.is_cached(selectors), (
"Cache correctly checks existence of /nix/store paths"
)
# Test that the caching works
@pytest.mark.with_core
def test_caching_works(flake: ClanFlake) -> None:
my_flake = Flake(str(flake.path))
with patch.object(
my_flake, "get_from_nix", wraps=my_flake.get_from_nix
) as tracked_build:
assert tracked_build.call_count == 0
my_flake.select("clanInternals.inventoryClass.inventory.meta")
assert tracked_build.call_count == 1
my_flake.select("clanInternals.inventoryClass.inventory.meta")
assert tracked_build.call_count == 1
def test_cache_is_cached_with_nix_store_dir(
tmp_path: Path, monkeypatch: pytest.MonkeyPatch
) -> None:
"""Test that is_cached works correctly when NIX_STORE_DIR is set to match CLAN_TEST_STORE."""
# Create a temporary store
test_store = tmp_path / "test-store"
test_store.mkdir()
# Set both CLAN_TEST_STORE and NIX_STORE_DIR to the same value
monkeypatch.setenv("CLAN_TEST_STORE", str(test_store))
monkeypatch.setenv("NIX_STORE_DIR", str(test_store))
# Create a fake store path in the test store
fake_store_path = test_store / "abc123-test-output"
fake_store_path.write_text("test content")
# Create a cache entry
cache = FlakeCacheEntry()
# Insert a store path into the cache
selectors = parse_selector("testOutput")
cache.insert(str(fake_store_path), selectors)
# With NIX_STORE_DIR set correctly, is_cached should return True
assert cache.is_cached(selectors), (
"Cache should recognize test store path when NIX_STORE_DIR is set"
)
@pytest.mark.with_core
def test_cache_gc(tmp_path: Path, monkeypatch: pytest.MonkeyPatch) -> None:
"""Test that garbage collection properly invalidates cached store paths."""
monkeypatch.setenv("NIX_STATE_DIR", str(tmp_path / "var"))
monkeypatch.setenv("NIX_LOG_DIR", str(tmp_path / "var" / "log"))
monkeypatch.setenv("NIX_STORE_DIR", str(tmp_path / "store"))
monkeypatch.setenv("NIX_CACHE_HOME", str(tmp_path / "cache"))
monkeypatch.setenv("HOME", str(tmp_path / "home"))
with contextlib.suppress(KeyError):
monkeypatch.delenv("CLAN_TEST_STORE")
monkeypatch.setenv("NIX_BUILD_TOP", str(tmp_path / "build"))
test_file = tmp_path / "flake" / "testfile"
test_file.parent.mkdir(parents=True, exist_ok=True)
test_file.write_text("test")
test_flake = tmp_path / "flake" / "flake.nix"
test_flake.write_text("""
{
outputs = _: {
testfile = ./testfile;
};
}
""")
my_flake = Flake(str(tmp_path / "flake"))
if platform == "darwin":
my_flake.select("testfile")
else:
my_flake.select(
"testfile", nix_options=["--sandbox-build-dir", str(tmp_path / "build")]
)
assert my_flake._cache is not None # noqa: SLF001
assert my_flake._cache.is_cached("testfile") # noqa: SLF001
subprocess.run(["nix-collect-garbage"], check=True)
assert not my_flake._cache.is_cached("testfile") # noqa: SLF001


@@ -1,9 +1,4 @@
import contextlib
import logging
import subprocess
from pathlib import Path
from sys import platform
from unittest.mock import patch
import pytest
from clan_cli.tests.fixtures_flakes import ClanFlake
@@ -121,118 +116,6 @@ def test_parse_selector() -> None:
]
def test_insert_and_iscached() -> None:
test_cache = FlakeCacheEntry()
selectors = parse_selector("x.y.z")
test_cache.insert("x", selectors)
assert test_cache["x"]["y"]["z"].value == "x"
assert test_cache.is_cached(selectors)
assert not test_cache.is_cached(parse_selector("x.y"))
assert test_cache.is_cached(parse_selector("x.y.z.1"))
assert not test_cache.is_cached(parse_selector("x.*.z"))
assert test_cache.is_cached(parse_selector("x.{y}.z"))
assert test_cache.is_cached(parse_selector("x.?y.z"))
assert not test_cache.is_cached(parse_selector("x.?z.z"))
test_cache = FlakeCacheEntry()
selectors = parse_selector("x.*.z")
test_cache.insert({"y": "x"}, selectors)
assert test_cache["x"]["y"]["z"].value == "x"
assert test_cache.is_cached(selectors)
assert not test_cache.is_cached(parse_selector("x.y"))
assert not test_cache.is_cached(parse_selector("x.y.x"))
assert test_cache.is_cached(parse_selector("x.y.z.1"))
assert test_cache.is_cached(parse_selector("x.{y}.z"))
assert test_cache.is_cached(parse_selector("x.{y,z}.z"))
assert test_cache.is_cached(parse_selector("x.{y,?z}.z"))
assert test_cache.is_cached(parse_selector("x.?y.z"))
assert test_cache.is_cached(parse_selector("x.?z.z"))
test_cache = FlakeCacheEntry()
selectors = parse_selector("x.{y}.z")
test_cache.insert({"y": "x"}, selectors)
assert test_cache["x"]["y"]["z"].value == "x"
assert test_cache.is_cached(selectors)
assert not test_cache.is_cached(parse_selector("x.y"))
assert test_cache.is_cached(parse_selector("x.y.z.1"))
assert not test_cache.is_cached(parse_selector("x.*.z"))
assert test_cache.is_cached(parse_selector("x.{y}.z"))
assert test_cache.is_cached(parse_selector("x.?y.z"))
assert not test_cache.is_cached(parse_selector("x.?z.z"))
test_cache = FlakeCacheEntry()
selectors = parse_selector("x.?y.z")
test_cache.insert({"y": "x"}, selectors)
assert test_cache["x"]["y"]["z"].value == "x"
assert test_cache.is_cached(selectors)
assert not test_cache.is_cached(parse_selector("x.y"))
assert test_cache.is_cached(parse_selector("x.y.z.1"))
assert not test_cache.is_cached(parse_selector("x.*.z"))
assert test_cache.is_cached(parse_selector("x.{y}.z"))
assert test_cache.is_cached(parse_selector("x.?y.z"))
assert not test_cache.is_cached(parse_selector("x.?z.z"))
test_cache = FlakeCacheEntry()
selectors = parse_selector("x.?y.z")
test_cache.insert({}, selectors)
assert test_cache["x"]["y"].exists is False
assert test_cache.is_cached(selectors)
assert test_cache.is_cached(parse_selector("x.y"))
assert test_cache.is_cached(parse_selector("x.y.z.1"))
assert test_cache.is_cached(parse_selector("x.?y.z.1"))
assert not test_cache.is_cached(parse_selector("x.*.z"))
assert test_cache.is_cached(parse_selector("x.{y}.z"))
assert test_cache.is_cached(parse_selector("x.?y.abc"))
assert not test_cache.is_cached(parse_selector("x.?z.z"))
test_cache = FlakeCacheEntry()
selectors = parse_selector("x.{y,z}.z")
test_cache.insert({"y": 1, "z": 2}, selectors)
assert test_cache["x"]["y"]["z"].value == 1
assert test_cache["x"]["z"]["z"].value == 2
assert test_cache.is_cached(selectors)
assert not test_cache.is_cached(parse_selector("x.y"))
assert test_cache.is_cached(parse_selector("x.y.z.1"))
assert not test_cache.is_cached(parse_selector("x.*.z"))
assert test_cache.is_cached(parse_selector("x.{y}.z"))
assert not test_cache.is_cached(parse_selector("x.?y.abc"))
assert test_cache.is_cached(parse_selector("x.?z.z"))
test_cache = FlakeCacheEntry()
selectors = parse_selector("x.y")
test_cache.insert(1, selectors)
selectors = parse_selector("x.z")
test_cache.insert(2, selectors)
assert test_cache["x"]["y"].value == 1
assert test_cache["x"]["z"].value == 2
assert test_cache.is_cached(parse_selector("x.y"))
assert test_cache.is_cached(parse_selector("x.y.z.1"))
assert not test_cache.is_cached(parse_selector("x.*.z"))
assert test_cache.is_cached(parse_selector("x.{y}.z"))
assert test_cache.is_cached(parse_selector("x.?y.abc"))
assert test_cache.is_cached(parse_selector("x.?z.z"))
assert not test_cache.is_cached(parse_selector("x.?x.z"))
test_cache = FlakeCacheEntry()
selectors = parse_selector("x.y.z")
test_cache.insert({"a": {"b": {"c": 1}}}, selectors)
assert test_cache.is_cached(selectors)
assert test_cache.is_cached(parse_selector("x.y.z.a.b.c"))
assert test_cache.is_cached(parse_selector("x.y.z.a.b"))
assert test_cache.is_cached(parse_selector("x.y.z.a"))
assert test_cache.is_cached(parse_selector("x.y.z"))
assert not test_cache.is_cached(parse_selector("x.y"))
assert not test_cache.is_cached(parse_selector("x"))
assert test_cache.is_cached(parse_selector("x.y.z.xxx"))
test_cache = FlakeCacheEntry()
selectors = parse_selector("x.y")
test_cache.insert(1, selectors)
with pytest.raises(TypeError):
test_cache.insert(2, selectors)
assert test_cache["x"]["y"].value == 1
def test_select() -> None:
test_cache = FlakeCacheEntry()
@@ -285,46 +168,6 @@ def test_out_path() -> None:
assert test_cache.select(selectors) == "/nix/store/bla"
@pytest.mark.with_core
def test_flake_caching(flake: ClanFlake) -> None:
m1 = flake.machines["machine1"]
m1["nixpkgs"]["hostPlatform"] = "x86_64-linux"
flake.machines["machine2"] = m1.copy()
flake.machines["machine3"] = m1.copy()
flake.refresh()
flake_ = Flake(str(flake.path))
hostnames = flake_.select("nixosConfigurations.*.config.networking.hostName")
assert hostnames == {
"machine1": "machine1",
"machine2": "machine2",
"machine3": "machine3",
}
@pytest.mark.with_core
def test_cache_persistance(flake: ClanFlake) -> None:
m1 = flake.machines["machine1"]
m1["nixpkgs"]["hostPlatform"] = "x86_64-linux"
flake.refresh()
flake1 = Flake(str(flake.path))
flake2 = Flake(str(flake.path))
flake1.invalidate_cache()
flake2.invalidate_cache()
assert isinstance(flake1._cache, FlakeCache) # noqa: SLF001
assert isinstance(flake2._cache, FlakeCache) # noqa: SLF001
assert not flake1._cache.is_cached( # noqa: SLF001
"nixosConfigurations.*.config.networking.hostName"
)
flake1.select("nixosConfigurations.*.config.networking.hostName")
flake1.select("nixosConfigurations.*.config.networking.{hostName,hostId}")
flake2.invalidate_cache()
assert flake2._cache.is_cached( # noqa: SLF001
"nixosConfigurations.*.config.networking.{hostName,hostId}"
)
@pytest.mark.with_core
def test_conditional_all_selector(flake: ClanFlake) -> None:
m1 = flake.machines["machine1"]
@@ -347,93 +190,3 @@ def test_conditional_all_selector(flake: ClanFlake) -> None:
assert res1["clan-core"].get("clan") is not None
flake2.invalidate_cache()
# Test that the caching works
@pytest.mark.with_core
def test_caching_works(flake: ClanFlake) -> None:
my_flake = Flake(str(flake.path))
with patch.object(
my_flake, "get_from_nix", wraps=my_flake.get_from_nix
) as tracked_build:
assert tracked_build.call_count == 0
my_flake.select("clanInternals.inventoryClass.inventory.meta")
assert tracked_build.call_count == 1
my_flake.select("clanInternals.inventoryClass.inventory.meta")
assert tracked_build.call_count == 1
@pytest.mark.with_core
def test_cache_gc(temp_dir: Path, monkeypatch: pytest.MonkeyPatch) -> None:
monkeypatch.setenv("NIX_STATE_DIR", str(temp_dir / "var"))
monkeypatch.setenv("NIX_LOG_DIR", str(temp_dir / "var" / "log"))
monkeypatch.setenv("NIX_STORE_DIR", str(temp_dir / "store"))
monkeypatch.setenv("NIX_CACHE_HOME", str(temp_dir / "cache"))
monkeypatch.setenv("HOME", str(temp_dir / "home"))
with contextlib.suppress(KeyError):
monkeypatch.delenv("CLAN_TEST_STORE")
monkeypatch.setenv("NIX_BUILD_TOP", str(temp_dir / "build"))
test_file = temp_dir / "flake" / "testfile"
test_file.parent.mkdir(parents=True, exist_ok=True)
test_file.write_text("test")
test_flake = temp_dir / "flake" / "flake.nix"
test_flake.write_text("""
{
outputs = _: {
testfile = ./testfile;
};
}
""")
my_flake = Flake(str(temp_dir / "flake"))
if platform == "darwin":
my_flake.select("testfile")
else:
my_flake.select(
"testfile", nix_options=["--sandbox-build-dir", str(temp_dir / "build")]
)
assert my_flake._cache is not None # noqa: SLF001
assert my_flake._cache.is_cached("testfile") # noqa: SLF001
subprocess.run(["nix-collect-garbage"], check=True)
assert not my_flake._cache.is_cached("testfile") # noqa: SLF001
# This test fails because the CI sandbox does not have the required packages to run the generators
# maybe @DavHau or @Qubasa can fix this at some point :)
# @pytest.mark.with_core
# def test_cache_invalidation(flake: ClanFlake, sops_setup: SopsSetup) -> None:
# m1 = flake.machines["machine1"]
# m1["nixpkgs"]["hostPlatform"] = "x86_64-linux"
# flake.refresh()
# clan_dir = Flake(str(flake.path))
# machine1 = Machine(
# name="machine1",
# flake=clan_dir,
# )
# sops_setup.init(flake.path)
# generate_vars([machine1])
#
# flake.inventory["services"] = {
# "sshd": {
# "someid": {
# "roles": {
# "server": {
# "machines": ["machine1"],
# }
# }
# }
# }
# }
# flake.refresh()
# machine1.flush_caches() # because flake.refresh() does not invalidate the cache but it writes into the directory
#
# generate_vars([machine1])
# vpn_ip = (
# get_var(str(clan_dir), machine1.name, "openssh/ssh.id_ed25519")
# .value.decode()
# .strip("\n")
# )
# assert vpn_ip is not None


@@ -32,8 +32,6 @@ class Machine:
flake: Flake
nix_options: list[str] = field(default_factory=list)
override_target_host: None | str = None
override_build_host: None | str = None
private_key: Path | None = None
host_key_check: HostKeyCheck = HostKeyCheck.STRICT
@@ -143,14 +141,6 @@ class Machine:
return self.flake.path
def target_host(self) -> Remote:
if self.override_target_host:
return Remote.from_deployment_address(
machine_name=self.name,
address=self.override_target_host,
host_key_check=self.host_key_check,
private_key=self.private_key,
)
remote = get_host(self.name, self.flake, field="targetHost")
if remote is None:
msg = f"'targetHost' is not set for machine '{self.name}'"
@@ -178,15 +168,6 @@ class Machine:
The host where the machine is built and deployed from.
Can be the same as the target host.
"""
if self.override_build_host:
return Remote.from_deployment_address(
machine_name=self.name,
address=self.override_build_host,
host_key_check=self.host_key_check,
private_key=self.private_key,
)
remote = get_host(self.name, self.flake, field="buildHost")
if remote:


@@ -54,6 +54,28 @@ class Remote:
         except ValueError:
             return False

+    def with_data(self, host_key_check: HostKeyCheck | None = None) -> "Remote":
+        """
+        Returns a new Remote instance with the same data but with a different host_key_check.
+        """
+        return Remote(
+            address=self.address,
+            user=self.user,
+            command_prefix=self.command_prefix,
+            port=self.port,
+            private_key=self.private_key,
+            password=self.password,
+            forward_agent=self.forward_agent,
+            host_key_check=host_key_check
+            if host_key_check is not None
+            else self.host_key_check,
+            verbose_ssh=self.verbose_ssh,
+            ssh_options=self.ssh_options,
+            tor_socks=self.tor_socks,
+            _control_path_dir=self._control_path_dir,
+            _askpass_path=self._askpass_path,
+        )
+
     @property
     def target(self) -> str:
         return f"{self.user}@{self.address}"


@@ -1,108 +0,0 @@
"""Test flake cache with chroot stores."""
import os
import tempfile
from pathlib import Path
import pytest
from clan_lib.flake.flake import FlakeCache, FlakeCacheEntry
def test_flake_cache_with_chroot_store(
tmp_path: Path, monkeypatch: pytest.MonkeyPatch
) -> None:
"""Test that flake cache works correctly with chroot stores."""
# Create a mock store path
normal_store = "/nix/store"
chroot_store = str(tmp_path / "nix" / "store")
# Create the chroot store directory
Path(chroot_store).mkdir(parents=True)
# Create a fake derivation in the chroot store with proper nix store format
fake_drv = "abcd1234abcd1234abcd1234abcd1234-test-package"
fake_store_path = f"{chroot_store}/{fake_drv}"
Path(fake_store_path).mkdir(parents=True)
# Test 1: Cache entry with normal store path
cache = FlakeCache()
# Insert a store path that doesn't exist in the normal store
non_existent_path = f"{normal_store}/{fake_drv}"
cache.insert({"test": {"package": non_existent_path}}, "")
# Without chroot store, this should be uncached (path doesn't exist)
assert not cache.is_cached("test.package")
# Test 2: Set CLAN_TEST_STORE to chroot store parent
# CLAN_TEST_STORE should point to the parent of nix/store
monkeypatch.setenv("CLAN_TEST_STORE", str(tmp_path))
# Create a new cache with the chroot store
cache2 = FlakeCache()
# Insert the same path but now it should use chroot store
cache2.insert({"test": {"package": fake_store_path}}, "")
# This should be cached because the path exists in chroot store
assert cache2.is_cached("test.package")
# Test 3: Cache persistence with chroot store
cache_file = tmp_path / "cache.json"
cache2.save_to_file(cache_file)
# Load cache in a new instance
cache3 = FlakeCache()
cache3.load_from_file(cache_file)
# Should still be cached with chroot store
assert cache3.is_cached("test.package")
# Test 4: Cache validation fails when chroot store changes
monkeypatch.setenv("CLAN_TEST_STORE", "/different")
# Same cache should now be invalid
assert not cache3.is_cached("test.package")
def test_flake_cache_entry_store_path_validation() -> None:
"""Test that FlakeCacheEntry correctly validates store paths."""
# Test with default store
entry = FlakeCacheEntry()
# Insert a non-existent store path with proper format
fake_path = "/nix/store/abcd1234abcd1234abcd1234abcd1234-fake-package"
entry.insert(fake_path, [])
# Should not be cached because path doesn't exist
assert not entry.is_cached([])
# Test with environment variable
with tempfile.TemporaryDirectory() as tmpdir:
# Create nix/store structure
store_dir = Path(tmpdir) / "nix" / "store"
store_dir.mkdir(parents=True)
# Create a fake store path with proper format
fake_drv = "test1234test1234test1234test1234-package"
fake_path_obj = store_dir / fake_drv
fake_path_obj.mkdir()
fake_path = str(fake_path_obj)
# Set CLAN_TEST_STORE to parent of store dir
old_env = os.environ.get("CLAN_TEST_STORE")
try:
os.environ["CLAN_TEST_STORE"] = str(tmpdir)
entry2 = FlakeCacheEntry()
entry2.insert(str(fake_path), [])
# Should be cached because path exists
assert entry2.is_cached([])
finally:
if old_env is None:
os.environ.pop("CLAN_TEST_STORE", None)
else:
os.environ["CLAN_TEST_STORE"] = old_env


@@ -0,0 +1,38 @@
+{
+  writeShellApplication,
+  util-linux,
+  coreutils,
+}:
+writeShellApplication {
+  name = "run-vm-test-offline";
+  runtimeInputs = [
+    util-linux
+    coreutils
+  ]; # nix is inherited from the environment
+  text = ''
+    set -euo pipefail
+
+    if [ $# -eq 0 ]; then
+      echo "Error: Test name required"
+      echo "Usage: nix run .#run-offline-test -- <test-name>"
+      echo "Example: nix run .#run-offline-test -- installation"
+      exit 1
+    fi
+
+    TEST_NAME="$1"
+
+    echo "Building $TEST_NAME test driver..."
+    SYSTEM=$(nix eval --impure --raw --expr 'builtins.currentSystem')
+    nix build ".#checks.$SYSTEM.$TEST_NAME.driver"
+
+    echo "Running $TEST_NAME test in offline environment..."
+    # We use unshare with root here to avoid user-namespace issues originating from bubblewrap
+    currentUser="$(whoami)"
+    sudo unshare --net -- bash -c "
+      ip link set lo up
+      runuser -u $(printf "%q" "$currentUser") ./result/bin/nixos-test-driver
+    "
+  '';
+
+  meta.description = "Run NixOS VM tests interactively in a sandbox without network access";
+}