Compare commits


1 Commit

Author SHA1 Message Date
pinpox
5f05e36c9d Remove outdated facts message
On every execution of `clan machines update`, a message referring to
facts is still being printed. While there is more code to be cleaned up
around facts, this simple change removes the prominent message from
the user's attention.
2025-10-28 15:32:14 +01:00
59 changed files with 790 additions and 1046 deletions

PLAN.md (105 changed lines)
View File

@@ -1,105 +0,0 @@
Title: Add nix-darwin Support to Clan Services (clan.service)
Summary
- Extend clan services so authors can ship a darwinModule alongside nixosModule.
- Wire service results into darwin machines the same way we already do for NixOS.
- Keep full backward compatibility: existing services that only export nixosModule continue to work unchanged.
Goals
- Service authors can return perInstance/perMachine darwinModule similarly to nixosModule.
- Darwin machines import the correct aggregated service module outputs.
- Documentation describes the new result attribute and authoring pattern.
Non-Goals (initial phase)
- No rework of service settings schema or UI beyond documenting darwinModule.
- No OS-specific extraModules handling (we will keep extraModules affecting only nixos aggregation initially to avoid breaking existing users).
- No sweeping updates of all services; we'll add a concrete example (users) and leave others to be migrated incrementally.
Design Overview
- Service result attributes gain darwinModule in both roles.<name>.perInstance and perMachine results.
- The service aggregator composes both nixosModule and darwinModule per machine.
- The machine wiring picks the correct module based on the machine's class (nixos vs darwin).
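To make the result-attribute change concrete, here is a minimal sketch of the intended authoring pattern (the service name and module bodies are illustrative, not taken from the repo):
```nix
{
  _class = "clan.service";
  manifest.name = "example/cross-platform";

  roles.default.perInstance =
    { ... }:
    {
      # Existing behavior: module applied to NixOS machines
      nixosModule =
        { pkgs, ... }:
        {
          environment.systemPackages = [ pkgs.hello ];
        };
      # New, optional sibling: module applied to nix-darwin machines.
      # Services that omit it fall back to the default of { }.
      darwinModule =
        { pkgs, ... }:
        {
          environment.systemPackages = [ pkgs.hello ];
        };
    };
}
```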
Changes By File (with anchors)
- lib/inventory/distributed-service/service-module.nix
- Add darwinModule to per-instance return type next to nixosModule.
- Where: lib/inventory/distributed-service/service-module.nix:536 (options.nixosModule = mkOption { … })
- Action: Add sibling options.darwinModule = mkOption { type = types.deferredModule; default = { }; description = "A single nix-darwin module for the instance."; }.
- Add darwinModule to per-machine return type next to nixosModule.
- Where: lib/inventory/distributed-service/service-module.nix:666 (options.nixosModule = mkOption { … })
- Action: Add sibling options.darwinModule = mkOption { type = types.deferredModule; default = { }; description = "A single nix-darwin module for the machine."; }.
- Compose darwinModule per (role, instance, machine) similarly to nixosModule.
- Where: lib/inventory/distributed-service/service-module.nix:878–893 (wrapper that builds nixosModule = { imports = [ instanceRes.nixosModule ] ++ extraModules … })
- Action: Build darwinModule = { imports = [ instanceRes.darwinModule ]; }.
Note: Do NOT include roles.*.extraModules here for darwin initially to avoid importing nixos-specific modules into darwin eval.
- Aggregate darwinModules in final result.
- Where: lib/inventory/distributed-service/service-module.nix:958–993 (instanceResults builder and final nixosModule = { imports = [ machineResult.nixosModule ] ++ instanceResults.nixosModules; })
- Actions:
- Track instanceResults.darwinModules in parallel to instanceResults.nixosModules.
- Add final darwinModule = { imports = [ machineResult.darwinModule ] ++ instanceResults.darwinModules; }.
- modules/clan/distributed-services.nix
- Feed the right service module to each machine based on machineClass.
- Where: modules/clan/distributed-services.nix:147152
- Current: machineImports = fold over services, collecting serviceModule.result.final.${machineName}.nixosModule
- Change: If inventory.machines.${machineName}.machineClass == "darwin" then collect .darwinModule else .nixosModule (see the sketch after this list).
- modules/clan/module.nix
- Ensure machineImports are included for both nixos and darwin machines.
- Where: modules/clan/module.nix:195 (currently ++ lib.optionals (_class == "nixos") (v.machineImports or [ ]))
- Change: Include machineImports for darwin as well (or remove the conditional and always append v.machineImports).
- docs/site/decisions/01-Clan-Modules.md
- Document darwinModule as a result attribute.
- Where: docs/site/decisions/01-Clan-Modules.md:129–146 (Result attributes and perMachine text mentioning only nixosModule)
- Change: Add “darwinModule” to the Result attributes list and examples, mirroring nixosModule.
- Example service update: clanServices/users/default.nix
- Add perInstance.darwinModule and perMachine.darwinModule mirroring nixos behavior where feasible.
- Where: clanServices/users/default.nix:28–90 (roles.default.perInstance.nixosModule), 148–153 (perMachine.nixosModule)
- Change: Provide minimal darwinModule that sets users.users.<name> (and any safe, cross-platform bits). If some nixos-only settings (e.g., systemd hooks) exist, keep them nixos-only.
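Condensed, the class-based selection described for modules/clan/distributed-services.nix could look like this sketch (it mirrors the fold that appears verbatim in the `_allMachines` option further down in this diff; `machineName`, `inventory`, and `mappedServices` come from the surrounding code):
```nix
machineImports = lib.foldlAttrs (
  acc: _ident: serviceModule:
  let
    # Pick the module flavor matching the machine's class
    modName =
      if inventory.machines.${machineName}.machineClass == "darwin" then
        "darwinModule"
      else
        "nixosModule";
  in
  # Fall back to { } so services without a darwinModule stay harmless
  acc ++ [ (serviceModule.result.final.${machineName}.${modName} or { }) ]
) [ ] mappedServices;
```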
Implementation Steps
1) Service API extensions
- Add options.darwinModule to roles.*.perInstance and perMachine (see anchors above).
- Keep defaults to {} so services can omit it safely.
2) Aggregation logic
- result.allRoles: emit darwinModule wrapper from instanceRes.darwinModule.
- result.final:
- Collect instanceResults.darwinModules alongside instanceResults.nixosModules.
- Produce final darwinModule with [ machineResult.darwinModule ] ++ instanceResults.darwinModules.
- Leave exports logic unchanged.
3) Machine wiring
- modules/clan/distributed-services.nix: choose .darwinModule vs .nixosModule based on inventory.machines.<name>.machineClass.
- modules/clan/module.nix: include v.machineImports for both OS classes.
4) Example migration (users)
- Add darwinModule in clanServices/users/default.nix.
- Validate that users service evaluates for a darwin machine and does not reference nixos-specific options.
5) Documentation
- Update ADR docs to mention darwinModule in Result attributes and examples.
- Add a short “Authoring for Darwin” snippet showing perInstance/perMachine returning both modules.
6) Tests and verification
- Unit-level: extend lib/inventory/distributed-service/tests to assert presence of result.final.<machine>.darwinModule when perInstance/perMachine return it.
- Integration-level: evaluate a sample darwin machine (e.g., inventory.json has test-darwin-machine) and assert clan.darwinModules.<machine> includes the aggregated module.
- Sanity: ensure existing nixos-only services still evaluate unchanged.
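A unit-level assertion for step 6 might look like this nix-unit sketch, following the `res.importedModulesEvaluated.self-A` access pattern used by the eval tests elsewhere in this diff (the harness `res` and the machine name are hypothetical):
```nix
test_darwin_module_present = {
  # The aggregated result should expose a darwinModule for the darwin machine
  expr =
    res.importedModulesEvaluated.self-A.result.final."test-darwin-machine"
    ? darwinModule;
  expected = true;
};
```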
Backward Compatibility
- Existing services that only return nixosModule continue to work.
- Darwin machines won't import service modules until services provide darwinModule, avoiding accidental breakage.
- extraModules remain applied only to nixos aggregation initially to prevent nixos-only modules from breaking darwin evaluation. We can add OS-specific extraModules in a follow-up (e.g., roles.*.extraModulesDarwin).
Acceptance Criteria
- Services can return darwinModule in perInstance/perMachine without errors.
- Darwin machines import aggregated darwinModule outputs from all participating services.
- nixos behavior remains unchanged for existing services.
- Documentation updated to reflect the new attribute and example.
Rollout Notes
- Start by updating clanServices/users as a working example.
- Encourage service authors to add darwinModule incrementally; no global migration is required.

View File

@@ -58,53 +58,51 @@
pkgs.buildPackages.xorg.lndir
pkgs.glibcLocales
pkgs.kbd.out
self.nixosConfigurations."test-flash-machine-${pkgs.stdenv.hostPlatform.system}".pkgs.perlPackages.ConfigIniFiles
self.nixosConfigurations."test-flash-machine-${pkgs.stdenv.hostPlatform.system}".pkgs.perlPackages.FileSlurp
self.nixosConfigurations."test-flash-machine-${pkgs.hostPlatform.system}".pkgs.perlPackages.ConfigIniFiles
self.nixosConfigurations."test-flash-machine-${pkgs.hostPlatform.system}".pkgs.perlPackages.FileSlurp
pkgs.bubblewrap
self.nixosConfigurations."test-flash-machine-${pkgs.stdenv.hostPlatform.system}".config.system.build.toplevel
self.nixosConfigurations."test-flash-machine-${pkgs.stdenv.hostPlatform.system}".config.system.build.diskoScript
self.nixosConfigurations."test-flash-machine-${pkgs.stdenv.hostPlatform.system}".config.system.build.diskoScript.drvPath
self.nixosConfigurations."test-flash-machine-${pkgs.hostPlatform.system}".config.system.build.toplevel
self.nixosConfigurations."test-flash-machine-${pkgs.hostPlatform.system}".config.system.build.diskoScript
self.nixosConfigurations."test-flash-machine-${pkgs.hostPlatform.system}".config.system.build.diskoScript.drvPath
]
++ builtins.map (i: i.outPath) (builtins.attrValues self.inputs);
closureInfo = pkgs.closureInfo { rootPaths = dependencies; };
in
{
# Skip flash test on aarch64-linux for now as it's too slow
checks =
lib.optionalAttrs (pkgs.stdenv.isLinux && pkgs.stdenv.hostPlatform.system != "aarch64-linux")
{
nixos-test-flash = self.clanLib.test.baseTest {
name = "flash";
nodes.target = {
virtualisation.emptyDiskImages = [ 4096 ];
virtualisation.memorySize = 4096;
checks = lib.optionalAttrs (pkgs.stdenv.isLinux && pkgs.hostPlatform.system != "aarch64-linux") {
nixos-test-flash = self.clanLib.test.baseTest {
name = "flash";
nodes.target = {
virtualisation.emptyDiskImages = [ 4096 ];
virtualisation.memorySize = 4096;
virtualisation.useNixStoreImage = true;
virtualisation.writableStore = true;
virtualisation.useNixStoreImage = true;
virtualisation.writableStore = true;
environment.systemPackages = [ self.packages.${pkgs.system}.clan-cli ];
environment.etc."install-closure".source = "${closureInfo}/store-paths";
environment.systemPackages = [ self.packages.${pkgs.system}.clan-cli ];
environment.etc."install-closure".source = "${closureInfo}/store-paths";
nix.settings = {
substituters = lib.mkForce [ ];
hashed-mirrors = null;
connect-timeout = lib.mkForce 3;
flake-registry = "";
experimental-features = [
"nix-command"
"flakes"
];
};
};
testScript = ''
start_all()
machine.succeed("echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIRWUusawhlIorx7VFeQJHmMkhl9X3QpnvOdhnV/bQNG root@target' > ./test_id_ed25519.pub")
# Some distros like to automount disks with spaces
machine.succeed('mkdir -p "/mnt/with spaces" && mkfs.ext4 /dev/vdc && mount /dev/vdc "/mnt/with spaces"')
machine.succeed("clan flash write --ssh-pubkey ./test_id_ed25519.pub --keymap de --language de_DE.UTF-8 --debug --flake ${self.checks.x86_64-linux.clan-core-for-checks} --yes --disk main /dev/vdc test-flash-machine-${pkgs.stdenv.hostPlatform.system}")
'';
} { inherit pkgs self; };
nix.settings = {
substituters = lib.mkForce [ ];
hashed-mirrors = null;
connect-timeout = lib.mkForce 3;
flake-registry = "";
experimental-features = [
"nix-command"
"flakes"
];
};
};
testScript = ''
start_all()
machine.succeed("echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIRWUusawhlIorx7VFeQJHmMkhl9X3QpnvOdhnV/bQNG root@target' > ./test_id_ed25519.pub")
# Some distros like to automount disks with spaces
machine.succeed('mkdir -p "/mnt/with spaces" && mkfs.ext4 /dev/vdc && mount /dev/vdc "/mnt/with spaces"')
machine.succeed("clan flash write --ssh-pubkey ./test_id_ed25519.pub --keymap de --language de_DE.UTF-8 --debug --flake ${self.checks.x86_64-linux.clan-core-for-checks} --yes --disk main /dev/vdc test-flash-machine-${pkgs.hostPlatform.system}")
'';
} { inherit pkgs self; };
};
};
}

View File

@@ -160,9 +160,9 @@
closureInfo = pkgs.closureInfo {
rootPaths = [
privateInputs.clan-core-for-checks
self.nixosConfigurations."test-install-machine-${pkgs.stdenv.hostPlatform.system}".config.system.build.toplevel
self.nixosConfigurations."test-install-machine-${pkgs.stdenv.hostPlatform.system}".config.system.build.initialRamdisk
self.nixosConfigurations."test-install-machine-${pkgs.stdenv.hostPlatform.system}".config.system.build.diskoScript
self.nixosConfigurations."test-install-machine-${pkgs.hostPlatform.system}".config.system.build.toplevel
self.nixosConfigurations."test-install-machine-${pkgs.hostPlatform.system}".config.system.build.initialRamdisk
self.nixosConfigurations."test-install-machine-${pkgs.hostPlatform.system}".config.system.build.diskoScript
pkgs.stdenv.drvPath
pkgs.bash.drvPath
pkgs.buildPackages.xorg.lndir
@@ -215,7 +215,7 @@
# Prepare test flake and Nix store
flake_dir = prepare_test_flake(
temp_dir,
"${self.checks.${pkgs.stdenv.hostPlatform.system}.clan-core-for-checks}",
"${self.checks.${pkgs.hostPlatform.system}.clan-core-for-checks}",
"${closureInfo}"
)
@@ -296,7 +296,7 @@
# Prepare test flake and Nix store
flake_dir = prepare_test_flake(
temp_dir,
"${self.checks.${pkgs.stdenv.hostPlatform.system}.clan-core-for-checks}",
"${self.checks.${pkgs.hostPlatform.system}.clan-core-for-checks}",
"${closureInfo}"
)

View File

@@ -2,7 +2,7 @@
let
cli = self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli-full;
cli = self.packages.${pkgs.hostPlatform.system}.clan-cli-full;
ollama-model = pkgs.callPackage ./qwen3-4b-instruct.nix { };
in
@@ -53,7 +53,7 @@ in
pytest
pytest-xdist
(cli.pythonRuntime.pkgs.toPythonModule cli)
self.legacyPackages.${pkgs.stdenv.hostPlatform.system}.nixosTestLib
self.legacyPackages.${pkgs.hostPlatform.system}.nixosTestLib
]
))
];

View File

@@ -2,7 +2,7 @@
let
cli = self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli-full;
cli = self.packages.${pkgs.hostPlatform.system}.clan-cli-full;
in
{
name = "systemd-abstraction";

View File

@@ -115,9 +115,9 @@
let
closureInfo = pkgs.closureInfo {
rootPaths = [
self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli
self.checks.${pkgs.stdenv.hostPlatform.system}.clan-core-for-checks
self.clanInternals.machines.${pkgs.stdenv.hostPlatform.system}.test-update-machine.config.system.build.toplevel
self.packages.${pkgs.hostPlatform.system}.clan-cli
self.checks.${pkgs.hostPlatform.system}.clan-core-for-checks
self.clanInternals.machines.${pkgs.hostPlatform.system}.test-update-machine.config.system.build.toplevel
pkgs.stdenv.drvPath
pkgs.bash.drvPath
pkgs.buildPackages.xorg.lndir
@@ -132,7 +132,7 @@
imports = [ self.nixosModules.test-update-machine ];
};
extraPythonPackages = _p: [
self.legacyPackages.${pkgs.stdenv.hostPlatform.system}.nixosTestLib
self.legacyPackages.${pkgs.hostPlatform.system}.nixosTestLib
];
testScript = ''
@@ -154,7 +154,7 @@
# Prepare test flake and Nix store
flake_dir = prepare_test_flake(
temp_dir,
"${self.checks.${pkgs.stdenv.hostPlatform.system}.clan-core-for-checks}",
"${self.checks.${pkgs.hostPlatform.system}.clan-core-for-checks}",
"${closureInfo}"
)
(flake_dir / ".clan-flake").write_text("") # Ensure .clan-flake exists
@@ -226,7 +226,7 @@
"--to",
"ssh://root@192.168.1.1",
"--no-check-sigs",
f"${self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli}",
f"${self.packages.${pkgs.hostPlatform.system}.clan-cli}",
"--extra-experimental-features", "nix-command flakes",
],
check=True,
@@ -242,7 +242,7 @@
"-o", "UserKnownHostsFile=/dev/null",
"-o", "StrictHostKeyChecking=no",
f"root@192.168.1.1",
"${self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli}/bin/clan",
"${self.packages.${pkgs.hostPlatform.system}.clan-cli}/bin/clan",
"machines",
"update",
"--debug",
@@ -270,7 +270,7 @@
# Run clan update command
subprocess.run([
"${self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli-full}/bin/clan",
"${self.packages.${pkgs.hostPlatform.system}.clan-cli-full}/bin/clan",
"machines",
"update",
"--debug",
@@ -297,7 +297,7 @@
# Run clan update command with --build-host
subprocess.run([
"${self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli-full}/bin/clan",
"${self.packages.${pkgs.hostPlatform.system}.clan-cli-full}/bin/clan",
"machines",
"update",
"--debug",

View File

@@ -1,6 +1,3 @@
!!! Danger "Experimental"
This service is experimental and will change in the future.
This service sets up a certificate authority (CA) that can issue certificates to
other machines in your clan. For this the `ca` role is used.
It additionally provides a `default` role, that can be applied to all machines

View File

@@ -1,6 +1,3 @@
!!! Danger "Experimental"
This service is experimental and will change in the future.
This module enables hosting clan-internal services easily, which can be resolved
inside your VPN. This allows defining a custom top-level domain (e.g. `.clan`)
and exposing endpoints from a machine to others, which will be

View File

@@ -1,83 +1 @@
!!! Danger "Experimental"
This service is for demonstration purpose only and may change in the future.
The Hello-World Clan Service is a minimal example showing how to build and register your own service.
It serves as a reference implementation and is used in clan-core CI tests to ensure compatibility.
## What it demonstrates
- How to define a basic Clan-compatible service.
- How to structure your service for discovery and configuration.
- How Clan services interact with NixOS.
## Testing
This service demonstrates two levels of testing to ensure quality and stability across releases:
1. **Unit & Integration Testing** — via [`nix-unit`](https://github.com/nix-community/nix-unit)
2. **End-to-End Testing** — via **NixOS VM tests**, which we extended to support **container virtualization** for better performance.
We highly advocate following the [Practical Testing Pyramid](https://martinfowler.com/articles/practical-test-pyramid.html):
* Write **unit tests** for core logic and invariants.
* Add **one or two end-to-end (E2E)** tests to confirm your service starts and behaves correctly in a real NixOS environment.
NixOS is **untyped** and frequently changes; tests are the safest way to ensure long-term stability of services.
```
/ \
/ \
/ E2E \
/-------\
/ \
/Integration\
/-------------\
/ \
/ Unit Tests \
-------------------
```
### nix-unit
We highly advocate the usage of
[nix-unit](https://github.com/nix-community/nix-unit).
See the example in: tests/eval-tests.nix
If you use flake-parts you can use the [native integration](https://flake.parts/options/nix-unit.html).
If nix-unit succeeds, your NixOS evaluation should be mostly correct. A minimal sketch follows the tip below.
!!! Tip
- Ensure most used 'settings' and variants are tested.
- Think about some important edge-cases your system should handle.
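As an illustration, a nix-unit case is just an attribute set with `expr` and `expected`; nix-unit evaluates `expr` and compares the result. This sketch mirrors the hello-world assertion that appears later in this diff (the `self` argument and the machine name `jon` are taken from that test and are illustrative):
```nix
# Sketch of a tests/eval-tests.nix entry; 'self' is the flake under test
{ self, ... }:
{
  test_greeting = {
    # Assert the greeting the service writes to /etc/hello
    expr = self.nixosConfigurations.jon.config.environment.etc."hello".text;
    expected = "Good evening World!";
  };
}
```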
### NixOS VM / Container Test
!!! Warning "Early Vars & clanTest"
The testing system around vars is experimental
`clanTest` is still experimental and enables container virtualization by default.
This is still early and might have some limitations.
Some minimal boilerplate is needed to use `clanTest`
```nix
let
  nixosLib = import (inputs.nixpkgs + "/nixos/lib") { };
in
nixosLib.runTest (
  { ... }:
  {
    imports = [
      self.modules.nixosTest.clanTest
      # Example in tests/vm/default.nix
      testModule
    ];
    hostPkgs = pkgs;
    # Uncomment if you don't want or cannot use containers
    # test.useContainers = false;
  }
)
```
This is a test README just to appease the eval warnings if we don't have one.

View File

@@ -8,7 +8,7 @@
{
_class = "clan.service";
manifest.name = "clan-core/hello-word";
manifest.description = "Minimal example clan service that greets the world";
manifest.description = "This is a test";
manifest.readme = builtins.readFile ./README.md;
# This service provides two roles: "morning" and "evening". Roles can be

View File

@@ -26,7 +26,7 @@ in
# The hello-world service being tested
../../clanServices/hello-world
# Required modules
../../nixosModules
../../nixosModules/clanCore
];
testName = "hello-world";
tests = ./tests/eval-tests.nix;

View File

@@ -4,7 +4,7 @@
...
}:
let
testClan = clanLib.clan {
testFlake = clanLib.clan {
self = { };
# Point to the folder of the module
# TODO: make this optional
@@ -33,20 +33,10 @@ let
};
in
{
/**
We highly advocate the usage of:
https://github.com/nix-community/nix-unit
If you use flake-parts you can use the native integration: https://flake.parts/options/nix-unit.html
*/
test_simple = {
# Allows inspection via the nix-repl
# Ignored by nix-unit; it only looks at 'expr' and 'expected'
inherit testClan;
config = testFlake.config;
# Assert that jon has the
# configured greeting in 'environment.etc.hello.text'
expr = testClan.config.nixosConfigurations.jon.config.environment.etc."hello".text;
expected = "Good evening World!";
expr = { };
expected = { };
};
}

View File

@@ -1,5 +1,8 @@
!!! Danger "Experimental"
This service is experimental and will change in the future.
🚧🚧🚧 Experimental 🚧🚧🚧
Use at your own risk.
We are still refining its interfaces, instability and breakages are expected.
---

View File

@@ -1,6 +1,3 @@
!!! Danger "Experimental"
This service is experimental and will change in the future.
## Usage
```

View File

@@ -39,7 +39,6 @@
...
}:
let
uniqueStrings = list: builtins.attrNames (builtins.groupBy lib.id list);
# Collect searchDomains from all servers in this instance
allServerSearchDomains = lib.flatten (
lib.mapAttrsToList (_name: machineConfig: machineConfig.settings.certificate.searchDomains or [ ]) (
@@ -47,7 +46,7 @@
)
);
# Merge client's searchDomains with all servers' searchDomains
searchDomains = uniqueStrings (settings.certificate.searchDomains ++ allServerSearchDomains);
searchDomains = lib.uniqueStrings (settings.certificate.searchDomains ++ allServerSearchDomains);
in
{
clan.core.vars.generators.openssh-ca = lib.mkIf (searchDomains != [ ]) {

View File

@@ -1,5 +1,8 @@
!!! Danger "Experimental"
This service is experimental and will change in the future.
🚧🚧🚧 Experimental 🚧🚧🚧
Use at your own risk.
We are still refining its interfaces, instability and breakages are expected.
---

View File

@@ -120,63 +120,6 @@
share = settings.share;
script =
(
if settings.prompt then
''
prompt_value=$(cat "$prompts"/user-password)
if [[ -n "''${prompt_value-}" ]]; then
echo "$prompt_value" | tr -d "\n" > "$out"/user-password
else
xkcdpass --numwords 4 --delimiter - --count 1 | tr -d "\n" > "$out"/user-password
fi
''
else
''
xkcdpass --numwords 4 --delimiter - --count 1 | tr -d "\n" > "$out"/user-password
''
)
+ ''
mkpasswd -s -m sha-512 < "$out"/user-password | tr -d "\n" > "$out"/user-password-hash
'';
};
};
darwinModule =
{
config,
pkgs,
lib,
...
}:
{
# For darwin, we currently only generate and manage the password secret.
# Hooking into actual macOS account management may be added later.
clan.core.vars.generators."user-password-${settings.user}" = {
files.user-password-hash.neededFor = "users";
files.user-password.deploy = false;
prompts.user-password = lib.mkIf settings.prompt {
display = {
group = settings.user;
label = "password";
required = false;
helperText = ''
Your password will be encrypted and stored securely using the secret store you've configured.
'';
};
type = "hidden";
persist = true;
description = "Leave empty to generate automatically";
};
runtimeInputs = [
pkgs.coreutils
pkgs.xkcdpass
pkgs.mkpasswd
];
share = settings.share;
script =
(
if settings.prompt then
@@ -206,7 +149,5 @@
# Immutable users to ensure that this module has exclusive control over the users.
users.mutableUsers = false;
};
# No-op for darwin by default; can be extended later if needed.
darwinModule = { };
};
}

View File

@@ -41,14 +41,14 @@ let
# In this case it is 'self-zerotier-redux'
# This is usually only used internally, but we can use it to test the evaluation of service module in isolation
# evaluatedService =
# testFlake.clanInternals.inventoryClass.distributedServices.servicesEval.config.mappedServices.self-zerotier-redux.config;
# testFlake.clanInternals.inventoryClass.distributedServices.importedModulesEvaluated.self-zerotier-redux.config;
in
{
test_simple = {
inherit testFlake;
expr =
testFlake.config.clan.clanInternals.inventoryClass.distributedServices.servicesEval.config.mappedServices.self-wifi.config;
testFlake.config.clan.clanInternals.inventoryClass.distributedServices.importedModulesEvaluated.self-wifi.config;
expected = 1;
# expr = {

View File

@@ -1,9 +1,12 @@
!!! Danger "Experimental"
This service is experimental and will change in the future.
🚧🚧🚧 Experimental 🚧🚧🚧
Use at your own risk.
We are still refining its interfaces, instability and breakages are expected.
---
This module sets up [yggdrasil](https://yggdrasil-network.github.io/) across your clan.
This module sets up [yggdrasil](https://yggdrasil-network.github.io/) across your clan.
Yggdrasil is designed to be a future-proof and decentralised alternative to the
structured routing protocols commonly used today on the internet. Inside your

View File

@@ -140,9 +140,6 @@
pkgs,
...
}:
let
uniqueStrings = list: builtins.attrNames (builtins.groupBy lib.id list);
in
{
imports = [
(import ./shared.nix {
@@ -159,7 +156,7 @@
config = {
systemd.services.zerotier-inventory-autoaccept =
let
machines = uniqueStrings (
machines = lib.uniqueStrings (
(lib.optionals (roles ? moon) (lib.attrNames roles.moon.machines))
++ (lib.optionals (roles ? controller) (lib.attrNames roles.controller.machines))
++ (lib.optionals (roles ? peer) (lib.attrNames roles.peer.machines))

devFlake/flake.lock (generated, 18 changed lines)
View File

@@ -105,11 +105,11 @@
},
"nixpkgs-dev": {
"locked": {
"lastModified": 1762328495,
"narHash": "sha256-IUZvw5kvLiExApP9+SK/styzEKSqfe0NPclu9/z85OQ=",
"lastModified": 1761631514,
"narHash": "sha256-VsXz+2W4DFBozzppbF9SXD9pNcv17Z+c/lYXvPJi/eI=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "4c621660e393922cf68cdbfc40eb5a2d54d3989a",
"rev": "a0b0d4b52b5f375658ca8371dc49bff171dbda91",
"type": "github"
},
"original": {
@@ -128,11 +128,11 @@
]
},
"locked": {
"lastModified": 1761730856,
"narHash": "sha256-t1i5p/vSWwueZSC0Z2BImxx3BjoUDNKyC2mk24krcMY=",
"lastModified": 1760652422,
"narHash": "sha256-C88Pgz38QIl9JxQceexqL2G7sw9vodHWx1Uaq+NRJrw=",
"owner": "NuschtOS",
"repo": "search",
"rev": "e29de6db0cb3182e9aee75a3b1fd1919d995d85b",
"rev": "3ebeebe8b6a49dfb11f771f761e0310f7c48d726",
"type": "github"
},
"original": {
@@ -208,11 +208,11 @@
"nixpkgs": []
},
"locked": {
"lastModified": 1762366246,
"narHash": "sha256-3xc/f/ZNb5ma9Fc9knIzEwygXotA+0BZFQ5V5XovSOQ=",
"lastModified": 1761311587,
"narHash": "sha256-Msq86cR5SjozQGCnC6H8C+0cD4rnx91BPltZ9KK613Y=",
"owner": "numtide",
"repo": "treefmt-nix",
"rev": "a82c779ca992190109e431d7d680860e6723e048",
"rev": "2eddae033e4e74bf581c2d1dfa101f9033dbd2dc",
"type": "github"
},
"original": {

View File

@@ -51,7 +51,7 @@ Make sure you have the following:
**Note:** This creates a new directory in your current location
```shellSession
nix run "https://git.clan.lol/clan/clan-core/archive/main.tar.gz#clan-cli" --refresh -- flakes create
nix run https://git.clan.lol/clan/clan-core/archive/main.tar.gz#clan-cli --refresh -- flakes create
```
3. Enter a **name** in the prompt:

View File

@@ -150,61 +150,10 @@ Those are very similar to NixOS VM tests, as in they run virtualized nixos machi
As of now the container test driver is a downstream development in clan-core.
Basically everything stated under the NixOS VM tests sections applies here, except some limitations.
### Using Container Tests vs VM Tests
Limitations:
Container tests are **enabled by default** for all tests using the clan testing framework.
They offer significant performance advantages over VM tests:
- **Faster startup**
- **Lower resource usage**: No full kernel boot or hardware emulation overhead
To control whether a test uses containers or VMs, use the `clan.test.useContainers` option:
```nix
{
clan = {
directory = ./.;
test.useContainers = true; # Use containers (default)
# test.useContainers = false; # Use VMs instead
};
}
```
**When to use VM tests instead of container tests:**
- Testing kernel features, modules, or boot processes
- Testing hardware-specific features
- When you need full system isolation
### System Requirements for Container Tests
Container tests require the **`uid-range`** system feature in the Nix sandbox.
This feature allows Nix to allocate a range of UIDs for containers to use, enabling `systemd-nspawn` containers to run properly inside the Nix build sandbox.
**Configuration:**
The `uid-range` feature requires the `auto-allocate-uids` setting to be enabled in your Nix configuration.
To verify or enable it, add to your `/etc/nix/nix.conf` or NixOS configuration:
```nix
nix.settings.experimental-features = [ "auto-allocate-uids" ];
nix.settings.auto-allocate-uids = true;
nix.settings.system-features = [ "uid-range" ];
```
**Technical details:**
- Container tests set `requiredSystemFeatures = [ "uid-range" ];` in their derivation (see `lib/test/container-test-driver/driver-module.nix:98`)
- Without this feature, containers cannot properly manage user namespaces and will fail to start
### Limitations
- Cannot run in interactive mode, however while the container test runs, it logs a nsenter command that can be used to log into each of the containers.
- Early implementation and limited by features.
- Cannot run in interactive mode, however while the container test runs, it logs a nsenter command that can be used to log into each of the container.
- setuid binaries don't work
### Where to find examples for NixOS container tests

flake.lock (generated, 36 changed lines)
View File

@@ -31,11 +31,11 @@
]
},
"locked": {
"lastModified": 1762276996,
"narHash": "sha256-TtcPgPmp2f0FAnc+DMEw4ardEgv1SGNR3/WFGH0N19M=",
"lastModified": 1760701190,
"narHash": "sha256-y7UhnWlER8r776JsySqsbTUh2Txf7K30smfHlqdaIQw=",
"owner": "nix-community",
"repo": "disko",
"rev": "af087d076d3860760b3323f6b583f4d828c1ac17",
"rev": "3a9450b26e69dcb6f8de6e2b07b3fc1c288d85f5",
"type": "github"
},
"original": {
@@ -51,11 +51,11 @@
]
},
"locked": {
"lastModified": 1762040540,
"narHash": "sha256-z5PlZ47j50VNF3R+IMS9LmzI5fYRGY/Z5O5tol1c9I4=",
"lastModified": 1760948891,
"narHash": "sha256-TmWcdiUUaWk8J4lpjzu4gCGxWY6/Ok7mOK4fIFfBuU4=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "0010412d62a25d959151790968765a70c436598b",
"rev": "864599284fc7c0ba6357ed89ed5e2cd5040f0c04",
"type": "github"
},
"original": {
@@ -71,11 +71,11 @@
]
},
"locked": {
"lastModified": 1762304480,
"narHash": "sha256-ikVIPB/ea/BAODk6aksgkup9k2jQdrwr4+ZRXtBgmSs=",
"lastModified": 1761339987,
"narHash": "sha256-IUaawVwItZKi64IA6kF6wQCLCzpXbk2R46dHn8sHkig=",
"owner": "nix-darwin",
"repo": "nix-darwin",
"rev": "b8c7ac030211f18bd1f41eae0b815571853db7a2",
"rev": "7cd9aac79ee2924a85c211d21fafd394b06a38de",
"type": "github"
},
"original": {
@@ -99,11 +99,11 @@
},
"nixos-facter-modules": {
"locked": {
"lastModified": 1762264948,
"narHash": "sha256-iaRf6n0KPl9hndnIft3blm1YTAyxSREV1oX0MFZ6Tk4=",
"lastModified": 1761137276,
"narHash": "sha256-4lDjGnWRBLwqKQ4UWSUq6Mvxu9r8DSqCCydodW/Jsi8=",
"owner": "nix-community",
"repo": "nixos-facter-modules",
"rev": "fa695bff9ec37fd5bbd7ee3181dbeb5f97f53c96",
"rev": "70bcd64225d167c7af9b475c4df7b5abba5c7de8",
"type": "github"
},
"original": {
@@ -115,10 +115,10 @@
"nixpkgs": {
"locked": {
"lastModified": 315532800,
"narHash": "sha256-LDT9wuUZtjPfmviCcVWif5+7j4kBI2mWaZwjNNeg4eg=",
"rev": "a7fc11be66bdfb5cdde611ee5ce381c183da8386",
"narHash": "sha256-yDxtm0PESdgNetiJN5+MFxgubBcLDTiuSjjrJiyvsvM=",
"rev": "d7f52a7a640bc54c7bb414cca603835bf8dd4b10",
"type": "tarball",
"url": "https://releases.nixos.org/nixpkgs/nixpkgs-25.11pre887438.a7fc11be66bd/nixexprs.tar.xz"
"url": "https://releases.nixos.org/nixpkgs/nixpkgs-25.11pre871443.d7f52a7a640b/nixexprs.tar.xz"
},
"original": {
"type": "tarball",
@@ -181,11 +181,11 @@
]
},
"locked": {
"lastModified": 1762366246,
"narHash": "sha256-3xc/f/ZNb5ma9Fc9knIzEwygXotA+0BZFQ5V5XovSOQ=",
"lastModified": 1761311587,
"narHash": "sha256-Msq86cR5SjozQGCnC6H8C+0cD4rnx91BPltZ9KK613Y=",
"owner": "numtide",
"repo": "treefmt-nix",
"rev": "a82c779ca992190109e431d7d680860e6723e048",
"rev": "2eddae033e4e74bf581c2d1dfa101f9033dbd2dc",
"type": "github"
},
"original": {

View File

@@ -2,7 +2,11 @@
lib,
clanLib,
}:
let
services = clanLib.callLib ./distributed-service/inventory-adapter.nix { };
in
{
inherit (services) mapInstances;
inventoryModule = {
_file = "clanLib.inventory.module";
imports = [

View File

@@ -28,15 +28,19 @@ in
elemType = submoduleWith {
class = "clan.service";
specialArgs = {
exports = config.exports;
directory = directory;
clanLib = specialArgs.clanLib;
exports = config.exports;
};
modules = [
(
{ name, ... }:
{
_module.args._ctx = [ name ];
_module.args.clanLib = specialArgs.clanLib;
_module.args.exports = config.exports;
_module.args.directory = directory;
}
)
./service-module.nix

View File

@@ -0,0 +1,171 @@
# Adapter function between the inventory.instances and the clan.service module
#
# Data flow:
# - inventory.instances -> Adapter -> clan.service module -> Service Resources (i.e. NixosModules per Machine, Vars per Service, etc.)
#
# What this file does:
#
# - Resolves the [Module] to an actual module-path and imports it.
# - Groups together all the same modules into a single import and creates all instances for it.
# - Resolves the inventory tags into machines. Tags don't exist at the service level.
# Also combines the settings for 'machines' and 'tags'.
{
lib,
clanLib,
...
}:
{
mapInstances =
{
# This is used to resolve the module imports from 'flake.inputs'
flakeInputs,
# The clan inventory
inventory,
directory,
clanCoreModules,
prefix ? [ ],
exportsModule,
}:
let
# machineHasTag = machineName: tagName: lib.elem tagName inventory.machines.${machineName}.tags;
# map the instances into the module
importedModuleWithInstances = lib.mapAttrs (
instanceName: instance:
let
resolvedModule = clanLib.resolveModule {
moduleSpec = instance.module;
inherit flakeInputs clanCoreModules;
};
# Every instance includes machines via roles
# :: { client :: ... }
instanceRoles = lib.mapAttrs (
roleName: role:
let
resolvedMachines = clanLib.inventory.resolveTags {
members = {
# Explicit members
machines = lib.attrNames role.machines;
# Resolved Members
tags = lib.attrNames role.tags;
};
inherit (inventory) machines;
inherit instanceName roleName;
};
in
# instances.<instanceName>.roles.<roleName> =
# Remove "tags", they are resolved into "machines"
(removeAttrs role [ "tags" ])
// {
machines = lib.genAttrs resolvedMachines.machines (
machineName:
let
machineSettings = instance.roles.${roleName}.machines.${machineName}.settings or { };
in
# TODO: tag settings
# Wait for this feature until option introspection for 'settings' is done.
# This might get too complex to handle otherwise.
# settingsViaTags = lib.filterAttrs (
# tagName: _: machineHasTag machineName tagName
# ) instance.roles.${roleName}.tags;
{
# TODO: Do we want to wrap settings with
# setDefaultModuleLocation "inventory.instances.${instanceName}.roles.${roleName}.tags.${tagName}";
settings = {
imports = [
machineSettings
]; # ++ lib.attrValues (lib.mapAttrs (_tagName: v: v.settings) settingsViaTags);
};
}
);
}
) instance.roles;
in
{
inherit (instance) module;
inherit resolvedModule instanceRoles;
}
) inventory.instances or { };
# Group the instances by the module they resolve to
# This is necessary to evaluate the module in a single pass
# :: { <module.input>_<module.name> :: [ { name, value } ] }
# Since 'perMachine' needs access to all the instances we should include them as a whole
grouped = lib.foldlAttrs (
acc: instanceName: instance:
let
inputName = if instance.module.input == null then "<clan-core>" else instance.module.input;
id = inputName + "-" + instance.module.name;
in
acc
// {
${id} = acc.${id} or [ ] ++ [
{
inherit instanceName instance;
}
];
}
) { } importedModuleWithInstances;
# servicesEval.config.mappedServices.self-A.result.final.jon.nixosModule
allMachines = lib.mapAttrs (machineName: _: {
# This is the list of nixosModules for each machine
machineImports = lib.foldlAttrs (
acc: _module_ident: serviceModule:
acc ++ [ serviceModule.result.final.${machineName}.nixosModule or { } ]
) [ ] servicesEval.config.mappedServices;
}) inventory.machines or { };
evalServices =
{ modules, prefix }:
lib.evalModules {
class = "clan";
specialArgs = {
inherit clanLib;
_ctx = prefix;
};
modules = [
(import ./all-services-wrapper.nix { inherit directory; })
]
++ modules;
};
servicesEval = evalServices {
inherit prefix;
modules = [
{
inherit exportsModule;
mappedServices = lib.mapAttrs (_module_ident: instances: {
imports = [
# Import the resolved module.
# i.e. clan.modules.admin
{
options.module = lib.mkOption {
type = lib.types.raw;
default = (builtins.head instances).instance.module;
};
}
(builtins.head instances).instance.resolvedModule
] # Include all the instances that correlate to the resolved module
++ (builtins.map (v: {
instances.${v.instanceName}.roles = v.instance.instanceRoles;
}) instances);
}) grouped;
}
];
};
importedModulesEvaluated = servicesEval.config.mappedServices;
in
{
inherit
servicesEval
importedModuleWithInstances
# Exposed for testing
grouped
allMachines
importedModulesEvaluated
;
};
}

View File

@@ -7,14 +7,10 @@
...
}:
let
inherit (lib) mkOption types;
inherit (lib) mkOption types uniqueStrings;
inherit (types) attrsWith submoduleWith;
errorContext = "Error context: ${lib.concatStringsSep "." _ctx}";
# TODO:
# Remove once this gets merged upstream; performs in O(n*log(n) instead of O(n^2))
# https://github.com/NixOS/nixpkgs/pull/355616/files
uniqueStrings = list: builtins.attrNames (builtins.groupBy lib.id list);
/**
Merges the role- and machine-settings using the role interface
@@ -561,15 +557,6 @@ in
```
'';
};
options.darwinModule = mkOption {
type = types.deferredModule;
default = { };
description = ''
A single nix-darwin module for the instance.
This mirrors `nixosModule` but targets darwin machines.
'';
};
})
];
};
@@ -695,15 +682,6 @@ in
```
'';
};
options.darwinModule = mkOption {
type = types.deferredModule;
default = { };
description = ''
A single nix-darwin module for the machine.
This mirrors `nixosModule` but targets darwin machines.
'';
};
})
];
};
@@ -908,11 +886,6 @@ in
lib.setDefaultModuleLocation "via inventory.instances.${instanceName}.roles.${roleName}" s
) instanceCfg.roles.${roleName}.extraModules);
};
darwinModule = {
imports = [
instanceRes.darwinModule
];
};
}
) instanceCfg.roles.${roleName}.machines or { };
@@ -1002,24 +975,11 @@ in
else
instanceAcc.nixosModules
);
darwinModules = (
if instance.allMachines.${machineName}.darwinModule or { } != { } then
instanceAcc.darwinModules
++ [
(lib.setDefaultModuleLocation
"Via instances.${instanceName}.roles.${roleName}.machines.${machineName}"
instance.allMachines.${machineName}.darwinModule
)
]
else
instanceAcc.darwinModules
);
}
) roleAcc role.allInstances
)
{
nixosModules = [ ];
darwinModules = [ ];
# ...
}
config.result.allRoles;
@@ -1057,12 +1017,6 @@ in
]
++ instanceResults.nixosModules;
};
darwinModule = {
imports = [
(lib.setDefaultModuleLocation "Via ${config.manifest.name}.perMachine - machine='${machineName}';" machineResult.darwinModule)
]
++ instanceResults.darwinModules;
};
}
) config.result.allMachines;
};

View File

@@ -4,53 +4,63 @@
...
}:
let
inherit (lib)
evalModules
;
flakeInputsFixture = {
upstream.clan.modules = {
uzzi = {
_class = "clan.service";
manifest = {
name = "uzzi-from-upstream";
};
};
};
};
evalInventory =
m:
(evalModules {
# Static modules
modules = [
clanLib.inventory.inventoryModule
{
_file = "test file";
tags.all = [ ];
tags.nixos = [ ];
tags.darwin = [ ];
}
{
modules.test = { };
}
m
];
}).config;
createTestClan =
testClan:
callInventoryAdapter =
inventoryModule:
let
res = clanLib.clan ({
# Static / mocked
specialArgs = {
clan-core = {
clan.modules = { };
inventory = evalInventory inventoryModule;
flakeInputsFixture = {
self.clan.modules = inventoryModule.modules or { };
# Example upstream module
upstream.clan.modules = {
uzzi = {
_class = "clan.service";
manifest = {
name = "uzzi-from-upstream";
};
};
};
self.inputs = flakeInputsFixture // {
self.clan = res.config;
};
directory = ./.;
exportsModule = { };
imports = [
testClan
];
});
};
in
res;
clanLib.inventory.mapInstances {
directory = ./.;
clanCoreModules = { };
flakeInputs = flakeInputsFixture;
inherit inventory;
exportsModule = { };
};
in
{
extraModules = import ./extraModules.nix { inherit clanLib; };
exports = import ./exports.nix { inherit lib clanLib; };
settings = import ./settings.nix { inherit lib createTestClan; };
specialArgs = import ./specialArgs.nix { inherit lib createTestClan; };
resolve_module_spec = import ./import_module_spec.nix {
inherit lib createTestClan;
};
settings = import ./settings.nix { inherit lib callInventoryAdapter; };
specialArgs = import ./specialArgs.nix { inherit lib callInventoryAdapter; };
resolve_module_spec = import ./import_module_spec.nix { inherit lib callInventoryAdapter; };
test_simple =
let
res = createTestClan {
res = callInventoryAdapter {
# Authored module
# A minimal module looks like this
# It isn't exactly doing anything but it's a valid module that produces an output
@@ -61,7 +71,7 @@ in
};
};
# User config
inventory.instances."instance_foo" = {
instances."instance_foo" = {
module = {
name = "simple-module";
};
@@ -71,7 +81,7 @@ in
{
# Test that the module is mapped into the output
# We might change the attribute name in the future
expr = res.config._services.mappedServices ? "<clan-core>-simple-module";
expr = res.importedModulesEvaluated ? "<clan-core>-simple-module";
expected = true;
inherit res;
};
@@ -82,7 +92,7 @@ in
# All instances should be included within one evaluation to make all of them available
test_module_grouping =
let
res = createTestClan {
res = callInventoryAdapter {
# Authored module
# A minimal module looks like this
# It isn't exactly doing anything but it's a valid module that produces an output
@@ -102,19 +112,18 @@ in
perMachine = { }: { };
};
# User config
inventory.instances."instance_foo" = {
instances."instance_foo" = {
module = {
name = "A";
};
};
inventory.instances."instance_bar" = {
instances."instance_bar" = {
module = {
name = "B";
};
};
inventory.instances."instance_baz" = {
instances."instance_baz" = {
module = {
name = "A";
};
@@ -124,16 +133,16 @@ in
{
# Test that the module is mapped into the output
# We might change the attribute name in the future
expr = lib.attrNames res.config._services.mappedServices;
expected = [
"<clan-core>-A"
"<clan-core>-B"
];
expr = lib.mapAttrs (_n: v: builtins.length v) res.grouped;
expected = {
"<clan-core>-A" = 2;
"<clan-core>-B" = 1;
};
};
test_creates_all_instances =
let
res = createTestClan {
res = callInventoryAdapter {
# Authored module
# A minimal module looks like this
# It isn't exactly doing anything but it's a valid module that produces an output
@@ -145,24 +154,22 @@ in
perMachine = { }: { };
};
inventory = {
instances."instance_foo" = {
module = {
name = "A";
input = "self";
};
instances."instance_foo" = {
module = {
name = "A";
input = "self";
};
instances."instance_bar" = {
module = {
name = "A";
input = "self";
};
};
instances."instance_bar" = {
module = {
name = "A";
input = "self";
};
instances."instance_zaza" = {
module = {
name = "B";
input = null;
};
};
instances."instance_zaza" = {
module = {
name = "B";
input = null;
};
};
};
@@ -170,7 +177,7 @@ in
{
# Test that the module is mapped into the output
# We might change the attribute name in the future
expr = lib.attrNames res.config._services.mappedServices.self-A.instances;
expr = lib.attrNames res.importedModulesEvaluated.self-A.instances;
expected = [
"instance_bar"
"instance_foo"
@@ -180,7 +187,7 @@ in
# Membership via roles
test_add_machines_directly =
let
res = createTestClan {
res = callInventoryAdapter {
# Authored module
# A minimal module looks like this
# It isn't exactly doing anything but it's a valid module that produces an output
@@ -195,40 +202,38 @@ in
# perMachine = {}: {};
};
inventory = {
machines = {
jon = { };
sara = { };
hxi = { };
machines = {
jon = { };
sara = { };
hxi = { };
};
instances."instance_foo" = {
module = {
name = "A";
input = "self";
};
instances."instance_foo" = {
module = {
name = "A";
input = "self";
};
roles.peer.machines.jon = { };
roles.peer.machines.jon = { };
};
instances."instance_bar" = {
module = {
name = "A";
input = "self";
};
instances."instance_bar" = {
module = {
name = "A";
input = "self";
};
roles.peer.machines.sara = { };
};
instances."instance_zaza" = {
module = {
name = "B";
input = null;
};
roles.peer.tags.all = { };
roles.peer.machines.sara = { };
};
instances."instance_zaza" = {
module = {
name = "B";
input = null;
};
roles.peer.tags.all = { };
};
};
in
{
# Test that the module is mapped into the output
# We might change the attribute name in the future
expr = lib.attrNames res.config._services.mappedServices.self-A.result.allMachines;
expr = lib.attrNames res.importedModulesEvaluated.self-A.result.allMachines;
expected = [
"jon"
"sara"
@@ -238,7 +243,7 @@ in
# Membership via tags
test_add_machines_via_tags =
let
res = createTestClan {
res = callInventoryAdapter {
# Authored module
# A minimal module looks like this
# It isn't exactly doing anything but it's a valid module that produces an output
@@ -252,37 +257,35 @@ in
# perMachine = {}: {};
};
inventory = {
machines = {
jon = {
tags = [ "foo" ];
};
sara = {
tags = [ "foo" ];
};
hxi = { };
machines = {
jon = {
tags = [ "foo" ];
};
instances."instance_foo" = {
module = {
name = "A";
input = "self";
};
roles.peer.tags.foo = { };
sara = {
tags = [ "foo" ];
};
instances."instance_zaza" = {
module = {
name = "B";
input = null;
};
roles.peer.tags.all = { };
hxi = { };
};
instances."instance_foo" = {
module = {
name = "A";
input = "self";
};
roles.peer.tags.foo = { };
};
instances."instance_zaza" = {
module = {
name = "B";
input = null;
};
roles.peer.tags.all = { };
};
};
in
{
# Test that the module is mapped into the output
# We might change the attribute name in the future
expr = lib.attrNames res.config._services.mappedServices.self-A.result.allMachines;
expr = lib.attrNames res.importedModulesEvaluated.self-A.result.allMachines;
expected = [
"jon"
"sara"
@@ -290,9 +293,6 @@ in
};
machine_imports = import ./machine_imports.nix { inherit lib clanLib; };
per_machine_args = import ./per_machine_args.nix { inherit lib createTestClan; };
per_instance_args = import ./per_instance_args.nix {
inherit lib;
callInventoryAdapter = createTestClan;
};
per_machine_args = import ./per_machine_args.nix { inherit lib callInventoryAdapter; };
per_instance_args = import ./per_instance_args.nix { inherit lib callInventoryAdapter; };
}

View File

@@ -1,4 +1,4 @@
{ createTestClan, ... }:
{ callInventoryAdapter, ... }:
let
# Authored module
# A minimal module looks like this
@@ -23,13 +23,10 @@ let
resolve =
spec:
createTestClan {
inherit modules;
inventory = {
inherit machines;
instances."instance_foo" = {
module = spec;
};
callInventoryAdapter {
inherit modules machines;
instances."instance_foo" = {
module = spec;
};
};
in
@@ -39,16 +36,25 @@ in
(resolve {
name = "A";
input = "self";
}).config._services.mappedServices.self-A.manifest.name;
expected = "network";
}).importedModuleWithInstances.instance_foo.resolvedModule;
expected = {
_class = "clan.service";
manifest = {
name = "network";
};
};
};
test_import_remote_module_by_name = {
expr =
(resolve {
name = "uzzi";
input = "upstream";
}).config._services.mappedServices.upstream-uzzi.manifest.name;
expected = "uzzi-from-upstream";
}).importedModuleWithInstances.instance_foo.resolvedModule;
expected = {
_class = "clan.service";
manifest = {
name = "uzzi-from-upstream";
};
};
};
}

View File

@@ -58,43 +58,39 @@ let
sara = { };
};
res = callInventoryAdapter {
inherit modules;
inventory = {
inherit machines;
instances."instance_foo" = {
module = {
name = "A";
input = "self";
};
roles.peer.machines.jon = {
settings.timeout = lib.mkForce "foo-peer-jon";
};
roles.peer = {
settings.timeout = "foo-peer";
};
roles.controller.machines.jon = { };
inherit modules machines;
instances."instance_foo" = {
module = {
name = "A";
input = "self";
};
instances."instance_bar" = {
module = {
name = "A";
input = "self";
};
roles.peer.machines.jon = {
settings.timeout = "bar-peer-jon";
};
roles.peer.machines.jon = {
settings.timeout = lib.mkForce "foo-peer-jon";
};
# TODO: move this into a separate test.
# Separate out the check that this module is never imported
# import the module "B" (undefined)
# All machines have this instance
instances."instance_zaza" = {
module = {
name = "B";
input = null;
};
roles.peer.tags.all = { };
roles.peer = {
settings.timeout = "foo-peer";
};
roles.controller.machines.jon = { };
};
instances."instance_bar" = {
module = {
name = "A";
input = "self";
};
roles.peer.machines.jon = {
settings.timeout = "bar-peer-jon";
};
};
# TODO: move this into a separate test.
# Separate out the check that this module is never imported
# import the module "B" (undefined)
# All machines have this instance
instances."instance_zaza" = {
module = {
name = "B";
input = null;
};
roles.peer.tags.all = { };
};
};
@@ -109,10 +105,9 @@ in
{
# settings should evaluate
test_per_instance_arguments = {
inherit res;
expr = {
instanceName =
res.config._services.mappedServices.self-A.result.allRoles.peer.allInstances."instance_foo".allMachines.jon.passthru.instanceName;
res.importedModulesEvaluated.self-A.result.allRoles.peer.allInstances."instance_foo".allMachines.jon.passthru.instanceName;
# settings are specific.
# Below we access:
@@ -120,11 +115,11 @@ in
# roles = peer
# machines = jon
settings =
res.config._services.mappedServices.self-A.result.allRoles.peer.allInstances.instance_foo.allMachines.jon.passthru.settings;
res.importedModulesEvaluated.self-A.result.allRoles.peer.allInstances.instance_foo.allMachines.jon.passthru.settings;
machine =
res.config._services.mappedServices.self-A.result.allRoles.peer.allInstances.instance_foo.allMachines.jon.passthru.machine;
res.importedModulesEvaluated.self-A.result.allRoles.peer.allInstances.instance_foo.allMachines.jon.passthru.machine;
roles =
res.config._services.mappedServices.self-A.result.allRoles.peer.allInstances.instance_foo.allMachines.jon.passthru.roles;
res.importedModulesEvaluated.self-A.result.allRoles.peer.allInstances.instance_foo.allMachines.jon.passthru.roles;
};
expected = {
instanceName = "instance_foo";
@@ -165,9 +160,9 @@ in
# TODO: Cannot be tested like this anymore
test_per_instance_settings_vendoring = {
x = res.config._services.mappedServices.self-A;
x = res.importedModulesEvaluated.self-A;
expr =
res.config._services.mappedServices.self-A.result.allRoles.peer.allInstances.instance_foo.allMachines.jon.passthru.vendoredSettings;
res.importedModulesEvaluated.self-A.result.allRoles.peer.allInstances.instance_foo.allMachines.jon.passthru.vendoredSettings;
expected = {
timeout = "config.thing";
};

View File

@@ -1,4 +1,4 @@
{ lib, createTestClan }:
{ lib, callInventoryAdapter }:
let
# Authored module
# A minimal module looks like this
@@ -39,40 +39,36 @@ let
jon = { };
sara = { };
};
res = createTestClan {
inherit modules;
inventory = {
inherit machines;
instances."instance_foo" = {
module = {
name = "A";
input = "self";
};
roles.peer.machines.jon = {
settings.timeout = lib.mkForce "foo-peer-jon";
};
roles.peer = {
settings.timeout = "foo-peer";
};
res = callInventoryAdapter {
inherit modules machines;
instances."instance_foo" = {
module = {
name = "A";
input = "self";
};
instances."instance_bar" = {
module = {
name = "A";
input = "self";
};
roles.peer.machines.jon = {
settings.timeout = "bar-peer-jon";
};
roles.peer.machines.jon = {
settings.timeout = lib.mkForce "foo-peer-jon";
};
instances."instance_zaza" = {
module = {
name = "B";
input = null;
};
roles.peer.tags.all = { };
roles.peer = {
settings.timeout = "foo-peer";
};
};
instances."instance_bar" = {
module = {
name = "A";
input = "self";
};
roles.peer.machines.jon = {
settings.timeout = "bar-peer-jon";
};
};
instances."instance_zaza" = {
module = {
name = "B";
input = null;
};
roles.peer.tags.all = { };
};
};
in
@@ -83,7 +79,7 @@ in
inherit res;
expr = {
hasMachineSettings =
res.config._services.mappedServices.self-A.result.allMachines.jon.passthru.instances.instance_foo.roles.peer.machines.jon
res.importedModulesEvaluated.self-A.result.allMachines.jon.passthru.instances.instance_foo.roles.peer.machines.jon
? settings;
# settings are specific.
@@ -92,10 +88,10 @@ in
# roles = peer
# machines = jon
specificMachineSettings =
res.config._services.mappedServices.self-A.result.allMachines.jon.passthru.instances.instance_foo.roles.peer.machines.jon.settings;
res.importedModulesEvaluated.self-A.result.allMachines.jon.passthru.instances.instance_foo.roles.peer.machines.jon.settings;
hasRoleSettings =
res.config._services.mappedServices.self-A.result.allMachines.jon.passthru.instances.instance_foo.roles.peer
res.importedModulesEvaluated.self-A.result.allMachines.jon.passthru.instances.instance_foo.roles.peer
? settings;
# settings are specific.
@@ -104,7 +100,7 @@ in
# roles = peer
# machines = *
specificRoleSettings =
res.config._services.mappedServices.self-A.result.allMachines.jon.passthru.instances.instance_foo.roles.peer;
res.importedModulesEvaluated.self-A.result.allMachines.jon.passthru.instances.instance_foo.roles.peer;
};
expected = {
hasMachineSettings = true;

View File

@@ -1,6 +1,6 @@
{ createTestClan, lib, ... }:
{ callInventoryAdapter, lib, ... }:
let
res = createTestClan {
res = callInventoryAdapter {
modules."A" = {
_class = "clan.service";
manifest = {
@@ -21,31 +21,28 @@ let
};
};
};
inventory = {
machines = {
jon = { };
sara = { };
machines = {
jon = { };
sara = { };
};
instances."instance_foo" = {
module = {
name = "A";
input = "self";
};
instances."instance_foo" = {
module = {
name = "A";
input = "self";
};
# Settings for both jon and sara
roles.peer.settings = {
timeout = 40;
};
# Jon overrides timeout
roles.peer.machines.jon = {
settings.timeout = lib.mkForce 42;
};
roles.peer.machines.sara = { };
# Settings for both jon and sara
roles.peer.settings = {
timeout = 40;
};
# Jon overrides timeout
roles.peer.machines.jon = {
settings.timeout = lib.mkForce 42;
};
roles.peer.machines.sara = { };
};
};
config = res.config._services.mappedServices.self-A;
config = res.servicesEval.config.mappedServices.self-A;
#
applySettings =

View File

@@ -1,6 +1,6 @@
{ createTestClan, lib, ... }:
{ callInventoryAdapter, lib, ... }:
let
res = createTestClan {
res = callInventoryAdapter {
modules."A" = m: {
_class = "clan.service";
config = {
@@ -14,21 +14,19 @@ let
default = m;
};
};
inventory = {
machines = {
jon = { };
};
instances."instance_foo" = {
module = {
name = "A";
input = "self";
};
roles.peer.machines.jon = { };
machines = {
jon = { };
};
instances."instance_foo" = {
module = {
name = "A";
input = "self";
};
roles.peer.machines.jon = { };
};
};
specialArgs = lib.attrNames res.config._services.mappedServices.self-A.test.specialArgs;
specialArgs = lib.attrNames res.servicesEval.config.mappedServices.self-A.test.specialArgs;
in
{
test_simple = {

View File

@@ -19,7 +19,6 @@
imports = [
./top-level-interface.nix
./module.nix
./distributed-services.nix
./checks.nix
];
}

View File

@@ -1,176 +0,0 @@
{
lib,
clanLib,
config,
clan-core,
...
}:
let
inherit (lib) mkOption types;
# Keep a reference to top-level
clanConfig = config;
inventory = clanConfig.inventory;
flakeInputs = clanConfig.self.inputs;
clanCoreModules = clan-core.clan.modules;
grouped = lib.foldlAttrs (
acc: instanceName: instance:
let
inputName = if instance.module.input == null then "<clan-core>" else instance.module.input;
id = inputName + "-" + instance.module.name;
in
acc
// {
${id} = acc.${id} or [ ] ++ [
{
inherit instanceName instance;
}
];
}
) { } importedModuleWithInstances;
importedModuleWithInstances = lib.mapAttrs (
instanceName: instance:
let
resolvedModule = clanLib.resolveModule {
moduleSpec = instance.module;
inherit flakeInputs clanCoreModules;
};
# Every instance includes machines via roles
# :: { client :: ... }
instanceRoles = lib.mapAttrs (
roleName: role:
let
resolvedMachines = clanLib.inventory.resolveTags {
members = {
# Explicit members
machines = lib.attrNames role.machines;
# Resolved Members
tags = lib.attrNames role.tags;
};
inherit (inventory) machines;
inherit instanceName roleName;
};
in
# instances.<instanceName>.roles.<roleName> =
# Remove "tags", they are resolved into "machines"
(removeAttrs role [ "tags" ])
// {
machines = lib.genAttrs resolvedMachines.machines (
machineName:
let
machineSettings = instance.roles.${roleName}.machines.${machineName}.settings or { };
in
# TODO: tag settings
# Wait for this feature until option introspection for 'settings' is done.
# This might get too complex to handle otherwise.
# settingsViaTags = lib.filterAttrs (
# tagName: _: machineHasTag machineName tagName
# ) instance.roles.${roleName}.tags;
{
# TODO: Do we want to wrap settings with
# setDefaultModuleLocation "inventory.instances.${instanceName}.roles.${roleName}.tags.${tagName}";
settings = {
imports = [
machineSettings
]; # ++ lib.attrValues (lib.mapAttrs (_tagName: v: v.settings) settingsViaTags);
};
}
);
}
) instance.roles;
in
{
inherit (instance) module;
inherit resolvedModule instanceRoles;
}
) inventory.instances or { };
in
{
_class = "clan";
options._services = mkOption {
visible = false;
description = ''
All service instances
!!! Danger "Internal API"
Do not rely on this API yet.
- Will be renamed to just 'services' in the future.
Once the name can be claimed again.
- Structure will change.
API will be declared as public after being simplified.
'';
type = types.submoduleWith {
# TODO: Remove specialArgs
specialArgs = {
inherit clanLib;
};
modules = [
(import ../../lib/inventory/distributed-service/all-services-wrapper.nix {
inherit (clanConfig) directory;
})
# Dependencies
{
exportsModule = clanConfig.exportsModule;
}
{
# TODO: Rename to "allServices"
# All services
mappedServices = lib.mapAttrs (_module_ident: instances: {
imports = [
# Import the resolved module.
# i.e. clan.modules.admin
{
options.module = lib.mkOption {
type = lib.types.raw;
default = (builtins.head instances).instance.module;
};
}
(builtins.head instances).instance.resolvedModule
] # Include all the instances that correlate to the resolved module
++ (builtins.map (v: {
instances.${v.instanceName}.roles = v.instance.instanceRoles;
}) instances);
}) grouped;
}
];
};
default = { };
};
options._allMachines = mkOption {
internal = true;
type = types.raw;
default = lib.mapAttrs (machineName: _: {
# This is the list of service modules for each machine (nixos or darwin)
machineImports = lib.foldlAttrs (
acc: _module_ident: serviceModule:
let
modName =
if inventory.machines.${machineName}.machineClass == "darwin" then
"darwinModule"
else
"nixosModule";
finalForMachine = serviceModule.result.final.${machineName} or { };
picked =
if builtins.hasAttr modName finalForMachine then
(builtins.getAttr modName finalForMachine)
else
{ };
in
acc ++ [ picked ]
) [ ] config._services.mappedServices;
}) inventory.machines or { };
};
config = {
clanInternals.inventoryClass.machines = config._allMachines;
# clanInternals.inventoryClass.distributedServices = config._services;
# Exports from distributed services
exports = config._services.exports;
};
}
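
The `machineImports` fold above is where the per-class wiring happens: it inspects `inventory.machines.<name>.machineClass` and picks either `darwinModule` or `nixosModule` out of each service's `result.final`. A minimal, self-contained sketch of that dispatch (evaluable with `nix eval -f`; the inventory and service attrsets are hypothetical stand-ins, not real clan data):

# Hypothetical stand-ins for inventory.machines and one service's result.final
let
  inventory = {
    machines = {
      jon = { machineClass = "nixos"; };
      sara = { machineClass = "darwin"; };
    };
  };
  final = {
    jon.nixosModule = { services.nginx.enable = true; };
    sara.darwinModule = { system.defaults.dock.autohide = true; };
  };
  pick = machineName:
    let
      modName =
        if inventory.machines.${machineName}.machineClass == "darwin"
        then "darwinModule"
        else "nixosModule";
    in
    # Fall back to an empty module when the service exports nothing for this class
    final.${machineName}.${modName} or { };
in
builtins.mapAttrs (name: _: pick name) inventory.machines
# => { jon = { services.nginx.enable = true; }; sara = { system.defaults.dock.autohide = true; }; }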

View File

@@ -3,16 +3,12 @@
lib,
clanModule,
clanLib,
clan-core,
}:
let
eval = lib.evalModules {
modules = [
clanModule
];
specialArgs = {
self = clan-core;
};
};
evalDocs = pkgs.nixosOptionsDoc {
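
For context on what the removed `specialArgs` did: `lib.evalModules` threads `specialArgs` into every module as extra function arguments, so dropping `self = clan-core` means modules evaluated here can no longer take `self` as an argument. A minimal sketch (assumes nixpkgs' lib is on NIX_PATH; the option and names are illustrative only):

let
  lib = import <nixpkgs/lib>;
  eval = lib.evalModules {
    modules = [
      # `self` is only available as an argument because specialArgs provides it
      ({ self, lib, ... }: {
        options.greeting = lib.mkOption { type = lib.types.str; };
        config.greeting = "hello from ${self.name}";
      })
    ];
    specialArgs = { self = { name = "clan-core"; }; };
  };
in
eval.config.greeting
# => "hello from clan-core"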

View File

@@ -12,7 +12,6 @@ in
}:
let
jsonDocs = import ./eval-docs.nix {
clan-core = self;
inherit
pkgs
lib

View File

@@ -192,7 +192,7 @@ in
# - darwinModules (_class = darwin)
(lib.optionalAttrs (clan-core ? "${_class}Modules") clan-core."${_class}Modules".clanCore)
]
++ (v.machineImports or [ ]);
++ lib.optionals (_class == "nixos") (v.machineImports or [ ]);
# default hostname
networking.hostName = lib.mkDefault name;
@@ -219,6 +219,8 @@ in
inherit nixosConfigurations;
inherit darwinConfigurations;
exports = config.clanInternals.inventoryClass.distributedServices.servicesEval.config.exports;
clanInternals = {
inventoryClass =
let
@@ -252,9 +254,21 @@ in
exportsModule = config.exportsModule;
}
(
{ ... }:
{ config, ... }:
{
staticModules = clan-core.clan.modules;
distributedServices = clanLib.inventory.mapInstances {
inherit (config)
inventory
directory
flakeInputs
exportsModule
;
clanCoreModules = clan-core.clan.modules;
prefix = [ "distributedServices" ];
};
machines = config.distributedServices.allMachines;
}
)
];
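
The `lib.optionals (_class == "nixos")` variant above gates the aggregated service imports so only NixOS machines receive them, while the other side of the diff feeds `machineImports` to both classes. A builtins-only sketch of the behavioral difference (module names are hypothetical):

let
  optionals = cond: xs: if cond then xs else [ ];  # same shape as lib.optionals
  importsFor = _class:
    [ "clanCore" ] ++ optionals (_class == "nixos") [ "serviceA" "serviceB" ];
in
{ nixos = importsFor "nixos"; darwin = importsFor "darwin"; }
# => { nixos = [ "clanCore" "serviceA" "serviceB" ]; darwin = [ "clanCore" ]; }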

View File

@@ -67,6 +67,9 @@ in
type = types.raw;
};
distributedServices = mkOption {
type = types.raw;
};
inventory = mkOption {
type = types.raw;
};

View File

@@ -5,7 +5,7 @@
}:
{
# If we also need zfs, we can use the unstable version as we otherwise don't have a new enough kernel version
boot.zfs.package = pkgs.zfs_unstable or pkgs.zfsUnstable;
boot.zfs.package = pkgs.zfsUnstable;
# Enable bcachefs support
boot.supportedFilesystems.bcachefs = lib.mkDefault true;

View File

@@ -18,7 +18,7 @@ let
inputs.data-mesher.nixosModules.data-mesher
];
config = {
clan.core.clanPkgs = lib.mkDefault self.packages.${pkgs.stdenv.hostPlatform.system};
clan.core.clanPkgs = lib.mkDefault self.packages.${pkgs.hostPlatform.system};
};
};
in

View File

@@ -6,7 +6,7 @@
}:
let
isUnstable = config.boot.zfs.package == pkgs.zfs_unstable or pkgs.zfsUnstable;
isUnstable = config.boot.zfs.package == pkgs.zfsUnstable;
zfsCompatibleKernelPackages = lib.filterAttrs (
name: kernelPackages:
(builtins.match "linux_[0-9]+_[0-9]+" name) != null
@@ -30,5 +30,5 @@ let
in
{
# Note this might jump back and forth as kernels get added or removed.
boot.kernelPackages = lib.mkIf (lib.meta.availableOn pkgs.stdenv.hostPlatform pkgs.zfs) latestKernelPackage;
boot.kernelPackages = lib.mkIf (lib.meta.availableOn pkgs.hostPlatform pkgs.zfs) latestKernelPackage;
}
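
The zfs hunks above and in the bcachefs module differ only in the attribute-set `or` fallback: `pkgs.zfs_unstable or pkgs.zfsUnstable` selects `zfs_unstable` when that attribute exists and falls back to the older `zfsUnstable` name otherwise, keeping the expression compatible with both nixpkgs generations. A minimal sketch with hypothetical package sets:

let
  pkgsOld = { zfsUnstable = "zfs-2.2"; };                            # stand-in for older nixpkgs
  pkgsNew = { zfs_unstable = "zfs-2.3"; zfsUnstable = "zfs-2.3"; };  # stand-in for newer nixpkgs
  pick = pkgs: pkgs.zfs_unstable or pkgs.zfsUnstable;
in
{ old = pick pkgsOld; new = pick pkgsNew; }
# => { old = "zfs-2.2"; new = "zfs-2.3"; }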

View File

@@ -4,7 +4,6 @@
padding: 8px;
flex-direction: column;
align-items: flex-start;
gap: 4px;
border-radius: 5px;
border: 1px solid var(--clr-border-def-2, #d8e8eb);

View File

@@ -1,13 +1,11 @@
import { onCleanup, onMount } from "solid-js";
import styles from "./ContextMenu.module.css";
import { Typography } from "../Typography/Typography";
import { Divider } from "../Divider/Divider";
import Icon from "../Icon/Icon";
export const Menu = (props: {
x: number;
y: number;
onSelect: (option: "move" | "delete") => void;
onSelect: (option: "move") => void;
close: () => void;
intersect: string[];
}) => {
@@ -56,31 +54,13 @@ export const Menu = (props: {
>
<Typography
hierarchy="label"
size="s"
weight="bold"
color={currentMachine() ? "primary" : "quaternary"}
>
Move
</Typography>
</li>
<Divider />
<li
class={styles.item}
aria-disabled={!currentMachine()}
onClick={() => {
console.log("Delete clicked", currentMachine());
props.onSelect("delete");
props.close();
}}
>
<Typography
hierarchy="label"
color={currentMachine() ? "primary" : "quaternary"}
>
<span class="flex items-center gap-2">
Delete
<Icon icon="Trash" font-size="inherit" />
</span>
</Typography>
</li>
</ul>
);
};

View File

@@ -71,7 +71,7 @@ const Machines = () => {
}
const result = ctx.machinesQuery.data;
return Object.keys(result).length > 0 ? result : [];
return Object.keys(result).length > 0 ? result : undefined;
};
return (
@@ -117,7 +117,7 @@ const Machines = () => {
}
>
<nav>
<For each={Object.entries(machines())}>
<For each={Object.entries(machines()!)}>
{([id, machine]) => (
<MachineRoute
clanURI={clanURI}

View File

@@ -206,8 +206,8 @@ const ClanSceneController = (props: RouteSectionProps) => {
<AddMachine
onCreated={async (id) => {
const promise = currentPromise();
await ctx.machinesQuery.refetch();
if (promise) {
await ctx.machinesQuery.refetch();
promise.resolve({ id });
setCurrentPromise(null);
}

View File

@@ -18,12 +18,12 @@ export class MachineManager {
private disposeRoot: () => void;
private machinePositionsSignal: Accessor<SceneData | undefined>;
private machinePositionsSignal: Accessor<SceneData>;
constructor(
scene: THREE.Scene,
registry: ObjectRegistry,
machinePositionsSignal: Accessor<SceneData | undefined>,
machinePositionsSignal: Accessor<SceneData>,
machinesQueryResult: MachinesQueryResult,
selectedIds: Accessor<Set<string>>,
setMachinePos: (id: string, position: [number, number] | null) => void,
@@ -39,9 +39,8 @@ export class MachineManager {
if (!machinesQueryResult.data) return;
const actualIds = Object.keys(machinesQueryResult.data);
const machinePositions = machinePositionsSignal() || {};
const machinePositions = machinePositionsSignal();
// Remove stale
for (const id of Object.keys(machinePositions)) {
if (!actualIds.includes(id)) {
console.log("Removing stale machine", id);
@@ -62,7 +61,8 @@ export class MachineManager {
// Effect 2: sync store → scene
//
createEffect(() => {
const positions = machinePositionsSignal() || {};
const positions = machinePositionsSignal();
if (!positions) return;
// Remove machines from scene
for (const [id, repr] of this.machines) {
@@ -103,7 +103,7 @@ export class MachineManager {
nextGridPos(): [number, number] {
const occupiedPositions = new Set(
Object.values(this.machinePositionsSignal() || {}).map((data) =>
Object.values(this.machinePositionsSignal()).map((data) =>
keyFromPos(data.position),
),
);

View File

@@ -32,9 +32,6 @@ import {
} from "./highlightStore";
import { createMachineMesh } from "./MachineRepr";
import { useClanContext } from "@/src/routes/Clan/Clan";
import client from "@api/clan/client";
import { navigateToClan } from "../hooks/clan";
import { useNavigate } from "@solidjs/router";
function intersectMachines(
event: MouseEvent,
@@ -103,7 +100,7 @@ export function CubeScene(props: {
onCreate: () => Promise<{ id: string }>;
selectedIds: Accessor<Set<string>>;
onSelect: (v: Set<string>) => void;
sceneStore: Accessor<SceneData | undefined>;
sceneStore: Accessor<SceneData>;
setMachinePos: (machineId: string, pos: [number, number] | null) => void;
isLoading: boolean;
clanURI: string;
@@ -134,6 +131,9 @@ export function CubeScene(props: {
let machineManager: MachineManager;
const [positionMode, setPositionMode] = createSignal<"grid" | "circle">(
"grid",
);
// Managed by controls
const [isDragging, setIsDragging] = createSignal(false);
@@ -142,6 +142,10 @@ export function CubeScene(props: {
// TODO: Unify this with actionRepr position
const [cursorPosition, setCursorPosition] = createSignal<[number, number]>();
const [cameraInfo, setCameraInfo] = createSignal({
position: { x: 0, y: 0, z: 0 },
spherical: { radius: 0, theta: 0, phi: 0 },
});
// Context menu state
const [contextOpen, setContextOpen] = createSignal(false);
const [menuPos, setMenuPos] = createSignal<{ x: number; y: number }>();
@@ -153,6 +157,7 @@ export function CubeScene(props: {
const BASE_SIZE = 0.9; // Footprint of a machine's base plate
const CUBE_SIZE = BASE_SIZE / 1.5; // Edge length of the machine cube
const BASE_HEIGHT = 0.05; // Height of the cube above the ground
const CUBE_Y = 0 + CUBE_SIZE / 2 + BASE_HEIGHT / 2; // Y position of the cube above the ground
const CUBE_SEGMENT_HEIGHT = CUBE_SIZE / 1;
const FLOOR_COLOR = 0xcdd8d9;
@@ -196,8 +201,6 @@ export function CubeScene(props: {
const grid = new THREE.GridHelper(1000, 1000 / 1, 0xe1edef, 0xe1edef);
const navigate = useNavigate();
onMount(() => {
// Scene setup
scene = new THREE.Scene();
@@ -308,12 +311,21 @@ export function CubeScene(props: {
bgCamera,
);
// controls.addEventListener("start", (e) => {
// setIsDragging(true);
// });
// controls.addEventListener("end", (e) => {
// setIsDragging(false);
// });
// Lighting
const ambientLight = new THREE.AmbientLight(0xd9f2f7, 0.72);
scene.add(ambientLight);
const directionalLight = new THREE.DirectionalLight(0xffffff, 3.5);
// scene.add(new THREE.DirectionalLightHelper(directionalLight));
// scene.add(new THREE.CameraHelper(camera));
const lightPos = new THREE.Spherical(
15,
initialSphericalCameraPosition.phi - Math.PI / 8,
@@ -400,6 +412,30 @@ export function CubeScene(props: {
actionMachine = createActionMachine();
scene.add(actionMachine);
// const spherical = new THREE.Spherical();
// spherical.setFromVector3(camera.position);
// Function to update camera info
const updateCameraInfo = () => {
const spherical = new THREE.Spherical();
spherical.setFromVector3(camera.position);
setCameraInfo({
position: {
x: Math.round(camera.position.x * 100) / 100,
y: Math.round(camera.position.y * 100) / 100,
z: Math.round(camera.position.z * 100) / 100,
},
spherical: {
radius: Math.round(spherical.radius * 100) / 100,
theta: Math.round(spherical.theta * 100) / 100,
phi: Math.round(spherical.phi * 100) / 100,
},
});
};
// Initial camera info update
updateCameraInfo();
createEffect(
on(ctx.worldMode, (mode) => {
if (mode === "create") {
@@ -625,8 +661,7 @@ export function CubeScene(props: {
});
const snapToGrid = (point: THREE.Vector3) => {
const store = props.sceneStore() || {};
if (!props.sceneStore) return;
// Snap to grid
const snapped = new THREE.Vector3(
Math.round(point.x / GRID_SIZE) * GRID_SIZE,
@@ -635,7 +670,7 @@ export function CubeScene(props: {
);
// Skip snapping if there's already a cube at this position
const positions = Object.entries(store);
const positions = Object.entries(props.sceneStore());
const intersects = positions.some(
([_id, p]) => p.position[0] === snapped.x && p.position[1] === snapped.z,
);
@@ -659,6 +694,7 @@ export function CubeScene(props: {
};
const onAddClick = (event: MouseEvent) => {
setPositionMode("grid");
ctx.setWorldMode("create");
renderLoop.requestRender();
};
@@ -670,6 +706,9 @@ export function CubeScene(props: {
if (!actionRepr) return;
actionRepr.visible = true;
// (actionRepr.material as THREE.MeshPhongMaterial).emissive.set(
// worldMode() === "create" ? CREATE_BASE_EMISSIVE : MOVE_BASE_EMISSIVE,
// );
// Calculate mouse position in normalized device coordinates
// (-1 to +1) for both components
@@ -697,38 +736,23 @@ export function CubeScene(props: {
}
}
};
const handleMenuSelect = async (mode: "move" | "delete") => {
const firstId = menuIntersection()[0];
if (!firstId) {
return;
}
const machine = machineManager.machines.get(firstId);
if (mode === "delete") {
console.log("deleting machine", firstId);
await client.post("delete_machine", {
body: {
machine: { flake: { identifier: props.clanURI }, name: firstId },
},
});
navigateToClan(navigate, props.clanURI);
ctx.machinesQuery.refetch();
ctx.serviceInstancesQuery.refetch();
return;
}
// Else "move" mode
const handleMenuSelect = (mode: "move") => {
ctx.setWorldMode(mode);
setHighlightGroups({ move: new Set(menuIntersection()) });
// Find the position of the first selected machine
// Set the actionMachine position to that
if (machine && actionMachine) {
actionMachine.position.set(
machine.group.position.x,
0,
machine.group.position.z,
);
setCursorPosition([machine.group.position.x, machine.group.position.z]);
const firstId = menuIntersection()[0];
if (firstId) {
const machine = machineManager.machines.get(firstId);
if (machine && actionMachine) {
actionMachine.position.set(
machine.group.position.x,
0,
machine.group.position.z,
);
setCursorPosition([machine.group.position.x, machine.group.position.z]);
}
}
};

View File

@@ -225,7 +225,9 @@ def generate_facts(
raise ClanError(msg)
if not was_regenerated and len(machines) > 0:
log.info("All secrets and facts are already up to date")
pass
# Remove message until facts has been properly deleted
# log.info("All secrets and facts are already up to date")
return was_regenerated

View File

@@ -766,28 +766,6 @@ def test_prompt(
assert sops_store.get(my_generator, "prompt_persist").decode() == "prompt_persist"
@pytest.mark.with_core
def test_non_existing_dependency_raises_error(
monkeypatch: pytest.MonkeyPatch,
flake_with_sops: ClanFlake,
) -> None:
"""Ensure that a generator with a non-existing dependency raises a clear error."""
flake = flake_with_sops
config = flake.machines["my_machine"] = create_test_machine_config()
my_generator = config["clan"]["core"]["vars"]["generators"]["my_generator"]
my_generator["files"]["my_value"]["secret"] = False
my_generator["script"] = 'echo "$RANDOM" > "$out"/my_value'
my_generator["dependencies"] = ["non_existing_generator"]
flake.refresh()
monkeypatch.chdir(flake.path)
with pytest.raises(
ClanError,
match="Generator 'my_generator' on machine 'my_machine' depends on generator 'non_existing_generator', but 'non_existing_generator' does not exist",
):
cli.run(["vars", "generate", "--flake", str(flake.path), "my_machine"])
@pytest.mark.with_core
def test_shared_vars_must_never_depend_on_machine_specific_vars(
monkeypatch: pytest.MonkeyPatch,

View File

@@ -66,41 +66,6 @@ class Generator:
_public_store: "StoreBase | None" = None
_secret_store: "StoreBase | None" = None
@staticmethod
def validate_dependencies(
generator_name: str,
machine_name: str,
dependencies: list[str],
generators_data: dict[str, dict],
) -> list[GeneratorKey]:
"""Validate and build dependency keys for a generator.
Args:
generator_name: Name of the generator that has dependencies
machine_name: Name of the machine the generator belongs to
dependencies: List of dependency generator names
generators_data: Dictionary of all available generators for this machine
Returns:
List of GeneratorKey objects
Raises:
ClanError: If a dependency does not exist
"""
deps_list = []
for dep in dependencies:
if dep not in generators_data:
msg = f"Generator '{generator_name}' on machine '{machine_name}' depends on generator '{dep}', but '{dep}' does not exist. Please check your configuration."
raise ClanError(msg)
deps_list.append(
GeneratorKey(
machine=None if generators_data[dep]["share"] else machine_name,
name=dep,
)
)
return deps_list
@property
def key(self) -> GeneratorKey:
if self.share:
@@ -275,12 +240,15 @@ class Generator:
name=gen_name,
share=share,
files=files,
dependencies=cls.validate_dependencies(
gen_name,
machine_name,
gen_data["dependencies"],
generators_data,
),
dependencies=[
GeneratorKey(
machine=None
if generators_data[dep]["share"]
else machine_name,
name=dep,
)
for dep in gen_data["dependencies"]
],
migrate_fact=gen_data.get("migrateFact"),
validation_hash=gen_data.get("validationHash"),
prompts=prompts,

View File

@@ -245,7 +245,7 @@ class SecretStore(StoreBase):
output_dir / "activation" / generator.name / file.name
)
out_file.parent.mkdir(parents=True, exist_ok=True)
out_file.write_bytes(file.value)
out_file.write_bytes(self.get(generator, file.name))
if "partitioning" in phases:
for generator in vars_generators:
for file in generator.files:
@@ -254,7 +254,7 @@ class SecretStore(StoreBase):
output_dir / "partitioning" / generator.name / file.name
)
out_file.parent.mkdir(parents=True, exist_ok=True)
out_file.write_bytes(file.value)
out_file.write_bytes(self.get(generator, file.name))
hash_data = self.generate_hash(machine)
if hash_data:

View File

@@ -246,7 +246,7 @@ class SecretStore(StoreBase):
)
# chmod after in case it doesn't have u+w
target_path.touch(mode=0o600)
target_path.write_bytes(file.value)
target_path.write_bytes(self.get(generator, file.name))
target_path.chmod(file.mode)
if "partitioning" in phases:
@@ -260,7 +260,7 @@ class SecretStore(StoreBase):
)
# chmod after in case it doesn't have u+w
target_path.touch(mode=0o600)
target_path.write_bytes(file.value)
target_path.write_bytes(self.get(generator, file.name))
target_path.chmod(file.mode)
@override

View File

@@ -211,7 +211,7 @@ class ClanSelectError(ClanError):
def __str__(self) -> str:
if self.description:
return f"{self.msg} Reason: {self.description}. Use flag '--debug' to see full nix trace."
return f"{self.msg} Reason: {self.description}"
return self.msg
def __repr__(self) -> str:

View File

@@ -59,7 +59,9 @@ def upload_sources(machine: Machine, ssh: Host, upload_inputs: bool) -> str:
if not has_path_inputs and not upload_inputs:
# Just copy the flake to the remote machine, we can substitute other inputs there.
path = flake_data["path"]
remote_url = f"ssh-ng://{remote_url_base}"
if machine._class_ == "darwin":
remote_program_params = "?remote-program=bash -lc 'exec nix-daemon --stdio'"
remote_url = f"ssh-ng://{remote_url_base}{remote_program_params}"
cmd = nix_command(
[
"copy",

View File

@@ -17,7 +17,7 @@
runCommand,
setuptools,
webkitgtk_6_0,
wrapGAppsHook3,
wrapGAppsHook,
python,
lib,
stdenv,
@@ -87,7 +87,7 @@ buildPythonApplication rec {
nativeBuildInputs = [
setuptools
copyDesktopItems
wrapGAppsHook3
wrapGAppsHook
gobject-introspection
];

View File

@@ -64,9 +64,6 @@
'';
in
{
legacyPackages = {
inherit jsonDocs clanModulesViaService;
};
packages = {
inherit module-docs;
};

View File

@@ -11,10 +11,151 @@
...
}:
let
inherit (lib)
mapAttrsToList
mapAttrs
mkOption
types
splitString
stringLength
substring
;
inherit (self) clanLib;
serviceModules = self.clan.modules;
baseHref = "/option-search/";
getRoles =
module:
(clanLib.evalService {
modules = [ module ];
prefix = [ ];
}).config.roles;
getManifest =
module:
(clanLib.evalService {
modules = [ module ];
prefix = [ ];
}).config.manifest;
settingsModules = module: mapAttrs (_roleName: roleConfig: roleConfig.interface) (getRoles module);
# Map each letter to its capitalized version
capitalizeChar =
char:
{
a = "A";
b = "B";
c = "C";
d = "D";
e = "E";
f = "F";
g = "G";
h = "H";
i = "I";
j = "J";
k = "K";
l = "L";
m = "M";
n = "N";
o = "O";
p = "P";
q = "Q";
r = "R";
s = "S";
t = "T";
u = "U";
v = "V";
w = "W";
x = "X";
y = "Y";
z = "Z";
}
.${char};
title =
name:
let
# split by -
parts = splitString "-" name;
# capitalize first letter of each part
capitalize = part: (capitalizeChar (substring 0 1 part)) + substring 1 (stringLength part) part;
capitalizedParts = map capitalize parts;
in
builtins.concatStringsSep " " capitalizedParts;
fakeInstanceOptions =
name: module:
let
manifest = getManifest module;
description = ''
# ${title name} (Clan Service)
**${manifest.description}**
${lib.optionalString (manifest ? readme) manifest.readme}
${
if manifest.categories != [ ] then
"Categories: " + builtins.concatStringsSep ", " manifest.categories
else
"No categories defined"
}
'';
in
{
options = {
instances.${name} = lib.mkOption {
inherit description;
type = types.submodule {
options.roles = mapAttrs (
roleName: roleSettingsModule:
mkOption {
type = types.submodule {
_file = "docs flake-module";
imports = [
{ _module.args = { inherit clanLib; }; }
(import ../../modules/inventoryClass/role.nix {
nestedSettingsOption = mkOption {
type = types.raw;
description = ''
See [instances.${name}.roles.${roleName}.settings](${baseHref}?option_scope=0&option=inventory.instances.${name}.roles.${roleName}.settings)
'';
};
settingsOption = mkOption {
type = types.submoduleWith {
modules = [ roleSettingsModule ];
};
};
})
];
};
}
) (settingsModules module);
};
};
};
};
docModules = [
{
inherit self;
}
self.modules.clan.default
{
options.inventory = lib.mkOption {
type = types.submoduleWith {
modules = [
{ noInstanceOptions = true; }
]
++ mapAttrsToList fakeInstanceOptions serviceModules;
};
};
}
];
baseModule =
# Module
@@ -67,6 +208,12 @@
title = "Clan Options";
# scopes = mapAttrsToList mkScope serviceModules;
scopes = [
{
inherit baseHref;
name = "Flake Options (clan.nix file)";
modules = docModules;
urlPrefix = "https://git.clan.lol/clan/clan-core/src/branch/main/";
}
{
name = "Machine Options (clan.core NixOS options)";
optionsJSON = "${coreOptions}/share/doc/nixos/options.json";