Compare commits
39 commits
update-dev ... macos
| SHA1 |
|---|
| 55e343c43e |
| 8bef2e6b2e |
| 8eaca289ad |
| 6f2d482187 |
| 4c30418f12 |
| 3c66094d89 |
| a8f180f8da |
| e22218d589 |
| 228c60bcf7 |
| ed2b2d9df9 |
| 7e2a127d11 |
| 8c8bacb1ab |
| 8ba71144b6 |
| 7f2d15c8a1 |
| 486463c793 |
| 071603d688 |
| c612561ec3 |
| a88cd2be40 |
| 7140b417d3 |
| c7a42cca7f |
| 29ca23c629 |
| cd7210de1b |
| c2ebafcf92 |
| 2a9e4e7860 |
| 43a7652624 |
| 65fd25bc2e |
| f89ea15749 |
| 19d4833be8 |
| 82f12eaf6f |
| 0b5a8e98de |
| c5bddada05 |
| 62b64c3b3e |
| 19a1ad6081 |
| a2df5db3d6 |
| ac46f890ea |
| 83f78d9f59 |
| 19abf8d288 |
| e5105e31c4 |
| bc290fe59f |
PLAN.md (new file, 105 lines)

@@ -0,0 +1,105 @@
Title: Add nix-darwin Support to Clan Services (clan.service)

Summary

- Extend clan services so authors can ship a darwinModule alongside nixosModule.
- Wire service results into darwin machines the same way we already do for NixOS.
- Keep full backward compatibility: existing services that only export nixosModule continue to work unchanged.

Goals

- Service authors can return perInstance/perMachine darwinModule similarly to nixosModule.
- Darwin machines import the correct aggregated service module outputs.
- Documentation describes the new result attribute and authoring pattern.

Non-Goals (initial phase)

- No rework of service settings schema or UI beyond documenting darwinModule.
- No OS-specific extraModules handling (we will keep extraModules affecting only nixos aggregation initially to avoid breaking existing users).
- No sweeping updates of all services; we’ll add a concrete example (users) and leave others to be migrated incrementally.

Design Overview

- Service result attributes gain darwinModule in both roles.<name>.perInstance and perMachine results.
- The service aggregator composes both nixosModule and darwinModule per machine.
- The machine wiring picks the correct module based on the machine’s class (nixos vs darwin).
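The authoring pattern this enables can be sketched as follows. This is an illustrative shape only (the role name and module bodies are placeholders), not the exact code of any existing service:

```nix
{
  roles.default.perInstance =
    { settings, ... }:
    {
      # Existing output, unchanged: consumed by NixOS machines
      nixosModule =
        { ... }:
        {
          # nixos-specific configuration
        };
      # New sibling output: consumed by nix-darwin machines
      darwinModule =
        { ... }:
        {
          # darwin-specific configuration
        };
    };
}
```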
Changes By File (with anchors)

- lib/inventory/distributed-service/service-module.nix
  - Add darwinModule to the per-instance return type next to nixosModule.
    - Where: lib/inventory/distributed-service/service-module.nix:536 (options.nixosModule = mkOption { … })
    - Action: Add a sibling options.darwinModule = mkOption { type = types.deferredModule; default = { }; description = "A single nix-darwin module for the instance."; }.
  - Add darwinModule to the per-machine return type next to nixosModule.
    - Where: lib/inventory/distributed-service/service-module.nix:666 (options.nixosModule = mkOption { … })
    - Action: Add a sibling options.darwinModule = mkOption { type = types.deferredModule; default = { }; description = "A single nix-darwin module for the machine."; }.
  - Compose darwinModule per (role, instance, machine) similarly to nixosModule.
    - Where: lib/inventory/distributed-service/service-module.nix:878–893 (wrapper that builds nixosModule = { imports = [ instanceRes.nixosModule ] ++ extraModules … })
    - Action: Build darwinModule = { imports = [ instanceRes.darwinModule ]; }.
    - Note: Do NOT include roles.*.extraModules here for darwin initially, to avoid importing nixos-specific modules into the darwin eval.
  - Aggregate darwinModules in the final result.
    - Where: lib/inventory/distributed-service/service-module.nix:958–993 (instanceResults builder and final nixosModule = { imports = [ machineResult.nixosModule ] ++ instanceResults.nixosModules; })
    - Actions:
      - Track instanceResults.darwinModules in parallel to instanceResults.nixosModules.
      - Add final darwinModule = { imports = [ machineResult.darwinModule ] ++ instanceResults.darwinModules; }.

- modules/clan/distributed-services.nix
  - Feed the right service module to each machine based on machineClass.
    - Where: modules/clan/distributed-services.nix:147–152
    - Current: machineImports = fold over services, collecting serviceModule.result.final.${machineName}.nixosModule
    - Change: If inventory.machines.${machineName}.machineClass == "darwin" then collect .darwinModule else .nixosModule.
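Sketched as an expression (this mirrors the change described above; the fold shape follows the existing machineImports builder):

```nix
machineImports = lib.foldlAttrs (
  acc: _name: serviceModule:
  let
    # Pick the result attribute matching the machine's class
    modName =
      if inventory.machines.${machineName}.machineClass == "darwin" then
        "darwinModule"
      else
        "nixosModule";
  in
  acc ++ [ (serviceModule.result.final.${machineName}.${modName} or { }) ]
) [ ] config._services.mappedServices;
```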
- modules/clan/module.nix
  - Ensure machineImports are included for both nixos and darwin machines.
    - Where: modules/clan/module.nix:195 (currently ++ lib.optionals (_class == "nixos") (v.machineImports or [ ]))
    - Change: Include machineImports for darwin as well (or remove the conditional and always append v.machineImports).

- docs/site/decisions/01-Clan-Modules.md
  - Document darwinModule as a result attribute.
    - Where: docs/site/decisions/01-Clan-Modules.md:129–146 (Result attributes and perMachine text mentioning only nixosModule)
    - Change: Add “darwinModule” to the Result attributes list and examples, mirroring nixosModule.

- Example service update: clanServices/users/default.nix
  - Add perInstance.darwinModule and perMachine.darwinModule mirroring nixos behavior where feasible.
    - Where: clanServices/users/default.nix:28–90 (roles.default.perInstance.nixosModule), 148–153 (perMachine.nixosModule)
    - Change: Provide a minimal darwinModule that sets users.users.<name> (and any safe, cross-platform bits). If some nixos-only settings (e.g., systemd hooks) exist, keep them nixos-only.
Implementation Steps

1) Service API extensions

- Add options.darwinModule to roles.*.perInstance and perMachine (see anchors above).
- Keep the default at { } so services can omit it safely.
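Concretely, the new option can mirror the existing nixosModule option:

```nix
options.darwinModule = mkOption {
  type = types.deferredModule;
  default = { };
  description = ''
    A single nix-darwin module for the instance.

    This mirrors `nixosModule` but targets darwin machines.
  '';
};
```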
2) Aggregation logic

- result.allRoles: emit a darwinModule wrapper from instanceRes.darwinModule.
- result.final:
  - Collect instanceResults.darwinModules alongside instanceResults.nixosModules.
  - Produce the final darwinModule with [ machineResult.darwinModule ] ++ instanceResults.darwinModules.
- Leave the exports logic unchanged.
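The final per-machine composition then looks like this:

```nix
darwinModule = {
  imports = [
    machineResult.darwinModule
  ]
  ++ instanceResults.darwinModules;
};
```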
3) Machine wiring

- modules/clan/distributed-services.nix: choose .darwinModule vs .nixosModule based on inventory.machines.<name>.machineClass.
- modules/clan/module.nix: include v.machineImports for both OS classes.

4) Example migration (users)

- Add darwinModule in clanServices/users/default.nix.
- Validate that the users service evaluates for a darwin machine and does not reference nixos-specific options.

5) Documentation

- Update the ADR docs to mention darwinModule in Result attributes and examples.
- Add a short “Authoring for Darwin” snippet showing perInstance/perMachine returning both modules.

6) Tests and verification

- Unit-level: extend lib/inventory/distributed-service/tests to assert the presence of result.final.<machine>.darwinModule when perInstance/perMachine return it.
- Integration-level: evaluate a sample darwin machine (e.g., inventory.json has test-darwin-machine) and assert that clan.darwinModules.<machine> includes the aggregated module.
- Sanity: ensure existing nixos-only services still evaluate unchanged.
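A unit-level assertion could be sketched with `lib.runTests`; the names `serviceModule` and `test-darwin-machine` below are illustrative placeholders, not the actual test harness:

```nix
lib.runTests {
  testFinalHasDarwinModule = {
    # Does the aggregated result expose a darwinModule for the machine?
    expr = serviceModule.result.final."test-darwin-machine" ? darwinModule;
    expected = true;
  };
}
```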
Backward Compatibility

- Existing services that only return nixosModule continue to work.
- Darwin machines won’t import service modules until services provide darwinModule, avoiding accidental breakage.
- extraModules remain applied only to nixos aggregation initially, to prevent nixos-only modules from breaking darwin evaluation. We can add OS-specific extraModules in a follow-up (e.g., roles.*.extraModulesDarwin).

Acceptance Criteria

- Services can return darwinModule in perInstance/perMachine without errors.
- Darwin machines import aggregated darwinModule outputs from all participating services.
- nixos behavior remains unchanged for existing services.
- Documentation is updated to reflect the new attribute and example.

Rollout Notes

- Start by updating clanServices/users as a working example.
- Encourage service authors to add darwinModule incrementally; no global migration is required.
clanServices/users/default.nix

@@ -120,6 +120,63 @@
            share = settings.share;

            script =
              (
                if settings.prompt then
                  ''
                    prompt_value=$(cat "$prompts"/user-password)
                    if [[ -n "''${prompt_value-}" ]]; then
                      echo "$prompt_value" | tr -d "\n" > "$out"/user-password
                    else
                      xkcdpass --numwords 4 --delimiter - --count 1 | tr -d "\n" > "$out"/user-password
                    fi
                  ''
                else
                  ''
                    xkcdpass --numwords 4 --delimiter - --count 1 | tr -d "\n" > "$out"/user-password
                  ''
              )
              + ''
                mkpasswd -s -m sha-512 < "$out"/user-password | tr -d "\n" > "$out"/user-password-hash
              '';
          };
        };

      darwinModule =
        {
          config,
          pkgs,
          lib,
          ...
        }:
        {
          # For darwin, we currently only generate and manage the password secret.
          # Hooking into actual macOS account management may be added later.
          clan.core.vars.generators."user-password-${settings.user}" = {
            files.user-password-hash.neededFor = "users";
            files.user-password.deploy = false;

            prompts.user-password = lib.mkIf settings.prompt {
              display = {
                group = settings.user;
                label = "password";
                required = false;
                helperText = ''
                  Your password will be encrypted and stored securely using the secret store you've configured.
                '';
              };
              type = "hidden";
              persist = true;
              description = "Leave empty to generate automatically";
            };

            runtimeInputs = [
              pkgs.coreutils
              pkgs.xkcdpass
              pkgs.mkpasswd
            ];

            share = settings.share;

            script =
              (
                if settings.prompt then

@@ -149,5 +206,7 @@
       # Immutable users to ensure that this module has exclusive control over the users.
       users.mutableUsers = false;
     };
+    # No-op for darwin by default; can be extended later if needed.
+    darwinModule = { };
   };
 }
devFlake/flake.lock (generated, 18 lines changed)

@@ -3,10 +3,10 @@
   "clan-core-for-checks": {
     "flake": false,
     "locked": {
-      "lastModified": 1762113984,
-      "narHash": "sha256-Gwah5F3ONMhvTYbsnJM4bAv0qcaI3wjz1Nq0rBGWVgo=",
+      "lastModified": 1761204206,
+      "narHash": "sha256-A4KDudGblln1yh8c95OVow2NRlHtbGZXr/pgNenyrNc=",
       "ref": "main",
-      "rev": "0f847b4799deee4a2c878ba69bda9c446fe16177",
+      "rev": "aabbe0dfac47b7cfbe2210bcb27fb7ecce93350f",
       "shallow": true,
       "type": "git",
       "url": "https://git.clan.lol/clan/clan-core"

@@ -105,11 +105,11 @@
   },
   "nixpkgs-dev": {
     "locked": {
-      "lastModified": 1762080734,
-      "narHash": "sha256-fFunzA7ITlPHRr7dECaFGTBucNiWYEVDNPBw/9gFmII=",
+      "lastModified": 1762328495,
+      "narHash": "sha256-IUZvw5kvLiExApP9+SK/styzEKSqfe0NPclu9/z85OQ=",
       "owner": "NixOS",
       "repo": "nixpkgs",
-      "rev": "bc7f6fa86de9b208edf4ea7bbf40bcd8cc7d70a5",
+      "rev": "4c621660e393922cf68cdbfc40eb5a2d54d3989a",
       "type": "github"
     },
     "original": {

@@ -208,11 +208,11 @@
     "nixpkgs": []
   },
   "locked": {
-    "lastModified": 1761311587,
-    "narHash": "sha256-Msq86cR5SjozQGCnC6H8C+0cD4rnx91BPltZ9KK613Y=",
+    "lastModified": 1762366246,
+    "narHash": "sha256-3xc/f/ZNb5ma9Fc9knIzEwygXotA+0BZFQ5V5XovSOQ=",
     "owner": "numtide",
     "repo": "treefmt-nix",
-    "rev": "2eddae033e4e74bf581c2d1dfa101f9033dbd2dc",
+    "rev": "a82c779ca992190109e431d7d680860e6723e048",
     "type": "github"
   },
   "original": {
@@ -150,10 +150,61 @@ Those are very similar to NixOS VM tests, as in they run virtualized nixos machines
 As of now the container test driver is a downstream development in clan-core.
 Basically everything stated under the NixOS VM tests sections applies here, except some limitations.

-Limitations:
+### Using Container Tests vs VM Tests

-- Cannot run in interactive mode, however while the container test runs, it logs a nsenter command that can be used to log into each of the container.
-- setuid binaries don't work
+Container tests are **enabled by default** for all tests using the clan testing framework.
+They offer significant performance advantages over VM tests:
+
+- **Faster startup**
+- **Lower resource usage**: No full kernel boot or hardware emulation overhead
+
+To control whether a test uses containers or VMs, use the `clan.test.useContainers` option:
+
+```nix
+{
+  clan = {
+    directory = ./.;
+    test.useContainers = true; # Use containers (default)
+    # test.useContainers = false; # Use VMs instead
+  };
+}
+```
+
+**When to use VM tests instead of container tests:**
+
+- Testing kernel features, modules, or boot processes
+- Testing hardware-specific features
+- When you need full system isolation
+
+### System Requirements for Container Tests
+
+Container tests require the **`uid-range`** system feature in the Nix sandbox.
+This feature allows Nix to allocate a range of UIDs for containers to use, enabling `systemd-nspawn` containers to run properly inside the Nix build sandbox.
+
+**Configuration:**
+
+The `uid-range` feature requires the `auto-allocate-uids` setting to be enabled in your Nix configuration.
+
+To verify or enable it, add the following to your `/etc/nix/nix.conf` or NixOS configuration:
+
+```nix
+nix.settings.experimental-features = [
+  "auto-allocate-uids"
+];
+
+nix.settings.auto-allocate-uids = true;
+nix.settings.system-features = [ "uid-range" ];
+```
+
+**Technical details:**
+
+- Container tests set `requiredSystemFeatures = [ "uid-range" ];` in their derivation (see `lib/test/container-test-driver/driver-module.nix:98`)
+- Without this feature, containers cannot properly manage user namespaces and will fail to start
+
+### Limitations
+
+- Cannot run in interactive mode; however, while the container test runs, it logs an nsenter command that can be used to log into each of the containers.
+- Early implementation with a limited feature set.
+
+### Where to find examples for NixOS container tests
flake.lock (generated, 30 lines changed)

@@ -31,11 +31,11 @@
   ]
 },
 "locked": {
-  "lastModified": 1761899396,
-  "narHash": "sha256-XOpKBp6HLzzMCbzW50TEuXN35zN5WGQREC7n34DcNMM=",
+  "lastModified": 1762276996,
+  "narHash": "sha256-TtcPgPmp2f0FAnc+DMEw4ardEgv1SGNR3/WFGH0N19M=",
   "owner": "nix-community",
   "repo": "disko",
-  "rev": "6f4cf5abbe318e4cd1e879506f6eeafd83f7b998",
+  "rev": "af087d076d3860760b3323f6b583f4d828c1ac17",
   "type": "github"
 },
 "original": {

@@ -71,11 +71,11 @@
   ]
 },
 "locked": {
-  "lastModified": 1762039661,
-  "narHash": "sha256-oM5BwAGE78IBLZn+AqxwH/saqwq3e926rNq5HmOulkc=",
+  "lastModified": 1762304480,
+  "narHash": "sha256-ikVIPB/ea/BAODk6aksgkup9k2jQdrwr4+ZRXtBgmSs=",
   "owner": "nix-darwin",
   "repo": "nix-darwin",
-  "rev": "c3c8c9f2a5ed43175ac4dc030308756620e6e4e4",
+  "rev": "b8c7ac030211f18bd1f41eae0b815571853db7a2",
   "type": "github"
 },
 "original": {

@@ -99,11 +99,11 @@
 },
 "nixos-facter-modules": {
   "locked": {
-    "lastModified": 1761137276,
-    "narHash": "sha256-4lDjGnWRBLwqKQ4UWSUq6Mvxu9r8DSqCCydodW/Jsi8=",
+    "lastModified": 1762264948,
+    "narHash": "sha256-iaRf6n0KPl9hndnIft3blm1YTAyxSREV1oX0MFZ6Tk4=",
     "owner": "nix-community",
     "repo": "nixos-facter-modules",
-    "rev": "70bcd64225d167c7af9b475c4df7b5abba5c7de8",
+    "rev": "fa695bff9ec37fd5bbd7ee3181dbeb5f97f53c96",
     "type": "github"
   },
   "original": {

@@ -115,10 +115,10 @@
 "nixpkgs": {
   "locked": {
     "lastModified": 315532800,
-    "narHash": "sha256-yDxtm0PESdgNetiJN5+MFxgubBcLDTiuSjjrJiyvsvM=",
-    "rev": "d7f52a7a640bc54c7bb414cca603835bf8dd4b10",
+    "narHash": "sha256-LDT9wuUZtjPfmviCcVWif5+7j4kBI2mWaZwjNNeg4eg=",
+    "rev": "a7fc11be66bdfb5cdde611ee5ce381c183da8386",
     "type": "tarball",
-    "url": "https://releases.nixos.org/nixpkgs/nixpkgs-25.11pre871443.d7f52a7a640b/nixexprs.tar.xz"
+    "url": "https://releases.nixos.org/nixpkgs/nixpkgs-25.11pre887438.a7fc11be66bd/nixexprs.tar.xz"
   },
   "original": {
     "type": "tarball",

@@ -181,11 +181,11 @@
   ]
 },
 "locked": {
-  "lastModified": 1761311587,
-  "narHash": "sha256-Msq86cR5SjozQGCnC6H8C+0cD4rnx91BPltZ9KK613Y=",
+  "lastModified": 1762366246,
+  "narHash": "sha256-3xc/f/ZNb5ma9Fc9knIzEwygXotA+0BZFQ5V5XovSOQ=",
   "owner": "numtide",
   "repo": "treefmt-nix",
-  "rev": "2eddae033e4e74bf581c2d1dfa101f9033dbd2dc",
+  "rev": "a82c779ca992190109e431d7d680860e6723e048",
   "type": "github"
 },
 "original": {
lib/inventory/distributed-service/service-module.nix

@@ -561,6 +561,15 @@ in
         ```
       '';
     };
+    options.darwinModule = mkOption {
+      type = types.deferredModule;
+      default = { };
+      description = ''
+        A single nix-darwin module for the instance.
+
+        This mirrors `nixosModule` but targets darwin machines.
+      '';
+    };
   })
 ];
 };

@@ -686,6 +695,15 @@ in
         ```
       '';
     };
+    options.darwinModule = mkOption {
+      type = types.deferredModule;
+      default = { };
+      description = ''
+        A single nix-darwin module for the machine.
+
+        This mirrors `nixosModule` but targets darwin machines.
+      '';
+    };
   })
 ];
 };
@@ -890,6 +908,11 @@ in
       lib.setDefaultModuleLocation "via inventory.instances.${instanceName}.roles.${roleName}" s
     ) instanceCfg.roles.${roleName}.extraModules);
   };
+  darwinModule = {
+    imports = [
+      instanceRes.darwinModule
+    ];
+  };
 }
 ) instanceCfg.roles.${roleName}.machines or { };

@@ -979,11 +1002,24 @@ in
     else
       instanceAcc.nixosModules
   );
+  darwinModules = (
+    if instance.allMachines.${machineName}.darwinModule or { } != { } then
+      instanceAcc.darwinModules
+      ++ [
+        (lib.setDefaultModuleLocation
+          "Via instances.${instanceName}.roles.${roleName}.machines.${machineName}"
+          instance.allMachines.${machineName}.darwinModule
+        )
+      ]
+    else
+      instanceAcc.darwinModules
+  );
 }
 ) roleAcc role.allInstances
 )
 {
   nixosModules = [ ];
+  darwinModules = [ ];
   # ...
 }
 config.result.allRoles;

@@ -1021,6 +1057,12 @@ in
   ]
   ++ instanceResults.nixosModules;
 };
+darwinModule = {
+  imports = [
+    (lib.setDefaultModuleLocation "Via ${config.manifest.name}.perMachine - machine='${machineName}';" machineResult.darwinModule)
+  ]
+  ++ instanceResults.darwinModules;
+};
 }
 ) config.result.allMachines;
 };
modules/clan/distributed-services.nix

@@ -145,10 +145,23 @@ in
   internal = true;
   type = types.raw;
   default = lib.mapAttrs (machineName: _: {
-    # This is the list of nixosModules for each machine
+    # This is the list of service modules for each machine (nixos or darwin)
     machineImports = lib.foldlAttrs (
       acc: _module_ident: serviceModule:
-      acc ++ [ serviceModule.result.final.${machineName}.nixosModule or { } ]
+      let
+        modName =
+          if inventory.machines.${machineName}.machineClass == "darwin" then
+            "darwinModule"
+          else
+            "nixosModule";
+        finalForMachine = serviceModule.result.final.${machineName} or { };
+        picked =
+          if builtins.hasAttr modName finalForMachine then
+            (builtins.getAttr modName finalForMachine)
+          else
+            { };
+      in
+      acc ++ [ picked ]
     ) [ ] config._services.mappedServices;
   }) inventory.machines or { };
 };
modules/clan/module.nix

@@ -192,7 +192,7 @@ in
     # - darwinModules (_class = darwin)
     (lib.optionalAttrs (clan-core ? "${_class}Modules") clan-core."${_class}Modules".clanCore)
   ]
-  ++ lib.optionals (_class == "nixos") (v.machineImports or [ ]);
+  ++ (v.machineImports or [ ]);

   # default hostname
   networking.hostName = lib.mkDefault name;
@@ -5,7 +5,7 @@
 }:
 {
   # If we also need zfs, we can use the unstable version as we otherwise don't have a new enough kernel version
-  boot.zfs.package = pkgs.zfsUnstable;
+  boot.zfs.package = pkgs.zfs_unstable or pkgs.zfsUnstable;

   # Enable bcachefs support
   boot.supportedFilesystems.bcachefs = lib.mkDefault true;
@@ -6,7 +6,7 @@
 }:

 let
-  isUnstable = config.boot.zfs.package == pkgs.zfsUnstable;
+  isUnstable = config.boot.zfs.package == pkgs.zfs_unstable or pkgs.zfsUnstable;
   zfsCompatibleKernelPackages = lib.filterAttrs (
     name: kernelPackages:
     (builtins.match "linux_[0-9]+_[0-9]+" name) != null
@@ -4,6 +4,7 @@
   padding: 8px;
   flex-direction: column;
   align-items: flex-start;
+  gap: 4px;

   border-radius: 5px;
   border: 1px solid var(--clr-border-def-2, #d8e8eb);
@@ -1,11 +1,13 @@
 import { onCleanup, onMount } from "solid-js";
 import styles from "./ContextMenu.module.css";
 import { Typography } from "../Typography/Typography";
+import { Divider } from "../Divider/Divider";
+import Icon from "../Icon/Icon";

 export const Menu = (props: {
   x: number;
   y: number;
-  onSelect: (option: "move") => void;
+  onSelect: (option: "move" | "delete") => void;
   close: () => void;
   intersect: string[];
 }) => {

@@ -54,13 +56,31 @@ export const Menu = (props: {
 >
   <Typography
     hierarchy="label"
     size="s"
     weight="bold"
     color={currentMachine() ? "primary" : "quaternary"}
   >
     Move
   </Typography>
 </li>
+<Divider />
+<li
+  class={styles.item}
+  aria-disabled={!currentMachine()}
+  onClick={() => {
+    console.log("Delete clicked", currentMachine());
+    props.onSelect("delete");
+    props.close();
+  }}
+>
+  <Typography
+    hierarchy="label"
+    color={currentMachine() ? "primary" : "quaternary"}
+  >
+    <span class="flex items-center gap-2">
+      Delete
+      <Icon icon="Trash" font-size="inherit" />
+    </span>
+  </Typography>
+</li>
 </ul>
 );
 };
@@ -71,7 +71,7 @@ const Machines = () => {
   }

   const result = ctx.machinesQuery.data;
-  return Object.keys(result).length > 0 ? result : undefined;
+  return Object.keys(result).length > 0 ? result : [];
 };

 return (

@@ -117,7 +117,7 @@
   }
 >
   <nav>
-    <For each={Object.entries(machines()!)}>
+    <For each={Object.entries(machines())}>
     {([id, machine]) => (
       <MachineRoute
         clanURI={clanURI}

@@ -206,8 +206,8 @@ const ClanSceneController = (props: RouteSectionProps) => {
   <AddMachine
     onCreated={async (id) => {
       const promise = currentPromise();
-      await ctx.machinesQuery.refetch();
       if (promise) {
+        await ctx.machinesQuery.refetch();
         promise.resolve({ id });
         setCurrentPromise(null);
       }
@@ -18,12 +18,12 @@ export class MachineManager {

   private disposeRoot: () => void;

-  private machinePositionsSignal: Accessor<SceneData>;
+  private machinePositionsSignal: Accessor<SceneData | undefined>;

   constructor(
     scene: THREE.Scene,
     registry: ObjectRegistry,
-    machinePositionsSignal: Accessor<SceneData>,
+    machinePositionsSignal: Accessor<SceneData | undefined>,
     machinesQueryResult: MachinesQueryResult,
     selectedIds: Accessor<Set<string>>,
     setMachinePos: (id: string, position: [number, number] | null) => void,

@@ -39,8 +39,9 @@ export class MachineManager {
   if (!machinesQueryResult.data) return;

   const actualIds = Object.keys(machinesQueryResult.data);
-  const machinePositions = machinePositionsSignal();
   // Remove stale
+  const machinePositions = machinePositionsSignal() || {};

   for (const id of Object.keys(machinePositions)) {
     if (!actualIds.includes(id)) {
       console.log("Removing stale machine", id);

@@ -61,8 +62,7 @@
   // Effect 2: sync store → scene
   //
   createEffect(() => {
-    const positions = machinePositionsSignal();
-    if (!positions) return;
+    const positions = machinePositionsSignal() || {};

     // Remove machines from scene
     for (const [id, repr] of this.machines) {

@@ -103,7 +103,7 @@

   nextGridPos(): [number, number] {
     const occupiedPositions = new Set(
-      Object.values(this.machinePositionsSignal()).map((data) =>
+      Object.values(this.machinePositionsSignal() || {}).map((data) =>
         keyFromPos(data.position),
       ),
     );
@@ -32,6 +32,9 @@ import {
 } from "./highlightStore";
 import { createMachineMesh } from "./MachineRepr";
 import { useClanContext } from "@/src/routes/Clan/Clan";
+import client from "@api/clan/client";
+import { navigateToClan } from "../hooks/clan";
+import { useNavigate } from "@solidjs/router";

 function intersectMachines(
   event: MouseEvent,

@@ -100,7 +103,7 @@ export function CubeScene(props: {
   onCreate: () => Promise<{ id: string }>;
   selectedIds: Accessor<Set<string>>;
   onSelect: (v: Set<string>) => void;
-  sceneStore: Accessor<SceneData>;
+  sceneStore: Accessor<SceneData | undefined>;
   setMachinePos: (machineId: string, pos: [number, number] | null) => void;
   isLoading: boolean;
   clanURI: string;

@@ -131,9 +134,6 @@
   let machineManager: MachineManager;

-  const [positionMode, setPositionMode] = createSignal<"grid" | "circle">(
-    "grid",
-  );
   // Managed by controls
   const [isDragging, setIsDragging] = createSignal(false);

@@ -142,10 +142,6 @@
   // TODO: Unify this with actionRepr position
   const [cursorPosition, setCursorPosition] = createSignal<[number, number]>();

-  const [cameraInfo, setCameraInfo] = createSignal({
-    position: { x: 0, y: 0, z: 0 },
-    spherical: { radius: 0, theta: 0, phi: 0 },
-  });
   // Context menu state
   const [contextOpen, setContextOpen] = createSignal(false);
   const [menuPos, setMenuPos] = createSignal<{ x: number; y: number }>();

@@ -157,7 +153,6 @@
   const BASE_SIZE = 0.9; // Height of the cube above the ground
   const CUBE_SIZE = BASE_SIZE / 1.5; //
   const BASE_HEIGHT = 0.05; // Height of the cube above the ground
   const CUBE_Y = 0 + CUBE_SIZE / 2 + BASE_HEIGHT / 2; // Y position of the cube above the ground
   const CUBE_SEGMENT_HEIGHT = CUBE_SIZE / 1;

   const FLOOR_COLOR = 0xcdd8d9;

@@ -201,6 +196,8 @@
   const grid = new THREE.GridHelper(1000, 1000 / 1, 0xe1edef, 0xe1edef);

+  const navigate = useNavigate();
+
   onMount(() => {
     // Scene setup
     scene = new THREE.Scene();

@@ -311,21 +308,12 @@
     bgCamera,
   );

-  // controls.addEventListener("start", (e) => {
-  //   setIsDragging(true);
-  // });
-  // controls.addEventListener("end", (e) => {
-  //   setIsDragging(false);
-  // });

   // Lighting
   const ambientLight = new THREE.AmbientLight(0xd9f2f7, 0.72);
   scene.add(ambientLight);

   const directionalLight = new THREE.DirectionalLight(0xffffff, 3.5);

-  // scene.add(new THREE.DirectionalLightHelper(directionalLight));
-  // scene.add(new THREE.CameraHelper(camera));
   const lightPos = new THREE.Spherical(
     15,
     initialSphericalCameraPosition.phi - Math.PI / 8,

@@ -412,30 +400,6 @@
   actionMachine = createActionMachine();
   scene.add(actionMachine);

-  // const spherical = new THREE.Spherical();
-  // spherical.setFromVector3(camera.position);

-  // Function to update camera info
-  const updateCameraInfo = () => {
-    const spherical = new THREE.Spherical();
-    spherical.setFromVector3(camera.position);
-    setCameraInfo({
-      position: {
-        x: Math.round(camera.position.x * 100) / 100,
-        y: Math.round(camera.position.y * 100) / 100,
-        z: Math.round(camera.position.z * 100) / 100,
-      },
-      spherical: {
-        radius: Math.round(spherical.radius * 100) / 100,
-        theta: Math.round(spherical.theta * 100) / 100,
-        phi: Math.round(spherical.phi * 100) / 100,
-      },
-    });
-  };

-  // Initial camera info update
-  updateCameraInfo();

   createEffect(
     on(ctx.worldMode, (mode) => {
       if (mode === "create") {

@@ -661,7 +625,8 @@
   });

   const snapToGrid = (point: THREE.Vector3) => {
-    if (!props.sceneStore) return;
+    const store = props.sceneStore() || {};

     // Snap to grid
     const snapped = new THREE.Vector3(
       Math.round(point.x / GRID_SIZE) * GRID_SIZE,

@@ -670,7 +635,7 @@
   );

   // Skip snapping if there's already a cube at this position
-  const positions = Object.entries(props.sceneStore());
+  const positions = Object.entries(store);
   const intersects = positions.some(
     ([_id, p]) => p.position[0] === snapped.x && p.position[1] === snapped.z,
   );

@@ -694,7 +659,6 @@
   };

   const onAddClick = (event: MouseEvent) => {
-    setPositionMode("grid");
     ctx.setWorldMode("create");
     renderLoop.requestRender();
   };

@@ -706,9 +670,6 @@
   if (!actionRepr) return;

   actionRepr.visible = true;
-  // (actionRepr.material as THREE.MeshPhongMaterial).emissive.set(
-  //   worldMode() === "create" ? CREATE_BASE_EMISSIVE : MOVE_BASE_EMISSIVE,
-  // );

   // Calculate mouse position in normalized device coordinates
   // (-1 to +1) for both components

@@ -736,23 +697,38 @@
     }
   }
   };
-  const handleMenuSelect = (mode: "move") => {
+  const handleMenuSelect = async (mode: "move" | "delete") => {
+    const firstId = menuIntersection()[0];
+    if (!firstId) {
+      return;
+    }
+    const machine = machineManager.machines.get(firstId);
+    if (mode === "delete") {
+      console.log("deleting machine", firstId);
+      await client.post("delete_machine", {
+        body: {
+          machine: { flake: { identifier: props.clanURI }, name: firstId },
+        },
+      });
+      navigateToClan(navigate, props.clanURI);
+      ctx.machinesQuery.refetch();
+      ctx.serviceInstancesQuery.refetch();
+      return;
+    }
+
+    // Else "move" mode
     ctx.setWorldMode(mode);
     setHighlightGroups({ move: new Set(menuIntersection()) });

     // Find the position of the first selected machine
     // Set the actionMachine position to that
-    const firstId = menuIntersection()[0];
-    if (firstId) {
-      const machine = machineManager.machines.get(firstId);
-      if (machine && actionMachine) {
-        actionMachine.position.set(
-          machine.group.position.x,
-          0,
-          machine.group.position.z,
-        );
-        setCursorPosition([machine.group.position.x, machine.group.position.z]);
-      }
+    if (machine && actionMachine) {
+      actionMachine.position.set(
+        machine.group.position.x,
+        0,
+        machine.group.position.z,
+      );
+      setCursorPosition([machine.group.position.x, machine.group.position.z]);
     }
   };
@@ -766,6 +766,28 @@ def test_prompt(
     assert sops_store.get(my_generator, "prompt_persist").decode() == "prompt_persist"


+@pytest.mark.with_core
+def test_non_existing_dependency_raises_error(
+    monkeypatch: pytest.MonkeyPatch,
+    flake_with_sops: ClanFlake,
+) -> None:
+    """Ensure that a generator with a non-existing dependency raises a clear error."""
+    flake = flake_with_sops
+
+    config = flake.machines["my_machine"] = create_test_machine_config()
+    my_generator = config["clan"]["core"]["vars"]["generators"]["my_generator"]
+    my_generator["files"]["my_value"]["secret"] = False
+    my_generator["script"] = 'echo "$RANDOM" > "$out"/my_value'
+    my_generator["dependencies"] = ["non_existing_generator"]
+    flake.refresh()
+    monkeypatch.chdir(flake.path)
+    with pytest.raises(
+        ClanError,
+        match="Generator 'my_generator' on machine 'my_machine' depends on generator 'non_existing_generator', but 'non_existing_generator' does not exist",
+    ):
+        cli.run(["vars", "generate", "--flake", str(flake.path), "my_machine"])
+
+
 @pytest.mark.with_core
 def test_shared_vars_must_never_depend_on_machine_specific_vars(
     monkeypatch: pytest.MonkeyPatch,

@@ -66,6 +66,41 @@ class Generator:
     _public_store: "StoreBase | None" = None
     _secret_store: "StoreBase | None" = None

+    @staticmethod
+    def validate_dependencies(
+        generator_name: str,
+        machine_name: str,
+        dependencies: list[str],
+        generators_data: dict[str, dict],
+    ) -> list[GeneratorKey]:
+        """Validate and build dependency keys for a generator.
+
+        Args:
+            generator_name: Name of the generator that has dependencies
+            machine_name: Name of the machine the generator belongs to
+            dependencies: List of dependency generator names
+            generators_data: Dictionary of all available generators for this machine
+
+        Returns:
+            List of GeneratorKey objects
+
+        Raises:
+            ClanError: If a dependency does not exist
+
+        """
+        deps_list = []
+        for dep in dependencies:
+            if dep not in generators_data:
+                msg = f"Generator '{generator_name}' on machine '{machine_name}' depends on generator '{dep}', but '{dep}' does not exist. Please check your configuration."
+                raise ClanError(msg)
+            deps_list.append(
+                GeneratorKey(
+                    machine=None if generators_data[dep]["share"] else machine_name,
+                    name=dep,
+                )
+            )
+        return deps_list
+
     @property
     def key(self) -> GeneratorKey:
         if self.share:
@@ -240,15 +275,12 @@ class Generator:
                 name=gen_name,
                 share=share,
                 files=files,
-                dependencies=[
-                    GeneratorKey(
-                        machine=None
-                        if generators_data[dep]["share"]
-                        else machine_name,
-                        name=dep,
-                    )
-                    for dep in gen_data["dependencies"]
-                ],
+                dependencies=cls.validate_dependencies(
+                    gen_name,
+                    machine_name,
+                    gen_data["dependencies"],
+                    generators_data,
+                ),
                 migrate_fact=gen_data.get("migrateFact"),
                 validation_hash=gen_data.get("validationHash"),
                 prompts=prompts,

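The new `validate_dependencies` helper does two things: it fails fast with a descriptive error when a dependency is unknown, and it encodes the shared/per-machine distinction in the key (shared generators get `machine=None`). A self-contained sketch of that behavior, using a plain `(machine, name)` tuple in place of the real `GeneratorKey` and `ValueError` in place of `ClanError` (both substitutions are assumptions for illustration):

```python
# Sketch of the dependency validation above. Unknown deps raise a clear
# error; shared deps are keyed without a machine, per-machine deps keep it.
def validate_dependencies(generator_name, machine_name, dependencies, generators_data):
    keys = []
    for dep in dependencies:
        if dep not in generators_data:
            msg = (
                f"Generator '{generator_name}' on machine '{machine_name}' depends on "
                f"generator '{dep}', but '{dep}' does not exist"
            )
            raise ValueError(msg)
        # Shared generators have no owning machine in their key.
        machine = None if generators_data[dep]["share"] else machine_name
        keys.append((machine, dep))
    return keys

gens = {"shared_gen": {"share": True}, "local_gen": {"share": False}}
print(validate_dependencies("my_gen", "my_machine", ["shared_gen", "local_gen"], gens))
# -> [(None, 'shared_gen'), ('my_machine', 'local_gen')]
```

The second hunk then simply swaps the old inline list comprehension for a call to this helper, so the error path is exercised wherever generators are parsed.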
@@ -59,9 +59,7 @@ def upload_sources(machine: Machine, ssh: Host, upload_inputs: bool) -> str:
     if not has_path_inputs and not upload_inputs:
         # Just copy the flake to the remote machine, we can substitute other inputs there.
         path = flake_data["path"]
-        if machine._class_ == "darwin":
-            remote_program_params = "?remote-program=bash -lc 'exec nix-daemon --stdio'"
-            remote_url = f"ssh-ng://{remote_url_base}{remote_program_params}"
+        remote_url = f"ssh-ng://{remote_url_base}"
         cmd = nix_command(
             [
                 "copy",

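For context on the lines removed here: on a darwin host, `nix copy` over `ssh-ng://` may fail because a non-interactive SSH session does not load the Nix profile, so `nix-daemon` is not on `PATH`; passing a `remote-program` that runs it via a login shell (`bash -lc`) works around that. A sketch of how such a store URL could be assembled (the helper name is made up; the URL shapes match the diff):

```python
# Hypothetical helper assembling an ssh-ng store URL, including the
# darwin-specific remote-program workaround that the hunk above removes.
def build_remote_url(remote_url_base, machine_class):
    if machine_class == "darwin":
        # Login shell so nix-daemon is found on PATH on macOS.
        params = "?remote-program=bash -lc 'exec nix-daemon --stdio'"
        return f"ssh-ng://{remote_url_base}{params}"
    return f"ssh-ng://{remote_url_base}"

print(build_remote_url("user@server", "nixos"))      # -> ssh-ng://user@server
print(build_remote_url("user@mac.local", "darwin"))  # adds the remote-program parameter
```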
@@ -17,7 +17,7 @@
   runCommand,
   setuptools,
   webkitgtk_6_0,
-  wrapGAppsHook,
+  wrapGAppsHook3,
   python,
   lib,
   stdenv,

@@ -87,7 +87,7 @@ buildPythonApplication rec {
   nativeBuildInputs = [
     setuptools
     copyDesktopItems
-    wrapGAppsHook
+    wrapGAppsHook3
     gobject-introspection
   ];
Block a user