Compare commits


1 Commit

41 changed files with 215 additions and 614 deletions

PLAN.md
View File

@@ -1,105 +0,0 @@
Title: Add nix-darwin Support to Clan Services (clan.service)
Summary
- Extend clan services so authors can ship a darwinModule alongside nixosModule.
- Wire service results into darwin machines the same way we already do for NixOS.
- Keep full backward compatibility: existing services that only export nixosModule continue to work unchanged.
Goals
- Service authors can return perInstance/perMachine darwinModule similarly to nixosModule.
- Darwin machines import the correct aggregated service module outputs.
- Documentation describes the new result attribute and authoring pattern.
Non-Goals (initial phase)
- No rework of service settings schema or UI beyond documenting darwinModule.
- No OS-specific extraModules handling (we will keep extraModules affecting only nixos aggregation initially to avoid breaking existing users).
- No sweeping updates of all services; we'll add a concrete example (users) and leave others to be migrated incrementally.
Design Overview
- Service result attributes gain darwinModule in both roles.<name>.perInstance and perMachine results.
- The service aggregator composes both nixosModule and darwinModule per machine.
- The machine wiring picks the correct module based on the machine's class (nixos vs darwin).
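A minimal authoring sketch of this design (hypothetical service name and file contents; `darwinModule` is the attribute this plan adds):

```nix
{
  _class = "clan.service";
  manifest.name = "example/hello-everywhere"; # hypothetical name

  roles.default.perInstance =
    { ... }:
    {
      # Existing result attribute: imported only by NixOS machines.
      nixosModule = {
        environment.etc."hello".text = "Hello from NixOS!";
      };
      # Proposed sibling attribute: imported only by darwin machines.
      darwinModule = {
        environment.etc."hello".text = "Hello from darwin!";
      };
    };
}
```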
Changes By File (with anchors)
- lib/inventory/distributed-service/service-module.nix
- Add darwinModule to per-instance return type next to nixosModule.
- Where: lib/inventory/distributed-service/service-module.nix:536 (options.nixosModule = mkOption { … })
- Action: Add sibling options.darwinModule = mkOption { type = types.deferredModule; default = { }; description = "A single nix-darwin module for the instance."; } (see the sketch after this list).
- Add darwinModule to per-machine return type next to nixosModule.
- Where: lib/inventory/distributed-service/service-module.nix:666 (options.nixosModule = mkOption { … })
- Action: Add sibling options.darwinModule = mkOption { type = types.deferredModule; default = { }; description = "A single nix-darwin module for the machine."; }.
- Compose darwinModule per (role, instance, machine) similarly to nixosModule.
- Where: lib/inventory/distributed-service/service-module.nix:878-893 (wrapper that builds nixosModule = { imports = [ instanceRes.nixosModule ] ++ extraModules … })
- Action: Build darwinModule = { imports = [ instanceRes.darwinModule ]; }.
Note: Do NOT include roles.*.extraModules here for darwin initially to avoid importing nixos-specific modules into darwin eval.
- Aggregate darwinModules in final result.
- Where: lib/inventory/distributed-service/service-module.nix:958-993 (instanceResults builder and final nixosModule = { imports = [ machineResult.nixosModule ] ++ instanceResults.nixosModules; })
- Actions:
- Track instanceResults.darwinModules in parallel to instanceResults.nixosModules.
- Add final darwinModule = { imports = [ machineResult.darwinModule ] ++ instanceResults.darwinModules; }.
- modules/clan/distributed-services.nix
- Feed the right service module to each machine based on machineClass.
- Where: modules/clan/distributed-services.nix:147-152
- Current: machineImports = fold over services, collecting serviceModule.result.final.${machineName}.nixosModule
- Change: If inventory.machines.${machineName}.machineClass == "darwin" then collect .darwinModule else .nixosModule.
- modules/clan/module.nix
- Ensure machineImports are included for both nixos and darwin machines.
- Where: modules/clan/module.nix:195 (currently ++ lib.optionals (_class == "nixos") (v.machineImports or [ ]))
- Change: Include machineImports for darwin as well (or remove the conditional and always append v.machineImports).
- docs/site/decisions/01-Clan-Modules.md
- Document darwinModule as a result attribute.
- Where: docs/site/decisions/01-Clan-Modules.md:129-146 (Result attributes and perMachine text mentioning only nixosModule)
- Change: Add “darwinModule” to the Result attributes list and examples, mirroring nixosModule.
- Example service update: clanServices/users/default.nix
- Add perInstance.darwinModule and perMachine.darwinModule mirroring nixos behavior where feasible.
- Where: clanServices/users/default.nix:28-90 (roles.default.perInstance.nixosModule), 148-153 (perMachine.nixosModule)
- Change: Provide minimal darwinModule that sets users.users.<name> (and any safe, cross-platform bits). If some nixos-only settings (e.g., systemd hooks) exist, keep them nixos-only.
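The option addition from the first bullet group, as it appears in the service-module.nix hunk further down in this diff (the per-machine variant differs only in its description text):

```nix
options.darwinModule = mkOption {
  type = types.deferredModule;
  default = { };
  description = ''
    A single nix-darwin module for the instance.
    This mirrors `nixosModule` but targets darwin machines.
  '';
};
```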
Implementation Steps
1) Service API extensions
- Add options.darwinModule to roles.*.perInstance and perMachine (see anchors above).
- Keep defaults to {} so services can omit it safely.
2) Aggregation logic
- result.allRoles: emit darwinModule wrapper from instanceRes.darwinModule.
- result.final:
- Collect instanceResults.darwinModules alongside instanceResults.nixosModules.
- Produce final darwinModule with [ machineResult.darwinModule ] ++ instanceResults.darwinModules.
- Leave exports logic unchanged.
3) Machine wiring
- modules/clan/distributed-services.nix: choose .darwinModule vs .nixosModule based on inventory.machines.<name>.machineClass (sketched below).
- modules/clan/module.nix: include v.machineImports for both OS classes.
4) Example migration (users)
- Add darwinModule in clanServices/users/default.nix.
- Validate that users service evaluates for a darwin machine and does not reference nixos-specific options.
5) Documentation
- Update ADR docs to mention darwinModule in Result attributes and examples.
- Add a short “Authoring for Darwin” snippet showing perInstance/perMachine returning both modules.
6) Tests and verification
- Unit-level: extend lib/inventory/distributed-service/tests to assert presence of result.final.<machine>.darwinModule when perInstance/perMachine return it.
- Integration-level: evaluate a sample darwin machine (e.g., inventory.json has test-darwin-machine) and assert clan.darwinModules.<machine> includes the aggregated module.
- Sanity: ensure existing nixos-only services still evaluate unchanged.
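Condensed sketch of the machine wiring from step 3, matching the selection logic in the distributed-services.nix hunk below (the `hasAttr`/`getAttr` lookup in the actual code is collapsed into an `or` fallback here):

```nix
# machineName is bound by the surrounding lib.mapAttrs (see the hunk below).
machineImports = lib.foldlAttrs (
  acc: _serviceName: serviceModule:
  let
    # Pick the result attribute matching this machine's class.
    modName =
      if inventory.machines.${machineName}.machineClass == "darwin" then
        "darwinModule"
      else
        "nixosModule";
  in
  acc ++ [ (serviceModule.result.final.${machineName}.${modName} or { }) ]
) [ ] config._services.mappedServices;
```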
Backward Compatibility
- Existing services that only return nixosModule continue to work.
- Darwin machines won't import service modules until services provide darwinModule, avoiding accidental breakage.
- extraModules remain applied only to nixos aggregation initially to prevent nixos-only modules from breaking darwin evaluation. We can add OS-specific extraModules in a follow-up (e.g., roles.*.extraModulesDarwin).
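A purely hypothetical sketch of that follow-up from an instance author's perspective (`extraModulesDarwin` is only the example name floated above; nothing here is implemented):

```nix
instances."my-instance".roles.default = {
  machines."my-mac" = { };
  # Today: applied to the nixos aggregation only.
  extraModules = [ ./nixos-tweaks.nix ];
  # Hypothetical follow-up; not part of this plan.
  # extraModulesDarwin = [ ./darwin-tweaks.nix ];
};
```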
Acceptance Criteria
- Services can return darwinModule in perInstance/perMachine without errors.
- Darwin machines import aggregated darwinModule outputs from all participating services.
- nixos behavior remains unchanged for existing services.
- Documentation updated to reflect the new attribute and example.
Rollout Notes
- Start by updating clanServices/users as a working example.
- Encourage service authors to add darwinModule incrementally; no global migration is required.

View File

@@ -58,22 +58,20 @@
pkgs.buildPackages.xorg.lndir
pkgs.glibcLocales
pkgs.kbd.out
self.nixosConfigurations."test-flash-machine-${pkgs.stdenv.hostPlatform.system}".pkgs.perlPackages.ConfigIniFiles
self.nixosConfigurations."test-flash-machine-${pkgs.stdenv.hostPlatform.system}".pkgs.perlPackages.FileSlurp
self.nixosConfigurations."test-flash-machine-${pkgs.hostPlatform.system}".pkgs.perlPackages.ConfigIniFiles
self.nixosConfigurations."test-flash-machine-${pkgs.hostPlatform.system}".pkgs.perlPackages.FileSlurp
pkgs.bubblewrap
self.nixosConfigurations."test-flash-machine-${pkgs.stdenv.hostPlatform.system}".config.system.build.toplevel
self.nixosConfigurations."test-flash-machine-${pkgs.stdenv.hostPlatform.system}".config.system.build.diskoScript
self.nixosConfigurations."test-flash-machine-${pkgs.stdenv.hostPlatform.system}".config.system.build.diskoScript.drvPath
self.nixosConfigurations."test-flash-machine-${pkgs.hostPlatform.system}".config.system.build.toplevel
self.nixosConfigurations."test-flash-machine-${pkgs.hostPlatform.system}".config.system.build.diskoScript
self.nixosConfigurations."test-flash-machine-${pkgs.hostPlatform.system}".config.system.build.diskoScript.drvPath
]
++ builtins.map (i: i.outPath) (builtins.attrValues self.inputs);
closureInfo = pkgs.closureInfo { rootPaths = dependencies; };
in
{
# Skip flash test on aarch64-linux for now as it's too slow
checks =
lib.optionalAttrs (pkgs.stdenv.isLinux && pkgs.stdenv.hostPlatform.system != "aarch64-linux")
{
checks = lib.optionalAttrs (pkgs.stdenv.isLinux && pkgs.hostPlatform.system != "aarch64-linux") {
nixos-test-flash = self.clanLib.test.baseTest {
name = "flash";
nodes.target = {
@@ -102,7 +100,7 @@
machine.succeed("echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIRWUusawhlIorx7VFeQJHmMkhl9X3QpnvOdhnV/bQNG root@target' > ./test_id_ed25519.pub")
# Some distros like to automount disks with spaces
machine.succeed('mkdir -p "/mnt/with spaces" && mkfs.ext4 /dev/vdc && mount /dev/vdc "/mnt/with spaces"')
machine.succeed("clan flash write --ssh-pubkey ./test_id_ed25519.pub --keymap de --language de_DE.UTF-8 --debug --flake ${self.checks.x86_64-linux.clan-core-for-checks} --yes --disk main /dev/vdc test-flash-machine-${pkgs.stdenv.hostPlatform.system}")
machine.succeed("clan flash write --ssh-pubkey ./test_id_ed25519.pub --keymap de --language de_DE.UTF-8 --debug --flake ${self.checks.x86_64-linux.clan-core-for-checks} --yes --disk main /dev/vdc test-flash-machine-${pkgs.hostPlatform.system}")
'';
} { inherit pkgs self; };
};

View File

@@ -160,9 +160,9 @@
closureInfo = pkgs.closureInfo {
rootPaths = [
privateInputs.clan-core-for-checks
self.nixosConfigurations."test-install-machine-${pkgs.stdenv.hostPlatform.system}".config.system.build.toplevel
self.nixosConfigurations."test-install-machine-${pkgs.stdenv.hostPlatform.system}".config.system.build.initialRamdisk
self.nixosConfigurations."test-install-machine-${pkgs.stdenv.hostPlatform.system}".config.system.build.diskoScript
self.nixosConfigurations."test-install-machine-${pkgs.hostPlatform.system}".config.system.build.toplevel
self.nixosConfigurations."test-install-machine-${pkgs.hostPlatform.system}".config.system.build.initialRamdisk
self.nixosConfigurations."test-install-machine-${pkgs.hostPlatform.system}".config.system.build.diskoScript
pkgs.stdenv.drvPath
pkgs.bash.drvPath
pkgs.buildPackages.xorg.lndir
@@ -215,7 +215,7 @@
# Prepare test flake and Nix store
flake_dir = prepare_test_flake(
temp_dir,
"${self.checks.${pkgs.stdenv.hostPlatform.system}.clan-core-for-checks}",
"${self.checks.${pkgs.hostPlatform.system}.clan-core-for-checks}",
"${closureInfo}"
)
@@ -296,7 +296,7 @@
# Prepare test flake and Nix store
flake_dir = prepare_test_flake(
temp_dir,
"${self.checks.${pkgs.stdenv.hostPlatform.system}.clan-core-for-checks}",
"${self.checks.${pkgs.hostPlatform.system}.clan-core-for-checks}",
"${closureInfo}"
)

View File

@@ -2,7 +2,7 @@
let
cli = self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli-full;
cli = self.packages.${pkgs.hostPlatform.system}.clan-cli-full;
ollama-model = pkgs.callPackage ./qwen3-4b-instruct.nix { };
in
@@ -53,7 +53,7 @@ in
pytest
pytest-xdist
(cli.pythonRuntime.pkgs.toPythonModule cli)
self.legacyPackages.${pkgs.stdenv.hostPlatform.system}.nixosTestLib
self.legacyPackages.${pkgs.hostPlatform.system}.nixosTestLib
]
))
];

View File

@@ -2,7 +2,7 @@
let
cli = self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli-full;
cli = self.packages.${pkgs.hostPlatform.system}.clan-cli-full;
in
{
name = "systemd-abstraction";

View File

@@ -115,9 +115,9 @@
let
closureInfo = pkgs.closureInfo {
rootPaths = [
self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli
self.checks.${pkgs.stdenv.hostPlatform.system}.clan-core-for-checks
self.clanInternals.machines.${pkgs.stdenv.hostPlatform.system}.test-update-machine.config.system.build.toplevel
self.packages.${pkgs.hostPlatform.system}.clan-cli
self.checks.${pkgs.hostPlatform.system}.clan-core-for-checks
self.clanInternals.machines.${pkgs.hostPlatform.system}.test-update-machine.config.system.build.toplevel
pkgs.stdenv.drvPath
pkgs.bash.drvPath
pkgs.buildPackages.xorg.lndir
@@ -132,7 +132,7 @@
imports = [ self.nixosModules.test-update-machine ];
};
extraPythonPackages = _p: [
self.legacyPackages.${pkgs.stdenv.hostPlatform.system}.nixosTestLib
self.legacyPackages.${pkgs.hostPlatform.system}.nixosTestLib
];
testScript = ''
@@ -154,7 +154,7 @@
# Prepare test flake and Nix store
flake_dir = prepare_test_flake(
temp_dir,
"${self.checks.${pkgs.stdenv.hostPlatform.system}.clan-core-for-checks}",
"${self.checks.${pkgs.hostPlatform.system}.clan-core-for-checks}",
"${closureInfo}"
)
(flake_dir / ".clan-flake").write_text("") # Ensure .clan-flake exists
@@ -226,7 +226,7 @@
"--to",
"ssh://root@192.168.1.1",
"--no-check-sigs",
f"${self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli}",
f"${self.packages.${pkgs.hostPlatform.system}.clan-cli}",
"--extra-experimental-features", "nix-command flakes",
],
check=True,
@@ -242,7 +242,7 @@
"-o", "UserKnownHostsFile=/dev/null",
"-o", "StrictHostKeyChecking=no",
f"root@192.168.1.1",
"${self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli}/bin/clan",
"${self.packages.${pkgs.hostPlatform.system}.clan-cli}/bin/clan",
"machines",
"update",
"--debug",
@@ -270,7 +270,7 @@
# Run clan update command
subprocess.run([
"${self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli-full}/bin/clan",
"${self.packages.${pkgs.hostPlatform.system}.clan-cli-full}/bin/clan",
"machines",
"update",
"--debug",
@@ -297,7 +297,7 @@
# Run clan update command with --build-host
subprocess.run([
"${self.packages.${pkgs.stdenv.hostPlatform.system}.clan-cli-full}/bin/clan",
"${self.packages.${pkgs.hostPlatform.system}.clan-cli-full}/bin/clan",
"machines",
"update",
"--debug",

View File

@@ -1,6 +1,3 @@
!!! Danger "Experimental"
This service is experimental and will change in the future.
This service sets up a certificate authority (CA) that can issue certificates to
other machines in your clan. For this the `ca` role is used.
It additionally provides a `default` role, that can be applied to all machines

View File

@@ -1,6 +1,3 @@
!!! Danger "Experimental"
This service is experimental and will change in the future.
This module enables hosting clan-internal services easily, which can be resolved
inside your VPN. This allows defining a custom top-level domain (e.g. `.clan`)
and exposing endpoints from a machine to others, which will be

View File

@@ -1,83 +1 @@
!!! Danger "Experimental"
This service is for demonstration purpose only and may change in the future.
The Hello-World Clan Service is a minimal example showing how to build and register your own service.
It serves as a reference implementation and is used in clan-core CI tests to ensure compatibility.
## What it demonstrates
- How to define a basic Clan-compatible service.
- How to structure your service for discovery and configuration.
- How Clan services interact with NixOS.
## Testing
This service demonstrates two levels of testing to ensure quality and stability across releases:
1. **Unit & Integration Testing** — via [`nix-unit`](https://github.com/nix-community/nix-unit)
2. **End-to-End Testing** — via **NixOS VM tests**, which we extended to support **container virtualization** for better performance.
We highly advocate following the [Practical Testing Pyramid](https://martinfowler.com/articles/practical-test-pyramid.html):
* Write **unit tests** for core logic and invariants.
* Add **one or two end-to-end (E2E)** tests to confirm your service starts and behaves correctly in a real NixOS environment.
NixOS is **untyped** and frequently changes; tests are the safest way to ensure long-term stability of services.
```
        / \
       /   \
      / E2E \
     /-------\
    /         \
   /Integration\
  /-------------\
 /               \
/   Unit Tests    \
-------------------
```
### nix-unit
We highly advocate the usage of [nix-unit](https://github.com/nix-community/nix-unit).
Example in: tests/eval-tests.nix
If you use flake-parts you can use the [native integration](https://flake.parts/options/nix-unit.html).
If nix-unit succeeds, your NixOS evaluation should be mostly correct.
!!! Tip
    - Ensure the most-used settings and variants are tested.
    - Think about important edge cases your system should handle.
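For orientation, a nix-unit test is just an attribute set of cases carrying `expr` and `expected`; this mirrors tests/eval-tests.nix from this repository (assuming a `testClan` fixture built with `clanLib.clan`, as in that file):

```nix
{
  test_simple = {
    # nix-unit evaluates 'expr' and compares it structurally with 'expected'.
    expr = testClan.config.nixosConfigurations.jon.config.environment.etc."hello".text;
    expected = "Good evening World!";
  };
}
```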
### NixOS VM / Container Test
!!! Warning "Early Vars & clanTest"
The testing system around vars is experimental
`clanTest` is still experimental and enables container virtualization by default.
This is still early and might have some limitations.
Some minimal boilerplate is needed to use `clanTest`
```nix
let
  nixosLib = import (inputs.nixpkgs + "/nixos/lib") { };
in
nixosLib.runTest (
  { ... }:
  {
    imports = [
      self.modules.nixosTest.clanTest
      # Example in tests/vm/default.nix
      testModule
    ];

    hostPkgs = pkgs;

    # Uncomment if you don't want or cannot use containers
    # test.useContainers = false;
  }
)
```
This is a test README just to appease the eval warnings if we don't have one

View File

@@ -8,7 +8,7 @@
{
_class = "clan.service";
manifest.name = "clan-core/hello-word";
manifest.description = "Minimal example clan service that greets the world";
manifest.description = "This is a test";
manifest.readme = builtins.readFile ./README.md;
# This service provides two roles: "morning" and "evening". Roles can be

View File

@@ -26,7 +26,7 @@ in
# The hello-world service being tested
../../clanServices/hello-world
# Required modules
../../nixosModules
../../nixosModules/clanCore
];
testName = "hello-world";
tests = ./tests/eval-tests.nix;

View File

@@ -4,7 +4,7 @@
...
}:
let
testClan = clanLib.clan {
testFlake = clanLib.clan {
self = { };
# Point to the folder of the module
# TODO: make this optional
@@ -33,20 +33,10 @@ let
};
in
{
/**
We highly advocate the usage of:
https://github.com/nix-community/nix-unit
If you use flake-parts you can use the native integration: https://flake.parts/options/nix-unit.html
*/
test_simple = {
# Allows inspection via the nix-repl
# Ignored by nix-unit; it only looks at 'expr' and 'expected'
inherit testClan;
config = testFlake.config;
# Assert that jon has the
# configured greeting in 'environment.etc.hello.text'
expr = testClan.config.nixosConfigurations.jon.config.environment.etc."hello".text;
expected = "Good evening World!";
expr = { };
expected = { };
};
}

View File

@@ -1,5 +1,8 @@
!!! Danger "Experimental"
This service is experimental and will change in the future.
🚧🚧🚧 Experimental 🚧🚧🚧
Use at your own risk.
We are still refining its interfaces; instability and breakages are expected.
---

View File

@@ -1,6 +1,3 @@
!!! Danger "Experimental"
This service is experimental and will change in the future.
## Usage
```

View File

@@ -1,5 +1,8 @@
!!! Danger "Experimental"
This service is experimental and will change in the future.
🚧🚧🚧 Experimental 🚧🚧🚧
Use at your own risk.
We are still refining its interfaces; instability and breakages are expected.
---

View File

@@ -120,63 +120,6 @@
share = settings.share;
script =
(
if settings.prompt then
''
prompt_value=$(cat "$prompts"/user-password)
if [[ -n "''${prompt_value-}" ]]; then
echo "$prompt_value" | tr -d "\n" > "$out"/user-password
else
xkcdpass --numwords 4 --delimiter - --count 1 | tr -d "\n" > "$out"/user-password
fi
''
else
''
xkcdpass --numwords 4 --delimiter - --count 1 | tr -d "\n" > "$out"/user-password
''
)
+ ''
mkpasswd -s -m sha-512 < "$out"/user-password | tr -d "\n" > "$out"/user-password-hash
'';
};
};
darwinModule =
{
config,
pkgs,
lib,
...
}:
{
# For darwin, we currently only generate and manage the password secret.
# Hooking into actual macOS account management may be added later.
clan.core.vars.generators."user-password-${settings.user}" = {
files.user-password-hash.neededFor = "users";
files.user-password.deploy = false;
prompts.user-password = lib.mkIf settings.prompt {
display = {
group = settings.user;
label = "password";
required = false;
helperText = ''
Your password will be encrypted and stored securely using the secret store you've configured.
'';
};
type = "hidden";
persist = true;
description = "Leave empty to generate automatically";
};
runtimeInputs = [
pkgs.coreutils
pkgs.xkcdpass
pkgs.mkpasswd
];
share = settings.share;
script =
(
if settings.prompt then
@@ -206,7 +149,5 @@
# Immutable users to ensure that this module has exclusive control over the users.
users.mutableUsers = false;
};
# No-op for darwin by default; can be extended later if needed.
darwinModule = { };
};
}

View File

@@ -56,6 +56,8 @@
{
clanLib,
lib,
directory,
...
}:
let
@@ -298,6 +300,18 @@ in
...
}:
{
exports.networking = {
peers = lib.mapAttrs (name: _machine: {
host.plain =
clanLib.vars.getPublicValue {
flake = directory;
machine = name;
generator = "wireguard-network-${instanceName}";
file = "prefix";
}
+ "::1";
}) roles.controller.machines;
};
# Controllers connect to all peers and other controllers
nixosModule =

View File

@@ -1,5 +1,8 @@
!!! Danger "Experimental"
This service is experimental and will change in the future.
🚧🚧🚧 Experimental 🚧🚧🚧
Use at your own risk.
We are still refining its interfaces; instability and breakages are expected.
---

devFlake/flake.lock generated
View File

@@ -105,11 +105,11 @@
},
"nixpkgs-dev": {
"locked": {
"lastModified": 1762328495,
"narHash": "sha256-IUZvw5kvLiExApP9+SK/styzEKSqfe0NPclu9/z85OQ=",
"lastModified": 1761853358,
"narHash": "sha256-1tBdsBzYJOzVzNOmCFzFMWHw7UUbhkhiYCFGr+OjPTs=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "4c621660e393922cf68cdbfc40eb5a2d54d3989a",
"rev": "262333bca9b49964f8e3cad3af655466597c01d4",
"type": "github"
},
"original": {
@@ -208,11 +208,11 @@
"nixpkgs": []
},
"locked": {
"lastModified": 1762366246,
"narHash": "sha256-3xc/f/ZNb5ma9Fc9knIzEwygXotA+0BZFQ5V5XovSOQ=",
"lastModified": 1761311587,
"narHash": "sha256-Msq86cR5SjozQGCnC6H8C+0cD4rnx91BPltZ9KK613Y=",
"owner": "numtide",
"repo": "treefmt-nix",
"rev": "a82c779ca992190109e431d7d680860e6723e048",
"rev": "2eddae033e4e74bf581c2d1dfa101f9033dbd2dc",
"type": "github"
},
"original": {

View File

@@ -51,7 +51,7 @@ Make sure you have the following:
**Note:** This creates a new directory in your current location
```shellSession
nix run "https://git.clan.lol/clan/clan-core/archive/main.tar.gz#clan-cli" --refresh -- flakes create
nix run https://git.clan.lol/clan/clan-core/archive/main.tar.gz#clan-cli --refresh -- flakes create
```
3. Enter a **name** in the prompt:

View File

@@ -150,61 +150,10 @@ Those are very similar to NixOS VM tests, as in they run virtualized nixos machi
As of now the container test driver is a downstream development in clan-core.
Basically everything stated under the NixOS VM tests sections applies here, except some limitations.
### Using Container Tests vs VM Tests
Limitations:
Container tests are **enabled by default** for all tests using the clan testing framework.
They offer significant performance advantages over VM tests:
- **Faster startup**
- **Lower resource usage**: No full kernel boot or hardware emulation overhead
To control whether a test uses containers or VMs, use the `clan.test.useContainers` option:
```nix
{
  clan = {
    directory = ./.;
    test.useContainers = true; # Use containers (default)
    # test.useContainers = false; # Use VMs instead
  };
}
```
**When to use VM tests instead of container tests:**
- Testing kernel features, modules, or boot processes
- Testing hardware-specific features
- When you need full system isolation
### System Requirements for Container Tests
Container tests require the **`uid-range`** system feature in the Nix sandbox.
This feature allows Nix to allocate a range of UIDs for containers to use, enabling `systemd-nspawn` containers to run properly inside the Nix build sandbox.
**Configuration:**
The `uid-range` feature requires the `auto-allocate-uids` setting to be enabled in your Nix configuration.
To verify or enable it, add to your `/etc/nix/nix.conf` or NixOS configuration:
```nix
nix.settings.experimental-features = [
  "auto-allocate-uids"
];
nix.settings.auto-allocate-uids = true;
nix.settings.system-features = [ "uid-range" ];
```
**Technical details:**
- Container tests set `requiredSystemFeatures = [ "uid-range" ];` in their derivation (see `lib/test/container-test-driver/driver-module.nix:98`)
- Without this feature, containers cannot properly manage user namespaces and will fail to start
### Limitations
- Cannot run in interactive mode, however while the container test runs, it logs a nsenter command that can be used to log into each of the containers.
- Early implementation and limited by features.
- Cannot run in interactive mode, however while the container test runs, it logs a nsenter command that can be used to log into each of the container.
- setuid binaries don't work
### Where to find examples for NixOS container tests

flake.lock generated
View File

@@ -31,11 +31,11 @@
]
},
"locked": {
"lastModified": 1762276996,
"narHash": "sha256-TtcPgPmp2f0FAnc+DMEw4ardEgv1SGNR3/WFGH0N19M=",
"lastModified": 1761899396,
"narHash": "sha256-XOpKBp6HLzzMCbzW50TEuXN35zN5WGQREC7n34DcNMM=",
"owner": "nix-community",
"repo": "disko",
"rev": "af087d076d3860760b3323f6b583f4d828c1ac17",
"rev": "6f4cf5abbe318e4cd1e879506f6eeafd83f7b998",
"type": "github"
},
"original": {
@@ -51,11 +51,11 @@
]
},
"locked": {
"lastModified": 1762040540,
"narHash": "sha256-z5PlZ47j50VNF3R+IMS9LmzI5fYRGY/Z5O5tol1c9I4=",
"lastModified": 1760948891,
"narHash": "sha256-TmWcdiUUaWk8J4lpjzu4gCGxWY6/Ok7mOK4fIFfBuU4=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "0010412d62a25d959151790968765a70c436598b",
"rev": "864599284fc7c0ba6357ed89ed5e2cd5040f0c04",
"type": "github"
},
"original": {
@@ -71,11 +71,11 @@
]
},
"locked": {
"lastModified": 1762304480,
"narHash": "sha256-ikVIPB/ea/BAODk6aksgkup9k2jQdrwr4+ZRXtBgmSs=",
"lastModified": 1761339987,
"narHash": "sha256-IUaawVwItZKi64IA6kF6wQCLCzpXbk2R46dHn8sHkig=",
"owner": "nix-darwin",
"repo": "nix-darwin",
"rev": "b8c7ac030211f18bd1f41eae0b815571853db7a2",
"rev": "7cd9aac79ee2924a85c211d21fafd394b06a38de",
"type": "github"
},
"original": {
@@ -99,11 +99,11 @@
},
"nixos-facter-modules": {
"locked": {
"lastModified": 1762264948,
"narHash": "sha256-iaRf6n0KPl9hndnIft3blm1YTAyxSREV1oX0MFZ6Tk4=",
"lastModified": 1761137276,
"narHash": "sha256-4lDjGnWRBLwqKQ4UWSUq6Mvxu9r8DSqCCydodW/Jsi8=",
"owner": "nix-community",
"repo": "nixos-facter-modules",
"rev": "fa695bff9ec37fd5bbd7ee3181dbeb5f97f53c96",
"rev": "70bcd64225d167c7af9b475c4df7b5abba5c7de8",
"type": "github"
},
"original": {
@@ -115,10 +115,10 @@
"nixpkgs": {
"locked": {
"lastModified": 315532800,
"narHash": "sha256-LDT9wuUZtjPfmviCcVWif5+7j4kBI2mWaZwjNNeg4eg=",
"rev": "a7fc11be66bdfb5cdde611ee5ce381c183da8386",
"narHash": "sha256-yDxtm0PESdgNetiJN5+MFxgubBcLDTiuSjjrJiyvsvM=",
"rev": "d7f52a7a640bc54c7bb414cca603835bf8dd4b10",
"type": "tarball",
"url": "https://releases.nixos.org/nixpkgs/nixpkgs-25.11pre887438.a7fc11be66bd/nixexprs.tar.xz"
"url": "https://releases.nixos.org/nixpkgs/nixpkgs-25.11pre871443.d7f52a7a640b/nixexprs.tar.xz"
},
"original": {
"type": "tarball",
@@ -181,11 +181,11 @@
]
},
"locked": {
"lastModified": 1762366246,
"narHash": "sha256-3xc/f/ZNb5ma9Fc9knIzEwygXotA+0BZFQ5V5XovSOQ=",
"lastModified": 1761311587,
"narHash": "sha256-Msq86cR5SjozQGCnC6H8C+0cD4rnx91BPltZ9KK613Y=",
"owner": "numtide",
"repo": "treefmt-nix",
"rev": "a82c779ca992190109e431d7d680860e6723e048",
"rev": "2eddae033e4e74bf581c2d1dfa101f9033dbd2dc",
"type": "github"
},
"original": {

View File

@@ -561,15 +561,6 @@ in
```
'';
};
options.darwinModule = mkOption {
type = types.deferredModule;
default = { };
description = ''
A single nix-darwin module for the instance.
This mirrors `nixosModule` but targets darwin machines.
'';
};
})
];
};
@@ -695,15 +686,6 @@ in
```
'';
};
options.darwinModule = mkOption {
type = types.deferredModule;
default = { };
description = ''
A single nix-darwin module for the machine.
This mirrors `nixosModule` but targets darwin machines.
'';
};
})
];
};
@@ -908,11 +890,6 @@ in
lib.setDefaultModuleLocation "via inventory.instances.${instanceName}.roles.${roleName}" s
) instanceCfg.roles.${roleName}.extraModules);
};
darwinModule = {
imports = [
instanceRes.darwinModule
];
};
}
) instanceCfg.roles.${roleName}.machines or { };
@@ -1002,24 +979,11 @@ in
else
instanceAcc.nixosModules
);
darwinModules = (
if instance.allMachines.${machineName}.darwinModule or { } != { } then
instanceAcc.darwinModules
++ [
(lib.setDefaultModuleLocation
"Via instances.${instanceName}.roles.${roleName}.machines.${machineName}"
instance.allMachines.${machineName}.darwinModule
)
]
else
instanceAcc.darwinModules
);
}
) roleAcc role.allInstances
)
{
nixosModules = [ ];
darwinModules = [ ];
# ...
}
config.result.allRoles;
@@ -1057,12 +1021,6 @@ in
]
++ instanceResults.nixosModules;
};
darwinModule = {
imports = [
(lib.setDefaultModuleLocation "Via ${config.manifest.name}.perMachine - machine='${machineName}';" machineResult.darwinModule)
]
++ instanceResults.darwinModules;
};
}
) config.result.allMachines;
};

View File

@@ -145,23 +145,10 @@ in
internal = true;
type = types.raw;
default = lib.mapAttrs (machineName: _: {
# This is the list of service modules for each machine (nixos or darwin)
# This is the list of nixosModules for each machine
machineImports = lib.foldlAttrs (
acc: _module_ident: serviceModule:
let
modName =
if inventory.machines.${machineName}.machineClass == "darwin" then
"darwinModule"
else
"nixosModule";
finalForMachine = serviceModule.result.final.${machineName} or { };
picked =
if builtins.hasAttr modName finalForMachine then
(builtins.getAttr modName finalForMachine)
else
{ };
in
acc ++ [ picked ]
acc ++ [ serviceModule.result.final.${machineName}.nixosModule or { } ]
) [ ] config._services.mappedServices;
}) inventory.machines or { };
};

View File

@@ -192,7 +192,7 @@ in
# - darwinModules (_class = darwin)
(lib.optionalAttrs (clan-core ? "${_class}Modules") clan-core."${_class}Modules".clanCore)
]
++ (v.machineImports or [ ]);
++ lib.optionals (_class == "nixos") (v.machineImports or [ ]);
# default hostname
networking.hostName = lib.mkDefault name;

View File

@@ -5,7 +5,7 @@
}:
{
# If we also need zfs, we can use the unstable version as we otherwise don't have a new enough kernel version
boot.zfs.package = pkgs.zfs_unstable or pkgs.zfsUnstable;
boot.zfs.package = pkgs.zfsUnstable;
# Enable bcachefs support
boot.supportedFilesystems.bcachefs = lib.mkDefault true;

View File

@@ -18,7 +18,7 @@ let
inputs.data-mesher.nixosModules.data-mesher
];
config = {
clan.core.clanPkgs = lib.mkDefault self.packages.${pkgs.stdenv.hostPlatform.system};
clan.core.clanPkgs = lib.mkDefault self.packages.${pkgs.hostPlatform.system};
};
};
in

View File

@@ -6,7 +6,7 @@
}:
let
isUnstable = config.boot.zfs.package == pkgs.zfs_unstable or pkgs.zfsUnstable;
isUnstable = config.boot.zfs.package == pkgs.zfsUnstable;
zfsCompatibleKernelPackages = lib.filterAttrs (
name: kernelPackages:
(builtins.match "linux_[0-9]+_[0-9]+" name) != null
@@ -30,5 +30,5 @@ let
in
{
# Note this might jump back and worth as kernel get added or removed.
boot.kernelPackages = lib.mkIf (lib.meta.availableOn pkgs.stdenv.hostPlatform pkgs.zfs) latestKernelPackage;
boot.kernelPackages = lib.mkIf (lib.meta.availableOn pkgs.hostPlatform pkgs.zfs) latestKernelPackage;
}

View File

@@ -4,7 +4,6 @@
padding: 8px;
flex-direction: column;
align-items: flex-start;
gap: 4px;
border-radius: 5px;
border: 1px solid var(--clr-border-def-2, #d8e8eb);

View File

@@ -1,13 +1,11 @@
import { onCleanup, onMount } from "solid-js";
import styles from "./ContextMenu.module.css";
import { Typography } from "../Typography/Typography";
import { Divider } from "../Divider/Divider";
import Icon from "../Icon/Icon";
export const Menu = (props: {
x: number;
y: number;
onSelect: (option: "move" | "delete") => void;
onSelect: (option: "move") => void;
close: () => void;
intersect: string[];
}) => {
@@ -56,31 +54,13 @@ export const Menu = (props: {
>
<Typography
hierarchy="label"
size="s"
weight="bold"
color={currentMachine() ? "primary" : "quaternary"}
>
Move
</Typography>
</li>
<Divider />
<li
class={styles.item}
aria-disabled={!currentMachine()}
onClick={() => {
console.log("Delete clicked", currentMachine());
props.onSelect("delete");
props.close();
}}
>
<Typography
hierarchy="label"
color={currentMachine() ? "primary" : "quaternary"}
>
<span class="flex items-center gap-2">
Delete
<Icon icon="Trash" font-size="inherit" />
</span>
</Typography>
</li>
</ul>
);
};

View File

@@ -71,7 +71,7 @@ const Machines = () => {
}
const result = ctx.machinesQuery.data;
return Object.keys(result).length > 0 ? result : [];
return Object.keys(result).length > 0 ? result : undefined;
};
return (
@@ -117,7 +117,7 @@ const Machines = () => {
}
>
<nav>
<For each={Object.entries(machines())}>
<For each={Object.entries(machines()!)}>
{([id, machine]) => (
<MachineRoute
clanURI={clanURI}

View File

@@ -206,8 +206,8 @@ const ClanSceneController = (props: RouteSectionProps) => {
<AddMachine
onCreated={async (id) => {
const promise = currentPromise();
await ctx.machinesQuery.refetch();
if (promise) {
await ctx.machinesQuery.refetch();
promise.resolve({ id });
setCurrentPromise(null);
}

View File

@@ -18,12 +18,12 @@ export class MachineManager {
private disposeRoot: () => void;
private machinePositionsSignal: Accessor<SceneData | undefined>;
private machinePositionsSignal: Accessor<SceneData>;
constructor(
scene: THREE.Scene,
registry: ObjectRegistry,
machinePositionsSignal: Accessor<SceneData | undefined>,
machinePositionsSignal: Accessor<SceneData>,
machinesQueryResult: MachinesQueryResult,
selectedIds: Accessor<Set<string>>,
setMachinePos: (id: string, position: [number, number] | null) => void,
@@ -39,9 +39,8 @@ export class MachineManager {
if (!machinesQueryResult.data) return;
const actualIds = Object.keys(machinesQueryResult.data);
const machinePositions = machinePositionsSignal() || {};
const machinePositions = machinePositionsSignal();
// Remove stale
for (const id of Object.keys(machinePositions)) {
if (!actualIds.includes(id)) {
console.log("Removing stale machine", id);
@@ -62,7 +61,8 @@ export class MachineManager {
// Effect 2: sync store → scene
//
createEffect(() => {
const positions = machinePositionsSignal() || {};
const positions = machinePositionsSignal();
if (!positions) return;
// Remove machines from scene
for (const [id, repr] of this.machines) {
@@ -103,7 +103,7 @@ export class MachineManager {
nextGridPos(): [number, number] {
const occupiedPositions = new Set(
Object.values(this.machinePositionsSignal() || {}).map((data) =>
Object.values(this.machinePositionsSignal()).map((data) =>
keyFromPos(data.position),
),
);

View File

@@ -32,9 +32,6 @@ import {
} from "./highlightStore";
import { createMachineMesh } from "./MachineRepr";
import { useClanContext } from "@/src/routes/Clan/Clan";
import client from "@api/clan/client";
import { navigateToClan } from "../hooks/clan";
import { useNavigate } from "@solidjs/router";
function intersectMachines(
event: MouseEvent,
@@ -103,7 +100,7 @@ export function CubeScene(props: {
onCreate: () => Promise<{ id: string }>;
selectedIds: Accessor<Set<string>>;
onSelect: (v: Set<string>) => void;
sceneStore: Accessor<SceneData | undefined>;
sceneStore: Accessor<SceneData>;
setMachinePos: (machineId: string, pos: [number, number] | null) => void;
isLoading: boolean;
clanURI: string;
@@ -134,6 +131,9 @@ export function CubeScene(props: {
let machineManager: MachineManager;
const [positionMode, setPositionMode] = createSignal<"grid" | "circle">(
"grid",
);
// Managed by controls
const [isDragging, setIsDragging] = createSignal(false);
@@ -142,6 +142,10 @@ export function CubeScene(props: {
// TODO: Unify this with actionRepr position
const [cursorPosition, setCursorPosition] = createSignal<[number, number]>();
const [cameraInfo, setCameraInfo] = createSignal({
position: { x: 0, y: 0, z: 0 },
spherical: { radius: 0, theta: 0, phi: 0 },
});
// Context menu state
const [contextOpen, setContextOpen] = createSignal(false);
const [menuPos, setMenuPos] = createSignal<{ x: number; y: number }>();
@@ -153,6 +157,7 @@ export function CubeScene(props: {
const BASE_SIZE = 0.9; // Height of the cube above the ground
const CUBE_SIZE = BASE_SIZE / 1.5; //
const BASE_HEIGHT = 0.05; // Height of the cube above the ground
const CUBE_Y = 0 + CUBE_SIZE / 2 + BASE_HEIGHT / 2; // Y position of the cube above the ground
const CUBE_SEGMENT_HEIGHT = CUBE_SIZE / 1;
const FLOOR_COLOR = 0xcdd8d9;
@@ -196,8 +201,6 @@ export function CubeScene(props: {
const grid = new THREE.GridHelper(1000, 1000 / 1, 0xe1edef, 0xe1edef);
const navigate = useNavigate();
onMount(() => {
// Scene setup
scene = new THREE.Scene();
@@ -308,12 +311,21 @@ export function CubeScene(props: {
bgCamera,
);
// controls.addEventListener("start", (e) => {
// setIsDragging(true);
// });
// controls.addEventListener("end", (e) => {
// setIsDragging(false);
// });
// Lighting
const ambientLight = new THREE.AmbientLight(0xd9f2f7, 0.72);
scene.add(ambientLight);
const directionalLight = new THREE.DirectionalLight(0xffffff, 3.5);
// scene.add(new THREE.DirectionalLightHelper(directionalLight));
// scene.add(new THREE.CameraHelper(camera));
const lightPos = new THREE.Spherical(
15,
initialSphericalCameraPosition.phi - Math.PI / 8,
@@ -400,6 +412,30 @@ export function CubeScene(props: {
actionMachine = createActionMachine();
scene.add(actionMachine);
// const spherical = new THREE.Spherical();
// spherical.setFromVector3(camera.position);
// Function to update camera info
const updateCameraInfo = () => {
const spherical = new THREE.Spherical();
spherical.setFromVector3(camera.position);
setCameraInfo({
position: {
x: Math.round(camera.position.x * 100) / 100,
y: Math.round(camera.position.y * 100) / 100,
z: Math.round(camera.position.z * 100) / 100,
},
spherical: {
radius: Math.round(spherical.radius * 100) / 100,
theta: Math.round(spherical.theta * 100) / 100,
phi: Math.round(spherical.phi * 100) / 100,
},
});
};
// Initial camera info update
updateCameraInfo();
createEffect(
on(ctx.worldMode, (mode) => {
if (mode === "create") {
@@ -625,8 +661,7 @@ export function CubeScene(props: {
});
const snapToGrid = (point: THREE.Vector3) => {
const store = props.sceneStore() || {};
if (!props.sceneStore) return;
// Snap to grid
const snapped = new THREE.Vector3(
Math.round(point.x / GRID_SIZE) * GRID_SIZE,
@@ -635,7 +670,7 @@ export function CubeScene(props: {
);
// Skip snapping if there's already a cube at this position
const positions = Object.entries(store);
const positions = Object.entries(props.sceneStore());
const intersects = positions.some(
([_id, p]) => p.position[0] === snapped.x && p.position[1] === snapped.z,
);
@@ -659,6 +694,7 @@ export function CubeScene(props: {
};
const onAddClick = (event: MouseEvent) => {
setPositionMode("grid");
ctx.setWorldMode("create");
renderLoop.requestRender();
};
@@ -670,6 +706,9 @@ export function CubeScene(props: {
if (!actionRepr) return;
actionRepr.visible = true;
// (actionRepr.material as THREE.MeshPhongMaterial).emissive.set(
// worldMode() === "create" ? CREATE_BASE_EMISSIVE : MOVE_BASE_EMISSIVE,
// );
// Calculate mouse position in normalized device coordinates
// (-1 to +1) for both components
@@ -697,31 +736,15 @@ export function CubeScene(props: {
}
}
};
const handleMenuSelect = async (mode: "move" | "delete") => {
const firstId = menuIntersection()[0];
if (!firstId) {
return;
}
const machine = machineManager.machines.get(firstId);
if (mode === "delete") {
console.log("deleting machine", firstId);
await client.post("delete_machine", {
body: {
machine: { flake: { identifier: props.clanURI }, name: firstId },
},
});
navigateToClan(navigate, props.clanURI);
ctx.machinesQuery.refetch();
ctx.serviceInstancesQuery.refetch();
return;
}
// Else "move" mode
const handleMenuSelect = (mode: "move") => {
ctx.setWorldMode(mode);
setHighlightGroups({ move: new Set(menuIntersection()) });
// Find the position of the first selected machine
// Set the actionMachine position to that
const firstId = menuIntersection()[0];
if (firstId) {
const machine = machineManager.machines.get(firstId);
if (machine && actionMachine) {
actionMachine.position.set(
machine.group.position.x,
@@ -730,6 +753,7 @@ export function CubeScene(props: {
);
setCursorPosition([machine.group.position.x, machine.group.position.z]);
}
}
};
createEffect(

View File

@@ -766,28 +766,6 @@ def test_prompt(
assert sops_store.get(my_generator, "prompt_persist").decode() == "prompt_persist"
@pytest.mark.with_core
def test_non_existing_dependency_raises_error(
monkeypatch: pytest.MonkeyPatch,
flake_with_sops: ClanFlake,
) -> None:
"""Ensure that a generator with a non-existing dependency raises a clear error."""
flake = flake_with_sops
config = flake.machines["my_machine"] = create_test_machine_config()
my_generator = config["clan"]["core"]["vars"]["generators"]["my_generator"]
my_generator["files"]["my_value"]["secret"] = False
my_generator["script"] = 'echo "$RANDOM" > "$out"/my_value'
my_generator["dependencies"] = ["non_existing_generator"]
flake.refresh()
monkeypatch.chdir(flake.path)
with pytest.raises(
ClanError,
match="Generator 'my_generator' on machine 'my_machine' depends on generator 'non_existing_generator', but 'non_existing_generator' does not exist",
):
cli.run(["vars", "generate", "--flake", str(flake.path), "my_machine"])
@pytest.mark.with_core
def test_shared_vars_must_never_depend_on_machine_specific_vars(
monkeypatch: pytest.MonkeyPatch,

View File

@@ -66,41 +66,6 @@ class Generator:
_public_store: "StoreBase | None" = None
_secret_store: "StoreBase | None" = None
@staticmethod
def validate_dependencies(
generator_name: str,
machine_name: str,
dependencies: list[str],
generators_data: dict[str, dict],
) -> list[GeneratorKey]:
"""Validate and build dependency keys for a generator.
Args:
generator_name: Name of the generator that has dependencies
machine_name: Name of the machine the generator belongs to
dependencies: List of dependency generator names
generators_data: Dictionary of all available generators for this machine
Returns:
List of GeneratorKey objects
Raises:
ClanError: If a dependency does not exist
"""
deps_list = []
for dep in dependencies:
if dep not in generators_data:
msg = f"Generator '{generator_name}' on machine '{machine_name}' depends on generator '{dep}', but '{dep}' does not exist. Please check your configuration."
raise ClanError(msg)
deps_list.append(
GeneratorKey(
machine=None if generators_data[dep]["share"] else machine_name,
name=dep,
)
)
return deps_list
@property
def key(self) -> GeneratorKey:
if self.share:
@@ -275,12 +240,15 @@ class Generator:
name=gen_name,
share=share,
files=files,
dependencies=cls.validate_dependencies(
gen_name,
machine_name,
gen_data["dependencies"],
generators_data,
),
dependencies=[
GeneratorKey(
machine=None
if generators_data[dep]["share"]
else machine_name,
name=dep,
)
for dep in gen_data["dependencies"]
],
migrate_fact=gen_data.get("migrateFact"),
validation_hash=gen_data.get("validationHash"),
prompts=prompts,

View File

@@ -245,7 +245,7 @@ class SecretStore(StoreBase):
output_dir / "activation" / generator.name / file.name
)
out_file.parent.mkdir(parents=True, exist_ok=True)
out_file.write_bytes(file.value)
out_file.write_bytes(self.get(generator, file.name))
if "partitioning" in phases:
for generator in vars_generators:
for file in generator.files:
@@ -254,7 +254,7 @@ class SecretStore(StoreBase):
output_dir / "partitioning" / generator.name / file.name
)
out_file.parent.mkdir(parents=True, exist_ok=True)
out_file.write_bytes(file.value)
out_file.write_bytes(self.get(generator, file.name))
hash_data = self.generate_hash(machine)
if hash_data:

View File

@@ -246,7 +246,7 @@ class SecretStore(StoreBase):
)
# chmod after in case it doesn't have u+w
target_path.touch(mode=0o600)
target_path.write_bytes(file.value)
target_path.write_bytes(self.get(generator, file.name))
target_path.chmod(file.mode)
if "partitioning" in phases:
@@ -260,7 +260,7 @@ class SecretStore(StoreBase):
)
# chmod after in case it doesn't have u+w
target_path.touch(mode=0o600)
target_path.write_bytes(file.value)
target_path.write_bytes(self.get(generator, file.name))
target_path.chmod(file.mode)
@override

View File

@@ -211,7 +211,7 @@ class ClanSelectError(ClanError):
def __str__(self) -> str:
if self.description:
return f"{self.msg} Reason: {self.description}. Use flag '--debug' to see full nix trace."
return f"{self.msg} Reason: {self.description}"
return self.msg
def __repr__(self) -> str:

View File

@@ -59,7 +59,9 @@ def upload_sources(machine: Machine, ssh: Host, upload_inputs: bool) -> str:
if not has_path_inputs and not upload_inputs:
# Just copy the flake to the remote machine, we can substitute other inputs there.
path = flake_data["path"]
remote_url = f"ssh-ng://{remote_url_base}"
if machine._class_ == "darwin":
remote_program_params = "?remote-program=bash -lc 'exec nix-daemon --stdio'"
remote_url = f"ssh-ng://{remote_url_base}{remote_program_params}"
cmd = nix_command(
[
"copy",

View File

@@ -17,7 +17,7 @@
runCommand,
setuptools,
webkitgtk_6_0,
wrapGAppsHook3,
wrapGAppsHook,
python,
lib,
stdenv,
@@ -87,7 +87,7 @@ buildPythonApplication rec {
nativeBuildInputs = [
setuptools
copyDesktopItems
wrapGAppsHook3
wrapGAppsHook
gobject-introspection
];