Compare commits

...

66 Commits

Author SHA1 Message Date
Johannes Kirschbauer
842e6f1fca facts: remove facts and related tests 2025-08-13 20:33:23 +02:00
clan-bot
765bdb262a Merge pull request 'Update clan-core-for-checks in devFlake' (#4731) from update-devFlake-clan-core-for-checks into main 2025-08-13 15:22:38 +00:00
gitea-actions[bot]
05c00fbe82 Update clan-core-for-checks in devFlake 2025-08-13 15:01:35 +00:00
clan-bot
7e97734797 Merge pull request 'Update clan-core-for-checks in devFlake' (#4727) from update-devFlake-clan-core-for-checks into main 2025-08-13 13:57:32 +00:00
gitea-actions[bot]
6384c4654e Update clan-core-for-checks in devFlake 2025-08-13 13:54:09 +00:00
DavHau
72d3ad09a4 vars: refactor - pass Machine objects to run_generators 2025-08-13 12:45:47 +00:00
DavHau
a535450ec0 vars: refactor - unify get_generators and _get_closure 2025-08-13 12:45:47 +00:00
Mic92
aaeb616f82 Merge pull request 'Drop update-private-flake-inputs ci action' (#4730) from init-wireguard-service into main
Reviewed-on: https://git.clan.lol/clan/clan-core/pulls/4730
2025-08-13 12:42:59 +00:00
Jörg Thalheim
434edeaae1 drop update-private-flake-inputs 2025-08-13 14:35:43 +02:00
Mic92
a4efd3cb16 Merge pull request 'update-sops-nix2' (#4719) from update-sops-nix2 into main
Reviewed-on: https://git.clan.lol/clan/clan-core/pulls/4719
2025-08-13 12:34:37 +00:00
Jörg Thalheim
13131ccd6e docs/wireguard: put requirements at the top 2025-08-13 14:34:15 +02:00
hsjobeki
3a8309b01f Merge pull request 'UI/install: add loading animation' (#4723) from install-ui into main
Reviewed-on: https://git.clan.lol/clan/clan-core/pulls/4723
2025-08-13 12:19:23 +00:00
Johannes Kirschbauer
10065a7c8f UI/install: add loading to button 2025-08-13 14:15:52 +02:00
Johannes Kirschbauer
176b54e29d UI/Button: move state out of the button 2025-08-13 14:15:29 +02:00
Jörg Thalheim
be048d8307 morph/flash: use patched clan-core-for-checks
the other one doesn't override flake.lock
2025-08-13 11:41:09 +00:00
gitea-actions[bot]
52fcab30e7 Update sops-nix 2025-08-13 11:41:09 +00:00
Mic92
d3b423328f Merge pull request 'Add wireguard service module' (#3354) from init-wireguard-service into main
Reviewed-on: https://git.clan.lol/clan/clan-core/pulls/3354
2025-08-13 10:55:48 +00:00
Jörg Thalheim
1177e84dcc vars/generate: print the files that were found when files are missing
this helps fix typos in the generator scripts
2025-08-13 12:29:52 +02:00
pinpox
414952dfa3 Add wireguard service module 2025-08-13 12:29:52 +02:00
DavHau
24194011ac vars: refactor - remove unnecessary return values
The boolean return value signaling whether anything was run isn't that useful; we are not doing anything with it.
2025-08-13 12:54:05 +07:00
DavHau
4f78a8ff94 Merge pull request 'networking_3' (#4507) from networking_3 into main
Reviewed-on: https://git.clan.lol/clan/clan-core/pulls/4507
2025-08-13 05:20:03 +00:00
DavHau
068b5d4c1e install: fix error message when target host not specified 2025-08-13 12:04:14 +07:00
DavHau
adccef4757 install: fix torify package not available 2025-08-13 12:04:14 +07:00
Qubasa
980d94d47d clan_cli: Improve cli message if no networks present 2025-08-13 12:04:14 +07:00
lassulus
a50b25eea2 clan-cli network: refactor, use new networking in ssh and install commands 2025-08-13 12:04:14 +07:00
lassulus
017989841d refactor: remove DeployInfo class and use Network/Remote directly
- Remove DeployInfo class entirely, replacing with direct Remote usage
- Update parse_qr_json_to_networks to return dict with network and remote
- Refactor all code to work with Remote lists instead of DeployInfo
- Add get_remote_for_machine context manager for network connections
- Update tests to use new Network/Remote structure
2025-08-13 12:04:14 +07:00
lassulus
c14a5fcc69 refactor: move ssh/upload.py from cli to lib
Move the upload module to clan_lib to better organize SSH-related
utilities. Updated all imports across the codebase.
2025-08-13 12:04:14 +07:00
clan-bot
4f60345ba7 Merge pull request 'Update clan-core-for-checks in devFlake' (#4726) from update-devFlake-clan-core-for-checks into main 2025-08-13 00:21:42 +00:00
gitea-actions[bot]
ece48d3b5f Update clan-core-for-checks in devFlake 2025-08-13 00:01:32 +00:00
clan-bot
4eea8d24f0 Merge pull request 'Update clan-core-for-checks in devFlake' (#4725) from update-devFlake-clan-core-for-checks into main 2025-08-12 20:26:23 +00:00
gitea-actions[bot]
49099df3fb Update clan-core-for-checks in devFlake 2025-08-12 20:01:32 +00:00
Johannes Kirschbauer
62ccba9fb5 ui/install: test connection 2025-08-12 21:04:18 +02:00
Johannes Kirschbauer
0b44770f1f UI/install: add loading animation 2025-08-12 20:45:55 +02:00
hsjobeki
61c3d7284a Merge pull request 'pkgs/clan/lib(install): implement separate nixos-anywhere install phases' (#4710) from ke-install-phases into main
Reviewed-on: https://git.clan.lol/clan/clan-core/pulls/4710
2025-08-12 15:34:15 +00:00
clan-bot
44b1be5ed4 Merge pull request 'Update clan-core-for-checks in devFlake' (#4717) from update-devFlake-clan-core-for-checks into main 2025-08-12 15:30:32 +00:00
Jörg Thalheim
88871bea69 clan_lib/flash: remove trailing whitespace 2025-08-12 17:14:52 +02:00
Johannes Kirschbauer
5141ea047c install: init secrets 2025-08-12 17:11:58 +02:00
gitea-actions[bot]
ff6a03a646 Update clan-core-for-checks in devFlake 2025-08-12 15:01:31 +00:00
Johannes Kirschbauer
bc379c985d ui/install: update storybook mock data 2025-08-12 16:35:34 +02:00
Johannes Kirschbauer
69d8b029d6 ui/install: fix alignment of some steps 2025-08-12 16:35:34 +02:00
Johannes Kirschbauer
f3617b0407 ui/vars: sanitize generator and prompt field names 2025-08-12 16:35:34 +02:00
Johannes Kirschbauer
a5205681cc ui/select: fix z-index of trigger 2025-08-12 16:35:34 +02:00
Johannes Kirschbauer
9880847d43 install: add progress to ui 2025-08-12 16:35:34 +02:00
a-kenji
8aa88b22ab pkgs/clan/lib(install): implement separate nixos-anywhere install phases
Split the `nixos-anywhere` phases into its components,
so we can provide the user with better feedback.

Closes: #4682
2025-08-12 16:35:34 +02:00
brianmcgee
ff979eba61 Merge pull request 'ui/integrate-clan-tags-machine-detail' (#4716) from ui/integrate-clan-tags-machine-detail into main
Reviewed-on: https://git.clan.lol/clan/clan-core/pulls/4716
2025-08-12 14:20:27 +00:00
Brian McGee
5d1abbd303 feat(ui): integrate tags info from field schema into tags section 2025-08-12 15:16:59 +01:00
Brian McGee
92e9bb2ed8 feat(ui): integrate list_tags api call into machine detail 2025-08-12 14:46:43 +01:00
brianmcgee
ea75c9bfa9 Merge pull request 'feat(ui): add small and transparent variants for Alert component' (#4713) from feat/small-variant-for-alert into main
Reviewed-on: https://git.clan.lol/clan/clan-core/pulls/4713
Reviewed-by: hsjobeki <hsjobeki@gmail.com>
2025-08-12 12:04:31 +00:00
hsjobeki
2adf65482d Merge pull request 'feat(api): add list_inventory_tags' (#4692) from feat/machine-tags-writeability into main
Reviewed-on: https://git.clan.lol/clan/clan-core/pulls/4692
2025-08-12 11:33:49 +00:00
DavHau
5684ddf104 vars: health check also for API not just cli 2025-08-12 11:28:02 +00:00
Johannes Kirschbauer
f74e444120 api/tags: add docs 2025-08-12 13:19:11 +02:00
Johannes Kirschbauer
0ef57bfc8e api/tags: add init.py for pytest 2025-08-12 13:07:36 +02:00
Brian McGee
8f43af3c48 feat(ui): add transparent option for Alert component 2025-08-12 11:52:38 +01:00
Brian McGee
eeaec583cb feat(ui): add small variant for Alert component 2025-08-12 11:52:37 +01:00
Johannes Kirschbauer
a9d1ff83f2 api/tags: split list into options and non-configurable tags 2025-08-12 12:41:15 +02:00
DavHau
89cb22147c Revert "machines update: support --target-host localhost"
This reverts commit a2818d4946cc66a08b9dd7a1ab95dc48ea708fe3.

Setting `--target-host localhost` breaks with:
sudo: no askpass program specified, try setting SUDO_ASKPASS
2025-08-12 17:39:40 +07:00
Jörg Thalheim
1006fc755e clanTest/vars-executor: add debugging to finalScript 2025-08-12 12:38:47 +02:00
clan-bot
f100177df3 Merge pull request 'Update clan-core-for-checks in devFlake' (#4709) from update-devFlake-clan-core-for-checks into main 2025-08-12 10:26:57 +00:00
Johannes Kirschbauer
cbd3b08296 api/tags: add from all possible sources 2025-08-12 11:05:10 +01:00
Brian McGee
2608bee30a feat(api): add list_inventory_tags 2025-08-12 11:05:10 +01:00
gitea-actions[bot]
a29459a384 Update clan-core-for-checks in devFlake 2025-08-12 10:01:30 +00:00
DavHau
1abdd45821 vars: add doc comments for fix() and health_check() 2025-08-12 09:13:54 +00:00
brianmcgee
b058fcc8eb Merge pull request 'fix(ui): swap colors for inverted/non-inverted in Divider component' (#4696) from fix/invert-default-color-scheme-divider into main
Reviewed-on: https://git.clan.lol/clan/clan-core/pulls/4696
2025-08-12 09:09:20 +00:00
Brian McGee
24ae95a007 fix(ui): swap colors for inverted/non-inverted in Divider component
Fixes #4602
2025-08-12 10:00:40 +01:00
brianmcgee
39510b613f Merge pull request 'fix color=inherit in typography component' (#4693) from fix/typography-color-inherit into main
Reviewed-on: https://git.clan.lol/clan/clan-core/pulls/4693
2025-08-12 09:00:20 +00:00
Brian McGee
dcdab61d13 feat(ui): improve color=inherit example in typography story 2025-08-12 09:56:26 +01:00
134 changed files with 2389 additions and 2064 deletions

View File

@@ -19,8 +19,7 @@ jobs:
uses: Mic92/update-flake-inputs-gitea@main
with:
# Exclude private flakes and update-clan-core checks flake
exclude-patterns: "devFlake/private/flake.nix,checks/impure/flake.nix"
exclude-patterns: "checks/impure/flake.nix"
auto-merge: true
gitea-token: ${{ secrets.CI_BOT_TOKEN }}
github-token: ${{ secrets.CI_BOT_GITHUB_TOKEN }}

View File

@@ -1,40 +0,0 @@
name: "Update private flake inputs"
on:
repository_dispatch:
workflow_dispatch:
schedule:
- cron: "0 3 * * *" # Run daily at 3 AM
jobs:
update-private-flake:
runs-on: nix
steps:
- uses: actions/checkout@v4
with:
submodules: true
- name: Update private flake inputs
run: |
# Update the private flake lock file
cd devFlake/private
nix flake update
cd ../..
# Update the narHash
bash ./devFlake/update-private-narhash
- name: Create pull request
env:
CI_BOT_TOKEN: ${{ secrets.CI_BOT_TOKEN }}
run: |
export GIT_AUTHOR_NAME=clan-bot GIT_AUTHOR_EMAIL=clan-bot@clan.lol GIT_COMMITTER_NAME=clan-bot GIT_COMMITTER_EMAIL=clan-bot@clan.lol
# Check if there are any changes
if ! git diff --quiet; then
git add devFlake/private/flake.lock devFlake/private.narHash
git commit -m "Update dev flake"
# Use shared PR creation script
export PR_BRANCH="update-dev-flake"
export PR_TITLE="Update dev flake"
export PR_BODY="This PR updates the dev flake inputs and corresponding narHash."
else
echo "No changes detected in dev flake inputs"
fi

View File

@@ -104,6 +104,7 @@ in
nixos-test-user-firewall-nftables = self.clanLib.test.containerTest ./user-firewall/nftables.nix nixosTestArgs;
service-dummy-test = import ./service-dummy-test nixosTestArgs;
wireguard = import ./wireguard nixosTestArgs;
service-dummy-test-from-flake = import ./service-dummy-test-from-flake nixosTestArgs;
};

View File

@@ -2,7 +2,6 @@
config,
self,
lib,
privateInputs,
...
}:
{
@@ -85,7 +84,7 @@
# Some distros like to automount disks with spaces
machine.succeed('mkdir -p "/mnt/with spaces" && mkfs.ext4 /dev/vdc && mount /dev/vdc "/mnt/with spaces"')
machine.succeed("clan flash write --debug --flake ${privateInputs.clan-core-for-checks} --yes --disk main /dev/vdc test-flash-machine-${pkgs.hostPlatform.system}")
machine.succeed("clan flash write --debug --flake ${self.checks.x86_64-linux.clan-core-for-checks} --yes --disk main /dev/vdc test-flash-machine-${pkgs.hostPlatform.system}")
'';
} { inherit pkgs self; };
};

View File

@@ -208,7 +208,7 @@
# Prepare test flake and Nix store
flake_dir = prepare_test_flake(
temp_dir,
"${privateInputs.clan-core-for-checks}",
"${self.checks.x86_64-linux.clan-core-for-checks}",
"${closureInfo}"
)
@@ -272,7 +272,7 @@
# Prepare test flake and Nix store
flake_dir = prepare_test_flake(
temp_dir,
"${privateInputs.clan-core-for-checks}",
"${self.checks.x86_64-linux.clan-core-for-checks}",
"${closureInfo}"
)

View File

@@ -1,6 +1,5 @@
{
self,
privateInputs,
...
}:
{
@@ -55,7 +54,7 @@
testScript = ''
start_all()
actual.fail("cat /etc/testfile")
actual.succeed("env CLAN_DIR=${privateInputs.clan-core-for-checks} clan machines morph test-morph-template --i-will-be-fired-for-using-this --debug --name test-morph-machine")
actual.succeed("env CLAN_DIR=${self.checks.x86_64-linux.clan-core-for-checks} clan machines morph test-morph-template --i-will-be-fired-for-using-this --debug --name test-morph-machine")
assert actual.succeed("cat /etc/testfile") == "morphed"
'';
} { inherit pkgs self; };

View File

@@ -174,7 +174,7 @@
##############
print("TEST: update with --build-host localhost --target-host localhost")
print("TEST: update with --build-host local")
with open(machine_config_path, "w") as f:
f.write("""
{
@@ -197,6 +197,15 @@
check=True
)
# allow machine to ssh into itself
subprocess.run([
"ssh",
"-o", "UserKnownHostsFile=/dev/null",
"-o", "StrictHostKeyChecking=no",
f"root@192.168.1.1",
"mkdir -p /root/.ssh && chmod 700 /root/.ssh && echo \"$(cat \"${../assets/ssh/privkey}\")\" > /root/.ssh/id_ed25519 && chmod 600 /root/.ssh/id_ed25519",
], check=True)
# install the clan-cli package into the container's Nix store
subprocess.run(
[
@@ -216,7 +225,7 @@
},
)
# Run ssh on the host to run the clan update command via --build-host localhost
# Run ssh on the host to run the clan update command via --build-host local
subprocess.run([
"ssh",
"-o", "UserKnownHostsFile=/dev/null",
@@ -230,8 +239,8 @@
"--host-key-check", "none",
"--upload-inputs", # Use local store instead of fetching from network
"--build-host", "localhost",
"--target-host", "localhost",
"test-update-machine",
"--target-host", f"root@localhost",
], check=True)
# Verify the update was successful

View File

@@ -0,0 +1,115 @@
{
pkgs,
nixosLib,
clan-core,
lib,
...
}:
nixosLib.runTest (
{ ... }:
let
machines = [
"controller1"
"controller2"
"peer1"
"peer2"
"peer3"
];
in
{
imports = [
clan-core.modules.nixosTest.clanTest
];
hostPkgs = pkgs;
name = "wireguard";
clan = {
directory = ./.;
modules."@clan/wireguard" = import ../../clanServices/wireguard/default.nix;
inventory = {
machines = lib.genAttrs machines (_: { });
instances = {
/*
wg-test-one
controller2 controller1
peer2 peer1 peer3
*/
wg-test-one = {
module.name = "@clan/wireguard";
module.input = "self";
roles.controller.machines."controller1".settings = {
endpoint = "192.168.1.1";
};
roles.controller.machines."controller2".settings = {
endpoint = "192.168.1.2";
};
roles.peer.machines = {
peer1.settings.controller = "controller1";
peer2.settings.controller = "controller2";
peer3.settings.controller = "controller1";
};
};
# TODO: Will this actually work with conflicting ports? Can we re-use interfaces?
#wg-test-two = {
# module.name = "@clan/wireguard";
# roles.controller.machines."controller1".settings = {
# endpoint = "192.168.1.1";
# port = 51922;
# };
# roles.peer.machines = {
# peer1 = { };
# };
#};
};
};
};
testScript = ''
start_all()
# Show all addresses
machines = [peer1, peer2, peer3, controller1, controller2]
for m in machines:
m.systemctl("start network-online.target")
for m in machines:
m.wait_for_unit("network-online.target")
m.wait_for_unit("systemd-networkd.service")
print("\n\n" + "="*60)
print("STARTING PING TESTS")
print("="*60)
for m1 in machines:
for m2 in machines:
if m1 != m2:
print(f"\n--- Pinging from {m1.name} to {m2.name}.wg-test-one ---")
m1.wait_until_succeeds(f"ping -c1 {m2.name}.wg-test-one >&2")
'';
}
)

View File

@@ -0,0 +1,6 @@
[
{
"publickey": "age1rnkc2vmrupy9234clyu7fpur5kephuqs3v7qauaw5zeg00jqjdasefn3cc",
"type": "age"
}
]

View File

@@ -0,0 +1,6 @@
[
{
"publickey": "age1t2hhg99d4p2yymuhngcy5ccutp8mvu7qwvg5cdhck303h9e7ha9qnlt635",
"type": "age"
}
]

View File

@@ -0,0 +1,6 @@
[
{
"publickey": "age1jts52rzlqcwjc36jkp56a7fmjn3czr7kl9ta2spkfzhvfama33sqacrzzd",
"type": "age"
}
]

View File

@@ -0,0 +1,6 @@
[
{
"publickey": "age12nqnp0zd435ckp5p0v2fv4p2x4cvur2mnxe8use2sx3fgy883vaq4ae75e",
"type": "age"
}
]

View File

@@ -0,0 +1,6 @@
[
{
"publickey": "age1sglr4zp34drjfydzeweq43fz3uwpul3hkh53lsfa9drhuzwmkqyqn5jegp",
"type": "age"
}
]

View File

@@ -0,0 +1,15 @@
{
"data": "ENC[AES256_GCM,data:zDF0RiBqaawpg+GaFkuLPomJ01Xu+lgY5JfUzaIk2j03XkCzIf8EMrmn6pRtBP3iUjPBm+gQSTQk6GHTONrixA5hRNyETV+UgQw=,iv:zUUCAGZ0cz4Tc2t/HOjVYNsdnrAOtid/Ns5ak7rnyCk=,tag:z43WtNSue4Ddf7AVu21IKA==,type:str]",
"sops": {
"age": [
{
"recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBlY1NEdjAzQm5RMFZWY3BJ\nclp6c01FdlZFK3dOSDB4cHc1NTdwMXErMFJFCnIrRVFNZEFYOG1rVUhFd2xsbTJ2\nVkJHNmdOWXlOcHJoQ0QzM1VyZmxmcGcKLS0tIFk1cEx4dFdvNGRwK1FWdDZsb1lR\nV2d1RFZtNzZqVFdtQ1FzNStEcEgyUUkKx8tkxqJz/Ko3xgvhvd6IYiV/lRGmrY13\nUZpYWR9tsQwZAR9dLjCyVU3JRuXeGB1unXC1CO0Ff3R0A/PuuRHh+g==\n-----END AGE ENCRYPTED FILE-----\n"
}
],
"lastmodified": "2025-08-13T09:19:37Z",
"mac": "ENC[AES256_GCM,data:8RGOUhZ2LGmC9ugULwHDgdMrtdo9vzBm3BJmL4XTuNJKm0NlKfgNLi1E4n9DMQ+kD4hKvcwbiUcwSGE8jZD6sm7Sh3bJi/HZCoiWm/O/OIzstli2NNDBGvQBgyWZA5H+kDjZ6aEi6icNWIlm5gsty7KduABnf5B3p0Bn5Uf5Bio=,iv:sGZp0XF+mgocVzAfHF8ATdlSE/5zyz5WUSRMJqNeDQs=,tag:ymYVBRwF5BOSAu5ONU2qKw==,type:str]",
"unencrypted_suffix": "_unencrypted",
"version": "3.10.2"
}
}

View File

@@ -0,0 +1 @@
../../../users/admin

View File

@@ -0,0 +1,15 @@
{
"data": "ENC[AES256_GCM,data:dHM7zWzqnC1QLRKYpbI2t63kOFnSaQy6ur9zlkLQf17Q03CNrqUsZtdEbwMnLR3llu7eVMhtvVRkXjEkvn3leb9HsNFmtk/DP70=,iv:roEZsBFqRypM106O5sehTzo7SySOJUJgAR738rTtOo8=,tag:VDd9/6uU0SAM7pWRLIUhUQ==,type:str]",
"sops": {
"age": [
{
"recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBKTEVYUmVGbUtOcHZ4cnc3\nKzNETnlxaVRKYTI3eWVHdEoyc3l2SnhsZ1J3CnB2RnZrOXM5Uml6TThDUlZjY25J\nbkJ6eUZ2ckN1NWpNUU9IaE93UDJQdlEKLS0tIC95ZDhkU0R1VHhCdldxdW4zSmps\nN3NqL1cvd05hRTRPdDA3R2pzNUFFajgKS+DJH14fH9AvEAa3PoUC1jEqKAzTmExN\nl32FeHTHbGMo1PKeaFm+Eg0WSpAmFE7beBunc5B73SW30ok6x4FcQw==\n-----END AGE ENCRYPTED FILE-----\n"
}
],
"lastmodified": "2025-08-13T09:19:47Z",
"mac": "ENC[AES256_GCM,data:77EnuBQyguvkCtobUg8/6zoLHjmeGDrSBZuIXOZBMxdbJjzhRg++qxQjuu6t0FoWATtz7u4Y3/jzUMGffr/N5HegqSq0D2bhv7AqJwBiVaOwd80fRTtM+YiP/zXsCk52Pj/Gadapg208bDPQ1BBDOyz/DrqZ7w//j+ARJjAnugI=,iv:IuTDmJKZEuHXJXjxrBw0gP2t6vpxAYEqbtpnVbavVCY=,tag:4EnpX6rOamtg1O+AaEQahQ==,type:str]",
"unencrypted_suffix": "_unencrypted",
"version": "3.10.2"
}
}

View File

@@ -0,0 +1 @@
../../../users/admin

View File

@@ -0,0 +1,15 @@
{
"data": "ENC[AES256_GCM,data:wcSsqxTKiMAnzPwxs5DNjcSdLyjVQ9UOrZxfSbOkVfniwx6F7xz6dLNhaDq7MHQ0vRWpg28yNs7NHrp52bYFnb/+eZsis46WiCw=,iv:B4t1lvS2gC601MtsmZfEiEulLWvSGei3/LSajwFS9Vs=,tag:hnRXlZyYEFfLJUrw1SqbSQ==,type:str]",
"sops": {
"age": [
{
"recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSAybUgya2VEdzMvRG1hdkpu\nM2pGNmcyVmcvYVZ1ZjJlY3A1bXFUUUtkMTI0CmJoRFZmejZjN2UxUXNuc1k5WnE2\nNmxIcnpNQ1lJZ3ZKSmhtSlVURXJTSUUKLS0tIGU4Wi9yZ3VYekJkVW9pNWFHblFk\na0gzbTVKUWdSam1sVjRUaUlTdVd5YWMKntRc9yb9VPOTMibp8QM5m57DilP01N/X\nPTQaw8oI40znnHdctTZz7S+W/3Te6sRnkOhFyalWmsKY0CWg/FELlA==\n-----END AGE ENCRYPTED FILE-----\n"
}
],
"lastmodified": "2025-08-13T09:19:58Z",
"mac": "ENC[AES256_GCM,data:8nq+ugkUJxE24lUIySySs/cAF8vnfqr936L/5F0O1QFwNrbpPmKRXkuwa6u0V+187L2952Id20Fym4ke59f3fJJsF840NCKDwDDZhBZ20q9GfOqIKImEom/Nzw6D0WXQLUT3w8EMyJ/F+UaJxnBNPR6f6+Kx4YgStYzCcA6Ahzg=,iv:VBPktEz7qwWBBnXE+xOP/EUVy7/AmNCHPoK56Yt/ZNc=,tag:qXONwOLFAlopymBEf5p4Sw==,type:str]",
"unencrypted_suffix": "_unencrypted",
"version": "3.10.2"
}
}

View File

@@ -0,0 +1 @@
../../../users/admin

View File

@@ -0,0 +1,15 @@
{
"data": "ENC[AES256_GCM,data:4d3ri0EsDmWRtA8vzvpPRLMsSp4MIMKwvtn0n0pRY05uBPXs3KcjnweMPIeTE1nIhqnMR2o2MfLah5TCPpaFax9+wxIt74uacbg=,iv:0LBAldTC/hN4QLCxgXTl6d9UB8WmUTnj4sD2zHQuG2w=,tag:zr/RhG/AU4g9xj9l2BprKw==,type:str]",
"sops": {
"age": [
{
"recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBvV0JnZDhlU1piU1g2cng0\ncytKOEZ6WlZlNGRGUjV3MmVMd2Nzc0ZwelgwCjBGdThCUGlXbVFYdnNoZWpJZ3Vm\nc2xkRXhxS09vdzltSVoxLzhFSVduak0KLS0tIE5DRjJ6cGxiVlB1eElHWXhxN1pJ\nYWtIMDMvb0Z6akJjUzlqeEFsNHkxL2cKpghv/QegnXimeqd9OPFouGM//jYvoVmw\n2d4mLT2JSMkEhpfGcqb6vswhdJfCiKuqr2B4bqwAnPMaykhsm8DFRQ==\n-----END AGE ENCRYPTED FILE-----\n"
}
],
"lastmodified": "2025-08-13T09:20:08Z",
"mac": "ENC[AES256_GCM,data:BzlQVAJ7HzcxNPKB3JhabqRX/uU0EElj172YecjmOflHnzz/s9xgfdAfJK/c53hXlX4LtGPnubH7a8jOolRq98zmZeBYE27+WLs2aN7Ufld6mYk90/i7u4CqR+Fh2Kfht04SlUJCjnS5A9bTPwU9XGRHJ0BiOhzTuSMUJTRaPRM=,iv:L50K5zc1o99Ix9nP0pb9PRH+VIN2yvq7JqKeVHxVXmc=,tag:XFLkSCsdbTPxbasDYYxcFQ==,type:str]",
"unencrypted_suffix": "_unencrypted",
"version": "3.10.2"
}
}

View File

@@ -0,0 +1 @@
../../../users/admin

View File

@@ -0,0 +1,15 @@
{
"data": "ENC[AES256_GCM,data:qfLm6+g1vYnESCik9uyBeKsY6Ju2Gq3arnn2I8HHNO67Ri5BWbOQTvtz7WT8/q94RwVjv8SGeJ/fsJSpwLSrJSbqTZCPAnYwzzQ=,iv:PnA9Ao8RRELNhNQYbaorstc0KaIXRU7h3+lgDCXZFHk=,tag:VeLgYQYwqthYihIoQTwYiA==,type:str]",
"sops": {
"age": [
{
"recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBNWVVQaDJFd0N3WHptRC9Z\nZTgxTWh5bnU1SkpqRWRXZnhPaFhpSVJmVEhrCjFvdHFYenNWaFNrdXlha09iS2xj\nOTZDcUNkcHkvTDUwNjM4Z3gxUkxreUEKLS0tIE5oY3Q2bWhsb2FSQTVGTWVSclJw\nWllrelRwT3duYjJJbTV0d3FwU1VuNlkK2eN3fHFX/sVUWom8TeZC9fddqnSCsC1+\nJRCZsG46uHDxqLcKIfdFWh++2t16XupQYk3kn+NUR/aMc3fR32Uwjw==\n-----END AGE ENCRYPTED FILE-----\n"
}
],
"lastmodified": "2025-08-13T09:20:18Z",
"mac": "ENC[AES256_GCM,data:nUwsPcP1bsDjAHFjQ1NlVkTwyZY4B+BpzNkMx9gl0rE14j425HVLtlhlLndhRp+XMpnDldQppLAAtSdzMsrw8r5efNgTRl7cu4Fy/b9cHt84k7m0aou5lrGus9SV1bM7/fzC9Xm7CSXBcRzyDGVsKC6UBl1rx+ybh7HyAN05XSo=,iv:It57H+zUUNPkoN1D8sYwyZx5zIFIga7mydhGUHYBCGE=,tag:mBQdYqUpjPknbYa13qESyw==,type:str]",
"unencrypted_suffix": "_unencrypted",
"version": "3.10.2"
}
}

View File

@@ -0,0 +1 @@
../../../users/admin

View File

@@ -0,0 +1,4 @@
{
"publickey": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
"type": "age"
}

View File

@@ -0,0 +1 @@
../../../../../../sops/machines/controller1

View File

@@ -0,0 +1,19 @@
{
"data": "ENC[AES256_GCM,data:noe913+28JWkoDkGGMu++cc1+j5NPDoyIhWixdsowoiVO3cTWGkZ88SUGO5D,iv:ynYMljwqMcBdk8RpVcw/2Jflg2RCF28r4fKUgIAF8B4=,tag:+TsXDJgfUhKgg4iQVXKKlQ==,type:str]",
"sops": {
"age": [
{
"recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBhYVRReTZBQ05GYmVBVjhS\nNXM5aFlhVzZRaVl6UHl6S3JnMC9Sb1dwZ1ZjCmVuS2dEVExYZWROVklUZWFCSnM2\nZnlxbVNseTM2c0Q0TjhsT3NzYmtqREUKLS0tIHBRTFpvVGt6d1cxZ2lFclRsUVhZ\nZDlWaG9PcXVrNUZKaEgxWndjUDVpYjgKt0eOhAgcYdkg9JSEakx4FjChLTn3pis+\njOkuGd4JfXMKcwC7vJV5ygQBxzVJSBw+RucP7sYCBPK0m8Voj94ntw==\n-----END AGE ENCRYPTED FILE-----\n"
},
{
"recipient": "age1rnkc2vmrupy9234clyu7fpur5kephuqs3v7qauaw5zeg00jqjdasefn3cc",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSB6MFJqNHNraG9DSnJZMFdz\ndU8zVXNTamxROFd1dWtuK2RiekhPdHhleVhFCi8zNWJDNXJMRUlDdjc4Q0UycTIz\nSGFGSmdnNU0wZWlDaTEwTzBqWjh6SFkKLS0tIEJOdjhOMDY2TUFLb3RPczNvMERx\nYkpSeW5VOXZvMlEvdm53MDE3aUFTNjgKyelSTjrTIR9I3rJd3krvzpsrKF1uGs4J\n4MtmQj0/3G+zPYZVBx7b3HF6B3f1Z7LYh05+z7nCnN/duXyPnDjNcg==\n-----END AGE ENCRYPTED FILE-----\n"
}
],
"lastmodified": "2025-08-13T09:19:37Z",
"mac": "ENC[AES256_GCM,data:+DmIkPG/H6tCtf8CvB98E1QFXv08QfTcCB3CRsi+XWnIRBkryRd/Au9JahViHMdK7MED8WNf84NWTjY2yH4y824/DjI8XXNMF1iVMo0CqY42xbVHtUuhXrYeT+c8CyEw+M6zfy1jC0+Bm3WQWgagz1G6A9SZk3D2ycu0N08+axA=,iv:kwBjTYebIy5i2hagAajSwwuKnSkrM9GyrnbeQXB2e/w=,tag:EgKJ5gVGYj1NGFUduxLGfg==,type:str]",
"unencrypted_suffix": "_unencrypted",
"version": "3.10.2"
}
}

View File

@@ -0,0 +1 @@
../../../../../../sops/users/admin

View File

@@ -0,0 +1 @@
lQfR7GhivN87XoXruTGOPjVPhNu1Brt//wyc3pdwE20=

View File

@@ -0,0 +1 @@
7470bb5c79df224a9b7f5a2259acd2e46db763c27e24cb3416c8b591cb328077

View File

@@ -0,0 +1 @@
fd51:19c1:3b:f700

View File

@@ -0,0 +1 @@
../../../../../../sops/machines/controller2

View File

@@ -0,0 +1,19 @@
{
"data": "ENC[AES256_GCM,data:2kehACgvNgoYGPwnW7p86BR0yUu689Chth6qZf9zoJtuTY9ATS68dxDyBc5S,iv:qb2iDUtExegTeN3jt6SA8RnU61W5GDDhn56QXiQT4gw=,tag:pSGPICX5p6qlZ1WMVoIEYQ==,type:str]",
"sops": {
"age": [
{
"recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBSTTR5TDY4RE9VYmlCK1dL\nWkVRcVZqVDlsbmQvUlJmdzF2b1Z1S0k3NngwCkFWNzRVaERtSmFsd0o2aFJOb0ZX\nSU9yUnVaNi9IUjJWeGRFcEpDUXo5WkEKLS0tIEczNkxiYnJsTWRoLzFhQVF1M21n\nWnZEdGV1N2N5d1FZQkJUQ1IrdGFLblkKPTpha2bxS8CCAMXWTDKX/WOcdvggaP3Y\nqewyahDNzb4ggP+LNKp55BtwFjdvoPoq4BpYOOgMRbQMMk+H1o9WFw==\n-----END AGE ENCRYPTED FILE-----\n"
},
{
"recipient": "age1t2hhg99d4p2yymuhngcy5ccutp8mvu7qwvg5cdhck303h9e7ha9qnlt635",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBYcEZ6Tzk3M0pkV0tOdTBj\nenF2a0tHNnhBa0NrazMwV1VBbXBZR3pzSHpvCnBZOEU0VlFHS1FHcVpTTDdPczVV\nV0RFSlZ0VmIzWGoydEdKVXlIUE9OOEkKLS0tIFZ0cWVBR1loeVlWa2c4U3oweXE2\ncm1ja0JCS3U5Nk41dlAzV2NabDc2bDQKdgCDNnpRZlFPnEGlX6fo0SQX4yOB+E6r\ntnSwofR3xxZvkyme/6JJU5qBZXyCXEAhKMRkFyvJANXzMJAUo/Osow==\n-----END AGE ENCRYPTED FILE-----\n"
}
],
"lastmodified": "2025-08-13T09:19:48Z",
"mac": "ENC[AES256_GCM,data:e3EkL8vwRhLsec83Zi9DE3PKT+4RwgiffpN4QHcJKTgmDW6hzizWc5kAxbNWGJ9Qqe6sso2KY7tc+hg1lHEsmzjCbg153p8h+7lVI2XT6adi/CS8WZ2VpeL+0X9zDQCjqHmrESZAYFBdkLqO4jucdf0Pc3CKKD+N3BDDTwSUvHM=,iv:xvR7dJL8sdYen00ovrYT8PNxhB9XxSWDSRz1IK23I/o=,tag:OyhAvllBgfAp3eGeNpR/Nw==,type:str]",
"unencrypted_suffix": "_unencrypted",
"version": "3.10.2"
}
}

View File

@@ -0,0 +1 @@
../../../../../../sops/users/admin

View File

@@ -0,0 +1 @@
5Z7gbLFbXpEFfomW2pKyZBpZN5xvUtiqrIL0GVfNtQ8=

View File

@@ -0,0 +1 @@
c3672fdb9fb31ddaf6572fc813cf7a8fe50488ef4e9d534c62d4f29da60a1a99

View File

@@ -0,0 +1 @@
fd51:19c1:c1:aa00

View File

@@ -0,0 +1 @@
../../../../../../sops/machines/peer1

View File

@@ -0,0 +1,19 @@
{
"data": "ENC[AES256_GCM,data:b+akw85T3D9xc75CPLHucR//k7inpxKDvgpR8tCNKwNDRVjVHjcABhfZNLXW,iv:g11fZE8UI0MVh9GKdjR6leBlxa4wN7ZubozXG/VlBbw=,tag:0YkzWCW3zJ3Mt3br/jmTYw==,type:str]",
"sops": {
"age": [
{
"recipient": "age1jts52rzlqcwjc36jkp56a7fmjn3czr7kl9ta2spkfzhvfama33sqacrzzd",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBXWkJUR0pIa2xOSEw2dThm\nYlNuOHZCVW93Wkc5LzE4YmpUTHRkZlk3ckc4CnN4M3ZRMWNFVitCT3FyWkxaR0di\nb0NmSXFhRHJmTWg0d05OcWx1LytscEEKLS0tIEtleTFqU3JrRjVsdHpJeTNuVUhF\nWEtnOVlXVXRFamFSak5ia2F2b0JiTzAKlhOBZvZ4AN+QqAYQXvd6YNmgVS4gtkWT\nbV3bLNTgwtrDtet9NDHM8vdF+cn5RZxwFfgmTbDEow6Zm8EXfpxj/g==\n-----END AGE ENCRYPTED FILE-----\n"
},
{
"recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSB6YVYyQkZqMTJYQTlyRG5Y\nbnJ2UkE1TS9FZkpSa2tQbk1hQjViMi9OcGk0CjFaZUdjU3JtNzh0bDFXdTdUVW4x\nanFqZHZjZjdzKzA2MC8vTWh3Uy82UGcKLS0tIDhyOFl3UGs3czdoMlpza3UvMlB1\nSE90MnpGc05sSCtmVWg0UVNVdmRvN2MKHlCr4U+7bsoYb+2fgT4mEseZCEjxrtLu\n55sR/4YH0vqMnIBnLTSA0e+WMrs3tQfseeJM5jY/ZNnpec1LbxkGTg==\n-----END AGE ENCRYPTED FILE-----\n"
}
],
"lastmodified": "2025-08-13T09:19:58Z",
"mac": "ENC[AES256_GCM,data:gEoEC9D2Z7k5F8egaY1qPXT5/96FFVsyofSBivQ28Ir/9xHX2j40PAQrYRJUWsk/GAUMOyi52Wm7kPuacw+bBcdtQ0+MCDEmjkEnh1V83eZ/baey7iMmg05uO92MYY5o4e7ZkwzXoAeMCMcfO0GqjNvsYJHF1pSNa+UNDj+eflw=,iv:dnIYpvhAdvUDe9md53ll42krb0sxcHy/toqGc7JFxNA=,tag:0WkZU7GeKMD1DQTYaI+1dg==,type:str]",
"unencrypted_suffix": "_unencrypted",
"version": "3.10.2"
}
}

View File

@@ -0,0 +1 @@
../../../../../../sops/users/admin

View File

@@ -0,0 +1 @@
juK7P/92N2t2t680aLIRobHc3ts49CsZBvfZOyIKpUc=

View File

@@ -0,0 +1 @@
b36142569a74a0de0f9b229f2a040ae33a22d53bef5e62aa6939912d0cda05ba

View File

@@ -0,0 +1 @@
6987:50a0:9b93:4337

View File

@@ -0,0 +1 @@
../../../../../../sops/machines/peer2

View File

@@ -0,0 +1,19 @@
{
"data": "ENC[AES256_GCM,data:apX2sLwtq6iQgLJslFwiRMNBUe0XLzLQbhKfmb2pKiJG7jGNHUgHJz3Ls4Ca,iv:HTDatm3iD5wACTkkd3LdRNvJfnfg75RMtn9G6Q7Fqd4=,tag:Mfehlljnes5CFD1NJdk27A==,type:str]",
"sops": {
"age": [
{
"recipient": "age12nqnp0zd435ckp5p0v2fv4p2x4cvur2mnxe8use2sx3fgy883vaq4ae75e",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBVZzFyMUZsd2V2VWxOUmhP\nZE8yZTc4Q0RkZisxR25NemR1TzVDWmJZVjBVClA1MWhsU0xzSG16aUx3cWFWKzlG\nSkxrT09OTkVqLzlWejVESE1QWHVJaFkKLS0tIGxlaGVuWU43RXErNTB3c3FaUnM3\nT0N5M253anZkbnFkZWw2VHA0eWhxQW8Kd1PMtEX1h0Hd3fDLMi++gKJkzPi9FXUm\n+uYhx+pb+pJM+iLkPwP/q6AWC7T0T4bHfekkdzxrbsKMi73x/GrOiw==\n-----END AGE ENCRYPTED FILE-----\n"
},
{
"recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBqVzRIMWdlNjVwTURyMFkv\nSUhiajZkZVNuWklRYit6cno4UzNDa2szOFN3CkQ2TWhHb25pbmR1MlBsRXNLL2lx\ncVZ3c3BsWXN2aS9UUVYvN3I4S0xUSmMKLS0tIE5FV0U5aXVUZk9XL0U0Z2ZSNGd5\nbU9zY3IvMlpSNVFLYkRNQUpUYVZOWFUK7j4Otzb8CJTcT7aAj9/irxHEDXh1HkTg\nzz7Ho8/ZncNtaCVHlHxjTgVW9d5aIx8fSsV9LRCFwHMtNzvwj1Nshg==\n-----END AGE ENCRYPTED FILE-----\n"
}
],
"lastmodified": "2025-08-13T09:20:08Z",
"mac": "ENC[AES256_GCM,data:e7WNVEz78noHBiz6S3A6qNfop+yBXB3rYN0k4GvaQKz3b99naEHuqIF8Smzzt4XrbbiPKu2iLa5ddLBlqqsi32UQUB8JS9TY7hvW8ol+jpn0VxusGCXW9ThdDEsM/hXiPyr331C73zTvbOYI1hmcGMlJL9cunVRO9rkMtEqhEfo=,iv:6zt7wjIs1y5xDHNK+yLOwoOuUpY7/dOGJGT6UWAFeOg=,tag:gzFTgoxhoLzUV0lvzOhhfg==,type:str]",
"unencrypted_suffix": "_unencrypted",
"version": "3.10.2"
}
}

View File

@@ -0,0 +1 @@
../../../../../../sops/users/admin

View File

@@ -0,0 +1 @@
XI9uSaQRDBCb82cMnGzGJcbqRfDG/IXZobyeL+kV03k=

View File

@@ -0,0 +1 @@
360f9fce4a984eb87ce2a673eb5341ecb89c0f62126548d45ef25ff5243dd646

View File

@@ -0,0 +1 @@
3b21:3ced:003e:89b3

View File

@@ -0,0 +1 @@
../../../../../../sops/machines/peer3

View File

@@ -0,0 +1,19 @@
{
"data": "ENC[AES256_GCM,data:Gluvjes/3oH5YsDq00JDJyJgoEFcj56smioMArPSt309MDGExYX2QsCzeO1q,iv:oBBJRDdTj/1dWEvzhdFKQ2WfeCKyavKMLmnMbqnU5PM=,tag:2WNFxKz2dWyVcybpm5N4iw==,type:str]",
"sops": {
"age": [
{
"recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBtQWpjRmhZTFdPa2VSZkFN\nbUczMlY5bDBmMTdoMy8xcWxMaXpWVitMZGdjCnRWb2Y3eGpHU1hmNHRJVFBqbU5w\nVEZGdUIrQXk0U0dUUEZ6bE5EMFpTRHMKLS0tIGpYSmZmQThJUTlvTHpjc05ZVlM4\nQWhTOWxnUHZnYlJ3czE3ZUJ0L3ozWTQK3a7N0Zpzo4sUezYveqvKR49RUdJL23eD\n+cK5lk2xbtj+YHkeG+dg7UlHfDaicj0wnFH1KLuWmNd1ONa6eQp3BQ==\n-----END AGE ENCRYPTED FILE-----\n"
},
{
"recipient": "age1sglr4zp34drjfydzeweq43fz3uwpul3hkh53lsfa9drhuzwmkqyqn5jegp",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSA3a2FOWlVsSkdnendrYmUz\ndEpuL1hZSWNFTUtDYm14S3V1aW9KS3hsazJRCkp2SkFFbi9hbGJpNks1MlNTL0s5\nTk5pcUMxaEJobkcvWmRGeU9jMkdNdzAKLS0tIDR6M0Y5eE1ETHJJejAzVW1EYy9v\nZCtPWHJPUkhuWnRzSGhMUUtTa280UmMKXvtnxyop7PmRvTOFkV80LziDjhGh93Pf\nYwhD/ByD/vMmr21Fd6PVHOX70FFT30BdnMc1/wt7c/0iAw4w4GoQsA==\n-----END AGE ENCRYPTED FILE-----\n"
}
],
"lastmodified": "2025-08-13T09:20:18Z",
"mac": "ENC[AES256_GCM,data:3nXMTma0UYXCco+EM8UW45cth7DVMboFBKyesL86GmaG6OlTkA2/25AeDrtSVO13a5c2jC6yNFK5dE6pSe5R9f0BoDF7d41mgc85zyn+LGECNWKC6hy6gADNSDD6RRuV1S3FisFQl1F1LD8LiSWmg/XNMZzChNlHYsCS8M+I84g=,iv:pu5VVXAVPmVoXy0BJ+hq5Ar8R0pZttKSYa4YS+dhDNc=,tag:xp1S/4qExnxMTGwhfLJrkA==,type:str]",
"unencrypted_suffix": "_unencrypted",
"version": "3.10.2"
}
}

View File

@@ -0,0 +1 @@
../../../../../../sops/users/admin

View File

@@ -0,0 +1 @@
t6qN4VGLR+VMhrBDNKQEXZVyRsEXs1/nGFRs5DI82F8=

View File

@@ -0,0 +1 @@
e3facc99b73fe029d4c295f71829a83f421f38d82361cf412326398175da162a

View File

@@ -0,0 +1 @@
e42b:bf85:33f4:f0b1

View File

@@ -0,0 +1,217 @@
# Wireguard VPN Service
This service provides a Wireguard-based VPN mesh network with automatic IPv6 address allocation and routing between clan machines.
## Overview
The wireguard service creates a secure mesh network between clan machines using two roles:
- **Controllers**: Machines with public endpoints that act as connection points and routers
- **Peers**: Machines that connect through controllers to access the network
## Requirements
- Controllers must have a publicly accessible endpoint (domain name or static IP)
- Peers must be in networks where UDP traffic is not blocked (uses port 51820 by default, configurable)
## Features
- Automatic IPv6 address allocation using ULA (Unique Local Address) prefixes
- Full mesh connectivity between all machines
- Automatic key generation and distribution
- IPv6 forwarding on controllers for inter-peer communication
- Support for multiple controllers for redundancy
## Network Architecture
### IPv6 Address Allocation
- Base network: `/40` ULA prefix (deterministically generated from instance name)
- Controllers: Each gets a `/56` subnet from the base `/40`
- Peers: Each gets a unique 64-bit host suffix that is used in ALL controller subnets
### Addressing Design
- Each peer generates a unique host suffix (e.g., `:8750:a09b:0:1`)
- This suffix is appended to each controller's `/56` prefix to create unique addresses
- Example: peer1 with suffix `:8750:a09b:0:1` gets:
- `fd51:19c1:3b:f700:8750:a09b:0:1` in controller1's subnet
- `fd51:19c1:c1:aa00:8750:a09b:0:1` in controller2's subnet
- Controllers allow each peer's `/96` subnet for routing flexibility
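To make the addressing scheme above concrete, here is a minimal editorial sketch (not part of the service code) that composes peer1's per-controller addresses from the example prefix and suffix values quoted in this section:
```python
# Editorial sketch: compose a peer's addresses from each controller's /56 prefix
# and the peer's 64-bit host suffix (values taken from the example above).
controller_prefixes = {
    "controller1": "fd51:19c1:3b:f700",
    "controller2": "fd51:19c1:c1:aa00",
}
peer1_suffix = "8750:a09b:0:1"

# The same suffix is appended to every controller prefix, giving peer1 one
# address per controller subnet.
peer1_addresses = {name: f"{prefix}:{peer1_suffix}" for name, prefix in controller_prefixes.items()}
for name, addr in peer1_addresses.items():
    print(f"{name}: {addr}")
# controller1: fd51:19c1:3b:f700:8750:a09b:0:1
# controller2: fd51:19c1:c1:aa00:8750:a09b:0:1
```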
### Connectivity
- Peers use a single WireGuard interface with multiple IPs (one per controller subnet)
- Controllers connect to ALL other controllers and ALL peers on a single interface
- Controllers have IPv6 forwarding enabled to route traffic between peers
- All traffic between peers flows through controllers
- Symmetric routing is maintained as each peer has consistent IPs across all controllers
### Example Network Topology
```mermaid
graph TB
subgraph Controllers
C1[controller1<br/>endpoint: vpn1.example.com<br/>fd51:19c1:3b:f700::/56]
C2[controller2<br/>endpoint: vpn2.example.com<br/>fd51:19c1:c1:aa00::/56]
end
subgraph Peers
P1[peer1<br/>designated: controller1]
P2[peer2<br/>designated: controller2]
P3[peer3<br/>designated: controller1]
end
%% Controllers connect to each other
C1 <--> C2
%% All peers connect to all controllers
P1 <--> C1
P1 <--> C2
P2 <--> C1
P2 <--> C2
P3 <--> C1
P3 <--> C2
%% Peer-to-peer traffic flows through controllers
P1 -.->|via controllers| P3
P1 -.->|via controllers| P2
P2 -.->|via controllers| P3
classDef controller fill:#f9f,stroke:#333,stroke-width:4px
classDef peer fill:#bbf,stroke:#333,stroke-width:2px
class C1,C2 controller
class P1,P2,P3 peer
```
## Configuration
### Basic Setup with Single Controller
```nix
# In your flake.nix or inventory
{
services.wireguard.server1 = {
roles.controller = {
# Public endpoint where this controller can be reached
endpoint = "vpn.example.com";
# Optional: Change the UDP port (default: 51820)
port = 51820;
};
};
services.wireguard.laptop1 = {
roles.peer = {
# No configuration needed if only one controller exists
};
};
}
```
### Multiple Controllers Setup
```nix
{
services.wireguard.server1 = {
roles.controller = {
endpoint = "vpn1.example.com";
};
};
services.wireguard.server2 = {
roles.controller = {
endpoint = "vpn2.example.com";
};
};
services.wireguard.laptop1 = {
roles.peer = {
# When multiple controllers exist, you must specify which controller's subnet is exposed as the default in /etc/hosts
controller = "server1";
};
};
}
```
### Advanced Options
### Automatic Hostname Resolution
The wireguard service automatically adds entries to `/etc/hosts` for all machines in the network. Each machine is accessible via its hostname in the format `<machine-name>.<instance-name>`.
For example, with an instance named `vpn`:
- `server1.vpn` - resolves to server1's IPv6 address
- `laptop1.vpn` - resolves to laptop1's IPv6 address
This allows machines to communicate using hostnames instead of IPv6 addresses:
```bash
# Ping another machine by hostname
ping6 server1.vpn
# SSH to another machine
ssh user@laptop1.vpn
```
## Troubleshooting
### Check Wireguard Status
```bash
sudo wg show
```
### Verify IP Addresses
```bash
ip addr show dev <instance-name>
```
### Check Routing
```bash
ip -6 route show dev <instance-name>
```
### Interface Fails to Start: "Address already in use"
If you see this error in your logs:
```
wireguard: Could not bring up interface, ignoring: Address already in use
```
This means the configured port (default: 51820) is already in use by another service or wireguard instance. Solutions:
1. **Check for conflicting wireguard instances:**
```bash
sudo wg show
sudo ss -ulnp | grep 51820
```
2. **Use a different port:**
```nix
services.wireguard.myinstance = {
roles.controller = {
endpoint = "vpn.example.com";
port = 51821; # Use a different port
};
};
```
3. **Ensure unique ports across multiple instances:**
If you have multiple wireguard instances on the same machine, each must use a different port.
### Key Management
Keys are automatically generated and stored in the clan vars system. To regenerate keys:
```bash
# Regenerate keys for a specific machine and instance
clan vars generate --service wireguard-keys-<instance-name> --regenerate --machine <machine-name>
# Apply the new keys
clan machines update <machine-name>
```
## Security Considerations
- All traffic is encrypted using Wireguard's modern cryptography
- Private keys never leave the machines they're generated on
- Public keys are distributed through the clan vars system
- Controllers must have publicly accessible endpoints
- Firewall rules are automatically configured for the Wireguard ports

View File

@@ -0,0 +1,456 @@
/*
There are two roles: peers and controllers:
- Every controller has an endpoint set
- There can be multiple peers
- There has to be one or more controllers
- Peers connect to ALL controllers (full mesh)
- If only one controller exists, peers automatically use it for IP allocation
- If multiple controllers exist, peers must specify which controller's subnet to use
- Controllers have IPv6 forwarding enabled, every peer and controller can reach
everyone else, via extra controller hops if necessary
Example:
controller2 controller1
peer2 peer1 peer3
Network Architecture:
IPv6 Address Allocation:
- Base network: /40 ULA prefix (generated from instance name)
- Controllers: Each gets a /56 subnet from the base /40
- Peers: Each gets a unique host suffix that is used in ALL controller subnets
Address Assignment:
- Each peer generates a unique 64-bit host suffix (e.g., :8750:a09b:0:1)
- This suffix is appended to each controller's /56 prefix
- Example: peer1 with suffix :8750:a09b:0:1 gets:
- fd51:19c1:3b:f700:8750:a09b:0:1 in controller1's subnet
- fd51:19c1:c1:aa00:8750:a09b:0:1 in controller2's subnet
Peers: Use a SINGLE interface that:
- Connects to ALL controllers
- Has multiple IPs, one in each controller's subnet (with /56 prefix)
- Routes to each controller's /56 subnet via that controller
- allowedIPs: Each controller's /56 subnet
- No routing conflicts due to unique IPs per subnet
Controllers: Use a SINGLE interface that:
- Connects to ALL peers and ALL other controllers
- Gets a /56 subnet from the base /40 network
- Has IPv6 forwarding enabled for routing between peers
- allowedIPs:
- For peers: A /96 range containing the peer's address in this controller's subnet
- For other controllers: The controller's /56 subnet
*/
{ ... }:
let
# Shared module for extraHosts configuration
extraHostsModule =
{
instanceName,
settings,
roles,
config,
lib,
...
}:
{
networking.extraHosts =
let
domain = if settings.domain == null then instanceName else settings.domain;
# Controllers use their subnet's ::1 address
controllerHosts = lib.mapAttrsToList (
name: _value:
let
prefix = builtins.readFile (
config.clan.core.settings.directory
+ "/vars/per-machine/${name}/wireguard-network-${instanceName}/prefix/value"
);
# Controller IP is always ::1 in their subnet
ip = prefix + "::1";
in
"${ip} ${name}.${domain}"
) roles.controller.machines;
# Peers use their suffix in their designated controller's subnet only
peerHosts = lib.mapAttrsToList (
peerName: peerValue:
let
peerSuffix = builtins.readFile (
config.clan.core.settings.directory
+ "/vars/per-machine/${peerName}/wireguard-network-${instanceName}/suffix/value"
);
# Determine designated controller
designatedController =
if (builtins.length (builtins.attrNames roles.controller.machines) == 1) then
(builtins.head (builtins.attrNames roles.controller.machines))
else
peerValue.settings.controller;
controllerPrefix = builtins.readFile (
config.clan.core.settings.directory
+ "/vars/per-machine/${designatedController}/wireguard-network-${instanceName}/prefix/value"
);
peerIP = controllerPrefix + ":" + peerSuffix;
in
"${peerIP} ${peerName}.${domain}"
) roles.peer.machines;
in
builtins.concatStringsSep "\n" (controllerHosts ++ peerHosts);
};
# Shared interface options
sharedInterface =
{ lib, ... }:
{
options.port = lib.mkOption {
type = lib.types.int;
example = 51820;
default = 51820;
description = ''
Port for the wireguard interface
'';
};
options.domain = lib.mkOption {
type = lib.types.nullOr lib.types.str;
defaultText = lib.literalExpression "instanceName";
default = null;
description = ''
Domain suffix to use for hostnames in /etc/hosts.
Defaults to the instance name.
'';
};
};
in
{
_class = "clan.service";
manifest.name = "clan-core/wireguard";
manifest.description = "Wireguard-based VPN mesh network with automatic IPv6 address allocation";
manifest.categories = [
"System"
"Network"
];
manifest.readme = builtins.readFile ./README.md;
# Peer options and configuration
roles.peer = {
interface =
{ lib, ... }:
{
imports = [ sharedInterface ];
options.controller = lib.mkOption {
type = lib.types.str;
example = "controller1";
description = ''
Machinename of the controller to attach to
'';
};
};
perInstance =
{
instanceName,
settings,
roles,
machine,
...
}:
{
# Set default domain to instanceName
# Peers connect to all controllers
nixosModule =
{
config,
pkgs,
lib,
...
}:
{
imports = [
(extraHostsModule {
inherit
instanceName
settings
roles
config
lib
;
})
];
# Network allocation generator for this peer - generates host suffix
clan.core.vars.generators."wireguard-network-${instanceName}" = {
files.suffix.secret = false;
runtimeInputs = with pkgs; [
python3
];
# Invalidate on hostname changes
validation.hostname = machine.name;
script = ''
${pkgs.python3}/bin/python3 ${./ipv6_allocator.py} "$out" "${instanceName}" peer "${machine.name}"
'';
};
# Single wireguard interface with multiple IPs
networking.wireguard.interfaces."${instanceName}" = {
ips =
# Get this peer's suffix
let
peerSuffix =
config.clan.core.vars.generators."wireguard-network-${instanceName}".files.suffix.value;
in
# Create an IP in each controller's subnet
lib.mapAttrsToList (
ctrlName: _:
let
controllerPrefix = builtins.readFile (
config.clan.core.settings.directory
+ "/vars/per-machine/${ctrlName}/wireguard-network-${instanceName}/prefix/value"
);
peerIP = controllerPrefix + ":" + peerSuffix;
in
"${peerIP}/56"
) roles.controller.machines;
privateKeyFile =
config.clan.core.vars.generators."wireguard-keys-${instanceName}".files."privatekey".path;
# Connect to all controllers
peers = lib.mapAttrsToList (name: value: {
publicKey = (
builtins.readFile (
config.clan.core.settings.directory
+ "/vars/per-machine/${name}/wireguard-keys-${instanceName}/publickey/value"
)
);
# Allow each controller's /56 subnet
allowedIPs = [
"${
builtins.readFile (
config.clan.core.settings.directory
+ "/vars/per-machine/${name}/wireguard-network-${instanceName}/prefix/value"
)
}::/56"
];
endpoint = "${value.settings.endpoint}:${toString value.settings.port}";
persistentKeepalive = 25;
}) roles.controller.machines;
};
};
};
};
# Controller options and configuration
roles.controller = {
interface =
{ lib, ... }:
{
imports = [ sharedInterface ];
options.endpoint = lib.mkOption {
type = lib.types.str;
example = "vpn.clan.lol";
description = ''
Endpoint where the controller can be reached
'';
};
};
perInstance =
{
settings,
instanceName,
roles,
machine,
...
}:
{
# Controllers connect to all peers and other controllers
nixosModule =
{
config,
pkgs,
lib,
...
}:
let
allOtherControllers = lib.filterAttrs (name: _v: name != machine.name) roles.controller.machines;
allPeers = roles.peer.machines;
in
{
imports = [
(extraHostsModule {
inherit
instanceName
settings
roles
config
lib
;
})
];
# Network allocation generator for this controller
clan.core.vars.generators."wireguard-network-${instanceName}" = {
files.prefix.secret = false;
runtimeInputs = with pkgs; [
python3
];
# Invalidate on network or hostname changes
validation.hostname = machine.name;
script = ''
${pkgs.python3}/bin/python3 ${./ipv6_allocator.py} "$out" "${instanceName}" controller "${machine.name}"
'';
};
# Enable IP forwarding so wireguard peers can reach each other
boot.kernel.sysctl."net.ipv6.conf.all.forwarding" = 1;
networking.firewall.allowedUDPPorts = [ settings.port ];
# Single wireguard interface
networking.wireguard.interfaces."${instanceName}" = {
listenPort = settings.port;
ips = [
# Controller uses ::1 in its /56 subnet but with /40 prefix for proper routing
"${config.clan.core.vars.generators."wireguard-network-${instanceName}".files.prefix.value}::1/40"
];
privateKeyFile =
config.clan.core.vars.generators."wireguard-keys-${instanceName}".files."privatekey".path;
# Connect to all peers and other controllers
peers = lib.mapAttrsToList (
name: value:
if allPeers ? ${name} then
# For peers: they now have our entire /56 subnet
{
publicKey = (
builtins.readFile (
config.clan.core.settings.directory
+ "/vars/per-machine/${name}/wireguard-keys-${instanceName}/publickey/value"
)
);
# Allow the peer's /96 range in ALL controller subnets
allowedIPs = lib.mapAttrsToList (
ctrlName: _:
let
controllerPrefix = builtins.readFile (
config.clan.core.settings.directory
+ "/vars/per-machine/${ctrlName}/wireguard-network-${instanceName}/prefix/value"
);
peerSuffix = builtins.readFile (
config.clan.core.settings.directory
+ "/vars/per-machine/${name}/wireguard-network-${instanceName}/suffix/value"
);
in
"${controllerPrefix}:${peerSuffix}/96"
) roles.controller.machines;
persistentKeepalive = 25;
}
else
# For other controllers: use their /56 subnet
{
publicKey = (
builtins.readFile (
config.clan.core.settings.directory
+ "/vars/per-machine/${name}/wireguard-keys-${instanceName}/publickey/value"
)
);
allowedIPs = [
"${
builtins.readFile (
config.clan.core.settings.directory
+ "/vars/per-machine/${name}/wireguard-network-${instanceName}/prefix/value"
)
}::/56"
];
endpoint = "${value.settings.endpoint}:${toString value.settings.port}";
persistentKeepalive = 25;
}
) (allPeers // allOtherControllers);
};
};
};
};
# Maps over all machines and produces one result per machine, regardless of role
perMachine =
{ instances, machine, ... }:
{
nixosModule =
{ pkgs, lib, ... }:
let
# Check if this machine has conflicting roles across all instances
machineRoleConflicts = lib.flatten (
lib.mapAttrsToList (
instanceName: instanceInfo:
let
isController =
instanceInfo.roles ? controller && instanceInfo.roles.controller.machines ? ${machine.name};
isPeer = instanceInfo.roles ? peer && instanceInfo.roles.peer.machines ? ${machine.name};
in
lib.optional (isController && isPeer) {
inherit instanceName;
machineName = machine.name;
}
) instances
);
in
{
# Add assertions for role conflicts
assertions = lib.forEach machineRoleConflicts (conflict: {
assertion = false;
message = ''
Machine '${conflict.machineName}' cannot have both 'controller' and 'peer' roles in the wireguard instance '${conflict.instanceName}'.
A machine must be either a controller or a peer, not both.
'';
});
# Generate keys for each instance where this machine participates
clan.core.vars.generators = lib.mapAttrs' (
name: _instanceInfo:
lib.nameValuePair "wireguard-keys-${name}" {
files.publickey.secret = false;
files.privatekey = { };
runtimeInputs = with pkgs; [
wireguard-tools
];
script = ''
wg genkey > $out/privatekey
wg pubkey < $out/privatekey > $out/publickey
'';
}
) instances;
};
};
}
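As an editorial illustration (not part of the module), the sketch below recomputes the `/etc/hosts` entries that `extraHostsModule` above derives for the `wg-test-one` test fixture in this PR, assuming peer1's designated controller is controller1 and using the prefix and suffix values recorded in the fixture:
```python
# Editorial sketch of the hosts entries derived by extraHostsModule for the
# wg-test-one fixture (values read from vars/per-machine/.../prefix and .../suffix).
controller_prefix = "fd51:19c1:3b:f700"   # controller1's /56 prefix
peer_suffix = "6987:50a0:9b93:4337"       # peer1's 64-bit host suffix
domain = "wg-test-one"                    # settings.domain defaults to the instance name

hosts = [
    f"{controller_prefix}::1 controller1.{domain}",       # controllers always sit at ::1 in their subnet
    f"{controller_prefix}:{peer_suffix} peer1.{domain}",  # peers use their designated controller's prefix
]
print("\n".join(hosts))
```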

View File

@@ -0,0 +1,7 @@
{ lib, ... }:
let
module = lib.modules.importApply ./default.nix { };
in
{
clan.modules.wireguard = module;
}

View File

@@ -0,0 +1,135 @@
#!/usr/bin/env python3
"""
IPv6 address allocator for WireGuard networks.
Network layout:
- Base network: /40 ULA prefix (fd00::/8 + 32 bits from hash)
- Controllers: Each gets a /56 subnet from the base /40 (256 controllers max)
- Peers: Each gets a /96 subnet from their controller's /56
"""
import hashlib
import ipaddress
import sys
from pathlib import Path
def hash_string(s: str) -> str:
"""Generate SHA256 hash of string."""
return hashlib.sha256(s.encode()).hexdigest()
def generate_ula_prefix(instance_name: str) -> ipaddress.IPv6Network:
"""
Generate a /40 ULA prefix from instance name.
Format: fd{32-bit hash}/40
This gives us fd00:0000:0000::/40 through fdff:ffff:ff00::/40
"""
h = hash_string(instance_name)
# For /40, we need 32 bits after 'fd' (8 hex chars)
# But only the first 32 bits count for the network prefix
# The last 8 bits of the 40-bit prefix must be 0
prefix_bits = int(h[:8], 16)
# Mask to ensure we only use the first 32 bits for /40
# This gives us addresses like fd28:387a::/40
prefix_bits = prefix_bits & 0xFFFFFF00 # Clear last 8 bits
# Format as IPv6 address
prefix = f"fd{prefix_bits:08x}"
prefix_formatted = f"{prefix[:4]}:{prefix[4:8]}::/40"
network = ipaddress.IPv6Network(prefix_formatted)
return network
def generate_controller_subnet(
base_network: ipaddress.IPv6Network, controller_name: str
) -> ipaddress.IPv6Network:
"""
Generate a /56 subnet for a controller from the base /40 network.
We have 16 bits (40 to 56) to allocate controller subnets.
This allows for 65,536 possible controller subnets.
"""
h = hash_string(controller_name)
# Take 16 bits from hash for the controller subnet ID
controller_id = int(h[:4], 16)
# Create the controller subnet by adding the controller ID to the base network
# The controller subnet is at base_prefix:controller_id::/56
base_int = int(base_network.network_address)
controller_subnet_int = base_int | (controller_id << (128 - 56))
controller_subnet = ipaddress.IPv6Network((controller_subnet_int, 56))
return controller_subnet
def generate_peer_suffix(peer_name: str) -> str:
"""
Generate a unique 64-bit host suffix for a peer.
This suffix will be used in all controller subnets to create unique addresses.
Format: :xxxx:xxxx:xxxx:xxxx (64 bits)
"""
h = hash_string(peer_name)
# Take 64 bits (16 hex chars) from hash for the host suffix
suffix_bits = h[:16]
# Format as IPv6 suffix without leading colon
suffix = f"{suffix_bits[0:4]}:{suffix_bits[4:8]}:{suffix_bits[8:12]}:{suffix_bits[12:16]}"
return suffix
def main() -> None:
if len(sys.argv) < 4:
print(
"Usage: ipv6_allocator.py <output_dir> <instance_name> <controller|peer> <machine_name>"
)
sys.exit(1)
output_dir = Path(sys.argv[1])
instance_name = sys.argv[2]
node_type = sys.argv[3]
# Generate base /40 network
base_network = generate_ula_prefix(instance_name)
if node_type == "controller":
if len(sys.argv) < 5:
print("Controller name required")
sys.exit(1)
controller_name = sys.argv[4]
subnet = generate_controller_subnet(base_network, controller_name)
# Extract clean prefix from subnet (e.g. "fd51:19c1:3b:f700::/56" -> "fd51:19c1:3b:f700")
prefix_str = str(subnet).split("/")[0].rstrip(":")
while prefix_str.endswith(":"):
prefix_str = prefix_str.rstrip(":")
# Write file
(output_dir / "prefix").write_text(prefix_str)
elif node_type == "peer":
if len(sys.argv) < 5:
print("Peer name required")
sys.exit(1)
peer_name = sys.argv[4]
# Generate the peer's host suffix
suffix = generate_peer_suffix(peer_name)
# Write file
(output_dir / "suffix").write_text(suffix)
else:
print(f"Unknown node type: {node_type}")
sys.exit(1)
if __name__ == "__main__":
main()
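For reference, a hedged sketch of how the vars generators above invoke this allocator, assuming `ipv6_allocator.py` is in the working directory; the resulting values depend on the instance and machine names, and the `wg-test-one` fixture in this PR records `fd51:19c1:3b:f700` as controller1's prefix:
```python
# Editorial sketch: run ipv6_allocator.py the same way the Nix vars generators do.
import subprocess
import tempfile
from pathlib import Path

out = Path(tempfile.mkdtemp())

# Controllers receive a "prefix" file; passing "peer" instead writes a "suffix" file.
subprocess.run(
    ["python3", "ipv6_allocator.py", str(out), "wg-test-one", "controller", "controller1"],
    check=True,
)
print((out / "prefix").read_text())  # e.g. fd51:19c1:3b:f700 for controller1 (per the fixture)
```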

devFlake/flake.lock (generated, 6 changes)
View File

@@ -3,10 +3,10 @@
"clan-core-for-checks": {
"flake": false,
"locked": {
"lastModified": 1754973208,
"narHash": "sha256-K/abuL/G6TtwV6Oo/C5EloDfRd2lAbPhCxQ/KnIDI9k=",
"lastModified": 1755093452,
"narHash": "sha256-NKBss7QtNnOqYVyJmYCgaCvYZK0mpQTQc9fLgE1mGyk=",
"ref": "main",
"rev": "caae6c7a559d918de06636febc317e6c0a59e0cb",
"rev": "7e97734797f0c6bd3c2d3a51cf54a2a6b371c222",
"shallow": true,
"type": "git",
"url": "https://git.clan.lol/clan/clan-core"

View File

@@ -92,7 +92,6 @@ nav:
- Services:
- Overview:
- reference/clanServices/index.md
- reference/clanServices/admin.md
- reference/clanServices/borgbackup.md
- reference/clanServices/data-mesher.md
@@ -109,6 +108,7 @@ nav:
- reference/clanServices/trusted-nix-caches.md
- reference/clanServices/users.md
- reference/clanServices/wifi.md
- reference/clanServices/wireguard.md
- reference/clanServices/zerotier.md
- API: reference/clanServices/clan-service-author-interface.md
@@ -116,7 +116,6 @@ nav:
- Overview: reference/cli/index.md
- reference/cli/backups.md
- reference/cli/facts.md
- reference/cli/flakes.md
- reference/cli/flash.md
- reference/cli/machines.md

flake.lock (generated, 6 changes)
View File

@@ -146,11 +146,11 @@
]
},
"locked": {
"lastModified": 1754328224,
"narHash": "sha256-glPK8DF329/dXtosV7YSzRlF4n35WDjaVwdOMEoEXHA=",
"lastModified": 1754988908,
"narHash": "sha256-t+voe2961vCgrzPFtZxha0/kmFSHFobzF00sT8p9h0U=",
"owner": "Mic92",
"repo": "sops-nix",
"rev": "49021900e69812ba7ddb9e40f9170218a7eca9f4",
"rev": "3223c7a92724b5d804e9988c6b447a0d09017d48",
"type": "github"
},
"original": {

View File

@@ -124,7 +124,7 @@ rec {
]
)
}" \
${pkgs.runtimeShell} ${genInfo.finalScript}
${pkgs.runtimeShell} -x "${genInfo.finalScript}"
# Verify expected outputs were created
${lib.concatStringsSep "\n" (

View File

@@ -1,8 +1,8 @@
div.alert {
@apply flex gap-2.5 px-6 py-4 size-full rounded-md items-start;
@apply flex flex-row gap-2.5 p-4 rounded-md items-start;
&.has-icon {
@apply pl-4;
@apply pl-3;
svg.icon {
@apply relative top-0.5;
@@ -10,11 +10,15 @@ div.alert {
}
&.has-dismiss {
@apply pr-4;
@apply pr-3;
}
& > button.dismiss-trigger {
@apply relative top-0.5;
}
& > div.content {
@apply flex flex-col gap-2 size-full;
@apply flex flex-col size-full gap-1;
}
&.info {
@@ -33,7 +37,7 @@ div.alert {
@apply bg-semantic-success-1 border border-semantic-success-3 fg-semantic-success-3;
}
& > button.dismiss-trigger {
@apply relative top-0.5;
&.transparent {
@apply bg-transparent border-none p-0;
}
}

View File

@@ -3,16 +3,26 @@ import { Alert, AlertProps } from "@/src/components/Alert/Alert";
import { expect, fn } from "storybook/test";
import { StoryContext } from "@kachurun/storybook-solid-vite";
const AlertExamples = (props: AlertProps) => (
<div class="grid w-fit grid-cols-2 gap-8">
<div class="w-72">
<Alert {...props} />
</div>
<div class="w-72">
<Alert {...props} size="s" />
</div>
<div class="w-72">
<Alert {...props} transparent />
</div>
<div class="w-72">
<Alert {...props} size="s" transparent />
</div>
</div>
);
const meta: Meta<AlertProps> = {
title: "Components/Alert",
component: Alert,
decorators: [
(Story: StoryObj) => (
<div class="w-72">
<Story />
</div>
),
],
component: AlertExamples,
};
export default meta;
@@ -23,6 +33,7 @@ export const Info: Story = {
args: {
type: "info",
title: "Headline",
onDismiss: undefined,
description:
"Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua.",
},

View File

@@ -7,40 +7,63 @@ import { Alert as KAlert } from "@kobalte/core/alert";
import { Show } from "solid-js";
export interface AlertProps {
type: "success" | "error" | "warning" | "info";
title: string;
description?: string;
icon?: IconVariant;
type: "success" | "error" | "warning" | "info";
size?: "default" | "s";
title: string;
onDismiss?: () => void;
transparent?: boolean;
description?: string;
}
export const Alert = (props: AlertProps) => (
<KAlert
class={cx("alert", props.type, {
"has-icon": props.icon,
"has-dismiss": props.onDismiss,
})}
>
{props.icon && <Icon icon={props.icon} color="inherit" size="1rem" />}
<div class="content">
<Typography hierarchy="body" size="default" weight="bold" color="inherit">
{props.title}
</Typography>
<Show when={props.description}>
<Typography hierarchy="body" size="xs" color="inherit">
{props.description}
export const Alert = (props: AlertProps) => {
const size = () => props.size || "default";
const titleSize = () => (size() == "default" ? "default" : "xs");
const bodySize = () => (size() == "default" ? "xs" : "xxs");
const iconSize = () => (size() == "default" ? "1rem" : "0.75rem");
return (
<KAlert
class={cx("alert", props.type, {
"has-icon": props.icon,
"has-dismiss": props.onDismiss,
transparent: props.transparent,
})}
>
{props.icon && (
<Icon icon={props.icon} color="inherit" size={iconSize()} />
)}
<div class="content">
<Typography
hierarchy="body"
family="condensed"
size={titleSize()}
weight="bold"
color="inherit"
>
{props.title}
</Typography>
</Show>
</div>
{props.onDismiss && (
<Button
name="dismiss-alert"
class="dismiss-trigger"
onClick={props.onDismiss}
aria-label={`Dismiss ${props.type} alert`}
>
<Icon icon="Close" color="primary" size="0.75rem" />
</Button>
)}
</KAlert>
);
<Show when={props.description}>
<Typography
hierarchy="body"
family="condensed"
size={bodySize()}
color="inherit"
>
{props.description}
</Typography>
</Show>
</div>
{props.onDismiss && (
<Button
name="dismiss-alert"
class="dismiss-trigger"
onClick={props.onDismiss}
aria-label={`Dismiss ${props.type} alert`}
>
<Icon icon="Close" color="primary" size="0.75rem" />
</Button>
)}
</KAlert>
);
};
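For orientation, a minimal usage sketch of the reworked props, assuming the `@/src` path alias used elsewhere in this diff; the consuming component and its copy are made up:
```
import { createSignal, Show } from "solid-js";
import { Alert } from "@/src/components/Alert/Alert";

// Hypothetical consumer: a small, transparent, dismissible warning.
const UnsavedChangesNotice = () => {
  const [visible, setVisible] = createSignal(true);
  return (
    <Show when={visible()}>
      <Alert
        type="warning"
        size="s"
        transparent
        title="Unsaved changes"
        description="The disk layout has not been written yet."
        onDismiss={() => setVisible(false)}
      />
    </Show>
  );
};
```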

View File

@@ -22,7 +22,7 @@ export interface ButtonProps
startIcon?: IconVariant;
endIcon?: IconVariant;
class?: string;
onAction?: Action;
loading?: boolean;
}
const iconSizes: Record<Size, string> = {
@@ -40,31 +40,12 @@ export const Button = (props: ButtonProps) => {
"startIcon",
"endIcon",
"class",
"onAction",
"loading",
]);
const size = local.size || "default";
const hierarchy = local.hierarchy || "primary";
const [loading, setLoading] = createSignal(false);
const onClick = async () => {
if (!local.onAction) {
console.error("this should not be possible");
return;
}
setLoading(true);
try {
await local.onAction();
} catch (error) {
console.error("Error while executing action", error);
}
setLoading(false);
};
const iconSize = iconSizes[local.size || "default"];
const loadingClass =
@@ -81,16 +62,19 @@ export const Button = (props: ButtonProps) => {
hierarchy,
{
icon: local.icon,
loading: loading(),
loading: props.loading,
ghost: local.ghost,
},
)}
onClick={local.onAction ? onClick : undefined}
onClick={props.onClick}
{...other}
>
<Loader
hierarchy={hierarchy}
class={cx({ [idleClass]: !loading(), [loadingClass]: loading() })}
class={cx({
[idleClass]: !props.loading,
[loadingClass]: props.loading,
})}
/>
{local.startIcon && (
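A minimal sketch of the new calling convention, assuming the same `@/src` alias: the caller now owns the async work and passes the flag back via `loading`, instead of handing the Button an `onAction` to await internally. The `SaveButton` wrapper and its `save` prop are illustrative:
```
import { createSignal } from "solid-js";
import { Button } from "@/src/components/Button/Button";

// Hypothetical wrapper around the refactored Button.
const SaveButton = (props: { save: () => Promise<void> }) => {
  const [saving, setSaving] = createSignal(false);

  const handleClick = async () => {
    setSaving(true);
    try {
      await props.save();
    } finally {
      setSaving(false);
    }
  };

  return (
    <Button hierarchy="primary" loading={saving()} onClick={handleClick}>
      Save
    </Button>
  );
};
```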

View File

@@ -1,8 +1,8 @@
hr {
@apply border-none outline-none bg-inv-2 self-stretch;
@apply border-none outline-none bg-def-3 self-stretch;
&.inverted {
@apply bg-def-3;
@apply bg-inv-2;
}
&[data-orientation="horizontal"] {

View File

@@ -11,7 +11,7 @@ import { Label } from "@/src/components/Form/Label";
import { Orienter } from "@/src/components/Form/Orienter";
import { CollectionNode } from "@kobalte/core";
interface MachineTag {
export interface MachineTag {
value: string;
disabled?: boolean;
new?: boolean;
@@ -24,11 +24,10 @@ export type MachineTagsProps = FieldProps & {
disabled?: boolean;
required?: boolean;
defaultValue?: string[];
defaultOptions?: string[];
readonlyOptions?: string[];
};
// tags which are applied to all machines and cannot be removed
const staticOptions = [{ value: "all", disabled: true }];
const uniqueOptions = (options: MachineTag[]) => {
const record: Record<string, MachineTag> = {};
options.forEach((option) => {
@@ -39,13 +38,8 @@ const uniqueOptions = (options: MachineTag[]) => {
return Object.values(record);
};
const sortedOptions = (options: MachineTag[]) => {
return options.sort((a, b) => {
if (a.new && !b.new) return -1;
if (a.disabled && !b.disabled) return -1;
return a.value.localeCompare(b.value);
});
};
const sortedOptions = (options: MachineTag[]) =>
options.sort((a, b) => a.value.localeCompare(b.value));
const sortedAndUniqueOptions = (options: MachineTag[]) =>
sortedOptions(uniqueOptions(options));
@@ -72,9 +66,15 @@ export const MachineTags = (props: MachineTagsProps) => {
(props.defaultValue || []).map((value) => ({ value })),
);
// todo this should be the superset of tags used across the entire clan and be passed in via a prop
// convert default options string[] into MachineTag[]
const [availableOptions, setAvailableOptions] = createSignal<MachineTag[]>(
sortedAndUniqueOptions([...staticOptions, ...defaultValue]),
sortedAndUniqueOptions([
...(props.readonlyOptions || []).map((value) => ({
value,
disabled: true,
})),
...(props.defaultOptions || []).map((value) => ({ value })),
]),
);
const onKeyDown = (event: KeyboardEvent) => {

View File

@@ -1,9 +1,9 @@
import { FieldSchema } from "@/src/hooks/queries";
import { SuccessData } from "@/src/hooks/api";
import { Maybe } from "@modular-forms/solid";
export const tooltipText = <T extends object, K extends keyof T>(
name: K,
schema: FieldSchema<T>,
export const tooltipText = (
name: string,
schema: SuccessData<"get_machine_fields_schema">,
staticValue: Maybe<string> = undefined,
): Maybe<string> => {
const entry = schema[name];

View File

@@ -4,7 +4,7 @@
&[data-expanded] {
@apply outline-def-2 outline-1 outline;
z-index: var(--z-index + 5);
z-index: calc(var(--z-index) + 5);
}
&[data-highlighted] {

View File

@@ -146,6 +146,7 @@ export const Select = (props: SelectProps) => {
<KSelect.HiddenSelect {...selectProps} />
<KSelect.Trigger
class={cx(styles.trigger)}
style={{ "--z-index": zIndex() }}
data-loading={loading() || undefined}
>
<KSelect.Value<Option>>

View File

@@ -24,15 +24,16 @@ const TypographyExamples: Component<TypographyExamplesProps> = (props) => (
<For each={props.sizes}>
{(size) => (
<tr
class="border-b border-def-3 even:bg-def-2"
class="border-b fg-semantic-info-1 border-def-3 even:bg-def-2"
classList={{
"border-inv-3 even:bg-inv-2": props.inverted,
"border-def-3 even:bg-def-2": !props.inverted,
}}
>
<For each={props.weights}>
{/* we set a foreground color to test color=inherit */}
{(weight) => (
<td class="px-6 py-2 ">
<td class="px-6 py-2">
<Show when={!props.colors}>
<Typography
hierarchy={props.hierarchy}

View File

@@ -6,20 +6,15 @@ import { useApiClient } from "./ApiClient";
export type ClanDetails = SuccessData<"get_clan_details">;
export type ClanDetailsWithURI = ClanDetails & { uri: string };
export type FieldSchema<T> = {
[K in keyof T]: {
readonly: boolean;
reason?: string;
};
};
export type Tags = SuccessData<"list_tags">;
export type Machine = SuccessData<"get_machine">;
export type ListMachines = SuccessData<"list_machines">;
export type MachineDetails = SuccessData<"get_machine_details">;
export interface MachineDetail {
tags: Tags;
machine: Machine;
fieldsSchema: FieldSchema<Machine>;
fieldsSchema: SuccessData<"get_machine_fields_schema">;
}
export type MachinesQueryResult = UseQueryResult<ListMachines>;
@@ -50,7 +45,12 @@ export const useMachineQuery = (clanURI: string, machineName: string) => {
return useQuery<MachineDetail>(() => ({
queryKey: ["clans", encodeBase64(clanURI), "machine", machineName],
queryFn: async () => {
const [machineCall, schemaCall] = await Promise.all([
const [tagsCall, machineCall, schemaCall] = await Promise.all([
client.fetch("list_tags", {
flake: {
identifier: clanURI,
},
}),
client.fetch("get_machine", {
name: machineName,
flake: {
@@ -67,6 +67,11 @@ export const useMachineQuery = (clanURI: string, machineName: string) => {
}),
]);
const tags = await tagsCall.result;
if (tags.status === "error") {
throw new Error("Error fetching tags: " + tags.errors[0].message);
}
const machine = await machineCall.result;
if (machine.status === "error") {
throw new Error("Error fetching machine: " + machine.errors[0].message);
@@ -81,6 +86,7 @@ export const useMachineQuery = (clanURI: string, machineName: string) => {
}
return {
tags: tags.data,
machine: machine.data,
fieldsSchema: writeSchema.data,
};
@@ -312,8 +318,13 @@ export const useMachineGenerators = (
],
queryFn: async () => {
const call = client.fetch("get_generators", {
base_dir: clanUri,
machine_name: machineName,
machine: {
name: machineName,
flake: {
identifier: clanUri,
},
},
full_closure: true, // TODO: Make this configurable
// TODO: Make this configurable
include_previous_values: true,
});
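A sketch of the updated request shape outside the query hook, assuming the same `client.fetch` result contract (`status`, `errors`, `data`) used above; the `fetchGeneratorNames` helper is made up and must run inside a component tree that provides the API client:
```
import { useApiClient } from "@/src/hooks/ApiClient";

// Generators are now addressed by a machine reference (name + flake
// identifier) instead of the old base_dir/machine_name pair.
const fetchGeneratorNames = (clanUri: string, machineName: string) => {
  const client = useApiClient();
  return async (): Promise<string[]> => {
    const call = client.fetch("get_generators", {
      machine: {
        name: machineName,
        flake: { identifier: clanUri },
      },
      full_closure: true,
      include_previous_values: true,
    });
    const result = await call.result;
    if (result.status === "error") {
      throw new Error("Error fetching generators: " + result.errors[0].message);
    }
    return result.data.map((generator) => generator.name);
  };
};
```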

View File

@@ -30,6 +30,26 @@ export const SectionTags = (props: SectionTags) => {
return pick(machineQuery.data.machine, ["tags"]) satisfies FormValues;
};
const options = () => {
if (!machineQuery.isSuccess) {
return [[], []];
}
// these are static values or values which have been configured in nix and
// cannot be modified in the UI
const readonlyOptions =
machineQuery.data.fieldsSchema.tags?.readonly_members || [];
// filter out the read-only options from the superset of clan-wide options
const readonlySet = new Set(readonlyOptions);
const defaultOptions = (machineQuery.data.tags.options || []).filter(
(tag) => !readonlySet.has(tag),
);
return [defaultOptions, readonlyOptions];
};
return (
<Show when={machineQuery.isSuccess}>
<SidebarSectionForm
@@ -50,6 +70,8 @@ export const SectionTags = (props: SectionTags) => {
readOnly={!editing}
orientation="horizontal"
defaultValue={field.value}
defaultOptions={options()[0]}
readonlyOptions={options()[1]}
input={input}
/>
)}

View File

@@ -99,8 +99,12 @@ const welcome = (props: {
}) => {
const navigate = useNavigate();
const [loading, setLoading] = createSignal(false);
const selectFolder = async () => {
setLoading(true);
const uri = await selectClanFolder();
setLoading(false);
navigateToClan(navigate, uri);
};
@@ -136,7 +140,7 @@ const welcome = (props: {
Start building
</Button>
<div class="separator">
<Divider orientation="horizontal" inverted={true} />
<Divider orientation="horizontal" />
<Typography
hierarchy="body"
size="s"
@@ -146,9 +150,14 @@ const welcome = (props: {
>
or
</Typography>
<Divider orientation="horizontal" inverted={true} />
<Divider orientation="horizontal" />
</div>
<Button hierarchy="primary" ghost={true} onAction={selectFolder}>
<Button
hierarchy="primary"
ghost={true}
loading={loading()}
onClick={selectFolder}
>
Select folder
</Button>
</div>
@@ -251,6 +260,10 @@ export const Onboarding: Component<RouteSectionProps> = (props) => {
},
}).result;
await client.fetch("create_secrets_user", {
flake_dir: path,
}).result;
if (resp.status === "error") {
setWelcomeError(resp.errors[0].message);
setState("welcome");
@@ -318,7 +331,7 @@ export const Onboarding: Component<RouteSectionProps> = (props) => {
/>
)}
</Field>
<Divider inverted={true} />
<Divider />
<Field name="description">
{(field, input) => (
<TextArea

View File

@@ -18,29 +18,64 @@ type ResultDataMap = {
[K in OperationNames]: SuccessQuery<K>["data"];
};
export const mockFetcher: Fetcher = <K extends OperationNames>(
const mockFetcher: Fetcher = <K extends OperationNames>(
name: K,
_args: unknown,
): ApiCall<K> => {
// TODO: Make this configurable for every story
const resultData: Partial<ResultDataMap> = {
get_machine_flash_options: {
keymaps: ["DE_de", "US_en"],
languages: ["en", "de"],
},
get_system_file: ["id_rsa.pub"],
list_system_storage_devices: {
blockdevices: [
{
name: "sda_bla_bla",
path: "/dev/sda",
id_link: "sda_bla_bla",
},
{
name: "sdb_foo_foo",
path: "/dev/sdb",
id_link: "sdb_foo_foo",
},
] as SuccessQuery<"list_system_storage_devices">["data"]["blockdevices"],
},
get_machine_disk_schemas: {
"single-disk": {
readme: "This is a single disk installation schema",
frontmatter: {
description: "Single disk installation schema",
},
name: "single-disk",
placeholders: {
mainDisk: {
label: "Main Disk",
required: true,
options: ["disk1", "usb1"],
},
},
},
},
get_generators: [
{
name: "funny.gritty",
prompts: [
{
name: "gritty.name",
description: "Name of the gritty",
prompt_type: "line",
display: {
label: "Gritty Name",
group: "User",
required: true,
},
},
],
},
],
run_generators: null,
get_machine_hardware_summary: {
hardware_config: "nixos-facter",
},
};
return {

View File

@@ -7,7 +7,7 @@ import { StepLayout } from "../../Steps";
const ChoiceLocalOrRemote = () => {
const stepSignal = useStepper<InstallSteps>();
return (
<div class="flex flex-col gap-3">
<div class="flex flex-col gap-3 size-full">
<div class="flex flex-col gap-6 rounded-md px-4 py-6 text-fg-def-1 bg-def-2">
<div class="flex justify-between gap-2">
<div class="flex flex-col justify-center gap-1 px-1">

View File

@@ -16,10 +16,7 @@ import { Alert } from "@/src/components/Alert/Alert";
import { LoadingBar } from "@/src/components/LoadingBar/LoadingBar";
import { Button } from "@/src/components/Button/Button";
import Icon from "@/src/components/Icon/Icon";
import {
useMachineFlashOptions,
useSystemStorageOptions,
} from "@/src/hooks/queries";
import { useSystemStorageOptions } from "@/src/hooks/queries";
import { useApiClient } from "@/src/hooks/ApiClient";
import { onMount } from "solid-js";
@@ -144,8 +141,6 @@ const ConfigureImage = () => {
throw new Error("No data returned from api call");
};
const optionsQuery = useMachineFlashOptions();
let content: Node;
return (
@@ -309,15 +304,17 @@ const FlashProgress = () => {
});
const handleCancel = async () => {
const progress = store.flash.progress;
if (progress) {
await progress.cancel();
if (store.flash) {
const progress = store.flash.progress;
if (progress) {
await progress.cancel();
}
}
stepSignal.previous();
};
return (
<div class="flex size-full h-60 flex-col items-center justify-end bg-inv-4">
<div class="flex size-full flex-col items-center justify-center bg-inv-4">
<div class="mb-6 flex w-full max-w-md flex-col items-center gap-3 fg-inv-1">
<Typography
hierarchy="title"
@@ -344,7 +341,7 @@ const FlashDone = () => {
const stepSignal = useStepper<InstallSteps>();
return (
<div class="flex size-full flex-col items-center justify-between bg-inv-4">
<div class="flex w-full max-w-md flex-col items-center gap-3 py-6 fg-inv-1">
<div class="flex size-full max-w-md flex-col items-center justify-center gap-3 py-6 fg-inv-1">
<div class="rounded-full bg-semantic-success-4">
<Icon icon="Checkmark" class="size-9" />
</div>

View File

@@ -4,6 +4,7 @@ import {
createForm,
FieldValues,
getError,
getValue,
SubmitHandler,
valiForm,
} from "@modular-forms/solid";
@@ -13,7 +14,7 @@ import { getStepStore, useStepper } from "@/src/hooks/stepper";
import { InstallSteps, InstallStoreType, PromptValues } from "../install";
import { TextInput } from "@/src/components/Form/TextInput";
import { Alert } from "@/src/components/Alert/Alert";
import { For, Match, Show, Switch } from "solid-js";
import { createSignal, For, Match, Show, Switch } from "solid-js";
import { Divider } from "@/src/components/Divider/Divider";
import { Orienter } from "@/src/components/Form/Orienter";
import { Button } from "@/src/components/Button/Button";
@@ -29,6 +30,7 @@ import {
import { useClanURI } from "@/src/hooks/clan";
import { useApiClient } from "@/src/hooks/ApiClient";
import { ProcessMessage, useNotifyOrigin } from "@/src/hooks/notify";
import { Loader } from "@/src/components/Loader/Loader";
export const InstallHeader = (props: { machineName: string }) => {
return (
@@ -58,8 +60,9 @@ const ConfigureAddress = () => {
},
});
const [isReachable, setIsReachable] = createSignal<string | null>(null);
const client = useApiClient();
const clanUri = useClanURI();
// TODO: push values to the parent form Store
const handleSubmit: SubmitHandler<ConfigureAdressForm> = async (
values,
@@ -70,7 +73,24 @@ const ConfigureAddress = () => {
// Here you would typically trigger the ISO creation process
stepSignal.next();
console.log("Shit doesnt work", values);
};
const tryReachable = async () => {
const address = getValue(formStore, "targetHost");
if (!address) {
return;
}
const call = client.fetch("check_machine_ssh_login", {
remote: {
address,
},
});
const result = await call.result;
console.log("SSH login check result:", result);
if (result.status === "success") {
setIsReachable(address);
}
};
return (
@@ -99,12 +119,28 @@ const ConfigureAddress = () => {
)}
</Field>
</Fieldset>
<Button
disabled={!getValue(formStore, "targetHost")}
endIcon="ArrowRight"
onClick={tryReachable}
hierarchy="secondary"
>
Test Connection
</Button>
</div>
}
footer={
<div class="flex justify-between">
<BackButton />
<NextButton type="submit">Next</NextButton>
<NextButton
type="submit"
disabled={
!isReachable() ||
isReachable() !== getValue(formStore, "targetHost")
}
>
Next
</NextButton>
</div>
}
/>
@@ -158,15 +194,18 @@ const CheckHardware = () => {
Hardware Report
</Typography>
<Button
disabled={hardwareQuery.isLoading}
hierarchy="secondary"
startIcon="Report"
onClick={handleUpdateSummary}
class="flex gap-3"
loading={hardwareQuery.isFetching}
>
Update hardware report
</Button>
</Orienter>
<Divider orientation="horizontal" />
<Show when={hardwareQuery.isLoading}>Loading...</Show>
<Show when={hardwareQuery.data}>
{(d) => (
<Alert
@@ -318,6 +357,25 @@ interface PromptForm extends FieldValues {
promptValues: PromptValues;
}
const sanitize = (name: string) => {
return name.replace(".", "__dot__");
};
const restore = (name: string) => {
return name.replace("__dot__", ".");
};
const transformPromptValues = (
values: PromptValues,
transform: (s: string) => string,
): PromptValues =>
Object.fromEntries(
Object.entries(values).map(([key, value]) => [
transform(key),
Object.fromEntries(
Object.entries(value).map(([k, v]) => [transform(k), v]),
),
]),
);
interface PromptsFieldsProps {
generators: MachineGenerators;
}
@@ -338,8 +396,11 @@ const PromptsFields = (props: PromptsFieldsProps) => {
if (!acc[groupName]) acc[groupName] = { fields: [], name: groupName };
acc[groupName].fields.push({
prompt,
generator: generator.name,
prompt: {
...prompt,
name: sanitize(prompt.name),
},
generator: sanitize(generator.name),
value: prompt.previous_value,
});
}
@@ -351,16 +412,24 @@ const PromptsFields = (props: PromptsFieldsProps) => {
const [formStore, { Form, Field }] = createForm<PromptForm>({
initialValues: {
promptValues: store.install?.promptValues || {},
promptValues: transformPromptValues(
store.install?.promptValues || {},
sanitize,
),
},
});
console.log(groups);
const handleSubmit: SubmitHandler<PromptForm> = (values, event) => {
console.log("vars submitted", values);
const restoredValues: PromptValues = transformPromptValues(
values.promptValues,
restore,
);
set("install", (s) => ({ ...s, promptValues: values.promptValues }));
console.log("vars submitted", restoredValues);
set("install", (s) => ({ ...s, promptValues: restoredValues }));
console.log("vars preloaded");
// Here you would typically trigger the ISO creation process
stepSignal.next();
@@ -479,8 +548,12 @@ const InstallSummary = () => {
const runGenerators = client.fetch("run_generators", {
all_prompt_values: store.install.promptValues,
base_dir: clanUri,
machine_name: store.install.machineName,
machine: {
name: store.install.machineName,
flake: {
identifier: clanUri,
},
},
});
set("install", (s) => ({
@@ -516,13 +589,16 @@ const InstallSummary = () => {
<StepLayout
body={
<div class="flex flex-col gap-4">
<Fieldset legend="Address Configuration">
<Fieldset legend="Machine">
<Orienter orientation="horizontal">
{/* TODO: Display the values emitted from previous steps */}
<Display label="Target" value="flash-installer.local" />
<Display label="Name" value={store.install.machineName} />
</Orienter>
<Divider orientation="horizontal" />
<Orienter orientation="horizontal">
<Display label="Address" value={store.install.targetHost} />
</Orienter>
</Fieldset>
<Fieldset legend="Disk Configuration">
<Fieldset legend="Disk">
<Orienter orientation="horizontal">
<Display label="Disk Schema" value="Single" />
</Orienter>
@@ -545,7 +621,14 @@ const InstallSummary = () => {
);
};
type InstallTopic = "generators" | "upload-secrets" | "nixos-anywhere";
type InstallTopic = [
"generators",
"upload-secrets",
"nixos-anywhere",
"formatting",
"rebooting",
"installing",
][number];
const InstallProgress = () => {
const stepSignal = useStepper<InstallSteps>();
@@ -563,7 +646,7 @@ const InstallProgress = () => {
);
return (
<div class="flex size-full h-60 flex-col items-center justify-end bg-inv-4">
<div class="flex size-full flex-col items-center justify-center bg-inv-4">
<div class="mb-6 flex w-full max-w-md flex-col items-center gap-3 fg-inv-1">
<Typography
hierarchy="title"
@@ -599,6 +682,15 @@ const InstallProgress = () => {
<Match when={installState()?.topic === "nixos-anywhere"}>
Running nixos-anywhere ...
</Match>
<Match when={installState()?.topic === "formatting"}>
Formatting ...
</Match>
<Match when={installState()?.topic === "installing"}>
Installing ...
</Match>
<Match when={installState()?.topic === "rebooting"}>
Rebooting ...
</Match>
</Switch>
</Match>
</Switch>
@@ -625,7 +717,7 @@ const InstallDone = (props: InstallDoneProps) => {
const [store, get] = getStepStore<InstallStoreType>(stepSignal);
return (
<div class="flex w-full flex-col items-center bg-inv-4">
<div class="flex size-full flex-col items-center justify-center bg-inv-4">
<div class="flex w-full max-w-md flex-col items-center gap-3 py-6 fg-inv-1">
<div class="rounded-full bg-semantic-success-4">
<Icon icon="Checkmark" class="size-9" />

View File

@@ -20,7 +20,6 @@ from . import (
)
from .arg_actions import AppendOptionAction
from .clan import show
from .facts import cli as facts
from .flash import cli as flash_cli
from .hyperlink import help_hyperlink
from .machines import cli as machines
@@ -302,45 +301,6 @@ For more detailed information, visit: {help_hyperlink("secrets", "https://docs.c
)
secrets.register_parser(parser_secrets)
parser_facts = subparsers.add_parser(
"facts",
help="Manage facts",
description="Manage facts",
epilog=(
f"""
Note: Facts are being deprecated, please use Vars instead.
For a migration guide visit: {help_hyperlink("vars", "https://docs.clan.lol/guides/migrations/migration-facts-vars")}
This subcommand provides an interface to facts of clan machines.
Facts are artifacts that a service can generate.
There are public and secret facts.
Public facts can be referenced by other machines directly.
Public facts can include: ip addresses, public keys.
Secret facts can include: passwords, private keys.
A service is an included clan-module that implements facts generation functionality.
For example the zerotier module will generate private and public facts.
In this case the public fact will be the resulting zerotier-ip of the machine.
The secret fact will be the zerotier-identity-secret, which is used by zerotier
to prove the machine has control of the zerotier-ip.
Examples:
$ clan facts generate
Will generate facts for all machines.
$ clan facts generate --service [SERVICE] --regenerate
Will regenerate facts, if they are already generated for a specific service.
This is especially useful for resetting certain passwords while leaving the rest
of the facts for a machine in place.
For more detailed information, visit: {help_hyperlink("secrets", "https://docs.clan.lol/guides/secrets")}
"""
),
formatter_class=argparse.RawTextHelpFormatter,
)
facts.register_parser(parser_facts)
# like facts but with vars instead of facts
parser_vars = subparsers.add_parser(
"vars",

View File

@@ -1,60 +0,0 @@
import argparse
import logging
from clan_lib.flake import require_flake
from clan_lib.machines.machines import Machine
from clan_cli.completions import add_dynamic_completer, complete_machines
log = logging.getLogger(__name__)
def check_secrets(machine: Machine, service: None | str = None) -> bool:
missing_secret_facts = []
missing_public_facts = []
services = [service] if service else list(machine.facts_data.keys())
for service in services:
for secret_fact in machine.facts_data[service]["secret"]:
if isinstance(secret_fact, str):
secret_name = secret_fact
else:
secret_name = secret_fact["name"]
if not machine.secret_facts_store.exists(service, secret_name):
machine.info(
f"Secret fact '{secret_fact}' for service '{service}' is missing."
)
missing_secret_facts.append((service, secret_name))
for public_fact in machine.facts_data[service]["public"]:
if not machine.public_facts_store.exists(service, public_fact):
machine.info(
f"Public fact '{public_fact}' for service '{service}' is missing."
)
missing_public_facts.append((service, public_fact))
machine.debug(f"missing_secret_facts: {missing_secret_facts}")
machine.debug(f"missing_public_facts: {missing_public_facts}")
return not (missing_secret_facts or missing_public_facts)
def check_command(args: argparse.Namespace) -> None:
flake = require_flake(args.flake)
machine = Machine(
name=args.machine,
flake=flake,
)
check_secrets(machine, service=args.service)
def register_check_parser(parser: argparse.ArgumentParser) -> None:
machines_parser = parser.add_argument(
"machine",
help="The machine to check secrets for",
)
add_dynamic_completer(machines_parser, complete_machines)
parser.add_argument(
"--service",
help="the service to check",
)
parser.set_defaults(func=check_command)

View File

@@ -1,15 +0,0 @@
from pathlib import Path
import pytest
from clan_lib.errors import ClanError
from clan_cli.tests.helpers import cli
def test_check_command_no_flake(
tmp_path: Path, monkeypatch: pytest.MonkeyPatch
) -> None:
monkeypatch.chdir(tmp_path)
with pytest.raises(ClanError):
cli.run(["facts", "check", "machine1"])

View File

@@ -1,133 +0,0 @@
# !/usr/bin/env python3
import argparse
from clan_cli.hyperlink import help_hyperlink
from .check import register_check_parser
from .generate import register_generate_parser
from .list import register_list_parser
from .upload import register_upload_parser
# takes a (sub)parser and configures it
def register_parser(parser: argparse.ArgumentParser) -> None:
subparser = parser.add_subparsers(
title="command",
description="the command to run",
help="the command to run",
required=True,
)
check_parser = subparser.add_parser(
"check",
help="check if facts are up to date",
epilog=(
f"""
This subcommand allows checking if all facts are up to date.
Examples:
$ clan facts check [MACHINE]
Will check facts for the specified machine.
For more detailed information, visit: {help_hyperlink("secrets", "https://docs.clan.lol/guides/secrets")}
"""
),
formatter_class=argparse.RawTextHelpFormatter,
)
register_check_parser(check_parser)
list_parser = subparser.add_parser(
"list",
help="list all facts",
epilog=(
f"""
This subcommand allows listing all public facts for a specific machine.
The resulting list will be a json string with the name of the fact as its key
and the fact itself as it's value.
This is how an example output might look like:
```
\u007b
"[FACT_NAME]": "[FACT]"
\u007d
```
Examples:
$ clan facts list [MACHINE]
Will list facts for the specified machine.
For more detailed information, visit: {help_hyperlink("secrets", "https://docs.clan.lol/guides/secrets")}
"""
),
formatter_class=argparse.RawTextHelpFormatter,
)
register_list_parser(list_parser)
parser_generate = subparser.add_parser(
"generate",
help="generate public and secret facts for machines",
epilog=(
f"""
This subcommand allows control of the generation of facts.
Often this function will be invoked automatically on deploying machines,
but there are situations the user may want to have more granular control,
especially for the regeneration of certain services.
A service is an included clan-module that implements facts generation functionality.
For example the zerotier module will generate private and public facts.
In this case the public fact will be the resulting zerotier-ip of the machine.
The secret fact will be the zerotier-identity-secret, which is used by zerotier
to prove the machine has control of the zerotier-ip.
Examples:
$ clan facts generate
Will generate facts for all machines.
$ clan facts generate [MACHINE]
Will generate facts for the specified machine.
$ clan facts generate [MACHINE] --service [SERVICE]
Will generate facts for the specified machine for the specified service.
$ clan facts generate --service [SERVICE] --regenerate
Will regenerate facts, if they are already generated for a specific service.
This is especially useful for resetting certain passwords while leaving the rest
of the facts for a machine in place.
For more detailed information, visit: {help_hyperlink("secrets", "https://docs.clan.lol/guides/secrets")}
"""
),
formatter_class=argparse.RawTextHelpFormatter,
)
register_generate_parser(parser_generate)
parser_upload = subparser.add_parser(
"upload",
help="upload secrets for machines",
epilog=(
f"""
This subcommand allows uploading secrets to remote machines.
If using sops as a secret backend it will upload the private key to the machine.
If using password store it uploads all the secrets you manage to the machine.
The default backend is sops.
Examples:
$ clan facts upload [MACHINE]
Will upload secrets to a specific machine.
For more detailed information, visit: {help_hyperlink("secrets", "https://docs.clan.lol/guides/secrets")}
"""
),
formatter_class=argparse.RawTextHelpFormatter,
)
register_upload_parser(parser_upload)

View File

@@ -1,263 +0,0 @@
import argparse
import logging
import os
import sys
import traceback
from collections.abc import Callable
from pathlib import Path
from tempfile import TemporaryDirectory
from clan_lib.cmd import RunOpts, run
from clan_lib.errors import ClanError
from clan_lib.flake import require_flake
from clan_lib.git import commit_files
from clan_lib.machines.list import list_full_machines
from clan_lib.machines.machines import Machine
from clan_lib.nix import nix_shell
from clan_cli.completions import (
add_dynamic_completer,
complete_machines,
complete_services_for_machine,
)
from .check import check_secrets
from .public_modules import FactStoreBase
from .secret_modules import SecretStoreBase
log = logging.getLogger(__name__)
def read_multiline_input(prompt: str = "Finish with Ctrl-D") -> str:
"""
Read multi-line input from stdin.
"""
print(prompt, flush=True)
proc = run(["cat"], RunOpts(check=False))
log.info("Input received. Processing...")
return proc.stdout.rstrip(os.linesep).rstrip()
def bubblewrap_cmd(generator: str, facts_dir: Path, secrets_dir: Path) -> list[str]:
# fmt: off
return nix_shell(
[
"bash",
"bubblewrap",
],
[
"bwrap",
"--unshare-all",
"--tmpfs", "/",
"--ro-bind", "/nix/store", "/nix/store",
"--ro-bind", "/bin/sh", "/bin/sh",
"--dev", "/dev",
# not allowed to bind procfs in some sandboxes
"--bind", str(facts_dir), str(facts_dir),
"--bind", str(secrets_dir), str(secrets_dir),
"--chdir", "/",
# Doesn't work in our CI?
#"--proc", "/proc",
#"--hostname", "facts",
"--bind", "/proc", "/proc",
"--uid", "1000",
"--gid", "1000",
"--",
"bash", "-c", generator
],
)
# fmt: on
def generate_service_facts(
machine: Machine,
service: str,
regenerate: bool,
secret_facts_store: SecretStoreBase,
public_facts_store: FactStoreBase,
tmpdir: Path,
prompt: Callable[[str, str], str],
) -> bool:
service_dir = tmpdir / service
# check if all secrets exist and generate them if at least one is missing
needs_regeneration = not check_secrets(machine, service=service)
machine.debug(f"{service} needs_regeneration: {needs_regeneration}")
if not (needs_regeneration or regenerate):
return False
if not isinstance(machine.flake, Path):
msg = f"flake is not a Path: {machine.flake}"
msg += "fact/secret generation is only supported for local flakes"
env = os.environ.copy()
facts_dir = service_dir / "facts"
facts_dir.mkdir(parents=True)
env["facts"] = str(facts_dir)
secrets_dir = service_dir / "secrets"
secrets_dir.mkdir(parents=True)
env["secrets"] = str(secrets_dir)
# compatibility for old outputs.nix users
if isinstance(machine.facts_data[service]["generator"], str):
generator = machine.facts_data[service]["generator"]
else:
generator = machine.facts_data[service]["generator"]["finalScript"]
if machine.facts_data[service]["generator"]["prompt"]:
prompt_value = prompt(
service, machine.facts_data[service]["generator"]["prompt"]
)
env["prompt_value"] = prompt_value
from clan_lib import bwrap
if sys.platform == "linux" and bwrap.bubblewrap_works():
cmd = bubblewrap_cmd(generator, facts_dir, secrets_dir)
else:
cmd = ["bash", "-c", generator]
run(
cmd,
RunOpts(env=env),
)
files_to_commit = []
# store secrets
for secret_name, secret in machine.facts_data[service]["secret"].items():
groups = secret.get("groups", [])
secret_file = secrets_dir / secret_name
if not secret_file.is_file():
msg = f"did not generate a file for '{secret_name}' when running the following command:\n"
msg += generator
raise ClanError(msg)
secret_path = secret_facts_store.set(
service, secret_name, secret_file.read_bytes(), groups
)
if secret_path:
files_to_commit.append(secret_path)
# store facts
for name in machine.facts_data[service]["public"]:
fact_file = facts_dir / name
if not fact_file.is_file():
msg = f"did not generate a file for '{name}' when running the following command:\n"
msg += machine.facts_data[service]["generator"]
raise ClanError(msg)
fact_file = public_facts_store.set(service, name, fact_file.read_bytes())
if fact_file:
files_to_commit.append(fact_file)
commit_files(
files_to_commit,
machine.flake_dir,
f"Update facts/secrets for service {service} in machine {machine.name}",
)
return True
def prompt_func(service: str, text: str) -> str:
print(f"{text}: ")
return read_multiline_input()
def _generate_facts_for_machine(
machine: Machine,
service: str | None,
regenerate: bool,
tmpdir: Path,
prompt: Callable[[str, str], str] = prompt_func,
) -> bool:
local_temp = tmpdir / machine.name
local_temp.mkdir()
machine_updated = False
if service and service not in machine.facts_data:
services = list(machine.facts_data.keys())
msg = f"Could not find service with name: {service}. The following services are available: {services}"
raise ClanError(msg)
if service:
machine_service_facts = {service: machine.facts_data[service]}
else:
machine_service_facts = machine.facts_data
for service in machine_service_facts:
machine_updated |= generate_service_facts(
machine=machine,
service=service,
regenerate=regenerate,
secret_facts_store=machine.secret_facts_store,
public_facts_store=machine.public_facts_store,
tmpdir=local_temp,
prompt=prompt,
)
if machine_updated:
# flush caches to make sure the new secrets are available in evaluation
machine.flush_caches()
return machine_updated
def generate_facts(
machines: list[Machine],
service: str | None = None,
regenerate: bool = False,
prompt: Callable[[str, str], str] = prompt_func,
) -> bool:
was_regenerated = False
with TemporaryDirectory(prefix="facts-generate-") as _tmpdir:
tmpdir = Path(_tmpdir).resolve()
for machine in machines:
errors = 0
try:
was_regenerated |= _generate_facts_for_machine(
machine, service, regenerate, tmpdir, prompt
)
except (OSError, ClanError) as e:
machine.error(f"Failed to generate facts: {e}")
traceback.print_exc()
errors += 1
if errors > 0:
msg = (
f"Failed to generate facts for {errors} hosts. Check the logs above"
)
raise ClanError(msg)
if not was_regenerated and len(machines) > 0:
log.info("All secrets and facts are already up to date")
return was_regenerated
def generate_command(args: argparse.Namespace) -> None:
flake = require_flake(args.flake)
machines: list[Machine] = list(list_full_machines(flake).values())
if len(args.machines) > 0:
machines = list(
filter(
lambda m: m.name in args.machines,
machines,
)
)
generate_facts(machines, args.service, args.regenerate)
def register_generate_parser(parser: argparse.ArgumentParser) -> None:
machines_parser = parser.add_argument(
"machines",
type=str,
help="machine to generate facts for. if empty, generate facts for all machines",
nargs="*",
default=[],
)
add_dynamic_completer(machines_parser, complete_machines)
service_parser = parser.add_argument(
"--service",
type=str,
help="service to generate facts for, if empty, generate facts for every service",
default=None,
)
add_dynamic_completer(service_parser, complete_services_for_machine)
parser.add_argument(
"--regenerate",
action=argparse.BooleanOptionalAction,
help="whether to regenerate facts for the specified machine",
default=None,
)
parser.set_defaults(func=generate_command)

View File

@@ -1,15 +0,0 @@
from pathlib import Path
import pytest
from clan_lib.errors import ClanError
from clan_cli.tests.helpers import cli
def test_generate_command_no_flake(
tmp_path: Path, monkeypatch: pytest.MonkeyPatch
) -> None:
monkeypatch.chdir(tmp_path)
with pytest.raises(ClanError):
cli.run(["facts", "generate"])

View File

@@ -1,33 +0,0 @@
import argparse
import json
import logging
from clan_lib.flake import require_flake
from clan_lib.machines.machines import Machine
from clan_cli.completions import add_dynamic_completer, complete_machines
log = logging.getLogger(__name__)
def get_command(args: argparse.Namespace) -> None:
flake = require_flake(args.flake)
machine = Machine(name=args.machine, flake=flake)
# the raw_facts are bytestrings making them not json serializable
raw_facts = machine.public_facts_store.get_all()
facts = {}
for key in raw_facts["TODO"]:
facts[key] = raw_facts["TODO"][key].decode("utf8")
print(json.dumps(facts, indent=4))
def register_list_parser(parser: argparse.ArgumentParser) -> None:
machines_parser = parser.add_argument(
"machine",
help="The machine to print facts for",
)
add_dynamic_completer(machines_parser, complete_machines)
parser.set_defaults(func=get_command)

View File

@@ -1,13 +0,0 @@
from pathlib import Path
import pytest
from clan_lib.errors import ClanError
from clan_cli.tests.helpers import cli
def test_list_command_no_flake(tmp_path: Path, monkeypatch: pytest.MonkeyPatch) -> None:
monkeypatch.chdir(tmp_path)
with pytest.raises(ClanError):
cli.run(["facts", "list", "machine1"])

View File

@@ -1,30 +0,0 @@
from __future__ import annotations
from abc import ABC, abstractmethod
from pathlib import Path
import clan_lib.machines.machines as machines
class FactStoreBase(ABC):
@abstractmethod
def __init__(self, machine: machines.Machine) -> None:
pass
@abstractmethod
def exists(self, service: str, name: str) -> bool:
pass
@abstractmethod
def set(self, service: str, name: str, value: bytes) -> Path | None:
pass
# get a single fact
@abstractmethod
def get(self, service: str, name: str) -> bytes:
pass
# get all facts
@abstractmethod
def get_all(self) -> dict[str, dict[str, bytes]]:
pass

View File

@@ -1,51 +0,0 @@
from pathlib import Path
from clan_lib.errors import ClanError
from clan_lib.machines.machines import Machine
from . import FactStoreBase
class FactStore(FactStoreBase):
def __init__(self, machine: Machine) -> None:
self.machine = machine
self.works_remotely = False
def set(self, service: str, name: str, value: bytes) -> Path | None:
if self.machine.flake.is_local:
fact_path = (
self.machine.flake.path
/ "machines"
/ self.machine.name
/ "facts"
/ name
)
fact_path.parent.mkdir(parents=True, exist_ok=True)
fact_path.touch()
fact_path.write_bytes(value)
return fact_path
msg = f"in_flake fact storage is only supported for local flakes: {self.machine.flake}"
raise ClanError(msg)
def exists(self, service: str, name: str) -> bool:
fact_path = (
self.machine.flake_dir / "machines" / self.machine.name / "facts" / name
)
return fact_path.exists()
# get a single fact
def get(self, service: str, name: str) -> bytes:
fact_path = (
self.machine.flake_dir / "machines" / self.machine.name / "facts" / name
)
return fact_path.read_bytes()
# get all facts
def get_all(self) -> dict[str, dict[str, bytes]]:
facts_folder = self.machine.flake_dir / "machines" / self.machine.name / "facts"
facts: dict[str, dict[str, bytes]] = {}
facts["TODO"] = {}
if facts_folder.exists():
for fact_path in facts_folder.iterdir():
facts["TODO"][fact_path.name] = fact_path.read_bytes()
return facts

View File

@@ -1,47 +0,0 @@
import logging
from pathlib import Path
from clan_lib.dirs import vm_state_dir
from clan_lib.errors import ClanError
from clan_lib.machines.machines import Machine
from . import FactStoreBase
log = logging.getLogger(__name__)
class FactStore(FactStoreBase):
def __init__(self, machine: Machine) -> None:
self.machine = machine
self.works_remotely = False
self.dir = vm_state_dir(machine.flake.identifier, machine.name) / "facts"
machine.debug(f"FactStore initialized with dir {self.dir}")
def exists(self, service: str, name: str) -> bool:
fact_path = self.dir / service / name
return fact_path.exists()
def set(self, service: str, name: str, value: bytes) -> Path | None:
fact_path = self.dir / service / name
fact_path.parent.mkdir(parents=True, exist_ok=True)
fact_path.write_bytes(value)
return None
# get a single fact
def get(self, service: str, name: str) -> bytes:
fact_path = self.dir / service / name
if fact_path.exists():
return fact_path.read_bytes()
msg = f"Fact {name} for service {service} not found"
raise ClanError(msg)
# get all facts
def get_all(self) -> dict[str, dict[str, bytes]]:
facts: dict[str, dict[str, bytes]] = {}
if self.dir.exists():
for service in self.dir.iterdir():
facts[service.name] = {}
for fact in service.iterdir():
facts[service.name][fact.name] = fact.read_bytes()
return facts

View File

@@ -1,34 +0,0 @@
from __future__ import annotations
from abc import ABC, abstractmethod
from pathlib import Path
import clan_lib.machines.machines as machines
from clan_lib.ssh.host import Host
class SecretStoreBase(ABC):
@abstractmethod
def __init__(self, machine: machines.Machine) -> None:
pass
@abstractmethod
def set(
self, service: str, name: str, value: bytes, groups: list[str]
) -> Path | None:
pass
@abstractmethod
def get(self, service: str, name: str) -> bytes:
pass
@abstractmethod
def exists(self, service: str, name: str) -> bool:
pass
def needs_upload(self, host: Host) -> bool:
return True
@abstractmethod
def upload(self, output_dir: Path) -> None:
pass

View File

@@ -1,122 +0,0 @@
import os
import subprocess
from pathlib import Path
from typing import override
from clan_lib.cmd import Log, RunOpts
from clan_lib.machines.machines import Machine
from clan_lib.nix import nix_shell
from clan_lib.ssh.host import Host
from clan_cli.facts.secret_modules import SecretStoreBase
class SecretStore(SecretStoreBase):
def __init__(self, machine: Machine) -> None:
self.machine = machine
def set(
self, service: str, name: str, value: bytes, groups: list[str]
) -> Path | None:
subprocess.run(
nix_shell(
["pass"],
["pass", "insert", "-m", f"machines/{self.machine.name}/{name}"],
),
input=value,
check=True,
)
return None # we manage the files outside of the git repo
def get(self, service: str, name: str) -> bytes:
return subprocess.run(
nix_shell(
["pass"],
["pass", "show", f"machines/{self.machine.name}/{name}"],
),
check=True,
stdout=subprocess.PIPE,
).stdout
def exists(self, service: str, name: str) -> bool:
password_store = os.environ.get(
"PASSWORD_STORE_DIR", f"{os.environ['HOME']}/.password-store"
)
secret_path = Path(password_store) / f"machines/{self.machine.name}/{name}.gpg"
return secret_path.exists()
def generate_hash(self) -> bytes:
password_store = os.environ.get(
"PASSWORD_STORE_DIR", f"{os.environ['HOME']}/.password-store"
)
hashes = []
hashes.append(
subprocess.run(
nix_shell(
["git"],
[
"git",
"-C",
password_store,
"log",
"-1",
"--format=%H",
f"machines/{self.machine.name}",
],
),
stdout=subprocess.PIPE,
check=False,
).stdout.strip()
)
for symlink in Path(password_store).glob(f"machines/{self.machine.name}/**/*"):
if symlink.is_symlink():
hashes.append(
subprocess.run(
nix_shell(
["git"],
[
"git",
"-C",
password_store,
"log",
"-1",
"--format=%H",
str(symlink),
],
),
stdout=subprocess.PIPE,
check=False,
).stdout.strip()
)
# we sort the hashes to make sure that the order is always the same
hashes.sort()
return b"\n".join(hashes)
@override
def needs_upload(self, host: Host) -> bool:
local_hash = self.generate_hash()
with host.host_connection() as ssh:
remote_hash = ssh.run(
# TODO get the path to the secrets from the machine
["cat", f"{self.machine.secrets_upload_directory}/.pass_info"],
RunOpts(log=Log.STDERR, check=False),
).stdout.strip()
if not remote_hash:
print("remote hash is empty")
return True
return local_hash.decode() != remote_hash
def upload(self, output_dir: Path) -> None:
os.umask(0o077)
for service in self.machine.facts_data:
for secret in self.machine.facts_data[service]["secret"]:
if isinstance(secret, dict):
secret_name = secret["name"]
else:
# TODO: drop old format soon
secret_name = secret
(output_dir / secret_name).write_bytes(self.get(service, secret_name))
(output_dir / ".pass_info").write_bytes(self.generate_hash())

View File

@@ -1,72 +0,0 @@
from pathlib import Path
from typing import override
from clan_lib.machines.machines import Machine
from clan_lib.ssh.host import Host
from clan_cli.secrets.folders import sops_secrets_folder
from clan_cli.secrets.machines import add_machine, has_machine
from clan_cli.secrets.secrets import decrypt_secret, encrypt_secret, has_secret
from clan_cli.secrets.sops import generate_private_key, load_age_plugins
from . import SecretStoreBase
class SecretStore(SecretStoreBase):
def __init__(self, machine: Machine) -> None:
self.machine = machine
# no need to generate keys if we don't manage secrets
if not hasattr(self.machine, "facts_data"):
return
if not self.machine.facts_data:
return
if has_machine(self.machine.flake_dir, self.machine.name):
return
priv_key, pub_key = generate_private_key()
encrypt_secret(
self.machine.flake_dir,
sops_secrets_folder(self.machine.flake_dir)
/ f"{self.machine.name}-age.key",
priv_key,
add_groups=self.machine.select("config.clan.core.sops.defaultGroups"),
age_plugins=load_age_plugins(self.machine.flake),
)
add_machine(self.machine.flake_dir, self.machine.name, pub_key, False)
def set(
self, service: str, name: str, value: bytes, groups: list[str]
) -> Path | None:
path = (
sops_secrets_folder(self.machine.flake_dir) / f"{self.machine.name}-{name}"
)
encrypt_secret(
self.machine.flake_dir,
path,
value,
add_machines=[self.machine.name],
add_groups=groups,
age_plugins=load_age_plugins(self.machine.flake),
)
return path
def get(self, service: str, name: str) -> bytes:
return decrypt_secret(
sops_secrets_folder(self.machine.flake_dir) / f"{self.machine.name}-{name}",
age_plugins=load_age_plugins(self.machine.flake),
).encode("utf-8")
def exists(self, service: str, name: str) -> bool:
return has_secret(
sops_secrets_folder(self.machine.flake_dir) / f"{self.machine.name}-{name}",
)
@override
def needs_upload(self, host: Host) -> bool:
return False
# We rely now on the vars backend to upload the age key
def upload(self, output_dir: Path) -> None:
pass

View File

@@ -1,36 +0,0 @@
import shutil
from pathlib import Path
from typing import override
from clan_lib.dirs import vm_state_dir
from clan_lib.machines.machines import Machine
from . import SecretStoreBase
class SecretStore(SecretStoreBase):
def __init__(self, machine: Machine) -> None:
self.machine = machine
self.dir = vm_state_dir(machine.flake.identifier, machine.name) / "secrets"
self.dir.mkdir(parents=True, exist_ok=True)
def set(
self, service: str, name: str, value: bytes, groups: list[str]
) -> Path | None:
secret_file = self.dir / service / name
secret_file.parent.mkdir(parents=True, exist_ok=True)
secret_file.write_bytes(value)
return None # we manage the files outside of the git repo
def get(self, service: str, name: str) -> bytes:
secret_file = self.dir / service / name
return secret_file.read_bytes()
def exists(self, service: str, name: str) -> bool:
return (self.dir / service / name).exists()
@override
def upload(self, output_dir: Path) -> None:
if output_dir.exists():
shutil.rmtree(output_dir)
shutil.copytree(self.dir, output_dir)

View File

@@ -1,42 +0,0 @@
import argparse
import logging
from pathlib import Path
from tempfile import TemporaryDirectory
from clan_lib.flake import require_flake
from clan_lib.machines.machines import Machine
from clan_lib.ssh.host import Host
from clan_cli.completions import add_dynamic_completer, complete_machines
from clan_cli.ssh.upload import upload
log = logging.getLogger(__name__)
def upload_secrets(machine: Machine, host: Host) -> None:
if not machine.secret_facts_store.needs_upload(host):
machine.info("Secrets already uploaded")
return
with TemporaryDirectory(prefix="facts-upload-") as _tempdir:
local_secret_dir = Path(_tempdir).resolve()
machine.secret_facts_store.upload(local_secret_dir)
remote_secret_dir = Path(machine.secrets_upload_directory)
upload(host, local_secret_dir, remote_secret_dir)
def upload_command(args: argparse.Namespace) -> None:
flake = require_flake(args.flake)
machine = Machine(name=args.machine, flake=flake)
with machine.target_host().host_connection() as host:
upload_secrets(machine, host)
def register_upload_parser(parser: argparse.ArgumentParser) -> None:
machines_parser = parser.add_argument(
"machine",
help="The machine to upload secrets to",
)
add_dynamic_completer(machines_parser, complete_machines)
parser.set_defaults(func=upload_command)

View File

@@ -1,15 +0,0 @@
from pathlib import Path
import pytest
from clan_lib.errors import ClanError
from clan_cli.tests.helpers import cli
def test_upload_command_no_flake(
tmp_path: Path, monkeypatch: pytest.MonkeyPatch
) -> None:
monkeypatch.chdir(tmp_path)
with pytest.raises(ClanError):
cli.run(["facts", "upload", "machine1"])

View File

@@ -1,6 +1,8 @@
import argparse
import json
import logging
import sys
from contextlib import ExitStack
from pathlib import Path
from typing import get_args
@@ -8,6 +10,7 @@ from clan_lib.errors import ClanError
from clan_lib.flake import require_flake
from clan_lib.machines.install import BuildOn, InstallOptions, run_machine_install
from clan_lib.machines.machines import Machine
from clan_lib.network.qr_code import read_qr_image, read_qr_json
from clan_lib.ssh.host_key import HostKeyCheck
from clan_lib.ssh.remote import Remote
@@ -17,7 +20,6 @@ from clan_cli.completions import (
complete_target_host,
)
from clan_cli.machines.hardware import HardwareConfig
from clan_cli.ssh.deploy_info import DeployInfo, find_reachable_host, ssh_command_parse
log = logging.getLogger(__name__)
@@ -27,80 +29,71 @@ def install_command(args: argparse.Namespace) -> None:
flake = require_flake(args.flake)
# Only if the caller did not specify a target_host via args.target_host
# Find a suitable target_host that is reachable
target_host_str = args.target_host
deploy_info: DeployInfo | None = (
ssh_command_parse(args) if target_host_str is None else None
)
use_tor = False
if deploy_info:
host = find_reachable_host(deploy_info)
if host is None or host.socks_port:
use_tor = True
target_host_str = deploy_info.tor.target
else:
target_host_str = host.target
if args.password:
password = args.password
elif deploy_info and deploy_info.addrs[0].password:
password = deploy_info.addrs[0].password
else:
password = None
machine = Machine(name=args.machine, flake=flake)
host_key_check = args.host_key_check
if target_host_str is not None:
target_host = Remote.from_ssh_uri(
machine_name=machine.name, address=target_host_str
).override(host_key_check=host_key_check)
else:
target_host = machine.target_host().override(host_key_check=host_key_check)
if args.identity_file:
target_host = target_host.override(private_key=args.identity_file)
if machine._class_ == "darwin":
msg = "Installing macOS machines is not yet supported"
raise ClanError(msg)
if not args.yes:
while True:
ask = (
input(f"Install {args.machine} to {target_host.target}? [y/N] ")
.strip()
.lower()
with ExitStack() as stack:
remote: Remote
if args.target_host:
# TODO add network support here with either --network or some url magic
remote = Remote.from_ssh_uri(
machine_name=args.machine, address=args.target_host
)
if ask == "y":
break
if ask == "n" or ask == "":
return None
print(f"Invalid input '{ask}'. Please enter 'y' for yes or 'n' for no.")
elif args.png:
data = read_qr_image(Path(args.png))
qr_code = read_qr_json(data, args.flake)
remote = stack.enter_context(qr_code.get_best_remote())
elif args.json:
json_file = Path(args.json)
if json_file.is_file():
data = json.loads(json_file.read_text())
else:
data = json.loads(args.json)
if args.identity_file:
target_host = target_host.override(private_key=args.identity_file)
qr_code = read_qr_json(data, args.flake)
remote = stack.enter_context(qr_code.get_best_remote())
else:
msg = "No --target-host, --json or --png data provided"
raise ClanError(msg)
if password:
target_host = target_host.override(password=password)
machine = Machine(name=args.machine, flake=flake)
if args.host_key_check:
remote = remote.override(host_key_check=args.host_key_check)
if use_tor:
target_host = target_host.override(
socks_port=9050, socks_wrapper=["torify"]
if machine._class_ == "darwin":
msg = "Installing macOS machines is not yet supported"
raise ClanError(msg)
if not args.yes:
while True:
ask = (
input(f"Install {args.machine} to {remote.target}? [y/N] ")
.strip()
.lower()
)
if ask == "y":
break
if ask == "n" or ask == "":
return None
print(
f"Invalid input '{ask}'. Please enter 'y' for yes or 'n' for no."
)
if args.identity_file:
remote = remote.override(private_key=args.identity_file)
if args.password:
remote = remote.override(password=args.password)
return run_machine_install(
InstallOptions(
machine=machine,
kexec=args.kexec,
phases=args.phases,
debug=args.debug,
no_reboot=args.no_reboot,
build_on=args.build_on if args.build_on is not None else None,
update_hardware_config=HardwareConfig(args.update_hardware_config),
),
target_host=remote,
)
return run_machine_install(
InstallOptions(
machine=machine,
kexec=args.kexec,
phases=args.phases,
debug=args.debug,
no_reboot=args.no_reboot,
build_on=args.build_on if args.build_on is not None else None,
update_hardware_config=HardwareConfig(args.update_hardware_config),
),
target_host=target_host,
)
except KeyboardInterrupt:
log.warning("Interrupted by user")
sys.exit(1)

View File

@@ -144,14 +144,10 @@ def update_command(args: argparse.Namespace) -> None:
build_host = machine.build_host()
# Figure out the target host
if args.target_host:
target_host: Host | None = None
if args.target_host == "localhost":
target_host = LocalHost()
else:
target_host = Remote.from_ssh_uri(
machine_name=machine.name,
address=args.target_host,
).override(host_key_check=host_key_check)
target_host = Remote.from_ssh_uri(
machine_name=machine.name,
address=args.target_host,
).override(host_key_check=host_key_check)
else:
target_host = machine.target_host().override(
host_key_check=host_key_check

View File

@@ -16,6 +16,9 @@ def overview_command(args: argparse.Namespace) -> None:
for peer_name, peer in network["peers"].items():
print(f"\t{peer_name}: {'[OFFLINE]' if not peer else f'[{peer}]'}")
if not overview:
print("No networks found.")
def register_overview_parser(parser: argparse.ArgumentParser) -> None:
parser.set_defaults(func=overview_command)

View File

@@ -16,8 +16,8 @@ def ping_command(args: argparse.Namespace) -> None:
networks = networks_from_flake(flake)
if not networks:
print("No networks found in the flake")
print("No networks found")
return
# If network is specified, only check that network
if network_name:
networks_to_check = [(network_name, networks[network_name])]

Some files were not shown because too many files have changed in this diff.