Compare commits: `mdformat` ... `test-updat`

230 Commits
SHA1 (author and date columns were empty):

71407f88bf c9275db377 99dc4f6787 63c0db482f d2456be3dd
c3c08482ac 62126f0c32 28139560c2 45c916fb6d 87ea942399
39a032a285 a06940e981 4aebfadc8a f45f26994e c777a1a2b9
36fe7822f7 0ccf3310f9 a8d6552caa a131448dcf 14a52dbc2e
565391bd8c 9bffa2a774 e42a07423e c5178ac16a 33791e06cd
c7e3bf624e ba027c2239 25fdabee29 de69c63ee3 b9573636d8
3862ad2a06 c447aec9d3 5137d19b0f 453f2649d3 58cfcf3d25
c260a97cc1 3eb64870b0 7412b958c6 a0c27194a6 3437af29cb
0b1c12d2e5 8620761bbd d793b6ca07 17e9231657 acc2674d79
c34a21a3bb 275bff23da 1a766a3447 c22844c83b 5472ca0e21
ad890b0b6b a364b5ebf3 d0134d131e ccf0dace11 9977a903ce
dc9bf5068e 6b4f79c9fa b2985b59e9 d4ac3b83ee 00bf55be5a
851d6aaa89 f007279bee 5a3381d9ff 83e51db2e7 4e4af8a52f
54a8ec717e d3e5e6edf1 a4277ad312 8877f2d451 9275b66bd9
6a964f37d5 73f2a4f56f 85fb0187ee db9812a08b ca69530591
fc5b0e4113 278af5f0f4 e7baf25ff7 fada75144c 803ef5476f
016bd263d0 f9143f8a5d 92eb27fcb1 0cc9b91ae8 2ed3608e34
a92a1a7dd1 9a903be6d4 adea270b27 765eb142a5 faa1405d6b
0c93aab818 56923ae2c3 e2f64e1d40 c574b84278 640f15d55e
789d326273 1763d85d91 082fa05083 9ed7190606 6c22539dd4
e6819ede61 186a760529 a84aee7b0c cab2fa44ba 5962149e55
00f9d08a4b 3d0c843308 847138472b c7786a59fd 3b2d357f10
a83dbf604c f77456a123 6e4c3a638d 3d2127ce1e a4a5916fa2
f6727055cd 0517d87caa 89e587592c 439495d738 0b2fd681be
41de615331 b7639b1d81 602879c9e4 53e16242b9 24c5146763
dca7aa0487 647bc4e4df 1c80223fe3 7ac9b00398 d37c9e3b04
0fe9d0e157 5479c767c1 edc389ba4b 4cb17d42e1 f26499edb8
2857cb7ed8 3168fecd52 24c20ff243 8ba8fda54b 0992a47b00
d5b09f18ed fb2fe36c87 3db51887b1 24f3bcca57 85006c8103
db5571d623 d4bdaec586 cb9c8e5b5a 0a1802c341 dfae1a4429
c1dc73a21b 8145740cc1 b2a54f5b0d 9c9adc6e16 f7cde8eb0f
501d020562 a9bafd71e1 166e4b8081 c3eb40f17a 7330285150
8cf8573c61 5bfa0d7a9d 8ea2dd9b72 6efcade56a 6d2372be56
626af4691b 63697ac4b1 0ebb1f0c66 1dda60847e a7bce4cb19
a5474bc25f f634b8f1fb 0ad40a0233 78abc36cd3 f5158b068f
e6066a6cb1 fc8b66effa 16b92963fd 2ff3d871ac 108936ef07
c45d4cfec9 64217e1281 d1421bb534 ac20514a8e 79c4e73a15
61a647b436 c9a709783a c55b369899 084b8bacd3 47ad7d8a95
3798808013 43a39267f3 db94ea2d2e f0533f9bba 360048fd04
8f8426de52 4bce390e64 2b7837e2b6 cbf9678534 b38b10c9a6
31cbb7dc00 0fa4377793 7b0d10e8c2 bb41adab4b 648aa7dc59
3073969c92 2f1dc3a33d b707dcea2d 4f0c8025b2 b91bee537a
7207a3e8cd ac675a5af0 64caebde62 4934884e0c 22cd9baee2
84232b5355 5bc7c255c1 d11d83f699 2ef1b2a8fa f7414d7e6e
ab384150b2 0b6939ffee bc6a1a9d17 7055461cf0 a9564df6a9
e2dfc74d02 326cb60aea 68b264970a 1fa4ef82e9 ec70de406b
.github/workflows/repo-sync.yml (vendored, 2 changes)

```diff
@@ -10,7 +10,7 @@ jobs:
     if: github.repository_owner == 'clan-lol'
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v5
        with:
          persist-credentials: false
      - uses: actions/create-github-app-token@v2
```
docs/CONTRIBUTING.md

```diff
@@ -1,6 +1,4 @@
 # Contributing to Clan
 
 <!-- Local file: docs/CONTRIBUTING.md -->
-
-Go to the Contributing guide at
-https://docs.clan.lol/guides/contributing/CONTRIBUTING
+Go to the Contributing guide at https://docs.clan.lol/guides/contributing/CONTRIBUTING
```
```diff
@@ -16,3 +16,4 @@ FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
 COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
 IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
```
README.md (52 changes)

```diff
@@ -1,69 +1,45 @@
 # Clan core repository
 
-Welcome to the Clan core repository, the heart of the
-[clan.lol](https://clan.lol/) project! This monorepo is the foundation of Clan,
-a revolutionary open-source project aimed at restoring fun, freedom, and
-functionality to computing. Here, you'll find all the essential packages, NixOS
-modules, CLI tools, and tests needed to contribute to and work with the Clan
-project. Clan leverages the Nix system to ensure reliability, security, and
-seamless management of digital environments, putting the power back into the
-hands of users.
+Welcome to the Clan core repository, the heart of the [clan.lol](https://clan.lol/) project! This monorepo is the foundation of Clan, a revolutionary open-source project aimed at restoring fun, freedom, and functionality to computing. Here, you'll find all the essential packages, NixOS modules, CLI tools, and tests needed to contribute to and work with the Clan project. Clan leverages the Nix system to ensure reliability, security, and seamless management of digital environments, putting the power back into the hands of users.
 
 ## Why Clan?
 
-Our mission is simple: to democratize computing by providing tools that empower
-users, foster innovation, and challenge outdated paradigms. Clan represents our
-contribution to a future where technology serves humanity, not the other way
-around. By participating in Clan, you're joining a movement dedicated to
-creating a secure, user-empowered digital future.
+Our mission is simple: to democratize computing by providing tools that empower users, foster innovation, and challenge outdated paradigms. Clan represents our contribution to a future where technology serves humanity, not the other way around. By participating in Clan, you're joining a movement dedicated to creating a secure, user-empowered digital future.
 
 ## Features of Clan
 
-- **Full-Stack System Deployment:** Utilize Clan's toolkit alongside Nix's
-  reliability to build and manage systems effortlessly.
+- **Full-Stack System Deployment:** Utilize Clan's toolkit alongside Nix's reliability to build and manage systems effortlessly.
 - **Overlay Networks:** Secure, private communication channels between devices.
-- **Virtual Machine Integration:** Seamless operation of VM applications within
-  the main operating system.
+- **Virtual Machine Integration:** Seamless operation of VM applications within the main operating system.
 - **Robust Backup Management:** Long-term, self-hosted data preservation.
-- **Intuitive Secret Management:** Simplified encryption and password management
-  processes.
+- **Intuitive Secret Management:** Simplified encryption and password management processes.
 
 ## Getting started with Clan
 
-If you're new to Clan and eager to dive in, start with our quickstart guide and
-explore the core functionalities that Clan offers:
+If you're new to Clan and eager to dive in, start with our quickstart guide and explore the core functionalities that Clan offers:
 
-- **Quickstart Guide**: Check out
-  [getting started](https://docs.clan.lol/#starting-with-a-new-clan-project)<!-- [docs/site/index.md](docs/site/index.md) -->
-  to get up and running with Clan in no time.
+- **Quickstart Guide**: Check out [getting started](https://docs.clan.lol/#starting-with-a-new-clan-project)<!-- [docs/site/index.md](docs/site/index.md) --> to get up and running with Clan in no time.
 
 ### Managing secrets
 
-In the Clan ecosystem, security is paramount. Learn how to handle secrets
-effectively:
+In the Clan ecosystem, security is paramount. Learn how to handle secrets effectively:
 
-- **Secrets Management**: Securely manage secrets by consulting
-  [Vars](https://docs.clan.lol/concepts/generators/)<!-- [secrets.md](docs/site/concepts/generators.md) -->.
+- **Secrets Management**: Securely manage secrets by consulting [Vars](https://docs.clan.lol/concepts/generators/)<!-- [secrets.md](docs/site/concepts/generators.md) -->.
 
 ### Contributing to Clan
 
-The Clan project thrives on community contributions. We welcome everyone to
-contribute and collaborate:
+The Clan project thrives on community contributions. We welcome everyone to contribute and collaborate:
 
-- **Contribution Guidelines**: Make a meaningful impact by following the steps
-  in
-  [contributing](https://docs.clan.lol/contributing/contributing/)<!-- [contributing.md](docs/CONTRIBUTING.md) -->.
+- **Contribution Guidelines**: Make a meaningful impact by following the steps in [contributing](https://docs.clan.lol/contributing/contributing/)<!-- [contributing.md](docs/CONTRIBUTING.md) -->.
 
 ## Join the revolution
 
-Clan is more than a tool; it's a movement towards a better digital future. By
-contributing to the Clan project, you're part of changing technology for the
-better, together.
+Clan is more than a tool; it's a movement towards a better digital future. By contributing to the Clan project, you're part of changing technology for the better, together.
 
 ### Community and support
 
 Connect with us and the Clan community for support and discussion:
 
 - [Matrix channel](https://matrix.to/#/#clan:clan.lol) for live discussions.
-- IRC bridge on [hackint#clan](https://chat.hackint.org/#/connect?join=clan) for
-  real-time chat support.
+- IRC bridge on [hackint#clan](https://chat.hackint.org/#/connect?join=clan) for real-time chat support.
```
```diff
@@ -302,7 +302,8 @@
     "test-install-machine-without-system",
     "-i", ssh_conn.ssh_key,
     "--option", "store", os.environ['CLAN_TEST_STORE'],
-    f"nonrootuser@localhost:{ssh_conn.host_port}"
+    "--target-host", f"nonrootuser@localhost:{ssh_conn.host_port}",
+    "--yes"
 ]
 
 result = subprocess.run(clan_cmd, capture_output=True, cwd=flake_dir)
@@ -326,7 +327,9 @@
     "test-install-machine-without-system",
     "-i", ssh_conn.ssh_key,
     "--option", "store", os.environ['CLAN_TEST_STORE'],
-    f"nonrootuser@localhost:{ssh_conn.host_port}"
+    "--target-host",
+    f"nonrootuser@localhost:{ssh_conn.host_port}",
+    "--yes"
 ]
 
 result = subprocess.run(clan_cmd, capture_output=True, cwd=flake_dir)
```
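The hunks above make the target host an explicit `--target-host` flag and add `--yes` to the argv list handed to `subprocess.run`. A minimal sketch of the same pattern, where the program name and `build_install_cmd` helper are illustrative placeholders, not the actual CLI:

```python
import subprocess  # the test suite executes the list via subprocess.run

def build_install_cmd(machine: str, ssh_key: str, store: str, target: str) -> list[str]:
    """Assemble an argv list; each flag and its value are separate elements,
    so no shell quoting is needed when the list is passed to subprocess.run."""
    return [
        "clan-cli",  # placeholder program name
        machine,
        "-i", ssh_key,
        "--option", "store", store,
        "--target-host", target,  # explicit flag instead of a bare positional
        "--yes",                  # skip the interactive confirmation prompt
    ]

cmd = build_install_cmd(
    "test-install-machine-without-system",
    "/tmp/id_ed25519",
    "/tmp/store",
    "nonrootuser@localhost:2222",
)
print(cmd[-3:])
# subprocess.run(cmd, capture_output=True) would then execute it unchanged.
```

Keeping the command as a list (rather than one shell string) means hostnames or paths containing spaces never need escaping.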
```diff
@@ -1,6 +1,10 @@
-______________________________________________________________________
-
-description = "Set up dummy-module" categories = ["System"] features = \[
-"inventory" \]
-
-## [constraints] roles.admin.min = 1 roles.admin.max = 1
+---
+description = "Set up dummy-module"
+categories = ["System"]
+features = [ "inventory" ]
+
+[constraints]
+roles.admin.min = 1
+roles.admin.max = 1
+---
+
```
clanServices/certificates/README.md (new file, 32 lines)

This service sets up a certificate authority (CA) that can issue certificates to
other machines in your clan. For this the `ca` role is used.
It additionally provides a `default` role, that can be applied to all machines
in your clan and will make sure they trust your CA.

## Example Usage

The following configuration would add a CA for the top level domain `.foo`. If
the machine `server` now hosts a webservice at `https://something.foo`, it will
get a certificate from `ca` which is valid inside your clan. The machine
`client` will trust this certificate if it makes a request to
`https://something.foo`.

This clan service can be combined with the `coredns` service for easy to deploy,
SSL secured clan-internal service hosting.

```nix
inventory = {
  machines.ca = { };
  machines.client = { };
  machines.server = { };

  instances."certificates" = {
    module.name = "certificates";
    module.input = "self";

    roles.ca.machines.ca.settings.tlds = [ "foo" ];
    roles.default.machines.client = { };
    roles.default.machines.server = { };
  };
};
```
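The CA described above is scoped to the configured TLDs through an X.509 DNS name constraint, so `something.foo` is issuable while names outside the clan are not. A small sketch of that matching rule (the `permitted` function is an illustration of DNS name-constraint semantics, not code from the service):

```python
def permitted(hostname: str, tlds: list[str]) -> bool:
    """True if hostname falls under one of the permitted DNS domains.
    A constraint of "foo" covers "foo" itself and every subdomain of it."""
    for tld in tlds:
        if hostname == tld or hostname.endswith("." + tld):
            return True
    return False

print(permitted("something.foo", ["foo"]))  # inside the clan namespace: True
print(permitted("example.com", ["foo"]))    # outside: False, would not be issued
```

A CA-signed leaf for `example.com` would fail path validation on clan machines even though they trust the root, because the intermediate's constraint is marked critical.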
clanServices/certificates/default.nix (new file, 245 lines)

```nix
{ ... }:
{
  _class = "clan.service";
  manifest.name = "certificates";
  manifest.description = "Sets up a certificates internal to your Clan";
  manifest.categories = [ "Network" ];
  manifest.readme = builtins.readFile ./README.md;

  roles.ca = {

    interface =
      { lib, ... }:
      {
        options.acmeEmail = lib.mkOption {
          type = lib.types.str;
          default = "none@none.tld";
          description = ''
            Email address for account creation and correspondence from the CA.
            It is recommended to use the same email for all certs to avoid account
            creation limits.
          '';
        };

        options.tlds = lib.mkOption {
          type = lib.types.listOf lib.types.str;
          description = "Top level domain for this CA. Certificates will be issued and trusted for *.<tld>";
        };

        options.expire = lib.mkOption {
          type = lib.types.nullOr lib.types.str;
          description = "When the certificate should expire.";
          default = "8760h";
          example = "8760h";
        };
      };

    perInstance =
      { settings, ... }:
      {
        nixosModule =
          {
            config,
            pkgs,
            lib,
            ...
          }:
          let
            domains = map (tld: "ca.${tld}") settings.tlds;
          in
          {
            security.acme.defaults.email = settings.acmeEmail;
            security.acme = {
              certs = builtins.listToAttrs (
                map (domain: {
                  name = domain;
                  value = {
                    server = "https://${domain}:1443/acme/acme/directory";
                  };
                }) domains
              );
            };

            networking.firewall.allowedTCPPorts = [
              80
              443
            ];

            services.nginx = {
              enable = true;
              recommendedProxySettings = true;
              virtualHosts = builtins.listToAttrs (
                map (domain: {
                  name = domain;
                  value = {
                    addSSL = true;
                    enableACME = true;
                    locations."/".proxyPass = "https://localhost:1443";
                    locations."= /ca.crt".alias =
                      config.clan.core.vars.generators.step-intermediate-cert.files."intermediate.crt".path;
                  };
                }) domains
              );
            };

            clan.core.vars.generators = {

              # Intermediate key generator
              "step-intermediate-key" = {
                files."intermediate.key" = {
                  secret = true;
                  deploy = true;
                  owner = "step-ca";
                  group = "step-ca";
                };
                runtimeInputs = [ pkgs.step-cli ];
                script = ''
                  step crypto keypair --kty EC --curve P-256 --no-password --insecure $out/intermediate.pub $out/intermediate.key
                '';
              };

              # Intermediate certificate generator
              "step-intermediate-cert" = {
                files."intermediate.crt".secret = false;
                dependencies = [
                  "step-ca"
                  "step-intermediate-key"
                ];
                runtimeInputs = [ pkgs.step-cli ];
                script = ''
                  # Create intermediate certificate
                  step certificate create \
                    --ca $in/step-ca/ca.crt \
                    --ca-key $in/step-ca/ca.key \
                    --ca-password-file /dev/null \
                    --key $in/step-intermediate-key/intermediate.key \
                    --template ${pkgs.writeText "intermediate.tmpl" ''
                      {
                        "subject": {{ toJson .Subject }},
                        "keyUsage": ["certSign", "crlSign"],
                        "basicConstraints": {
                          "isCA": true,
                          "maxPathLen": 0
                        },
                        "nameConstraints": {
                          "critical": true,
                          "permittedDNSDomains": [${
                            (lib.strings.concatStringsSep "," (map (tld: ''"${tld}"'') settings.tlds))
                          }]
                        }
                      }
                    ''} ${lib.optionalString (settings.expire != null) "--not-after ${settings.expire}"} \
                    --not-before=-12h \
                    --no-password --insecure \
                    "Clan Intermediate CA" \
                    $out/intermediate.crt
                '';
              };
            };

            services.step-ca = {
              enable = true;
              intermediatePasswordFile = "/dev/null";
              address = "0.0.0.0";
              port = 1443;
              settings = {
                root = config.clan.core.vars.generators.step-ca.files."ca.crt".path;
                crt = config.clan.core.vars.generators.step-intermediate-cert.files."intermediate.crt".path;
                key = config.clan.core.vars.generators.step-intermediate-key.files."intermediate.key".path;
                dnsNames = domains;
                logger.format = "text";
                db = {
                  type = "badger";
                  dataSource = "/var/lib/step-ca/db";
                };
                authority = {
                  provisioners = [
                    {
                      type = "ACME";
                      name = "acme";
                      forceCN = true;
                    }
                  ];
                  claims = {
                    maxTLSCertDuration = "2160h";
                    defaultTLSCertDuration = "2160h";
                  };
                  backdate = "1m0s";
                };
                tls = {
                  cipherSuites = [
                    "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"
                    "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
                  ];
                  minVersion = 1.2;
                  maxVersion = 1.3;
                  renegotiation = false;
                };
              };
            };
          };
      };
  };

  # Empty role, so we can add non-ca machines to the instance to trust the CA
  roles.default = {
    interface =
      { lib, ... }:
      {
        options.acmeEmail = lib.mkOption {
          type = lib.types.str;
          default = "none@none.tld";
          description = ''
            Email address for account creation and correspondence from the CA.
            It is recommended to use the same email for all certs to avoid account
            creation limits.
          '';
        };
      };

    perInstance =
      { settings, ... }:
      {
        nixosModule.security.acme.defaults.email = settings.acmeEmail;
      };
  };

  # All machines (independent of role) will trust the CA
  perMachine.nixosModule =
    { pkgs, config, ... }:
    {
      # Root CA generator
      clan.core.vars.generators = {
        "step-ca" = {
          share = true;
          files."ca.key" = {
            secret = true;
            deploy = false;
          };
          files."ca.crt".secret = false;
          runtimeInputs = [ pkgs.step-cli ];
          script = ''
            step certificate create --template ${pkgs.writeText "root.tmpl" ''
              {
                "subject": {{ toJson .Subject }},
                "issuer": {{ toJson .Subject }},
                "keyUsage": ["certSign", "crlSign"],
                "basicConstraints": {
                  "isCA": true,
                  "maxPathLen": 1
                }
              }
            ''} "Clan Root CA" $out/ca.crt $out/ca.key \
              --kty EC --curve P-256 \
              --not-after=8760h \
              --not-before=-12h \
              --no-password --insecure
          '';
        };
      };
      security.pki.certificateFiles = [ config.clan.core.vars.generators."step-ca".files."ca.crt".path ];
      environment.systemPackages = [ pkgs.openssl ];
      security.acme.acceptTerms = true;
    };
}
```
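The intermediate-certificate template above splices the TLD list into its JSON `nameConstraints` via `concatStringsSep`. The equivalent string assembly in Python, shown only to make the generated fragment concrete (function name and shape are illustrative):

```python
import json

def render_name_constraints(tlds: list[str]) -> str:
    """Mirror the Nix splice: join quoted TLDs with commas and embed them
    in the nameConstraints fragment of the step template."""
    permitted = ",".join(f'"{tld}"' for tld in tlds)
    return '{"critical": true, "permittedDNSDomains": [' + permitted + "]}"

fragment = render_name_constraints(["foo", "bar"])
print(json.loads(fragment)["permittedDNSDomains"])  # valid JSON round-trips
```

Because each TLD is individually quoted before joining, the result stays valid JSON for any list length, including a single element.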
clanServices/certificates/flake-module.nix (new file, 21 lines)

```nix
{
  self,
  lib,
  ...
}:
let
  module = lib.modules.importApply ./default.nix {
    inherit (self) packages;
  };
in
{
  clan.modules.certificates = module;
  perSystem =
    { ... }:
    {
      clan.nixosTests.certificates = {
        imports = [ ./tests/vm/default.nix ];
        clan.modules.certificates = module;
      };
    };
}
```
clanServices/certificates/tests/vm/default.nix (new file, 84 lines)

```nix
{
  name = "certificates";

  clan = {
    directory = ./.;
    inventory = {

      machines.ca = { }; # 192.168.1.1
      machines.client = { }; # 192.168.1.2
      machines.server = { }; # 192.168.1.3

      instances."certificates" = {
        module.name = "certificates";
        module.input = "self";

        roles.ca.machines.ca.settings.tlds = [ "foo" ];
        roles.default.machines.client = { };
        roles.default.machines.server = { };
      };
    };
  };

  nodes =
    let
      hostConfig = ''
        192.168.1.1 ca.foo
        192.168.1.3 test.foo
      '';
    in
    {

      client.networking.extraHosts = hostConfig;
      ca.networking.extraHosts = hostConfig;

      server = {

        networking.extraHosts = hostConfig;

        # TODO: Could this be set automatically?
        # I would like to get this information from the coredns module, but we
        # cannot model dependencies yet
        security.acme.certs."test.foo".server = "https://ca.foo/acme/acme/directory";

        # Host a simple service on 'server', with SSL provided via our CA. 'client'
        # should be able to curl it via https and accept the certificates
        # presented
        networking.firewall.allowedTCPPorts = [
          80
          443
        ];

        services.nginx = {
          enable = true;
          virtualHosts."test.foo" = {
            enableACME = true;
            forceSSL = true;
            locations."/" = {
              return = "200 'test server response'";
              extraConfig = "add_header Content-Type text/plain;";
            };
          };
        };
      };
    };

  testScript = ''
    start_all()

    import time

    time.sleep(3)
    ca.succeed("systemctl restart acme-order-renew-ca.foo.service ")

    time.sleep(3)
    server.succeed("systemctl restart acme-test.foo.service")

    # It takes a while for the correct certs to appear (before that self-signed
    # are presented by nginx) so we wait for a bit.
    client.wait_until_succeeds("curl -v https://test.foo")

    # Show certificate information for debugging
    client.succeed("openssl s_client -connect test.foo:443 -servername test.foo </dev/null 2>/dev/null | openssl x509 -text -noout 1>&2")
  '';
}
```
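The test script leans on `wait_until_succeeds` because nginx serves a self-signed certificate until the ACME order completes. A generic retry helper with the same shape (a sketch under assumed semantics; the NixOS test driver's real implementation polls a shell command on the VM rather than a Python callable):

```python
import time

def wait_until_succeeds(check, timeout: float = 30.0, interval: float = 0.1) -> None:
    """Poll `check` until it stops raising, or re-raise after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            check()
            return
        except Exception:
            if time.monotonic() >= deadline:
                raise
            time.sleep(interval)

attempts = {"n": 0}

def flaky():
    # Simulates the ACME cert not being in place for the first two probes.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("cert not ready yet")

wait_until_succeeds(flaky)
print(attempts["n"])  # succeeded on the third attempt
```

Polling with a bounded deadline avoids both the flakiness of a fixed `time.sleep` and an unbounded hang if the certificate never appears.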
clanServices/certificates/tests/vm/sops/machines/ca/key.json (new executable file, 6 lines)

```json
[
  {
    "publickey": "age1yd2cden7jav8x4nzx2fwze2fsa5j0qm2m3t7zum765z3u4gj433q7dqj43",
    "type": "age"
  }
]
```

clanServices/certificates/tests/vm/sops/machines/client/key.json (new executable file, 6 lines)

```json
[
  {
    "publickey": "age1js225d8jc507sgcg0fdfv2x3xv3asm4ds5c6s4hp37nq8spxu95sc5x3ce",
    "type": "age"
  }
]
```

clanServices/certificates/tests/vm/sops/machines/server/key.json (new executable file, 6 lines)

```json
[
  {
    "publickey": "age1nwuh8lc604mnz5r8ku8zswyswnwv02excw237c0cmtlejp7xfp8sdrcwfa",
    "type": "age"
  }
]
```
||||||
@@ -0,0 +1,15 @@
|
|||||||
|
{
|
||||||
|
"data": "ENC[AES256_GCM,data:6+XilULKRuWtAZ6B8Lj9UqCfi1T6dmqrDqBNXqS4SvBwM1bIWiL6juaT1Q7ByOexzID7tY740gmQBqTey54uLydh8mW0m4ZtUqw=,iv:9kscsrMPBGkutTnxrc5nrc7tQXpzLxw+929pUDKqTu0=,tag:753uIjm8ZRs0xsjiejEY8g==,type:str]",
|
||||||
|
"sops": {
|
||||||
|
"age": [
|
||||||
|
{
|
||||||
|
"recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
|
||||||
|
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSA1d3kycldZRXhmR0FqTXJp\nWWU0MDBYNmxxbFE5M2xKYm5KWnQ0MXBHNEM4CjN4RFFVcFlkd3pjTFVDQ3Vackdj\nVTVhMWoxdFpsWHp5S1p4L05kYk5LUkkKLS0tIENtZFZZTjY2amFVQmZLZFplQzBC\nZm1vWFI4MXR1ZHIxTTQ5VXdSYUhvOTQKte0bKjXQ0xA8FrpuChjDUvjVqp97D8kT\n3tVh6scdjxW48VSBZP1GRmqcMqCdj75GvJTbWeNEV4PDBW7GI0UW+Q==\n-----END AGE ENCRYPTED FILE-----\n"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"lastmodified": "2025-09-02T08:42:39Z",
|
||||||
|
"mac": "ENC[AES256_GCM,data:AftMorrH7qX5ctVu5evYHn5h9pC4Mmm2VYaAV8Hy0PKTc777jNsL6DrxFVV3NVqtecpwrzZFWKgzukcdcRJe4veVeBrusmoZYtifH0AWZTEVpVlr2UXYYxCDmNZt1WHfVUo40bT//X6QM0ye6a/2Y1jYPbMbryQNcGmnpk9PDvU=,iv:5nk+d8hzA05LQp7ZHRbIgiENg2Ha6J6YzyducM6zcNU=,tag:dy1hqWVzMu/+fSK57h9ZCA==,type:str]",
|
||||||
|
"unencrypted_suffix": "_unencrypted",
|
||||||
|
"version": "3.10.2"
|
||||||
|
}
|
||||||
|
}
|
||||||
@@ -0,0 +1 @@
|
|||||||
|
../../../users/admin
|
||||||
@@ -0,0 +1,15 @@
|
|||||||
|
{
|
||||||
|
"data": "ENC[AES256_GCM,data:jdTuGQUYvT1yXei1RHKsOCsABmMlkcLuziHDVhA7NequZeNu0fSbrJTXQDCHsDGhlYRcjU5EsEDT750xdleXuD3Gs9zWvPVobI4=,iv:YVow3K1j6fzRF9bRfIEpuOkO/nRpku/UQxWNGC+UJQQ=,tag:cNLM5R7uu6QpwPB9K6MYzg==,type:str]",
|
||||||
|
"sops": {
|
||||||
|
"age": [
|
||||||
|
{
|
||||||
|
"recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
|
||||||
|
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBvOVF2WXRSL0NpQzFZR01I\nNU85TGcyQmVDazN1dmpuRFVTZEg5NDRKTGhrCk1IVjFSU1V6WHBVRnFWcHkyVERr\nTjFKbW1mQ2FWOWhjN2VPamMxVEQ5VkkKLS0tIENVUGlhanhuWGtDKzBzRmk2dE4v\nMXZBRXNMa3IrOTZTNHRUWVE3UXEwSWMK2cBLoL/H/Vxd/klVrqVLdX9Mww5j7gw/\nEWc5/hN+km6XoW+DiJxVG4qaJ7qqld6u5ZnKgJT+2h9CfjA04I2akg==\n-----END AGE ENCRYPTED FILE-----\n"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"lastmodified": "2025-09-02T08:42:51Z",
|
||||||
|
"mac": "ENC[AES256_GCM,data:zOBQVM2Ydu4v0+Fw3p3cEU+5+7eKaadV0tKro1JVOxclG1Vs6Myq57nw2eWf5JxIl0ulL+FavPKY26qOQ3aqcGOT3PMRlCda9z+0oSn9Im9bE/DzAGmoH/bp76kFkgTTOCZTMUoqJ+UJqv0qy1BH/92sSSKmYshEX6d1vr5ISrw=,iv:i9ZW4sLxOCan4UokHlySVr1CW39nCTusG4DmEPj/gIw=,tag:iZBDPHDkE3Vt5mFcFu1TPQ==,type:str]",
|
||||||
|
"unencrypted_suffix": "_unencrypted",
|
||||||
|
"version": "3.10.2"
|
||||||
|
}
|
||||||
|
}
|
||||||
@@ -0,0 +1 @@
|
|||||||
|
../../../users/admin
|
||||||
@@ -0,0 +1,15 @@
{
  "data": "ENC[AES256_GCM,data:5CJuHcxJMXZJ8GqAeG3BrbWtT1kade4kxgJsn1cRpmr1UgN0ZVYnluPEiBscClNSOzcc6vcrBpfTI3dj1tASKTLP58M+GDBFQDo=,iv:gsK7XqBGkYCoqAvyFlIXuJ27PKSbTmy7f6cgTmT2gow=,tag:qG5KejkBvy9ytfhGXa/Mnw==,type:str]",
  "sops": {
    "age": [
      {
        "recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
        "enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBxbzVqYkplTzJKN1pwS3VM\naFFIK2VsR3lYUVExYW9ieERBL0tlcFZtVzJRCkpiLzdmWmFlOUZ5QUJ4WkhXZ2tQ\nZm92YXBCV0RpYnIydUdEVTRiamI4bjAKLS0tIG93a2htS1hFcjBOeVFnNCtQTHVr\na2FPYjVGbWtORjJVWXE5bndPU1RWcXMKikMEB7X+kb7OtiyqXn3HRpLYkCdoayDh\n7cjGnplk17q25/lRNHM4JVS5isFfuftCl01enESqkvgq+cwuFwa9DQ==\n-----END AGE ENCRYPTED FILE-----\n"
      }
    ],
    "lastmodified": "2025-09-02T08:42:59Z",
    "mac": "ENC[AES256_GCM,data:xybV2D0xukZnH2OwRpIugPnS7LN9AbgGKwFioPJc1FQWx9TxMUVDwgMN6V5WrhWkXgF2zP4krtDYpEz4Vq+LbOjcnTUteuCc+7pMHubuRuip7j+M32MH1kuf4bVZuXbCfvm7brGxe83FzjoioLqzA8g/X6Q1q7/ErkNeFjluC3Q=,iv:QEW3EUKSRZY3fbXlP7z+SffWkQeXwMAa5K8RQW7NvPE=,tag:DhFxY7xr7H1Wbd527swD0Q==,type:str]",
    "unencrypted_suffix": "_unencrypted",
    "version": "3.10.2"
  }
}
@@ -0,0 +1 @@
../../../users/admin
@@ -0,0 +1,12 @@
-----BEGIN CERTIFICATE-----
MIIBsDCCAVegAwIBAgIQbT1Ivm+uwyf0HNkJfan2BTAKBggqhkjOPQQDAjAXMRUw
EwYDVQQDEwxDbGFuIFJvb3QgQ0EwHhcNMjUwOTAxMjA0MzAzWhcNMjYwOTAyMDg0
MzAzWjAfMR0wGwYDVQQDExRDbGFuIEludGVybWVkaWF0ZSBDQTBZMBMGByqGSM49
AgEGCCqGSM49AwEHA0IABDXCNrUIotju9P1U6JxLV43sOxLlRphQJS4dM+lvjTZc
aQ+HwQg0AHVlQNRwS3JqKrJJtJVyKbZklh6eFaDPoj6jfTB7MA4GA1UdDwEB/wQE
AwIBBjASBgNVHRMBAf8ECDAGAQH/AgEAMB0GA1UdDgQWBBRKHaccHgP2ccSWVBWN
zGoDdTg7aTAfBgNVHSMEGDAWgBSfsnz4phMJx9su/kgeF/FbZQCBgzAVBgNVHR4B
Af8ECzAJoAcwBYIDZm9vMAoGCCqGSM49BAMCA0cAMEQCICiUDk1zGNzpS/iVKLfW
zUGaCagpn2mCx4xAXQM9UranAiAn68nVYGWjkzhU31wyCAupxOjw7Bt96XXqIAz9
hLLtMA==
-----END CERTIFICATE-----
@@ -0,0 +1 @@
../../../../../../sops/machines/ca
@@ -0,0 +1,19 @@
{
  "data": "ENC[AES256_GCM,data:Auonh9fa7jSkld1Zyxw74x5ydj6Xc+0SOgiqumVETNCfner9K96Rmv1PkREuHNGWPsnzyEM3pRT8ijvu3QoKvy9QPCCewyT07Wqe4G74+bk1iMeAHsV3To6kHs6M8OISvE+CmG0+hlLmdfRSabTzyWPLHbOjvFTEEuA5G7xiryacSYOE++eeEHdn+oUDh/IMTcfLjCGMjsXFikx1Hb+ofeRTlCg47+0w4MXVvQkOzQB5V2C694jZXvZ19jd/ioqr8YASz2xatGvqwW6cpZxqOWyZJ0UAj/6yFk6tZWifqVB3wgU=,iv:ITFCrDkeWl4GWCebVq15ei9QmkOLDwUIYojKZ2TU6JU=,tag:8k4iYbCIusUykY79H86WUQ==,type:str]",
  "sops": {
    "age": [
      {
        "recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
        "enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBsT25UbjJTQ2tzbnQyUm9p\neWx1UlZIeVpocnBqUCt0YnFlN2FOU25Lb0hNCmdXUUsyalRTbHRRQ0NLSGc1YllV\nUXRwaENhaXU1WmdnVDE0UWprUUUyeDAKLS0tIHV3dHU3aG5JclM0V3FadzN0SU14\ndFptbEJUNXQ4QVlqbkJ1TjAvdDQwSGsKcKPWUjhK7wzIpdIdksMShF2fpLdDTUBS\nZiU7P1T+3psxad9qhapvU0JrAY+9veFaYVEHha2aN/XKs8HqUcTp3A==\n-----END AGE ENCRYPTED FILE-----\n"
      },
      {
        "recipient": "age1yd2cden7jav8x4nzx2fwze2fsa5j0qm2m3t7zum765z3u4gj433q7dqj43",
        "enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBjZFVteVZwVGVmRE9NT3hG\nNGMyS3FSaXluM1FpeUp6SDVMUEpwYzg5SmdvCkRPU0QyU1JicGNkdlMyQWVkT0k3\nL2YrbDhWeGk4WFhxcUFmTmhZQ0pEQncKLS0tIG85Ui9rKzBJQ2VkMFBUQTMvSTlu\nbm8rZ09Wa24rQkNvTTNtYTZBN3MrZlkK7cjNhlUKZdOrRq/nKUsbUQgNTzX8jO+0\nzADpz6WCMvsJ15xazc10BGh03OtdMWl5tcoWMaZ71HWtI9Gip5DH0w==\n-----END AGE ENCRYPTED FILE-----\n"
      }
    ],
    "lastmodified": "2025-09-02T08:42:42Z",
    "mac": "ENC[AES256_GCM,data:9xlO5Yis8DG/y8GjvP63NltD4xEL7zqdHL2cQE8gAoh/ZamAmK5ZL0ld80mB3eIYEPKZYvmUYI4Lkrge2ZdqyDoubrW+eJ3dxn9+StxA9FzXYwUE0t+bbsNJfOOp/kDojf060qLGsu0kAGKd2ca4WiDccR0Cieky335C7Zzhi/Q=,iv:bWQ4wr0CJHSN+6ipUbkYTDWZJyFQjDKszfpVX9EEUsY=,tag:kADIFgJBEGCvr5fPbbdEDA==,type:str]",
    "unencrypted_suffix": "_unencrypted",
    "version": "3.10.2"
  }
}
@@ -0,0 +1 @@
../../../../../../sops/users/admin
@@ -0,0 +1 @@
25.11
@@ -0,0 +1 @@
25.11
@@ -0,0 +1,10 @@
-----BEGIN CERTIFICATE-----
MIIBcTCCARigAwIBAgIRAIix99+AE7Y+uyiLGaRHEhUwCgYIKoZIzj0EAwIwFzEV
MBMGA1UEAxMMQ2xhbiBSb290IENBMB4XDTI1MDkwMTIwNDI1N1oXDTI2MDkwMjA4
NDI1N1owFzEVMBMGA1UEAxMMQ2xhbiBSb290IENBMFkwEwYHKoZIzj0CAQYIKoZI
zj0DAQcDQgAEk7nn9kzxI+xkRmNMlxD+7T78UqV3aqus0foJh6uu1CHC+XaebMcw
JN95nAe3oYA3yZG6Mnq9nCxsYha4EhzGYqNFMEMwDgYDVR0PAQH/BAQDAgEGMBIG
A1UdEwEB/wQIMAYBAf8CAQEwHQYDVR0OBBYEFJ+yfPimEwnH2y7+SB4X8VtlAIGD
MAoGCCqGSM49BAMCA0cAMEQCIBId/CcbT5MPFL90xa+XQz+gVTdRwsu6Bg7ehMso
Bj0oAiBjSlttd5yeuZGXBm+O0Gl+WdKV60QlrWutNewXFS4UpQ==
-----END CERTIFICATE-----
@@ -0,0 +1,15 @@
{
  "data": "ENC[AES256_GCM,data:PnEXteU3I7U0OKgE+oR3xjHdLWYTpJjM/jlzxtGU0uP2pUBuQv3LxtEz+cP0ZsafHLNq2iNJ7xpUEE0g4d3M296S56oSocK3fREWBiJFiaC7SAEUiil1l3UCwHn7LzmdEmn8Kq7T+FK89wwqtVWIASLo2gZC/yHE5eEanEATTchGLSNiHJRzZ8n0Ekm8EFUA6czOqA5nPQHaSmeLzu1g80lSSi1ICly6dJksa6DVucwOyVFYFEeq8Dfyc1eyP8L1ee0D7QFYBMduYOXTKPtNnyDmdaQMj7cMMvE7fn04idIiAqw=,iv:nvLmAfFk2GXnnUy+Afr648R60Ou13eu9UKykkiA8Y+4=,tag:lTTAxfG0EDCU6u7xlW6xSQ==,type:str]",
  "sops": {
    "age": [
      {
        "recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
        "enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBEMjNWUm5NbktQeTRWRjJE\nWWFZc2Rsa3I5aitPSno1WnhORENNcng5OHprCjNUQVhBVHFBcWFjaW5UdmxKTnZw\nQlI4MDk5Wkp0RElCeWgzZ2dFQkF2dkkKLS0tIDVreTkydnJ0RDdHSHlQeVV6bGlP\nTmpJOVBSb2dkVS9TZG5SRmFjdnQ1b3cKQ5XvwH1jD4XPVs5RzOotBDq8kiE6S5k2\nDBv6ugjsM5qV7/oGP9H69aSB4jKPZjEn3yiNw++Oorc8uXd5kSGh7w==\n-----END AGE ENCRYPTED FILE-----\n"
      }
    ],
    "lastmodified": "2025-09-02T08:43:00Z",
    "mac": "ENC[AES256_GCM,data:3jFf66UyZUWEtPdPu809LCS3K/Hc6zbnluystl3eXS+KGI+dCoYmN9hQruRNBRxf6jli2RIlArmmEPBDQVt67gG/qugTdT12krWnYAZ78iocmOnkf44fWxn/pqVnn4JYpjEYRgy8ueGDnUkwvpGWVZpcXw5659YeDQuYOJ2mq0U=,iv:3k7fBPrABdLItQ2Z+Mx8Nx0eIEKo93zG/23K+Q5Hl3I=,tag:aehAObdx//DEjbKlOeM7iQ==,type:str]",
    "unencrypted_suffix": "_unencrypted",
    "version": "3.10.2"
  }
}
@@ -0,0 +1 @@
../../../../../sops/users/admin
68  clanServices/coredns/README.md  Normal file
@@ -0,0 +1,68 @@
This module enables hosting clan-internal services easily, which can be resolved
inside your VPN. This allows defining a custom top-level domain (e.g. `.clan`)
and exposing endpoints from a machine to others, which will be
accessible under `http://<service>.clan` in your browser.

The service consists of two roles:

- A `server` role: This is the DNS server that will be queried when trying to
  resolve clan-internal services. It defines the top-level domain.
- A `default` role: This does two things. First, it sets up the nameservers so
  that clan-internal queries are resolved via the `server` machine, while
  external queries are resolved as normal via DHCP. Second, it allows exposing
  services (see example below).

## Example Usage

Here the machine `dnsserver` is designated as the internal DNS server for the TLD
`.foo`. `server01` will host an application that shall be reachable at
`http://one.foo` and `server02` is going to be reachable at `http://two.foo`.
`client` is any other machine that is part of the clan but does not host any
services.

When `client` tries to resolve `http://one.foo`, the DNS query will be
routed to `dnsserver`, which will answer with `192.168.1.3`. If it tries to
resolve some external domain (e.g. `https://clan.lol`), the query will not be
routed to `dnsserver` but resolved as before, via the nameservers advertised by
DHCP.

```nix
inventory = {

  machines = {
    dnsserver = { }; # 192.168.1.2
    server01 = { }; # 192.168.1.3
    server02 = { }; # 192.168.1.4
    client = { }; # 192.168.1.5
  };

  instances = {
    coredns = {

      module.name = "@clan/coredns";
      module.input = "self";

      # Add the default role to all machines, including `client`
      roles.default.tags.all = { };

      # DNS server
      roles.server.machines."dnsserver".settings = {
        ip = "192.168.1.2";
        tld = "foo";
      };

      # First service
      roles.default.machines."server01".settings = {
        ip = "192.168.1.3";
        services = [ "one" ];
      };

      # Second service
      roles.default.machines."server02".settings = {
        ip = "192.168.1.4";
        services = [ "two" ];
      };
    };
  };
};
```
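The split-horizon behaviour described above can be sketched in plain Python (hypothetical data mirroring the example inventory; the real routing is performed by unbound and systemd-resolved, not by code like this):

```python
# Sketch of the split-horizon lookup: names under the clan TLD are answered
# from the internal zone, everything else falls through to upstream resolvers.
# ZONE mirrors the example inventory above and is illustrative only.

INTERNAL_TLD = "foo"
ZONE = {
    "one.foo": "192.168.1.3",  # server01
    "two.foo": "192.168.1.4",  # server02
}

def resolve(name: str) -> str:
    """Return the internal A record, or defer to the upstream resolver."""
    if name == INTERNAL_TLD or name.endswith("." + INTERNAL_TLD):
        return ZONE.get(name, "NXDOMAIN")
    return "upstream"  # resolved via the DHCP-provided nameservers

print(resolve("one.foo"))   # answered by the internal zone
print(resolve("clan.lol"))  # external domain, handled upstream
```

Queries for `one.foo` and `two.foo` hit the internal zone data, while `clan.lol` is marked for upstream resolution, matching the routing decision described in the paragraph above.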
157  clanServices/coredns/default.nix  Normal file
@@ -0,0 +1,157 @@
{ ... }:
{
  _class = "clan.service";
  manifest.name = "coredns";
  manifest.description = "Clan-internal DNS and service exposure";
  manifest.categories = [ "Network" ];
  manifest.readme = builtins.readFile ./README.md;

  roles.server = {

    interface =
      { lib, ... }:
      {
        options.tld = lib.mkOption {
          type = lib.types.str;
          default = "clan";
          description = ''
            Top-level domain for this instance. All services below this will be
            resolved internally.
          '';
        };

        options.ip = lib.mkOption {
          type = lib.types.str;
          # TODO: Set a default
          description = "IP for the DNS to listen on";
        };
      };

    perInstance =
      {
        roles,
        settings,
        ...
      }:
      {
        nixosModule =
          {
            lib,
            pkgs,
            ...
          }:
          {

            networking.firewall.allowedTCPPorts = [ 53 ];
            networking.firewall.allowedUDPPorts = [ 53 ];

            services.coredns =
              let

                # Get all service entries for one host
                hostServiceEntries =
                  host:
                  lib.strings.concatStringsSep "\n" (
                    map (
                      service: "${service} IN A ${roles.default.machines.${host}.settings.ip} ; ${host}"
                    ) roles.default.machines.${host}.settings.services
                  );

                zonefile = pkgs.writeTextFile {
                  name = "db.${settings.tld}";
                  text = ''
                    $TTL 3600
                    @ IN SOA ns.${settings.tld}. admin.${settings.tld}. 1 7200 3600 1209600 3600
                      IN NS ns.${settings.tld}.
                    ns IN A ${settings.ip} ; DNS server

                  ''
                  + (lib.strings.concatStringsSep "\n" (
                    map (host: hostServiceEntries host) (lib.attrNames roles.default.machines)
                  ));
                };

              in
              {
                enable = true;
                config = ''
                  . {
                    forward . 1.1.1.1
                    cache 30
                  }

                  ${settings.tld} {
                    file ${zonefile}
                  }
                '';
              };
          };
      };
  };

  roles.default = {
    interface =
      { lib, ... }:
      {
        options.services = lib.mkOption {
          type = lib.types.listOf lib.types.str;
          default = [ ];
          description = ''
            Service endpoints this host exposes (without TLD). Each entry will
            be resolved to <entry>.<tld> using the configured top-level domain.
          '';
        };

        options.ip = lib.mkOption {
          type = lib.types.str;
          # TODO: Set a default
          description = "IP on which the services will listen";
        };
      };

    perInstance =
      { roles, ... }:
      {
        nixosModule =
          { lib, ... }:
          {

            networking.nameservers = map (m: "127.0.0.1:5353#${roles.server.machines.${m}.settings.tld}") (
              lib.attrNames roles.server.machines
            );

            services.resolved.domains = map (m: "~${roles.server.machines.${m}.settings.tld}") (
              lib.attrNames roles.server.machines
            );

            services.unbound = {
              enable = true;
              settings = {
                server = {
                  port = 5353;
                  verbosity = 2;
                  interface = [ "127.0.0.1" ];
                  access-control = [ "127.0.0.0/8 allow" ];
                  do-not-query-localhost = "no";
                  domain-insecure = map (m: "${roles.server.machines.${m}.settings.tld}.") (
                    lib.attrNames roles.server.machines
                  );
                };

                # Default: forward everything else to DHCP-provided resolvers
                forward-zone = [
                  {
                    name = ".";
                    forward-addr = "127.0.0.53@53"; # Forward to systemd-resolved
                  }
                ];
                stub-zone = map (m: {
                  name = "${roles.server.machines.${m}.settings.tld}.";
                  stub-addr = "${roles.server.machines.${m}.settings.ip}";
                }) (lib.attrNames roles.server.machines);
              };
            };
          };
      };
  };
}
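The zone-file assembly in `default.nix` (the SOA/NS header plus one A record per exposed service, as built by `hostServiceEntries`) can be sketched in Python; the machine data below is hypothetical and mirrors the README example:

```python
# Sketch of the zone file that default.nix assembles with pkgs.writeTextFile.
# `machines` stands in for roles.default.machines.<host>.settings (hypothetical data).

tld = "foo"
dns_ip = "192.168.1.2"
machines = {
    "server01": {"ip": "192.168.1.3", "services": ["one"]},
    "server02": {"ip": "192.168.1.4", "services": ["two"]},
}

def host_service_entries(host: str, settings: dict) -> list[str]:
    # One A record per exposed service, pointing at the host's IP,
    # with the host name kept as a trailing comment.
    return [f"{svc} IN A {settings['ip']} ; {host}" for svc in settings["services"]]

header = [
    "$TTL 3600",
    f"@ IN SOA ns.{tld}. admin.{tld}. 1 7200 3600 1209600 3600",
    f"  IN NS ns.{tld}.",
    f"ns IN A {dns_ip} ; DNS server",
]
records = [e for host, s in sorted(machines.items()) for e in host_service_entries(host, s)]
zonefile = "\n".join(header + records)
print(zonefile)
```

Running this prints a zone containing `one IN A 192.168.1.3 ; server01` and `two IN A 192.168.1.4 ; server02`, which is the shape of the file CoreDNS serves for the configured TLD.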
@@ -3,14 +3,16 @@ let
   module = lib.modules.importApply ./default.nix { };
 in
 {
-  clan.modules.state-version = module;
+  clan.modules = {
+    coredns = module;
+  };
   perSystem =
     { ... }:
     {
-      clan.nixosTests.state-version = {
+      clan.nixosTests.coredns = {
         imports = [ ./tests/vm/default.nix ];

-        clan.modules."@clan/state-version" = module;
+        clan.modules."@clan/coredns" = module;
       };
     };
 }
113  clanServices/coredns/tests/vm/default.nix  Normal file
@@ -0,0 +1,113 @@
{ ... }:
{
  name = "coredns";

  clan = {
    directory = ./.;
    test.useContainers = true;
    inventory = {

      machines = {
        dns = { }; # 192.168.1.2
        server01 = { }; # 192.168.1.3
        server02 = { }; # 192.168.1.4
        client = { }; # 192.168.1.1
      };

      instances = {
        coredns = {

          module.name = "@clan/coredns";
          module.input = "self";

          roles.default.tags.all = { };

          # First service
          roles.default.machines."server01".settings = {
            ip = "192.168.1.3";
            services = [ "one" ];
          };

          # Second service
          roles.default.machines."server02".settings = {
            ip = "192.168.1.4";
            services = [ "two" ];
          };

          # DNS server
          roles.server.machines."dns".settings = {
            ip = "192.168.1.2";
            tld = "foo";
          };
        };
      };
    };
  };

  nodes = {
    dns =
      { pkgs, ... }:
      {
        environment.systemPackages = [ pkgs.net-tools ];
      };

    client =
      { pkgs, ... }:
      {
        environment.systemPackages = [ pkgs.net-tools ];
      };

    server01 = {
      services.nginx = {
        enable = true;
        virtualHosts."one.foo" = {
          locations."/" = {
            return = "200 'test server response one'";
            extraConfig = "add_header Content-Type text/plain;";
          };
        };
      };
    };
    server02 = {
      services.nginx = {
        enable = true;
        virtualHosts."two.foo" = {
          locations."/" = {
            return = "200 'test server response two'";
            extraConfig = "add_header Content-Type text/plain;";
          };
        };
      };
    };
  };

  testScript = ''
    import json
    start_all()

    machines = [server01, server02, dns, client]

    for m in machines:
        m.systemctl("start network-online.target")

    for m in machines:
        m.wait_for_unit("network-online.target")

    # This should work, but is broken in tests I think? Instead we dig directly.

    # client.succeed("curl -k -v http://one.foo")
    # client.succeed("curl -k -v http://two.foo")

    answer = client.succeed("dig @192.168.1.2 one.foo")
    assert "192.168.1.3" in answer, "IP not found"

    answer = client.succeed("dig @192.168.1.2 two.foo")
    assert "192.168.1.4" in answer, "IP not found"
  '';
}
4  clanServices/coredns/tests/vm/sops/users/admin/key.json  Normal file
@@ -0,0 +1,4 @@
{
  "publickey": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
  "type": "age"
}
@@ -1,18 +1,15 @@
-A Dynamic-DNS (DDNS) service continuously keeps one or more DNS records in sync
-with the current public IP address of your machine.\
-In *clan* this service is backed by
-[qdm12/ddns-updater](https://github.com/qdm12/ddns-updater).
+A Dynamic-DNS (DDNS) service continuously keeps one or more DNS records in sync with the current public IP address of your machine.
+In *clan* this service is backed by [qdm12/ddns-updater](https://github.com/qdm12/ddns-updater).

-> Info\
-> ddns-updater itself is **heavily opinionated and version-specific**. Whenever
-> you need the exhaustive list of flags or provider-specific fields refer to its
-> *versioned* documentation – **not** the GitHub README
+> Info
+> ddns-updater itself is **heavily opinionated and version-specific**. Whenever you need the exhaustive list of flags or
+> provider-specific fields refer to its *versioned* documentation – **not** the GitHub README

-______________________________________________________________________
+---

 # 1. Configuration model

-Internally ddns-updater consumes a single file named `config.json`.\
+Internally ddns-updater consumes a single file named `config.json`.
 A minimal configuration for the registrar *Namecheap* looks like:

 ```json
|
|||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
When you write a `clan.nix` the **common** fields (`provider`, `domain`,
|
When you write a `clan.nix` the **common** fields (`provider`, `domain`, `period`, …) are already exposed as typed
|
||||||
`period`, …) are already exposed as typed *Nix options*.\
|
*Nix options*.
|
||||||
Registrar-specific or very new keys can be passed through an open attribute set
|
Registrar-specific or very new keys can be passed through an open attribute set called **extraSettings**.
|
||||||
called **extraSettings**.
|
|
||||||
|
|
||||||
______________________________________________________________________
|
---
|
||||||
|
|
||||||
# 2. Full Porkbun example
|
# 2. Full Porkbun example
|
||||||
|
|
||||||
Manage three records – `@`, `home` and `test` – of the domain `jon.blog` and
|
Manage three records – `@`, `home` and `test` – of the domain
|
||||||
refresh them every 15 minutes:
|
`jon.blog` and refresh them every 15 minutes:
|
||||||
|
|
||||||
```nix title="clan.nix" hl_lines="10-11"
|
```nix title="clan.nix" hl_lines="10-11"
|
||||||
inventory.instances = {
|
inventory.instances = {
|
||||||
@@ -84,8 +80,7 @@ inventory.instances = {
 };
 ```

-1. `secret_field_name` tells the *vars-generator* to store the entered secret
-   under the specified JSON field name in the configuration.
+1. `secret_field_name` tells the *vars-generator* to store the entered secret under the specified JSON field name in the configuration.
 2. ddns-updater allows multiple hosts by separating them with a comma.
-3. The `api_key` above is *public*; the corresponding **private key** is
-   retrieved through `secret_field_name`.
+3. The `api_key` above is *public*; the corresponding **private key** is retrieved through `secret_field_name`.
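The way the typed common options and the open `extraSettings` attribute set end up in one `config.json` entry can be pictured as a simple dictionary merge (field names here are illustrative, not the module's exact schema):

```python
import json

# Sketch: common typed options merged with the open extraSettings pass-through
# into one ddns-updater settings entry. All field names below are hypothetical
# examples, not the module's exact rendering logic.

common = {"provider": "porkbun", "domain": "jon.blog", "ip_version": "ipv4"}
extra_settings = {"api_key": "pk1_example"}  # registrar-specific pass-through

entry = {**common, **extra_settings}  # pass-through keys win on collisions
config = {"settings": [entry]}
print(json.dumps(config, indent=2))
```

This is only a mental model: unknown registrar keys flow through untouched next to the validated common fields.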
@@ -1,5 +1,4 @@
-This service will automatically set the emergency access password if your system
-fails to boot.
+This service will automatically set the emergency access password if your system fails to boot.

 ## Usage

@@ -1,6 +1,5 @@
-The importer module allows users to configure importing modules in a flexible
-and structured way. It exposes the `extraModules` functionality of the
-inventory, without any added configuration.
+The importer module allows users to configure importing modules in a flexible and structured way.
+It exposes the `extraModules` functionality of the inventory, without any added configuration.

 ## Usage

@@ -22,6 +21,6 @@ inventory.instances = {
 };
 ```

-This will import the module `modules/base.nix` to all machines that have the
-`all` tag, which by default is every machine managed by the clan. And also
-import for all machines tagged with `zone1` the module at `modules/zone1.nix`.
+This will import the module `modules/base.nix` to all machines that have the `all` tag,
+which by default is every machine managed by the clan.
+And also import for all machines tagged with `zone1` the module at `modules/zone1.nix`.
@@ -32,5 +32,4 @@ The service provides these commands:

 - `localbackup-create`: Create a new backup
 - `localbackup-list`: List available backups
-- `localbackup-restore`: Restore from backup (requires NAME and FOLDERS
-  environment variables)
+- `localbackup-restore`: Restore from backup (requires NAME and FOLDERS environment variables)
@@ -14,3 +14,4 @@ inventory.instances = {
 This service will eventually set up a monitoring stack for your clan. For now,
 only a telegraf role is implemented, which exposes the currently deployed
 version of your configuration, so it can be used to check for required updates.
+
@@ -1,13 +1,8 @@
-The `sshd` Clan service manages SSH to make it easy to securely access your
-machines over the internet. The service uses `vars` to store the SSH host keys
-for each machine to ensure they remain stable across deployments.
+The `sshd` Clan service manages SSH to make it easy to securely access your machines over the internet. The service uses `vars` to store the SSH host keys for each machine to ensure they remain stable across deployments.

-`sshd` also generates SSH certificates for both servers and clients allowing for
-certificate-based authentication for SSH.
+`sshd` also generates SSH certificates for both servers and clients allowing for certificate-based authentication for SSH.

-The service also disables password-based authentication over SSH, to access your
-machines you'll need to use public key authentication or certificate-based
-authentication.
+The service also disables password-based authentication over SSH, to access your machines you'll need to use public key authentication or certificate-based authentication.

 ## Usage

@@ -1,37 +0,0 @@
-This service generates the `system.stateVersion` of the nixos installation
-automatically.
-
-Possible values:
-[system.stateVersion](https://search.nixos.org/options?channel=unstable&show=system.stateVersion&from=0&size=50&sort=relevance&type=packages&query=stateVersion)
-
-## Usage
-
-The following configuration will set `stateVersion` for all machines:
-
-```
-inventory.instances = {
-  state-version = {
-    module = {
-      name = "state-version";
-      input = "clan";
-    };
-    roles.default.tags.all = { };
-  };
-```
-
-## Migration
-
-If you are already setting `system.stateVersion`, either let the automatic
-generation happen, or trigger the generation manually for the machine. The
-service will take the specified version, if one is already supplied through the
-config.
-
-To manually generate the version for a specified machine run:
-
-```
-clan vars generate [MACHINE]
-```
-
-If the setting was already set, you can then remove `system.stateVersion` from
-your machine configuration. For new machines, just import the service as shown
-above.
@@ -1,50 +0,0 @@
-{ ... }:
-{
-  _class = "clan.service";
-  manifest.name = "clan-core/state-version";
-  manifest.description = "Automatically generate the state version of the nixos installation.";
-  manifest.categories = [ "System" ];
-  manifest.readme = builtins.readFile ./README.md;
-
-  roles.default = {
-
-    perInstance =
-      { ... }:
-      {
-        nixosModule =
-          {
-            config,
-            lib,
-            ...
-          }:
-          let
-            var = config.clan.core.vars.generators.state-version.files.version or { };
-          in
-          {
-
-            warnings = [
-              ''
-                The clan.state-version service is deprecated and will be
-                removed on 2025-07-15 in favor of a nix option.
-
-                Please migrate your configuration to use `clan.core.settings.state-version.enable = true` instead.
-              ''
-            ];
-
-            system.stateVersion = lib.mkDefault (lib.removeSuffix "\n" var.value);
-
-            clan.core.vars.generators.state-version = {
-              files.version = {
-                secret = false;
-                value = lib.mkDefault config.system.nixos.release;
-              };
-              runtimeInputs = [ ];
-              script = ''
-                echo -n ${config.system.stateVersion} > "$out"/version
-              '';
-            };
-          };
-      };
-  };
-
-}
@@ -1,22 +0,0 @@
-{ lib, ... }:
-{
-  name = "service-state-version";
-
-  clan = {
-    directory = ./.;
-    inventory = {
-      machines.server = { };
-      instances.default = {
-        module.name = "@clan/state-version";
-        module.input = "self";
-        roles.default.machines."server" = { };
-      };
-    };
-  };
-
-  nodes.server = { };
-
-  testScript = lib.mkDefault ''
-    start_all()
-  '';
-}
@@ -15,7 +15,6 @@

 Now the folder `~/syncthing/documents` will be shared with all your machines.

 ## Documentation

-Extensive documentation is available on the
-[Syncthing](https://docs.syncthing.net/) website.
+Extensive documentation is available on the [Syncthing](https://docs.syncthing.net/) website.
@@ -46,8 +46,7 @@

 ## Migration from `root-password` module

-The deprecated `clan.root-password` module has been replaced by the `users`
-module. Here's how to migrate:
+The deprecated `clan.root-password` module has been replaced by the `users` module. Here's how to migrate:

 ### 1. Update your flake configuration

@@ -1,23 +1,17 @@
 # Wireguard VPN Service

-This service provides a Wireguard-based VPN mesh network with automatic IPv6
-address allocation and routing between clan machines.
+This service provides a Wireguard-based VPN mesh network with automatic IPv6 address allocation and routing between clan machines.

 ## Overview

-The wireguard service creates a secure mesh network between clan machines using
-two roles:
-
-- **Controllers**: Machines with public endpoints that act as connection points
-  and routers
+The wireguard service creates a secure mesh network between clan machines using two roles:
+- **Controllers**: Machines with public endpoints that act as connection points and routers
 - **Peers**: Machines that connect through controllers to access the network

 ## Requirements

-- Controllers must have a publicly accessible endpoint (domain name or static
-  IP)
-- Peers must be in networks where UDP traffic is not blocked (uses port 51820 by
-  default, configurable)
+- Controllers must have a publicly accessible endpoint (domain name or static IP)
+- Peers must be in networks where UDP traffic is not blocked (uses port 51820 by default, configurable)

 ## Features

@@ -30,33 +24,24 @@ two roles:
|
|||||||
## Network Architecture
|
## Network Architecture
|
||||||
|
|
||||||
### IPv6 Address Allocation
|
### IPv6 Address Allocation
|
||||||
|
- Base network: `/40` ULA prefix (deterministically generated from instance name)
|
||||||
- Base network: `/40` ULA prefix (deterministically generated from instance
|
|
||||||
name)
|
|
||||||
- Controllers: Each gets a `/56` subnet from the base `/40`
|
- Controllers: Each gets a `/56` subnet from the base `/40`
|
||||||
- Peers: Each gets a unique 64-bit host suffix that is used in ALL controller
|
- Peers: Each gets a unique 64-bit host suffix that is used in ALL controller subnets
|
||||||
subnets
|
|
||||||
|
|
||||||
### Addressing Design
|
### Addressing Design
|
||||||
|
|
||||||
- Each peer generates a unique host suffix (e.g., `:8750:a09b:0:1`)
|
- Each peer generates a unique host suffix (e.g., `:8750:a09b:0:1`)
|
||||||
- This suffix is appended to each controller's `/56` prefix to create unique
|
- This suffix is appended to each controller's `/56` prefix to create unique addresses
|
||||||
addresses
|
|
||||||
- Example: peer1 with suffix `:8750:a09b:0:1` gets:
|
- Example: peer1 with suffix `:8750:a09b:0:1` gets:
|
||||||
- `fd51:19c1:3b:f700:8750:a09b:0:1` in controller1's subnet
|
- `fd51:19c1:3b:f700:8750:a09b:0:1` in controller1's subnet
|
||||||
- `fd51:19c1:c1:aa00:8750:a09b:0:1` in controller2's subnet
|
- `fd51:19c1:c1:aa00:8750:a09b:0:1` in controller2's subnet
|
||||||
- Controllers allow each peer's `/96` subnet for routing flexibility
|
- Controllers allow each peer's `/96` subnet for routing flexibility
|
||||||
|
|
||||||
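The allocation scheme above can be sketched in Python. Hashing the instance, controller, and peer names with SHA-256 mirrors the service's deterministic generation, but the exact hash widths used here (32 bits for the base prefix, 16 bits for the controller ID) are illustrative assumptions, not the service's verbatim code:

```python
import hashlib
import ipaddress


def ula_prefix(instance_name: str) -> ipaddress.IPv6Network:
    # "fd" marks a ULA; hash bits fill the rest of the /40 (width assumed)
    h = hashlib.sha256(instance_name.encode()).hexdigest()
    prefix = f"fd{int(h[:8], 16):08x}"
    return ipaddress.IPv6Network(f"{prefix[:4]}:{prefix[4:8]}::/40")


def controller_subnet(base: ipaddress.IPv6Network, name: str) -> ipaddress.IPv6Network:
    # 16 hash bits (assumed) select this controller's /56 inside the /40
    controller_id = int(hashlib.sha256(name.encode()).hexdigest()[:4], 16)
    subnet_int = int(base.network_address) | (controller_id << (128 - 56))
    return ipaddress.IPv6Network((subnet_int, 56))


def peer_suffix(name: str) -> str:
    # 64 hash bits become the host suffix, identical in every controller subnet
    s = hashlib.sha256(name.encode()).hexdigest()[:16]
    return f"{s[0:4]}:{s[4:8]}:{s[8:12]}:{s[12:16]}"


base = ula_prefix("vpn")
subnet = controller_subnet(base, "controller1")
# Combine the controller's /56 prefix with the peer's 64-bit suffix
addr = ipaddress.IPv6Address(
    int(subnet.network_address) | int(peer_suffix("peer1").replace(":", ""), 16)
)
print(base, subnet, addr)
```

Because every input is hashed rather than counted, the same names always produce the same addresses, and a peer keeps one suffix across every controller's subnet.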
### Connectivity

- Peers use a single WireGuard interface with multiple IPs (one per controller subnet)
- Controllers connect to ALL other controllers and ALL peers on a single interface
- Controllers have IPv6 forwarding enabled to route traffic between peers
- All traffic between peers flows through controllers
- Symmetric routing is maintained as each peer has consistent IPs across all controllers

### Example Network Topology

@@ -146,14 +131,12 @@ graph TB

### Advanced Options

### Automatic Hostname Resolution

The wireguard service automatically adds entries to `/etc/hosts` for all machines in the network. Each machine is accessible via its hostname in the format `<machine-name>.<instance-name>`.

For example, with an instance named `vpn`:

- `server1.vpn` - resolves to server1's IPv6 address
- `laptop1.vpn` - resolves to laptop1's IPv6 address

@@ -170,19 +153,16 @@ ssh user@laptop1.vpn

## Troubleshooting

### Check Wireguard Status

```bash
sudo wg show
```

### Verify IP Addresses

```bash
ip addr show dev <instance-name>
```

### Check Routing

```bash
ip -6 route show dev <instance-name>
```

@@ -190,23 +170,19 @@ ip -6 route show dev <instance-name>

### Interface Fails to Start: "Address already in use"

If you see this error in your logs:

```
wireguard: Could not bring up interface, ignoring: Address already in use
```

This means the configured port (default: 51820) is already in use by another service or wireguard instance. Solutions:

1. **Check for conflicting wireguard instances:**

   ```bash
   sudo wg show
   sudo ss -ulnp | grep 51820
   ```

2. **Use a different port:**

   ```nix
   services.wireguard.myinstance = {
     roles.controller = {
@@ -216,13 +192,12 @@ service or wireguard instance. Solutions:
     };
   ```

3. **Ensure unique ports across multiple instances:**
   If you have multiple wireguard instances on the same machine, each must use a different port.
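For two instances on one host, that means giving each controller role its own listening port. The sketch below follows the shape of the truncated example above; the `port` attribute name and instance names are assumptions for illustration, so check the service's option documentation for the exact path:

```nix
# Hypothetical sketch: two instances, each bound to its own UDP port
services.wireguard.vpn-a = {
  roles.controller = {
    port = 51820; # the default
  };
};
services.wireguard.vpn-b = {
  roles.controller = {
    port = 51821; # any free UDP port, must differ per instance
  };
};
```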
### Key Management

Keys are automatically generated and stored in the clan vars system. To regenerate keys:

```bash
# Regenerate keys for a specific machine and instance
@@ -239,3 +214,4 @@ clan machines update <machine-name>

- Public keys are distributed through the clan vars system
- Controllers must have publicly accessible endpoints
- Firewall rules are automatically configured for the Wireguard ports
@@ -12,6 +12,11 @@ import ipaddress
import sys
from pathlib import Path

# Constants for argument count validation
MIN_ARGS_BASE = 4
MIN_ARGS_CONTROLLER = 5
MIN_ARGS_PEER = 5


def hash_string(s: str) -> str:
    """Generate SHA256 hash of string."""
@@ -39,8 +44,7 @@ def generate_ula_prefix(instance_name: str) -> ipaddress.IPv6Network:
    prefix = f"fd{prefix_bits:08x}"
    prefix_formatted = f"{prefix[:4]}:{prefix[4:8]}::/40"

    return ipaddress.IPv6Network(prefix_formatted)


def generate_controller_subnet(
@@ -60,9 +64,7 @@ def generate_controller_subnet(
    # The controller subnet is at base_prefix:controller_id::/56
    base_int = int(base_network.network_address)
    controller_subnet_int = base_int | (controller_id << (128 - 56))
    return ipaddress.IPv6Network((controller_subnet_int, 56))


def generate_peer_suffix(peer_name: str) -> str:
@@ -76,12 +78,11 @@ def generate_peer_suffix(peer_name: str) -> str:
    suffix_bits = h[:16]

    # Format as IPv6 suffix without leading colon
    return f"{suffix_bits[0:4]}:{suffix_bits[4:8]}:{suffix_bits[8:12]}:{suffix_bits[12:16]}"


def main() -> None:
    if len(sys.argv) < MIN_ARGS_BASE:
        print(
            "Usage: ipv6_allocator.py <output_dir> <instance_name> <controller|peer> <machine_name>",
        )
@@ -95,7 +96,7 @@ def main() -> None:
    base_network = generate_ula_prefix(instance_name)

    if node_type == "controller":
        if len(sys.argv) < MIN_ARGS_CONTROLLER:
            print("Controller name required")
            sys.exit(1)

@@ -111,7 +112,7 @@ def main() -> None:
        (output_dir / "prefix").write_text(prefix_str)

    elif node_type == "peer":
        if len(sys.argv) < MIN_ARGS_PEER:
            print("Peer name required")
            sys.exit(1)
33 clanServices/yggdrasil/README.md Normal file
@@ -0,0 +1,33 @@
This module sets up [yggdrasil](https://yggdrasil-network.github.io/) across your clan.

Yggdrasil is designed to be a future-proof and decentralised alternative to the structured routing protocols commonly used today on the internet. Inside your clan, it will allow you to reach all of your machines.

## Example Usage

While you can specify statically configured peers for each host, yggdrasil does auto-discovery of local peers.

```nix
inventory = {

  machines = {
    peer1 = { };
    peer2 = { };
  };

  instances = {
    yggdrasil = {

      # Deploy on all machines
      roles.default.tags.all = { };

      # Or individual hosts
      roles.default.machines.peer1 = { };
      roles.default.machines.peer2 = { };
    };
  };
};
```
116 clanServices/yggdrasil/default.nix Normal file
@@ -0,0 +1,116 @@
{ ... }:
{
  _class = "clan.service";
  manifest.name = "clan-core/yggdrasil";
  manifest.description = "Yggdrasil encrypted IPv6 routing overlay network";

  roles.default = {
    interface =
      { lib, ... }:
      {
        options.extraMulticastInterfaces = lib.mkOption {
          type = lib.types.listOf lib.types.attrs;
          default = [ ];
          description = ''
            Additional interfaces to use for Multicast. See
            https://yggdrasil-network.github.io/configurationref.html#multicastinterfaces
            for reference.
          '';
          example = [
            {
              Regex = "(wg).*";
              Beacon = true;
              Listen = true;
              Port = 5400;
              Priority = 1020;
            }
          ];
        };

        options.peers = lib.mkOption {
          type = lib.types.listOf lib.types.str;
          default = [ ];
          description = ''
            Static peers to configure for this host.
            If not set, local peers will be auto-discovered
          '';
          example = [
            "tcp://192.168.1.1:6443"
            "quic://192.168.1.1:6443"
            "tls://192.168.1.1:6443"
            "ws://192.168.1.1:6443"
          ];
        };
      };
    perInstance =
      { settings, ... }:
      {
        nixosModule =
          {
            config,
            pkgs,
            ...
          }:
          {

            clan.core.vars.generators.yggdrasil = {

              files.privateKey = { };
              files.publicKey.secret = false;
              files.address.secret = false;

              runtimeInputs = with pkgs; [
                yggdrasil
                jq
                openssl
              ];

              script = ''
                # Generate private key
                openssl genpkey -algorithm Ed25519 -out $out/privateKey

                # Generate corresponding public key
                openssl pkey -in $out/privateKey -pubout -out $out/publicKey

                # Derive IPv6 address from key
                echo "{ \"PrivateKeyPath\": \"$out/privateKey\"}" | yggdrasil -useconf -address > $out/address
              '';
            };

            systemd.services.yggdrasil.serviceConfig.BindReadOnlyPaths = [
              "${config.clan.core.vars.generators.yggdrasil.files.privateKey.path}:/var/lib/yggdrasil/key"
            ];

            services.yggdrasil = {
              enable = true;
              openMulticastPort = true;
              persistentKeys = true;
              settings = {
                PrivateKeyPath = "/var/lib/yggdrasil/key";
                IfName = "ygg";
                Peers = settings.peers;
                MulticastInterfaces = [
                  # Ethernet is preferred over WIFI
                  {
                    Regex = "(eth|en).*";
                    Beacon = true;
                    Listen = true;
                    Port = 5400;
                    Priority = 1024;
                  }
                  {
                    Regex = "(wl).*";
                    Beacon = true;
                    Listen = true;
                    Port = 5400;
                    Priority = 1025;
                  }
                ]
                ++ settings.extraMulticastInterfaces;
              };
            };
            networking.firewall.allowedTCPPorts = [ 5400 ];
          };
      };
  };
}
24 clanServices/yggdrasil/flake-module.nix Normal file
@@ -0,0 +1,24 @@
{
  self,
  lib,
  ...
}:
let
  module = lib.modules.importApply ./default.nix {
    inherit (self) packages;
  };
in
{
  clan.modules = {
    yggdrasil = module;
  };
  perSystem =
    { ... }:
    {
      clan.nixosTests.yggdrasil = {
        imports = [ ./tests/vm/default.nix ];

        clan.modules.yggdrasil = module;
      };
    };
}
93 clanServices/yggdrasil/tests/vm/default.nix Normal file
@@ -0,0 +1,93 @@
{
  name = "yggdrasil";

  clan = {
    test.useContainers = false;
    directory = ./.;
    inventory = {

      machines.peer1 = { };
      machines.peer2 = { };

      instances."yggdrasil" = {
        module.name = "yggdrasil";
        module.input = "self";

        # Assign the roles to the two machines
        roles.default.machines.peer1 = { };
        roles.default.machines.peer2 = { };
      };
    };
  };

  # TODO remove after testing, this is just to make @pinpox' life easier
  nodes =
    let
      c =
        { pkgs, ... }:
        {
          environment.systemPackages = with pkgs; [ net-tools ];
          console = {
            font = "Lat2-Terminus16";
            keyMap = "colemak";
          };
        };
    in
    {
      peer1 = c;
      peer2 = c;
    };

  testScript = ''
    start_all()

    # Wait for both machines to be ready
    peer1.wait_for_unit("multi-user.target")
    peer2.wait_for_unit("multi-user.target")

    # Check that the yggdrasil service is running on both machines
    peer1.wait_for_unit("yggdrasil")
    peer2.wait_for_unit("yggdrasil")

    peer1.succeed("systemctl is-active yggdrasil")
    peer2.succeed("systemctl is-active yggdrasil")

    # Check that both machines have yggdrasil network interfaces
    # Yggdrasil creates a tun interface (usually tun0)
    peer1.wait_until_succeeds("ip link show | grep -E 'ygg'", 30)
    peer2.wait_until_succeeds("ip link show | grep -E 'ygg'", 30)

    # Get yggdrasil IPv6 addresses from both machines
    peer1_ygg_ip = peer1.succeed("yggdrasilctl -json getself | jq -r '.address'").strip()
    peer2_ygg_ip = peer2.succeed("yggdrasilctl -json getself | jq -r '.address'").strip()

    # TODO: enable this check. Values don't match up yet, but I can't
    # update-vars to test, because the script is broken.

    # Compare runtime addresses with saved addresses from vars
    # expected_peer1_ip = "${builtins.readFile ./vars/per-machine/peer1/yggdrasil/address/value}"
    # expected_peer2_ip = "${builtins.readFile ./vars/per-machine/peer2/yggdrasil/address/value}"

    print(f"peer1 yggdrasil IP: {peer1_ygg_ip}")
    print(f"peer2 yggdrasil IP: {peer2_ygg_ip}")

    # print(f"peer1 expected IP: {expected_peer1_ip}")
    # print(f"peer2 expected IP: {expected_peer2_ip}")
    #
    # # Verify that runtime addresses match expected addresses
    # assert peer1_ygg_ip == expected_peer1_ip, f"peer1 runtime IP {peer1_ygg_ip} != expected IP {expected_peer1_ip}"
    # assert peer2_ygg_ip == expected_peer2_ip, f"peer2 runtime IP {peer2_ygg_ip} != expected IP {expected_peer2_ip}"

    # Wait a bit for the yggdrasil network to establish connectivity
    import time
    time.sleep(10)

    # Test connectivity: peer1 should be able to ping peer2 via yggdrasil
    peer1.succeed(f"ping -6 -c 3 {peer2_ygg_ip}")

    # Test connectivity: peer2 should be able to ping peer1 via yggdrasil
    peer2.succeed(f"ping -6 -c 3 {peer1_ygg_ip}")
  '';
}
6 clanServices/yggdrasil/tests/vm/sops/machines/peer1/key.json Executable file
@@ -0,0 +1,6 @@
[
  {
    "publickey": "age1r264u9yngfq8qkrveh4tn0rhfes02jfgrtqufdx4n4m3hs4rla2qx0rk4d",
    "type": "age"
  }
]
6 clanServices/yggdrasil/tests/vm/sops/machines/peer2/key.json Executable file
@@ -0,0 +1,6 @@
[
  {
    "publickey": "age1p8kuf8s0nfekwreh4g38cgghp4nzszenx0fraeyky2me0nly2scstqunx8",
    "type": "age"
  }
]
@@ -0,0 +1,15 @@
{
  "data": "ENC[AES256_GCM,data:3dolkgdLC4y5fps4gGb9hf4QhwkUUBodlMOKT+/+erO70FB/pzYBg0mQjQy/uqjINzfIiM32iwVDnx3/Yyz5BDRo2CK+83UGEi4=,iv:FRp1HqlU06JeyEXXFO5WxJWxeLnmUJRWGuFKcr4JFOM=,tag:rbi30HJuqPHdU/TqInGXmg==,type:str]",
  "sops": {
    "age": [
      {
        "recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
        "enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBoYXBxS1JuNW9NeC9YU0xY\nK2xQWDhUYjZ4VzZmeUw1aG9UN2trVnBGQ0J3Ckk0V3d0UFBkT0RnZjBoYjNRVEVW\nN2VEdCtUTUUwenhJSEErT0MyWDA2bHMKLS0tIHJJSzVtR3NCVXozbzREWjltN2ZG\nZm44Y1c4MWNIblcxbmt2YkdxVE10Z1UKmJKEjiYZ9U47QACkbacNTirQIcCvFjM/\nwVxSEVq524sK8LCyIEvsG4e3I3Kn0ybZjoth7J/jg7J4gb8MVw+leQ==\n-----END AGE ENCRYPTED FILE-----\n"
      }
    ],
    "lastmodified": "2025-09-16T08:13:06Z",
    "mac": "ENC[AES256_GCM,data:6HJDkg0AWz+zx5niSIyBAGGaeemwPOqTCA/Fa6VjjyCh1wOav3OTzy/DRBOCze4V52hMGV3ULrI2V7G7DdvQy6LqiKBTQX5ZbWm3IxLASamJBjUJ1LvTm97WvyL54u/l2McYlaUIC8bYDl1UQUqDMo9pN4GwdjsRNCIl4O0Z7KY=,iv:zkWfYuhqwKpZk/16GlpKdAi2qS6LiPvadRJmxp2ZW+w=,tag:qz1gxVnT3OjWxKRKss5W8w==,type:str]",
    "unencrypted_suffix": "_unencrypted",
    "version": "3.10.2"
  }
}
@@ -0,0 +1 @@
../../../users/admin
@@ -0,0 +1,15 @@
{
  "data": "ENC[AES256_GCM,data:BW15ydnNpr0NIXu92nMsD/Y52BDEOsdZg2/fiM8lwSTJN3lEymrIBYsRrcPAnGpFb52d7oN8zdNz9WoW3f/Xwl136sWDz/sc0k4=,iv:7m77nOR/uXLMqXB5QmegtoYVqByJVFFqZIVOtlAonzg=,tag:8sUo9DRscNRajrk+CzHzHw==,type:str]",
  "sops": {
    "age": [
      {
        "recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
        "enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBLVWpnSlJOTVU4NWRMSCto\nS0RaR2RCTUJjT1J0VzRPVTdPL2N5Yjl3c0EwCmlabm1aSzdlV29nb3lrZFBEZXR6\nRjI2TGZUNW1KQ3pLbDFscUlKSnVBNWcKLS0tIDlLR1VFSTRHeWNiQ29XK1pUUnlr\nVkVHOXdJeHhpcldYNVhpK1V6Nng0eW8KSsqJejY1kll6bUBUngiolCB7OhjyI0Gc\nH+9OrORt/nLnc51eo/4Oh9vp/dvSZzuW9MOF9m0f6B3WOFRVMAbukQ==\n-----END AGE ENCRYPTED FILE-----\n"
      }
    ],
    "lastmodified": "2025-09-16T08:13:15Z",
    "mac": "ENC[AES256_GCM,data:dyLnGXBC4nGOgX2TrGhf8kI/+Et0PRy+Ppr228y3LYzgcmUunZl9R8+QXJN51OJSQ63gLun5TBw0v+3VnRVBodlhqTDtfACJ7eILCiArPJqeZoh5MR6HkF31yfqTRlXl1i6KHRPVWvjRIdwJ9yZVN1XNAUsxc7xovqS6kkkGPsA=,iv:7yXnpbU7Zf7GH1+Uimq8eXDUX1kO/nvTaGx4nmTrKdM=,tag:WNn9CUOdCAlksC0Qln5rVg==,type:str]",
    "unencrypted_suffix": "_unencrypted",
    "version": "3.10.2"
  }
}
@@ -0,0 +1 @@
../../../users/admin
@@ -0,0 +1,4 @@
{
  "publickey": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
  "type": "age"
}
@@ -0,0 +1 @@
25.11
@@ -0,0 +1 @@
200:91bb:f1ec:c580:6d52:70b3:4d60:7bf2
@@ -0,0 +1 @@
../../../../../../sops/machines/peer1
@@ -0,0 +1,19 @@
{
  "data": "ENC[AES256_GCM,data:/YoEoYY8CmqK4Yk4fmZieIHIvRn779aikoo3+6SWI5SxuU8TLJVY9+Q7mRmnbCso/8RPMICWkZMIkfbxYi6Dwc4UFmLwPqCoeAYsFBiHsJ6QUoTm1qtDDfXcruFs8Mo93ZmJb7oJIC0a+sVbB5L1NsGmG3g+a+g=,iv:KrMjRIQXutv9WdNzI5VWD6SMDnGzs9LFWcG2d9a6XDg=,tag:x5gQN9FaatRBcHOyS2cicw==,type:str]",
  "sops": {
    "age": [
      {
        "recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
        "enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBwQ0FNU1c4RDNKTHRtMy8z\nSEtQRzFXTVFvcitMWjVlMURPVkxsZC9wU25nCmt4TS81bnJidzFVZkxEY0ovWUtm\nVk5PMjZEWVJCei9rVTJ2bG1ZNWJoZGMKLS0tIHgyTEhIdUQ3YnlKVi9lNVpUZ0dI\nd3BLL05oMXFldGVKbkpoaklscDJMR3MKpUl/KNPrtyt4/bu3xXUAQIkugQXWjlPf\nFqFc1Vnqxynd+wJkkd/zYs4XcOraogOUj/WIRXkqXgdDDoEqb/VIBg==\n-----END AGE ENCRYPTED FILE-----\n"
      },
      {
        "recipient": "age1r264u9yngfq8qkrveh4tn0rhfes02jfgrtqufdx4n4m3hs4rla2qx0rk4d",
        "enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSArOUdkd3VVSTU3NHZ6aURB\na2dYMXhyMmVLMDVlM0dzVHpxbUw3K3BFcVNzCm1LczFyd3BubGwvRVUwQ1Q0aWZR\nL1hlb1VpZ3JnTVQ4Zm9wVnlJYVNuL00KLS0tIHlMRVMyNW9rWG45bVVtczF3MVNq\nL2d2RXhEeVcyRVNmSUF6cks5VStxVkUKugI1iDei32852wNV/zPlyVwKJH1UXOlY\nFQq7dqMJMWI6a5F+z4UdaHvzyKxF2CWBG7DVnaUSpq7Q3uGmibsSOQ==\n-----END AGE ENCRYPTED FILE-----\n"
      }
    ],
    "lastmodified": "2025-09-16T08:13:07Z",
    "mac": "ENC[AES256_GCM,data:LIlgQgiQt9aHXagpXphxSnpju+DOxuBvPpz5Rr43HSwgbWFgZ8tqlH2C1xo2xsJIexWkc823J9txpy+PLFXSm4/NbQGbKSymjHNEIYaU1tBSQ0KZ+s22X3/ku3Hug7/MkEKv5JsroTEcu3FK6Fv7Mo0VWqUggenl9AsJ5BocUO4=,iv:LGOnpWsod1ek4isWVrHrS+ZOCPrhwlPliPOTiMVY0zY=,tag:tRuHBSd9HxOswNcqjvzg0w==,type:str]",
    "unencrypted_suffix": "_unencrypted",
    "version": "3.10.2"
  }
}
@@ -0,0 +1 @@
../../../../../../sops/users/admin
@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAtyIHCZ0/yVbHpllPwgaWIFQ3Kb4fYMcOujgVmttA7gM=
-----END PUBLIC KEY-----
@@ -0,0 +1 @@
25.11
@@ -0,0 +1 @@
200:bb1f:6f1c:1852:173a:cb5e:5726:870
@@ -0,0 +1 @@
../../../../../../sops/machines/peer2
@@ -0,0 +1,19 @@
{
  "data": "ENC[AES256_GCM,data:b1dbaJQGr8mnISch0iej+FhMnYOIFxOJYCvWDQseiczltXsBetbYr+89co5Sp7wmhQrH3tlWaih3HZe294Y9j8XvwpNUtmW3RZHsU/6EWA50LKcToFGFCcEBM/Nz9RStQXnjwLbRSLFuMlfoQttUATB2XYSm+Ng=,iv:YCeE3KbHaBhR0q10qO8Og1LBT5OUjsIDxfclpcLJh6I=,tag:M7y9HAC+fh8Fe8HoqQrnbg==,type:str]",
  "sops": {
    "age": [
      {
        "recipient": "age1p8kuf8s0nfekwreh4g38cgghp4nzszenx0fraeyky2me0nly2scstqunx8",
        "enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSA3NTVOT2MxaDJsTXloVVcv\nellUdnVxSVdnZ1NBUGEwLzBiTGoyZENJdm1RClp5eHY3dkdVSzVJYk52dWFCQnlG\nclIrQUJ5RXRYTythWTFHR1NhVHlyMVkKLS0tIEFza3YwcUNiYUV5VWJQcTljY2ZR\nUnc3U1VubmZRTCtTTC9rd1kydnNYa00KqdwV3eRHA6Y865JXQ7lxbS6aTIGf/kQM\nqDFdiUdvEDqo19Df3QBJ7amQ1YjPqSIRbO8CJNPI8JqQJKTaBOgm9g==\n-----END AGE ENCRYPTED FILE-----\n"
      },
      {
        "recipient": "age1qm0p4vf9jvcnn43s6l4prk8zn6cx0ep9gzvevxecv729xz540v8qa742eg",
        "enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBzTmV0Skd5Zzk1SXc4ZDc3\nRi9wTVdDM1lTc3N0MXpNNVZjUWJ6VDZHd3hzCkpRZnNtSU14clkybWxvSEhST2py\nR29jcHdXSCtFRE02ejB0dzN1eGVQZ1kKLS0tIE9YVjJBRTg1SGZ5S3lYdFRUM3RW\nOGZjUEhURnJIVTBnZG43UFpTZkdseFUKOgHC10Rqf/QnzfCHUMEPb1PVo9E6qlpo\nW/F1I8ZqkFI8sWh54nilXeR8i8w+QCthliBxsxdDTv2FSxdnKNHu3A==\n-----END AGE ENCRYPTED FILE-----\n"
      }
    ],
    "lastmodified": "2025-09-16T08:13:15Z",
    "mac": "ENC[AES256_GCM,data:0byytsY3tFK3r4qhM1+iYe9KYYKJ8cJO/HonYflbB0iTD+oRBnnDUuChPdBK50tQxH8aInlvgIGgi45OMk7IrFBtBYQRgFBUR5zDujzel9hJXQvpvqgvRMkzA542ngjxYmZ74mQB+pIuFhlVJCfdTN+smX6N4KyDRj9d8aKK0Qs=,iv:DC8nwgUAUSdOCr8TlgJX21SxOPOoJKYeNoYvwj5b9OI=,tag:cbJ8M+UzaghkvtEnRCp+GA==,type:str]",
    "unencrypted_suffix": "_unencrypted",
    "version": "3.10.2"
  }
}
@@ -0,0 +1 @@
../../../../../../sops/users/admin
@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAonBIcfPW9GKaUNRs+8epsgQOShNbR9v26+3H80an2/c=
-----END PUBLIC KEY-----
@@ -13,37 +13,32 @@ inventory.instances = {
};
```

The input should be named according to your flake input.
All machines will be peers and connected to the zerotier network.
Jon is the controller machine, which will accept other machines into the network.
Sara is a moon and sets the `stableEndpoint` setting with a publicly reachable IP; the moon is optional.

## Overview

This guide explains how to set up and manage a [ZeroTier VPN](https://zerotier.com) for a clan network. Each VPN requires a single controller and can support multiple peers and optional moons for better connectivity.

## Roles

### 1. Controller

The [Controller](https://docs.zerotier.com/controller/) manages network membership and is responsible for admitting new peers.
When a new node is added to the clan, the controller must be updated to ensure it has the latest member list.

- **Key Points:**
  - Must be online to admit new machines to the VPN.
  - Existing nodes can continue to communicate even when the controller is offline.

### 2. Moons

[Moons](https://docs.zerotier.com/roots) act as relay nodes,
providing direct connectivity to peers via their public IP addresses.
They enable devices that are not publicly reachable to join the VPN by routing through these nodes.

- **Configuration Notes:**
  - Each moon must define its public IP address.
@@ -51,8 +46,8 @@ are not publicly reachable to join the VPN by routing through these nodes.

### 3. Peers

Peers are standard nodes in the VPN.
They connect to other peers, moons, and the controller as needed.

- **Purpose:**
  - General role for all machines that are neither controllers nor moons.
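Putting the three zerotier roles together, an inventory might look roughly like the sketch below. The machine names follow the Jon/Sara example above, but the exact `settings.stableEndpoint` attribute path and role option shapes are illustrative assumptions, not verbatim service options:

```nix
inventory.instances.zerotier = {
  # Jon admits new machines into the network
  roles.controller.machines.jon = { };
  # Sara relays for nodes behind NAT; needs a publicly reachable IP (path assumed)
  roles.moon.machines.sara.settings.stableEndpoint = "203.0.113.7";
  # Every other machine joins as a plain peer
  roles.peer.tags.all = { };
};
```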
18
devFlake/flake.lock
generated
18
devFlake/flake.lock
generated
@@ -3,10 +3,10 @@
|
|||||||
"clan-core-for-checks": {
|
"clan-core-for-checks": {
|
||||||
"flake": false,
|
"flake": false,
|
||||||
"locked": {
|
"locked": {
|
||||||
"lastModified": 1756133826,
|
"lastModified": 1756166884,
|
||||||
"narHash": "sha256-In3u7UVSjPzX9u4Af9K/jVy4MMAZBzxByMe4GREpHBo=",
|
"narHash": "sha256-skg4rwpbCjhpLlrv/Pndd43FoEgrJz98WARtGLhCSzo=",
|
||||||
"ref": "main",
|
"ref": "main",
|
||||||
"rev": "c4da43da0f583bd3cbcfd1f3acf74f9dc51b8fdd",
|
"rev": "f7414d7e6e58709af27b6fe16eb530278e81eaaf",
|
||||||
"shallow": true,
|
"shallow": true,
|
||||||
"type": "git",
|
"type": "git",
|
||||||
"url": "https://git.clan.lol/clan/clan-core"
|
"url": "https://git.clan.lol/clan/clan-core"
|
||||||
@@ -84,11 +84,11 @@
|
|||||||
},
|
},
|
||||||
"nixpkgs-dev": {
|
"nixpkgs-dev": {
|
||||||
"locked": {
|
"locked": {
|
||||||
"lastModified": 1756104823,
|
"lastModified": 1756662818,
|
||||||
"narHash": "sha256-wRzHREXDOrbCjy+sqo4t3JoInbB2PuhXIUa8NWdh9tk=",
|
"narHash": "sha256-Opggp4xiucQ5gBceZ6OT2vWAZOjQb3qULv39scGZ9Nw=",
|
||||||
"owner": "NixOS",
|
"owner": "NixOS",
|
||||||
"repo": "nixpkgs",
|
"repo": "nixpkgs",
|
||||||
"rev": "d7967bed5381e65208f4fb8d5502e3c36bb94759",
|
"rev": "2e6aeede9cb4896693434684bb0002ab2c0cfc09",
|
||||||
"type": "github"
|
"type": "github"
|
||||||
},
|
},
|
||||||
"original": {
|
"original": {
|
||||||
@@ -165,11 +165,11 @@
         "nixpkgs": []
       },
       "locked": {
-        "lastModified": 1755934250,
-        "narHash": "sha256-CsDojnMgYsfshQw3t4zjRUkmMmUdZGthl16bXVWgRYU=",
+        "lastModified": 1756662192,
+        "narHash": "sha256-F1oFfV51AE259I85av+MAia221XwMHCOtZCMcZLK2Jk=",
         "owner": "numtide",
         "repo": "treefmt-nix",
-        "rev": "74e1a52d5bd9430312f8d1b8b0354c92c17453e5",
+        "rev": "1aabc6c05ccbcbf4a635fb7a90400e44282f61c4",
         "type": "github"
       },
       "original": {
@@ -1,12 +1,9 @@
 # Contributing to Clan

-**Continuous Integration (CI)**: Each pull request gets automatically tested by
-gitea. If any errors are detected, it will block pull requests until they're
-resolved.
+**Continuous Integration (CI)**: Each pull request gets automatically tested by gitea. If any errors are detected, it will block pull requests until they're resolved.

-**Dependency Management**: We use the [Nix package manager](https://nixos.org/)
-to manage dependencies and ensure reproducibility, making your development
-process more robust.
+**Dependency Management**: We use the [Nix package manager](https://nixos.org/) to manage dependencies and ensure reproducibility, making your development process more robust.

 ## Supported Operating Systems

@@ -19,37 +16,31 @@ Let's get your development environment up and running:

 1. **Install Nix Package Manager**:

-- You can install the Nix package manager by either
-[downloading the Nix installer](https://github.com/DeterminateSystems/nix-installer/releases)
-or running this command:
+- You can install the Nix package manager by either [downloading the Nix installer](https://github.com/DeterminateSystems/nix-installer/releases) or running this command:
 ```bash
 curl --proto '=https' --tlsv1.2 -sSf -L https://install.determinate.systems/nix | sh -s -- install
 ```

-2. **Install direnv**:
+1. **Install direnv**:

 - To automatically setup a devshell on entering the directory
 ```bash
 nix profile install nixpkgs#nix-direnv-flakes nixpkgs#direnv
 ```

-3. **Add direnv to your shell**:
+1. **Add direnv to your shell**:

-- Direnv needs to [hook into your shell](https://direnv.net/docs/hook.html)
-to work. You can do this by executing following command. The example below
-will setup direnv for `zsh` and `bash`
+- Direnv needs to [hook into your shell](https://direnv.net/docs/hook.html) to work.
+You can do this by executing following command. The example below will setup direnv for `zsh` and `bash`

 ```bash
 echo 'eval "$(direnv hook zsh)"' >> ~/.zshrc && echo 'eval "$(direnv hook bash)"' >> ~/.bashrc && eval "$SHELL"
 ```

-4. **Allow the devshell**
+1. **Allow the devshell**
+- Go to `clan-core/pkgs/clan-cli` and do a `direnv allow` to setup the necessary development environment to execute the `clan` command

-- Go to `clan-core/pkgs/clan-cli` and do a `direnv allow` to setup the
-necessary development environment to execute the `clan` command
-
-5. **Create a Gitea Account**:
-
+1. **Create a Gitea Account**:
 - Register an account on https://git.clan.lol
 - Fork the [clan-core](https://git.clan.lol/clan/clan-core) repository
 - Clone the repository and navigate to it
@@ -57,18 +48,15 @@ Let's get your development environment up and running:

 ```bash
 git remote add upstream gitea@git.clan.lol:clan/clan-core.git
 ```

-6. **Allow .envrc**:
+1. **Allow .envrc**:

 - When you enter the directory, you'll receive an error message like this:
 ```bash
 direnv: error .envrc is blocked. Run `direnv allow` to approve its content
 ```
-- Execute `direnv allow` to automatically execute the shell script `.envrc`
-when entering the directory.
+- Execute `direnv allow` to automatically execute the shell script `.envrc` when entering the directory.

-7. **(Optional) Install Git Hooks**:
-
+1. **(Optional) Install Git Hooks**:
 - To syntax check your code you can run:
 ```bash
 nix fmt
@@ -81,19 +69,15 @@ Let's get your development environment up and running:

 ## Related Projects

 - **Data Mesher**: [data-mesher](https://git.clan.lol/clan/data-mesher)
-- **Nixos Facter**:
-[nixos-facter](https://github.com/nix-community/nixos-facter)
-- **Nixos Anywhere**:
-[nixos-anywhere](https://github.com/nix-community/nixos-anywhere)
+- **Nixos Facter**: [nixos-facter](https://github.com/nix-community/nixos-facter)
+- **Nixos Anywhere**: [nixos-anywhere](https://github.com/nix-community/nixos-anywhere)
 - **Disko**: [disko](https://github.com/nix-community/disko)

 ## Fixing Bugs or Adding Features in Clan-CLI

-If you have a bug fix or feature that involves a related project, clone the
-relevant repository and replace its invocation in your local setup.
+If you have a bug fix or feature that involves a related project, clone the relevant repository and replace its invocation in your local setup.

-For instance, if you need to update `nixos-anywhere` in clan-cli, find its
-usage:
+For instance, if you need to update `nixos-anywhere` in clan-cli, find its usage:

 ```python
 run(
@@ -118,8 +102,7 @@ run(

 ```

-The \<path_to_local_src> doesn't need to be a local path, it can be any valid
-[flakeref](https://nix.dev/manual/nix/2.26/command-ref/new-cli/nix3-flake.html#flake-references).
+The <path_to_local_src> doesn't need to be a local path, it can be any valid [flakeref](https://nix.dev/manual/nix/2.26/command-ref/new-cli/nix3-flake.html#flake-references).
 And thus can point to test already opened PRs for example.

 # Standards
@@ -127,5 +110,4 @@ And thus can point to test already opened PRs for example.

 - Every new module name should be in kebab-case.
 - Every fact definition, where possible should be in kebab-case.
 - Every vars definition, where possible should be in kebab-case.
-- Command line help descriptions should start capitalized and should not end in
-a period.
+- Command line help descriptions should start capitalized and should not end in a period.
@@ -2,27 +2,17 @@

 ## General Description

-Self-hosting refers to the practice of hosting and maintaining servers,
-networks, storage, services, and other types of infrastructure by oneself rather
-than relying on a third-party vendor. This could involve running a server from a
-home or business location, or leasing a dedicated server at a data center.
+Self-hosting refers to the practice of hosting and maintaining servers, networks, storage, services, and other types of infrastructure by oneself rather than relying on a third-party vendor. This could involve running a server from a home or business location, or leasing a dedicated server at a data center.

 There are several reasons for choosing to self-host. These can include:

-1. Cost savings: Over time, self-hosting can be more cost-effective, especially
-for businesses with large scale needs.
+1. Cost savings: Over time, self-hosting can be more cost-effective, especially for businesses with large scale needs.

-2. Control: Self-hosting provides a greater level of control over the
-infrastructure and services. It allows the owner to customize the system to
-their specific needs.
+1. Control: Self-hosting provides a greater level of control over the infrastructure and services. It allows the owner to customize the system to their specific needs.

-3. Privacy and security: Self-hosting can offer improved privacy and security
-because data remains under the control of the host rather than being stored
-on third-party servers.
+1. Privacy and security: Self-hosting can offer improved privacy and security because data remains under the control of the host rather than being stored on third-party servers.

-4. Independent: Being independent of third-party services can ensure that one's
-websites, applications, or services remain up even if the third-party service
-goes down.
+1. Independent: Being independent of third-party services can ensure that one's websites, applications, or services remain up even if the third-party service goes down.

 ## Stories

@@ -30,32 +20,23 @@ There are several reasons for choosing to self-host. These can include:

 Alice wants to self-host a mumble server for her family.

-- She visits to the Clan website, and follows the instructions on how to install
-Clan-OS on her server.
-- Alice logs into a terminal on her server via SSH (alternatively uses Clan GUI
-app)
-- Using the Clan CLI or GUI tool, alice creates a new private network for her
-family (VPN)
-- Alice now browses a list of curated Clan modules and finds a module for
-mumble.
+- She visits to the Clan website, and follows the instructions on how to install Clan-OS on her server.
+- Alice logs into a terminal on her server via SSH (alternatively uses Clan GUI app)
+- Using the Clan CLI or GUI tool, alice creates a new private network for her family (VPN)
+- Alice now browses a list of curated Clan modules and finds a module for mumble.
 - She adds this module to her network using the Clan tool.
 - After that, she uses the clan tool to invite her family members to her network
 - Other family members join the private network via the invitation.
-- By accepting the invitation, other members automatically install all required
-software to interact with the network on their machine.
+- By accepting the invitation, other members automatically install all required software to interact with the network on their machine.

 ### Story 2: Adding a service to an existing network

 Alice wants to add a photos app to her private network

-- She uses the clan CLI or GUI tool to manage her existing private Clan family
-network
-- She discovers a module for photoprism, and adds it to her server using the
-tool
-- Other members who are already part of her network, will receive a notification
-that an update is required to their environment
-- After accepting, all new software and services to interact with the new
-photoprism service will be installed automatically.
+- She uses the clan CLI or GUI tool to manage her existing private Clan family network
+- She discovers a module for photoprism, and adds it to her server using the tool
+- Other members who are already part of her network, will receive a notification that an update is required to their environment
+- After accepting, all new software and services to interact with the new photoprism service will be installed automatically.

 ## Challenges

@@ -2,53 +2,35 @@

 ## General Description

-Joining a self-hosted infrastructure involves connecting to a network, server,
-or system that is privately owned and managed, instead of being hosted by a
-third-party service provider. This could be a business's internal server, a
-private cloud setup, or any other private IT infrastructure that is not publicly
-accessible or controlled by outside entities.
+Joining a self-hosted infrastructure involves connecting to a network, server, or system that is privately owned and managed, instead of being hosted by a third-party service provider. This could be a business's internal server, a private cloud setup, or any other private IT infrastructure that is not publicly accessible or controlled by outside entities.

 ## Stories

 ### Story 1: Joining a private network

-Alice' son Bob has never heard of Clan, but receives an invitation URL from
-Alice who already set up private Clan network for her family.
+Alice' son Bob has never heard of Clan, but receives an invitation URL from Alice who already set up private Clan network for her family.

-Bob opens the invitation link and lands on the Clan website. He quickly learns
-about what Clan is and can see that the invitation is for a private network of
-his family that hosts a number of services, like a private voice chat and a
-photo sharing platform.
+Bob opens the invitation link and lands on the Clan website. He quickly learns about what Clan is and can see that the invitation is for a private network of his family that hosts a number of services, like a private voice chat and a photo sharing platform.

-Bob decides to join the network and follows the instructions to install the Clan
-tool on his computer.
+Bob decides to join the network and follows the instructions to install the Clan tool on his computer.

-Feeding the invitation link to the Clan tool, bob registers his machine with the
-network.
+Feeding the invitation link to the Clan tool, bob registers his machine with the network.

-All programs required to interact with the network will be installed and
-configured automatically and securely.
+All programs required to interact with the network will be installed and configured automatically and securely.

-Optionally, bob can customize the configuration of these programs through a
-simplified configuration interface.
+Optionally, bob can customize the configuration of these programs through a simplified configuration interface.

 ### Story 2: Receiving breaking changes

 The Clan family network which Bob is part of received an update.

-The existing photo sharing service has been removed and replaced with another
-alternative service. The new photo sharing service requires a different client
-app to view and upload photos.
+The existing photo sharing service has been removed and replaced with another alternative service. The new photo sharing service requires a different client app to view and upload photos.

-Bob accepts the update. Now his environment will be updated. The old client
-software will be removed and the new one installed.
+Bob accepts the update. Now his environment will be updated. The old client software will be removed and the new one installed.

-Because Bob has customized the previous photo viewing app, he is notified that
-this customization is no longer valid, as the software has been removed
-(deprecation message).l
+Because Bob has customized the previous photo viewing app, he is notified that this customization is no longer valid, as the software has been removed (deprecation message).l

-Optionally, Bob can now customize the new photo viewing software through his
-Clan configuration app or via a config file.
+Optionally, Bob can now customize the new photo viewing software through his Clan configuration app or via a config file.

 ## Challenges

@@ -2,30 +2,23 @@

 ## General Description

-Clan modules are pieces of software that can be used by admins to build a
-private or public infrastructure.
+Clan modules are pieces of software that can be used by admins to build a private or public infrastructure.

 Clan modules should have the following properties:

 1. Documented: It should be clear what the module does and how to use it.
-2. Self contained: A module should be usable as is. If it requires any other
-software or settings, those should be delivered with the module itself.
-3. Simple to deploy and use: Modules should have opinionated defaults that just
-work. Any customization should be optional
+1. Self contained: A module should be usable as is. If it requires any other software or settings, those should be delivered with the module itself.
+1. Simple to deploy and use: Modules should have opinionated defaults that just work. Any customization should be optional

 ## Stories

 ### Story 1: Maintaining a shared folder module

-Alice maintains a module for a shared folder service that she uses in her own
-infra, but also publishes for the community.
+Alice maintains a module for a shared folder service that she uses in her own infra, but also publishes for the community.

-By following clan module standards (Backups, Interfaces, Output schema, etc),
-other community members have an easy time re-using the module within their own
-infra.
+By following clan module standards (Backups, Interfaces, Output schema, etc), other community members have an easy time re-using the module within their own infra.

-She benefits from publishing the module, because other community members start
-using it and help to maintain it.
+She benefits from publishing the module, because other community members start using it and help to maintain it.

 ## Challenges

@@ -1,13 +1,11 @@
 {
   lib,
-  config,
   ...
 }:
 let
-  suffix = config.clan.core.vars.generators.disk-id.files.diskId.value;
   mirrorBoot = idx: {
     # suffix is to prevent disk name collisions
-    name = idx + suffix;
+    name = idx;
     type = "disk";
     device = "/dev/disk/by-id/${idx}";
     content = {
@@ -1,13 +1,11 @@
 {
   lib,
-  config,
   ...
 }:
 let
-  suffix = config.clan.core.vars.generators.disk-id.files.diskId.value;
   mirrorBoot = idx: {
     # suffix is to prevent disk name collisions
-    name = idx + suffix;
+    name = idx;
     type = "disk";
     device = "/dev/disk/by-id/${idx}";
     content = {
@@ -2,7 +2,7 @@ site_name: Clan Documentation
 site_url: https://docs.clan.lol
 repo_url: https://git.clan.lol/clan/clan-core/
 repo_name: "_>"
-edit_uri: _edit/main/docs/docs/
+edit_uri: _edit/main/docs/site/

 validation:
   omitted_files: warn
@@ -94,6 +94,8 @@ nav:
 - reference/clanServices/index.md
 - reference/clanServices/admin.md
 - reference/clanServices/borgbackup.md
+- reference/clanServices/certificates.md
+- reference/clanServices/coredns.md
 - reference/clanServices/data-mesher.md
 - reference/clanServices/dyndns.md
 - reference/clanServices/emergency-access.md
@@ -106,12 +108,12 @@ nav:
 - reference/clanServices/monitoring.md
 - reference/clanServices/packages.md
 - reference/clanServices/sshd.md
-- reference/clanServices/state-version.md
 - reference/clanServices/syncthing.md
 - reference/clanServices/trusted-nix-caches.md
 - reference/clanServices/users.md
 - reference/clanServices/wifi.md
 - reference/clanServices/wireguard.md
+- reference/clanServices/yggdrasil.md
 - reference/clanServices/zerotier.md
 - API: reference/clanServices/clan-service-author-interface.md

@@ -173,6 +175,7 @@ theme:
 - content.code.annotate
 - content.code.copy
 - content.tabs.link
+- content.action.edit
 icon:
   repo: fontawesome/brands/git
 custom_dir: overrides
@@ -48,7 +48,7 @@ CLAN_SERVICE_INTERFACE = os.environ.get("CLAN_SERVICE_INTERFACE")

 CLAN_MODULES_VIA_SERVICE = os.environ.get("CLAN_MODULES_VIA_SERVICE")

-OUT = os.environ.get("out")
+OUT = os.environ.get("out")  # noqa: SIM112


 def sanitize(text: str) -> str:
@@ -551,8 +551,7 @@ def options_docs_from_tree(

     return output

-    md = render_tree(root)
-    return md
+    return render_tree(root)


 if __name__ == "__main__":
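The hunk above folds an intermediate variable into a direct return. A minimal self-contained illustration of that refactor, with a hypothetical stand-in for the real `render_tree` (the actual function in the script renders Markdown from an options tree):

```python
def render_tree(root: dict) -> str:
    # hypothetical stand-in for the real Markdown renderer
    return f"# {root['name']}\n"


def options_docs_from_tree(root: dict) -> str:
    # after the refactor: return the rendered tree directly
    # instead of binding it to a temporary variable first
    return render_tree(root)


print(options_docs_from_tree({"name": "options"}))
```

Behavior is unchanged; the change only removes the unnecessary `md` binding flagged by the linter.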
@@ -1,10 +1,7 @@
-______________________________________________________________________
-
-template: options.html hide:
-
-- navigation
-- toc
+---
+template: options.html
+hide:
+- navigation
+- toc
+---

-______________________________________________________________________

 <redoc src="/openapi.json" />
@@ -1,35 +1,33 @@
 # Auto-included Files

-Clan automatically imports specific files from each machine directory and
-registers them, reducing the need for manual configuration.
+Clan automatically imports specific files from each machine directory and registers them, reducing the need for manual configuration.

 ## Machine Registration

-Every folder under `machines/{machineName}` is automatically registered as a
-Clan machine.
+Every folder under `machines/{machineName}` is automatically registered as a Clan machine.

 !!! info "Files loaded automatically for each machine"

 The following files are detected and imported for every Clan machine:

-- [x] `machines/{machineName}/configuration.nix` Main configuration file for the
-machine.
+- [x] `machines/{machineName}/configuration.nix`
+Main configuration file for the machine.

-- [x] `machines/{machineName}/hardware-configuration.nix` Hardware-specific
-configuration generated by NixOS.
+- [x] `machines/{machineName}/hardware-configuration.nix`
+Hardware-specific configuration generated by NixOS.

-- [x] `machines/{machineName}/facter.json` Contains system facts. Automatically
-generated — see [nixos-facter](https://clan.lol/blog/nixos-facter/) for
-details.
+- [x] `machines/{machineName}/facter.json`
+Contains system facts. Automatically generated — see [nixos-facter](https://clan.lol/blog/nixos-facter/) for details.

-- [x] `machines/{machineName}/disko.nix` Disk layout configuration. See the
-[disko quickstart](https://github.com/nix-community/disko/blob/master/docs/quickstart.md)
-for more info.
+- [x] `machines/{machineName}/disko.nix`
+Disk layout configuration. See the [disko quickstart](https://github.com/nix-community/disko/blob/master/docs/quickstart.md) for more info.

 ## Other Auto-included Files

-- **`inventory.json`** Managed by Clan's API. Merges with `clan.inventory` to
-extend the inventory.
+* **`inventory.json`**
+Managed by Clan's API.
+Merges with `clan.inventory` to extend the inventory.

-- **`.clan-flake`** Sentinel file to be used to locate the root of a Clan
-repository. Falls back to `.git`, `.hg`, `.svn`, or `flake.nix` if not found.
+* **`.clan-flake`**
+Sentinel file to be used to locate the root of a Clan repository.
+Falls back to `.git`, `.hg`, `.svn`, or `flake.nix` if not found.
@@ -1,21 +1,14 @@
 # Generators

-Defining a linux user's password via the nixos configuration previously required
-running `mkpasswd ...` and then copying the hash back into the nix
-configuration.
+Defining a linux user's password via the nixos configuration previously required running `mkpasswd ...` and then copying the hash back into the nix configuration.

-In this example, we will guide you through automating that interaction using
-clan `vars`.
+In this example, we will guide you through automating that interaction using clan `vars`.

-For a more general explanation of what clan vars are and how it works, see the
-intro of the [Reference Documentation for vars](../reference/clan.core/vars.md)
+For a more general explanation of what clan vars are and how it works, see the intro of the [Reference Documentation for vars](../reference/clan.core/vars.md)

 This guide assumes

-- Clan is set up already (see
-[Getting Started](../guides/getting-started/index.md))
-- a machine has been added to the clan (see
-[Adding Machines](../guides/getting-started/add-machines.md))
+- Clan is set up already (see [Getting Started](../guides/getting-started/index.md))
+- a machine has been added to the clan (see [Adding Machines](../guides/getting-started/add-machines.md))

 This section will walk you through the following steps:

@@ -36,9 +29,7 @@ In this example, a `vars` `generator` is used to:
 - store the hash in a file
 - expose the file path to the nixos configuration

-Create a new nix file `root-password.nix` with the following content and import
-it into your `configuration.nix`
+Create a new nix file `root-password.nix` with the following content and import it into your `configuration.nix`

 ```nix
 {config, pkgs, ...}: {

@@ -71,29 +62,24 @@ it into your `configuration.nix`
 ## Inspect the status

 Executing `clan vars list`, you should see the following:

 ```shellSession
 $ clan vars list my_machine
 root-password/password-hash: <not set>
 ```

-...indicating that the value `password-hash` for the generator `root-password`
-is not set yet.
+...indicating that the value `password-hash` for the generator `root-password` is not set yet.

 ## Generate the values

-This step is not strictly necessary, as deploying the machine via
-`clan machines update` would trigger the generator as well.
+This step is not strictly necessary, as deploying the machine via `clan machines update` would trigger the generator as well.

 To run the generator, execute `clan vars generate` for your machine

 ```shellSession
 $ clan vars generate my_machine
 Enter the value for root-password/password-input (hidden):
 ```

 After entering the value, the updated status is reported:

 ```shellSession
 Updated var root-password/password-hash
 old: <not set>
@@ -106,7 +92,6 @@ With the last step, a new file was created in your repository:
`vars/per-machine/my-machine/root-password/password-hash/value`

If the repository is a git repository, a commit was created automatically:

```shellSession
$ git log -n1
commit ... (HEAD -> master)
@@ -124,13 +109,9 @@ clan machines update my_machine

## Share root password between machines

If we just imported the `root-password.nix` from above into more machines, clan would ask for a new password for each additional machine.

If the root password instead should only be entered once and shared across all machines, the generator defined above needs to be declared as `shared` by adding `share = true` to it:
```nix
{config, pkgs, ...}: {
  clan.vars.generators.root-password = {
@@ -140,15 +121,13 @@ adding `share = true` to it:
}
```

Importing that shared generator into each machine will ensure that the password is only asked once, when the first machine gets updated, and then re-used for all subsequent machines.
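As a minimal sketch of the shared variant (assuming the same generator shape as in the truncated block above), the only change is the added `share` flag:

```nix
# Hypothetical sketch — only the `share` flag differs from the per-machine generator.
{config, pkgs, ...}: {
  clan.vars.generators.root-password = {
    # enter the password once; re-use the generated hash on all machines
    share = true;
    # ...prompts, files and script unchanged...
  };
}
```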

## Change the root password

Changing the password can be done via this command.
Replace `my-machine` with your machine.
If the password is shared, just pick any machine that has the generator declared.

```shellSession
$ clan vars generate my-machine --generator root-password --regenerate
@@ -162,6 +141,7 @@ Updated var root-password/password-hash
new: $6$OyoQtDVzeemgh8EQ$zRK...
```

## Further Reading

- [Reference Documentation for `clan.core.vars` NixOS options](../reference/clan.core/vars.md)
@@ -1,67 +1,51 @@
`Inventory` is an abstract service layer for consistently configuring distributed services across machine boundaries.

## Concept

Its concept is slightly different to what NixOS veterans might be used to. The inventory is a service definition on a higher level, not a machine configuration. This allows you to define a consistent and coherent service.

The inventory logic will automatically derive the modules and configurations to enable on each machine in your `clan` based on its `role`. This makes it super easy to set up distributed `services` such as Backups, Networking, traditional cloud services, or peer-to-peer based applications.

The following tutorial will walk through setting up a Backup service where the terms `Service` and `Role` will become more clear.

!!! example "Experimental status"
    The inventory implementation is not considered stable yet.
    We are actively soliciting feedback from users.

    Stabilizing the API is a priority.

## Prerequisites

- [x] [Add some machines](../guides/getting-started/add-machines.md) to your Clan.

## Services

The inventory defines `services`. Membership of `machines` is defined via `roles` exclusively.

See each [module's documentation](../reference/clanServices/index.md) for its available roles.

### Adding services to machines

A service can be added to one or multiple machines via `Roles`. Clan's `Role` interface provides sane defaults for a module; this allows the module author to reduce the configuration overhead to a minimum.

Each service can still be customized and configured according to the module's options.

- Per instance configuration via `services.<serviceName>.<instanceName>.config`
- Per role configuration via `services.<serviceName>.<instanceName>.roles.<roleName>.config`
- Per machine configuration via `services.<serviceName>.<instanceName>.machines.<machineName>.config`
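The three configuration scopes above can be sketched side by side; the service, instance, machine, and role names here are placeholders, not a tested configuration:

```nix
# Illustrative only — "borgbackup", "instance_1", "client" and "jon" are placeholders.
{
  inventory.services.borgbackup.instance_1 = {
    # per instance: applies to every member of this instance
    config = { };
    # per role: applies to all machines with the "client" role
    roles.client.config = { };
    # per machine: applies to machine "jon" only
    machines.jon.config = { };
  };
}
```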

### Setting up the Backup Service

!!! Example "Borgbackup Example"

    To configure a service it needs to be added to the machine.
    It is required to assign the service (`borgbackup`) an arbitrary instance name (`instance_1`).

    See also: [Multiple Service Instances](#multiple-service-instances)

    ```{.nix hl_lines="6-7"}
    clan-core.lib.clan {
      inventory = {
        services = {
          borgbackup.instance_1 = {
@@ -71,9 +55,8 @@ clan-core.lib.clan {
          };
        };
      };
    }
    ```

### Scaling the Backup

@@ -83,9 +66,8 @@ It is possible to add services to multiple machines via tags as shown

!!! Example "Tags Example"

    ```{.nix hl_lines="5 8 14"}
    clan-core.lib.clan {
      inventory = {
        machines = {
          "jon" = {
@@ -103,23 +85,21 @@ clan-core.lib.clan {
          };
        };
      };
    }
    ```

### Multiple Service Instances

!!! danger "Important"
    Not all modules implement support for multiple instances yet.
    Multiple instance usage could create complexity; refer to each module's documentation for intended usage.

!!! Example
    In this example `backup_server` has role `client` and `server` in different instances.

    ```{.nix hl_lines="11 14"}
    clan-core.lib.clan {
      inventory = {
        machines = {
          "jon" = {};
@@ -137,6 +117,5 @@ clan-core.lib.clan {
          };
        };
      };
    }
    ```
@@ -1,8 +1,7 @@
# How Templates work

Clan offers the ability to use templates for creating different resources.
It comes with some `<builtin>` templates and discovers all exposed templates from its flake's `inputs`.

For example, one can list all current templates like this:
@@ -39,8 +38,7 @@ Available 'machine' templates

Templates are referenced via the `--template` `selector`.

clan-core ships its native/builtin templates. Those are referenced if the selector is a plain string (without `#` or `./.`).

For example:
@@ -50,14 +48,11 @@ would use the native `<builtin>.flake-parts` template

## Selectors follow nix flake `reference#attribute` syntax

Selectors follow a very similar pattern as Nix's native attribute selection behavior.

Just like `nix build .` would build `packages.x86_64-linux.default` of the flake in `./.`,

`clan flakes create --template=.` would create a clan from your **local** `default` clan template (`templates.clan.default`).

In fact this command would be equivalent, just making it more explicit:
@@ -65,11 +60,10 @@ In fact this command would be equivalent, just make it more explicit

## Remote templates

Just like with Nix, you could specify a remote url or path to the flake containing the template:

`clan flakes create --template=github:owner/repo#foo`

!!! Note "Implementation Note"
    Not all features of Nix's attribute selection are currently matched.
    There are minor differences; in case of unexpected behavior, please create an [issue](https://git.clan.lol/clan/clan-core/issues/new).
@@ -13,20 +13,15 @@ To define a service in Clan, you need to define two things:
- `clanModule` - defined by module authors
- `inventory` - defined by users

The `clanModule` is currently a plain NixOS module. It is conditionally imported into each machine depending on the `service` and `role`.

A `role` is a function of a machine within a service. For example in the `backup` service there are `client` and `server` roles.

The `inventory` contains the settings for the user/consumer of the module. It describes what `services` run on each machine and with which `roles`.

Additionally any `service` can be instantiated multiple times.

This ADR proposes that we change how to write a `clanModule`. The `inventory` should get a new attribute called `instances` that allows for configuration of these modules.

### Status Quo
@@ -71,15 +66,10 @@ in {

Problems with the current way of writing clanModules:

1. No way to retrieve the config of a single service instance, together with its name.
2. Directly exporting a single, anonymous nixosModule without any intermediary attribute layers doesn't leave room for exporting other inventory resources such as potentially `vars` or `homeManagerConfig`.
3. Can't access multiple config instances individually.
   Example:

   ```nix
   inventory = {
     services = {
@@ -96,119 +86,83 @@ Problems with the current way of writing clanModules:
     };
   };
   ```

   This doesn't work because all instance configs are applied to the same namespace, so this currently results in a conflict.
   Resolving this problem means that new inventory modules cannot be plain nixos modules anymore. If they are configured via `instances` / `instanceConfig` they cannot be configured without using the inventory. (There might be ways to inject instanceConfig but that requires knowledge of inventory internals.)

4. Writing modules for multiple instances is cumbersome. Currently the clanModule author has to write one or multiple `fold` operations for potentially every nixos option to define how multiple service instances merge into every single option. The new idea behind this ADR is to pull the common fold function into the outer context and provide it as a common helper. (See the example below; `perInstance` is analog to the well-known `perSystem` of flake-parts.)

5. Each role has a different interface. We need to render that interface into json-schema, which currently includes creating an unnecessary test machine. Defining the interface at a higher level (outside of any machine context) allows faster evaluation and isolation by design from any machine.
   This allows rendering the UI (options tree) of a service by just knowing the service and the corresponding roles without creating a dummy machine.

6. The interface of defining config is wrong. It is possible to define config that applies to multiple machines at once. It is possible to define config that applies to a machine as a whole. But this is wrong behavior because the options exist at the role level, so config must also always exist at the role level.
   Currently we merge options and config together but that may produce conflicts. Those module system conflicts are very hard to foresee since they depend on what roles exist at runtime.

## Proposed Change

We will create a new module class which is defined by `_class = "clan.service"` ([documented here](https://nixos.org/manual/nixpkgs/stable/#module-system-lib-evalModules-param-class)).

Existing clan modules will still work by continuing to be plain NixOS modules. All new modules can set `_class = "clan.service";` to use the proposed features.

In short, the change introduces a new module class that makes the currently necessary folding of `clan.service`s `instances` and `roles` a common operation. The module author can define the inner function of the fold operations, which is called a `clan.service` module.

There are the following attributes of such a module:

### `roles.<roleName>.interface`

Each role can have a different interface for how to be configured.
I.e.: a `client` role might have different options than a `server` role.

This attribute should be used to define `options`. (Not `config`!)

The end-user defines the corresponding `config`.

This submodule will be evaluated for each `instance role` combination and passed as argument into `perInstance`.

This submodule's `options` will be evaluated to build the UI for that module dynamically.

### **Result attributes**

Some common result attributes are produced by modules of this proposal; those will be referenced later in this document but are commonly defined as:

- `nixosModule` A single nixos module. (`{config, ...}:{ environment.systemPackages = []; }`)
- `services.<serviceName>` An attribute set of `_class = clan.service`. Which contains the same thing as this whole ADR proposes.
- `vars` To be defined. Reserved for now.

### `roles.<roleName>.perInstance`

This acts like a function that maps over all `service instances` of a given `role`.
It produces the previously defined **result attributes**.

I.e. this allows producing multiple `nixosModules`, one for every instance of the service.
Hence making multiple `service instances` convenient by leveraging the module-system merge behavior.

### `perMachine`

This acts like a function that maps over all `machines` of a given `service`.
It produces the previously defined **result attributes**.

I.e. this allows producing exactly one `nixosModule` per `service`.
Making it easy to set nixos-options only once if they have a one-to-one relation to a service being enabled.

Note: `lib.mkIf` can be used on i.e. `roleName` to make the scope more specific.
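Putting `interface`, `perInstance`, and `perMachine` together, a module of the proposed class might be shaped roughly like this (a sketch in the ADR's terminology; the argument names passed to the functions and the example option are assumptions):

```nix
# Sketch of the proposed `clan.service` module class — illustrative, not a tested module.
{
  _class = "clan.service";

  roles.client = {
    # defines `options` only; evaluated per instance/role combination
    interface = { lib, ... }: {
      options.destination = lib.mkOption { type = lib.types.str; };
    };
    # maps over all service instances of the "client" role
    perInstance = { instanceName, settings, ... }: {
      # one nixosModule per instance, merged by the module system
      nixosModule = { ... }: { };
    };
  };

  # maps over all machines of the service: exactly one nixosModule per service
  perMachine = { machine, ... }: {
    nixosModule = { ... }: { };
  };
}
```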

### `services.<serviceName>`

This allows to define nested services.
I.e. the *service* `backup` might define a nested *service* `ssh` which sets up an ssh connection.

This can be defined in `perMachine` and `perInstance`.

- For every `instance` a given `service` may add multiple nested `services`.
- A given `service` may add a static set of nested `services`; even if there are multiple instances of the same given service.

Q: Why is this not a top-level attribute?
A: Because nested service definitions may also depend on a `role` which must be resolved depending on `machine` and `instance`. The top-level module doesn't know anything about machines. Keeping the service layer machine agnostic allows us to build the UI for a module without adding any machines. (One of the problems with the current system.)

```
zerotier/default.nix
```

```nix
# Some example module
{
@@ -267,25 +221,15 @@ zerotier/default.nix

## Inventory.instances

This document also proposes to add a new attribute to the inventory that allows for exclusive configuration of the new modules.
This allows to better separate the new and the old way of writing and configuring modules, keeping the new implementation more focussed and keeping existing technical debt out from the beginning.

The following thoughts went into this:

- Getting rid of `<serviceName>`: Using only the attribute name (plain string) is not sufficient for defining the source of the service module. Encoding meta information into it would also require some extensible format specification and parser.
- Removing instanceConfig and machineConfig: There is no such config. Service configuration must always be role specific, because the options are defined on the role.
- Renaming `config` to `settings` or similar, since `config` is a module system internal name.
- Tags and machines should be an attribute set to allow setting `settings` on that level instead.

```nix
{
@@ -314,9 +258,7 @@ The following thoughts went into this:

## Iteration note

We want to implement the system as described. Once we have sufficient data on real world use-cases and modules we might revisit this document along with the updated implementation.

## Real world example

@@ -6,8 +6,7 @@ Accepted

## Context

In the long term we envision the clan application will consist of the following user facing tools.

- `CLI`
- `TUI`
@@ -15,20 +14,17 @@ user facing tools in the long term.
- `REST-API`
- `Mobile Application`

We are not yet sure whether all of those will exist, but the architecture should be generic such that they are possible without major changes to the underlying system.

## Decision

This leads to the conclusion that we should do `library` centric development, with the current `clan` python code being a library that can be imported to create various tools on top of it. All **CLI** or **UI** related parts should be moved out of the main library.

Imagine roughly the following architecture:

```mermaid
graph TD
%% Define styles
classDef frontend fill:#f9f,stroke:#333,stroke-width:2px;
@@ -70,18 +66,14 @@ graph TD
BusinessLogic --> NIX
```
With this very simple design it is ensured that all the basic features remain stable across all frontends. In the end it is straightforward to create python library function calls in a testing framework to ensure that kind of stability.

Integration tests and smaller unit-tests should both be utilized to ensure the stability of the library.
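As a sketch of what such frontend-independent stability tests could look like, the library function is exercised directly, bypassing every frontend. The `create_machine` function and its signature here are hypothetical stand-ins, not the actual clan API:

```python
def create_machine(name: str) -> dict:
    """Hypothetical stand-in for a clan library function."""
    if not name or not name.isidentifier():
        raise ValueError(f"invalid machine name: {name!r}")
    return {"name": name, "tags": []}

def test_create_machine_returns_record():
    # Any frontend (CLI, TUI, API) built on this call inherits the guarantee.
    machine = create_machine("jon")
    assert machine["name"] == "jon"

def test_create_machine_rejects_bad_name():
    # Error behavior is part of the stable contract, too.
    try:
        create_machine("")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```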
Note: Library functions don't have to be json-serializable in general.

Persistence includes but is not limited to: creating git commits, writing to inventory.json, reading and writing vars, and interacting with persisted data in general.

## Benefits / Drawbacks

@@ -89,51 +81,34 @@ general.
- (+) Consistency and inherent behavior
- (+) Performance & Scalability
- (+) Different frontends for different user groups
- (+) Documentation per library function makes it convenient to interact with the clan resources.
- (+) Testing the library ensures stability of the underlying functionality for all layers above.
- (-) Complexity overhead
- (-) library needs to be designed / documented
- (+) library can be well documented since it is a finite set of functions.
- (-) Error handling might be harder.
- (+) Common error reporting
- (-) different frontends need different features. The library must include them all.
- (+) All those core features must be implemented anyways.
- (+) VPN Benchmarking uses the existing libraries already and works relatively well.
## Implementation considerations

Not all required details that will need to change over time can be pointed out ahead of time. The goal of this document is to create a common understanding of how we would like our project to be structured. Any future commits should contribute to this goal.

Some ideas of what might need to change:
- Having separate locations or packages for the library and the CLI.
- Rename the `clan_cli` package to `clan` and move the `cli` frontend into a subfolder or a separate package.
- Python Argparse or other cli related code should not exist in the `clan` python library.
- `__init__.py` should be very minimal: only init the business logic models and resources. Note that all `__init__.py` files all the way up in the module tree are always executed as part of the python module import logic and thus should be as small as possible. For example, `from clan_cli.vars.generators import ...` executes both `clan_cli/__init__.py` and `clan_cli/vars/__init__.py` if any of those exist.
- `api` folder doesn't make sense since the python library `clan` is the api.
- Logic needed for the webui that performs json serialization and deserialization will live in some `json-adapter` folder or package.
- Code for serializing dataclasses and typed dictionaries is needed for the persistence layer. (i.e. for read-write of inventory.json)
- The inventory-json is a backend resource that is internal. Its logic includes merging, unmerging and partial updates while considering nix values and their priorities. Nobody should try to read or write to it directly. Instead there will be library methods, i.e. to add a `service` or to update/read/delete some information from it.
- Library functions should be carefully designed with suitable conventions for writing good api's in mind (i.e: https://swagger.io/resources/articles/best-practices-in-api-design/).
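The `__init__.py` execution behavior can be demonstrated with a throwaway package (the `demo_pkg` name is purely illustrative): importing a nested module runs every parent `__init__.py` along the import path.

```python
import os
import sys
import tempfile

# Build a throwaway package tree on disk to show that importing a nested
# module executes every parent __init__.py along the way.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "demo_pkg", "vars")
os.makedirs(pkg)
with open(os.path.join(root, "demo_pkg", "__init__.py"), "w") as f:
    f.write("print('demo_pkg/__init__.py executed')\n")
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("print('demo_pkg/vars/__init__.py executed')\n")
with open(os.path.join(pkg, "generators.py"), "w") as f:
    f.write("value = 42\n")

sys.path.insert(0, root)
# This single import triggers both __init__.py files (each prints a line)
# before generators.py is executed, which is why they should stay minimal.
from demo_pkg.vars.generators import value
```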
@@ -6,39 +6,27 @@ Proposed after some conversation between @lassulus, @Mic92, & @lopter.

## Context

It can be useful to refer to ADRs by their numbers, rather than their full title. To that end, short and sequential numbers are useful.

The issue is that an ADR number is effectively assigned when the ADR is merged; before being merged its number is provisional. Because multiple ADRs can be written at the same time, you end up with multiple provisional ADRs with the same number. For example, this is the third ADR-3:

1. ADR-3-clan-compat: see [#3212];
2. ADR-3-fetching-nix-from-python: see [#3452];
3. ADR-3-numbering-process: this ADR.

This situation makes it impossible to refer to an ADR by its number, and is why I (@lopter) went with the arbitrary number 7 in [#3196].

We could solve this problem by using the PR number as the ADR number (@lassulus). The issue is that PR numbers are getting big in clan-core, which does not make them easy to remember, or to use in conversation and code (@lopter).

Another approach would be to move the ADRs to a different repository; this would reset the counter back to 1, and make it straightforward to keep ADR and PR numbers in sync (@lopter). The issue then is that ADRs are not in context with their changes, which makes them more difficult to review (@Mic92).
## Decision

A third approach would be to:

1. Commit ADRs before they are approved, so that the next ADR number gets assigned;
2. Open a PR for the proposed ADR;
3. Update the ADR file committed in step 1, so that its markdown contents point to the PR that tracks it.

## Consequences

@@ -48,13 +36,12 @@ This makes it easier to refer to them in conversation or in code.

### You need to have commit access to get an ADR number assigned

This makes it more difficult for someone external to the project to contribute an ADR.

### Creating a new ADR requires multiple commits

Maybe a script or CI flow could help with that if it becomes painful.

[#3196]: https://git.clan.lol/clan/clan-core/pulls/3196/
[#3212]: https://git.clan.lol/clan/clan-core/pulls/3212/
[#3452]: https://git.clan.lol/clan/clan-core/pulls/3452/
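A minimal sketch of such a helper script, assuming ADR files live in one directory and are named `NN-title.md` (both the location and the naming scheme are assumptions, not the project's actual convention):

```python
import re
from pathlib import Path

def next_adr_number(adr_dir: str) -> int:
    """Return the next free ADR number based on existing file names.

    Files that don't match the NN-title.md pattern are ignored.
    """
    pattern = re.compile(r"^(\d+)-.*\.md$")
    numbers = [
        int(m.group(1))
        for p in Path(adr_dir).glob("*.md")
        if (m := pattern.match(p.name))
    ]
    return max(numbers, default=0) + 1
```

A CI job could call this to verify that a newly added ADR actually uses the next number in sequence.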
@@ -4,113 +4,83 @@ accepted

## Context

In our clan-cli we need to get a lot of values from nix into the python runtime. This is used to determine the hostname, the target ip address, scripts to generate vars, file locations, and many more.

Currently we use two different access methods:

### Method 1: deployment.json

A json file that serializes some predefined values into a JSON file as a build-time artifact.
Downsides:

- no access to flake level values
- all or nothing:
  - values are either cached via deployment.json or not, so we can only put cheap values into there
  - in the past, var generation scripts were added here, which added a huge build time overhead every time we wanted to do any action
- duplicated nix code
  - values need duplicated nix code: once to define them at the correct place in the module system (clan.core.vars.generators) and code to accumulate them again for the deployment.json (system.clan.deployment.data)
  - this duality adds unnecessary dependencies to the nixos module system

Benefits:

- Utilize `nix build` for caching the file.
- Caching mechanism is very simple.
### Method 2: Direct access

Directly calling the evaluator / build sandbox via `nix build` and `nix eval` within the Python code.

Downsides:

- Access is not cached: the static overhead (see below: ~1.5s) is present every time we invoke nix commands
- The static overhead obviously depends on which value we need to retrieve, since the `evalModules` overhead differs depending on whether we evaluate some attribute inside a machine or a flake attribute
- Accessing more and more attributes with this method increases the static overhead, which leads to a linear decrease in performance.
- Boilerplate for interacting with the CLI and error handling code is repeated every time.
Benefits:

- Simple and native interaction with the nix commands is rather intuitive
- Custom error handling for each attribute is easy

This system could be enhanced with custom nix expressions, which could be used in places where we don't want to put values into deployment.json or want to fetch flake level values. This also has some downsides:

- technical debt
  - we have to maintain custom nix expressions inside python code; embedding code is error prone and the language linters won't help you here, so errors are common and harder to debug
  - we need custom error reporting code in case something goes wrong, either because the value doesn't exist or because there is a reported build error
- no caching/custom caching logic
  - currently there is no infrastructure to cache those extra values, so we would need to store them somewhere; we could either enhance one of the many classes we have or not cache them at all
  - even if we implement caching for extra nix expressions, there can be no sharing between them. For example, if we have two nix expressions, one that fetches paths and values for all generators and a second that fetches only the values, we still need to execute both of them in both contexts, although the second one could be skipped if the first one is already cached
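To illustrate the per-attribute boilerplate that Method 2 implies, the pattern roughly amounts to repeating a snippet like this for every value (the helper name and the commented-out flake attribute are placeholders, not actual clan code):

```python
import json
import subprocess

def nix_eval_json(installable: str) -> object:
    """Evaluate a nix attribute and parse its JSON output.

    Every call pays the full nix evaluation overhead, and this
    subprocess plus error-handling boilerplate recurs per attribute.
    """
    proc = subprocess.run(
        ["nix", "eval", "--json", installable],
        capture_output=True,
        text=True,
    )
    if proc.returncode != 0:
        raise RuntimeError(f"nix eval failed: {proc.stderr.strip()}")
    return json.loads(proc.stdout)

# hostname = nix_eval_json(".#nixosConfigurations.jon.config.networking.hostName")
```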
### Method 3: nix select

Move all code that extracts nix values into a common class:

Downsides:

- added complexity for maintaining our own DSL

Benefits:

- we can implement an API (select DSL) to get those values from nix without writing complex nix expressions.
- we can implement caching of those values beyond the runtime of the CLI
- we can use precaching at different endpoints to eliminate most of the multiple nix evaluations (except in cases where we have to break the cache, or where we don't know yet whether we will need the value later and getting it is expensive).
## Decision

Use Method 3 (nix select) for extracting values out of nix.

This adds the Flake class in flake.py with a select method, which takes a selector string and returns a python dict.

Example:

```python
from clan_lib.flake import Flake
flake = Flake("github:lassulus/superconfig")
flake.select("nixosConfigurations.*.config.networking.hostName")
```

returns:

```
{
  "ignavia": "ignavia",
@@ -121,13 +91,7 @@ returns:

## Consequences

- Faster execution due to caching most things beyond a single execution; if no cache break happens, execution is basically instant, because we don't need to run nix again.
- Better error reporting: since all nix values go through one chokepoint, we can parse error messages in that chokepoint and report them in a more user friendly way, for example if a value is missing at the expected location inside the module system.
- less embedded nix code inside python code
- more portable CLI, since we need to import fewer modules into the module system and most things can be extracted by the python code directly
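The caching consequence can be sketched as follows. This is a deliberately simplified illustration of the chokepoint idea, not the real `Flake` implementation: it omits wildcard selectors, cache invalidation, and persistent storage, and the injectable evaluator is an assumption made for testability.

```python
import json
import subprocess

class CachedSelect:
    """Cache nix select results so repeated lookups skip evaluation."""

    def __init__(self, flake: str, evaluate=None):
        self.flake = flake
        self.cache: dict[str, object] = {}
        # Allow injecting an evaluator for testing; default shells out to nix.
        self.evaluate = evaluate or self._nix_eval

    def _nix_eval(self, selector: str) -> object:
        proc = subprocess.run(
            ["nix", "eval", "--json", f"{self.flake}#{selector}"],
            capture_output=True, text=True, check=True,
        )
        return json.loads(proc.stdout)

    def select(self, selector: str) -> object:
        if selector not in self.cache:  # only evaluate on a cache miss
            self.cache[selector] = self.evaluate(selector)
        return self.cache[selector]
```

Because every lookup funnels through `select`, this is also the single place where error messages from nix can be parsed and reported uniformly.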
@@ -6,16 +6,12 @@ accepted

## Context

Currently different operations (install, update) have different modes. Install always evals locally and pushes the derivation to a remote system; update has a configurable buildHost and targetHost. Confusingly, install always evals locally and update always evals on the targetHost, so hosts have different semantics in different operation contexts.

## Decision

Add evalHost to make this clear and configurable for the user. This would leave us with:

- evalHost
- buildHost
@@ -23,29 +19,18 @@ us with:

for the update and install operation.

`evalHost` would be the machine that evaluates the nixos configuration. If evalHost is not localhost, we upload the non secret vars and the nix archived flake (this is usually the same operation) to the evalHost.

`buildHost` would be what is used by the machine to build; it would correspond to `--build-host` on the nixos-rebuild command or `--builders` for nix build.

`targetHost` would be the machine where the closure gets copied to and activated (either through install or switch-to-configuration). It corresponds to `--target-host` for nixos-rebuild or where we usually point `nixos-anywhere` to.

These hosts could be set either through CLI args (or forms for the GUI) or via the inventory. If both are given, the CLI args take precedence.

## Consequences

We now support every deployment model of every tool out there with a bunch of simple flags. The semantics are clearer and we can write some nice documentation.

The install code has to be reworked, since nixos-anywhere has problems with evalHost and targetHost being the same machine, so we would need to kexec first and use the kexec image (or installer) as the evalHost afterwards.

In cases where the evalHost doesn't have access to the targetHost or buildHost, we need to set up temporary entries for the lifetime of the command.
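The precedence rule between CLI args and inventory can be sketched like this (names and the dict-shaped inputs are illustrative assumptions, not the actual clan implementation):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Hosts:
    """Resolved deployment hosts; None means localhost."""
    eval_host: Optional[str]
    build_host: Optional[str]
    target_host: Optional[str]

def resolve_hosts(cli: dict, inventory: dict) -> Hosts:
    # CLI arguments take precedence over inventory-defined hosts;
    # anything unset in both falls through to None (localhost).
    def pick(key: str) -> Optional[str]:
        return cli.get(key) or inventory.get(key)
    return Hosts(pick("eval_host"), pick("build_host"), pick("target_host"))
```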
@@ -1,15 +1,13 @@

# Architecture Decision Records

This section contains the architecture decisions that have been reviewed and generally agreed upon.

## What is an ADR?

> An architecture decision record (ADR) is a document that captures an important architecture decision made along with its context and consequences.

!!! Note
    For further reading about ADRs we recommend [architecture-decision-record](https://github.com/joelparkerhenderson/architecture-decision-record).

## Crafting a new ADR
@@ -1,9 +1,7 @@

# Decision record template by Michael Nygard

This is the template in [Documenting architecture decisions - Michael Nygard](http://thinkrelevance.com/blog/2011/11/15/documenting-architecture-decisions). You can use [adr-tools](https://github.com/npryce/adr-tools) for managing the ADR files.

In each ADR file, write these sections:

@@ -11,8 +9,7 @@ In each ADR file, write these sections:

## Status

What is the status, such as proposed, accepted, rejected, deprecated, superseded, etc.?

## Context
|||||||
@@ -1,14 +1,11 @@
|
|||||||
## Using Age Plugins
|
## Using Age Plugins
|
||||||
|
|
||||||
If you wish to use a key generated using an [age plugin] as your admin key,
|
If you wish to use a key generated using an [age plugin] as your admin key, extra care is needed.
|
||||||
extra care is needed.
|
|
||||||
|
|
||||||
You must **precede your secret key with a comment that contains its
|
You must **precede your secret key with a comment that contains its corresponding recipient**.
|
||||||
corresponding recipient**.
|
|
||||||
|
|
||||||
This is usually output as part of the generation process and is only required
|
This is usually output as part of the generation process
|
||||||
because there is no unified mechanism for recovering a recipient from a plugin
|
and is only required because there is no unified mechanism for recovering a recipient from a plugin secret key.
|
||||||
secret key.
|
|
||||||
|
|
||||||
Here is an example:
|
Here is an example:
|
||||||
|
|
||||||
@@ -17,16 +14,15 @@ Here is an example:
|
|||||||
AGE-PLUGIN-FIDO2-HMAC-1QQPQZRFR7ZZ2WCV...
|
AGE-PLUGIN-FIDO2-HMAC-1QQPQZRFR7ZZ2WCV...
|
||||||
```
|
```
|
||||||
|
|
||||||
!!! note The comment that precedes the plugin secret key need only contain the
|
!!! note
|
||||||
recipient. Any other text is ignored.
|
The comment that precedes the plugin secret key need only contain the recipient.
|
||||||
|
Any other text is ignored.
|
||||||
|
|
||||||
```
|
In the example above, you can specify `# recipient: age1zdy...`, `# public: age1zdy....` or even
|
||||||
In the example above, you can specify `# recipient: age1zdy...`, `# public: age1zdy....` or even
|
just `# age1zdy....`
|
||||||
just `# age1zdy....`
|
|
||||||
```
|
|
||||||
|
|
||||||
You will need to add an entry into your `flake.nix` to ensure that the necessary
|
You will need to add an entry into your `flake.nix` to ensure that the necessary `age` plugins
|
||||||
`age` plugins are loaded when using Clan:
|
are loaded when using Clan:
|
||||||
|
|
||||||
```nix title="flake.nix"
|
```nix title="flake.nix"
|
||||||
{
|
{
|
||||||
@@ -1,3 +1,4 @@

This guide explains how to set up and manage
[BorgBackup](https://borgbackup.readthedocs.io/) for secure, efficient backups
in a clan network. BorgBackup provides:

@@ -33,13 +34,11 @@ inventory.instances = {

The input should be named according to your flake input. Jon is configured as a client machine with a destination pointing to a Hetzner Storage Box.

To see a list of all possible options go to [borgbackup clan service](../reference/clanServices/borgbackup.md).

## Roles

A Clan Service can have multiple roles; each role applies different nix config to the machine.

### 1. Client

@@ -62,8 +61,8 @@ Destinations can be:

## State management

Backups are based on [states](../reference/clan.core/state.md). A state defines which files should be backed up and how these files are obtained through pre/post backup and restore scripts.

Here's an example for a user application `linkding`:

@@ -124,8 +123,7 @@ clan.core.state.linkding = {

## Managing backups

In this section we go over how to manage your collection of backups with the clan command.

### Listing states

@@ -196,3 +194,6 @@ To restore only a specific service (e.g., `linkding`):

```bash
clan backups restore --service linkding jon borgbackup storagebox::u444061@u444061.your-storagebox.de:/./borgbackup::jon-storagebox-2025-07-24T06:02:35
```
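The state concept above can be made concrete with a small sketch of what a backup driver does with a state definition: run the pre-backup hook, then back up each declared folder. The field names below are hypothetical placeholders and do not mirror the exact `clan.core.state` options:

```python
# Hypothetical state definition, loosely modeled on the linkding example
# mentioned above. Field names are illustrative, not Clan's actual schema.
linkding_state = {
    "name": "linkding",
    "folders": ["/var/backup/linkding"],
    "preBackupScript": "systemctl stop linkding",
    "postRestoreScript": "systemctl start linkding",
}

def plan_backup(state: dict) -> list[str]:
    """Return the ordered steps a backup of this state would perform."""
    steps = [state["preBackupScript"]]  # quiesce the service first
    steps += [f"backup {folder}" for folder in state["folders"]]
    return steps

for step in plan_backup(linkding_state):
    print(step)
```

The restore path would run the steps in reverse: put the folders back, then run the post-restore script to start the service again.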
@@ -1,22 +1,22 @@

# Using the Inventory

Clan's inventory system is a composable way to define and deploy services across machines.

This guide shows how to **instantiate** a `clanService`, explains how service definitions are structured in your inventory, and how to pick or create services from modules exposed by flakes.

The term **Multi-host-modules** was introduced previously in the [nixus repository](https://github.com/infinisil/nixus) and represents a similar concept.

______________________________________________________________________

## Overview

Services are used in `inventory.instances`, and assigned to *roles* and *machines* -- meaning you decide which machines run which part of the service.

For example:

@@ -24,146 +24,155 @@ For example:

```nix
inventory.instances = {
  borgbackup = {
    roles.client.machines."laptop" = {};
    roles.client.machines."workstation" = {};

    roles.server.machines."backup-box" = {};
  };
}
```

This says: "Run borgbackup as a *client* on my *laptop* and *workstation*, and as a *server* on *backup-box*". `client` and `server` are roles defined by the `borgbackup` service.

## Module source specification

Each instance includes a reference to a **module specification** -- this is how Clan knows which service module to use and where it came from.

It is not required to specify the `module.input` parameter, in which case it defaults to the pre-provided services of clan-core. In a similar fashion, the `module.name` parameter can also be omitted; it will default to the name of the instance.

Example of instantiating a `borgbackup` service using `clan-core`:

```nix
inventory.instances = {
  borgbackup = { # <- Instance name
    # This can be partially/fully specified,
    # - If the instance name is not the name of the module
    # - If the input is not clan-core
    # module = {
    #   name = "borgbackup"; # Name of the module (optional)
    #   input = "clan-core"; # The flake input where the service is defined (optional)
    # };

    # Participation of the machines is defined via roles
    roles.client.machines."machine-a" = {};
    roles.server.machines."backup-host" = {};
  };
}
```

## Module Settings

Each role might expose configurable options. See clan's [clanServices reference](../reference/clanServices/index.md) for all available options.

Settings can be set per-machine or per-role. The latter is applied to all machines that are assigned to that role.

```nix
inventory.instances = {
  borgbackup = {
    # Settings for 'machine-a'
    roles.client.machines."machine-a" = {
      settings = {
        backupFolders = [
          /home
          /var
        ];
      };
    };

    # Settings for all machines of the role "server"
    roles.server.settings = {
      directory = "/var/lib/borgbackup";
    };
  };
}
```

## Tags

Tags can be used to assign multiple machines to a role at once. They can be thought of as a grouping mechanism.

For example, using the `all` tag for services that you want to be configured on all your machines is a common pattern.

The following example could be used to back up all your machines to a common backup server:

```nix
inventory.instances = {
  borgbackup = {
    # "All" machines are assigned to the borgbackup 'client' role
    roles.client.tags = [ "all" ];

    # But only one specific machine (backup-host) is assigned to the 'server' role
    roles.server.machines."backup-host" = {};
  };
}
```

## Sharing additional Nix configuration

Sometimes you need to add custom NixOS configuration alongside your clan services. The `extraModules` option allows you to include additional NixOS configuration that is applied for every machine assigned to that role.

There are multiple valid syntaxes for specifying modules:

```nix
inventory.instances = {
  borgbackup = {
    roles.client = {
      # Direct module reference
      extraModules = [ ../nixosModules/borgbackup.nix ];

      # Or using self (needs to be json serializable)
      # See the next example for a workaround.
      extraModules = [ self.nixosModules.borgbackup ];

      # Or an inline module definition (needs to be json compatible)
      extraModules = [
        {
          # Your module configuration here
          # ...
          #
          # If the module needs to contain non-serializable expressions:
          imports = [ ./path/to/non-serializable.nix ];
        }
      ];
    };
  };
}
```

## Picking a clanService

You can use services exposed by Clan's core module library, `clan-core`.

🔗 See: [List of Available Services in clan-core](../reference/clanServices/index.md)

## Defining Your Own Service

You can also author your own `clanService` modules.

🔗 Learn how to write your own service: [Authoring a service](../guides/services/community.md)

You might expose your service module from your flake — this makes it easy for other people to also use your module in their clan.

______________________________________________________________________

## 💡 Tips for Working with clanServices

- You can add multiple inputs to your flake (`clan-core`, `your-org-modules`, etc.) to mix and match services.
- Each service instance is isolated by its key in `inventory.instances`, allowing you to deploy multiple versions or roles of the same service type.
- Roles can target different machines or be scoped dynamically.

______________________________________________________________________
@@ -1,21 +1,16 @@

Here are some methods for debugging and testing the clan-cli.

## Using a Development Branch

To streamline your development process, I suggest not installing `clan-cli`. Instead, clone the `clan-core` repository and add `clan-core/pkgs/clan-cli/bin` to your PATH to use the checked-out version directly.

!!! Note
    After cloning, navigate to `clan-core/pkgs/clan-cli` and execute `direnv allow` to activate the devshell. This will set up a symlink to nixpkgs at a specific location; without it, `clan-cli` won't function correctly.

With this setup, you can easily use [breakpoint()](https://docs.python.org/3/library/pdb.html) to inspect the application's internal state as needed.

This approach is feasible because `clan-cli` only requires a Python interpreter and has no other dependencies.

```nix
pkgs.mkShell {
```

@@ -31,17 +26,11 @@ pkgs.mkShell {

## Debugging nixos-anywhere

If you encounter a bug in a complex shell script such as `nixos-anywhere`, start by replacing the `nixos-anywhere` command with a local checkout of the project; look in the [contribution](./CONTRIBUTING.md) section for an example.

## The Debug Flag

You can enhance your debugging process with the `--debug` flag in the `clan` command. When you add this flag to any command, it displays all subprocess commands initiated by `clan` in a readable format, along with the source code position that triggered them. This feature makes it easier to understand and trace what's happening under the hood.

```bash
$ clan machines list --debug 1 ↵
```

@@ -64,60 +53,46 @@ wintux

## VSCode

If you're using VSCode, it has a handy feature that makes paths to source code files clickable in the integrated terminal. Combined with the previously mentioned techniques, this allows you to open a Clan in VSCode, execute a command like `clan machines list --debug`, and receive a printed path to the code that initiates the subprocess. With the `Ctrl` key (or `Cmd` on macOS) and a mouse click, you can jump directly to the corresponding line in the code file and add a `breakpoint()` function to it, to inspect the internal state.

## Finding Print Messages

To trace the origin of print messages in `clan-cli`, you can enable special debugging features using environment variables:

- Set `TRACE_PRINT=1` to include the source location with each print message:

```bash
export TRACE_PRINT=1
```

When running commands with `--debug`, every print will show where it was triggered in the code.

- To see a deeper stack trace for each print, set `TRACE_DEPTH` to the desired number of stack frames (e.g., 3):

```bash
export TRACE_DEPTH=3
```
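Under the hood, this kind of tracing only needs the interpreter's stack introspection. The sketch below is a simplified illustration of the idea (not clan-cli's actual code): a print wrapper reports the source locations that triggered it, with a configurable depth analogous to `TRACE_DEPTH`:

```python
import inspect

def trace_print(message: str, depth: int = 1) -> list[str]:
    """Print `message` preceded by up to `depth` caller locations.

    Simplified illustration of TRACE_PRINT/TRACE_DEPTH-style tracing;
    not the actual clan-cli implementation.
    """
    lines = []
    # Index 0 is this function's own frame; skip it and keep `depth` callers
    for frame in inspect.stack()[1 : 1 + depth]:
        lines.append(f"{frame.filename}:{frame.lineno} in {frame.function}")
    for line in lines:
        print(line)
    print(message)
    return lines

def main() -> list[str]:
    return trace_print("hello from clan-cli", depth=2)

trace = main()
```

With `depth=2` the wrapper reports both `main` and the module-level call site, which is exactly the extra context a deeper `TRACE_DEPTH` buys you.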
### Additional Debug Logging

You can enable more detailed logging for specific components by setting these environment variables:

- `CLAN_DEBUG_NIX_SELECTORS=1` — verbose logs for flake.select operations
- `CLAN_DEBUG_NIX_PREFETCH=1` — verbose logs for flake.prefetch operations
- `CLAN_DEBUG_COMMANDS=1` — print the diffed environment of executed commands

Example:

```bash
export CLAN_DEBUG_NIX_SELECTORS=1
export CLAN_DEBUG_NIX_PREFETCH=1
export CLAN_DEBUG_COMMANDS=1
```

These options help you pinpoint the source and context of print messages and debug logs during development.

## Analyzing Performance

To understand what's causing slow performance, set the environment variable `export CLAN_CLI_PERF=1`. When you complete a clan command, you'll see a summary of various performance metrics, helping you identify what's taking up time.
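The idea behind such a metrics summary can be sketched in a few lines: time each named step with a monotonic clock, accumulate the totals, and print them sorted at the end. This is an illustrative toy, not clan-cli's actual instrumentation:

```python
import time
from collections import defaultdict

# Accumulated wall-clock time per named step (toy CLAN_CLI_PERF-style summary)
timings: defaultdict[str, float] = defaultdict(float)

def timed(name, fn, *args):
    """Run fn(*args), adding its elapsed time to the `name` bucket."""
    start = time.perf_counter()
    result = fn(*args)
    timings[name] += time.perf_counter() - start
    return result

def slow_step():
    time.sleep(0.01)  # stand-in for an expensive operation
    return "done"

timed("slow_step", slow_step)

# Print the summary, slowest step first
for name, seconds in sorted(timings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {seconds:.3f}s")
```

A real CLI would wrap subprocess calls and flake evaluations this way, so the final summary points directly at the dominant cost.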
## See all possible packages and tests

@@ -127,8 +102,7 @@ To quickly show all possible packages and tests execute:

```
nix flake show
```

Under `checks` you will find all tests that are executed in our CI. Under `packages` you find all our projects.

```
git+file:///home/lhebendanz/Projects/clan-core
```

@@ -163,22 +137,18 @@ git+file:///home/lhebendanz/Projects/clan-core

```
└───default: template: Initialize a new clan flake
```

You can execute every test separately by following the tree path, for example `nix run .#checks.x86_64-linux.clan-pytest -L`.

## Test Locally in Devshell with Breakpoints

To test the CLI locally in a development environment and set breakpoints for debugging, follow these steps:

1. Run the following command to execute your tests and allow for debugging with breakpoints:

```bash
cd ./pkgs/clan-cli
pytest -n0 -s --maxfail=1 ./tests/test_nameofthetest.py
```

You can place `breakpoint()` in your Python code where you want to trigger a breakpoint for debugging.

## Test Locally in a Nix Sandbox

@@ -196,21 +166,19 @@ nix build .#checks.x86_64-linux.clan-pytest-without-core

If you need to inspect the Nix sandbox while running tests, follow these steps:

1. Insert an endless sleep into your test code where you want to pause the execution. For example:

```python
import time
time.sleep(3600)  # Sleep for one hour
```

2. Use `cntr` and `psgrep` to attach to the Nix sandbox. This allows you to interactively debug your code while it's paused. For example:

```bash
psgrep <your_python_process_name>
cntr attach <container id, container name or process id>
```

Alternatively, you can use the [nix breakpoint hook](https://nixos.org/manual/nixpkgs/stable/#breakpointhook).
@@ -2,11 +2,9 @@

Each feature added to clan should be tested extensively via automated tests.

This document covers different methods of automated testing, including creating, running and debugging such tests.

In order to test the behavior of clan, different testing frameworks are used depending on the concern:

- NixOS VM tests: for high level integration
- NixOS container tests: for high level integration

@@ -15,48 +13,37 @@ depending on the concern:

## NixOS VM Tests

The [NixOS VM Testing Framework](https://nixos.org/manual/nixos/stable/index.html#sec-nixos-tests) is used to create high level integration tests, by running one or more VMs generated from a specified config. Commands can be executed on the booted machine(s) to verify a deployment of a service works as expected. All machines within a test are connected by a virtual network. Internet access is not available.

### When to use VM tests

- testing that a service defined through a clan module works as expected after deployment
- testing clan-cli subcommands which require accessing a remote machine

### When not to use VM tests

NixOS VM Tests are slow and expensive. They should only be used for testing high level integration of components.
VM tests should be avoided wherever it is possible to implement a cheaper unit test instead.

- testing detailed behavior of a certain clan-cli command -> use unit testing via pytest instead
- regression testing -> add a unit test

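For contrast with a VM test, a cheap pytest-style regression test is just a plain function with assertions. The example below is hypothetical (the helper and file name are made up, not taken from clan-core):

```python
# tests/test_example.py -- a minimal pytest-style unit test, the kind of
# cheap regression test recommended above. Hypothetical example.

def parse_machine_name(line: str) -> str:
    """Toy function under test: extract a machine name from 'name = {...}'."""
    return line.split("=")[0].strip()

def test_parse_machine_name():
    assert parse_machine_name("wintux = {};") == "wintux"

# pytest discovers test_* functions automatically; calling it directly here
# just demonstrates that the assertion passes.
test_parse_machine_name()
print("ok")
```

When a VM test catches a bug, distilling the failure into a function like this keeps the regression suite fast.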
### Finding examples for VM tests
|
### Finding examples for VM tests
|
||||||
|
|
||||||
Existing nixos vm tests in clan-core can be found by using ripgrep:
|
Existing nixos vm tests in clan-core can be found by using ripgrep:
|
||||||
|
|
||||||
```shellSession
|
```shellSession
|
||||||
rg self.clanLib.test.baseTest
|
rg self.clanLib.test.baseTest
|
||||||
```
|
```
|
||||||
|
|
||||||
### Locating definitions of failing VM tests

All NixOS VM tests in clan are exported as individual flake outputs under `checks.x86_64-linux.{test-attr-name}`. If a test fails in CI:

- look for the job name of the test near the top of the CI job page, for example `gitea:clan/clan-core#checks.x86_64-linux.borgbackup/1242`
- in this case `checks.x86_64-linux.borgbackup` is the attribute path
- note the last element of that attribute path, in this case `borgbackup`
- search for the attribute name inside the `/checks` directory via ripgrep

Example: locating the VM test named `borgbackup`:

```shellSession
$ rg "borgbackup =" ./checks
```

### Adding VM tests

Create a NixOS test module under `/checks/{name}/default.nix` and import it in `/checks/flake-module.nix`.

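To sketch the shape such a test module typically has (the module name, node layout, and machine configuration below are assumptions, and the wrapper used by clan-core, such as `self.clanLib.test.baseTest` from the ripgrep search above, may expect different arguments):

```nix
# /checks/my-service/default.nix -- hedged sketch, not verbatim clan-core code
{
  name = "my-service";

  # one virtual machine named "machine"; tests may define several nodes
  nodes.machine =
    { ... }:
    {
      # NixOS configuration of the machine under test
      services.openssh.enable = true;
    };

  # python script driving the test
  testScript = ''
    start_all()
    machine.wait_for_unit("multi-user.target")
  '';
}
```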
### Running VM tests

```shellSession
nix build .#checks.x86_64-linux.{test-attr-name}
```

(replace `{test-attr-name}` with the name of the test)

### Debugging VM tests

The following techniques can be used to debug a VM test:

#### Print Statements

Locate the definition (see above) and add print statements, for example `print(client.succeed("systemctl --failed"))`, then re-run the test via `nix build` (see above).

#### Interactive Shell

- Execute the VM test outside the nix sandbox via the following command:
  `nix run .#checks.x86_64-linux.{test-attr-name}.driver -- --interactive`
- Then run the commands in the machines manually, for example:

```python3
start_all()
```

#### Breakpoints

To get an interactive shell at a specific line in the VM test script, add a `breakpoint()` call before the line to debug, then run the test outside of the sandbox via:
`nix run .#checks.x86_64-linux.{test-attr-name}.driver`

## NixOS Container Tests

These are very similar to NixOS VM tests in that they run virtualized NixOS machines, but instead of VMs they use containers, which are much cheaper to launch. As of now the container test driver is a downstream development in clan-core. Basically everything stated under the NixOS VM tests section applies here, except for some limitations.

Limitations:

- Cannot run in interactive mode; however, while the container test runs, it logs an `nsenter` command that can be used to log into each of the containers.
- setuid binaries don't work

### Where to find examples for NixOS container tests

Existing NixOS container tests in clan-core can be found by using `ripgrep`:

```shellSession
rg self.clanLib.test.containerTest
```

## Python tests via pytest

Since the Clan CLI is written in python, the `pytest` framework is used to define unit tests and integration tests via python.

Due to superior efficiency, use python tests for:

- writing unit tests for python functions and modules, or bugfixes of such
- all integration tests that do not require building or running a NixOS machine
- impure integration tests that require internet access (very rare, try to avoid)

### When not to use python tests

- integration tests that require building or running a NixOS machine (use NixOS VM or container tests instead)
- testing behavior of a nix function or library (use nix eval tests instead)

### Finding examples of python tests

Existing python tests in clan-core can be found by using `ripgrep`:

```shellSession
rg "import pytest"
```

### Locating definitions of failing python tests

If any python test fails in the CI pipeline, an error message like this can be found at the end of the log:

```
...
FAILED tests/test_machines_cli.py::test_machine_delete - clan_lib.errors.ClanError: Template 'new-machine' not in 'inputs.clan-core
...
```

In this case the test is defined in the file `/tests/test_machines_cli.py` via the test function `test_machine_delete`.

### Adding python tests

If a specific python module is tested, the test should be located near the tested module in a subdirectory called `./tests`. If the test is not clearly related to a specific module, put it in the top-level `./tests` directory of the tested python package. For `clan-cli` this would be `/pkgs/clan-cli/clan_cli/tests`. All filenames must be prefixed with `test_` and test functions prefixed with `test_` for pytest to discover them.

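As a minimal sketch of these naming conventions (the file name, helper function, and test below are hypothetical, not actual clan-cli code):

```python
# tests/test_slugify.py -- hypothetical example following the rules above

def slugify(name: str) -> str:
    """Toy helper standing in for real code under test."""
    return name.lower().replace(" ", "-")


def test_slugify() -> None:
    # pytest discovers this because both the file name and the
    # function name are prefixed with `test_`
    assert slugify("My Machine") == "my-machine"
```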
### Running python tests

#### Running all python tests

To run all python tests which are executed in the CI pipeline locally, use this `nix build` command:

```shellSession
nix build .#checks.x86_64-linux.clan-pytest-{with,without}-core
```

To run a specific python test outside the nix sandbox:

1. Enter the development environment of the python package, by either:

    - Having direnv enabled and entering the directory of the package (eg. `/pkgs/clan-cli`)
    - Or using the command `select-shell {package}` in the top-level dev shell of clan-core (eg. `select-shell clan-cli`)

2. Execute the test via pytest by issuing
   `pytest ./path/to/test_file.py::test_function_name -s -n0`

The flags `-sn0` are useful to forward all stdout/stderr output to the terminal and to be able to debug interactively via `breakpoint()`.

### Debugging python tests

To debug a specific python test, find its definition (see above) and make sure to enter the correct dev environment for that python package.

Modify the test and add `breakpoint()` statements to it.

Execute the test using the flags `-sn0` in order to get an interactive shell at the breakpoint:

```shellSession
pytest ./path/to/test_file.py::test_function_name -sn0
```

Failing nix eval tests look like this:

```
> error: Tests failed
```

To locate the definition, find the flake attribute name of the failing test near the top of the CI job page, for example `gitea:clan/clan-core#checks.x86_64-linux.eval-lib-values/1242`.

In this case `eval-lib-values` is the attribute we are looking for.

```shellSession
lib/values/flake-module.nix
grmpf@grmpf-nix ~/p/c/clan-core (test-docs)>
```

In this case the test is defined in the file `lib/values/flake-module.nix` line 21.

### Adding nix eval tests

In clan core, the following pattern is usually followed:

- tests are put in a `test.nix` file
- a CI job is exposed via a `flake-module.nix`
- that `flake-module.nix` is imported via the `flake.nix` at the root of the project

For example see `/lib/values/{test.nix,flake-module.nix}`.

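A `test.nix` in this pattern is typically an attribute set of `expr`/`expected` pairs as checked by `nix-unit`; a hedged minimal sketch with made-up test names:

```nix
# test.nix -- hypothetical nix-unit style tests; the names and values
# are illustrative, not taken from clan-core
{
  test_addition = {
    expr = 1 + 1;
    expected = 2;
  };

  test_attr_merge = {
    expr = { a = 1; } // { b = 2; };
    expected = { a = 1; b = 2; };
  };
}
```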
### Running nix eval tests

Since all nix eval tests are exposed via the flake outputs, they can be run via `nix build`:

```shellSession
nix build .#checks.x86_64-linux.{test-attr-name}
```

For quicker iteration times, instead of `nix build` use the `nix-unit` command available in the dev environment. Example:

```shellSession
nix-unit --flake .#legacyPackages.x86_64-linux.{test-attr-name}
```

### Debugging nix eval tests

Follow the instructions above to find the definition of the test, then use one of the following techniques:

#### Print debugging

Add `lib.trace` or `lib.traceVal` statements in order to print some variables during evaluation.

#### Nix repl

Use `nix repl` to evaluate and inspect the test.

Each test consists of an `expr` (expression) and an `expected` field. `nix-unit` simply checks if `expr == expected` and prints the diff if that's not the case.

`nix repl` can be used to inspect an `expr` manually, or any other variables that you choose to expose.

Example:
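A hedged sketch of such a session (the file path, its arguments, and the test name are assumptions):

```shellSession
$ nix repl
nix-repl> tests = import ./lib/values/test.nix { }   # path and arguments are assumptions
nix-repl> builtins.attrNames tests
nix-repl> tests.test_something.expr                  # inspect a single expr manually
```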
---

This guide provides an example setup for a single-disk ZFS system with native encryption, accessible for decryption remotely.

!!! Warning
    This configuration only applies to `systemd-boot` enabled systems and **requires** UEFI booting.

Replace the highlighted lines with your own disk-id. You can find out your disk-id by executing:

```bash
lsblk --output NAME,ID-LINK,FSTYPE,SIZE,MOUNTPOINT
```

=== "**Single Disk**"

    Below is the configuration for `disko.nix`

    ```nix hl_lines="13 53"
    --8<-- "docs/code-examples/disko-single-disk.nix"
    ```

=== "**Raid 1**"

    Below is the configuration for `disko.nix`

    ```nix hl_lines="13 53 54"
    --8<-- "docs/code-examples/disko-raid.nix"
    ```

Below is the configuration for `initrd.nix`. Replace `<yourkey>` with your ssh public key. Replace `kernelModules` with the ethernet module loaded on your target machine.

```nix hl_lines="18 29"
{config, pkgs, ...}:
# ...
```

## Copying SSH Public Key

Before starting the installation process, ensure that the SSH public key is copied to the NixOS installer.

1. Copy your public SSH key to the installer, if it has not been copied already:

```bash
blkdiscard /dev/disk/by-id/<installdisk>
```

4. Run `clan machines install`, only running kexec and disko, with the following command:

```bash
clan machines install gchq-local --target-host root@nixos-installer --phases kexec,disko
```

```bash
zfs set keylocation=prompt zroot/root
CTRL+D
```

4. Locally generate ssh host keys. You only need to generate ones for the algorithms you're using in `authorizedKeys`.

```bash
ssh-keygen -q -N "" -C "" -t ed25519 -f ./initrd_host_ed25519_key
ssh-keygen -q -N "" -C "" -t rsa -b 4096 -f ./initrd_host_rsa_key
```

5. Securely copy your local initrd ssh host keys to the installer's `/mnt` directory:

```bash
scp ./initrd_host* root@nixos-installer.local:/mnt/var/lib/
```

6. Install NixOS to the mounted partitions:

```bash
clan machines install gchq-local --target-host root@nixos-installer --phases install
```

7. After the installation process, unmount `/mnt/boot`, change the ZFS mountpoints and unmount all the ZFS volumes by exporting the zpool:

```bash
umount /mnt/boot
```

```bash
ssh -p 7172 root@192.168.178.141
systemd-tty-ask-password-agent
```

After completing these steps, your NixOS should be successfully installed and ready for use.

**Note:** Replace `root@nixos-installer.local` and `192.168.178.141` with the appropriate user and IP addresses for your setup. Also, adjust `<SYS_PATH>` to reflect the correct system path for your environment.

---

!!! Danger ":fontawesome-solid-road-barrier: Under Construction :fontawesome-solid-road-barrier:"
    Currently under construction, use with caution.

    :fontawesome-solid-road-barrier: :fontawesome-solid-road-barrier: :fontawesome-solid-road-barrier:

## Structure

A disk template consists of exactly two files:

## `default.nix`

Placeholders are filled with their machine specific options when a template is used for a machine.

The user can choose any valid options from the hardware report.

The file itself is then copied to `machines/{machineName}/disko.nix` and will be automatically loaded by the machine.

`single-disk/default.nix`

```
{
  disko.devices = {
    # ...
  };
}
```

## Placeholders

Each template must declare the options of its placeholders depending on the hardware-report.

`api/disk.py`

```py
templates: dict[str, dict[str, Callable[[dict[str, Any]], Placeholder]]] = {
    "single-disk": {
        # ...
    },
}
```

Introducing new local or global placeholders requires contributing to clan-core `api/disks.py`.

### Predefined placeholders

Some placeholders provide predefined functionality:

- `uuid`: In most cases we recommend adding a unique id to all disks. This prevents the system from mistakenly booting from, e.g., hot-plugged devices.

```
disko.devices = {
  disk = {
    # ...
  };
};
```

## Readme

The readme frontmatter must be of the same format as modules frontmatter.

```
Use this schema for simple setups where ....
```

The format and fields of this file are not clear yet. We might change that once fully implemented.