Compare commits: `push-qxwmr...mdformat` (2 commits)

| Author | SHA1 | Date |
|---|---|---|
| | 41c52197ea | |
| | a9c53b8b1e | |
@@ -1,4 +1,6 @@
# Contributing to Clan

<!-- Local file: docs/CONTRIBUTING.md -->

Go to the Contributing guide at
https://docs.clan.lol/guides/contributing/CONTRIBUTING
@@ -16,4 +16,3 @@ FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
README.md

@@ -1,45 +1,69 @@
# Clan core repository

Welcome to the Clan core repository, the heart of the
[clan.lol](https://clan.lol/) project! This monorepo is the foundation of Clan,
a revolutionary open-source project aimed at restoring fun, freedom, and
functionality to computing. Here, you'll find all the essential packages, NixOS
modules, CLI tools, and tests needed to contribute to and work with the Clan
project. Clan leverages the Nix system to ensure reliability, security, and
seamless management of digital environments, putting the power back into the
hands of users.

## Why Clan?

Our mission is simple: to democratize computing by providing tools that empower
users, foster innovation, and challenge outdated paradigms. Clan represents our
contribution to a future where technology serves humanity, not the other way
around. By participating in Clan, you're joining a movement dedicated to
creating a secure, user-empowered digital future.

## Features of Clan

- **Full-Stack System Deployment:** Utilize Clan's toolkit alongside Nix's
  reliability to build and manage systems effortlessly.
- **Overlay Networks:** Secure, private communication channels between devices.
- **Virtual Machine Integration:** Seamless operation of VM applications within
  the main operating system.
- **Robust Backup Management:** Long-term, self-hosted data preservation.
- **Intuitive Secret Management:** Simplified encryption and password management
  processes.

## Getting started with Clan

If you're new to Clan and eager to dive in, start with our quickstart guide and
explore the core functionalities that Clan offers:

- **Quickstart Guide**: Check out
  [getting started](https://docs.clan.lol/#starting-with-a-new-clan-project)<!-- [docs/site/index.md](docs/site/index.md) -->
  to get up and running with Clan in no time.

### Managing secrets

In the Clan ecosystem, security is paramount. Learn how to handle secrets
effectively:

- **Secrets Management**: Securely manage secrets by consulting
  [Vars](https://docs.clan.lol/concepts/generators/)<!-- [secrets.md](docs/site/concepts/generators.md) -->.

### Contributing to Clan

The Clan project thrives on community contributions. We welcome everyone to
contribute and collaborate:

- **Contribution Guidelines**: Make a meaningful impact by following the steps
  in
  [contributing](https://docs.clan.lol/contributing/contributing/)<!-- [contributing.md](docs/CONTRIBUTING.md) -->.

## Join the revolution

Clan is more than a tool; it's a movement towards a better digital future. By
contributing to the Clan project, you're part of changing technology for the
better, together.

### Community and support

Connect with us and the Clan community for support and discussion:

- [Matrix channel](https://matrix.to/#/#clan:clan.lol) for live discussions.
- IRC bridge on [hackint#clan](https://chat.hackint.org/#/connect?join=clan) for
  real-time chat support.
@@ -1,10 +1,6 @@
---
description = "Set up dummy-module"
categories = ["System"]
features = [ "inventory" ]

[constraints]
roles.admin.min = 1
roles.admin.max = 1
---
@@ -1,15 +1,18 @@
A Dynamic-DNS (DDNS) service continuously keeps one or more DNS records in sync
with the current public IP address of your machine.\
In *clan* this service is backed by
[qdm12/ddns-updater](https://github.com/qdm12/ddns-updater).

> Info\
> ddns-updater itself is **heavily opinionated and version-specific**. Whenever
> you need the exhaustive list of flags or provider-specific fields, refer to
> its *versioned* documentation – **not** the GitHub README.

---

# 1. Configuration model

Internally ddns-updater consumes a single file named `config.json`.\
A minimal configuration for the registrar *Namecheap* looks like:

```json
@@ -41,16 +44,17 @@ Another example for *Porkbun*:
}
```
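The actual JSON is elided in this diff; for orientation, a minimal `config.json` of the shape described above might look like the following sketch. The field names are assumptions based on ddns-updater's provider documentation and vary by provider and version:

```json
{
  "settings": [
    {
      "provider": "namecheap",
      "domain": "example.com",
      "host": "@",
      "password": "<dynamic-dns-password>"
    }
  ]
}
```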
When you write a `clan.nix` the **common** fields (`provider`, `domain`,
`period`, …) are already exposed as typed *Nix options*.\
Registrar-specific or very new keys can be passed through an open attribute set
called **extraSettings**.

---
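To illustrate the split between typed options and the `extraSettings` pass-through, a hypothetical settings fragment might look like this (the attribute layout here is an assumption, not verified against the module):

```nix
{
  # common, typed Nix options
  provider = "namecheap";
  domain = "example.com";
  period = "15m";

  # registrar-specific or very new keys, passed through verbatim
  extraSettings = {
    host = "@";
  };
}
```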
# 2. Full Porkbun example

Manage three records – `@`, `home` and `test` – of the domain `jon.blog` and
refresh them every 15 minutes:

```nix title="clan.nix" hl_lines="10-11"
inventory.instances = {
@@ -80,7 +84,8 @@ inventory.instances = {
};
```
1. `secret_field_name` tells the *vars-generator* to store the entered secret
   under the specified JSON field name in the configuration.
2. ddns-updater allows multiple hosts by separating them with a comma.
3. The `api_key` above is *public*; the corresponding **private key** is
   retrieved through `secret_field_name`.
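The body of the `clan.nix` example is elided in this diff; a hypothetical shape consistent with the three footnotes above could be the following (all attribute names here are assumptions):

```nix
inventory.instances.dyndns = {
  module.name = "dyndns";
  roles.default.settings = {
    period = "15m";
    settings."jon.blog" = {
      provider = "porkbun";
      domain = "jon.blog";
      host = "@,home,test";                 # (2) comma-separated hosts
      api_key = "pk1_…";                    # (3) the *public* API key
      secret_field_name = "secret_api_key"; # (1) JSON field for the private key
    };
  };
};
```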
@@ -1,4 +1,5 @@
This service will automatically set the emergency access password if your system
fails to boot.

## Usage
@@ -1,5 +1,6 @@
The importer module allows users to configure importing modules in a flexible
and structured way. It exposes the `extraModules` functionality of the
inventory, without any added configuration.

## Usage
@@ -21,6 +22,6 @@ inventory.instances = {
};
```

This will import the module `modules/base.nix` to all machines that have the
`all` tag, which by default is every machine managed by the clan. It will also
import the module at `modules/zone1.nix` for all machines tagged with `zone1`.
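A hypothetical pair of importer instances matching the description above (attribute names are assumptions based on the surrounding text, not verified against the module):

```nix
inventory.instances = {
  base = {
    module.name = "importer";
    roles.default.tags.all = { };                        # every machine
    roles.default.extraModules = [ "modules/base.nix" ];
  };
  zone1 = {
    module.name = "importer";
    roles.default.tags.zone1 = { };                      # only zone1 machines
    roles.default.extraModules = [ "modules/zone1.nix" ];
  };
};
```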
@@ -32,4 +32,5 @@ The service provides these commands:

- `localbackup-create`: Create a new backup
- `localbackup-list`: List available backups
- `localbackup-restore`: Restore from backup (requires NAME and FOLDERS
  environment variables)
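As a sketch of the environment-variable contract of `localbackup-restore` — the exact value formats are assumptions, only the variable names come from the text above:

```bash
# NAME selects which backup to restore, FOLDERS which paths to restore from it
NAME="20240501-120000" FOLDERS="/home" localbackup-restore
```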
@@ -14,4 +14,3 @@ inventory.instances = {

This service will eventually set up a monitoring stack for your clan. For now,
only a telegraf role is implemented, which exposes the currently deployed
version of your configuration, so it can be used to check for required updates.
@@ -1,8 +1,13 @@
The `sshd` Clan service manages SSH to make it easy to securely access your
machines over the internet. The service uses `vars` to store the SSH host keys
for each machine to ensure they remain stable across deployments.

`sshd` also generates SSH certificates for both servers and clients, allowing
for certificate-based authentication for SSH.

The service also disables password-based authentication over SSH; to access your
machines you'll need to use public key authentication or certificate-based
authentication.

## Usage
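A hypothetical minimal instance, assuming server and client roles as implied by the certificate description above (role and attribute names are not verified):

```nix
inventory.instances.sshd = {
  module.name = "sshd";
  roles.server.tags.all = { }; # machines that run the SSH daemon
  roles.client.tags.all = { }; # machines that trust the generated certificates
};
```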
@@ -15,6 +15,7 @@

Now the folder `~/syncthing/documents` will be shared with all your machines.

## Documentation

Extensive documentation is available on the
[Syncthing](https://docs.syncthing.net/) website.
@@ -46,7 +46,8 @@

## Migration from `root-password` module

The deprecated `clan.root-password` module has been replaced by the `users`
module. Here's how to migrate:

### 1. Update your flake configuration
@@ -1,17 +1,23 @@
# Wireguard VPN Service

This service provides a Wireguard-based VPN mesh network with automatic IPv6
address allocation and routing between clan machines.

## Overview

The wireguard service creates a secure mesh network between clan machines using
two roles:

- **Controllers**: Machines with public endpoints that act as connection points
  and routers
- **Peers**: Machines that connect through controllers to access the network

## Requirements

- Controllers must have a publicly accessible endpoint (domain name or static
  IP)
- Peers must be in networks where UDP traffic is not blocked (uses port 51820 by
  default, configurable)
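Tying the two roles and the endpoint requirement together, a hypothetical instance could look like this (option names are assumptions; only the 51820 default port is stated above):

```nix
inventory.instances.vpn = {
  module.name = "wireguard";
  roles.controller = {
    machines.server1.settings.endpoint = "vpn.example.com"; # public endpoint
    settings.port = 51820;                                  # default UDP port
  };
  roles.peer.tags.all = { };
};
```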
## Features

@@ -24,24 +30,33 @@ The wireguard service creates a secure mesh network between clan machines using

## Network Architecture

### IPv6 Address Allocation

- Base network: `/40` ULA prefix (deterministically generated from instance
  name)
- Controllers: Each gets a `/56` subnet from the base `/40`
- Peers: Each gets a unique 64-bit host suffix that is used in ALL controller
  subnets

### Addressing Design

- Each peer generates a unique host suffix (e.g., `:8750:a09b:0:1`)
- This suffix is appended to each controller's `/56` prefix to create unique
  addresses
- Example: peer1 with suffix `:8750:a09b:0:1` gets:
  - `fd51:19c1:3b:f700:8750:a09b:0:1` in controller1's subnet
  - `fd51:19c1:c1:aa00:8750:a09b:0:1` in controller2's subnet
- Controllers allow each peer's `/96` subnet for routing flexibility

### Connectivity

- Peers use a single WireGuard interface with multiple IPs (one per controller
  subnet)
- Controllers connect to ALL other controllers and ALL peers on a single
  interface
- Controllers have IPv6 forwarding enabled to route traffic between peers
- All traffic between peers flows through controllers
- Symmetric routing is maintained as each peer has consistent IPs across all
  controllers
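The suffix-composition rule can be sketched in a few lines; this merely mirrors the example addresses above and is not clan's actual implementation:

```python
def peer_address(controller_prefix: str, host_suffix: str) -> str:
    """Append a peer's 64-bit host suffix to a controller's /56 prefix."""
    return controller_prefix + host_suffix

# peer1's host suffix stays identical in every controller subnet
suffix = ":8750:a09b:0:1"
print(peer_address("fd51:19c1:3b:f700", suffix))  # fd51:19c1:3b:f700:8750:a09b:0:1
print(peer_address("fd51:19c1:c1:aa00", suffix))  # fd51:19c1:c1:aa00:8750:a09b:0:1
```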
### Example Network Topology

@@ -131,12 +146,14 @@ graph TB

### Advanced Options

### Automatic Hostname Resolution

The wireguard service automatically adds entries to `/etc/hosts` for all
machines in the network. Each machine is accessible via its hostname in the
format `<machine-name>.<instance-name>`.

For example, with an instance named `vpn`:

- `server1.vpn` - resolves to server1's IPv6 address
- `laptop1.vpn` - resolves to laptop1's IPv6 address
@@ -153,16 +170,19 @@ ssh user@laptop1.vpn

## Troubleshooting

### Check Wireguard Status

```bash
sudo wg show
```

### Verify IP Addresses

```bash
ip addr show dev <instance-name>
```

### Check Routing

```bash
ip -6 route show dev <instance-name>
```
@@ -170,19 +190,23 @@ ip -6 route show dev <instance-name>

### Interface Fails to Start: "Address already in use"

If you see this error in your logs:

```
wireguard: Could not bring up interface, ignoring: Address already in use
```

This means the configured port (default: 51820) is already in use by another
service or wireguard instance. Solutions:

1. **Check for conflicting wireguard instances:**

   ```bash
   sudo wg show
   sudo ss -ulnp | grep 51820
   ```

2. **Use a different port:**

   ```nix
   services.wireguard.myinstance = {
     roles.controller = {
@@ -192,12 +216,13 @@ This means the configured port (default: 51820) is already in use by another ser
   };
   ```

3. **Ensure unique ports across multiple instances:** If you have multiple
   wireguard instances on the same machine, each must use a different port.
### Key Management

Keys are automatically generated and stored in the clan vars system. To
regenerate keys:

```bash
# Regenerate keys for a specific machine and instance
```

@@ -214,4 +239,3 @@ clan machines update <machine-name>

- Public keys are distributed through the clan vars system
- Controllers must have publicly accessible endpoints
- Firewall rules are automatically configured for the Wireguard ports
@@ -13,32 +13,37 @@ inventory.instances = {
};
```

The input should be named according to your flake input. All machines will be
peers and connected to the zerotier network. Jon is the controller machine,
which will accept other machines into the network. Sara is a moon and sets the
`stableEndpoint` setting with a publicly reachable IP; the moon is optional.
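A hypothetical instance matching the prose above — Jon as controller, Sara as optional moon with a stable endpoint. The attribute names and the documentation IP are assumptions:

```nix
inventory.instances.zerotier = {
  module.name = "zerotier";
  roles.controller.machines.jon = { };
  roles.moon.machines.sara.settings.stableEndpoint = "203.0.113.5";
  roles.peer.tags.all = { };
};
```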
## Overview

This guide explains how to set up and manage a
[ZeroTier VPN](https://zerotier.com) for a clan network. Each VPN requires a
single controller and can support multiple peers and optional moons for better
connectivity.
## Roles

### 1. Controller

The [Controller](https://docs.zerotier.com/controller/) manages network
membership and is responsible for admitting new peers. When a new node is added
to the clan, the controller must be updated to ensure it has the latest member
list.

- **Key Points:**
  - Must be online to admit new machines to the VPN.
  - Existing nodes can continue to communicate even when the controller is
    offline.
### 2. Moons

[Moons](https://docs.zerotier.com/roots) act as relay nodes, providing direct
connectivity to peers via their public IP addresses. They enable devices that
are not publicly reachable to join the VPN by routing through these nodes.

- **Configuration Notes:**
  - Each moon must define its public IP address.

@@ -46,8 +51,8 @@ They enable devices that are not publicly reachable to join the VPN by routing t
### 3. Peers

Peers are standard nodes in the VPN. They connect to other peers, moons, and the
controller as needed.

- **Purpose:**
  - General role for all machines that are neither controllers nor moons.
@@ -1,9 +1,12 @@
# Contributing to Clan

**Continuous Integration (CI)**: Each pull request gets automatically tested by
gitea. If any errors are detected, it will block pull requests until they're
resolved.

**Dependency Management**: We use the [Nix package manager](https://nixos.org/)
to manage dependencies and ensure reproducibility, making your development
process more robust.

## Supported Operating Systems
@@ -16,68 +19,81 @@ Let's get your development environment up and running:

1. **Install Nix Package Manager**:

   - You can install the Nix package manager by either
     [downloading the Nix installer](https://github.com/DeterminateSystems/nix-installer/releases)
     or running this command:
     ```bash
     curl --proto '=https' --tlsv1.2 -sSf -L https://install.determinate.systems/nix | sh -s -- install
     ```

2. **Install direnv**:

   - To automatically set up a devshell on entering the directory:
     ```bash
     nix profile install nixpkgs#nix-direnv-flakes nixpkgs#direnv
     ```

3. **Add direnv to your shell**:

   - Direnv needs to [hook into your shell](https://direnv.net/docs/hook.html)
     to work. You can do this by executing the following command. The example
     below will set up direnv for `zsh` and `bash`:
     ```bash
     echo 'eval "$(direnv hook zsh)"' >> ~/.zshrc && echo 'eval "$(direnv hook bash)"' >> ~/.bashrc && eval "$SHELL"
     ```

4. **Allow the devshell**:

   - Go to `clan-core/pkgs/clan-cli` and run `direnv allow` to set up the
     necessary development environment to execute the `clan` command.

5. **Create a Gitea Account**:

   - Register an account on https://git.clan.lol
   - Fork the [clan-core](https://git.clan.lol/clan/clan-core) repository
   - Clone the repository and navigate to it
   - Add a new remote called upstream:
     ```bash
     git remote add upstream gitea@git.clan.lol:clan/clan-core.git
     ```

6. **Allow .envrc**:

   - When you enter the directory, you'll receive an error message like this:
     ```bash
     direnv: error .envrc is blocked. Run `direnv allow` to approve its content
     ```
   - Execute `direnv allow` to automatically execute the shell script `.envrc`
     when entering the directory.

7. **(Optional) Install Git Hooks**:

   - To syntax check your code you can run:
     ```bash
     nix fmt
     ```
   - To make this automatic, install the git hooks:
     ```bash
     ./scripts/pre-commit
     ```
## Related Projects

- **Data Mesher**: [data-mesher](https://git.clan.lol/clan/data-mesher)
- **Nixos Facter**:
  [nixos-facter](https://github.com/nix-community/nixos-facter)
- **Nixos Anywhere**:
  [nixos-anywhere](https://github.com/nix-community/nixos-anywhere)
- **Disko**: [disko](https://github.com/nix-community/disko)
## Fixing Bugs or Adding Features in Clan-CLI

If you have a bug fix or feature that involves a related project, clone the
relevant repository and replace its invocation in your local setup.

For instance, if you need to update `nixos-anywhere` in clan-cli, find its
usage:

```python
run(
@@ -102,7 +118,8 @@ run(

```

The \<path_to_local_src> doesn't need to be a local path; it can be any valid
[flakeref](https://nix.dev/manual/nix/2.26/command-ref/new-cli/nix3-flake.html#flake-references),
and can thus point to already opened PRs for testing, for example.
# Standards

@@ -110,4 +127,5 @@ And thus can point to test already opened PRs for example.

- Every new module name should be in kebab-case.
- Every fact definition, where possible, should be in kebab-case.
- Every vars definition, where possible, should be in kebab-case.
- Command line help descriptions should start capitalized and should not end in
  a period.
@@ -2,17 +2,27 @@

## General Description

Self-hosting refers to the practice of hosting and maintaining servers, networks, storage, services, and other types of infrastructure by oneself rather than relying on a third-party vendor. This could involve running a server from a home or business location, or leasing a dedicated server at a data center.

There are several reasons for choosing to self-host. These can include:

1. Cost savings: Over time, self-hosting can be more cost-effective, especially for businesses with large-scale needs.
2. Control: Self-hosting provides a greater level of control over the infrastructure and services. It allows the owner to customize the system to their specific needs.
3. Privacy and security: Self-hosting can offer improved privacy and security because data remains under the control of the host rather than being stored on third-party servers.
4. Independence: Being independent of third-party services can ensure that one's websites, applications, or services remain up even if the third-party service goes down.

## Stories

@@ -20,23 +30,32 @@ There are several reasons for choosing to self-host. These can include:

Alice wants to self-host a mumble server for her family.

- She visits the Clan website and follows the instructions on how to install Clan-OS on her server.
- Alice logs into a terminal on her server via SSH (alternatively, she uses the Clan GUI app).
- Using the Clan CLI or GUI tool, Alice creates a new private network for her family (VPN).
- Alice now browses a list of curated Clan modules and finds a module for mumble.
- She adds this module to her network using the Clan tool.
- After that, she uses the Clan tool to invite her family members to her network.
- Other family members join the private network via the invitation.
- By accepting the invitation, other members automatically install all required software to interact with the network on their machines.

### Story 2: Adding a service to an existing network

Alice wants to add a photos app to her private network.

- She uses the Clan CLI or GUI tool to manage her existing private Clan family network.
- She discovers a module for photoprism and adds it to her server using the tool.
- Other members who are already part of her network will receive a notification that an update is required to their environment.
- After accepting, all new software and services needed to interact with the new photoprism service will be installed automatically.

## Challenges

@@ -2,35 +2,53 @@

## General Description

Joining a self-hosted infrastructure involves connecting to a network, server, or system that is privately owned and managed, instead of being hosted by a third-party service provider. This could be a business's internal server, a private cloud setup, or any other private IT infrastructure that is not publicly accessible or controlled by outside entities.

## Stories

### Story 1: Joining a private network

Alice's son Bob has never heard of Clan, but receives an invitation URL from Alice, who has already set up a private Clan network for her family.

Bob opens the invitation link and lands on the Clan website. He quickly learns what Clan is and can see that the invitation is for a private network of his family that hosts a number of services, like a private voice chat and a photo sharing platform.

Bob decides to join the network and follows the instructions to install the Clan tool on his computer.

Feeding the invitation link to the Clan tool, Bob registers his machine with the network.

All programs required to interact with the network will be installed and configured automatically and securely.

Optionally, Bob can customize the configuration of these programs through a simplified configuration interface.

### Story 2: Receiving breaking changes

The Clan family network that Bob is part of received an update.

The existing photo sharing service has been removed and replaced with an alternative service. The new photo sharing service requires a different client app to view and upload photos.

Bob accepts the update. Now his environment will be updated. The old client software will be removed and the new one installed.

Because Bob has customized the previous photo viewing app, he is notified that this customization is no longer valid, as the software has been removed (deprecation message).

Optionally, Bob can now customize the new photo viewing software through his Clan configuration app or via a config file.

## Challenges

@@ -2,23 +2,30 @@

## General Description

Clan modules are pieces of software that can be used by admins to build a private or public infrastructure.

Clan modules should have the following properties:

1. Documented: It should be clear what the module does and how to use it.
2. Self-contained: A module should be usable as is. If it requires any other software or settings, those should be delivered with the module itself.
3. Simple to deploy and use: Modules should have opinionated defaults that just work. Any customization should be optional.

## Stories

### Story 1: Maintaining a shared folder module

Alice maintains a module for a shared folder service that she uses in her own infra, but also publishes for the community.

By following Clan module standards (backups, interfaces, output schema, etc.), other community members have an easy time re-using the module within their own infra.

She benefits from publishing the module, because other community members start using it and help to maintain it.

## Challenges

@@ -1,7 +1,10 @@

---
template: options.html
hide:
- navigation
- toc
---

<redoc src="/openapi.json" />

@@ -1,33 +1,35 @@

# Auto-included Files

Clan automatically imports specific files from each machine directory and registers them, reducing the need for manual configuration.

## Machine Registration

Every folder under `machines/{machineName}` is automatically registered as a Clan machine.

!!! info "Files loaded automatically for each machine"

    The following files are detected and imported for every Clan machine:

    - [x] `machines/{machineName}/configuration.nix`
      Main configuration file for the machine.
    - [x] `machines/{machineName}/hardware-configuration.nix`
      Hardware-specific configuration generated by NixOS.
    - [x] `machines/{machineName}/facter.json`
      Contains system facts. Automatically generated — see [nixos-facter](https://clan.lol/blog/nixos-facter/) for details.
    - [x] `machines/{machineName}/disko.nix`
      Disk layout configuration. See the [disko quickstart](https://github.com/nix-community/disko/blob/master/docs/quickstart.md) for more info.

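Putting the list above together, a machine directory with all auto-detected files might look like this (the machine name `jon` is just an example):

```
machines/
└── jon/
    ├── configuration.nix
    ├── hardware-configuration.nix
    ├── facter.json
    └── disko.nix
```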
## Other Auto-included Files

- **`inventory.json`**
  Managed by Clan's API.
  Merges with `clan.inventory` to extend the inventory.

- **`.clan-flake`**
  Sentinel file to be used to locate the root of a Clan repository.
  Falls back to `.git`, `.hg`, `.svn`, or `flake.nix` if not found.

@@ -1,14 +1,21 @@

# Generators

Defining a Linux user's password via the NixOS configuration previously required running `mkpasswd ...` and then copying the hash back into the Nix configuration.

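For context, the manual flow this replaces looks roughly like this (output shortened):

```shellSession
$ mkpasswd -m sha-512
Password:
$6$...
```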
In this example, we will guide you through automating that interaction using clan `vars`.

For a more general explanation of what clan vars are and how they work, see the intro of the [Reference Documentation for vars](../reference/clan.core/vars.md).

This guide assumes:

- Clan is set up already (see [Getting Started](../guides/getting-started/index.md))
- A machine has been added to the clan (see [Adding Machines](../guides/getting-started/add-machines.md))

This section will walk you through the following steps:
@@ -29,7 +36,9 @@ In this example, a `vars` `generator` is used to:

- store the hash in a file
- expose the file path to the nixos configuration

Create a new Nix file `root-password.nix` with the following content and import it into your `configuration.nix`:

```nix
{config, pkgs, ...}: {
  # ...
}
```

@@ -62,24 +71,29 @@ Create a new nix file `root-password.nix` with the following content and import

## Inspect the status

Executing `clan vars list`, you should see the following:

```shellSession
$ clan vars list my_machine
root-password/password-hash: <not set>
```

...indicating that the value `password-hash` for the generator `root-password` is not set yet.

## Generate the values

This step is not strictly necessary, as deploying the machine via `clan machines update` would trigger the generator as well.

To run the generator, execute `clan vars generate` for your machine:

```shellSession
$ clan vars generate my_machine
Enter the value for root-password/password-input (hidden):
```

After entering the value, the updated status is reported:

```shellSession
Updated var root-password/password-hash
old: <not set>
```

@@ -92,6 +106,7 @@ With the last step, a new file was created in your repository:

`vars/per-machine/my-machine/root-password/password-hash/value`

If the repository is a git repository, a commit was created automatically:

```shellSession
$ git log -n1
commit ... (HEAD -> master)
```

@@ -109,9 +124,13 @@ clan machines update my_machine

## Share root password between machines

If we just imported the `root-password.nix` from above into more machines, clan would ask for a new password for each additional machine.

If the root password should instead only be entered once and shared across all machines, the generator defined above needs to be declared as `shared`, by adding `share = true` to it:

```nix
{config, pkgs, ...}: {
  clan.vars.generators.root-password = {
    share = true;
    # ...
  };
}
```

Importing that shared generator into each machine will ensure that the password is only asked for once, when the first machine gets updated, and then re-used for all subsequent machines.

## Change the root password

Changing the password can be done via this command. Replace `my-machine` with your machine. If the password is shared, just pick any machine that has the generator declared.

```shellSession
$ clan vars generate my-machine --generator root-password --regenerate
...
Updated var root-password/password-hash
new: $6$OyoQtDVzeemgh8EQ$zRK...
```

## Further Reading

- [Reference Documentation for `clan.core.vars` NixOS options](../reference/clan.core/vars.md)

@@ -1,62 +1,79 @@

`Inventory` is an abstract service layer for consistently configuring distributed services across machine boundaries.

## Concept

Its concept is slightly different from what NixOS veterans might be used to. The inventory is a service definition on a higher level, not a machine configuration. This allows you to define a consistent and coherent service.

The inventory logic will automatically derive the modules and configurations to enable on each machine in your `clan` based on its `role`. This makes it super easy to set up distributed `services` such as backups, networking, traditional cloud services, or peer-to-peer based applications.

The following tutorial will walk through setting up a backup service, where the terms `Service` and `Role` will become more clear.

!!! example "Experimental status"

    The inventory implementation is not considered stable yet.
    We are actively soliciting feedback from users.

    Stabilizing the API is a priority.

## Prerequisites

- [x] [Add some machines](../guides/getting-started/add-machines.md) to your Clan.

## Services

The inventory defines `services`. Membership of `machines` is defined via `roles` exclusively.

See each [module's documentation](../reference/clanServices/index.md) for its available roles.

### Adding services to machines

A service can be added to one or multiple machines via `Roles`. Clan's `Role` interface provides sane defaults for a module; this allows the module author to reduce the configuration overhead to a minimum.

Each service can still be customized and configured according to the module's options.

- Per-instance configuration via `services.<serviceName>.<instanceName>.config`
- Per-role configuration via `services.<serviceName>.<instanceName>.roles.<roleName>.config`
- Per-machine configuration via `services.<serviceName>.<instanceName>.machines.<machineName>.config`

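As an illustrative sketch (service, instance, and machine names are hypothetical, and the options available under `config` depend on the respective module), the three levels combine like this:

```nix
{
  inventory.services.borgbackup.instance_1 = {
    # Applies to every machine participating in this instance.
    config = { };
    # Applies to all machines that have the "client" role.
    roles.client.config = { };
    # Applies only to the machine named "jon".
    machines.jon.config = { };
  };
}
```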
### Setting up the Backup Service

!!! Example "Borgbackup Example"

    To configure a service it needs to be added to the machine.
    It is required to assign the service (`borgbackup`) an arbitrary instance name (`instance_1`).

    See also: [Multiple Service Instances](#multiple-service-instances)

    ```{.nix hl_lines="6-7"}
    clan-core.lib.clan {
      inventory = {
        services = {
          borgbackup.instance_1 = {
            # Machines can be added here.
            roles.client.machines = [ "jon" ];
            roles.server.machines = [ "backup_server" ];
          };
        };
      };
    }
    ```

### Scaling the Backup

@@ -66,56 +83,60 @@ It is possible to add services to multiple machines via tags as shown

!!! Example "Tags Example"

    ```{.nix hl_lines="5 8 14"}
    clan-core.lib.clan {
      inventory = {
        machines = {
          "jon" = {
            tags = [ "backup" ];
          };
          "sara" = {
            tags = [ "backup" ];
          };
          # ...
        };
        services = {
          borgbackup.instance_1 = {
            roles.client.tags = [ "backup" ];
            roles.server.machines = [ "backup_server" ];
          };
        };
      };
    }
    ```

### Multiple Service Instances

!!! danger "Important"

    Not all modules implement support for multiple instances yet.
    Multiple instance usage can create complexity; refer to each module's documentation for intended usage.

!!! Example

    In this example `backup_server` has role `client` and `server` in different instances.

    ```{.nix hl_lines="11 14"}
    clan-core.lib.clan {
      inventory = {
        machines = {
          "jon" = {};
          "backup_server" = {};
          "backup_backup_server" = {};
        };
        services = {
          borgbackup.instance_1 = {
            roles.client.machines = [ "jon" ];
            roles.server.machines = [ "backup_server" ];
          };
          borgbackup.instance_2 = {
            roles.client.machines = [ "backup_server" ];
            roles.server.machines = [ "backup_backup_server" ];
          };
        };
      };
    }
    ```

@@ -1,7 +1,8 @@

# How Templates work

Clan offers the ability to use templates for creating different resources. It comes with some `<builtin>` templates and discovers all exposed templates from its flake's `inputs`.

For example, one can list all current templates like this:

@@ -38,7 +39,8 @@ Available 'machine' templates

Templates are referenced via the `--template` `selector`.

clan-core ships its native/builtin templates. Those are referenced if the selector is a plain string (without `#` or `./.`).

For example:

@@ -48,11 +50,14 @@ would use the native `<builtin>.flake-parts` template

## Selectors follow Nix flake `reference#attribute` syntax

Selectors follow a pattern very similar to Nix's native attribute selection behavior.

Just like `nix build .` would build `packages.x86_64-linux.default` of the flake in `./.`,

`clan flakes create --template=.` would create a clan from your **local** `default` clan template (`templates.clan.default`).

In fact, this command would be equivalent, just making it more explicit:

@@ -60,10 +65,11 @@ In fact this command would be equivalent, just make it more explicit

## Remote templates

Just like with Nix, you can specify a remote URL or path to the flake containing the template:

`clan flakes create --template=github:owner/repo#foo`

!!! Note "Implementation Note"

    Not all features of Nix's attribute selection are currently matched.
    There are minor differences; in case of unexpected behavior, please create an [issue](https://git.clan.lol/clan/clan-core/issues/new).

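Pulling the selector rules above together, a few sketched invocations (the `my-template` name is hypothetical; `flake-parts` is the builtin mentioned earlier):

```shellSession
$ clan flakes create --template=flake-parts            # builtin template, plain string selector
$ clan flakes create --template=.#my-template          # attribute of the local flake
$ clan flakes create --template=github:owner/repo#foo  # attribute of a remote flake
```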
@@ -13,15 +13,20 @@ To define a service in Clan, you need to define two things:

- `clanModule` - defined by module authors
- `inventory` - defined by users

The `clanModule` is currently a plain NixOS module. It is conditionally imported into each machine depending on the `service` and `role`.

A `role` is a function of a machine within a service. For example, in the `backup` service there are `client` and `server` roles.

The `inventory` contains the settings for the user/consumer of the module. It describes what `services` run on each machine and with which `roles`.

Additionally, any `service` can be instantiated multiple times.

This ADR proposes that we change how to write a `clanModule`. The `inventory` should get a new attribute called `instances` that allows for configuration of these modules.

### Status Quo
@@ -66,103 +71,144 @@ in {
Problems with the current way of writing clanModules:

1. No way to retrieve the config of a single service instance, together with its name.
2. Directly exporting a single, anonymous nixosModule without any intermediary attribute layers doesn't leave room for exporting other inventory resources such as potentially `vars` or `homeManagerConfig`.
3. Can't access multiple config instances individually.

Example:

```nix
inventory = {
  services = {
    network.c-base = {
      instanceConfig.ips = {
        mors = "172.139.0.2";
      };
    };
    network.gg23 = {
      instanceConfig.ips = {
        mors = "10.23.0.2";
      };
    };
  };
};
```

This doesn't work because all instance configs are applied to the same namespace, so this currently results in a conflict.

Resolving this problem means that new inventory modules cannot be plain NixOS modules anymore. If they are configured via `instances` / `instanceConfig` they cannot be configured without using the inventory. (There might be ways to inject `instanceConfig`, but that requires knowledge of inventory internals.)

4. Writing modules for multiple instances is cumbersome. Currently the clanModule author has to write one or more `fold` operations for potentially every NixOS option to define how multiple service instances merge into each single option. The new idea behind this ADR is to pull the common fold function into the outer context and provide it as a common helper. (See the example below; `perInstance` is analogous to the well-known `perSystem` of flake-parts.)
5. Each role has a different interface. We need to render that interface into json-schema, which currently includes creating an unnecessary test machine. Defining the interface at a higher level (outside of any machine context) allows faster evaluation and isolation by design from any machine. This allows rendering the UI (options tree) of a service by just knowing the service and the corresponding roles, without creating a dummy machine.

6. The interface of defining config is wrong. It is possible to define config that applies to multiple machines at once, and it is possible to define config that applies to a machine as a whole. But this is wrong behavior, because the options exist at the role level, so config must also always exist at the role level. Currently we merge options and config together, but that may produce conflicts. Those module-system conflicts are very hard to foresee since they depend on what roles exist at runtime.
## Proposed Change
We will create a new module class which is defined by `_class = "clan.service"` ([documented here](https://nixos.org/manual/nixpkgs/stable/#module-system-lib-evalModules-param-class)).
Existing clan modules will still work by continuing to be plain NixOS modules. All new modules can set `_class = "clan.service";` to use the proposed features.

In short, the change introduces a new module class that makes the currently necessary folding of a `clan.service`'s `instances` and `roles` a common operation. The module author defines the inner function of the fold operations, which is called a `clan.service` module.

Such a module has the following attributes:

### `roles.<roleName>.interface`

Each role can have a different interface for how it is configured. I.e. a `client` role might have different options than a `server` role.

This attribute should be used to define `options` (not `config`!).

The end-user defines the corresponding `config`.

This submodule will be evaluated for each `instance`/`role` combination and passed as an argument into `perInstance`.

This submodule's `options` will be evaluated to build the UI for that module dynamically.

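As an illustration, a role interface under this proposal could be sketched as follows. Only the `roles.<roleName>.interface` attribute itself comes from this ADR; the option name `ip` and its description are assumptions for the sake of the example.

```nix
{
  # Sketch: options only, no config. The end-user supplies the config
  # through the inventory; this submodule is also what the UI renders.
  roles.client.interface = { lib, ... }: {
    options.ip = lib.mkOption {
      type = lib.types.str;
      description = "Illustrative option: IP address for this client.";
    };
  };
}
```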
### **Result attributes**

Some common result attributes are produced by the modules of this proposal; they will be referenced later in this document and are commonly defined as:

- `nixosModule` A single NixOS module. (`{config, ...}:{ environment.systemPackages = []; }`)
- `services.<serviceName>` An attribute set of `_class = clan.service`, containing the same thing as this whole ADR proposes.
- `vars` To be defined. Reserved for now.

### `roles.<roleName>.perInstance`

This acts like a function that maps over all `service instances` of a given `role`. It produces the previously defined **result attributes**.

I.e. this allows producing multiple `nixosModules`, one for every instance of the service, hence making multiple `service instances` convenient by leveraging the module-system merge behavior.

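A minimal sketch of `perInstance`; the argument names `instanceName` and `settings` and the module body are assumptions made for illustration, not the final API:

```nix
{
  # Sketch: evaluated once per (instance, role) combination; the merged
  # result of all produced nixosModules lands on the machine.
  roles.client.perInstance = { instanceName, settings, ... }: {
    nixosModule = { ... }: {
      # `settings` is the evaluated role-interface config of this instance.
      networking.extraHosts = "${settings.ip} ${instanceName}";
    };
  };
}
```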
### `perMachine`

This acts like a function that maps over all `machines` of a given `service`. It produces the previously defined **result attributes**.

I.e. this allows producing exactly one `nixosModule` per `service`, making it easy to set NixOS options only once if they have a one-to-one relation to a service being enabled.

Note: `lib.mkIf` can be used on e.g. `roleName` to make the scope more specific.
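A hedged sketch of `perMachine`; the argument name `machine` and the module body are illustrative assumptions:

```nix
{
  # Sketch: evaluated once per machine participating in the service,
  # regardless of how many instances of the service exist.
  perMachine = { machine, ... }: {
    nixosModule = { pkgs, ... }: {
      # One-to-one options are set only once here.
      environment.systemPackages = [ pkgs.hello ];
    };
  };
}
```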
### `services.<serviceName>`

This allows defining nested services. I.e. the *service* `backup` might define a nested *service* `ssh` which sets up an SSH connection.

This can be defined in `perMachine` and `perInstance`:

- For every `instance`, a given `service` may add multiple nested `services`.
- A given `service` may add a static set of nested `services`, even if there are multiple instances of the same given service.

Q: Why is this not a top-level attribute?

A: Because nested service definitions may also depend on a `role`, which must be resolved depending on `machine` and `instance`. The top-level module doesn't know anything about machines. Keeping the service layer machine-agnostic allows us to build the UI for a module without adding any machines. (One of the problems with the current system.)

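A sketch of such a nested service definition; only the `services.<serviceName>` nesting point is taken from this ADR, all names and the inner shape are illustrative:

```nix
{
  # Sketch: per instance, a backup service pulls in a nested ssh service.
  roles.server.perInstance = { instanceName, ... }: {
    services.ssh = {
      # A nested module of the same `_class = "clan.service"` shape.
      roles.client.interface = { lib, ... }: {
        options.port = lib.mkOption { type = lib.types.port; };
      };
    };
  };
}
```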
```
zerotier/default.nix
```

```nix
# Some example module
{
@@ -221,15 +267,25 @@ zerotier/default.nix
## Inventory.instances

This document also proposes to add a new attribute to the inventory that allows for exclusive configuration of the new modules. This allows better separation of the new and the old way of writing and configuring modules, keeping the new implementation more focused and keeping existing technical debt out from the beginning.

The following thoughts went into this:

- Getting rid of `<serviceName>`: Using only the attribute name (a plain string) is not sufficient for defining the source of the service module. Encoding meta information into it would also require some extensible format specification and parser.
- Removing `instanceConfig` and `machineConfig`: There is no such config. Service configuration must always be role-specific, because the options are defined on the role.
- Renaming `config` to `settings` or similar, since `config` is a module system internal name.
- Tags and machines should be an attribute set to allow setting `settings` on that level instead.

```nix
{
@@ -258,7 +314,9 @@ The following thoughts went into this:
## Iteration note

We want to implement the system as described. Once we have sufficient data on real-world use cases and modules we might revisit this document along with the updated implementation.

## Real world example
@@ -6,7 +6,8 @@ Accepted
## Context

In the long term we envision the clan application will consist of the following user-facing tools.

- `CLI`
- `TUI`
@@ -14,17 +15,20 @@ In the long term we envision the clan application will consist of the following
- `REST-API`
- `Mobile Application`

We are not sure whether all of those will exist, but the architecture should be generic such that they are possible without major changes to the underlying system.

## Decision

This leads to the conclusion that we should do `library`-centric development, with the current `clan` Python code being a library that can be imported to create various tools on top of it. All **CLI** or **UI** related parts should be moved out of the main library.

Imagine roughly the following architecture:

```mermaid
graph TD
    %% Define styles
    classDef frontend fill:#f9f,stroke:#333,stroke-width:2px;
@@ -66,14 +70,18 @@ graph TD
    BusinessLogic --> NIX
```

With this very simple design it is ensured that all the basic features remain stable across all frontends. In the end it is straightforward to create Python library function calls in a testing framework to ensure that kind of stability.

Integration tests and smaller unit tests should both be utilized to ensure the stability of the library.

Note: Library functions don't have to be json-serializable in general.

Persistence includes but is not limited to: creating git commits, writing to inventory.json, reading and writing vars, and interacting with persisted data in general.

## Benefits / Drawbacks
@@ -81,34 +89,51 @@ Persistence includes but is not limited to: creating git commits, writing to inv

- (+) Consistency and inherent behavior
- (+) Performance & Scalability
- (+) Different frontends for different user groups
- (+) Documentation per library function makes it convenient to interact with the clan resources.
- (+) Testing the library ensures stability of the underlying functionality for all layers above.
- (-) Complexity overhead
- (-) The library needs to be designed / documented
  - (+) The library can be well documented since it is a finite set of functions.
- (-) Error handling might be harder.
  - (+) Common error reporting
- (-) Different frontends need different features; the library must include them all.
  - (+) All those core features must be implemented anyways.
- (+) VPN benchmarking uses the existing libraries already and works relatively well.

## Implementation considerations

Not all required details that need to change over time can be pointed out ahead of time. The goal of this document is to create a common understanding of how we would like our project to be structured. Any future commits should contribute to this goal.

Some ideas of what might need to change:

- Having separate locations or packages for the library and the CLI.
- Rename the `clan_cli` package to `clan` and move the `cli` frontend into a subfolder or a separate package.
- Python argparse or other CLI-related code should not exist in the `clan` Python library.
- `__init__.py` should be very minimal: only init the business logic models and resources. Note that all `__init__.py` files all the way up in the module tree are always executed as part of the Python module import logic and thus should be as small as possible. I.e. `from clan_cli.vars.generators import ...` executes both `clan_cli/__init__.py` and `clan_cli/vars/__init__.py` if any of those exist.
- An `api` folder doesn't make sense since the Python library `clan` is the API.
- Logic needed for the webui that performs JSON serialization and deserialization will live in some `json-adapter` folder or package.
- Code for serializing dataclasses and typed dictionaries is needed for the persistence layer (i.e. for read/write of inventory.json).
- The inventory-json is a backend resource that is internal. Its logic includes merging, unmerging and partial updates while considering nix values and their priorities. Nobody should try to read or write to it directly. Instead there will be library methods, e.g. to add a `service` or to update/read/delete some information from it.
- Library functions should be carefully designed with suitable conventions for writing good APIs in mind. (E.g.: https://swagger.io/resources/articles/best-practices-in-api-design/)

@@ -6,27 +6,39 @@ Proposed after some conversation between @lassulus, @Mic92, & @lopter.

## Context

It can be useful to refer to ADRs by their numbers, rather than their full title. To that end, short and sequential numbers are useful.

The issue is that an ADR number is effectively assigned when the ADR is merged; before being merged its number is provisional. Because multiple ADRs can be written at the same time, you end up with multiple provisional ADRs with the same number. For example, this is the third ADR-3:

1. ADR-3-clan-compat: see [#3212];
2. ADR-3-fetching-nix-from-python: see [#3452];
3. ADR-3-numbering-process: this ADR.

This situation makes it impossible to refer to an ADR by its number, and is why I (@lopter) went with the arbitrary number 7 in [#3196].

We could solve this problem by using the PR number as the ADR number (@lassulus). The issue is that PR numbers are getting big in clan-core, which does not make them easy to remember, or use in conversation and code (@lopter).

Another approach would be to move the ADRs to a different repository; this would reset the counter back to 1, and make it straightforward to keep ADR and PR numbers in sync (@lopter). The issue then is that ADRs are not in context with their changes, which makes them more difficult to review (@Mic92).

## Decision

A third approach would be to:

1. Commit ADRs before they are approved, so that the next ADR number gets assigned;
2. Open a PR for the proposed ADR;
3. Update the ADR file committed in step 1, so that its markdown contents point to the PR that tracks it.

## Consequences
@@ -36,12 +48,13 @@ This makes it easier to refer to them in conversation or in code.
### You need to have commit access to get an ADR number assigned

This makes it more difficult for someone external to the project to contribute an ADR.

### Creating a new ADR requires multiple commits
Maybe a script or CI flow could help with that if it becomes painful.
[#3196]: https://git.clan.lol/clan/clan-core/pulls/3196/
[#3212]: https://git.clan.lol/clan/clan-core/pulls/3212/
[#3452]: https://git.clan.lol/clan/clan-core/pulls/3452/

@@ -4,83 +4,113 @@ accepted

## Context

In our clan-cli we need to get a lot of values from Nix into the Python runtime. This is used to determine the hostname, target IP addresses, scripts to generate vars, file locations, and many more.

Currently we use two different access methods:

### Method 1: deployment.json
A JSON file that serializes some predefined values as a build-time artifact.

Downsides:

- no access to flake level values
- all or nothing:
  - values are either cached via deployment.json or not, so we can only put cheap values in there
  - in the past, var generation scripts were added here, which added a huge build-time overhead to every action we wanted to perform
- duplicated nix code:
  - values need duplicated nix code: once to define them at the correct place in the module system (`clan.core.vars.generators`), and code to accumulate them again for the deployment.json (`system.clan.deployment.data`)
  - this duality adds unnecessary dependencies to the nixos module system

Benefits:

- Utilize `nix build` for caching the file.
- Caching mechanism is very simple.

### Method 2: Direct access

Directly calling the evaluator / build sandbox via `nix build` and `nix eval` within the Python code.

Downsides:

- Access is not cached: static overhead (see below: ~1.5s) is present every time we invoke `nix` commands
  - The static overhead obviously depends on which value we need to retrieve, since the `evalModules` overhead varies depending on whether we evaluate some attribute inside a machine or a flake attribute
- Accessing more and more attributes with this method increases the static overhead, which leads to a linear decrease in performance
- Boilerplate for interacting with the CLI and error-handling code is repeated every time

Benefits:

- Simple and native interaction with the `nix` commands is rather intuitive
- Custom error handling for each attribute is easy

This system could be enhanced with custom nix expressions, which could be used in places where we don't want to put values into deployment.json or want to fetch flake level values. This also has some downsides:

- technical debt:
  - we have to maintain custom nix expressions inside python code; embedding code is error prone and the language linters won't help you here, so errors are common and harder to debug
  - we need custom error reporting code in case something goes wrong: either the value doesn't exist or there is a reported build error
- no caching/custom caching logic:
  - currently there is no infrastructure to cache those extra values, so we would need to store them somewhere; we could either enhance one of the many classes we have or not cache them at all
  - even if we implement caching for extra nix expressions, there can be no sharing between them. For example, with 2 nix expressions, one fetching paths and values for all generators and a second fetching only the values, we still need to execute both of them in both contexts, although the second one could be skipped if the first one is already cached

### Method 3: nix select

Move all code that extracts nix values into a common class:

Downsides:

- added complexity for maintaining our own DSL
Benefits:

- we can implement an API (select DSL) to get those values from nix without writing complex nix expressions.
- we can implement caching of those values beyond the runtime of the CLI
- we can use precaching at different endpoints to eliminate most of the multiple nix evaluations (except in cases where we have to break the cache or we don't know if we will need the value later and getting it is expensive).
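The caching benefit can be sketched in a few lines of Python. This is a hypothetical `CachedFlake` class with an injected evaluator function standing in for the expensive `nix eval` call; the real clan-cli `Flake` class is more involved:

```python
import json
from typing import Any, Callable


class CachedFlake:
    """Minimal sketch: cache evaluated values per selector string.

    `evaluate` stands in for the expensive `nix eval` call, so the
    caching behavior can be shown without nix installed.
    """

    def __init__(self, evaluate: Callable[[str], str]) -> None:
        self._evaluate = evaluate
        self._cache: dict[str, Any] = {}

    def select(self, selector: str) -> Any:
        # Only the first lookup of a selector pays the evaluation cost.
        if selector not in self._cache:
            self._cache[selector] = json.loads(self._evaluate(selector))
        return self._cache[selector]


calls: list[str] = []

def fake_nix_eval(selector: str) -> str:
    calls.append(selector)
    return json.dumps({"ignavia": "ignavia"})

flake = CachedFlake(fake_nix_eval)
first = flake.select("nixosConfigurations.*.config.networking.hostName")
second = flake.select("nixosConfigurations.*.config.networking.hostName")
assert first == second == {"ignavia": "ignavia"}
assert len(calls) == 1  # the second select hit the cache
```

The second `select` call for the same selector never touches nix, which is what makes repeated attribute access cheap.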
## Decision

Use Method 3 (nix select) for extracting values out of nix.

This adds the Flake class in flake.py with a select method, which takes a selector string and returns a python dict.
Example:

```python
from clan_lib.flake import Flake

flake = Flake("github:lassulus/superconfig")
flake.select("nixosConfigurations.*.config.networking.hostName")
```
returns:

```
{
  "ignavia": "ignavia",
```

@@ -91,7 +121,13 @@ returns:
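To illustrate what the `*` wildcard in the selector does, here is a toy Python model of the selector semantics (illustration only; the real `select` runs inside nix evaluation and adds caching and error handling on top of this idea):

```python
def select(value, selector: str):
    """Toy model of the select DSL: a dotted path where '*' maps
    the remaining path over every attribute of the current set."""
    parts = selector.split(".")

    def walk(node, remaining):
        if not remaining:
            return node
        head, rest = remaining[0], remaining[1:]
        if head == "*":
            # Wildcard: apply the rest of the path to every attribute.
            return {key: walk(child, rest) for key, child in node.items()}
        return walk(node[head], rest)

    return walk(value, parts)


flake = {
    "nixosConfigurations": {
        "ignavia": {"config": {"networking": {"hostName": "ignavia"}}},
    }
}
result = select(flake, "nixosConfigurations.*.config.networking.hostName")
assert result == {"ignavia": "ignavia"}
```

With more machines in `nixosConfigurations`, the same selector would return one entry per machine, matching the dict shown above.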
## Consequences

- Faster execution due to caching most things beyond a single execution; if no cache break happens, execution is basically instant, because we don't need to run nix again.
- Better error reporting: since all nix values go through one chokepoint, we can parse error messages in that chokepoint and report them in a more user-friendly way, for example if a value is missing at the expected location inside the module system.
- less embedded nix code inside python code
- more portable CLI, since we need to import fewer modules into the module system and most things can be extracted by the python code directly
@@ -6,12 +6,16 @@ accepted

## Context

Currently different operations (install, update) have different modes. Install always evals locally and pushes the derivation to a remote system; update has a configurable buildHost and targetHost. Confusingly, install always evals locally and update always evals on the targetHost, so hosts have different semantics in different operations' contexts.
## Decision

Add evalHost to make this clear and configurable for the user. This would leave us with:

- evalHost
- buildHost

@@ -19,18 +23,29 @@ Add evalHost to make this clear and configurable for the user. This would leave
for the update and install operation.

`evalHost` would be the machine that evaluates the nixos configuration. If evalHost is not localhost, we upload the non-secret vars and the nix-archived flake (this is usually the same operation) to the evalHost.

`buildHost` would be what is used by the machine to build; it would correspond to `--build-host` on the nixos-rebuild command or `--builders` for nix build.

`targetHost` would be the machine where the closure gets copied to and activated (either through install or switch-to-configuration). It corresponds to `--target-host` for nixos-rebuild or where we usually point `nixos-anywhere` to.

These hosts could be set either through CLI args (or forms for the GUI) or via the inventory. If both are given, the CLI args take precedence.
## Consequences

We now support every deployment model of every tool out there with a bunch of simple flags. The semantics are clearer and we can write some nice documentation.

The install code has to be reworked, since nixos-anywhere has problems with evalHost and targetHost being the same machine, so we would need to kexec first and use the kexec image (or installer) as the evalHost afterwards.

In cases where the evalHost doesn't have access to the targetHost or buildHost, we need to set up temporary entries for the lifetime of the command.
@@ -1,13 +1,15 @@
# Architecture Decision Records

This section contains the architecture decisions that have been reviewed and generally agreed upon.

## What is an ADR?

> An architecture decision record (ADR) is a document that captures an important architecture decision made along with its context and consequences.

!!! Note
    For further reading about ADRs we recommend [architecture-decision-record](https://github.com/joelparkerhenderson/architecture-decision-record)
## Crafting a new ADR

@@ -1,7 +1,9 @@
# Decision record template by Michael Nygard

This is the template in [Documenting architecture decisions - Michael Nygard](http://thinkrelevance.com/blog/2011/11/15/documenting-architecture-decisions). You can use [adr-tools](https://github.com/npryce/adr-tools) for managing the ADR files.

In each ADR file, write these sections:
@@ -9,7 +11,8 @@ In each ADR file, write these sections:

## Status

What is the status, such as proposed, accepted, rejected, deprecated, superseded, etc.?

## Context
@@ -1,11 +1,14 @@
## Using Age Plugins

If you wish to use a key generated using an [age plugin] as your admin key, extra care is needed.

You must **precede your secret key with a comment that contains its corresponding recipient**.

This is usually output as part of the generation process and is only required because there is no unified mechanism for recovering a recipient from a plugin secret key.

Here is an example:
@@ -14,15 +17,16 @@ Here is an example:
AGE-PLUGIN-FIDO2-HMAC-1QQPQZRFR7ZZ2WCV...
```

!!! note
    The comment that precedes the plugin secret key need only contain the recipient. Any other text is ignored.

    In the example above, you can specify `# recipient: age1zdy...`, `# public: age1zdy...` or even just `# age1zdy...`

You will need to add an entry into your `flake.nix` to ensure that the necessary `age` plugins are loaded when using Clan:

```nix title="flake.nix"
{
```
@@ -1,4 +1,3 @@
This guide explains how to set up and manage [BorgBackup](https://borgbackup.readthedocs.io/) for secure, efficient backups in a clan network. BorgBackup provides:

@@ -34,11 +33,13 @@ inventory.instances = {
The input should be named according to your flake input. Jon is configured as a client machine with a destination pointing to a Hetzner Storage Box.

To see a list of all possible options go to [borgbackup clan service](../reference/clanServices/borgbackup.md).

## Roles

A Clan Service can have multiple roles; each role applies different nix config to the machine.

### 1. Client
@@ -61,8 +62,8 @@ Destinations can be:

## State management

Backups are based on [states](../reference/clan.core/state.md). A state defines which files should be backed up and how these files are obtained through pre/post backup and restore scripts.

Here's an example for a user application `linkding`:

@@ -123,7 +124,8 @@ clan.core.state.linkding = {

## Managing backups

In this section we go over how to manage your collection of backups with the clan command.

### Listing states
@@ -177,7 +179,7 @@ storagebox::username@username.your-storagebox.de:/./borgbackup::jon-storagebox-2

### Restoring backups

For restoring a backup you have two options.

#### Full restoration

@@ -194,6 +196,3 @@ To restore only a specific service (e.g., `linkding`):

```bash
clan backups restore --service linkding jon borgbackup storagebox::u444061@u444061.your-storagebox.de:/./borgbackup::jon-storagebox-2025-07-24T06:02:35
```
@@ -1,16 +1,22 @@
# Using `clanServices`

Clan's `clanServices` system is a composable way to define and deploy services across machines.

This guide shows how to **instantiate** a `clanService`, explains how service definitions are structured in your inventory, and how to pick or create services from modules exposed by flakes.

The term **Multi-host-modules** was introduced previously in the [nixus repository](https://github.com/infinisil/nixus) and represents a similar concept.

---

## Overview

Services are used in `inventory.instances`, and then they attach to *roles* and *machines* — meaning you decide which machines run which part of the service.

For example:
@@ -25,15 +31,18 @@ inventory.instances = {
}
```

This says: “Run borgbackup as a *client* on my *laptop* and *server1*, and as a *server* on *backup-box*.”

## Module source specification

Each instance includes a reference to a **module specification** — this is how Clan knows which service module to use and where it came from. Usually one would just use `imports`, but we needed to make the `module source` configurable via the Python API. By default it is not required to specify the `module`, in which case it defaults to the pre-provided services of clan-core.

---
## Override Example

@@ -65,7 +74,8 @@ inputs.clan-core.url = "https://git.clan.lol/clan/clan-core/archive/main.tar.gz"

## Simplified Example

If only one instance is needed for a service and the service is a clan core service, the `module` definition can be omitted.

```nix
# Simplified way of specifying a single instance
```

@@ -84,7 +94,8 @@ inventory.instances = {

Each role might expose configurable options.

See clan's [clanServices reference](../reference/clanServices/index.md) for available options.

```nix
inventory.instances = {
```

@@ -132,28 +143,34 @@ inventory.instances = {
You can use services exposed by Clan's core module library, `clan-core`.

🔗 See: [List of Available Services in clan-core](../reference/clanServices/index.md)

## Defining Your Own Service

You can also author your own `clanService` modules.

🔗 Learn how to write your own service: [Authoring a service](../guides/services/community.md)

You might expose your service module from your flake — this makes it easy for other people to also use your module in their clan.

---

## 💡 Tips for Working with clanServices

- You can add multiple inputs to your flake (`clan-core`, `your-org-modules`, etc.) to mix and match services.
- Each service instance is isolated by its key in `inventory.instances`, allowing you to deploy multiple versions or roles of the same service type.
- Roles can target different machines or be scoped dynamically.

---

## What's Next?

- [Author your own clanService →](../guides/services/community.md)
- [Migrate from clanModules →](../guides/migrations/migrate-inventory-services.md)

<!-- TODO: * [Understand the architecture →](../explanation/clan-architecture.md) -->
@@ -1,16 +1,21 @@
Here are some methods for debugging and testing the clan-cli.

## Using a Development Branch

To streamline your development process, I suggest not installing `clan-cli`. Instead, clone the `clan-core` repository and add `clan-core/pkgs/clan-cli/bin` to your PATH to use the checked-out version directly.

!!! Note
    After cloning, navigate to `clan-core/pkgs/clan-cli` and execute `direnv allow` to activate the devshell. This will set up a symlink to nixpkgs at a specific location; without it, `clan-cli` won't function correctly.

With this setup, you can easily use [breakpoint()](https://docs.python.org/3/library/pdb.html) to inspect the application's internal state as needed.

This approach is feasible because `clan-cli` only requires a Python interpreter and has no other dependencies.

```nix
pkgs.mkShell {
```

@@ -26,11 +31,17 @@
## Debugging nixos-anywhere

If you encounter a bug in a complex shell script such as `nixos-anywhere`, start by replacing the `nixos-anywhere` command with a local checkout of the project; look in the [contribution](./CONTRIBUTING.md) section for an example.

## The Debug Flag

You can enhance your debugging process with the `--debug` flag in the `clan` command. When you add this flag to any command, it displays all subprocess commands initiated by `clan` in a readable format, along with the source code position that triggered them. This feature makes it easier to understand and trace what's happening under the hood.

```bash
$ clan machines list --debug
```

@@ -53,46 +64,60 @@ wintux
## VSCode

If you're using VSCode, it has a handy feature that makes paths to source code files clickable in the integrated terminal. Combined with the previously mentioned techniques, this allows you to open a Clan in VSCode, execute a command like `clan machines list --debug`, and receive a printed path to the code that initiates the subprocess. With the `Ctrl` key (or `Cmd` on macOS) and a mouse click, you can jump directly to the corresponding line in the code file and add a `breakpoint()` call to inspect the internal state.
## Finding Print Messages

To trace the origin of print messages in `clan-cli`, you can enable special debugging features using environment variables:

- Set `TRACE_PRINT=1` to include the source location with each print message:

  ```bash
  export TRACE_PRINT=1
  ```

  When running commands with `--debug`, every print will show where it was triggered in the code.

- To see a deeper stack trace for each print, set `TRACE_DEPTH` to the desired number of stack frames (e.g., 3):

  ```bash
  export TRACE_DEPTH=3
  ```
### Additional Debug Logging

You can enable more detailed logging for specific components by setting these environment variables:

- `CLAN_DEBUG_NIX_SELECTORS=1` — verbose logs for flake.select operations
- `CLAN_DEBUG_NIX_PREFETCH=1` — verbose logs for flake.prefetch operations
- `CLAN_DEBUG_COMMANDS=1` — print the diffed environment of executed commands

Example:

```bash
export CLAN_DEBUG_NIX_SELECTORS=1
export CLAN_DEBUG_NIX_PREFETCH=1
export CLAN_DEBUG_COMMANDS=1
```

These options help you pinpoint the source and context of print messages and debug logs during development.
## Analyzing Performance

To understand what's causing slow performance, set the environment variable `export CLAN_CLI_PERF=1`. When you complete a clan command, you'll see a summary of various performance metrics, helping you identify what's taking up time.

## See all possible packages and tests

@@ -102,7 +127,8 @@ To quickly show all possible packages and tests execute:

```
nix flake show
```

Under `checks` you will find all tests that are executed in our CI. Under `packages` you will find all our projects.
```
git+file:///home/lhebendanz/Projects/clan-core

@@ -137,18 +163,22 @@ git+file:///home/lhebendanz/Projects/clan-core
└───default: template: Initialize a new clan flake
```

You can execute every test separately by following the tree path: `nix run .#checks.x86_64-linux.clan-pytest -L`, for example.
## Test Locally in Devshell with Breakpoints

To test the CLI locally in a development environment and set breakpoints for debugging, follow these steps:

1. Run the following command to execute your tests and allow for debugging with breakpoints:

   ```bash
   cd ./pkgs/clan-cli
   pytest -n0 -s --maxfail=1 ./tests/test_nameofthetest.py
   ```

   You can place `breakpoint()` in your Python code where you want to trigger a breakpoint for debugging.
## Test Locally in a Nix Sandbox

@@ -166,19 +196,21 @@ nix build .#checks.x86_64-linux.clan-pytest-without-core

If you need to inspect the Nix sandbox while running tests, follow these steps:

1. Insert an endless sleep into your test code where you want to pause the execution. For example:

   ```python
   import time
   time.sleep(3600)  # Sleep for one hour
   ```

2. Use `cntr` and `psgrep` to attach to the Nix sandbox. This allows you to interactively debug your code while it's paused. For example:

   ```bash
   psgrep <your_python_process_name>
   cntr attach <container id, container name or process id>
   ```

Or you can also use the [nix breakpoint hook](https://nixos.org/manual/nixpkgs/stable/#breakpointhook).
@@ -2,9 +2,11 @@

Each feature added to clan should be tested extensively via automated tests.

This document covers different methods of automated testing, including creating, running and debugging such tests.

In order to test the behavior of clan, different testing frameworks are used depending on the concern:

- NixOS VM tests: for high level integration
- NixOS container tests: for high level integration

@@ -13,37 +15,48 @@ In order to test the behavior of clan, different testing frameworks are used dep

## NixOS VM Tests

The [NixOS VM Testing Framework](https://nixos.org/manual/nixos/stable/index.html#sec-nixos-tests) is used to create high level integration tests by running one or more VMs generated from a specified config. Commands can be executed on the booted machine(s) to verify that a deployment of a service works as expected. All machines within a test are connected by a virtual network. Internet access is not available.

### When to use VM tests

- testing that a service defined through a clan module works as expected after deployment
- testing clan-cli subcommands which require accessing a remote machine

### When not to use VM tests

NixOS VM Tests are slow and expensive. They should only be used for testing high level integration of components. VM tests should be avoided wherever it is possible to implement a cheaper unit test instead.

- testing detailed behavior of a certain clan-cli command -> use unit testing via pytest instead
- regression testing -> add a unit test
|
||||
|
||||
### Finding examples for VM tests
|
||||
|
||||
Existing nixos vm tests in clan-core can be found by using ripgrep:
|
||||
|
||||
```shellSession
|
||||
rg self.clanLib.test.baseTest
|
||||
```
|
||||
|
||||
### Locating definitions of failing VM tests
|
||||
|
||||
All nixos vm tests in clan are exported as individual flake outputs under `checks.x86_64-linux.{test-attr-name}`.
|
||||
If a test fails in CI:
|
||||
All nixos vm tests in clan are exported as individual flake outputs under
|
||||
`checks.x86_64-linux.{test-attr-name}`. If a test fails in CI:
|
||||
|
||||
- look for the job name of the test near the top if the CI Job page, like, for example `gitea:clan/clan-core#checks.x86_64-linux.borgbackup/1242`
|
||||
- in this case `checks.x86_64-linux.borgbackup` is the attribute path
|
||||
- note the last element of that attribute path, in this case `borgbackup`
|
||||
- search for the attribute name inside the `/checks` directory via ripgrep
|
||||
- look for the job name of the test near the top if the CI Job page, like, for
|
||||
example `gitea:clan/clan-core#checks.x86_64-linux.borgbackup/1242`
|
||||
- in this case `checks.x86_64-linux.borgbackup` is the attribute path
|
||||
- note the last element of that attribute path, in this case `borgbackup`
|
||||
- search for the attribute name inside the `/checks` directory via ripgrep
|
||||
|
||||
example: locating the vm test named `borgbackup`:
|
||||
|
||||
@@ -57,14 +70,15 @@ $ rg "borgbackup =" ./checks
|
||||
|
||||
### Adding vm tests
|
||||
|
||||
Create a nixos test module under `/checks/{name}/default.nix` and import it in `/checks/flake-module.nix`.
|
||||
|
||||
Create a nixos test module under `/checks/{name}/default.nix` and import it in
|
||||
`/checks/flake-module.nix`.
|
||||
|
||||
### Running VM tests
|
||||
|
||||
```shellSession
|
||||
nix build .#checks.x86_64-linux.{test-attr-name}
|
||||
```
|
||||
|
||||
(replace `{test-attr-name}` with the name of the test)
|
||||
|
||||
### Debugging VM tests
|
||||
@@ -73,12 +87,14 @@ The following techniques can be used to debug a VM test:
|
||||
|
||||
#### Print Statements
|
||||
|
||||
Locate the definition (see above) and add print statements, like, for example `print(client.succeed("systemctl --failed"))`, then re-run the test via `nix build` (see above)
|
||||
Locate the definition (see above) and add print statements, like, for example
|
||||
`print(client.succeed("systemctl --failed"))`, then re-run the test via
|
||||
`nix build` (see above)
|
||||
|
||||
#### Interactive Shell
|
||||
|
||||
- Execute the vm test outside the nix Sandbox via the following command:
|
||||
`nix run .#checks.x86_64-linux.{test-attr-name}.driver -- --interactive`
|
||||
`nix run .#checks.x86_64-linux.{test-attr-name}.driver -- --interactive`
|
||||
- Then run the commands in the machines manually, like for example:
|
||||
```python3
|
||||
start_all()
|
||||
@@ -87,19 +103,22 @@ Locate the definition (see above) and add print statements, like, for example `p
|
||||
|
||||
#### Breakpoints
|
||||
|
||||
To get an interactive shell at a specific line in the VM test script, add a `breakpoint()` call before the line to debug, then run the test outside of the sandbox via:
|
||||
`nix run .#checks.x86_64-linux.{test-attr-name}.driver`
|
||||
|
||||
To get an interactive shell at a specific line in the VM test script, add a
|
||||
`breakpoint()` call before the line to debug, then run the test outside of the
|
||||
sandbox via: `nix run .#checks.x86_64-linux.{test-attr-name}.driver`
|
||||
|
||||
## NixOS Container Tests
|
||||
|
||||
Those are very similar to NixOS VM tests, as in they run virtualized nixos machines, but instead of using VMs, they use containers which are much cheaper to launch.
|
||||
As of now the container test driver is a downstream development in clan-core.
|
||||
Basically everything stated under the NixOS VM tests sections applies here, except some limitations.
|
||||
Those are very similar to NixOS VM tests, as in they run virtualized nixos
|
||||
machines, but instead of using VMs, they use containers which are much cheaper
|
||||
to launch. As of now the container test driver is a downstream development in
|
||||
clan-core. Basically everything stated under the NixOS VM tests sections applies
|
||||
here, except some limitations.
|
||||
|
||||
Limitations:
|
||||
|
||||
- Cannot run in interactive mode, however while the container test runs, it logs a nsenter command that can be used to log into each of the container.
|
||||
- Cannot run in interactive mode, however while the container test runs, it logs
|
||||
a nsenter command that can be used to log into each of the container.
|
||||
- setuid binaries don't work
|
||||
|
||||
### Where to find examples for NixOS container tests
|
||||
@@ -110,10 +129,10 @@ Existing NixOS container tests in clan-core can be found by using `ripgrep`:
|
||||
rg self.clanLib.test.containerTest
|
||||
```
|
||||
|
||||
|
||||
## Python tests via pytest
|
||||
|
||||
Since the Clan CLI is written in python, the `pytest` framework is used to define unit tests and integration tests via python
|
||||
Since the Clan CLI is written in python, the `pytest` framework is used to
|
||||
define unit tests and integration tests via python
|
||||
|
||||
Due to superior efficiency,
|
||||
|
||||
@@ -121,43 +140,52 @@ Due to superior efficiency,
|
||||
|
||||
- writing unit tests for python functions and modules, or bugfixes of such
|
||||
- all integrations tests that do not require building or running a nixos machine
|
||||
- impure integrations tests that require internet access (very rare, try to avoid)
|
||||
|
||||
- impure integrations tests that require internet access (very rare, try to
|
||||
avoid)
|
||||
|
||||
### When not to use python tests
|
||||
|
||||
- integrations tests that require building or running a nixos machine (use NixOS VM or container tests instead)
|
||||
- integrations tests that require building or running a nixos machine (use NixOS
|
||||
VM or container tests instead)
|
||||
- testing behavior of a nix function or library (use nix eval tests instead)
|
||||
|
||||
### Finding examples of python tests
|
||||
|
||||
Existing python tests in clan-core can be found by using `ripgrep`:
|
||||
|
||||
```shellSession
|
||||
rg "import pytest"
|
||||
```
|
||||
|
||||
### Locating definitions of failing python tests
|
||||
|
||||
If any python test fails in the CI pipeline, an error message like this can be found at the end of the log:
|
||||
If any python test fails in the CI pipeline, an error message like this can be
|
||||
found at the end of the log:
|
||||
|
||||
```
|
||||
...
|
||||
FAILED tests/test_machines_cli.py::test_machine_delete - clan_lib.errors.ClanError: Template 'new-machine' not in 'inputs.clan-core
|
||||
...
|
||||
```
|
||||
|
||||
In this case the test is defined in the file `/tests/test_machines_cli.py` via the test function `test_machine_delete`.
|
||||
In this case the test is defined in the file `/tests/test_machines_cli.py` via
|
||||
the test function `test_machine_delete`.
|
||||
|
||||
### Adding python tests
|
||||
|
||||
If a specific python module is tested, the test should be located near the tested module in a subdirectory called `./tests`
|
||||
If the test is not clearly related to a specific module, put it in the top-level `./tests` directory of the tested python package. For `clan-cli` this would be `/pkgs/clan-cli/clan_cli/tests`.
|
||||
All filenames must be prefixed with `test_` and test functions prefixed with `test_` for pytest to discover them.
|
||||
If a specific python module is tested, the test should be located near the
|
||||
tested module in a subdirectory called `./tests` If the test is not clearly
|
||||
related to a specific module, put it in the top-level `./tests` directory of the
|
||||
tested python package. For `clan-cli` this would be
|
||||
`/pkgs/clan-cli/clan_cli/tests`. All filenames must be prefixed with `test_` and
|
||||
test functions prefixed with `test_` for pytest to discover them.
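As a minimal illustration of the discovery rule, a hypothetical `tests/test_example.py` could look like this (the helper and its behavior are invented for the example):

```python
# Hypothetical file: tests/test_example.py
# pytest collects this module because the filename starts with `test_`,
# and runs test_slugify because the function name starts with `test_`.


def slugify(name: str) -> str:
    """Toy helper under test: normalize a machine name."""
    return name.strip().lower().replace(" ", "-")


def test_slugify() -> None:
    assert slugify("  My Machine ") == "my-machine"
```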

### Running python tests

#### Running all python tests

To run all python tests which are executed in the CI pipeline locally, use this `nix build` command

To run all python tests which are executed in the CI pipeline locally, use this
`nix build` command

```shellSession
nix build .#checks.x86_64-linux.clan-pytest-{with,without}-core

@@ -168,21 +196,27 @@ nix build .#checks.x86_64-linux.clan-pytest-{with,without}-core

To run a specific python test outside the nix sandbox

1. Enter the development environment of the python package, by either:
- Having direnv enabled and entering the directory of the package (eg. `/pkgs/clan-cli`)
- Or using the command `select-shell {package}` in the top-level dev shell of clan-core, (eg. `switch-shell clan-cli`)

- Having direnv enabled and entering the directory of the package (eg.
`/pkgs/clan-cli`)
- Or using the command `select-shell {package}` in the top-level dev shell of
clan-core, (eg. `switch-shell clan-cli`)

2. Execute the test via pytest by issuing
`pytest ./path/to/test_file.py:test_function_name -s -n0`

The flags `-sn0` are useful to forward all stdout/stderr output to the terminal and be able to debug interactively via `breakpoint()`.

`pytest ./path/to/test_file.py:test_function_name -s -n0`

The flags `-sn0` are useful to forward all stdout/stderr output to the terminal
and be able to debug interactively via `breakpoint()`.

### Debugging python tests

To debug a specific python test, find its definition (see above) and make sure to enter the correct dev environment for that python package.

To debug a specific python test, find its definition (see above) and make sure
to enter the correct dev environment for that python package.

Modify the test and add `breakpoint()` statements to it.

Execute the test using the flags `-sn0` in order to get an interactive shell at the breakpoint:

Execute the test using the flags `-sn0` in order to get an interactive shell at
the breakpoint:

```shellSession
pytest ./path/to/test_file.py:test_function_name -sn0

@@ -234,7 +268,9 @@ Failing nix eval tests look like this:
> error: Tests failed
```

To locate the definition, find the flake attribute name of the failing test near the top of the CI Job page, like for example `gitea:clan/clan-core#checks.x86_64-linux.eval-lib-values/1242`.

To locate the definition, find the flake attribute name of the failing test near
the top of the CI Job page, like for example
`gitea:clan/clan-core#checks.x86_64-linux.eval-lib-values/1242`.

In this case `eval-lib-values` is the attribute we are looking for.

@@ -247,7 +283,8 @@ lib/values/flake-module.nix
grmpf@grmpf-nix ~/p/c/clan-core (test-docs)>
```

In this case the test is defined in the file `lib/values/flake-module.nix` line 21

In this case the test is defined in the file `lib/values/flake-module.nix` line
21

### Adding nix eval tests

@@ -255,20 +292,22 @@ In clan core, the following pattern is usually followed:

- tests are put in a `test.nix` file
- a CI Job is exposed via a `flake-module.nix`
- that `flake-module.nix` is imported via the `flake.nix` at the root of the project

- that `flake-module.nix` is imported via the `flake.nix` at the root of the
project

For example see `/lib/values/{test.nix,flake-module.nix}`.

### Running nix eval tests

Since all nix eval tests are exposed via the flake outputs, they can be run via `nix build`:

Since all nix eval tests are exposed via the flake outputs, they can be run via
`nix build`:

```shellSession
nix build .#checks.x86_64-linux.{test-attr-name}
```

For quicker iteration times, instead of `nix build` use the `nix-unit` command available in the dev environment.
Example:

For quicker iteration times, instead of `nix build` use the `nix-unit` command
available in the dev environment. Example:

```shellSession
nix-unit --flake .#legacyPackages.x86_64-linux.{test-attr-name}

@@ -276,19 +315,23 @@ nix-unit --flake .#legacyPackages.x86_64-linux.{test-attr-name}

### Debugging nix eval tests

Follow the instructions above to find the definition of the test, then use one of the following techniques:

Follow the instructions above to find the definition of the test, then use one
of the following techniques:

#### Print debugging

Add `lib.trace` or `lib.traceVal` statements in order to print some variables during evaluation

Add `lib.trace` or `lib.traceVal` statements in order to print some variables
during evaluation

#### Nix repl

Use `nix repl` to evaluate and inspect the test.

Each test consists of an `expr` (expression) and an `expected` field. `nix-unit` simply checks if `expr == expected` and prints the diff if that's not the case.

Each test consists of an `expr` (expression) and an `expected` field. `nix-unit`
simply checks if `expr == expected` and prints the diff if that's not the case.

`nix repl` can be used to inspect an `expr` manually, or any other variables that you choose to expose.

`nix repl` can be used to inspect an `expr` manually, or any other variables
that you choose to expose.
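The `expr == expected` contract can be illustrated with a short Python sketch; this is purely explanatory and not how `nix-unit` is actually implemented:

```python
def run_case(name: str, expr: object, expected: object) -> bool:
    """Mimic nix-unit's core check: a case passes iff expr == expected."""
    if expr == expected:
        return True
    print(f"{name}: expected {expected!r}, got {expr!r}")
    return False


# Analogous to the `expr`/`expected` fields of a case in test.nix:
passing = run_case("test_sum", sum([1, 2, 3]), 6)
failing = run_case("test_sum_wrong", sum([1, 2, 3]), 7)
```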

Example:

@@ -1,34 +1,26 @@

This guide provides an example setup for a single-disk ZFS system with native
encryption, accessible for decryption remotely.

This guide provides an example setup for a single-disk ZFS system with native encryption, accessible for decryption remotely.

!!! Warning This configuration only applies to `systemd-boot` enabled systems
and **requires** UEFI booting.

!!! Warning
This configuration only applies to `systemd-boot` enabled systems and **requires** UEFI booting.

Replace the highlighted lines with your own disk-id. You can find out your
disk-id by executing:

Replace the highlighted lines with your own disk-id.
You can find out your disk-id by executing:

```bash
lsblk --output NAME,ID-LINK,FSTYPE,SIZE,MOUNTPOINT
```

=== "**Single Disk**" Below is the configuration for `disko.nix`
`nix hl_lines="13 53" --8<-- "docs/code-examples/disko-single-disk.nix" `

=== "**Single Disk**"
Below is the configuration for `disko.nix`
```nix hl_lines="13 53"
--8<-- "docs/code-examples/disko-single-disk.nix"
```

=== "**Raid 1**" Below is the configuration for `disko.nix`
`nix hl_lines="13 53 54" --8<-- "docs/code-examples/disko-raid.nix" `

Below is the configuration for `initrd.nix`. Replace `<yourkey>` with your ssh
public key. Replace `kernelModules` with the ethernet module loaded on your
target machine.

=== "**Raid 1**"
Below is the configuration for `disko.nix`
```nix hl_lines="13 53 54"
--8<-- "docs/code-examples/disko-raid.nix"
```

Below is the configuration for `initrd.nix`.
Replace `<yourkey>` with your ssh public key.
Replace `kernelModules` with the ethernet module loaded on your target machine.

```nix hl_lines="18 29"
{config, pkgs, ...}:

@@ -65,7 +57,8 @@ Replace `kernelModules` with the ethernet module loaded one on your target machi

## Copying SSH Public Key

Before starting the installation process, ensure that the SSH public key is copied to the NixOS installer.

Before starting the installation process, ensure that the SSH public key is
copied to the NixOS installer.

1. Copy your public SSH key to the installer, if it has not been copied already:

@@ -93,7 +86,8 @@ nano /tmp/secret.key
blkdiscard /dev/disk/by-id/<installdisk>
```

4. Run `clan` machines install, only running kexec and disko, with the following command:

4. Run `clan` machines install, only running kexec and disko, with the following
command:

```bash
clan machines install gchq-local --target-host root@nixos-installer --phases kexec,disko

@@ -119,25 +113,29 @@ zfs set keylocation=prompt zroot/root
CTRL+D
```

4. Locally generate ssh host keys. You only need to generate ones for the algorithms you're using in `authorizedKeys`.

4. Locally generate ssh host keys. You only need to generate ones for the
algorithms you're using in `authorizedKeys`.

```bash
ssh-keygen -q -N "" -C "" -t ed25519 -f ./initrd_host_ed25519_key
ssh-keygen -q -N "" -C "" -t rsa -b 4096 -f ./initrd_host_rsa_key
```

5. Securely copy your local initrd ssh host keys to the installer's `/mnt` directory:

5. Securely copy your local initrd ssh host keys to the installer's `/mnt`
directory:

```bash
scp ./initrd_host* root@nixos-installer.local:/mnt/var/lib/
```

6. Install nixos to the mounted partitions

```bash
clan machines install gchq-local --target-host root@nixos-installer --phases install
```

7. After the installation process, unmount `/mnt/boot`, change the ZFS mountpoints and unmount all the ZFS volumes by exporting the zpool:

7. After the installation process, unmount `/mnt/boot`, change the ZFS
mountpoints and unmount all the ZFS volumes by exporting the zpool:

```bash
umount /mnt/boot

@@ -164,6 +162,9 @@ ssh -p 7172 root@192.168.178.141
systemd-tty-ask-password-agent
```

After completing these steps, your NixOS should be successfully installed and ready for use.

After completing these steps, your NixOS should be successfully installed and
ready for use.

**Note:** Replace `root@nixos-installer.local` and `192.168.178.141` with the appropriate user and IP addresses for your setup. Also, adjust `<SYS_PATH>` to reflect the correct system path for your environment.

**Note:** Replace `root@nixos-installer.local` and `192.168.178.141` with the
appropriate user and IP addresses for your setup. Also, adjust `<SYS_PATH>` to
reflect the correct system path for your environment.

@@ -1,9 +1,9 @@

!!! Danger ":fontawesome-solid-road-barrier: Under Construction
:fontawesome-solid-road-barrier:" Currently under construction use with caution

!!! Danger ":fontawesome-solid-road-barrier: Under Construction :fontawesome-solid-road-barrier:"
Currently under construction use with caution

:fontawesome-solid-road-barrier: :fontawesome-solid-road-barrier: :fontawesome-solid-road-barrier:

```
:fontawesome-solid-road-barrier: :fontawesome-solid-road-barrier: :fontawesome-solid-road-barrier:
```

## Structure

@@ -20,13 +20,16 @@ A disk template consists of exactly two files

## `default.nix`

Placeholders are filled with their machine specific options when a template is used for a machine.

Placeholders are filled with their machine specific options when a template is
used for a machine.

The user can choose any valid options from the hardware report.

The file itself is then copied to `machines/{machineName}/disko.nix` and will be automatically loaded by the machine.

The file itself is then copied to `machines/{machineName}/disko.nix` and will be
automatically loaded by the machine.

`single-disk/default.nix`

```
{
disko.devices = {

@@ -42,9 +45,11 @@ The file itself is then copied to `machines/{machineName}/disko.nix` and will be

## Placeholders

Each template must declare the options of its placeholders depending on the hardware-report.

Each template must declare the options of its placeholders depending on the
hardware-report.

`api/disk.py`

```py
templates: dict[str, dict[str, Callable[[dict[str, Any]], Placeholder]]] = {
"single-disk": {

@@ -56,24 +61,25 @@ templates: dict[str, dict[str, Callable[[dict[str, Any]], Placeholder]]] = {
}
```

Introducing new local or global placeholders requires contributing to clan-core `api/disks.py`.

Introducing new local or global placeholders requires contributing to clan-core
`api/disks.py`.

### Predefined placeholders

Some placeholders provide predefined functionality

- `uuid`: In most cases we recommend adding a unique id to all disks. This prevents the system from falsely booting from e.g. hot-plugged devices.
```
disko.devices = {
disk = {
main = {
name = "main-{{uuid}}";
...
}

- `uuid`: In most cases we recommend adding a unique id to all disks. This
prevents the system from falsely booting from e.g. hot-plugged devices.
```
disko.devices = {
disk = {
main = {
name = "main-{{uuid}}";
...
}
}
```

}
```
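The placeholder mechanism can be sketched in Python; this is an illustrative stand-in, not the actual clan-core `api/disk.py` implementation:

```python
import re
import uuid


def fill_placeholders(template: str, values: dict[str, str]) -> str:
    """Replace every `{{name}}` placeholder with its computed value."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: values[m.group(1)], template)


# The `uuid` placeholder gives each disk a unique, stable device name:
rendered = fill_placeholders(
    'name = "main-{{uuid}}";',
    {"uuid": str(uuid.uuid4())},
)
```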

## Readme

@@ -90,5 +96,5 @@ Use this schema for simple setups where ....

```

The format and fields of this file are not clear yet. We might change that once fully implemented.

The format and fields of this file are not clear yet. We might change that once
fully implemented.

@@ -1,11 +1,13 @@

Clan supports integration with [flake-parts](https://flake.parts/), a framework for constructing your `flake.nix` using modules.

Clan supports integration with [flake-parts](https://flake.parts/), a framework
for constructing your `flake.nix` using modules.

To construct your Clan using flake-parts, follow these steps:

## Update Your Flake Inputs

To begin, you'll need to add `flake-parts` as a new dependency in your flake's inputs. This is alongside the already existing dependencies, such as `clan-core` and `nixpkgs`. Here's how you can update your `flake.nix` file:

To begin, you'll need to add `flake-parts` as a new dependency in your flake's
inputs. This is alongside the already existing dependencies, such as `clan-core`
and `nixpkgs`. Here's how you can update your `flake.nix` file:

```nix
# flake.nix

@@ -27,7 +29,9 @@ inputs = {

## Import the Clan flake-parts Module

After updating your flake inputs, the next step is to import the Clan flake-parts module. This will make the [Clan options](/options) available within `mkFlake`.

After updating your flake inputs, the next step is to import the Clan
flake-parts module. This will make the [Clan options](/options) available within
`mkFlake`.

```nix
{

@@ -45,7 +49,8 @@ After updating your flake inputs, the next step is to import the Clan flake-part

## Configure Clan Settings and Define Machines

Next you'll need to configure Clan wide settings and define machines, here's an example of how `flake.nix` should look:

Next you'll need to configure Clan wide settings and define machines, here's an
example of how `flake.nix` should look:

```nix
{

@@ -90,7 +95,9 @@ Next you'll need to configure Clan wide settings and define machines, here's an
}
```

For detailed information about configuring `flake-parts` and the available options within Clan,
refer to the [Clan module](https://git.clan.lol/clan/clan-core/src/branch/main/flakeModules/clan.nix) documentation.

For detailed information about configuring `flake-parts` and the available
options within Clan, refer to the
[Clan module](https://git.clan.lol/clan/clan-core/src/branch/main/flakeModules/clan.nix)
documentation.

---

______________________________________________________________________
@@ -2,9 +2,11 @@
|
||||
|
||||
Machines can be added using the following methods
|
||||
|
||||
- Create a file `machines/{machine_name}/configuration.nix` (See: [File Autoincludes](../../concepts/autoincludes.md))
|
||||
- Create a file `machines/{machine_name}/configuration.nix` (See:
|
||||
[File Autoincludes](../../concepts/autoincludes.md))
|
||||
- Imperative via cli command: `clan machines create`
|
||||
- Editing nix expressions in flake.nix See [`clan-core.lib.clan`](/options/?scope=Flake Options (clan.nix file))
|
||||
- Editing nix expressions in flake.nix See
|
||||
\[`clan-core.lib.clan`\](/options/?scope=Flake Options (clan.nix file))
|
||||
|
||||
See the complete [list](../../concepts/autoincludes.md) of auto-loaded files.
|
||||
|
||||
@@ -12,43 +14,49 @@ See the complete [list](../../concepts/autoincludes.md) of auto-loaded files.
|
||||
|
||||
=== "clan.nix (declarative)"
|
||||
|
||||
```{.nix hl_lines="3-4"}
|
||||
{
|
||||
inventory.machines = {
|
||||
# Define a machine
|
||||
jon = { };
|
||||
};
|
||||
````
|
||||
```{.nix hl_lines="3-4"}
|
||||
{
|
||||
inventory.machines = {
|
||||
# Define a machine
|
||||
jon = { };
|
||||
};
|
||||
|
||||
# Additional NixOS configuration can be added here.
|
||||
# machines/jon/configuration.nix will be automatically imported.
|
||||
# See: https://docs.clan.lol/guides/more-machines/#automatic-registration
|
||||
machines = {
|
||||
# jon = { config, ... }: {
|
||||
# environment.systemPackages = [ pkgs.asciinema ];
|
||||
# };
|
||||
};
|
||||
}
|
||||
```
|
||||
# Additional NixOS configuration can be added here.
|
||||
# machines/jon/configuration.nix will be automatically imported.
|
||||
# See: https://docs.clan.lol/guides/more-machines/#automatic-registration
|
||||
machines = {
|
||||
# jon = { config, ... }: {
|
||||
# environment.systemPackages = [ pkgs.asciinema ];
|
||||
# };
|
||||
};
|
||||
}
|
||||
```
|
||||
````
|
||||
|
||||
=== "CLI (imperative)"
|
||||
|
||||
```sh
|
||||
clan machines create jon
|
||||
```
|
||||
````
|
||||
```sh
|
||||
clan machines create jon
|
||||
```
|
||||
|
||||
The imperative command might create a machine folder in `machines/jon`
|
||||
And might persist information in `inventory.json`
|
||||
The imperative command might create a machine folder in `machines/jon`
|
||||
And might persist information in `inventory.json`
|
||||
````
|
||||
|
||||
### Configuring a machine
|
||||
|
||||
!!! Note
|
||||
The option: `inventory.machines.<name>` is used to define metadata about the machine
|
||||
That includes for example `deploy.targethost` `machineClass` or `tags`
|
||||
!!! Note The option: `inventory.machines.<name>` is used to define metadata
|
||||
about the machine That includes for example `deploy.targethost` `machineClass`
|
||||
or `tags`
|
||||
|
||||
The option: `machines.<name>` is used to add extra *nixosConfiguration* to a machine
|
||||
```
|
||||
The option: `machines.<name>` is used to add extra *nixosConfiguration* to a machine
|
||||
```
|
||||
|
||||
Add the following to your `clan.nix` file for each machine.
|
||||
This example demonstrates what is needed based on a machine called `jon`:
|
||||
Add the following to your `clan.nix` file for each machine. This example
|
||||
demonstrates what is needed based on a machine called `jon`:
|
||||
|
||||
```{.nix .annotate title="clan.nix" hl_lines="3-6 15-19"}
|
||||
{
|
||||
@@ -74,8 +82,10 @@ This example demonstrates what is needed based on a machine called `jon`:
|
||||
}
|
||||
```
|
||||
|
||||
1. Tags can be used to automatically add this machine to services later on. - You dont need to set this now.
|
||||
2. Add your *ssh key* here - That will ensure you can always login to your machine via *ssh* in case something goes wrong.
|
||||
1. Tags can be used to automatically add this machine to services later on. -
|
||||
You dont need to set this now.
|
||||
2. Add your *ssh key* here - That will ensure you can always login to your
|
||||
machine via *ssh* in case something goes wrong.
|
||||
|
||||
### (Optional) Create a `configuration.nix`
|
||||
|
||||
@@ -94,8 +104,9 @@ This example demonstrates what is needed based on a machine called `jon`:
|
||||
|
||||
### (Optional) Renaming a Machine
|
||||
|
||||
Older templates included static machine folders like `jon` and `sara`.
|
||||
If your setup still uses such static machines, you can rename a machine folder to match your own machine name:
|
||||
Older templates included static machine folders like `jon` and `sara`. If your
|
||||
setup still uses such static machines, you can rename a machine folder to match
|
||||
your own machine name:

```bash
git mv ./machines/jon ./machines/<your-machine-name>
```

Since your Clan configuration lives inside a Git repository, remember:

- Only files tracked by Git (`git add`) are recognized.
- Whenever you add, rename, or remove files, run:

```bash
git add ./machines/<your-machine-name>
```

to stage the changes.

---

### (Optional) Removing a Machine

If you want to work with a single machine for now, you can remove other
machine entries both from your `flake.nix` and from the `machines` directory.
For example, to remove the machine `sara`:

```bash
git rm -rf ./machines/sara
```

Make sure to also remove or update any references to that machine in your
`nix files` or `inventory.json`, if you have any.

# How to add services

A service in clan is a self-contained, reusable unit of system configuration
that provides a specific piece of functionality across one or more machines.

Think of it as a recipe for running a tool — like automatic backups, VPN
networking, monitoring, etc.

In Clan, services are multi-host and role-based:

- Roles map machines to logical service responsibilities, enabling structured,
  clean deployments.
- You can use tags instead of explicit machine names.

To learn more: [Guide about clanService](../clanServices.md)

!!! Important
    It is recommended to add at least one networking service such as
    `zerotier` that allows reaching all your clan machines from your setup
    computer across the globe.

## Configure a Zerotier Network (recommended)

```{.nix .annotate title="clan.nix"}
{
  # ...
}
```

1. See [reference/clanServices](../../reference/clanServices/index.md) for all
   available services and how to configure them. Or read
   [authoring/clanServices](../../guides/services/community.md) if you want to
   bring your own.
2. Replace `__YOUR_CONTROLLER_` with the *name* of your machine.
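
As a rough illustration, a zerotier instance might be wired up along these
lines. Treat this as a sketch: the controller machine name is a placeholder,
and the exact option paths should be checked against the clanServices
reference linked above:

```nix
{
  inventory.instances = {
    zerotier = {
      # One machine acts as the network controller...
      roles.controller.machines."jon" = { };
      # ...and every machine in the clan joins as a peer.
      roles.peer.tags.all = { };
    };
  };
}
```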

Adding the following services is recommended for most users:

```{.nix .annotate title="clan.nix"}
{
  # ...
}
```

1. The `admin` service will generate a **root-password** and **add your
   ssh-key**, which allows for convenient administration.
2. Equivalent to directly setting `authorizedKeys` like in
   [configuring a machine](./add-machines.md#configuring-a-machine)
3. Adds `user = jon` as a user on all machines. Will create a `home`
   directory, and prompt for a password before deployment.
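
Pulling the third annotation together, a sketch of such an instance block
could look like the following. The instance name and settings are illustrative
placeholders, not the authoritative schema:

```nix
{
  inventory.instances = {
    jon-user = {
      module.name = "users";
      # (3) Adds the user on all machines; a home directory is created
      # and a password is prompted for before deployment.
      roles.default.tags.all = { };
      roles.default.settings.user = "jon";
    };
  };
}
```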

!!! Note "Under construction"

    The users concept of clan is not done yet. This guide outlines some
    solutions from our community.

Defining users can be done in many different ways. We want to highlight two
approaches:

- Using clan's [users](../../reference/clanServices/users.md) service.
- Using a custom approach.

To add a first *user*, this guide will leverage two things:

- [clanServices](../../reference/clanServices/index.md): Allows to bind
  arbitrary logic to something we call an `instance`.
- [clanServices/users](../../reference/clanServices/users.md): Implements
  logic for adding a single user perInstance.

The example shows how to add a user called `jon`:

```{.nix .annotate title="clan.nix"}
{
  # ...
}
```

1. Add `user = jon` as a user on all machines. Will create a `home` directory,
   and prompt for a password before deployment.
2. Add this user to `all` machines.
3. Define the `name` of the user to be `jon`.

The `users` service creates a `/home/jon` directory, allows `jon` to sign in,
and will take care of the user's password.

For more information see
[clanService/users](../../reference/clanServices/users.md).

## Using a custom approach

Some people like to define a `users` folder in their repository root. That
allows binding all user-specific logic to a single place (`default.nix`),
which can be imported into individual machines to make the user available on
that machine.

```bash
.
...
```

## Using [home-manager](https://github.com/nix-community/home-manager)

When using clan's `users` service it is possible to define extraModules. In
fact, this is always possible when using clan's services.

We can use this property of clan services to bind a nixosModule to the user,
which configures home-manager.

```{.nix title="clan.nix" hl_lines="22"}
{
  # ...
}
```

1. Type `path` or `string`: Must point to a separate file. Inlining a module
   is not possible.

!!! Note "This is inspiration"

    Our community might come up with better solutions soon. We are seeking
    contributions to improve this pattern if you have a nicer solution in
    mind.

```nix title="users/jon/home.nix"
# NixOS module to import home-manager and the home-manager configuration of 'jon'
# ...
```
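
Since the body of that file is elided here, the following is a minimal sketch
of what such a NixOS module could look like, assuming `home-manager` is a
flake input passed into your modules (adjust the input wiring, user name, and
`stateVersion` to your setup):

```nix
{ inputs, ... }:
{
  # Import home-manager's NixOS module and configure the user 'jon'.
  imports = [ inputs.home-manager.nixosModules.home-manager ];

  home-manager.users.jon = {
    home.stateVersion = "25.05";
    programs.git.enable = true;
  };
}
```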

# Configure Disk Config

By default clan uses [disko](https://github.com/nix-community/disko), which
allows for declarative disk partitioning.

To see what disk templates are available run:

```{.shellSession hl_lines="10" .no-copy}
$ clan templates list

Available 'machine' templates
...
│ └── test-morph-template: Morph a machine
```

For this guide we will select the `single-disk` template, which uses
`A simple ext4 disk with a single partition`.

!!! tip
    For advanced partitioning, see
    [Disko templates](https://github.com/nix-community/disko-templates) or
    [Disko examples](https://github.com/nix-community/disko/tree/master/example).
    You can also
    [contribute a disk template to clan core](https://docs.clan.lol/guides/disko-templates/community/).

To set up a disk schema for a machine run

The command should now be successful:

```shellSession
...
Applied disk template 'single-disk' to machine 'jon'
```

A `disko.nix` file should be created in `machines/jon`. You can have a look
and customize it if needed.

!!! Danger
    Don't change the `disko.nix` after the machine is installed for the first
    time, unless you really know what you are doing. Changing the disko
    configuration requires wiping and reinstalling the machine.

## Deploy the machine

**Finally deployment time!**

This command is destructive and will format your disk and install NixOS on
it! It is equivalent to appending `--phases kexec,disko,install,reboot`.

```bash
clan machines install [MACHINE] --target-host root@<IP>
```

This guide will help you convert your existing NixOS configurations into a
Clan.

!!! Warning
    Migrating instead of starting new can be trickier and might lead to bugs
    or unexpected issues. We recommend reading the
    [Getting Started](./index.md) guide first.

Once you have a working setup and understand the concepts, transferring your
NixOS configurations over is easy.

## Back up your existing configuration

Before you start, it is strongly recommended to back up your existing
configuration in any form you see fit. If you use version control to manage
your configuration changes, it is also a good idea to follow the migration
guide in a separate branch until everything works as expected.

## Starting Point

We assume you are already using NixOS flakes to manage your configuration. If
not, migrate to a flake-based setup following the official
[NixOS documentation](https://nix.dev/manual/nix/2.25/command-ref/new-cli/nix3-flake.html).
The snippet below shows a common Nix flake. For this example we will assume
you have two hosts: **berlin** and **cologne**.

```diff
+ outputs = { self, nixpkgs, clan-core }:
```

The existing `nixosConfigurations` output of your flake will be created by
clan. In addition, a new `clanInternals` output will be added. Since both of
these are provided by the output of `clan-core.lib.clan`, a common syntax is
to use a `let...in` statement to create your clan and access its parameters
in the flake outputs.
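
That `let...in` pattern can be sketched as follows, using the **berlin** and
**cologne** hosts from the example. The inputs and machine module paths are
illustrative placeholders; consult the clan documentation for the
authoritative `clan-core.lib.clan` arguments:

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  inputs.clan-core.url = "https://git.clan.lol/clan/clan-core/archive/main.tar.gz";
  inputs.clan-core.inputs.nixpkgs.follows = "nixpkgs";

  outputs = { self, nixpkgs, clan-core }:
    let
      clan = clan-core.lib.clan {
        inherit self;
        meta.name = "myclan";  # hypothetical clan name
        machines = {
          berlin = ./machines/berlin/configuration.nix;
          cologne = ./machines/cologne/configuration.nix;
        };
      };
    in
    {
      # Both outputs are provided by clan-core.lib.clan.
      inherit (clan) nixosConfigurations clanInternals;
    };
}
```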

✅ Et voilà! Your existing hosts are now part of a clan.

Existing Nix tooling should still work as normal. To check that you didn't
make any errors, run `nix flake show` and verify both hosts are still
recognized as if nothing had changed. You should also see the new `clan`
output.

```{.shellSession .no-copy}
❯ nix flake show
...
```

Clan needs to know where it can reach your hosts. For testing purposes set:

```nix
{
  # ...
}
```

See our guide on properly
[configuring machines networking](../networking.md).

## Next Steps

# USB Installer Image for Physical Machines

To install Clan on physical machines, you need to use our custom installer
image. This is necessary for proper installation and operation.

!!! note "Deploying to a Virtual Machine?"
    If you're deploying to a virtual machine (VM), you can skip this section
    and go directly to the
    [Deploy Virtual Machine](./hardware-report-virtual.md) step. In this
    scenario, we automatically use
    [nixos-anywhere](https://github.com/nix-community/nixos-anywhere) to
    replace the kernel during runtime.

??? info "Why nixos-anywhere Doesn't Work on Physical Hardware?"
    nixos-anywhere relies on [kexec](https://wiki.archlinux.org/title/Kexec)
    to replace the running kernel with our custom one. This method often has
    compatibility issues with real hardware, especially systems with
    dedicated graphics cards like laptops and servers, leading to crashes and
    black screens.

??? info "Reasons for a Custom Install Image"
    Our custom install images are built to include essential tools like
    [nixos-facter](https://github.com/nix-community/nixos-facter) and support
    for [ZFS](https://wiki.archlinux.org/title/ZFS). They're also optimized
    to run on systems with as little as 1 GB of RAM, ensuring efficient
    performance even on lower-end hardware.

## Prerequisites

2. Identify your flash drive with `lsblk`:

    ```shellSession
    lsblk
    ```

    ```{.shellSession hl_lines="2" .no-copy}
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
    sdb 8:0 1 117,2G 0 disk
    └─sdb1 8:1 1 117,2G 0 part /run/media/qubasa/INTENSO
    nvme0n1 259:0 0 1,8T 0 disk
    ├─nvme0n1p1 259:1 0 512M 0 part /boot
    └─nvme0n1p2 259:2 0 1,8T 0 part
      └─luks-f7600028-9d83-4967-84bc-dd2f498bc486 254:0 0 1,8T 0 crypt /nix/store
    ```

    !!! Info "In this case the USB device is `sdb`"

3. Ensure all partitions on the drive are unmounted. Replace `sdb1` in the
   command below with your device identifier (like `sdc1`, etc.):

    ```shellSession
    sudo umount /dev/sdb1
    ```

## Installer

=== "**Linux OS**"

    **Create a Custom Installer**

    We recommend building your own installer for the following reasons:

    - Include your ssh public keys in the image, which allows passwordless
      ssh connections later on.
    - Set your preferred language and keymap.

    ```bash
    clan flash write --flake https://git.clan.lol/clan/clan-core/archive/main.tar.gz \
      --ssh-pubkey $HOME/.ssh/id_ed25519.pub \
      --keymap us \
      --language en_US.UTF-8 \
      --disk main /dev/sd<X> \
      flash-installer
    ```

    !!! Note
        Replace `$HOME/.ssh/id_ed25519.pub` with a path to your SSH public key.
        Replace `/dev/sd<X>` with the drive path you want to flash.

    !!! Danger "Specifying the wrong device can lead to unrecoverable data loss."

        The `clan flash` utility will erase the disk. Make sure to specify
        the correct device.

    - **SSH-Pubkey Option**

        To add an ssh public key into the installer image, append the option:

        ```
        --ssh-pubkey <pubkey_path>
        ```

        If you do not have an ssh key yet, you can generate one with the
        `ssh-keygen -t ed25519` command. This ssh key will be installed into
        the root user.

    - **Connect to the installer**

        On boot, the installer will display on-screen the IP address it
        received from the network. If you need to configure Wi-Fi first,
        refer to the next section. If Multicast-DNS (Avahi) is enabled on
        your own machine, you can also access the installer using the
        `flash-installer.local` address.

    - **List Keymaps**

        You can get a list of all keymaps with the following command:

        ```
        clan flash list keymaps
        ```

    - **List Languages**

        You can get a list of all languages with the following command:

        ```
        clan flash list languages
        ```

=== "**Other OS**"

    **Download Generic Installer**

    For x86_64:

    ```shellSession
    wget https://github.com/nix-community/nixos-images/releases/download/nixos-unstable/nixos-installer-x86_64-linux.iso
    ```

    For generic arm64 / aarch64 (probably does not work on Raspberry Pi):

    ```shellSession
    wget https://github.com/nix-community/nixos-images/releases/download/nixos-unstable/nixos-installer-aarch64-linux.iso
    ```

    !!! Note
        If you don't have `wget` installed, you can use
        `curl --progress-bar -OL <url>` instead.

## Flash the Installer to the USB Drive

!!! Danger "Specifying the wrong device can lead to unrecoverable data loss."

    The `dd` utility will erase the disk. Make sure to specify the correct
    device (`of=...`).

    For example, if the USB device is `sdb`, use `of=/dev/sdb` (on macOS it
    will look more like `/dev/disk1`).

On Linux, you can use the `lsblk` utility to identify the correct disk:

```
lsblk --output NAME,ID-LINK,FSTYPE,SIZE,MOUNTPOINT
```

On macOS use `diskutil`:

```
diskutil list
```

Use the `dd` utility to write the NixOS installer image to your USB drive.
Replace `/dev/sd<X>` with your external drive from above.

```shellSession
sudo dd bs=4M conv=fsync status=progress if=./nixos-installer-x86_64-linux.iso of=/dev/sd<X>
```

- **Connect to the installer**

    On boot, the installer will display on-screen the IP address it received
    from the network. If you need to configure Wi-Fi first, refer to the next
    section. If Multicast-DNS (Avahi) is enabled on your own machine, you can
    also access the installer using the `nixos-installer.local` address.

## Boot From USB Stick

- To use, boot from the Clan USB drive with **secure boot turned off**. For
  step-by-step instructions go to
  [Disabling Secure Boot](../../guides/secure-boot.md).

## (Optional) Connect to Wifi Manually

If you don't have access via LAN, the installer offers support for connecting
via Wifi.

```shellSession
iwctl
...
IPv4 address 192.168.188.50 (Your new local ip)
```

Press ++ctrl+d++ to exit `IWD`.

!!! Important
    Press ++ctrl+d++ **again** to update the displayed QR code and connection
    information.

You're all set up

### Generate Facts and Vars

Typically, this step is handled automatically when a machine is deployed.
However, to enable the use of `nix flake check` with your configuration, it
must be completed manually beforehand.

Currently, generating all the necessary facts requires two separate commands.
This is due to the coexistence of two parallel secret management solutions:
the newer, recommended version (`clan vars`) and the older version
(`clan facts`) that we are slowly phasing out.

To generate both facts and vars, execute the following commands:

```bash
clan facts generate && clan vars generate
```

### Check Configuration

Validate your configuration by running:

```bash
nix flake check
```

This command helps ensure that your system configuration is correct and free
from errors.

!!! Tip

    You can integrate this step into your
    [Continuous Integration](https://en.wikipedia.org/wiki/Continuous_integration)
    workflow to ensure that only valid Nix configurations are merged into
    your codebase.

# Installing a Physical Machine

Now that you have created a machine, added some services, and set up secrets,
this guide will walk you through how to deploy it.

### Step 0. Prerequisites

- [x] RAM > 2GB
- [x] **Two Computers**: You need one computer that you're getting ready
  (we'll call this the Target Computer) and another one to set it up from
  (we'll call this the Setup Computer). Make sure both can talk to each other
  over the network using SSH.
- [x] **Machine configuration**: See our basic
  [adding and configuring machine guide](./add-machines.md)
- [x] **Initialized secrets**: See [secrets](../secrets.md) for how to
  initialize your secrets.
- [x] **USB Flash Drive**: See [Clan Installer](./create-installer.md)
### Image Installer
|
||||
|
||||
This method makes use of the [image installers](./create-installer.md).
|
||||
|
||||
The installer will randomly generate a password and local addresses on boot, then run a SSH server with these preconfigured.
|
||||
The installer shows its deployment relevant information in two formats, a text form, as well as a QR code.
|
||||
|
||||
The installer will randomly generate a password and local addresses on boot,
|
||||
then run a SSH server with these preconfigured. The installer shows its
|
||||
deployment relevant information in two formats, a text form, as well as a QR
|
||||
code.
|
||||
|
||||
This is an example of the booted installer.
|
||||
|
||||
@@ -50,65 +57,70 @@ This is an example of the booted installer.
└─────────────────────────────────────────────────────────────────────────────────────┘
```

1. This is not an actual QR code, because it is displayed rather poorly on text sites.
    This would be the actual content of this specific QR code prettified:

    ```json
    {
      "pass": "cheesy-capital-unwell",
      "tor": "6evxy5yhzytwpnhc2vpscrbti3iktxdhpnf6yim6bbs25p4v6beemzyd.onion",
      "addrs": [
        "2001:9e8:347:ca00:21e:6ff:fe45:3c92"
      ]
    }
    ```

    To generate the actual QR code that would be displayed, use:

    ```shellSession
    echo '{"pass":"cheesy-capital-unwell","tor":"6evxy5yhzytwpnhc2vpscrbti3iktxdhpnf6yim6bbs25p4v6beemzyd.onion","addrs":["2001:9e8:347:ca00:21e:6ff:fe45:3c92"]}' | nix run nixpkgs#qrencode -- -s 2 -m 2 -t utf8
    ```

2. The root password for the installer medium.
    This password is autogenerated and meant to be easily typeable.

3. See how to connect the installer medium to wlan [here](./create-installer.md).

!!! tip
    For easy sharing of deployment information via QR code, we highly recommend using [KDE Connect](https://apps.kde.org/de/kdeconnect/).

There are two ways to deploy your machine:
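The deployment JSON above can also be consumed from a script. A minimal POSIX-shell sketch, with field names taken from the example output and sed patterns that are illustrative rather than a supported interface:

```shell
# Sketch: pull the password and first address out of the installer JSON
# shown above without extra tooling (jq would be cleaner if available).
json='{"pass":"cheesy-capital-unwell","tor":"6evxy5yhzytwpnhc2vpscrbti3iktxdhpnf6yim6bbs25p4v6beemzyd.onion","addrs":["2001:9e8:347:ca00:21e:6ff:fe45:3c92"]}'

# Extract the "pass" field with a conservative sed pattern.
pass=$(printf '%s' "$json" | sed -n 's/.*"pass":"\([^"]*\)".*/\1/p')

# Extract the first entry of the "addrs" array.
addr=$(printf '%s' "$json" | sed -n 's/.*"addrs":\["\([^"]*\)".*/\1/p')

echo "$pass"   # cheesy-capital-unwell
echo "$addr"   # 2001:9e8:347:ca00:21e:6ff:fe45:3c92
```
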
=== "Password"

    ### Generating a Hardware Report

    The following command will generate a hardware report with [nixos-facter](https://github.com/nix-community/nixos-facter) and write it back into your machine folder. The `--phases kexec` flag makes sure we are not yet formatting anything; instead, if the target system is not a NixOS machine, it will use [kexec](https://wiki.archlinux.org/title/Kexec) to switch to a NixOS kernel.

    ```terminal
    clan machines install [MACHINE] \
      --update-hardware-config nixos-facter \
      --phases kexec \
      --target-host root@192.168.178.169
    ```

=== "QR Code"

    ### Generating a Hardware Report

    The following command will generate a hardware report with [nixos-facter](https://github.com/nix-community/nixos-facter) and write it back into your machine folder. The `--phases kexec` flag makes sure we are not yet formatting anything; instead, if the target system is not a NixOS machine, it will use [kexec](https://wiki.archlinux.org/title/Kexec) to switch to a NixOS kernel.

    #### Using a JSON String or File Path

    Copy the JSON string contained in the QR code and provide its path or paste it directly:

    ```terminal
    clan machines install [MACHINE] --json [JSON] \
      --update-hardware-config nixos-facter \
      --phases kexec
    ```

    #### Using an Image Containing the QR Code

    Provide the path to an image file containing the QR code displayed by the installer:

    ```terminal
    clan machines install [MACHINE] --png [PATH] \
      --update-hardware-config nixos-facter \
      --phases kexec
    ```

If you are using our template, `[MACHINE]` would be `jon`.
@@ -1,23 +1,29 @@
# Generate a VM Hardware Report

Now that you have created a machine, added some services, and set up secrets, this guide will walk you through how to deploy it.

## Prerequisites

- [x] RAM > 2GB
- [x] **Two Computers**: You need one computer that you're getting ready (we'll call this the Target Computer) and another one to set it up from (we'll call this the Setup Computer). Make sure both can talk to each other over the network using SSH.
- [x] **Machine configuration**: See our basic [adding and configuring machine guide](./add-machines.md)

Clan supports any cloud machine if it is reachable via SSH and supports `kexec`.

??? tip "NixOS can cause strange issues when booting in certain cloud environments."
    If on Linode: Make sure that the system uses "Direct Disk boot kernel" (found in the configuration panel).

The following command will generate a hardware report with [nixos-facter](https://github.com/nix-community/nixos-facter) and write it back into your machine folder. The `--phases kexec` flag makes sure we are not yet formatting anything; instead, if the target system is not a NixOS machine, it will use [kexec](https://wiki.archlinux.org/title/Kexec) to switch to a NixOS kernel.

```terminal
clan machines install [MACHINE] \
@@ -26,5 +32,7 @@ clan machines install [MACHINE] \
  --target-host myuser@<IP>
```

!!! Warning
    After running the above command, be aware that the SSH login user changes from `myuser` to `root`. For subsequent SSH connections to the target machine, use `root` as the login user. This change occurs because the system switches to the NixOS kernel using `kexec`.
@@ -1,66 +1,72 @@
# :material-clock-fast: Getting Started

Ready to manage your fleet of machines?

We will create a declarative infrastructure using **clan**, **git**, and **nix flakes**.

You'll finish with a centrally managed fleet, ready to import your existing NixOS configuration.

## Prerequisites

Make sure you have the following:

- 💻 **Administration Machine**: Run the setup commands from this machine.
- 🛠️ **Nix**: The Nix package manager, installed on your administration machine.

    ??? info "**How to install Nix (Linux / MacOS / NixOS)**"

        **On Linux or macOS:**

        1. Run the recommended installer:

            ```shellSession
            curl --proto '=https' --tlsv1.2 -sSf -L https://install.determinate.systems/nix | sh -s -- install
            ```

        2. After installation, ensure flakes are enabled by adding this line to `~/.config/nix/nix.conf`:

            ```
            experimental-features = nix-command flakes
            ```

        **On NixOS:**

        Nix is already installed. You only need to enable flakes for your user in your `configuration.nix`:

        ```nix
        {
          nix.settings.experimental-features = [ "nix-command" "flakes" ];
        }
        ```

        Then, run `nixos-rebuild switch` to apply the changes.

- 🎯 **Target Machine(s)**: A remote machine with SSH, or your local machine (if NixOS).
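The nix.conf step above can be done idempotently from a script. A sketch that writes to a local file so it is safe to try; point `conf` at `~/.config/nix/nix.conf` for real use:

```shell
# Sketch: enable flakes without duplicating the line on repeated runs.
conf=./nix.conf
touch "$conf"
line='experimental-features = nix-command flakes'
# -x matches the whole line, -F disables regex interpretation.
grep -qxF "$line" "$conf" || printf '%s\n' "$line" >> "$conf"
# Running it a second time adds nothing.
grep -qxF "$line" "$conf" || printf '%s\n' "$line" >> "$conf"
```
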
## Create a New Clan

1. Navigate to your desired directory:

    ```shellSession
    cd <your-directory>
    ```

2. Create a new clan flake:

    **Note:** This creates a new directory in your current location.

    ```shellSession
    nix run https://git.clan.lol/clan/clan-core/archive/main.tar.gz#clan-cli --refresh -- flakes create
    ```

3. Enter a **name** in the prompt:

    ```terminalSession
    Enter a name for the new clan: my-clan
    ```

## Project Structure

@@ -75,11 +81,11 @@ my-clan/
└── sops/
```

!!! note "Templates"
    This is the structure for the `default` template.

    Use `clan templates list` and `clan templates --help` for available templates & more. Keep in mind that the exact files may change as templates evolve.
## Activate the Environment

@@ -91,24 +97,29 @@ cd my-clan

Now, activate the environment using one of the following methods.

=== "Automatic (direnv, recommended)"

    **Prerequisite**: You must have [nix-direnv](https://github.com/nix-community/nix-direnv) installed.

    Run `direnv allow` to automatically load the environment whenever you enter this directory.

    ```shellSession
    direnv allow
    ```

=== "Manual (nix develop)"

    Run `nix develop` to load the environment for your current shell session.

    ```shellSession
    nix develop
    ```

## Verify the Setup

Once your environment is active, verify that the clan command is available by running:

```shellSession
clan show
@@ -121,9 +132,10 @@ Name: __CHANGE_ME__
Description: None
```

This confirms your setup is working correctly.

You can now change the default name by editing the `meta.name` field in your `clan.nix` file.

```{.nix title="clan.nix" hl_lines="3"}
{
@@ -134,4 +146,3 @@ You can now change the default name by editing the `meta.name` field in your `cl
  # elided
}
```
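The rename can also be scripted. A sketch that uses a minimal stand-in for the generated `clan.nix`; the file contents here are illustrative, not the full template:

```shell
# Sketch: set meta.name non-interactively by replacing the placeholder.
printf '{\n  meta.name = "__CHANGE_ME__";\n}\n' > clan.nix
# Portable in-place edit via a temp file (sed -i differs across platforms).
sed 's/__CHANGE_ME__/my-clan/' clan.nix > clan.nix.tmp && mv clan.nix.tmp clan.nix
grep 'meta.name' clan.nix
```
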
@@ -1,12 +1,12 @@
# Update Your Machines

Clan CLI enables you to remotely update your machines over SSH. This requires setting up a target address for each target machine.

### Setting `targetHost`

In your Nix files, set the `targetHost` to the reachable IP address of your new machine. This eliminates the need to specify `--target-host` with every command.

```{.nix title="clan.nix" hl_lines="9"}
{
@@ -23,16 +23,15 @@ inventory.machines = {
# [...]
}
```

The use of `root@` in the target address implies SSH access as the `root` user. Ensure that the root login is secured and only used when necessary.
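The `user@host` form of the target address splits cleanly with POSIX parameter expansion; a sketch that assumes nothing Clan-specific:

```shell
# Sketch: split a targetHost-style address into login user and host.
target="root@192.168.178.169"
user="${target%%@*}"   # strip the longest suffix matching '@*'
host="${target#*@}"    # strip the shortest prefix matching '*@'
echo "$user"   # root
echo "$host"   # 192.168.178.169
```
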

### Setting a Build Host

If the machine does not have enough resources to run the NixOS evaluation or build itself, it is also possible to specify a build host instead. During an update, the CLI will SSH into the build host and run `nixos-rebuild` from there.

```{.nix hl_lines="5" .no-copy}
buildClan {
@@ -55,10 +54,10 @@ clan machines update jon --build-host root@192.168.1.10
clan machines update jon --build-host local
```

!!! Note
    Make sure that the CPU architecture is the same for the buildHost as for the targetHost.
    Example:
    If you want to deploy to a macOS machine, the architecture is ARM64-Darwin, which means you need a second macOS machine to build it.
### Updating Machine Configurations

@@ -68,17 +67,18 @@ Execute the following command to update the specified machine:
clan machines update jon
```

You can also update all configured machines simultaneously by omitting the machine name:

```bash
clan machines update
```

### Excluding a machine from `clan machine update`

To exclude machines from being updated when running `clan machines update` without any machines specified, one can set the `clan.deployment.requireExplicitUpdate` option to true:

```{.nix hl_lines="5" .no-copy}
buildClan {
@@ -91,19 +91,22 @@ buildClan {
};
```

This is useful for machines that are not always online or are not part of the regular update cycle.

### Uploading Flake Inputs

When updating remote machines, flake inputs are usually fetched by the build host. However, if your flake inputs require authentication (e.g., private repositories), you can use the `--upload-inputs` flag to upload all inputs from your local machine:

```bash
clan machines update jon --upload-inputs
```

This is particularly useful when:

- Your flake references private Git repositories
- Authentication credentials are only available on your local machine
- The build host doesn't have access to certain network resources
@@ -6,12 +6,14 @@ This guide explains how to manage macOS machines using Clan.

Currently, Clan supports the following features for macOS:

- `clan machines update` for existing [nix-darwin](https://github.com/nix-darwin/nix-darwin) installations
- Support for [vars](../concepts/generators.md)

## Add Your Machine to Your Clan Flake

In this example, we'll name the machine `yourmachine`. Replace this with your preferred machine name.

=== "**If using clan-core.lib.clan**"

@@ -37,7 +39,8 @@ clan-core.lib.clan {

## Add a `configuration.nix` for Your Machine

Create the file `./machines/yourmachine/configuration.nix` with the following content (replace `yourmachine` with your chosen machine name):

```nix
{
@@ -60,12 +63,13 @@ Replace `yourmachine` with your chosen machine name.

## Install Nix

Install Nix on your macOS machine using one of the methods described in the [nix-darwin prerequisites](https://github.com/nix-darwin/nix-darwin?tab=readme-ov-file#prerequisites).

## Install nix-darwin

Upload your Clan flake to the macOS machine. Then, from within your flake directory, run:

```sh
sudo nix run nix-darwin/master#darwin-rebuild -- switch --flake .#yourmachine
@@ -1,12 +1,12 @@

This guide provides detailed instructions for configuring [ZeroTier VPN](https://zerotier.com) within Clan. Follow the outlined steps to set up a machine as a VPN controller (`<CONTROLLER>`) and to include a new machine into the VPN.

## Concept

By default all machines within one clan are connected via a chosen network technology.

```{.no-copy}
Clan
@@ -15,19 +15,22 @@ Clan
Node B
```

This guide shows you how to configure `zerotier` through clan's `Inventory` system.

## The Controller

The controller is the initial entrypoint for new machines into the VPN. It will sign the IDs of new machines. Once IDs are signed, the controller's continuous operation is not essential. A good controller choice is nevertheless a machine that can always be reached for updates, so that new peers can be added to the network.

For the purpose of this guide we have two machines:

- The `controller` machine, which will be the zerotier controller.
- The `new_machine` machine, which is the machine we want to add to the VPN network.

## Configure the Service
@@ -99,12 +102,15 @@ The status should be "ONLINE":

## Further

Currently you can only use **Zerotier** as the networking technology, because this is the first network stack we aim to support. In the future we plan to add additional network technologies like tinc, head/tailscale, Yggdrasil, and mycelium.

We chose zerotier because in our tests it was a straightforward solution to bootstrap. It allows you to self-host a controller, and the controller doesn't need to be globally reachable, which made it a good fit for starting the project.

## Debugging

@@ -134,16 +140,20 @@ $ sudo zerotier-cli info

=== "with ZerotierIP"

    ```bash
    $ sudo zerotier-members allow --member-ip <IP>
    ```

    Substitute `<IP>` with the ZeroTier IP obtained previously.

=== "with ZerotierID"

    ```bash
    $ sudo zerotier-members allow <ID>
    ```

    Substitute `<ID>` with the ZeroTier ID obtained previously.
@@ -1,6 +1,8 @@
# Migrate disko config from `clanModules.disk-id`

If you previously bootstrapped a machine's disk using `clanModules.disk-id`, you should now migrate to a standalone, self-contained disko configuration. This ensures long-term stability and avoids reliance on dynamic values from Clan.

If your `disko.nix` currently looks something like this:

@@ -49,8 +51,8 @@ Run the following command to retrieve the generated disk ID for your machine:
clan vars list <machineName>
```

This should print the generated `disk-id/diskId` value in clear text. You should see output like:

```shellSession
disk-id/diskId: fcef30a749f8451d8f60c46e1ead726f
@@ -68,7 +70,8 @@ We are going to make three changes:

- Remove `let in, imports, {lib,clan-core,config, ...}:` to isolate the file.
- Replace `suffix` with the actual disk-id.
- Move `disko.devices.disk.main.device` from `flake.nix` or `configuration.nix` into this file.

```{.nix title="disko.nix" hl_lines="7-9 11-14"}
{
@@ -93,6 +96,8 @@ We are going to make three changes:
}
```

These steps are only needed for existing configurations that depend on the `diskId` module.

For newer machines, Clan offers simple *disk templates* via its [templates CLI](../../reference/cli/templates.md).
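When scripting the migration, the disk ID can be captured from the `clan vars list` output format shown above. A sketch in which the sample line stands in for the real command's output:

```shell
# Sketch: strip the "name: " prefix to keep only the diskId value.
line='disk-id/diskId: fcef30a749f8451d8f60c46e1ead726f'
disk_id="${line#*: }"   # remove everything up to and including ': '
echo "$disk_id"   # fcef30a749f8451d8f60c46e1ead726f
```
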
@@ -1,21 +1,23 @@
|
||||
# Migrating from using `clanModules` to `clanServices`
|
||||
|
||||
**Audience**: This is a guide for **people using `clanModules`**.
|
||||
If you are a **module author** and need to migrate your modules please consult our **new** [clanServices authoring guide](../../guides/services/community.md)
|
||||
**Audience**: This is a guide for **people using `clanModules`**. If you are a
|
||||
**module author** and need to migrate your modules please consult our **new**
|
||||
[clanServices authoring guide](../../guides/services/community.md)
|
||||
|
||||
## What's Changing?
|
||||
|
||||
Clan is transitioning from the legacy `clanModules` system to the `clanServices` system. This guide will help you migrate your service definitions from the old format (`inventory.services`) to the new format (`inventory.instances`).
|
||||
Clan is transitioning from the legacy `clanModules` system to the `clanServices`
|
||||
system. This guide will help you migrate your service definitions from the old
|
||||
format (`inventory.services`) to the new format (`inventory.instances`).
|
||||
|
||||
| Feature | `clanModules` (Old) | `clanServices` (New) |
| ---------------- | -------------------------- | ----------------------- |
| Module Class | `"nixos"` | `"clan.service"` |
| Inventory Key | `services` | `instances` |
| Module Source | Static | Composable via flakes |
| Custom Settings | Loosely structured | Strongly typed per-role |
| Migration Status | Deprecated (to be removed) | ✅ Preferred |

---

## Before: Old `services` Definition

```nix
services = {
  # ... (full example elided in this diff)
};
```

---

## ✅ After: New `instances` Definition with `clanServices`

```nix
instances = {
  # ... (full example elided in this diff)
};
```
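
Putting the pieces together, a hedged before/after sketch for a borgbackup service (the machine names `backup-server` and `laptop` are illustrative; `borgbackup` itself and the `borgbackup.simple → borgbackup-simple` key flattening come from this guide):

```nix
# Old: inventory.services -- nested <service>.<instance> keys, list-typed machines
services.borgbackup.simple = {
  roles.server.machines = [ "backup-server" ];
  roles.client.machines = [ "laptop" ];
};

# New: inventory.instances -- one flat key per instance, an explicit module
# source, and attribute-set-typed machines
instances."borgbackup-simple" = {
  module.name = "borgbackup";
  module.input = "clan-core";
  roles.server.machines."backup-server" = { };
  roles.client.machines."laptop" = { };
};
```

The flat key, the `module` declaration, and the attribute-set machines are exactly the mechanical changes the migration steps walk through.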

---

## Steps to Migrate

### Move `services` entries to `instances`

Check if a service that you use has been migrated [in our reference](../../reference/clanServices/index.md).

In your inventory, move it from `services = { ... };` to:

```nix
instances = { ... };
```

Each nested service-instance pair becomes a flat key, like `borgbackup.simple → borgbackup-simple`.

---

### Add `module.name` and `module.input`

If, for example, your flake has the input `inputs.clan-core.url = "github:clan/clan-core";`, then refer to it as `input = "clan-core"`.

---

### Move role and machine config under `roles`

In the new system:

- Use `roles.<role>.machines.<hostname>.settings` for machine-specific config.
- Use `roles.<role>.settings` for role-wide config.
- The top-level `config` attribute has been removed.

Example:

```nix
roles.default.machines."test-inventory-machine".settings = {
  # ... (elided in this diff)
};
```

### Important Type Changes

The new `instances` format uses **attribute sets** instead of **lists** for tags and machines:

```nix
# ❌ Old format (lists)
roles.moon.machines = [ "eva" "eve" ];

# ✅ New format (attribute sets)
roles.moon.machines.eva = { };
roles.moon.machines.eve = { };
```
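
Tags follow the same list-to-attribute-set conversion; a short sketch, assuming a hypothetical `backup` tag:

```nix
# ❌ Old format (list)
roles.client.tags = [ "backup" ];

# ✅ New format (attribute set)
roles.client.tags.backup = { };
```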

---

## Migration Status of clanModules

The following table shows the migration status of each deprecated clanModule:

| clanModule | Migration Status | Notes |
|---|---|---|
| `admin` | ✅ [Migrated](../../reference/clanServices/admin.md) | |
| `auto-upgrade` | ❌ Removed | |
| `borgbackup-static` | ❌ Removed | |
| `borgbackup` | ✅ [Migrated](../../reference/clanServices/borgbackup.md) | |
| `data-mesher` | ✅ [Migrated](../../reference/clanServices/data-mesher.md) | |
| `deltachat` | ❌ Removed | |
| `disk-id` | ❌ Removed | |
| `dyndns` | [Being Migrated](https://git.clan.lol/clan/clan-core/pulls/4390) | |
| `ergochat` | ❌ Removed | |
| `garage` | ✅ [Migrated](../../reference/clanServices/garage.md) | |
| `golem-provider` | ❌ Removed | |
| `heisenbridge` | ❌ Removed | |
| `importer` | ✅ [Migrated](../../reference/clanServices/importer.md) | |
| `iwd` | ❌ Removed | Use [wifi service](../../reference/clanServices/wifi.md) instead |
| `localbackup` | ✅ [Migrated](../../reference/clanServices/localbackup.md) | |
| `localsend` | ❌ Removed | |
| `machine-id` | ❌ Removed | Now an [option](../../reference/clan.core/settings.md) |
| `matrix-synapse` | ✅ [Migrated](../../reference/clanServices/matrix-synapse.md) | |
| `moonlight` | ❌ Removed | |
| `mumble` | ❌ Removed | |
| `mycelium` | ✅ [Migrated](../../reference/clanServices/mycelium.md) | |
| `nginx` | ❌ Removed | |
| `packages` | ✅ [Migrated](../../reference/clanServices/packages.md) | |
| `postgresql` | ❌ Removed | Now an [option](../../reference/clan.core/settings.md) |
| `root-password` | ✅ [Migrated](../../reference/clanServices/users.md) | See [migration guide](../../reference/clanServices/users.md#migration-from-root-password-module) |
| `single-disk` | ❌ Removed | |
| `sshd` | ✅ [Migrated](../../reference/clanServices/sshd.md) | |
| `state-version` | ✅ [Migrated](../../reference/clanServices/state-version.md) | |
| `static-hosts` | ❌ Removed | |
| `sunshine` | ❌ Removed | |
| `syncthing-static-peers` | ❌ Removed | |
| `syncthing` | ✅ [Migrated](../../reference/clanServices/syncthing.md) | |
| `thelounge` | ❌ Removed | |
| `trusted-nix-caches` | ✅ [Migrated](../../reference/clanServices/trusted-nix-caches.md) | |
| `user-password` | ✅ [Migrated](../../reference/clanServices/users.md) | |
| `vaultwarden` | ❌ Removed | |
| `xfce` | ❌ Removed | |
| `zerotier-static-peers` | ❌ Removed | |
| `zerotier` | ✅ [Migrated](../../reference/clanServices/zerotier.md) | |
| `zt-tcp-relay` | ❌ Removed | |

---
!!! Warning
    * Old `clanModules` (`class = "nixos"`) are deprecated and will be removed in the near future.
    * `inventory.services` is no longer recommended; use `inventory.instances` instead.
    * Module authors should begin exporting service modules under the `clan.modules` attribute of their flake.

## Troubleshooting Common Migration Errors

### Error: "not of type `attribute set of (submodule)`"

This error occurs when using lists instead of attribute sets for tags or machines:

```
error: A definition for option `flake.clan.inventory.instances.borgbackup-blob64.roles.client.tags' is not of type `attribute set of (submodule)'.
```

**Solution**: Convert lists to attribute sets as shown in the "Important Type Changes" section above.

### Error: "unsupported attribute `module`"

This error indicates the module structure is incorrect:

```
error: Module ':anon-4:anon-1' has an unsupported attribute `module'.
```

**Solution**: Ensure the `module` attribute has exactly two fields: `name` and `input`.

### Error: "attribute 'pkgs' missing"

This suggests the instance configuration is trying to use imports incorrectly:

```
error: attribute 'pkgs' missing
```

**Solution**: Use the `module = { name = "..."; input = "..."; }` format instead of `imports`.

### Removed Features

The following features from the old `services` format are no longer supported in `instances`:

- Top-level `config` attribute (use `roles.<role>.settings` instead)
- Direct module imports (use the `module` declaration instead)

### extraModules Support

The `extraModules` attribute is still supported in the new instances format! The key change is how modules are specified:

**Old format (string paths relative to clan root):**

```nix
roles.client.extraModules = [ "nixosModules/borgbackup.nix" ];
```

**New format (NixOS modules):**

```nix
# Direct module reference
roles.client.extraModules = [ ../nixosModules/borgbackup.nix ];

# Second example elided in this diff
roles.client.extraModules = [
  # ...
];
```
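
An inline NixOS module works here as well; a minimal sketch (the `services.openssh.enable` option is just an illustrative stand-in for whatever the module should configure):

```nix
roles.client.extraModules = [
  ({ config, lib, pkgs, ... }: {
    # any ordinary NixOS option can be set in an inline module
    services.openssh.enable = true;
  })
];
```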

The `extraModules` attribute now expects actual **NixOS modules** rather than string paths. This provides better type checking and more flexibility in how modules are specified.

**Alternative: Using @clan/importer**

For scenarios where you need to import modules with specific tag-based targeting, you can also use the dedicated `@clan/importer` service:

```nix
instances = {
  # ... (elided in this diff)
};
```

## Further reference

- [Inventory Concept](../../concepts/inventory.md)
- [Authoring a 'clan.service' module](../../guides/services/community.md)
- [ClanServices](../clanServices.md)
# Migrate modules from `facts` to `vars`.

For a high-level overview of `vars`, see our [blog post](https://clan.lol/blog/vars/).

This guide will help you migrate your modules that still use our [`facts`](../../guides/secrets.md) backend to the [`vars`](../../concepts/generators.md) backend.

The `vars` [module](../../reference/clan.core/vars.md) and the clan [command](../../reference/cli/vars.md) work in tandem; they should ideally be kept in sync.

## Keep Existing Values

In order to keep existing values and move them from `facts` to `vars`, we will need to set the corresponding option in the vars module:

```
migrateFact = "fact-name"
```

This will now check on `vars` generation if there is an existing `fact` with the name already present and, if that is the case, will migrate it to `vars`.

Let us look at the mapping a little closer. Suppose we have the following fact: `facts.services.vaultwarden.secret.admin`. This reads as follows: the `vaultwarden` `fact` service has the `admin` secret. In order to migrate this fact we would need the following `vars` configuration:

```nix
vars.generators.vaultwarden = {
  # ... (elided in this diff)
};
```
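
Filled in, such a generator might look roughly like the following sketch, assuming the value should simply be carried over from the old fact rather than regenerated:

```nix
vars.generators.vaultwarden = {
  # reuse the existing fact value on first generation
  migrateFact = "vaultwarden";
  # the file this generator owns; files are secret by default
  files.admin = { };
};
```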

And this reads as follows: the vaultwarden `vars` module generates the admin file.

## Prompts

Because prompts can be a necessity for certain systems, `vars` has a shorthand for defining them. A prompt is a request for user input. Let us look at how user input used to be handled in facts:

```nix
facts.services.forgejo-api = {
  # ... (elided in this diff)
  generator.script = "cp $prompt_value > $secret/token";
};
```

To have analogous functionality in `vars`:

```nix
vars.generators.forgejo-api = {
  prompts.token = {
    # ... (elided in this diff)
  };
};
```
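
Because prompts now live inside the generator, several can be declared side by side; a hedged sketch (the `user`/`token` prompt names and the `description` attribute are illustrative assumptions):

```nix
vars.generators.forgejo-api = {
  # two prompts in one generator, which facts could not express
  prompts.user.description = "Your forgejo user name";
  prompts.token.description = "Your forgejo API token";
};
```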

This does not only simplify prompting; it also allows us to define multiple prompts in one generator. A more analogous way to the `fact` method is available, in case the module author needs more flexibility with the prompt input:

```nix
vars.generators.forgejo-api = {
  # ... (elided in this diff)
};
```

As a larger example, consider a syncthing service defined with `facts`:

```nix
facts.services.syncthing = {
  # ... (elided in this diff)
};
```

This would be the corresponding `vars` module, which also will migrate existing facts:

```nix
vars.generators.syncthing = {
  migrateFact = "syncthing";
  # ... (elided in this diff)
};
```

Most of the usage patterns stay the same, but `vars` has a more ergonomic interface. There are no longer two different ways to define files (public/secret); files are now defined under the `files` attribute and are secret by default.

## Happy Migration

We hope this gives you a clear path to start and finish your migration from `facts` to `vars`. Please do not hesitate to reach out if something is still unclear - either through [matrix](https://matrix.to/#/#clan:clan.lol) or through our git [forge](https://git.clan.lol/clan/clan-core).

# Connecting to Your Machines

Clan provides automatic networking with fallback mechanisms to reliably connect to your machines.

## Option 1: Automatic Networking with Fallback (Recommended)

Clan's networking module automatically manages connections through various network technologies with intelligent fallback. When you run `clan ssh` or `clan machines update`, Clan tries each configured network by priority until one succeeds.

### Basic Setup with Internet Service

For machines with public IPs or DNS names, use the `internet` service to configure direct SSH while keeping fallback options:

```{.nix title="flake.nix" hl_lines="7-10 14-16"}
{
  # ... (elided in this diff)
}
```
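
The elided flake above might contain an instance roughly like the following sketch (the machine name `server` and the `host` setting name are assumptions, not taken from this page):

```nix
inventory.instances.internet = {
  module.name = "internet";
  module.input = "clan-core";
  # the address used for direct SSH; other networks remain as fallback
  roles.default.machines.server.settings.host = "server.example.com";
};
```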

### How It Works

Clan automatically tries networks in order of priority:

1. Direct internet connections (if configured)
2. VPN networks (ZeroTier, Tailscale, etc.)
3. Tor hidden services
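
The configured networks and the order in which they are tried can be inspected from the CLI; the `clan network overview` subcommand appears in the elided part of this page, so a typical check looks like:

```bash
clan network overview
```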
## Option 2: Manual targetHost (Bypasses Fallback!)

!!! warning
    Setting `targetHost` directly **disables all automatic networking and fallback**. Only use this if you need complete control and don't want Clan's intelligent connection management.

### Using Inventory (For Static Addresses)

Use inventory-level `targetHost` when the address is **static** and doesn't depend on NixOS configuration:

```{.nix title="flake.nix" hl_lines="8"}
{
  # ... (elided in this diff)
}
```
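
A minimal sketch of such an inventory entry (the machine name `server` is illustrative):

```nix
inventory.machines.server = {
  # static string, evaluated immediately; no access to config.* values
  deploy.targetHost = "root@192.168.1.100";
};
```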

**When to use inventory-level:**

- Static IP addresses: `"root@192.168.1.100"`
- DNS names: `"user@server.example.com"`
- Any address that doesn't change based on machine configuration

### Using NixOS Configuration (For Dynamic Addresses)

Use machine-level `targetHost` when you need to **interpolate values from the NixOS configuration**:

```{.nix title="flake.nix" hl_lines="7"}
{
  # ... (elided in this diff)
}
```
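
The corresponding machine-level option, sketched with a hostName interpolation:

```nix
{ config, ... }: {
  # evaluated after NixOS configuration, so config.* is available
  clan.core.networking.targetHost = "root@${config.networking.hostName}.local";
}
```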

**When to use machine-level (NixOS config):**

- Using hostName from config: `"root@${config.networking.hostName}.local"`
- Building from multiple config values: `"${config.users.users.deploy.name}@${config.networking.hostName}"`
- Any address that depends on evaluated NixOS configuration

!!! info "Key Difference"
    **Inventory-level** (`deploy.targetHost`) is evaluated immediately and works with static strings.
    **Machine-level** (`clan.core.networking.targetHost`) is evaluated after NixOS configuration and can access `config.*` values.

## Quick Decision Guide

| Scenario | Recommended Approach | Why |
|----------|---------------------|-----|
| Public servers | `internet` service | Keeps fallback options |
| Mixed infrastructure | Multiple networks | Automatic failover |
| Machines behind NAT | ZeroTier/Tor | NAT traversal with fallback |
| Testing/debugging | Manual targetHost | Full control, no magic |
| Single static machine | Manual targetHost | Simple, no overhead |

## Command-Line Override

```bash
clan machines update server --target-host root@backup-ip.com
clan ssh laptop --target-host user@10.0.0.5
```

Use this for debugging or emergency access when automatic networking isn't working.

This article provides an overview of the underlying secrets system which is used by [Vars](../concepts/generators.md). Under most circumstances you should use [Vars](../concepts/generators.md) directly instead.

Consider using `clan secrets` only for managing admin users and groups, as well as a debugging tool.

Manually interacting with secrets via `clan secrets [set|remove]`, etc. may break the integrity of your `Vars` state.

---

Clan enables encryption of secrets (such as passwords & keys) ensuring security and ease-of-use among users.

By default, Clan uses the [sops](https://github.com/getsops/sops) format and integrates with [sops-nix](https://github.com/Mic92/sops-nix) on NixOS machines. Clan can also be configured to be used with other secret store [backends](../reference/clan.core/vars.md#clan.core.vars.settings.secretStore).

## Create Your Admin Keypair

To get started, you'll need to create **your admin keypair**.

!!! info
    Don't worry — if you've already made one before, this step won't change or overwrite it.

```bash
clan secrets key generate
```

The command prints something like:

```
Generated age private key at '/home/joerg/.config/sops/age/keys.txt' for your user.
Also add your age public key to the repository with 'clan secrets users add YOUR_USER age1wkth7uhpkl555g40t8hjsysr20drq286netu8zptw50lmqz7j95sw2t3l7' (replace YOUR_USER with your actual username)
```

!!! warning
    Make sure to keep a safe backup of the private key you've just created.
    If it's lost, you won't be able to get to your secrets anymore because they all need the admin key to be unlocked.

If you already have an [age] secret key and want to use that instead, you can simply edit `~/.config/sops/age/keys.txt`:

```title="~/.config/sops/age/keys.txt"
AGE-SECRET-KEY-13GWMK0KNNKXPTJ8KQ9LPSQZU7G3KU8LZDW474NX3D956GGVFAZRQTAE3F4
```

Alternatively, you can provide your [age] secret key as an environment variable `SOPS_AGE_KEY`, or in a different file using `SOPS_AGE_KEY_FILE`. For more information see the [SOPS] guide on [encrypting with age].

!!! note
    It's safe to add any secrets created by the clan CLI and placed in your repository to version control systems like `git`.

## Add Your Public Key(s)

```console
clan secrets users add $USER --age-key <your_public_key>
```

It's best to choose the same username as on your Setup/Admin Machine that you use to control the deployment with.

Once run this will create the following files:

```
sops/
└── <your_username>/
    └── key.json
```

If you followed the quickstart tutorial all necessary secrets are initialized at this point.

!!! note
    You can add multiple age keys for a user by providing multiple `--age-key <your_public_key>` flags:

    ```console
    clan secrets users add $USER \
      --age-key <your_public_key_1> \
      --age-key <your_public_key_2> \
      ...
    ```

## Manage Your Public Key(s)

To remove a key from your user:

```
clan secrets users remove-key $USER --age-key <your_public_key>
```

[age]: https://github.com/FiloSottile/age
[age plugin]: https://github.com/FiloSottile/awesome-age?tab=readme-ov-file#plugins
[sops]: https://github.com/getsops/sops
[encrypting with age]: https://github.com/getsops/sops?tab=readme-ov-file#encrypting-using-age

## Adding a Secret

Secrets are created with `clan secrets set` and listed with `clan secrets list`:

```shellSession
clan secrets set <secret>
clan secrets list
```

A NixOS machine will automatically import all secrets that are encrypted for the current machine. At runtime it will use the host key to decrypt all secrets into an in-memory, non-persistent filesystem using [sops-nix](https://github.com/Mic92/sops-nix). In your NixOS configuration you can get a path to a secret like this: `config.sops.secrets.<name>.path`. For example:

```nix
{ config, ...}: {
  # ... (elided in this diff)
}
```
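
A minimal sketch of consuming such a path (the secret name `mysecret` and the `services.my-service.passwordFile` option are illustrative stand-ins):

```nix
{ config, ... }: {
  # hand the service a path to the decrypted secret,
  # instead of embedding the value in the world-readable store
  services.my-service.passwordFile = config.sops.secrets."mysecret".path;
}
```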

## Assigning Access

When using `clan secrets set <secret>` without arguments, secrets are encrypted for the key of the user named like your current `$USER`.

To add machines/users to an existing secret use:

@@ -203,7 +212,8 @@ Here's how to get started:
|
||||
clan secrets groups add-secret <group_name> <secret_name>
|
||||
```
|
||||
|
||||
**TIP** To encrypt all secrets of a machine for a specific group, use the following NixOS configuration:
|
||||
**TIP** To encrypt all secrets of a machine for a specific group, use the
|
||||
following NixOS configuration:
|
||||
|
||||
```
|
||||
{
|
||||
@@ -213,7 +223,8 @@ Here's how to get started:
|
||||
|
||||
### Adding Machine Keys
|
||||
|
||||
New machines in Clan come with age keys stored in `./sops/machines/<machine_name>`. To list these machines:
|
||||
New machines in Clan come with age keys stored in
|
||||
`./sops/machines/<machine_name>`. To list these machines:
|
||||
|
||||
```bash
|
||||
clan secrets machines list
|
||||
@@ -233,24 +244,29 @@ To fetch an age key from an SSH host key:
|
||||
|
||||
### Migration: Importing existing sops-based keys / sops-nix
|
||||
|
||||
`clan secrets` stores each secret in a single file, whereas [sops](https://github.com/Mic92/sops-nix) commonly allows to put all secrets in a yaml or json document.
|
||||
`clan secrets` stores each secret in a single file, whereas
|
||||
[sops](https://github.com/Mic92/sops-nix) commonly allows to put all secrets in
|
||||
a yaml or json document.
|
||||
|
||||
If you already happened to use sops-nix, you can migrate by using the `clan secrets import-sops` command by importing these files:
|
||||
If you already happened to use sops-nix, you can migrate by using the
|
||||
`clan secrets import-sops` command by importing these files:
|
||||
|
||||
```bash
|
||||
% clan secrets import-sops --prefix matchbox- --group admins --machine matchbox nixos/matchbox/secrets/secrets.yaml
|
||||
```
|
||||
|
||||
This will create secrets for each secret found in `nixos/matchbox/secrets/secrets.yaml` in a `./sops` folder of your repository.
Each member of the group `admins` in this case will be able to decrypt the secrets with their respective key.

Since our clan secret module will auto-import secrets that are encrypted for a particular nixos machine,
you can now remove `sops.secrets.<secrets> = { };` unless you need to specify more options for the secret like owner/group of the secret file.
This will create secrets for each secret found in
`nixos/matchbox/secrets/secrets.yaml` in a `./sops` folder of your repository.
Each member of the group `admins` in this case will be able to decrypt the
secrets with their respective key.

Since our clan secret module will auto-import secrets that are encrypted for a
particular nixos machine, you can now remove `sops.secrets.<secrets> = { };`
unless you need to specify more options for the secret like owner/group of the
secret file.
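
If you do need to keep such extra options, a minimal sops-nix override might look like this (a sketch; the secret name and user are illustrative):

```nix
{
  # Adjust ownership and permissions of the decrypted file at runtime.
  sops.secrets."my-service-api-key" = {
    owner = "my-service";
    group = "my-service";
    mode = "0400";
  };
}
```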

## Indepth Explanation

The secrets system conceptually knows two different entities:

- **Machine**: consumes secrets
@@ -258,33 +274,50 @@ The secrets system conceptually knows two different entities:

**A user** can add or revoke machines' access to secrets.

**A machine** can decrypt secrets that were encrypted specifically for that machine.
**A machine** can decrypt secrets that were encrypted specifically for that
machine.

!!! Danger
    **Always make sure at least one _User_ has access to a secret**. Otherwise you could lock yourself out from accessing the secret.
!!! Danger **Always make sure at least one _User_ has access to a secret**.
Otherwise you could lock yourself out from accessing the secret.

### Inherited implications

By default clan uses [sops](https://github.com/getsops/sops) through [sops-nix](https://github.com/Mic92/sops-nix) for managing its secrets, which inherits some implications that are important to understand:
By default clan uses [sops](https://github.com/getsops/sops) through
[sops-nix](https://github.com/Mic92/sops-nix) for managing its secrets, which
inherits some implications that are important to understand:

- **Public/Private keys**: Entities are identified via their public keys. Each
  entity can use their respective private key to decrypt a secret.

- **Public/Private keys**: Entities are identified via their public keys. Each entity can use their respective private key to decrypt a secret.
- **Public keys are stored**: All public keys are stored inside the repository
- **Secrets are stored encrypted**: secrets are stored inside the repository encrypted with the respective public keys
- **Secrets are deployed encrypted**: Fully encrypted secrets are deployed to machines at deployment time.
- **Secrets are decrypted by sops on-demand**: Each machine decrypts its secrets at runtime and stores them at an ephemeral location.
- **Machine key-pairs are auto-generated**: When a machine is created **no user-interaction is required** to set up public/private key-pairs.
- **Secrets are re-encrypted**: In case machines, users or groups are modified, secrets get re-encrypted on demand.

!!! Important
    After revoking access to a secret you should also change the underlying secret, i.e. change the API key or the password.
- **Secrets are stored encrypted**: secrets are stored inside the repository
  encrypted with the respective public keys

---

- **Secrets are deployed encrypted**: Fully encrypted secrets are deployed to
  machines at deployment time.

- **Secrets are decrypted by sops on-demand**: Each machine decrypts its secrets
  at runtime and stores them at an ephemeral location.

- **Machine key-pairs are auto-generated**: When a machine is created **no
  user-interaction is required** to set up public/private key-pairs.

- **Secrets are re-encrypted**: In case machines, users or groups are modified,
  secrets get re-encrypted on demand.

!!! Important After revoking access to a secret you should also change the
underlying secret, i.e. change the API key or the password.

______________________________________________________________________

### Machine and user keys

The following diagram illustrates how a user can provide a secret (i.e. a Password).
The following diagram illustrates how a user can provide a secret (i.e. a
Password).

- By using the **Clan CLI** a user encrypts the password with both the **User public-key** and the **machine's public-key**
- By using the **Clan CLI** a user encrypts the password with both the **User
  public-key** and the **machine's public-key**

- The *Machine* can decrypt the password with its private-key on demand.

@@ -305,14 +338,14 @@ Rel_R(secret, machine, "Decrypt", "", "machine privkey" )
@enduml
```

#### User groups

Here we illustrate how machine groups work.

Common use cases:

- **Shared Management**: Access among multiple users. I.e. a subset of secrets/machines that have two admins
- **Shared Management**: Access among multiple users. I.e. a subset of
  secrets/machines that have two admins

```plantuml
@startuml
@@ -335,7 +368,7 @@ Rel_R(secret, machine, "Decrypt", "", "machine privkey" )

<!-- TODO: See also [Groups Reference](#groups-reference) -->

---
______________________________________________________________________

#### Machine groups

@@ -366,10 +399,9 @@ Rel(secret, c1, "Decrypt", "", "Both machine A or B can decrypt using their priv

<!-- TODO: See also [Groups Reference](#groups-reference) -->

See the [readme](https://github.com/Mic92/sops-nix) of sops-nix for more
examples.

[age]: https://github.com/FiloSottile/age
[encrypting with age]: https://github.com/getsops/sops?tab=readme-ov-file#encrypting-using-age
[sops]: https://github.com/getsops/sops

@@ -1,4 +1,8 @@

At the moment, NixOS/Clan does not support [Secure Boot](https://wiki.gentoo.org/wiki/Secure_Boot). Therefore, you need to disable it in the BIOS. You can watch this [video guide](https://www.youtube.com/watch?v=BKVShiMUePc) or follow the instructions below:
At the moment, NixOS/Clan does not support
[Secure Boot](https://wiki.gentoo.org/wiki/Secure_Boot). Therefore, you need to
disable it in the BIOS. You can watch this
[video guide](https://www.youtube.com/watch?v=BKVShiMUePc) or follow the
instructions below:

## Insert the USB Stick

@@ -7,49 +11,46 @@ At the moment, NixOS/Clan does not support [Secure Boot](https://wiki.gentoo.org

## Access the UEFI/BIOS Menu

- Restart your computer.
- As your computer restarts, press the appropriate key to enter the UEFI/BIOS settings.
  ??? tip "The key depends on your laptop or motherboard manufacturer. Click to see a reference list:"

    | Manufacturer | UEFI/BIOS Key(s) |
    |--------------------|---------------------------|
    | ASUS | `Del`, `F2` |
    | MSI | `Del`, `F2` |
    | Gigabyte | `Del`, `F2` |
    | ASRock | `Del`, `F2` |
    | Lenovo | `F1`, `F2`, `Enter` (alternatively `Fn + F2`) |
    | HP | `Esc`, `F10` |
    | Dell | `F2`, `Fn + F2`, `Esc` |
    | Acer | `F2`, `Del` |
    | Samsung | `F2`, `F10` |
    | Toshiba | `F2`, `Esc` |
    | Sony | `F2`, `Assist` button |
    | Fujitsu | `F2` |
    | Microsoft Surface | `Volume Up` + `Power` |
    | IBM/Lenovo ThinkPad| `Enter`, `F1`, `F12` |
    | Biostar | `Del` |
    | Zotac | `Del`, `F2` |
    | EVGA | `Del` |
    | Origin PC | `F2`, `Delete` |
- As your computer restarts, press the appropriate key to enter the UEFI/BIOS
  settings. ??? tip "The key depends on your laptop or motherboard manufacturer.
  Click to see a reference list:"

!!! Note
    Pressing the key quickly and repeatedly is sometimes necessary to access the UEFI/BIOS menu, as the window to enter this mode is brief.
| Manufacturer | UEFI/BIOS Key(s) |
|--------------------|---------------------------| | ASUS | `Del`, `F2` | |
MSI | `Del`, `F2` | | Gigabyte | `Del`, `F2` | | ASRock | `Del`, `F2` | |
Lenovo | `F1`, `F2`, `Enter` (alternatively `Fn + F2`) | | HP | `Esc`, `F10` |
| Dell | `F2`, `Fn + F2`, `Esc` | | Acer | `F2`, `Del` | | Samsung | `F2`,
`F10` | | Toshiba | `F2`, `Esc` | | Sony | `F2`, `Assist` button | | Fujitsu |
`F2` | | Microsoft Surface | `Volume Up` + `Power` | | IBM/Lenovo ThinkPad|
`Enter`, `F1`, `F12` | | Biostar | `Del` | | Zotac | `Del`, `F2` | | EVGA |
`Del` | | Origin PC | `F2`, `Delete` |

!!! Note Pressing the key quickly and repeatedly is sometimes necessary to
access the UEFI/BIOS menu, as the window to enter this mode is brief.

## Access Advanced Mode (Optional)

- If your UEFI/BIOS has a `Simple` or `Easy` mode interface, look for an option labeled `Advanced Mode` (often found in the lower right corner).
- Click on `Advanced Mode` to access more settings. This step is optional, as your boot settings might be available in the basic view.
- If your UEFI/BIOS has a `Simple` or `Easy` mode interface, look for an option
  labeled `Advanced Mode` (often found in the lower right corner).
- Click on `Advanced Mode` to access more settings. This step is optional, as
  your boot settings might be available in the basic view.

## Disable Secure Boot

- Locate the `Secure Boot` option in your UEFI/BIOS settings. This is typically found under a `Security` tab, `Boot` tab, or a similarly named section.
- Locate the `Secure Boot` option in your UEFI/BIOS settings. This is typically
  found under a `Security` tab, `Boot` tab, or a similarly named section.
- Set the `Secure Boot` option to `Disabled`.

## Change Boot Order

- Find the option to adjust the boot order—often labeled `Boot Order`, `Boot Sequence`, or `Boot Priority`.
- Ensure that your USB device is set as the first boot option. This allows your computer to boot from the USB stick.
- Find the option to adjust the boot order—often labeled `Boot Order`,
  `Boot Sequence`, or `Boot Priority`.
- Ensure that your USB device is set as the first boot option. This allows your
  computer to boot from the USB stick.

## Save and Exit

- Save your changes before exiting the UEFI/BIOS menu. Look for a `Save & Exit` option or press the corresponding function key (often `F10`).
- Save your changes before exiting the UEFI/BIOS menu. Look for a `Save & Exit`
  option or press the corresponding function key (often `F10`).
- Your computer should now restart and boot from the USB stick.

@@ -2,18 +2,24 @@

## Service Module Specification

This section explains how to author a clan service module.
We discussed the initial architecture in [01-clan-service-modules](../../decisions/01-ClanModules.md) and decided to rework the format.
This section explains how to author a clan service module. We discussed the
initial architecture in
[01-clan-service-modules](../../decisions/01-ClanModules.md) and decided to
rework the format.

For the full specification and current state see: **[Service Author Reference](../../reference/clanServices/clan-service-author-interface.md)**
For the full specification and current state see:
**[Service Author Reference](../../reference/clanServices/clan-service-author-interface.md)**

### A Minimal module

First of all we need to register our module into the `clan.modules` attribute. Make sure to choose a unique name so the module doesn't have a name collision with any of the core modules.
First of all we need to register our module into the `clan.modules` attribute.
Make sure to choose a unique name so the module doesn't have a name collision
with any of the core modules.

While not required, we recommend prefixing your module attribute name.

If you export the module from your flake, other people will be able to import it and use it within their clan.
If you export the module from your flake, other people will be able to import it
and use it within their clan.

i.e. `@hsjobeki/customNetworking`

@@ -35,8 +41,9 @@ outputs = inputs: inputs.flake-parts.lib.mkFlake { inherit inputs; } ({

The imported module file must fulfill at least the following requirements:

- Be an actual module. That means: Be either an attribute set; or a function that returns an attribute set.
- Required `_class = "clan.service"`
- Be an actual module. That means: Be either an attribute set; or a function
  that returns an attribute set.
- Required \`\_class = "clan.service"
- Required `manifest.name = "<name of the provided service>"`

```nix title="/service-modules/networking.nix"
@@ -47,19 +54,28 @@ The imported module file must fulfill at least the following requirements:
}
```

For more attributes see: **[Service Author Reference](../../reference/clanServices/clan-service-author-interface.md)**
For more attributes see:
**[Service Author Reference](../../reference/clanServices/clan-service-author-interface.md)**

### Adding functionality to the module

While the very minimal module is valid in itself it has no way of adding any machines to it, because it doesn't specify any roles.
While the very minimal module is valid in itself it has no way of adding any
machines to it, because it doesn't specify any roles.

The next logical step is to think about the interactions between the machines and define *roles* for them.
The next logical step is to think about the interactions between the machines
and define *roles* for them.

Here is a short guide with some conventions:

- [ ] If they all have the same relation to each other `peer` is commonly used. `peers` can often talk to each other directly.
- [ ] Often machines don't necessarily have a direct relation to each other and there is one elevated machine in the middle, classically known as `client-server`. `clients` are less likely to talk directly to each other than `peers`
- [ ] If your machines don't have any relation and/or interactions to each other you should reconsider if the desired functionality is really a multi-host service.
- [ ] If they all have the same relation to each other `peer` is commonly used.
  `peers` can often talk to each other directly.
- [ ] Often machines don't necessarily have a direct relation to each other and
  there is one elevated machine in the middle, classically known as
  `client-server`. `clients` are less likely to talk directly to each other than
  `peers`
- [ ] If your machines don't have any relation and/or interactions to each other
  you should reconsider if the desired functionality is really a multi-host
  service.

```nix title="/service-modules/networking.nix"
{
@@ -145,14 +161,16 @@ Next we need to define the settings and the behavior of these distinct roles.

## Using values from a NixOS machine inside the module

!!! Example "Experimental Status"
    This feature is experimental and should be used with care.
!!! Example "Experimental Status" This feature is experimental and should be
used with care.

Sometimes a settings value depends on something within a machine's `config`.

Since the `interface` is defined completely machine-agnostic this means values from a machine cannot be set in the traditional way.
Since the `interface` is defined completely machine-agnostic this means values
from a machine cannot be set in the traditional way.

The following example shows how to create a local instance of machine specific settings.
The following example shows how to create a local instance of machine specific
settings.

```nix title="someservice.nix"
{
@@ -174,11 +192,14 @@ The following example shows how to create a local instance of machine specific s
}
```

!!! Danger
    `localSettings` are a local attribute. Other machines cannot access it.
    Calling `extendSettings` does not change the original `settings`; this means if a different machine tries to access e.g. `roles.client.settings` it would *NOT* contain your changes.
!!! Danger `localSettings` are a local attribute. Other machines cannot access
it. Calling `extendSettings` does not change the original `settings`; this means
if a different machine tries to access e.g. `roles.client.settings` it would
*NOT* contain your changes.

Exposing the changed settings to other machines would come with a huge performance penalty; that's why we don't want to offer it.
```
Exposing the changed settings to other machines would come with a huge performance penalty; that's why we don't want to offer it.
```

## Passing `self` or `pkgs` to the module

@@ -189,11 +210,14 @@ In general we found the following two best practices:
1. Using `lib.importApply`
2. Using a wrapper module

Both have pros and cons. Overall, `importApply` is the easier one, but might be more limiting sometimes.
Both have pros and cons. Overall, `importApply` is the easier one, but might be
more limiting sometimes.

### Using `importApply`

Using [importApply](https://github.com/NixOS/nixpkgs/pull/230588) is essentially the same as `import file` followed by a function application, but preserves the error location.
Using [importApply](https://github.com/NixOS/nixpkgs/pull/230588) is essentially
the same as `import file` followed by a function application, but preserves the
error location.
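
As a sketch, registering a module through `importApply` could look like this (the file path and attribute name are illustrative):

```nix
{
  # Import the file and call the resulting function with { inherit self; };
  # error messages keep pointing into the imported file.
  clan.modules.customNetworking =
    lib.importApply ./service-modules/networking.nix { inherit self; };
}
```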

Imagine your module looks like this

@@ -234,7 +258,8 @@ outputs = inputs: flake-parts.lib.mkFlake { inherit inputs; } ({self, lib, ...}:
}
```

Then wrap the module and forward the variable `self` from the outer context into the module
Then wrap the module and forward the variable `self` from the outer context into
the module

```nix title="flake.nix"
# ...
@@ -255,9 +280,10 @@ outputs = inputs: flake-parts.lib.mkFlake { inherit inputs; } ({self, lib, ...}:
})
```

The benefit of this approach is that downstream users can override the value of `myClan` by using `mkForce` or other priority modifiers.
The benefit of this approach is that downstream users can override the value of
`myClan` by using `mkForce` or other priority modifiers.

---
______________________________________________________________________

## Further

@@ -1,27 +1,36 @@

---
______________________________________________________________________

hide:
- navigation
- toc
---

- navigation
- toc

______________________________________________________________________

# :material-home: What is Clan?

[Clan](https://clan.lol/) is a peer-to-peer computer management framework that
empowers you to **selfhost in a reliable and scalable way**.

Built on NixOS, Clan provides a **declarative interface for managing machines** with automated [secret management](./guides/secrets.md), easy [mesh VPN
connectivity](./guides/mesh-vpn.md), and [automated backups](./guides/backups.md).
Built on NixOS, Clan provides a **declarative interface for managing machines**
with automated [secret management](./guides/secrets.md), easy
[mesh VPN connectivity](./guides/mesh-vpn.md), and
[automated backups](./guides/backups.md).

Whether you're running a homelab or maintaining critical computing infrastructure,
Clan will help **reduce maintenance burden** by allowing a **git repository to define your whole network** of computers.
Whether you're running a homelab or maintaining critical computing
infrastructure, Clan will help **reduce maintenance burden** by allowing a **git
repository to define your whole network** of computers.

In combination with [sops-nix](https://github.com/Mic92/sops-nix), [nixos-anywhere](https://github.com/nix-community/nixos-anywhere) and [disko](https://github.com/nix-community/disko), Clan makes it possible to have **collaborative infrastructure**.

At the heart of Clan are [Clan Services](./reference/clanServices/index.md) - the core
concept that enables you to add functionality across multiple machines in your
network. While Clan ships with essential core services, you can [create custom
services](./guides/clanServices.md) tailored to your specific needs.
In combination with [sops-nix](https://github.com/Mic92/sops-nix),
[nixos-anywhere](https://github.com/nix-community/nixos-anywhere) and
[disko](https://github.com/nix-community/disko), Clan makes it possible to have
**collaborative infrastructure**.

At the heart of Clan are [Clan Services](./reference/clanServices/index.md) -
the core concept that enables you to add functionality across multiple machines
in your network. While Clan ships with essential core services, you can
[create custom services](./guides/clanServices.md) tailored to your specific
needs.

## :material-book: Guides

@@ -29,23 +38,23 @@ How-to Guides for achieving a certain goal or solving a specific issue.

<div class="grid cards" markdown>

- [:material-clock-fast: Getting Started](./guides/getting-started/index.md)
- [:material-clock-fast: Getting Started](./guides/getting-started/index.md)

---
______________________________________________________________________

Get started in less than 20 minutes!
Get started in less than 20 minutes!

- [macOS](./guides/macos.md)
- [macOS](./guides/macos.md)

---
______________________________________________________________________

Using Clan to manage your macOS machines
Using Clan to manage your macOS machines

- [Contribute](./guides/contributing/CONTRIBUTING.md)
- [Contribute](./guides/contributing/CONTRIBUTING.md)

---
______________________________________________________________________

How to set up a development environment
How to set up a development environment

</div>

@@ -55,21 +64,21 @@ Explore the underlying principles of Clan

<div class="grid cards" markdown>

- [Generators](./concepts/generators.md)
- [Generators](./concepts/generators.md)

---
______________________________________________________________________

Learn about Generators, our way to secret management
Learn about Generators, our way to secret management

- [Inventory](./concepts/inventory.md)
- [Inventory](./concepts/inventory.md)

---
______________________________________________________________________

Learn about the Inventory, a multi machine Nix interface
Learn about the Inventory, a multi machine Nix interface

</div>

## Blog

Visit our [Clan Blog](https://clan.lol/blog/) for the latest updates, tutorials, and community stories.
Visit our [Clan Blog](https://clan.lol/blog/) for the latest updates, tutorials,
and community stories.

@@ -1,38 +1,49 @@
# :material-api: Glossary

## clan

Collection of machines interconnected in a network.

## clan-app

Graphical interface for managing clans. Simpler to use than the `clan-cli`.

## clan-cli

Command-line tool for managing clans.

## clanModule
Module that enables configuration via the inventory.
Legacy `clanModules` also support configuration outside the inventory.

Module that enables configuration via the inventory. Legacy `clanModules` also
support configuration outside the inventory.

## clanService

Service defined and managed through a clan configuration.

## clanURL
Flake URL-like syntax used to link to clans.
Required to connect the `url-open` application to the `clan-app`.

Flake URL-like syntax used to link to clans. Required to connect the `url-open`
application to the `clan-app`.

## facts *(deprecated)*
System for creating secrets and public files in a declarative way.
**Note:** Deprecated, use `vars` instead.

System for creating secrets and public files in a declarative way. **Note:**
Deprecated, use `vars` instead.

## inventory

JSON-like structure used to configure multiple machines.

## machine

A physical computer or virtual machine.

## role
Specific function assigned to a machine within a service.
Allows assignment of sane default configurations in multi-machine services.

Specific function assigned to a machine within a service. Allows assignment of
sane default configurations in multi-machine services.

## vars

System for creating secrets and public files in a declarative way.

@@ -1,20 +1,23 @@
# :material-api: Overview

This section of the site provides an overview of available options and commands within the Clan Framework.
This section of the site provides an overview of available options and commands
within the Clan Framework.

---
______________________________________________________________________

- [Clan Configuration Option](/options) - for defining a Clan
- Learn how to use the [Clan CLI](./cli/index.md)
- Explore available [services](./clanServices/index.md)
- [NixOS Configuration Options](./clan.core/index.md) - Additional options available on a NixOS machine.
- [NixOS Configuration Options](./clan.core/index.md) - Additional options
  available on a NixOS machine.

---
______________________________________________________________________

!!! Note
    This documentation is always built for the main branch.
    If you need documentation for a specific commit you can build it on your own
!!! Note This documentation is always built for the main branch. If you need
documentation for a specific commit you can build it on your own

```bash
nix build 'git+https://git.clan.lol/clan/clan-core?ref=0324f4d4b87d932163f351e53b23b0b17f2b5e15#docs'
```
````
```bash
nix build 'git+https://git.clan.lol/clan/clan-core?ref=0324f4d4b87d932163f351e53b23b0b17f2b5e15#docs'
```
````

@@ -7,6 +7,9 @@
treefmt.projectRootFile = "LICENSE.md";
treefmt.programs.shellcheck.enable = true;

treefmt.programs.mdformat.enable = true;
treefmt.programs.mdformat.settings.number = true;
treefmt.programs.mdformat.settings.wrap = 80;
treefmt.programs.mypy.enable = true;
treefmt.programs.nixfmt.enable = true;
treefmt.programs.nixfmt.package = pkgs.nixfmt-rfc-style;
@@ -37,8 +40,6 @@
"*/gnupg-home/*"
"*/sops/secrets/*"
"vars/*"
# prettier messes up our mkdocs flavoured markdown
"*.md"
"**/node_modules/*"
"**/.mypy_cache/*"

@@ -96,6 +97,8 @@
];
excludes = [
"*/asciinema-player/*"
# prettier messes up our mkdocs flavoured markdown
"*.md"
];
};
treefmt.programs.mypy.directories = {

@@ -11,13 +11,15 @@ Such as:

## Structure

Similar to `nixpkgs/lib` this produces a recursive attribute set in a fixed-point.
Functions within lib can depend on each other to create new abstractions.
Similar to `nixpkgs/lib` this produces a recursive attribute set in a
fixed-point. Functions within lib can depend on each other to create new
abstractions.
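
The fixed-point idea can be sketched with `lib.fix` from nixpkgs (the attribute names are illustrative):

```nix
lib.fix (self: {
  # later attributes can build on earlier ones through `self`
  greet = name: "hello " + name;
  greetWorld = self.greet "world"; # evaluates to "hello world"
})
```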
|
||||
|
||||
### Conventions
|
||||
|
||||
Note: This is not consistently enforced yet.
|
||||
If you start a new feature, or refactoring/touching existing ones, please help us to move towards the below illustrated.
|
||||
Note: This is not consistently enforced yet. If you start a new feature, or
|
||||
refactoring/touching existing ones, please help us to move towards the below
|
||||
illustrated.
|
||||
|
||||
A single feature-set/module may be organized like this:
|
||||
|
||||
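The fixed-point structure this hunk describes can be sketched in a few lines. This is a hypothetical minimal example of the pattern, not the actual clan-core `lib`: functions reach each other through `self`, so new abstractions can build on existing ones.

```nix
# Hypothetical sketch of a nixpkgs/lib-style fixed-point attribute set.
let
  fix = f: let self = f self; in self;
  lib = fix (self: {
    double = x: x * 2;
    # quadruple depends on another lib function via `self`
    quadruple = x: self.double (self.double x);
  });
in
lib.quadruple 4 # evaluates to 16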
@@ -31,8 +33,8 @@ A single feature-set/module may be organized like this:
 }
 ```

-Every bigger feature should live in a subfolder with the feature name.
-It should contain two files:
+Every bigger feature should live in a subfolder with the feature name. It should
+contain two files:

 - `impl.nix`
 - `test.nix`
@@ -41,6 +43,7 @@ It should contain two files:

 ```
 Example filetree
 ```

 ```sh
 .
 ├── default.nix
@@ -69,4 +72,5 @@ Example filetree

 For testing we use [nix-unit](https://github.com/nix-community/nix-unit)

-TODO: define a helper that automatically hooks up `tests` in `flake.legacyPackages` and a corresponding buildable `checks` attribute
+TODO: define a helper that automatically hooks up `tests` in
+`flake.legacyPackages` and a corresponding buildable `checks` attribute
@@ -1,10 +1,13 @@
 # User Firewall Module

-This NixOS module provides network access restrictions for non-privileged users, ensuring they can only access local services and VPN interfaces while blocking direct internet access.
+This NixOS module provides network access restrictions for non-privileged users,
+ensuring they can only access local services and VPN interfaces while blocking
+direct internet access.

 ## Overview

 The `user-firewall` module implements firewall rules that:

 - Block all outbound network traffic for normal (non-system) users by default
 - Allow specific users to bypass restrictions (exemptUsers)
 - Permit traffic on specific interfaces (like VPNs and localhost)
@@ -23,7 +26,9 @@ Add the module to your NixOS configuration:
 }
 ```

-The module is automatically enabled once imported. It will immediately start restricting network access for all normal users except those listed in `exemptUsers`.
+The module is automatically enabled once imported. It will immediately start
+restricting network access for all normal users except those listed in
+`exemptUsers`.

 ## Configuration
@@ -63,19 +68,25 @@ The module is automatically enabled once imported. It will immediately start res

 ## How It Works

-1. **User Classification**: The module automatically identifies all normal users (non-system users) and applies restrictions to those not in the `exemptUsers` list.
+1. **User Classification**: The module automatically identifies all normal users
+   (non-system users) and applies restrictions to those not in the `exemptUsers`
+   list.

 2. **Firewall Rules**:

-   - For iptables: Creates a custom chain `user-firewall-output` in the OUTPUT table
+   - For iptables: Creates a custom chain `user-firewall-output` in the OUTPUT
+     table
    - For nftables: Creates a table `inet user-firewall` with an output chain
    - Rules check outgoing packets and reject those from restricted users

 3. **Interface Patterns**: Supports wildcards in interface names:

    - `*` matches any characters (e.g., `wg*` matches `wg0`, `wg-home`)

 ## Default Allowed Interfaces

-The module comes with sensible defaults for common VPN and overlay network interfaces:
+The module comes with sensible defaults for common VPN and overlay network
+interfaces:

 - `lo` - Loopback (localhost access)
 - `tun*` - OpenVPN, OpenConnect
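The wildcard semantics described in this hunk (`*` matching any characters in an allowed-interface pattern) can be sketched with Python's `fnmatch`. The helper name is hypothetical; the real module compiles these patterns into iptables/nftables rules rather than matching in userspace.

```python
from fnmatch import fnmatch

# Hypothetical helper mirroring the documented pattern matching:
# an interface is allowed if any pattern in the list matches it.
def interface_allowed(iface: str, patterns: list[str]) -> bool:
    return any(fnmatch(iface, pattern) for pattern in patterns)

allowed = ["lo", "tun*", "wg*"]
print(interface_allowed("wg-home", allowed))  # wg* matches wg-home
print(interface_allowed("eth0", allowed))     # no pattern matches eth0
```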
@@ -95,7 +106,9 @@ The module comes with sensible defaults for common VPN and overlay network inter

 ## Use Cases

 ### 1. Public Kiosk Systems

 Restrict users to only access local services:

 ```nix
 {
   networking.user-firewall = {
@@ -106,7 +119,9 @@ Restrict users to only access local services:
 ```

 ### 2. Corporate Workstations

 Force all traffic through corporate VPN:

 ```nix
 {
   networking.user-firewall = {
@@ -132,15 +147,18 @@ nix build .#checks.x86_64-linux.user-firewall-nftables

 ### Check Active Rules

-The output includes package counters for each firewall rule, that can help to debug connectivity issues.
+The output includes packet counters for each firewall rule, which can help to
+debug connectivity issues.

 For iptables:

 ```bash
 sudo iptables -L user-firewall-output -n -v
 sudo ip6tables -L user-firewall-output -n -v
 ```

 For nftables:

 ```bash
 sudo nft list table inet user-firewall
@@ -148,26 +166,33 @@ sudo nft list table inet user-firewall
 sudo watch -n1 'nft list table inet user-firewall'
 ```

-Check which rule your VPN traffic is hitting. If packets are being rejected, verify:
+Check which rule your VPN traffic is hitting. If packets are being rejected,
+verify:

 1. Your VPN interface name matches the patterns in `allowedInterfaces`
 2. Your user is listed in `exemptUsers` if needed

 To see your current network interfaces:

 ```bash
 ip link show | grep -E '^[0-9]+:'
 ```
 ### Common Issues

-1. **Service Connection Failures**: If local services fail to connect, ensure `lo` is in `allowedInterfaces`.
+1. **Service Connection Failures**: If local services fail to connect, ensure
+   `lo` is in `allowedInterfaces`.

-2. **VPN Not Working**: Check that your VPN interface name matches the patterns in `allowedInterfaces`. You can find your interface name with `ip link show`.
+2. **VPN Not Working**: Check that your VPN interface name matches the patterns
+   in `allowedInterfaces`. You can find your interface name with `ip link show`.

-3. **User Still Has Access**: Verify the user is a normal user (not a system user) and not in `exemptUsers`.
+3. **User Still Has Access**: Verify the user is a normal user (not a system
+   user) and not in `exemptUsers`.
 ## Security Considerations

-- This module provides defense in depth but should not be the only security measure
+- This module provides defense in depth but should not be the only security
+  measure
 - System users (like `nginx`, `systemd-*`) are not restricted
 - Root user always has full network access
 - Restrictions apply at the packet filter level, not application level
@@ -4,7 +4,8 @@ A powerful application that allows users to create and manage their own Clans.

 ## Getting Started

-Enter the `pkgs/clan-app` directory and allow [direnv] to load the `clan-app` devshell with `direnv allow`:
+Enter the `pkgs/clan-app` directory and allow [direnv] to load the `clan-app`
+devshell with `direnv allow`:

 ```console
 ❯ direnv allow
@@ -31,29 +32,35 @@ $ process-compose --use-uds --keep-project -n app

 This will start a [process-compose] instance containing two processes:

-* `clan-app-ui` which is a background process running a [vite] server for `./ui` in a hot-reload fashion
-* `clan-app` which is a [foreground process](https://f1bonacc1.github.io/process-compose/launcher/?h=foreground#foreground-processes),
-  that is started on demand and provides the [webview] wrapper for the UI.
+- `clan-app-ui` which is a background process running a [vite] server for `./ui`
+  in a hot-reload fashion
+- `clan-app` which is a
+  [foreground process](https://f1bonacc1.github.io/process-compose/launcher/?h=foreground#foreground-processes),
+  that is started on demand and provides the [webview] wrapper for the UI.

-Wait for the `clan-app-ui` process to enter the `Running` state, then navigate to the `clan-app` process and press `F7`.
-This will start the [webview] window and bring `clan-app`'s terminal into the foreground, allowing for interaction with
-the debugger if required.
+Wait for the `clan-app-ui` process to enter the `Running` state, then navigate
+to the `clan-app` process and press `F7`. This will start the [webview] window
+and bring `clan-app`'s terminal into the foreground, allowing for interaction
+with the debugger if required.

-If you need to restart, simply enter `ctrl+c` and you will be dropped back into the `process-compose` terminal.
-From there you can start `clan-app` again with `F7`.
+If you need to restart, simply enter `ctrl+c` and you will be dropped back into
+the `process-compose` terminal. From there you can start `clan-app` again with
+`F7`.

-> **Note**
-> If you are interacting with a breakpoint, do not continue/exit with `ctrl+c` as this will introduce a quirk
-> the next time you start `clan-app` where you will be unable to see the input you are typing in a debugging session.
+> **Note** If you are interacting with a breakpoint, do not continue/exit with
+> `ctrl+c` as this will introduce a quirk the next time you start `clan-app`
+> where you will be unable to see the input you are typing in a debugging
+> session.
 >
 > Instead, exit the debugger with `q+Enter`.

-Follow the instructions below to set up your development environment and start the application:
+Follow the instructions below to set up your development environment and start
+the application:

 ## Storybook

-We use [Storybook] to develop UI components.
-It can be started by running the following:
+We use [Storybook] to develop UI components. It can be started by running the
+following:

 ```console
 $ process-compose --use-uds --keep-project -n storybook
@@ -61,20 +68,20 @@ $ process-compose --use-uds --keep-project -n storybook

 This will start a [process-compose] instance containing two processes:

-* `storybook` which is the main [storybook] process.
-* `luakit` which is a [webkit]-based browser for viewing the stories with. This is the same underlying engine used when
-  rendering the app.
+- `storybook` which is the main [storybook] process.
+- `luakit` which is a [webkit]-based browser for viewing the stories with. This
+  is the same underlying engine used when rendering the app.

-You can run storybook tests with `npm run test-storybook`.
-If you change how a component(s) renders,
-you will need to update the snapshots with `npm run test-storybook-update-snapshots`.
+You can run storybook tests with `npm run test-storybook`. If you change how a
+component(s) renders, you will need to update the snapshots with
+`npm run test-storybook-update-snapshots`.
 ## Start clan-app without process-compose

 1. **Navigate to the Webview UI Directory**

-   Go to the `clan-core/pkgs/clan-app/ui` directory and start the web server by executing:
+   Go to the `clan-core/pkgs/clan-app/ui` directory and start the web server by
+   executing:

    ```bash
    npm install
@@ -89,7 +96,8 @@ you will need to update the snapshots with `npm run test-storybook-update-snapsh
 ./bin/clan-app --debug --content-uri http://localhost:3000
 ```

-This will start the application in debug mode and link it to the web server running at `http://localhost:3000`.
+This will start the application in debug mode and link it to the web server
+running at `http://localhost:3000`.

 ### Debugging Style and Layout
@@ -110,10 +118,14 @@ $ ./pygdb.sh ./bin/clan-app --content-uri http://localhost:3000/ --debug
 ```

 I recommend creating the file `.local.env` with the content:

 ```bash
 export WEBVIEW_LIB_DIR=$HOME/Projects/webview/build/core
 ```

-where `WEBVIEW_LIB_DIR` points to a local checkout of the webview lib source, that has been build by hand. The `.local.env` file will be automatically sourced if it exists and will be ignored by git.
+where `WEBVIEW_LIB_DIR` points to a local checkout of the webview lib source
+that has been built by hand. The `.local.env` file will be automatically sourced
+if it exists and will be ignored by git.

 ### Profiling

@@ -122,4 +134,3 @@ To activate profiling you can run

 ```bash
 CLAN_CLI_PERF=1 ./bin/clan-app
 ```
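The "automatically sourced if it exists" behaviour can be sketched as a plain shell conditional. The directory and value below are placeholders for illustration; a real `.local.env` would point at your own webview checkout:

```shell
# Sketch: source .local.env only when it exists, as described above.
workdir=$(mktemp -d)
cd "$workdir"
echo 'export WEBVIEW_LIB_DIR=/tmp/webview/build/core' > .local.env

if [ -f .local.env ]; then
  . ./.local.env
fi
echo "$WEBVIEW_LIB_DIR"
```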
@@ -35,7 +35,8 @@ ready to be deployed!

 Starts an instance of [storybook](https://storybook.js.org/).

-For more info on how to write stories, please [see here](https://storybook.js.org/docs).
+For more info on how to write stories, please
+[see here](https://storybook.js.org/docs).

 ## Deployment
@@ -4,8 +4,9 @@ The Clan CLI contains the command line interface.

 ## Hacking on the CLI

-We recommend setting up [direnv](https://direnv.net/) to load the development with Nix.
-If you do not have it set up you can also use `nix develop` directly like this:
+We recommend setting up [direnv](https://direnv.net/) to load the development
+environment with Nix. If you do not have it set up you can also use
+`nix develop` directly like this:

 ```
 use flake .#clan-cli --builders ''

@@ -19,8 +20,8 @@ After you can use the local bin wrapper to test things in the CLI:

 ## Run locally single-threaded for debugging

-By default tests run in parallel using pytest-parallel.
-pytest-parallel however breaks `breakpoint()`. To disable it, use this:
+By default tests run in parallel using pytest-parallel. pytest-parallel however
+breaks `breakpoint()`. To disable it, use this:

 ```bash
 pytest -n0 -s
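A hypothetical test file illustrating the workflow above: `breakpoint()` needs the test to own the terminal, which only holds when parallelism is disabled and output capture is off (`pytest -n0 -s`):

```python
# test_example.py (hypothetical) - run with: pytest -n0 -s test_example.py
def test_addition() -> None:
    result = 2 + 2
    # breakpoint()  # uncomment to drop into pdb here; works only single-threaded
    assert result == 4
```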
@@ -1,6 +1,7 @@
 # Clan VM Manager

-Provides users with the simple functionality to manage their locally registered clans.
+Provides users with the simple functionality to manage their locally registered
+clans.

 ![App Preview](screenshots/image.png)

@@ -52,8 +53,9 @@ CLAN_CLI_PERF=1 ./bin/clan-vm-manager

 > Note:
 >
-> we recognized bugs when starting some cli-commands through the integrated vs-code terminal.
-> If encountering issues make sure to run commands in a regular os-shell.
+> we recognized bugs when starting some cli-commands through the integrated
+> vs-code terminal. If encountering issues make sure to run commands in a
+> regular os-shell.

 lib-Adw has a demo application showing all widgets. You can run it by executing
@@ -77,15 +79,31 @@ gtk4-icon-browser

 Here are some important documentation links related to the Clan VM Manager:

-- [Adw PyGobject Reference](http://lazka.github.io/pgi-docs/index.html#Adw-1): This link provides the PyGObject reference documentation for the Adw library, which is used in the Clan VM Manager. It contains detailed information about the Adw widgets and their usage.
+- [Adw PyGobject Reference](http://lazka.github.io/pgi-docs/index.html#Adw-1):
+  This link provides the PyGObject reference documentation for the Adw library,
+  which is used in the Clan VM Manager. It contains detailed information about
+  the Adw widgets and their usage.

-- [GTK4 PyGobject Reference](http://lazka.github.io/pgi-docs/index.html#Gtk-4.0): This link provides the PyGObject reference documentation for GTK4, the toolkit used for building the user interface of the Clan VM Manager. It includes information about GTK4 widgets, signals, and other features.
+- [GTK4 PyGobject Reference](http://lazka.github.io/pgi-docs/index.html#Gtk-4.0):
+  This link provides the PyGObject reference documentation for GTK4, the toolkit
+  used for building the user interface of the Clan VM Manager. It includes
+  information about GTK4 widgets, signals, and other features.

-- [Adw Widget Gallery](https://gnome.pages.gitlab.gnome.org/libadwaita/doc/main/widget-gallery.html): This link showcases a widget gallery for Adw, allowing you to see the available widgets and their visual appearance. It can be helpful for designing the user interface of the Clan VM Manager.
+- [Adw Widget Gallery](https://gnome.pages.gitlab.gnome.org/libadwaita/doc/main/widget-gallery.html):
+  This link showcases a widget gallery for Adw, allowing you to see the
+  available widgets and their visual appearance. It can be helpful for designing
+  the user interface of the Clan VM Manager.

-- [Python + GTK3 Tutorial](https://python-gtk-3-tutorial.readthedocs.io/en/latest/textview.html): Although the Clan VM Manager uses GTK4, this tutorial for GTK3 can still be useful as it covers the basics of building GTK-based applications with Python. It includes examples and explanations for various GTK widgets, including text views.
+- [Python + GTK3 Tutorial](https://python-gtk-3-tutorial.readthedocs.io/en/latest/textview.html):
+  Although the Clan VM Manager uses GTK4, this tutorial for GTK3 can still be
+  useful as it covers the basics of building GTK-based applications with Python.
+  It includes examples and explanations for various GTK widgets, including text
+  views.

-- [GNOME Human Interface Guidelines](https://developer.gnome.org/hig/): This link provides the GNOME Human Interface Guidelines, which offer design and usability recommendations for creating GNOME applications. It covers topics such as layout, navigation, and interaction patterns.
+- [GNOME Human Interface Guidelines](https://developer.gnome.org/hig/): This
+  link provides the GNOME Human Interface Guidelines, which offer design and
+  usability recommendations for creating GNOME applications. It covers topics
+  such as layout, navigation, and interaction patterns.

 ## Error handling
@@ -1,7 +1,10 @@
 # Webkit GTK doesn't interop flawless with Solid.js build result

 1. Webkit expects script tags to be in `body`; solid.js puts them in the head.
-2. script and css files are loaded with type="module" and crossorigin tags being set. WebKit silently fails to load then.
-3. Paths to resiources are not allowed to start with "/" because webkit interprets them relative to the system and not the base url.
-4. webkit doesn't support native features such as directly handling external urls (i.e opening them in the default browser)
-6. Other problems to be found?
+2. script and css files are loaded with type="module" and crossorigin tags being
+   set. WebKit silently fails to load them.
+3. Paths to resources are not allowed to start with "/" because webkit
+   interprets them relative to the system and not the base url.
+4. webkit doesn't support native features such as directly handling external
+   urls (i.e opening them in the default browser)
+5. Other problems to be found?
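Quirk 3 (leading-`/` asset paths) is commonly worked around at the bundler level. A sketch, assuming the UI is built with Vite as elsewhere in this repo; this is not necessarily the fix clan-app actually uses:

```typescript
// vite.config.ts (sketch): emit relative asset URLs so WebKit resolves them
// against the page's base URL rather than the filesystem root ("/...").
import { defineConfig } from "vite";

export default defineConfig({
  base: "./", // relative paths in the build output instead of "/assets/..."
});
```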
@@ -11,6 +11,7 @@ https://help.figma.com/hc/en-us/articles/8085703771159-Manage-personal-access-to

 ## Development on the script

-- Vscode: open this folder as project root. This will configure vscode to use deno instead of nodejs.
+- Vscode: open this folder as project root. This will configure vscode to use
+  deno instead of nodejs.

 - Non-vscode: Use the deno lsp of your editor
@@ -1,6 +1,7 @@
----
-description = "Simple single disk schema"
----
+______________________________________________________________________
+
+## description = "Simple single disk schema"

 # Description

 This schema defines a GPT-based disk layout.

@@ -13,11 +14,13 @@ This schema defines a GPT-based disk layout.

 ### **Partitions**

 1. **EFI System Partition (ESP)**

    - Size: `500M`.
    - Filesystem: `vfat`.
    - Mount Point: `/boot` (secure `umask=0077`).

 2. **Root Partition**

    - Size: Remaining disk space (`100%`).
    - Filesystem: `ext4`.
    - Mount Point: `/`.