Compare commits

...

No commits in common. "d6c465424a34a35f1a97c4af4f861137abe250f1" and "2923d87a92463a739237c7c5050f07f5f1f2fafa" have entirely different histories.

55 changed files with 1239 additions and 5593 deletions

.gitignore vendored (34 changed lines)

@@ -1,24 +1,24 @@
*
!.gitignore
!devenv.lock
!devenv.nix
!devenv.yaml
!docker-compose.yml
!devenv.*
!biome.json
!api/
!api/package.json
!api/bun.lock
!api/README.md
!api/CLAUDE.md
!api/tsconfig.json
!api/Dockerfile
!api/.dockerignore
!api/src/
!api/src/**
!docs/
!docs/**
!tui/
!tui/Cargo.*
!tui/src/
!tui/src/**
!api/
!api/.env
!api/.env.example
!api/package.json
!api/bun.lock
!api/tsconfig.json
!api/drizzle*
!api/src/
!api/src/**
!api/drizzle/
!api/drizzle/**
!charts/
!charts/**
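The diff above uses an allowlist-style `.gitignore`: the leading `*` ignores everything, and each `!` rule re-includes a path. A minimal sketch of the pattern (paths here are illustrative, not from this repo):

```gitignore
# Ignore everything by default
*
# Re-include specific files by name
!.gitignore
# A directory must be re-included before its contents:
# git does not descend into ignored directories, so
# `!src/**` alone would have no effect without `!src/` first.
!src/
!src/**
```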

api/.dockerignore Normal file (6 changed lines)

@@ -0,0 +1,6 @@
*
!package.json
!bun.lock
!tsconfig.json
!src/
!src/**


@@ -1 +0,0 @@
# This file is committed, no secrets here!


@@ -1,2 +0,0 @@
DATABASE_URL="postgresql://{username}:{password}@{host}:{port}/{db}"
BETTER_AUTH_SECRET=$(openssl)

api/CLAUDE.md Normal file (111 changed lines)

@@ -0,0 +1,111 @@
---
description: Use Bun instead of Node.js, npm, pnpm, or vite.
globs: "*.ts, *.tsx, *.html, *.css, *.js, *.jsx, package.json"
alwaysApply: false
---
Default to using Bun instead of Node.js.
- Use `bun <file>` instead of `node <file>` or `ts-node <file>`
- Use `bun test` instead of `jest` or `vitest`
- Use `bun build <file.html|file.ts|file.css>` instead of `webpack` or `esbuild`
- Use `bun install` instead of `npm install` or `yarn install` or `pnpm install`
- Use `bun run <script>` instead of `npm run <script>` or `yarn run <script>` or `pnpm run <script>`
- Bun automatically loads .env, so don't use dotenv.
## APIs
- `Bun.serve()` supports WebSockets, HTTPS, and routes. Don't use `express`.
- `bun:sqlite` for SQLite. Don't use `better-sqlite3`.
- `Bun.redis` for Redis. Don't use `ioredis`.
- `Bun.sql` for Postgres. Don't use `pg` or `postgres.js`.
- `WebSocket` is built-in. Don't use `ws`.
- Prefer `Bun.file` over `node:fs`'s readFile/writeFile
- Bun.$`ls` instead of execa.
## Testing
Use `bun test` to run tests.
```ts#index.test.ts
import { test, expect } from "bun:test";
test("hello world", () => {
expect(1).toBe(1);
});
```
## Frontend
Use HTML imports with `Bun.serve()`. Don't use `vite`. HTML imports fully support React, CSS, Tailwind.
Server:
```ts#index.ts
import index from "./index.html"
Bun.serve({
routes: {
"/": index,
"/api/users/:id": {
GET: (req) => {
return new Response(JSON.stringify({ id: req.params.id }));
},
},
},
// optional websocket support
websocket: {
open: (ws) => {
ws.send("Hello, world!");
},
message: (ws, message) => {
ws.send(message);
},
close: (ws) => {
// handle close
}
},
development: {
hmr: true,
console: true,
}
})
```
HTML files can import .tsx, .jsx or .js files directly and Bun's bundler will transpile & bundle automatically. `<link>` tags can point to stylesheets and Bun's CSS bundler will bundle.
```html#index.html
<html>
<body>
<h1>Hello, world!</h1>
<script type="module" src="./frontend.tsx"></script>
</body>
</html>
```
With the following `frontend.tsx`:
```tsx#frontend.tsx
import React from "react";
// import .css files directly and it works
import './index.css';
import { createRoot } from "react-dom/client";
const root = createRoot(document.body);
export default function Frontend() {
return <h1>Hello, world!</h1>;
}
root.render(<Frontend />);
```
Then, run `index.ts`:
```sh
bun --hot ./index.ts
```
For more information, read the Bun API docs in `node_modules/bun-types/docs/**.md`.
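The `Bun.serve()` example above routes `/api/users/:id` and reads the captured segment from `req.params`. As a plain-TypeScript illustration of how such a pattern maps path segments to named params (a hypothetical helper, not Bun's actual router implementation):

```typescript
// matchRoute is a hypothetical helper, shown only to illustrate how a
// pattern like "/api/users/:id" captures params; Bun's router does this
// internally and exposes the result as req.params.
function matchRoute(
  pattern: string,
  path: string,
): Record<string, string> | null {
  const patParts = pattern.split("/").filter(Boolean);
  const pathParts = path.split("/").filter(Boolean);
  if (patParts.length !== pathParts.length) return null;
  const params: Record<string, string> = {};
  for (let i = 0; i < patParts.length; i++) {
    const pat = patParts[i];
    if (pat.startsWith(":")) {
      // Named segment: capture the concrete path value under its name.
      params[pat.slice(1)] = pathParts[i];
    } else if (pat !== pathParts[i]) {
      // Literal segment must match exactly.
      return null;
    }
  }
  return params;
}

console.log(matchRoute("/api/users/:id", "/api/users/42")); // { id: "42" }
console.log(matchRoute("/api/users/:id", "/api/posts/42")); // null
```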

api/Dockerfile Normal file (14 changed lines)

@@ -0,0 +1,14 @@
FROM oven/bun:1.2.19-alpine
WORKDIR /app
COPY package.json bun.lock ./
RUN bun install --frozen-lockfile
COPY . .
EXPOSE 3000
CMD ["bun", "src/index.ts"]


@@ -1,11 +1,15 @@
# api
To install dependencies:
```sh
```bash
bun install
```
To run:
```sh
bun run dev
```bash
bun run index.ts
```
open http://localhost:3000
This project was created using `bun init` in bun v1.2.19. [Bun](https://bun.com) is a fast all-in-one JavaScript runtime.


@@ -4,267 +4,136 @@
"": {
"name": "api",
"dependencies": {
"@t3-oss/env-core": "^0.13.8",
"better-auth": "^1.2.12",
"drizzle-orm": "^0.44.2",
"hono": "^4.8.4",
"zod": "^4.0.1",
"@effect/platform": "^0.90.0",
"@effect/platform-bun": "^0.77.0",
"@electric-sql/pglite": "^0.3.7",
"effect": "^3.17.6",
},
"devDependencies": {
"@biomejs/biome": "2.1.1",
"@types/bun": "latest",
"drizzle-kit": "^0.31.4",
"typescript": "^5.8.3",
},
"peerDependencies": {
"typescript": "^5",
},
},
},
"packages": {
"@better-auth/utils": ["@better-auth/utils@0.2.5", "", { "dependencies": { "typescript": "^5.8.2", "uncrypto": "^0.1.3" } }, "sha512-uI2+/8h/zVsH8RrYdG8eUErbuGBk16rZKQfz8CjxQOyCE6v7BqFYEbFwvOkvl1KbUdxhqOnXp78+uE5h8qVEgQ=="],
"@effect/cluster": ["@effect/cluster@0.46.2", "", { "peerDependencies": { "@effect/platform": "^0.90.0", "@effect/rpc": "^0.68.0", "@effect/sql": "^0.44.0", "@effect/workflow": "^0.8.1", "effect": "^3.17.3" } }, "sha512-kkAvDzy1OX0pcucIHOxd8JER8EFGmypRQAjZDdxlyvFrZv5IZSHvCL5owKGzu11NiaAc2R0rAWZNPQN8GhIZQA=="],
"@better-fetch/fetch": ["@better-fetch/fetch@1.1.18", "", {}, "sha512-rEFOE1MYIsBmoMJtQbl32PGHHXuG2hDxvEd7rUHE0vCBoFQVSDqaVs9hkZEtHCxRoY+CljXKFCOuJ8uxqw1LcA=="],
"@effect/experimental": ["@effect/experimental@0.54.3", "", { "dependencies": { "uuid": "^11.0.3" }, "peerDependencies": { "@effect/platform": "^0.90.0", "effect": "^3.17.4", "ioredis": "^5", "lmdb": "^3" }, "optionalPeers": ["ioredis", "lmdb"] }, "sha512-FR+4KfGxte/BwQyVvbq8boWSWyN5p69tdtUQX9Owf/JfnLmZY42d+L3nnn1Gg8EhTPiAk+hMnODWWHnV03JmbQ=="],
"@biomejs/biome": ["@biomejs/biome@2.1.1", "", { "optionalDependencies": { "@biomejs/cli-darwin-arm64": "2.1.1", "@biomejs/cli-darwin-x64": "2.1.1", "@biomejs/cli-linux-arm64": "2.1.1", "@biomejs/cli-linux-arm64-musl": "2.1.1", "@biomejs/cli-linux-x64": "2.1.1", "@biomejs/cli-linux-x64-musl": "2.1.1", "@biomejs/cli-win32-arm64": "2.1.1", "@biomejs/cli-win32-x64": "2.1.1" }, "bin": { "biome": "bin/biome" } }, "sha512-HFGYkxG714KzG+8tvtXCJ1t1qXQMzgWzfvQaUjxN6UeKv+KvMEuliInnbZLJm6DXFXwqVi6446EGI0sGBLIYng=="],
"@effect/platform": ["@effect/platform@0.90.0", "", { "dependencies": { "@opentelemetry/semantic-conventions": "^1.33.0", "find-my-way-ts": "^0.1.6", "msgpackr": "^1.11.4", "multipasta": "^0.2.7" }, "peerDependencies": { "effect": "^3.17.0" } }, "sha512-F26RZO8qVyCLH43EF9BvJwrhtFsZL2Xv66Jxxjj/sBIes8TOVpyebaysQ7Tz33xALobwU1eNgm8vh18VkJiWnQ=="],
"@biomejs/cli-darwin-arm64": ["@biomejs/cli-darwin-arm64@2.1.1", "", { "os": "darwin", "cpu": "arm64" }, "sha512-2Muinu5ok4tWxq4nu5l19el48cwCY/vzvI7Vjbkf3CYIQkjxZLyj0Ad37Jv2OtlXYaLvv+Sfu1hFeXt/JwRRXQ=="],
"@effect/platform-bun": ["@effect/platform-bun@0.77.0", "", { "dependencies": { "@effect/platform-node-shared": "^0.47.0", "multipasta": "^0.2.7" }, "peerDependencies": { "@effect/cluster": "^0.46.0", "@effect/platform": "^0.90.0", "@effect/rpc": "^0.68.0", "@effect/sql": "^0.44.0", "effect": "^3.17.1" } }, "sha512-M5wB11Jt2zlV3GfLh4ZHsb8CU/EsYNXhQyLCI/rqcyNxyL1t25co3w50lpsv4a3Z7uvVfHHy2636z0CRNXnhuQ=="],
"@biomejs/cli-darwin-x64": ["@biomejs/cli-darwin-x64@2.1.1", "", { "os": "darwin", "cpu": "x64" }, "sha512-cC8HM5lrgKQXLAK+6Iz2FrYW5A62pAAX6KAnRlEyLb+Q3+Kr6ur/sSuoIacqlp1yvmjHJqjYfZjPvHWnqxoEIA=="],
"@effect/platform-node-shared": ["@effect/platform-node-shared@0.47.0", "", { "dependencies": { "@parcel/watcher": "^2.5.1", "multipasta": "^0.2.7", "ws": "^8.18.2" }, "peerDependencies": { "@effect/cluster": "^0.46.0", "@effect/platform": "^0.90.0", "@effect/rpc": "^0.68.0", "@effect/sql": "^0.44.0", "effect": "^3.17.1" } }, "sha512-ITsvT1Upphnf5Iq6gkUef4oy/ivoJkl8grtIuVkNE38I3EC57A/00anDXlwSgUd7i4pRT+KX5ypcc1/TsehCeg=="],
"@biomejs/cli-linux-arm64": ["@biomejs/cli-linux-arm64@2.1.1", "", { "os": "linux", "cpu": "arm64" }, "sha512-tw4BEbhAUkWPe4WBr6IX04DJo+2jz5qpPzpW/SWvqMjb9QuHY8+J0M23V8EPY/zWU4IG8Ui0XESapR1CB49Q7g=="],
"@effect/rpc": ["@effect/rpc@0.68.2", "", { "peerDependencies": { "@effect/platform": "^0.90.0", "effect": "^3.17.5" } }, "sha512-AFmOeB+Tl71yIDCA9ZSK0wd2uWZrPTvkJ4kcGo8Ad7okUMsAwwz2AOfJHayFbbA4XRUS8rLmSYI8H3oM0yvqVQ=="],
"@biomejs/cli-linux-arm64-musl": ["@biomejs/cli-linux-arm64-musl@2.1.1", "", { "os": "linux", "cpu": "arm64" }, "sha512-/7FBLnTswu4jgV9ttI3AMIdDGqVEPIZd8I5u2D4tfCoj8rl9dnjrEQbAIDlWhUXdyWlFSz8JypH3swU9h9P+2A=="],
"@effect/sql": ["@effect/sql@0.44.0", "", { "dependencies": { "@opentelemetry/semantic-conventions": "^1.33.0", "uuid": "^11.0.3" }, "peerDependencies": { "@effect/experimental": "^0.54.0", "@effect/platform": "^0.90.0", "effect": "^3.17.0" } }, "sha512-HxVEk9ufZZnJ2AuqUlgirjlSDYQ49QDM6o7MkcFQtp4UKrCmDgTshNre11rACOiMZH3ywH6cWViJ1eLwf10D2A=="],
"@biomejs/cli-linux-x64": ["@biomejs/cli-linux-x64@2.1.1", "", { "os": "linux", "cpu": "x64" }, "sha512-3WJ1GKjU7NzZb6RTbwLB59v9cTIlzjbiFLDB0z4376TkDqoNYilJaC37IomCr/aXwuU8QKkrYoHrgpSq5ffJ4Q=="],
"@effect/workflow": ["@effect/workflow@0.8.1", "", { "peerDependencies": { "@effect/platform": "^0.90.0", "@effect/rpc": "^0.68.0", "effect": "^3.17.1" } }, "sha512-kZRChyGlhgaPB19st/F0FZmz9Y6aQSLBbbC/oBiXspB4rdxkP10O1Z7nVsCcASQ10hfxjdoR816kxf9LctSTKQ=="],
"@biomejs/cli-linux-x64-musl": ["@biomejs/cli-linux-x64-musl@2.1.1", "", { "os": "linux", "cpu": "x64" }, "sha512-kUu+loNI3OCD2c12cUt7M5yaaSjDnGIksZwKnueubX6c/HWUyi/0mPbTBHR49Me3F0KKjWiKM+ZOjsmC+lUt9g=="],
"@electric-sql/pglite": ["@electric-sql/pglite@0.3.7", "", {}, "sha512-5c3mybVrhxu5s47zFZtIGdG8YHkKCBENOmqxnNBjY53ZoDhADY/c5UqBDl159b7qtkzNPtbbb893wL9zi1kAuw=="],
"@biomejs/cli-win32-arm64": ["@biomejs/cli-win32-arm64@2.1.1", "", { "os": "win32", "cpu": "arm64" }, "sha512-vEHK0v0oW+E6RUWLoxb2isI3rZo57OX9ZNyyGH701fZPj6Il0Rn1f5DMNyCmyflMwTnIQstEbs7n2BxYSqQx4Q=="],
"@msgpackr-extract/msgpackr-extract-darwin-arm64": ["@msgpackr-extract/msgpackr-extract-darwin-arm64@3.0.3", "", { "os": "darwin", "cpu": "arm64" }, "sha512-QZHtlVgbAdy2zAqNA9Gu1UpIuI8Xvsd1v8ic6B2pZmeFnFcMWiPLfWXh7TVw4eGEZ/C9TH281KwhVoeQUKbyjw=="],
"@biomejs/cli-win32-x64": ["@biomejs/cli-win32-x64@2.1.1", "", { "os": "win32", "cpu": "x64" }, "sha512-i2PKdn70kY++KEF/zkQFvQfX1e8SkA8hq4BgC+yE9dZqyLzB/XStY2MvwI3qswlRgnGpgncgqe0QYKVS1blksg=="],
"@msgpackr-extract/msgpackr-extract-darwin-x64": ["@msgpackr-extract/msgpackr-extract-darwin-x64@3.0.3", "", { "os": "darwin", "cpu": "x64" }, "sha512-mdzd3AVzYKuUmiWOQ8GNhl64/IoFGol569zNRdkLReh6LRLHOXxU4U8eq0JwaD8iFHdVGqSy4IjFL4reoWCDFw=="],
"@drizzle-team/brocli": ["@drizzle-team/brocli@0.10.2", "", {}, "sha512-z33Il7l5dKjUgGULTqBsQBQwckHh5AbIuxhdsIxDDiZAzBOrZO6q9ogcWC65kU382AfynTfgNumVcNIjuIua6w=="],
"@msgpackr-extract/msgpackr-extract-linux-arm": ["@msgpackr-extract/msgpackr-extract-linux-arm@3.0.3", "", { "os": "linux", "cpu": "arm" }, "sha512-fg0uy/dG/nZEXfYilKoRe7yALaNmHoYeIoJuJ7KJ+YyU2bvY8vPv27f7UKhGRpY6euFYqEVhxCFZgAUNQBM3nw=="],
"@esbuild-kit/core-utils": ["@esbuild-kit/core-utils@3.3.2", "", { "dependencies": { "esbuild": "~0.18.20", "source-map-support": "^0.5.21" } }, "sha512-sPRAnw9CdSsRmEtnsl2WXWdyquogVpB3yZ3dgwJfe8zrOzTsV7cJvmwrKVa+0ma5BoiGJ+BoqkMvawbayKUsqQ=="],
"@msgpackr-extract/msgpackr-extract-linux-arm64": ["@msgpackr-extract/msgpackr-extract-linux-arm64@3.0.3", "", { "os": "linux", "cpu": "arm64" }, "sha512-YxQL+ax0XqBJDZiKimS2XQaf+2wDGVa1enVRGzEvLLVFeqa5kx2bWbtcSXgsxjQB7nRqqIGFIcLteF/sHeVtQg=="],
"@esbuild-kit/esm-loader": ["@esbuild-kit/esm-loader@2.6.5", "", { "dependencies": { "@esbuild-kit/core-utils": "^3.3.2", "get-tsconfig": "^4.7.0" } }, "sha512-FxEMIkJKnodyA1OaCUoEvbYRkoZlLZ4d/eXFu9Fh8CbBBgP5EmZxrfTRyN0qpXZ4vOvqnE5YdRdcrmUUXuU+dA=="],
"@msgpackr-extract/msgpackr-extract-linux-x64": ["@msgpackr-extract/msgpackr-extract-linux-x64@3.0.3", "", { "os": "linux", "cpu": "x64" }, "sha512-cvwNfbP07pKUfq1uH+S6KJ7dT9K8WOE4ZiAcsrSes+UY55E/0jLYc+vq+DO7jlmqRb5zAggExKm0H7O/CBaesg=="],
"@esbuild/aix-ppc64": ["@esbuild/aix-ppc64@0.25.6", "", { "os": "aix", "cpu": "ppc64" }, "sha512-ShbM/3XxwuxjFiuVBHA+d3j5dyac0aEVVq1oluIDf71hUw0aRF59dV/efUsIwFnR6m8JNM2FjZOzmaZ8yG61kw=="],
"@msgpackr-extract/msgpackr-extract-win32-x64": ["@msgpackr-extract/msgpackr-extract-win32-x64@3.0.3", "", { "os": "win32", "cpu": "x64" }, "sha512-x0fWaQtYp4E6sktbsdAqnehxDgEc/VwM7uLsRCYWaiGu0ykYdZPiS8zCWdnjHwyiumousxfBm4SO31eXqwEZhQ=="],
"@esbuild/android-arm": ["@esbuild/android-arm@0.25.6", "", { "os": "android", "cpu": "arm" }, "sha512-S8ToEOVfg++AU/bHwdksHNnyLyVM+eMVAOf6yRKFitnwnbwwPNqKr3srzFRe7nzV69RQKb5DgchIX5pt3L53xg=="],
"@opentelemetry/semantic-conventions": ["@opentelemetry/semantic-conventions@1.36.0", "", {}, "sha512-TtxJSRD8Ohxp6bKkhrm27JRHAxPczQA7idtcTOMYI+wQRRrfgqxHv1cFbCApcSnNjtXkmzFozn6jQtFrOmbjPQ=="],
"@esbuild/android-arm64": ["@esbuild/android-arm64@0.25.6", "", { "os": "android", "cpu": "arm64" }, "sha512-hd5zdUarsK6strW+3Wxi5qWws+rJhCCbMiC9QZyzoxfk5uHRIE8T287giQxzVpEvCwuJ9Qjg6bEjcRJcgfLqoA=="],
"@parcel/watcher": ["@parcel/watcher@2.5.1", "", { "dependencies": { "detect-libc": "^1.0.3", "is-glob": "^4.0.3", "micromatch": "^4.0.5", "node-addon-api": "^7.0.0" }, "optionalDependencies": { "@parcel/watcher-android-arm64": "2.5.1", "@parcel/watcher-darwin-arm64": "2.5.1", "@parcel/watcher-darwin-x64": "2.5.1", "@parcel/watcher-freebsd-x64": "2.5.1", "@parcel/watcher-linux-arm-glibc": "2.5.1", "@parcel/watcher-linux-arm-musl": "2.5.1", "@parcel/watcher-linux-arm64-glibc": "2.5.1", "@parcel/watcher-linux-arm64-musl": "2.5.1", "@parcel/watcher-linux-x64-glibc": "2.5.1", "@parcel/watcher-linux-x64-musl": "2.5.1", "@parcel/watcher-win32-arm64": "2.5.1", "@parcel/watcher-win32-ia32": "2.5.1", "@parcel/watcher-win32-x64": "2.5.1" } }, "sha512-dfUnCxiN9H4ap84DvD2ubjw+3vUNpstxa0TneY/Paat8a3R4uQZDLSvWjmznAY/DoahqTHl9V46HF/Zs3F29pg=="],
"@esbuild/android-x64": ["@esbuild/android-x64@0.25.6", "", { "os": "android", "cpu": "x64" }, "sha512-0Z7KpHSr3VBIO9A/1wcT3NTy7EB4oNC4upJ5ye3R7taCc2GUdeynSLArnon5G8scPwaU866d3H4BCrE5xLW25A=="],
"@parcel/watcher-android-arm64": ["@parcel/watcher-android-arm64@2.5.1", "", { "os": "android", "cpu": "arm64" }, "sha512-KF8+j9nNbUN8vzOFDpRMsaKBHZ/mcjEjMToVMJOhTozkDonQFFrRcfdLWn6yWKCmJKmdVxSgHiYvTCef4/qcBA=="],
"@esbuild/darwin-arm64": ["@esbuild/darwin-arm64@0.25.6", "", { "os": "darwin", "cpu": "arm64" }, "sha512-FFCssz3XBavjxcFxKsGy2DYK5VSvJqa6y5HXljKzhRZ87LvEi13brPrf/wdyl/BbpbMKJNOr1Sd0jtW4Ge1pAA=="],
"@parcel/watcher-darwin-arm64": ["@parcel/watcher-darwin-arm64@2.5.1", "", { "os": "darwin", "cpu": "arm64" }, "sha512-eAzPv5osDmZyBhou8PoF4i6RQXAfeKL9tjb3QzYuccXFMQU0ruIc/POh30ePnaOyD1UXdlKguHBmsTs53tVoPw=="],
"@esbuild/darwin-x64": ["@esbuild/darwin-x64@0.25.6", "", { "os": "darwin", "cpu": "x64" }, "sha512-GfXs5kry/TkGM2vKqK2oyiLFygJRqKVhawu3+DOCk7OxLy/6jYkWXhlHwOoTb0WqGnWGAS7sooxbZowy+pK9Yg=="],
"@parcel/watcher-darwin-x64": ["@parcel/watcher-darwin-x64@2.5.1", "", { "os": "darwin", "cpu": "x64" }, "sha512-1ZXDthrnNmwv10A0/3AJNZ9JGlzrF82i3gNQcWOzd7nJ8aj+ILyW1MTxVk35Db0u91oD5Nlk9MBiujMlwmeXZg=="],
"@esbuild/freebsd-arm64": ["@esbuild/freebsd-arm64@0.25.6", "", { "os": "freebsd", "cpu": "arm64" }, "sha512-aoLF2c3OvDn2XDTRvn8hN6DRzVVpDlj2B/F66clWd/FHLiHaG3aVZjxQX2DYphA5y/evbdGvC6Us13tvyt4pWg=="],
"@parcel/watcher-freebsd-x64": ["@parcel/watcher-freebsd-x64@2.5.1", "", { "os": "freebsd", "cpu": "x64" }, "sha512-SI4eljM7Flp9yPuKi8W0ird8TI/JK6CSxju3NojVI6BjHsTyK7zxA9urjVjEKJ5MBYC+bLmMcbAWlZ+rFkLpJQ=="],
"@esbuild/freebsd-x64": ["@esbuild/freebsd-x64@0.25.6", "", { "os": "freebsd", "cpu": "x64" }, "sha512-2SkqTjTSo2dYi/jzFbU9Plt1vk0+nNg8YC8rOXXea+iA3hfNJWebKYPs3xnOUf9+ZWhKAaxnQNUf2X9LOpeiMQ=="],
"@parcel/watcher-linux-arm-glibc": ["@parcel/watcher-linux-arm-glibc@2.5.1", "", { "os": "linux", "cpu": "arm" }, "sha512-RCdZlEyTs8geyBkkcnPWvtXLY44BCeZKmGYRtSgtwwnHR4dxfHRG3gR99XdMEdQ7KeiDdasJwwvNSF5jKtDwdA=="],
"@esbuild/linux-arm": ["@esbuild/linux-arm@0.25.6", "", { "os": "linux", "cpu": "arm" }, "sha512-SZHQlzvqv4Du5PrKE2faN0qlbsaW/3QQfUUc6yO2EjFcA83xnwm91UbEEVx4ApZ9Z5oG8Bxz4qPE+HFwtVcfyw=="],
"@parcel/watcher-linux-arm-musl": ["@parcel/watcher-linux-arm-musl@2.5.1", "", { "os": "linux", "cpu": "arm" }, "sha512-6E+m/Mm1t1yhB8X412stiKFG3XykmgdIOqhjWj+VL8oHkKABfu/gjFj8DvLrYVHSBNC+/u5PeNrujiSQ1zwd1Q=="],
"@esbuild/linux-arm64": ["@esbuild/linux-arm64@0.25.6", "", { "os": "linux", "cpu": "arm64" }, "sha512-b967hU0gqKd9Drsh/UuAm21Khpoh6mPBSgz8mKRq4P5mVK8bpA+hQzmm/ZwGVULSNBzKdZPQBRT3+WuVavcWsQ=="],
"@parcel/watcher-linux-arm64-glibc": ["@parcel/watcher-linux-arm64-glibc@2.5.1", "", { "os": "linux", "cpu": "arm64" }, "sha512-LrGp+f02yU3BN9A+DGuY3v3bmnFUggAITBGriZHUREfNEzZh/GO06FF5u2kx8x+GBEUYfyTGamol4j3m9ANe8w=="],
"@esbuild/linux-ia32": ["@esbuild/linux-ia32@0.25.6", "", { "os": "linux", "cpu": "ia32" }, "sha512-aHWdQ2AAltRkLPOsKdi3xv0mZ8fUGPdlKEjIEhxCPm5yKEThcUjHpWB1idN74lfXGnZ5SULQSgtr5Qos5B0bPw=="],
"@parcel/watcher-linux-arm64-musl": ["@parcel/watcher-linux-arm64-musl@2.5.1", "", { "os": "linux", "cpu": "arm64" }, "sha512-cFOjABi92pMYRXS7AcQv9/M1YuKRw8SZniCDw0ssQb/noPkRzA+HBDkwmyOJYp5wXcsTrhxO0zq1U11cK9jsFg=="],
"@esbuild/linux-loong64": ["@esbuild/linux-loong64@0.25.6", "", { "os": "linux", "cpu": "none" }, "sha512-VgKCsHdXRSQ7E1+QXGdRPlQ/e08bN6WMQb27/TMfV+vPjjTImuT9PmLXupRlC90S1JeNNW5lzkAEO/McKeJ2yg=="],
"@parcel/watcher-linux-x64-glibc": ["@parcel/watcher-linux-x64-glibc@2.5.1", "", { "os": "linux", "cpu": "x64" }, "sha512-GcESn8NZySmfwlTsIur+49yDqSny2IhPeZfXunQi48DMugKeZ7uy1FX83pO0X22sHntJ4Ub+9k34XQCX+oHt2A=="],
"@esbuild/linux-mips64el": ["@esbuild/linux-mips64el@0.25.6", "", { "os": "linux", "cpu": "none" }, "sha512-WViNlpivRKT9/py3kCmkHnn44GkGXVdXfdc4drNmRl15zVQ2+D2uFwdlGh6IuK5AAnGTo2qPB1Djppj+t78rzw=="],
"@parcel/watcher-linux-x64-musl": ["@parcel/watcher-linux-x64-musl@2.5.1", "", { "os": "linux", "cpu": "x64" }, "sha512-n0E2EQbatQ3bXhcH2D1XIAANAcTZkQICBPVaxMeaCVBtOpBZpWJuf7LwyWPSBDITb7In8mqQgJ7gH8CILCURXg=="],
"@esbuild/linux-ppc64": ["@esbuild/linux-ppc64@0.25.6", "", { "os": "linux", "cpu": "ppc64" }, "sha512-wyYKZ9NTdmAMb5730I38lBqVu6cKl4ZfYXIs31Baf8aoOtB4xSGi3THmDYt4BTFHk7/EcVixkOV2uZfwU3Q2Jw=="],
"@parcel/watcher-win32-arm64": ["@parcel/watcher-win32-arm64@2.5.1", "", { "os": "win32", "cpu": "arm64" }, "sha512-RFzklRvmc3PkjKjry3hLF9wD7ppR4AKcWNzH7kXR7GUe0Igb3Nz8fyPwtZCSquGrhU5HhUNDr/mKBqj7tqA2Vw=="],
"@esbuild/linux-riscv64": ["@esbuild/linux-riscv64@0.25.6", "", { "os": "linux", "cpu": "none" }, "sha512-KZh7bAGGcrinEj4qzilJ4hqTY3Dg2U82c8bv+e1xqNqZCrCyc+TL9AUEn5WGKDzm3CfC5RODE/qc96OcbIe33w=="],
"@parcel/watcher-win32-ia32": ["@parcel/watcher-win32-ia32@2.5.1", "", { "os": "win32", "cpu": "ia32" }, "sha512-c2KkcVN+NJmuA7CGlaGD1qJh1cLfDnQsHjE89E60vUEMlqduHGCdCLJCID5geFVM0dOtA3ZiIO8BoEQmzQVfpQ=="],
"@esbuild/linux-s390x": ["@esbuild/linux-s390x@0.25.6", "", { "os": "linux", "cpu": "s390x" }, "sha512-9N1LsTwAuE9oj6lHMyyAM+ucxGiVnEqUdp4v7IaMmrwb06ZTEVCIs3oPPplVsnjPfyjmxwHxHMF8b6vzUVAUGw=="],
"@parcel/watcher-win32-x64": ["@parcel/watcher-win32-x64@2.5.1", "", { "os": "win32", "cpu": "x64" }, "sha512-9lHBdJITeNR++EvSQVUcaZoWupyHfXe1jZvGZ06O/5MflPcuPLtEphScIBL+AiCWBO46tDSHzWyD0uDmmZqsgA=="],
"@esbuild/linux-x64": ["@esbuild/linux-x64@0.25.6", "", { "os": "linux", "cpu": "x64" }, "sha512-A6bJB41b4lKFWRKNrWoP2LHsjVzNiaurf7wyj/XtFNTsnPuxwEBWHLty+ZE0dWBKuSK1fvKgrKaNjBS7qbFKig=="],
"@standard-schema/spec": ["@standard-schema/spec@1.0.0", "", {}, "sha512-m2bOd0f2RT9k8QJx1JN85cZYyH1RqFBdlwtkSlf4tBDYLCiiZnv1fIIwacK6cqwXavOydf0NPToMQgpKq+dVlA=="],
"@esbuild/netbsd-arm64": ["@esbuild/netbsd-arm64@0.25.6", "", { "os": "none", "cpu": "arm64" }, "sha512-IjA+DcwoVpjEvyxZddDqBY+uJ2Snc6duLpjmkXm/v4xuS3H+3FkLZlDm9ZsAbF9rsfP3zeA0/ArNDORZgrxR/Q=="],
"@types/bun": ["@types/bun@1.2.19", "", { "dependencies": { "bun-types": "1.2.19" } }, "sha512-d9ZCmrH3CJ2uYKXQIUuZ/pUnTqIvLDS0SK7pFmbx8ma+ziH/FRMoAq5bYpRG7y+w1gl+HgyNZbtqgMq4W4e2Lg=="],
"@esbuild/netbsd-x64": ["@esbuild/netbsd-x64@0.25.6", "", { "os": "none", "cpu": "x64" }, "sha512-dUXuZr5WenIDlMHdMkvDc1FAu4xdWixTCRgP7RQLBOkkGgwuuzaGSYcOpW4jFxzpzL1ejb8yF620UxAqnBrR9g=="],
"@types/node": ["@types/node@24.2.0", "", { "dependencies": { "undici-types": "~7.10.0" } }, "sha512-3xyG3pMCq3oYCNg7/ZP+E1ooTaGB4cG8JWRsqqOYQdbWNY4zbaV0Ennrd7stjiJEFZCaybcIgpTjJWHRfBSIDw=="],
"@esbuild/openbsd-arm64": ["@esbuild/openbsd-arm64@0.25.6", "", { "os": "openbsd", "cpu": "arm64" }, "sha512-l8ZCvXP0tbTJ3iaqdNf3pjaOSd5ex/e6/omLIQCVBLmHTlfXW3zAxQ4fnDmPLOB1x9xrcSi/xtCWFwCZRIaEwg=="],
"@types/react": ["@types/react@19.1.9", "", { "dependencies": { "csstype": "^3.0.2" } }, "sha512-WmdoynAX8Stew/36uTSVMcLJJ1KRh6L3IZRx1PZ7qJtBqT3dYTgyDTx8H1qoRghErydW7xw9mSJ3wS//tCRpFA=="],
"@esbuild/openbsd-x64": ["@esbuild/openbsd-x64@0.25.6", "", { "os": "openbsd", "cpu": "x64" }, "sha512-hKrmDa0aOFOr71KQ/19JC7az1P0GWtCN1t2ahYAf4O007DHZt/dW8ym5+CUdJhQ/qkZmI1HAF8KkJbEFtCL7gw=="],
"braces": ["braces@3.0.3", "", { "dependencies": { "fill-range": "^7.1.1" } }, "sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA=="],
"@esbuild/openharmony-arm64": ["@esbuild/openharmony-arm64@0.25.6", "", { "os": "none", "cpu": "arm64" }, "sha512-+SqBcAWoB1fYKmpWoQP4pGtx+pUUC//RNYhFdbcSA16617cchuryuhOCRpPsjCblKukAckWsV+aQ3UKT/RMPcA=="],
"@esbuild/sunos-x64": ["@esbuild/sunos-x64@0.25.6", "", { "os": "sunos", "cpu": "x64" }, "sha512-dyCGxv1/Br7MiSC42qinGL8KkG4kX0pEsdb0+TKhmJZgCUDBGmyo1/ArCjNGiOLiIAgdbWgmWgib4HoCi5t7kA=="],
"@esbuild/win32-arm64": ["@esbuild/win32-arm64@0.25.6", "", { "os": "win32", "cpu": "arm64" }, "sha512-42QOgcZeZOvXfsCBJF5Afw73t4veOId//XD3i+/9gSkhSV6Gk3VPlWncctI+JcOyERv85FUo7RxuxGy+z8A43Q=="],
"@esbuild/win32-ia32": ["@esbuild/win32-ia32@0.25.6", "", { "os": "win32", "cpu": "ia32" }, "sha512-4AWhgXmDuYN7rJI6ORB+uU9DHLq/erBbuMoAuB4VWJTu5KtCgcKYPynF0YI1VkBNuEfjNlLrFr9KZPJzrtLkrQ=="],
"@esbuild/win32-x64": ["@esbuild/win32-x64@0.25.6", "", { "os": "win32", "cpu": "x64" }, "sha512-NgJPHHbEpLQgDH2MjQu90pzW/5vvXIZ7KOnPyNBm92A6WgZ/7b6fJyUBjoumLqeOQQGqY2QjQxRo97ah4Sj0cA=="],
"@hexagon/base64": ["@hexagon/base64@1.1.28", "", {}, "sha512-lhqDEAvWixy3bZ+UOYbPwUbBkwBq5C1LAJ/xPC8Oi+lL54oyakv/npbA0aU2hgCsx/1NUd4IBvV03+aUBWxerw=="],
"@levischuck/tiny-cbor": ["@levischuck/tiny-cbor@0.2.11", "", {}, "sha512-llBRm4dT4Z89aRsm6u2oEZ8tfwL/2l6BwpZ7JcyieouniDECM5AqNgr/y08zalEIvW3RSK4upYyybDcmjXqAow=="],
"@noble/ciphers": ["@noble/ciphers@0.6.0", "", {}, "sha512-mIbq/R9QXk5/cTfESb1OKtyFnk7oc1Om/8onA1158K9/OZUQFDEVy55jVTato+xmp3XX6F6Qh0zz0Nc1AxAlRQ=="],
"@noble/hashes": ["@noble/hashes@1.8.0", "", {}, "sha512-jCs9ldd7NwzpgXDIf6P3+NrHh9/sD6CQdxHyjQI+h/6rDNo88ypBxxz45UDuZHz9r3tNz7N/VInSVoVdtXEI4A=="],
"@peculiar/asn1-android": ["@peculiar/asn1-android@2.3.16", "", { "dependencies": { "@peculiar/asn1-schema": "^2.3.15", "asn1js": "^3.0.5", "tslib": "^2.8.1" } }, "sha512-a1viIv3bIahXNssrOIkXZIlI2ePpZaNmR30d4aBL99mu2rO+mT9D6zBsp7H6eROWGtmwv0Ionp5olJurIo09dw=="],
"@peculiar/asn1-ecc": ["@peculiar/asn1-ecc@2.3.15", "", { "dependencies": { "@peculiar/asn1-schema": "^2.3.15", "@peculiar/asn1-x509": "^2.3.15", "asn1js": "^3.0.5", "tslib": "^2.8.1" } }, "sha512-/HtR91dvgog7z/WhCVdxZJ/jitJuIu8iTqiyWVgRE9Ac5imt2sT/E4obqIVGKQw7PIy+X6i8lVBoT6wC73XUgA=="],
"@peculiar/asn1-rsa": ["@peculiar/asn1-rsa@2.3.15", "", { "dependencies": { "@peculiar/asn1-schema": "^2.3.15", "@peculiar/asn1-x509": "^2.3.15", "asn1js": "^3.0.5", "tslib": "^2.8.1" } }, "sha512-p6hsanvPhexRtYSOHihLvUUgrJ8y0FtOM97N5UEpC+VifFYyZa0iZ5cXjTkZoDwxJ/TTJ1IJo3HVTB2JJTpXvg=="],
"@peculiar/asn1-schema": ["@peculiar/asn1-schema@2.3.15", "", { "dependencies": { "asn1js": "^3.0.5", "pvtsutils": "^1.3.6", "tslib": "^2.8.1" } }, "sha512-QPeD8UA8axQREpgR5UTAfu2mqQmm97oUqahDtNdBcfj3qAnoXzFdQW+aNf/tD2WVXF8Fhmftxoj0eMIT++gX2w=="],
"@peculiar/asn1-x509": ["@peculiar/asn1-x509@2.3.15", "", { "dependencies": { "@peculiar/asn1-schema": "^2.3.15", "asn1js": "^3.0.5", "pvtsutils": "^1.3.6", "tslib": "^2.8.1" } }, "sha512-0dK5xqTqSLaxv1FHXIcd4Q/BZNuopg+u1l23hT9rOmQ1g4dNtw0g/RnEi+TboB0gOwGtrWn269v27cMgchFIIg=="],
"@simplewebauthn/browser": ["@simplewebauthn/browser@13.1.2", "", {}, "sha512-aZnW0KawAM83fSBUgglP5WofbrLbLyr7CoPqYr66Eppm7zO86YX6rrCjRB3hQKPrL7ATvY4FVXlykZ6w6FwYYw=="],
"@simplewebauthn/server": ["@simplewebauthn/server@13.1.2", "", { "dependencies": { "@hexagon/base64": "^1.1.27", "@levischuck/tiny-cbor": "^0.2.2", "@peculiar/asn1-android": "^2.3.10", "@peculiar/asn1-ecc": "^2.3.8", "@peculiar/asn1-rsa": "^2.3.8", "@peculiar/asn1-schema": "^2.3.8", "@peculiar/asn1-x509": "^2.3.8" } }, "sha512-VwoDfvLXSCaRiD+xCIuyslU0HLxVggeE5BL06+GbsP2l1fGf5op8e0c3ZtKoi+vSg1q4ikjtAghC23ze2Q3H9g=="],
"@t3-oss/env-core": ["@t3-oss/env-core@0.13.8", "", { "peerDependencies": { "arktype": "^2.1.0", "typescript": ">=5.0.0", "valibot": "^1.0.0-beta.7 || ^1.0.0", "zod": "^3.24.0 || ^4.0.0-beta.0" }, "optionalPeers": ["arktype", "typescript", "valibot", "zod"] }, "sha512-L1inmpzLQyYu4+Q1DyrXsGJYCXbtXjC4cICw1uAKv0ppYPQv656lhZPU91Qd1VS6SO/bou1/q5ufVzBGbNsUpw=="],
"@types/bun": ["@types/bun@1.2.18", "", { "dependencies": { "bun-types": "1.2.18" } }, "sha512-Xf6RaWVheyemaThV0kUfaAUvCNokFr+bH8Jxp+tTZfx7dAPA8z9ePnP9S9+Vspzuxxx9JRAXhnyccRj3GyCMdQ=="],
"@types/node": ["@types/node@24.0.12", "", { "dependencies": { "undici-types": "~7.8.0" } }, "sha512-LtOrbvDf5ndC9Xi+4QZjVL0woFymF/xSTKZKPgrrl7H7XoeDvnD+E2IclKVDyaK9UM756W/3BXqSU+JEHopA9g=="],
"@types/pg": ["@types/pg@8.15.4", "", { "dependencies": { "@types/node": "*", "pg-protocol": "*", "pg-types": "^2.2.0" } }, "sha512-I6UNVBAoYbvuWkkU3oosC8yxqH21f4/Jc4DK71JLG3dT2mdlGe1z+ep/LQGXaKaOgcvUrsQoPRqfgtMcvZiJhg=="],
"@types/react": ["@types/react@19.1.8", "", { "dependencies": { "csstype": "^3.0.2" } }, "sha512-AwAfQ2Wa5bCx9WP8nZL2uMZWod7J7/JSplxbTmBQ5ms6QpqNYm672H0Vu9ZVKVngQ+ii4R/byguVEUZQyeg44g=="],
"asn1js": ["asn1js@3.0.6", "", { "dependencies": { "pvtsutils": "^1.3.6", "pvutils": "^1.1.3", "tslib": "^2.8.1" } }, "sha512-UOCGPYbl0tv8+006qks/dTgV9ajs97X2p0FAbyS2iyCRrmLSRolDaHdp+v/CLgnzHc3fVB+CwYiUmei7ndFcgA=="],
"better-auth": ["better-auth@1.2.12", "", { "dependencies": { "@better-auth/utils": "0.2.5", "@better-fetch/fetch": "^1.1.18", "@noble/ciphers": "^0.6.0", "@noble/hashes": "^1.6.1", "@simplewebauthn/browser": "^13.0.0", "@simplewebauthn/server": "^13.0.0", "better-call": "^1.0.8", "defu": "^6.1.4", "jose": "^6.0.11", "kysely": "^0.28.2", "nanostores": "^0.11.3", "zod": "^3.24.1" } }, "sha512-YicCyjQ+lxb7YnnaCewrVOjj3nPVa0xcfrOJK7k5MLMX9Mt9UnJ8GYaVQNHOHLyVxl92qc3C758X1ihqAUzm4w=="],
"better-call": ["better-call@1.0.11", "", { "dependencies": { "@better-fetch/fetch": "^1.1.4", "rou3": "^0.5.1", "set-cookie-parser": "^2.7.1", "uncrypto": "^0.1.3" } }, "sha512-MOM01EMZFMzApWq9+WfqAnl2+DzFoMNp4H+lTFE1p7WF4evMeaQAAcOhI1WwMjITV4PGIWJ3Vn5GciQ5VHXbIA=="],
"buffer-from": ["buffer-from@1.1.2", "", {}, "sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ=="],
"bun-types": ["bun-types@1.2.18", "", { "dependencies": { "@types/node": "*" }, "peerDependencies": { "@types/react": "^19" } }, "sha512-04+Eha5NP7Z0A9YgDAzMk5PHR16ZuLVa83b26kH5+cp1qZW4F6FmAURngE7INf4tKOvCE69vYvDEwoNl1tGiWw=="],
"bun-types": ["bun-types@1.2.19", "", { "dependencies": { "@types/node": "*" }, "peerDependencies": { "@types/react": "^19" } }, "sha512-uAOTaZSPuYsWIXRpj7o56Let0g/wjihKCkeRqUBhlLVM/Bt+Fj9xTo+LhC1OV1XDaGkz4hNC80et5xgy+9KTHQ=="],
"csstype": ["csstype@3.1.3", "", {}, "sha512-M1uQkMl8rQK/szD0LNhtqxIPLpimGm8sOBwU7lLnCpSbTyY3yeU1Vc7l4KT5zT4s/yOxHH5O7tIuuLOCnLADRw=="],
"debug": ["debug@4.4.1", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-KcKCqiftBJcZr++7ykoDIEwSa3XWowTfNPo92BYxjXiyYEVrUQh2aLyhxBCwww+heortUFxEJYcRzosstTEBYQ=="],
"detect-libc": ["detect-libc@1.0.3", "", { "bin": { "detect-libc": "./bin/detect-libc.js" } }, "sha512-pGjwhsmsp4kL2RTz08wcOlGN83otlqHeD/Z5T8GXZB+/YcpQ/dgo+lbU8ZsGxV0HIvqqxo9l7mqYwyYMD9bKDg=="],
"defu": ["defu@6.1.4", "", {}, "sha512-mEQCMmwJu317oSz8CwdIOdwf3xMif1ttiM8LTufzc3g6kR+9Pe236twL8j3IYT1F7GfRgGcW6MWxzZjLIkuHIg=="],
"effect": ["effect@3.17.6", "", { "dependencies": { "@standard-schema/spec": "^1.0.0", "fast-check": "^3.23.1" } }, "sha512-BDVr3TEI6JpTnsZwDzXlzxDtyMS0cwtfWmhqfL3nl7Be/443+geFYAlVpCy7SCkLCck0NbmFX86LtlCZtCgdxA=="],
"drizzle-kit": ["drizzle-kit@0.31.4", "", { "dependencies": { "@drizzle-team/brocli": "^0.10.2", "@esbuild-kit/esm-loader": "^2.5.5", "esbuild": "^0.25.4", "esbuild-register": "^3.5.0" }, "bin": { "drizzle-kit": "bin.cjs" } }, "sha512-tCPWVZWZqWVx2XUsVpJRnH9Mx0ClVOf5YUHerZ5so1OKSlqww4zy1R5ksEdGRcO3tM3zj0PYN6V48TbQCL1RfA=="],
"fast-check": ["fast-check@3.23.2", "", { "dependencies": { "pure-rand": "^6.1.0" } }, "sha512-h5+1OzzfCC3Ef7VbtKdcv7zsstUQwUDlYpUTvjeUsJAssPgLn7QzbboPtL5ro04Mq0rPOsMzl7q5hIbRs2wD1A=="],
"drizzle-orm": ["drizzle-orm@0.44.2", "", { "peerDependencies": { "@aws-sdk/client-rds-data": ">=3", "@cloudflare/workers-types": ">=4", "@electric-sql/pglite": ">=0.2.0", "@libsql/client": ">=0.10.0", "@libsql/client-wasm": ">=0.10.0", "@neondatabase/serverless": ">=0.10.0", "@op-engineering/op-sqlite": ">=2", "@opentelemetry/api": "^1.4.1", "@planetscale/database": ">=1.13", "@prisma/client": "*", "@tidbcloud/serverless": "*", "@types/better-sqlite3": "*", "@types/pg": "*", "@types/sql.js": "*", "@upstash/redis": ">=1.34.7", "@vercel/postgres": ">=0.8.0", "@xata.io/client": "*", "better-sqlite3": ">=7", "bun-types": "*", "expo-sqlite": ">=14.0.0", "gel": ">=2", "knex": "*", "kysely": "*", "mysql2": ">=2", "pg": ">=8", "postgres": ">=3", "sql.js": ">=1", "sqlite3": ">=5" }, "optionalPeers": ["@aws-sdk/client-rds-data", "@cloudflare/workers-types", "@electric-sql/pglite", "@libsql/client", "@libsql/client-wasm", "@neondatabase/serverless", "@op-engineering/op-sqlite", "@opentelemetry/api", "@planetscale/database", "@prisma/client", "@tidbcloud/serverless", "@types/better-sqlite3", "@types/pg", "@types/sql.js", "@upstash/redis", "@vercel/postgres", "@xata.io/client", "better-sqlite3", "bun-types", "expo-sqlite", "gel", "knex", "kysely", "mysql2", "pg", "postgres", "sql.js", "sqlite3"] }, "sha512-zGAqBzWWkVSFjZpwPOrmCrgO++1kZ5H/rZ4qTGeGOe18iXGVJWf3WPfHOVwFIbmi8kHjfJstC6rJomzGx8g/dQ=="],
"fill-range": ["fill-range@7.1.1", "", { "dependencies": { "to-regex-range": "^5.0.1" } }, "sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg=="],
"esbuild": ["esbuild@0.25.6", "", { "optionalDependencies": { "@esbuild/aix-ppc64": "0.25.6", "@esbuild/android-arm": "0.25.6", "@esbuild/android-arm64": "0.25.6", "@esbuild/android-x64": "0.25.6", "@esbuild/darwin-arm64": "0.25.6", "@esbuild/darwin-x64": "0.25.6", "@esbuild/freebsd-arm64": "0.25.6", "@esbuild/freebsd-x64": "0.25.6", "@esbuild/linux-arm": "0.25.6", "@esbuild/linux-arm64": "0.25.6", "@esbuild/linux-ia32": "0.25.6", "@esbuild/linux-loong64": "0.25.6", "@esbuild/linux-mips64el": "0.25.6", "@esbuild/linux-ppc64": "0.25.6", "@esbuild/linux-riscv64": "0.25.6", "@esbuild/linux-s390x": "0.25.6", "@esbuild/linux-x64": "0.25.6", "@esbuild/netbsd-arm64": "0.25.6", "@esbuild/netbsd-x64": "0.25.6", "@esbuild/openbsd-arm64": "0.25.6", "@esbuild/openbsd-x64": "0.25.6", "@esbuild/openharmony-arm64": "0.25.6", "@esbuild/sunos-x64": "0.25.6", "@esbuild/win32-arm64": "0.25.6", "@esbuild/win32-ia32": "0.25.6", "@esbuild/win32-x64": "0.25.6" }, "bin": { "esbuild": "bin/esbuild" } }, "sha512-GVuzuUwtdsghE3ocJ9Bs8PNoF13HNQ5TXbEi2AhvVb8xU1Iwt9Fos9FEamfoee+u/TOsn7GUWc04lz46n2bbTg=="],
"find-my-way-ts": ["find-my-way-ts@0.1.6", "", {}, "sha512-a85L9ZoXtNAey3Y6Z+eBWW658kO/MwR7zIafkIUPUMf3isZG0NCs2pjW2wtjxAKuJPxMAsHUIP4ZPGv0o5gyTA=="],
"esbuild-register": ["esbuild-register@3.6.0", "", { "dependencies": { "debug": "^4.3.4" }, "peerDependencies": { "esbuild": ">=0.12 <1" } }, "sha512-H2/S7Pm8a9CL1uhp9OvjwrBh5Pvx0H8qVOxNu8Wed9Y7qv56MPtq+GGM8RJpq6glYJn9Wspr8uw7l55uyinNeg=="],
"is-extglob": ["is-extglob@2.1.1", "", {}, "sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ=="],
"get-tsconfig": ["get-tsconfig@4.10.1", "", { "dependencies": { "resolve-pkg-maps": "^1.0.0" } }, "sha512-auHyJ4AgMz7vgS8Hp3N6HXSmlMdUyhSUrfBF16w153rxtLIEOE+HGqaBppczZvnHLqQJfiHotCYpNhl0lUROFQ=="],
"is-glob": ["is-glob@4.0.3", "", { "dependencies": { "is-extglob": "^2.1.1" } }, "sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg=="],
"hono": ["hono@4.8.4", "", {}, "sha512-KOIBp1+iUs0HrKztM4EHiB2UtzZDTBihDtOF5K6+WaJjCPeaW4Q92R8j63jOhvJI5+tZSMuKD9REVEXXY9illg=="],
"is-number": ["is-number@7.0.0", "", {}, "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng=="],
"jose": ["jose@6.0.11", "", {}, "sha512-QxG7EaliDARm1O1S8BGakqncGT9s25bKL1WSf6/oa17Tkqwi8D2ZNglqCF+DsYF88/rV66Q/Q2mFAy697E1DUg=="],
"micromatch": ["micromatch@4.0.8", "", { "dependencies": { "braces": "^3.0.3", "picomatch": "^2.3.1" } }, "sha512-PXwfBhYu0hBCPw8Dn0E+WDYb7af3dSLVWKi3HGv84IdF4TyFoC0ysxFd0Goxw7nSv4T/PzEJQxsYsEiFCKo2BA=="],
"kysely": ["kysely@0.28.2", "", {}, "sha512-4YAVLoF0Sf0UTqlhgQMFU9iQECdah7n+13ANkiuVfRvlK+uI0Etbgd7bVP36dKlG+NXWbhGua8vnGt+sdhvT7A=="],
"msgpackr": ["msgpackr@1.11.5", "", { "optionalDependencies": { "msgpackr-extract": "^3.0.2" } }, "sha512-UjkUHN0yqp9RWKy0Lplhh+wlpdt9oQBYgULZOiFhV3VclSF1JnSQWZ5r9gORQlNYaUKQoR8itv7g7z1xDDuACA=="],
"ms": ["ms@2.1.3", "", {}, "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA=="],
"msgpackr-extract": ["msgpackr-extract@3.0.3", "", { "dependencies": { "node-gyp-build-optional-packages": "5.2.2" }, "optionalDependencies": { "@msgpackr-extract/msgpackr-extract-darwin-arm64": "3.0.3", "@msgpackr-extract/msgpackr-extract-darwin-x64": "3.0.3", "@msgpackr-extract/msgpackr-extract-linux-arm": "3.0.3", "@msgpackr-extract/msgpackr-extract-linux-arm64": "3.0.3", "@msgpackr-extract/msgpackr-extract-linux-x64": "3.0.3", "@msgpackr-extract/msgpackr-extract-win32-x64": "3.0.3" }, "bin": { "download-msgpackr-prebuilds": "bin/download-prebuilds.js" } }, "sha512-P0efT1C9jIdVRefqjzOQ9Xml57zpOXnIuS+csaB4MdZbTdmGDLo8XhzBG1N7aO11gKDDkJvBLULeFTo46wwreA=="],
"nanostores": ["nanostores@0.11.4", "", {}, "sha512-k1oiVNN4hDK8NcNERSZLQiMfRzEGtfnvZvdBvey3SQbgn8Dcrk0h1I6vpxApjb10PFUflZrgJ2WEZyJQ+5v7YQ=="],
"multipasta": ["multipasta@0.2.7", "", {}, "sha512-KPA58d68KgGil15oDqXjkUBEBYc00XvbPj5/X+dyzeo/lWm9Nc25pQRlf1D+gv4OpK7NM0J1odrbu9JNNGvynA=="],
"pg": ["pg@8.16.3", "", { "dependencies": { "pg-connection-string": "^2.9.1", "pg-pool": "^3.10.1", "pg-protocol": "^1.10.3", "pg-types": "2.2.0", "pgpass": "1.0.5" }, "optionalDependencies": { "pg-cloudflare": "^1.2.7" }, "peerDependencies": { "pg-native": ">=3.0.1" }, "optionalPeers": ["pg-native"] }, "sha512-enxc1h0jA/aq5oSDMvqyW3q89ra6XIIDZgCX9vkMrnz5DFTw/Ny3Li2lFQ+pt3L6MCgm/5o2o8HW9hiJji+xvw=="],
"node-addon-api": ["node-addon-api@7.1.1", "", {}, "sha512-5m3bsyrjFWE1xf7nz7YXdN4udnVtXK6/Yfgn5qnahL6bCkf2yKt4k3nuTKAtT4r3IG8JNR2ncsIMdZuAzJjHQQ=="],
"pg-cloudflare": ["pg-cloudflare@1.2.7", "", {}, "sha512-YgCtzMH0ptvZJslLM1ffsY4EuGaU0cx4XSdXLRFae8bPP4dS5xL1tNB3k2o/N64cHJpwU7dxKli/nZ2lUa5fLg=="],
"node-gyp-build-optional-packages": ["node-gyp-build-optional-packages@5.2.2", "", { "dependencies": { "detect-libc": "^2.0.1" }, "bin": { "node-gyp-build-optional-packages": "bin.js", "node-gyp-build-optional-packages-optional": "optional.js", "node-gyp-build-optional-packages-test": "build-test.js" } }, "sha512-s+w+rBWnpTMwSFbaE0UXsRlg7hU4FjekKU4eyAih5T8nJuNZT1nNsskXpxmeqSK9UzkBl6UgRlnKc8hz8IEqOw=="],
"pg-connection-string": ["pg-connection-string@2.9.1", "", {}, "sha512-nkc6NpDcvPVpZXxrreI/FOtX3XemeLl8E0qFr6F2Lrm/I8WOnaWNhIPK2Z7OHpw7gh5XJThi6j6ppgNoaT1w4w=="],
"picomatch": ["picomatch@2.3.1", "", {}, "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA=="],
"pg-int8": ["pg-int8@1.0.1", "", {}, "sha512-WCtabS6t3c8SkpDBUlb1kjOs7l66xsGdKpIPZsg4wR+B3+u9UAum2odSsF9tnvxg80h4ZxLWMy4pRjOsFIqQpw=="],
"pure-rand": ["pure-rand@6.1.0", "", {}, "sha512-bVWawvoZoBYpp6yIoQtQXHZjmz35RSVHnUOTefl8Vcjr8snTPY1wnpSPMWekcFwbxI6gtmT7rSYPFvz71ldiOA=="],
"pg-pool": ["pg-pool@3.10.1", "", { "peerDependencies": { "pg": ">=8.0" } }, "sha512-Tu8jMlcX+9d8+QVzKIvM/uJtp07PKr82IUOYEphaWcoBhIYkoHpLXN3qO59nAI11ripznDsEzEv8nUxBVWajGg=="],
"to-regex-range": ["to-regex-range@5.0.1", "", { "dependencies": { "is-number": "^7.0.0" } }, "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ=="],
"pg-protocol": ["pg-protocol@1.10.3", "", {}, "sha512-6DIBgBQaTKDJyxnXaLiLR8wBpQQcGWuAESkRBX/t6OwA8YsqP+iVSiond2EDy6Y/dsGk8rh/jtax3js5NeV7JQ=="],
"typescript": ["typescript@5.9.2", "", { "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" } }, "sha512-CWBzXQrc/qOkhidw1OzBTQuYRbfyxDXJMVJ1XNwUHGROVmuaeiEm3OslpZ1RV96d7SKKjZKrSJu3+t/xlw3R9A=="],
"pg-types": ["pg-types@2.2.0", "", { "dependencies": { "pg-int8": "1.0.1", "postgres-array": "~2.0.0", "postgres-bytea": "~1.0.0", "postgres-date": "~1.0.4", "postgres-interval": "^1.1.0" } }, "sha512-qTAAlrEsl8s4OiEQY69wDvcMIdQN6wdz5ojQiOy6YRMuynxenON0O5oCpJI6lshc6scgAY8qvJ2On/p+CXY0GA=="],
"undici-types": ["undici-types@7.10.0", "", {}, "sha512-t5Fy/nfn+14LuOc2KNYg75vZqClpAiqscVvMygNnlsHBFpSXdJaYtXMcdNLpl/Qvc3P2cB3s6lOV51nqsFq4ag=="],
"pgpass": ["pgpass@1.0.5", "", { "dependencies": { "split2": "^4.1.0" } }, "sha512-FdW9r/jQZhSeohs1Z3sI1yxFQNFvMcnmfuj4WBMUTxOrAyLMaTcE1aAMBiTlbMNaXvBCQuVi0R7hd8udDSP7ug=="],
"uuid": ["uuid@11.1.0", "", { "bin": { "uuid": "dist/esm/bin/uuid" } }, "sha512-0/A9rDy9P7cJ+8w1c9WD9V//9Wj15Ce2MPz8Ri6032usz+NfePxx5AcN3bN+r6ZL6jEo066/yNYB3tn4pQEx+A=="],
"postgres-array": ["postgres-array@2.0.0", "", {}, "sha512-VpZrUqU5A69eQyW2c5CA1jtLecCsN2U/bD6VilrFDWq5+5UIEVO7nazS3TEcHf1zuPYO/sqGvUvW62g86RXZuA=="],
"ws": ["ws@8.18.3", "", { "peerDependencies": { "bufferutil": "^4.0.1", "utf-8-validate": ">=5.0.2" }, "optionalPeers": ["bufferutil", "utf-8-validate"] }, "sha512-PEIGCY5tSlUt50cqyMXfCzX+oOPqN0vuGqWzbcJ2xvnkzkq46oOpz7dQaTDBdfICb4N14+GARUDw2XV2N4tvzg=="],
"postgres-bytea": ["postgres-bytea@1.0.0", "", {}, "sha512-xy3pmLuQqRBZBXDULy7KbaitYqLcmxigw14Q5sj8QBVLqEwXfeybIKVWiqAXTlcvdvb0+xkOtDbfQMOf4lST1w=="],
"postgres-date": ["postgres-date@1.0.7", "", {}, "sha512-suDmjLVQg78nMK2UZ454hAG+OAW+HQPZ6n++TNDUX+L0+uUlLywnoxJKDou51Zm+zTCjrCl0Nq6J9C5hP9vK/Q=="],
"postgres-interval": ["postgres-interval@1.2.0", "", { "dependencies": { "xtend": "^4.0.0" } }, "sha512-9ZhXKM/rw350N1ovuWHbGxnGh/SNJ4cnxHiM0rxE4VN41wsg8P8zWn9hv/buK00RP4WvlOyr/RBDiptyxVbkZQ=="],
"pvtsutils": ["pvtsutils@1.3.6", "", { "dependencies": { "tslib": "^2.8.1" } }, "sha512-PLgQXQ6H2FWCaeRak8vvk1GW462lMxB5s3Jm673N82zI4vqtVUPuZdffdZbPDFRoU8kAhItWFtPCWiPpp4/EDg=="],
"pvutils": ["pvutils@1.1.3", "", {}, "sha512-pMpnA0qRdFp32b1sJl1wOJNxZLQ2cbQx+k6tjNtZ8CpvVhNqEPRgivZ2WOUev2YMajecdH7ctUPDvEe87nariQ=="],
"resolve-pkg-maps": ["resolve-pkg-maps@1.0.0", "", {}, "sha512-seS2Tj26TBVOC2NIc2rOe2y2ZO7efxITtLZcGSOnHHNOQ7CkiUBfw0Iw2ck6xkIhPwLhKNLS8BO+hEpngQlqzw=="],
"rou3": ["rou3@0.5.1", "", {}, "sha512-OXMmJ3zRk2xeXFGfA3K+EOPHC5u7RDFG7lIOx0X1pdnhUkI8MdVrbV+sNsD80ElpUZ+MRHdyxPnFthq9VHs8uQ=="],
"set-cookie-parser": ["set-cookie-parser@2.7.1", "", {}, "sha512-IOc8uWeOZgnb3ptbCURJWNjWUPcO3ZnTTdzsurqERrP6nPyv+paC55vJM0LpOlT2ne+Ix+9+CRG1MNLlyZ4GjQ=="],
"source-map": ["source-map@0.6.1", "", {}, "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g=="],
"source-map-support": ["source-map-support@0.5.21", "", { "dependencies": { "buffer-from": "^1.0.0", "source-map": "^0.6.0" } }, "sha512-uBHU3L3czsIyYXKX88fdrGovxdSCoTGDRZ6SYXtSRxLZUzHg5P/66Ht6uoUlHu9EZod+inXhKo3qQgwXUT/y1w=="],
"split2": ["split2@4.2.0", "", {}, "sha512-UcjcJOWknrNkF6PLX83qcHM6KHgVKNkV62Y8a5uYDVv9ydGQVwAHMKqHdJje1VTWpljG0WYpCDhrCdAOYH4TWg=="],
"tslib": ["tslib@2.8.1", "", {}, "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w=="],
"typescript": ["typescript@5.8.3", "", { "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" } }, "sha512-p1diW6TqL9L07nNxvRMM7hMMw4c5XOo/1ibL4aAIGmSAt9slTE1Xgw5KWuof2uTOvCg9BY7ZRi+GaF+7sfgPeQ=="],
"uncrypto": ["uncrypto@0.1.3", "", {}, "sha512-Ql87qFHB3s/De2ClA9e0gsnS6zXG27SkTiSJwjCc9MebbfapQfuPzumMIUMi38ezPZVNFcHI9sUIepeQfw8J8Q=="],
"undici-types": ["undici-types@7.8.0", "", {}, "sha512-9UJ2xGDvQ43tYyVMpuHlsgApydB8ZKfVYTsLDhXkFL/6gfkp+U8xTGdh8pMJv1SpZna0zxG1DwsKZsreLbXBxw=="],
"xtend": ["xtend@4.0.2", "", {}, "sha512-LKYU1iAXJXUgAXn9URjiu+MWhyUXHsvfp7mcuYm9dSUKK0/CjtrUwFAxD82/mCWbtLsGjFIad0wIsod4zrTAEQ=="],
"zod": ["zod@4.0.1", "", {}, "sha512-3ayMCwqt0I18sfM+jC4t06mieo1n5B7jeH7GtPvCmzSfnecxO8+qBLtD56wOLccc3TJOMHUhiaG36ck6dtmKeQ=="],
"@esbuild-kit/core-utils/esbuild": ["esbuild@0.18.20", "", { "optionalDependencies": { "@esbuild/android-arm": "0.18.20", "@esbuild/android-arm64": "0.18.20", "@esbuild/android-x64": "0.18.20", "@esbuild/darwin-arm64": "0.18.20", "@esbuild/darwin-x64": "0.18.20", "@esbuild/freebsd-arm64": "0.18.20", "@esbuild/freebsd-x64": "0.18.20", "@esbuild/linux-arm": "0.18.20", "@esbuild/linux-arm64": "0.18.20", "@esbuild/linux-ia32": "0.18.20", "@esbuild/linux-loong64": "0.18.20", "@esbuild/linux-mips64el": "0.18.20", "@esbuild/linux-ppc64": "0.18.20", "@esbuild/linux-riscv64": "0.18.20", "@esbuild/linux-s390x": "0.18.20", "@esbuild/linux-x64": "0.18.20", "@esbuild/netbsd-x64": "0.18.20", "@esbuild/openbsd-x64": "0.18.20", "@esbuild/sunos-x64": "0.18.20", "@esbuild/win32-arm64": "0.18.20", "@esbuild/win32-ia32": "0.18.20", "@esbuild/win32-x64": "0.18.20" }, "bin": { "esbuild": "bin/esbuild" } }, "sha512-ceqxoedUrcayh7Y7ZX6NdbbDzGROiyVBgC4PriJThBKSVPWnnFHZAkfI1lJT8QFkOwH4qOS2SJkS4wvpGl8BpA=="],
"better-auth/zod": ["zod@3.25.76", "", {}, "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/android-arm": ["@esbuild/android-arm@0.18.20", "", { "os": "android", "cpu": "arm" }, "sha512-fyi7TDI/ijKKNZTUJAQqiG5T7YjJXgnzkURqmGj13C6dCqckZBLdl4h7bkhHt/t0WP+zO9/zwroDvANaOqO5Sw=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/android-arm64": ["@esbuild/android-arm64@0.18.20", "", { "os": "android", "cpu": "arm64" }, "sha512-Nz4rJcchGDtENV0eMKUNa6L12zz2zBDXuhj/Vjh18zGqB44Bi7MBMSXjgunJgjRhCmKOjnPuZp4Mb6OKqtMHLQ=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/android-x64": ["@esbuild/android-x64@0.18.20", "", { "os": "android", "cpu": "x64" }, "sha512-8GDdlePJA8D6zlZYJV/jnrRAi6rOiNaCC/JclcXpB+KIuvfBN4owLtgzY2bsxnx666XjJx2kDPUmnTtR8qKQUg=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/darwin-arm64": ["@esbuild/darwin-arm64@0.18.20", "", { "os": "darwin", "cpu": "arm64" }, "sha512-bxRHW5kHU38zS2lPTPOyuyTm+S+eobPUnTNkdJEfAddYgEcll4xkT8DB9d2008DtTbl7uJag2HuE5NZAZgnNEA=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/darwin-x64": ["@esbuild/darwin-x64@0.18.20", "", { "os": "darwin", "cpu": "x64" }, "sha512-pc5gxlMDxzm513qPGbCbDukOdsGtKhfxD1zJKXjCCcU7ju50O7MeAZ8c4krSJcOIJGFR+qx21yMMVYwiQvyTyQ=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/freebsd-arm64": ["@esbuild/freebsd-arm64@0.18.20", "", { "os": "freebsd", "cpu": "arm64" }, "sha512-yqDQHy4QHevpMAaxhhIwYPMv1NECwOvIpGCZkECn8w2WFHXjEwrBn3CeNIYsibZ/iZEUemj++M26W3cNR5h+Tw=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/freebsd-x64": ["@esbuild/freebsd-x64@0.18.20", "", { "os": "freebsd", "cpu": "x64" }, "sha512-tgWRPPuQsd3RmBZwarGVHZQvtzfEBOreNuxEMKFcd5DaDn2PbBxfwLcj4+aenoh7ctXcbXmOQIn8HI6mCSw5MQ=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/linux-arm": ["@esbuild/linux-arm@0.18.20", "", { "os": "linux", "cpu": "arm" }, "sha512-/5bHkMWnq1EgKr1V+Ybz3s1hWXok7mDFUMQ4cG10AfW3wL02PSZi5kFpYKrptDsgb2WAJIvRcDm+qIvXf/apvg=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/linux-arm64": ["@esbuild/linux-arm64@0.18.20", "", { "os": "linux", "cpu": "arm64" }, "sha512-2YbscF+UL7SQAVIpnWvYwM+3LskyDmPhe31pE7/aoTMFKKzIc9lLbyGUpmmb8a8AixOL61sQ/mFh3jEjHYFvdA=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/linux-ia32": ["@esbuild/linux-ia32@0.18.20", "", { "os": "linux", "cpu": "ia32" }, "sha512-P4etWwq6IsReT0E1KHU40bOnzMHoH73aXp96Fs8TIT6z9Hu8G6+0SHSw9i2isWrD2nbx2qo5yUqACgdfVGx7TA=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/linux-loong64": ["@esbuild/linux-loong64@0.18.20", "", { "os": "linux", "cpu": "none" }, "sha512-nXW8nqBTrOpDLPgPY9uV+/1DjxoQ7DoB2N8eocyq8I9XuqJ7BiAMDMf9n1xZM9TgW0J8zrquIb/A7s3BJv7rjg=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/linux-mips64el": ["@esbuild/linux-mips64el@0.18.20", "", { "os": "linux", "cpu": "none" }, "sha512-d5NeaXZcHp8PzYy5VnXV3VSd2D328Zb+9dEq5HE6bw6+N86JVPExrA6O68OPwobntbNJ0pzCpUFZTo3w0GyetQ=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/linux-ppc64": ["@esbuild/linux-ppc64@0.18.20", "", { "os": "linux", "cpu": "ppc64" }, "sha512-WHPyeScRNcmANnLQkq6AfyXRFr5D6N2sKgkFo2FqguP44Nw2eyDlbTdZwd9GYk98DZG9QItIiTlFLHJHjxP3FA=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/linux-riscv64": ["@esbuild/linux-riscv64@0.18.20", "", { "os": "linux", "cpu": "none" }, "sha512-WSxo6h5ecI5XH34KC7w5veNnKkju3zBRLEQNY7mv5mtBmrP/MjNBCAlsM2u5hDBlS3NGcTQpoBvRzqBcRtpq1A=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/linux-s390x": ["@esbuild/linux-s390x@0.18.20", "", { "os": "linux", "cpu": "s390x" }, "sha512-+8231GMs3mAEth6Ja1iK0a1sQ3ohfcpzpRLH8uuc5/KVDFneH6jtAJLFGafpzpMRO6DzJ6AvXKze9LfFMrIHVQ=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/linux-x64": ["@esbuild/linux-x64@0.18.20", "", { "os": "linux", "cpu": "x64" }, "sha512-UYqiqemphJcNsFEskc73jQ7B9jgwjWrSayxawS6UVFZGWrAAtkzjxSqnoclCXxWtfwLdzU+vTpcNYhpn43uP1w=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/netbsd-x64": ["@esbuild/netbsd-x64@0.18.20", "", { "os": "none", "cpu": "x64" }, "sha512-iO1c++VP6xUBUmltHZoMtCUdPlnPGdBom6IrO4gyKPFFVBKioIImVooR5I83nTew5UOYrk3gIJhbZh8X44y06A=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/openbsd-x64": ["@esbuild/openbsd-x64@0.18.20", "", { "os": "openbsd", "cpu": "x64" }, "sha512-e5e4YSsuQfX4cxcygw/UCPIEP6wbIL+se3sxPdCiMbFLBWu0eiZOJ7WoD+ptCLrmjZBK1Wk7I6D/I3NglUGOxg=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/sunos-x64": ["@esbuild/sunos-x64@0.18.20", "", { "os": "sunos", "cpu": "x64" }, "sha512-kDbFRFp0YpTQVVrqUd5FTYmWo45zGaXe0X8E1G/LKFC0v8x0vWrhOWSLITcCn63lmZIxfOMXtCfti/RxN/0wnQ=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/win32-arm64": ["@esbuild/win32-arm64@0.18.20", "", { "os": "win32", "cpu": "arm64" }, "sha512-ddYFR6ItYgoaq4v4JmQQaAI5s7npztfV4Ag6NrhiaW0RrnOXqBkgwZLofVTlq1daVTQNhtI5oieTvkRPfZrePg=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/win32-ia32": ["@esbuild/win32-ia32@0.18.20", "", { "os": "win32", "cpu": "ia32" }, "sha512-Wv7QBi3ID/rROT08SABTS7eV4hX26sVduqDOTe1MvGMjNd3EjOz4b7zeexIR62GTIEKrfJXKL9LFxTYgkyeu7g=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/win32-x64": ["@esbuild/win32-x64@0.18.20", "", { "os": "win32", "cpu": "x64" }, "sha512-kTdfRcSiDfQca/y9QIkng02avJ+NCaQvrMejlsB3RRv5sE9rRoeBPISaZpKxHELzRxZyLvNts1P27W3wV+8geQ=="],
"node-gyp-build-optional-packages/detect-libc": ["detect-libc@2.0.4", "", {}, "sha512-3UDv+G9CsCKO1WKMGw9fwq/SWJYbI0c5Y7LU1AXYoDdbhE2AHQ6N6Nb34sG8Fj7T5APy8qXDCKuuIHd1BR0tVA=="],
}
}


@@ -1,11 +0,0 @@
import { defineConfig } from "drizzle-kit";
import { env } from "@/env";
export default defineConfig({
out: "./drizzle",
schema: "./src/db/schema/*",
dialect: "postgresql",
dbCredentials: {
url: env.DATABASE_URL,
},
});


@@ -1,80 +0,0 @@
CREATE SCHEMA "shared";
--> statement-breakpoint
CREATE TABLE "shared"."account" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"account_id" text NOT NULL,
"provider_id" text NOT NULL,
"user_id" uuid NOT NULL,
"access_token" text,
"refresh_token" text,
"id_token" text,
"access_token_expires_at" timestamp,
"refresh_token_expires_at" timestamp,
"scope" text,
"password" text,
"created_at" timestamp NOT NULL,
"updated_at" timestamp NOT NULL
);
--> statement-breakpoint
CREATE TABLE "shared"."apikey" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"name" text,
"start" text,
"prefix" text,
"key" text NOT NULL,
"user_id" uuid NOT NULL,
"refill_interval" integer,
"refill_amount" integer,
"last_refill_at" timestamp,
"enabled" boolean DEFAULT true,
"rate_limit_enabled" boolean DEFAULT true,
"rate_limit_time_window" integer DEFAULT 86400000,
"rate_limit_max" integer DEFAULT 10,
"request_count" integer,
"remaining" integer,
"last_request" timestamp,
"expires_at" timestamp,
"created_at" timestamp NOT NULL,
"updated_at" timestamp NOT NULL,
"permissions" text,
"metadata" text
);
--> statement-breakpoint
CREATE TABLE "shared"."session" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"expires_at" timestamp NOT NULL,
"token" text NOT NULL,
"created_at" timestamp NOT NULL,
"updated_at" timestamp NOT NULL,
"ip_address" text,
"user_agent" text,
"user_id" uuid NOT NULL,
CONSTRAINT "session_token_unique" UNIQUE("token")
);
--> statement-breakpoint
CREATE TABLE "shared"."user" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"name" text NOT NULL,
"email" text NOT NULL,
"email_verified" boolean NOT NULL,
"image" text,
"created_at" timestamp NOT NULL,
"updated_at" timestamp NOT NULL,
"username" text,
"display_username" text,
CONSTRAINT "user_email_unique" UNIQUE("email"),
CONSTRAINT "user_username_unique" UNIQUE("username")
);
--> statement-breakpoint
CREATE TABLE "shared"."verification" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"identifier" text NOT NULL,
"value" text NOT NULL,
"expires_at" timestamp NOT NULL,
"created_at" timestamp,
"updated_at" timestamp
);
--> statement-breakpoint
ALTER TABLE "shared"."account" ADD CONSTRAINT "account_user_id_user_id_fk" FOREIGN KEY ("user_id") REFERENCES "shared"."user"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "shared"."apikey" ADD CONSTRAINT "apikey_user_id_user_id_fk" FOREIGN KEY ("user_id") REFERENCES "shared"."user"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "shared"."session" ADD CONSTRAINT "session_user_id_user_id_fk" FOREIGN KEY ("user_id") REFERENCES "shared"."user"("id") ON DELETE cascade ON UPDATE no action;


@@ -1,86 +0,0 @@
CREATE SCHEMA "habit_tracker";
--> statement-breakpoint
CREATE SCHEMA "intake_tracker";
--> statement-breakpoint
CREATE TABLE "habit_tracker"."habit" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"user_id" uuid NOT NULL,
"name" varchar(255) NOT NULL,
"description" text,
"frequency_type" varchar(20) NOT NULL,
"target_count" integer DEFAULT 1 NOT NULL,
"interval_days" integer,
"active" boolean DEFAULT true NOT NULL,
"updated_at" timestamp DEFAULT now() NOT NULL,
"created_at" timestamp DEFAULT now() NOT NULL,
"deleted_at" timestamp,
CONSTRAINT "freqency_type_check" CHECK ("habit_tracker"."habit"."frequency_type" IN ('daily', 'interval', 'multi_daily')),
CONSTRAINT "target_count_check" CHECK ("habit_tracker"."habit"."target_count" > 0),
CONSTRAINT "interval_days_check" CHECK (("habit_tracker"."habit"."frequency_type" = 'interval' AND "habit_tracker"."habit"."interval_days" IS NOT NULL AND "habit_tracker"."habit"."interval_days" > 0) OR ("habit_tracker"."habit"."frequency_type" != 'interval' AND "habit_tracker"."habit"."interval_days" IS NULL))
);
--> statement-breakpoint
CREATE TABLE "habit_tracker"."habit_completion" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"habit_id" uuid NOT NULL,
"notes" text,
"completed_at" timestamp DEFAULT now() NOT NULL,
"updated_at" timestamp DEFAULT now() NOT NULL,
"created_at" timestamp DEFAULT now() NOT NULL,
"deleted_at" timestamp
);
--> statement-breakpoint
CREATE TABLE "intake_tracker"."daily_summary" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"intake_metric_id" uuid NOT NULL,
"date" date NOT NULL,
"total_value" numeric(10, 2) NOT NULL,
"entry_count" integer NOT NULL,
"first_entry_at" timestamp,
"last_entry_at" timestamp,
"updated_at" timestamp DEFAULT now() NOT NULL,
"created_at" timestamp DEFAULT now() NOT NULL,
"deleted_at" timestamp,
CONSTRAINT "daily_summary_intake_metric_id_date_unique" UNIQUE("intake_metric_id","date"),
CONSTRAINT "positive_total_check" CHECK ("intake_tracker"."daily_summary"."total_value" > 0),
CONSTRAINT "positive_count_check" CHECK ("intake_tracker"."daily_summary"."entry_count" > 0)
);
--> statement-breakpoint
CREATE TABLE "intake_tracker"."intake_metric" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"user_id" uuid NOT NULL,
"metric_type" varchar(50) NOT NULL,
"unit" varchar(20) NOT NULL,
"display_name" varchar(100) NOT NULL,
"target_value" numeric(10, 2),
"min_value" numeric(10, 2),
"max_value" numeric(10, 2),
"is_cumulative" boolean DEFAULT true NOT NULL,
"active" boolean DEFAULT true NOT NULL,
"updated_at" timestamp DEFAULT now() NOT NULL,
"created_at" timestamp DEFAULT now() NOT NULL,
"deleted_at" timestamp,
CONSTRAINT "intake_metric_user_id_metric_type_unique" UNIQUE("user_id","metric_type"),
CONSTRAINT "positive_target_check" CHECK ("intake_tracker"."intake_metric"."target_value" IS NULL OR "intake_tracker"."intake_metric"."target_value" > 0),
CONSTRAINT "positive_min_check" CHECK ("intake_tracker"."intake_metric"."min_value" IS NULL OR "intake_tracker"."intake_metric"."min_value" >= 0),
CONSTRAINT "positive_max_check" CHECK ("intake_tracker"."intake_metric"."max_value" IS NULL OR "intake_tracker"."intake_metric"."max_value" >= 0),
CONSTRAINT "min_max_check" CHECK ("intake_tracker"."intake_metric"."min_value" IS NULL OR "intake_tracker"."intake_metric"."max_value" IS NULL OR "intake_tracker"."intake_metric"."min_value" <= "intake_tracker"."intake_metric"."max_value")
);
--> statement-breakpoint
CREATE TABLE "intake_tracker"."intake_record" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"intake_metric_id" uuid NOT NULL,
"value" numeric(10, 2),
"recorded_at" timestamp DEFAULT now() NOT NULL,
"notes" text,
"updated_at" timestamp DEFAULT now() NOT NULL,
"created_at" timestamp DEFAULT now() NOT NULL,
"deleted_at" timestamp,
CONSTRAINT "positive_value_check" CHECK ("intake_tracker"."intake_record"."value" > 0),
CONSTRAINT "recorded_at_not_future_check" CHECK ("intake_tracker"."intake_record"."recorded_at" <= NOW())
);
--> statement-breakpoint
ALTER TABLE "habit_tracker"."habit" ADD CONSTRAINT "habit_user_id_user_id_fk" FOREIGN KEY ("user_id") REFERENCES "shared"."user"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "habit_tracker"."habit_completion" ADD CONSTRAINT "habit_completion_habit_id_habit_id_fk" FOREIGN KEY ("habit_id") REFERENCES "habit_tracker"."habit"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "intake_tracker"."daily_summary" ADD CONSTRAINT "daily_summary_intake_metric_id_intake_metric_id_fk" FOREIGN KEY ("intake_metric_id") REFERENCES "intake_tracker"."intake_metric"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "intake_tracker"."intake_metric" ADD CONSTRAINT "intake_metric_user_id_user_id_fk" FOREIGN KEY ("user_id") REFERENCES "shared"."user"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "intake_tracker"."intake_record" ADD CONSTRAINT "intake_record_intake_metric_id_intake_metric_id_fk" FOREIGN KEY ("intake_metric_id") REFERENCES "intake_tracker"."intake_metric"("id") ON DELETE cascade ON UPDATE no action;


@@ -1,505 +0,0 @@
{
"id": "6f6d04f6-92de-4c28-8736-75fafbfa3aef",
"prevId": "00000000-0000-0000-0000-000000000000",
"version": "7",
"dialect": "postgresql",
"tables": {
"shared.account": {
"name": "account",
"schema": "shared",
"columns": {
"id": {
"name": "id",
"type": "uuid",
"primaryKey": true,
"notNull": true,
"default": "gen_random_uuid()"
},
"account_id": {
"name": "account_id",
"type": "text",
"primaryKey": false,
"notNull": true
},
"provider_id": {
"name": "provider_id",
"type": "text",
"primaryKey": false,
"notNull": true
},
"user_id": {
"name": "user_id",
"type": "uuid",
"primaryKey": false,
"notNull": true
},
"access_token": {
"name": "access_token",
"type": "text",
"primaryKey": false,
"notNull": false
},
"refresh_token": {
"name": "refresh_token",
"type": "text",
"primaryKey": false,
"notNull": false
},
"id_token": {
"name": "id_token",
"type": "text",
"primaryKey": false,
"notNull": false
},
"access_token_expires_at": {
"name": "access_token_expires_at",
"type": "timestamp",
"primaryKey": false,
"notNull": false
},
"refresh_token_expires_at": {
"name": "refresh_token_expires_at",
"type": "timestamp",
"primaryKey": false,
"notNull": false
},
"scope": {
"name": "scope",
"type": "text",
"primaryKey": false,
"notNull": false
},
"password": {
"name": "password",
"type": "text",
"primaryKey": false,
"notNull": false
},
"created_at": {
"name": "created_at",
"type": "timestamp",
"primaryKey": false,
"notNull": true
},
"updated_at": {
"name": "updated_at",
"type": "timestamp",
"primaryKey": false,
"notNull": true
}
},
"indexes": {},
"foreignKeys": {
"account_user_id_user_id_fk": {
"name": "account_user_id_user_id_fk",
"tableFrom": "account",
"tableTo": "user",
"schemaTo": "shared",
"columnsFrom": [
"user_id"
],
"columnsTo": [
"id"
],
"onDelete": "cascade",
"onUpdate": "no action"
}
},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"policies": {},
"checkConstraints": {},
"isRLSEnabled": false
},
"shared.apikey": {
"name": "apikey",
"schema": "shared",
"columns": {
"id": {
"name": "id",
"type": "uuid",
"primaryKey": true,
"notNull": true,
"default": "gen_random_uuid()"
},
"name": {
"name": "name",
"type": "text",
"primaryKey": false,
"notNull": false
},
"start": {
"name": "start",
"type": "text",
"primaryKey": false,
"notNull": false
},
"prefix": {
"name": "prefix",
"type": "text",
"primaryKey": false,
"notNull": false
},
"key": {
"name": "key",
"type": "text",
"primaryKey": false,
"notNull": true
},
"user_id": {
"name": "user_id",
"type": "uuid",
"primaryKey": false,
"notNull": true
},
"refill_interval": {
"name": "refill_interval",
"type": "integer",
"primaryKey": false,
"notNull": false
},
"refill_amount": {
"name": "refill_amount",
"type": "integer",
"primaryKey": false,
"notNull": false
},
"last_refill_at": {
"name": "last_refill_at",
"type": "timestamp",
"primaryKey": false,
"notNull": false
},
"enabled": {
"name": "enabled",
"type": "boolean",
"primaryKey": false,
"notNull": false,
"default": true
},
"rate_limit_enabled": {
"name": "rate_limit_enabled",
"type": "boolean",
"primaryKey": false,
"notNull": false,
"default": true
},
"rate_limit_time_window": {
"name": "rate_limit_time_window",
"type": "integer",
"primaryKey": false,
"notNull": false,
"default": 86400000
},
"rate_limit_max": {
"name": "rate_limit_max",
"type": "integer",
"primaryKey": false,
"notNull": false,
"default": 10
},
"request_count": {
"name": "request_count",
"type": "integer",
"primaryKey": false,
"notNull": false
},
"remaining": {
"name": "remaining",
"type": "integer",
"primaryKey": false,
"notNull": false
},
"last_request": {
"name": "last_request",
"type": "timestamp",
"primaryKey": false,
"notNull": false
},
"expires_at": {
"name": "expires_at",
"type": "timestamp",
"primaryKey": false,
"notNull": false
},
"created_at": {
"name": "created_at",
"type": "timestamp",
"primaryKey": false,
"notNull": true
},
"updated_at": {
"name": "updated_at",
"type": "timestamp",
"primaryKey": false,
"notNull": true
},
"permissions": {
"name": "permissions",
"type": "text",
"primaryKey": false,
"notNull": false
},
"metadata": {
"name": "metadata",
"type": "text",
"primaryKey": false,
"notNull": false
}
},
"indexes": {},
"foreignKeys": {
"apikey_user_id_user_id_fk": {
"name": "apikey_user_id_user_id_fk",
"tableFrom": "apikey",
"tableTo": "user",
"schemaTo": "shared",
"columnsFrom": [
"user_id"
],
"columnsTo": [
"id"
],
"onDelete": "cascade",
"onUpdate": "no action"
}
},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"policies": {},
"checkConstraints": {},
"isRLSEnabled": false
},
"shared.session": {
"name": "session",
"schema": "shared",
"columns": {
"id": {
"name": "id",
"type": "uuid",
"primaryKey": true,
"notNull": true,
"default": "gen_random_uuid()"
},
"expires_at": {
"name": "expires_at",
"type": "timestamp",
"primaryKey": false,
"notNull": true
},
"token": {
"name": "token",
"type": "text",
"primaryKey": false,
"notNull": true
},
"created_at": {
"name": "created_at",
"type": "timestamp",
"primaryKey": false,
"notNull": true
},
"updated_at": {
"name": "updated_at",
"type": "timestamp",
"primaryKey": false,
"notNull": true
},
"ip_address": {
"name": "ip_address",
"type": "text",
"primaryKey": false,
"notNull": false
},
"user_agent": {
"name": "user_agent",
"type": "text",
"primaryKey": false,
"notNull": false
},
"user_id": {
"name": "user_id",
"type": "uuid",
"primaryKey": false,
"notNull": true
}
},
"indexes": {},
"foreignKeys": {
"session_user_id_user_id_fk": {
"name": "session_user_id_user_id_fk",
"tableFrom": "session",
"tableTo": "user",
"schemaTo": "shared",
"columnsFrom": [
"user_id"
],
"columnsTo": [
"id"
],
"onDelete": "cascade",
"onUpdate": "no action"
}
},
"compositePrimaryKeys": {},
"uniqueConstraints": {
"session_token_unique": {
"name": "session_token_unique",
"nullsNotDistinct": false,
"columns": [
"token"
]
}
},
"policies": {},
"checkConstraints": {},
"isRLSEnabled": false
},
"shared.user": {
"name": "user",
"schema": "shared",
"columns": {
"id": {
"name": "id",
"type": "uuid",
"primaryKey": true,
"notNull": true,
"default": "gen_random_uuid()"
},
"name": {
"name": "name",
"type": "text",
"primaryKey": false,
"notNull": true
},
"email": {
"name": "email",
"type": "text",
"primaryKey": false,
"notNull": true
},
"email_verified": {
"name": "email_verified",
"type": "boolean",
"primaryKey": false,
"notNull": true
},
"image": {
"name": "image",
"type": "text",
"primaryKey": false,
"notNull": false
},
"created_at": {
"name": "created_at",
"type": "timestamp",
"primaryKey": false,
"notNull": true
},
"updated_at": {
"name": "updated_at",
"type": "timestamp",
"primaryKey": false,
"notNull": true
},
"username": {
"name": "username",
"type": "text",
"primaryKey": false,
"notNull": false
},
"display_username": {
"name": "display_username",
"type": "text",
"primaryKey": false,
"notNull": false
}
},
"indexes": {},
"foreignKeys": {},
"compositePrimaryKeys": {},
"uniqueConstraints": {
"user_email_unique": {
"name": "user_email_unique",
"nullsNotDistinct": false,
"columns": [
"email"
]
},
"user_username_unique": {
"name": "user_username_unique",
"nullsNotDistinct": false,
"columns": [
"username"
]
}
},
"policies": {},
"checkConstraints": {},
"isRLSEnabled": false
},
"shared.verification": {
"name": "verification",
"schema": "shared",
"columns": {
"id": {
"name": "id",
"type": "uuid",
"primaryKey": true,
"notNull": true,
"default": "gen_random_uuid()"
},
"identifier": {
"name": "identifier",
"type": "text",
"primaryKey": false,
"notNull": true
},
"value": {
"name": "value",
"type": "text",
"primaryKey": false,
"notNull": true
},
"expires_at": {
"name": "expires_at",
"type": "timestamp",
"primaryKey": false,
"notNull": true
},
"created_at": {
"name": "created_at",
"type": "timestamp",
"primaryKey": false,
"notNull": false
},
"updated_at": {
"name": "updated_at",
"type": "timestamp",
"primaryKey": false,
"notNull": false
}
},
"indexes": {},
"foreignKeys": {},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"policies": {},
"checkConstraints": {},
"isRLSEnabled": false
}
},
"enums": {},
"schemas": {
"shared": "shared"
},
"sequences": {},
"roles": {},
"policies": {},
"views": {},
"_meta": {
"columns": {},
"schemas": {},
"tables": {}
}
}

File diff suppressed because it is too large.


@@ -1,20 +0,0 @@
{
"version": "7",
"dialect": "postgresql",
"entries": [
{
"idx": 0,
"version": "7",
"when": 1752443645228,
"tag": "0000_hard_violations",
"breakpoints": true
},
{
"idx": 1,
"version": "7",
"when": 1752445599098,
"tag": "0001_stormy_gertrude_yorkes",
"breakpoints": true
}
]
}


@@ -1,20 +1,18 @@
{
"name": "api",
"module": "index.ts",
"type": "module",
"scripts": {
"dev": "bun run --hot src/index.ts"
"private": true,
"devDependencies": {
"@types/bun": "latest"
},
"peerDependencies": {
"typescript": "^5"
},
"dependencies": {
"@t3-oss/env-core": "^0.13.8",
"better-auth": "^1.2.12",
"drizzle-orm": "^0.44.2",
"hono": "^4.8.4",
"zod": "^4.0.1"
},
"devDependencies": {
"@biomejs/biome": "2.1.1",
"@types/bun": "latest",
"drizzle-kit": "^0.31.4",
"typescript": "^5.8.3"
"@effect/platform": "^0.90.0",
"@effect/platform-bun": "^0.77.0",
"@electric-sql/pglite": "^0.3.7",
"effect": "^3.17.6"
}
}


@@ -1,5 +0,0 @@
import { drizzle } from "drizzle-orm/bun-sql";
import { env } from "@/env";
import * as authSchema from "./schema/auth";
export const db = drizzle(env.DATABASE_URL, { schema: { ...authSchema } });

View File

@ -1,95 +0,0 @@
import {
boolean,
integer,
pgSchema,
text,
timestamp,
uuid,
} from "drizzle-orm/pg-core";
import { idPrimaryKey } from "./helpers";
export const shared = pgSchema("shared");
export const user = shared.table("user", {
id: idPrimaryKey,
name: text("name").notNull(),
email: text("email").notNull().unique(),
emailVerified: boolean("email_verified")
.$defaultFn(() => false)
.notNull(),
image: text("image"),
createdAt: timestamp("created_at")
.$defaultFn(() => new Date())
.notNull(),
updatedAt: timestamp("updated_at")
.$defaultFn(() => new Date())
.notNull(),
username: text("username").unique(),
displayUsername: text("display_username"),
});
export const session = shared.table("session", {
id: idPrimaryKey,
expiresAt: timestamp("expires_at").notNull(),
token: text("token").notNull().unique(),
createdAt: timestamp("created_at").notNull(),
updatedAt: timestamp("updated_at").notNull(),
ipAddress: text("ip_address"),
userAgent: text("user_agent"),
userId: uuid("user_id")
.notNull()
.references(() => user.id, { onDelete: "cascade" }),
});
export const account = shared.table("account", {
id: idPrimaryKey,
accountId: text("account_id").notNull(),
providerId: text("provider_id").notNull(),
userId: uuid("user_id")
.notNull()
.references(() => user.id, { onDelete: "cascade" }),
accessToken: text("access_token"),
refreshToken: text("refresh_token"),
idToken: text("id_token"),
accessTokenExpiresAt: timestamp("access_token_expires_at"),
refreshTokenExpiresAt: timestamp("refresh_token_expires_at"),
scope: text("scope"),
password: text("password"),
createdAt: timestamp("created_at").notNull(),
updatedAt: timestamp("updated_at").notNull(),
});
export const verification = shared.table("verification", {
id: idPrimaryKey,
identifier: text("identifier").notNull(),
value: text("value").notNull(),
expiresAt: timestamp("expires_at").notNull(),
createdAt: timestamp("created_at").$defaultFn(() => new Date()),
updatedAt: timestamp("updated_at").$defaultFn(() => new Date()),
});
export const apikey = shared.table("apikey", {
id: idPrimaryKey,
name: text("name"),
start: text("start"),
prefix: text("prefix"),
key: text("key").notNull(),
userId: uuid("user_id")
.notNull()
.references(() => user.id, { onDelete: "cascade" }),
refillInterval: integer("refill_interval"),
refillAmount: integer("refill_amount"),
lastRefillAt: timestamp("last_refill_at"),
enabled: boolean("enabled").default(true),
rateLimitEnabled: boolean("rate_limit_enabled").default(true),
rateLimitTimeWindow: integer("rate_limit_time_window").default(86400000),
rateLimitMax: integer("rate_limit_max").default(10),
requestCount: integer("request_count"),
remaining: integer("remaining"),
lastRequest: timestamp("last_request"),
expiresAt: timestamp("expires_at"),
createdAt: timestamp("created_at").notNull(),
updatedAt: timestamp("updated_at").notNull(),
permissions: text("permissions"),
metadata: text("metadata"),
});

View File

@ -1,53 +0,0 @@
import { sql } from "drizzle-orm";
import {
boolean,
check,
integer,
pgSchema,
text,
timestamp,
uuid,
varchar,
} from "drizzle-orm/pg-core";
import { user } from "./auth";
import { idPrimaryKey, timestampSchema } from "./helpers";
export const habitTracker = pgSchema("habit_tracker");
export const habit = habitTracker.table(
"habit",
{
id: idPrimaryKey,
userId: uuid("user_id")
.notNull()
.references(() => user.id, { onDelete: "cascade" }),
name: varchar("name", { length: 255 }).notNull(),
description: text("description"),
frequencyType: varchar("frequency_type", { length: 20 }).notNull(),
targetCount: integer("target_count").notNull().default(1),
intervalDays: integer("interval_days"),
active: boolean("active").notNull().default(true),
...timestampSchema,
},
(t) => [
check(
"freqency_type_check",
sql`${t.frequencyType} IN ('daily', 'interval', 'multi_daily')`,
),
check("target_count_check", sql`${t.targetCount} > 0`),
check(
"interval_days_check",
sql`(${t.frequencyType} = 'interval' AND ${t.intervalDays} IS NOT NULL AND ${t.intervalDays} > 0) OR (${t.frequencyType} != 'interval' AND ${t.intervalDays} IS NULL)`,
),
],
);
export const habitCompletion = habitTracker.table("habit_completion", {
id: uuid("id").primaryKey().default(sql`gen_random_uuid()`),
habitId: uuid("habit_id")
.notNull()
.references(() => habit.id, { onDelete: "cascade" }),
notes: text("notes"),
completed_at: timestamp("completed_at").defaultNow().notNull(),
...timestampSchema,
});

View File

@ -1,15 +0,0 @@
import { sql } from "drizzle-orm";
import { timestamp, uuid } from "drizzle-orm/pg-core";
export const timestampSchema = {
updated_at: timestamp()
.defaultNow()
.$onUpdate(() => new Date())
.notNull(),
created_at: timestamp().defaultNow().notNull(),
deleted_at: timestamp(),
};
export const idPrimaryKey = uuid("id")
.primaryKey()
.default(sql`gen_random_uuid()`);

View File

@ -1,96 +0,0 @@
import { sql } from "drizzle-orm";
import {
boolean,
check,
date,
decimal,
integer,
pgSchema,
text,
timestamp,
unique,
uuid,
varchar,
} from "drizzle-orm/pg-core";
import { user } from "./auth";
import { idPrimaryKey, timestampSchema } from "./helpers";
export const intakeTracker = pgSchema("intake_tracker");
export const intakeMetric = intakeTracker.table(
"intake_metric",
{
id: idPrimaryKey,
userId: uuid("user_id")
.notNull()
.references(() => user.id, { onDelete: "cascade" }),
metricType: varchar("metric_type", { length: 50 }).notNull(),
unit: varchar("unit", { length: 20 }).notNull(),
displayName: varchar("display_name", { length: 100 }).notNull(),
targetValue: decimal("target_value", { precision: 10, scale: 2 }),
minValue: decimal("min_value", { precision: 10, scale: 2 }),
maxValue: decimal("max_value", { precision: 10, scale: 2 }),
isCumulative: boolean("is_cumulative").notNull().default(true),
active: boolean("active").notNull().default(true),
...timestampSchema,
},
(t) => [
unique().on(t.userId, t.metricType),
check(
"positive_target_check",
sql`${t.targetValue} IS NULL OR ${t.targetValue} > 0`,
),
check(
"positive_min_check",
sql`${t.minValue} IS NULL OR ${t.minValue} >= 0`,
),
check(
"positive_max_check",
sql`${t.maxValue} IS NULL OR ${t.maxValue} >= 0`,
),
check(
"min_max_check",
sql`${t.minValue} IS NULL OR ${t.maxValue} IS NULL OR ${t.minValue} <= ${t.maxValue}`,
),
],
);
export const intakeRecord = intakeTracker.table(
"intake_record",
{
id: idPrimaryKey,
intakeMetricId: uuid("intake_metric_id")
.notNull()
.references(() => intakeMetric.id, { onDelete: "cascade" }),
value: decimal({ precision: 10, scale: 2 }),
recordedAt: timestamp("recorded_at").notNull().defaultNow(),
notes: text("notes"),
...timestampSchema,
},
(t) => [
check("positive_value_check", sql`${t.value} > 0`),
check("recorded_at_not_future_check", sql`${t.recordedAt} <= NOW()`),
],
);
export const dailySummary = intakeTracker.table(
"daily_summary",
{
id: idPrimaryKey,
intakeMetricId: uuid("intake_metric_id")
.notNull()
.references(() => intakeMetric.id, { onDelete: "cascade" }),
date: date("date").notNull(),
totalValue: decimal("total_value", { precision: 10, scale: 2 }).notNull(),
entryCount: integer("entry_count").notNull(),
firstEntryAt: timestamp("first_entry_at"),
lastEntryAt: timestamp("last_entry_at"),
...timestampSchema,
},
(t) => [
unique().on(t.intakeMetricId, t.date),
check("positive_total_check", sql`${t.totalValue} > 0`),
check("positive_count_check", sql`${t.entryCount} > 0`),
],
);

View File

@ -1,11 +0,0 @@
import { createEnv } from "@t3-oss/env-core";
import * as z from "zod";
export const env = createEnv({
server: {
DATABASE_URL: z.url(),
BETTER_AUTH_SECRET: z.string(),
BETTER_AUTH_URL: z.url(),
},
runtimeEnv: process.env,
});

View File

@ -1,10 +0,0 @@
import { describe, expect, it } from "bun:test";
import app from ".";
describe("API Health Check", () => {
it("should return 200 Response", async () => {
const req = new Request("http://localhost:3000/");
const res = await app.fetch(req);
expect(res.status).toBe(200);
});
});

View File

@ -1,14 +1,49 @@
import { Hono } from "hono";
import { logger } from "hono/logger";
import { auth } from "@/lib/auth";
import {
HttpApi,
HttpApiBuilder,
HttpApiEndpoint,
HttpApiGroup,
} from "@effect/platform";
import { BunHttpServer, BunRuntime } from "@effect/platform-bun";
import { Console, Context, Effect, Layer, Schema } from "effect";
const app = new Hono();
const MyApi = HttpApi.make("MyAPI").add(
HttpApiGroup.make("Greetings")
.add(HttpApiEndpoint.get("hello-world")`/`.addSuccess(Schema.String))
.add(HttpApiEndpoint.get("hello-failure")`/fail`.addError(Schema.String)),
);
app.use(logger());
class UsersRepository extends Context.Tag("UsersRepository")<
UsersRepository,
{
readonly findById: (id: number) => Effect.Effect<string>;
}
>() {}
app.on(["POST", "GET"], "/api/auth/**", (c) => auth.handler(c.req.raw));
app.get("/", (c) => {
return c.text("Hello Hono!");
// const repo = UsersRepository.of({ findById: (id) => Effect.succeed(`${id}`) });
const GreetingsLive = HttpApiBuilder.group(MyApi, "Greetings", (handlers) =>
handlers
.handle("hello-world", () =>
Effect.gen(function* () {
const repository = yield* UsersRepository;
yield* Console.log("<- Hello World handler invoked");
return yield* repository.findById(42);
}),
)
.handle("hello-failure", () => Effect.fail("Hello fail")),
);
const MyApiLive = HttpApiBuilder.api(MyApi).pipe(Layer.provide(GreetingsLive));
const userRepoLive = Layer.succeed(UsersRepository, {
findById: (id) => Effect.succeed(`${id}`),
});
export default app;
const ServerLive = HttpApiBuilder.serve().pipe(
Layer.provide(MyApiLive),
Layer.provide(BunHttpServer.layer({ port: 3000 })),
Layer.provide(userRepoLive),
);
Layer.launch(ServerLive).pipe(BunRuntime.runMain);

View File

@ -1,27 +0,0 @@
import { betterAuth } from "better-auth";
import { drizzleAdapter } from "better-auth/adapters/drizzle";
import { apiKey, username } from "better-auth/plugins";
import { db } from "@/db";
export const auth = betterAuth({
logger: {
level: "debug",
},
database: drizzleAdapter(db, {
provider: "pg",
}),
advanced: {
database: {
generateId: false,
},
},
plugins: [username(), apiKey()],
emailAndPassword: {
enabled: true,
},
user: {
deleteUser: {
enabled: true,
},
},
});

View File

@ -1,11 +0,0 @@
// mainly for testing
import { createAuthClient } from "better-auth/client";
import { apiKeyClient } from "better-auth/client/plugins";
import { env } from "@/env";
export const authClient = createAuthClient({
baseURL: env.BETTER_AUTH_URL,
plugins: [apiKeyClient()],
});

0
api/src/services/auth.ts Normal file
View File

View File

@ -1,307 +0,0 @@
import {
afterAll,
beforeAll,
beforeEach,
describe,
expect,
it,
} from "bun:test";
import { auth } from "@/lib/auth";
import { AuthTestHelper, buildHeaders, type TestUser } from "../utils";
describe("API Key Management", () => {
let user: TestUser;
let authHeaders: Headers;
let userId: string;
let authHelper: AuthTestHelper;
beforeAll(async () => {
// Create user and sign in once for the whole suite
authHelper = new AuthTestHelper();
const authUser = await authHelper.createAuthenticatedUser();
user = authUser.user;
authHeaders = authUser.authHeaders;
userId = authUser.userId;
});
afterAll(async () => {
// Clean up: delete user and associated data
await authHelper.cleanupUser(authHeaders, user.password);
});
describe("API Key Creation", () => {
it("creates an API key successfully", async () => {
const res = await auth.api.createApiKey({
headers: authHeaders,
body: {
name: "Test API Key",
},
});
expect(res).toBeDefined();
expect(res.id).toBeDefined();
expect(res.name).toBe("Test API Key");
expect(res.key).toBeDefined();
expect(res.userId).toBe(userId);
expect(res.enabled).toBe(true);
});
it("creates an API key with custom name", async () => {
const res = await auth.api.createApiKey({
headers: authHeaders,
body: {
name: "Custom API Key",
},
});
expect(res).toBeDefined();
expect(res.name).toBe("Custom API Key");
expect(res.key).toBeDefined();
expect(res.userId).toBe(userId);
});
it("rejects API key creation without authentication", async () => {
try {
await auth.api.createApiKey({
headers: new Headers(),
body: {
name: "Unauthorized Key",
},
});
expect(true).toBe(false); // Should not reach here
} catch (error) {
expect(error).toBeDefined();
}
});
});
describe("API Key Listing", () => {
it("lists user's API keys", async () => {
// Create multiple API keys
await auth.api.createApiKey({
headers: authHeaders,
body: { name: "Key 1" },
});
await auth.api.createApiKey({
headers: authHeaders,
body: { name: "Key 2" },
});
const res = await auth.api.listApiKeys({
headers: authHeaders,
});
expect(res).toBeDefined();
expect(Array.isArray(res)).toBe(true);
expect(res.length).toBe(4);
expect(res.some((key) => key.name === "Key 1")).toBe(true);
expect(res.some((key) => key.name === "Key 2")).toBe(true);
});
it("rejects listing without authentication", async () => {
try {
await auth.api.listApiKeys({
headers: new Headers(),
});
expect(true).toBe(false); // Should not reach here
} catch (error) {
expect(error).toBeDefined();
}
});
});
describe("API Key Deletion", () => {
let apiKeyId: string;
beforeEach(async () => {
const res = await auth.api.createApiKey({
headers: authHeaders,
body: { name: "Key to Revoke" },
});
apiKeyId = res.id;
});
it("deletes an API key successfully", async () => {
const res = await auth.api.deleteApiKey({
headers: authHeaders,
body: { keyId: apiKeyId },
});
expect(res.success).toBe(true);
// Verify the key is no longer in the list
const keys = await auth.api.listApiKeys({
headers: authHeaders,
});
expect(keys.find((key) => key.id === apiKeyId)).toBeUndefined();
});
it("rejects deleting non-existent API key", async () => {
try {
await auth.api.deleteApiKey({
headers: authHeaders,
body: { keyId: "non-existent-key-id" },
});
expect(true).toBe(false); // Should not reach here
} catch (error) {
expect(error).toBeDefined();
}
});
it("rejects deleting API key without authentication", async () => {
try {
await auth.api.deleteApiKey({
headers: new Headers(),
body: { keyId: apiKeyId },
});
expect(true).toBe(false); // Should not reach here
} catch (error) {
expect(error).toBeDefined();
}
});
it("rejects deleting another user's API key", async () => {
// Create another user
const otherUser = {
email: "other-user@example.com",
password: "password123",
name: "Other User",
username: "otheruser",
};
await auth.api.signUpEmail({
body: otherUser,
});
const { headers: otherHeaders } = await auth.api.signInEmail({
body: {
email: otherUser.email,
password: otherUser.password,
},
returnHeaders: true,
});
try {
await auth.api.deleteApiKey({
headers: buildHeaders(otherHeaders),
body: { keyId: apiKeyId },
});
expect(true).toBe(false); // Should not reach here
} catch (error) {
expect(error).toBeDefined();
}
// Clean up other user
await auth.api.deleteUser({
headers: buildHeaders(otherHeaders),
body: { password: otherUser.password },
});
});
});
describe("API Key Verification", () => {
let apiKey: string;
let apiKeyId: string;
beforeEach(async () => {
const res = await auth.api.createApiKey({
headers: authHeaders,
body: { name: "Test Auth Key" },
});
apiKey = res.key;
apiKeyId = res.id;
});
it("verifies valid API key", async () => {
const res = await auth.api.verifyApiKey({
body: { key: apiKey },
});
expect(res).toBeDefined();
expect(res.valid).toBe(true);
});
it("rejects invalid API key", async () => {
const res = await auth.api.verifyApiKey({
body: { key: "invalid-key" },
});
expect(res.valid).toBe(false);
});
it("gets API key details", async () => {
const res = await auth.api.getApiKey({
headers: authHeaders,
query: { id: apiKeyId },
});
expect(res).toBeDefined();
expect(res.id).toBe(apiKeyId);
expect(res.name).toBe("Test Auth Key");
expect(res.userId).toBe(userId);
});
it("updates API key", async () => {
const res = await auth.api.updateApiKey({
headers: authHeaders,
body: {
keyId: apiKeyId,
name: "Updated API Key Name",
},
});
expect(res).toBeDefined();
expect(res.name).toBe("Updated API Key Name");
});
it("gets session from api key", async () => {
const res = await auth.api.getSession({
headers: new Headers({ "x-api-key": apiKey }),
});
expect(res).toBeDefined();
expect(res?.session).toBeDefined();
expect(res?.session?.userId).toBe(userId);
expect(res?.user.username).toBe(user.username);
});
});
describe("API Key Management", () => {
it("manages multiple API keys correctly", async () => {
// Create multiple keys
const key1 = await auth.api.createApiKey({
headers: authHeaders,
body: { name: "Key 1" },
});
const key2 = await auth.api.createApiKey({
headers: authHeaders,
body: { name: "Key 2" },
});
// List keys
const keys = await auth.api.listApiKeys({
headers: authHeaders,
});
expect(keys.length).toBeGreaterThanOrEqual(2);
expect(keys.find((k) => k.id === key1.id)).toBeDefined();
expect(keys.find((k) => k.id === key2.id)).toBeDefined();
// Delete one key
await auth.api.deleteApiKey({
headers: authHeaders,
body: { keyId: key1.id },
});
// Verify it's removed
const updatedKeys = await auth.api.listApiKeys({
headers: authHeaders,
});
expect(updatedKeys.find((k) => k.id === key1.id)).toBeUndefined();
expect(updatedKeys.find((k) => k.id === key2.id)).toBeDefined();
});
});
});

View File

@ -1,121 +0,0 @@
import { beforeEach, describe, expect, it } from "bun:test";
import { auth } from "@/lib/auth";
import { AuthTestHelper } from "../utils";
describe("Authentication Integration Flow", () => {
let authHelper: AuthTestHelper;
beforeEach(() => {
authHelper = new AuthTestHelper();
});
it("completes full auth flow: signup -> signin -> create API key -> use API key -> cleanup", async () => {
// 1. Create and authenticate user
const { user, userId, authHeaders } =
await authHelper.createAuthenticatedUser({
email: `integration-${Date.now()}@example.com`,
name: "Integration Test User",
username: `integrationuser${Date.now()}`,
});
// 2. Create API key using session auth
const apiKeyRes = await authHelper.createApiKey(authHeaders, {
name: "Integration Test Key",
});
expect(apiKeyRes.id).toBeDefined();
expect(apiKeyRes.key).toBeDefined();
expect(apiKeyRes.userId).toBe(userId);
expect(apiKeyRes.name).toBe("Integration Test Key");
// 3. Verify API key with verification endpoint
const verifyRes = await auth.api.verifyApiKey({
body: { key: apiKeyRes.key },
});
expect(verifyRes.valid).toBe(true);
// 4. List API keys to verify it appears
const keysRes = await auth.api.listApiKeys({
headers: authHeaders,
});
expect(keysRes).toBeDefined();
expect(Array.isArray(keysRes)).toBe(true);
expect(keysRes.length).toBe(1);
expect(keysRes?.[0]?.id).toBe(apiKeyRes.id);
// 5. Delete API key
const deleteRes = await auth.api.deleteApiKey({
headers: authHeaders,
body: { keyId: apiKeyRes.id },
});
expect(deleteRes.success).toBe(true);
// 6. Verify API key no longer works
const invalidVerifyRes = await auth.api.verifyApiKey({
body: { key: apiKeyRes.key },
});
expect(invalidVerifyRes.valid).toBe(false);
// 7. Cleanup user
await authHelper.cleanupUser(authHeaders, user.password);
});
it("handles multiple concurrent API keys per user", async () => {
const { user, authHeaders } = await authHelper.createAuthenticatedUser();
// Create multiple API keys
const [key1, key2, key3] = await Promise.all([
authHelper.createApiKey(authHeaders, { name: "Key 1" }),
authHelper.createApiKey(authHeaders, { name: "Key 2" }),
authHelper.createApiKey(authHeaders, { name: "Key 3" }),
]);
// Verify all keys work
const [verify1, verify2, verify3] = await Promise.all([
auth.api.verifyApiKey({ body: { key: key1.key } }),
auth.api.verifyApiKey({ body: { key: key2.key } }),
auth.api.verifyApiKey({ body: { key: key3.key } }),
]);
expect(verify1.valid).toBe(true);
expect(verify2.valid).toBe(true);
expect(verify3.valid).toBe(true);
// List all keys
const allKeys = await auth.api.listApiKeys({ headers: authHeaders });
expect(allKeys.length).toBe(3);
// Delete one key
await auth.api.deleteApiKey({
headers: authHeaders,
body: { keyId: key2.id },
});
// Verify only 2 keys remain
const remainingKeys = await auth.api.listApiKeys({ headers: authHeaders });
expect(remainingKeys.length).toBe(2);
expect(remainingKeys.find((k) => k.id === key2.id)).toBeUndefined();
// Verify deleted key doesn't work
const deletedVerify = await auth.api.verifyApiKey({
body: { key: key2.key },
});
expect(deletedVerify.valid).toBe(false);
// Verify other keys still work
const [stillWorking1, stillWorking3] = await Promise.all([
auth.api.verifyApiKey({ body: { key: key1.key } }),
auth.api.verifyApiKey({ body: { key: key3.key } }),
]);
expect(stillWorking1.valid).toBe(true);
expect(stillWorking3.valid).toBe(true);
// Cleanup
await authHelper.cleanupUser(authHeaders, user.password);
});
});

View File

@ -1,125 +0,0 @@
import { describe, expect, it } from "bun:test";
import { db } from "@/db";
import { auth } from "@/lib/auth";
import { buildHeaders } from "../utils";
describe("User Authentication", () => {
const user = {
email: "test-user@example.com",
password: "password123",
name: "Test User",
username: "testuser",
};
let authHeaders: Headers;
describe("User Registration", () => {
it("creates a new user successfully", async () => {
const res = await auth.api.signUpEmail({
body: {
...user,
},
});
expect(res.user.email).toBe(user.email);
expect(res.user.name).toBe(user.name);
// Note: Username field may not be returned in the response even if set
const foundUser = await db.query.user.findFirst({
where: (users, { eq }) => eq(users.email, user.email),
});
expect(foundUser).not.toBeNull();
expect(foundUser?.email).toBe(user.email);
});
it("prevents duplicate user registration", async () => {
try {
await auth.api.signUpEmail({
body: {
...user,
},
});
expect(true).toBe(false); // Should not reach here
} catch (error) {
expect(error).toBeDefined();
}
});
});
describe("User Login", () => {
it("logs in user with correct credentials", async () => {
const { headers: signInHeaders, response: signInRes } =
await auth.api.signInEmail({
body: {
email: user.email,
password: user.password,
},
returnHeaders: true,
});
expect(signInRes.token).not.toBeNull();
expect(signInRes.user.name).toBe(user.name);
expect(signInRes.user.email).toBe(user.email);
authHeaders = buildHeaders(signInHeaders);
});
it("rejects login with incorrect password", async () => {
try {
await auth.api.signInEmail({
body: {
email: user.email,
password: "wrongpassword",
},
});
expect(true).toBe(false); // Should not reach here
} catch (error) {
expect(error).toBeDefined();
}
});
it("rejects login with non-existent email", async () => {
try {
await auth.api.signInEmail({
body: {
email: "nonexistent@example.com",
password: user.password,
},
});
expect(true).toBe(false); // Should not reach here
} catch (error) {
expect(error).toBeDefined();
}
});
});
describe("Session Management", () => {
it("retrieves valid session with auth headers", async () => {
const res = await auth.api.getSession({ headers: authHeaders });
expect(res?.user.username).toBe(user.username);
expect(res?.user.email).toBe(user.email);
});
it("returns null for session without auth headers", async () => {
const res = await auth.api.getSession({ headers: new Headers() });
expect(res?.user).toBeUndefined();
});
});
describe("User Deletion", () => {
it("deletes user with correct password", async () => {
const deleteRes = await auth.api.deleteUser({
headers: authHeaders,
body: {
password: user.password,
},
});
expect(deleteRes.success).toBe(true);
// Verify user is actually deleted
const foundUser = await db.query.user.findFirst({
where: (users, { eq }) => eq(users.email, user.email),
});
expect(foundUser).toBeUndefined();
});
});
});

View File

@ -1,17 +0,0 @@
import { afterAll, beforeAll } from "bun:test";
import { migrate } from "drizzle-orm/bun-sql/migrator";
import { db } from "@/db";
beforeAll(async () => {
await migrate(db, { migrationsFolder: "./drizzle" });
});
afterAll(async () => {
// Clear all test data from database - note: order matters due to foreign keys
const { session, apikey, account, user } = await import("@/db/schema/auth");
await db.delete(session).execute();
await db.delete(apikey).execute();
await db.delete(account).execute();
await db.delete(user).execute();
});

View File

@ -1,101 +0,0 @@
import { db } from "@/db";
import { auth } from "@/lib/auth";
export function buildHeaders(signInHeaders: Headers) {
const headers = new Headers();
for (const cookie of signInHeaders.getSetCookie() ?? []) {
headers.append("cookie", cookie);
}
return headers;
}
export interface TestUser {
email: string;
password: string;
name: string;
username: string;
}
export class AuthTestHelper {
createTestUser(userOverrides: Partial<TestUser> = {}): TestUser {
const defaultUser: TestUser = {
email: `test-${Date.now()}@example.com`,
password: "password123",
name: "Test User",
username: `testuser${Date.now()}`,
};
return { ...defaultUser, ...userOverrides };
}
async signUpUser(user: TestUser) {
return await auth.api.signUpEmail({
body: user,
});
}
async signInUser(user: Pick<TestUser, "email" | "password">) {
return await auth.api.signInEmail({
body: {
email: user.email,
password: user.password,
},
returnHeaders: true,
});
}
async createAuthenticatedUser(userOverrides: Partial<TestUser> = {}) {
const user = this.createTestUser(userOverrides);
const signUpRes = await this.signUpUser(user);
const { headers: signInHeaders } = await this.signInUser(user);
const authHeaders = buildHeaders(signInHeaders);
return {
user,
userId: signUpRes.user.id,
authHeaders,
};
}
async cleanupUser(authHeaders: Headers, password: string) {
try {
await auth.api.deleteUser({
headers: authHeaders,
body: { password },
});
} catch (_) {
// User might already be deleted, ignore error
}
}
async createApiKey(
authHeaders: Headers,
keyOptions: Partial<
Parameters<typeof auth.api.createApiKey>["0"]["body"]
> = {},
) {
return await auth.api.createApiKey({
headers: authHeaders,
body: {
name: `Test API Key ${Date.now()}`,
...keyOptions,
},
});
}
createApiKeyHeaders(apiKey: string) {
const headers = new Headers();
headers.set("Authorization", `Bearer ${apiKey}`);
return headers;
}
async clearDatabase() {
// Clear all test data from database - note: order matters due to foreign keys
const { session, apikey, account, user } = await import("@/db/schema/auth");
await db.delete(session).execute();
await db.delete(apikey).execute();
await db.delete(account).execute();
await db.delete(user).execute();
}
}

View File

@ -1,31 +1,29 @@
{
"compilerOptions": {
"jsxImportSource": "hono/jsx",
// Environment setup & latest features
"lib": ["ESNext"],
"target": "ESNext",
"module": "ESNext",
"module": "Preserve",
"moduleDetection": "force",
"jsx": "react-jsx",
"allowJs": true,
// Bundler mode
"moduleResolution": "bundler",
"allowImportingTsExtensions": true,
"verbatimModuleSyntax": true,
"noEmit": true,
// Best practices
"strict": true,
"skipLibCheck": true,
"noFallthroughCasesInSwitch": true,
"noUncheckedIndexedAccess": true,
"noImplicitOverride": true,
// Some stricter flags (disabled by default)
"noUnusedLocals": true,
"noUnusedParameters": true,
"noPropertyAccessFromIndexSignature": true,
"baseUrl": ".",
"paths": {
"@/*": ["./src/*"]
}
"noPropertyAccessFromIndexSignature": true
}
}

View File

@ -1,41 +1,42 @@
{
"$schema": "https://biomejs.dev/schemas/2.1.1/schema.json",
"vcs": {
"enabled": false,
"clientKind": "git",
"useIgnoreFile": false
},
"files": {
"ignoreUnknown": false
},
"formatter": {
"enabled": true,
"indentStyle": "tab"
},
"linter": {
"enabled": true,
"rules": {
"recommended": true,
"correctness": {
"noUnusedImports": {
"level": "warn",
"fix": "safe"
}
}
}
},
"javascript": {
"formatter": {
"quoteStyle": "double",
"semicolons": "always"
}
},
"assist": {
"enabled": true,
"actions": {
"source": {
"organizeImports": "on"
}
}
}
"$schema": "https://biomejs.dev/schemas/2.1.2/schema.json",
"vcs": {
"enabled": false,
"clientKind": "git",
"useIgnoreFile": false
},
"files": {
"ignoreUnknown": false
},
"formatter": {
"enabled": true,
"indentStyle": "tab"
},
"linter": {
"enabled": true,
"rules": {
"recommended": true,
"correctness": {
"noUnusedImports": {
"level": "warn",
"fix": "safe",
"options": {}
}
}
}
},
"javascript": {
"formatter": {
"quoteStyle": "double",
"semicolons": "always"
}
},
"assist": {
"enabled": true,
"actions": {
"source": {
"organizeImports": "on"
}
}
}
}

15
charts/system/Chart.yaml Normal file
View File

@ -0,0 +1,15 @@
apiVersion: v2
name: system
description: A Helm chart for System
type: application
version: 0.1.0
appVersion: "1.0.0"
keywords:
- bun
- typescript
- effect-ts
- api
home: https://git.yadunut.dev/yadunut/system
maintainers:
- name: Yadunand Prem
email: git@yadunut.com

View File

@ -0,0 +1,62 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "system.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch its status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "system.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "system.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "system.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}
2. Database Configuration:
Using external PostgreSQL database
Host: {{ .Values.database.postgresql.host }}:{{ .Values.database.postgresql.port }}
Database: {{ .Values.database.postgresql.database }}
{{- if .Values.database.postgresql.existingSecret }}
Credentials from secret: {{ .Values.database.postgresql.existingSecret }}
{{- else }}
Using generated credentials from secret: {{ include "system.fullname" . }}-postgresql
{{- end }}
3. Scaling:
{{- if .Values.autoscaling.enabled }}
Horizontal Pod Autoscaler is enabled
Min replicas: {{ .Values.autoscaling.minReplicas }}
Max replicas: {{ .Values.autoscaling.maxReplicas }}
CPU target: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}%
{{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
Memory target: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}%
{{- end }}
{{- else }}
Running {{ .Values.replicaCount }} replica(s)
To enable autoscaling, set autoscaling.enabled=true
{{- end }}
4. Health Checks:
{{- if .Values.healthcheck.enabled }}
Health checks are enabled
Readiness probe: {{ .Values.healthcheck.readinessProbe.httpGet.path }}
Liveness probe: {{ .Values.healthcheck.livenessProbe.httpGet.path }}
{{- else }}
Health checks are disabled
{{- end }}
5. Security:
Running as non-root user (UID: {{ .Values.securityContext.runAsUser }})
Read-only root filesystem: {{ .Values.securityContext.readOnlyRootFilesystem }}
For more information about this chart, visit:
https://git.yadunut.dev/yadunut/system
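
The values referenced in these notes can be overridden at install time. A minimal sketch of the corresponding `values.yaml` keys — the key names come from the template references above; the defaults shown here are illustrative assumptions, not the chart's actual defaults:

```yaml
# Illustrative overrides for the values referenced in NOTES.txt.
# Key names follow the template lookups; values are assumptions.
replicaCount: 1
service:
  type: ClusterIP
  port: 3000
ingress:
  enabled: false
database:
  postgresql:
    host: postgres.example.internal
    port: 5432
    database: system
    existingSecret: ""   # empty -> use the generated credentials secret
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 3
  targetCPUUtilizationPercentage: 80
healthcheck:
  enabled: true
securityContext:
  runAsUser: 1000
  readOnlyRootFilesystem: true
```

Applied with something like `helm install system ./charts/system -f values.yaml`.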

View File

@ -0,0 +1,81 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "system.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "system.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
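As a side note, the `system.fullname` helper above can be mirrored in plain Python to show how the 63-character DNS name limit is applied. This is a sketch for illustration only (it is not part of the chart; note `rstrip` trims repeated trailing dashes, whereas Helm's `trimSuffix` trims at most one):

```python
# Sketch: mirrors the "system.fullname" template logic in plain Python
# to show how the 63-character Kubernetes name limit is enforced.
def fullname(release_name: str, chart_name: str, fullname_override: str = "",
             name_override: str = "") -> str:
    if fullname_override:
        return fullname_override[:63].rstrip("-")
    name = name_override or chart_name
    if name in release_name:
        # Release name already contains the chart name; use it as-is.
        return release_name[:63].rstrip("-")
    return f"{release_name}-{name}"[:63].rstrip("-")

print(fullname("prod", "system"))         # prod-system
print(fullname("system-prod", "system"))  # system-prod
```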
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "system.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "system.labels" -}}
helm.sh/chart: {{ include "system.chart" . }}
{{ include "system.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "system.selectorLabels" -}}
app.kubernetes.io/name: {{ include "system.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "system.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "system.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{/*
Create the image name
*/}}
{{- define "system.image" -}}
{{- $tag := .Values.image.tag | default .Chart.AppVersion }}
{{- printf "%s:%s" .Values.image.repository $tag }}
{{- end }}
{{/*
Database secret name
*/}}
{{- define "system.databaseSecretName" -}}
{{- if .Values.database.postgresql.existingSecret }}
{{- .Values.database.postgresql.existingSecret }}
{{- else }}
{{- include "system.fullname" . }}-postgresql
{{- end }}
{{- end }}

@ -0,0 +1,16 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "system.fullname" . }}-config
labels:
{{- include "system.labels" . | nindent 4 }}
data:
PORT: {{ .Values.app.port | quote }}
NODE_ENV: {{ .Values.app.env | quote }}
DATABASE_TYPE: {{ .Values.database.type | quote }}
{{- if eq .Values.database.type "postgresql" }}
POSTGRES_HOST: {{ .Values.database.postgresql.host | quote }}
POSTGRES_PORT: {{ .Values.database.postgresql.port | quote }}
POSTGRES_DB: {{ .Values.database.postgresql.database | quote }}
POSTGRES_USER: {{ .Values.database.postgresql.username | quote }}
{{- end }}

@ -0,0 +1,77 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "system.fullname" . }}
labels:
{{- include "system.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "system.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "system.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "system.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: {{ include "system.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: {{ .Values.service.targetPort }}
protocol: TCP
env:
- name: PORT
value: {{ .Values.app.port | quote }}
- name: NODE_ENV
value: {{ .Values.app.env | quote }}
{{- if eq .Values.database.type "postgresql" }}
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: {{ include "system.databaseSecretName" . }}
key: database-url
{{- end }}
{{- if .Values.healthcheck.enabled }}
livenessProbe:
{{- toYaml .Values.healthcheck.livenessProbe | nindent 12 }}
readinessProbe:
{{- toYaml .Values.healthcheck.readinessProbe | nindent 12 }}
{{- end }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- name: tmp
mountPath: /tmp
volumes:
- name: tmp
emptyDir: {}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}

@ -0,0 +1,32 @@
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "system.fullname" . }}
labels:
{{- include "system.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "system.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
{{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
{{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
{{- end }}
{{- end }}
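For intuition about what this HPA manifest enables: the Kubernetes autoscaler scales toward `ceil(currentReplicas * currentUtilization / targetUtilization)`, clamped to the configured bounds. A minimal Python sketch of that rule (for illustration; the real controller also applies stabilization windows and tolerances):

```python
import math

# Sketch of the core HPA scaling rule enabled by the template above:
# desired = ceil(current * observed utilization / target utilization),
# clamped to [minReplicas, maxReplicas].
def desired_replicas(current: int, current_util: float, target_util: float,
                     min_replicas: int, max_replicas: int) -> int:
    desired = math.ceil(current * current_util / target_util)
    return max(min_replicas, min(max_replicas, desired))

# 3 replicas at 90% CPU against a 70% target -> scale up to 4.
print(desired_replicas(3, 90, 70, 3, 20))  # 4
```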

@ -0,0 +1,59 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "system.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if and .Values.ingress.className (not (hasKey .Values.ingress.annotations "kubernetes.io/ingress.class")) }}
{{- $_ := set .Values.ingress.annotations "kubernetes.io/ingress.class" .Values.ingress.className}}
{{- end }}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
{{- include "system.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if and .Values.ingress.className (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }}
ingressClassName: {{ .Values.ingress.className }}
{{- end }}
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
{{- if and .pathType (semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion) }}
pathType: {{ .pathType }}
{{- end }}
backend:
{{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }}
service:
name: {{ $fullName }}
port:
number: {{ $svcPort }}
{{- else }}
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

@ -0,0 +1,2 @@
# PVC template removed - not needed for production PostgreSQL deployment
# PGLite is only used in test environments

@ -0,0 +1,12 @@
{{- if and (eq .Values.database.type "postgresql") (not .Values.database.postgresql.existingSecret) }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "system.fullname" . }}-postgresql
labels:
{{- include "system.labels" . | nindent 4 }}
type: Opaque
data:
postgresql-password: {{ "changeme" | b64enc | quote }}
database-url: {{ printf "postgresql://%s:changeme@%s:%d/%s" .Values.database.postgresql.username .Values.database.postgresql.host (.Values.database.postgresql.port | int) .Values.database.postgresql.database | b64enc | quote }}
{{- end }}

@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "system.fullname" . }}
labels:
{{- include "system.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: {{ .Values.service.targetPort }}
protocol: TCP
name: http
selector:
{{- include "system.selectorLabels" . | nindent 4 }}

@ -0,0 +1,12 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "system.serviceAccountName" . }}
labels:
{{- include "system.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}

@ -0,0 +1,106 @@
# Production environment overrides for system
replicaCount: 3
image:
repository: system/api
tag: "latest"
pullPolicy: Always
app:
env: production
database:
type: postgresql
postgresql:
enabled: true
host: postgres-prod.internal
port: 5432
database: system2_production
username: system2_production
existingSecret: system-prod-postgresql
persistence:
enabled: false
resources:
limits:
cpu: 2000m
memory: 2Gi
requests:
cpu: 500m
memory: 512Mi
autoscaling:
enabled: true
minReplicas: 3
maxReplicas: 20
targetCPUUtilizationPercentage: 70
targetMemoryUtilizationPercentage: 80
ingress:
enabled: true
className: "nginx"
annotations:
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/rate-limit: "100"
nginx.ingress.kubernetes.io/rate-limit-window: "1m"
hosts:
- host: api.system2.example.com
paths:
- path: /
pathType: Prefix
tls:
- secretName: system-prod-tls
hosts:
- api.system2.example.com
healthcheck:
enabled: true
readinessProbe:
initialDelaySeconds: 15
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 3
livenessProbe:
initialDelaySeconds: 120
periodSeconds: 30
timeoutSeconds: 5
failureThreshold: 3
# Production-specific security settings
podSecurityContext:
fsGroup: 2000
runAsNonRoot: true
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
# Pod disruption budget for high availability
podDisruptionBudget:
enabled: true
minAvailable: 2
# Node affinity for production workloads
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- system
topologyKey: kubernetes.io/hostname

@ -0,0 +1,64 @@
# Staging environment overrides for system
replicaCount: 2
image:
repository: system/api
tag: "staging"
pullPolicy: Always
app:
env: staging
database:
type: postgresql
postgresql:
enabled: true
host: postgres-staging.internal
port: 5432
database: system2_staging
username: system2_staging
existingSecret: system-staging-postgresql
persistence:
enabled: false
resources:
limits:
cpu: 1000m
memory: 1Gi
requests:
cpu: 200m
memory: 256Mi
autoscaling:
enabled: true
minReplicas: 2
maxReplicas: 10
targetCPUUtilizationPercentage: 70
targetMemoryUtilizationPercentage: 80
ingress:
enabled: true
className: "nginx"
annotations:
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: "letsencrypt-staging"
hosts:
- host: api-staging.system2.example.com
paths:
- path: /
pathType: Prefix
tls:
- secretName: system-staging-tls
hosts:
- api-staging.system2.example.com
healthcheck:
enabled: true
readinessProbe:
initialDelaySeconds: 10
periodSeconds: 5
livenessProbe:
initialDelaySeconds: 60
periodSeconds: 30

charts/system/values.yaml
@ -0,0 +1,125 @@
# Default values for system
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: harbor.yadunut.dev/yadunut/system-api
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podSecurityContext:
fsGroup: 2000
runAsNonRoot: true
runAsUser: 1000
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
service:
type: ClusterIP
port: 3000
targetPort: 3000
ingress:
enabled: false
className: ""
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: system.local
paths:
- path: /
pathType: Prefix
tls: []
# - secretName: system-tls
# hosts:
# - system.local
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 100m
memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
# Application configuration
app:
port: 3000
env: development
# Database configuration
database:
# Use external PostgreSQL
type: postgresql
# PostgreSQL configuration
postgresql:
enabled: false
host: ""
port: 5432
database: system2
username: system2
# Password should be provided via secret
existingSecret: ""
secretKey: "postgresql-password"
# Persistent volume configuration (not used in production with external PostgreSQL)
persistence:
enabled: false
# storageClass: "-"
accessMode: ReadWriteOnce
size: 1Gi
# Health checks
healthcheck:
enabled: true
livenessProbe:
httpGet:
path: /
port: http
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /
port: http
initialDelaySeconds: 5
periodSeconds: 5

@ -3,10 +3,10 @@
"devenv": {
"locked": {
"dir": "src/modules",
"lastModified": 1752088718,
"lastModified": 1754344171,
"owner": "cachix",
"repo": "devenv",
"rev": "192a48f9f9f830ed0afacfa7540eb459111377a6",
"rev": "03e3a284d2e16e5aaced317cf84dfb392470ca6e",
"type": "github"
},
"original": {
@ -16,26 +16,6 @@
"type": "github"
}
},
"fenix": {
"inputs": {
"nixpkgs": [
"nixpkgs"
],
"rust-analyzer-src": "rust-analyzer-src"
},
"locked": {
"lastModified": 1752043172,
"owner": "nix-community",
"repo": "fenix",
"rev": "84802dd540bd6e41380aeae57713de334f0626b2",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "fenix",
"type": "github"
}
},
"flake-compat": {
"flake": false,
"locked": {
@ -94,10 +74,10 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1750441195,
"lastModified": 1753719760,
"owner": "cachix",
"repo": "devenv-nixpkgs",
"rev": "0ceffe312871b443929ff3006960d29b120dc627",
"rev": "0f871fffdc0e5852ec25af99ea5f09ca7be9b632",
"type": "github"
},
"original": {
@ -110,7 +90,6 @@
"root": {
"inputs": {
"devenv": "devenv",
"fenix": "fenix",
"git-hooks": "git-hooks",
"nixpkgs": "nixpkgs",
"pre-commit-hooks": [
@ -119,22 +98,6 @@
"rust-overlay": "rust-overlay"
}
},
"rust-analyzer-src": {
"flake": false,
"locked": {
"lastModified": 1752028226,
"owner": "rust-lang",
"repo": "rust-analyzer",
"rev": "e429bac8793c24a99b643c4813ece813901c8c79",
"type": "github"
},
"original": {
"owner": "rust-lang",
"ref": "nightly",
"repo": "rust-analyzer",
"type": "github"
}
},
"rust-overlay": {
"inputs": {
"nixpkgs": [
@ -142,10 +105,10 @@
]
},
"locked": {
"lastModified": 1752028888,
"lastModified": 1754362243,
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "a0f1c656e053463b47639234b151a05e4441bb19",
"rev": "3ec3244ffb877f1b7f5d2dbff19241982ab25ff2",
"type": "github"
},
"original": {

@ -12,6 +12,8 @@
git
bun
cargo-generate
kubernetes-helm
helm-ls
];
# https://devenv.sh/languages/

@ -1,9 +1,4 @@
inputs:
fenix:
url: github:nix-community/fenix
inputs:
nixpkgs:
follows: nixpkgs
nixpkgs:
url: github:cachix/devenv-nixpkgs/rolling
rust-overlay:

@ -2,7 +2,7 @@ version: "3.8"
services:
postgres:
image: postgres:15
image: postgres:18beta2-alpine
container_name: postgres-dev
environment:
POSTGRES_DB: system
@ -13,19 +13,3 @@ services:
volumes:
- postgres_data:/var/lib/postgresql/data
restart: unless-stopped
postgres-test:
image: postgres:15
container_name: postgres-test
environment:
POSTGRES_DB: system_test
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
ports:
- "5433:5432"
tmpfs:
- /var/lib/postgresql/data
restart: unless-stopped
volumes:
postgres_data:

docs/ROADMAP.md
@ -0,0 +1,47 @@
# ROADMAP
This document outlines the roadmap of tasks completed by Claude in this project.
## Development Workflow
1. **Task Planning**
- Study existing code and documentation and understand the current state
- Update `ROADMAP.md` to include the new task
2. **Task Creation**
- Study existing code and documentation and understand the current state
- Create a new task in `docs/tasks/` directory
- Name format: `XX-<task-name>.md` (e.g., `01-setup-docker.md`)
- Include high level specification of the task, including:
- Relevant files
- Purpose and goals
- Key components and technologies involved
- Expected outcomes and deliverables
- Tests (If applicable)
- Implementation steps
- TODO items and subtasks
- Note that since this is a new task, its TODO items should start unchecked.
3. **Task Implementation**
- Study existing code and documentation and understand the current state
- Follow the specification from the tasks file
- Implement features and functionality
- Update step progress within the task file after each step
4. **Roadmap Update**
- Mark completed tasks with [X] in the `ROADMAP.md` file
- Add reference to the task file (e.g. See [01-setup-docker.md](docs/tasks/01-setup-docker.md))
## Development Tasks
- [x] **Task 00: Helm Chart Deployment** - See [00-helm-chart-deployment.md](docs/tasks/00-helm-chart-deployment.md)
- Create a Helm chart for Kubernetes deployment of the application.
- Create `charts/` directory structure with standard Helm chart layout (Chart.yaml, values.yaml, templates/)
- Define Kubernetes deployment manifests for the Bun-based API server with proper resource limits and health checks
- Configure PostgreSQL database deployment or external database connection options in the chart
- Set up ingress configuration with configurable domain and TLS certificate management
- Add comprehensive values.yaml with environment-specific overrides for development, staging, and production deployments

File diff suppressed because it is too large

@ -1,986 +0,0 @@
# Database Architecture
## Overview
This system uses a PostgreSQL database with a multi-schema architecture to support multiple tools while enabling cross-tool data sharing and unified dashboards. Each tool has its own schema, with a shared schema for common functionality like user management.
## Schema Structure
### Shared Schema (`shared`)
Contains common functionality used across all tools:
- User management and authentication (using Better Auth)
- Cross-tool settings and preferences
- Shared utilities
### Tool-Specific Schemas
Each tool has its own schema:
- `habit_tracker` - Binary habit completion tracking
- `intake_tracker` - Quantitative metrics tracking (water, steps, weight, etc.)
- `food_tracker` - Complex food intake and nutrition tracking (planned)
- `future_tool_1` - Placeholder for future tools
## Database Setup
```sql
-- Enable UUID extension
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
-- Create schemas
CREATE SCHEMA IF NOT EXISTS shared;
CREATE SCHEMA IF NOT EXISTS habit_tracker;
CREATE SCHEMA IF NOT EXISTS intake_tracker;
```
## Table Definitions
### Shared Schema Tables
#### Better Auth Tables
The system uses Better Auth for authentication, which creates the following tables:
##### Users Table
```sql
CREATE TABLE shared.user (
id TEXT PRIMARY KEY NOT NULL,
name TEXT NOT NULL,
email TEXT NOT NULL,
email_verified BOOLEAN NOT NULL,
image TEXT,
created_at TIMESTAMP NOT NULL,
updated_at TIMESTAMP NOT NULL,
username TEXT UNIQUE,
display_username TEXT,
CONSTRAINT user_email_unique UNIQUE(email),
CONSTRAINT user_username_unique UNIQUE(username)
);
```
##### Sessions Table
```sql
CREATE TABLE shared.session (
id TEXT PRIMARY KEY NOT NULL,
expires_at TIMESTAMP NOT NULL,
token TEXT NOT NULL,
created_at TIMESTAMP NOT NULL,
updated_at TIMESTAMP NOT NULL,
ip_address TEXT,
user_agent TEXT,
user_id TEXT NOT NULL,
CONSTRAINT session_token_unique UNIQUE(token),
CONSTRAINT session_user_id_user_id_fk FOREIGN KEY (user_id) REFERENCES shared.user(id) ON DELETE CASCADE
);
```
##### Accounts Table
```sql
CREATE TABLE shared.account (
id TEXT PRIMARY KEY NOT NULL,
account_id TEXT NOT NULL,
provider_id TEXT NOT NULL,
user_id TEXT NOT NULL,
access_token TEXT,
refresh_token TEXT,
id_token TEXT,
access_token_expires_at TIMESTAMP,
refresh_token_expires_at TIMESTAMP,
scope TEXT,
password TEXT,
created_at TIMESTAMP NOT NULL,
updated_at TIMESTAMP NOT NULL,
CONSTRAINT account_user_id_user_id_fk FOREIGN KEY (user_id) REFERENCES shared.user(id) ON DELETE CASCADE
);
```
##### Verification Table
```sql
CREATE TABLE shared.verification (
id TEXT PRIMARY KEY NOT NULL,
identifier TEXT NOT NULL,
value TEXT NOT NULL,
expires_at TIMESTAMP NOT NULL,
created_at TIMESTAMP,
updated_at TIMESTAMP
);
```
##### API Keys Table
```sql
CREATE TABLE shared.apikey (
id TEXT PRIMARY KEY NOT NULL,
name TEXT,
start TEXT,
prefix TEXT,
key TEXT NOT NULL,
user_id TEXT NOT NULL,
refill_interval INTEGER,
refill_amount INTEGER,
last_refill_at TIMESTAMP,
enabled BOOLEAN DEFAULT TRUE,
rate_limit_enabled BOOLEAN DEFAULT TRUE,
rate_limit_time_window INTEGER DEFAULT 86400000,
rate_limit_max INTEGER DEFAULT 10,
request_count INTEGER,
remaining INTEGER,
last_request TIMESTAMP,
expires_at TIMESTAMP,
created_at TIMESTAMP NOT NULL,
updated_at TIMESTAMP NOT NULL,
permissions TEXT,
metadata TEXT,
CONSTRAINT apikey_user_id_user_id_fk FOREIGN KEY (user_id) REFERENCES shared.user(id) ON DELETE CASCADE
);
```
#### Legacy User Settings Table (if needed)
```sql
CREATE TABLE shared.user_settings (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id TEXT NOT NULL REFERENCES shared.user(id) ON DELETE CASCADE,
key VARCHAR(100) NOT NULL,
value JSONB,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
UNIQUE(user_id, key)
);
```
### Habit Tracker Schema Tables
#### Habits Table
```sql
CREATE TABLE habit_tracker.habits (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id TEXT NOT NULL REFERENCES shared.user(id) ON DELETE CASCADE,
name VARCHAR(255) NOT NULL,
description TEXT,
frequency_type VARCHAR(20) NOT NULL CHECK (frequency_type IN ('daily', 'interval', 'multi_daily')),
target_count INTEGER NOT NULL DEFAULT 1,
interval_days INTEGER, -- NULL for daily/multi_daily, required for interval
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
active BOOLEAN DEFAULT TRUE
);
```
#### Habit Completions Table
```sql
CREATE TABLE habit_tracker.habit_completions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
habit_id UUID NOT NULL REFERENCES habit_tracker.habits(id) ON DELETE CASCADE,
completed_at TIMESTAMP NOT NULL DEFAULT NOW(),
notes TEXT,
created_at TIMESTAMP DEFAULT NOW()
);
```
### Intake Tracker Schema Tables
#### Intake Metrics Table
```sql
CREATE TABLE intake_tracker.intake_metrics (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id TEXT NOT NULL REFERENCES shared.user(id) ON DELETE CASCADE,
metric_type VARCHAR(50) NOT NULL, -- 'water', 'weight', 'steps', 'sleep_hours'
unit VARCHAR(20) NOT NULL, -- 'ml', 'kg', 'steps', 'hours'
display_name VARCHAR(100) NOT NULL,
target_value DECIMAL(10,2), -- optional daily target
min_value DECIMAL(10,2), -- optional lower bound (e.g. for weight)
max_value DECIMAL(10,2), -- optional upper bound
is_cumulative BOOLEAN NOT NULL DEFAULT TRUE, -- true for water/steps, false for weight
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
active BOOLEAN DEFAULT TRUE,
UNIQUE(user_id, metric_type)
);
```
#### Intake Records Table
```sql
CREATE TABLE intake_tracker.intake_records (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
intake_metric_id UUID NOT NULL REFERENCES intake_tracker.intake_metrics(id) ON DELETE CASCADE,
value DECIMAL(10,2) NOT NULL,
recorded_at TIMESTAMP NOT NULL DEFAULT NOW(),
notes TEXT,
created_at TIMESTAMP DEFAULT NOW()
);
```
#### Daily Summaries Table
```sql
CREATE TABLE intake_tracker.daily_summaries (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
intake_metric_id UUID NOT NULL REFERENCES intake_tracker.intake_metrics(id) ON DELETE CASCADE,
date DATE NOT NULL,
total_value DECIMAL(10,2) NOT NULL,
entry_count INTEGER NOT NULL,
first_entry_at TIMESTAMP,
last_entry_at TIMESTAMP,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
UNIQUE(intake_metric_id, date)
);
```
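The aggregation that fills a `daily_summaries` row can be sketched in Python (an illustration under the assumption that summaries are maintained by application code; the function name is hypothetical):

```python
from datetime import datetime

# Sketch: derive one daily_summaries row from the raw intake_records
# (value, recorded_at) pairs for a single metric on a single date.
def summarize_day(records: list[tuple[float, datetime]]) -> dict:
    values = [v for v, _ in records]
    times = [t for _, t in records]
    return {
        "total_value": sum(values),
        "entry_count": len(records),
        "first_entry_at": min(times),
        "last_entry_at": max(times),
    }

day = [(500.0, datetime(2024, 1, 15, 8, 0)),
       (300.0, datetime(2024, 1, 15, 12, 30))]
print(summarize_day(day)["total_value"])  # 800.0
```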
## Habit Types
### Daily Habits
**Purpose**: Habits that should be completed once every day
**Examples**: Taking vitamins, morning meditation, reading
**Configuration**:
- `frequency_type = 'daily'`
- `target_count = 1`
- `interval_days = NULL`
### Multi-Daily Habits
**Purpose**: Habits that should be completed multiple times per day
**Examples**: Brushing teeth (2x), drinking water (8x), taking breaks (5x)
**Configuration**:
- `frequency_type = 'multi_daily'`
- `target_count = N` (number of times per day)
- `interval_days = NULL`
### Interval Habits
**Purpose**: Habits that should be completed X days after the last completion
**Examples**: Cleaning house (every 7 days), car maintenance (every 30 days)
**Configuration**:
- `frequency_type = 'interval'`
- `target_count = 1`
- `interval_days = X` (days between completions)
## Constraints and Business Rules
### Table Constraints
```sql
-- Habits table constraints
ALTER TABLE habit_tracker.habits
ADD CONSTRAINT check_target_count CHECK (target_count > 0),
ADD CONSTRAINT check_interval_days CHECK (
(frequency_type = 'interval' AND interval_days IS NOT NULL AND interval_days > 0)
OR (frequency_type != 'interval' AND interval_days IS NULL)
);
```
### Intake Tracker Constraints
```sql
-- Intake tracker constraints
ALTER TABLE intake_tracker.intake_metrics
ADD CONSTRAINT check_positive_target CHECK (target_value IS NULL OR target_value > 0),
ADD CONSTRAINT check_positive_min CHECK (min_value IS NULL OR min_value >= 0),
ADD CONSTRAINT check_positive_max CHECK (max_value IS NULL OR max_value > 0),
ADD CONSTRAINT check_min_max_order CHECK (min_value IS NULL OR max_value IS NULL OR min_value <= max_value);
ALTER TABLE intake_tracker.intake_records
ADD CONSTRAINT check_positive_value CHECK (value > 0),
ADD CONSTRAINT check_recorded_at_not_future CHECK (recorded_at <= NOW());
ALTER TABLE intake_tracker.daily_summaries
ADD CONSTRAINT check_positive_total CHECK (total_value > 0),
ADD CONSTRAINT check_positive_count CHECK (entry_count > 0);
```
### Business Rules
#### Habit Tracker
1. Each habit belongs to exactly one user
2. Interval habits must have `interval_days > 0`
3. Daily and multi-daily habits must have `interval_days = NULL`
4. Target count must be positive
5. Completions are immutable (no updates, only inserts)
6. Deleting a user cascades to delete their habits and completions
#### Intake Tracker
1. Each intake metric definition belongs to exactly one user
2. Each user can have only one metric definition per metric type
3. Intake record values must be positive
4. Recorded timestamp cannot be in the future
5. Daily summaries are automatically maintained
6. Intake records are immutable (no updates, only inserts)
7. Target, min, and max values are optional but must be positive if set
8. Deleting a user cascades to delete their metrics and records
9. Deleting a metric definition cascades to delete its records and summaries
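The record-level rules above (positive value, no future timestamps) could be enforced before the INSERT as well as by the CHECK constraints; a minimal Python sketch (the function name is hypothetical):

```python
from datetime import datetime

# Sketch: validate an intake record against the business rules above
# before inserting it (positive value, recorded_at not in the future).
def validate_intake_record(value: float, recorded_at: datetime,
                           now: datetime) -> list[str]:
    errors = []
    if value <= 0:
        errors.append("value must be positive")
    if recorded_at > now:
        errors.append("recorded_at cannot be in the future")
    return errors

now = datetime(2024, 1, 15, 12, 0)
print(validate_intake_record(500, datetime(2024, 1, 15, 8, 0), now))  # []
print(validate_intake_record(-1, datetime(2024, 1, 16), now))
```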
## Performance Optimization
### Indexes
```sql
-- Primary access patterns
CREATE INDEX idx_habits_user_id ON habit_tracker.habits(user_id);
CREATE INDEX idx_habits_active ON habit_tracker.habits(active) WHERE active = true;
CREATE INDEX idx_completions_habit_id ON habit_tracker.habit_completions(habit_id);
CREATE INDEX idx_completions_completed_at ON habit_tracker.habit_completions(completed_at);
-- Composite indexes for common queries
CREATE INDEX idx_habits_user_active ON habit_tracker.habits(user_id, active) WHERE active = true;
CREATE INDEX idx_completions_habit_date ON habit_tracker.habit_completions(habit_id, completed_at);
-- User settings access
CREATE INDEX idx_user_settings_user_key ON shared.user_settings(user_id, key);
-- Intake tracker indexes
CREATE INDEX idx_intake_metrics_user_id ON intake_tracker.intake_metrics(user_id);
CREATE INDEX idx_intake_metrics_user_type ON intake_tracker.intake_metrics(user_id, metric_type);
CREATE INDEX idx_intake_metrics_active ON intake_tracker.intake_metrics(active) WHERE active = true;
CREATE INDEX idx_intake_records_metric_id ON intake_tracker.intake_records(intake_metric_id);
CREATE INDEX idx_intake_records_recorded_at ON intake_tracker.intake_records(recorded_at);
CREATE INDEX idx_intake_records_metric_date ON intake_tracker.intake_records(intake_metric_id, recorded_at);
CREATE INDEX idx_daily_summaries_metric_date ON intake_tracker.daily_summaries(intake_metric_id, date);
CREATE INDEX idx_daily_summaries_date ON intake_tracker.daily_summaries(date);
```
### Query Optimization
- Use composite indexes for user-specific queries
- Partition completions table by month for large datasets (future scaling)
- Consider materialized views for streak calculations and statistics
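The streak computation such a materialized view might precompute can be sketched in Python (illustration only; one plausible definition of "current streak" is the number of consecutive days, ending today, with at least one completion):

```python
from datetime import date, timedelta

# Sketch: current streak = consecutive days ending today with
# at least one habit completion recorded.
def current_streak(completion_dates: set[date], today: date) -> int:
    streak = 0
    day = today
    while day in completion_dates:
        streak += 1
        day -= timedelta(days=1)
    return streak

days = {date(2024, 1, 15), date(2024, 1, 14), date(2024, 1, 12)}
print(current_streak(days, date(2024, 1, 15)))  # 2 (gap on Jan 13)
```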
## Usage Examples
### Creating Habits
```sql
-- Daily vitamin habit
INSERT INTO habit_tracker.habits (user_id, name, frequency_type, target_count)
VALUES ('user-uuid', 'Take vitamins', 'daily', 1);
-- Twice-daily teeth brushing
INSERT INTO habit_tracker.habits (user_id, name, frequency_type, target_count)
VALUES ('user-uuid', 'Brush teeth', 'multi_daily', 2);
-- Weekly house cleaning
INSERT INTO habit_tracker.habits (user_id, name, frequency_type, interval_days)
VALUES ('user-uuid', 'Clean house', 'interval', 7);
```
### Recording Completions
```sql
-- Complete a habit
INSERT INTO habit_tracker.habit_completions (habit_id, notes)
VALUES ('habit-uuid', 'Completed during lunch break');
-- Complete with specific timestamp
INSERT INTO habit_tracker.habit_completions (habit_id, completed_at, notes)
VALUES ('habit-uuid', '2024-01-15 08:30:00', 'Morning dose');
```
### Creating Intake Metrics
```sql
-- Create water intake metric
INSERT INTO intake_tracker.intake_metrics (user_id, metric_type, unit, display_name, target_value, is_cumulative)
VALUES ('user-uuid', 'water', 'ml', 'Water Intake', 2000, true);
-- Create weight tracking metric
INSERT INTO intake_tracker.intake_metrics (user_id, metric_type, unit, display_name, min_value, max_value, is_cumulative)
VALUES ('user-uuid', 'weight', 'kg', 'Body Weight', 50, 150, false);
-- Create steps tracking metric
INSERT INTO intake_tracker.intake_metrics (user_id, metric_type, unit, display_name, target_value, is_cumulative)
VALUES ('user-uuid', 'steps', 'steps', 'Daily Steps', 10000, true);
```
### Recording Intake Data
```sql
-- Record water intake (assuming metric already exists)
INSERT INTO intake_tracker.intake_records (intake_metric_id, value, notes)
VALUES ('water-metric-uuid', 500, 'Morning hydration');
-- Record weight
INSERT INTO intake_tracker.intake_records (intake_metric_id, value)
VALUES ('weight-metric-uuid', 70.5);
-- Record steps with specific time
INSERT INTO intake_tracker.intake_records (intake_metric_id, value, recorded_at)
VALUES ('steps-metric-uuid', 8432, '2024-01-15 18:00:00');
```
### Common Queries
```sql
-- Get user's active habits
SELECT * FROM habit_tracker.habits
WHERE user_id = 'test-user-id' AND active = true
ORDER BY created_at;
-- Get today's completions for a habit
SELECT * FROM habit_tracker.habit_completions
WHERE habit_id = 'habit-uuid'
AND completed_at >= CURRENT_DATE
AND completed_at < CURRENT_DATE + INTERVAL '1 day';
-- Check if daily habit is completed today
SELECT COUNT(*) as completions_today
FROM habit_tracker.habit_completions hc
JOIN habit_tracker.habits h ON hc.habit_id = h.id
WHERE h.user_id = 'test-user-id'
AND h.frequency_type = 'daily'
AND hc.completed_at >= CURRENT_DATE;
-- Get habits due for interval-based completion
SELECT h.*,
MAX(hc.completed_at) as last_completed,
MAX(hc.completed_at) + INTERVAL '1 day' * h.interval_days as next_due
FROM habit_tracker.habits h
LEFT JOIN habit_tracker.habit_completions hc ON h.id = hc.habit_id
WHERE h.user_id = 'test-user-id'
AND h.frequency_type = 'interval'
AND h.active = true
GROUP BY h.id
HAVING MAX(hc.completed_at) IS NULL
OR MAX(hc.completed_at) + INTERVAL '1 day' * h.interval_days <= NOW();
-- Get user's active intake metrics
SELECT * FROM intake_tracker.intake_metrics
WHERE user_id = 'test-user-id' AND active = true
ORDER BY display_name;
-- Get today's water intake
SELECT SUM(ir.value) as total_water_ml, im.target_value, im.unit
FROM intake_tracker.intake_records ir
JOIN intake_tracker.intake_metrics im ON ir.intake_metric_id = im.id
WHERE im.user_id = 'test-user-id'
AND im.metric_type = 'water'
AND ir.recorded_at >= CURRENT_DATE
AND ir.recorded_at < CURRENT_DATE + INTERVAL '1 day'
GROUP BY im.target_value, im.unit;
-- Get daily summaries for water intake this week
SELECT ds.date, ds.total_value, ds.entry_count, im.target_value, im.unit
FROM intake_tracker.daily_summaries ds
JOIN intake_tracker.intake_metrics im ON ds.intake_metric_id = im.id
WHERE im.user_id = 'test-user-id'
AND im.metric_type = 'water'
AND ds.date >= CURRENT_DATE - INTERVAL '7 days'
ORDER BY ds.date;
-- Get latest weight entry
SELECT ir.value, im.unit, ir.recorded_at, im.min_value, im.max_value
FROM intake_tracker.intake_records ir
JOIN intake_tracker.intake_metrics im ON ir.intake_metric_id = im.id
WHERE im.user_id = 'test-user-id'
AND im.metric_type = 'weight'
ORDER BY ir.recorded_at DESC
LIMIT 1;
-- Check if daily target is met
SELECT im.display_name, im.target_value, COALESCE(SUM(ir.value), 0) as current_total,
CASE WHEN im.target_value IS NOT NULL AND COALESCE(SUM(ir.value), 0) >= im.target_value
THEN 'target_met' ELSE 'target_not_met' END as status
FROM intake_tracker.intake_metrics im
LEFT JOIN intake_tracker.intake_records ir ON im.id = ir.intake_metric_id
AND ir.recorded_at >= CURRENT_DATE
AND ir.recorded_at < CURRENT_DATE + INTERVAL '1 day'
WHERE im.user_id = 'test-user-id'
AND im.is_cumulative = true
AND im.active = true
GROUP BY im.id, im.display_name, im.target_value;
```
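The interval-due query above can also be evaluated at the application layer once a habit's last completion has been fetched. A TypeScript sketch of the same rule (illustrative, not the actual API code):

```typescript
// Application-side equivalent of the interval-due SQL above: a habit is due
// if it was never completed, or if last_completed + interval_days <= now.
function isIntervalHabitDue(
  lastCompleted: Date | null,
  intervalDays: number,
  now: Date = new Date(),
): boolean {
  if (lastCompleted === null) return true; // never completed -> due
  const nextDue = new Date(lastCompleted.getTime() + intervalDays * 86_400_000);
  return nextDue.getTime() <= now.getTime();
}
```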
## Migration Scripts
### Initial Setup
```sql
-- Create database (run as superuser)
CREATE DATABASE personal_system;
-- Connect to database and run setup
\c personal_system;
-- Enable extensions
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
-- Create schemas
CREATE SCHEMA IF NOT EXISTS shared;
CREATE SCHEMA IF NOT EXISTS habit_tracker;
CREATE SCHEMA IF NOT EXISTS intake_tracker;
-- Create shared tables (Better Auth tables)
CREATE TABLE shared.user (
id TEXT PRIMARY KEY NOT NULL,
name TEXT NOT NULL,
email TEXT NOT NULL,
email_verified BOOLEAN NOT NULL,
image TEXT,
created_at TIMESTAMP NOT NULL,
updated_at TIMESTAMP NOT NULL,
username TEXT UNIQUE,
display_username TEXT,
CONSTRAINT user_email_unique UNIQUE(email),
CONSTRAINT user_username_unique UNIQUE(username)
);
CREATE TABLE shared.session (
id TEXT PRIMARY KEY NOT NULL,
expires_at TIMESTAMP NOT NULL,
token TEXT NOT NULL,
created_at TIMESTAMP NOT NULL,
updated_at TIMESTAMP NOT NULL,
ip_address TEXT,
user_agent TEXT,
user_id TEXT NOT NULL,
CONSTRAINT session_token_unique UNIQUE(token),
CONSTRAINT session_user_id_user_id_fk FOREIGN KEY (user_id) REFERENCES shared.user(id) ON DELETE CASCADE
);
CREATE TABLE shared.account (
id TEXT PRIMARY KEY NOT NULL,
account_id TEXT NOT NULL,
provider_id TEXT NOT NULL,
user_id TEXT NOT NULL,
access_token TEXT,
refresh_token TEXT,
id_token TEXT,
access_token_expires_at TIMESTAMP,
refresh_token_expires_at TIMESTAMP,
scope TEXT,
password TEXT,
created_at TIMESTAMP NOT NULL,
updated_at TIMESTAMP NOT NULL,
CONSTRAINT account_user_id_user_id_fk FOREIGN KEY (user_id) REFERENCES shared.user(id) ON DELETE CASCADE
);
CREATE TABLE shared.verification (
id TEXT PRIMARY KEY NOT NULL,
identifier TEXT NOT NULL,
value TEXT NOT NULL,
expires_at TIMESTAMP NOT NULL,
created_at TIMESTAMP,
updated_at TIMESTAMP
);
CREATE TABLE shared.apikey (
id TEXT PRIMARY KEY NOT NULL,
name TEXT,
start TEXT,
prefix TEXT,
key TEXT NOT NULL,
user_id TEXT NOT NULL,
refill_interval INTEGER,
refill_amount INTEGER,
last_refill_at TIMESTAMP,
enabled BOOLEAN DEFAULT TRUE,
rate_limit_enabled BOOLEAN DEFAULT TRUE,
rate_limit_time_window INTEGER DEFAULT 86400000,
rate_limit_max INTEGER DEFAULT 10,
request_count INTEGER,
remaining INTEGER,
last_request TIMESTAMP,
expires_at TIMESTAMP,
created_at TIMESTAMP NOT NULL,
updated_at TIMESTAMP NOT NULL,
permissions TEXT,
metadata TEXT,
CONSTRAINT apikey_user_id_user_id_fk FOREIGN KEY (user_id) REFERENCES shared.user(id) ON DELETE CASCADE
);
-- Legacy user settings table (if needed)
CREATE TABLE shared.user_settings (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id TEXT NOT NULL REFERENCES shared.user(id) ON DELETE CASCADE,
key VARCHAR(100) NOT NULL,
value JSONB,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
UNIQUE(user_id, key)
);
-- Create habit tracker tables
CREATE TABLE habit_tracker.habits (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id TEXT NOT NULL REFERENCES shared.user(id) ON DELETE CASCADE,
name VARCHAR(255) NOT NULL,
description TEXT,
frequency_type VARCHAR(20) NOT NULL CHECK (frequency_type IN ('daily', 'interval', 'multi_daily')),
target_count INTEGER NOT NULL DEFAULT 1,
interval_days INTEGER,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
active BOOLEAN DEFAULT TRUE
);
CREATE TABLE habit_tracker.habit_completions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
habit_id UUID NOT NULL REFERENCES habit_tracker.habits(id) ON DELETE CASCADE,
completed_at TIMESTAMP NOT NULL DEFAULT NOW(),
notes TEXT,
created_at TIMESTAMP DEFAULT NOW()
);
-- Add constraints
ALTER TABLE habit_tracker.habits
ADD CONSTRAINT check_target_count CHECK (target_count > 0),
ADD CONSTRAINT check_interval_days CHECK (
(frequency_type = 'interval' AND interval_days IS NOT NULL AND interval_days > 0)
OR (frequency_type != 'interval' AND interval_days IS NULL)
);
-- Create habit tracker indexes
CREATE INDEX idx_habits_user_id ON habit_tracker.habits(user_id);
CREATE INDEX idx_habits_active ON habit_tracker.habits(active) WHERE active = true;
CREATE INDEX idx_completions_habit_id ON habit_tracker.habit_completions(habit_id);
CREATE INDEX idx_completions_completed_at ON habit_tracker.habit_completions(completed_at);
CREATE INDEX idx_habits_user_active ON habit_tracker.habits(user_id, active) WHERE active = true;
CREATE INDEX idx_completions_habit_date ON habit_tracker.habit_completions(habit_id, completed_at);
CREATE INDEX idx_user_settings_user_key ON shared.user_settings(user_id, key);
-- Create intake tracker tables
CREATE TABLE intake_tracker.intake_metrics (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id TEXT NOT NULL REFERENCES shared.user(id) ON DELETE CASCADE,
metric_type VARCHAR(50) NOT NULL,
unit VARCHAR(20) NOT NULL,
display_name VARCHAR(100) NOT NULL,
target_value DECIMAL(10,2),
min_value DECIMAL(10,2),
max_value DECIMAL(10,2),
is_cumulative BOOLEAN NOT NULL DEFAULT TRUE,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
active BOOLEAN DEFAULT TRUE,
UNIQUE(user_id, metric_type)
);
CREATE TABLE intake_tracker.intake_records (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
intake_metric_id UUID NOT NULL REFERENCES intake_tracker.intake_metrics(id) ON DELETE CASCADE,
value DECIMAL(10,2) NOT NULL,
recorded_at TIMESTAMP NOT NULL DEFAULT NOW(),
notes TEXT,
created_at TIMESTAMP DEFAULT NOW()
);
CREATE TABLE intake_tracker.daily_summaries (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
intake_metric_id UUID NOT NULL REFERENCES intake_tracker.intake_metrics(id) ON DELETE CASCADE,
date DATE NOT NULL,
total_value DECIMAL(10,2) NOT NULL,
entry_count INTEGER NOT NULL,
first_entry_at TIMESTAMP,
last_entry_at TIMESTAMP,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
UNIQUE(intake_metric_id, date)
);
-- Add intake tracker constraints
ALTER TABLE intake_tracker.intake_metrics
ADD CONSTRAINT check_positive_target CHECK (target_value IS NULL OR target_value > 0),
ADD CONSTRAINT check_positive_min CHECK (min_value IS NULL OR min_value >= 0),
ADD CONSTRAINT check_positive_max CHECK (max_value IS NULL OR max_value > 0),
ADD CONSTRAINT check_min_max_order CHECK (min_value IS NULL OR max_value IS NULL OR min_value <= max_value);
ALTER TABLE intake_tracker.intake_records
ADD CONSTRAINT check_positive_value CHECK (value > 0),
ADD CONSTRAINT check_recorded_at_not_future CHECK (recorded_at <= NOW());
ALTER TABLE intake_tracker.daily_summaries
ADD CONSTRAINT check_positive_total CHECK (total_value > 0),
ADD CONSTRAINT check_positive_count CHECK (entry_count > 0);
-- Create intake tracker indexes
CREATE INDEX idx_intake_metrics_user_id ON intake_tracker.intake_metrics(user_id);
CREATE INDEX idx_intake_metrics_user_type ON intake_tracker.intake_metrics(user_id, metric_type);
CREATE INDEX idx_intake_metrics_active ON intake_tracker.intake_metrics(active) WHERE active = true;
CREATE INDEX idx_intake_records_metric_id ON intake_tracker.intake_records(intake_metric_id);
CREATE INDEX idx_intake_records_recorded_at ON intake_tracker.intake_records(recorded_at);
CREATE INDEX idx_intake_records_metric_date ON intake_tracker.intake_records(intake_metric_id, recorded_at);
CREATE INDEX idx_daily_summaries_metric_date ON intake_tracker.daily_summaries(intake_metric_id, date);
CREATE INDEX idx_daily_summaries_date ON intake_tracker.daily_summaries(date);
```
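The `daily_summaries` table above is denormalized, so the application layer must keep it in sync as records arrive. One way to recompute a day's row, sketched in TypeScript (illustrative names; note that a day with no records gets no row at all, since the table's constraints require positive totals and counts):

```typescript
// Sketch of recomputing one row of intake_tracker.daily_summaries from a
// day's intake_records; field names follow the schema above.
interface IntakeRecord { value: number; recordedAt: Date; }
interface DailySummary {
  totalValue: number;
  entryCount: number;
  firstEntryAt: Date | null;
  lastEntryAt: Date | null;
}

function summarizeDay(records: IntakeRecord[]): DailySummary {
  // Sort chronologically so first/last entry timestamps fall out naturally.
  const sorted = [...records].sort(
    (a, b) => a.recordedAt.getTime() - b.recordedAt.getTime(),
  );
  return {
    totalValue: sorted.reduce((sum, r) => sum + r.value, 0),
    entryCount: sorted.length,
    firstEntryAt: sorted[0]?.recordedAt ?? null,
    lastEntryAt: sorted[sorted.length - 1]?.recordedAt ?? null,
  };
}
```

An upsert keyed on `(intake_metric_id, date)` then writes the result back.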
### Sample Data
```sql
-- Create a test user (Better Auth format)
INSERT INTO shared.user (id, name, email, email_verified, username, created_at, updated_at)
VALUES ('test-user-id', 'Test User', 'test@example.com', true, 'testuser', NOW(), NOW());
-- Get user ID (replace with actual ID in production)
-- SELECT id FROM shared.user WHERE username = 'testuser';
-- Create sample habits (replace test-user-id with actual user ID)
INSERT INTO habit_tracker.habits (user_id, name, frequency_type, target_count) VALUES
('test-user-id', 'Take vitamins', 'daily', 1),
('test-user-id', 'Brush teeth', 'multi_daily', 2),
('test-user-id', 'Exercise', 'daily', 1);
INSERT INTO habit_tracker.habits (user_id, name, frequency_type, interval_days) VALUES
('test-user-id', 'Clean house', 'interval', 7),
('test-user-id', 'Grocery shopping', 'interval', 3);
-- Create sample intake metrics (replace test-user-id with actual user ID)
INSERT INTO intake_tracker.intake_metrics (user_id, metric_type, unit, display_name, target_value, is_cumulative) VALUES
('test-user-id', 'water', 'ml', 'Water Intake', 2000, true),
('test-user-id', 'weight', 'kg', 'Body Weight', NULL, false),
('test-user-id', 'steps', 'steps', 'Daily Steps', 10000, true);
-- Create sample intake records (replace metric-uuid with actual metric IDs)
INSERT INTO intake_tracker.intake_records (intake_metric_id, value, notes) VALUES
('water-metric-uuid', 500, 'Morning glass'),
('water-metric-uuid', 300, 'After workout'),
('weight-metric-uuid', 70.5, 'Morning weigh-in'),
('steps-metric-uuid', 8432, 'Daily walk');
```
## Cross-System Integration
### Goal-Based Habits (Optional Enhancement)
The system supports linking habits to quantitative goals through optional fields:
```sql
-- Optional goal tracking fields for habits
ALTER TABLE habit_tracker.habits
ADD COLUMN goal_type VARCHAR(50), -- matches an intake_tracker metric_type, e.g. 'water', 'steps'
ADD COLUMN goal_target DECIMAL,
ADD COLUMN goal_unit VARCHAR(20);
```
### Integration Patterns
#### API-Level Integration
Cross-system relationships are handled at the application layer rather than database foreign keys:
```sql
-- Example: Goal-based habit completion
-- 1. Check if habit has a goal
SELECT goal_type, goal_target, goal_unit
FROM habit_tracker.habits
WHERE id = 'habit-uuid' AND goal_type IS NOT NULL;
-- 2. Check if goal is met in intake tracker
SELECT COALESCE(SUM(ir.value), 0) as current_total
FROM intake_tracker.intake_records ir
JOIN intake_tracker.intake_metrics im ON ir.intake_metric_id = im.id
WHERE im.user_id = 'user-uuid'
AND im.metric_type = 'water'
AND ir.recorded_at >= CURRENT_DATE;
-- 3. Auto-complete habit if goal is met
INSERT INTO habit_tracker.habit_completions (habit_id, notes)
SELECT 'habit-uuid', 'Auto-completed: 2L water goal reached'
WHERE 2000 >= 2000; -- placeholder for current_total >= goal_target
```
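The three steps above combine into a small piece of application logic. A TypeScript sketch of the decision (assumes `goal_target` and the intake total share the same unit, and that duplicate completions should be avoided; names are illustrative):

```typescript
// API-layer check: should a goal-based habit be auto-completed now?
// A sketch, not the actual implementation.
function shouldAutoComplete(
  goalTarget: number | null,
  currentTotal: number,
  alreadyCompletedToday: boolean,
): boolean {
  // Habits without a goal are completed manually; never double-complete.
  if (goalTarget === null || alreadyCompletedToday) return false;
  return currentTotal >= goalTarget;
}
```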
#### Cross-System Use Cases
1. **Goal-Driven Habits**: "Drink 2L water daily" auto-completes when water intake reaches 2L
2. **Progress Tracking**: Dashboard shows habit streaks alongside metric trends
3. **Unified Reporting**: Weekly summaries combine habit completion rates with metric averages
4. **Threshold Alerts**: Notifications when metrics fall below habit-related targets
#### Data Sharing Best Practices
- **Loose Coupling**: Systems communicate through shared user IDs and metric types
- **Event-Driven**: Consider publishing events when goals are met or habits completed
- **API Aggregation**: Combine data from multiple schemas in API responses
- **Separate Concerns**: Each system optimized for its domain while supporting integration
### Example Integration Scenarios
#### Water Intake Habit
```sql
-- Create goal-based habit
INSERT INTO habit_tracker.habits (user_id, name, frequency_type, goal_type, goal_target, goal_unit)
VALUES ('user-uuid', 'Drink enough water', 'daily', 'water', 2000, 'ml');
-- First create the metric definition
INSERT INTO intake_tracker.intake_metrics (user_id, metric_type, unit, display_name, target_value, is_cumulative)
VALUES ('user-uuid', 'water', 'ml', 'Water Intake', 2000, true);
-- Record water intake
INSERT INTO intake_tracker.intake_records (intake_metric_id, value)
VALUES ('water-metric-uuid', 500);
-- API logic checks if daily goal is met and auto-completes habit
```
#### Fitness Integration
```sql
-- Create step-based habit
INSERT INTO habit_tracker.habits (user_id, name, frequency_type, goal_type, goal_target, goal_unit)
VALUES ('user-uuid', 'Get 10k steps', 'daily', 'steps', 10000, 'steps');
-- First create the metric definition
INSERT INTO intake_tracker.intake_metrics (user_id, metric_type, unit, display_name, target_value, is_cumulative)
VALUES ('user-uuid', 'steps', 'steps', 'Daily Steps', 10000, true);
-- Steps can be recorded from fitness tracker
INSERT INTO intake_tracker.intake_records (intake_metric_id, value)
VALUES ('steps-metric-uuid', 8432);
```
## Future Considerations
### Scaling
- Partition `habit_completions` by month when data grows large
- Implement read replicas for dashboard queries
- Consider caching for frequently accessed habit status
### Additional Features
- Habit streaks (consecutive completions)
- Habit statistics and analytics
- Habit templates and sharing
- Reminder/notification system
- Habit dependencies and chains
### Cross-Tool Integration
- Link habits to calendar events
- Connect with fitness/health tracking tools
- Integration with task management systems
- Shared reporting and dashboard capabilities
## Food Tracker Schema (Future Implementation)
### Overview
The food tracker system will handle complex nutritional tracking including multiple foods per meal, recipes, and comprehensive nutritional analysis. This system is designed to be more sophisticated than the simple intake tracker.
### Schema Design
#### Food Database
```sql
CREATE SCHEMA food_tracker;
-- Master food database
CREATE TABLE food_tracker.foods (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name VARCHAR(255) NOT NULL,
brand VARCHAR(100),
barcode VARCHAR(50),
category VARCHAR(50), -- 'vegetables', 'proteins', 'grains', etc.
serving_size_grams DECIMAL(8,2) NOT NULL DEFAULT 100,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW()
);
-- Nutritional information per 100g
CREATE TABLE food_tracker.food_nutrition (
food_id UUID PRIMARY KEY REFERENCES food_tracker.foods(id) ON DELETE CASCADE,
calories DECIMAL(8,2) NOT NULL,
protein_g DECIMAL(8,2) NOT NULL DEFAULT 0,
carbs_g DECIMAL(8,2) NOT NULL DEFAULT 0,
fat_g DECIMAL(8,2) NOT NULL DEFAULT 0,
fiber_g DECIMAL(8,2) NOT NULL DEFAULT 0,
sugar_g DECIMAL(8,2) NOT NULL DEFAULT 0,
sodium_mg DECIMAL(8,2) NOT NULL DEFAULT 0,
potassium_mg DECIMAL(8,2) NOT NULL DEFAULT 0,
calcium_mg DECIMAL(8,2) NOT NULL DEFAULT 0,
iron_mg DECIMAL(8,2) NOT NULL DEFAULT 0,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW()
);
```
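Because nutrition is stored per 100 g, a portion's values scale linearly with `quantity_grams`. A TypeScript sketch of that conversion (an illustrative helper, not part of the real codebase):

```typescript
// Scale per-100g nutrition to an arbitrary portion size.
interface Nutrition { calories: number; proteinG: number; carbsG: number; fatG: number; }

function nutritionForPortion(per100g: Nutrition, quantityGrams: number): Nutrition {
  const factor = quantityGrams / 100;
  return {
    calories: per100g.calories * factor,
    proteinG: per100g.proteinG * factor,
    carbsG: per100g.carbsG * factor,
    fatG: per100g.fatG * factor,
  };
}
```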
#### Meal Tracking
```sql
-- User's meals
CREATE TABLE food_tracker.meals (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id TEXT NOT NULL REFERENCES shared.user(id) ON DELETE CASCADE,
meal_type VARCHAR(20) NOT NULL CHECK (meal_type IN ('breakfast', 'lunch', 'dinner', 'snack')),
consumed_at TIMESTAMP NOT NULL DEFAULT NOW(),
notes TEXT,
created_at TIMESTAMP DEFAULT NOW()
);
-- Individual food items within meals
CREATE TABLE food_tracker.meal_items (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
meal_id UUID NOT NULL REFERENCES food_tracker.meals(id) ON DELETE CASCADE,
food_id UUID NOT NULL REFERENCES food_tracker.foods(id),
quantity_grams DECIMAL(8,2) NOT NULL,
notes TEXT,
created_at TIMESTAMP DEFAULT NOW()
);
```
#### Recipe Support
```sql
-- User-created recipes
CREATE TABLE food_tracker.recipes (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id TEXT NOT NULL REFERENCES shared.user(id) ON DELETE CASCADE,
name VARCHAR(255) NOT NULL,
description TEXT,
servings INTEGER NOT NULL DEFAULT 1,
prep_time_minutes INTEGER,
cook_time_minutes INTEGER,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW()
);
-- Ingredients in recipes
CREATE TABLE food_tracker.recipe_ingredients (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
recipe_id UUID NOT NULL REFERENCES food_tracker.recipes(id) ON DELETE CASCADE,
food_id UUID NOT NULL REFERENCES food_tracker.foods(id),
quantity_grams DECIMAL(8,2) NOT NULL,
notes TEXT,
created_at TIMESTAMP DEFAULT NOW()
);
-- Allow meals to include recipes (food_id must first become nullable,
-- since a meal item now references exactly one of food or recipe)
ALTER TABLE food_tracker.meal_items
ALTER COLUMN food_id DROP NOT NULL,
ADD COLUMN recipe_id UUID REFERENCES food_tracker.recipes(id),
ADD COLUMN recipe_servings DECIMAL(4,2) DEFAULT 1,
ADD CONSTRAINT check_food_or_recipe CHECK (
(food_id IS NOT NULL AND recipe_id IS NULL) OR
(food_id IS NULL AND recipe_id IS NOT NULL)
);
```
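A recipe's per-serving nutrition follows from its ingredients: scale each ingredient from its per-100 g values, sum, and divide by `servings`. A TypeScript sketch for calories alone (illustrative names, assuming the recipe contains only plain foods, not nested recipes):

```typescript
// Per-serving calories for a recipe built from recipe_ingredients rows
// joined with food_nutrition.
interface Ingredient { quantityGrams: number; caloriesPer100g: number; }

function caloriesPerServing(ingredients: Ingredient[], servings: number): number {
  const total = ingredients.reduce(
    (sum, i) => sum + (i.caloriesPer100g * i.quantityGrams) / 100,
    0,
  );
  return total / servings;
}
```

The same shape extends to the other nutrients in `food_nutrition`.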
#### Daily Nutritional Summaries
```sql
-- Cached daily nutrition totals for performance
CREATE TABLE food_tracker.daily_nutrition (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id TEXT NOT NULL REFERENCES shared.user(id) ON DELETE CASCADE,
date DATE NOT NULL,
total_calories DECIMAL(8,2) NOT NULL DEFAULT 0,
total_protein_g DECIMAL(8,2) NOT NULL DEFAULT 0,
total_carbs_g DECIMAL(8,2) NOT NULL DEFAULT 0,
total_fat_g DECIMAL(8,2) NOT NULL DEFAULT 0,
total_fiber_g DECIMAL(8,2) NOT NULL DEFAULT 0,
total_sugar_g DECIMAL(8,2) NOT NULL DEFAULT 0,
total_sodium_mg DECIMAL(8,2) NOT NULL DEFAULT 0,
meal_count INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
UNIQUE(user_id, date)
);
```
### Performance Optimization
```sql
-- Food tracker indexes
CREATE INDEX idx_foods_name ON food_tracker.foods(name);
CREATE INDEX idx_foods_category ON food_tracker.foods(category);
CREATE INDEX idx_foods_barcode ON food_tracker.foods(barcode) WHERE barcode IS NOT NULL;
CREATE INDEX idx_meals_user_date ON food_tracker.meals(user_id, consumed_at);
CREATE INDEX idx_meals_user_type ON food_tracker.meals(user_id, meal_type);
CREATE INDEX idx_meal_items_meal_id ON food_tracker.meal_items(meal_id);
CREATE INDEX idx_meal_items_food_id ON food_tracker.meal_items(food_id);
CREATE INDEX idx_recipes_user_id ON food_tracker.recipes(user_id);
CREATE INDEX idx_recipe_ingredients_recipe_id ON food_tracker.recipe_ingredients(recipe_id);
CREATE INDEX idx_daily_nutrition_user_date ON food_tracker.daily_nutrition(user_id, date);
```
### Business Rules
1. **Food Database**: Centralized food database with nutritional info per 100g
2. **Meals**: Users can create meals with multiple food items
3. **Portions**: All quantities stored in grams for consistency
4. **Recipes**: Users can create reusable recipes with calculated nutrition
5. **Daily Summaries**: Automatically maintained for performance
6. **Immutable Entries**: Meal items are immutable once created
7. **Flexible Structure**: Support for both individual foods and complex recipes
### Integration with Other Systems
- **Habit Tracker**: "Eat healthy" habits can be linked to calorie/macro targets
- **Intake Tracker**: Water intake can be displayed alongside food intake
- **Goals**: Daily nutrition goals can trigger habit completions
### Future Enhancements
- **Barcode Scanning**: Mobile app integration for quick food entry
- **Restaurant Database**: Integration with restaurant nutritional databases
- **Meal Planning**: Weekly meal planning and grocery lists
- **Photo Recognition**: AI-powered food identification from photos
- **Macro Tracking**: Detailed macronutrient analysis and goals
- **Export Integration**: Connect with fitness apps and wearables

# Task 00: Helm Chart Deployment
## Overview
Create a comprehensive Helm chart for Kubernetes deployment of the Bun-based API server application. This task will enable containerized deployment across different environments (staging, production) with proper configuration management and scalability.
## Relevant Files
- `api/Dockerfile` - Existing container definition for the Bun API server
- `api/package.json` - Application dependencies and metadata
- `api/src/index.ts` - Main application entry point using Effect-TS
- `docker-compose.yml` - Current PostgreSQL development setup
- `charts/` - New directory for Helm chart files (to be created)
## Purpose and Goals
- Enable Kubernetes deployment of the Bun API server across multiple environments
- Provide configurable database connection options (embedded PGLite vs external PostgreSQL)
- Implement proper resource management, health checks, and scaling capabilities
- Support ingress configuration with TLS certificate management
- Allow environment-specific customization through values files
## Key Components and Technologies
- **Helm 3.x** - Kubernetes package manager for templating and deployment
- **Kubernetes** - Container orchestration platform
- **Bun Runtime** - JavaScript/TypeScript runtime (oven/bun:1.2.19-alpine base image)
- **Effect-TS** - Functional programming framework used by the API
- **PostgreSQL** - Database option for production deployments
- **Ingress Controller** - For external traffic routing and TLS termination
## Expected Outcomes and Deliverables
### Chart Structure
```
charts/system/
├── Chart.yaml # Chart metadata and version information
├── values.yaml # Default configuration values
├── values-staging.yaml # Staging environment overrides
├── values-prod.yaml # Production environment overrides
└── templates/
├── deployment.yaml # API server deployment manifest
├── service.yaml # Service definition for API endpoints
├── ingress.yaml # Ingress configuration with TLS
├── configmap.yaml # Configuration data for the application
├── secret.yaml # Sensitive configuration (database credentials)
├── hpa.yaml # Horizontal Pod Autoscaler (optional)
└── NOTES.txt # Post-installation instructions
```
### Key Features
- **Multi-environment support** with environment-specific values files
- **Database flexibility** - configurable PostgreSQL or PGLite usage
- **Resource management** with CPU/memory limits and requests
- **Health checks** - readiness and liveness probes for the API server
- **Horizontal scaling** capability with HPA configuration
- **Ingress configuration** with configurable domains and TLS certificates
- **Security** - non-root container execution and proper secret management
## Tests
### Validation Tests
- [x] Helm chart linting (`helm lint charts/system/`)
- [x] Template rendering validation (`helm template charts/system/`)
- [x] Values schema validation for all environment files
- [x] Kubernetes manifest syntax validation
## Implementation Steps
### Step 1: Create Chart Foundation
- [x] Create `charts/system/` directory structure
- [x] Initialize `Chart.yaml` with proper metadata
- [x] Create base `values.yaml` with comprehensive default values
- [x] Set up templating helpers in `_helpers.tpl`
### Step 2: Core Kubernetes Manifests
- [x] Create `deployment.yaml` template for the Bun API server
- Configure container image and tag templating
- Set up resource limits and requests
- Add environment variable configuration
- Implement readiness and liveness probes
- [x] Create `service.yaml` template for API endpoints
- Configure service type and port mapping
- Add service annotations for load balancer configuration
- [x] Create `configmap.yaml` for application configuration
- Environment-specific settings
- Database connection parameters
### Step 3: Database Configuration
- [x] Add PostgreSQL database deployment option in templates
- External database connection configuration
- [x] Configure PGLite embedded database option
- Volume mounting for data persistence
- Memory/storage configuration
- [x] Create `secret.yaml` template for database credentials
- Templated secret generation
- External secret integration capabilities
### Step 4: Ingress and Networking
- [x] Create `ingress.yaml` template
- Configurable host domains
- TLS certificate management (cert-manager integration)
- Path-based routing configuration
- Ingress class configuration
- [ ] Add network policies (optional)
- Database access restrictions
- External traffic controls
### Step 5: Scaling and Performance
- [x] Create `hpa.yaml` template for horizontal pod autoscaling
- CPU and memory-based scaling triggers
- Custom metrics integration capabilities
- [ ] Add resource monitoring configurations
- ServiceMonitor for Prometheus (if applicable)
- Logging configuration
### Step 6: Environment-Specific Values
- [x] Create `values-staging.yaml`
- Multi-replica setup
- External PostgreSQL configuration
- Production-like resource allocation
- Staging domain configuration
- [x] Create `values-prod.yaml`
- High availability configuration
- External PostgreSQL with connection pooling
- Strict resource limits and security policies
- Production domain and TLS settings
### Step 7: Documentation and Validation
- [x] Create comprehensive `NOTES.txt` with deployment instructions
- [x] Add inline documentation to all template files
- [ ] Create deployment guide in `charts/system/README.md`
- [x] Validate all templates with different values files
## TODO Items and Subtasks
### Prerequisites
- [ ] Verify Kubernetes cluster access and Helm installation
- [ ] Determine ingress controller type (nginx, traefik, etc.)
- [ ] Identify certificate management strategy (cert-manager, manual)
- [ ] Choose container registry for image storage
### Database Strategy
- [ ] Define database migration strategy for Kubernetes deployment
- [ ] Configure backup and restore procedures for PostgreSQL
- [ ] Set up database monitoring and alerting
- [ ] Plan for database scaling and connection pooling
### Security Considerations
- [ ] Implement Pod Security Standards compliance
- [ ] Configure RBAC permissions for the application
- [ ] Set up secret rotation strategies
- [ ] Add network security policies
### Monitoring and Observability
- [ ] Integrate with logging infrastructure (Fluentd, Logstash)
- [ ] Add metrics collection (Prometheus integration)
- [ ] Configure distributed tracing (if applicable)
- [ ] Set up alerting rules for application health
### CI/CD Integration
- [ ] Create GitHub Actions workflow for chart testing
- [ ] Set up automated chart versioning and publishing
- [ ] Add chart security scanning
- [ ] Implement automated deployment pipelines
This task will provide a production-ready Kubernetes deployment solution for the Bun-based API server, enabling scalable and manageable deployments across multiple environments.