Create chart

Yadunand Prem 2025-08-05 14:41:45 +08:00
parent b24ced7bc9
commit 2923d87a92
18 changed files with 915 additions and 0 deletions

6
.gitignore vendored
View File

@@ -16,3 +16,9 @@
!api/.dockerignore
!api/src/
!api/src/**
!docs/
!docs/**
!charts/
!charts/**

15
charts/system/Chart.yaml Normal file
View File

@@ -0,0 +1,15 @@
apiVersion: v2
name: system
description: A Helm chart for System
type: application
version: 0.1.0
appVersion: "1.0.0"
keywords:
  - bun
  - typescript
  - effect-ts
  - api
home: https://git.yadunut.dev/yadunut/system
maintainers:
  - name: Yadunand Prem
    email: git@yadunut.com

View File

@@ -0,0 +1,62 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
  {{- range .paths }}
  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }}
  {{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "system.fullname" . }})
  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
  You can watch its status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "system.fullname" . }}'
  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "system.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
  echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "system.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}

2. Database Configuration:
  Using external PostgreSQL database
  Host: {{ .Values.database.postgresql.host }}:{{ .Values.database.postgresql.port }}
  Database: {{ .Values.database.postgresql.database }}
{{- if .Values.database.postgresql.existingSecret }}
  Credentials from secret: {{ .Values.database.postgresql.existingSecret }}
{{- else }}
  Using generated credentials from secret: {{ include "system.fullname" . }}-postgresql
{{- end }}

3. Scaling:
{{- if .Values.autoscaling.enabled }}
  Horizontal Pod Autoscaler is enabled
  Min replicas: {{ .Values.autoscaling.minReplicas }}
  Max replicas: {{ .Values.autoscaling.maxReplicas }}
  CPU target: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}%
  {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
  Memory target: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}%
  {{- end }}
{{- else }}
  Running {{ .Values.replicaCount }} replica(s)
  To enable autoscaling, set autoscaling.enabled=true
{{- end }}

4. Health Checks:
{{- if .Values.healthcheck.enabled }}
  Health checks are enabled
  Readiness probe: {{ .Values.healthcheck.readinessProbe.httpGet.path }}
  Liveness probe: {{ .Values.healthcheck.livenessProbe.httpGet.path }}
{{- else }}
  Health checks are disabled
{{- end }}

5. Security:
  Running as non-root user (UID: {{ .Values.securityContext.runAsUser }})
  Read-only root filesystem: {{ .Values.securityContext.readOnlyRootFilesystem }}

For more information about this chart, visit:
  https://git.yadunut.dev/yadunut/system

View File

@@ -0,0 +1,81 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "system.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If the release name contains the chart name it will be used as the full name.
*/}}
{{- define "system.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "system.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "system.labels" -}}
helm.sh/chart: {{ include "system.chart" . }}
{{ include "system.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "system.selectorLabels" -}}
app.kubernetes.io/name: {{ include "system.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{/*
Create the name of the service account to use
*/}}
{{- define "system.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "system.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

{{/*
Create the image name
*/}}
{{- define "system.image" -}}
{{- $tag := .Values.image.tag | default .Chart.AppVersion }}
{{- printf "%s:%s" .Values.image.repository $tag }}
{{- end }}

{{/*
Database secret name
*/}}
{{- define "system.databaseSecretName" -}}
{{- if .Values.database.postgresql.existingSecret }}
{{- .Values.database.postgresql.existingSecret }}
{{- else }}
{{- include "system.fullname" . }}-postgresql
{{- end }}
{{- end }}
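For a concrete sense of what `system.fullname` renders, here is a plain-shell sketch of the same contains/truncate/trim logic (the `fullname` function is a hypothetical stand-in, not part of the chart; Helm itself does this with Sprig's `contains`, `trunc 63`, and `trimSuffix "-"`):

```shell
# Plain-shell sketch of the "system.fullname" helper (illustrative only).
fullname() {
  release="$1"; chart="$2"
  case "$release" in
    *"$chart"*) name="$release" ;;             # release already contains the chart name
    *)          name="${release}-${chart}" ;;  # otherwise join release and chart
  esac
  # trunc 63 | trimSuffix "-"
  printf '%s\n' "$name" | cut -c1-63 | sed 's/-$//'
}

fullname my-release system    # prints "my-release-system"
fullname system-prod system   # prints "system-prod"
```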

View File

@@ -0,0 +1,16 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "system.fullname" . }}-config
  labels:
    {{- include "system.labels" . | nindent 4 }}
data:
  PORT: {{ .Values.app.port | quote }}
  NODE_ENV: {{ .Values.app.env | quote }}
  DATABASE_TYPE: {{ .Values.database.type | quote }}
  {{- if eq .Values.database.type "postgresql" }}
  POSTGRES_HOST: {{ .Values.database.postgresql.host | quote }}
  POSTGRES_PORT: {{ .Values.database.postgresql.port | quote }}
  POSTGRES_DB: {{ .Values.database.postgresql.database | quote }}
  POSTGRES_USER: {{ .Values.database.postgresql.username | quote }}
  {{- end }}

View File

@@ -0,0 +1,77 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "system.fullname" . }}
  labels:
    {{- include "system.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "system.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "system.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "system.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: {{ include "system.image" . }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: {{ .Values.service.targetPort }}
              protocol: TCP
          env:
            - name: PORT
              value: {{ .Values.app.port | quote }}
            - name: NODE_ENV
              value: {{ .Values.app.env | quote }}
            {{- if eq .Values.database.type "postgresql" }}
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: {{ include "system.databaseSecretName" . }}
                  key: database-url
            {{- end }}
          {{- if .Values.healthcheck.enabled }}
          livenessProbe:
            {{- toYaml .Values.healthcheck.livenessProbe | nindent 12 }}
          readinessProbe:
            {{- toYaml .Values.healthcheck.readinessProbe | nindent 12 }}
          {{- end }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
            - name: tmp
              mountPath: /tmp
      volumes:
        - name: tmp
          emptyDir: {}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}

View File

@@ -0,0 +1,32 @@
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "system.fullname" . }}
  labels:
    {{- include "system.labels" . | nindent 4 }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "system.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    {{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
    {{- end }}
    {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
    {{- end }}
{{- end }}

View File

@@ -0,0 +1,59 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "system.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if and .Values.ingress.className (not (hasKey .Values.ingress.annotations "kubernetes.io/ingress.class")) }}
  {{- $_ := set .Values.ingress.annotations "kubernetes.io/ingress.class" .Values.ingress.className }}
{{- end }}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "system.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if and .Values.ingress.className (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }}
  ingressClassName: {{ .Values.ingress.className }}
  {{- end }}
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            {{- if and .pathType (semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion) }}
            pathType: {{ .pathType }}
            {{- end }}
            backend:
              {{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }}
              service:
                name: {{ $fullName }}
                port:
                  number: {{ $svcPort }}
              {{- else }}
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
              {{- end }}
          {{- end }}
    {{- end }}
{{- end }}

View File

@@ -0,0 +1,2 @@
# PVC template removed - not needed for production PostgreSQL deployment
# PGLite is only used in test environments

View File

@@ -0,0 +1,12 @@
{{- if and (eq .Values.database.type "postgresql") (not .Values.database.postgresql.existingSecret) }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "system.fullname" . }}-postgresql
  labels:
    {{- include "system.labels" . | nindent 4 }}
type: Opaque
data:
  postgresql-password: {{ "changeme" | b64enc | quote }}
  database-url: {{ printf "postgresql://%s:changeme@%s:%d/%s" .Values.database.postgresql.username .Values.database.postgresql.host (.Values.database.postgresql.port | int) .Values.database.postgresql.database | b64enc | quote }}
{{- end }}
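The `database-url` key is just the base64 of a rendered connection string. With the chart's default username/database (`system2`) and the placeholder password hard-coded in this template, the equivalent shell is roughly the following sketch (the host `db.internal` is a made-up example value):

```shell
# Sketch of what the secret template's printf | b64enc produces.
# "changeme" comes from the template; db.internal is a made-up host.
user=system2; host=db.internal; port=5432; db=system2
url=$(printf 'postgresql://%s:changeme@%s:%d/%s' "$user" "$host" "$port" "$db")
echo "$url"                   # prints "postgresql://system2:changeme@db.internal:5432/system2"
printf '%s' "$url" | base64   # the value stored under data.database-url
```

In practice you would set `database.postgresql.existingSecret` rather than ship the `changeme` default.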

View File

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ include "system.fullname" . }}
  labels:
    {{- include "system.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.targetPort }}
      protocol: TCP
      name: http
  selector:
    {{- include "system.selectorLabels" . | nindent 4 }}

View File

@@ -0,0 +1,12 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "system.serviceAccountName" . }}
  labels:
    {{- include "system.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}

View File

@@ -0,0 +1,106 @@
# Production environment overrides for system
replicaCount: 3

image:
  repository: system/api
  tag: "latest"
  pullPolicy: Always

app:
  env: production

database:
  type: postgresql
  postgresql:
    enabled: true
    host: postgres-prod.internal
    port: 5432
    database: system2_production
    username: system2_production
    existingSecret: system-prod-postgresql

persistence:
  enabled: false

resources:
  limits:
    cpu: 2000m
    memory: 2Gi
  requests:
    cpu: 500m
    memory: 512Mi

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70
  targetMemoryUtilizationPercentage: 80

ingress:
  enabled: true
  className: "nginx"
  annotations:
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/rate-limit: "100"
    nginx.ingress.kubernetes.io/rate-limit-window: "1m"
  hosts:
    - host: api.system2.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: system-prod-tls
      hosts:
        - api.system2.example.com

healthcheck:
  enabled: true
  readinessProbe:
    initialDelaySeconds: 15
    periodSeconds: 5
    timeoutSeconds: 3
    failureThreshold: 3
  livenessProbe:
    initialDelaySeconds: 120
    periodSeconds: 30
    timeoutSeconds: 5
    failureThreshold: 3

# Production-specific security settings
podSecurityContext:
  fsGroup: 2000
  runAsNonRoot: true
  runAsUser: 1000
  seccompProfile:
    type: RuntimeDefault

securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000

# Pod disruption budget for high availability
podDisruptionBudget:
  enabled: true
  minAvailable: 2

# Node affinity for production workloads
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/name
                operator: In
                values:
                  - system
          topologyKey: kubernetes.io/hostname

View File

@@ -0,0 +1,64 @@
# Staging environment overrides for system
replicaCount: 2

image:
  repository: system/api
  tag: "staging"
  pullPolicy: Always

app:
  env: staging

database:
  type: postgresql
  postgresql:
    enabled: true
    host: postgres-staging.internal
    port: 5432
    database: system2_staging
    username: system2_staging
    existingSecret: system-staging-postgresql

persistence:
  enabled: false

resources:
  limits:
    cpu: 1000m
    memory: 1Gi
  requests:
    cpu: 200m
    memory: 256Mi

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
  targetMemoryUtilizationPercentage: 80

ingress:
  enabled: true
  className: "nginx"
  annotations:
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
  hosts:
    - host: api-staging.system2.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: system-staging-tls
      hosts:
        - api-staging.system2.example.com

healthcheck:
  enabled: true
  readinessProbe:
    initialDelaySeconds: 10
    periodSeconds: 5
  livenessProbe:
    initialDelaySeconds: 60
    periodSeconds: 30

125
charts/system/values.yaml Normal file
View File

@@ -0,0 +1,125 @@
# Default values for system
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: harbor.yadunut.dev/yadunut/system-api
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext:
  fsGroup: 2000
  runAsNonRoot: true
  runAsUser: 1000

securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000

service:
  type: ClusterIP
  port: 3000
  targetPort: 3000

ingress:
  enabled: false
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: system.local
      paths:
        - path: /
          pathType: Prefix
  tls: []
  #  - secretName: system-tls
  #    hosts:
  #      - system.local

resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}

# Application configuration
app:
  port: 3000
  env: development

# Database configuration
database:
  # Use external PostgreSQL
  type: postgresql
  # PostgreSQL configuration
  postgresql:
    enabled: false
    host: ""
    port: 5432
    database: system2
    username: system2
    # Password should be provided via secret
    existingSecret: ""
    secretKey: "postgresql-password"

# Persistent volume configuration (not used in production with external PostgreSQL)
persistence:
  enabled: false
  # storageClass: "-"
  accessMode: ReadWriteOnce
  size: 1Gi

# Health checks
healthcheck:
  enabled: true
  livenessProbe:
    httpGet:
      path: /
      port: http
    initialDelaySeconds: 30
    periodSeconds: 10
  readinessProbe:
    httpGet:
      path: /
      port: http
    initialDelaySeconds: 5
    periodSeconds: 5
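With these defaults plus the environment override files above, a deployment might look like the following sketch (release name and namespace are made-up examples, and a cluster with `helm` configured is assumed):

```shell
# Install or upgrade using the staging overrides from this commit.
helm upgrade --install system charts/system \
  --namespace system --create-namespace \
  -f charts/system/values-staging.yaml
```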

View File

@@ -12,6 +12,8 @@
    git
    bun
    cargo-generate
    kubernetes-helm
    helm-ls
  ];
  # https://devenv.sh/languages/

47
docs/ROADMAP.md Normal file
View File

@@ -0,0 +1,47 @@
# ROADMAP

This document outlines the roadmap of tasks done by Claude in this project.

## Development Workflow

1. **Task Planning**
   - Study the existing code and documentation to understand the current state
   - Update `ROADMAP.md` to include the new task
2. **Task Creation**
   - Study the existing code and documentation to understand the current state
   - Create a new task in the `docs/tasks/` directory
   - Name format: `XX-<task-name>.md` (e.g., `01-setup-docker.md`)
   - Include a high-level specification of the task, including:
     - Relevant files
     - Purpose and goals
     - Key components and technologies involved
     - Expected outcomes and deliverables
     - Tests (if applicable)
     - Implementation steps
     - TODO items and subtasks
   - Note that since this is a new task, the TODO items should be unchecked.
3. **Task Implementation**
   - Study the existing code and documentation to understand the current state
   - Follow the specification from the task file
   - Implement features and functionality
   - Update step progress within the task file after each step
4. **Roadmap Update**
   - Mark completed tasks with [X] in the `ROADMAP.md` file
   - Add a reference to the task file (e.g. See [01-setup-docker.md](docs/tasks/01-setup-docker.md))

## Development Tasks

- [x] **Task 00: Helm Chart Deployment** - See [00-helm-chart-deployment.md](docs/tasks/00-helm-chart-deployment.md)
  - Create a Helm chart for Kubernetes deployment of the application.
  - Create the `charts/` directory structure with the standard Helm chart layout (Chart.yaml, values.yaml, templates/)
  - Define Kubernetes deployment manifests for the Bun-based API server with proper resource limits and health checks
  - Configure PostgreSQL database deployment or external database connection options in the chart
  - Set up ingress configuration with a configurable domain and TLS certificate management
  - Add a comprehensive values.yaml with environment-specific overrides for development, staging, and production deployments

View File

@@ -0,0 +1,182 @@
# Task 00: Helm Chart Deployment

## Overview

Create a comprehensive Helm chart for Kubernetes deployment of the Bun-based API server application. This task will enable containerized deployment across different environments (staging, production) with proper configuration management and scalability.

## Relevant Files

- `api/Dockerfile` - Existing container definition for the Bun API server
- `api/package.json` - Application dependencies and metadata
- `api/src/index.ts` - Main application entry point using Effect-TS
- `docker-compose.yml` - Current PostgreSQL development setup
- `charts/` - New directory for Helm chart files (to be created)

## Purpose and Goals

- Enable Kubernetes deployment of the Bun API server across multiple environments
- Provide configurable database connection options (embedded PGLite vs external PostgreSQL)
- Implement proper resource management, health checks, and scaling capabilities
- Support ingress configuration with TLS certificate management
- Allow environment-specific customization through values files

## Key Components and Technologies

- **Helm 3.x** - Kubernetes package manager for templating and deployment
- **Kubernetes** - Container orchestration platform
- **Bun Runtime** - JavaScript/TypeScript runtime (oven/bun:1.2.19-alpine base image)
- **Effect-TS** - Functional programming framework used by the API
- **PostgreSQL** - Database option for production deployments
- **Ingress Controller** - For external traffic routing and TLS termination
## Expected Outcomes and Deliverables

### Chart Structure

```
charts/system/
├── Chart.yaml            # Chart metadata and version information
├── values.yaml           # Default configuration values
├── values-staging.yaml   # Staging environment overrides
├── values-prod.yaml      # Production environment overrides
└── templates/
    ├── deployment.yaml   # API server deployment manifest
    ├── service.yaml      # Service definition for API endpoints
    ├── ingress.yaml      # Ingress configuration with TLS
    ├── configmap.yaml    # Configuration data for the application
    ├── secret.yaml       # Sensitive configuration (database credentials)
    ├── hpa.yaml          # Horizontal Pod Autoscaler (optional)
    └── NOTES.txt         # Post-installation instructions
```

### Key Features

- **Multi-environment support** with environment-specific values files
- **Database flexibility** - configurable PostgreSQL or PGLite usage
- **Resource management** with CPU/memory limits and requests
- **Health checks** - readiness and liveness probes for the API server
- **Horizontal scaling** capability with HPA configuration
- **Ingress configuration** with configurable domains and TLS certificates
- **Security** - non-root container execution and proper secret management

## Tests

### Validation Tests

- [x] Helm chart linting (`helm lint charts/system/`)
- [x] Template rendering validation (`helm template charts/system/`)
- [x] Values schema validation for all environment files
- [x] Kubernetes manifest syntax validation
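The validation steps above can be run locally without a cluster; one possible way to script them (a sketch, assuming `helm` is on the PATH):

```shell
helm lint charts/system/
helm template charts/system/ > /dev/null
helm template charts/system/ -f charts/system/values-staging.yaml > /dev/null
helm template charts/system/ -f charts/system/values-prod.yaml > /dev/null
```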
## Implementation Steps

### Step 1: Create Chart Foundation

- [x] Create `charts/system/` directory structure
- [x] Initialize `Chart.yaml` with proper metadata
- [x] Create base `values.yaml` with comprehensive default values
- [x] Set up templating helpers in `_helpers.tpl`

### Step 2: Core Kubernetes Manifests

- [x] Create `deployment.yaml` template for the Bun API server
  - Configure container image and tag templating
  - Set up resource limits and requests
  - Add environment variable configuration
  - Implement readiness and liveness probes
- [x] Create `service.yaml` template for API endpoints
  - Configure service type and port mapping
  - Add service annotations for load balancer configuration
- [x] Create `configmap.yaml` for application configuration
  - Environment-specific settings
  - Database connection parameters

### Step 3: Database Configuration

- [x] Add PostgreSQL database deployment option in templates
  - External database connection configuration
- [x] Configure PGLite embedded database option
  - Volume mounting for data persistence
  - Memory/storage configuration
- [x] Create `secret.yaml` template for database credentials
  - Templated secret generation
  - External secret integration capabilities

### Step 4: Ingress and Networking

- [x] Create `ingress.yaml` template
  - Configurable host domains
  - TLS certificate management (cert-manager integration)
  - Path-based routing configuration
  - Ingress class configuration
- [ ] Add network policies (optional)
  - Database access restrictions
  - External traffic controls

### Step 5: Scaling and Performance

- [x] Create `hpa.yaml` template for horizontal pod autoscaling
  - CPU- and memory-based scaling triggers
  - Custom metrics integration capabilities
- [ ] Add resource monitoring configurations
  - ServiceMonitor for Prometheus (if applicable)
  - Logging configuration

### Step 6: Environment-Specific Values

- [x] Create `values-staging.yaml`
  - Multi-replica setup
  - External PostgreSQL configuration
  - Production-like resource allocation
  - Staging domain configuration
- [x] Create `values-prod.yaml`
  - High availability configuration
  - External PostgreSQL with connection pooling
  - Strict resource limits and security policies
  - Production domain and TLS settings

### Step 7: Documentation and Validation

- [x] Create comprehensive `NOTES.txt` with deployment instructions
- [x] Add inline documentation to all template files
- [ ] Create deployment guide in `charts/system/README.md`
- [x] Validate all templates with different values files
## TODO Items and Subtasks

### Prerequisites

- [ ] Verify Kubernetes cluster access and Helm installation
- [ ] Determine ingress controller type (nginx, traefik, etc.)
- [ ] Identify certificate management strategy (cert-manager, manual)
- [ ] Choose container registry for image storage

### Database Strategy

- [ ] Define database migration strategy for Kubernetes deployment
- [ ] Configure backup and restore procedures for PostgreSQL
- [ ] Set up database monitoring and alerting
- [ ] Plan for database scaling and connection pooling

### Security Considerations

- [ ] Implement Pod Security Standards compliance
- [ ] Configure RBAC permissions for the application
- [ ] Set up secret rotation strategies
- [ ] Add network security policies

### Monitoring and Observability

- [ ] Integrate with logging infrastructure (Fluentd, Logstash)
- [ ] Add metrics collection (Prometheus integration)
- [ ] Configure distributed tracing (if applicable)
- [ ] Set up alerting rules for application health

### CI/CD Integration

- [ ] Create GitHub Actions workflow for chart testing
- [ ] Set up automated chart versioning and publishing
- [ ] Add chart security scanning
- [ ] Implement automated deployment pipelines

This task will provide a production-ready Kubernetes deployment solution for the Bun-based API server, enabling scalable and manageable deployments across multiple environments.