Home Kubernetes Cluster

A 4-node Kubernetes cluster built on Orange Pi 5 single-board computers. It runs real workloads — an AI agent platform, home automation, graph databases, and security scanning — not as a learning exercise alone, but as infrastructure I depend on daily.

Three reasons converged:

Learning by doing. I wanted vanilla Kubernetes experience — not a managed service where the hard parts are abstracted away, and not a simplified distribution that hides the operational reality. Setting up kubeadm on ARM64 SBCs means confronting every layer: CNI networking, storage provisioning, certificate rotation, etcd health, kubelet configuration. The kind of understanding you can’t get from documentation alone.

A platform for AI agents. I’m building OpenClaw, an AI agent platform that needs always-on infrastructure with persistent storage, network policies, and the ability to run multiple interconnected services. A home cluster gives me full control over the stack without cloud costs that scale with experimentation.

Production-grade homelab. Home Assistant controls physical systems in my house. That demands reliability — not “hobby project” reliability, but actual operational discipline. Running it on Kubernetes forces me to think about high availability, rolling updates, and failure recovery for something that matters.

```mermaid
graph TB
    subgraph tailscale["Tailscale Mesh Network"]
        direction TB
        subgraph cluster["Kubernetes Cluster — v1.28.2 (kubeadm)"]
            direction LR
            subgraph node1["Node 1 — Control Plane"]
                api["API Server / etcd"]
                cilium1["Cilium Agent"]
            end
            subgraph node2["Node 2"]
                openclaw["OpenClaw Platform"]
                mcp["MCP Gateway"]
                signal["Signal CLI"]
                cilium2["Cilium Agent"]
            end
            subgraph node3["Node 3"]
                ha["Home Assistant"]
                falkor["FalkorDB (Graphiti)"]
                tools["MCP Tool Servers"]
                cilium3["Cilium Agent"]
            end
            subgraph node4["Node 4"]
                workloads["Additional Workloads"]
                cilium4["Cilium Agent"]
            end
        end
        subgraph storage["Longhorn Distributed Storage"]
            vol1["Replicated Volumes (2x)"]
        end
    end
    cluster --> storage
    tailscale -.->|"Encrypted Overlay"| internet["Remote Access"]
```

Each node is an Orange Pi 5 — Rockchip RK3588S, eight ARM cores (4× Cortex-A76 + 4× Cortex-A55), 16 GB RAM. Total cluster capacity: 32 cores, 64 GB RAM. Modest by cloud standards, substantial for a homelab.

Not K3s, not MicroK8s. Vanilla Kubernetes deployed with kubeadm. The tradeoff is more operational overhead in exchange for a cluster that behaves exactly like production Kubernetes everywhere else. When I troubleshoot an issue here, the knowledge transfers directly to any enterprise or cloud deployment.

Version: v1.28.2
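
To give a sense of what "vanilla" means in practice, here is a minimal sketch of a kubeadm configuration for a cluster like this one. The values (pod CIDR, the skipped kube-proxy addon left for Cilium) are illustrative assumptions, not the actual file used here.

```yaml
# Illustrative kubeadm configuration; values are assumptions, not this
# cluster's actual config.
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
skipPhases:
  - addon/kube-proxy            # Cilium's eBPF dataplane replaces kube-proxy
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.2
networking:
  podSubnet: 10.244.0.0/16      # assumed pod CIDR, handed to the CNI at install
```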

Cilium replaces kube-proxy and handles all networking through eBPF programs attached to hooks inside the Linux kernel. Two reasons this matters on resource-constrained nodes:

  • Performance. eBPF avoids the iptables chains that scale poorly with service count. On nodes with 16 GB RAM running multiple workloads, efficiency matters.
  • Network policies. CiliumNetworkPolicy resources provide L3-L7 policy enforcement. Every namespace has explicit ingress/egress rules. Home Assistant, which controls physical devices, gets locked down to only the traffic it needs; a policy sketch follows below.
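
As an illustration of that lockdown pattern, here is a minimal CiliumNetworkPolicy in the spirit of what runs here. The labels, namespace names, port, and CIDR are assumed placeholders, not the actual policy.

```yaml
# Hypothetical policy: allow ingress to Home Assistant only from the ingress
# controller, and egress only to cluster DNS and the local device subnet.
# All selectors and the CIDR below are assumptions for illustration.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: home-assistant-lockdown
  namespace: home-assistant
spec:
  endpointSelector:
    matchLabels:
      app: home-assistant
  ingress:
    - fromEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: ingress-nginx
      toPorts:
        - ports:
            - port: "8123"        # Home Assistant web UI
              protocol: TCP
  egress:
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
    - toCIDR:
        - 192.168.1.0/24          # assumed local IoT/device subnet
```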

Longhorn provides distributed block storage across all four nodes with 2x replication. When a node goes down for maintenance, volumes remain available. I chose Longhorn because it’s designed for commodity hardware — it doesn’t assume enterprise SSDs or dedicated storage networks. It runs on the same disks the OS uses, which is exactly the constraint SBCs impose.

Version: v1.10.1, 2x replication factor
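
A StorageClass capturing that 2x replication looks roughly like this; parameters other than numberOfReplicas are assumptions rather than this cluster's actual settings.

```yaml
# Sketch of a Longhorn StorageClass with 2x replication; values other than
# numberOfReplicas are illustrative assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-2x
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Retain             # assumed; Delete is the more common default
parameters:
  numberOfReplicas: "2"           # each volume survives one node outage
  staleReplicaTimeout: "30"       # assumed: minutes to wait before cleaning up an unreachable replica
```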

Every node joins a Tailscale mesh network. No ports exposed to the public internet. Remote access works through WireGuard tunnels with identity-based authentication. This is the only path into the cluster from outside the local network.

All workloads are defined as Helm charts and deployed through a GitOps workflow. Infrastructure changes go through version control. This isn’t optional when you’re running a cluster you can’t physically access half the time — you need to know exactly what’s deployed and why.
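
To make that concrete, here is a hypothetical Flux HelmRelease and its chart source. The chart name, repository URL, and values are placeholders, and the exact API versions depend on the Flux release installed.

```yaml
# Hypothetical GitOps manifests; names, URL, and values are placeholders.
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: example-charts
  namespace: flux-system
spec:
  interval: 1h
  url: https://charts.example.com   # placeholder chart repository
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: home-assistant
  namespace: home-assistant
spec:
  interval: 10m                     # how often Flux reconciles drift
  chart:
    spec:
      chart: home-assistant         # placeholder chart name
      sourceRef:
        kind: HelmRepository
        name: example-charts
        namespace: flux-system
  values:
    persistence:
      storageClass: longhorn        # illustrative value override
```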

| Workload | Purpose |
| --- | --- |
| OpenClaw | AI agent platform — long-running autonomous agents with tool access, memory, and inter-agent communication |
| Home Assistant | Home automation — thermostat, lighting, presence detection, physical device control |
| MCP Gateway | Model Context Protocol router — connects AI agents to tools and data sources |
| Signal CLI | Messaging integration — agents can send and receive Signal messages |
| FalkorDB | Graph database backing Graphiti — episodic and semantic memory for AI agents |
| MCP Tool Servers | Various tool servers — web search, file access, calendar, and custom integrations |
| Security Scanning | Trivy and kube-bench for vulnerability scanning and CIS benchmark compliance |

Each major infrastructure domain has a dedicated page with architecture details, configuration specifics, and operational lessons:

  • Networking — Cilium eBPF dataplane, CiliumNetworkPolicy patterns, MetalLB L2 load balancing, ingress-nginx routing
  • Storage — Longhorn distributed block storage, 2x replication strategy, PVC patterns, backup roadmap
  • GitOps — Flux v2 reconciliation, Helm chart management, deployment workflows, secrets with Vault and External Secrets Operator

Not everything publishes ARM64 container images. Every new tool requires checking multi-arch support before adoption. Some projects publish linux/amd64 only, which means either finding alternatives, building from source, or contributing ARM64 support upstream. This is getting better year over year, but it’s still a real constraint.

The Orange Pi 5 runs a Rockchip vendor kernel (6.1.115-vendor-rk35xx). This isn’t mainline Linux — it includes proprietary patches for hardware support. The practical impact: kernel features like BTF (BPF Type Format) may not be available, which affects tools that depend on CO-RE (Compile Once, Run Everywhere) eBPF. Falco, for example, needs to fall back to its kernel module driver instead of modern eBPF on these nodes.
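
In practice that fallback is a small change in Falco's Helm values, roughly as below; the exact key names and accepted driver values vary by chart version, so treat this as a hedged sketch.

```yaml
# Hedged sketch of Falco Helm values forcing the kernel-module driver on a
# kernel without BTF; key names depend on the falco chart version.
driver:
  enabled: true
  kind: kmod        # instead of modern_ebpf (CO-RE), which needs BTF
```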

16 GB of RAM per node sounds generous until you’re running a graph database, an AI agent platform, a home automation system, and Kubernetes system components on the same hardware. Resource requests and limits aren’t aspirational here — they’re load-bearing. Every workload gets explicit CPU and memory bounds, and I’ve learned exactly what happens when you get them wrong.
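
Concretely, every container spec carries bounds along these lines; the numbers are illustrative, not the actual per-workload values.

```yaml
# Illustrative resource bounds for a single container; actual values differ
# per workload.
resources:
  requests:
    cpu: 250m          # scheduling guarantee on a shared 8-core node
    memory: 512Mi
  limits:
    cpu: "1"
    memory: 1Gi        # exceeding the memory limit gets the container OOM-killed
```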

Single-board computers aren’t designed for 24/7 server workloads. Thermal management matters — sustained CPU load on passively cooled boards will thermal-throttle. Storage I/O on eMMC or SD cards has different reliability characteristics than enterprise SSDs. Power supplies need to be reliable; an unstable 5V rail takes out a node. These are infrastructure problems, not Kubernetes problems, but they’re inseparable when your data center is a shelf in your office.

This cluster has taught me more about Kubernetes operations than any managed service could. Certificate expiration, etcd compaction, node drain procedures, storage rebalancing after a node failure — these are things you read about in documentation but internalize only when they happen at 11 PM and Home Assistant stops working.

| Component | Detail |
| --- | --- |
| Nodes | 4× Orange Pi 5 (RK3588S) |
| CPU | 8 cores per node (32 total) — ARM Cortex-A76/A55 |
| RAM | 16 GB per node (64 GB total) |
| Kubernetes | v1.28.2 via kubeadm |
| CNI | Cilium with eBPF |
| Storage | Longhorn v1.10.1, 2× replication |
| Networking | Tailscale mesh (WireGuard) |
| Deployment | Helm charts, GitOps |
| Kernel | 6.1.115-vendor-rk35xx (Rockchip) |
| Architecture | ARM64 (aarch64) |