The Beginning: Why Kubernetes?

It all started with a simple desire to learn Kubernetes. As someone who’s always been fascinated by container orchestration and distributed systems, I found the idea of managing multiple containers across a cluster incredibly appealing. However, there was one significant challenge: limited resources.

Traditional Kubernetes clusters require substantial computing power and memory, more than my setup could provide. That’s when I discovered K3S, a lightweight Kubernetes distribution designed specifically for resource-constrained environments.

🎬 Learning from NetworkChuck’s Tutorial

I found NetworkChuck’s excellent K3S tutorial, which provided a solid foundation for understanding K3S cluster setup. However, my requirements and hardware differed from the tutorial’s, which led to some interesting improvisations and adaptations.

🎯 What I Built

My K3S Minicube cluster consists of:

  • 1 Raspberry Pi 4B (8GB RAM): Master node running K3S server
  • 2 Raspberry Pi 4B (8GB RAM each): Worker nodes for running workloads
  • 1 Proxmox VM (Ubuntu Server): Rancher management and monitoring
  • 3-Node K3S Cluster: Distributed across physical Raspberry Pi infrastructure

🛠️ The Technical Stack

Adapting the Tutorial to My Setup

While NetworkChuck’s tutorial was excellent, I had to make several adaptations to fit my specific environment:

Hardware Differences

  • Tutorial: Used all Raspberry Pi devices for both master and worker nodes
  • My Setup: Used Raspberry Pi for master node + additional Raspberry Pi devices for workers + separate Proxmox VM for Rancher
  • Reason: I wanted to leverage existing Proxmox infrastructure for management while keeping the K3S cluster on physical Raspberry Pi devices

Storage Requirements

  • Tutorial: Used smaller storage cards
  • My Setup: Required 128GB for master node due to etcd and container image accumulation
  • Learning: Master nodes need significantly more storage than worker nodes

Network Configuration

  • Tutorial: Simple local network setup
  • My Setup: Implemented VLAN isolation and more complex networking
  • Benefit: Better security and network management

Management Layer

  • Tutorial: Basic K3S setup
  • My Setup: Added Rancher for enhanced cluster management
  • Advantage: Better monitoring, user management, and multi-cluster capabilities

K3S: Lightweight Kubernetes

K3S is a certified Kubernetes distribution that’s perfect for edge computing and resource-constrained environments. It’s a single binary that includes everything needed to run Kubernetes.

# Installing K3S on the master node
curl -sfL https://get.k3s.io | sh -

# Getting the join token for worker nodes
sudo cat /var/lib/rancher/k3s/server/node-token

Hardware Configuration

Raspberry Pi 4B Setup

Each Raspberry Pi 4B was configured with:

  • OS: Raspberry Pi OS Lite (64-bit)
  • Storage: 128GB microSD card for master node, 64GB for worker nodes
  • Network: Gigabit Ethernet for reliable cluster communication
  • Power: Stable power supply with UPS backup

Important Note: The master node requires significantly more storage due to etcd database, logs, and container images. I initially used a 32GB SD card for the master node but had to upgrade to 128GB as the cluster grew and accumulated data.

Proxmox VM Configuration

The Rancher management server runs on Proxmox with:

  • OS: Ubuntu Server 22.04 LTS
  • Resources: 4 vCPUs, 8GB RAM, 50GB storage
  • Network: Dedicated VLAN for cluster traffic
  • Storage: NVMe SSD for optimal performance

🏗️ Cluster Architecture

The cluster follows a hybrid architecture combining physical and virtual infrastructure:

Architecture diagram: the K3S master node (etcd database, API server, controller manager, scheduler, kubelet) and the worker nodes (kubelet, container runtime, kube-proxy, pods, services) sit behind an HAProxy load balancer, backed by local storage (persistent volumes, ConfigMaps), a monitoring stack (Prometheus, Grafana, Node Exporter), and the application workloads (web applications, microservices, databases, APIs), with the Proxmox virtualization layer hosting the Rancher management VM.

📦 Installation Process

Step 1: Master Node Setup (Raspberry Pi)

# Update system
sudo apt update && sudo apt upgrade -y

# Install K3S server
curl -sfL https://get.k3s.io | sh -

# Check cluster status
sudo k3s kubectl get nodes

# Get join token
sudo cat /var/lib/rancher/k3s/server/node-token

Step 2: Worker Nodes Setup (Raspberry Pi)

# On each Raspberry Pi
curl -sfL https://get.k3s.io | K3S_URL=https://MASTER_IP:6443 K3S_TOKEN=NODE_TOKEN sh -

# Verify node joined the cluster
sudo k3s kubectl get nodes

Step 3: Rancher Installation (Proxmox VM)

# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Add Rancher Helm repository
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update

# Rancher's default certificate handling requires cert-manager in the cluster
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true

# Install Rancher
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --create-namespace \
  --set hostname=rancher.your-domain.com \
  --set bootstrapPassword=admin123

🔧 Configuration and Optimization

Network Configuration

Each node was configured with static IP addresses and proper DNS resolution:

# /etc/dhcpcd.conf on Raspberry Pi
interface eth0
static ip_address=192.168.1.10/24
static routers=192.168.1.1
static domain_name_servers=8.8.8.8 8.8.4.4

Storage Configuration

Master Node Storage Requirements

The master node has significantly higher storage requirements due to:

  • etcd Database: Stores all cluster state and configuration
  • Container Images: Cached images for deployments
  • Logs: System and application logs
  • Temporary Files: Build artifacts and temporary data

Storage Evolution: I initially used a 32GB SD card for the master node, but as the cluster grew, I experienced storage issues and had to upgrade to 128GB. This is a critical consideration for anyone planning a similar setup.
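
A quick way to see where the space is going is to inspect the K3S data directory on the master node. This is just a spot check using the default K3S paths:

# Overall disk usage on the master node
df -h /

# Break down the K3S data directory (default location);
# etcd state, the containerd image store, and logs usually dominate
sudo du -sh /var/lib/rancher/k3s/* 2>/dev/null | sort -h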

Worker Node Storage

Worker nodes can function with smaller storage (64GB) since they primarily run containers and don’t store cluster state.

Local Storage Classes

For persistent storage, I configured local storage classes:

# local-storage.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
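
Because the class uses the no-provisioner backend, volumes are not created automatically; each PersistentVolume is declared by hand and pinned to a node. A minimal sketch, where the node name, path, and size are illustrative assumptions:

# local-pv.yaml (node name, path, and size are illustrative)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k3s-worker-1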

Resource Limits

Given the limited resources, I implemented strict resource limits:

# Example pod with resource limits
apiVersion: v1
kind: Pod
metadata:
  name: resource-limited-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

🚀 Applications and Workloads

Home Assistant Deployment

One of the first applications deployed was Home Assistant for home automation:

# home-assistant.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: home-assistant
spec:
  replicas: 1
  selector:
    matchLabels:
      app: home-assistant
  template:
    metadata:
      labels:
        app: home-assistant
    spec:
      containers:
      - name: home-assistant
        image: homeassistant/home-assistant:latest
        ports:
        - containerPort: 8123
        volumeMounts:
        - name: config
          mountPath: /config
      volumes:
      - name: config
        persistentVolumeClaim:
          claimName: home-assistant-pvc
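
The deployment references a home-assistant-pvc claim that isn’t shown above. A minimal sketch of that PVC, assuming the local-storage class defined earlier and a 5Gi size:

# home-assistant-pvc.yaml (storage class and size are assumptions)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: home-assistant-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 5Gi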

Monitoring Stack

Implemented a lightweight monitoring solution using Prometheus and Grafana:

# Add the Prometheus community Helm repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install monitoring stack
kubectl create namespace monitoring
helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --set grafana.enabled=true \
  --set prometheus.prometheusSpec.resources.requests.memory=256Mi \
  --set prometheus.prometheusSpec.resources.limits.memory=512Mi
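
To reach the Grafana UI without an Ingress, a port-forward is enough. The service and secret names below follow the chart’s default naming for a release called prometheus and may differ on other setups:

# Forward Grafana to localhost:3000 (service name assumes the defaults above)
kubectl port-forward svc/prometheus-grafana 3000:80 -n monitoring

# Retrieve the generated Grafana admin password
kubectl get secret prometheus-grafana -n monitoring \
  -o jsonpath="{.data.admin-password}" | base64 -d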

📊 Performance and Monitoring

Cluster Metrics

The cluster performance was monitored using Rancher’s built-in monitoring:

  • CPU Usage: Average 30-40% across nodes
  • Memory Usage: 60-70% utilization
  • Network: Stable with minimal latency
  • Storage: Efficient use of local storage
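
Outside Rancher, the same numbers can be spot-checked from the command line. K3S ships with metrics-server by default, so these should work out of the box:

# Per-node CPU and memory usage
kubectl top nodes

# Per-pod usage across all namespaces
kubectl top pods -A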

Resource Optimization

Several optimizations were implemented:

  1. Node Affinity: Critical workloads pinned to specific nodes
  2. Resource Quotas: Prevented resource hogging
  3. Horizontal Pod Autoscaling: Automatic scaling based on demand (see the sketch below)
  4. Pod Disruption Budgets: Ensured high availability
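
A minimal sketch of the autoscaling piece, assuming a Deployment named web that defines CPU requests (K3S includes metrics-server, which the HPA relies on); the name and thresholds are illustrative:

# web-hpa.yaml (Deployment name and thresholds are illustrative)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70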

🎓 Learning Outcomes

Kubernetes Concepts Mastered

Through this project, I gained deep understanding of:

  • Pod Management: Understanding pod lifecycle and scheduling
  • Service Discovery: Internal and external service communication
  • Persistent Storage: PVCs and storage classes
  • Security: RBAC, service accounts, and network policies
  • Monitoring: Metrics collection and alerting
  • Backup and Recovery: Etcd backup and restore procedures

Practical Skills Developed

  • Cluster Administration: Day-to-day cluster management
  • Troubleshooting: Debugging pod and service issues
  • CI/CD Integration: Deploying applications through GitOps
  • Security Hardening: Implementing security best practices
  • Performance Tuning: Optimizing resource usage

🔍 Challenges and Solutions

Challenge 0: Adapting the Tutorial

Problem: NetworkChuck’s tutorial didn’t match my specific hardware and requirements.
Solution: Analyzed the tutorial’s core concepts and adapted them to my hybrid setup (Proxmox VM + Raspberry Pi workers).

Key Improvisations:

  • Master Node Placement: Used Raspberry Pi for master node (as in tutorial) but added separate Proxmox VM for Rancher management
  • Storage Planning: Upgraded from 32GB to 128GB for master node after experiencing storage issues
  • Network Architecture: Implemented VLAN isolation for better security
  • Management Tools: Added Rancher on separate Proxmox VM for enhanced cluster management beyond basic K3S

Challenge 1: Resource Constraints

Problem: Limited RAM and CPU on the Raspberry Pi nodes.
Solution: Implemented strict resource limits and efficient workload distribution.

Challenge 2: Network Stability

Problem: Intermittent network connectivity issues.
Solution: Configured static IPs and implemented health checks (probe sketch below).
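
The health checks were plain liveness and readiness probes on the affected workloads. A minimal snippet for a container spec, with the port and timings as illustrative assumptions:

# Probe snippet for a container spec (port and timings are illustrative)
livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10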

Challenge 3: Storage Management

Problem: Limited storage on microSD cards, especially for the master node.
Solution: Upgraded the master node to a 128GB SD card, used external USB storage for worker nodes, and implemented storage quotas (see the sketch below).
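
The storage quotas were namespace-level ResourceQuotas. A minimal sketch, with the namespace and limits as assumptions:

# storage-quota.yaml (namespace and limits are illustrative)
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: apps
spec:
  hard:
    requests.storage: 20Gi
    persistentvolumeclaims: "5"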

Challenge 4: Monitoring Complexity

Problem: Complex monitoring setup for a small cluster.
Solution: Used a lightweight monitoring stack with custom dashboards.

🚀 Production Readiness

Security Hardening

  • RBAC: Implemented role-based access control
  • Network Policies: Restricted pod-to-pod communication (example below)
  • Secrets Management: Secure handling of sensitive data
  • Regular Updates: Automated security patches
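
As an example of the pod-to-pod restrictions, a default-deny ingress policy per namespace is a common starting point; the namespace name is an assumption:

# default-deny-ingress.yaml (namespace is illustrative)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: apps
spec:
  podSelector: {}
  policyTypes:
  - Ingress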

Backup Strategy

  • Etcd Backup: Automated daily backups (snapshot sketch below)
  • Application Data: Persistent volume backups
  • Configuration: Git-based configuration management
  • Disaster Recovery: Documented recovery procedures
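
K3S has built-in etcd snapshot support when the server runs embedded etcd. A sketch of how the daily backups can be wired up; the snapshot name and schedule are assumptions:

# One-off snapshot on the master node (requires embedded etcd);
# snapshots land in /var/lib/rancher/k3s/server/db/snapshots by default
sudo k3s etcd-snapshot save --name manual-$(date +%F)

# Example root crontab entry for a daily 02:00 snapshot
0 2 * * * /usr/local/bin/k3s etcd-snapshot save --name daily >/dev/null 2>&1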

📈 Future Enhancements

Planned Improvements

  1. Multi-Cluster Management: Expand to multiple clusters
  2. GitOps Implementation: ArgoCD for declarative deployments
  3. Service Mesh: Istio for advanced networking
  4. Advanced Monitoring: Custom metrics and alerting
  5. Security Scanning: Container vulnerability scanning

Scalability Considerations

  • Node Expansion: Adding more Raspberry Pi nodes
  • Storage Scaling: Implementing distributed storage
  • Load Balancing: Advanced load balancing strategies
  • High Availability: Multi-master setup

💡 Key Takeaways

Cost-Effective Learning

This project demonstrated that you don’t need expensive hardware to learn Kubernetes. With careful planning and optimization, even limited resources can provide a robust learning environment.

Practical Experience

Hands-on experience with a real cluster provided insights that theoretical learning couldn’t match. Understanding the challenges of managing distributed systems in production-like conditions was invaluable.

Adaptation and Improvisation

The project taught me the importance of adapting tutorials and guides to fit specific requirements. While NetworkChuck’s tutorial was excellent, I had to improvise and modify the approach to work with my hybrid setup. This real-world problem-solving was invaluable for understanding the underlying concepts.

Community Value

The open-source nature of K3S and the supportive community made this project possible. Contributing back to the community through documentation and sharing experiences became part of the journey.


🎯 Conclusion

Building this K3S minicube cluster was an incredibly rewarding experience that taught me more about Kubernetes than any course or tutorial could. The combination of physical Raspberry Pi devices and a virtual Proxmox environment created a realistic production-like scenario with real-world challenges.

The project demonstrated that with the right tools and approach, you can build sophisticated infrastructure even with limited resources. K3S proved to be an excellent choice for learning Kubernetes, providing all the essential features while being lightweight and resource-efficient.

Most importantly, this project reinforced my belief in hands-on learning and the value of building real systems. The challenges encountered and solved provided practical experience that will be invaluable in professional Kubernetes environments.


This project continues to evolve as I explore new Kubernetes features and best practices. The cluster serves as both a learning platform and a foundation for future homelab projects.

Keywords: K3S, Kubernetes, Raspberry Pi, Proxmox, Rancher, Homelab, DevOps, Container Orchestration, Learning, Resource Optimization