From Proxmox to Kubernetes - Evolving My Homelab (part 3)
This is part 3 of my series detailing a transition of my Homelab architecture to using Kubernetes with Proxmox. You can check out the previous parts here:
- From Proxmox to Kubernetes - Evolving My Homelab (part 1)
- From Proxmox to Kubernetes - Evolving My Homelab (part 2)
In part 3, I’ll explore my implementation strategy for Kubernetes and how it has transformed my homelab.
The Strategic Shift to Kubernetes
Kubernetes has become the logical next step for my homelab, something I hinted at in parts 1 and 2. I enjoy the break-and-fix cycle of learning in my homelab, but over the years I have found more joy in making things work consistently, from the very start, every time.
My LXC-heavy setup worked as intended, but whenever something broke, I had to manually trace the problem and fix it. That was always a great learning process, and it was fun, but not ideal long-term for my sanity! Adopting Kubernetes also aligns my homelab practices with modern production environments, bridging my hobby and learning platform with career skills.
Kubernetes provides a layer of orchestration, in addition to being an industry-standard practice in enterprise environments. That second part was the ultimate driver of this decision; that said, the first part, orchestration, was what ignited the idea for me. I could strategically design and architect my containers as Deployments, which would carry on through failures or updates without my constant intervention and diagnostic microscope. My Deployments would manage ReplicaSets and Pods for me.
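As a concrete illustration of that orchestration, here's a minimal Deployment sketch (the name and image are placeholders, not something from my actual cluster). The Deployment owns a ReplicaSet, and the ReplicaSet keeps the requested number of Pods running, replacing any that fail:

```yaml
# Hypothetical example; name and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: homelab-app
  labels:
    app: homelab-app
spec:
  replicas: 2                 # the ReplicaSet keeps two Pods running at all times
  selector:
    matchLabels:
      app: homelab-app
  template:
    metadata:
      labels:
        app: homelab-app
    spec:
      containers:
        - name: homelab-app
          image: nginx:1.27   # placeholder image
          ports:
            - containerPort: 80
```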
If a Deployment had an update I wanted rolled out, I'd make that change in the manifest, and Kubernetes would perform the update without any downtime to the service. Zero downtime isn't guaranteed in every case, but the orchestration of that update was what I wanted. And the idea that everything could be torn down and redeployed exactly as it was originally is exceedingly gratifying to me.
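To get that rolling-update behaviour, the Deployment just needs an update strategy. This is a sketch rather than my exact configuration; with `maxUnavailable: 0`, Kubernetes brings up a new Pod before taking an old one down, so the service stays available during the rollout:

```yaml
# Added to the Deployment spec above (hypothetical values).
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired replica count
      maxSurge: 1         # allow one extra Pod during the rollout
```

Bumping the image tag in the manifest and re-applying it with `kubectl apply` is then enough to trigger the rollout, and `kubectl rollout undo` gives me a way back if the update misbehaves.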
Enhanced Automation and Service Mesh Capabilities
Since I was already using Ansible to some degree to automate configurations, it made sense to continue that practice. Although I haven't completed it yet, the end goal with Ansible is a complete repository used for consistent configuration across my Kubernetes cluster. The initial focus will be on security-related configuration, such as firewall rules, SELinux, and accounts and permissions. Eventually, though, I want a complete bootstrap process where I can run a few simple Ansible commands to get a brand-new cluster deployed and configured.
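Since that repository isn't finished, the following is only a sketch of the kind of play I have in mind; the host group, port, and account name are assumptions rather than my real configuration:

```yaml
# Sketch of a baseline security play for cluster nodes (hypothetical values).
- name: Baseline security configuration for Kubernetes nodes
  hosts: k8s_nodes
  become: true
  tasks:
    - name: Keep SELinux enforcing with the targeted policy
      ansible.posix.selinux:
        policy: targeted
        state: enforcing

    - name: Allow the Kubernetes API server port through firewalld
      ansible.posix.firewalld:
        port: 6443/tcp
        permanent: true
        immediate: true
        state: enabled

    - name: Create a dedicated, non-root admin account
      ansible.builtin.user:
        name: k8sadmin
        groups: wheel
        append: true
        state: present
```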
Most people don't necessarily think about running a service mesh on Kubernetes. In short, it's a platform of additional capabilities for managing and securing service-to-service communication within the cluster. It handles internal (East-West) traffic between services, providing load balancing, security, and observability features that complement traditional ingress (North-South) routing. The mesh is split into a control plane, which distributes policy and configuration, and a data plane of proxies that carry the actual traffic, and it also surfaces metrics and logs for that traffic. What I like best is the security functionality it can provide: enforcing policies for service access and encrypting traffic between services.
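I haven't committed to a particular mesh yet, but as an illustration, assuming Istio, the kind of policies I mean look like this: the first resource requires mutual TLS for everything in a namespace, and the second only allows a specific workload's service account to call the backend. The namespace, labels, and service account are placeholders:

```yaml
# Hypothetical Istio example; namespace, labels, and service account are placeholders.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: apps
spec:
  mtls:
    mode: STRICT            # all service-to-service traffic in "apps" must use mTLS
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend-only
  namespace: apps
spec:
  selector:
    matchLabels:
      app: backend          # applies to the backend workload
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/apps/sa/frontend"]   # only the frontend may call it
```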
Addressing the Complexity Trade-Off
Adopting Kubernetes in a homelab environment introduces substantial complexity and setup overhead. Writing manifests, learning new abstractions, and integrating with tools I was already using, like Ansible, can be daunting at first. However, the long-term benefits far outweigh the up-front investment in time and learning: automated recovery, seamless updates, improved security, and the ability to scale or redeploy services with minimal effort, to name a few. By embracing this complexity early, I’m building a foundation that will make future growth, experimentation, and maintenance much more manageable and resilient.