I know it can be tedious and a time sink to read through an API reference or schema. But it is absolutely worth it, especially if you also take notes for yourself.
Let me explain. I have been wrapping my head around how to get up and running with CloudNativePG for my databases, starting with Linkding, but with plans to expand it to a half dozen other self-hosted apps.
I didn't want to just copy someone else's manifest, or ask GenAI to create a manifest for me. I took the time to read what each field was for, and went down the rabbit hole reading first the Cluster reference, then ClusterSpec and several deeper fields within that.
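To show where that reading landed me, here is a minimal sketch of a CloudNativePG Cluster manifest, roughly the shape of what I'm working toward for Linkding. The names, storage size, and bootstrap values below are placeholders rather than my final configuration, and I'm assuming the cluster's default storage class.

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: linkding-db            # placeholder name
  namespace: linkding
spec:
  instances: 2                 # one primary plus one replica
  storage:
    size: 5Gi                  # PVC size for each instance's data directory
  bootstrap:
    initdb:
      database: linkding       # database created on first bootstrap
      owner: linkding          # role that owns the database
```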
This is part 3 of my series detailing the transition of my homelab architecture to Kubernetes on Proxmox. You can check out the previous parts here:
Kubernetes has become the logical next step for my homelab. I've pointed this out briefly in parts 1 and 2. I enjoy the break-and-fix cycle of learning in my homelab. However, over the years I have found more joy in making things work consistently, from the very start, every time.
After getting my Linkding deployment working through my FluxCD GitOps lifecycle, I quickly realized that I was missing some key functionality and configuration steps.
The first was that my Linkding app wasn't exposed for me to access on my local network. It was only reachable from a node in my cluster that could access the cluster IP addresses. This is a problem if I'm planning to use Linkding for all my bookmarks!
The next was that the Deployment does not declare any superuser account. In the original version of my Deployment I had to exec into the container and run a Python script to create my superuser name and password before I could ever log in. Very tedious! Not what I want if my aim is a declarative, stateful Deployment that could be applied to a brand new Kubernetes cluster with a superuser already set up and configured. I have the PersistentVolumeClaim set up so the data directory persists within the cluster, but an initial or bootstrap deploy to a brand new cluster would not result in any superuser account being created. This relates to the idea of idempotency: I want the Deployment to be applied the first time, and any number of times after that, without changing the outcome beyond the initial deployment.
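One straightforward way to address the exposure problem is a Service that is reachable from outside the cluster. Here's a minimal sketch, assuming a LoadBalancer implementation such as MetalLB is available (NodePort would also work); the names and labels are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: linkding
  namespace: linkding
spec:
  type: LoadBalancer        # or NodePort if no LoadBalancer implementation exists
  selector:
    app: linkding           # must match the Deployment's pod labels
  ports:
    - port: 9090            # linkding's default port
      targetPort: 9090
```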
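One declarative way to handle the superuser is linkding's support for creating it from environment variables at startup (LD_SUPERUSER_NAME and LD_SUPERUSER_PASSWORD), which only takes effect if the account doesn't already exist. Here's a sketch of what that could look like; the Secret and PVC names are placeholders, and this isn't necessarily my final manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: linkding
  namespace: linkding
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linkding
  template:
    metadata:
      labels:
        app: linkding
    spec:
      containers:
        - name: linkding
          image: sissbruecker/linkding:latest   # pin a specific tag in practice
          ports:
            - containerPort: 9090
          env:
            - name: LD_SUPERUSER_NAME
              value: admin                      # created on startup only if it doesn't exist
            - name: LD_SUPERUSER_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: linkding-superuser      # placeholder Secret name
                  key: password
          volumeMounts:
            - name: data
              mountPath: /etc/linkding/data     # linkding's data directory
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: linkding-data            # placeholder PVC name
```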
These updates support declarative, repeatable deployments of Linkding and improve security by not hardcoding credentials.
In this post, I continue my journey evolving my homelab from a simple Proxmox setup to a more robust Kubernetes-based architecture. Building on part one, I’ll share what worked, what didn’t, and how my approach to self-hosting and automation has changed over time.
In this second part of my Secrets Management with External Secrets Operator (ESO) and 1Password series, I will detail how I configured my ESO deployment through GitOps using Flux, Kustomization resources, and Secret resources. You can read the first part here: Secrets Management With External Secrets Operator and 1Password (part 1).
A recap on why ESO: its goal is to synchronize secrets from external providers into Kubernetes Secrets, so they can be more easily accessed and used throughout the cluster.
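As a quick illustration of what that synchronization looks like, here is a minimal ExternalSecret sketch; the store name, 1Password item, and field names are placeholders, not my actual values:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: linkding-superuser
  namespace: linkding
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: onepassword            # placeholder store name
  target:
    name: linkding-superuser     # the Kubernetes Secret that ESO creates and keeps in sync
  data:
    - secretKey: password        # key inside the generated Secret
      remoteRef:
        key: linkding            # 1Password item (placeholder)
        property: password       # field within that item
```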
In this first part of my Secrets Management with External Secrets Operator (ESO) and 1Password series, I'm going to detail how to get ESO deployed through GitOps using Flux, Kustomization resources, and Helm resources. All of these configuration files can be found in my homelab GitHub repository: https://github.com/cyberwatchdoug/homelab/tree/main
What exactly is External Secrets Operator, and why should we use it? Great question. ESO is a Kubernetes operator that solves the problem of managing secrets that live in external sources. The list of supported providers is lengthy, but it includes important players like AWS, Google, Azure, HashiCorp, CyberArk, and 1Password. The goal of the operator is to synchronize secrets from these external sources into Kubernetes Secrets, so they can be more easily accessed and used throughout the cluster.
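To give a feel for the Flux side of the deployment, here is a sketch of a HelmRelease that installs ESO. The chart version is a placeholder, the HelmRepository source is assumed to already exist, and the exact apiVersion depends on your Flux version:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: external-secrets
  namespace: external-secrets
spec:
  interval: 30m
  chart:
    spec:
      chart: external-secrets
      version: "0.x.x"               # placeholder; pin a real chart version
      sourceRef:
        kind: HelmRepository
        name: external-secrets       # assumed HelmRepository pointing at the ESO chart repo
        namespace: flux-system
  values:
    installCRDs: true                # let the chart manage ESO's CRDs
```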
Ever since I started tinkering with FluxCD in my homelab Kubernetes cluster, I've been on a kick to both follow its best practices and automate the orchestration of my self-hosted services.
It's quite a ride! But as this post covers my process for getting the Linkding bookmark service deployed, I will not be going into detail on how to get FluxCD set up and configured. That's a prerequisite you have hopefully already gone through.
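For orientation, this is roughly the shape of a Flux Kustomization that could reconcile the Linkding manifests; the path and GitRepository name are placeholders for my repo layout:

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: linkding
  namespace: flux-system
spec:
  interval: 10m
  path: ./apps/linkding          # placeholder path within the Git repository
  prune: true                    # remove resources that disappear from the repo
  sourceRef:
    kind: GitRepository
    name: homelab                # placeholder GitRepository name
```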
So I've been tinkering with Kubernetes in my homelab for some time now. It's been more of a fun experiment; however, things really started to click for me when I realized how much I enjoyed the declarative orchestration possibilities. Kubernetes is known for container orchestration, and it allows both imperative and declarative management.
Well, I've already been doing imperative management across my whole homelab with my Proxmox setups, VMs, LXCs, and containers within VMs, so I knew what that required. It's a great way to learn, and it truly helps build a strong problem-solving mentality, because you make a configuration update and see the immediate results of your change. When things break, you can review what you just did and learn why it happened and how to resolve it.
As my data consumption and storage needs grow - both at work and at home - reliable automation becomes non-negotiable. Over the past week, I have invested much-needed time in my homelab_ansible GitHub repository, focusing on crafting a robust backup playbook for my paperless-ngx deployment. My goal? A backup workflow I trust to safeguard my documents, regardless of infrastructure or underlying OS.
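To give a feel for its shape, here is a stripped-down sketch of such a playbook; it assumes paperless-ngx runs in Docker and uses its built-in document_exporter command, and the inventory group, container name, and paths are placeholders rather than my actual setup:

```yaml
- name: Back up paperless-ngx
  hosts: paperless                                # placeholder inventory group
  become: true
  tasks:
    - name: Export documents with paperless-ngx's document_exporter
      community.docker.docker_container_exec:
        container: paperless-webserver            # placeholder container name
        command: document_exporter ../export

    - name: Pull the export directory down to the control node
      ansible.posix.synchronize:
        mode: pull
        src: /opt/paperless/export/               # placeholder export path on the host
        dest: /mnt/backups/paperless/             # placeholder destination on the control node
```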