
Updating Linkding Deployment With Cloudflare Tunnels

This is an update to my previous post regarding my process for creating a Linkding self-hosted service with FluxCD in my homelab Kubernetes cluster.

You can read the original post here: Creating Linkding Deployment with FluxCD

After getting my Linkding deployment working through my FluxCD GitOps lifecycle, I quickly realized that I was missing some key functionality and configuration steps.

The first was that my Linkding app wasn't being exposed for me to access locally on my network. It was only reachable from a node in my cluster that could access the cluster IP addresses. That is a problem if I'm planning to use Linkding for all my bookmarks!

The next was that the Deployment did not declare any superuser account. In the original version of my Deployment I had to run an exec command inside the container and step through a Python script to create my superuser name and password before I could ever log in. Very tedious! Not what I want if my aim is a declarative Deployment that could be applied to a brand new Kubernetes cluster with a superuser already set up and configured. I have the PersistentVolumeClaim set up so the data directory persists within the cluster, but an initial or bootstrap deploy to a brand new cluster would not result in any superuser account being created. This relates to the idea of idempotency: I want to be able to apply the Deployment the first time, and any number of times after that, without changing the outcome beyond the initial deployment.
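
For context, that manual step was something along the lines of the following (a sketch, assuming the Deployment and namespace are both named linkding; Linkding is a Django app, so it ships Django's createsuperuser management command):

# the old, manual way: create the superuser inside the running container
kubectl -n linkding exec -it deploy/linkding -- python manage.py createsuperuser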

These updates support declarative, repeatable deployments of Linkding and improve security by not hardcoding credentials.

For a full breakdown of this updated structure to my Linkding Deployment you can check out my homelab GitHub repository at https://github.com/cyberwatchdoug/homelab/tree/main

Updated Architecture

If you recall, my GitOps repository for Flux is organized into folders for my apps under apps/base and apps/staging. The updates to that architecture are the addition of three files, service.yaml, ingress.yaml, and secrets.yaml, as well as updating the kustomization.yaml file in the apps/base/linkding/ directory to add these three new files to the resources list.

Here's a visual of my updated folder setup:

homelab
└──apps
   ├── base
   │   └── linkding
   │       ├── deployment.yaml (updated)
   │       ├── kustomization.yaml (updated)
   │       ├── namespace.yaml
   │       ├── storage.yaml
   │       ├── service.yaml (new)
   │       ├── ingress.yaml (new)
   │       └── secrets.yaml (new)
   └── staging
       └── linkding
           └── kustomization.yaml

Updated Kustomization File

Nothing major here, just the addition of the service.yaml, ingress.yaml, and secrets.yaml files to the resources list:

kustomization.yaml apps/base/linkding/

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: linkding
resources:
  - namespace.yaml
  - storage.yaml
  - deployment.yaml
  - service.yaml
  - ingress.yaml
  - secrets.yaml
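
A quick way to confirm the resources list is complete is to render the kustomization locally before Flux reconciles it (a sketch, run from the root of the repository):

# build the linkding kustomization and print the resulting manifests
kubectl kustomize apps/base/linkding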

New service.yaml File

This file defines a Kubernetes Service resource for my Linkding app. It exposes the app internally within the k8s cluster on port 9090, forwarding traffic to the application's pods on target port 9090. The selector app: linkding makes sure traffic is routed to the right pods, and type: ClusterIP means the Service is only reachable from within the cluster. This will make sense after I explain the Ingress resource created in the next file.

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: linkding
  namespace: linkding
spec:
  selector:
    app: linkding
  ports:
    - port: 9090
      targetPort: 9090
  type: ClusterIP
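
Since the Service is ClusterIP-only, a quick way to sanity-check it before the Ingress exists is a temporary port-forward (a sketch, assuming kubectl access to the cluster):

# forward local port 9090 to the linkding Service inside the cluster
kubectl -n linkding port-forward svc/linkding 9090:9090
# then browse to http://localhost:9090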

New ingress.yaml File

This file defines a Kubernetes Ingress resource for my Linkding app. It will enable external HTTP access to the service (defined above). Since k3s uses Traefik as the ingress controller, this will route requests to linkding.local to the internal linkding Service on port 9090. I've specified path: / to make sure all HTTP requests to linkding.local get forwarded to the linkding Service. In short, this allows me to access my Linkding app locally on my network by the specified hostname.

ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: linkding
  namespace: linkding
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web # use 'web' for HTTP only
spec:
  rules:
    - host: linkding.local
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: linkding
              port:
                number: 9090
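
One thing the manifest doesn't handle is name resolution: linkding.local still has to resolve to a cluster node for Traefik to receive the request. On my workstation that can be a simple hosts-file entry (a sketch; 192.168.1.50 is a placeholder for a node IP, substitute your own node address or a local DNS record):

# /etc/hosts
192.168.1.50    linkding.local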

Something I'd like to point out here regarding exposing multiple apps that all listen on the same container port: inside Kubernetes there is no port conflict, because each Service is an isolated abstraction with its own virtual IP inside the cluster. For that to work, though, the following must be true (see the sketch after this list):

  1. Each Service has a unique name.
  2. Each Ingress path or hostname is unique.
  3. Each Service is of type ClusterIP.
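
As an example, a hypothetical second app that also listens on port 9090 would simply get its own Service and hostname (a sketch; "another-app" is a placeholder, not something in my repository):

apiVersion: v1
kind: Service
metadata:
  name: another-app        # 1. unique Service name
  namespace: another-app
spec:
  selector:
    app: another-app
  ports:
    - port: 9090           # same container port as linkding, no conflict
      targetPort: 9090
  type: ClusterIP          # 3. only reachable inside the cluster
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: another-app
  namespace: another-app
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
    - host: another-app.local   # 2. unique hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: another-app
                port:
                  number: 9090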

New secrets.yaml File

For this resource, you must have already deployed and configured the External Secrets Operator in your cluster. I've detailed how I have done this in the following posts:

The file below defines an ExternalSecret resource for my Linkding app. It essentially instructs the External Secrets Operator to retrieve the secret keys LD_SUPERUSER_NAME and LD_SUPERUSER_PASSWORD from an external provider and then store them in a cluster Secret resource named linkding-container-env. The external provider is referenced in the secretStoreRef section. I've set this to refresh every 12 hours, but will likely change that to a shorter interval in the future.

secrets.yaml

apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: linkding-container-env
  namespace: linkding
spec:
  refreshInterval: 12h
  secretStoreRef:
    name: onep-store
    kind: ClusterSecretStore
  data:
    - secretKey: LD_SUPERUSER_NAME
      remoteRef:
        key: "Linkding/username"
    - secretKey: LD_SUPERUSER_PASSWORD
      remoteRef:
        key: "Linkding/password"

Updated deployment.yaml File

The additions to this file enable the use of the secrets retrieved from the external provider configured in the previous file (secrets.yaml).

To clarify, the Secret resource named linkding-container-env will contain two secrets: LD_SUPERUSER_NAME and LD_SUPERUSER_PASSWORD, with their respective values retrieved from the external provider.

The new envFrom: block takes the key=value pairs from the referenced Secret resource and passes them to the linkding container as environment variables, which is why the secret keys are named to match the environment variable options that the Linkding application expects.

There is no need to worry about these variables being set every time a new linkding container is deployed. They will not overwrite an existing superuser if one already exists; they are only used during initial startup, when the database is empty. If there is an existing database, they are ignored.

deployment.yaml (updated containers section)

      containers:
        - name: linkding
          image: sissbruecker/linkding:1.41.0
          ports:
            - containerPort: 9090
          securityContext:
            allowPrivilegeEscalation: false
          envFrom:
            - secretRef:
                name: linkding-container-env
          volumeMounts:
            - name: linkding-data
              mountPath: "/etc/linkding/data"
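
Once Flux reconciles the updated Deployment, a quick check that the variables actually reached the container is to print one from the running pod (a sketch, assuming the Deployment is named linkding):

# confirm the superuser environment variables are set in the container
kubectl -n linkding exec deploy/linkding -- printenv LD_SUPERUSER_NAME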

Wrap-Up

With these improvements, my Linkding deployment is now fully declarative, secure, and accessible on my local network. By integrating FluxCD, Kustomize, and the External Secrets Operator, I have streamlined the setup and ongoing management of my self-hosted service. This approach also ensures that deployments are repeatable, credentials are managed securely, and the application is always available as intended.

If you have questions or want to share your own experiences with GitOps and Kubernetes, feel free to reach out. You can find me as @cyberwatchdoug in most places.