

Sure thing, I’ll edit this reply when I get back to my computer. Just note that I also have a Tailscale and an nginx container in the pod, which are not strictly necessary.
You’ll see my nginx config, which reverse proxies to the port the service is running on. On public servers I run another nginx with SSL that proxies to the port I map the pod’s port 80 to.
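For the public-facing side, a minimal sketch of what that outer SSL nginx could look like — the hostname and certificate paths here are placeholders, and 7777 matches the PublishPort line in the .kube file:

```nginx
server {
    listen 443 ssl;
    server_name link.mydomain.com;   # placeholder hostname

    # placeholder cert paths (e.g. as laid out by certbot)
    ssl_certificate     /etc/letsencrypt/live/link.mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/link.mydomain.com/privkey.pem;

    location / {
        # 7777 is where the pod's port 80 is published on loopback
        proxy_pass http://127.0.0.1:7777;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```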
I usually run my pods as an unprivileged user with loginctl enable-linger, which starts the enabled systemctl --user services on boot.
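For anyone unfamiliar, that's roughly this, run once as the unprivileged user that owns the pod (the second command is just a sanity check):

```shell
# Linger keeps this user's systemd instance running without an active
# login session, so units with [Install] WantedBy=default.target
# start at boot instead of at first login.
loginctl enable-linger "$USER"

# Verify it took effect (should print Linger=yes)
loginctl show-user "$USER" --property=Linger
```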
All that being said, I haven’t publicly exposed Linkwarden yet, mainly because it’s the second most resource-intensive service I run and I have all my public stuff on a shitty VPS.
Edit: My opsec is so bad hahaha
Edit2: I just realized the caps I gave were to the tailscale container, not the linkwarden container. Linkwarden can run with no caps :)
I added the tailscale stuff back
files:
linkwarden-pod.kube:
[Install]
WantedBy=default.target
[Kube]
# Point to the yaml in the same directory
Yaml=linkwarden-pod.yml
PublishPort=127.0.0.1:7777:80
AutoUpdate=registry
[Service]
Restart=always
linkwarden-pod.yml:
---
apiVersion: v1
kind: Pod
metadata:
  name: linkwarden
spec:
  containers:
    - name: ts-linkwarden
      image: docker.io/tailscale/tailscale:latest
      env:
        - name: TS_HOSTNAME
          value: "link"
        - name: TS_STATE_DIR
          value: /var/lib/tailscale
        - name: TS_AUTHKEY
          valueFrom:
            secretKeyRef:
              name: ts-auth-kube
              key: ts-auth
      volumeMounts:
        - name: linkwarden-ts-storage
          mountPath: /var/lib/tailscale
      securityContext:
        capabilities:
          add:
            - NET_ADMIN
            - SYS_MODULE
    - name: linkwarden
      image: ghcr.io/linkwarden/linkwarden:latest
      env:
        - name: INSTANCE_NAME
          value: link.mydomain.com
        - name: AUTH_URL
          value: http://linkwarden:3000/api/v1/auth
        - name: NEXTAUTH_SECRET
          value: LOL_I_JUST_PUBLISHED_THIS_I_CHANGED_IT
        - name: DATABASE_URL
          value: postgresql://postgres:password@linkwarden-postgres:5432/postgres
        - name: NEXT_PUBLIC_DISABLE_REGISTRATION
          value: "true"
    - name: linkwarden-nginx
      image: docker.io/library/nginx:alpine
      volumeMounts:
        - name: linkwarden-nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
          readOnly: true
    - name: linkwarden-postgres
      image: docker.io/library/postgres:latest
      env:
        - name: POSTGRES_PASSWORD
          value: "password"
      volumeMounts:
        - name: linkwarden-postgres-db
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: linkwarden-nginx-conf
      configMap:
        name: linkwarden-nginx-conf
        items:
          - key: nginx.conf
            path: nginx.conf
    - name: linkwarden-postgres-db
      persistentVolumeClaim:
        claimName: linkwarden-postgres-db-claim
    - name: linkwarden-ts-storage
      persistentVolumeClaim:
        claimName: linkwarden-ts-pv-claim
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: linkwarden-nginx-conf
data:
  nginx.conf: |
    #user nobody;
    worker_processes 1;

    #pid logs/nginx.pid;

    events {
        worker_connections 1024;
    }

    http {
        include mime.types;
        default_type application/octet-stream;

        sendfile on;

        #keepalive_timeout 0;
        keepalive_timeout 65;

        gzip off;

        # set_real_ip_from cw.55.55.1;
        real_ip_header X-Forwarded-For;
        real_ip_recursive on;

        server {
            listen 80;
            server_name _;

            location / {
                proxy_pass http://localhost:3000/;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Port $server_port;
                proxy_set_header X-Forwarded-Scheme $scheme;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header Accept-Encoding "";
                proxy_set_header Host $host;
            }
        }
    }
I also have a little helper script you might like:
#!/bin/bash
set -euo pipefail

# Quadlet picks up rootless unit files from this directory
SYSTEMD_DIRECTORY="${HOME}/.config/containers/systemd"
POD_NAME="linkwarden-pod"

mkdir -p "$SYSTEMD_DIRECTORY"
cp "${POD_NAME}".{kube,yml} "${SYSTEMD_DIRECTORY}"/
systemctl --user daemon-reload
I think it’s cool that I can take that config and drop it into Kubernetes and it usually just works. I don’t have a cluster anymore, but if I decide to use one in the future, the overhead will be negligible.
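For example, the same YAML works for both runtimes, assuming linkwarden-pod.yml is in the current directory (note that `podman kube play` is the Podman 4.x spelling; older releases use `podman play kube`):

```shell
# Run the pod directly with rootless Podman, outside of Quadlet
podman kube play linkwarden-pod.yml

# Or apply the same file to an actual Kubernetes cluster
kubectl apply -f linkwarden-pod.yml
```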