In my homelab, I constantly create and delete services while tinkering. That naturally leads to frequent DNS changes. I wanted a reliable, self-hosted DNS solution that integrates cleanly with Kubernetes and supports automation.
Bind9 felt like the right choice: stable, flexible, and well understood. This post explains why I run Bind9 inside Kubernetes and why that decision is not just “overengineering.”
I hear you:
“Why put a simple DNS server into Kubernetes and deal with volumes, pods, and extra complexity?”
This is a valid argument. A friend made the same point, and honestly, he was not wrong.
But infrastructure choices are not about fancy versus simple. They're about trade-offs: what you gain versus what you lose.
At this point, everything in my homelab runs on Kubernetes.
My setup:
VM live migration sounds fancy until you watch it crawl over a 1 Gbit link. If a node dies mid-migration, you're stuck waiting. Moving entire virtual machines over a slow network just isn't worth it.
Kubernetes solves this problem naturally: instead of migrating machines, it recreates workloads on healthy nodes.
All infrastructure lives inside the cluster, not tied to Proxmox-specific HA features. Proxmox becomes exactly what it should be: boring, stable compute.
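To make "recreates workloads" concrete, here is a minimal sketch of what running Bind9 as a Deployment could look like. This is illustrative, not my actual manifest (the image tag, labels, and PVC name are assumptions); the real manifests are linked at the end of the post.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bind9
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bind9
  template:
    metadata:
      labels:
        app: bind9
    spec:
      containers:
        - name: bind9
          image: ubuntu/bind9:latest   # illustrative tag, pin a real version
          ports:
            - containerPort: 53
              protocol: UDP
          volumeMounts:
            - name: zones
              mountPath: /etc/bind/zones   # zone files survive pod restarts
      volumes:
        - name: zones
          persistentVolumeClaim:
            claimName: bind9-zones   # hypothetical PVC holding zone data
```

If the node hosting this pod dies, the scheduler simply starts a fresh pod elsewhere and reattaches the volume. Nothing is "migrated"; a Service in front of it (UDP/TCP port 53) keeps the DNS endpoint stable while pods come and go.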
The real win is declarative control.
I haven't implemented it yet, but the plan is to put Semaphore on top and make this even smoother.
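What declarative control buys me: DNS records live in code, not in a shell session on some VM. As a hedged sketch, a record managed through Terraform's `dns` provider (server IP, key name, zone, and record are all placeholders, not values from my actual module) could look like this:

```hcl
# Terraform hashicorp/dns provider, sending RFC 2136 dynamic updates
# to Bind9, authenticated with a TSIG key.
provider "dns" {
  update {
    server        = "10.0.0.53"      # example: Bind9 service IP
    key_name      = "tsig-key."
    key_algorithm = "hmac-sha256"
    key_secret    = var.tsig_secret  # keep the secret out of the repo
  }
}

# One A record per service; create and destroy records with
# `terraform apply` / `terraform destroy` as services come and go.
resource "dns_a_record_set" "grafana" {
  zone      = "lab.example.com."
  name      = "grafana"
  addresses = ["10.0.0.80"]
  ttl       = 300
}
```

Deleting a service means deleting a resource block and applying; no stale records, no hand-editing zone files.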
With limited bandwidth and frequent experimentation, Kubernetes isn't overkill; it's practical.
So yes, Bind9 on Kubernetes looks like overengineering from the outside.
But inside this system, it’s the simplest consistent choice.
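For completeness, the Bind9 side has to allow those dynamic updates. A minimal sketch of the relevant `named.conf` pieces (zone name and key are placeholders; generate a real key with `tsig-keygen`):

```conf
key "tsig-key." {
    algorithm hmac-sha256;
    secret "base64-secret-here";   // placeholder, output of tsig-keygen
};

zone "lab.example.com" {
    type master;
    file "/etc/bind/zones/db.lab.example.com";
    allow-update { key "tsig-key."; };   // only TSIG-signed updates accepted
};
```

With this in place, Terraform (or anything else that speaks RFC 2136) can push record changes without ever touching the zone file by hand.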
- Terraform DNS module: https://github.com/SujithThirumalaisamy/homelab/tree/main/terraform/dns
- Kubernetes manifests: https://gist.github.com/SujithThirumalaisamy/49c30047931971b6acccc6a348dfdc90