proxmox-talos-opentofu
A turnkey Kubernetes cluster built with Talos Linux running on a
Proxmox VE hypervisor.
Provisioning is done with OpenTofu.
Kubernetes cluster features:
- Talos Linux v1.11.6
- Kubernetes v1.34.2, without kube-proxy
- Cilium v1.18.3 as Container Network Interface (CNI)
  - as kube-proxy replacement
  - with L2 load balancer support
  - with Ingress controller support
  - with Gateway API support
  - with Egress gateway support
- Gateway API v1.3.0 CRDs are installed
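The Cilium features listed above map roughly onto the following Helm values. This is a sketch based on Cilium's documented chart options, not necessarily the exact values this project applies:

```yaml
# Sketch of Cilium Helm values for the listed features
# (based on Cilium's documented chart options; the project's
# actual values may differ):
kubeProxyReplacement: true     # run without kube-proxy
l2announcements:
  enabled: true                # L2 load balancer support
ingressController:
  enabled: true                # Ingress controller support
gatewayAPI:
  enabled: true                # Gateway API support
egressGateway:
  enabled: true                # Egress gateway support
```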
This Kubernetes cluster is meant to be used in a test or home lab environment.
Requirements
You need to have installed on your local machine:
- OpenTofu
- kubectl
- ArgoCD CLI (optional, for checking on your apps)
Provisioning
The project is grouped into three sections:
- proxmox: provisioning of virtual machines, operating system and Kubernetes cluster
- kubernetes: provisioning of Kubernetes resources in the running Kubernetes cluster
- argocd: provisioning of Kubernetes resources using a GitOps approach, can be configured with the
install_argocd_app_of_apps flag
This way you can choose to provision only the cluster itself, or additionally provision Kubernetes resources and
bootstrap ArgoCD.
You will have an ArgoCD instance running in the cluster eventually. You can then
install your applications using the GitOps approach. Have a look at install_argocd_app_of_apps and the related
configuration variables for further options.
The main idea is to provision the Kubernetes cluster and bootstrap ArgoCD with infrastructure as code
using OpenTofu, so it can be rolled out quickly and consistently. All other Kubernetes resources are then
installed with ArgoCD from a Git repository.
Usually you want to keep your Kubernetes cluster infrastructure and the Kubernetes resources in separate repositories.
That way you have everything decoupled, and you can migrate your applications to a new cluster infrastructure more easily.
I added the Kubernetes resources in the argocd directory mainly for demonstration purposes.
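An app-of-apps bootstrap typically boils down to a single ArgoCD Application pointing at a directory that contains further Application manifests. A minimal sketch, with a hypothetical repository URL and path:

```yaml
# Minimal app-of-apps sketch (repoURL and path are hypothetical):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-of-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/you/yourrepo.git
    targetRevision: main
    path: argocd                 # directory with further Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated: {}                # keep child apps in sync automatically
```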
Proxmox VE
The first step is to provision the Proxmox part: create a configuration.auto.tfvars file based on the example and
edit it to suit your needs:
```shell
$ cd proxmox
$ cp configuration.auto.tfvars.example configuration.auto.tfvars
$ vim configuration.auto.tfvars
```

Then apply the configuration using OpenTofu:
```shell
$ tofu init
$ tofu plan
$ tofu apply
```

You can then grab and move the kubeconfig file for Kubernetes provisioning like so:
```shell
$ tofu output -raw kubeconfig > ~/.kube/config
$ chmod 600 ~/.kube/config
```

Test if your cluster access works by listing the nodes:
```shell
$ kubectl get nodes
NAME            STATUS   ROLES           AGE   VERSION
your-cp-0       Ready    control-plane   5d    v1.34.2
your-worker-0   Ready    <none>          5d    v1.34.2
```

You might need to wait a bit until the nodes come up. Proceed with the next step when all nodes are in the Ready
state.
Kubernetes
Secondly, you can provision the resources inside the Kubernetes cluster. You have a couple of options to choose
from. All options can be configured using variables in configuration.auto.tfvars:
- Quick start: installs the Cilium LB config, ArgoCD, and an Ingress without TLS (default settings) with OpenTofu.
  ArgoCD is available on http://argocd.local.
  - install_cilium_lb_config = true
  - argocd_helm_values: see defaults in variables.tf
  - install_argocd_app_of_apps = false
  - install_argocd_app_of_apps_git_repo_secret = false
- GitOps using your own repository: installs ArgoCD, no Cilium LB config, no Ingress, and the Kubernetes resources in
  the repository you specify in argocd_app_of_apps_source. Credentials for a private repository can be configured
  and installed with OpenTofu using install_argocd_app_of_apps_git_repo_secret and the related variables:
  - install_cilium_lb_config = false
  - argocd_helm_values: add your Helm values and override defaults, for instance keep the server insecure and switch off the Ingress
  - install_argocd_app_of_apps = true
  - argocd_app_of_apps_source = YOUR SOURCE SETTINGS
  - install_argocd_app_of_apps_git_repo_secret = true
  - argocd_app_of_apps_git_repo_secret_url = "https://github.com/you/yourrepo.git"
  - argocd_app_of_apps_git_repo_secret_password_or_token = "github_pat_OLImf09435459hfjoi9m435298524jtfjn45i8tmnmds329023jdhn"
These are the two use cases I envision here. Please regard them as examples: of course, you can combine the variables
into any other setup that suits your needs.
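For the GitOps use case, configuration.auto.tfvars could look roughly like this. The token is a placeholder, and the nested shape of argocd_app_of_apps_source is an assumption on my part; check variables.tf for the actual schema:

```hcl
# Sketch of a GitOps-style configuration.auto.tfvars.
# The object shape of argocd_app_of_apps_source is an assumption;
# see variables.tf for the real schema.
install_cilium_lb_config   = false
install_argocd_app_of_apps = true

argocd_app_of_apps_source = {
  repo_url        = "https://github.com/you/yourrepo.git"
  target_revision = "main"
  path            = "argocd"
}

install_argocd_app_of_apps_git_repo_secret           = true
argocd_app_of_apps_git_repo_secret_url               = "https://github.com/you/yourrepo.git"
argocd_app_of_apps_git_repo_secret_password_or_token = "github_pat_yourtoken" # placeholder
```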
For a GitOps quick start you can fork this repository and point argocd_app_of_apps_source to the
argocd directory of your newly forked repository. This way you can make use of the example Kubernetes resources in
the argocd directory and edit them to match your infrastructure.
Create a configuration.auto.tfvars like so and edit it to your liking:
```shell
$ cd kubernetes
$ cp configuration.auto.tfvars.example configuration.auto.tfvars
$ vim configuration.auto.tfvars
```

Then do the provisioning with OpenTofu:
```shell
$ tofu init
$ tofu plan
$ tofu apply
```

You can grab the ArgoCD initial admin password with kubectl afterwards:
```shell
$ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d
```

The ArgoCD web user interface should be up and running by now. You can access it in your web browser on
http://argocd.local if you didn't change the defaults, or under the domain you configured with argocd_domain.
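Kubernetes stores all Secret data base64 encoded, which is why the password command above pipes through base64 -d. A standalone illustration of that decoding step, using a made-up password rather than the real secret:

```shell
# Kubernetes Secret data is base64 encoded; this mimics the
# decoding step above with a made-up value.
encoded=$(printf 'my-initial-password' | base64)
echo "encoded: $encoded"
printf '%s' "$encoded" | base64 -d   # prints: my-initial-password
```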
Or log in using the ArgoCD CLI (if installed)
and check the sync status of your apps:
```shell
$ argocd login --port-forward --port-forward-namespace argocd --plaintext
$ argocd app list --port-forward --port-forward-namespace argocd --plaintext
```

Roadmap
Proxmox part:
- make node resources configurable (CPU, memory, etc.)
- make version upgrades possible for Kubernetes Nodes with OpenTofu
GitOps part:
- add more storage options, e.g. Ceph, local storage
- add Keycloak operator and Keycloak instance for SSO
- add Prometheus/Grafana for monitoring
- add Alloy/Loki for logging
- add Velero for disaster recovery
I am happy to receive pull requests for any improvements.